Finding a zero of a maximal monotone operator is a fundamental problem in convex optimization and monotone operator theory, and the proximal point algorithm (PPA) is a primary method for solving it. The PPA not only converges globally under fairly mild conditions but also converges asymptotically at a fast linear rate, provided that the inverse of the underlying operator is Lipschitz continuous at the origin. These favorable convergence properties are preserved by a relaxed variant of the PPA. Recently, a linear convergence bound was established in [M. Tao and X. M. Yuan, J. Sci. Comput., 74 (2018), pp. 826-850] for the relaxed PPA, and the bound was shown to be tight when the relaxation factor $ \gamma $ lies in $ [1,2) $. However, for other choices of $ \gamma $, the bound obtained by Tao and Yuan is suboptimal. In this paper, we establish tight linear convergence bounds for any choice of $ \gamma\in(0,2) $ via a unified and much simplified analysis. These results sharpen our understanding of the asymptotic behavior of the relaxed PPA and make the whole picture for $ \gamma\in(0,2) $ clear.
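For concreteness, the following is a sketch of the relaxed PPA iteration in standard notation; the symbols $ T $ (the maximal monotone operator), $ c_k>0 $ (the proximal parameter), and the iterates $ x^k $ are assumed here rather than fixed by the text above:
\[
x^{k+1} = (1-\gamma)\,x^{k} + \gamma\,(I + c_k T)^{-1} x^{k}, \qquad \gamma \in (0,2),
\]
which reduces to the classical PPA when $ \gamma = 1 $.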