In this work, we demonstrate that a broad range of convex optimization algorithms, including alternating projection, operator splitting, and multiplier methods, can be derived within the framework of subspace correction methods via convex duality. To formalize this connection, we introduce the notion of \textit{dualization}, a process that transforms an iterative method for the dual problem into an equivalent method for the primal problem. This concept establishes new connections among these algorithmic classes; in particular, it reveals that classical algorithms such as the von Neumann, Dykstra, Peaceman--Rachford, and Douglas--Rachford methods can be interpreted as dualizations of subspace correction methods applied to suitable dual formulations. This unified viewpoint facilitates systematic algorithm design, enables the transfer of convergence results between algorithmic classes, and opens avenues for new developments in convex optimization.
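To fix ideas, consider a minimal sketch in the standard Fenchel--Rockafellar setting; the symbols $f$, $g$, $B$, and $T$ below are illustrative and are not notation from this work. For proper, convex, lower semicontinuous $f$ and $g$ and a bounded linear operator $B$, the primal problem
\begin{equation*}
  \min_{x} \; f(x) + g(Bx)
\end{equation*}
admits the dual problem
\begin{equation*}
  \min_{p} \; f^{*}(-B^{*}p) + g^{*}(p),
\end{equation*}
where $f^{*}$ and $g^{*}$ denote the convex conjugates of $f$ and $g$. Given an iterative method $p^{k+1} = T(p^{k})$ for the dual problem, for instance a subspace correction sweep, a dualization in the above sense would produce an equivalent primal sequence recovered through a primal--dual optimality relation such as $x^{k} \in \partial f^{*}(-B^{*}p^{k})$.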