Emerging applications in IoT and edge computing have reignited interest in distributed optimization and learning. Yet a unified approach for decentralizing centralized algorithms remains elusive. In the first part of this talk, I will introduce a general framework that enables centralized contraction-based algorithms to be executed in a distributed fashion across networks while preserving their original convergence rates. The second part tackles a key challenge in distributed systems: privacy. I will present a consensus protocol built on Secret Sharing, which allows nodes to reach agreement without revealing their private states. Unlike traditional approaches, this method is robust to node failures, enabling recovery of the consensus result without compromising individual privacy.
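To give a flavor of why Secret Sharing pairs privacy with failure robustness (this is a minimal illustrative sketch, not the protocol presented in the talk): in a (t, n) Shamir scheme, shares are additively homomorphic, so nodes can sum the shares they hold locally, and any t surviving nodes can reconstruct the aggregate while no individual state is ever exposed. All names, parameters, and the choice of prime field below are assumptions made for illustration.

```python
# Illustrative sketch: privacy-preserving summation via (t, n) Shamir
# Secret Sharing over a prime field. Not the speaker's protocol.
import random

PRIME = 2**61 - 1  # Mersenne prime; all arithmetic is mod PRIME

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    # Node i's share is the degree-(t-1) polynomial evaluated at x = i.
    return {i: sum(c * pow(i, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
            for i in range(1, n + 1)}

def reconstruct(points):
    """Lagrange interpolation at x = 0 from at least t (x, y) pairs."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse via Fermat's little theorem (PRIME is prime).
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

# Example: 5 nodes, threshold 3 -- tolerates up to 2 node failures.
n, t = 5, 3
private_states = [12, 7, 30, 5, 21]            # each node's secret value
all_shares = [share(v, t, n) for v in private_states]

# Each node sums the shares it received (one per secret); by linearity,
# these local sums are valid shares of the sum of all secrets.
sum_shares = {i: sum(s[i] for s in all_shares) % PRIME for i in range(1, n + 1)}

# Any t surviving nodes suffice to recover the consensus value.
surviving = [(i, sum_shares[i]) for i in (1, 3, 5)]
assert reconstruct(surviving) == sum(private_states)
```

The failure-robustness claim corresponds to the threshold t in this sketch: the aggregate remains recoverable as long as t nodes survive, yet any coalition of fewer than t nodes learns nothing about an individual state.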