# Sparsity

Leverage sparse structure to render large-scale semidefinite programs tractable.

Interior-point methods for Semidefinite Programming (SDP) scale polynomially in the size of the largest Positive Semidefinite (PSD) block and the number of affine constraints (when solving to epsilon-optimality). This complexity can be reduced when the problem instance has structure, such as symmetry or sparsity. An SDP has symmetric structure if its cost and all of its constraints are invariant under a group action. An SDP has sparse structure if some matrix entries appear in neither the cost nor any constraint matrix: their only role is to ensure that the matrix is PSD.
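As a toy sketch of how sparsity gets exploited (a `networkx`-based illustration under our own naming, not any solver's actual code): the aggregate sparsity pattern of an SDP can be read as a graph, chordally extended, and split into its maximal cliques, replacing one large PSD block with one smaller PSD block per clique.

```python
import networkx as nx

def sparsity_cliques(pattern_edges):
    """Given the edges of an SDP's aggregate sparsity pattern, return the
    maximal cliques of a heuristic chordal extension. Each clique becomes
    a smaller PSD block in the decomposed problem."""
    G = nx.Graph(pattern_edges)
    H, _ = nx.complete_to_chordal_graph(G)  # add fill-in edges until chordal
    return [sorted(c) for c in nx.chordal_graph_cliques(H)]
```

For a tridiagonal (chain) pattern on four variables, the cliques are the three overlapping 2x2 blocks, so the decomposed SDP never forms the full 4x4 constraint.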

### Decomposed Structured Subsets

Structured subsets are cones that inner- or outer-approximate the PSD cone, generating upper or lower bounds on the SDP optimum. An example of a structured subset for a minimization objective is the cone of diagonally dominant (DD) matrices (inner approximation, upper bound) or its dual (outer approximation, lower bound). Our observation in (Miller et al., 2022) is twofold: imposing a structured subset destroys sparse structure in an SDP, and requiring that the clique submatrices lie in a structured subset yields a broader set of solutions (more accurate bounds) than requiring that the original matrix lie in the subset.
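For reference, membership in the DD cone is an entrywise check; the function below is a minimal `numpy` sketch of that condition (our own illustration, not code from the paper).

```python
import numpy as np

def is_dd(A, tol=1e-9):
    """Test whether A is diagonally dominant with nonnegative diagonal:
    A[i, i] >= sum_{j != i} |A[i, j]| for every row i. Such matrices
    form a polyhedral inner approximation of the PSD cone."""
    A = np.asarray(A, dtype=float)
    diag = np.diag(A)
    off_row_sums = np.sum(np.abs(A), axis=1) - np.abs(diag)
    return bool(np.all(diag >= off_row_sums - tol) and np.all(diag >= -tol))
```

Since every DD matrix with nonnegative diagonal is PSD, `is_dd` returning `True` certifies positive semidefiniteness, but the converse fails: many PSD matrices are not DD, which is exactly why DD gives a conservative (upper) bound for minimization.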

The figures below visualize an affine slice of a spectahedron with a 4x4 PSD-completable matrix constraint whose cliques have sizes 2x2 and 3x3. The black region is the slice of the PSD set. Imposing that the full 4x4 matrix lie in a structured subset (DD or scaled DD, respectively) is stricter than imposing the same structured subset on its cliques.

This scheme allows for mixing cones: intractably large blocks can be approximated while small PSD blocks remain PSD.

Multiple kinds of structure can be combined. For a problem with simultaneous symmetry and sparsity, applying the symmetry reduction followed by the sparse decomposition, before approximating, yields the tightest bounds on the SDP.

### Rank Minimization

Our work in (Miller et al., 2019) aimed to solve SDPs containing rank constraints. Rank-constrained and rank-minimization problems are NP-hard, but convex heuristics exist that attempt to approximate the optimal solution. On their own, rank-constrained SDPs scale exponentially in the size of the largest rank-constrained PSD block. After a minimum-rank completion, the complexity instead scales exponentially in the size of the largest rank-constrained clique. This phenomenon can be used to construct regularizers (a reweighted trace heuristic) based on the sum of clique matrices. One application is subspace clustering, a highly sparse polynomial optimization problem that classifies noisy points into subspaces. We developed a more efficient chordal extension that reduces the complexity of subspace clustering, and applied it to clustering and system-identification problems.
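A minimal worked example of minimum-rank completion, for the smallest chordal chain pattern (3x3, unit diagonal, cliques {0, 1} and {1, 2}; the closed form below applies to this special case only): interpreting X as the Gram matrix of planar unit vectors at angles 0, t1, and t1 + t2 gives a completion whose rank (two) matches the largest clique rank rather than the full matrix size.

```python
import numpy as np

# Given entries of the chain pattern: X01 and X12; X02 is unspecified.
x01, x12 = 0.9, 0.9

# cos(t1 + t2) = cos(t1) cos(t2) - sin(t1) sin(t2), with cos(t1) = X01
# and cos(t2) = X12: filling X02 this way yields a rank-2 PSD completion.
x02 = x01 * x12 - np.sqrt((1 - x01**2) * (1 - x12**2))
X = np.array([[1.0, x01, x02],
              [x01, 1.0, x12],
              [x02, x12, 1.0]])

eigs = np.linalg.eigvalsh(X)  # smallest eigenvalue is 0: X is PSD, rank 2
```

Both 2x2 cliques have rank 2, so no completion can do better; any heuristic that only penalizes clique submatrices (as in the reweighted clique-trace regularizers above) operates on these small blocks instead of the full matrix.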