ANALYSIS OF CONVEX RELAXATIONS FOR MAP ESTIMATION

M. Pawan Kumar, V. Kolmogorov and P. Torr

 

Overview

We analyze several convex relaxations for maximum a posteriori (MAP) estimation of discrete Markov random fields (MRFs). We show that the standard linear programming relaxation dominates (provides a tighter approximation than) a large class of quadratic programming and second order cone programming relaxations. Our analysis also leads to new second order cone programming relaxations that are tighter than the linear programming relaxation.

The MAP Estimation Problem

A pairwise MRF is specified by the following: (i) a set of random variables V (shown as circles); (ii) a set of discrete labels (shown as trellises); and (iii) a neighborhood relationship over V (neighbors are shown connected to each other).

The energy of a labeling is the sum of two types of potentials: (i) the unary potential of a random variable Va taking a label li; and (ii) the pairwise potential of neighboring random variables Va and Vb taking labels li and lj respectively.

MAP estimation (or energy minimization) requires us to obtain the lowest energy labeling, and is known to be NP-hard in general.
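In the standard notation (a sketch of the usual formulation; the symbols below are generic and may differ from the exact notation used in the paper), a labeling f assigns a label f(a) to each variable Va, and its energy is

\[ E(f) = \sum_{V_a \in V} \theta_{a}(f(a)) + \sum_{(V_a,V_b) \in \mathcal{N}} \theta_{ab}(f(a),f(b)), \]

where \theta_a denotes the unary potentials, \theta_{ab} the pairwise potentials, and \mathcal{N} the neighborhood relationship. MAP estimation asks for a minimizer f* = argmin_f E(f).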

Integer Programming Formulation

The MAP estimation problem can be formulated as an integer program, that is, an optimization problem whose variables are constrained to be integers. The formulation is equivalent to minimizing a linear objective function over the convex hull of all valid assignments to the integer variables, referred to as the marginal polytope. Here a valid assignment is one that ensures that each random variable is assigned exactly one label.

The marginal polytope has an exponential number of facets. Thus, despite the reformulation as an integer program, the problem remains NP-hard. However, it does allow us to develop approximate algorithms using convex relaxations. See the exact form of the MAP estimation integer program.
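For reference, one common way to write this integer program (a sketch using {0,1} indicator variables; the encoding used in the paper may differ) is

\[ \min_{x,X} \; \sum_{a,i} \theta_a(i)\, x_{a;i} + \sum_{(a,b) \in \mathcal{N}} \sum_{i,j} \theta_{ab}(i,j)\, X_{ab;ij} \]
\[ \text{s.t.} \quad x_{a;i} \in \{0,1\}, \qquad \sum_i x_{a;i} = 1 \;\; \text{for all } a, \qquad X_{ab;ij} = x_{a;i}\, x_{b;j}, \]

where x_{a;i} indicates that Va takes label li and X_{ab;ij} indicates that the pair (Va, Vb) takes labels (li, lj). The normalization constraint encodes a valid assignment; the convex hull of the feasible points is the marginal polytope.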

Linear Programming Relaxation

The linear programming relaxation, proposed independently by several researchers, is one of the most well-studied relaxations for the MAP estimation problem. For example, [1] describes the relaxation in the context of metric labeling and provides some multiplicative bounds.

The relaxation replaces the marginal polytope with the local polytope, such that the optimization of the linear objective function can be performed in polynomial time. Note that the local polytope is a superset of the marginal polytope. See the exact form of the linear programming relaxation.
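Concretely, the relaxation keeps the objective above, drops the integrality and product constraints, and retains only linear constraints (again a sketch in the generic {0,1} notation, not necessarily the paper's exact form):

\[ x_{a;i} \ge 0, \qquad \sum_i x_{a;i} = 1, \qquad X_{ab;ij} \ge 0, \qquad \sum_j X_{ab;ij} = x_{a;i}, \qquad \sum_i X_{ab;ij} = x_{b;j}. \]

The marginalization constraints tie the pairwise variables to the unary ones; the resulting feasible set is the local polytope, which is a superset of the marginal polytope (the two coincide when the MRF is a tree).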

Second Order Cone Programming Relaxation

A general strategy for obtaining a second order cone programming relaxation of an integer program was described in [2]. Specific instances of such relaxations for the MAXCUT problem and the MAP estimation problem were proposed in [3] and [4] respectively.

Second order cone programming relaxations replace the marginal polytope by a set of second order cones such that their intersection forms a superset of the marginal polytope. Each second order cone is defined using optimization variables that correspond to a subgraph of the given MRF. See the exact form of second order cone programming relaxations.
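In the generic notation above, the construction of [2] can be sketched as follows (the specific cone choices of [3] and [4] are particular instances). The non-convex constraint X = x x^T is first relaxed to the positive semidefinite constraint X - x x^T ⪰ 0. This constraint is then weakened to a family of second order cone constraints: for any positive semidefinite matrix C = U U^T,

\[ C \bullet (X - x x^{T}) \ge 0 \quad \Longleftrightarrow \quad \| U^{T} x \|^{2} \le C \bullet X, \]

which is a second order cone constraint since C \bullet X is linear in X. Choosing each C to be non-zero only on the variables of a particular subgraph of the MRF gives one cone per subgraph.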

Results

The linear programming relaxation dominates (provides at least as large an optimal value, and thus at least as tight a lower bound on the minimum energy as) any second order cone programming relaxation whose cones are defined on the following types of subgraphs: (i) tree-structured subgraphs with arbitrary potentials; (ii) even cycles whose pairwise potentials all have the same sign; (iii) odd cycles in which one edge has pairwise potentials of a different sign from the rest.
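A minimal statement of the dominance relation used here (paraphrased; all relaxations are minimizations that lower-bound the minimum energy):

\[ \text{A dominates B} \quad \Longleftrightarrow \quad e_{A}(\theta) \ge e_{B}(\theta) \;\; \text{for all potentials } \theta, \]

where e_A(\theta) and e_B(\theta) are the optimal values of relaxations A and B on the instance with potentials \theta. A larger lower bound is closer to the true minimum energy, and hence tighter.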

Some instances of the relaxations that are dominated by the linear programming relaxation are: (i) the quadratic programming relaxation for MAP estimation described in [5]; (ii) the second order cone programming relaxation for MAXCUT described in [3]; and (iii) the second order cone programming relaxation for MAP estimation described in [4].

Second order cone programming relaxations that dominate the linear programming relaxation can be obtained by taking the intersection of the local polytope with second order cones that do not satisfy the above properties.

Publications

M. Pawan Kumar, V. Kolmogorov and P. Torr
Analyzing Convex Relaxations for MAP Estimation
Advances in Markov Random Fields for Vision and Image Processing

M. Pawan Kumar, V. Kolmogorov and P. Torr
An Analysis of Convex Relaxations for MAP Estimation of Discrete MRFs
In Journal of Machine Learning Research, 2009

M. Pawan Kumar, V. Kolmogorov and P. Torr
An Analysis of Convex Relaxations for MAP Estimation
In Proceedings of Advances in Neural Information Processing Systems, 2007

References

[1] C. Chekuri, S. Khanna, J. Naor and L. Zosin. Approximation algorithms for the metric labeling problem via a new linear programming formulation. In SODA, 2001.
[2] S. Kim and M. Kojima. Second-order cone programming relaxation for nonconvex quadratic optimization problems. Technical Report, Tokyo Institute of Technology, 2000.
[3] M. Muramatsu and T. Suzuki. A new second-order cone programming relaxation for max-cut problems. Journal of the Operations Research Society of Japan, 2003.
[4] M. Pawan Kumar, P. Torr and A. Zisserman. Solving Markov random fields using second order cone programming relaxations. In CVPR, 2006.
[5] P. Ravikumar and J. Lafferty. Quadratic programming relaxations for metric labeling and Markov random field MAP estimation. In ICML, 2006.