$$ \newcommand{\pmi}{\operatorname{pmi}} \newcommand{\inner}[2]{\langle{#1}, {#2}\rangle} \newcommand{\Pb}{\operatorname{Pr}} \newcommand{\E}{\mathbb{E}} \newcommand{\RR}{\mathbf{R}} \newcommand{\script}[1]{\mathcal{#1}} \newcommand{\Set}[2]{\{{#1} : {#2}\}} \newcommand{\argmin}[2]{\underset{#1}{\operatorname{argmin}} {#2}} \newcommand{\optmin}[3]{ \begin{align*} & \underset{#1}{\text{minimize}} & & #2 \\ & \text{subject to} & & #3 \end{align*} } \newcommand{\optmax}[3]{ \begin{align*} & \underset{#1}{\text{maximize}} & & #2 \\ & \text{subject to} & & #3 \end{align*} } \newcommand{\optfind}[2]{ \begin{align*} & {\text{find}} & & #1 \\ & \text{subject to} & & #2 \end{align*} } $$
Nut graf: YALMIP is a MATLAB-embedded domain-specific language for mathematical optimization. It is among the oldest non-commercial modeling languages, and it can target a variety of convex and non-convex solvers.
YALMIP, an orphaned acronym that once stood for "Yet Another LMI Parser", supports a hodgepodge of problem classes: LPs, QPs, SOCPs, and SDPs among them.
The paper consists largely of examples that transcribe somewhat complex SDPs into but a few lines of YALMIP. The thesis that motivated the creation of YALMIP is this: many control problems can be reduced to SDPs, and SDPs can be solved efficiently; however, converting problems into LMI/SDP form and interfacing with the software packages that solve SDPs is tedious. YALMIP democratizes this process by abstracting away the compilation of naturally expressed mathematical programs into solver-defined standard forms.
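To make the "few lines" claim concrete, here is a minimal sketch of a Lyapunov feasibility SDP written in YALMIP. The matrix `A`, the identity margins, and the use of the modern `optimize` interface (the paper itself predates it) are my own illustrative choices, not taken from the paper.

```matlab
% Feasibility SDP: find P > 0 such that A'*P + P*A < 0 (Lyapunov stability).
% Minimal sketch; assumes YALMIP plus an SDP solver (e.g., SeDuMi) is on the path.
A = [-1 2; 0 -3];                           % an illustrative stable matrix
P = sdpvar(2, 2);                           % square sdpvar is symmetric by default
F = [P >= eye(2), A'*P + P*A <= -eye(2)];   % strict inequalities via identity margins
diagnostics = optimize(F);                  % compile to the solver's standard form and solve
value(P)                                    % recover the numerical certificate
```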
Löfberg presents two applications that I find interesting: sum-of-squares (SOS) decompositions and multiparametric programming. In SOS, the goal is to write a polynomial $p$ as a positive semidefinite quadratic form $p(x) = v(x)^T Q v(x)$, $Q \succeq 0$, with respect to some polynomial vector $v$. Such a decomposition, if it exists, certifies that $p$ is non-negative. SOS is trending in the theoretical computer science community, where it is being used to prove lower bounds.
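A hedged sketch of how the SOS module is typically invoked follows; the specific polynomial (a standard example from the SOS literature) is my choice, not the paper's.

```matlab
% SOS decomposition of p = 2x^4 + 2x^3*y - x^2*y^2 + 5y^4 (an illustrative example).
% Minimal sketch; solvesos returns monomial vectors v and Gram matrices Q with p = v'*Q*v.
sdpvar x y
p = 2*x^4 + 2*x^3*y - x^2*y^2 + 5*y^4;
F = sos(p);                    % declare the SOS constraint
[sol, v, Q] = solvesos(F);     % search for a PSD Gram matrix
% If sol.problem == 0, Q{1} is a PSD matrix certifying that p is non-negative.
```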
Multiparametric programming is a type of optimization where the goal is to find an explicit solution to parametrized optimization problems, reducing a sequence of optimization problems to a sequence of function evaluations. Perhaps the simplest example of this is the orthogonal projection problem: the projection onto a subspace is parametrized by the point that is being projected. All that is needed for an explicit solution is to compute the linear projector; with this operator in hand, every projection onto the subspace can be computed via a matrix multiply.
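A concrete version of the projection example in plain MATLAB (the subspace and parameter values are illustrative; YALMIP exposes analogous explicit solutions for multiparametric LPs/QPs through, if I recall correctly, its solvemp interface):

```matlab
% Explicit solution to the parametric projection problem:
%   minimize ||x - b||_2 over x in range(A), parametrized by b.
% The projector onto range(A) is Pi = A*(A'*A)^{-1}*A'; each new b costs one multiply.
A  = [1 0; 0 1; 1 1];        % illustrative basis for a 2-D subspace of R^3
Pi = A * ((A' * A) \ A');    % linear projector, computed once
b  = [1; 2; 3];              % a parameter value (the point to project)
proj = Pi * b;               % projection onto range(A) via a matrix multiply
```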