Monday, March 26, 2012

Constrained Optimization - Single Equality Constraint

As a prerequisite for studying Compressed Sensing algorithms there are many mathematical concepts to be learned, the most important among them being optimization techniques. Optimization problems can be broadly classified as constrained and unconstrained. Constrained optimization is a type of problem in which the search space is restricted by constraints, while an unconstrained optimization problem has an unrestricted search space.

The constraints may be of two types: equality constraints or inequality constraints. This post discusses the equality constrained optimization technique. An example of such a problem is

minimize f(x) subject to g(x) = 0,

which calls for a search of an optimal point x* at which f(x) is minimum over the set of points x that satisfy g(x) = 0.
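As a concrete illustration (not part of the original post), a minimal numerical sketch of such a problem can be written with scipy.optimize; the particular choice f(x) = x1^2 + x2^2 and g(x) = x1 + x2 - 1 is just an assumption made for the example.

import numpy as np
from scipy.optimize import minimize

# Example objective and equality constraint (chosen only for illustration):
# minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    return x[0] + x[1] - 1.0

# SLSQP accepts equality constraints given as {'type': 'eq', 'fun': g}.
result = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP",
                  constraints=[{"type": "eq", "fun": g}])
print(result.x)  # close to [0.5, 0.5], the constrained minimum

The analytical answer for this toy problem is x* = (0.5, 0.5), which serves as a check on the solver output.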

The optimality condition here is quite different from that of the unconstrained optimization problem, for which the solution satisfies

∇f(x*) = 0.

The concept of a level set is exploited to solve this problem. A level set is the set of points on which a function takes a given constant value. The objective function is represented as a sequence of level sets, starting from the lowest possible level.

There should be a level for which the corresponding level curve of the objective function f(x) just touches the constraint curve g(x) = 0. At this contact point, denoted x*, we can draw a common tangent to the two curves. The tangent is orthogonal to both gradients ∇f(x*) and ∇g(x*), which means the two gradients are collinear.
This has been shown in the figure below.
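Since the original figure has not survived here, the following minimal matplotlib sketch (my own stand-in, again using f(x) = x1^2 + x2^2 and g(x) = x1 + x2 - 1 as example functions) conveys the same picture: circular level sets of f and the straight constraint curve g(x) = 0 touching at x* = (0.5, 0.5).

import numpy as np
import matplotlib.pyplot as plt

x1, x2 = np.meshgrid(np.linspace(-1.5, 1.5, 200), np.linspace(-1.5, 1.5, 200))
f = x1 ** 2 + x2 ** 2                      # objective with circular level sets
xs = np.linspace(-1.5, 1.5, 200)

plt.contour(x1, x2, f, levels=[0.1, 0.25, 0.5, 1.0, 1.5])  # level sets of f
plt.plot(xs, 1.0 - xs, "k")                # constraint curve g(x) = 0
plt.plot(0.5, 0.5, "ro")                   # tangency point x*
plt.gca().set_aspect("equal")
plt.show()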


The gradients may point in the same direction or in opposite directions, depending on the directions in which f(x) and g(x) increase at the point x*, and the optimality conditions for the two cases are given below.

Case 1: If the gradients are in the same direction, then the solution is found from

∇f(x*) = λ∇g(x*) for some λ > 0.


Case 2: If the gradients are in opposite directions, then the solution is given by

∇f(x*) = λ∇g(x*) for some λ < 0.
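Both cases can be folded into the single condition ∇f(x*) = λ∇g(x*) for some λ ≠ 0, which is the familiar Lagrange multiplier condition. As a small worked example (my own, with f and g chosen purely for illustration), take f(x) = x1^2 + x2^2 and g(x) = x1 + x2 - 1. Then ∇f(x) = (2x1, 2x2) and ∇g(x) = (1, 1), so ∇f(x*) = λ∇g(x*) gives x1* = x2* = λ/2, and the constraint x1* + x2* = 1 yields λ = 1 and x* = (0.5, 0.5). Since λ > 0, the gradients point in the same direction at x*, i.e. this example falls under Case 1.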


In the following post we shall discuss the problem with multiple equality constraints. Visualizing the level set concept comes in very handy for getting familiar with these ideas.




Tuesday, February 21, 2012

Overcomplete Basis for Sparse Coding?

An overcomplete basis has more basis vectors than the dimensionality of the input, so the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they are more robust in the presence of noise, can be sparser, and offer greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in the primary visual cortex.

An overcomplete basis is expected to yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of independent component analysis, and it provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.

Initially the technique might seem to contradict the concept of sparseness because of the increased number of analysis coefficients. But the extra dimensions provide additional degrees of freedom which can be exploited to increase the sparsity of the coefficient set, so that only the coefficients deemed significant appear in the representation. This type of coding forms the basis for what is called the 'Holy Grail' of audio coding, namely object coding, which can be seen as a step towards musically intelligent acoustic coding.
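To make the idea concrete (this sketch is my own addition, not from the post), here is a minimal NumPy example of sparse coding with an overcomplete dictionary: the dictionary has more atoms than signal dimensions, and a simple orthogonal matching pursuit loop recovers a sparse coefficient vector. All names and sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n, m, k = 16, 64, 3             # signal dimension, number of atoms (m > n), sparsity
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)  # normalize the dictionary atoms (columns)

# Synthesize a signal from k atoms so that a sparse code is known to exist.
support_true = rng.choice(m, size=k, replace=False)
x_true = np.zeros(m)
x_true[support_true] = rng.standard_normal(k)
y = D @ x_true

# Orthogonal matching pursuit: greedily pick the atom most correlated with
# the residual, then re-fit the coefficients on the selected support.
residual = y.copy()
support = []
for _ in range(k):
    idx = int(np.argmax(np.abs(D.T @ residual)))
    support.append(idx)
    coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    residual = y - D[:, support] @ coeffs

x_hat = np.zeros(m)
x_hat[support] = coeffs
print(np.linalg.norm(y - D @ x_hat))                      # reconstruction error
print(sorted(support) == sorted(support_true.tolist()))   # usually True for small k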