Posts Tagged Optimization

An example of MySQL query optimization

When I developed a MySQL/PHP web application, I struggled to optimize the database queries to make the application more efficient. One thing we can do is optimize the SQL queries themselves. Here is a simple example.

I have a data table that includes a counter field. Every time a user clicks the item, the counter should be updated. Before optimization, I used two SQL statements to complete the job: the first retrieves the current counter value, and the second writes back that value plus 1.

// get total clicks
$sql = "SELECT nclicks FROM links WHERE id_links=$id_links;";
$query_result = $this->_bbmsql->execute($sql);
$row1 = mysqli_fetch_array($query_result);
$nclicks = $row1[0] + 1;

// update links table
$sql = "UPDATE links SET nclicks = nclicks + 1 WHERE id_links=$id_links;";
$query_result = $this->_bbmsql->execute($sql);

As I learned more SQL, I realized that MySQL provides an easier way. In an UPDATE statement, a column referenced on the right-hand side of an assignment is evaluated against the row's current value, so the counter can be read and incremented in a single statement. After this optimization, one simple SQL statement fulfills the same job.

$sql = "UPDATE links SET nclicks = nclicks + 1 WHERE id_links=$id_links;";
$query_result = $this->_bbmsql->execute($sql);


Construction of an objective function in an optimization problem

Many optimization procedures require a single objective function, so how to construct an effective objective function is critical. On one hand, we may have multiple goals: how do we create a single function that integrates them all? On the other hand, we may have a single goal with specific requirements attached to it. Many factors affect how well an objective function works.

Recently I encountered a problem when I trained a neural network model to fit a single target with 4 correlated input variables. Finding a good solution for the given network structure was not easy when I adopted the traditional objective function, the root mean square error (RMSE) between the observed and predicted target values. The real problem is that the observed target values lie between 0 and 1, so the errors, and especially their squares, are very small, and the RMSE gives the optimizer little to work with. One simple fix is to multiply the observed and predicted target values by 100 before calculating the RMSE. A more sophisticated and general solution is to construct an objective function based on the properties of the target variable.
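
As a minimal sketch in MATLAB (obs and pred are assumed names for the observed and predicted vectors), the rescaling is just:

rmse_scaled = sqrt(mean((100*obs - 100*pred).^2));   % equals 100 * the original RMSE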

Here is an example. We keep the original RMSE as one component of the final composite objective function. Since we want the predicted values to be as close as possible to the observed values, we want all points in a scatter plot of observed vs. predicted to lie on the 1:1 line. If we fit a linear regression to that scatter, we therefore want the slope and R2 to be 1 and the intercept to be 0. The slope and R2 can then be further components of the composite objective function, which looks like:

RMSE + (Slope - 1)^2 + (R2 - 1)^2

We want to minimize this value. Even though the function is more complicated, good solutions were reached much faster than with the plain RMSE.
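
A minimal MATLAB sketch of this composite objective (the vector names obs and pred and the function name composite_obj are my own placeholders; polyfit and corrcoef are standard MATLAB functions):

function cost = composite_obj(obs, pred)
% composite objective: RMSE + (Slope - 1)^2 + (R2 - 1)^2
rmse = sqrt(mean((obs - pred).^2));
b = polyfit(obs, pred, 1);     % linear fit: b(1) = slope, b(2) = intercept
r = corrcoef(obs, pred);
r2 = r(1,2)^2;                 % R^2 of the linear fit
cost = rmse + (b(1) - 1)^2 + (r2 - 1)^2;

If the intercept should also be penalized, a b(2)^2 term can be added in the same way.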

In addition, we can use the ratio between observed and predicted values as a criterion; the goal is then to minimize the total absolute relative error. The objective function could be:

Sum(abs(Predicted - Observed) / Observed)
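
In MATLAB this is a one-liner (again with obs and pred as assumed vector names, and assuming obs contains no zeros):

cost = sum(abs(pred - obs) ./ obs);   % total absolute relative error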

All in all, we can construct simple or sophisticated objective functions for a specific optimization problem. What you have to do is analyze your problem thoroughly and consider all possible formulations.


Downhill Simplex Method in Multidimensions

In this article, the C program of the downhill simplex method from Numerical Recipes in C: The Art of Scientific Computing (Second Edition; Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., 1992, Cambridge University Press, New York) is translated into MATLAB code. The following text comes from the book (pp. 408-410).

The downhill simplex method is due to Nelder and Mead. The method requires only function evaluations, not derivatives. It is not very efficient in terms of the number of function evaluations that it requires.
A simplex is the geometrical figure consisting, in N dimensions, of N+1 points (or vertices) and all their interconnecting line segments, polygonal faces, etc. In two dimensions, a simplex is a triangle. In three dimensions it is a tetrahedron, not necessarily the regular tetrahedron. In general we are only interested in simplexes that are nondegenerate, i.e., that enclose a finite inner N-dimensional volume. If any point of a nondegenerate simplex is taken as the origin, then the N other points define vector directions that span the N-dimensional vector space.

In one-dimensional minimization, it was possible to bracket a minimum, so that the success of a subsequent isolation was guaranteed. There is no analogous procedure in multidimensional space. For multidimensional minimization, the best we can do is give our algorithm a starting guess, that is, an N-vector of independent variables as the first point to try. The algorithm is then supposed to make its own way downhill through the unimaginable complexity of an N-dimensional topography until it encounters a (local, at least) minimum.
The downhill simplex method must be started not just with a single point, but with N+1 points, defining an initial simplex. If you think of one of these points (it matters not which) as being your initial starting point P0, then you can take the other N points to be
Pi = P0 + L*ei
where the ei's are N unit vectors, and where L is a constant which is your guess of the problem's characteristic length scale. You could have different L's for each vector direction.
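
In the MATLAB translation below, this construction takes a few lines (a minimal sketch; p0, L and ndim stand in for your own problem, and Example.M at the end builds its simplex the same way):

ndim = 3;                      % number of variables
p0 = [0 0 0];                  % initial starting point P0 (row vector)
L = 1.0;                       % characteristic length scale
p = repmat(p0, ndim + 1, 1);   % (N+1)-by-N matrix of vertices
p(2:end, :) = p(2:end, :) + L * eye(ndim);   % Pi = P0 + L*ei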

The downhill simplex method now takes a series of steps, most steps just moving the point of the simplex where the function is largest ("highest point") through the opposite face of the simplex to a lower point. These steps are called reflections, and they are constructed to conserve the volume of the simplex (hence maintain its nondegeneracy). When it can do so, the method expands the simplex in one or another direction to take larger steps. When it reaches a "valley floor," the method contracts itself in the transverse direction and tries to ooze down the valley. If there is a situation where the simplex is trying to pass through the "eye of a needle," it contracts itself in all directions, pulling itself in around its lowest (best) point. The routine name amoeba is intended to be descriptive of this kind of behavior.
Termination criteria can be delicate in any multidimensional minimization routine. Without bracketing, and with more than one independent variable, we no longer have the option of requiring a certain tolerance for a single independent variable. We typically can identify one "cycle" or "step" of our multidimensional algorithms. It is then possible to terminate when the vector distance moved in that step is fractionally smaller in magnitude than some tolerance tol. Alternatively, we could require that the decrease in the function value in the terminating step be fractionally smaller than some tolerance ftol. Note that while tol should not usually be smaller than the square root of the machine precision, it is perfectly appropriate to let ftol be of order the machine precision (or perhaps slightly larger so as not to be diddled by roundoff).

Note well that either of the above criteria might be fooled by a single anomalous step that, for one reason or another, failed to get anywhere. Therefore, it is frequently a good idea to restart a multidimensional minimization routine at a point where it claims to have found a minimum. For this restart, you should reinitialize any ancillary input quantities. In the downhill simplex method, for example, you should reinitialize N of the N+1 vertices of the simplex again by the above equation, with P0 being one of the vertices of the claimed minimum.
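
A minimal sketch of such a restart using the amoeba routine below (after convergence the best vertex sits in p(1,:); funk, L, ftol and NMAX are assumed to be defined as before):

pbest = p(1,:);                              % vertex of the claimed minimum
p = repmat(pbest, ndim + 1, 1);
p(2:end, :) = p(2:end, :) + L * eye(ndim);   % reinitialize N of the N+1 vertices
y = zeros(ndim + 1, 1);
for i = 1:ndim + 1
    y(i) = feval(funk, p(i, :));             % re-evaluate at every vertex
end
[p, y, nfunk] = amoeba(funk, p, y, ndim, ftol, NMAX);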

AMOEBA.M

function [p,y,nfunk] = amoeba(funk, p, y, ndim, ftol, NMAX)
%function [p,y,nfunk] = amoeba(funk, p, y, ndim, ftol, NMAX)
%
% funk - the name of the function that you want to minimize
% p - the N+1-by-N matrix, row stands for the vertices, column stands for
% the variables
% y - the function values of N+1 vertices
% ndim - the number of variables
% ftol - functional convergence tolerance
% NMAX - the maximum number of function evaluations
% 
% OUTPUT
%   p,y has the same meaning
%   nfunk - the number of function evaluations
%
nfunk = 0;
TINY = 1e-20;
mpts = ndim + 1;
psum = sum(p,1);
while true
    % determine the lowest (best), highest (worst) and next-highest vertices
    ilo = 1;
    if y(1)>y(2)
        inhi = 2;
        ihi  = 1;
    else
        inhi = 1;
        ihi  = 2;
    end
    for i=1:mpts
        if (y(i) <= y(ilo)) 
            ilo = i;
        end
        if (y(i) > y(ihi)) 
            inhi = ihi;
            ihi  = i;
        elseif (y(i) > y(inhi)) && (i ~= ihi)
            inhi = i;
        end
    end
    
    % fractional range from the highest to the lowest function value
    rtol = 2.0 * abs(y(ihi)-y(ilo))/(abs(y(ihi)) + abs(y(ilo))+TINY);
    if (rtol < ftol)
        % converged: put the best point and value into the first slot
        [y(1),y(ilo)] = swap(y(1),y(ilo));
        [p(1,:),p(ilo,:)] = swap(p(1,:),p(ilo,:));
        break;
    end
    if (nfunk >= NMAX) 
        break;
    end
    nfunk = nfunk + 2;
    % reflect the simplex from the high point through the opposite face
    [ytry, p, y, psum] = amotry(funk,p,y,psum,ndim,ihi,-1.0);
    if (ytry <= y(ilo))
        % the reflection produced a new best point: try an expansion
        [ytry, p, y, psum] = amotry(funk,p,y,psum,ndim,ihi,2.0);
    elseif (ytry >= y(inhi))
        % the reflected point is still worse than the second-highest:
        % look for an intermediate lower point with a 1-D contraction
        ysave=y(ihi);
        [ytry, p, y, psum] = amotry(funk,p,y,psum,ndim,ihi,0.5);
        if (ytry >= ysave)
            % can't get rid of the high point: contract the whole simplex
            % around the lowest (best) point
            for i=1:mpts
                if (i ~= ilo)
                    psum=0.5*(p(i,:) + p(ilo,:));
                    p(i,:)=psum;
                    y(i)=feval(funk,psum);
                end
            end
            nfunk = nfunk + ndim;
            psum = sum(p,1);
        end
    else
        nfunk = nfunk - 1;   % only one function evaluation was used this step
    end
end

function [ytry, p, y, psum] = amotry(funk, p, y, psum, ndim, ihi, fac)
% extrapolate by a factor fac through the face of the simplex across from
% the high point, and replace the high point if the new point is better
fac1 = (1.0-fac) / ndim;
fac2 = fac1 - fac;
ptry = psum .* fac1 - p(ihi,:) .* fac2;
ytry = feval(funk,ptry);
if (ytry < y(ihi))
    y(ihi) = ytry;
    psum = psum + ptry - p(ihi,:);
    p(ihi,1:ndim) = ptry;
end

function [a,b]=swap(a,b)
swapv=a;
a=b;
b=swapv;

EXAMPLE.M

NP = 3;               % number of variables
MP = NP + 1;          % number of simplex vertices
ndim = NP;
x = zeros(1, NP);
% initial simplex: P0 at the origin, unit offsets along each coordinate axis
for i=1:MP
    for j=1:NP
        if i == j+1
            p(i,j) = 1.0;
        else
            p(i,j) = 0.0;
        end
        x(j)=p(i,j);
    end
    y(i)=func(x);
end
disp(sprintf('%3s %10s %12s %12s %14s\n\n','i','x[i]','y[i]','z[i]','function'));
for i=1:MP
    ss1 = [sprintf('%3d ',i)];
    for j=1:NP
        ss1 = [ss1 sprintf('%12.6f ',p(i,j))];
    end
    ss1 = [ss1 sprintf('%12.6f\n',y(i))];
    disp(ss1);
end
[p,y,nfunc] = amoeba(@func,p,y,ndim,eps,1000);
disp(sprintf('\nNumber of function evaluations: %3d\n',nfunc));
disp(sprintf('Vertices of final 3-d simplex and\n'));
disp(sprintf('function values at the vertices:\n\n'));
disp(sprintf('%3s %10s %12s %12s %14s\n\n','i','x[i]','y[i]','z[i]','function'));
for i=1:MP
    ss1 = [sprintf('%3d ',i)];
    for j=1:NP
        ss1 = [ss1 sprintf('%12.6f ',p(i,j))];
    end
    ss1 = [ss1 sprintf('%12.6f\n',y(i))];
    disp(ss1);
end
disp(sprintf('\nTrue minimum is at (0.5,0.6,0.7)\n'));  

FUNC.M

function [res] = func(x)
% Numerical Recipes test function: 0.6 - J0 of the squared distance from
% (0.5, 0.6, 0.7); the minimum value is 0.6 - J0(0) = -0.4
res = 0.6 - bessj0((x(1)-0.5)^2 + (x(2)-0.6)^2 + (x(3)-0.7)^2);

function [b] = bessj0(x)
% rational/polynomial approximation of the Bessel function J0(x),
% translated from the Numerical Recipes routine bessj0
ax=abs(x);
if (ax < 8.0)
    y=x*x;
    ans1=57568490574.0+y*(-13362590354.0+y*(651619640.7 ...
        +y*(-11214424.18+y*(77392.33017+y*(-184.9052456)))));
    ans2=57568490411.0+y*(1029532985.0+y*(9494680.718 ...
        +y*(59272.64853+y*(267.8532712+y*1.0))));
    b=ans1/ans2;
else
    z=8.0/ax;
    y=z*z;
    xx=ax-0.785398164;
    ans1=1.0+y*(-0.1098628627e-2+y*(0.2734510407e-4 ...
        +y*(-0.2073370639e-5+y*0.2093887211e-6)));
    ans2 = -0.1562499995e-1+y*(0.1430488765e-3 ...
        +y*(-0.6911147651e-5+y*(0.7621095161e-6 ...
        -y*0.934935152e-7)));
    b=sqrt(0.636619772/ax)*(cos(xx)*ans1-z*sin(xx)*ans2);
end

A simple way to utilize Excel Solver

Excel Solver is a powerful tool for solving nonlinear problems in Excel. Excel provides a dialog interface that lets users select options and fill in the target and parameter cells. If you only work on one set of data and want to explore several nonlinear functions, the interface is fine. More often, though, you need to fit the same equation to many data sets. Assume one row represents one data set and you have 1000 rows. Fitting all 1000 rows one by one through the Solver dialog would be pretty tough.
The solution is simple, because you can call and control Excel Solver from VBA code. The following is a very simple example.

1: For I = 2 To 1001
2:    TARGET = "$DU$" & I
3:    PARAM = "$DR$" & I & ":$DT$" & I
4:    SolverOk SetCell:=TARGET, MaxMinVal:=2, ValueOf:="0", ByChange:=PARAM
5:    SolverSolve UserFinish:=True
6:    SolverFinish KeepFinal:=1
7: Next

Line 1: sets the first and last data rows for the loop.
Line 2: sets the cell that holds the target; in this situation, it is the SSE in DU2…
Line 3: sets a range reference to the model parameters.
Line 4: passes the target, goal (MaxMinVal:=2 means minimize), and changing cells to Solver.
Line 5: UserFinish:=True suppresses the results dialog so the loop can process every data set without manual clicks.
Line 6: KeepFinal:=1 keeps the final solution values in the worksheet.
Line 7: closes the loop.
When you use this code snippet, change the target and parameter cell or range references to match your sheet, and make sure the Solver add-in is installed and referenced in your VBA project (Tools > References in the VBA editor).
