Cutting Plane Techniques

We have found a large class of problems that can be stated conveniently as integer programming problems. We have also discovered that a large subclass of these cannot be solved by straight linear programming techniques. Thus we need to look further for ways to solve integer programming problems.

Previously we mentioned the idea of solving our integer programs with linear programming methods. Removing the constraint that requires integer solutions is called the relaxation of the integer programming problem. Solving this relaxed problem always yields an optimum solution, but as we have seen, rounding off this solution does not always provide the optimum integer solution. We do know, however, that:

The optimum solution to the relaxation of an integer programming problem is an upper bound for the optimum integer solution (when maximizing).

Consider the following example. Figure 1a shows a convex region of feasible solutions defined by several constraints. The grid indicates where inside the polygon the feasible integer solutions lie. The dot represents the optimal solution (for the linear programming problem) obtained by maximizing x1 + x2. Note that although it is not an integer solution, it is the upper bound for an optimum one.

Figure 1 - Cutting Plane Example

If we could shave off some of the area that contains noninteger solutions, we could possibly find an optimal integer solution. Examine the vertical line through x1 = 3 in figure 1a. Cutting the polygon along this line will not destroy any feasible integer solutions to the problem. In figure 1b we have done this and have a new polygon.

The line we used to shave off part of the polygon is called a cut or a cutting plane since it pares off some of the noninteger area we do not care about. And, to do the carving, all we need do is introduce this cut as an additional constraint. Note that no feasible integer solutions were omitted by including this new constraining line in our linear programming problem.

The dot in figure 1b again represents the optimum solution we get from solving the relaxation. In figure 1c we have added yet another constraint and finally arrive at an optimum integer solution.

This seems like a good idea. All we need do is solve the relaxation of the integer programming problem and generate additional constraints until we get an integer solution. During this process we would like to guarantee that as we add constraints:

a) No feasible integer solutions are excluded.
b) Each constraint reduces the feasible solution region.
c) Each constraint passes through an integer point.
d) An optimum solution is eventually found.

Let's go straight to an example. In figure 2a we have the linear programming problem specified by

maximize z = x1 + x2
subject to the constraints:
-5x1 + 4x2 ≤ 0
 5x1 + 2x2 ≤ 15
where all xi ≥ 0.

Figure 2 - Cutting Plane Application
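Before working through the tableaux, the claims about this example can be checked with a short brute-force sketch (not part of the cutting plane method itself) using exact rational arithmetic from Python's standard library:

```python
from fractions import Fraction as F

# Constraints of the example:  -5*x1 + 4*x2 <= 0  and  5*x1 + 2*x2 <= 15.
def feasible(x1, x2):
    return -5*x1 + 4*x2 <= 0 and 5*x1 + 2*x2 <= 15 and x1 >= 0 and x2 >= 0

# Relaxation optimum: both constraints are tight at the top vertex.
# Adding the two tight equations gives 6*x2 = 15; the first then
# gives x1 = (4/5)*x2.
x2 = F(15, 6)
x1 = F(4, 5) * x2
assert feasible(x1, x2)
print(x1, x2, x1 + x2)        # 2 5/2 9/2

# Best integer solution, by brute force over the (small) grid.
best = max((i + j, i, j) for i in range(4) for j in range(4) if feasible(i, j))
print(best)                   # (4, 2, 2): z = 4 at x1 = 2, x2 = 2
```

The relaxation's value of 9/2 is indeed an upper bound on the integer optimum of 4, as promised.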

Solving the relaxation of the integer program gives us the optimum solution indicated by the dot at the top of the triangle in figure 2a. The values for the variables in this feasible solution are:

x1    x2     y1    y2
2     5/2    0     0

and the final tableau after solving for this solution is:

      x1    x2    y1      y2      bi
-z     0     0   -1/10   -3/10   -9/2
x2     0     1    1/6     1/6     5/2
x1     1     0   -1/15    2/15     2

In the second row of the tableau there is a noninteger value for the variable x2. In this row we find the equation:

x2 + (1/6)y1 + (1/6)y2 = 5/2

Let us leave the fractional portions of our variables on the left hand side of the equation and move x2 to the right. If we separate the right side into integer and fractional portions, we get the following.

(1/6)y1 + (1/6)y2 = 1/2 + (2 - x2)

Let us examine this equation. Suppose that all of the variables were set to their optimum integer solutions. Since we do not allow negative solutions, the left hand side of the equation cannot be negative. This means that the right hand side of the equation cannot be negative either. Thus:

1/2 + (2 - x2) ≥ 0

This in turn forces the quantity (x2 - 2) to be no more than 1/2. Since the value of x2 must be a nonnegative integer, x2 can only be zero, one, or two. This means that the right side of the equation above will always have a value of at least 1/2, and hence so will the left. Putting this all together, we assert that if x2 is to have an integer value then the following holds.

(1/6)y1 + (1/6)y2 ≥ 1/2
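As a quick sanity check (a sketch, not part of the method), we can substitute the slack definitions y1 = 5x1 - 4x2 and y2 = 15 - 5x1 - 2x2 into the cut (1/6)y1 + (1/6)y2 ≥ 1/2 and confirm what it says in the original variables:

```python
from fractions import Fraction as F

# Slacks of the two original constraints.
def y1(x1, x2): return 5*x1 - 4*x2          # from -5*x1 + 4*x2 + y1 = 0
def y2(x1, x2): return 15 - 5*x1 - 2*x2     # from  5*x1 + 2*x2 + y2 = 15

# (1/6)*(y1 + y2) = (15 - 6*x2)/6 = 5/2 - x2, so the cut
# (1/6)*y1 + (1/6)*y2 >= 1/2 reads 5/2 - x2 >= 1/2, i.e. x2 <= 2.
for a in range(-3, 4):
    for b in range(-3, 4):
        lhs = F(1, 6) * (y1(a, b) + y2(a, b))
        assert lhs == F(5, 2) - b
print("cut is equivalent to x2 <= 2")
```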

This is a necessary (but not sufficient) condition for optimum integer values for the variables. Adding this condition to our collection of constraints (along with its surplus variable y3, as the equation (1/6)y1 + (1/6)y2 - y3 = 1/2) at this point in the solution has the same effect as beginning with the additional constraint x2 ≤ 2. This cuts off the area above 2 for x2 and gives us the polygon in figure 2b and the following tableau for our linear programming problem.

      x1    x2    y1      y2      y3    bi
-z     0     0   -1/10   -3/10    0   -9/2
x2     0     1    1/6     1/6     0    5/2
x1     1     0   -1/15    2/15    0     2
       0     0    1/6     1/6    -1    1/2

We are now one column shy of a basis and must remedy that immediately. Examination of the tableau reveals that y3 cannot enter the basis (its only nonzero coefficient, in the new bottom row, is negative, and pivoting there would make the basic solution infeasible), but either y1 or y2 may. We choose to pivot on the bottom row and place y1 into the basis. This results in the tableau:

      x1    x2    y1    y2     y3     bi
-z     0     0     0   -1/5   -3/5  -21/5
x2     0     1     0    0      1      2
x1     1     0     0    1/5   -2/5   11/5
y1     0     0     1    1     -6      3
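This pivot can be reproduced mechanically. Below is a minimal sketch using exact fractions, with rows ordered -z, x2, x1, cut and columns x1, x2, y1, y2, y3, bi; the `pivot` helper is ours, but the arithmetic is the standard Gauss-Jordan step:

```python
from fractions import Fraction as F

def pivot(T, r, c):
    """Standard Gauss-Jordan pivot on entry T[r][c] of a tableau."""
    p = T[r][c]
    T[r] = [a / p for a in T[r]]
    for i in range(len(T)):
        if i != r and T[i][c] != 0:
            f = T[i][c]
            T[i] = [a - f * b for a, b in zip(T[i], T[r])]

# Rows: -z, x2, x1, cut.  Columns: x1, x2, y1, y2, y3, bi.
T = [[0, 0, F(-1, 10), F(-3, 10),  0, F(-9, 2)],
     [0, 1, F(1, 6),   F(1, 6),    0, F(5, 2)],
     [1, 0, F(-1, 15), F(2, 15),   0, 2],
     [0, 0, F(1, 6),   F(1, 6),   -1, F(1, 2)]]

pivot(T, 3, 2)        # bring y1 into the basis via the cut row
for row in T:
    print(row)
```

The result is the basis {x2, x1, y1} with objective value 21/5 at x1 = 11/5, x2 = 2.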

Again we have an optimum feasible solution. This one is indicated by the dot in figure 2b and corresponds to:

x1     x2    y1    y2    y3
11/5   2     3     0     0

As before, we select the row that provided a noninteger solution, this time involving x1. This gives us the equation:

x1 + (1/5)y2 - (2/5)y3 = 11/5

We wish to do as before and end up with nonnegative fractions on the left hand side so that it cannot be negative. Since y3 has a negative coefficient, we just add y3 to both sides. This transforms the equation into:

x1 + (1/5)y2 + (3/5)y3 = 11/5 + y3

Moving x1 to the right and grouping the integer portions of that side, we get:

(1/5)y2 + (3/5)y3 = 1/5 + (2 + y3 - x1)

Again we see that the left side cannot be negative. Thus the right side cannot be negative either, and by employing similar arguments to those used above (the integer quantity 2 + y3 - x1 must be at least zero), we may assert that the following holds.

(1/5)y2 + (3/5)y3 ≥ 1/5
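As with the first cut, a short sketch confirms what the inequality (1/5)y2 + (3/5)y3 ≥ 1/5 means in the original variables:

```python
from fractions import Fraction as F

# y2 is the slack of 5*x1 + 2*x2 <= 15; from the first cut, y3 = 2 - x2.
def cut_lhs(x1, x2):
    y2 = 15 - 5*x1 - 2*x2
    y3 = 2 - x2
    return F(1, 5) * y2 + F(3, 5) * y3

# (1/5)*(15 - 5*x1 - 2*x2) + (3/5)*(2 - x2) = 21/5 - x1 - x2, so the
# cut (1/5)*y2 + (3/5)*y3 >= 1/5 is the constraint x1 + x2 <= 4.
for a in range(-3, 4):
    for b in range(-3, 4):
        assert cut_lhs(a, b) == F(21, 5) - a - b
print("second cut is equivalent to x1 + x2 <= 4")
```

Note that the previous relaxation optimum (11/5, 2) violates this cut, while the integer point (2, 2) satisfies it with equality.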

In terms of the original variables, this new cutting plane is the constraint x1 + x2 ≤ 4, which cuts off the previous noninteger optimum. So, we add the new cutting plane (with its surplus variable y4) to the collection of constraints and pivot. Again we need one more variable in the basis, and this time we choose y2. This leads to the final tableau:

      x1    x2    y1    y2    y3    y4    bi
-z     0     0     0     0     0    -1    -4
x2     0     1     0     0     1     0     2
x1     1     0     0     0    -1     1     2
y1     0     0     1     0    -9     5     2
y2     0     0     0     1     3    -5     1

with the optimum integer solution given below and shown in figure 2c.

x1    x2    y1    y2    y3    y4
2     2     2     1     0     0

A recapitulation is in order. First we relax the integer programming problem and solve for the optimum solution with linear programming methods. If we achieve an integer solution, then of course we are finished. If not, then there is a row of the tableau such as:

x + a1t1 + a2t2 + ... + antn = b

(where x is a basic variable and the ti are the nonbasic variables)

where b is not an integer. We then split b and all of the ai into integer and nonnegative fractional parts. (The integer portion of b is written bI and its fractional part is bF, so that b = bI + bF with 0 ≤ bF < 1; similarly ai = aiI + aiF.) Now the equation is rearranged so that it looks like this:

a1F t1 + ... + anF tn = bF + (bI - x - a1I t1 - ... - anI tn)

We now consider the case where all of the variables are set to their optimum integer values (which must of course be nonnegative), and deduce several things from the above equation. Since the fractional portions of all of the ai are nonnegative, we know that the left hand side of the equation is no less than zero, and thus the right hand side is as well. It follows that:

bI - x - a1I t1 - ... - anI tn ≥ 0

since this quantity has an integer value and is not less than -bF, which is greater than -1. This in turn makes

a1F t1 + ... + anF tn ≥ bF

If we add the above fractional inequality (with a surplus variable) to the collection of constraints in the tableau, it is the same as if we had begun with the corresponding constraint on the original variables as an initial condition. This is the essence of the cutting plane method of solving integer linear programming problems. It makes linear programming problems larger and larger as new constraints are added.
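The recapitulated rule can be written down directly. Here is a minimal sketch of the fractional-cut computation with exact arithmetic (the function name `gomory_cut` is our own choice; the method is the one described above):

```python
from fractions import Fraction as F
from math import floor

def gomory_cut(coeffs, b):
    """From a tableau row  x + sum(a_i * t_i) = b,  return the cut
    sum(frac(a_i) * t_i) >= frac(b), where frac(a) = a - floor(a)
    is the nonnegative fractional part (even for negative a)."""
    frac = lambda a: a - floor(a)
    return [frac(a) for a in coeffs], frac(b)

# First cut of the example, from the row x2 + (1/6)y1 + (1/6)y2 = 5/2:
print(gomory_cut([F(1, 6), F(1, 6)], F(5, 2)))    # cut: (1/6)y1 + (1/6)y2 >= 1/2

# Second cut, from the row x1 + (1/5)y2 - (2/5)y3 = 11/5.  Taking the
# fractional part of -2/5 (namely 3/5) performs the "add y3 to both
# sides" step automatically.
print(gomory_cut([F(1, 5), F(-2, 5)], F(11, 5)))  # cut: (1/5)y2 + (3/5)y3 >= 1/5
```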

We merely iterate this process and hope for integer solutions to appear quickly. But there are several problems. First, the tableaux can become very large indeed. Often, though, this is avoided by dropping the surplus variables introduced with cutting planes whenever they enter the basis.

A second problem arises because we are using computers to solve our linear programming equations, and computers have finite precision. Thus, noninteger solutions (such as 5.99999999) might be difficult to detect. Employing algorithms in which the coefficients remain integers solves this; for example, one can save the numerator and denominator of each fraction. But this adds to the execution time.
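A small illustration of the precision issue, and of the numerator/denominator remedy:

```python
from fractions import Fraction

# In binary floating point, ten copies of 0.1 do not sum to exactly 1,
# so a component of an "integer" solution may appear as 0.999... .
x = sum([0.1] * 10)
print(x == 1)                        # False

# Keeping numerators and denominators makes the comparison exact
# (at a cost in execution time and storage).
y = sum([Fraction(1, 10)] * 10)
print(y == 1)                        # True
```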