Even for this toy example, a large number of cuts is needed to solve the problem. This is true in general: "deep cuts" are much better, but cutting plane methods cannot compete with the branch-and-bound methods discussed next.

One could also use the objective function to derive a cut. Since all variables must be integer, the value of the objective function z = 3y_{1} + 2y_{2} must also be integer. The LP relaxation of the problem has a fractional objective value between 10 and 11, hence z ≤ 10 must hold. A cutting plane is then 3y_{1} + 2y_{2} + S_{4} = 10. Solving the problem with this added constraint results in a new fractional solution with z = 10. (Since the objective value has not changed, we presently encounter dual degeneracy.) The next cutting plane is then y_{2} + S_{4} ≥ 1 (or, alternatively, 3y_{1} + y_{2} + S_{5} = 9). Adding this cut results in an optimal solution that is still fractional (one of its variables equals 0.6667). The next cut is S_{4} + S_{5} ≥ 1 or, rewritten in terms of the original variables and the new slack variable S_{6}, 6y_{1} + 3y_{2} + S_{6} = 18. The optimal solution of the resulting problem is integer and therefore optimal for the integer program. The cuts are shown in the figure below.
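The objective-function cut can be stated generally: if all objective coefficients and all variables are integer, then z ≤ ⌊z_LP⌋ is a valid cut, where z_LP is the optimal value of the LP relaxation. A minimal sketch; the LP value 10.67 below is hypothetical, chosen only to illustrate rounding down to the cut 3y_{1} + 2y_{2} ≤ 10:

```python
import math

# Objective cut: with integer coefficients and integer variables, z = c*y is integer,
# so z <= floor(z_LP) is valid. The LP relaxation value below is hypothetical.
def objective_cut_rhs(z_lp):
    return math.floor(z_lp)

print(objective_cut_rhs(10.67))   # -> 10, i.e. the cut 3*y1 + 2*y2 <= 10
```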

4.3.2 Branch-and-Bound Methods

These methods are very flexible & applicable to both AILPs & MILPs. Idea: Starting with the LP relaxation, subdivide the problem into subproblems whose union includes all integer solutions that are not worse than the best known integer solution. For instance, if presently y_{3} = 5.2, we subdivide the problem (the "parent") by adding the constraints y_{3} ≤ 5 & y_{3} ≥ 6, respectively (thus creating two "children"; no integer solution is lost, since no integer lies strictly between 5 and 6).

Example: P: Max z = 5y_{1} + 9y_{2}

s.t. 5y_{1} + 11y_{2} ≤ 94 (Constraint I)

10y_{1} + 6y_{2} ≤ 87 (Constraint II)

y_{1} , y_{2} ≥ 0 and integer.
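As an illustration (not part of the original notes), this example can be solved by a small branch-and-bound routine. Each node's two-variable LP is solved naively here, by enumerating vertices as intersections of constraint and bound lines; this is a sketch that only works for tiny problems, not a serious LP solver.

```python
import math
from itertools import combinations

# Branch & bound for: Max z = 5*y1 + 9*y2
# s.t. 5*y1 + 11*y2 <= 94, 10*y1 + 6*y2 <= 87, y1, y2 >= 0 and integer.
CONS = [(5.0, 11.0, 94.0), (10.0, 6.0, 87.0)]    # a1*y1 + a2*y2 <= b

def solve_lp(lb, ub):
    """Maximize 5*y1 + 9*y2 over CONS plus box bounds lb <= y <= ub.
    Returns (z, y1, y2) or None if infeasible, by enumerating vertices."""
    lines = list(CONS)
    for i in range(2):
        e1, e2 = (1.0, 0.0) if i == 0 else (0.0, 1.0)
        lines.append((e1, e2, ub[i]))       # y_i <= ub[i]
        lines.append((-e1, -e2, -lb[i]))    # y_i >= lb[i]
    best = None
    for (a1, a2, b), (c1, c2, d) in combinations(lines, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-9:
            continue                         # parallel lines: no vertex
        y1 = (b * c2 - a2 * d) / det         # Cramer's rule
        y2 = (a1 * d - b * c1) / det
        if all(p * y1 + q * y2 <= r + 1e-7 for p, q, r in lines):
            z = 5 * y1 + 9 * y2
            if best is None or z > best[0]:
                best = (z, y1, y2)
    return best

def branch_and_bound():
    best_z, best_y = float("-inf"), None
    nodes = [((0.0, 0.0), (100.0, 100.0))]   # root node: a generous bounding box
    while nodes:
        lb, ub = nodes.pop()
        sol = solve_lp(lb, ub)
        if sol is None or sol[0] <= best_z:  # node infeasible or bounded out
            continue
        z, y1, y2 = sol
        frac = [(v, i) for v, i in ((y1, 0), (y2, 1)) if abs(v - round(v)) > 1e-6]
        if not frac:                         # integral solution: new incumbent
            best_z, best_y = z, (round(y1), round(y2))
            continue
        v, i = frac[0]                       # branch on a fractional variable
        lo_ub, hi_lb = list(ub), list(lb)
        lo_ub[i] = math.floor(v)             # child 1: y_i <= floor(v)
        hi_lb[i] = math.ceil(v)              # child 2: y_i >= ceil(v)
        nodes.append((lb, tuple(lo_ub)))
        nodes.append((tuple(hi_lb), ub))
    return best_z, best_y

print(branch_and_bound())   # -> (79.0, (5, 6))
```

The root LP relaxation is fractional (y_{1} = 4.9125, y_{2} = 6.3125 with z = 81.375), and the branching bounds eventually isolate the integer optimum.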

Solution Tree

Note: Each node of the solution tree represents one linear program. The constraints at a node are all original constraints plus all additional constraints on the path between the root of the tree & the node in question. As we move down the tree, the problems become more constrained & thus their objective values cannot improve. At any stage, the problem to be worked on is the "best" active node, whose z-value is the present upper bound (for max problems; lower bound for min problems); the best known integer solution provides the present lower bound (for max problems; upper bound for min problems).

Different modes of operation: fully automatic (specify the integrality conditions & let the optimizer do its thing), fully manual (manually construct the solution tree & solve the LPs graphically), or semi-automatic (manually construct the solution tree, whose LP solutions are obtained by some LP solver).

Another example, showing what happens if the IP problem has no feasible solution:

P: Max z = y_{1} + 4y_{2}

s.t. 28y_{1} + 7y_{2} ≤ 49

30y_{1} – 6y_{2} ≥ 36

y_{1}, y_{2} ≥ 0 and integer.
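A quick enumeration confirms the infeasibility claim (reading the second constraint as 30y_{1} – 6y_{2} ≥ 36): the first constraint forces y_{1} ≤ 1 and y_{2} ≤ 7, and none of those integer points satisfies both constraints, while fractional points do.

```python
# Enumerate all candidate integer points:
# 28*y1 <= 49 forces y1 <= 1, and 7*y2 <= 49 forces y2 <= 7.
feasible = [(y1, y2)
            for y1 in range(2) for y2 in range(8)
            if 28*y1 + 7*y2 <= 49 and 30*y1 - 6*y2 >= 36]
print(feasible)   # -> []  (no integer solution exists)

# The LP relaxation is nonetheless feasible, e.g. at the fractional point (1.5, 1):
print(28*1.5 + 7*1 <= 49 and 30*1.5 - 6*1 >= 36)   # -> True
```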

4.3.3 Heuristic Methods

Knapsack problem:

P: Max z = 12y_{1} + 20y_{2} + 31y_{3} + 17y_{4} + 24y_{5} + 29y_{6}

s.t. 2y_{1} + 4y_{2} + 6y_{3} + 3y_{4} + 5y_{5} + 5y_{6} ≤ 19

y_{1}, …, y_{6} = 0 or 1.

Greedy Method: "Value per weight" of the individual items:

Variable:          y_{1}      y_{2}      y_{3}           y_{4}           y_{5}       y_{6}
Value per weight:  12/2 = 6   20/4 = 5   31/6 ≈ 5.1667   17/3 ≈ 5.6667   24/5 = 4.8  29/5 = 5.8
Rank:              1          5          4               3               6           2

Idea: Increase the values of the variables one by one, in rank order (best value-per-weight ratio first), as long as resources are available. Solution: y = [y_{1}, y_{2}, y_{3}, y_{4}, y_{5}, y_{6}] = [1, 0, 1, 1, 0, 1] with resource consumption 16 & z-value 89.

Swap method. First swap move (leaving variable y_{1}, entering variable y_{2}):

New solution y = [0, 1, 1, 1, 0, 1] with resource consumption 18 & z-value 97.
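The greedy construction and the first swap move can be reproduced in a few lines. This is an illustrative sketch: the item weights [2, 4, 6, 3, 5, 5] are the denominators in the value-per-weight table, and the capacity of 19 is inferred from the swap tables (a resource consumption of 19 is feasible, 20 is not).

```python
VALUES  = [12, 20, 31, 17, 24, 29]
WEIGHTS = [2, 4, 6, 3, 5, 5]    # denominators of the value-per-weight table
CAPACITY = 19                   # inferred: consumption 19 is feasible, 20 is not

def greedy(values, weights, capacity):
    # take items in order of decreasing value per weight, while they still fit
    order = sorted(range(len(values)), key=lambda j: values[j] / weights[j], reverse=True)
    y, used = [0] * len(values), 0
    for j in order:
        if used + weights[j] <= capacity:
            y[j] = 1
            used += weights[j]
    return y, used

y, used = greedy(VALUES, WEIGHTS, CAPACITY)
z = sum(v * c for v, c in zip(VALUES, y))
print(y, used, z)                       # -> [1, 0, 1, 1, 0, 1] 16 89

# First swap move: y1 leaves the solution, y2 enters
y[0], y[1] = 0, 1
print(y, used - 2 + 4, z - 12 + 20)     # -> [0, 1, 1, 1, 0, 1] 18 97
```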

Further swap moves (terminate whenever no local improvements are possible):

Leaving variable   Entering variable   New solution         ΔR            Δz
y_{2}              y_{1}               [1, 0, 1, 1, 0, 1]   -4 + 2 = -2   -20 + 12 = -8
y_{2}              y_{5}               [0, 0, 1, 1, 1, 1]   -4 + 5 = +1   -20 + 24 = +4

New solution y = [0, 0, 1, 1, 1, 1] with resource consumption 19 & z-value 101. Possible swap moves from this solution:

Leaving variable   Entering variable   New solution         ΔR                        Δz
y_{3}              y_{1}               [1, 0, 0, 1, 1, 1]   -6 + 2 = -4               -31 + 12 = -19
y_{3}              y_{2}               [0, 1, 0, 1, 1, 1]   -6 + 4 = -2               -31 + 20 = -11
y_{4}              y_{1}               [1, 0, 1, 0, 1, 1]   -3 + 2 = -1               -17 + 12 = -5
y_{4}              y_{2}               [0, 1, 1, 0, 1, 1]   -3 + 4 = +1: infeasible
y_{5}              y_{1}               [1, 0, 1, 1, 0, 1]   -5 + 2 = -3               -24 + 12 = -12
y_{5}              y_{2}               [0, 1, 1, 1, 0, 1]   -5 + 4 = -1               -24 + 20 = -4
y_{6}              y_{1}               [1, 0, 1, 1, 1, 0]   -5 + 2 = -3               -29 + 12 = -17
y_{6}              y_{2}               [0, 1, 1, 1, 1, 0]   -5 + 4 = -1               -29 + 20 = -9
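The full swap procedure can be sketched as a simple local search over one-for-one exchanges. The weights are again taken from the value-per-weight table and the capacity of 19 is inferred from the swap tables, so this is an illustrative sketch rather than the notes' own implementation:

```python
VALUES  = [12, 20, 31, 17, 24, 29]
WEIGHTS = [2, 4, 6, 3, 5, 5]     # from the value-per-weight table
CAPACITY = 19                    # inferred from the swap tables

def swap_search(y):
    """Repeat improving one-for-one swaps until none remains (a local optimum)."""
    used = sum(w for w, c in zip(WEIGHTS, y) if c)
    improved = True
    while improved:
        improved = False
        for out in range(len(y)):
            for inn in range(len(y)):
                if y[out] == 1 and y[inn] == 0:
                    dR = -WEIGHTS[out] + WEIGHTS[inn]   # change in resource use
                    dz = -VALUES[out] + VALUES[inn]     # change in objective
                    if used + dR <= CAPACITY and dz > 0:
                        y[out], y[inn] = 0, 1           # perform the swap
                        used += dR
                        improved = True
    return y, used, sum(v * c for v, c in zip(VALUES, y))

# starting from the greedy solution, the search reproduces the swap sequence above
print(swap_search([1, 0, 1, 1, 0, 1]))   # -> ([0, 0, 1, 1, 1, 1], 19, 101)
```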

No further improvements are possible, stop!

Note: Greedy alone may result in very poor solutions. Example:

P: Max z = 10y_{1} + 8y_{2} + 7y_{3}

s.t. 54y_{1} + 48y_{2} + 47y_{3} ≤ 100

y_{1}, y_{2}, y_{3} = 0 or 1.

Greedy solution: y = [1, 0, 0] with resource consumption 54 & z-value 10. Optimal solution: y = [0, 1, 1] with resource consumption 95 & z-value 15.

Another heuristic uses Lagrangean relaxation (see Section 4.2). Given an IP problem P_{IP} with objective function z, set up the Lagrangean relaxation P_{Lagr} of P_{IP} by dualizing its i-th constraint a_{i1}y_{1} + a_{i2}y_{2} + … + a_{in}y_{n} ≤ b_{i} with a Lagrangean multiplier λ ≥ 0, obtaining the objective function z_{Lagr} = z − λ(a_{i1}y_{1} + … + a_{in}y_{n} − b_{i}). Denote an optimal solution of P_{IP} by ȳ and an optimal solution of P_{Lagr} by ŷ. Whether or not ŷ is feasible for P_{IP}, denote by z(ŷ) the objective function value of P_{IP} at the point ŷ. Since ŷ is optimal for P_{Lagr}, we have z_{Lagr}(ŷ) ≥ z_{Lagr}(ȳ), which, in turn, is greater than or equal to z(ȳ): λ ≥ 0 and ȳ satisfies the dualized constraint, so the penalty term is nonnegative at ȳ. We have now established the bounding relationship z_{Lagr}(ŷ) ≥ z(ȳ) from Section 4.1.

Consider now the effect of the term −λ(a_{i1}y_{1} + … + a_{in}y_{n} − b_{i}) on the relaxed objective function z_{Lagr}. If (y_{1}, y_{2}, …, y_{n}) is feasible for P_{IP}, the expression in parentheses is nonpositive, so that multiplying it by −λ, where λ ≥ 0, results in a nonnegative contribution. If, on the other hand, (y_{1}, y_{2}, …, y_{n}) violates the dualized constraint, the term makes a negative contribution to the value of z_{Lagr}. Hence we have penalized the value of z_{Lagr}, the size of the penalty being λ(a_{i1}y_{1} + … + a_{in}y_{n} − b_{i}); the larger the value of the Lagrangean multiplier λ, the larger the magnitude of this penalty. Since we are maximizing z_{Lagr}, the optimization process tends to favor points (y_{1}, y_{2}, …, y_{n}) that are feasible for P_{IP}, and this tendency is stronger for larger values of λ; for very large values of λ, the resulting ŷ will be feasible as long as feasible solutions to P_{IP} exist at all.

Heuristic procedure: Start with some arbitrary value of λ and solve the corresponding relaxation P_{Lagr}. If the solution is feasible for P_{IP}, reduce λ and solve P_{Lagr} again. If, on the other hand, the solution is not feasible for P_{IP}, increase λ and solve P_{Lagr} again. This process can be applied to any number of constraints rather than just a single constraint as in this example.
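The adjustment of the multiplier can be sketched as a simple bracketing search. Everything here is illustrative: solve_relaxation is a hypothetical callback that returns an optimal solution of P_Lagr for a given multiplier, and the doubling/bisection update is just one possible rule.

```python
def tune_multiplier(solve_relaxation, a, b, lam=1.0, steps=25):
    """Adjust a single multiplier lam for the dualized constraint a*y <= b:
    decrease it while the relaxed optimum stays feasible, increase it otherwise.
    solve_relaxation(lam) is a hypothetical callback returning an optimum of P_Lagr."""
    lo, hi = 0.0, None      # lo: largest multiplier seen to give infeasibility,
                            # hi: smallest one seen to give feasibility
    for _ in range(steps):
        y = solve_relaxation(lam)
        if sum(ai * yi for ai, yi in zip(a, y)) <= b:
            hi = lam                                  # feasible: try a smaller penalty
            lam = (lo + hi) / 2
        else:
            lo = lam                                  # infeasible: increase the penalty
            lam = lam * 2 if hi is None else (lo + hi) / 2
    return lam
```

For a made-up two-variable toy, Max 3y_{1} + 2y_{2} with the single dualized constraint y_{1} + y_{2} ≤ 1 and binary variables, the relaxation sets y_{j} = 1 exactly when c_{j} − λ > 0, and the loop settles near λ = 2, the point where the relaxed optimum first becomes feasible.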

Example: Consider the assignment problem from Section 2.2.7 with a maximization objective:

P: Max z = 4x_{11} + 3x_{12} + 1x_{13} + 8x_{21} + 5x_{22} + 3x_{23} + 2x_{31} + 6x_{32} + 2x_{33}

s.t. x_{11} + x_{12} + x_{13} = 1

x_{21} + x_{22} + x_{23} = 1

x_{31} + x_{32} + x_{33} = 1

x_{11} + x_{21} + x_{31} = 1

x_{12} + x_{22} + x_{32} = 1

x_{13} + x_{23} + x_{33} = 1

x_{11}, x_{12}, x_{13}, x_{21}, x_{22}, x_{23}, x_{31}, x_{32}, x_{33} ≥ 0.

In addition to the usual constraints, we add the constraint 2x_{11} + 3x_{12} + x_{13} + 4x_{21} + 6x_{23} + 5x_{32} + 2x_{33} ≤ 8. This is a complicating constraint: with its inclusion, the nice properties of the problem (such as the integrality of all extreme points) vanish. An obvious way to deal with this complication is to dualize the complicating constraint and solve the resulting (simple) assignment problem.

Without the complicating constraint, the unique optimal solution is x_{13} = x_{21} = x_{32} = 1 and x_{ij} = 0 otherwise, with z = 15. (This solution does not satisfy the complicating constraint.) Dualizing the additional constraint as −λ(2x_{11} + 3x_{12} + x_{13} + 4x_{21} + 6x_{23} + 5x_{32} + 2x_{33} − 8), we add this penalty term to the objective function of the original problem. Using the dual value λ = ½ and solving the Lagrangean relaxation, we obtain x_{13} = x_{21} = x_{32} = 1 and x_{ij} = 0 otherwise. This is the same solution obtained for the assignment problem without the complicating constraint, so that the additional constraint is still violated. Concluding that the value of the dual variable must be increased, we arbitrarily set λ = 5. The unique optimal solution of the relaxed problem is then x_{13} = x_{22} = x_{31} = 1 and x_{ij} = 0 otherwise, with z = 8. This solution satisfies the additional constraint and is therefore feasible for the original problem. We next try a value between λ = ½ and λ = 5, e.g., λ = 1. The resulting relaxed problem has an optimal solution x_{11} = x_{22} = x_{33} = 1 and x_{ij} = 0 otherwise, with z = 11. This solution is feasible for the original problem. We could continue with penalty values between ½ and 1, but terminate the process at this point. (Actually, the solution with z = 11 happens to be optimal.)
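The relaxation solves for the multiplier values ½, 5, and 1 can be verified by brute force, since a 3×3 assignment problem has only six candidate solutions. The matrices below restate the objective and complicating-constraint coefficients from the example:

```python
from itertools import permutations

C = [[4, 3, 1], [8, 5, 3], [2, 6, 2]]   # objective coefficients c_ij
A = [[2, 3, 1], [4, 0, 6], [0, 5, 2]]   # complicating-constraint coefficients (x22, x31 absent)

def solve_relaxation(lam):
    """Maximize z(x) - lam*(A.x - 8) over the six assignments; report z and A.x."""
    p = max(permutations(range(3)),
            key=lambda p: sum(C[i][p[i]] - lam * A[i][p[i]] for i in range(3)))
    return (sum(C[i][p[i]] for i in range(3)),   # original objective value z
            sum(A[i][p[i]] for i in range(3)))   # left-hand side of the extra constraint

print(solve_relaxation(0.5))   # -> (15, 10): still violates A.x <= 8
print(solve_relaxation(5))     # -> (8, 1): feasible
print(solve_relaxation(1))     # -> (11, 4): feasible
```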

Alternatively, we could have dualized all six assignment constraints and solved the Lagrangean relaxation, which would then be a knapsack problem. Since we are maximizing and since all coefficients are nonnegative, we may write the six assignment constraints as "≤" constraints. Dualizing them with multipliers u_{1}, …, u_{6} ≥ 0 gives:

P: Max z_{Lagr} = 4x_{11} + 3x_{12} + 1x_{13} + 8x_{21} + 5x_{22} + 3x_{23} +

2x_{31} + 6x_{32} + 2x_{33}

– u_{1}(x_{11} + x_{12} + x_{13} – 1)

– u_{2}(x_{21} + x_{22} + x_{23} – 1)

– u_{3}(x_{31} + x_{32} + x_{33} – 1)

– u_{4}(x_{11} + x_{21} + x_{31} – 1)

– u_{5}(x_{12} + x_{22} + x_{32} – 1)

– u_{6}(x_{13} + x_{23} + x_{33} – 1)

s.t. 2x_{11} + 3x_{12} + x_{13} + 4x_{21} + 6x_{23} + 5x_{32} + 2x_{33} ≤ 8

x_{ij} = 0 or 1 for all i, j.

We must simultaneously choose the dual variables u_{1}, u_{2}, …, u_{6}. The resulting relaxation is an integer problem that is typically not easy to solve. (In the previous approach, the relaxation could be solved as an LP, since it always had integer extreme points.) Arbitrarily choose u_{1} = u_{2} = … = u_{6} = ½, which results in the objective function z_{Lagr} = 3x_{11} + 2x_{12} + 7x_{21} + 4x_{22} + 2x_{23} + x_{31} + 5x_{32} + x_{33} + 3. Maximizing this objective function subject to the knapsack constraint results in the solution x_{11} = x_{21} = x_{22} = x_{31} = x_{33} = 1 and x_{ij} = 0 otherwise. This solution violates the second, third, & fourth of the six assignment constraints, so we increase the penalty parameters u_{2}, u_{3}, & u_{4} for the violated constraints. The resulting knapsack problem has an optimal solution that violates only the fifth constraint of the original problem. At this point, we could increase the value of u_{5}, and the process continues.

The above example demonstrates that (1) it is not always apparent beforehand which approach (i.e., dualizing which constraints) is preferable, and (2) it is not clear which values of the dual variables will result in quick convergence. Actually, choosing good values for the dual variables is more of an art than a science.
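The knapsack relaxation with all multipliers at ½ can be checked by brute force over the 2⁹ binary vectors. This is a verification sketch; the row/column sums it prints match the violated-constraint pattern described above:

```python
from itertools import product

C = [4, 3, 1, 8, 5, 3, 2, 6, 2]   # c for x11, x12, x13, x21, x22, x23, x31, x32, x33
A = [2, 3, 1, 4, 0, 6, 0, 5, 2]   # knapsack (complicating-constraint) coefficients
rel = [c - 1 for c in C]          # u_row + u_col = 1/2 + 1/2 subtracts 1 from each c_ij

x = max((x for x in product((0, 1), repeat=9)
         if sum(a * xi for a, xi in zip(A, x)) <= 8),
        key=lambda x: sum(r * xi for r, xi in zip(rel, x)))
z_lagr = sum(r * xi for r, xi in zip(rel, x)) + 6 * 0.5   # constant term: sum of u_i
print(x, z_lagr)                  # -> (1, 0, 0, 1, 1, 0, 1, 0, 1) 19.0

# Row/column sums show which assignment constraints the relaxed optimum violates:
rows = [sum(x[3*i:3*i+3]) for i in range(3)]
cols = [sum(x[j::3]) for j in range(3)]
print(rows, cols)                 # -> [1, 2, 2] [3, 1, 1]  (constraints 2, 3, 4 violated)
```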