4 Integer Programming




Even for this toy example, a large number of cuts is needed to solve the problem; this is true in general. "Deep cuts" perform much better, but they still cannot compete with the branch-and-bound methods discussed next.
One could also use the objective function to derive a cut: since all variables must be integer, the value of the objective function z = 3y1 + 2y2 must also be integer. The LP relaxation of the problem has an objective value of less than 11, hence z = 3y1 + 2y2 ≤ 10 must hold for every integer solution.
A cutting plane is then 3y1 + 2y2 + S4 = 10 with slack variable S4 ≥ 0. Solving the problem with this added constraint results in a still-fractional optimal solution with objective value z = 10. (Since the objective value has not changed, we presently encounter dual degeneracy.)
The next cutting plane is y2 + S4 ≥ 1 or, rewritten in terms of the original variables and a new slack variable S5, 3y1 + y2 + S5 = 9. Adding this cut results in another fractional optimal solution (one of the variables takes the value 0.6667).
The next cut is S4 + S5 ≥ 1 or, rewritten in terms of the original variables and a new slack variable S6, 6y1 + 3y2 + S6 = 18. The resulting optimal solution is all-integer and therefore optimal for the integer program. The cuts are shown in the figure below.
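As a quick check of the last substitution, one can verify symbolically that S4 + S5 ≥ 1 is indeed 6y1 + 3y2 ≤ 18 in the original variables; the short snippet below is ours and assumes sympy is installed.

```python
# Verify that the cut S4 + S5 >= 1, with S4 = 10 - 3*y1 - 2*y2 and
# S5 = 9 - 3*y1 - y2 taken from the two previous cuts, is equivalent to
# 6*y1 + 3*y2 <= 18 in the original variables.
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
S4 = 10 - 3*y1 - 2*y2        # slack of the cut 3y1 + 2y2 <= 10
S5 = 9 - 3*y1 - y2           # slack of the cut 3y1 + y2 <= 9

expr = sp.expand(S4 + S5 - 1)
print(expr)                  # -6*y1 - 3*y2 + 18 >= 0, i.e. 6y1 + 3y2 <= 18
```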

4.3.2 Branch-and-Bound Methods
These methods are very flexible and are applicable to all-integer (AILP) as well as mixed-integer (MILP) problems.
Idea: Starting with the LP relaxation, subdivide the problem into subproblems whose union includes all integer solutions that are not worse than the best known integer solution.
For instance, if presently y3 = 5.2, we subdivide the problem (the "parent") by adding the constraints y3 ≤ 5 and y3 ≥ 6, respectively (thus creating two "children").
Example:
Max z = 5y1 + 9y2

s.t. 5y1 + 11y2 ≤ 94 Constraint I

10y1 + 6y2 ≤ 87 Constraint II

y1 , y2 ≥ 0 and integer.

Solution Tree


Note: Each node of the solution tree represents one linear program.
The constraints at a node are all original constraints plus all additional constraints on the path between the root of the tree and the node in question.
As we move down the tree, the problems become more constrained, and thus their objective values cannot improve.
At any stage, the problem to be worked on is the "best" active node; its z-value is the present upper bound (for maximization problems; lower bound for minimization problems), while the best known integer solution provides the present lower bound (for maximization problems; upper bound for minimization problems).
Different modes of use: fully automatic (specify the integrality conditions and let the optimizer do its thing), fully manual (construct the solution tree by hand and solve the LPs graphically), or semi-automatic (construct the solution tree by hand, with the LP at each node solved by some LP solver).
Same example:
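Shown below is a minimal sketch (ours, not part of the original notes) of what the fully automatic mode does for this example: a depth-first branch-and-bound that solves each node's LP relaxation with scipy.optimize.linprog and branches on the first fractional variable.

```python
# Minimal branch-and-bound sketch for
# Max z = 5y1 + 9y2  s.t.  5y1 + 11y2 <= 94,  10y1 + 6y2 <= 87,  y >= 0 integer.
from scipy.optimize import linprog

c = [-5, -9]                       # linprog minimizes, so negate the Max objective
A = [[5, 11], [10, 6]]
b = [94, 87]

best_z, best_y = float('-inf'), None
stack = [[(0, None), (0, None)]]   # one (lower, upper) bound pair per variable

while stack:
    bounds = stack.pop()
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    if not res.success:            # LP infeasible: fathom the node
        continue
    z = -res.fun
    if z <= best_z:                # LP bound no better than incumbent: fathom
        continue
    fractional = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
    if not fractional:             # all-integer solution: new incumbent
        best_z, best_y = z, [int(round(v)) for v in res.x]
        continue
    i = fractional[0]              # branch on the first fractional variable
    v = res.x[i]
    down = list(bounds); down[i] = (bounds[i][0], int(v))       # y_i <= floor(v)
    up   = list(bounds); up[i]   = (int(v) + 1, bounds[i][1])   # y_i >= floor(v)+1
    stack += [down, up]

print(best_y, best_z)              # expected result: y = [5, 6] with z = 79
```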
If the IP problem has no feasible solution, branch and bound will eventually fathom every node as infeasible. Consider the following example:
P: Max z = y1 + 4y2

s.t. 28y1 + 7y2 ≤ 49

30y1 − 6y2 ≥ 36

y1, y2 ≥ 0 and integer.
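A brute-force check (ours) of this claim, reading the second constraint as 30y1 − 6y2 ≥ 36: no integer point satisfies both constraints, although the LP relaxation is feasible (e.g., y1 = 1.2, y2 = 0).

```python
# Enumerate all candidate integer points; the ranges below follow from the
# first constraint (28*y1 <= 49 forces y1 <= 1, 7*y2 <= 49 forces y2 <= 7).
points = [(y1, y2)
          for y1 in range(2)
          for y2 in range(8)
          if 28*y1 + 7*y2 <= 49 and 30*y1 - 6*y2 >= 36]
print(points)   # [] -- the integer program has no feasible solution
```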

4.3.3 Heuristic Methods
Knapsack problem:
P: Max z = 12y1 + 20y2 + 31y3 + 17y4 + 24y5 + 29y6

s.t. 2y1 + 4y2 + 6y3 + 3y4 + 5y5 + 5y6 ≤ 19

y1, y2, y3, y4, y5, y6 = 0 or 1.

Greedy Method
"Value per weight" of the individual items:

Variable            y1         y2         y3              y4              y5          y6
Value per weight    12/2 = 6   20/4 = 5   31/6 = 5.1667   17/3 = 5.6667   24/5 = 4.8  29/5 = 5.8
Rank                1          5          4               3               6           2

Idea: Set the variables to 1 one at a time, in order of rank (best ratio first), as long as sufficient resources remain.
Solution: y = [y1, y2, y3, y4, y5, y6] = [1, 0, 1, 1, 0, 1] with resource consumption 16 and z-value 89.
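A minimal sketch of this greedy procedure (the function name greedy_knapsack and the layout are ours):

```python
# Greedy ("bang-for-buck") heuristic for the 0-1 knapsack example above.
def greedy_knapsack(values, weights, capacity):
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    y, used = [0] * n, 0
    for i in order:                        # take items in rank order
        if used + weights[i] <= capacity:
            y[i] = 1
            used += weights[i]
    return y, used, sum(v * yi for v, yi in zip(values, y))

values  = [12, 20, 31, 17, 24, 29]
weights = [ 2,  4,  6,  3,  5,  5]
print(greedy_knapsack(values, weights, 19))
# ([1, 0, 1, 1, 0, 1], 16, 89) -- the greedy solution found above
```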
Swap method:
First swap move:

Leaving variable   Entering variable   New solution       ΔR             Δz
y1                 y2                  0, 1, 1, 1, 0, 1   −2 + 4 = +2    −12 + 20 = +8

New solution y = [0, 1, 1, 1, 0, 1] with resource consumption 18 and z-value 97.

Further swap moves (terminate whenever no local improvement is possible):

Leaving variable   Entering variable   New solution       ΔR             Δz
y2                 y1                  1, 0, 1, 1, 0, 1   −4 + 2 = −2    −20 + 12 = −8
y2                 y5                  0, 0, 1, 1, 1, 1   −4 + 5 = +1    −20 + 24 = +4

New solution y = [0, 0, 1, 1, 1, 1] with resource consumption 19 and z-value 101.


Leaving variable   Entering variable   New solution       ΔR                        Δz
y3                 y1                  1, 0, 0, 1, 1, 1   −6 + 2 = −4               −31 + 12 = −19
y3                 y2                  0, 1, 0, 1, 1, 1   −6 + 4 = −2               −31 + 20 = −11
y4                 y1                  1, 0, 1, 0, 1, 1   −3 + 2 = −1               −17 + 12 = −5
y4                 y2                  0, 1, 1, 0, 1, 1   −3 + 4 = +1: infeasible
y5                 y1                  1, 0, 1, 1, 0, 1   −5 + 2 = −3               −24 + 12 = −12
y5                 y2                  0, 1, 1, 1, 0, 1   −5 + 4 = −1               −24 + 20 = −4
y6                 y1                  1, 0, 1, 1, 1, 0   −5 + 2 = −3               −29 + 12 = −17
y6                 y2                  0, 1, 1, 1, 1, 0   −5 + 4 = −1               −29 + 20 = −9

No further improvements are possible, stop!
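A sketch of the swap improvement step (ours): starting from a 0-1 solution, it repeatedly applies the best feasible improving one-out/one-in exchange.

```python
# Pairwise swap (1-out / 1-in) improvement for a 0-1 knapsack solution.
def improve_by_swaps(y, values, weights, capacity):
    y = y[:]
    used = sum(w for w, yi in zip(weights, y) if yi)
    improved = True
    while improved:
        improved = False
        best = None                           # (delta_z, leaving index, entering index)
        for i, yi in enumerate(y):            # leaving variable (currently 1)
            if not yi:
                continue
            for j, yj in enumerate(y):        # entering variable (currently 0)
                if yj:
                    continue
                dR = weights[j] - weights[i]
                dz = values[j] - values[i]
                if used + dR <= capacity and dz > 0:
                    if best is None or dz > best[0]:
                        best = (dz, i, j)
        if best:
            dz, i, j = best
            y[i], y[j] = 0, 1
            used += weights[j] - weights[i]
            improved = True
    return y, used, sum(v * yi for v, yi in zip(values, y))

values  = [12, 20, 31, 17, 24, 29]
weights = [ 2,  4,  6,  3,  5,  5]
print(improve_by_swaps([1, 0, 1, 1, 0, 1], values, weights, 19))
# ([0, 0, 1, 1, 1, 1], 19, 101)
```

With the greedy start [1, 0, 1, 1, 0, 1], this best-improvement rule happens to reach y = [0, 0, 1, 1, 1, 1] in a single swap (y1 out, y5 in), whereas the tables above take two swaps; both paths end at the same solution with z = 101.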
Note: Greedy alone may result in very poor solutions.
Example:
P: Max z = 10y1 + 8y2 + 7y3

s.t. 54y1 + 48y2 + 47y3 ≤ 100

y1, y2, y3 = 0 or 1.
Greedy solution: y = [1, 0, 0] with resource consumption 54 & z-value 10.
Optimal solution: y = [0, 1, 1] with resource consumption 95 & z-value 15.
Another heuristic uses Lagrangean relaxation (see Section 4.2). Given an integer programming problem PIP with an objective function z to be maximized, set up the Lagrangean relaxation PLagr of PIP by dualizing its i-th constraint Σj aij yj ≤ bi with a Lagrangean multiplier ui ≥ 0, obtaining the objective function zLagr = z − ui(Σj aij yj − bi).
Denote an optimal solution of PIP by ȳ and an optimal solution of PLagr by ŷ. Whether or not ŷ is feasible for PIP, the point ȳ is certainly feasible for PLagr (which has fewer constraints), so that zLagr(ŷ) ≥ zLagr(ȳ). In turn, zLagr(ȳ) = z(ȳ) − ui(Σj aij ȳj − bi) ≥ z(ȳ), since ui ≥ 0 and Σj aij ȳj − bi ≤ 0 due to the feasibility of ȳ for PIP. We have now established the relationship from Section 4.1: the optimal value of the Lagrangean relaxation is an upper bound on the optimal value of PIP.
Consider now the effect of the term −ui(Σj aij yj − bi) on the relaxed objective function zLagr. If (y1, y2, …, yn) is feasible for PIP, the expression in brackets is nonpositive, so that multiplying it by −ui, where ui ≥ 0, results in a nonnegative contribution. If, on the other hand, (y1, y2, …, yn) is not feasible for PIP, the term causes a negative contribution to the value of zLagr.
Hence we have penalized the value of zLagr; the penalty equals ui(Σj aij yj − bi), and the larger the value of the Lagrangean multiplier ui, the larger the magnitude of this penalty. Since we are maximizing zLagr, the optimization process tends to favor points (y1, y2, …, yn) that are feasible for PIP, and this tendency becomes stronger with larger values of ui; for very large values of ui, the resulting ŷ will be feasible as long as feasible solutions to PIP exist at all.
Heuristic procedure: Start with some arbitrary value of ui and solve the corresponding relaxation PLagr. If the solution is feasible for PIP, reduce ui and solve PLagr again. If, on the other hand, the solution is not feasible for PIP, increase ui and solve PLagr again.
This process can be applied to any number of constraints rather than just the single constraint used in this description.
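As an illustration, the sketch below (ours) applies this procedure to the knapsack example above: the single knapsack constraint is dualized with a multiplier u, so the relaxation is solved simply by setting yi = 1 exactly when the relaxed profit ci − u·wi is positive; adjusting u by bisection is our own choice of adjustment rule.

```python
# Lagrangean heuristic for Max 12y1+20y2+31y3+17y4+24y5+29y6
# s.t. 2y1+4y2+6y3+3y4+5y5+5y6 <= 19, y binary, with the constraint dualized.
values   = [12, 20, 31, 17, 24, 29]
weights  = [ 2,  4,  6,  3,  5,  5]
capacity = 19

def solve_relaxation(u):
    # maximize sum((c_i - u*w_i) * y_i) + u*capacity over binary y: by inspection
    y = [1 if c - u * w > 0 else 0 for c, w in zip(values, weights)]
    return y, sum(w * yi for w, yi in zip(weights, y))

lo, hi = 0.0, 10.0        # u = 0 yields an infeasible point here, u = 10 a feasible one
best = None
for _ in range(20):       # bisection on the multiplier
    u = (lo + hi) / 2
    y, weight = solve_relaxation(u)
    if weight <= capacity:            # feasible for P_IP: decrease the penalty
        hi = u
        z = sum(c * yi for c, yi in zip(values, y))
        if best is None or z > best[0]:
            best = (z, y)
    else:                             # infeasible: increase the penalty
        lo = u
print(best)   # (89, [1, 0, 1, 1, 0, 1]) for this instance
```

With this simple adjustment rule the heuristic stops at z = 89 and never finds the better solution with z = 101, a reminder that it is indeed only a heuristic.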

Example: Consider the assignment problem from Section 2.2.7 with a maximization objective:

P: Max z = 4x11 + 3x12 + 1x13 + 8x21 + 5x22 + 3x23 +

2x31 + 6x32 + 2x33
s.t. x11 + x12 + x13 = 1

x21 + x22 + x23 = 1

x31 + x32 + x33 = 1
x11 + x21 + x31 = 1

x12 + x22 + x32 = 1

x13 + x23 + x33 = 1
x11, x12, x13, x21, x22, x23, x31, x32, x33 ≥ 0.
In addition to the usual constraints, we add the constraint
2x11 + 3x12 + x13 + 4x21 + 6x23 + 5x32 + 2x33 ≤ 8.
This is a complicating constraint: with its inclusion, the nice properties of the problem (such as the integrality of all extreme points) vanish. An obvious way to deal with this complication is to dualize the complicating constraint and solve the resulting (simple) assignment problem.
Without the complicating constraint, the unique optimal solution is x13 = x21 = x32 = 1 and xij = 0 otherwise, with z = 15. (This solution does not satisfy the complicating constraint.)
Dualizing the additional constraint with a multiplier u ≥ 0, i.e., adding the penalty term −u(2x11 + 3x12 + x13 + 4x21 + 6x23 + 5x32 + 2x33 − 8) to the objective function of the original problem, and solving the Lagrangean relaxation with the dual variable value u = ½, we obtain x13 = x21 = x32 = 1 and xij = 0 otherwise. This is the same solution obtained for the assignment problem without the complicating constraint, so that the additional constraint is still violated.
Concluding that the value of the dual variable must be increased, we arbitrarily set u = 5. The unique optimal solution to the relaxed problem is then x13 = x22 = x31 = 1 and xij = 0 otherwise, with z = 8. This solution satisfies the additional constraint and is therefore feasible for the original problem.
We now try a value between u = ½ and u = 5, e.g., u = 1. The resulting relaxed problem has an optimal solution x11 = x22 = x33 = 1 and xij = 0 otherwise, with z = 11. This solution is feasible for the original problem. We could continue with penalty values between ½ and 1, but we terminate the process at this point. (Actually, this solution happens to be optimal.)
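The same sequence of calculations can be reproduced with a short script (ours); at this size the relaxed assignment problem can simply be solved by enumerating all 3×3 assignments.

```python
from itertools import permutations

c   = [[4, 3, 1], [8, 5, 3], [2, 6, 2]]   # objective coefficients
a   = [[2, 3, 1], [4, 0, 6], [0, 5, 2]]   # coefficients of the complicating constraint
rhs = 8

def solve_relaxation(u):
    best = None
    for perm in permutations(range(3)):   # perm[i] = column assigned to row i
        z      = sum(c[i][perm[i]] for i in range(3))
        usage  = sum(a[i][perm[i]] for i in range(3))
        z_lagr = z - u * (usage - rhs)    # dualized complicating constraint
        if best is None or z_lagr > best[0]:
            best = (z_lagr, z, usage, perm)
    return best

for u in (0.5, 5, 1):
    z_lagr, z, usage, perm = solve_relaxation(u)
    print(u, perm, z, "feasible" if usage <= rhs else "violated")
# u = 0.5: x13 = x21 = x32 = 1, z = 15, complicating constraint violated
# u = 5:   x13 = x22 = x31 = 1, z = 8,  feasible
# u = 1:   x11 = x22 = x33 = 1, z = 11, feasible (and in fact optimal)
```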

Alternatively, we could have dualized all six assignment constraints and solved the Lagrangean relaxation, which would then be a knapsack problem. Since we are maximizing and since all objective function coefficients are nonnegative, we may write the six assignment constraints as "≤" constraints:

PLagr: Max zLagr = 4x11 + 3x12 + 1x13 + 8x21 + 5x22 + 3x23 + 2x31 + 6x32 + 2x33
− u1(x11 + x12 + x13 − 1)
− u2(x21 + x22 + x23 − 1)
− u3(x31 + x32 + x33 − 1)
− u4(x11 + x21 + x31 − 1)
− u5(x12 + x22 + x32 − 1)
− u6(x13 + x23 + x33 − 1)

s.t. 2x11 + 3x12 + x13 + 4x21 + 6x23 + 5x32 + 2x33 ≤ 8

xij = 0 or 1 for all i, j.
We must simultaneously choose values for the dual variables u1, u2, …, u6. The resulting relaxation is an integer (knapsack) problem, which is typically not easy to solve. (In the previous approach, the relaxation could be solved as a linear program, since it always had integer optimal solutions.)
Arbitrarily choose u1 = u2 = … = u6 = ½, which results in the objective function zLagr = 3x11 + 2x12 + 7x21 + 4x22 + 2x23 + x31 + 5x32 + x33 + 3. Maximizing this objective function subject to the knapsack constraint results in the solution x11 = x21 = x22 = x31 = x33 = 1 and xij = 0 otherwise. This solution violates the second, third, and fourth of the six assignment constraints, so that we increase the penalty parameters u2, u3, and u4 of the violated constraints.
The resulting knapsack problem has an optimal solution that violates only the fifth assignment constraint of the original problem. At this point, we could increase the value of u5, and the process continues.
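The first step of this second approach can also be checked with a small script (ours): with all six multipliers set to ½, the relaxed objective is built explicitly and the resulting 0-1 knapsack is solved by brute force over the nine variables.

```python
from itertools import product

c = {(1,1): 4, (1,2): 3, (1,3): 1, (2,1): 8, (2,2): 5, (2,3): 3,
     (3,1): 2, (3,2): 6, (3,3): 2}
a = {(1,1): 2, (1,2): 3, (1,3): 1, (2,1): 4, (2,2): 0, (2,3): 6,
     (3,1): 0, (3,2): 5, (3,3): 2}          # knapsack (complicating) constraint
u = {k: 0.5 for k in range(1, 7)}           # multipliers of the six assignment constraints
keys = sorted(c)

def z_lagr(x):
    z = sum(c[k] * x[k] for k in keys)
    for i in (1, 2, 3):                     # row constraints, dualized with u1..u3
        z -= u[i] * (sum(x[(i, j)] for j in (1, 2, 3)) - 1)
    for j in (1, 2, 3):                     # column constraints, dualized with u4..u6
        z -= u[3 + j] * (sum(x[(i, j)] for i in (1, 2, 3)) - 1)
    return z

best = max((dict(zip(keys, bits)) for bits in product((0, 1), repeat=9)
            if sum(a[k] * b for k, b in zip(keys, bits)) <= 8),
           key=z_lagr)
print(sorted(k for k in keys if best[k] == 1))
# [(1, 1), (2, 1), (2, 2), (3, 1), (3, 3)], i.e. x11 = x21 = x22 = x31 = x33 = 1;
# rows 2 and 3 and column 1 each contain more than one 1, so the second, third,
# and fourth assignment constraints are violated, as noted above.
```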
The above example demonstrates (1) that it is not always apparent beforehand which approach (i.e., which constraints to dualize) is preferable, and (2) that it is not obvious which values of the dual variables will lead to quick convergence. Indeed, choosing good values for the dual variables is more of an art than a science.

