MINISTRY OF EDUCATION AND TRAINING
HANOI PEDAGOGICAL UNIVERSITY 2
——————–o0o———————

NGUYEN THI THUY DUONG

DUAL LINEAR PROGRAMMING PROBLEM
AND ITS APPLICATIONS

Major: Applied Mathematics

BACHELOR THESIS

Advisor: Dr. Nguyen Trung Dung

Hanoi - 2019
Thesis Assurance
I assure that the data and the results of this thesis are true and not identical to those of any other topic. I also assure that all the help for this thesis has been acknowledged and that all results presented in the thesis have been identified clearly.
Student
Nguyen Thi Thuy Duong
Thesis Acknowledgement
First and foremost, my heartfelt gratitude goes to my advisor, Dr. Nguyen Trung Dung (Hanoi Pedagogical University No2), for his continuous support whenever I met obstacles during the journey. The completion of this study would not have been possible without his expert advice, close attention, and unswerving guidance.
Secondly, I am keen to express my deep gratitude to my family for encouraging me to continue this thesis. I owe my special thanks to my parents for their emotional and material sacrifices as well as their understanding and unconditional support.
Finally, I owe my thanks to the many people who helped and encouraged me during my work. I am especially thankful to all my best friends at university for their endless encouragement.
Student
Nguyen Thi Thuy Duong
CONTENT

Thesis Assurance
Thesis Acknowledgement
Notations
Introduction
1. DUAL LINEAR PROGRAMMING PROBLEM
  1.1. The Linear Programming Problem
    1.1.1. The general form of LPP
    1.1.2. The standard form of LPP
    1.1.3. The canonical form of LPP
  1.2. The Dual Problem
    1.2.1. Lagrange Function and Saddle Point
    1.2.2. Standard Form of Duality
    1.2.3. Canonical Form of Duality
    1.2.4. General Form of Duality
  1.3. Primal-Dual Relationships
2. DUAL SIMPLEX ALGORITHM
  2.1. The basics of the dual simplex algorithm
  2.2. The dual simplex algorithm
  2.3. Using Matlab to solve LPP
  2.4. Applications of the dual linear programming problem
    2.4.1. Checking whether a feasible solution is an optimal solution
    2.4.2. Finding the optimal solution set of the dual problem
CONCLUSION
REFERENCES
NOTATIONS

R+               Set of non-negative real numbers
R^n              n-dimensional Euclidean vector space
A^T              Transpose of the matrix A
(x_1, ..., x_n)^T   Column vector x
A_i              Row i of the matrix A
A_j              Column j of the matrix A
LPP              Linear Programming Problem
INTRODUCTION
1. Motivation
Duality theory is an important part of optimization theory and has many applications in practice. For an optimization problem, mathematicians study a closely related problem called the dual problem. If the original problem is a minimization problem, then the dual problem is a maximization problem, and it is expected that the dual problem is easier to handle than the original problem. From that desire, new methods and software have been created to help solve such problems more easily. On the other hand, mathematicians have also studied the practical applications of the dual problem through the theory.
Duality theorems, in particular the Complementary Slackness Theorem, are applied to test optimality and to describe the relationship between the optimal solutions of the original problem and the corresponding dual problem. The theory of duality and its applications have been researched by many domestic and foreign authors, with many impressive results. With the enthusiastic help of Dr. Nguyen Trung Dung, and with my desire to research the dual problem more deeply, I chose the topic "Dual linear programming problem and its applications" as my research topic. The thesis focuses on the following issues of duality theory and its applications: statements of, and how to construct, a dual problem from the original problem; understanding the relationship between the original problem and the dual problem; how to use the dual simplex algorithm to solve a primal linear programming problem; using Matlab to find solutions of linear programming problems; and applications of the dual problem through the weak complementary slackness theorem.
2. Thesis Objectives
The main purpose of the thesis is to understand the dual problem and its applications. More specifically, the thesis focuses on the following two main topics:
1. The dual linear programming problem and its applications
2. The dual simplex algorithm
3. Research methods
This thesis collects, synthesizes, analyzes, and researches the relevant documents.
4. Thesis organization
This thesis is organised as follows:
• Chapter 1 Dual linear programming problem
In this chapter, some basic concepts involving linear programming problems, how to construct a dual linear programming problem from the various forms of an original linear programming problem, and the relationships between a pair of primal-dual linear problems are presented, respectively.
• Chapter 2 Dual Simplex Algorithm
This chapter addresses how to solve a primal linear programming problem using the dual simplex algorithm and how to use Matlab to solve linear programming problems.
Chapter 1
DUAL LINEAR PROGRAMMING PROBLEM
In this chapter, we recall some concepts and results on dual linear programming problems that will be used in the next chapter.
1.1. The Linear Programming Problem
In this section, we introduce some forms of linear programming
problems such as general form, standard form, and canonical form.
1.1.1. The general form of LPP
Find a vector x = (x_1, x_2, ..., x_n)^T ∈ R^n such that

f(x) = Σ_{j=1}^n c_j x_j → min (max)    (1.1)

subject to

D: Σ_{j=1}^n a_ij x_j ≥ b_i  (i = 1, ..., m_1)
   Σ_{j=1}^n a_ij x_j ≤ b_i  (i = m_1 + 1, ..., m_2)
   Σ_{j=1}^n a_ij x_j = b_i  (i = m_2 + 1, ..., m)
   x_j ≥ 0  (j = 1, ..., n_1)
   x_j ≤ 0  (j = n_1 + 1, ..., n_2)
   x_j unrestricted in sign  (j = n_2 + 1, ..., n).
Common terminology for the LPP can be summarized as follows:
• The variables x1 , x2 , . . . , xn are called the decision variables.
• The constants cj , bi , aij are the input constants of LPP.
• The function f(x) being minimized or maximized is called the objective function or cost function.
• The restrictions are referred to as constraints.
• The first m constraints (those with a function Σ_{j=1}^n a_ij x_j of all the variables on the left-hand side) are called functional constraints.
• The restrictions x_j ≥ 0, x_j ≤ 0 are called the sign constraints.
For the LPP (1.1), we have some simple properties as follows:
1. Because maximizing a quantity is equivalent to minimizing its negative, each maximization problem can be transformed into a minimization problem:

f(x*) = min{f(x) : x ∈ D} ⇐⇒ −f(x*) = max{−f(x) : x ∈ D}.
2. Each inequality can be transformed into an equality by introducing an extra non-negative variable:

Σ_{j=1}^n a_ij x_j ≤ b_i ⇐⇒ Σ_{j=1}^n a_ij x_j + x_{n+i} = b_i, x_{n+i} ≥ 0;

and

Σ_{j=1}^n a_ij x_j ≥ b_i ⇐⇒ Σ_{j=1}^n a_ij x_j − x_{n+i} = b_i, x_{n+i} ≥ 0.

The variable x_{n+i} is called a slack variable.
3. Σ_{j=1}^n a_ij x_j ≤ b_i ⇐⇒ −Σ_{j=1}^n a_ij x_j ≥ −b_i.

4. Σ_{j=1}^n a_ij x_j = b_i ⇐⇒ Σ_{j=1}^n a_ij x_j ≤ b_i and Σ_{j=1}^n a_ij x_j ≥ b_i.
5. Each unrestricted variable x_j can be eliminated by the following variable substitution:

x_j = x_j' − x_j'', x_j' ≥ 0, x_j'' ≥ 0.
From the above properties, we can convert any LP problem into one of two forms: the standard form or the canonical form, which will be introduced in the next sections.
1.1.2. The standard form of LPP
We say that a linear programming problem is in standard form if it
has the following form:
Find a vector x = (x_1, x_2, ..., x_n)^T ∈ R^n such that

f(x) = Σ_{j=1}^n c_j x_j → min    (1.2)

subject to

D: Σ_{j=1}^n a_ij x_j ≥ b_i  (i = 1, ..., m)
   x_j ≥ 0  (j = 1, ..., n).
Using vector and matrix notation as follows:

x = (x_1, x_2, ..., x_n)^T ∈ R^n,  c = (c_1, c_2, ..., c_n)^T ∈ R^n,  b = (b_1, b_2, ..., b_m)^T ∈ R^m,

and A = [a_ij], the m × n constraint matrix, the standard form of LPP (1.2) can be put into the compact form below (the so-called matrix form of LPP):

f(x) = c^T x → min    (1.3)

subject to

D: Ax ≥ b, x ≥ 0.
1.1.3. The canonical form of LPP
We say that a linear programming problem is in canonical form if
it has the following form:
Find a vector x = (x_1, x_2, ..., x_n)^T ∈ R^n such that

f(x) = Σ_{j=1}^n c_j x_j → min    (1.4)

subject to

D: Σ_{j=1}^n a_ij x_j = b_i  (i = 1, ..., m)
   x_j ≥ 0  (j = 1, ..., n).

Using the same notation as for the standard form, the canonical form of LPP (1.4) can be put into the compact form below (the so-called matrix form of LPP):

f(x) = c^T x → min    (1.5)

subject to

D: Ax = b, x ≥ 0.
Example 1.1.1. Write the following LPP in standard form:

2x_1 − 3x_2 + 4x_3 − 5x_4 → max    (1.6)

subject to

D: x_1 + x_2 − x_4 = 6
   −2x_2 + 9x_3 + 3x_4 ≥ 4
   5x_1 + x_2 − 7x_3 ≤ 2
   x_1, x_2, x_3, x_4 ≥ 0.

First, the equation x_1 + x_2 − x_4 = 6 can be written as the two inequalities below:

x_1 + x_2 − x_4 ≥ 6,    −x_1 − x_2 + x_4 ≥ −6.

So the standard form of LPP (1.6) is as follows:

−2x_1 + 3x_2 − 4x_3 + 5x_4 → min

subject to

D: x_1 + x_2 − x_4 ≥ 6
   −x_1 − x_2 + x_4 ≥ −6
   −2x_2 + 9x_3 + 3x_4 ≥ 4
   −5x_1 − x_2 + 7x_3 ≥ −2
   x_1, x_2, x_3, x_4 ≥ 0.

Moreover, by adding two slack variables x_5 ≥ 0, x_6 ≥ 0, we also obtain the canonical form of LPP (1.6) as follows:

−2x_1 + 3x_2 − 4x_3 + 5x_4 → min

subject to

D: x_1 + x_2 − x_4 = 6
   −2x_2 + 9x_3 + 3x_4 − x_5 = 4
   5x_1 + x_2 − 7x_3 + x_6 = 2
   x_1, x_2, x_3, x_4, x_5, x_6 ≥ 0.
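The three reformulations of Example 1.1.1 can be checked numerically. In the Python sketch below, the feasible point x = (0, 6, 2, 0) is an assumption chosen for illustration, not a point from the thesis:

```python
# Check that the original problem (1.6), its standard form, and its
# canonical form agree at a feasible point, with f_min = -f_max.

x1, x2, x3, x4 = 0.0, 6.0, 2.0, 0.0          # an assumed feasible point

# Original constraints of (1.6).
assert x1 + x2 - x4 == 6
assert -2*x2 + 9*x3 + 3*x4 >= 4
assert 5*x1 + x2 - 7*x3 <= 2

f_max = 2*x1 - 3*x2 + 4*x3 - 5*x4            # original objective (max)
f_min = -2*x1 + 3*x2 - 4*x3 + 5*x4           # standard-form objective (min)
assert f_min == -f_max                        # Property 1

# Canonical form: slack variables x5, x6 restore the equalities.
x5 = (-2*x2 + 9*x3 + 3*x4) - 4               # surplus of the ">=" row
x6 = 2 - (5*x1 + x2 - 7*x3)                  # slack of the "<=" row
assert x5 >= 0 and x6 >= 0
assert -2*x2 + 9*x3 + 3*x4 - x5 == 4
assert 5*x1 + x2 - 7*x3 + x6 == 2
print(f_min, x5, x6)   # 10.0 2.0 10.0
```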
1.2. The Dual Problem
In this section, we recall the Lagrange function and saddle points, and we establish the standard, canonical, and general forms of duality.
1.2.1. Lagrange Function and Saddle Point
Consider a linear programming problem in standard form as follows:

f(x) = c^T x → min    (P)

subject to

D_P: Ax ≥ b, x ≥ 0.
Definition 1.2.1 (Lagrange Function). A function L(x, y) is called the Lagrange function of problem (P) if it has the following form:

L(x, y) = c^T x + (b − Ax)^T y,  x ∈ R^n_+, y ∈ R^m_+.

Definition 1.2.2 (Saddle Point). A point (x*, y*) ∈ R^n_+ × R^m_+ is called a saddle point of L if the following inequalities hold:

L(x*, y) ≤ L(x*, y*) ≤ L(x, y*),  ∀(x, y) ∈ R^n_+ × R^m_+.
Figure 1.1: The saddle point (x∗ , y ∗ ).
1.2.2. Standard Form of Duality
Obviously, for each x ∈ R^n_+ satisfying Ax ≥ b, we have b − Ax ≤ 0. Thus,

max_y L(x, y) = c^T x if x ∈ R^n_+ satisfies Ax ≥ b, and max_y L(x, y) = +∞ otherwise.

Therefore, the linear programming problem (P) is equivalent to the following problem:

min_x max_y L(x, y).    (1.7)
Changing the order of finding the minimum and maximum in problem (1.7), we obtain the following problem:

max_y min_x L(x, y).    (1.8)

The problem (1.7) is called the primal problem, and the problem (1.8) is called the dual problem.
Next, we will find a linear programming problem that is equivalent to the problem (1.8). Letting

g(y) = min_x L(x, y)
     = min_x [ Σ_{j=1}^n c_j x_j + Σ_{i=1}^m b_i y_i − Σ_{i=1}^m Σ_{j=1}^n a_ij x_j y_i ]
     = min_x [ Σ_{i=1}^m b_i y_i + Σ_{j=1}^n (c_j − Σ_{i=1}^m a_ij y_i) x_j ]
     = min_x [ b^T y + (c − A^T y)^T x ].

This implies that

g(y) = b^T y if A^T y ≤ c, and g(y) = −∞ otherwise.

Therefore, we obtain the linear programming form of the problem (1.8) as follows:

g(y) = b^T y → max    (Q)

subject to

D_Q: A^T y ≤ c, y ≥ 0.
Example 1.2.1. If the primal problem is

f(x) = −x_1 + 3x_2 − 3x_3 → min

subject to

4x_1 + 3x_2 − 3x_3 − x_4 ≥ 8
−x_1 + x_2 + x_3 ≥ −4
x_1 + 5x_2 − 5x_3 + x_4 ≥ 6
x_j ≥ 0, j = 1, ..., 4,

then the dual problem is

g(y) = 8y_1 − 4y_2 + 6y_3 → max

subject to

4y_1 − y_2 + y_3 ≤ −1
3y_1 + y_2 + 5y_3 ≤ 3
−3y_1 + y_2 − 5y_3 ≤ −3
−y_1 + y_3 ≤ 0
y_i ≥ 0, i = 1, ..., 3.
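The dual of a standard-form LP is built purely from the data (c, A, b): the objective and right-hand side swap roles and A is transposed. A minimal Python sketch (the helper name is an illustrative assumption), applied to the data of Example 1.2.1:

```python
# Dual of  min c^T x, Ax >= b, x >= 0  is  max b^T y, A^T y <= c, y >= 0.

def dual_of_standard_form(c, A, b):
    m, n = len(A), len(A[0])
    At = [[A[i][j] for i in range(m)] for j in range(n)]  # transpose of A
    # Dual objective coefficients are b (maximise); dual rhs is c.
    return b, At, c

# Data of Example 1.2.1 (note c4 = 0: x4 is absent from the objective).
c = [-1, 3, -3, 0]
A = [[4, 3, -3, -1],
     [-1, 1, 1, 0],
     [1, 5, -5, 1]]
b = [8, -4, 6]

obj, At, rhs = dual_of_standard_form(c, A, b)
print(obj)             # [8, -4, 6]   -> g(y) = 8y1 - 4y2 + 6y3 -> max
print(At[0], rhs[0])   # [4, -1, 1] -1 -> 4y1 - y2 + y3 <= -1
```

Each row of At pairs with the corresponding entry of rhs to give one dual constraint, matching the system displayed in the example.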
1.2.3. Canonical Form of Duality
Consider a linear programming problem in canonical form as follows:

f(x) = c^T x → min    (P)

subject to

D_P: Ax = b, x ≥ 0.

In order to obtain the dual problem of the problem (P), we first convert problem (P) to the standard form as follows:

f(x) = c^T x → min    (1.9)

subject to

Ax ≥ b
−Ax ≥ −b
x ≥ 0.

Using the same argument as in Subsection 1.2.2, we have the dual of the problem (1.9) as follows:

b^T u + (−b)^T v → max

subject to

A^T u + (−A)^T v ≤ c
u ≥ 0, v ≥ 0.

Now, letting y = u − v, we have the dual problem of (P):

g(y) = b^T y → max    (1.10)

subject to

D_Q: A^T y ≤ c, y unrestricted.    (Q)
Example 1.2.2. If the primal problem is

f(x) = x_1 − 3x_2 + 4x_3 → min

subject to

2x_1 + 3x_2 − 3x_3 + x_4 = 12
x_1 + x_2 + x_3 = 4
3x_1 + 5x_2 − 5x_3 − x_4 = 6
x_j ≥ 0, j = 1, ..., 4,

then the dual problem is

g(y) = 12y_1 + 4y_2 + 6y_3 → max

subject to

2y_1 + y_2 + 3y_3 ≤ 1
3y_1 + y_2 + 5y_3 ≤ −3
−3y_1 + y_2 − 5y_3 ≤ 4
y_1 − y_3 ≤ 0
y_1, y_2, y_3 free.
1.2.4. General Form of Duality
The correspondence between primal and dual problems is summarized in the following table:

Primal Problem               | Index                        | Dual Problem
c^T x → min                  |                              | b^T y → max
Σ_{j=1}^n a_ij x_j ≥ b_i     | i = 1, 2, ..., m_1           | y_i ≥ 0
Σ_{j=1}^n a_ij x_j = b_i     | i = m_1 + 1, ..., m_2        | y_i unrestricted
Σ_{j=1}^n a_ij x_j ≤ b_i     | i = m_2 + 1, ..., m          | y_i ≤ 0
x_j ≥ 0                      | j = 1, 2, ..., n_1           | Σ_{i=1}^m a_ij y_i ≤ c_j
x_j unrestricted             | j = n_1 + 1, ..., n_2        | Σ_{i=1}^m a_ij y_i = c_j
x_j ≤ 0                      | j = n_2 + 1, ..., n          | Σ_{i=1}^m a_ij y_i ≥ c_j

Table 1.1: Relationships Between Primal and Dual Problems
Example 1.2.3. Find the dual problem of the following primal problem:

f(x) = 3x_1 + 2x_2 + 4x_3 − x_4 → min

subject to

2x_1 + 3x_2 − 3x_3 + x_4 ≥ 12
x_1 − 2x_2 + 5x_3 − 2x_4 = 4
3x_1 + 5x_2 − 5x_3 − x_4 ≤ 6
x_1 ≥ 0, x_2 ≤ 0, x_3 ≥ 0, x_4 free.

Applying the rules in Table 1.1, the dual problem is

g(y) = 12y_1 + 4y_2 + 6y_3 → max

subject to

2y_1 + y_2 + 3y_3 ≤ 3
3y_1 − 2y_2 + 5y_3 ≥ 2
−3y_1 + 5y_2 − 5y_3 ≤ 4
y_1 − 2y_2 − y_3 = −1
y_1 ≥ 0, y_2 free, y_3 ≤ 0.
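The sign/sense rules of Table 1.1 can be applied mechanically. The Python sketch below (the string encoding of constraint senses and variable signs is an assumption of this sketch, not thesis notation) reproduces the structure of the dual in Example 1.2.3:

```python
# Table 1.1 as two lookup tables, for a primal in min form.

# primal constraint sense -> sign of the matching dual variable
ROW_RULE = {">=": "y >= 0", "=": "y free", "<=": "y <= 0"}
# primal variable sign -> sense of the matching dual constraint
COL_RULE = {"x >= 0": "<=", "x free": "=", "x <= 0": ">="}

def dual_structure(row_senses, col_signs):
    return ([ROW_RULE[s] for s in row_senses],
            [COL_RULE[s] for s in col_signs])

# Example 1.2.3: rows are >=, =, <=; variables x1>=0, x2<=0, x3>=0, x4 free.
y_signs, dual_senses = dual_structure(
    [">=", "=", "<="],
    ["x >= 0", "x <= 0", "x >= 0", "x free"])
print(y_signs)      # ['y >= 0', 'y free', 'y <= 0']
print(dual_senses)  # ['<=', '>=', '<=', '=']
```

The output matches the dual in the example: y_1 ≥ 0, y_2 free, y_3 ≤ 0, and dual constraints of senses ≤, ≥, ≤, =.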
1.3. Primal-Dual Relationships
In this section, we survey some important properties of the relationships between the primal problem and the dual problem. Consider a pair of primal-dual problems in canonical form as follows:

f(x) = c^T x → min    (P)

subject to

D_P: Ax = b, x ≥ 0

and

g(y) = b^T y → max    (Q)

subject to

D_Q: A^T y ≤ c, y unrestricted.
Theorem 1.3.1. For all x ∈ D_P and all y ∈ D_Q, we have f(x) ≥ g(y).

Proof. Let x, y be arbitrary feasible solutions of the primal and dual problems, respectively. Then we have

g(y) = b^T y = ⟨b, y⟩ = ⟨Ax, y⟩ = ⟨x, A^T y⟩ ≤ ⟨x, c⟩ = c^T x = f(x).
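Theorem 1.3.1 (weak duality) can be illustrated numerically. The tiny canonical-form pair below is an assumption made up for this sketch, not an example from the thesis:

```python
# Primal: min x1 + 3x2,  x1 + x2 = 2,  x >= 0.
# Dual:   max 2y,  subject to A^T y <= c, i.e. y <= 1 and y <= 3, y free.

def f(x):          # primal objective c^T x
    return 1*x[0] + 3*x[1]

def g(y):          # dual objective b^T y
    return 2*y

# Every feasible pair satisfies f(x) >= g(y).
for x in [(2.0, 0.0), (0.0, 2.0), (1.0, 1.0)]:   # all satisfy x1 + x2 = 2, x >= 0
    for y in [-5.0, 0.0, 1.0]:                    # all satisfy y <= 1
        assert f(x) >= g(y)

print(f((2.0, 0.0)), g(1.0))   # 2.0 2.0 -- the bound is tight at this pair
```

Note that the bound is attained at x = (2, 0), y = 1, which by Theorem 1.3.2 certifies that both points are optimal.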
Theorem 1.3.2. Let x*, y* be feasible solutions of the primal and dual problems, respectively. If f(x*) = g(y*), then x*, y* are optimal solutions of the pair of problems.

Proof. By Theorem 1.3.1, we have f(x*) = g(y*) ≤ f(x) for all x ∈ D_P. Therefore, x* is an optimal solution of the primal problem. Analogously, g(y) ≤ f(x*) = g(y*) for all y ∈ D_Q, so y* is an optimal solution of the dual problem.
From Theorem 1.3.1, we have the following corollary.
Corollary 1.3.3. For the pair of primal and dual problems ((P), (Q)), we have the following statements:
i) If the primal problem is unbounded below, then the dual problem has no feasible solution.
ii) If the dual problem is unbounded above, then the primal problem has no feasible solution.
Theorem 1.3.4. For the pair of the primal and dual problem ((P), (Q)),
we have the following statements:
i) If x∗ is an optimal solution of the primal problem, then the dual
problem has an optimal solution y ∗ . Moreover, we have f (x∗ ) =
g(y ∗ ).
ii) If y ∗ is an optimal solution of the dual problem, then the primal
problem has an optimal solution x∗ . Moreover, we have f (x∗ ) =
g(y ∗ ).
Proof. First, we prove statement i). Without loss of generality, we assume x* is an optimal basic feasible solution of the primal problem with basis J. Since x*_j = 0 for all j ∉ J, we have

A_J x*_J = b, or x*_J = A_J^{-1} b.    (1.11)

On the other hand, x* is an optimal basic feasible solution, i.e., Δ_k ≤ 0 for all k = 1, 2, ..., n. This implies

c_J^T z^k − c_k ≤ 0,  k = 1, 2, ..., n,    (1.12)

where z^k is the coordinate vector of the column A^k with respect to the basis J, i.e., A^k = A_J z^k. Substituting z^k = A_J^{-1} A^k into (1.12), we have

c_J^T A_J^{-1} A^k − c_k ≤ 0,  k = 1, 2, ..., n.    (1.13)

This yields

c_J^T A_J^{-1} A − c^T ≤ 0.    (1.14)

Letting y* = (A_J^{-1})^T c_J, from (1.14) we have A^T (A_J^{-1})^T c_J − c ≤ 0, which implies A^T y* ≤ c, i.e., y* ∈ D_Q. Moreover, we have

g(y*) = b^T y* = b^T (A_J^{-1})^T c_J = c_J^T A_J^{-1} b = c_J^T x*_J = f(x*).

By Theorem 1.3.2, y* is an optimal solution of the dual problem.

Using the fact that the dual of the dual problem is the primal problem, statement ii) also holds.
From Theorems 1.3.2 and 1.3.4, we have the following corollary.
Corollary 1.3.5. Assume that x* ∈ D_P and y* ∈ D_Q. Then x* is an optimal solution of (P) and y* is an optimal solution of (Q) if and only if f(x*) = g(y*).
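The formula y* = (A_J^{-1})^T c_J from the proof of Theorem 1.3.4 can be checked on a tiny canonical-form LP. The data below are an assumption made up for this sketch (a one-constraint problem, so A_J is a 1 × 1 matrix and all linear algebra reduces to scalar arithmetic):

```python
# Primal: min x1 + 3x2,  x1 + x2 = 2,  x >= 0,  with optimal basis J = {1}.

c = [1.0, 3.0]
A = [[1.0, 1.0]]
b = [2.0]
J = [0]                      # optimal basis (0-indexed)

# A_J is the 1x1 matrix [1], so A_J^{-1} = [1], and:
xJ = b[0] / A[0][J[0]]       # x*_J = A_J^{-1} b = 2
y  = c[J[0]] / A[0][J[0]]    # y*  = (A_J^{-1})^T c_J = 1

# y* is dual feasible: A^T y <= c component-wise.
assert all(A[0][j] * y <= c[j] for j in range(2))

f_star = c[J[0]] * xJ        # f(x*), non-basic variables being 0
g_star = b[0] * y            # g(y*)
assert f_star == g_star      # strong duality: equal optimal values
print(f_star, g_star)        # 2.0 2.0
```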
Next, we consider a pair of primal and dual problems in standard form as follows:

f(x) = c^T x → min    (P)

subject to

D_P: Ax ≥ b, x ≥ 0

and

g(y) = b^T y → max    (Q)

subject to

D_Q: A^T y ≤ c, y ≥ 0.
Theorem 1.3.6 (Complementary Slackness Theorem). Assume x* ∈ D_P and y* ∈ D_Q. Then they are respectively optimal solutions of the pair of primal and dual problems ((P), (Q)) if and only if

( Σ_{j=1}^n a_ij x*_j − b_i ) y*_i = 0,  ∀i = 1, 2, ..., m,    (1.16a)

( c_j − Σ_{i=1}^m a_ij y*_i ) x*_j = 0,  ∀j = 1, 2, ..., n.    (1.16b)
Proof. First, we put

α_i = ( Σ_{j=1}^n a_ij x*_j − b_i ) y*_i,  i = 1, 2, ..., m,

and

β_j = ( c_j − Σ_{i=1}^m a_ij y*_i ) x*_j,  j = 1, 2, ..., n.

Then, the assumption that x* ∈ D_P and y* ∈ D_Q implies α_i ≥ 0 for all i = 1, 2, ..., m and β_j ≥ 0 for all j = 1, 2, ..., n.

Next, letting α = Σ_{i=1}^m α_i and β = Σ_{j=1}^n β_j, note that α ≥ 0 and β ≥ 0. Therefore, conditions (1.16a) and (1.16b) are equivalent to the following condition:

α = 0 and β = 0.    (1.17)

This condition is equivalent to α + β = 0.

On the other hand, we have

α + β = Σ_{i=1}^m α_i + Σ_{j=1}^n β_j
      = Σ_{i=1}^m ( Σ_{j=1}^n a_ij x*_j − b_i ) y*_i + Σ_{j=1}^n ( c_j − Σ_{i=1}^m a_ij y*_i ) x*_j
      = Σ_{j=1}^n c_j x*_j − Σ_{i=1}^m b_i y*_i = f(x*) − g(y*).

Thus, α + β = 0 if and only if f(x*) = g(y*). By Corollary 1.3.5, the proof is complete.
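The quantities α_i and β_j from the proof are easy to compute for a concrete pair. In the Python sketch below, the standard-form data and the optimal pair (x*, y*) are assumptions verified numerically here, not an example from the thesis:

```python
# Primal: min 3x1 + 2x2,  x1 + x2 >= 2,  x1 >= 1,  x >= 0.
# Dual:   max 2y1 + y2,   y1 + y2 <= 3,  y1 <= 2,  y >= 0.

c = [3.0, 2.0]
A = [[1.0, 1.0],
     [1.0, 0.0]]
b = [2.0, 1.0]

x = [1.0, 1.0]    # claimed primal optimum
y = [2.0, 1.0]    # claimed dual optimum

# alpha_i = (sum_j a_ij x_j - b_i) * y_i   -- condition (1.16a)
alpha = [(sum(A[i][j]*x[j] for j in range(2)) - b[i]) * y[i] for i in range(2)]
# beta_j = (c_j - sum_i a_ij y_i) * x_j    -- condition (1.16b)
beta = [(c[j] - sum(A[i][j]*y[i] for i in range(2))) * x[j] for j in range(2)]

assert all(a == 0 for a in alpha) and all(bj == 0 for bj in beta)
# Equivalently, the objective values agree: f(x*) = g(y*).
assert sum(c[j]*x[j] for j in range(2)) == sum(b[i]*y[i] for i in range(2))
print(alpha, beta)   # [0.0, 0.0] [0.0, 0.0]
```

All the products α_i and β_j vanish, so by Theorem 1.3.6 the pair is indeed optimal, with f(x*) = g(y*) = 5.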
Chapter 2
DUAL SIMPLEX ALGORITHM
From the last chapter, we know that for any linear programming problem there is a corresponding dual linear programming problem, constructed from the cost and constraints of the original linear programming problem. The original linear programming problem is called the primal linear programming problem. Moreover, a linear programming problem can be solved by the simplex method; therefore, a dual linear programming problem can also be solved using the simplex method, because every dual linear programming problem is itself a linear programming problem. Duality is used to improve the performance of the simplex algorithm, leading to the dual simplex algorithm. In this chapter, we present the dual simplex algorithm to solve LPPs.
2.1. The basics of the dual simplex algorithm
The dual simplex algorithm is an attractive alternative for solving linear programming problems: it is very efficient on many types of problems and is especially useful in integer linear programming.
Consider a pair of primal and dual problems in canonical form as follows:

f(x) = c^T x → min    (P)

subject to

D_P: Ax = b, x ≥ 0

and

g(y) = b^T y → max    (Q)

subject to

D_Q: A^T y ≤ c, y unrestricted in sign.
Assume that rank(A) = m and that the vectors {A_j, j ∈ J}, consisting of m column vectors of the matrix A, are linearly independent. Then the columns {A_j, j ∈ J} are called a basis of the matrix A, denoted by A_J. Denote K = {1, 2, ..., n} \ J. The basic solution (x_J, x_K) of the problem (P) corresponding to the basis J is defined by

x_J = A_J^{-1} b,  x_K = 0.
Definition 2.1.1. A basis J is called a primal feasible basis if the corresponding basic solution is a feasible solution, i.e., x_J = A_J^{-1} b ≥ 0. The basis J is called an optimal basis if the basic solution corresponding to J is an optimal solution.

Definition 2.1.2. A vector y is called a dual basic solution corresponding to the basis J if y = (A_J^{-1})^T c_J. A basis J is called a dual feasible basis if the corresponding dual basic solution is a feasible solution of the dual problem (Q), i.e., y ∈ D_Q.
Example 2.1.1. Consider the following linear programming problem:

f(x) = x_1 + x_2 + x_3 → min

subject to

D_P: x_1 + x_4 − x_5 = 6
     x_2 − x_4 + 3x_6 = −2
     x_3 − 2x_4 + x_5 − x_6 = 9
     x_j ≥ 0, j = 1, ..., 6.
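Reading the constraint system of Example 2.1.1 as x_1 + x_4 − x_5 = 6, x_2 − x_4 + 3x_6 = −2, x_3 − 2x_4 + x_5 − x_6 = 9 (an assumption about the garbled layout), the basic columns A_1, A_2, A_3 form the identity matrix, so the basic solution for J = {1, 2, 3} can be read off directly. A minimal Python sketch:

```python
# Basic solution of Example 2.1.1 for the basis J = {1, 2, 3}.

A = [[1.0, 0.0, 0.0,  1.0, -1.0,  0.0],
     [0.0, 1.0, 0.0, -1.0,  0.0,  3.0],
     [0.0, 0.0, 1.0, -2.0,  1.0, -1.0]]
b = [6.0, -2.0, 9.0]
J = [0, 1, 2]                      # basis indices (0-based)

# The basic submatrix A_J is the identity, so A_J^{-1} b is just b.
assert all(A[i][J[k]] == (1.0 if i == k else 0.0)
           for i in range(3) for k in range(3))
xJ = list(b)                       # x_J = A_J^{-1} b
xK = [0.0, 0.0, 0.0]               # non-basic variables x4, x5, x6

print(xJ)                          # [6.0, -2.0, 9.0]
# x2 = -2 < 0: this basic solution is NOT primal feasible, so J is not a
# primal feasible basis (Definition 2.1.1) -- exactly the situation the
# dual simplex method is designed to handle.
assert any(v < 0 for v in xJ)
```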