1. Introduction
Linear programming is a central mathematical methodology of operations research. In this note, we propose an alternative approach, relying on partially ordered linear spaces. In particular, we propose the treatment of such problems through positive bases and sub-lattices of Euclidean spaces. The main result of this paper is based on a reformulation of the set of constraints of any linear programming problem. By this reformulation, we may determine whether the feasible set is compact and, as a consequence, whether a solution exists. A more detailed comparison between the Simplex Method and the proposed approach remains an open topic.
(2020) Mathematics Subject Classification: 46A40; 90C05
2. Preliminaries
2.1. Partially ordered vector spaces
Let us recall some essential notions about ordered linear spaces (we denote the partial ordering relation by the symbol "≥" throughout the paper). If E is a non-trivial vector space, then a partial ordering is a binary relation ≥ between the elements of this vector space, with the following properties:
- x ≥ x, for any x ∈ E (reflexivity)
- If x ≥ y and y ≥ x for some x, y ∈ E, then x = y (antisymmetry)
- If x ≥ y and y ≥ z, then x ≥ z (transitivity)
- If x ≥ y and λ ∈ ℝ₊, then λx ≥ λy, where λx denotes the scalar multiplication defined on E and ℝ₊ denotes the set of non-negative real numbers.
- If x ≥ y for some x, y ∈ E, then x + z ≥ y + z, for any z ∈ E. Likewise, if x ≥ y for some x, y ∈ E, then x − z ≥ y − z, for any z ∈ E.
The last two properties state that ≥ is compatible with the linear structure of E.
Then the vector space E, endowed with a partial ordering relation ≥ satisfying the above properties, is a partially ordered vector space. The set

E₊ = {x ∈ E : x ≥ 0}

is called the positive cone of this partially ordered vector space, where 0 denotes the zero element of the vector space E. Suppose that E is a partially ordered vector space, endowed with the partial ordering ≥, and let x, y ∈ E. The supremum of {x, y} with respect to ≥ is some u ∈ E such that u ≥ x and u ≥ y, while for any other r ∈ E with r ≥ x and r ≥ y we have r ≥ u. The infimum of {x, y} with respect to ≥ is some v ∈ E such that x ≥ v and y ≥ v, while for any other w ∈ E with x ≥ w and y ≥ w we have v ≥ w. If the infimum and the supremum of {x, y} exist for any x, y ∈ E, then E is called a vector lattice. In this case, the infimum of {x, y} is denoted by x ∧ y and the supremum of {x, y} is denoted by x ∨ y. If S is some non-trivial subspace of a vector lattice E, such that x ∨ y ∈ S and x ∧ y ∈ S for any x, y ∈ S, then S is called a sub-lattice of E. A sub-lattice is thus a subspace of a vector lattice which is a vector lattice itself: both the supremum and the infimum of any two elements of S belong to S. Throughout the present paper, ℝˡ is considered with the point-wise partial ordering, defined in the following way: x ≥ y if and only if x(s) ≥ y(s) for any s = 1, 2, …, l. A positive vector of ℝˡ is any x ∈ ℝˡ such that x(s) ≥ 0 for any s = 1, 2, …, l. The set of the positive vectors of ℝˡ is denoted by ℝˡ₊.
For any x, y ∈ ℝˡ, x ∨ y under the point-wise partial ordering is the vector of ℝˡ whose components are max{x(s), y(s)} for any s = 1, …, l, and x ∧ y is the vector of ℝˡ whose components are min{x(s), y(s)} for any s = 1, …, l. ℝˡ is a vector lattice under the point-wise partial ordering. For any x ∈ ℝˡ, x ∨ 0 is denoted by x⁺ and (−x) ∨ 0 is denoted by x⁻, where 0 is the zero element of ℝˡ. Hence, x = x⁺ − x⁻ under the point-wise partial ordering of ℝˡ. We denote a non-trivial subspace of ℝˡ by L. A basis {b₁, b₂, …, bᵣ} of L is called a positive basis of L if

L₊ = {λ₁b₁ + λ₂b₂ + … + λᵣbᵣ : λᵢ ≥ 0, i = 1, …, r},

where

L₊ = L ∩ ℝˡ₊

and bᵢ ∈ L₊ for any i = 1, …, r. Every component of each bᵢ is non-negative. The existence of a positive basis for an arbitrary, non-trivial subspace L of ℝˡ
is not assured. The support of a positive vector x ∈ L₊ is the set

supp(x) = {s ∈ {1, 2, …, l} : x(s) ≠ 0},

i.e. the set of indices of the non-zero coordinates of x. We recall the following results, obtained in [1]:
- For any sub-lattice Z of ℝˡ, a unique positive basis exists, up to a positive multiple of each vector of the positive basis. If Z is a sub-lattice of ℝˡ whose positive basis is {b₁, b₂, …, bₘ}, then for any

x = λ₁b₁ + … + λₘbₘ ∈ Z and y = μ₁b₁ + … + μₘbₘ ∈ Z,

we have

x ∨ y = max{λ₁, μ₁}b₁ + … + max{λₘ, μₘ}bₘ and x ∧ y = min{λ₁, μ₁}b₁ + … + min{λₘ, μₘ}bₘ.
- If some ordered subspace Z of ℝˡ is a sub-lattice of ℝˡ, then

bᵢ ∧ bⱼ = 0, i.e. supp(bᵢ) ∩ supp(bⱼ) = ∅,

for any i ≠ j with i, j ∈ {1, …, m}, where {b₁, …, bₘ} is the positive basis of Z.
- If Z is a sub-lattice of ℝˡ whose positive basis is {b₁, b₂, …, bₘ}, then for any s ∈ supp(bᵢ) and any

x = λ₁b₁ + λ₂b₂ + … + λₘbₘ ∈ Z,

we have x(s) = λᵢbᵢ(s).
- We assume that Z is a sub-lattice of ℝˡ and the positive basis of it is {b₁, b₂, …, bₘ}. If

x = λ₁b₁ + λ₂b₂ + … + λₘbₘ ∈ Z,

then

x ≥ 0 if and only if λᵢ ≥ 0 for any i = 1, …, m.
- We assume that Z is a sub-lattice of ℝˡ whose positive basis is {b₁, b₂, …, bₘ}. Then for each i = 1, 2, …, m the vector bᵢ has minimal support in Z, namely a positive vector x ∈ Z, x ≠ 0, such that supp(x) is a proper subset of supp(bᵢ), does not exist.
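As a small numerical check of the notions recalled above, the following sketch (plain Python with NumPy; the vectors are hypothetical illustration data, not taken from the paper) computes the point-wise lattice operations, the decomposition x = x⁺ − x⁻, and the supports of two positive vectors with disjoint supports, as the vectors of a positive basis of a sub-lattice must have:

```python
import numpy as np

x = np.array([3.0, -1.0, 2.0])
y = np.array([1.0,  4.0, -2.0])

# Point-wise lattice operations on R^l
sup_xy = np.maximum(x, y)          # x ∨ y: component-wise maximum
inf_xy = np.minimum(x, y)          # x ∧ y: component-wise minimum

# Positive and negative parts: x+ = x ∨ 0, x- = (−x) ∨ 0
x_plus = np.maximum(x, 0.0)
x_minus = np.maximum(-x, 0.0)
assert np.array_equal(x, x_plus - x_minus)   # x = x+ − x−

def supp(v):
    """supp(v): the set of indices s (1-based, as in the text) with v(s) != 0."""
    return {s + 1 for s in np.flatnonzero(v)}

# Two positive vectors with pairwise disjoint supports (hypothetical data):
b1 = np.array([2.0, 2.0, 0.0])
b2 = np.array([0.0, 0.0, 2.0])
assert supp(b1) & supp(b2) == set()          # b1 ∧ b2 = 0
```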
Figure 1 illustrates the concept of a sub-lattice intersecting with the positive cone in ℝ³, showing the feasible region for a linear programming problem formulated in sublattice terms.
Figure 1: Geometric visualization of a positive cone in ℝ³ intersected by a sublattice. The shaded region represents the feasible set derived from the lattice-based formulation of the linear programming problem.
2.2. Optimization literature and applications of vector lattices
Applications of vector lattices and sublattices have been explored in optimization since the early 2000s, particularly in problems related to portfolio optimization and option pricing. These studies demonstrate that lattice structures can offer alternative formulations for linear programming problems, especially when applied to real-world financial data where only a finite number of market states are considered.
Early contributions in this area include works such as Aliprantis et al. [2], which discusses portfolio insurance using positive bases, and Polyrakis [1], which develops the theoretical foundations of positive bases of lattice-subspaces. More recent computational methods for identifying positive bases and sub-lattices have been proposed [3], along with numerical examples demonstrating their practical applicability.
Recent studies have continued exploring applications of lattice theory and convex optimization in linear programming and financial mathematics [4-6]. The main contributions in the first direction are [2,7], where the authors provide a simplified way to view portfolio insurance through vector lattices and positive bases. A detailed approach to positive bases and their relation to subspaces of vector lattices is [8]. The fact that the linear programming problems in [7] involve Euclidean spaces makes its content closer to applications, since a finite number of 'states of the world' is appropriate for the determination of securities' payoff functions when real-market data are used. As examples of computational approaches to the solution of the linear programming problem appearing in [7], we mention the recent works [3,9,10]. The formulation of finite-dimensional linear programming problems in the way appearing in the present paper also appears in [11], in an attempt to provide another way of determining the vectors of a positive basis for a sub-lattice. This thesis also contains useful diagrams of cones, vector lattices and sub-lattices. The reader may see in [12] numerous numerical examples on the determination of the vectors of positive bases for sub-lattices of Euclidean spaces.
3. Main results
3.3. Applying sub-lattices to linear programming
The usual statement of a finite-dimensional Linear Programming Problem is the following: Minimize c · x, if x ∈ C, where

C = {x ∈ ℝᵏ : Ax ≤ b, x ≥ 0},   (1)

c ∈ ℝᵏ is the vector of cost variables and b ∈ ℝˡ. A is some matrix whose number of rows is equal to l and whose number of columns is equal to k. A standard step of the so-called Simplex Method for solving Linear Programming Problems is turning the inequalities of the constraints' set C into equalities by adding the so-called complementary (slack) variables. Under this transformation, the matrix of the constraints is some l × j matrix V, where the first k columns of V coincide with the columns of A. We denote the columns of V by R₁, …, Rⱼ. To better understand the above typical form, we have to emphasize the role of k and l: l is the number of the constraints of such a problem, while k is the number of the initial variables. The reader may refer to Chapter 4 of [13] for a mathematical introduction to the so-called Simplex Method for solving Linear Programming Problems. As is well known, the Simplex Algorithm has a variety of variants. In the present paper we refer to the reformulation of the feasible set through the positive basis generated by the columns of A and b in the above typical statement (1). This statement provides a way to check whether a linear programming problem has a solution or not, which is provided in the next paragraph.
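The slack-variable step described above can be sketched as follows. This is a minimal illustration, assuming inequality constraints of the form Ax ≤ b with x ≥ 0; the matrix A and vector b are hypothetical data, not from the paper:

```python
import numpy as np

# Hypothetical constraint data: A is l x k, b in R^l, for A x <= b, x >= 0.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])
l, k = A.shape

# Adding one complementary (slack) variable per constraint turns the
# inequalities into equalities: V y = b with y >= 0, where the first k
# columns of V coincide with the columns of A, so V is l x j with j = k + l.
V = np.hstack([A, np.eye(l)])
assert V.shape == (l, k + l)
assert np.array_equal(V[:, :k], A)

# Any feasible x extends to a feasible y = (x, b - A x):
x = np.array([1.0, 1.0])
y = np.concatenate([x, b - A @ x])
assert np.allclose(V @ y, b) and np.all(y >= 0)
```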
However, we keep the above notation and we determine a maximal set of linearly independent vectors among the set

{R₁, R₂, …, Rⱼ}.

Suppose that such a set is {z₁, z₂, …, zᵣ}, where each of z₁, …, zᵣ is a positive vector of ℝˡ. Z = [z₁, z₂, …, zᵣ] denotes the subspace of ℝˡ generated by {z₁, z₂, …, zᵣ}.
The determination of a positive basis for the sub-lattice W of ℝˡ generated by the set {z₁, z₂, …, zᵣ} is specified via [8, Th. 3.6]: The function

b(s) = (z₁(s), z₂(s), …, zᵣ(s)) / z(s),

for each s = 1, 2, …, l, where

z = z₁ + z₂ + … + zᵣ,

if z(s) > 0, is called the basic function of {z₁, z₂, …, zᵣ}. The range of b is denoted by R(b), and we denote by cardR(b) the number of vectors belonging to the range of b.
1. The subspace Z is a sub-lattice of ℝˡ if and only if cardR(b) = r. If R(b) = {P₁, P₂, …, Pᵣ}, then a positive basis {b₁, b₂, …, bᵣ} of Z is given by the following matrix equation:

(b₁, b₂, …, bᵣ)ᵀ = B⁻¹ (z₁, z₂, …, zᵣ)ᵀ,

where B is the r × r matrix whose i-th column is the vector Pᵢ, for each i = 1, 2, …, r, and (b₁, b₂, …, bᵣ)ᵀ, (z₁, z₂, …, zᵣ)ᵀ are the matrices whose rows are the vectors {b₁, b₂, …, bᵣ} and {z₁, z₂, …, zᵣ}, respectively.
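A minimal computational sketch of this criterion, under our reading of the basic-function construction (the example vectors and all function names are ours, not from the paper):

```python
import numpy as np

def basic_fn_values(Z):
    """Values b(s) of the basic function of the rows z1,...,zr of Z,
    for the indices s with z(s) = z1(s) + ... + zr(s) > 0."""
    z = Z.sum(axis=0)
    return [Z[:, s] / z[s] for s in range(Z.shape[1]) if z[s] > 0]

def distinct(vals):
    """The range R(b): the distinct values of the basic function."""
    R = []
    for v in vals:
        if not any(np.allclose(v, w) for w in R):
            R.append(v)
    return R

def positive_basis_if_sublattice(Z):
    """If cardR(b) = r, return the rows b1,...,br of a positive basis,
    computed as B^{-1} (z1,...,zr)^T; otherwise return None."""
    r = Z.shape[0]
    R = distinct(basic_fn_values(Z))
    if len(R) != r:
        return None                   # span of the rows is not a sub-lattice
    B = np.column_stack(R)            # i-th column of B is P_i
    return np.linalg.solve(B, Z)      # rows are b1, ..., br

# Rows are z1, z2; their span is a sub-lattice of R^3 in this example.
Z = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 2.0]])
basis = positive_basis_if_sublattice(Z)
# the basis rows (2, 2, 0) and (0, 0, 2) have pairwise disjoint supports
```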
2. The dimension of the sub-lattice W of ℝˡ generated by the vectors zᵢ, i = 1, 2, …, r is equal to the number of the distinct values of b, namely cardR(b) = m. If R(b) = {P₁, P₂, …, Pₘ}, the sub-lattice W of ℝˡ generated by {z₁, z₂, …, zᵣ} is specified through the remaining steps of the algorithm of [8, Th. 3.7]:
(a) We enumerate R(b) so that its first r vectors are linearly independent. We denote the new enumeration again by Pᵢ, i = 1, 2, …, m, and denote by I_{r+d} the characteristic function of the set

{s ∈ {1, 2, …, l} : b(s) = P_{r+d}},

for each d = 1, 2, …, m − r.
(b) We define the vectors

z_{r+d}, d = 1, 2, …, m − r,

as follows:

z_{r+d} = z · I_{r+d}, where

z = z₁ + z₂ + … + zᵣ.
(c) W = [z₁, z₂, …, zₘ], i.e. W is the subspace of ℝˡ generated by the vectors z₁, z₂, …, zₘ.
(d) We consider the basic function g for the set of vectors

{z₁, z₂, …, zₘ},

and suppose that

R(g) = {P′₁, P′₂, …, P′ₘ}

is the range of g (the range of g consists of m vectors of ℝᵐ). Then, the positive basis of W is the set of vectors {b₁, b₂, …, bₘ} of ℝˡ. They arise from the following matrix equation:

(b₁, b₂, …, bₘ)ᵀ = D⁻¹ (z₁, z₂, …, zₘ)ᵀ,

where D is an m × m matrix whose columns are the vectors of the set R(g). We observe that the vectors z₁, …, zₘ belong to ℝˡ as well; namely,

(b₁, b₂, …, bₘ)ᵀ and (z₁, z₂, …, zₘ)ᵀ

are both m × l matrices.
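Putting the steps together, the following sketch follows steps (a)-(d) under our reading of the algorithm, for a case where the span of z₁, …, zᵣ is not itself a sub-lattice (hypothetical data; all function names are ours):

```python
import numpy as np

def basic_fn(Z):
    """Basic function values of the rows of Z, keyed by coordinate index s."""
    z = Z.sum(axis=0)
    return {s: Z[:, s] / z[s] for s in range(Z.shape[1]) if z[s] > 0}

def distinct(vals):
    R = []
    for v in vals:
        if not any(np.allclose(v, w) for w in R):
            R.append(v)
    return R

def sublattice_positive_basis(Z):
    """Rows b_1, ..., b_m of a positive basis of the sub-lattice W generated
    by the rows of Z, following steps (a)-(d) as reconstructed here."""
    r, l = Z.shape
    bvals = basic_fn(Z)
    R = distinct(bvals.values())                # cardR(b) = m = dim W
    # (a) enumerate R(b) so that its first r vectors are linearly independent
    order, chosen = [], []
    for v in R:
        trial = chosen + [v]
        if np.linalg.matrix_rank(np.array(trial)) == len(trial):
            chosen, order = trial, order + [v]
    order += [v for v in R if not any(np.allclose(v, w) for w in order)]
    # (b) z_{r+d} = z * I_{r+d}, with I_{r+d} the characteristic function
    #     of the set {s : b(s) = P_{r+d}}
    z = Z.sum(axis=0)
    extra = []
    for P in order[r:]:
        ind = np.array([1.0 if s in bvals and np.allclose(bvals[s], P) else 0.0
                        for s in range(l)])
        extra.append(z * ind)
    # (c) W is generated by z_1, ..., z_m
    Zm = np.vstack([Z] + extra) if extra else Z
    # (d) basic function g of z_1, ..., z_m; the columns of D are R(g)
    D = np.column_stack(distinct(basic_fn(Zm).values()))
    return np.linalg.solve(D, Zm)               # rows b_1, ..., b_m

Z = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])     # cardR(b) = 3 > r = 2: not a sub-lattice
basis = sublattice_positive_basis(Z)
# the generated sub-lattice here is all of R^3; the rows come out as
# positive multiples of the unit vectors
```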
3.4. Existence of solution for linear programming problems
We recall that a standard transformation step in the so-called Simplex Method is the addition of the complementary variables. Under the above determinations, a Linear Programming Problem is stated in the following way: Minimize

c₁h₁ + c₂h₂ + … + cₘhₘ,

such that

h₁b₁ + h₂b₂ + … + hₘbₘ ≤ b, hᵢ ≥ 0, i = 1, 2, …, m,   (2)

where {b₁, …, bₘ} is the positive basis of the sub-lattice generated by the columns of the matrix V mentioned above and c = (c₁, c₂, …, cₘ) is the associated vector of the cost variables. The coefficients tᵢ of the expansion b = t₁b₁ + t₂b₂ + … + tₘbₘ have to be positive and non-zero.
Proposition 3.1 The constraint set of the form (2) is closed and bounded in ℝᵐ if

b = t₁b₁ + t₂b₂ + … + tₘbₘ

with tᵢ > 0 for any i = 1, …, m, and there exists some positive, non-zero real number a such that the constraint vector

h = (h₁, h₂, …, hₘ)

satisfies the following condition: hᵢ ≥ a > 0 for any i = 1, …, m.
Proof. We notice that the variable vector is h, which is a positive vector of ℝᵐ. We also notice that b is a vector of ℝˡ whose expansion coefficients tᵢ, i = 1, …, m, with respect to the positive basis {b₁, …, bₘ} are positive. If the above assumption holds, then the coefficients hᵢ, i = 1, …, m, take values in the intervals [a, tᵢ] for any i = 1, …, m, which implies that the set of constraints of the form (2) is closed and bounded. The last property arises from the fact that the supports of the vectors of a positive basis of a sub-lattice of ℝˡ are pairwise disjoint.
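A small numerical illustration of this argument, assuming constraints of the form h₁b₁ + … + hₘbₘ ≤ b together with hᵢ ≥ a, as in our reconstruction of (2) (the basis vectors and coefficients below are hypothetical data):

```python
import numpy as np

# Positive basis with pairwise disjoint supports (hypothetical data)
b1 = np.array([2.0, 2.0, 0.0])
b2 = np.array([0.0, 0.0, 2.0])
t = np.array([3.0, 1.5])          # expansion coefficients of b, all > 0
b = t[0] * b1 + t[1] * b2         # right-hand-side vector

def feasible(h, a=0.5):
    """h satisfies h_i >= a and h1*b1 + h2*b2 <= b component-wise."""
    return bool(np.all(h >= a) and np.all(h[0] * b1 + h[1] * b2 <= b))

# Because the supports are disjoint, the component-wise inequality forces
# h_i <= t_i, so each h_i ranges over the closed interval [a, t_i]:
assert feasible(np.array([3.0, 1.5]))       # the endpoints h = t
assert feasible(np.array([0.5, 0.5]))       # h = (a, a)
assert not feasible(np.array([3.1, 1.5]))   # h_1 > t_1 violates the bound
```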
The fact that the constraints' set of a Linear Programming Problem is closed and bounded is important, since it guarantees that a solution actually exists.
However, the formulation denoted above also implies the following.

Proposition 3.2 If one of the coefficients tᵢ, i = 1, …, m, is zero, then the constraints' set of the form (2) is unbounded.
Proof. Assume that there exists a vector y ∈ ℝˡ such that

y = t₁b₁ + t₂b₂ + … + tₘbₘ,

where

tᵢ ≥ 0, i = 1, 2, …, m,

and tⱼ = 0 for some j ∈ {1, …, m}. Then, consider the vector

w = e₁b₁ + e₂b₂ + … + eₘbₘ,

where eᵢ = tᵢ for any i ≠ j and eⱼ = λ · tⱼ for λ being any positive real number, so that eⱼ is also equal to zero. However, w lies in the constraints' set (2) for every such λ, as well. This implies that the set of constraints is unbounded.
Remark 3.3 If the constraints' set is unbounded, then a solution of a Linear Programming Problem may fail to exist.
The solution of a Linear Programming Problem of the form (2) is provided by the following.

Proposition 3.4 The set of solutions of a Linear Programming Problem of the form (2), within the sub-lattice generated by the columns of the matrix V of Proposition 3.1, is a singleton, under the assumptions mentioned in the statement of Proposition 3.1.

Proof. The proof arises from the fact that the supports of the elements of the positive basis are pairwise disjoint. Then, the solution is the vector b itself.
As in the so-called Simplex Method, when the set of constraints is unbounded, a solution of a Linear Programming Problem either does not exist or is hard to specify.
Remark 3.5 The transformation of the constraints' set into the form (2) through sub-lattices and positive bases may be used to verify whether it is bounded or not. Hence, it is significant for understanding whether a solution of a Linear Programming Problem exists or not.
Remark 3.6 The reader may refer to [9] for a MATLAB implementation that computes the positive basis of any sub-lattice generated by vectors of a Euclidean space.
3.5. Comparison with the simplex method
The classical simplex method iteratively moves along the edges of the feasible polyhedron to find the optimal solution, relying on vertex enumeration. In contrast, the present sublattice approach reformulates the feasible region using positive bases and partially ordered vector spaces, allowing boundedness and solution existence to be assessed directly from the lattice structure. While the simplex method is algorithmically efficient for many practical problems, the sublattice method provides a theoretical framework that may offer advantages in problems with specific lattice or convex geometry, especially in portfolio optimization where state spaces are finite. Further computational analysis is required to evaluate the efficiency of this approach relative to existing algorithms.
4. Conclusion
In this paper, we proposed an alternative lattice-theoretic formulation of finite-dimensional linear programming problems using positive bases and sublattices of Euclidean spaces. The reformulation allows for a direct examination of boundedness and feasibility by analyzing the structure of positive bases. Although the comparison with the simplex method remains open for computational complexity, the presented approach contributes conceptually by offering new theoretical insights into linear programming structure. Future work may include algorithmic development, computational complexity analysis, and broader applications in convex optimization and financial mathematics.
Declarations
The authors declare that the paper does not include any data other than the results presented in its content.