Although it’s called “Programming”, Linear Programming has little to do with what you’d call programming today. The term dates back to a time when programming had more to do with filling out tables than writing code.
Let me first hit you with the formal definition. In Linear Programming we have a number of real variables $x_1, \dots, x_n$, and some linear value function $c_1 x_1 + \dots + c_n x_n$ we want to maximize (or minimize). The interesting part is the set of constraints on the variables we need to satisfy (otherwise optimizing would be trivial). The constraints are a set of linear inequalities

$$a_{i1} x_1 + a_{i2} x_2 + \dots + a_{in} x_n \le b_i$$

for $i = 1, \dots, m$.
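For instance, here is a tiny made-up example (not from any particular application): maximize $x_1 + 2x_2$ subject to $x_1 + x_2 \le 4$, $x_2 \le 3$, and $x_1, x_2 \ge 0$. The optimum is $x_1 = 1$, $x_2 = 3$ with value $7$.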
Now this likely seems difficult to you and you can’t see any obvious applications. Good. This way you can feel very smart once you understand it. Let’s proceed with an example.
In the Diet Problem we want to find a cheap diet, say for the military. This is one of the original applications of Linear Programming and dates back to the first half of the twentieth century.
Let’s introduce a mathematical model of a diet. We have $n$ types of food and each foodstuff has different features, e.g. calories, price, different vitamins, etc. A diet, that is, a solution to our problem, consists of an amount $x_i$ (in kilograms) of food $i$ (say per month). The $x_i$ are the variables we want to find.
We want to optimize the price of our diet, so our objective function is the sum of the amount, $x_i$, of food $i$, times the cost, $\mathrm{cost}(i)$, for $i = 1, \dots, n$, that is, $\sum_{i=1}^{n} x_i \cdot \mathrm{cost}(i)$. This is a linear function, because we don’t multiply two or more variables with each other. The cost of food $i$ is not a variable, it’s just a number.
The cheapest diet is a starvation diet where you don’t eat anything, but of course we want a diet that is cheap and nutritious, so we need to constrain our solution. To do so we can introduce a number of constraints. Things like a minimum amount of calories,

$$\sum_{i=1}^{n} \mathrm{calories}(i) \cdot x_i \ge \text{minimum calories},$$

and similar lower bounds for vitamins and other nutrients.
As you can easily see these are all linear constraints, since for example the calories of each food in our diet simply add up. Since we have a linear cost function and a set of linear constraints, our model is in the form of a linear program. Using a piece of software called an LP solver we can find an optimal solution, that is, a diet that is as cheap as possible while not violating any of our nutritiousness constraints. I will explain later how these programs work.
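To make this concrete, here is a toy version of the diet LP in Python using SciPy’s linprog. The three foods, their prices, and all the nutrient numbers are invented for illustration:

from scipy.optimize import linprog

# Three hypothetical foods: bread, milk, butter (all numbers made up).
cost = [2.0, 1.5, 5.0]           # price per kg; this is what we minimize
calories = [2500, 600, 7000]     # kcal per kg
protein = [90, 35, 5]            # grams of protein per kg

# linprog only understands "A_ub @ x <= b_ub", so we negate each
# "at least" constraint: -calories @ x <= -60000 means calories >= 60000.
A_ub = [[-c for c in calories],
        [-p for p in protein]]
b_ub = [-60000, -1500]           # monthly minimums for kcal and protein

# bounds=(0, None) encodes x_i >= 0: no negative amounts of butter.
result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(result.x, result.fun)      # optimal amounts in kg, and the total cost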
One has to be somewhat careful in designing these constraints, otherwise the solutions one gets can be amusingly impractical. The article I linked at the beginning contains a funny anecdote about this.
Note that we implicitly assumed that we can have any real number as the amount of food we eat. This is bad because we might get a solution where we have to eat negative amounts of some food (Too many calories in your diet? Just try this one weird trick and eat negative amounts of butter!). We can fix this with additional constraints $x_i \ge 0$. Another problem is that some foods are not easily divided into small parts. It’s hard to eat 0.213kg of bananas a day without wasting some bananas. We’d like to have integer values for some of the $x_i$. This cannot be expressed with additional linear constraints¹ and you’ll see later on that imposing integrality constraints on our solution makes the problem a lot harder.
But note that dropping some constraints, notably integrality constraints, can only improve the cost of the optimal solution. So a solution with fractional variables can be used as a lower bound for the optimal cost. This is an extremely useful insight if you want to prove bounds on the quality of some approximation algorithm you just came up with. Fractional solutions can also be a good starting point for finding a true solution to your problem, for example by rounding.
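As a minimal sketch of the rounding idea, continuing the made-up diet example from above: rounding every fractional amount up can only increase each nutrient total, so every “at least this much” constraint stays satisfied, and the fractional optimum tells us how much the rounding cost us.

import math

cost = [2.0, 1.5, 5.0]           # same made-up prices as above
fractional = [1.3, 0.2, 4.7]     # a hypothetical fractional LP solution

# Rounding up keeps every "at least" constraint satisfied;
# only the cost gets worse.
integral = [math.ceil(x) for x in fractional]

frac_cost = sum(c * x for c, x in zip(cost, fractional))  # lower bound
int_cost = sum(c * x for c, x in zip(cost, integral))
print(integral, frac_cost, int_cost)  # [2, 1, 5], about 26.4 vs 30.5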
Solving a fractional LP and then rounding is often an easy way to get an approximate solution to a problem that is hard to solve exactly. But of course it’s not the only way. The field of approximation algorithms tries to find approximate solutions, typically for NP-complete problems. It also contains hardness results: some problems are not only hard to solve exactly, they’re also hard to solve even approximately. This is interesting because it gives us a more nuanced view of the difficulty of hard problems. The typical CS undergraduate will stop after an NP-completeness proof, because NP-complete problems are a) all hard and b) all about the same; after all, once you can solve one, you can solve them all. Approximation theory reveals a whole world of nuance between different NP-complete problems.
Let’s see an example of a simple approximation algorithm.
Suppose we have $n$ jobs that take times $t_1, \dots, t_n$ to complete and two machines that can work on them. We want to assign the jobs to the machines such that the total running time is minimized. This problem is NP-complete (by reduction from, e.g., Knapsack). Let’s try a heuristic solution and see how well it performs. The natural approach is assigning the first job to the first machine and then assigning the $i$-th job to the machine that has less work after the first $i-1$ jobs have been assigned. Note that this algorithm doesn’t need to know all the jobs beforehand, it can handle jobs as they come in. This is sometimes a very desirable property of an algorithm. Algorithms of this kind are called online algorithms.
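Here is the greedy rule as a short Python sketch (the function name and the example jobs are mine):

def greedy_schedule(jobs):
    """Assign each job, in arrival order, to the machine that currently
    has less work. Returns the final load of each machine; the schedule
    length is their maximum."""
    loads = [0, 0]
    for t in jobs:
        # The whole heuristic: pick the currently less loaded machine.
        m = 0 if loads[0] <= loads[1] else 1
        loads[m] += t
    return loads

print(greedy_schedule([3, 1, 2, 4]))  # [7, 3]: length 7, the optimum is 5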
How does a schedule computed this way compare to the optimal schedule, which might use arbitrarily clever algorithms and advance knowledge of all jobs? Every sensible algorithm is within a factor of two of the optimal schedule, since we only have two machines and can’t do more than two units of work concurrently. The heuristic I described above is however better. It takes at most $3/2$ times the time of an optimal schedule. We say it’s a $3/2$-approximation.
The proof is easy: Let $t_{\max}$ be the size of the maximal job and let $T = \sum_i t_i$ be the total amount of work we have to do. No schedule can be faster than $T/2$, and no schedule can be faster than $t_{\max}$. Hence the time $\mathrm{OPT}$ an optimal schedule takes can be bounded as $\mathrm{OPT} \ge \max(T/2, t_{\max})$. Consider the situation before the last job $n$ is assigned. The size of the already assigned jobs is $T - t_n$, so the less loaded machine (wlog 2) has a load of at most $(T - t_n)/2$, and the last job goes there. Assuming machine 2 determines the total length (if machine 1 does, apply the same argument to the last job assigned to machine 1), the algorithm produces a schedule of length at most

$$\frac{T - t_n}{2} + t_n = \frac{T}{2} + \frac{t_n}{2} \le \mathrm{OPT} + \frac{1}{2}\,\mathrm{OPT} = \frac{3}{2}\,\mathrm{OPT}.$$
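The bound is tight, by the way (a standard example): take jobs with times $1$, $1$ and $2$. The greedy rule puts the two unit jobs on different machines and then the job of length $2$ on top of one of them, for a schedule of length $3$, while the optimal schedule runs the length-$2$ job alone on one machine and finishes in time $2$. The ratio is exactly $3/2$.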
Click here to proceed to the next part, Solving LPs with Fourier Motzkin
Click here to go back to the index
¹ At least not with a reasonable number of linear constraints, unless P=NP. We’ll maybe come back to this question. Eventually this series of articles will talk about techniques for finding solutions to linear programs where some variables have to be integers.