Sophisticated Optimization

Model the Best Set of Options to Achieve Your Goal

Maximize Value and Minimize Cost

Good business decisions give your organization a competitive advantage. Optimal decisions make that advantage massive. The complex interplay between inputs, assumptions, and constraints within a model's structure makes achieving optimality an impossible task without the right tool.

Sophisticated optimization techniques tell you exactly what you should do to get the best result possible in any situation for your organization.

What is Optimization?

Decision Models for Any Industry

Optimization refers to a process or analysis that determines the set of decisions that maximize or minimize a key model output. These techniques can be applied to virtually any application and are commonly used in construction & engineering, logistics & transportation, finance & banking, insurance & reinsurance, energy & utilities, manufacturing & consumer goods, and other industries. Use cases are varied and include:

  • Inventory management
  • Budgeting
  • Resource and production scheduling
  • Product and marketing mix
  • Supply chain planning
  • Market entry timing
  • Project, loan, and investment portfolio maximization, and more

Prescriptive Recommendations

Sophisticated optimization gives the decision maker the precise values for all choices that need to be made – essentially, what each controllable input should be. These optimal decisions preserve logical real-life constraints and business rules, ensuring only feasible solutions are presented. Progress can be tracked visually during the analysis to help communicate both the process and the final, optimal outcome.

How Does Optimization Work?

Optimization operates on your model by trialing a potential solution, checking it for feasibility, “learning” from the result, and then determining the best “direction” in which to search for superior solutions. This process continues until a globally optimal solution is found.

To ensure feasibility, decision variables are constrained to the ranges available to the decision maker. Such constraints could be real-life limitations (such as the availability of airline seats or similar inventory) or business rules (such as avoiding overtime costs). Additional, complex constraints on any aspect of the model, or on the interplay between the decision variables and model assumptions, are tested with each trial to ensure only viable solutions are permitted.
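As a rough sketch of this trial-and-learn loop, the Python snippet below runs a simple random-search hill climb on a hypothetical two-input profit model; the demand curve, variable bounds, and starting guess are illustrative assumptions, not the algorithm used by any particular product.

import random

# Hypothetical model: profit as a function of two controllable inputs.
def profit(price, ad_spend):
    demand = 1000 - 8 * price + 0.5 * ad_spend      # assumed demand curve
    return demand * (price - 20) - ad_spend         # margin minus advertising cost

# Feasibility check: keep each decision variable within its allowed range.
def feasible(price, ad_spend):
    return 20 <= price <= 100 and 0 <= ad_spend <= 5000

def optimize(trials=10_000, seed=0):
    rng = random.Random(seed)
    best = (50.0, 1000.0)                           # starting guess
    best_value = profit(*best)
    for _ in range(trials):
        # Trial a nearby solution, "learning" by keeping only improvements.
        candidate = (best[0] + rng.uniform(-5, 5), best[1] + rng.uniform(-100, 100))
        if not feasible(*candidate):                # reject infeasible trials
            continue
        value = profit(*candidate)
        if value > best_value:
            best, best_value = candidate, value
    return best, best_value

(price, ad), value = optimize()
print(f"best price={price:.2f}, ad spend={ad:.2f}, profit={value:.2f}")

Dedicated solvers are far more sophisticated about choosing the next trial, but the loop structure (trial, check feasibility, evaluate, keep the best) is the same.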

The current best value of the output during an optimization is tracked in real time and can be displayed graphically, highlighting the rapid initial improvement in solutions, occasional jumps as breakthroughs are made, and the final convergence to the optimal solution.

The progress of an optimization in real-time, showing how the specified output changes as different solutions are trialed.

Linear and Nonlinear Optimization

In general, optimization problems fall into one of two categories: linear and nonlinear.

There are many different optimization, or “solving,” methods, some better suited to different types of problems than others. Linear solving methods include techniques known as tabu search, linear programming, and scatter search. Nonlinear solving methods include genetic algorithms. Genetic algorithms mimic the evolutionary processes of biology by introducing random new solutions (called “mutations”) while simultaneously developing what appears to be a promising solution (an “organism”). By introducing random mutations, genetic algorithms are able to “learn” or “evolve” a better overall, or global, solution than linear methods typically can.
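A minimal genetic-algorithm sketch follows, assuming a toy one-variable fitness function; the population size, crossover rule, and mutation rate are arbitrary illustrative choices rather than any product's actual settings.

import math
import random

def fitness(x):
    # Toy multi-peaked function: random mutations help the search escape local peaks.
    return math.sin(3 * x) + 0.5 * math.sin(7 * x) - 0.01 * x * x

def genetic_search(pop_size=30, generations=200, mutation_rate=0.2, seed=1):
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank the "organisms" by fitness and keep the fitter half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                     # crossover: blend two parents
            if rng.random() < mutation_rate:
                child += rng.gauss(0, 1.0)          # mutation: random perturbation
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return best, fitness(best)

x, f = genetic_search()
print(f"best x = {x:.3f}, fitness = {f:.3f}")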

Linear Optimization

Linear problems are characterized by a linear mathematical relationship between the input decision variables and all constraints, as well as with the output. For example, if an input goes up by x, then the output goes down by 2x, subject to a similarly simple constraint. Scheduling the shortest route (assuming straight lines and constant speed) to stop at a given number of destinations is a common example of a linear optimization problem.
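As an illustration, a tiny two-product production-mix model can be written as a linear program; the profit coefficients and resource limits below are made-up numbers, and SciPy's linprog stands in for whichever linear solver you actually use.

from scipy.optimize import linprog

# Maximize profit 40*x1 + 30*x2; linprog minimizes, so negate the coefficients.
c = [-40, -30]

# Linear constraints: 2*x1 + 1*x2 <= 100 machine hours, 1*x1 + 2*x2 <= 80 labor hours.
A_ub = [[2, 1],
        [1, 2]]
b_ub = [100, 80]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x1, x2 = result.x
print(f"make {x1:.1f} units of product 1 and {x2:.1f} of product 2, profit {-result.fun:.1f}")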

Nonlinear Optimization

Nonlinear problems are a bit more advanced and feature at least one nonlinear relationship between decision variables and a constraint or the output. For example, if an input goes up by x, then the output goes up by some value to the power of x. Maximizing return on an investment portfolio subject to constraints on risk is an example of a common nonlinear optimization problem.
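A sketch of that portfolio example, using SciPy's general nonlinear solver; the expected returns, covariance matrix, and risk cap are invented purely for illustration.

import numpy as np
from scipy.optimize import minimize

# Hypothetical expected returns and covariance for three asset classes.
mu = np.array([0.08, 0.12, 0.05])
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.20, 0.03],
                [0.01, 0.03, 0.05]])
risk_cap = 0.06                              # assumed ceiling on portfolio variance

def neg_return(w):
    return -mu @ w                           # maximize return = minimize its negative

constraints = [
    {"type": "eq",   "fun": lambda w: w.sum() - 1.0},           # fully invested
    {"type": "ineq", "fun": lambda w: risk_cap - w @ cov @ w},  # nonlinear risk constraint
]

result = minimize(neg_return, x0=np.ones(3) / 3, bounds=[(0, 1)] * 3,
                  constraints=constraints, method="SLSQP")
print("weights:", np.round(result.x, 3), "expected return:", round(-result.fun, 4))

The quadratic risk term (w @ cov @ w) is what makes this problem nonlinear.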

Types of Optimization Models

Beyond simple linear versus nonlinear classification of optimization problems, there are a number of other dimensions to this type of analysis.

Unconstrained Versus Constrained Optimization Models

Most practical optimization problems involve constraints of some kind – real-life limitations such as budget ceilings, schedules, or resource availability. These are called constrained optimization models.  Sometimes, however, unconstrained optimization techniques arise, especially as a way to revisit a constrained model.  Often, a constrained optimization analysis may not produce results that are good enough, and so the “ideal” constraints must be removed or relaxed, and the model reconsidered.  When this happens, constraints can be replaced by penalty functions which allow the formerly “illegal” values to be considered, but apply some kind of “penalty,” such as an additional cost, when they occur.  In this way, more realistic situations and options can be modeled.
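For instance, a hard 40-hour scheduling limit might be relaxed into an overtime penalty, as in the short sketch below; the hourly rates are hypothetical.

# Relaxing a hard constraint with a penalty: schedules over 40 hours are no
# longer forbidden, they simply incur an assumed overtime premium.
REGULAR_RATE = 25.0       # cost per regular hour (illustrative)
OVERTIME_PENALTY = 12.5   # extra cost per hour beyond the 40-hour limit

def labor_cost(hours):
    overtime = max(0.0, hours - 40.0)
    return REGULAR_RATE * hours + OVERTIME_PENALTY * overtime

print(labor_cost(38))     # 950.0  – within the old constraint
print(labor_cost(45))     # 1187.5 – formerly "illegal", now just more expensive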

Continuous Versus Discrete Optimization Models

In some optimization models, the variables in question lend themselves to a defined, limited set of possible values – often integers. You can only schedule whole numbers of people for a production shift, for example. These are discrete optimization models. Other models contain variables that can take on any value; for instance, you could invest any amount of dollars and cents in a given asset class of a portfolio. These are continuous optimization models. Continuous optimization problems tend to be easier to solve than discrete ones because the availability of so many values enables algorithms to better infer information about other, better possible solutions. However, improvements in algorithms and computing technology have made even complex discrete optimization problems more solvable than ever.
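A small illustration of the difference, using an assumed output curve with diminishing returns: the discrete model enumerates whole headcounts, while the continuous relaxation would land on a fractional optimum.

MAX_WORKERS = 7                              # assumed budget ceiling on headcount

def daily_output(workers):
    # Illustrative output curve with diminishing returns.
    return 90 * workers - 8 * workers ** 2

# Discrete model: only whole numbers of people can be scheduled, so enumerate them.
best = max(range(MAX_WORKERS + 1), key=daily_output)
print("discrete optimum:", best, "workers, output", daily_output(best))   # 6 workers, 252

# The continuous relaxation peaks at 90 / 16 = 5.625 "workers", a value the
# discrete model cannot use, which is part of why continuous problems are easier.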

No-Objective, Single-Objective, and Multi-Objective Optimization Models

Most optimization problems have a single goal (or objective function) to solve – minimize a cost, or maximize a return, for example.  However, there are cases when optimization models have no objective function. In feasibility problems, the goal is to find values for the variables that satisfy the constraints of a model with no particular objective to optimize. By contrast, multi-objective optimization problems arise as well, in fields such as engineering, economics, and logistics. In these cases, optimal decisions need to be made while considering trade-offs between two or more conflicting objectives. For example, developing a new industrial component might involve minimizing weight while maximizing strength, or choosing a financial portfolio might involve maximizing the expected return while minimizing risk. These problems are modeled in optimization software as single objective models by either creating a weighted combination of the different objectives or by replacing some of the objectives with constraints.
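One common scalarization is a weighted combination of the objectives; the component model and importance weights in the sketch below are purely illustrative.

# Two conflicting objectives for a design variable t (say, wall thickness):
# strength improves with t, but so does weight.
def weight(t):
    return 4.0 * t                           # illustrative weight model

def strength(t):
    return 50.0 * t ** 0.5                   # illustrative strength model (diminishing gains)

W_STRENGTH, W_WEIGHT = 0.2, 1.0              # assumed importance weights

def combined(t):
    return W_STRENGTH * strength(t) - W_WEIGHT * weight(t)   # single objective to maximize

# Simple grid search over feasible thicknesses.
best = max((t / 100 for t in range(1, 501)), key=combined)
print(f"thickness {best:.2f}: weight {weight(best):.1f}, strength {strength(best):.1f}")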

Stochastic Versus Deterministic Optimization Models

In deterministic optimization, it is assumed that all the data for the given model are known with certainty. However, for many actual problems, the data cannot be known accurately because they represent unknown information about the future (for example, product demand or price for a future time period). In stochastic optimization, or optimization under uncertainty, such uncertainty is incorporated into the model. Probability distributions describing the unknown data can be estimated, and then a Monte Carlo simulation is run for each trial solution the optimization algorithm selects. In this way, a statistic of the simulated solution is optimized – for instance, you may want to minimize the standard deviation of the results to reduce risk.  The goal is to find some policy that is feasible for all (or almost all) the possible outcomes and optimizes the expected performance of the model.
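A simplified sketch of optimization under uncertainty: for each trial order quantity, a small Monte Carlo simulation is run against an assumed demand distribution, and a statistic of the simulated profits (mean minus a risk penalty) is the value being optimized. The prices, costs, and distribution are invented for illustration.

import random
import statistics

def simulate_profit(order_qty, n_sims=2000, seed=42):
    # Monte Carlo simulation of profit for one trial order quantity.
    rng = random.Random(seed)
    price, unit_cost, salvage = 12.0, 7.0, 2.0
    profits = []
    for _ in range(n_sims):
        demand = max(0.0, rng.gauss(500, 120))   # assumed demand distribution
        sold = min(order_qty, demand)
        unsold = order_qty - sold
        profits.append(price * sold + salvage * unsold - unit_cost * order_qty)
    return profits

def score(order_qty, risk_weight=0.5):
    # Optimize a statistic of the simulation, not a single deterministic outcome.
    profits = simulate_profit(order_qty)
    return statistics.mean(profits) - risk_weight * statistics.stdev(profits)

best_qty = max(range(300, 801, 10), key=score)
print("best order quantity:", best_qty, "score:", round(score(best_qty), 1))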

Sophisticated Optimization Software from Palisade

Palisade’s Evolver and RISKOptimizer software bring a variety of optimization techniques to the modeling environment where most users work – Microsoft Excel. Typically, optimization programs are found in large, proprietary enterprise systems that are inaccessible to most analysts, but Evolver and RISKOptimizer bring these techniques to the everyday decision-maker. Evolver utilizes both linear and genetic algorithm solving methods, making it well suited to virtually any type of optimization challenge. RISKOptimizer does the same, but adds Monte Carlo simulation. This means you can accurately account for the uncertainty inherent in any trial solution by running a Monte Carlo simulation on each. Because you are examining the simulation results of each trial solution, you can maximize or minimize a statistic of each simulation – such as its mean or its standard deviation – to better focus your goals.

Join decision-makers around the world who RELY ON PALISADE.