Affiliation:
1. Department of Industrial Engineering, South Dakota School of Mines and Technology, Rapid City, SD 57701, USA
2. Department of Mechanical and Civil Engineering, Florida Institute of Technology, Melbourne, FL 32901, USA
3. Engineering and Technology, University of New South Wales, Canberra 2600, Australia
4. The Systems Realization Laboratory, The University of Oklahoma, Norman, OK 73019, USA
Abstract
When dealing with engineering design problems, designers often encounter nonlinear and nonconvex features, multiple objectives, coupled decision making, and sub-systems at various levels of fidelity. To realize such designs with limited computational resources, problems with these features are often linearized and then solved using solution algorithms for linear programming. The adaptive linear programming (ALP) algorithm is an extension of the sequential linear programming algorithm in which a nonlinear compromise decision support problem (cDSP) is iteratively linearized and the resulting linear programming problem is solved, returning satisficing solutions. The reduced move coefficient (RMC) defines how far away from the boundary the next linearization is performed, and it is currently determined using a heuristic. The choice of RMC significantly affects the efficacy of the linearization process and, hence, how rapidly a solution is found. In this paper, we propose a rule-based parameter-learning procedure to vary the RMC at each iteration, thereby significantly accelerating the determination of the final solution. To demonstrate the efficacy of the ALP algorithm with parameter learning (ALPPL), we use an industry-inspired problem, namely, the integrated design of a hot-rolling process chain for the production of a steel rod. Using the proposed ALPPL, we incorporate domain expertise to identify the criteria most relevant to evaluating the performance of the linearization algorithm, quantify these criteria as evaluation indices, and tune the RMC to return solutions that fall within the most desired range of each evaluation index. Compared with the original ALP algorithm, which uses golden section search to update the RMC, ALPPL identifies RMC values with better linearization performance without adding computational complexity. The insensitive region of the RMC is also better explored with ALPPL: ALP explores this region only twice, whereas ALPPL explores it four times over the iterations. With ALPPL, we obtain a more comprehensive definition of linearization performance across multiple design scenarios, using evaluation indices (EIs) that include the statistics of the deviations, the numbers of binding (active) constraints and bounds, the number of accumulated linear constraints, and the number of iterations. The desired range of each evaluation index (DEI) is also learned during the iterations. The RMC value that brings the most EIs into the DEI is returned as the best RMC, which ensures a balance between the accuracy of the linearization and the robustness of the solutions. For our test problem, the hot-rolling process chain, ALP returns the best RMC in twelve iterations when considering only the deviation as the linearization performance index, whereas ALPPL returns the best RMC in fourteen iterations when considering multiple EIs. The complexity of both ALP and ALPPL is O(n²). The parameter-learning steps can be customized to improve the parameter determination of other algorithms.
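The following is a minimal Python sketch, not the authors' implementation, of the rule-based RMC selection described above: each candidate RMC is scored by how many evaluation indices (EIs) it brings into their desired ranges (DEI), and the best-scoring RMC is kept. All names, ranges, and the evaluator interface below are illustrative assumptions.

```python
from typing import Callable, Dict, Tuple

# Hypothetical desired ranges (DEI) for each evaluation index (EI).
DESIRED_RANGES: Dict[str, Tuple[float, float]] = {
    "mean_deviation": (0.0, 0.05),        # statistics of the deviations
    "binding_constraints": (0, 10),       # binding (active) constraints and bounds
    "accumulated_constraints": (0, 200),  # accumulated linear constraints
    "iterations": (1, 15),                # iterations to converge
}


def score_rmc(eis: Dict[str, float]) -> int:
    """Count how many EIs fall inside their desired ranges."""
    return sum(lo <= eis[name] <= hi for name, (lo, hi) in DESIRED_RANGES.items())


def select_rmc(candidates, evaluate_linearization: Callable[[float], Dict[str, float]]):
    """Return the candidate RMC whose linearization brings the most EIs into the DEI.

    `evaluate_linearization(rmc)` is assumed to run one ALP pass on the cDSP with
    the given reduced move coefficient and return the measured EIs.
    """
    best_rmc, best_score = None, -1
    for rmc in candidates:
        score = score_rmc(evaluate_linearization(rmc))
        if score > best_score:
            best_rmc, best_score = rmc, score
    return best_rmc, best_score


if __name__ == "__main__":
    # Dummy evaluator for demonstration only; a real one would run the ALP pass.
    def dummy_eval(rmc: float) -> Dict[str, float]:
        return {"mean_deviation": abs(rmc - 0.5) / 10,
                "binding_constraints": 5,
                "accumulated_constraints": 120,
                "iterations": 12}

    print(select_rmc([0.1, 0.3, 0.5, 0.7, 0.9], dummy_eval))
```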
Funder
L.A. Comp Chair and the John and Mary Moore Chair at the University of Oklahoma
Pietz Professorship funds
Research Affairs Office at South Dakota School of Mines and Technology