
Carlos Andrés Elorza Casas
I completed my undergraduate degree in Chemical Engineering at the University of Waterloo in 2022. Through my co-op terms, I worked in various fields and industries, including oil and gas, medical devices, batteries, and nanotechnology. However, the areas that have attracted my attention the most are process control, process modelling, and projects involving computer simulations in Python and MATLAB. Hence, in my fourth year of undergrad, I started the accelerated master’s program at the University with Prof. Luis as my supervisor. I am currently an MASc student and was awarded the Engineering Excellence Master’s Fellowship from the University of Waterloo for my high academic standing.
So far, my research has focused on the application of Nonlinear Model Predictive Control (NMPC) to large-scale chemical processes. NMPC is a state-of-the-art technology in process control: a dynamic mathematical model of the process is embedded in an optimization problem, and the optimal control actions are determined by minimizing an objective function subject to physical and process constraints. We have studied the application of scenario-based robust NMPC, together with state estimators such as the Extended Kalman Filter (EKF) and the Moving Horizon Estimator (MHE), to the benchmark Tennessee Eastman process.

The main disadvantages of NMPC are that the optimization problem must be solved at every sampling step and that constraint satisfaction is guaranteed only when the model is accurate. Robust and stochastic optimization methods aim to account for model uncertainty. Explicit NMPC seeks to solve the optimization problem offline, generating control laws that can be evaluated online, at much lower computational cost, as direct functions of the state feedback. My research objective is to develop a controller that accounts for process uncertainty by using explicit control laws that move the computational burden offline. To this end, we are exploring the potential of adjustable robust optimization (ARO) and of reinforcement learning to train neural networks that can evaluate the control laws.
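For context, a generic discrete-time NMPC problem (notation assumed here for illustration, not specific to our formulation) solved at each sampling instant takes the form

$$
\begin{aligned}
\min_{u_0, \ldots, u_{N-1}} \quad & \sum_{k=0}^{N-1} \ell(x_k, u_k) \\
\text{s.t.} \quad & x_{k+1} = f(x_k, u_k), \qquad k = 0, \ldots, N-1, \\
& x_0 = \hat{x}, \qquad x_k \in \mathcal{X}, \qquad u_k \in \mathcal{U},
\end{aligned}
$$

where $f$ is the nonlinear dynamic model, $\ell$ the stage cost, $N$ the prediction horizon, $\hat{x}$ the current state estimate (supplied, e.g., by the EKF or MHE), and $\mathcal{X}$ and $\mathcal{U}$ the state and input constraint sets. Only the first optimal input $u_0$ is applied before the problem is re-solved at the next sampling step.

To illustrate why explicit control laws move the computational burden offline, the following minimal Python sketch (with hypothetical dimensions and randomly initialized weights standing in for a trained network) shows that, once a neural-network policy has been obtained offline, the online step reduces to a single cheap function evaluation u = pi(x) rather than an optimization:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network: in practice the weights would come from
# offline training (e.g., imitating NMPC solutions or reinforcement learning).
W1, b1 = rng.standard_normal((16, 4)), np.zeros(16)  # 4 states -> 16 hidden units
W2, b2 = rng.standard_normal((2, 16)), np.zeros(2)   # 16 hidden units -> 2 inputs

def policy(x):
    """Evaluate the explicit control law u = pi(x) from state feedback x."""
    h = np.tanh(W1 @ x + b1)  # hidden-layer activation
    return W2 @ h + b2        # control action

x_hat = np.array([0.1, -0.2, 0.05, 0.0])  # current state estimate (e.g., from EKF/MHE)
u = policy(x_hat)                         # online cost: one forward pass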