🧩 The Simplest Optimal Control Problem: Unpacking the Necessary Conditions
Before rockets, before economics, before high-level chaos—there was a humble question: What’s the simplest way to steer a system optimally? No frills. Just essentials. And from that question, optimal control theory blooms. Let’s explore the **simplest problem**, step-by-step, and uncover the necessary conditions that govern its solution.
🧠 The Problem Statement
We want to find a control u(t) that minimizes a cost functional:
Minimize: J = ∫₀ᵀ L(x(t), u(t)) dt
Subject to: ẋ(t) = f(x(t), u(t)), x(0) = x₀
That’s it. No terminal cost. No path constraints. Just a system, a control, and an objective.
🔍 Necessary Conditions (Pontryagin Light)
Even in this simple setup, the solution isn’t obvious. Enter the machinery of optimal control. We introduce the **Hamiltonian**:
H(x, u, λ) = L(x, u) + λ · f(x, u)
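In code, the Hamiltonian is just a scalar function built from the running cost and the dynamics. Here is a minimal sketch, using the hypothetical scalar choices L(x, u) = u² and f(x, u) = u (the same ones the example below will use):

```python
def hamiltonian(x, u, lam, L, f):
    """H(x, u, lambda) = L(x, u) + lambda * f(x, u)."""
    return L(x, u) + lam * f(x, u)

# Hypothetical running cost and dynamics for a scalar system
L = lambda x, u: u**2
f = lambda x, u: u

print(hamiltonian(0.0, 1.0, -2.0, L, f))  # 1^2 + (-2)*1 = -1.0
```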
Here’s what the necessary conditions say:
- State equation: ẋ(t) = ∂H/∂λ = f(x, u)
- Costate equation: λ̇(t) = -∂H/∂x
- Minimum principle: u(t) minimizes H(x, u, λ) at each time
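The three conditions above can be checked numerically. A minimal sketch, assuming the hypothetical scalar problem L(x, u) = u², f(x, u) = u, where H = u² + λ·u:

```python
def H(x, u, lam):
    # Hamiltonian for the scalar example: H = u^2 + lam * u
    return u**2 + lam * u

def d(fn, z, h=1e-6):
    # Central finite difference: approximate d(fn)/dz
    return (fn(z + h) - fn(z - h)) / (2 * h)

x, u, lam = 0.5, 1.0, -2.0

# State equation: dH/dlam should equal f(x, u) = u
dH_dlam = d(lambda l: H(x, u, l), lam)

# Costate equation: lam' = -dH/dx; H has no x-dependence, so lam is constant
dH_dx = d(lambda xx: H(xx, u, lam), x)

# Minimum principle: dH/du vanishes at the minimizing u = -lam/2
u_star = -lam / 2
dH_du = d(lambda uu: H(x, uu, lam), u_star)

print(dH_dlam, dH_dx, dH_du)  # approximately u, 0, 0
```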
✨ Boundary Conditions
We know the initial state: x(0) = x₀. What about the costate? If the final state x(T) is free, then the terminal condition is:
λ(T) = 0
📌 Example: Minimum Fuel
Let’s minimize control effort:
Minimize: J = ∫₀ᵀ u²(t) dt
Subject to: ẋ(t) = u(t), x(0) = x₀, x(T) = xₜ
This yields:
- Hamiltonian: H = u² + λ·u
- State equation: ẋ = u
- Costate equation: λ̇ = -∂H/∂x = 0, so λ(t) is constant
- Minimum principle: ∂H/∂u = 2u + λ = 0, so u(t) = -λ/2

Since λ is constant, the optimal control u*(t) is constant, chosen so that x(T) = xₜ: u* = (xₜ - x₀)/T.
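We can verify the constant-control solution numerically. A minimal sketch, with hypothetical boundary data x₀ = 1, xₜ = 4, T = 3:

```python
# Hypothetical boundary data for the minimum-fuel example
x0, xT, T = 1.0, 4.0, 3.0

u_star = (xT - x0) / T   # constant optimal control from x' = u, x(T) = xT
lam = -2.0 * u_star      # costate from the stationarity condition 2u + lam = 0

# Forward-integrate x' = u* with Euler steps to confirm x(T) = xT
n = 1000
dt = T / n
x = x0
for _ in range(n):
    x += u_star * dt

print(u_star, round(x, 9))  # → 1.0 4.0
```

Because the dynamics are linear and u* is constant, Euler integration recovers x(T) = xₜ up to floating-point rounding; any non-constant u with the same endpoints would cost more, since ∫u² dt is minimized by spreading the effort evenly.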
🎯 Final Thoughts
Even this barebones control problem reveals the structure beneath dynamic decisions. The state evolves. The costate watches. The Hamiltonian binds them. And the control dances between them, choosing paths that optimize over time, not just now.
“Simple problems teach us complex truths. Even the smallest system has a best path.” — your next favorite control theorist