GPU-native convex optimization for end-to-end AI training
Moreau embeds convex optimization layers directly into PyTorch, JAX, or TensorFlow—fully batched, fully differentiable, entirely on GPU.
What's your application?
The Optimization Layer
Differentiable by Design
Problem data flows in, optimal solutions flow out—with gradients propagating back through the entire solve.
[Diagram: problem data flows in through the supported cones (Zero · Nonneg · SOC · Exp · Power) and optimal solutions flow out; in the backward pass, gradients flow back through the solve via implicit differentiation.]
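For the curious, the standard mechanics behind this picture: if the solver's optimality (KKT) conditions are written as a residual F(z, θ) = 0 in the solution z and the problem data θ, the implicit function theorem yields the solution's Jacobian. This is a generic sketch of implicit differentiation, not Moreau's internal notation:

```latex
% Implicit differentiation, generic notation (F, z, theta are illustrative).
% z*(theta) satisfies the optimality conditions F(z*(theta), theta) = 0, so
\frac{\mathrm{d}}{\mathrm{d}\theta}\, F\bigl(z^\star(\theta), \theta\bigr) = 0
\quad\Longrightarrow\quad
\frac{\partial z^\star}{\partial \theta}
  = -\Bigl(\frac{\partial F}{\partial z}\Bigr)^{-1}
     \frac{\partial F}{\partial \theta}.
```

In practice a backward pass never forms the full Jacobian: it computes a vector-Jacobian product by solving one linear system with the transpose of ∂F/∂z, so the gradient costs roughly one extra linear solve.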
Capabilities
Built for ML training workflows
Moreau provides a Python API with PyTorch and JAX bindings. Problem data stays on GPU throughout training, with no CPU round-trips. (A sketch of the layer pattern follows the capability cards below.)
GPU-Native
All computation stays in VRAM. No CPU round-trips during training loops. Compatible with PyTorch and JAX tensor workflows.
Batched
Solve 128–1024 problem instances in parallel on a single GPU. Designed for the batch sizes used in modern ML training.
Differentiable
Computes gradients of the solution with respect to problem data via implicit differentiation. Enables backpropagation through optimization layers.
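The page doesn't show Moreau's API itself, so here is the same optimization-layer pattern sketched with cvxpylayers, the team's earlier open-source library; the problem, shapes, and batch size are illustrative, and a Moreau layer would slot into a training loop the same way:

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# A small parametric convex problem, defined once, symbolically.
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.pnorm(A @ x - b, p=1)), [x >= 0])

# Wrap the problem as a differentiable PyTorch layer.
layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

# Batched problem data with a leading batch dimension, as in training.
A_t = torch.randn(128, m, n, requires_grad=True)
b_t = torch.randn(128, m, requires_grad=True)

solution, = layer(A_t, b_t)   # forward pass: solve the whole batch
solution.sum().backward()     # backward pass: gradients w.r.t. A_t and b_t
```

cvxpylayers performs its solves on CPU; the pitch above is that Moreau keeps this pattern while running the batched solves and the implicit-differentiation backward pass on GPU.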
Who It's For
Teams embedding optimization in training
For researchers and engineers who need hard constraints respected during training—not as a post-processing step. Constraints as first-class citizens, not soft penalties.
Robotics & Embodied AI
Train policies where actions come from solving constrained optimization. Backpropagate through dynamics, contact constraints, and joint limits (a worked projection sketch follows this list).
Quantitative Finance
Differentiate through portfolio optimization with risk, leverage, and regulatory constraints during training.
Power Systems
Train models that respect network constraints, capacity limits, and operational safety requirements.
Supply Chain & Logistics
Differentiate through routing, scheduling, and inventory decisions with real-world constraints.
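As one concrete instance of the robotics pattern above, consider a differentiable safety projection: the policy network proposes a raw action, and a small QP layer projects it onto joint limits so the constraint holds exactly rather than as a soft penalty. A minimal sketch with cvxpylayers, where the action dimension, limits, and loss are all made up for illustration:

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# Projection onto box joint limits, posed as a tiny QP so the
# projected action is differentiable in the raw action.
n = 4                                   # action dimension (illustrative)
a = cp.Variable(n)
a_raw = cp.Parameter(n)
limits = [a >= -1.0, a <= 1.0]          # stand-in joint limits
proj = cp.Problem(cp.Minimize(cp.sum_squares(a - a_raw)), limits)
project = CvxpyLayer(proj, parameters=[a_raw], variables=[a])

raw = torch.randn(32, n, requires_grad=True)  # a batch of policy outputs
safe, = project(raw)                          # actions satisfying the limits
safe.pow(2).sum().backward()                  # stand-in loss; grads reach `raw`
```

Because the projection is itself a convex problem, its solution map is differentiable almost everywhere, which is what lets the training signal reach the policy through the constraint.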
Team
Stanford optimization experts
Built by researchers who created the tools powering optimization at scale.
Creators of CVXPY (3M+ downloads/mo)
Creators of CVXPYlayers (900+ citations)
Stanford PhDs from Stephen Boyd's lab
Authors of 50+ papers on optimization