Experiments
Catalyst Experiments help you optimize processes in both lab and fab environments—whether you're tuning etch parameters and deposition rates in semiconductor manufacturing, optimizing reaction conditions and catalyst formulations in chemical processes, or improving drug formulations and fermentation conditions in pharmaceutical development.
Traditional approaches like full factorial DOE or grid search require exhaustive testing, while trial-and-error wastes resources on configurations that don't improve results. Instead, Catalyst Experiments recommend which settings to test next, learning from each trial to zero in on optimal configurations faster—typically requiring 50-80% fewer trials while still finding better solutions.
You get a guided workflow: define what you want to optimize, run trials, and let the system suggest promising settings, without needing expertise in Bayesian statistics or building custom tools.
Under the hood, Experiments use BayBE, an open‑source library for Bayesian experimentation and optimization. Catalyst hides the math, but if you want the full details, see the BayBE docs.
What Is an Experiment?
An experiment in Catalyst is a structured study where you:
- Define targets (what you care about, e.g. yield, defect rate, cycle time)
- Set up parameters (the knobs you can turn, with allowed ranges or options)
- Optionally add constraints (rules valid settings must follow)
- Run trials (specific parameter combinations) and record the results
Experiments are versioned and shareable, so teams can see what was tried, what worked, and what didn’t.
For formal definitions (targets, parameters, constraints, etc.), see the BayBE documentation.
Key Concepts (Plain Language)
- Targets
  The outcomes you want to optimize or monitor.
  Examples: minimize defects, maximize throughput, hit a desired quality value.
- Parameters
  The inputs you control in a trial.
  Examples: temperature, pressure (continuous); tool type, recipe (categorical).
- Constraints
  Rules that describe what is allowed or safe.
  Examples: x1 + x2 ≤ 100, temperature ≤ 250°C, only certain combinations valid.
  BayBE uses these to avoid impossible or unsafe configurations.
- Trials
  Individual runs (or simulations) you perform.
  Each trial = chosen parameter settings + measured target values.
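These four concepts map naturally onto small data structures. A minimal sketch in plain Python (illustrative names only, not Catalyst's or BayBE's actual classes):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Target:
    name: str   # e.g. "yield"
    mode: str   # "MAX", "MIN", or match a desired value

@dataclass
class Parameter:
    name: str
    bounds: Optional[tuple] = None   # (low, high) for continuous parameters
    options: Optional[list] = None   # allowed values for categorical parameters

@dataclass
class Trial:
    settings: dict                               # chosen parameter values
    results: dict = field(default_factory=dict)  # measured target values

# A constraint is simply a rule that valid settings must satisfy,
# here using the examples from the list above.
def within_limits(settings: dict) -> bool:
    return (settings["x1"] + settings["x2"] <= 100
            and settings["temperature"] <= 250)

trial = Trial(settings={"x1": 40, "x2": 55, "temperature": 210})
print(within_limits(trial.settings))  # True: 40 + 55 <= 100 and 210 <= 250
```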
What Experiments Let You Do
Set Up an Experiment
- Define one or more targets (e.g. maximize yield, minimize rework)
- Configure continuous and categorical parameters with bounds or allowed options
- Add constraints to reflect real‑world limits (safety, capacity, cost)
- Start from templates and examples (e.g. Hartmann 3D) to get going quickly
BayBE provides the underlying model and structure; Catalyst gives you a guided UI.
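To make the setup steps concrete, here is a stdlib-only sketch of building a search space: discretize a continuous parameter, cross it with a categorical one, and filter by a constraint. The parameter names and the rule are hypothetical, and this is not BayBE's API:

```python
from itertools import product

# Hypothetical parameters: a continuous temperature (discretized for
# illustration) and a categorical tool choice.
temperatures = [150 + 10 * i for i in range(11)]  # 150 ... 250 °C
tools = ["A", "B"]

def allowed(temp: int, tool: str) -> bool:
    # Hypothetical real-world limit: tool "B" cannot run above 220 °C.
    return not (tool == "B" and temp > 220)

# Candidate settings are the constrained cross-product of the parameters.
search_space = [
    {"temperature": t, "tool": tool}
    for t, tool in product(temperatures, tools)
    if allowed(t, tool)
]
print(len(search_space))  # 11 for tool A + 8 for tool B = 19 valid combinations
```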
Run and Manage Trials
- Create trials (specific parameter combinations)
- Record measured or simulated results
- Add comments and metadata to capture context (“why we tried this”)
- Extend experiments as new ideas or constraints appear
BayBE uses completed trials to improve its recommendations over time.
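The bookkeeping behind trial management can be sketched as follows (plain Python with illustrative names; Catalyst handles this for you):

```python
# Hypothetical trial log: each entry pairs settings with results and context.
trials = []

def record_trial(settings: dict, results: dict, comment: str = "") -> None:
    """Append a completed trial; the comment captures 'why we tried this'."""
    trials.append({"settings": settings, "results": results, "comment": comment})

def best_so_far(target: str = "yield"):
    """Return the completed trial with the highest value of a maximized target."""
    measured = [t for t in trials if target in t["results"]]
    return max(measured, key=lambda t: t["results"][target], default=None)

record_trial({"temperature": 200}, {"yield": 0.88}, "baseline recipe")
record_trial({"temperature": 225}, {"yield": 0.91}, "push temperature upward")
print(best_so_far()["results"]["yield"])  # 0.91
```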
Analyze Results
- Send data to Analytics to:
  - Inspect distributions, trends, and outliers
  - Check correlations between parameters and targets
- Use Feature Importances to see which parameters matter most
- Reuse datasets across experiments where appropriate
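A parameter-target correlation check of the kind Analytics performs can be sketched in a few lines (stdlib only; the data is made up and this is not Catalyst's implementation):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between a parameter column and a target column."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical trial data: yield rises with temperature.
temperature = [180, 200, 220, 240]
yields = [0.80, 0.86, 0.90, 0.93]
print(round(pearson(temperature, yields), 3))  # close to 1: strong positive correlation
```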
How Catalyst Decides What to Try Next
Instead of guessing the next settings yourself, Catalyst can use BayBE to:
- Look at completed trials
- Estimate which regions of the parameter space look promising
- Suggest new trials that balance:
  - Exploring new areas where you have little data
  - Refining the best areas found so far
This is Bayesian optimization in practice.
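The explore/refine balance can be illustrated with a deliberately crude acquisition rule: score each candidate by the best nearby measured result (refine) plus a bonus for distance from existing data (explore). This is a toy stand-in for the trade-off, not BayBE's actual surrogate model or acquisition function:

```python
# Completed trials over a single normalized setting: setting -> measured yield.
completed = {0.2: 0.55, 0.5: 0.80, 0.8: 0.60}

def score(candidate: float, explore_weight: float = 0.5) -> float:
    nearest = min(completed, key=lambda x: abs(x - candidate))
    estimate = completed[nearest]                          # refine: local estimate
    novelty = min(abs(candidate - x) for x in completed)   # explore: distance to data
    return estimate + explore_weight * novelty

candidates = [0.1, 0.35, 0.5, 0.65, 0.95]
best = max(candidates, key=score)
print(best)  # 0.65: near the best known region, but in unexplored territory
```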
Benefits:
- Fewer trials than naive grid or random search
- Less manual parameter tuning
- Optimization logic treated as a service: you provide inputs and results; Catalyst + BayBE choose what to try next
If you want to understand the underlying algorithms (models, acquisition functions, etc.), see the BayBE docs.
Example: Hartmann 3D
The Hartmann 3D Function example shows how to:
- Configure an experiment with three parameters and a constraint
- Use a built‑in calculator to score candidate settings
- Upload batches of parameter sets and download results
This is a simple, standard test problem that uses the same ideas BayBE applies to real processes.
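For reference, the Hartmann 3D function itself, using the standard textbook constants (this is the benchmark definition, not Catalyst's built-in calculator):

```python
import math

# Standard Hartmann 3D benchmark, defined on [0, 1]^3.
# Global minimum ≈ -3.8628 near x ≈ (0.1146, 0.5556, 0.8525).
ALPHA = (1.0, 1.2, 3.0, 3.2)
A = ((3.0, 10.0, 30.0), (0.1, 10.0, 35.0),
     (3.0, 10.0, 30.0), (0.1, 10.0, 35.0))
P = ((0.3689, 0.1170, 0.2673), (0.4699, 0.4387, 0.7470),
     (0.1091, 0.8732, 0.5547), (0.0381, 0.5743, 0.8828))

def hartmann3(x):
    """Evaluate the Hartmann 3D function at a point x in [0, 1]^3."""
    return -sum(
        a * math.exp(-sum(A[i][j] * (x[j] - P[i][j]) ** 2 for j in range(3)))
        for i, a in enumerate(ALPHA)
    )

print(round(hartmann3((0.1146, 0.5556, 0.8525)), 4))  # ≈ -3.8628
```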
When to Use Experiments
Use Experiments when you want to:
- Systematically explore process or design spaces instead of trying random settings
- Track and compare multiple trial configurations over time
- Give engineers and data scientists a shared, transparent view of what was tried
- Bridge day‑to‑day process tuning with more advanced analytics and optimization
- Leverage BayBE’s Bayesian optimization without writing code or knowing the math