Web API¶
Design guidelines¶
The Web API service aims to be a computational companion to experimental glass development. In the GlasAgent project, experimental glass developers will interact with the Web API through a chatbot powered by a large language model.
Ease of use¶
Experimental glass developers should be able to use the API and understand the results.
- When mediated by a large language model, using the web API should not require expertise in the underlying computational methods.
- Explain what is computed and how.
Time to result: "overnight"¶
Time to result matters when integrating with experimental glass development cycles.
- Aim to provide results "overnight".
- Save time by parallelization where possible.
Accuracy: useful for glass development¶
The atomistic simulation of glasses comes with many potential sources of error vs. experiment (interatomic potentials, cooling rate, system size, equilibration times, ...). Aim to provide useful insights into structure-property relationships and their trends rather than quantitative predictions.
- Provide insight into trends in structure-property relationships.
- Communicate statistical uncertainty in results.
- Results should be reproducible and meaningful. If in doubt, warn the user.
Architecture¶
```mermaid
graph LR
    A[FastAPI App] --> B[SQLite Cache]
    B --> C[executorlib]

    subgraph FastAPI
        A1[Request hash]
        A2[Cache lookup]
        A3[Job creation]
    end

    subgraph SQLite
        B1[Job metadata]
        B2[Results]
        B3[Hash index]
    end

    subgraph executorlib
        C1[Local exec]
        C2[SLURM cluster]
        C3[Job caching]
    end
```
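The request-hash → cache-lookup → job-creation path in the diagram can be sketched as follows. This is a minimal illustration under assumptions, not the actual implementation: the `jobs` table schema, function names, and hash choice are all hypothetical.

```python
import hashlib
import json
import sqlite3


def request_hash(endpoint: str, payload: dict) -> str:
    """Hash the endpoint plus a canonically serialized payload.

    Sorting keys makes the hash independent of dict insertion order,
    so semantically identical requests map to the same cache entry.
    """
    canonical = json.dumps({"endpoint": endpoint, "payload": payload},
                           sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


def lookup_or_create(conn: sqlite3.Connection, endpoint: str, payload: dict):
    """Return (job_id, cache_hit): reuse a cached job if the same
    request was seen before, otherwise insert a new pending job."""
    h = request_hash(endpoint, payload)
    row = conn.execute("SELECT id FROM jobs WHERE hash = ?", (h,)).fetchone()
    if row:
        return row[0], True   # cache hit: same request already submitted
    cur = conn.execute(
        "INSERT INTO jobs (hash, status) VALUES (?, 'pending')", (h,))
    conn.commit()
    return cur.lastrowid, False  # cache miss: new job created
```

The key design point is that the hash is computed over a *canonical* serialization, so two requests that differ only in dict ordering hit the same cache row.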
Composition Normalization¶
The server uses a `Composition` model that accepts a dict (e.g. `{"SiO2": 70, "Na2O": 15, "CaO": 15}`) and generates a canonical string internally for database storage and matching. This ensures that `{"SiO2": 70, "Na2O": 15, "CaO": 15}` and `{"Na2O": 15, "SiO2": 70, "CaO": 15}` resolve to the same material. The canonical form (alphabetical oxide ordering, rounded values) is an implementation detail — API consumers always work with dicts.
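A minimal sketch of such a normalization, assuming a particular separator, rounding precision, and function name (none of which are taken from the actual implementation):

```python
def canonical_composition(composition: dict[str, float]) -> str:
    """Normalize an oxide composition dict to a canonical string:
    alphabetical oxide ordering, values rounded to fixed precision.

    The '|' separator and 2-decimal rounding are assumptions made
    for this sketch.
    """
    parts = [f"{oxide}:{round(float(amount), 2):g}"
             for oxide, amount in sorted(composition.items())]
    return "|".join(parts)
```

Because the keys are sorted before serialization, any ordering of the same oxides produces the identical canonical string, which is what makes database matching work.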
DAG Resolution¶
The user never specifies intermediate steps. If they request `elastic`, the server knows it needs structure generation → melt-quench → elastic. The `progress` dict on the job status response exposes the resolved pipeline so the user can see what's happening.
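The resolution step can be sketched as a depth-first walk over a dependency table. The step names follow the example above; the table contents and function name are illustrative, not the server's actual code.

```python
# Hypothetical dependency table; step names are illustrative.
DEPENDENCIES: dict[str, list[str]] = {
    "structure": [],
    "melt_quench": ["structure"],
    "elastic": ["melt_quench"],
    "viscosity": ["melt_quench"],
}


def resolve_pipeline(target: str) -> list[str]:
    """Resolve a requested analysis into the ordered list of steps
    needed to produce it, via a depth-first walk of the DAG."""
    ordered: list[str] = []

    def visit(step: str) -> None:
        for dep in DEPENDENCIES[step]:
            visit(dep)          # prerequisites come first
        if step not in ordered:
            ordered.append(step)

    visit(target)
    return ordered
```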
Error Handling¶
- Job-level status is `completed` even if some analyses failed. Only core pipeline failure (a melt-quench crash) results in job status `failed`.
- Individual analysis failures appear in the `errors` dict on the job status and are omitted from results.
- The `missing` field on the `/glasses` endpoint tells the LLM what hasn't been computed yet.
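The status-assembly rule above can be sketched like this. The function name and the exact response shape are assumptions; only the `completed`/`failed` semantics and the `errors` dict are taken from the design.

```python
def job_status(core_ok: bool, analyses: dict) -> dict:
    """Assemble a job status response.

    Core pipeline failure -> status 'failed'. Otherwise the job is
    'completed', with failed analyses split into an errors dict and
    omitted from the results.
    """
    if not core_ok:
        return {"status": "failed", "results": {}, "errors": {}}
    results = {name: value for name, value in analyses.items()
               if not isinstance(value, Exception)}
    errors = {name: str(value) for name, value in analyses.items()
              if isinstance(value, Exception)}
    return {"status": "completed", "results": results, "errors": errors}
```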
Google Custom Method Convention¶
Actions that don't map to CRUD use the colon convention: /jobs:search, /jobs/{id}:cancel. This avoids polluting the resource ID namespace (e.g., search being confused with a job ID) and clearly signals "this is a verb, not a noun."
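Why the colon avoids ambiguity can be seen in a toy route table: the custom method never occupies the same path segment as a resource ID, so `search` can never be mistaken for a job ID. The regexes and handler names below are illustrative only.

```python
import re

# Toy route table using the colon convention. Because the id pattern
# excludes ':', "/jobs/42:cancel" parses as id "42" plus verb "cancel",
# and "/jobs:search" can never collide with "/jobs/{id}".
ROUTES = [
    (re.compile(r"^/jobs:search$"), "search_jobs"),
    (re.compile(r"^/jobs/(?P<id>[^/:]+)$"), "get_job"),
    (re.compile(r"^/jobs/(?P<id>[^/:]+):cancel$"), "cancel_job"),
]


def dispatch(path: str):
    """Return (handler_name, params) for the first matching route."""
    for pattern, handler in ROUTES:
        match = pattern.match(path)
        if match:
            return handler, match.groupdict()
    return None, {}
```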
MCP Tool Mapping¶
The API is designed to map cleanly to MCP tools:
| MCP Tool | Endpoint |
|---|---|
| `get_glass_properties` | `POST /glasses:lookup` |
| `search_simulations` | `POST /jobs:search` |
| `submit_simulation` | `POST /jobs` |
| `check_simulation_status` | `GET /jobs/{id}` |
| `get_simulation_results` | `GET /jobs/{id}/results` |
| `cancel_simulation` | `POST /jobs/{id}:cancel` |
The LLM's typical workflow:
1. get_glass_properties — check what's already known
2. If missing properties → search_simulations — check for cached/similar jobs
3. If no good match → submit_simulation — run new computation (after confirming with user)
4. check_simulation_status — poll until done
5. get_simulation_results — retrieve and present results
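The five steps above can be sketched as a tool-calling loop against a hypothetical client whose methods mirror the MCP tools. The `api` interface, its return shapes, and the polling interval are assumptions for this sketch.

```python
import time


def run_workflow(api, composition: dict, prop: str):
    """Sketch of the LLM's workflow using a hypothetical `api` client
    whose methods mirror the MCP tools in the table above."""
    known = api.get_glass_properties(composition)
    if prop in known:
        return known[prop]                           # 1. already known
    match = api.search_simulations(composition, prop)
    if match:
        return api.get_simulation_results(match)     # 2. reuse cached job
    job_id = api.submit_simulation(composition, prop)  # 3. new computation
    while api.check_simulation_status(job_id) not in ("completed", "failed"):
        time.sleep(1)                                # 4. poll until done
    return api.get_simulation_results(job_id)        # 5. retrieve results
```

In the real system step 3 would only happen after the LLM confirms the submission with the user.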
Simulation Data Lifecycle¶
Simulation data falls into three tiers with different retention guarantees:
- **Ephemeral simulation files** — Raw output files in the LAMMPS working directory (trajectories, log files, restart files). These are not parsed or retained by the API. If the simulation directory is purged, the data is gone.
- **Cached intermediate results** — Large data returned by the Python analysis functions that is too voluminous to store in the database. This includes, for example, the full melt-quench trajectory and the raw stress-autocorrelation arrays from the viscosity calculation. These live in the executorlib cache and can be re-materialised by re-running the function with the same inputs. However, if the cache is invalidated (e.g. after a Python version upgrade), the data is lost.
- **Persistent database results** — Compact, presentation-ready data that enters the SQLite `result_data` column and is retained indefinitely. This includes scalar properties (viscosity values, elastic moduli), per-composition metadata, and downsampled plot data (e.g. convergence curves reduced to ≤ 1000 points via log-spaced sampling). These results survive cache purges and are the authoritative record of a completed job.
When adding a new analysis, decide for each output field which tier it belongs to. The guiding rule: only store in the database what is needed to reproduce the plots and summary tables shown in the results page.
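The log-spaced downsampling of plot data mentioned above might look like the following sketch. The function name and clamping details are assumptions, not the server's implementation; the idea is simply to keep points dense early in a convergence curve and sparse in the long tail.

```python
import math


def downsample_log(xs: list, ys: list, max_points: int = 1000):
    """Reduce a curve to <= max_points via log-spaced index sampling.

    Indices are spaced logarithmically over [0, n-1], so early points
    (where convergence curves change fastest) are kept densely and
    the tail is thinned out.
    """
    n = len(xs)
    if n <= max_points:
        return xs, ys
    # Log-spaced indices in [0, n-1], clamped, deduplicated, ordered.
    raw = (round(math.exp(i * math.log(n) / (max_points - 1))) - 1
           for i in range(max_points))
    idx = sorted({min(max(j, 0), n - 1) for j in raw})
    return [xs[i] for i in idx], [ys[i] for i in idx]
```

Deduplication means the result can have fewer than `max_points` points, but never more, and the first and last samples of the curve are always retained.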