AI Agents Need a Robust Data Foundation
Andre Franca
Jan 26, 2026

Before AI agents can optimize your supply chain, they need something to reason about. That something is a context graph they can traverse, query, and simulate against: a structured representation of entities, relationships, and events that lets decisions hold up under real constraints.
In a previous article on context graphs, we described supply chains as graphs: nodes for suppliers, plants, warehouses, carriers, and customers; edges for supply, transformation, and routing; events flowing through the structure. That foundation lets an agent traverse dependencies, compute buffers, and evaluate alternate paths instead of improvising answers.
What an Agent Actually Does
An agent loops: ingest an event, retrieve context, simulate options, act.
When "Supplier X is late" arrives, the agent queries the graph to find every node that depends on X, checks current inventory at each point, and scores alternate suppliers by lead time, cost, and capacity. Those lookups require typed relationships and attributes you can query, not a folder of PDFs.
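A minimal sketch of that lookup, assuming a hypothetical edge-list schema and illustrative scoring weights (none of this is a real supply-chain API; the node names, weights, and capacity check are invented for the example):

```python
from collections import deque

# Hypothetical edge list: (upstream, downstream) supply relationships.
EDGES = [
    ("supplier_x", "plant_1"),
    ("plant_1", "warehouse_a"),
    ("warehouse_a", "customer_7"),
    ("supplier_y", "plant_1"),
]

def downstream_of(node, edges):
    """Breadth-first walk: every node that depends on `node`."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    seen, queue = set(), deque([node])
    while queue:
        cur = queue.popleft()
        for nxt in adj.get(cur, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def score(supplier, need):
    """Lower is better; illustrative weights on lead time and cost."""
    if supplier["capacity"] < need:
        return float("inf")  # cannot cover demand at all
    return supplier["lead_time_days"] * 100 + supplier["unit_cost"] * need

impacted = downstream_of("supplier_x", EDGES)
alternates = [
    {"name": "supplier_y", "lead_time_days": 5, "unit_cost": 2.0, "capacity": 800},
    {"name": "supplier_z", "lead_time_days": 12, "unit_cost": 1.4, "capacity": 1000},
]
best = min(alternates, key=lambda s: score(s, need=500))
```

The point is structural: "who depends on X" is one traversal, and "which alternate is best" is one scored comparison, because the relationships are typed and queryable.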
Simulation as a Service
Once the agent can traverse, it needs a safe way to test actions. Simulation-as-a-service treats the graph as a sandbox the agent can modify without touching production state.
The pieces:
- Base graph: the latest state of the network.
- Overlay: a copy-on-write view for hypothetical changes.
- Propagation: rules that push changes through the graph over time.
- Metrics: outcome scores like on-time delivery rate, cost, and customer impact.
The agent proposes interventions, simulates them, and picks the policy that scores best on the metrics you care about.
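The overlay piece can be sketched as a copy-on-write view, assuming a hypothetical node-id-to-attributes schema (the class and node names here are invented for illustration):

```python
class Overlay:
    """Copy-on-write view over a base graph state.
    Reads fall through to the base; writes stay local to the scenario."""

    def __init__(self, base):
        self.base = base
        self.delta = {}

    def get(self, node):
        if node in self.delta:
            return self.delta[node]
        return self.base[node]

    def set(self, node, **attrs):
        current = dict(self.get(node))  # copy on first write
        current.update(attrs)
        self.delta[node] = current

base = {"warehouse_a": {"inventory": 120}}
scenario = Overlay(base)
scenario.set("warehouse_a", inventory=40)  # hypothetical disruption
# Production state is untouched; only the scenario sees the change.
```

The agent can spawn many such overlays cheaply, run propagation against each, and discard the losers without ever mutating the base graph.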
A Concrete Example
Port congestion hits Long Beach. The agent queries the graph to see which shipments route through that node, which orders they satisfy, and which customers are at risk. It simulates rerouting a subset through alternative ports, compares cost versus SLA risk, then executes the best option and drafts the explanation humans will want to see.
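The cost-versus-SLA comparison at the heart of that decision reduces to a small scored search. All numbers below are invented for illustration, including the SLA threshold and the per-day late penalty:

```python
# Hypothetical reroute candidates for shipments stuck at the congested port.
options = [
    {"port": "long_beach", "extra_cost": 0, "delay_days": 9},
    {"port": "oakland", "extra_cost": 1800, "delay_days": 2},
    {"port": "seattle", "extra_cost": 2600, "delay_days": 1},
]

SLA_DAYS = 3       # assumed contractual delivery window
LATE_PENALTY = 500  # assumed cost per day past the SLA

def total_cost(opt):
    """Routing cost plus penalty for days beyond the SLA window."""
    late_days = max(0, opt["delay_days"] - SLA_DAYS)
    return opt["extra_cost"] + late_days * LATE_PENALTY

best = min(options, key=total_cost)
```

Staying put looks free until the SLA penalty is priced in; the simulation makes that trade-off explicit instead of leaving it to intuition.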
Without the graph and simulation, you're back to tickets, spreadsheets, and guesswork.
Why "Just Give the Model Access" Fails
A language model can't hold a real enterprise graph in context and can't compute cascades under capacity constraints. It can explain, summarize, and translate intent. It can't replace querying and simulation.
The architecture is hybrid: models handle intent and explanations; the graph system handles traversal, causal chains, and computation.
Getting Started (Without Lying to Yourself)
This is a sequence, not a toggle:
- Build the data foundation.
- Construct the graph from transactional reality.
- Add the causal links that make simulation accurate. That's where automated root cause analysis matters: it forces cause and effect into the model.
- Validate against history.
- Only then let agents act with real authority.
If you skip the foundation, the agent still produces output. It just won't be grounded.
A Minimal Simulation API
For engineers, the core loop can be exposed as three primitives:
createScenario(baseState, modifications) → scenarioId
propagate(scenarioId, timeHorizon) → projectedState
evaluate(scenarioId, metrics) → scores
Agents generate candidates, simulate, score, pick, act.
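One way those three primitives could look, sketched in Python with the pseudocode's camelCase names adapted to snake_case. The in-memory store, the deep-copy fork, and the caller-supplied propagation rule are all implementation assumptions, not a prescribed design:

```python
import copy
import uuid

SCENARIOS = {}  # scenario_id -> state (in-memory stand-in for a scenario store)

def create_scenario(base_state, modifications):
    """createScenario: fork the base state, apply hypothetical changes."""
    state = copy.deepcopy(base_state)
    state.update(modifications)
    sid = str(uuid.uuid4())
    SCENARIOS[sid] = state
    return sid

def propagate(scenario_id, time_horizon, step):
    """propagate: push effects forward `time_horizon` ticks via a rule function."""
    state = SCENARIOS[scenario_id]
    for _ in range(time_horizon):
        state = step(state)
    SCENARIOS[scenario_id] = state
    return state

def evaluate(scenario_id, metrics):
    """evaluate: score the projected state on named metric functions."""
    state = SCENARIOS[scenario_id]
    return {name: fn(state) for name, fn in metrics.items()}

# Toy propagation rule: inventory drains by daily demand.
def drain(state):
    return {**state, "inventory": state["inventory"] - state["daily_demand"]}

sid = create_scenario({"inventory": 100, "daily_demand": 10}, {"daily_demand": 15})
propagate(sid, time_horizon=4, step=drain)
scores = evaluate(sid, {"stockout": lambda s: s["inventory"] <= 0})
```

A real system would back the store with persistence and run domain-specific propagation rules, but the contract the agent programs against is exactly this narrow.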
Companies that win won't be the ones with the best prompts. They'll be the ones with graphs their agents can traverse, and simulations their agents can trust.
