Using React Three Fiber with AI
# Why typed tool calls beat code generation
Letting an LLM emit JavaScript that calls `scene.add(new THREE.Mesh(...))` looks elegant in demos and breaks in production. The model emits outdated API calls, invented methods, or correct-looking but broken material props. Typed tool calls fix this: the model sees a JSON schema for each tool, every call is validated against that schema, and malformed calls are rejected before they touch the scene.
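To make this concrete, here is a minimal sketch of one tool schema and a validator. The tool name, fields, and `validateToolInput` helper are illustrative assumptions, not Yugma's actual definitions; a real app would use a full JSON-schema library such as Ajv.

```javascript
// Hypothetical schema for one scene tool (illustrative, not Yugma's actual shape)
const ADD_OBJECT_TOOL = {
  name: "add_object",
  description: "Add a primitive to the scene",
  input_schema: {
    type: "object",
    properties: {
      type: { type: "string", enum: ["box", "sphere", "plane"] },
      position: { type: "array", items: { type: "number" }, minItems: 3, maxItems: 3 },
    },
    required: ["type"],
  },
}

// Minimal validator: checks required keys and enum membership, and returns
// a list of human-readable errors (empty list means the call is accepted).
function validateToolInput(schema, input) {
  const errors = []
  for (const key of schema.required ?? []) {
    if (!(key in input)) errors.push(`missing required field "${key}"`)
  }
  for (const [key, rule] of Object.entries(schema.properties)) {
    if (key in input && rule.enum && !rule.enum.includes(input[key])) {
      errors.push(`"${key}" must be one of ${rule.enum.join(", ")}`)
    }
  }
  return errors
}
```

A call like `{ type: "cube" }` fails the enum check and is rejected before reaching the scene, which is exactly the failure mode raw code generation cannot catch.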
# The scene-store pattern
```javascript
// Zustand scene store exposed to AI tool calls
import { create } from 'zustand'
import { nanoid } from 'nanoid'

const useSceneStore = create((set) => ({
  objects: {},
  addObject: (type, overrides) => {
    const id = nanoid(10)
    set((s) => ({ objects: { ...s.objects, [id]: { id, type, ...overrides } } }))
    return id
  },
  updateObject: (id, patch) => set((s) => ({
    objects: { ...s.objects, [id]: { ...s.objects[id], ...patch } },
  })),
  // … 19 tools …
}))
```
```javascript
// AI tool dispatch: maps tool names from the LLM to store actions
const TOOL_DISPATCH = {
  add_object: (input) => useSceneStore.getState().addObject(input.type, input),
  update_object: (input) => useSceneStore.getState().updateObject(input.id, input.patch),
  // … 17 more …
}
```
# Avoiding the re-render storm
When the AI emits 10 parallel tool calls, naive subscriptions cause every component to re-render 10 times. Mitigations:
- Read state with `store.getState()` in event handlers, not via subscriptions.
- Subscribe at the leaf component (per-object) so a single object update only re-renders that object.
- Apply tool calls as a single batch transaction when possible.
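The batch-transaction idea can be sketched as a pure helper that folds all patches into the next `objects` map, so the store commits once and subscribers are notified once. `applyPatches` is a hypothetical name, not part of Zustand.

```javascript
// Fold N { id, patch } pairs into one new objects map (pure, no store access).
function applyPatches(objects, patches) {
  const next = { ...objects }
  for (const { id, patch } of patches) {
    next[id] = { ...next[id], ...patch }
  }
  return next
}

// In the store, the whole batch becomes a single set(), i.e. one notification:
//   applyBatch: (patches) =>
//     set((s) => ({ objects: applyPatches(s.objects, patches) }))
```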
# The agentic loop
Single-shot mode: the user prompts; the LLM emits N parallel tool calls in one response; the client dispatches them. Multi-iteration mode: tool results feed back to the LLM for chained reasoning ("what did I just place? now place the lamp on top"). Yugma supports both, gated by a budget.
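The multi-iteration mode, gated by a budget, reduces to a short loop. This is a sketch under stated assumptions: `agenticLoop`, `callLLM`, and `runTools` are hypothetical stand-ins for the model call and the client-side dispatch, and the message shapes are simplified.

```javascript
// Loop: call the model, dispatch its tool calls, feed results back,
// and stop when the model emits no tool calls or the budget runs out.
async function agenticLoop(prompt, { callLLM, runTools, maxIterations = 5 }) {
  let messages = [{ role: "user", content: prompt }]
  for (let i = 0; i < maxIterations; i++) {
    const response = await callLLM(messages)
    if (!response.toolCalls?.length) return response // model is done
    const results = runTools(response.toolCalls)     // client-side dispatch
    messages = [...messages, response, { role: "tool", content: results }]
  }
  return { stopped: "budget exhausted" }
}
```

Single-shot mode is the degenerate case: one model call, one dispatch, no feedback turn.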
# FAQ
Can I copy this architecture?
Yes — Yugma is closed-source today, but the architecture is well-documented in our research notes and engineering blog. The Zustand-store + typed-tool-call pattern is a general approach that works in any R3F app.
Does the LLM need to know all 19 tools?
Yes — they're sent as the tool schema in the LLM call. Modern LLMs handle 19 tools fine; even 50+ is within budget.
What LLM does Yugma use?
Cerebras Qwen-3 235B by default for speed; Anthropic Claude as fallback for harder reasoning.