Managing Scene Objects in React Three Fiber (with an AI Tool-Call Layer)
Three.js Discourse has a recurring question that nobody has answered well: "How should I manage scene objects (add, remove, get), update game object properties, manage object relationships, etc., within the r3f workflow?" Yugma's been live for a year on a scene-store pattern that solves this — and the same pattern is what lets the AI Director operate the scene graph through tool calls. Here it is in detail.
# TL;DR
- Use a Zustand store as the source of truth for the scene graph.
- Read state with `store.getState()` in event handlers; subscribe at the leaf component.
- Type your tool calls as JSON Schemas the LLM sees; never let the LLM emit raw Three.js code.
- One commit = one tool call = one undoable transaction.
# The pattern
```ts
import { create } from 'zustand'
import { nanoid } from 'nanoid'

// Zustand store — single source of truth
const useSceneStore = create<SceneState>((set, get) => ({
  objects: {},
  objectOrder: [],
  addObject: (type, overrides = {}) => {
    const id = nanoid(10)
    set((s) => ({
      objects: { ...s.objects, [id]: { id, type, ...defaultsFor(type), ...overrides } },
      objectOrder: [...s.objectOrder, id],
    }))
    return id
  },
  updateObject: (id, patch) => set((s) => {
    const obj = s.objects[id]
    if (!obj) return s
    return { objects: { ...s.objects, [id]: deepMerge(obj, patch) } }
  }),
  removeObject: (id) => set((s) => {
    const { [id]: _, ...rest } = s.objects
    return {
      objects: rest,
      objectOrder: s.objectOrder.filter((x) => x !== id),
    }
  }),
}))
```
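The store leans on two helpers the snippet doesn't define: `defaultsFor` (per-type defaults) and `deepMerge` (nested patch merge). As a sketch of what `updateObject` assumes, here is one plausible `deepMerge` — plain objects merge key by key, while arrays and primitives are replaced wholesale (so a `position: [x, y, z]` patch swaps the whole vector). The real implementation may differ.

```typescript
// Minimal recursive merge consistent with updateObject's usage:
// nested plain objects merge; arrays/primitives are replaced by the patch value.
function isPlainObject(v: unknown): v is Record<string, unknown> {
  return typeof v === 'object' && v !== null && !Array.isArray(v)
}

function deepMerge(
  base: Record<string, unknown>,
  patch: Record<string, unknown>,
): Record<string, unknown> {
  const out: Record<string, unknown> = { ...base }
  for (const [key, value] of Object.entries(patch)) {
    const prev = out[key]
    out[key] =
      isPlainObject(prev) && isPlainObject(value)
        ? deepMerge(prev, value) // recurse into nested objects
        : value // replace arrays and primitives outright
  }
  return out
}
```

With this shape, patching `{ transform: { position: [1, 0, 0] } }` updates `position` without clobbering a sibling `transform.scale`.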
Every render-time component subscribes only to the slice it needs:
```tsx
function SceneObjectMesh({ objectId }: { objectId: string }) {
  const obj = useSceneStore((s) => s.objects[objectId]) // re-renders only when this id changes
  if (!obj) return null
  return <mesh position={obj.transform.position}>…</mesh>
}
```
# Avoiding the re-render storm
When the AI emits 10 parallel add_object calls, naive subscriptions can re-render every component 10 times. Two mitigations:
- Per-object subscriptions. Each `SceneObjectMesh` subscribes only to its own id. Adding object N+1 doesn't re-render objects 1..N.
- Read in handlers, not props. Inside event callbacks, use `useSceneStore.getState()` instead of subscribing. `AIPanel` does this; the handler doesn't re-render when the scene changes.
```tsx
function AIPanel() {
  // ❌ Don't do this — every scene mutation re-renders the panel.
  // const objects = useSceneStore((s) => s.objects)
  async function send(prompt: string) {
    // ✅ Read at fire time. Handler doesn't subscribe.
    const { objects, environment, selectedObjectId } = useSceneStore.getState()
    const result = await aiCompose({ prompt, sceneContext: { objects, environment } })
    for (const tc of result.toolCalls) TOOL_DISPATCH[tc.name]?.(tc.input)
  }
  // …
}
```
# The AI tool-call layer
The trick to letting an LLM operate this store reliably: typed schemas.
```ts
const TOOL_SCHEMAS = {
  add_object: {
    description: 'Create an object with full material properties.',
    input_schema: {
      type: 'object',
      properties: {
        type: { type: 'string', enum: ['box', 'sphere', 'cylinder', 'plane', /* … */] },
        name: { type: 'string' },
        position: { type: 'array', items: { type: 'number' }, minItems: 3, maxItems: 3 },
        scale: { type: 'array', items: { type: 'number' }, minItems: 3, maxItems: 3 },
        color: { type: 'string', pattern: '^#[0-9a-fA-F]{6}$' },
        roughness: { type: 'number', minimum: 0, maximum: 1 },
        metalness: { type: 'number', minimum: 0, maximum: 1 },
        // …
      },
      required: ['type', 'name', 'position'],
    },
  },
  // … 18 more …
}
```
```ts
const TOOL_DISPATCH: Record<string, (input: any) => void> = {
  add_object: (input) => useSceneStore.getState().addObject(input.type, input),
  update_object: (input) => useSceneStore.getState().updateObject(input.id, input.patch),
  // …
}
```
The model sees the schema, emits validated tool calls, the dispatch routes them. Malformed calls are rejected at the schema layer before they ever touch the scene store.
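That "schema layer" is typically a JSON Schema validator such as Ajv run over the tool input before dispatch. As a dependency-free sketch of the rejection step (checking only `required` fields and `enum` values; a real validator covers the full schema), it could look roughly like:

```typescript
// Simplified property spec: just the parts this sketch checks.
type Schema = {
  properties: Record<string, { enum?: string[] }>
  required?: string[]
}

// Returns a list of problems; an empty list means the call may proceed to dispatch.
function validateToolInput(schema: Schema, input: Record<string, unknown>): string[] {
  const errors: string[] = []
  for (const field of schema.required ?? []) {
    if (!(field in input)) errors.push(`missing required field: ${field}`)
  }
  for (const [field, value] of Object.entries(input)) {
    const spec = schema.properties[field]
    if (!spec) {
      errors.push(`unknown field: ${field}`)
      continue
    }
    if (spec.enum && !spec.enum.includes(String(value))) {
      errors.push(`${field} must be one of: ${spec.enum.join(', ')}`)
    }
  }
  return errors
}
```

The payoff of rejecting here is that the scene store never sees a half-formed object: a call with `type: "cone"` against a `['box', 'sphere', …]` enum fails validation instead of producing an unrenderable entity.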
# One commit, one tool, one undo
```ts
// Each tool call is one undoable history entry
function dispatch(toolCall: { name: string; input: unknown }) {
  useHistoryStore.getState().commit() // snapshot before
  TOOL_DISPATCH[toolCall.name](toolCall.input)
}
```
User wants to undo? `useHistoryStore.getState().undo()` restores the snapshot. Works for AI-emitted calls and human-emitted calls equally.
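The post doesn't show `useHistoryStore` itself; under the snapshot model described here, a minimal framework-free version (names and the 100-entry cap are assumptions) could be:

```typescript
// Snapshot-based history: commit() pushes a deep copy of the scene slice
// *before* a mutation; undo() restores the most recent copy.
type SceneSnapshot = { objects: Record<string, unknown>; objectOrder: string[] }

function createHistory(
  getScene: () => SceneSnapshot,
  setScene: (s: SceneSnapshot) => void,
  limit = 100, // cap memory for long sessions
) {
  const stack: SceneSnapshot[] = []
  return {
    commit() {
      stack.push(structuredClone(getScene()))
      if (stack.length > limit) stack.shift() // drop the oldest snapshot
    },
    undo(): boolean {
      const prev = stack.pop()
      if (prev) setScene(prev)
      return prev !== undefined
    },
  }
}
```

Snapshotting the whole scene slice is the simplest correct scheme; at larger scene sizes you might switch to inverse patches, but the one-commit-per-tool-call contract stays the same.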
# Performance benchmarks
We've stress-tested up to 1000 objects in a single Yugma scene. Frame rate stays above 60fps on an M1 MacBook Pro. The bottleneck shifts from React reconciliation to GPU triangles — at which point you reach for instancing (e.g. `<Instances>` from Drei).
# What the LLM sees
When the user types "a wooden table with two chairs", the LLM gets the system prompt + the current scene context (in compact YSL format) + the 19 tool schemas. It emits something like:
```json
[
  {"name": "add_object", "input": {"type": "box", "name": "table_main", "position": [0, 0.375, 0], "scale": [1.2, 0.75, 0.6], "color": "#8B6914", "roughness": 0.7, "metalness": 0}},
  {"name": "add_object", "input": {"type": "box", "name": "chair_left", "position": [-0.7, 0.225, 0.5], "scale": [0.45, 0.45, 0.45], "color": "#5a3e1f", "roughness": 0.7, "metalness": 0}},
  {"name": "add_object", "input": {"type": "box", "name": "chair_right", "position": [0.7, 0.225, 0.5], "scale": [0.45, 0.45, 0.45], "color": "#5a3e1f", "roughness": 0.7, "metalness": 0}}
]
```
Three tool calls, parallel, one batch, one undo step. Same store mutations a human would make.
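Note that the `dispatch` shown earlier snapshots before every call; to make the whole batch a single undo step as described here, the dispatcher snapshots once before the loop instead. A sketch, assuming the same `commit`/handler shapes as above:

```typescript
type ToolCall = { name: string; input: Record<string, unknown> }

// One snapshot per batch: a single undo reverts all three adds at once.
function dispatchBatch(
  toolCalls: ToolCall[],
  commit: () => void, // e.g. useHistoryStore.getState().commit
  handlers: Record<string, (input: Record<string, unknown>) => void>,
) {
  commit() // single history entry for the whole batch
  for (const tc of toolCalls) handlers[tc.name]?.(tc.input)
}
```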
# The takeaway
The pattern is: typed scene store + per-object subscriptions + getState() in handlers + typed tool calls. It scales from a simple R3F project to a production AI 3D editor. Open-source projects can adopt the pattern; Yugma productizes it with a UI and an AI Director.