AI 3D for Digital Twins — A Preview of Yugma's Phase 13
Digital twins are 3D representations of real-world systems — buildings, factories, cities — that update with live data. The market is large ($15B+ projected by 2030) and the existing tools (Bentley iTwin, Siemens NX, NVIDIA Omniverse) are heavy enterprise software. Yugma's Phase 13 is a lighter-weight take.
# What we're building
A "digital twin" mode in Yugma where:
- A physical asset is represented as a Yugma scene.
- Sensor data streams into the scene via API.
- The scene updates in real time as data arrives.
- Operators view, query, and annotate the scene from a browser.
- The AI Director can answer questions about the scene state.
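To make "sensor data streams into the scene via API" concrete, here is a minimal sketch of what one update could look like. The SensorUpdate shape and the stream IDs are hypothetical, not Yugma's actual API; the real ingestion schema is still being designed (see below).

```ts
// Hypothetical shape of one sensor update pushed into a twin scene.
// Field names are illustrative, not Yugma's actual ingestion API.
interface SensorUpdate {
  streamId: string;  // e.g. "bin-0042/fill" or "forklift-2/position"
  timestamp: number; // Unix epoch milliseconds
  value: number | [number, number, number]; // scalar (fill 0..1) or xyz
}

// Example: one bin fill reading and one forklift position ping.
const updates: SensorUpdate[] = [
  { streamId: "bin-0042/fill", timestamp: Date.now(), value: 0.17 },
  { streamId: "forklift-2/position", timestamp: Date.now(), value: [12.5, 0, 8.0] },
];
```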
# The early demo
Imagine a small warehouse twin:
- Physical: 8 zones, 200 inventory bins, 3 forklifts.
- Yugma scene: 8 colored zones, bin-position markers, 3 forklift markers.
- API stream: bin fill levels (every 30s), forklift positions (every 10s).
- Operator: opens a browser, sees the current state, asks the AI "which zones are below 20% fill?", and the AI highlights them.
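The "zones below 20% fill" query reduces to a read over scene state. Here is a sketch of the kind of tool the AI Director could call, assuming bin fill is averaged per zone (the Bin shape and the averaging rule are our assumptions, not the shipped design):

```ts
interface Bin {
  zone: string;      // e.g. "zone-A"
  fillLevel: number; // 0..1, last reported value
}

// Average fill per zone, then return the zones under the threshold
// so the scene layer can highlight them.
function zonesBelowFill(bins: Bin[], threshold: number): string[] {
  const totals = new Map<string, { sum: number; count: number }>();
  for (const bin of bins) {
    const t = totals.get(bin.zone) ?? { sum: 0, count: 0 };
    t.sum += bin.fillLevel;
    t.count += 1;
    totals.set(bin.zone, t);
  }
  return [...totals.entries()]
    .filter(([, t]) => t.sum / t.count < threshold)
    .map(([zone]) => zone);
}

// zonesBelowFill(allBins, 0.2) might return ["zone-C", "zone-F"].
```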
The architecture: Firebase Realtime Database receives sensor pings, the scene-graph store updates, and React Three Fiber (R3F) re-renders. The AI Director has read access to the scene and answers questions in natural language.
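A minimal sketch of that pipeline, assuming the Firebase v9 modular SDK and a zustand store (zustand is our assumption; the post does not name Yugma's actual scene-graph store). Paths and component names are made up:

```tsx
import { getDatabase, ref, onValue } from "firebase/database";
import { create } from "zustand";

type Vec3 = [number, number, number];
interface TwinState {
  forklifts: Record<string, Vec3>;
  setForklifts: (forklifts: Record<string, Vec3>) => void;
}

// Store that the R3F components read from.
const useTwinStore = create<TwinState>((set) => ({
  forklifts: {},
  setForklifts: (forklifts) => set({ forklifts }),
}));

// Assumes initializeApp(firebaseConfig) has already run elsewhere.
// Each sensor ping overwrites the positions; subscribed components re-render.
onValue(ref(getDatabase(), "twins/demo-warehouse/forklifts"), (snap) => {
  useTwinStore.getState().setForklifts(snap.val() ?? {});
});

// R3F component: one marker mesh per forklift.
function ForkliftMarkers() {
  const forklifts = useTwinStore((s) => s.forklifts);
  return (
    <>
      {Object.entries(forklifts).map(([id, position]) => (
        <mesh key={id} position={position}>
          <boxGeometry args={[1, 1, 2]} />
          <meshStandardMaterial color="orange" />
        </mesh>
      ))}
    </>
  );
}
```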
# Why we think this works
- Browser-native deployment avoids the install and rollout friction that enterprise tools impose.
- The AI Director as the operator interface scales to thousands of bins and zones without adding UI complexity.
- Real-time collaboration means operations, maintenance, and analytics all see the same scene.
- Yugma's existing scene graph already covers most of the engineering.
# What's still being designed
- Schema for sensor → scene mapping. Bins have a position; sensors have a stream. We need a clean binding layer (one possible shape is sketched after this list).
- Permissions. Operators need to see different things than executives do; role-based access control (RBAC) still needs design work.
- Scale. A small warehouse is 200 bins; a real-world warehouse is 50,000. We have to figure out instancing, level-of-detail, and culling at twin scale (see the instancing sketch below).
- Historical playback. Operators want to see how zones evolved over a shift; that's not just a real-time scene, it's a time-series viewer.
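One possible shape for the binding layer from the first item above: a declarative table that maps each sensor stream onto a property of a scene-graph node. This is a design sketch, not a committed schema; every name in it is provisional.

```ts
// Sketch of a declarative sensor -> scene binding, one record per stream.
interface SensorBinding {
  streamId: string;                     // sensor stream, e.g. "bin-0042/fill"
  nodeId: string;                       // scene-graph node to drive
  property: "position" | "color" | "scale" | "label";
  transform?: (raw: number) => unknown; // optional mapping, e.g. fill -> color
}

const bindings: SensorBinding[] = [
  {
    streamId: "bin-0042/fill",
    nodeId: "bin-marker-0042",
    property: "color",
    // Demo rule: below-20% bins render red, everything else green.
    transform: (fill) => (fill < 0.2 ? "red" : "green"),
  },
  { streamId: "forklift-2/position", nodeId: "forklift-2", property: "position" },
];
```

At runtime the ingestion layer would look up each incoming streamId in this table and apply the transform, which keeps sensor wiring out of the scene graph itself.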
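For the scale item, the standard three.js answer is instanced rendering: one draw call for all bins, with per-instance transforms. A sketch in R3F, with a made-up grid layout standing in for real bin positions:

```tsx
import { useLayoutEffect, useRef } from "react";
import * as THREE from "three";

const COUNT = 50_000; // one instance per bin

function BinInstances() {
  const meshRef = useRef<THREE.InstancedMesh>(null!);

  useLayoutEffect(() => {
    const m = new THREE.Matrix4();
    for (let i = 0; i < COUNT; i++) {
      // Hypothetical grid layout; real positions would come from the twin schema.
      m.setPosition((i % 250) * 1.5, 0, Math.floor(i / 250) * 1.5);
      meshRef.current.setMatrixAt(i, m);
    }
    meshRef.current.instanceMatrix.needsUpdate = true;
  }, []);

  // One draw call for all 50k bins instead of 50k separate meshes.
  return (
    <instancedMesh ref={meshRef} args={[undefined, undefined, COUNT]}>
      <boxGeometry args={[1, 1, 1]} />
      <meshStandardMaterial color="steelblue" />
    </instancedMesh>
  );
}
```

Instancing only solves draw-call count; level-of-detail and culling still need their own answers.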
# Who would buy this
- Mid-market warehouses + logistics (10k-100k SKUs). Currently using spreadsheets + maybe Power BI.
- Smart-building operators (commercial real estate, hospitals). Currently using BMS dashboards from Siemens / Honeywell.
- Smart-city pilots (signal density, parking, waste). Currently using ArcGIS + custom visualizations.
We're not aiming at NVIDIA Omniverse's enterprise-aerospace-and-automotive crowd. Different price point, different complexity.
# Timeline
We're not committing to a public ship date. Phase 13 needs a paying design partner before we build it; reach out if you want to be that partner.
# What this means for current Yugma users
Nothing yet. The scene-composition product stays the focus. Digital twins are a Phase 13 expansion, not a refocus.
Read research/v8 (Digital twin plan) →
Read the AI 3D scene composition pillar →