Mission
Turn CAD intelligence into real-time, low-latency MR overlays usable on the shop floor. You own the layer between computed alignment data and what an operator actually sees through their headset. If overlays lag, flicker, or misalign, operators won't trust the system, and the product is dead.
System Ownership
- Primary: MR rendering pipeline (alignment transform → holographic overlay → operator display)
- Primary: Spatial anchoring system (persistent world mapping, anchor drift compensation)
- Primary: Industrial UX layer (heatmaps, tolerance visualisation, contextual guidance, gesture/voice interaction)
- Secondary interface: 3D Perception team (you consume their alignment transforms + deviation maps)
- Secondary interface: SaaS Backend (multi-user session state, anchor sync across devices)
- Does NOT own: Point cloud registration (CV team), edge inference models (Edge AI team), cloud analytics dashboards (Backend team)
What You Will Build
- Real-time 3D overlay rendering – Render CAD geometry holographically overlaid on physical structures at 90 Hz. The operator must see design intent superimposed on the real object with < 2mm positional error.
- Spatial anchors & persistent world mapping – Place anchors that survive across sessions, device restarts, and multiple users. Handle anchor drift in large open industrial sites (> 500m²).
- Deviation heatmap visualisation – Colour-coded deviation overlays (green/yellow/red) mapped directly onto physical surfaces. Operators must instantly see where a structure deviates from CAD tolerance (a minimal colour-mapping sketch follows this list).
- Contextual guidance system – Directional arrows guiding operators to inspection points. Step-by-step measurement sequences. Voice-activated annotation.
- Industrial device UX – Interface designed for gloved hands, noisy environments, bright sunlight. Not consumer AR – this is factory-grade.
- Multi-user shared MR sessions – Multiple operators viewing the same holographic overlay simultaneously with consistent spatial state.
- Apple Vision Pro / Meta Quest Pro enterprise integrations – Build and maintain device-specific rendering paths and SDK integrations for enterprise MR hardware.
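For the heatmap item above, a minimal sketch of mapping per-vertex deviation values to a green/yellow/red gradient via vertex colours. The mesh, deviation array, and tolerance thresholds are hypothetical, and a production path for large CAD models would more likely do this mapping in a shader:

```csharp
using UnityEngine;

// Sketch: bake per-vertex deviation (mm) into vertex colours so a
// vertex-colour material can display a green/yellow/red tolerance heatmap.
public static class DeviationHeatmap
{
    // deviationsMm[i] is the measured deviation for mesh vertex i.
    // warnMm / failMm are hypothetical tolerance thresholds.
    public static void Apply(Mesh mesh, float[] deviationsMm, float warnMm = 1f, float failMm = 2f)
    {
        var colors = new Color[mesh.vertexCount];
        for (int i = 0; i < colors.Length; i++)
        {
            float d = Mathf.Abs(deviationsMm[i]);
            colors[i] = d <= warnMm
                ? Color.Lerp(Color.green, Color.yellow, d / warnMm)            // within tolerance
                : Color.Lerp(Color.yellow, Color.red,
                             Mathf.Clamp01((d - warnMm) / (failMm - warnMm))); // approaching / out of tolerance
        }
        mesh.colors = colors; // rendered by a vertex-colour material
    }
}
```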
Core Technical Responsibilities
- Build and maintain the render loop at ≤ 11ms per frame (90 Hz) on target MR hardware – profile GPU/CPU per frame, eliminate jank (a frame-budget monitoring sketch follows this list)
- Implement spatial anchor management: placement, persistence, drift detection, re-anchoring. Handle the case where an anchor drifts 5mm over 4 hours of continuous use
- Consume alignment transforms from the CV pipeline (SE(3) rigid body transforms) and project CAD geometry into MR world coordinates with sub-mm accuracy (a transform-application sketch follows this list)
- Build the deviation heatmap renderer: map per-point deviation values to colour gradients on 3D surfaces in real-time
- Solve Z-fighting when holographic overlay edges coincide with physical surface edges – implement depth offset strategies that work across viewing angles
- Design and implement the multi-user session synchronisation protocol over unreliable site WiFi (packet loss, high latency, intermittent connectivity)
- Build the gesture and voice interaction layer for industrial environments (competing noise, PPE gloves, limited hand tracking in bright light)
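For the frame-budget item above, a minimal sketch of how sustained frame time might be monitored in the field, computing a P99 over a rolling GPU sample window. It assumes Frame Timing Stats are enabled on the target platform; the window size and reporting are illustrative, and a shipping build would treat the platform's native GPU profiler as the source of truth:

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Sketch: collect GPU frame times and report a P99 against the 11 ms budget.
// Assumes Frame Timing Stats are enabled in Player Settings on the target device.
public class FrameBudgetMonitor : MonoBehaviour
{
    const double BudgetMs = 11.0;                 // 90 Hz frame budget
    readonly List<double> gpuSamples = new List<double>();
    readonly FrameTiming[] timings = new FrameTiming[1];

    void Update()
    {
        FrameTimingManager.CaptureFrameTimings();
        if (FrameTimingManager.GetLatestTimings(1, timings) > 0)
            gpuSamples.Add(timings[0].gpuFrameTime);

        if (gpuSamples.Count >= 900)              // roughly a 10 s window at 90 Hz
        {
            var sorted = gpuSamples.OrderBy(t => t).ToList();
            double p99 = sorted[(int)(sorted.Count * 0.99) - 1];
            if (p99 > BudgetMs)
                Debug.LogWarning($"GPU P99 {p99:F2} ms exceeds {BudgetMs} ms budget");
            gpuSamples.Clear();
        }
    }
}
```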
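For the alignment-transform item, a minimal sketch of applying the 4×4 SE(3) matrix from the perception pipeline as a CAD-to-world pose on the rendered model's root. The cadToWorld source and cadRoot reference are hypothetical, and row/column-major layout, units, and handedness must follow the coordinate-system contract agreed with the CV team:

```csharp
using UnityEngine;

// Sketch: position the CAD overlay using an SE(3) alignment transform
// received from the perception pipeline (CAD frame -> MR world frame).
public static class CadAlignment
{
    public static void ApplyAlignment(Transform cadRoot, Matrix4x4 cadToWorld)
    {
        // SE(3) = rotation + translation only, so scale stays at identity.
        Vector3 position = cadToWorld.GetColumn(3);   // translation column
        Quaternion rotation = cadToWorld.rotation;    // valid: rotation block is orthonormal
        cadRoot.SetPositionAndRotation(position, rotation);
    }
}
```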
Required Technical Mastery
- Game engines: Unity 3D (primary) or Unreal Engine – deep knowledge, not tutorial-level. Custom render pipelines, shader programming, GPU profiling
- AR frameworks: ARKit, ARCore, AR Foundation, Meta SDK. You must have shipped at least one AR application to real users
- MR SDKs: OpenXR, MRTK (Mixed Reality Toolkit), HoloLens 2 SDK, or Apple Vision Pro SDK
- Spatial computing: Spatial anchors, world mapping, plane detection, mesh reconstruction from device sensors
- 3D rendering optimisation: Draw call batching, LOD management, occlusion culling, GPU instancing for large CAD models
- 3D maths: Coordinate system transforms (device → world → CAD), quaternion interpolation, projection matrices
- Networking: Real-time state sync for multi-user MR (WebRTC, custom UDP protocols, conflict resolution)
- Languages: C# (Unity), C++ (native plugins, performance-critical paths), Swift (Apple ecosystem)
Production Challenges You'll Solve
- Anchor drift on large sites – A 2000m² factory floor. Spatial anchors placed at one end drift by 8mm relative to anchors at the other end over a 4-hour shift. Build drift detection and automatic re-anchoring without disrupting the operator's session (a simplified drift-check sketch follows this list).
- Z-fighting at overlay edges – The CAD overlay of a steel beam coincides exactly with the physical beam edge. The rendering oscillates between showing the overlay and the real surface. Solve this across all viewing angles without introducing visible offset.
- Unreliable site WiFi – Multi-user session sync over a factory WiFi network with 200ms latency spikes and 5% packet loss. Session state must remain consistent. Build a protocol that handles this gracefully (a state-reconciliation sketch follows this list).
- Gloves and noise – The operator wears heavy-duty gloves (no finger tracking) and stands next to a running CNC machine (90dB ambient noise). Design interaction that actually works – large gesture zones, voice commands with noise cancellation, or physical button triggers.
- Sunlight washout – Outdoor construction site, direct sunlight. The MR overlay is barely visible. Implement adaptive rendering: high-contrast overlays, outline-only modes, and automatic brightness compensation.
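For the anchor-drift challenge above, a simplified sketch of drift detection against a physically fixed reference marker. The marker-observation callback, thresholds, and re-anchoring step are assumptions; a production version would filter noisy observations and re-anchor without a visible jump:

```csharp
using UnityEngine;

// Sketch: detect spatial anchor drift against a physically fixed reference
// (e.g. a surveyed fiducial marker) and trigger re-anchoring past a threshold.
public class AnchorDriftMonitor : MonoBehaviour
{
    public Transform anchorTransform;            // pose reported by the anchor system
    public Vector3 expectedMarkerOffset;         // marker position in the anchor's frame at placement time
    public float driftThresholdMeters = 0.003f;  // 3 mm budget from the KPI table

    // Assumed callback: invoked whenever the tracking system re-observes the marker.
    public void OnMarkerObserved(Vector3 observedMarkerWorldPos)
    {
        Vector3 predicted = anchorTransform.TransformPoint(expectedMarkerOffset);
        float driftMeters = Vector3.Distance(predicted, observedMarkerWorldPos);

        if (driftMeters > driftThresholdMeters)
        {
            Debug.Log($"Anchor drift {driftMeters * 1000f:F1} mm, re-anchoring");
            ReAnchor(observedMarkerWorldPos);
        }
    }

    void ReAnchor(Vector3 observedMarkerWorldPos)
    {
        // Placeholder: create a new platform anchor near the marker and smoothly
        // re-parent overlay content to it so the operator sees no visible jump.
    }
}
```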
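For the site-WiFi challenge, a bare-bones sketch of sequence-numbered, last-writer-wins session state, so duplicated or out-of-order packets on a lossy network can never roll shared state backwards. Message fields and transport are hypothetical; a real protocol would add delta compression and reliable ordering where needed:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Sketch: idempotent, last-writer-wins session state so duplicated or
// out-of-order packets on lossy site WiFi never roll shared state backwards.
[Serializable]
public struct AnnotationState
{
    public string annotationId;
    public Vector3 worldPosition;
    public ulong sequence;       // monotonically increasing per author
    public string authorDevice;  // tie-breaker for equal sequence numbers
}

public class SharedSessionState
{
    readonly Dictionary<string, AnnotationState> annotations = new Dictionary<string, AnnotationState>();

    // Apply an incoming update only if it is newer than what we already hold.
    public bool Apply(AnnotationState incoming)
    {
        if (annotations.TryGetValue(incoming.annotationId, out var current))
        {
            bool newer = incoming.sequence > current.sequence ||
                         (incoming.sequence == current.sequence &&
                          string.CompareOrdinal(incoming.authorDevice, current.authorDevice) > 0);
            if (!newer) return false;  // stale or duplicate packet: ignore
        }
        annotations[incoming.annotationId] = incoming;
        return true;                   // state advanced; re-render overlay for this annotation
    }
}
```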
Success KPIs
| KPI | Target | Measurement |
|---|---|---|
| Render frame time | ≤ 11ms (90 Hz sustained) | GPU profiler on target device, P99 metric |
| Overlay positional accuracy | < 2mm vs. ground truth | Metrology comparison against physical targets |
| Spatial anchor stability | < 3mm drift over 4 hours | Measured against fixed reference markers |
| Session uptime | ≥ 4 hours without re-anchor | Continuous field test on target hardware |
| Multi-user sync latency | < 500ms state convergence | Measured between 2+ devices on site WiFi |
| Operator task completion rate | ≥ 90% without assistance | Field usability test with real operators |
Failure If Underperforming
- Overlay positional error exceeds 2mm → operators see design intent misaligned with physical reality. They lose trust in the system and revert to manual measurement.
- Frame rate drops below 90 Hz → motion sickness, visual discomfort, operator fatigue. Device gets shelved after one shift.
- Anchor drift undetected → deviation heatmaps display incorrect values. False pass/fail decisions on QC inspections. Liability risk.
- Multi-user sync breaks → two operators viewing the same structure see different overlays. Confusion, lost time, eroded confidence.
Collaboration Interfaces
| With | Interface |
|---|---|
| Lead CV Engineer | You consume their alignment transform (4×4 SE(3) matrix) and deviation heatmap (per-point scalar). Coordinate system contract must be exact. |
| Edge AI Engineer | They handle on-device inference. You handle on-device rendering. Shared GPU/CPU budget on the same hardware – coordinate resource allocation. |
| Backend Engineer | Multi-user session state, anchor persistence across sessions, and scan data upload flow through their APIs. |
| CAD Geometry Engineer | They provide the mesh representations you render. File format and LOD contract must be defined jointly. |
Why This Role Is Mission-Critical
We don't sell software – we sell trust. The MR overlay is the moment the operator decides whether to trust the measurement. If the overlay is smooth, accurate, and intuitive, the product becomes indispensable. If it stutters, drifts, or confuses, the headset goes back in the box. You own the trust layer. Every millimetre of accuracy achieved by the CV team, every optimisation by the Edge AI team – all of it is invisible if your rendering fails.
About Us
Building the D2R (Design-to-Reality) platform – sub-millimetre CAD alignment + edge AI + mixed-reality overlay for industrial field workers. Venture-backed, seed-stage, < 20 engineers. Your work will be used by operators on factory floors and construction sites across India and globally.
- Location: Bangalore / Hyderabad
- Stage: Seed / Pre-Series A (venture-backed)
- Industries: Construction, Manufacturing, Infrastructure, Energy