Contextual models
Once an experience is promoted to Level 3 (see the maturity levels below), we employ Universal Scene Description (USD), a framework for the interchange of 3D graphics data. USD enables collaboration, non-destructive editing, and multiple views of the same scene, and it is rapidly becoming the standard across industries such as visual effects, architecture, design, robotics, and rendering.
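As a rough illustration of those properties, here is a minimal sketch using USD's Python bindings (the `pxr` module, distributed on PyPI as `usd-core`); the file names and prim paths are illustrative assumptions, not artifacts of the platform described here.

```python
# Minimal USD sketch: author a base scene, then override it
# non-destructively from a second layer.
from pxr import Usd, UsdGeom

# Author a base scene in its own layer.
stage = Usd.Stage.CreateNew("biography_base.usda")
UsdGeom.Xform.Define(stage, "/Memorial")
sphere = UsdGeom.Sphere.Define(stage, "/Memorial/Marker")
sphere.GetRadiusAttr().Set(1.0)
stage.GetRootLayer().Save()

# A collaborator edits non-destructively: their layer sublayers the
# base and records only an override, never rewriting the base file.
edit_stage = Usd.Stage.CreateNew("biography_edit.usda")
edit_stage.GetRootLayer().subLayerPaths.append("biography_base.usda")
override = UsdGeom.Sphere(edit_stage.OverridePrim("/Memorial/Marker"))
override.GetRadiusAttr().Set(2.0)  # the stronger opinion wins on composition
edit_stage.GetRootLayer().Save()
```

The non-destructive quality comes from USD's layer composition: the edit layer holds only an override opinion on the radius, so `biography_base.usda` is untouched and other collaborators can compose their own views of the same scene.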
State Space Models offer compelling benefits both as alternatives to and in combination with transformer models, bringing more parallelizable and efficient handling of rich graphics and long, continuous data streams.
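To make the underlying recurrence concrete, here is a minimal NumPy sketch of a linear, discrete-time state space layer; the matrices are random placeholders rather than trained parameters, and production architectures such as S4 or Mamba add structure and input-dependent selectivity on top of this skeleton.

```python
import numpy as np

def ssm_scan(A, B, C, D, u):
    """Linear state space recurrence:
        x[t] = A @ x[t-1] + B @ u[t]
        y[t] = C @ x[t]   + D @ u[t]
    Time is linear in sequence length with a constant-size state,
    which is what lets SSMs handle long, continuous signals cheaply.
    """
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = A @ x + B @ u_t
        ys.append(C @ x + D @ u_t)
    return np.stack(ys)

# Placeholder parameters (illustrative, not trained).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) * 0.3   # scaled toward stability
B = rng.normal(size=(4, 1))
C = rng.normal(size=(2, 4))
D = rng.normal(size=(2, 1))
u = rng.normal(size=(1000, 1))      # a long 1-D input signal
y = ssm_scan(A, B, C, D, u)         # shape (1000, 2)
```

Because the recurrence is linear in the state, it can equivalently be evaluated as a convolution or a parallel scan, which is where the training-time parallelism over sequence length comes from, while inference keeps constant memory per step.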
For users, this means richer, context-aware experiences across more diverse devices. The maturity levels, and the autonomy stage each implies, are:
1. Level 1: Text-Only, No Absolute Location Reference
- Autonomy Stage: Manual Operation
- Basic text-based biographies with simple navigation
- All content creation and structuring done manually by users
2. Level 2: Text and Static Media, Basic Timeline
- Autonomy Stage: Assisted Operation
- Incorporates static images and audio, with a basic timeline view
- System assists with timeline generation and basic content suggestions
3. Level 3: Multimedia Integration, 3D Environments, Basic AI Assistance
- Autonomy Stage: Partial Autonomy
- Includes videos, interactive media, and basic 3D environments
- AI-powered content suggestions and fact-checking
4. Level 4: Immersive Experiences, Advanced AI Interaction, Cross-Platform
- Autonomy Stage: High Autonomy
- Fully immersive 3D and VR experiences with AI-driven interactive storytelling
- Seamless experience across multiple platforms
5. Level 5: Multi-Modal, Generative AI-Enabled, Real-Time and Portable
- Autonomy Stage: Full Autonomy
- Multi-modal interaction with real-time, AI-generated content
- Fully portable and adaptable experiences with minimal user input required
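These levels map naturally onto a small lookup structure. A hypothetical Python sketch follows (the names and schema are ours, for illustration only, not part of any published specification):

```python
from dataclasses import dataclass
from enum import IntEnum

class AutonomyStage(IntEnum):
    MANUAL = 1
    ASSISTED = 2
    PARTIAL = 3
    HIGH = 4
    FULL = 5

@dataclass(frozen=True)
class MaturityLevel:
    level: int
    autonomy: AutonomyStage
    capabilities: tuple[str, ...]

LEVELS = {
    1: MaturityLevel(1, AutonomyStage.MANUAL,
                     ("text-only biographies", "manual structuring")),
    2: MaturityLevel(2, AutonomyStage.ASSISTED,
                     ("static images and audio", "basic timeline")),
    3: MaturityLevel(3, AutonomyStage.PARTIAL,
                     ("video and 3D environments", "AI suggestions", "fact-checking")),
    4: MaturityLevel(4, AutonomyStage.HIGH,
                     ("immersive 3D/VR", "AI storytelling", "cross-platform")),
    5: MaturityLevel(5, AutonomyStage.FULL,
                     ("multi-modal interaction", "real-time generative content")),
}

def usd_required(level: int) -> bool:
    """Per the section above, USD enters the pipeline at Level 3."""
    return level >= 3
```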
Spatial data and advanced visualizations, powered by cloud and AI, are eroding the physical-digital divide. This makes extended reality and digital twins practical: simulations and edge sensors close the gap between desired and actual states.
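As a hedged sketch of the reconciliation loop a digital twin runs, consider the following; every function here is a hypothetical stand-in for a real edge-sensor or actuator integration.

```python
import random
import time

DESIRED_TEMP_C = 21.0   # desired state held by the twin
TOLERANCE_C = 0.5

def read_edge_sensor() -> float:
    """Hypothetical stand-in for an edge-sensor reading of actual state."""
    return 21.0 + random.uniform(-2.0, 2.0)

def simulate_correction(actual: float, desired: float) -> float:
    """Hypothetical stand-in: the twin runs a (here trivial) simulation
    to choose an adjustment expected to close the gap."""
    return desired - actual

def apply_to_physical_asset(adjustment: float) -> None:
    """Hypothetical stand-in for commanding the physical system."""
    print(f"applying adjustment of {adjustment:+.2f} C")

def reconcile(cycles: int = 3, poll_seconds: float = 1.0) -> None:
    # The twin's core loop: observe actual state, compare it with the
    # desired state, and correct only when the gap exceeds tolerance.
    for _ in range(cycles):
        actual = read_edge_sensor()
        if abs(DESIRED_TEMP_C - actual) > TOLERANCE_C:
            apply_to_physical_asset(simulate_correction(actual, DESIRED_TEMP_C))
        time.sleep(poll_seconds)

reconcile()
```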