AU Class

Using AI-Enabled Speech Control to Increase Immersion for XR Design Review

    Description

    When using immersive systems such as virtual reality (VR), augmented reality (AR), and mixed reality (MR) for design review, a key attribute of the system is seamless interaction with at-scale, realistic digital models. VRED software is a widely adopted manufacturing design-review application offering close-to-truth photorealism and a highly immersive extended reality (XR) interface. Using artificial intelligence (AI)-enabled speech control with VRED software can increase the level of immersion, allowing a user to interact directly with the digital model without scene-occluding graphical user interfaces (GUIs). It also lets users interact with VRED without prior training, enabling more people to perform unassisted design reviews in XR. Project Mellon is NVIDIA's internal speech-enablement project that uses the Riva automatic speech recognition (ASR) software with a prototype dialogue manager and zero-shot natural language processing (NLP) to achieve a developer-friendly integration of AI-enabled speech. In this session, we’ll demonstrate Mellon working with VRED, and we’ll discuss how Mellon is used to easily update command lists without the need for extensive NLP training.
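    The speech-to-VRED pattern described above can be sketched in a few lines of Python. This is a minimal sketch, not Project Mellon's actual API: the command table, interpret(), and dispatch() are hypothetical names, and only selectVariantSet() corresponds to a VRED scripting command (a fallback is included so the sketch also runs outside VRED).

```python
# Minimal sketch of a speech-to-VRED bridge. NOT Project Mellon's API;
# the command table and function names below are hypothetical.
import re

# Declarative command list: each entry is an intent whose named regex
# groups act as slots (command variables).
COMMANDS = {
    "switch_variant": re.compile(r"(?:show|switch to) the (?P<variant_set>.+)", re.IGNORECASE),
    "toggle_xr":      re.compile(r"(?:enter|exit) (?:vr|xr) mode", re.IGNORECASE),
}

def interpret(utterance):
    """Map an ASR transcript to an (intent, slots) pair."""
    for intent, pattern in COMMANDS.items():
        match = pattern.search(utterance.strip())
        if match:
            return intent, match.groupdict()
    return None, {}

def dispatch(intent, slots):
    """Execute the matched intent (VRED calls are illustrative)."""
    if intent == "switch_variant":
        name = slots["variant_set"]
        # Inside VRED's Script Editor, selectVariantSet(name) switches the
        # named variant set; outside VRED we just report what would happen.
        if "selectVariantSet" in globals():
            selectVariantSet(name)
        else:
            print(f"Would select variant set: {name}")
    elif intent == "toggle_xr":
        print("Would toggle XR mode (placeholder for a VRED XR call).")

# Example: a transcript as it might arrive from the ASR service.
intent, slots = interpret("switch to the Red Leather Interior")
if intent:
    dispatch(intent, slots)
```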

    Key Learnings

    • Learn how speech was used to drive immersion with VRED in XR.
    • Learn how NLP uses an architecture of intents and slots to understand system commands and command variables (see the sketch after this list).
    • Learn how AI is used in dialogue manager (DM) and NLP models.
    • Discover how a unique user experience can be built using variant sets in VRED combined with NVIDIA's Project Mellon.
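    The intent-and-slot idea from the list above, and the claim that command lists can be updated without retraining, can be illustrated with a short sketch. In Project Mellon the matching is described as zero-shot NLP; here a simple token-overlap score stands in for that model, and every name below (COMMAND_LIST, score, classify) is hypothetical.

```python
# Sketch of a command list that can be extended without model retraining.
# Token overlap stands in for the zero-shot NLP matching; all names are
# hypothetical and not part of Project Mellon or VRED.

COMMAND_LIST = {
    "open_trunk":   "open the trunk of the car",
    "close_trunk":  "close the trunk of the car",
    "rotate_model": "rotate or turn the model around",
}

def score(utterance: str, description: str) -> float:
    """Stand-in for zero-shot intent classification: token overlap."""
    a, b = set(utterance.lower().split()), set(description.lower().split())
    return len(a & b) / max(len(a | b), 1)

def classify(utterance: str) -> str:
    """Pick the intent whose description best matches the utterance."""
    return max(COMMAND_LIST, key=lambda intent: score(utterance, COMMAND_LIST[intent]))

# Adding a command is a data change, not a training run:
COMMAND_LIST["show_interior"] = "show the interior of the vehicle"

print(classify("please open the trunk"))      # -> open_trunk
print(classify("can you show the interior"))  # -> show_interior
```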