Virtual Reality Design Examples

Explore top LinkedIn content from expert professionals.

  • Oleg Frolov

    Product Design and UX Engineering

    16,420 followers

    Here is another experiment with a spatial pie menu. This time, I tried to see how different types of buttons would interact with each other. In this prototype, there are three types:

    — Action Button: invokes a function of the system (Media, Settings, Apple Intelligence);
    — Binary Toggle: changes a state of the system and reflects the change (Passthrough Mode On/Off, Microphone On/Off);
    — App Button: encapsulates some system functions inside itself, reflecting their states (Recorder).

    Observation: reducing the dimensions of an interaction is the simplest way to increase its accessibility and decrease the number of possible error cases. Using polar coordinates in the pie menu lets you restrict item selection to angular movement (1D) and control the selection states (default, hover, selected, active) with the radius (1D), which makes the interaction easier and more forgiving to perform (compare it to the Quest OS action menu and its 2D selection approach). Also, the pie menu stays idle while the radius between the center of the menu and your pinch position, projected onto the UI, is under a certain threshold. This helps prevent false-positive selections when you trigger the menu accidentally or change your mind.

    #handtracking #quest #virtualreality #unity3d #spatialcomputing #spatialdesign #spatialui #spatialux #vrdesign #prototyping #uxengineering #csharp #piemenu #xrdesign #interactiondesign
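
    A minimal sketch of that polar mapping (plain Python here rather than the author's Unity/C# prototype; the item count, thresholds, and state names are illustrative assumptions):

      import math

      def pie_menu_state(center, pinch, item_count=6,
                         idle_radius=0.05, select_radius=0.12):
          """Map a pinch position (projected onto the menu plane) to a
          pie-menu item and selection state using polar coordinates.
          idle_radius keeps the menu inert near the center (prevents
          false positives); select_radius is where hover becomes selected."""
          dx, dy = pinch[0] - center[0], pinch[1] - center[1]
          radius = math.hypot(dx, dy)

          # 1D state control: the radius alone decides the state.
          if radius < idle_radius:
              return None, "idle"

          # 1D item selection: the angle alone decides which item.
          angle = math.atan2(dy, dx) % (2 * math.pi)
          item = int(angle / (2 * math.pi / item_count))

          state = "hover" if radius < select_radius else "selected"
          return item, state

      # Example: a pinch up and to the right, outside the idle zone.
      print(pie_menu_state(center=(0.0, 0.0), pinch=(0.10, 0.10)))
      # -> (0, 'selected') with these thresholds

    Because each degree of freedom maps to exactly one decision (angle to item, radius to state), a noisy hand can wobble along the other axis without changing the outcome.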

  • Ken Pfeuffer

    Associate Professor | Sapere Aude Research Leader | Explorer in HCI, XR, AI

    4,116 followers

    New LinkedIn article: Design Principles & Issues for Gaze and Pinch Interaction. With the imminent release of the Apple Vision Pro, a wave of innovative technology is about to get into people's hands. The "eyes and hands" interface shakes up interaction design, indicating a need for principles, frameworks, and standards. This article highlights 5 design principles and 5 issues for designing eyes & hands interfaces, drawing insights from both my personal experience and scientific articles in the area of human-computer interaction. Whether you're interested in design, tech, or research in this evolving space, the article provides valuable perspectives to enhance your understanding. Feel free to contact me if you have any questions! Join the conversation on the intriguing intersection of design, spatial technology, and human factors. #VR #AR #VirtualReality #AugmentedReality #Spatialcomputing #VisionPro #InteractionDesign #HumanComputerInteraction #designprinciples

  • Michael J. Proulx

    Research Scientist, Behavioral AI | Professor of Cognition and Technology | Integrating AI, Multimodal Evaluation & Sensors, Cognitive Science, and Accessibility for Enhanced, Ethical Tech Applications & Interpretability

    5,045 followers

    New paper alert! Gaze Inputs for Targeting: The Eyes Have It, Not With a Cursor. Eye tracking is enabling people to use gaze for interactions in AR, VR, and other domains. Can eye tracking let users target and select elements as well as, or better than, controller- or head-based targeting? Here, we explored visual feedback methods (none, cursor, outline, and resize) that let a person know where the system thinks their gaze is pointing. We tested these with different object sizes, and we also assessed signal-quality requirements. We found that:

    1) high-quality eye tracking outperforms controllers in throughput and matches them in movement time and subjective measures;
    2) cursors are the least preferred form of visual feedback; and
    3) even with degraded eye-tracking accuracy, gaze input remains comparable to controllers and outperforms head tracking for larger elements.

    For more details, you can access the full paper, open access, via the link in the comments. Thanks to a great team of collaborators at Meta Reality Labs Research: Ajoy Fernandes, Immo Schuetz, and Scott Murdison, and support for study logistics from Joseph Zhang, Carina Thiemann, Duane Sawyer, Joel Shook, and Ken Koh.
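
    For context, pointing throughput is conventionally computed ISO 9241-9 style from effective distance, endpoint scatter, and movement time. A minimal sketch of that metric (the formulas are the standard ones; the sample numbers are invented, and the paper's exact procedure may differ):

      import math
      import statistics

      def throughput(distances, endpoints_x, movement_times):
          """Effective (Fitts' law) throughput in bits/s:
          We  = 4.133 * sd of endpoint deviations along the task axis,
          IDe = log2(De / We + 1),  TP = IDe / MT."""
          de = statistics.mean(distances)              # effective distance (m)
          we = 4.133 * statistics.stdev(endpoints_x)   # effective width (m)
          ide = math.log2(de / we + 1)                 # effective index of difficulty
          mt = statistics.mean(movement_times)         # mean movement time (s)
          return ide / mt

      # Invented trials: tight endpoint scatter at short movement times
      # is what pushes gaze throughput above controller throughput.
      print(throughput(distances=[0.30, 0.28, 0.31],
                       endpoints_x=[0.001, -0.004, 0.003],
                       movement_times=[0.35, 0.40, 0.38]))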

  • Ahsen Khaliq

    ML @ Hugging Face

    36,024 followers

    Scaling Up Dynamic Human-Scene Interaction Modeling Confronting the challenges of data scarcity and advanced motion synthesis in human-scene interaction modeling, we introduce the TRUMANS dataset alongside a novel HSI motion synthesis method. TRUMANS stands as the most comprehensive motion-captured HSI dataset currently available, encompassing over 15 hours of human interactions across 100 indoor scenes. It intricately captures whole-body human motions and part-level object dynamics, focusing on the realism of contact. This dataset is further scaled up by transforming physical environments into exact virtual models and applying extensive augmentations to appearance and motion for both humans and objects while maintaining interaction fidelity. Utilizing TRUMANS, we devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length, taking into account both scene context and intended actions. In experiments, our approach shows remarkable zero-shot generalizability on a range of 3D scene datasets (e.g., PROX, Replica, ScanNet, ScanNet++), producing motions that closely mimic original motion-captured sequences, as confirmed by quantitative experiments and human studies.
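
    A hedged sketch of the autoregressive idea (hypothetical names, a stand-in denoiser, and assumed window sizes; the actual model and its scene/action conditioning are described in the paper): generate one window of motion at a time, conditioning each window on the tail of the previous one so a sequence of any length stays continuous.

      import numpy as np

      WINDOW, OVERLAP, DIM = 32, 8, 12  # frames per window, seam frames, pose dim (assumed)

      def denoise(noisy, context, scene, action):
          """Stand-in for a trained diffusion denoiser: it just pulls
          samples toward the last context frame so the demo runs."""
          target = context[-1] if len(context) else np.zeros(DIM)
          return 0.1 * noisy + target  # real model: iterative denoising with conditioning

      def generate(num_frames, scene=None, action=None):
          """Autoregressive long-sequence synthesis: each window starts
          from noise and is denoised conditioned on the OVERLAP tail of
          the motion generated so far."""
          motion = np.zeros((0, DIM))
          while motion.shape[0] < num_frames:
              context = motion[-OVERLAP:]            # tail of the sequence so far
              noisy = np.random.randn(WINDOW, DIM)   # start the new window from noise
              motion = np.vstack([motion, denoise(noisy, context, scene, action)])
          return motion[:num_frames]

      print(generate(100, scene="bedroom", action="sit").shape)  # (100, 12)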
