Designing for Virtual Reality Experiences

Explore top LinkedIn content from expert professionals.

  • View profile for Armand Ruiz

    building AI systems @meta

    206,653 followers

    Meet Marble, a new generative AI model and platform from the team at World Labs. Marble turns text or image prompts into persistent, navigable 3D worlds. Not static renders. Not stitched panoramas. Explore, modify, and export right in your browser. Fei-Fei Li, the AI pioneer behind ImageNet and co-director of Stanford’s HAI, is now focused on spatial intelligence. And this is a serious step forward. With Marble:
    → You generate coherent 3D environments, not just objects
    → Outputs are exportable Gaussian splats, ready for use in web apps
    → It supports a wide range of styles, including realism, anime, and low-poly
    → You can stitch multiple scenes to build large-scale 3D spaces
    Now this is what you call large-scale 3D generation. Built-in tools (like the open-source Spark renderer) make it easy to bring these worlds into interactive projects across web, mobile, and VR. Link to blog: https://lnkd.in/gQuaGafn Explore different worlds: https://lnkd.in/gQJGbXqR
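
The post highlights that Marble's outputs are exportable Gaussian splats. As a rough illustration of what such an export contains, here is a minimal Python sketch that inspects a splat .ply file offline; the file name and attribute layout are assumptions based on the common 3D Gaussian Splatting PLY convention, not on Marble's documented export format, and rendering in a web app would instead go through a viewer such as the Spark renderer.

```python
# Minimal sketch: peek inside an exported Gaussian-splat .ply file.
# ASSUMPTIONS: file name and per-splat attribute names follow the common
# 3D Gaussian Splatting PLY layout; Marble's actual export may differ.
# Requires: pip install plyfile numpy
from plyfile import PlyData

ply = PlyData.read("marble_scene.ply")   # hypothetical export file
splats = ply["vertex"]                   # each "vertex" record is one Gaussian splat

print(f"{splats.count} splats")
print("per-splat attributes:", [p.name for p in splats.properties])

# Positions come back as numpy arrays, so the scene can be bounded or filtered
xs, ys, zs = splats["x"], splats["y"], splats["z"]
print("scene bounds:", (xs.min(), ys.min(), zs.min()), (xs.max(), ys.max(), zs.max()))
```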

  • View profile for Gabriele Romagnoli

    The Best of XR & AI for Creatives and Professionals | Tech Ambassador | Podcast Host | Speaker

    40,114 followers

    This is the most important advice I gave to the class of #design students from the Delft University of Technology last week: When building an #XR prototype, start by defining a clear question you want to answer:
    - Where will the user be once the experience starts?
    - How do you manage the user's attention in a #3D space?
    - How do you communicate certain affordances, like objects that can be picked up (or not) or places that can be explored (or not)?
    - What will be the rough series of events the user will go through?
    Starting by building the "whole thing" too early will lead you to "expensive" tools like Unity, Blender or Unreal (from a time and learning-curve perspective) before you have figured out and agreed on foundational aspects of your experience. This is true for students, but it is also a common pitfall of more seasoned #XR teams too eager to jump into game engines without realizing the costs of rebuilding something once you realize it doesn't work well when experienced spatially. This is the main reason why I strongly believe ShapesXR is the best tool for that initial stage of design and ideation, and the fact that the various groups were able to build simple environments and interactions (with sounds and haptics included 🤯) in less than 2 hours without prior experience in Shapes is a testament to that.

  • View profile for Benjamin Desai

    Creative Technologist | Radical Realities | AI, XR & Digital Sovereignty

    2,555 followers

    I experimented with a workflow that combines Gravity Sketch, mixed reality, and Runway's Gen-3 video-to-video AI and got some impressive results. Here is what I did:
    🚀 Step 1: Using Gravity Sketch in VR, I designed stasis tubes with humanoid figures inside. I placed these models throughout my hallway, integrating them into the real space, using mixed reality mode on my Meta Quest 3 headset.
    🎥 Step 2: I filmed myself walking through this mixed reality set, holding a 3D object, capturing my real environment with the 3D models layered in. This gave a first-person view of the scene, as if I were navigating through an alien ship.
    🧪 Step 3: Finally, I ran the footage through Runway's Gen-3 video-to-video AI, using prompts to transform the scene into a space marine navigating an alien ship, complete with eerie stasis tubes and ambient sound effects to drive the atmosphere home.
    A fast, intuitive way to pre-visualize complex scenes that would otherwise take much longer to design and film traditionally. What this means for creative workflows:
    🔹 Advanced Storyboarding: With mixed reality, you can set up rough models and get a realistic sense of scale and positioning. You can actually walk through your scene, interacting with it and capturing raw footage directly.
    🔹 Quick Pre-Visualization: Using video-to-video genAI, this rough footage can quickly be transformed into something more. It's a great way to experiment with looks, check the result against your client's vision, and even test lighting before diving into final production.
    🔹 Future-Ready Workflows: As video-to-video AI improves, this workflow won't just be for pre-viz. We're looking at a future where you could create final-quality outputs straight from this setup, acting out scenes in a mixed reality environment while the AI enhances and polishes everything in real time: moving towards outputs that are generated rather than rendered.
    This opens up a lot of possibilities. You could set up a mixed reality scene, interact with it, and create an entire short film without needing a massive crew or extensive post-production. For now, it's a powerful way to prototype, storyboard, and explore creative concepts quickly and intuitively.
    ❓ Curious about how mixed reality and AI could transform your creative process? Let's connect; I'd love to share more insights and explore how these tools can push your projects to the next level.
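
The only scriptable link in this chain is Step 3, the video-to-video pass. The sketch below shows the general shape of uploading captured mixed-reality footage to a hosted video-to-video model and polling for the stylized result; the endpoint, field names, response schema, and API key variable are hypothetical placeholders, not Runway's actual Gen-3 API, so consult the vendor documentation for the real interface.

```python
# Rough sketch of a Step-3 hand-off: upload mixed-reality capture to a hosted
# video-to-video model and poll until the stylized clip is ready.
# ASSUMPTIONS: the endpoint, request fields, response schema, and API key
# variable are hypothetical placeholders, not Runway's documented Gen-3 API.
# Requires: pip install requests
import os
import time

import requests

API_URL = "https://api.example.com/v1/video-to-video"            # hypothetical endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['VIDEO_API_KEY']}"}

with open("mr_walkthrough.mp4", "rb") as footage:
    job = requests.post(
        API_URL,
        headers=HEADERS,
        files={"video": footage},
        data={"prompt": "space marine walking through an alien ship, "
                        "eerie stasis tubes, volumetric fog"},
        timeout=60,
    ).json()

# Poll for completion (job id and state fields are assumed)
while True:
    status = requests.get(f"{API_URL}/{job['id']}", headers=HEADERS, timeout=30).json()
    if status.get("state") == "done":
        print("stylized clip:", status["output_url"])
        break
    time.sleep(5)
```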

  • View profile for Dale Tutt

    Industry Strategy Leader @ Siemens, Aerospace Executive, Engineering and Program Leadership | Driving Growth with Digital Solutions

    7,765 followers

    I had the chance a few weeks ago to sit down (virtually, of course) with Woodrow Bellamy III, host of SAE's Aerospace & Defense Technology podcast, for a candid conversation about virtual prototyping. While I work across many industries these days, I always enjoy a chance to share insights from my years working in aerospace.
    The key question was about the reliability of virtual prototyping as a replacement for physical prototyping. How much faith can be put in a digital model to accurately simulate a real-world system, when you have real-world stakes and consequences? The answer is: a lot.
    When we first started using simulation for some of the aircraft programs I worked on, we questioned whether we could trust the results to predict the performance of a new aircraft. It was an appropriate question to ask. It was only after conducting extensive testing to validate the results of the simulation against an existing aircraft that we started to trust the simulation models while designing a new aircraft.
    The same litmus test will be needed for companies in any industry to take the leap with virtual prototyping: a company can start by developing the virtual model and ensuring the validity and robustness of the simulation by comparing against existing physical products. Once the team is sufficiently confident about the performance of the digital model, they can take the insights from existing products and systems and apply them toward the development of future projects.
    The digital twin of a physical system is infinitely more malleable. It can be designed, tested, and experimented upon with far more ease and using significantly fewer resources. Virtual prototypes allow for limitless design exploration, help identify design issues early, before building physical prototypes, and make physical testing more effective. Before real-life testing, virtual analyses highlight critical areas in the design, and the test plans can be adjusted to focus on the areas of greatest concern.
    The aerospace industry adopted the digital twin as a revolutionary design tool decades ago, and it has evolved quite a bit over the ensuing decades. It is no longer just a 3D model of a product or process. Today, the comprehensive digital twin is a precise virtual representation of the product that replicates its physical form, function, and behavior and encompasses all cross-domain models and data, from mechanical and electrical through software code.
    So, although the current ratio of prototype testing worldwide may be 90% physical and 10% digital, it isn't an overstatement to conceive of a future where the ratio is flipped, maybe even 100% digital. I'm excited about the opportunities offered by virtual prototyping and testing as a means to enable companies to develop and validate innovative products faster!
    To hear my conversation with Woodrow, please check out the link in the comments. #digitaltransformation #siemensxcelerator
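
The validation step described here, correlating a digital model against measurements from an existing physical product before trusting it on a new design, comes down to quantifying the gap between predicted and measured behavior. A minimal sketch, with illustrative numbers and an assumed 5% tolerance rather than any real program data:

```python
# Minimal sketch of simulation-vs-test correlation: compare a digital model's
# predictions with measurements from an existing physical product.
# ASSUMPTIONS: the data values and the 5% tolerance are illustrative only.
# Requires: pip install numpy
import numpy as np

# e.g. predicted vs. measured lift coefficient across a sweep of angles of attack
simulated = np.array([0.21, 0.45, 0.68, 0.90, 1.08])
measured = np.array([0.22, 0.44, 0.70, 0.93, 1.05])

relative_error = np.abs(simulated - measured) / np.abs(measured)
print(f"max relative error:  {relative_error.max():.1%}")
print(f"mean relative error: {relative_error.mean():.1%}")

# Lean on the model for new designs only once the error stays inside an
# agreed tolerance across the operating conditions that matter.
VALIDATION_TOLERANCE = 0.05
print("model validated" if relative_error.max() <= VALIDATION_TOLERANCE
      else "needs further correlation against test data")
```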

  • View profile for Josef José Kadlec

    Co-Founder at GoodCall | 🦾HR Tech - AI - RecOps - Talent Sourcing - LinkedIn | 🪖Defence, Dual-use & MilTech Industry Consultant+Investor 🎤Keynote Speaker 📚Bestselling Author 🏆 Fastest Growing by Financial Times

    47,888 followers

    What happens when the room itself becomes the interface? This immersive theater is designed to convince your brain that solid ground is optional. Using real-time projection mapping, spatial audio, synchronized motion cues, and depth-warped visuals, the environment surrounds you completely. No headsets. No screens. Just space that moves with you. Walls, floors, and ceilings transform together, creating the sensation of shifting terrain, floating reference points, or underwater worlds. When the visuals imply motion like tilting, falling, or forward momentum, the brain responds instinctively. Even though your body is still, your perception is not. That moment of hesitation or reaching out is the proof. By blending real-time rendering engines with AR-style environmental cues, this approach creates a shared metaverse experience that feels physical, social, and surprisingly real. This is not watching the future of immersive media. It is standing inside it. Video credits: wealth #ImmersiveExperience #Metaverse #SpatialComputing #ProjectionMapping #FutureOfEntertainment #XR #Innovation

  • View profile for Dr. Dirk Alexander Molitor

    Industrial AI | Dr.-Ing. | Scientific Researcher | Consultant @ Accenture Industry X

    10,338 followers

    This is the moment simulation becomes more important than prototyping. In our last posts, Pascalis and I showed two things: first, how you can generate a full production and warehouse environment in NVIDIA Omniverse using Claude Code and the USDA data format; second, how NVIDIA’s new Kimodo model can generate robot motions from simple text prompts. Now we are taking the next step: transferring robot motion into Omniverse and merging both use cases.
    Omniverse is not just for static visualizations. It allows dynamic simulation of movements, interactions and behavior with CAD components inside a virtual environment. And this is where it gets interesting for future product development. The vision is clear: if we can model production environments, warehouses, and the real operating environments of products, we can simulate mechatronic products in realistic conditions before they physically exist. Environment → Sensor & actuator interaction → Model-in-the-loop simulation. Very similar to how autonomous vehicles are developed today, but applied to all kinds of mechatronic products.
    The effects are huge:
    • Less physical prototyping
    • Earlier insights without building hardware
    • Faster iteration cycles
    • Better product decisions earlier in development
    • Simulation becomes the main development environment
    Omniverse already shows how granular these simulations can be today: not through months of manual modeling, but increasingly through prompts that generate environments, movements and soon maybe even control logic. We are moving from designing products to designing behavior in simulated worlds first. And that will fundamentally change how we develop products.
    Curious to hear your thoughts! When will simulation become the primary development environment in your industry? Vlad Larichev | Rüdiger Stern | Rick Bouter | Ruben Hetfleisch | Dr.-Ing. Tobias Guggenberger
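
The environments described above are USD/USDA scenes, which can be generated programmatically rather than modeled by hand. Here is a minimal sketch using Pixar's `pxr` Python bindings (available with OpenUSD and NVIDIA Omniverse); the file name, prim paths, and dimensions are illustrative assumptions, not the authors' actual setup:

```python
# Minimal sketch: script a tiny warehouse-style USD stage with OpenUSD's Python API.
# ASSUMPTIONS: file name, prim paths, and dimensions are illustrative only.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("warehouse.usda")        # .usda = human-readable USD format
world = UsdGeom.Xform.Define(stage, "/World")
stage.SetDefaultPrim(world.GetPrim())

# A floor plane and a row of crates stand in for the production environment
floor = UsdGeom.Cube.Define(stage, "/World/Floor")
UsdGeom.XformCommonAPI(floor.GetPrim()).SetScale(Gf.Vec3f(20.0, 20.0, 0.1))

for i in range(5):
    crate = UsdGeom.Cube.Define(stage, f"/World/Crate_{i}")
    UsdGeom.XformCommonAPI(crate.GetPrim()).SetTranslate(Gf.Vec3d(2.0 * i, 0.0, 0.6))

stage.GetRootLayer().Save()
```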

  • View profile for Bahareh Jozranjbar, PhD

    UX Researcher at PUX Lab | Human-AI Interaction Researcher at UALR

    9,890 followers

    Prototyping is how ideas turn into evidence. It surfaces hidden assumptions, generates better stakeholder conversations, tests specific hypotheses, reveals unforeseen interactions, and gives you a concrete artifact to evaluate before code or tooling locks you in.
    Use low-fidelity sketches and storyboards when you need speed and divergent thinking. They help teams externalize ideas, reason about user goals, and map flows before pixels appear. They are deliberately rough to avoid premature polish.
    Move to click-through wireframes in Figma when the question is structure and navigation. Validate information architecture, menu depth, labeling, and path efficiency while changes are still cheap.
    When the feel of interaction matters, use interactive digital prototypes to evaluate micro-interactions, timing, and visual polish. Treat them as validation instruments, not trophies. Plan change criteria up front so attachment to a pretty artifact does not silence real feedback.
    Some questions require real performance and materials. Coded prototypes and functional hardware mockups tell you about latency, reliability, durability, ergonomics, and safety. In medical devices and other regulated domains, high-fidelity functional and contextual testing is expected for Human Factors validation.
    Not every question lives on screens. Experience prototyping and bodystorming put bodies in space to surface constraints that lab tasks miss. Acting out a shared autonomous ride with props reveals comfort, cue timing, and social norms. Wearing a telehealth mockup for a week exposes stigma, routine friction, and alert patterns that actually fit domestic life.
    Before building intelligence, simulate it. Wizard of Oz studies let a hidden human drive system responses while participants believe the system is autonomous. You learn vocabulary, trust dynamics, acceptable latency, and recovery strategies without heavy engineering. AI of Oz replaces the human with a large language model so you can study conversational realism early. Manage risks like model bias, hallucinations, and outages with guardrails and logging so findings remain trustworthy.
    Strategic prototypes also matter. Provotypes and research-through-design artifacts challenge assumptions, surface values, and force early conversations about privacy, power, and trade-offs that slides tend to dodge.
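
The "AI of Oz" setup mentioned above can be stood up with very little code: a large language model plays the not-yet-built system while every exchange is logged for later analysis of vocabulary, trust, and latency. A minimal sketch using the OpenAI Python SDK; the model name, system prompt, guardrail wording, and log path are illustrative assumptions:

```python
# Minimal "AI of Oz" sketch: an LLM stands in for the future system while a
# researcher logs every exchange for later analysis.
# ASSUMPTIONS: model name, system prompt, guardrail wording, and log path are
# illustrative only. Requires: pip install openai (and an OPENAI_API_KEY set).
import json
import time
from openai import OpenAI

client = OpenAI()                      # reads OPENAI_API_KEY from the environment
LOG_PATH = "wizard_session.jsonl"

SYSTEM_PROMPT = (
    "You are the voice interface of a home telehealth device. "
    "Answer in one or two short sentences. "
    "If asked for medical advice beyond reminders or scheduling, defer to a clinician."  # guardrail
)

def wizard_reply(history: list[dict], user_text: str) -> str:
    """Send the participant's utterance to the stand-in 'system' and log the exchange."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",                          # assumed model name
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    with open(LOG_PATH, "a") as log:                  # keep a trail for analysis
        log.write(json.dumps({"t": time.time(), "user": user_text, "system": reply}) + "\n")
    return reply

history: list[dict] = []
print(wizard_reply(history, "Did I miss any readings today?"))
```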

  • View profile for Adam Kyle Wilson

    Head of Design at Lazer

    16,395 followers

    Designing for spatial interfaces feels like being handed a blank room and told, “Make it make sense.” AR, XR, VR (whatever acronym you want) forces you to rethink everything. But here’s the twist: the oldest principles in design are still the most useful. When I work on AR interfaces at Polyform Studio, I don’t start with sci-fi metaphors or gestural fireworks. I start with the same tool every designer learned in their first year: the grid.
    Why? Because when you’re working in 3D space, you need structure more than ever. Your UI elements aren’t just “on screen.” They’re floating, scaling, fading, orbiting. Without invisible order, it’s chaos.
    Here’s how I apply traditional layout to spatial design:
    Create an anchor plane
    - Even in 3D, most interactions need a visual home base.
    - Build a primary surface to hold your core elements: menus, inputs, feedback.
    Apply grid logic to Z-space
    - Treat depth like a layout dimension.
    - Give UI elements clear visual hierarchy, not just left to right, but front to back.
    Use rhythm to reduce motion sickness
    - Spacing, pacing, balance: those Bauhaus rules you ignored? They’re crucial when your interface moves with your head.
    The most advanced interfaces aren’t chaotic. They’re structured. Gridded. Timeless.
    → See how we structure emerging interfaces at Polyform.co
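
The "grid logic in Z-space" idea translates directly into a few numbers: a column/row rhythm on an anchor plane plus a fixed depth step per layer. A minimal, engine-agnostic Python sketch; the spacing values, anchor distance, and coordinate convention (metres, negative Z away from the viewer) are illustrative assumptions rather than Polyform Studio's actual system:

```python
# Minimal sketch of spatial grid layout: snap UI elements to a 2D grid on an
# anchor plane, then push secondary layers back along Z for front-to-back hierarchy.
# ASSUMPTIONS: spacing values, anchor distance, and axis convention are illustrative.
from dataclasses import dataclass

COLUMN_WIDTH = 0.18       # horizontal rhythm (metres)
ROW_HEIGHT = 0.12         # vertical rhythm
LAYER_DEPTH = 0.35        # how far each layer recedes behind the anchor plane
ANCHOR_DISTANCE = 1.5     # anchor plane distance in front of the viewer

@dataclass
class SpatialSlot:
    col: int
    row: int
    layer: int            # 0 = anchor plane, 1 = secondary, 2 = ambient/background
    x: float
    y: float
    z: float

def slot(col: int, row: int, layer: int = 0) -> SpatialSlot:
    """Place an element on the spatial grid instead of free-floating it."""
    return SpatialSlot(
        col, row, layer,
        x=col * COLUMN_WIDTH,
        y=row * ROW_HEIGHT,
        z=-(ANCHOR_DISTANCE + layer * LAYER_DEPTH),   # negative Z = away from viewer
    )

print(slot(col=-1, row=1))                # menu item on the anchor plane
print(slot(col=2, row=0, layer=1))        # supporting panel, one layer back
```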

  • View profile for Hugues Bruyere

    Partner, Chief of Innovation at Dpt.

    5,304 followers

    For the past few weeks, we at Dpt. have been exploring the use of generative AI workflows in a mixed reality context. The prototype I’m sharing here builds on my earlier experiments that used physical interfaces to feed and interact with a real-time img2img workflow [https://lnkd.in/eVdyaCyB]. In this iteration, I’m focusing on a first-person perspective to make the experience even more immersive.
    I’m not (yet 🙂) relying on a live video stream; instead, I capture a series of single snapshots from the Quest 3 passthrough feed, instantly process them with Stable Diffusion, and display the results back in the same spatial/physical location where they were taken. As Meta hasn’t yet released the Quest’s Camera API—which will give developers direct access to the device’s camera feed—I’m using the Android Media Projection API (normally used for screen recording or casting) as a temporary workaround. The diffusion workflow, exported from ComfyUI as Python, runs on a cloud GPU, letting me continue testing the prototype even when I’m outside.
    In the attached video, you’ll see screen recordings of me using the app at home, in the office, and outdoors. I quickly capture a series of spatial snapshots in close proximity, and once they’re processed, they form an alternate reality patchwork—the snapshots, not being perfectly aligned, create a sense of depth. You can see how my desk might look after being abandoned for years or as though it belongs in a graphic novel. You’ll also notice me spatially layering and exploring snapshots in my living room, or trying to escape the winter by recalling how Montreal’s alleyways appear in the summer, ...
    This meshing of virtual and physical is at the heart of what we do at Dpt. #MixedReality #AI #XR #MR #stablediffusion #rnd
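
The per-snapshot processing described here is a standard img2img pass. The actual prototype runs a ComfyUI graph exported to Python on a cloud GPU; as a rough stand-in, here is a minimal sketch with the Hugging Face `diffusers` img2img pipeline, where the checkpoint id, prompt, resolution, and strength are illustrative assumptions:

```python
# Rough sketch of one passthrough snapshot going through Stable Diffusion img2img.
# ASSUMPTIONS: checkpoint id, prompt, resolution, and strength are illustrative;
# the original prototype uses a ComfyUI workflow exported as Python instead.
# Requires: pip install diffusers transformers torch pillow (and a CUDA GPU)
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",          # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# One snapshot in, one restyled snapshot out; the headset app then pins the
# result back at the spatial location where it was captured.
snapshot = Image.open("passthrough_snapshot.png").convert("RGB").resize((768, 512))
result = pipe(
    prompt="abandoned desk, overgrown and dusty, graphic novel style",
    image=snapshot,
    strength=0.6,              # how far to drift from the captured reality
    guidance_scale=7.5,
).images[0]
result.save("restyled_snapshot.png")
```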
