PROTOTYPES ACCELERATE DISCOVERY, NOT DELIVERY

Prototypes are powerful tools for the discovery phase — helping teams quickly explore product directions, validate concepts with customers through high-fidelity experiences, and align executives around tangible visions. The leverage they provide in answering "what's the right experience to build?" is remarkable.

However, I frequently see PMs expecting to hand prototypes directly to engineering teams for production implementation. This approach consistently leads to disappointment. Here's why: prototype code isn't built to meet the security, reliability, robustness, and maintainability standards that production systems require. Your engineering team rightfully prioritizes these critical attributes. And that's perfectly fine.

The value of prototypes lies entirely in discovery. Even when engineering teams ultimately rebuild from scratch, prototypes have already delivered tremendous ROI by:
- Accelerating team alignment on product direction
- Validating customer demand with realistic experiences
- Securing executive buy-in through tangible demonstrations

The code was never meant to ship — the insights were.
Importance Of Prototyping In Engineering
Explore top LinkedIn content from expert professionals.
-
A $12 prototype can make $50,000 of engineering analysis look ridiculous

A team of engineers was stuck on a bearing failure analysis for six weeks. Vibration data, FFT analysis, metallurgy reports - they had everything except answers. The client kept asking for root cause and the engineers kept finding more variables to analyze. Temperature gradients, load distributions, contamination levels, manufacturing tolerances. Each analysis created more questions.

Then the intern did something that made the engineers feel stupid. She 3D printed a transparent housing and filled it with clear oil so the engineers could actually see what was happening inside the bearing assembly. Took her four hours and $12 in materials.

They watched the oil flow patterns and immediately saw the lubrication wasn't reaching the critical contact points. All their sophisticated analysis was based on assuming proper lubrication distribution. Wrong assumption. Six weeks of wasted effort.

The visual prototype didn't just solve the problem - it changed how the engineers approach these types of investigations. Now they build crude mockups before diving into analysis rabbit holes. Cardboard, tape, clear plastic, whatever works. Physical models force you to confront your assumptions before you spend weeks analyzing the wrong thing.

Sometimes the cheapest prototype teaches you more than the most expensive simulation.

#engineering #prototyping #problemsolving
-
Prototypes aren't for testing your product. They're for testing your assumptions.

Most teams get this backward, and it costs them weeks of wasted effort and a product nobody wants. A prototype isn't a tiny product; it's a medium for learning. It's a tool designed to ask a specific question and test a core assumption with the right audience. An unintentionally designed prototype is a flawed input, and even with advanced teams and tools, flawed inputs only amplify flaws.

The true power of a prototype isn't in its polish, but in the intentional "message" it sends. To unlock this power and truly accelerate collective learning across your organization, you must design with intent:

✺ Low-Fidelity Prototypes: These are for asking foundational, "Does this even solve the right problem?" questions. They signal that everything is up for debate. The intentional message is: "Let's explore the idea, not the pixels."

✺ Medium-Fidelity Prototypes: Use these to test core user flows and information architecture. The intentional message is: "Is this journey intuitive?" By keeping them a little rough, you prevent stakeholders from getting fixated on visual design.

✺ High-Fidelity Prototypes: Reserve these for the final stages to test things like micro-interactions, brand consistency, or subtle emotional responses. The intentional message is: "We're almost there. What are we missing?"

This is how you turn prototyping from a simple task into a strategic lever for change and Team Learning. It ensures your team isn't just building things, but is learning together and making better decisions about what to build and why. It's how you break down silos and create a "Holding Environment" for generative dialogue.

What's a time you intentionally used a low-fidelity prototype to prevent a high-stakes meeting from spiraling? Let's discuss in the comments below.
#ProductDesign #SystemsThinking #StrategicDesign #UXStrategy #DesignLeadership #ComplexSystems #TeamLearning #Prototyping #OrganizationalDesign #Innovation
-
I used to think user research was easy. But then I switched to B2B. And oh boy... reality hit hard.

Back when I was working on a B2C product, I could run 10 user interviews in a day. Users would happily spend 45 minutes answering questions and testing new designs. I thought this was just regular product design. Turns out, I was riding a perfect wave of continuous discovery without even realizing it.

Then I switched to B2B. And I admit it really felt scary at first. Users were just too busy to pick up my phone calls. It took 3 weeks to schedule 5 calls. Some users left a bad CSAT score with barely any comment. Damn. How can we build anything serious without ever talking to users? At that time, it really felt like an impossible task. And any way I tried to put it, there was just no efficient process to get those users on the phone.

But then it hit me. What if the best discovery touchpoints weren't designers or PMs at all? What if they were already happening… in sales calls, support chats, internal Slack threads? We had this feedback scattered across tools, threads, and people. But no one was making sense of it.

So we built a Feedback Management System. We piped every piece of feedback into a single source of truth directly in Notion:
- Intercom conversations and Modjo calls with customers
- Internal tickets from sales and support to discuss user pain points or feature requests
- User feedback forms submitted on the platform

All filtered and organized per team through Notion automations. Each designer spends 2 hours per week turning raw feedback into structured insights. Then each team reviews it together weekly, and it feeds product decisions and the roadmap.

It's simple. It's scalable. And it changed everything. Product designers no longer design based on shaky assumptions or partial data. They're now the source of customer truth and alignment.

In B2B, discovery doesn't happen in a lab. It happens in the wild. You just need to know where to listen.
#productdesign #uxdesign #userresearch
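The merge-and-route pattern behind a feedback system like the one above can be sketched in a few lines. This is a minimal illustration, not the author's actual implementation: the record fields and team names are invented, and a real version would pull from the Intercom/Modjo/Notion APIs rather than in-memory lists.

```python
from dataclasses import dataclass

# Hypothetical unified record; field names are illustrative,
# not an actual Notion or Intercom schema.
@dataclass
class FeedbackItem:
    source: str   # e.g. "intercom", "sales_ticket", "in_app_form"
    team: str     # routing key used to filter per team
    text: str

def merge_feedback(*sources):
    """Flatten feedback from several tools into one list (the single source of truth)."""
    return [item for batch in sources for item in batch]

def by_team(items, team):
    """Filter the merged feedback down to one team's weekly review queue."""
    return [i for i in items if i.team == team]

intercom = [FeedbackItem("intercom", "payments", "Refund flow is confusing")]
tickets = [
    FeedbackItem("sales_ticket", "payments", "Prospect asked for invoicing"),
    FeedbackItem("sales_ticket", "onboarding", "Setup wizard too long"),
]

all_feedback = merge_feedback(intercom, tickets)
payments_queue = by_team(all_feedback, "payments")
print(len(all_feedback), len(payments_queue))  # 3 2
```

The design choice that matters is the normalization step: once every tool's output is coerced into one shape, per-team filtering and weekly triage become trivial queries instead of manual archaeology across tools.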
-
Designers, when building digital products, speed is exciting, but speed without validation can lead you in the wrong direction.

Recently, I decided to test something. I wanted to see how quickly I could go from idea to working prototype using Replit. Within minutes, I had a simple product flow live:
→ A sign-up page
→ A demo booking system
→ A basic user journey that felt functional

From a building perspective, it was incredibly fast. But here's something I've learned over time as a designer: a working prototype doesn't automatically mean a usable experience. Just because we understand the flow doesn't mean users will.

So the next step was validation. I used Lyssna to test the prototype with people who actually match the target audience: UX designers, UX researchers, and tech-savvy professionals in the UK who would realistically book a session. Instead of guessing, the test helped answer questions like:
→ Do users understand the flow without any explanation?
→ Where do they hesitate or feel uncertain?
→ Does the experience match what they expect?

The results were encouraging. Most participants navigated the flow confidently, which validated the core concept. But the testing also revealed small usability issues I hadn't noticed while designing: the kind of insights you almost never catch without observing real users.

That experience reinforced something important for me: rapid prototyping helps you move fast. User testing ensures you're moving in the right direction. The best product workflows combine both. Build quickly with tools like Replit, and validate early with Lyssna.

If you want to validate your prototype, take a look at this free template from Lyssna → https://lnkd.in/d2rQCZbt

I hope this helps you. Share your thoughts in the comments.

#uiux #design #uidesign #uiuxdesign #ui #uxdesign
-
User experience surveys are often underestimated. Too many teams reduce them to a checkbox exercise - a few questions thrown in post-launch, a quick look at average scores, and then back to development. But that approach leaves immense value on the table. A UX survey is not just a feedback form; it's a structured method for learning what users think, feel, and need at scale - a design artifact in its own right.

Designing an effective UX survey starts with a deeper commitment to methodology. Every question must serve a specific purpose aligned with research and product objectives. This means writing questions with cognitive clarity and neutrality, minimizing effort while maximizing insight. Whether you're measuring satisfaction, engagement, feature prioritization, or behavioral intent, the wording, order, and format of your questions matter. Even small design choices, like using semantic differential scales instead of Likert items, can significantly reduce bias and enhance the authenticity of user responses.

When we ask users, "How satisfied are you with this feature?" we might assume we're getting a clear answer. But subtle framing, mode of delivery, and even time of day can skew responses. Research shows that midweek deployment, especially on Wednesdays and Thursdays, significantly boosts both response rate and data quality. In-app micro-surveys work best for contextual feedback after specific actions, while email campaigns are better for longer, reflective questions - if properly timed and personalized.

Sampling and segmentation are not just statistical details - they're strategy. Voluntary surveys often over-represent highly engaged users, so proactively reaching less vocal segments is crucial. Carefully designed incentive structures (that don't distort motivation) and multi-modal distribution (like combining in-product, email, and social channels) offer more balanced and complete data.

Survey analysis should also go beyond averages. Tracking distributions over time, comparing segments, and integrating open-ended insights lets you uncover both patterns and outliers that drive deeper understanding. One-off surveys are helpful, but longitudinal tracking and transactional pulse surveys provide trend data that allows teams to act on real user sentiment changes over time.

The richest insights emerge when we synthesize qualitative and quantitative data. An open comment field that surfaces friction points, layered with behavioral analytics and sentiment analysis, can highlight not just what users feel, but why.

Done well, UX surveys are not a support function - they are core to user-centered design. They can help prioritize features, flag usability breakdowns, and measure engagement in a way that's scalable and repeatable. But this only works when we elevate surveys from a technical task to a strategic discipline.
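"Going beyond averages" is easy to demonstrate concretely. In this small sketch (invented ratings, stdlib only), two segments share the exact same mean satisfaction score, yet one is uniformly lukewarm while the other is sharply polarized - a distinction the average erases and the distribution exposes.

```python
from collections import Counter
from statistics import mean, median

# Hypothetical 1-5 satisfaction ratings from two user segments.
# Both segments have the same mean, but very different shapes.
power_users = [3, 3, 3, 3, 3, 3, 3, 3]
new_users = [1, 1, 5, 5, 1, 5, 5, 1]

def summarize(ratings):
    """Report mean, median, and the full distribution, not just the average."""
    return {
        "mean": mean(ratings),
        "median": median(ratings),
        "distribution": dict(sorted(Counter(ratings).items())),
    }

print(summarize(power_users))  # mean 3, everyone answered 3
print(summarize(new_users))    # mean 3, but split between 1s and 5s
```

A dashboard that reported only the mean would call these two segments identical; tracking the distribution per segment over time is what surfaces the polarized group worth interviewing.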
-
Client: Can you build a prototype in 3 weeks?
Me: Yes... but only if we build a prototype in 1 day.

We're jumping in to help one of our long-term clients. The challenge? Build a fully functional prototype to test with a major client. They did not have the short-term capacity. They needed us to step in.

So why build a prototype in 1 day if we have 3 weeks? Because we need focus. We can't afford to go back to the drawing board after 3 weeks. We need to make sure we are working in the right direction - from Day 1. That's why we built a one-day prototype based on the initial design brief.

It is made of cardboard, 3D prints, and duct tape.
It looks terrible.
But... it's amazing.

Day 2: Everyone is playing with the prototype.
👍 "I like this!"
👎 "I don't like this!"
🤔 "Do we even need this?"
✅ "We need to check this feature with our client."
↘️ "It feels like too much effort - we need to minimise user effort."

These are critical insights that can make or break a prototype. Better to find them on Day 1 than during the pilot. This helps us refine the design brief, requirements, and device flow. Now we all know exactly what to build. And we have a working prototype from Day 1.

The next 3 weeks are all about fast iterations, testing, and optimization. Hardware and software development can run in parallel to save time. We leverage rapid prototyping, laser cutting, 3D printing, and off-the-shelf components. And we use parts with a max 3-day lead time to stay agile.

Speed without direction is wasted time. Speed with focus will get you there. 🚀

The result?
Client: Better than expected!
-
"The value of a prototype is in the insight it imparts, not the code."

Prototyping lets us fail fast and cheap, or get the data to make a concrete decision on direction. It helps answer the question, "What happens if we try this?" Most significantly, prototyping provides us with the guardrails to safely and productively fail.

Prototyping is the right tool if you have an idea to validate, a clear path to get feedback on, or a proposal requiring further data. It provides crucial insights to move forward. By creating a rough version of a feature or system you've been considering, you gain the flexibility to either discard the idea or fully commit to it. It's a skill that assists product and engineering teams in making pivotal business decisions.

Whether it's a website, mobile app, or landing page, no matter what product you're working on, it's always essential to verify your design decisions before shipping them to end users. Some development teams delay the validation stage until they have a solution that is almost complete. But that's an extremely risky strategy. As we all know, the later we come across a problem, the more costly it becomes to fix. Luckily, no matter where you are in the design process, it is still possible to build and test a concrete image of your concept — a prototype.

Consider an architect tasked with designing a grand building. Before laying the first stone, the architect crafts a miniature scale model, allowing them to visualize the end result, understand the project's complexities, and present their ideas convincingly to others. However, this model is far from being the final product; it's a means to an end.

This principle applies just as aptly in the world of software development. A software prototype — whether it's a low-fidelity wireframe, a high-fidelity interactive model, or a simplified mock-up of a more complex system — is much like the architect's scale model. It's a visual, often interactive, model of the software that gives developers, stakeholders, and users an early glimpse into the software's workings, long before the final product is ready.

The prototype isn't about the code per se; the code is merely a tool used to create it. Instead, it is about gathering valuable insights, comprehending user needs, identifying functional requirements, validating technical feasibility, and discovering potential stumbling blocks that might arise during full-scale development. The prototype's strength lies in its capacity to provide these insights without necessitating a significant investment of time or resources.

I'm a big fan of using prototypes in our work at Google. Their value is often high.

Wrapping up... The aim of prototyping is not the prototype itself or its immediate output but the knowledge that comes from it. I wrote more on this topic in https://lnkd.in/gEEGFwJp

#softwareengineering #programming #ux #design
-
So many product teams work on new features they believe will be a game-changer for users. But how do you really know if a feature will be adopted by users? This is where UX research comes in. As UX researchers, we can help identify the probability of feature adoption by digging deep into user needs, behaviors, and expectations. Here are some ways we measure and predict feature adoption:

1. User Interviews and Surveys: By speaking directly to users, we can gauge their interest in a new feature. Through surveys or interviews, we explore how they might use the feature, what problems it would solve for them, and how it fits into their current workflows. These qualitative insights give us an early understanding of potential adoption barriers.

2. Usability Testing: A feature may seem like a great idea on paper, but how do users actually interact with it? Conducting usability tests on prototypes allows us to see whether users understand the feature, how intuitive it is, and where they might get stuck. If the feature feels cumbersome, adoption rates will likely be lower.

3. Task Success Rate: This metric allows us to measure how easily users can complete tasks using the new feature. A low success rate indicates friction, and users are less likely to adopt a feature if it doesn't make their experience easier.

4. User Journey Mapping: By mapping out the user journey, we can see where the new feature fits into the overall user experience. Does it make sense within the flow of their tasks? Are there unnecessary steps or points of confusion? A smooth, integrated feature is more likely to be adopted.

5. A/B Testing: Once a feature is live, we can run A/B tests to see if it's driving the desired behavior. Does the feature increase engagement or task completion compared to the previous version? These quantitative insights allow us to measure real-world adoption and refine the feature based on user interactions.

6. Feature Feedback: After a feature is released, gathering feedback is key. By monitoring user comments, satisfaction scores, and support tickets, we can understand how users feel about the feature. Are they using it as intended? Are there any pain points that need addressing?

As UX researchers, our role is to validate whether a feature truly meets user needs and fits within their daily tasks. We can predict adoption rates, identify potential issues early, and help product teams make informed decisions before launching a feature. How do you measure feature adoption in your research?
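Two of the metrics above reduce to simple arithmetic. A small sketch, with invented participant counts (40 per variant), showing task success rate and the lift an A/B comparison would report; a real study would also check statistical significance before acting on the lift:

```python
def task_success_rate(successes, attempts):
    """Fraction of participants who completed the task unassisted."""
    if attempts == 0:
        raise ValueError("no attempts recorded")
    return successes / attempts

# A/B comparison: control flow vs. the variant with the new feature.
control_rate = task_success_rate(22, 40)  # 22 of 40 completed -> 0.55
variant_rate = task_success_rate(31, 40)  # 31 of 40 completed -> 0.775
lift = variant_rate - control_rate

print(f"control={control_rate:.1%} variant={variant_rate:.1%} lift={lift:+.1%}")
```

With these made-up numbers the variant shows a 22.5-point improvement in task success, which is the kind of quantitative signal that complements the qualitative interview and journey-mapping findings above.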
-
Some of you disagreed with my last post. Fair. Let's talk. Let me explain the topic a bit more and give you a deep dive into how I see the new process.

The old way: Think → Research → Wireframe → Design → Spec → Hand off → Build → Test → Iterate. Weeks. Sometimes months. Before anyone touches real code.

The new way:

👉 Step 1: Start with a problem, not a doc. I don't need a full PRD. I need one thing. Example: "𝘗𝘦𝘰𝘱𝘭𝘦 𝘴𝘵𝘳𝘶𝘨𝘨𝘭𝘦 𝘵𝘰 𝘨𝘦𝘵 𝘩𝘰𝘯𝘦𝘴𝘵 𝘧𝘦𝘦𝘥𝘣𝘢𝘤𝘬 𝘰𝘯 𝘵𝘩𝘦𝘪𝘳 𝘱𝘰𝘳𝘵𝘧𝘰𝘭𝘪𝘰." That's it. That's the brief.

👉 Step 2: Build the ugliest working version. I open Lovable or Cursor and prompt my way to a prototype. Not a mockup. Not a Figma file. A real, clickable, functional thing. 30 minutes. Maybe an hour.

👉 Step 3: Use it. Don't refine it. Don't show it to anyone yet. Use it yourself like a real user would. Click every button. Try to break it. Feel where it's awkward.

👉 Step 4: Now design. This is where design skill actually matters. You're not guessing what the experience should feel like. You already know, because you felt it. Now you fix what's broken, remove what's unnecessary, and polish what works. Maybe pivot or try other solutions.

👉 Step 5: Show it, don't spec it. Instead of a 20-page spec, I send a link. "Here, try this. What's confusing?" Real feedback on a real thing beats hypothetical feedback on a hypothetical thing every single time.

👉 Step 6: Iterate in minutes, not weeks. Here's where this workflow really pulls ahead. Someone says, "This flow is confusing." You don't update a Figma file, write a ticket, and wait for the next sprint. You open Cursor, fix it, and send a new link. Same conversation. Same day. The feedback loop goes from weeks to hours. Sometimes minutes. And each round gets sharper because you're iterating on something real. 3-4 rounds of this, and you have something more validated than most products get after months of traditional process.

👉 Step 7: Document what you built, not what you plan to build. Documentation becomes a record, not a prediction. It's accurate because the thing already exists. You can do it at the end or during the process.

Why this works: You make decisions with information instead of assumptions. You eliminate 80% of the back-and-forth. You design from experience, not imagination. And you iterate at the speed of conversation, not the speed of sprints.

Why it feels wrong at first: Because we were trained to think before we build. And thinking first felt responsible. But we did that because we couldn't build. Now we can. And I don't think it's about ignoring thinking. (𝘔𝘢𝘯𝘺 𝘰𝘧 𝘺𝘰𝘶 𝘢𝘤𝘤𝘶𝘴𝘦𝘥 𝘮𝘦 𝘰𝘧 𝘵𝘩𝘢𝘵) I believe it's about doing it at every step. Refining it based on real feedback. Insights you can get internally and from user testing.

If you're still reading this, let me know what you think about it all. ✌️