UX Design Feedback Loops

Explore top LinkedIn content from expert professionals.

  • View profile for Vitaly Friedman

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    225,350 followers

    🧭 How To Manage Challenging Stakeholders and Influence Without Authority (free eBook, 95 pages) (https://lnkd.in/e6RY6dQB), a practical guide on how to deal with difficult stakeholders, manage difficult situations and stay true to your product strategy. From HiPPOs (Highest Paid Person’s Opinion) to ZEbRAs (Zero Evidence But Really Arrogant). By Dean Peters.

    Key takeaways:
    ✅ Study your stakeholders as you study your users.
    ✅ Attach your decisions to a goal, metric, or a problem.
    ✅ Have research data ready to challenge assumptions.
    ✅ Explain your tradeoffs, decisions, customer insights, data.
    🚫 Don’t hide your designs: show unfinished work early.
    ✅ Explain the stage of your work and the feedback you need.
    ✅ For one-off requests, paint and explain the full picture.
    ✅ Create a space for small experiments to limit damage.
    ✅ Build trust in your process with regular key updates.
    🚫 Don’t invite feedback on the design itself; invite it on your progress.

    As designers, we often sit on our work, waiting for the perfect moment to reveal the grand final outcome. Yet one of the most helpful strategies I’ve found is full, uncensored transparency about the work we are doing: the decision making, the frameworks we use to make those decisions, how we test, how we gather insights and make sense of them.

    Every couple of weeks I either write down or record a short 3–4 min video for stakeholders. I explain the progress we’ve made, how we’ve made decisions and what our next steps will be. I show the design work done and abandoned, informed by research, refined by designers, reviewed by engineers, fine-tuned by marketing, approved by other colleagues. I explain the current stage of the design and the kind of feedback we would love to receive. I don’t invite early feedback on the visual appearance or flows, but I actively invite agreement from stakeholders on the general direction of the project.

    I ask if there is anything that is important to them but that we might have overlooked in the process. It’s much more difficult to argue against real data and an established process that has led to positive outcomes over the years. In fact, stakeholders rarely know how we work. They rarely know the implications and costs of last-minute changes. They rarely see the intricate dependencies of “minor adjustments” late in the process.

    Explain how your work ties in with their goals. Focus on the problem you are trying to solve and the value it delivers for them, not the solution you are suggesting. Support your stakeholders, and you might be surprised how quickly you get the support that you need.

    Useful resources:
    The Delicate Art of Interviewing Stakeholders, by Dan Brown 🤎 https://lnkd.in/dW5Wb8CK
    Good Questions For Stakeholders, by Lisa Nguyen, Cori Widen https://lnkd.in/eNtM5bUU
    UX Research to Win Over Stubborn Stakeholders, by Lizzy Burnam 🐞 https://lnkd.in/eW3Yyg5k

    #ux #design

  • View profile for Nicholas Nouri

    Founder | Author

    132,633 followers

    Navigating the product development process is a bit like guiding a frog through its habitat: close observation and adaptation to feedback are essential. This approach not only aligns products with user needs but also significantly improves resource efficiency.

    The Importance of Feedback in Product Development
    - MVP Creation: Start by launching a Minimum Viable Product; this serves as your basic model to initiate user interaction.
    - User Observation: Monitor how users interact with the MVP. Do they find it intuitive? Are there unforeseen issues?
    - Feedback Collection: Actively seek user feedback through surveys, direct observations, and interviews to gather valuable insights for improvement.
    - Iterative Design: Refine and enhance the product based on this feedback, focusing on features that genuinely add value.
    - Continuous Improvement: Maintain a cycle of feedback and improvement, ensuring the product remains relevant and effective over time.

    About 42% of startups FAIL because they do not integrate market-fit research insights into their product design
    - High Failure Rates: According to CB Insights, one of the top reasons startups fail is a lack of market need for their product. About 42% of startups cited "no market need" as the primary reason for their failure.
    - Wasteful Spending: Harvard Business Review highlights that many companies waste money developing features that users don’t want. Studies suggest that approximately 35% of features in a typical system are never used, and around 19% are rarely used.

    Benefits of a Feedback-Oriented Approach
    - Increased User Satisfaction: Products developed with user input are more likely to meet the actual needs and preferences of the target audience.
    - Cost Efficiency: Reducing time spent on unwanted features saves money and directs resources towards more impactful developments.
    - Enhanced Adaptability: A feedback loop facilitates quick pivots and adjustments, which is crucial in fast-paced market environments.

    Challenges
    - Continuous Commitment: Integrating continuous user feedback requires dedication and can be resource-intensive.
    - Handling Negative Feedback: Developers must be prepared to receive and constructively use negative feedback, which can sometimes lead to significant changes in project scope.

    🔄 How do you integrate user feedback into your product development process? What lessons have you learned from observing user interaction with your products? #innovation #technology #future #management #startups
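    The build-observe-iterate cycle described above can be sketched in a few lines of code. This is a toy illustration, not any real product framework; all names (`run_feedback_loop`, `collect_feedback`, `apply_feedback`) are hypothetical.

```python
# Minimal sketch of the MVP feedback cycle: launch, observe, collect, iterate.
# Every name here is illustrative, not a real library API.

def run_feedback_loop(features, collect_feedback, apply_feedback, rounds=3):
    """Iteratively refine a feature set based on user feedback.

    features: the initial MVP feature list
    collect_feedback: callable returning a list of flagged features
    apply_feedback: callable merging that feedback into the feature set
    """
    product = list(features)
    for _ in range(rounds):
        feedback = collect_feedback(product)   # observe real usage
        if not feedback:                       # nothing to improve: stop early
            break
        product = apply_feedback(product, feedback)  # iterative design step
    return product

# Toy usage: users never touch "export-to-fax", so the loop drops it.
unused = {"export-to-fax"}
result = run_feedback_loop(
    ["search", "export-to-fax", "alerts"],
    collect_feedback=lambda p: [f for f in p if f in unused],
    apply_feedback=lambda p, fb: [f for f in p if f not in fb],
)
print(result)  # ['search', 'alerts']
```

    The point of the sketch is the shape of the loop, not the logic inside it: observation and refinement repeat until the feedback stream dries up, which mirrors the "continuous improvement" step above.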

  • View profile for Mabel Loh

    Founder @Maibel | Building emotional AI companions for real-world behavior change

    1,792 followers

    I went to an AI UX workshop last night expecting recycled LinkedIn advice about "building AI trust through transparency." Instead, Isabella Yamin tore down LinkedIn's job posting flow using her CarbonCopies AI framework in real-time, while founders shared raw implementation struggles. It completely changed how I'm rethinking Maibel's onboarding flow.

    Here's what I stole from B2B SaaS principles to redesign emotional AI for B2C:

    1️⃣ Progressive disclosure with purpose
    LinkedIn's fatal flaw? Optimizing for completion ease over outcome quality. Recruiters are drowning in irrelevant applications because the AI never learns what "qualified" means. The personalization paradox: how do we give users enough control without overwhelming them? Users don't want "frictionless". They want INFORMED control.
    📌 At Maibel: I was falling into the same trap, making emotional coaching setup so simple that the AI couldn't understand user context. Now? Progressive complexity with clear trade-offs. Show users how their choices impact outcomes.
    → Want deeper insights? Add more context.
    → Want faster setup? Here's what the AI can't personalize.

    2️⃣ Closed-loop data intelligence: what Platfio gets right
    They've built a platform for software agencies where every data point feeds back into the entire system. User preferences in marketing flows shape proposals. Campaign performance shapes future recommendations. Every interaction becomes intelligence for future recommendations.
    📌 At Maibel: Most wellness apps store emotional check-ins like digital journals. I'm turning them into predictive feedback loops. Emotional intelligence isn’t static; it COMPOUNDS. Today's reflections shift tomorrow's suggestions. Patterns fuel prevention. Users' inputs on Monday could predict AND prevent Friday's breakdown.

    3️⃣ Multi-modal creativity: Wubble's transparency approach
    Translating images and files into music: who'd have thought? They've cracked multi-modal creativity where users become co-creators, not passive consumers. The breakthrough moment for me: what if users could see how their visual environment contributes to emotional context?
    📌 At Maibel: Users upload images of their day and see how the AI analyzes emotional cues: cluttered workspace = overwhelm, junk food = stress eating. Multi-modal understanding users can contribute to and influence.

    💡 The bottom line? B2B SaaS gets one thing right: every interaction has to earn trust. In B2B, failed AI means churn. In emotional AI, failed trust breaks belief in tech entirely.

    📌 Here's what we're doing differently at Maibel:
    → Progressive complexity
    → Context-aware feedback
    → Multi-modal participation
    → Intelligence that compounds with every input.

    It's not just about building WITH AI. I'm designing systems that learn to understand YOU before you even need to explain yourself. Kudos to Isabella, Shivang Gupta The Generative Beings, Shaad Sufi Hayden Cassar and everyone who shared deep product insights.

  • View profile for Karen Kim

    CEO @ Human Managed, the AI Service Platform for Cyber, Risk, and Digital Ops.

    5,876 followers

    User Feedback Loops: the missing piece in AI success?

    AI is only as good as the data it learns from -- but what happens after deployment? Many businesses focus on building AI products but miss a critical step: ensuring their outputs continue to improve with real-world use. Without a structured feedback loop, AI risks stagnating, delivering outdated insights, or losing relevance quickly.

    Instead of treating AI as a one-and-done solution, companies need workflows that continuously refine and adapt based on actual usage. That means capturing how users interact with AI outputs, where it succeeds, and where it fails.

    At Human Managed, we’ve embedded real-time feedback loops into our products, allowing customers to rate and review AI-generated intelligence. Users can flag insights as:
    🔘 Irrelevant
    🔘 Inaccurate
    🔘 Not Useful
    🔘 Others

    Every input is fed back into our system to fine-tune recommendations, improve accuracy, and enhance relevance over time. This is more than a quality check -- it’s a competitive advantage.
    - For CEOs & Product Leaders: AI-powered services that evolve with user behavior create stickier, high-retention experiences.
    - For Data Leaders: Dynamic feedback loops ensure AI systems stay aligned with shifting business realities.
    - For Cybersecurity & Compliance Teams: User validation enhances AI-driven threat detection, reducing false positives and improving response accuracy.

    An AI model that never learns from its users is already outdated. The best AI isn’t just trained -- it continuously evolves.
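    A flag-and-retrain loop like the one described above could be captured in a small sketch. This is a hypothetical illustration, not Human Managed's actual API; the flag labels come from the post, everything else (class and method names) is invented.

```python
from collections import Counter

# Allowed flag labels, taken from the post above.
FLAGS = {"Irrelevant", "Inaccurate", "Not Useful", "Others"}

class FeedbackLog:
    """Hypothetical store for user flags on AI-generated insights."""

    def __init__(self):
        self.entries = []  # list of (insight_id, flag) pairs

    def flag(self, insight_id, flag):
        if flag not in FLAGS:
            raise ValueError(f"unknown flag: {flag!r}")
        self.entries.append((insight_id, flag))

    def summary(self):
        """Counts per flag, e.g. to prioritize which outputs to retrain on."""
        return Counter(flag for _, flag in self.entries)

# Toy usage: two users flag the same insight, one flags another.
log = FeedbackLog()
log.flag("insight-17", "Inaccurate")
log.flag("insight-17", "Irrelevant")
log.flag("insight-42", "Inaccurate")
print(log.summary()["Inaccurate"])  # 2
```

    The summary step is where the "fed back into our system" part would begin in a real product: counts per flag identify which outputs are failing and why, which is exactly the signal a fine-tuning pass needs.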

  • View profile for Rishav Gupta

    The “Why” behind the “How” | Product @ ETS

    12,325 followers

    The 10-50-99 rule that improved my product launches:

    When reviewing designs:
    - At 10% completion, critique the concept
    - At 50%, critique the approach
    - At 99%, just check for bugs

    When I implemented this, our delivery time dropped by 40%. Previously:
    - We'd debate visual details at 10%
    - Request major changes at 99%
    - Create endless revision cycles

    Design feedback without structure creates waste. Different stages need different types of feedback. Early feedback should open possibilities. Late feedback should close gaps. Your team doesn't need your opinion at every stage. They need the right guidance at the right time.

    What feedback framework has helped your team deliver projects more efficiently? #ProductManagement #Leadership #ProductDevelopment #PMLife
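    The 10-50-99 rule is simple enough to write down as a lookup. The thresholds are the post's; the function itself is a hypothetical sketch, useful for making the stage boundaries explicit.

```python
def feedback_focus(completion_pct):
    """Return the kind of review feedback appropriate at a completion stage,
    per the 10-50-99 rule: concept early, approach mid, bugs-only late."""
    if completion_pct <= 10:
        return "critique the concept"
    if completion_pct <= 50:
        return "critique the approach"
    return "just check for bugs"

print(feedback_focus(10))  # critique the concept
print(feedback_focus(40))  # critique the approach
print(feedback_focus(99))  # just check for bugs
```

    Note the design choice: anything between the named checkpoints falls into the nearest later bucket (e.g. 40% still gets approach-level critique), which matches the spirit of "early feedback opens possibilities, late feedback closes gaps."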

  • View profile for Elizabeth Laraki

    Design Partner, Electric Capital

    8,299 followers

    When something feels off, I like to dig into why. I came across this feedback UX that intrigued me because it seemingly never ended (following a very brief interaction with a customer service rep). So here's a nerdy breakdown of feedback UX flows — what works vs what doesn't.

    A former colleague once introduced me to the German term "Salamitaktik," which roughly translates to asking for a whole salami one slice at a time. I thought about this recently when I came across Backcountry’s feedback UX. It starts off simple: “Rate your experience.” But then it keeps going. No progress indicator, no clear stopping point — just more questions.

    What makes this feedback UX frustrating?
    – Disproportionate to the interaction (too much effort for a small ask)
    – Encourages extreme responses (people with strong opinions stick around, others drop off)
    – No sense of completion (users don’t know when they’re done)

    Compare this to Uber’s rating flow: you finish a ride, rate 1–5 stars, and you’re done. A streamlined model — fast, predictable, actionable (the whole salami).

    So what makes a good feedback flow?
    – Respect users’ time
    – Prioritize the most important questions up front
    – Keep it short — remove anything unnecessary
    – Let users opt in to provide extra details
    – Set clear expectations (how many steps, where they are)
    – Allow users to leave at any time

    Backcountry’s current flow asks eight separate questions. But really, they just need two:
    1. Was the issue resolved?
    2. How well did the customer service rep perform?

    That’s enough to know whether they need to follow up and to assess service quality — without overwhelming the user. More feedback isn’t always better — better-structured feedback is. Backcountry’s feedback UX runs on Medallia, but this isn’t a tooling issue — it’s a design issue. Good feedback flows focus on signal, not volume.

    What are the best and worst feedback UXs you’ve seen?
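    The good-flow checklist above (clear expectations, a short fixed question list, exit at any time) can be demonstrated in a tiny sketch. The two questions are the post's; the `run_survey` function and `answer_fn` callback are hypothetical, not any survey tool's API.

```python
# The two questions the post argues are sufficient.
QUESTIONS = [
    "Was the issue resolved?",
    "How well did the customer service rep perform?",
]

def run_survey(answer_fn):
    """Run a short feedback flow.

    answer_fn(prompt) returns the user's answer, or None to leave early.
    The prompt includes a step counter, so expectations are always clear.
    """
    answers = {}
    total = len(QUESTIONS)
    for i, question in enumerate(QUESTIONS, start=1):
        prompt = f"({i}/{total}) {question}"   # progress indicator
        answer = answer_fn(prompt)
        if answer is None:                      # user may leave at any time
            break
        answers[question] = answer
    return answers

# Toy usage: a user who answers everything, and one who bails immediately.
complete = run_survey(lambda prompt: "yes")
abandoned = run_survey(lambda prompt: None)
print(len(complete), len(abandoned))  # 2 0
```

    The contrast with a Salamitaktik flow is structural: the question list is fixed and visible up front, so there is no "one more slice" surprise, and abandoning costs the user nothing.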

  • View profile for Odette Jansen

    ResearchOps & Strategy | Founder UxrStudy.com | UX leadership | People Development & Neurodiversity Advocacy | AuDHD

    21,923 followers

    So many product teams work on new features they believe will be a game-changer for users. But how do you really know if a feature will be adopted by users? This is where UX research comes in. As UX researchers, we can help identify the probability of feature adoption by digging deep into user needs, behaviors, and expectations.

    Here are some ways we measure and predict feature adoption:

    1. User Interviews and Surveys: By speaking directly to users, we can gauge their interest in a new feature. Through surveys or interviews, we explore how they might use the feature, what problems it would solve for them, and how it fits into their current workflows. These qualitative insights give us an early understanding of potential adoption barriers.

    2. Usability Testing: A feature may seem like a great idea on paper, but how do users actually interact with it? Conducting usability tests on prototypes allows us to see whether users understand the feature, how intuitive it is, and where they might get stuck. If the feature feels cumbersome, adoption rates will likely be lower.

    3. Task Success Rate: This metric allows us to measure how easily users can complete tasks using the new feature. A low success rate indicates friction, and users are less likely to adopt a feature if it doesn’t make their experience easier.

    4. User Journey Mapping: By mapping out the user journey, we can see where the new feature fits into the overall user experience. Does it make sense within the flow of their tasks? Are there unnecessary steps or points of confusion? A smooth, integrated feature is more likely to be adopted.

    5. A/B Testing: Once a feature is live, we can run A/B tests to see if it’s driving the desired behavior. Does the feature increase engagement or task completion compared to the previous version? These quantitative insights allow us to measure real-world adoption and refine the feature based on user interactions.

    6. Feature Feedback: After a feature is released, gathering feedback is key. By monitoring user comments, satisfaction scores, and support tickets, we can understand how users feel about the feature. Are they using it as intended? Are there any pain points that need addressing?

    As UX researchers, our role is to validate whether a feature truly meets user needs and fits within their daily tasks. We can predict adoption rates, identify potential issues early, and help product teams make informed decisions before launching a feature. How do you measure feature adoption in your research?
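    Of the methods above, task success rate (point 3) is the most directly quantifiable. A minimal sketch of the computation, assuming one pass/fail result per attempted task:

```python
def task_success_rate(attempts):
    """Fraction of task attempts completed successfully.

    attempts: a sequence of booleans, one per usability-test attempt
    (True = participant completed the task with the new feature).
    """
    if not attempts:
        return 0.0  # no data yet; avoid dividing by zero
    return sum(attempts) / len(attempts)

# Toy usage: 7 of 10 usability-test participants completed the task.
rate = task_success_rate([True] * 7 + [False] * 3)
print(rate)  # 0.7
```

    In practice you would segment this per task and per participant group rather than pooling everything, since a single averaged rate can hide one task with severe friction, but the core metric is just this ratio.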

  • View profile for Shahid Miah 🦄

    Founder at wavespace | UX design agency & digital partner | Our clients raised $10B+ | 500+ companies served for UX design, incl. Y Combinator (YC), Techstars, Seedcamp | Digital design partner for future Unicorns | SaaS | Web3 | AI/ML/VC

    10,128 followers

    The right tools = better design decisions. Here’s my 5-part toolkit for building usable, lovable products.

    After working with 500+ SaaS teams and startup founders, I realized most design problems aren’t creativity-related. They’re process-related. So I built a tool-based workflow to fix it end-to-end. 👇

    1. Prototyping
    You don’t need perfect screens to test ideas. You need working flows. Use:
    → FigJam for quick sketches
    → Framer for interactive mockups
    → ProtoPie for smart transitions
    Great design starts with rough ideas, tested early.

    2. Surveys
    Stop assuming what users want. Ask. Use:
    → Typeform for engaging surveys
    → Google Forms for quick feedback
    → Qualtrics for deep insights
    Good products start with good questions.

    3. AI Design Assistants
    AI doesn’t replace designers; it helps them think faster. Use:
    → ChatGPT to structure flows
    → Uizard to turn sketches into screens
    → Khroma for AI-driven color palettes
    Think of AI as your co-pilot, not your competition.

    4. Usability Testing
    You’re not the user. So stop designing like you are. Use:
    → Maze to test quickly
    → UserTesting to observe behavior
    → Hotjar to track attention
    Real feedback is better than 100 assumptions.

    5. A/B Testing
    Design is iteration, not a one-shot deal. Use:
    → VWO or Optimizely to compare versions
    → Convert to test page-level performance
    → Unbounce for landing experiments
    Let data guide your next move, not guesses.

    This is how we design smarter:
    - Use tools that work together.
    - Focus on insights, not opinions.
    - Build, test, learn, repeat.

    If you’re a founder or product owner, this is your cheat code to ship a better UX.
    💾 Save this post.
    ♻️ Send it to your team.
    And upgrade your design process.

  • View profile for Jordan McMorris

    Level Designer @ Apogee Entertainment

    4,590 followers

    How to Iterate on a Level Blockout (Without Losing Your Mind)

    Iteration is the heart of great level design, but it can also be one of the most frustrating parts of the process if you don’t have a structure to guide you. When you're deep in a blockout, it’s easy to feel overwhelmed by feedback, second-guess your layout, or get stuck in a loop of endless changes.

    Here’s how I approach iteration to keep my head clear and the project moving forward:

    1. Set Clear Intentions Before You Build
    Before the first shape hits the grid, I define the goal of the space. Is it a combat arena? A narrative moment? A quiet place for exploration? This intention acts like a compass: every iteration decision is measured against it. If a change doesn’t support the core purpose of the area, it’s either reworked or cut.

    2. Feedback Is Fuel, Not Fire
    It’s tempting to treat all feedback as immediate action items, but not all feedback is created equal. When I get notes, I do three things:
    • Group similar feedback
    • Separate opinion from objective issues
    • Ask “What’s the root problem?”
    Sometimes, what sounds like a request for a new layout is really about flow, pacing, or visibility. Identifying the why behind the feedback saves you from chasing your tail.

    3. Small Changes First, Big Changes Last
    I’ve learned to start iteration by tweaking flow and clarity within the existing structure before scrapping whole sections. You’d be surprised how far small shifts, like changing sightlines or moving cover, can go toward solving problems. Only when small changes can’t resolve the issue do I consider major reworks.

    4. Play, Test, Reflect, Then Tweak
    If you’re not playing your blockout regularly, you’re designing in the dark. After each iteration pass, I do a quick playthrough, often with a specific question in mind: “Does this feel too linear?” “Is the player seeing the objective when they need to?” “Can they identify a path forward without stopping to think?” Recording a play session or watching someone else play it blind can reveal blind spots instantly.

    5. Track Your Changes and Learn From Them
    I keep a running log of iteration changes and the reason for each. This not only helps me understand how a level evolved, but also builds a library of design solutions I can refer back to on future projects.

    Final Thought:
    Iteration isn’t failure; it’s refinement. Every tweak is a step toward a better player experience. Build with intention, respond with clarity, and keep your goals visible. You’ll be amazed at how smooth the process becomes. If you’ve got your own tips for surviving the iteration loop, I’d love to hear them. How do you keep your brain from melting during blockout changes?
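    The running change log from point 5 above can be sketched as a tiny data structure. Everything here is a hypothetical illustration (the class names, fields, and example entries are invented), but it shows why logging the reason alongside the change pays off: past solutions become searchable.

```python
from dataclasses import dataclass, field

@dataclass
class IterationEntry:
    change: str   # what was altered in the blockout
    reason: str   # why: the root problem the change addresses

@dataclass
class IterationLog:
    entries: list = field(default_factory=list)

    def record(self, change, reason):
        self.entries.append(IterationEntry(change, reason))

    def lookup(self, keyword):
        """Find past changes by keyword, to reuse solutions on future levels."""
        return [e for e in self.entries
                if keyword in e.change or keyword in e.reason]

# Toy usage with invented example entries.
log = IterationLog()
log.record("moved cover near east door", "players had no safe approach path")
log.record("raised sightline to objective", "testers missed the goal marker")
print(len(log.lookup("cover")))  # 1
```

    A spreadsheet or a text file works just as well; the important design choice is that each entry pairs the change with its reason, so the log answers "why did the level end up this way" rather than just "what moved".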

  • View profile for Kim Breiland A.npn

    Founder, Breiland Consulting Group | Helping SMBs Save Time & Grow Revenue | Strong Systems First. AI Second. | Dyslexia Advocate | Tennis, not pickleball | Creator, #AIOpsEdit

    8,867 followers

    Communication gaps and weak feedback loops hurt business success.

    [Client Case Study] A large hospital network noticed declining patient satisfaction scores. Even with state-of-the-art facilities and technology, patients reported feeling unheard, frustrated, and confused about their care plans. The executive team assumed the problem was with staff training or outdated workflows.

    ‼️ Mistake: Relying on high-level reports instead of direct frontline feedback.

    Nurses, doctors, and administrative staff communicate differently based on their backgrounds, generations, and roles.
    - Senior physicians prefer face-to-face or email communication
    - Younger nurses and tech staff rely on instant messaging and digital dashboards
    - Patients (especially elderly ones) need clear verbal explanations, but many received rushed instructions or digital paperwork

    ‼️ Mistake: These differences weren't acknowledged, and crucial patient information was lost, leading to errors, frustration, and decreased trust.

    Frontline staff experienced communication challenges daily but lacked a way to share them with leadership in a meaningful way.
    ❌️ Reporting structures were too slow or ineffective. Feedback was either ignored, filtered through multiple levels of management, or only addressed after major complaints.
    ❌️ Executives made decisions based on outdated assumptions. They focused on training programs instead of fixing communication systems.
    ❌️ Systemic decline: employee burnout increased as staff struggled with inefficient systems. Patient satisfaction declined, leading to lower hospital ratings and reimbursement penalties. Staff turnover rose, increasing costs for recruitment and training.

    💡 The Solution: A Multi-Channel Communication Strategy & Real-Time Feedback Loop
    ✅ Physicians, nurses, and patients receive information in ways that align with their preferences (e.g., verbal updates for elderly patients, digital dashboards for younger staff).
    ✅ A digital tool allows staff to flag communication issues immediately rather than waiting for annual surveys.
    ✅ Executives hold regular listening sessions with frontline employees to better understand challenges before making changes.

    The Result:
    - Patient satisfaction scores improved
    - Employee engagement increased
    - Operational efficiency improved

    Failing to adapt communication strategies and strengthen feedback loops affects reputation, retention, and revenue (the 3 Rs of a successful organization). Frontline operations directly impact customer and employee experiences. This hospital’s struggle isn’t unique. Every industry faces the risk of misalignment between leadership decisions and frontline realities. Weak feedback loops and outdated communication strategies create costly inefficiencies. If your employees don’t feel heard, your customers won’t feel valued. Business suffers.

    Are you listening to the voices that matter most in your business? If not, it’s time to start.
