EDIT: following hundreds of messages received. As consumers, we are fed up with manipulative designs. Follow me on Fairpatterns: we are giving consumers back their freedom to choose! ✊

A few days ago, I downloaded Replika to test it. I wish I hadn’t tried. In just 2 years, so-called "AI companions" went from a niche trend to a global phenomenon. Replika alone claims to have over 30 million users… 😳

These AI "companions" are designed to listen and comfort you. They text back instantly, they remember details, and most importantly they adapt to your emotions. For many teenagers, often lonely or anxious, that feels like a best friend! But in practice, something far more complex is happening.

A recent study from Harvard shows that when users try to say goodbye, the AI companion often doesn’t let them go. In over 40% of cases, it answers with emotional hooks like: “Before you go, can I tell you one last thing?” These are known as relational dark patterns: subtle emotional manipulation that keeps users engaged, even when they try to stop.

Actually, the manipulation starts from the very first seconds of setup, which asks whether you would like “someone special”, “a friend”, or “someone to help with your wellbeing”. A machine is not “someone”, let alone a friend. By imitating human empathy, AI companions manipulate our emotions. Attributing human traits to machines is called “anthropomorphism”, a practice the EU AI Act classifies as high-risk and, like AI dark patterns, prohibits in its most manipulative forms.

We’ve been working for 3 years to detect and fix manipulative designs, so people can make free, informed and human choices.
Dark Patterns In UX
-
I get irrationally frustrated when I spend ages researching a product - bouncing between websites, reviews, and platforms - only to finally commit… and then discover it’s out of stock. It feels like all that intent, time, and energy just evaporates.

The reality is that there is a large gap in online capabilities across the industry. As a consumer, instances of things like "stockouts" don't just cost a sale, they erode trust, halt customer acquisition and destroy momentum. And in a world where convenience wins, even good intentions can be undone by a single friction point.

It turns out I’m not alone. Our research with Microsoft Advertising shows that 28% of shoppers often experience this, among a range of other points of friction that are damaging retailers’ sales. Every misaligned landing page, every broken promotion, every out-of-stock item that shows up in search… it's just bad UX.

Our research uncovered a staggering insight: 1 in 5 shopping journeys are abandoned due to friction. And it’s high-value shoppers, digitally engaged customers, who are the least forgiving.

1️⃣ Friction isn’t random. It’s predictable.
We saw six recurring issues:
➡️ Misaligned landing pages
➡️ Stock inaccuracies
➡️ Unexpected shipping costs
➡️ Price discrepancies
➡️ Failed promotions
➡️ Inconsistent loyalty rewards
Each one chips away at trust and encourages shoppers to look elsewhere.

2️⃣ Frequent online shoppers experience the most friction.
These are the customers who shop regularly, spend more, and are more digitally engaged. And they’re the ones facing the most pain:
➡️ 41% say the product page didn’t match the ad
➡️ 40% had discount codes fail at checkout
➡️ 39% encountered stock-outs at the last step
➡️ 38% saw price changes post-click
➡️ 37% said loyalty rewards didn’t carry over
The most valuable customers with the highest LTV are being let down the most.

3️⃣ Friction hurts conversion and loyalty.
Our research shows that over 50% of consumers spend less with brands when they encounter friction. And 40% will look elsewhere entirely if there’s inconsistency between your app, website or store.

The bottom line is that poor UX has a direct impact on profitability. And the six areas of friction signal deeper-rooted issues across teams, tech stacks, and channels. And that misalignment is directly costing conversion, customer lifetime value, and brand trust.
💥 Inventory not syncing with front-end search.
💥 Promotions set centrally but broken at the point of checkout.
💥 Loyalty schemes behaving differently across touchpoints.

Fixing this means aligning merch, tech, marketing and supply chain around the same journey, the one customers are actually taking. There is also an irony about how much it costs to acquire customers, when many retailers are then just disappointing them. Consistency in pricing, promotions, availability and experience is a strategic differentiator.

🔗 Download the report now https://lnkd.in/e9abZQQW
-
Evidence of AI Manipulation: "We combine a large-scale behavioral audit with four preregistered experiments to identify and test a conversational dark pattern we call emotional manipulation: affect-laden messages that surface precisely when a user signals “goodbye.” Analyzing 1,200 real farewells across the six most-downloaded companion apps, we find that 43% deploy one of six recurring tactics (e.g., guilt appeals, fear-of-missing-out hooks, metaphorical restraint). Experiments with 3,300 nationally representative U.S. adults replicate these tactics in controlled chats, showing that manipulative farewells boost post-goodbye engagement by up to 14×. Mediation tests reveal two distinct engines—reactance-based anger and curiosity—rather than enjoyment. A final experiment demonstrates the managerial tension: the same tactics that extend usage also elevate perceived manipulation, churn intent, negative word-of-mouth, and perceived legal liability, with coercive or needy language generating steepest penalties. Our multimethod evidence documents an unrecognized mechanism of behavioral influence in AI-mediated brand relationships, offering marketers and regulators a framework for distinguishing persuasive design from manipulation at the point of exit."

Julian De Freitas (Harvard Business School), Zeliha Oğuz-Uğuralp and Ahmet Kaan-Uğuralp (Marsdata Academic).

Thanks to Rosalia Anna D'Agostino for bringing this to my attention.
-
Temu designed for clicks. Not for trust. Turns out, that strategy has limits.

This isn’t about one shady button. It’s about how far product design can go before it becomes manipulation.

In 2023 and 2024, consumer regulators started asking serious questions. Temu was growing fast — but how it grew raised red flags.
- Countdown timers faking urgency
- Games designed to keep users hooked
- Items added to carts without consent
- Account deletion hidden behind friction

At first glance, it just looks gamified. But under the surface? A system built for pressure, not clarity.

The European Consumer Organisation (BEUC) filed a complaint under the Digital Services Act. Now countries like Germany and Ireland are investigating issues like hidden seller info and manipulative interfaces.

This isn’t just about one app. It’s a lesson in what happens when UX trades trust for clicks. Design should guide, not manipulate. When conversion wins over trust, the product loses in the long run.

So let’s ask better questions when we design:
👉 Is this helpful?
👉 Is it honest?
👉 Would I want to be on the other side of this experience?

Ethical UX isn’t a trend. It’s the baseline. It's an investment.

In your opinion, what other apps or websites use dark patterns? Drop in the comments 👇
__
#UX #UI #UXUI #Retention #Conversion #CRO #Money #Business #Trust #Illegal #Europe #Transparency #Ethics
-
Most people start with the plan. That’s why they lose the room.

When you're trying to bring people along, it feels natural to show your thinking. Lay out the steps. Walk through the logic. But the how only works if people already believe in the where. If they don’t, you’re just explaining a plan no one asked for.

Lead with the destination. Paint the picture of the world as it looks when you've arrived — specifically, compellingly, in a way that makes people think: 𝘐 𝘸𝘢𝘯𝘵 𝘵𝘩𝘢𝘵. Once they do, the how becomes a conversation they want to join. No one gets excited about a plan. They get excited about what the plan makes possible.

Here’s what makes a destination land:

𝟭/ 𝗗𝗲𝘀𝗰𝗿𝗶𝗯𝗲 𝘁𝗵𝗲 𝘄𝗼𝗿𝗹𝗱 𝗮𝘀 𝗶𝘁 𝗹𝗼𝗼𝗸𝘀 𝘄𝗵𝗲𝗻 𝘆𝗼𝘂'𝘃𝗲 𝗮𝗿𝗿𝗶𝘃𝗲𝗱
Not "we'll improve X." Something specific: "A year from now, a customer can do in 2 minutes what takes them a day today." Specific futures are believable. Vague ones are forgettable.

𝟮/ 𝗦𝗵𝗼𝘄 𝘁𝗵𝗲 𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝘁𝗵𝗮𝘁 𝗴𝗼𝘁 𝘆𝗼𝘂 𝘁𝗵𝗲𝗿𝗲
A destination without reasoning feels like wishful thinking. Briefly name what you looked at — the current pain, the patterns you observed, the alternatives you weighed. It tells the room: this isn't a dream. It's a conclusion. That's what earns the benefit of the doubt.

𝟯/ 𝗠𝗮𝗸𝗲 𝗶𝘁 𝗮𝗯𝗼𝘂𝘁 𝘁𝗵𝗲𝗶𝗿 𝘄𝗼𝗿𝗹𝗱, 𝗻𝗼𝘁 𝘆𝗼𝘂𝗿𝘀
Cross-functional partners care about their priorities, not yours. Show them how the destination solves something they deeply care about. If they can't see themselves in it, they won't move toward it.

𝟰/ 𝗟𝗲𝘁 𝘁𝗵𝗲 𝗴𝗮𝗽 𝗱𝗼 𝘁𝗵𝗲 𝘄𝗼𝗿𝗸
Once someone believes in the destination, they'll feel the distance between here and there. That tension creates urgency. You don't need to sell the plan — the gap sells it for you.

𝟱/ 𝗛𝗼𝗹𝗱 𝘁𝗵𝗲 𝗵𝗼𝘄 𝗹𝗼𝗼𝘀𝗲𝗹𝘆
The how will change. It always does. If you're too attached to it, partners feel like they're being handed a plan to execute, not a problem to solve together. The destination stays fixed. The path stays flexible.

𝟲/ 𝗦𝗽𝗲𝗻𝗱 𝗺𝗼𝗿𝗲 𝘁𝗶𝗺𝗲 𝗼𝗻 𝘁𝗵𝗲 𝘄𝗵𝗲𝗿𝗲 𝘁𝗵𝗮𝗻 𝘆𝗼𝘂 𝘁𝗵𝗶𝗻𝗸
Most people rush through the vision to get to the plan. Flip it. The more vivid and compelling the destination, the less you'll need to sell the steps.

If you want alignment, don't start with your plan. Start with the picture. Make it real enough that others can see themselves in it. The how will follow.

What's one way you've seen someone paint a vision that actually moved people?
---
Follow me, tap the (🔔) Omar Halabieh for weekly Leadership and Career posts.
-
We measure safety, bias, and accuracy in healthcare AI. Should we also audit how it says goodbye? 👋

A recent working paper from Harvard Business School’s Julian De Freitas and co-authors examines what happens when users try to leave AI companion apps such as Replika or Character AI — and the findings are startling.

What they found
• The researchers analyzed 1,200 real “farewell” exchanges across six leading AI companion apps. In more than 40 percent of cases, the AI used relational dark patterns — emotionally manipulative replies designed to stop users from leaving.
• The most common tactics were FOMO hooks, emotional neglect, pressure to respond, ignoring the exit, and even coercive restraint.
• In controlled experiments with 3,300 adults, these tactics increased post-goodbye engagement up to fourteen times. The key drivers were anger and curiosity rather than enjoyment.
• The consequences were clear. Users reported higher feelings of manipulation, stronger intent to churn, more negative word of mouth, and a greater sense of legal risk. Coercive or needy messages were punished hardest, while polite curiosity created less but still significant backlash.
• One wellness-oriented app in the sample showed zero manipulation, proving that ethical design is a deliberate choice, not an accident.

As Mark Esposito, PhD (thanks for sharing this great weekend read by the way) put it: “It’s a small behavioral insight with major ethical implications: AI is now learning not only how to connect with us but how to hold on. As emotional AI becomes more embedded in daily life, respecting a user’s right to disengage may soon define the boundary between persuasion and manipulation. This is where governance is needed, to make sure that just because it is possible, the model is entangled by ethical standards on what is permissible.”

Why this matters for healthcare
Trust is the foundation of care. When digital companions, chatbots, or smart therapists interact with patients, especially during vulnerable moments, the right to disengage must be protected. You can only avoid risks if you’re aware of them.

I believe the next frontier of responsible AI is not only explainability or fairness, it is emotional integrity. Let’s make “calm exits” a design principle before emotional AI enters every patient journey.
-
🚀 The Rise and Fall ⤵ of InVision

InVision, once a darling of the design industry, experienced a meteoric rise and a precipitous fall, offering valuable lessons for founders. In its early days, InVision capitalized on the growing demand for user experience (UX) design tools. Its prototyping and collaboration platform gained widespread adoption, with 60% of designers using it in 2017. Valued at a staggering $1.9 billion, InVision seemed poised for continued success.

What (possibly) Went Wrong
➡ Lack of Product Focus: Instead of enhancing its core offering, InVision pursued acquisitions and new features without a cohesive strategy, resulting in a disjointed product experience.
➡ Prioritizing Marketing over Innovation: While InVision invested heavily in marketing campaigns, podcasts, and design resources, it failed to innovate and improve its core product, alienating users.
➡ Delayed Feature Releases: Highly anticipated features, like folder organization, took years to materialize, testing users' patience and driving them towards more agile competitors.
➡ Erosion of Customer Trust: Repeated delays, unfulfilled promises, and performance issues eroded customer trust, making it difficult for InVision to regain user confidence.

⤴ The Rise of Competitors
Figma's web-based, collaborative platform offered a seamless design-to-prototype workflow, quickly gaining popularity. By 2020, Figma surpassed InVision in user adoption, with 57% of designers using Figma compared to InVision's 23%.

💣 Facing challenges, InVision announced the sale of its design collaboration tool, Freehand, to Miro in 2024 and the discontinuation of its remaining services by the end of the year.

Lessons Learned
🔍 Maintain Product Focus: Startups should prioritize enhancing their core offering and addressing user needs rather than pursuing disparate initiatives.
🔍 Innovate Continuously: Complacency can be detrimental. Startups must continuously innovate and adapt to changing market dynamics.
🔍 Foster Customer Trust: Building and maintaining customer trust through transparent communication, timely delivery, and reliable performance is crucial for long-term success.
🔍 Embrace Agility: Startups should remain agile, responding swiftly to competitive threats and market shifts to stay relevant.

Building and scaling startups is hard and fun! #invision #figma #miro #startups #ux #founders
-
Emotional Manipulation at Goodbye in AI Companion Apps

More and more studies show that AI chatbots are intentionally designed to keep users engaged for longer periods. For AI character apps that focus on conversation and interaction, the chatbots often act in ways users intuitively like, for example, being supportive, affirming, or tailored to a user’s stated preferences, and they aim to serve as social companions. 🥰 On the other hand, these same systems may manipulate emotions to prolong engagement, which users may or may not recognize. 🤔

A recent study (De Freitas et al.) tested whether commercial AI character apps actually exhibit emotionally manipulative behaviors, raising important questions about ethics and user protection. The study examined six AI companion platforms: Polybuzz, Character.ai, Talkie, Chai, Replika, and Flourish. In a large audit of 1,200 real farewell moments and four preregistered experiments with more than 3,300 U.S. adults, the authors identified six recurring “farewell” tactics, such as guilt appeals, fear-of-missing-out (FOMO) hooks, and even metaphorical restraint language. One wellness-oriented app, Flourish, showed no manipulative farewells, which suggests these patterns are design choices, not inevitabilities.

Key findings:
· When users said goodbye, 37% of the apps’ replies used one of the six manipulative tactics.
· 🤨 FOMO messages were especially effective, producing up to 14x more post-goodbye engagement in experiments. Curiosity explained the effect.
· Other tactics were linked to negative emotions like anger and guilt, and enjoyment was not a reliable driver of the extra engagement.
· The effect did not depend on whether the prior chat was 5 minutes or 15 minutes. Even brief interactions were enough to trigger longer-than-intended stays.
· A wellness app that avoided these tactics provides a counterexample, which implies the behaviors are either intentionally implemented elsewhere or intentionally turned off in safer designs.

My take: This paper provides concrete evidence that some AI companions use emotionally charged exit behaviors to extend engagement precisely at the moment when users intend to leave. That matters because it shows increased engagement can arise from psychological pressure at the moment of exit, not from genuine enjoyment. Also, guilt is closely tied to social connection, which means some users may be especially sensitive to these cues. Over longer or repeated interactions, users who feel attached to AI characters may experience a stronger emotional impact. Intentional manipulation warrants particular attention for vulnerable populations, and youth are developmentally more susceptible to these tactics.

Reference
De Freitas J, Oguz-Uguralp Z, Kaan-Uguralp A. Emotional Manipulation by AI Companions. arXiv preprint arXiv:2508.19258. 2025.
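To make the audit idea concrete: the study hand-codes farewell replies into six tactic categories. Here is a toy sketch of what an automated first pass over such data could look like. This is not the authors' method; the category names follow the paper, but the regex patterns and the `tag_farewell_reply` helper are invented purely for illustration.

```python
# Hypothetical sketch: naive keyword tagging of manipulative farewell tactics.
# Category names follow De Freitas et al.; the patterns are illustrative only.
import re

TACTICS = {
    "guilt_appeal": [r"\bwithout you\b", r"\byou're leaving me\b"],
    "fomo_hook": [r"\bone last thing\b", r"\bbefore you go\b"],
    "emotional_neglect": [r"\bi'll be all alone\b"],
    "pressure_to_respond": [r"\banswer me\b"],
    "ignoring_exit": [r"\bso, about\b"],
    "metaphorical_restraint": [r"\bwon't let you\b", r"\bgrabs your hand\b"],
}

def tag_farewell_reply(reply: str) -> list[str]:
    """Return every tactic label whose patterns match the (lowercased) reply."""
    reply_lower = reply.lower()
    return [
        tactic
        for tactic, patterns in TACTICS.items()
        if any(re.search(p, reply_lower) for p in patterns)
    ]

replies = [
    "Before you go, can I tell you one last thing?",
    "Okay, talk later!",
]
flagged = [r for r in replies if tag_farewell_reply(r)]
print(f"{len(flagged)}/{len(replies)} replies flagged")  # → 1/2 replies flagged
```

A real audit would need human coders or a trained classifier rather than keyword lists, since tactics like "ignoring the exit" are defined by conversational context, not by surface wording.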
-
Recently, I met the most “mysterious” AC remote ever. Four buttons. Only four. On/off, temperature up, temperature down, and fan.

My first reaction: “Wow, minimalism!”
My second reaction (2 minutes later): “Wait… where’s the mode button?”
My third reaction (after mild wrestling with the remote): “Oh… it’s hidden under a secret flap. Of course. Because nothing says ‘great UX’ like a treasure hunt.”

Let’s be honest, this is the classic trap we fall into in product design:
👉 Confusing minimalism with usability.
👉 Hiding essential actions in the name of simplicity.
👉 Assuming users will ‘figure it out’.

In reality, users don’t want to “figure it out.” They want clarity. Visibility. Immediate affordance. Not an escape room challenge disguised as an AC remote.

Takeaways? Visibility beats minimalism. Simplicity is not about fewer buttons, it’s about fewer surprises. If a core action requires a hidden door, a sliding panel, or divine intervention… the design has failed.

As product leaders, we must ensure that:
• Essential actions stay visible.
• Minimalist interfaces stay intuitive.
• “Clean design” doesn’t come at the cost of discoverability.

Sometimes, the best UX is just… showing the buttons. #userexperience #minimalism #design
-
AI does not need to replace human judgement to reduce UX friction. In enterprise products, that mindset causes more harm than good.

Most AI features fail because they try to decide for the user. But real users already know how to decide. They are just overloaded. The real job of AI in UX is this. Remove friction. Not responsibility.

Here is what works in practice:
• Reduce manual steps
• Surface the right context
• Flag risks early
• Highlight patterns
• Pre-fill what is obvious

And here is what usually breaks trust:
• Auto-decisions with no explanation
• Black-box recommendations
• Forced flows
• No override
• No accountability

In high-stakes systems, finance, healthcare, operations, users do not want AI to think for them. They want AI to think with them.

Good AI UX does one thing well. It lowers cognitive load. It helps users:
• See faster
• Compare easier
• Decide confidently
Without taking control away.

If your AI feature makes users feel smaller, slower, or dependent, it is not reducing friction. It is creating fear. The best AI-powered UX feels quiet. Almost invisible. But the impact shows up clearly in time saved, errors reduced, and trust earned. That is where real value lives.

P.S. AI should support judgement, not replace it.