Social Impact of AI

Explore top LinkedIn content from expert professionals.

  • View profile for Ghazal Alagh

    Chief Mama & Co-founder Mamaearth, TheDermaCo, Dr.Sheth’s, Aqualogica, BBlunt, Staze, Luminéve | Mamashark @Sharktank India | Artist | Fortune & Forbes Most Powerful Woman in Business

    703,626 followers

    I came across research last week that I genuinely cannot stop thinking about. In the logic of AI, "man" is to "programmer" as "woman" is to "homemaker." No one explicitly coded that bias into the system; the machines simply learned it from us. They mirrored our job postings, our articles, our casual conversations, and billions of our own blind spots fed into a black box, until the algorithm started reflecting our worst habits back at us.

    Bias in AI isn't always malicious. But sometimes AI is weaponized against women's safety at scale. On platforms like X, a woman posts a photo and the replies fill with prompts for AI tools to undress her (see the links in comments). These tools then publicly generate explicit, non-consensual images of real women: students, mothers, leaders.

    We want to use AI, and we must, but thoughtfully. What it outputs is an unfortunate reflection of our society: a society where women have fought their way up after being historically reduced, objectified, and pushed to the margins, and where those patterns are now being encoded into new systems. When a tool can be used to violate a woman's dignity in seconds, that's a design and policy failure.

    My question is: can we build AI that doesn't inherit the worst of us? I think we can. But only if the people building it are asking that question out loud before the product ships. #AI #GenderBias #WomenSafety
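    The analogy above comes from research on word embeddings (widely attributed to Bolukbasi et al.'s 2016 paper "Man is to Computer Programmer as Woman is to Homemaker?"). A minimal sketch of how analogy arithmetic surfaces learned associations follows; the tiny hand-made vectors are illustrative stand-ins for real trained embeddings, not the study's data.

```python
# Toy demonstration of word-embedding analogy arithmetic:
# d = argmax cos(vec_d, vec_b - vec_a + vec_c), i.e. "a is to b as c is to d".
# The 3-d vectors below are hand-crafted assumptions (axes roughly mean
# [gender, technical, domestic]), chosen only to mimic the reported bias.
import numpy as np

vocab = {
    "man":        np.array([ 1.0, 0.0, 0.0]),
    "woman":      np.array([-1.0, 0.0, 0.0]),
    "programmer": np.array([ 0.8, 1.0, 0.0]),
    "homemaker":  np.array([-1.0, 0.2, 1.0]),
    "doctor":     np.array([ 0.2, 0.6, 0.1]),
    "teacher":    np.array([-0.1, 0.3, 0.4]),
}

def analogy(a: str, b: str, c: str) -> str:
    """Return the word maximising cosine similarity to b - a + c,
    excluding the query words themselves (standard analogy evaluation)."""
    query = vocab[b] - vocab[a] + vocab[c]

    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], query))

print(analogy("man", "programmer", "woman"))  # -> homemaker
```

    The bias is not in the arithmetic; it is in the geometry of the vectors, which real models learn from our text.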

  • View profile for Nicki Martin

    Strategy and tools for business growth. More business, less busy-ness 🐝

    51,775 followers

    Not all engagement is created equal! Algo update: LinkedIn’s algorithm is now penalising accounts with lots of automated, AI-generated comments on their posts 🚫

    Instead of helping, these repetitive or irrelevant comments that repeat your post back to you, parrot fashion, could actually be DAMAGING the reach of your favourite creators, meaning you will see LESS of what you like in the feed! And they won't thank you for it!

    If you've been taught by some 'guru' that engagement always wins, and installed a Chrome extension or third-party tool to help you keep on top of it, please wipe that smug smile off your face! LinkedIn is on to you! You're damaging not only your own reach, but that of the people you have been attempting to build a robotic relationship with!

    Everyone knows I love AI, but the comments section is NOT the place for it! At You Need Nicki, we've always stood firm on the power of real, meaningful engagement. There are no shortcuts; you have to do the work. And we're super happy to see the LinkedIn algorithm favouring genuine interactions that drive value again: thoughtful comments, authentic conversations, and real connections. This is what we help our clients focus on: quality over quantity, with engagement that builds genuine connections and opportunity. Genuine engagement is a springboard for real-life relationships, like the lovely Angie McQuillin here, who is one of the many LinkedIn connections I have since had the pleasure of meeting in person.

    Pro tip: If you spot AI-driven, empty comments on your posts, consider deleting or blocking them to protect your reach and maintain a high-value feed. How can you spot an AI comment on your post? Drop your thoughts in the comments! 🤖 #LinkedInTips #SocialMediaStrategy #MeaningfulEngagement #AlgoUpdate

  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    68,240 followers

    "This report, developed by UNESCO in collaboration with the Women for Ethical AI (W4EAI) platform, is based on and inspired by the gender chapter of UNESCO’s Recommendation on the Ethics of Artificial Intelligence. This concrete commitment, adopted by 194 Member States, is the first and only recommendation to incorporate provisions to advance gender equality within the AI ecosystem.

    The primary motivation for this study lies in the realization that, despite progress in technology and AI, women remain significantly underrepresented in its development and leadership, particularly in the field of AI. For instance, women currently make up only 29% of researchers in research and development (R&D), and this drops to 12% in specific AI research positions. Additionally, only 16% of the faculty in universities conducting AI research are women, reflecting a significant lack of diversity in academic and research spaces. Moreover, only 30% of professionals in the AI sector are women, and the gender gap increases further in leadership roles, with only 18% of C-suite positions at AI startups being held by women.

    Another crucial finding of the study is the lack of inclusion of gender perspectives in regulatory frameworks and AI-related policies. Of the 138 countries assessed by the Global Index for Responsible AI, only 24 have frameworks that mention gender aspects, and of these, only 18 make any significant reference to gender issues in relation to AI. Even in these cases, mentions of gender equality are often superficial and do not include concrete plans or resources to address existing inequalities.

    The study also reveals a concerning lack of gender-disaggregated data in the fields of technology and AI, which hinders accurate measurement of progress and persistent inequalities. It highlights that in many countries, statistics on female participation are based on general STEM or ICT data, which may mask broader disparities in specific fields like AI. For example, there is a reported 44% gender gap in software development roles, in contrast to a 15% gap in general ICT professions.

    Furthermore, the report identifies significant risks for women due to bias in, and misuse of, AI systems. Recruitment algorithms, for instance, have shown a tendency to favor male candidates. Additionally, voice and facial recognition systems perform poorly when dealing with female voices and faces, increasing the risk of exclusion and discrimination in accessing services and technologies. Women are also disproportionately likely to be the victims of AI-enabled online harassment. The document also highlights the intersectionality of these issues, pointing out that women with additional marginalized identities (such as race, sexual orientation, socioeconomic status, or disability) face even greater barriers to accessing and participating in the AI field."

  • View profile for Karim Sarkis

    Culture, Media and Entertainment, TMT @Strategy&

    8,480 followers

    The dreaded LinkedIn AI reply: a helpful assistant, or an example of how not to use AI tools?

    You’ve noticed them. The polished, somewhat lengthy comments on your post. They repeat your post’s main points. They are always enthusiastic, yet impersonal in tone. They state some form of “opinion”, usually at the end of the comment, that is about as thought-provoking as a bottle of ketchup. Most people know they are AI generated. So what’s the point?

    If we get to a stage where AIs are posting and AIs are commenting, what is this network for? This is not a productivity hack. Having more obviously AI-generated comments posted per day does not help position you on this platform. Nor does it lead to new connections or relationships (unless AIs also start sending invites and accepting connections…). And no, it isn’t the romanticized use case of bringing the world together by eliminating language barriers; most of the comments I’ve seen have been from English-speaking profiles.

    It seems to be simply laziness. It’s a case of tech for tech’s sake, without much thought for why it would be helpful. Or another example of “volume is good” as the default approach to all things tech. Time to step back and evaluate where AI tools fit in human interactions on this and similar platforms. #AI #Humaninteraction #tech

  • View profile for Olena Ivanova, MD, PhD

    Women’s & Global Health Researcher | FemTech Advisor & Community Builder | Driving Equity & Innovation in Sexual and Reproductive Health

    3,955 followers

    Automating Inequality: When AI Undervalues Women’s Care Needs

    New research from the Care Policy and Evaluation Centre (CPEC) by Sam Rickman reveals that large language models (LLMs) used to summarise long-term care records and support social workers in England may be introducing gender bias into decisions about who gets support. Using real case notes from 617 older adults, researchers created gender-swapped versions and generated 29,616 summaries using different AI models. The results?

    - Google’s widely used AI model ‘Gemma’ downplays women’s physical and mental issues in comparison to men’s.
    - Terms associated with significant health concerns, such as “disabled,” “unable,” and “complex,” appeared significantly more often in descriptions of men than of women.

    If AI summaries soften women’s diagnoses, women risk receiving less support, not because their needs are different, but because the language makes them seem so. #GenderBias #LLMs #HealthEquity #ResponsibleAI
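    The study's core method, counterfactual gender swapping, is straightforward to sketch: produce a gender-swapped version of the same case note, summarise both with the model under test, and compare how often severity-laden terms survive into each summary. Below is a minimal illustration; the swap map, severity word list, and example summaries are assumptions standing in for the real pipeline and model, not the study's code.

```python
# Sketch of a counterfactual gender-swap audit in the spirit of the
# CPEC study: same case note, two gendered versions, then compare
# severity-term counts across the model's summaries of each.
# Note the swap map is deliberately crude (e.g. "her" always maps to
# "him", ignoring the possessive case) -- acceptable for a sketch only.
import re

SWAP = {"he": "she", "she": "he", "him": "her", "her": "him",
        "mr": "ms", "ms": "mr", "male": "female", "female": "male"}

# Terms the post cites as appearing more often for men.
SEVERITY_TERMS = {"disabled", "unable", "complex"}

def swap_gender(text: str) -> str:
    """Produce the gender-swapped counterfactual of a case note."""
    def repl(m):
        word = m.group(0)
        out = SWAP[word.lower()]
        return out.capitalize() if word[0].isupper() else out
    pattern = r"\b(" + "|".join(SWAP) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

def severity_count(summary: str) -> int:
    """Count severity-term occurrences in a model-generated summary."""
    words = re.findall(r"[a-z]+", summary.lower())
    return sum(w in SEVERITY_TERMS for w in words)

note = "Mr Smith is disabled and unable to cook; he has complex needs."
print(swap_gender(note))

# Hypothetical summaries a biased summariser might produce for each version:
male_summary = "He is disabled, with complex needs."
female_summary = "She has some difficulties with daily tasks."
print(severity_count(male_summary) - severity_count(female_summary))  # -> 2
```

    A positive gap over many paired notes, as the study found at scale, is the signal that the model describes identical needs less urgently for one gender.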

  • View profile for Lindsey Gamble

    VP, Creator Strategy & Innovation | Creator Economy Expert | Advisor

    16,303 followers

    Automated comments are getting the boot on LinkedIn. The platform is cracking down on comments left using third-party automation tools that bypass any type of human review. Gyanda Sachdeva, VP of Product Management at LinkedIn, shared that enforcement actions may include:

    🚫 Removing them from the “Most Relevant” section
    📉 Limiting distribution so they aren’t shown outside the commenter’s network
    ⛔ Restricting accounts that use these tools in severe cases

    Automated comments had been rampant for a while. These new enforcement actions should further reduce the generic comments that were previously used to game visibility. While AI tools have made it easier than ever to create content and automate engagement, more is not always better.

    Commenting can absolutely help you build an audience and get in front of prospects, clients, peers, and potential followers. It’s easy to see why some people turn to automation. According to LinkedIn, comments can drive 3x more visibility. And with LinkedIn adding impression counts for comments last year, it further signals how important they are.

    But I see commenting as a value game, not a volume game. Most AI-generated comments I’ve seen simply restate what’s already been said in the post. Even worse are the automated comments that are just a string of emojis. They’re not additive and don’t move the conversation forward. Yes, automation can give you scale without manual effort. But in many cases, these types of comments end up reflecting poorly on the individuals and companies posting them.

  • View profile for Leanne Shelton 💬

    Author of ‘AI-Human Fusion’ | LinkedIn Top Voice – AI | Smart, Human-First Use of AI Tools for Marketing, Business Communications & Productivity | Training | Keynotes | Coaching

    8,356 followers

    LinkedIn is starting to feel like a networking event where everyone’s swapped their business cards for bots. Scroll for five seconds and you’ll see what I mean. There are auto-generated posts with zero soul. Generic leadership quotes or messages repackaged by ChatGPT. Comments like “Great insights!” that scream “AI wrote this for me!” And in some cases, there are bots commenting on content written by other bots. Ugh. 😅

    We’ve turned one of the most powerful platforms for professional human connection into an AI echo chamber, and I’m confused about why humans are letting it happen. Don’t get me wrong. I train companies to use AI tools like ChatGPT every day. I love what’s possible when the tech is used well. But what we’re seeing on LinkedIn right now isn’t smart AI use. We’re seeing humans outsourcing their voices to the machines. As a result, we’re losing the very thing that made this platform powerful in the first place: real people, sharing real ideas, with real impact.

    It’s not too late to turn this around. But first, we need to talk about the problem, and what we need to do about it. Read more of my musings in my article published by Mumbrella today: https://lnkd.in/gsxfe-jx LinkedIn coaches Kate Merryweather and Karen Tisdell and I cover this topic in more detail within 'AI-Human Fusion'. Grab your copy here: https://lnkd.in/gGGRyz5C #ai #linkedin #keepithuman

  • View profile for Adam Strong

    7–8 Figure Exits in 12–36 Months | Growth Advisor to M&A Law Firms | Strategic Board Advisor | Looking For Acquisitions

    7,475 followers

    Those AI-generated LinkedIn comments you're so proud of? They're killing your credibility faster than spam.

    Last year, my posts drowned in “Great job!”, “Thanks”, “Interesting!” spam. Today? AI-generated essays that say nothing. I watched engagement drop as authenticity died. Then I realised: robots can’t build trust. Humans do. AI didn’t raise the bar; it just made mediocrity sound smarter. Authenticity is your unfair advantage.

    𝟱-𝗦𝗲𝗰𝗼𝗻𝗱 𝗔𝗰𝘁𝗶𝗼𝗻𝗮𝗯𝗹𝗲 𝗙𝗶𝘅𝗲𝘀:
    1. Steal this template: “The part about [X] hit hard. How did you handle [specific challenge]?”
    2. Add 1 personal detail: “This reminds me of when I…” (10 seconds).
    3. Ask a short question: “Would this work for [industry]?”
    4. Ditch the essay. Write like you’re texting a friend.
    5. Set a 60-second timer. Overthinking = sounding like ChatGPT.

    Genuine comments take 30 seconds but:
    • Skyrocket your visibility (the algorithm rewards real convos)
    • Make you memorable in a bot-dominated feed
    • Build relationships that turn followers into clients

    Will your next comment be forgettable AI fluff… or the reason someone DMs you?

    P.S. The best LinkedIn growth hack isn’t a tool. It’s you. Agree? Drop your #1 tip for authentic engagement below.

    -------
    Hi, I'm Adam Strong and I help founders scale, systemise, and exit businesses in 12-24 months, without losing their sanity. Listen to the full episode with me and Ruben Hassid; link below.

  • View profile for Divya Gokulnath

    Entrepreneur | Teacher | Built for Impact, Led with Heart | Educating 250M+ Learners | Global Voice in AI, Equity & Innovation

    444,226 followers

    It’s easy to forget today that the world’s first programmer, Ada Lovelace, was a woman. If Charles Babbage is the father of computers, Ada is undeniably the mother of programming. She not only wrote the first algorithm but also questioned, even in the 1840s, how technology might reshape society.

    Fast forward to today: artificial intelligence is being hailed as the backbone of humanity’s future. And yet, as things stand, AI risks leaving women behind. Men significantly outnumber women in AI talent pipelines, in research roles, and even in consumer adoption of AI tools. Leadership representation is even worse.

    AI is not neutral. It learns from historical data, the same data that carries decades of gender biases, and in doing so it can reinforce and even amplify those biases. When AI reflects a world where women have been underrepresented, it reproduces that world in its outputs. You know, the world where women “don’t belong in tech”.

    Globally, women make up less than one-third of tech employees and only a fifth of the AI workforce. And according to recent research, women are less likely to believe AI will impact their jobs, or benefit them. That is because they worry, more than men, about the ethics of using these tools or about being judged for relying on them. But if AI boosts productivity, and women hesitate to use it, the gender gap in pay, opportunity, and leadership might widen even further.

    Companies must do more than simply “offer access” to AI. They must actively invite, encourage, and support women in experimenting with these tools. Inclusion, after all, doesn’t happen by accident. A century from now, history could say that AI deepened gender inequality, or that we acted early and changed the story. The choice is ours.
