Understanding Deepfake Risks

Explore top LinkedIn content from expert professionals.

  • View profile for Jason Rebholz
    Jason Rebholz is an Influencer

    Co-Founder & CEO @ Evoke Security | Agentic Security, AI Security

    32,063 followers

    There’s more to the $25 million deepfake story than what you see in the headlines. I pulled the original story to get the full scoop. Here are the steps the scammer took:

    1. The scammers sent a phishing email to up to three finance employees in mid-January, saying a “secret transaction” had to be done.
    2. One of the finance employees fell for the phishing email. This led to the scammers inviting the finance employee to a video conference. The video conference included what appeared to be the company CFO, other staff, and some unknown outsiders. This was the deepfake technology at work, mimicking employees’ faces and voices.
    3. On the group video conference, the scammers asked the finance employee to do a self-introduction but never interacted with them. This limited the likelihood of getting caught. Instead, the scammers just gave orders from a script and moved on to the next phase of the attack.
    4. The scammers followed up with the victim via instant messaging, emails, and one-on-one video calls using deepfakes.
    5. The finance employee then made 15 transfers totaling $25.6 million USD.

    As you can see, deepfakes were a key tool for the attacker, but persistence was critical here too. The scammers did not let up and did all they could to pressure the individual into transferring the funds.

    So, what can businesses do to mitigate this type of attack in the age of deepfakes?

    - Always report suspicious phishing emails to your security team. In this case, the other phished employees could have been an early warning that something weird was happening.
    - Trust your gut. The finance employee reported a “moment of doubt” but ultimately went forward with the transfer after the video call and the scammers’ persistence. If something doesn’t feel right, slow down and verify.
    - Lean into out-of-band authentication for verification. Use a known-good method of contact with the individual to verify the legitimacy of a transaction (see the sketch after this post).
    - Explore technology-driven identity verification platforms for high-dollar wire transfers. This can help reduce the chance of human error.

    And one of the best pieces of advice I saw was from Nate Lee yesterday, who called out building a culture where your employees are empowered to verify transaction requests. Nate said the following: “The CEO/CFO and everyone with power to transfer money needs to be aligned on and communicate the above. You want to ensure the person doing the transfer doesn't feel that by asking for additional validation they're pushing back against or acting in a way that signals they don't trust the leader.”

    Stay safe (and real) out there.

    ------------------------------

    📝 Interested in leveling up your security knowledge? Sign up for my weekly newsletter using the blog link at the top of this post.
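
    To make that out-of-band step concrete, here is a minimal Python sketch. It assumes a contact directory populated before any request arrives and an SMS hook; `KNOWN_GOOD_CONTACTS` and `send_sms` are hypothetical stand-ins, not anything from the post above.

    ```python
    # Minimal out-of-band verification sketch (illustrative assumptions only).
    import secrets

    # Contact details on file BEFORE any request arrives -- never taken from
    # the message, call, or email that asked for the transfer.
    KNOWN_GOOD_CONTACTS = {
        "cfo@example.com": "+1-555-0100",  # hypothetical directory entry
    }

    def send_sms(phone: str, body: str) -> None:
        """Hypothetical hook into whatever SMS/voice provider you use."""
        print(f"[SMS to {phone}] {body}")

    def start_out_of_band_check(requester_email: str, amount_usd: float) -> str:
        """Send a one-time code over a separate, pre-registered channel."""
        phone = KNOWN_GOOD_CONTACTS[requester_email]  # KeyError -> unknown requester, stop
        code = f"{secrets.randbelow(10**6):06d}"      # unpredictable 6-digit code
        send_sms(phone, f"Confirm transfer of ${amount_usd:,.2f}? Code: {code}")
        return code

    def confirm(expected_code: str, supplied_code: str) -> bool:
        """Only release funds if the code read back over the call matches."""
        return secrets.compare_digest(expected_code, supplied_code)
    ```

    The design point: the confirmation channel comes from records you already held, never from contact details supplied in the request itself.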

  • View profile for Dr. Barry Scannell
    Dr. Barry Scannell is an Influencer

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | Member of the Board of Irish Museum of Modern Art | PhD in AI & Copyright

    59,608 followers

    There’s a pretty good chance that the shocking rate at which AI is advancing is out-pacing your cyber security training, policies and maybe even technologies. Have you addressed the use of AI and deepfakes in your cyber security policies?

    In a recent and alarming development that seems to have leapt straight from the pages of a science fiction novel, a Hong Kong-based finance worker at a multinational firm was defrauded of $25 million, falling victim to an elaborate scam that employed deepfake technology to impersonate the company's CFO. This incident, which unfolded during a video conference call, marks a disturbing milestone in the intersection of cybercrime and AI, underscoring the urgent imperative for companies to bolster their cybersecurity frameworks, particularly against the backdrop of deepfake technology.

    The mechanics of the scam were deceptively simple yet devastatingly effective. The finance employee was lured into a video call with several participants, believed to be colleagues and the CFO, only to discover later that each participant was a digital fabrication. The deepfake avatars, mirroring the appearance and voice of real company personnel, instructed the employee to initiate a "secret transaction", leading to the unauthorised transfer of $25.6 million.

    This incident is not an isolated event but rather a harbinger of the potential threats posed by AI-driven disinformation and fraud. The use of deepfake technology to bypass facial recognition software, impersonate individuals for fraudulent purposes, and undermine the integrity of personal and corporate identities presents a clear and present danger. The case in Hong Kong, where fraudsters successfully manipulated digital identities to orchestrate financial theft, exemplifies the sophistication of contemporary cybercrime.

    The implications of this event extend far beyond the immediate financial loss. It serves as a stark reminder of the vulnerabilities inherent in digital communication platforms and the necessity for robust verification processes. The reliance on video conferencing and digital communication, accelerated by the global pandemic, has exposed systemic weaknesses ripe for exploitation.

    In response to this escalating threat, it is incumbent upon companies to adopt comprehensive cybersecurity strategies that address the unique challenges posed by deepfake technology. This includes implementing advanced authentication protocols, raising awareness and training employees on the potential risks of deepfakes, and deploying AI-driven security measures capable of detecting and neutralising synthetic media.

    As AI outputs become increasingly indistinguishable from reality, the line between authentic and artificial communication will blur, challenging individuals and organisations to navigate a new frontier of digital authenticity. It compels a reevaluation of the assumptions underpinning digital trust and identity verification, urging a proactive approach to cyber defence.

  • View profile for Arockia Liborious
    Arockia Liborious is an Influencer
    39,249 followers

    The New Corporate Threat: Deepfakes That Even Experts Can't Detect

    Welcome to the new reality where AI doesn’t just generate content, it manufactures convincing lies. You’ve probably seen it:

    - A CEO announces a fake acquisition.
    - A politician "says" something they never did.
    - A voice note "from your boss" requests a fund transfer.

    It all looks real. But it’s not. It’s a deepfake: AI-generated audio, video, or images designed to deceive.

    Why it matters: Deepfakes are no longer just internet tricks or entertainment. They’re now:

    - Financial fraud enablers (voice clones used to scam employees)
    - Corporate risk vectors (fake news impacting stock prices)
    - Political weapons (manipulated clips used to sway public opinion)
    - Personal threats (identity misuse, blackmail, defamation)

    How to spot a deepfake. Look for:

    - Unnatural blinking or awkward lip sync
    - Plastic skin or weird lighting
    - Robotic tone or emotionless speech
    - Out-of-character statements
    - No credible source backing the video

    If it feels off, it probably is.

    What you can do:

    - Pause before sharing
    - Use tools like Deepware, Microsoft Video Authenticator, or Adobe Verify
    - Train your teams, especially PR, legal, and finance
    - Push for content provenance in your organization (a sketch of a simple provenance check follows this post)

    In the GenAI era, trust is currency. Don’t spend it on content you didn’t verify. #artificialintelligence
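
    One lightweight way to start with content provenance, sketched below under stated assumptions: the publisher distributes a manifest of file digests over a trusted channel, and recipients check media against it before trusting or sharing. Standards such as C2PA go much further by embedding signed credentials in the media itself; this sketch shows only the core idea, and the manifest format is hypothetical.

    ```python
    # Minimal content-provenance sketch: compare a media file's SHA-256
    # digest against a manifest published by the source over a trusted
    # channel. Illustrative only; not a substitute for signed credentials.
    import hashlib
    import json
    from pathlib import Path

    def file_digest(path: str) -> str:
        """Hash the file in chunks so large videos don't exhaust memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_against_manifest(path: str, manifest_path: str) -> bool:
        """manifest.json (hypothetical) maps filenames to known-good digests."""
        manifest = json.loads(Path(manifest_path).read_text())
        expected = manifest.get(Path(path).name)
        return expected is not None and expected == file_digest(path)
    ```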

  • View profile for Jaclyn Lee PhD, IHRP-MP, PBM
    Jaclyn Lee PhD, IHRP-MP, PBM is an Influencer

    LinkedIn Top Voice I Linkedin Power Profile I CHRO I Author I Influencer

    25,587 followers

    𝗧𝗵𝗲 𝗥𝗶𝘀𝗲 𝗼𝗳 𝗔𝗜 𝗗𝗲𝗲𝗽𝗳𝗮𝗸𝗲𝘀 𝗶𝗻 𝗥𝗲𝗰𝗿𝘂𝗶𝘁𝗺𝗲𝗻𝘁 – 𝗔𝗿𝗲 𝗬𝗼𝘂 𝗣𝗿𝗲𝗽𝗮𝗿𝗲𝗱?

    Recently, a recruiter at a remote digital studio encountered a shocking experience—a job candidate who used deepfake technology to attend a virtual interview. The signs were subtle at first: reluctance to turn on the camera, unnatural facial movements, and distorted video quality. When asked to perform a simple gesture, the video abruptly ended. It was a chilling reminder that as technology advances, so do the tactics used to deceive.

    Unfortunately, this isn’t an isolated incident. Cases of job applicants using AI-generated avatars or voice-changing tools are starting to surface more frequently—especially in remote hiring scenarios.

    𝗔𝘀 𝗛𝗥 𝗽𝗿𝗼𝗳𝗲𝘀𝘀𝗶𝗼𝗻𝗮𝗹𝘀 𝗮𝗻𝗱 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝗹𝗲𝗮𝗱𝗲𝗿𝘀, 𝘄𝗲 𝗺𝘂𝘀𝘁 𝗮𝘀𝗸 𝗼𝘂𝗿𝘀𝗲𝗹𝘃𝗲𝘀:

    1. Are our hiring processes resilient against such threats?
    2. Are our teams trained to spot the red flags?
    3. Are we relying too heavily on virtual processes without the right checks in place?

    𝗛𝗲𝗿𝗲’𝘀 𝘄𝗵𝗮𝘁 𝘄𝗲 𝗰𝗮𝗻 𝗱𝗼:

    • 𝗜𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁 𝗺𝘂𝗹𝘁𝗶-𝗹𝗮𝘆𝗲𝗿𝗲𝗱 𝘃𝗲𝗿𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 – Ask for live camera interaction and conduct structured behavioural interviews (a sketch of a simple live-challenge prompt follows this post).
    • 𝗧𝗿𝗮𝗶𝗻 𝗵𝗶𝗿𝗶𝗻𝗴 𝗺𝗮𝗻𝗮𝗴𝗲𝗿𝘀 𝗮𝗻𝗱 𝗿𝗲𝗰𝗿𝘂𝗶𝘁𝗲𝗿𝘀 – Help them recognise signs like voice delays, facial distortions, or a mismatch between lip movement and audio.
    • 𝗦𝘁𝗿𝗲𝗻𝗴𝘁𝗵𝗲𝗻 𝗱𝗶𝗴𝗶𝘁𝗮𝗹 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗮𝘄𝗮𝗿𝗲𝗻𝗲𝘀𝘀 – Treat recruitment fraud as seriously as cyber threats.

    Technology should empower us, not expose us. It’s time we integrate ethical AI use with vigilant human judgement to protect the integrity of our hiring processes and the trust in our organisations.

    Here are 2 LinkedIn posts from fellow professionals sharing their recent personal experiences during Zoom interviews. These real-life encounters serve as powerful reminders of how digital deception is evolving. Let’s keep the conversation going, stay informed, and stand united in safeguarding the integrity of our hiring processes.

    https://lnkd.in/gNstP5Qp
    https://lnkd.in/gxp8BEpv
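
    A minimal sketch of the live-interaction idea from the first bullet: generate an unpredictable physical challenge for the interviewer to read out. Real-time face swaps tend to degrade on occlusion and profile views, which is why random prompts raise the bar. The challenge list is an illustrative assumption, and a human still judges the response.

    ```python
    # Illustrative sketch: pick an unpredictable live challenge for a
    # video interview. Fixed scripts can be rehearsed; random ones can't.
    import secrets

    CHALLENGES = [
        "Turn your head slowly to the left, then to the right.",
        "Wave your hand in front of your face.",
        "Pick up a nearby object and hold it next to your cheek.",
        "Stand up and step back from the camera.",
    ]

    def next_challenge() -> str:
        """secrets.choice keeps the prompt unpredictable run to run."""
        return secrets.choice(CHALLENGES)

    if __name__ == "__main__":
        print("Ask the candidate to:", next_challenge())
    ```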

  • View profile for David Sadigh

    Founder & CEO at DLG (Digital Luxury Group)

    11,575 followers

    🚨 SCAM: Someone cloned my voice 🚨

    Today, some of my colleagues and personal network received a sophisticated scam—a message from a French number, displaying my profile picture, and worst of all… a voice message mimicking my voice. Yes, MY voice. Same tonality, same (cute) little French accent…

    This kind of fraud is becoming more common, and it could happen to you or your business soon. A few things to remember:

    1️⃣ AI-generated voices are now highly realistic – If your voice is online (videos, podcasts, interviews), scammers can clone it. You don’t believe it until it happens to you.
    2️⃣ Never trust voice alone – Always verify unusual requests through a second channel (text, email, or in person).
    3️⃣ As often, deepfake scams rely on urgency – If someone is pressuring you, stop and confirm before acting.
    4️⃣ Use a “safe word” with close contacts (and kids!) – A pre-agreed phrase can help confirm someone’s identity in critical situations (a sketch of a safe-word check follows this post).
    5️⃣ Be mindful of your digital footprint – The more personal data (voice, images, videos) you share publicly, the easier it is to be impersonated.
    6️⃣ Raise awareness in your company & network (like I’m doing here) – Businesses need strict identity verification protocols, especially for financial transactions.

    Welcome to 2025! #Deepfake #AI #CyberSecurity #ScamPrevention #FraudDetection
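
    A minimal sketch of the safe-word idea, assuming you want to keep only a salted hash of the phrase on file rather than the phrase itself; the enrollment flow and the example phrase are hypothetical.

    ```python
    # Illustrative safe-word check: store a salted hash of the agreed
    # phrase, then compare in constant time during a suspicious call.
    import hashlib
    import hmac
    import os

    def enroll(phrase: str) -> tuple[bytes, bytes]:
        """Done once, in person. Returns (salt, digest) to keep on file."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", phrase.lower().encode(), salt, 100_000)
        return salt, digest

    def check(phrase: str, salt: bytes, digest: bytes) -> bool:
        """A voice clone can't guess a secret that was never shared online."""
        candidate = hashlib.pbkdf2_hmac("sha256", phrase.lower().encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = enroll("tangerine bicycle")  # hypothetical phrase
    assert check("Tangerine Bicycle", salt, digest)
    ```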

  • View profile for David Birch

    International keynote speaker, author, advisor, commentator on and investor in digital financial services. Recognised thought leader whose books on digital identity, money & assets have been widely praised.

    24,894 followers

    M&S, Harrods and Co-op have all been hit by serious cyberattacks this year, with M&S losing hundreds of millions in value when payments went down.

    One emerging threat? Fake remote workers. Some firms have hired North Korean operatives with AI-polished faces, stolen identities and spotless (because fraudulent) background checks.

    The suggested fix? Keep your cameras on. The reality? Deepfake video feeds are already good enough to fool entire conference rooms. Arup learned this the hard way when a synthetic CFO ordered a $25m transfer.

    This isn’t a visibility problem. It’s a verifiable identity problem. Banks use strong biometrics, cryptographic proofs and verifiable credentials for KYC every day. Employers need the same for KYE. Digital signatures can’t be deepfaked; video calls can.

    So here’s the question: Are we finally ready to move from “seeing is believing” to “cryptographically proving is believing”?

    #digitalidentity #verifiablecredentials #authentication #authorisation #verification
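
    To ground the “digital signatures can’t be deepfaked” point, here is a minimal sketch using Ed25519 from the Python `cryptography` package. The instruction format and key-distribution story are assumptions; the point is that verification depends on a key exchanged at onboarding, not on how convincing a face or voice looks.

    ```python
    # Minimal signing sketch with Ed25519 (python `cryptography` package).
    # A transfer instruction is honoured only if it verifies against a
    # public key registered when the executive was onboarded.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Key pair generated once at onboarding; the private key stays with the CFO.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()  # registered with the finance team

    # The CFO signs the exact instruction (format here is a made-up example).
    instruction = b"transfer:25600000:USD:ref=2024-02-02"
    signature = private_key.sign(instruction)

    # Finance verifies before releasing funds; verify() raises on any mismatch.
    try:
        public_key.verify(signature, instruction)
        print("Instruction authentic - proceed to normal approval flow.")
    except InvalidSignature:
        print("Signature check failed - do not act on this request.")
    ```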

  • View profile for Tom Vazdar

    Principal Consultant | Cybersecurity & AI (Governance, Risk & Compliance) | CEO @ Riskoria | Media Commentator on Cybercrime & Digital Fraud | Creator of HeartOSINT

    10,004 followers

    What happens when deepfake technology becomes a service anyone can buy?

    I've been tracking the Deepfakes-as-a-Service market, and the numbers are alarming. Deepfake fraud attempts jumped 1,300% in 2024. From one attack per month to seven per day.

    Here's what keeps me up at night: the February 2024 Arup case. A finance employee joined a video call with the CFO and several colleagues. Everyone looked real. Everyone sounded real. The employee authorized $25.6 million in wire transfers. Every single person on that call was AI-generated.

    This wasn't some nation-state operation. Underground marketplaces now offer deepfake creation as a point-and-click service. No technical skills required. Just cryptocurrency and malicious intent.

    The psychology is what makes it work. We're wired to trust what we see and hear, especially when it matches our expectations. A realistic video of your CFO making a familiar request triggers immediate credibility. By the time you think to question it, the money's gone.

    Traditional defenses aren't enough anymore:

    → Voice verification systems can be defeated
    → Video calls don't guarantee authenticity
    → Even following verification procedures can fail

    Organizations need multi-channel verification protocols. If someone requests a wire transfer on video, verify through a completely separate channel. Code words. Challenge-response systems. Procedural friction on high-risk transactions (a sketch of that kind of friction follows this post).

    But here's the problem: 99% of security leaders say they're confident in their deepfake defenses. Only 8.4% actually scored above 80% in detection tests. We think we're protected when we're actually vulnerable.

    Have you updated your verification procedures for the deepfake era?

    #Cybersecurity #AISecurity #DeepfakeFraud #DigitalRisk #FraudPrevention
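
    A minimal sketch of procedural friction under stated assumptions: the dollar thresholds and roles are examples, and approvals only count when they arrive on a different channel than the one the request came in on.

    ```python
    # Illustrative "procedural friction": large transfers need approvals
    # from multiple people, gathered off the requesting channel.
    from dataclasses import dataclass, field

    APPROVALS_REQUIRED = {50_000: 2, 1_000_000: 3}  # USD thresholds (example policy)

    @dataclass
    class TransferRequest:
        amount_usd: float
        origin_channel: str  # e.g. "video-call" -- where the request arrived
        approvals: set[str] = field(default_factory=set)

        def required(self) -> int:
            needed = 1
            for threshold, count in sorted(APPROVALS_REQUIRED.items()):
                if self.amount_usd >= threshold:
                    needed = count
            return needed

        def approve(self, approver: str, channel: str) -> None:
            # Approvals on the same channel as the request are ignored.
            if channel != self.origin_channel:
                self.approvals.add(approver)

        def releasable(self) -> bool:
            return len(self.approvals) >= self.required()

    req = TransferRequest(amount_usd=25_600_000, origin_channel="video-call")
    req.approve("cfo", channel="video-call")       # ignored: same channel
    req.approve("controller", channel="phone")     # counts
    req.approve("treasurer", channel="in-person")  # counts
    print(req.releasable())  # False -- still needs a third independent approval
    ```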

  • View profile for Ben Colman

    CEO at Reality Defender | 1st Place RSA | JP Morgan Hall of Innovation | Ex-Goldman Sachs, Google, YCombinator

    20,952 followers

    I just submitted my comments to FINRA about deepfakes in financial services. They asked for input on modernizing rules for the digital workplace. Naturally, I had thoughts.

    FINRA's 55-page regulatory notice covers everything from remote work to AI chatbots. Buried on page 44? A single paragraph about deepfakes. One paragraph. For a threat that could cost financial services $25 billion by 2027.

    Here's what's happening right now: Deepfakes are bypassing biometric verification during customer onboarding. AI voices are authorizing wire transfers. Synthetic executives are joining Zoom calls. And the current rules? They assume the person on your screen is actually a person. Wild assumption in 2025.

    The fascinating part isn't that regulators are behind the curve. That's expected. It's that they're asking the right questions. "How have technological advances helped or hindered members' ability to fight fraud?" Great question. Here's the answer no one wants to hear: The same AI making compliance more efficient is making fraud more effective. It's an arms race. And right now, the bad guys have better weapons.

    At Reality Defender, we see the casualties daily. Banks discovering their "CFO" never joined that call. Investment firms realizing they onboarded synthetic identities.

    The good news? FINRA's listening. The better news? We don't need to wait for new rules to protect ourselves. Because while regulators debate modernization, someone's using your earnings call to train their voice model.

    Read more about why we did this and see our comments in full below 👇
    https://lnkd.in/en7nVb9a

  • View profile for Jodi Daniels

    Practical Privacy Advisor / Fractional Privacy Officer / AI Governance / WSJ Best Selling Author / Keynote Speaker

    20,544 followers

    Fraud no longer hides in the shadows. It might show up disguised as someone you know.

    Like when the CEO calls and her voice on the phone sounds exactly right. Her urgency feels real, and the wire transfer request to a new bank account seems legitimate, so accounting releases the funds. And just like that, the company loses $20k to a fraudster who weaponized AI.

    This isn't science fiction. It's happening right now to individuals and organizations alike. Fraudsters are creating disturbingly real AI deepfakes that can fool even the most cautious people. And companies need strategies to combat them. Because those audio and visual cues we've relied on for decades are no longer reliable indicators of authenticity when it comes to AI deepfakes.

    Organizations can fight back with these defense strategies:

    ✔ Stay cautious and be wary of anyone requesting money or personal information, even if they look or sound like someone you trust.
    ✔ Don’t send money or share sensitive data in response to a single phone or video call. Phone numbers can be spoofed, so always verify a person’s identity by contacting them separately at a number you trust.
    ✔ Use small action requests, like asking a person to turn their head, blink repeatedly, or hum a song while on a video or phone call. If they decline, freeze up, or go silent, it could be a fraudster.
    ✔ Establish a safe word that only your inner circle knows to confirm the identity of someone claiming to be a colleague, family member, or friend.
    ✔ Use strong passwords. Enable multifactor authentication (MFA) on all company devices and accounts whenever possible (a sketch of one-time-code MFA follows this post).

    And don’t forget to report AI deepfakes to law enforcement and any relevant social media channels, websites, and other platforms where the encounter took place.

    These tips work for individuals too, because hackers like causing havoc with anyone they can. The question isn't whether AI deepfakes will target your organization. It's whether your organization will be ready when they do.

    Food for thought as we kick off Cybersecurity Awareness Month.

    ♻ Share our infographic to help companies combat AI deepfakes.
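
    For the MFA bullet, here is a minimal one-time-code sketch using the widely used `pyotp` package (RFC 6238 TOTP, the scheme behind most authenticator apps); the enrollment and login flow around it are assumptions, and the account names are placeholders.

    ```python
    # Minimal TOTP sketch with pyotp: a time-based one-time password
    # adds a second factor that a cloned voice or face cannot supply.
    import pyotp

    # Generated once per user at enrollment and stored server-side; the
    # user loads it into an authenticator app via the provisioning URI.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    print("Provisioning URI for the authenticator app:")
    print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

    # At login, the user types the 6-digit code their app shows.
    code = totp.now()                            # simulating the app here
    print("Code accepted:", totp.verify(code))   # True within the time window
    ```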

  • View profile for Adnan Amjad

    US Cyber Leader at Deloitte

    4,337 followers

    Deepfake-related fraud is increasingly pervasive. Singular points of security are no longer reliable enough – especially for high-stakes environments like financial service organizations, as a recent Wall Street Journal article featuring Deloitte’s Anish Srivastava explains (https://deloi.tt/4nlto2c).

    To address these complex and evolving threats, banks and financial institutions should implement multi-layered “defense-in-depth” security strategies that can proactively detect, mitigate, and respond to deepfake threats and restore trust.

    Organizations can implement multiple layers of security to protect against deepfakes, including secure user onboarding, contextual analysis, media liveness confirmation, strong authentication and session binding measures, and deepfake detection AI (a sketch of such a layered check follows this post).

    Maintaining deepfake protection requires ongoing employee training, regular security audits, continuous monitoring of emerging threats, and prompt response to incidents.
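
    A minimal sketch of what "multiple layers" can mean in code: a session passes only if every layer passes, so no single check is a single point of failure. The layer names mirror the post; the checks themselves are stubbed assumptions standing in for real onboarding, liveness, session-binding, and detection systems, and all thresholds are illustrative.

    ```python
    # Illustrative defense-in-depth pipeline: every layer must pass.
    from typing import Callable

    Check = Callable[[dict], bool]

    def onboarding_verified(session: dict) -> bool:
        return session.get("identity_proofed", False)

    def liveness_confirmed(session: dict) -> bool:
        return session.get("liveness_score", 0.0) >= 0.9   # example threshold

    def session_bound(session: dict) -> bool:
        return session.get("auth_token_bound_to_device", False)

    def deepfake_score_ok(session: dict) -> bool:
        return session.get("synthetic_media_score", 1.0) <= 0.1  # example threshold

    LAYERS: list[tuple[str, Check]] = [
        ("secure onboarding", onboarding_verified),
        ("media liveness", liveness_confirmed),
        ("session binding", session_bound),
        ("deepfake detection", deepfake_score_ok),
    ]

    def evaluate(session: dict) -> bool:
        for name, check in LAYERS:
            if not check(session):
                print(f"Blocked at layer: {name}")
                return False
        return True
    ```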
