InReality

Technology, Information and Media

Verifying reality in the age of AI

About us

InReality is verifying reality in the age of AI 🛡️ InReality adds a secure digital signature to real content - images, videos, audio, and text. Our technology stores these signatures and allows others to verify them, confirming to viewers that the content is authentic, unaltered and, crucially, not a deepfake. We protect the entire journey from creation to viewer, ensuring real content remains trusted, secure, and verifiable. We help multiple industries with their AI-generated and deepfaked content challenges: delivering real content to the news, protecting insurers from fraudulent claims, and ensuring you are talking to real people in video calls. Find out more at https://inreality.io/
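The sign-store-verify flow described above can be sketched in a few lines. This is a simplified illustration, not InReality's actual implementation: it uses an HMAC with a shared key as a stand-in for a real digital signature scheme (production systems would use asymmetric keys), and the key name and helper functions are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key for illustration only; a real system
# would sign with a private key and verify with a public one.
SECRET_KEY = b"demo-signing-key"

def sign_content(content: bytes) -> str:
    """Compute a signature over the raw bytes of a piece of content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Re-compute the signature and compare; any alteration breaks the match."""
    return hmac.compare_digest(sign_content(content), signature)

original = b"frame bytes of a real photo"
sig = sign_content(original)

print(verify_content(original, sig))           # unaltered content verifies: True
print(verify_content(b"tampered bytes", sig))  # altered content fails: False
```

The key property is that the signature is bound to the exact bytes of the content: a viewer who re-computes it can confirm nothing was changed between creation and delivery.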

Website
https://inreality.io/
Industry
Technology, Information and Media
Company size
2-10 employees
Headquarters
Copenhagen
Type
Self-Owned
Founded
2023

Locations

Employees at InReality

Updates

  • So good to meet you Nadia Naffi Ph.D.! Coming from research on how to tackle deepfakes, it was great to speak with Nadia this afternoon. We always become so energised after speaking with other passionate people who are trying to solve this problem, particularly through using technology 🚀

    We are entering a world where almost anything can be faked. And asking individuals to detect every fake, every time, is not just unrealistic. It is UNFAIR. https://lnkd.in/eQJ9iwGp

    That is one of the deepest consequences of deepfakes and synthetic media: we are no longer only trying to prove that false content is fake. We are entering a world where real content must increasingly prove that it is real. That shift is profound. It changes the burden placed on institutions, professionals, and citizens. It changes how trust is built. And it changes what societies will need in order to function.

    Yes, education is part of the answer. It must be. But education alone cannot carry this burden. What we need is a systemic response: education, standards, infrastructures, and tools that help people navigate a world where trust can no longer be assumed.

    This is why I was genuinely thrilled to meet today with Alicia Scott and Jeppe Nørregaard, co-founders of InReality. https://inreality.io/ We had an important conversation about something that will only grow in value in the years ahead: the ability to guarantee what is real in a world saturated with uncertainty.

    What I find especially compelling in their solution is that it is built on C2PA and designed to help organizations integrate authenticity and provenance in a seamless, scalable, and accessible way, so they can show that content is authentic and has not been manipulated. Their mission is clear: help organizations and creators guarantee real.

    The implications are enormous. Education. Health. Politics. Insurance. Finance. Legal contexts. Public institutions and MORE! No single-sector solution will be enough for what comes next. If we want to face a future in which unethical uses of AI become more prominent, the response cannot rest on individuals alone. It has to be systemic. And we will need to do far more than we are doing now.
#Deepfakes #SyntheticMedia #Disinformation #DigitalTrust #ResponsibleAI #C2PA #ContentAuthenticity #MediaLiteracy

  • InReality reposted this

    I'm speaking at CEPIC this year! 🚀🎤 I will round off the three-day conference with a keynote on how and why to protect REAL visual media in the fast-evolving AI landscape. CEPIC aims to bring together players in the visual media world - creators, rights organizations and agencies - and this year is focused on the opportunities and risks that AI brings to creators! So if you're headed to Valencia 7-8th of May, let me know. I'd love to catch up ☺️ 🌞 #CEPIC #authenticcontent #AI Gilles Devicq Sylvie Fodor Andrea Stern InReality

  • We need PROACTIVE rather than REACTIVE tools to tackle misinformation. AI-generated content is spreading faster and more convincingly than ever, and relying solely on detection tools just isn't enough anymore. These tools struggle to keep up with how quickly AI evolves and how rapidly misinformation spreads across social media. The pace of this change demands PROACTIVE SOLUTIONS - like clearly labelling AI content - that help everyone instantly recognise authenticity. It's often unclear who should be responsible for vetting content - we need a simple solution to bring clarity to everyone. Solely relying on each of us to be detectives, given the current complexity of AI, is just not realistic. 🕵🏻♀️ Traditional fact-checking lags behind today's speed of misinformation. To protect trust and truth online, we need forward-thinking strategies that address the problem before misinformation takes hold. BUT THERE IS GOOD NEWS! Luckily we at InReality are working on just that! 😎 #misinformation #fakecontent #proofofreality #contentcertification

  • A new week and a new milestone for InReality!! 🚀 We're really happy to announce that we've been selected as a Deep Tech Pioneer by Hello Tomorrow. It's brilliant to have been chosen from over 4,800 applications across 108 countries to be part of the Deep Tech Pioneer community. This means we are now part of the world's LARGEST community of deep tech startups solving the world's TOUGHEST challenges. 🌍✨ Feeling excited to continue our work as part of the community!! #DeepTechPioneers #HelloTomorrow2026

  • The first large social media platform adopts the new standard on content authenticity! 🎉 LinkedIn is showing C2PA content authentication credentials on images and other content. Have you seen the 'CR' in the top corner of an image? This shows the audit trail of the image. We support the C2PA standard by being the network security layer, ensuring no metadata can be altered! 🛡️ We secure these credentials generated at device level and then allow anyone viewing the content to see the details. Great to see more big platforms moving in the direction of authenticating content! https://lnkd.in/d-MYDAhU

  • Misinformation vs disinformation - What's the difference?? Let's delve in... 🕵🏻♀️ We often see these words together, but what is the difference between them? Misinformation = incorrect information is shared with NO INTENT to deceive - the person sharing may not even know it is incorrect. Disinformation = incorrect information is shared where there IS INTENT to spread known falsehoods. So it all boils down to what the person or entity sharing it is trying to do. Either way, the end viewer needs to know quickly whether it is real or not. That's what we're helping with 👍 #misinformation #disinformation #media

  • How often do you open your social media and see content which looks a little 'too perfect' or a bit 'off'? Well, you're not the only one. According to research from AI company Kapwing, 20% of content shown to a freshly opened YouTube account is now "low-quality AI video". Demand for real content on social media is growing! We are the ones who can guarantee that the content you see is 100% real. https://lnkd.in/e8RwRXh3 #media #socialmedia #digitalcontent

  • AI-generated content is infiltrating more and more, particularly on social media. We can see examples from the USA relating to ICE: https://lnkd.in/eg4t8XgS Also in Gaza: https://lnkd.in/eszimVuN It is becoming increasingly difficult for viewers to decipher whether content is real or not. We're here to help! 🚀 By helping you quickly understand whether content is real or not, we help stop the spread of misinformation, allowing you to focus on the facts. #AIgenerated #GenAI #authenticcontent #news #media
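The C2PA-style tamper check mentioned in the updates above, where altering an image's metadata invalidates its credential, can be illustrated with a minimal sketch. This is not the C2PA wire format: it simply canonicalises a hypothetical metadata manifest and hashes it, so any edit to the metadata produces a different digest.

```python
import hashlib
import json

def manifest_digest(manifest: dict) -> str:
    # Canonicalise the metadata (sorted keys) so the digest is reproducible,
    # then hash it; any edit to the metadata changes the digest.
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical provenance manifest recorded at capture time
manifest = {"creator": "camera-001", "captured": "2025-01-01T12:00:00Z"}
stored = manifest_digest(manifest)

# Later, a viewer re-computes the digest to check for alteration
assert manifest_digest(manifest) == stored

# Changing any field breaks the match
tampered = dict(manifest, captured="2024-06-01T00:00:00Z")
assert manifest_digest(tampered) != stored
```

In a full provenance system the digest would additionally be signed, chaining each edit to the one before it, so the viewer sees the whole audit trail rather than a single hash.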
