Rhett Sampson
North Bondi, New South Wales, Australia
2K followers
500+ connections
Articles by Rhett
-
WHY IS AI REPEATING THE MISTAKES OF THE PAST? Those who ignore history are doomed to repeat it.
In Computer Networks, Andrew Tanenbaum warned us: “A fully connected mesh of n systems requires n(n−1)/2 connections…
13
2 Comments -
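The teaser above quotes Tanenbaum's full-mesh formula. As a quick illustration (mine, not the article's), the quadratic growth of n(n−1)/2 pairwise connections can be checked in a few lines of Python:

```python
def mesh_links(n: int) -> int:
    """Connections needed for a fully connected mesh of n systems: n(n-1)/2."""
    return n * (n - 1) // 2

# Connections grow quadratically while the number of systems grows linearly.
for n in (10, 100, 1000):
    print(n, mesh_links(n))
# 10 -> 45, 100 -> 4950, 1000 -> 499500
```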
SPAN: The Semantic Fabric that Makes AI Agents Work (Jul 14, 2025)
The Internet Wasn’t Built for This We stand at the crossroads of a generational opportunity to fix the fundamental…
18
1 Comment -
Meredith Whittaker raised the right problems, but we built the solution. How SPAN-AI answers the critics of the agent economy. (Jul 3, 2025)
Meredith Whittaker, president of the Signal Foundation, recently warned that AI agents threaten privacy and security…
11
9 Comments -
Open Letter - Infrastructure for Sovereign AI: a Protocol-Layer Capability designed in Australia (Jul 2, 2025)
From: Rhett Sampson, Founder, GT Systems. To: Hon. Tony Burke, Minister for Home Affairs & Cyber Security…
19
10 Comments -
JUST HOW BROKEN IS THE OTT STREAMING INDUSTRY? (Nov 28, 2024)
The recent failures in the Netflix live streamcast of the Tyson-Paul fight have finally brought to a head the elephant…
49
18 Comments -
How a great Australian invented local area networking from an insightful comment by his wife (Jan 24, 2022)
I worked for Dr Peter Jones at his Australian companies, Techway and Network Automation, in the 80s. It was my first…
17
8 Comments -
REBUILDING OUR GLOBAL NETWORKS AFTER THE FIRES OF COVID (Apr 1, 2020)
CALL TO ACTION “There are reasons to suspect that we may not wish to build future digital communications networks…
16
5 Comments -
Time for some radical new thinking to save the NBN. (Oct 30, 2017)
The market abhors a vacuum. And no vacuum sucks like the NBN right now.
10
2 Comments -
Dream the Impossible Dream! (Jul 16, 2017)
We will be demonstrating the new Blust multi-media machine and distribution system on stand A48 at SMPTE this week at…
14
Activity
-
Rhett Sampson shared this: "Day two. Another one. Once more with feeling: this is another OAuth issue, and OAuth breaks under AI. SPAN uses defence-grade cryptographic security."

Shared post:
WARNING: Lovable is experiencing a mass data leak. If you built a project before Nov 2025, your source code, database credentials & AI chat history can be viewed by any free account today. Here's how to triage ASAP 👇

Hell of a way to start a week lol. Let's just treat this as a fun "part two" to yesterday's fiasco... (And don't get me started on the vibe-coding tax.) Anyway...

This is the result of a Broken Object Level Authorization (BOLA) flaw in Lovable's API. Firebase auth tokens are checked, but ownership is never verified. Any free account can query any project. Affected endpoints: /projects/{id}/*, /git/files, /git/file, /documents. All return 200 OK for pre-November 2025 projects.

A researcher (weezerOSINT) created a free account and pulled the full source tree of an admin panel built for Connected Women in AI, a real Danish nonprofit. 3,703 edits this year. Last touched 10 days ago. The source contained hardcoded SUPABASE_URL, SUPABASE_PUBLISHABLE_KEY, and SUPABASE_SERVICE_ROLE_KEY. The service role key bypasses every Supabase security policy. The researcher pulled real names, companies, and LinkedIn profiles of speakers from Accenture Denmark and Copenhagen Business School.

This was reported to Lovable on March 3 via HackerOne (report #3583821). Lovable shipped ownership checks for NEW projects and left every existing project exposed. A project created in April 2026 returns 403 Forbidden. 48 days later, the HackerOne report is still open. Per the researcher, affected accounts include employees at Nvidia, Microsoft, Uber, and Spotify. And every AI conversation (schemas, error logs, credentials, business logic, etc.) is stored and reachable through the same flaw. This is Lovable's second major security incident in 12 months.

What to do:
1. Understand what "private" actually means on Lovable. Legacy projects are readable by any free account via the public API. Lovable is calling this "intentional behavior." Their recommended fix is to set your project to private. That is a paid-tier feature.
2. Rotate everything that ever touched a Lovable project built before Nov 2025. Rotate Supabase service role keys FIRST; these bypass RLS entirely. Rotate every database credential (connection strings, anon keys, publishable keys). Rotate every API key you pasted into a Lovable AI chat for debugging. Rotate every third-party token in your source (Stripe, SendGrid, OpenAI, auth providers). Assume customer data in your Lovable-provisioned Supabase has already been enumerated.
3. Audit your actual exposure. Open your project URL in incognito; if you can load the code or chat history, so can a stranger. Check Supabase logs for anomalous read queries on user, billing, or admin tables over the last 48 days. Check your Lovable project's request logs for traffic to /projects/*, /git/files, /git/file, and /documents from unfamiliar sessions. If you ever pasted a credential into a Lovable chat, treat it as compromised.
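The "audit your actual exposure" step above can be partially scripted. The sketch below is illustrative only: the endpoint paths come from the post, but the base URL and project ID are placeholders, and an unauthenticated 200 is interpreted per the post's description of the BOLA flaw.

```python
import urllib.request
import urllib.error

def classify_status(status: int) -> str:
    """Interpret an unauthenticated response per the post's description:
    200 = readable by anyone (BOLA), 401/403 = auth/ownership enforced."""
    if status == 200:
        return "exposed"
    if status in (401, 403):
        return "protected"
    return "unknown"

def probe(base_url: str, project_id: str) -> dict:
    """Request each endpoint named in the post WITHOUT credentials and record
    how it responds. base_url and project_id are placeholders, not real values."""
    paths = [f"/projects/{project_id}", "/git/files", "/git/file", "/documents"]
    results = {}
    for path in paths:
        try:
            with urllib.request.urlopen(base_url + path, timeout=10) as resp:
                results[path] = classify_status(resp.status)
        except urllib.error.HTTPError as e:
            results[path] = classify_status(e.code)
        except urllib.error.URLError:
            results[path] = "unreachable"
    return results
```

Any "exposed" result would mean a stranger can read that resource too, which is exactly the incognito test the post recommends.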
-
Rhett Sampson shared this: "This is what happens when you use AI agent frameworks that depend on OAuth. SPAN uses cryptographically anchored ID, permissions, policy, authorisation and domain security. No OAuth."

Shared post:
WARNING: Vercel got hacked. If you've OAuth'd AI tools into Google, assume your source code, API keys, and customer data are being sold on the dark web today. Here's how to triage ASAP 👇

The attack vector was a compromised third-party AI tool (Context . ai) with a Google Workspace OAuth app. One OAuth grant led to one Vercel employee account, which led to source code, NPM tokens, GitHub tokens, and production secrets for a platform hosting a meaningful percentage of the modern web. Here's what to do right now:

1. Check if you're exposed (this takes maybe 60 seconds). Go to admin . google . com, then Security > Access and data control > API controls > App access control > Manage Third-Party App Access. Search for this OAuth Client ID: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj {dot} apps {dot} googleusercontent {dot} com. If found, revoke and block immediately.
2. Vercel cleanup (assume compromise). Anything NOT marked as a "sensitive environment variable" should be treated as leaked. Rotate every API key, DB credential, and token stored in Vercel env vars. Review your org's activity log for unusual deploys, invites, or token grants. Audit org members and remove anyone who shouldn't be there. Disconnect your GitHub integration, then reconnect it (this invalidates stale tokens).
3. Supply chain. ShinyHunters claims to hold Vercel's NPM and GitHub tokens. If true, a poisoned release of next, @vercel/*, or any Vercel-published package is on the table. Pin every Vercel-published dependency to an explicit, verified version. Rotate any credential that ever touched a Vercel build pipeline. Monitor build and deploy logs for anomalous steps.

Third-party OAuth apps that request scopes beyond the basics (name, email, profile pic) are a major new attack vector. Ask your Google admin to restrict "unconfigured third-party apps" to basic-info-only scopes. This won't be the last hack.
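One supply-chain step above, pinning Vercel-published packages to exact versions, can be checked mechanically. A minimal sketch, assuming a standard package.json layout; the watchlist is illustrative, not an authoritative inventory of Vercel-published packages:

```python
import json

# Illustrative watchlist; extend with whatever your lockfile actually pulls in.
WATCHLIST = ("next", "@vercel/")

def is_pinned(spec: str) -> bool:
    """An exact semver like '14.2.3' is pinned; ranges (^, ~, >=, *) are not."""
    return bool(spec) and spec[0].isdigit()

def unpinned_deps(package_json: str) -> list:
    """Return watched dependencies whose version spec is a range, not a pin."""
    pkg = json.loads(package_json)
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    watched = lambda name: name in WATCHLIST or any(
        name.startswith(p) for p in WATCHLIST if p.endswith("/")
    )
    return sorted(n for n, spec in deps.items() if watched(n) and not is_pinned(spec))

example = '{"dependencies": {"next": "^14.2.3", "@vercel/analytics": "1.0.0", "react": "^18.0.0"}}'
print(unpinned_deps(example))  # ['next']
```

Here "^14.2.3" is flagged because a caret range would silently accept a poisoned minor release, while "1.0.0" is an exact pin and passes.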
-
Rhett Sampson shared this: "This is a massive signal. The AI agent economy is real. Now we just need the AI agent fabric. #SPAN"

Shared article: Salesforce launches Headless 360 to turn its entire platform into infrastructure for AI agents | VentureBeat https://lnkd.in/gvEnUQHB
-
Rhett Sampson shared this: "Mark is spot on here. Then there's coordination beyond orchestration. And then there's coordination as infrastructure. That's the bottom line."

Shared post:
Everyone's betting on AI. Most are betting on the wrong layer. The week's venture deals tell a clear story about where capital thinks the value sits right now:

Layer 1 (Foundation models): OpenAI has raised $120B in total. Effectively infrastructure. The bet here is enormous, but the margin story is already getting complicated.
Layer 2 (Vertical agents): Harvey (legal) at $11B. Basis (accounting) just hit a unicorn valuation of $1.15B. These are agentic systems replacing knowledge-work workflows, not augmenting them. Strong early traction where buyers are motivated (BigLaw, enterprise finance).
Layer 3 (Orchestration): The quietest layer, and probably the most interesting. Who manages multi-agent workflows? Who audits them, governs them, and ensures they don't hallucinate at scale? Gartner forecasts 40% of enterprise applications will embed task-specific agents by the end of 2026, up from under 5% last year (https://lnkd.in/gdgTYw8e). If that's even half right, the orchestration layer becomes essential infrastructure fast.

The foundation model race is largely decided. The vertical agent race is early but crowded. Orchestration is where the next defensible position gets built.
-
Rhett Sampson shared this: "This is EXACTLY what SPAN does. It is the semantic layer Maria is talking about."

Shared post:
Climbing the Knowledge Pyramid: How AstraZeneca's Operations Knowledge Fabric Transforms Pharma Operations Through Connected Data

Enterprise AI initiatives are failing at scale: only 4% of companies create substantial AI value despite massive investments. The core issue is context starvation. While GenAI excels at reasoning, it lacks the nuanced enterprise knowledge and semantic understanding needed to make accurate, trustworthy decisions in complex operational environments. Traditional data architectures treat knowledge graphs as glorified databases, leading to semantic drift, brittle integrations, and AI systems that hallucinate confidently but incorrectly.

As organizations race toward autonomous operations and agentic AI, the ability to encode, access, and leverage enterprise context becomes the defining competitive advantage. Without semantic foundations, AI agents remain general-purpose assistants rather than specialized domain experts. The pharmaceutical industry faces unique challenges with regulatory compliance, complex supply chains, and life-critical decisions, making semantic precision not just valuable but essential for patient safety and operational excellence.

AstraZeneca's Operations Knowledge Fabric represents a fundamental architectural shift from data lakes to knowledge fabrics. They implemented a distributed semantic architecture using:
* Semantic-first data products
* Layered ontology model
* Living ontologies
* Graph-native infrastructure
* AI-enabled tooling

Rather than centralizing knowledge in databases, they embed semantics directly into data products and knowledge graphs, creating an interconnected web of meaning across manufacturing, supply chain, and drug development operations.

Impact and Results:
* Accelerated AI deployment
* Reduced development time
* Enhanced decision quality
* Future-ready architecture

Full talk: https://lnkd.in/dFxRZX6m

Dr. Maria Sorokina is the Knowledge Graph and Semantics Lead at AstraZeneca, where she spearheads the Operations Knowledge Fabric (OKF) initiative, a semantic architecture transforming pharmaceutical operations through connected data and AI-enabled decision making.

Connected Data London 2025 brought together leaders and innovators. Were you there? 🎥 Watch the sessions: https://lnkd.in/dh5KMvtv 📩 Join the community: https://lnkd.in/dWgyzZf
Welcome to Connected Data London's #TeaserTuesday. Every Tuesday, we share teasers from #CDL25 on our channels. #KnowledgeGraph #Graph #analytics #datascience #AI #graphDB #SemTech #Ontology
-
Rhett Sampson shared this: "Interesting day. Great to see Australia taking a bigger role in AI. Unfortunate timing, but these things happen. SPAN tightens AI security across the board. SPAN moves security from the edges to the fabric: if it doesn't meet policy, it doesn't move."
-
Rhett Sampson shared this: "This is what McKinsey et al. don't tell you about actually doing AI. And why you should do it, and the cost of not doing it. Brilliant."

Shared article: The AI-Native Playbook: How to Build, Govern and Scale the AI-Native Enterprise https://lnkd.in/gMr4UtnH
-
Rhett Sampson shared this: "Susan got it right. Everyone wanted to know where we'd been up till Monday! It was effectively our debut, and what a debut it was. Everyone got SPAN and its potential to solve many of the problems in AI and build a $T business out of Australia. 🇦🇺 So many great contacts to follow up on. Thank you Tech Council of Australia, Lucinda Longcroft, Kate Cornick and the entire TCA team! We could never have done this."

Shared post:
Hello, I love this photo taken by Sheryle Moon of the American Chamber of Commerce in Australia showing Rhett Sampson pitching at the Tech Council of Australia Innovation Showcase inside Parliament House. The most common response from people we spoke with: here is my business card, let's talk! We spoke with members of Parliament, advisors, public and private sector leaders and other innovators about the power of SPAN, a network that understands and coordinates intelligently, for the $Tn AI Agent Economy. SPAN solves intent, context, provenance and policy so agents work more efficiently, can scale, and save energy, water and resources. SPAN is Australian; our reach is global. Curious? Connect with Rhett or me for a briefing. #Australianinnovation #innovationaus #aiinfrastructure #sovereigncapability #owningthemeansofproduction #madeinaustralia
-
Rhett Sampson shared this: "AI is becoming intelligent decentralisation. Where does OpenClaw run? On your computer. Where did agents used to run? On an AI datacentre burning power. The same will happen for inference. This is a good thing. Watch this space 😜"
-
Rhett Sampson liked this: "Let me guess, AI was reviewing the bugs..." (a comment on the same Lovable mass-data-leak warning shared above.)
-
Rhett Sampson liked this: "Remember: this is an API issue more than an AI issue. If a bank leaves a sack of cash meant for the van on the sidewalk, it wasn't your PIN that got you robbed. Lovable should've been on this; of course their biggest vendor/competitor Anthropic also had a similar hack last month, so there's that." (a comment on the same Lovable mass-data-leak warning shared above.)
-
Rhett Sampson reacted to this:
A client came to me with a data migration problem that their original provider wouldn't touch: 4,000+ entries with inconsistently inverted dates, no API pathway, and an internal estimate of 315 hours of manual correction sitting on the table. We resolved it in four hours using a browser agent. I didn't drop an AI tool on the problem and walk away. I designed the agent, defined the requirements, guard-railed potential impact and built the logic for identifying which records needed correction. The expertise is in the identification of the use case and the setup. That part isn't automatic. But there's a more important part of this story: the client has now spotted three more opportunities for this type of agent, other pockets of repetitive data work across their organisation. And we're building each one so their ops manager can run it independently. The design work happens once. The capability transfers to the team. That's the implementation model I think actually works. Not AI that keeps you dependent on a specialist, but AI that gets handed over, so you learn alongside someone with expertise. If your organisation has work that looks like this (manual, repetitive, attention-intensive, error-prone) it's worth asking whether it's genuinely irreplaceable human work or just unexamined process. #ResponsibleAI #AgenticAI #AIImplementation #EthicAI #AIforBusiness
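The "inconsistently inverted dates" problem in the post above has a well-known core: a day/month swap is only provably wrong when one field exceeds 12. A hypothetical sketch of that detection logic (the D/M/Y string format is invented for illustration; the post does not describe the client's actual schema):

```python
from datetime import date

def normalise(day_first_str: str):
    """Parse a 'D/M/Y' string, repairing a provable month/day inversion.

    If the 'month' field exceeds 12 but the 'day' field does not, the two
    were swapped and can be fixed automatically. If both fields are <= 12
    and differ, the value is ambiguous, so return None for human review.
    """
    d, m, y = (int(p) for p in day_first_str.split("/"))
    if m > 12 and d <= 12:
        d, m = m, d          # provable inversion: swap back
    elif d <= 12 and m <= 12 and d != m:
        return None          # ambiguous: flag for human review
    return date(y, m, d)

print(normalise("25/07/2024"))  # day > 12, unambiguous -> 2024-07-25
print(normalise("07/25/2024"))  # provably inverted    -> 2024-07-25
print(normalise("03/04/2024"))  # ambiguous            -> None
```

The point mirrors the post: the automatable part is identifying which records are provably wrong; the ambiguous remainder is where human judgment (or extra context per record) still matters.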
-
Rhett Sampson liked this: "This is what happens when you use AI agent frameworks that depend on OAuth. SPAN uses cryptographically anchored ID, permissions, policy, authorisation and domain security. No OAuth." (the same Vercel OAuth-breach warning shared above.)
Experience & Education
-
GT Systems Australia
*** *** ***
Licenses & Certifications
-
Commercial Mediator
Australian Commercial Dispute Centre
Volunteer Experience
-
Surf race handicapper, chair of the social committee, bronze medallion patroller
North Bondi Surf Lifesaving Club
- Present 19 years 4 months
Patents
-
Media Distribution & Management System & Apparatus
AU2014904438
Explore more posts
-
William Smith-Stubbs • 4K followers
How do you modernise - and harmonise - the way Australia classifies media? TV, films, video games, and even books fall under guidelines and a system first devised decades ago, in a time when dial-up internet was scarce and Blockbuster was a weekly ritual. While the guidelines have seen some changes, the pace of cultural and technological change has demanded a deep review of Australia's media classification guidelines, and ideas on how to bring this into the modern era. Together with The Social Research Centre, we set about trying to grapple with this problem for the Australian Government and came up with some really surprising concepts along the way. The work is now in its public consultation, with submissions open for the next forty-two days. If you've got thoughts, please jump on and share them: https://lnkd.in/gFXiRmz5
11
1 Comment -
Dr Troy Neilson
Glassbox Labs • 6K followers
🇦🇺 Thrilled to share this piece in The Australian Financial Review by Tess Bennett about Sovereign Australia AI and the journey we're embarking on to build Australis, Australia's own foundational language model. I couldn't be prouder to be building this alongside my mate and Co-Founder Simon Kriss. This isn't just about building another AI model. It's about ensuring Australia maintains its digital sovereignty and voice in an increasingly AI-driven world. We shouldn't be dependent on decisions made in Washington, Silicon Valley, Beijing or Europe about how AI understands and represents our Australian culture, values, and way of life. Yet achieving this isn't easy and it takes scale to compete, so we've made a massive investment in compute and have acquired 256 NVIDIA Blackwell B200 GPUs hosted in NEXTDC's secure Australian data centre in Melbourne, and managed by our colleagues at SHARON AI. What makes me particularly proud of what we've done so far: ✅ Ethical from the ground up - We're committing $10M to compensate copyright owners, working WITH creators rather than against them ✅ Built for Australia, by Australians - From the ground up, built, trained and inferenced onshore, here in Australia, and compliant with our privacy and copyright laws ✅ Customers & partnerships in place - We're partnering with some amazing Aussie organisations to help them solve some of Australia's AI challenges. The partnerships we can speak about include ACS (Australian Computer Society), UNSW Canberra & GT Systems Australia, with more to follow! The path to digital sovereignty doesn't require billions; it requires vision, ethical principles, and the determination to ensure Australia's voice isn't lost in the global AI conversation. We're proving that sovereign AI can be built responsibly, affordably, and with respect for the creators whose work makes it possible. Australia's digital future should be in Australian hands. Let's build it together.
🚀 Special thanks to Craig Scroggie, Andrew Leece, Kieran Habojan, Dan Mons, Sudarshan Ramachandran, Rhod Brown, Brett Bonser, Melissa Hamilton, Josh Griggs, Rhett Sampson and so many more. #ai #sovereignai #llm #gpt #innovation #australia #australian https://lnkd.in/gnF8N3mR
202
45 Comments -
Jeremy Kelaher
Jeremy Kelaher is a can-do AI… • 2K followers
We may be seeing the start of the "ROM construct" edge LLM era: an ASIC designed to run just one model really well. This would be very useful for speculative decoding as a "front end" to larger models utilising the same encoder/decoder. I personally think Qwen would be a better choice, but here we are.
5
5 Comments -
Hamish Prakash
Mind Ignition • 3K followers
**Telstra Bets Big on Five-Year AI Infrastructure Strategy** Sydney – Telecommunications giant Telstra is embarking on a significant, long-term investment in artificial intelligence infrastructure, according to a recent announcement by CEO Vicki Brady. The company has unveiled a five-year plan designed to solidify its position as a key provider of AI-related services and technologies within Australia. Brady’s strategy centers around leveraging Telstra's existing network capabilities – encompassing mobile, fixed-line, and data services – to build out the necessary infrastructure for businesses seeking to adopt AI solutions. The company intends to focus on areas such as edge computing, 5G connectivity, and data analytics, all of which are considered critical components in supporting AI applications. While specific financial details remain undisclosed, Telstra’s move reflects a broader trend among major technology firms recognizing the growing demand for robust infrastructure to power AI development and deployment. The company's rationale appears to be driven by both market opportunity and strategic positioning – aiming to capitalize on Australia’s burgeoning digital economy and establish itself as a central player in the evolving AI landscape. Industry analysts suggest that Telstra’s commitment, spanning five years, indicates a serious intention to compete with emerging cloud-based AI infrastructure providers. The success of this strategy will likely hinge on its ability to effectively partner with businesses and developers, offering tailored solutions and demonstrating tangible value through improved operational efficiency and data insights. The Australian government's ongoing investment in digital transformation initiatives further strengthens the context for Telstra’s ambitions. Don't forget to follow me to stay up to date with the latest in AI and business!
2
-
Karen VARDANYAN
Nationsorg • 31K followers
On Australian radio, a program was hosted for several months by AI DJ Tai. No one warned listeners about the host's "peculiarity" - only recently did a blogger (https://lnkd.in/g4XhVJvb) notice the oddities. Tai's voice was most likely created with ElevenLabs' neural network - the station's owners have an agreement with the startup. Trade unions criticised the station for deceiving listeners. Even so, the station's management does not appear to be planning to wind up the experiment.
-
Nico Smid
Digital Mining Solutions • 7K followers
🚨 The ASIC hardware race is entering a new phase as sub-10 J/TH machines arrive in 2026. Here's what's coming 👇

🔹 BITMAIN
At WDMS 2025, Bitmain unveiled the Antminer S23 Hydro: 580 TH/s at 5,510W with an efficiency of 9.5 J/TH. Shipments are expected to start in Q1 2026. Even more notable is the Antminer U3S23 Hydro: 1,160 TH/s (1+ PH/s) at 11,020W, also 9.5 J/TH - all in a compact 3U form factor, a major leap in power density.

🔹 Canaan Inc.
Canaan's Avalon A16 series brings meaningful gains to air-cooled mining. The A16 will deliver 282 TH/s at 13.8 J/TH, and the A16XP 300 TH/s at 12.8 J/TH. That's a 22% efficiency improvement over the A15 and puts Canaan back in the top tier. Market availability is expected March–April 2026.

🔹 Auradine
Auradine announced next-gen Teraflux™ systems reaching 9.8 J/TH in eco mode, available across:
• Air: 240 TH/s (eco mode) to 310 TH/s (normal mode)
• Immersion: 240 TH/s (eco mode) to 360 TH/s (normal mode)
• Hydro: 600 TH/s (eco mode) to 900 TH/s (normal mode)
Samples arrive in Q2 2026, with volume shipments in Q3 2026.

🔹 Bitdeer (NASDAQ: BTDR)
The SEALMINER A4 will become commercially available in 2026. Early verification of the SEAL04 chip has already demonstrated 6–7 J/TH at the chip level, with mass production of the first design scheduled for Q1 2026. A second, more aggressive design targeting ~5 J/TH is still in development.

📌 2026 is shaping up to be the year ASIC efficiency truly crosses a new threshold.

🚨 Did you know we just published an extensive Bitcoin Mining 2025 Annual Review? It's an industry-leading, institutional-grade report covering the trends, data, and insights that will shape mining in 2026.
👉 Want a copy? Drop a comment below and I'll make sure it lands in your inbox.
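The J/TH efficiency figures quoted above are simply power draw divided by hashrate (watts are joules per second, so W ÷ TH/s = J/TH). A quick sanity check of the Bitmain numbers from the post:

```python
def joules_per_terahash(power_w: float, hashrate_ths: float) -> float:
    """Efficiency in J/TH: power in watts (J/s) divided by hashrate in TH/s."""
    return power_w / hashrate_ths

# Specs as quoted in the post above.
s23_hydro = joules_per_terahash(5_510, 580)      # Antminer S23 Hydro
u3s23_hydro = joules_per_terahash(11_020, 1_160) # Antminer U3S23 Hydro

print(s23_hydro, u3s23_hydro)  # both work out to 9.5 J/TH, matching the post
```

Both machines land at exactly 9.5 J/TH, consistent with the quoted spec.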
18
2 Comments -
Mark Pesce
ADAPT • 8K followers
ENGINEERING AI: Web Directions' Engineering AI event kicks off shortly at the University of Technology Sydney - for me, it's a deeper dive into the business of making AI work. That oft-quoted recent #MIT report found that 95% of #AI PoCs fail - striking fear into anyone in the sector. Yet AI clearly has some well-identified and well-adapted use cases - so why not get stuck into those? Over the next week I'll be interviewing John Allsopp, Kate Carruthers (FGIA, MAICD), Dave Howden and Drew Smith about what we've learned about when AI works - and when it doesn't. Those episodes of The Next Billion Seconds will begin dropping before the end of September. Alongside that, the first two 'Building Resistance' workshops went well - at Canterbury Tech and yesterday's #CFOSymposium. I'll soon be putting out a wee guide to how anyone can build some resistance to automation - by finding and leaning into the human. Sally Dominguez and I will be offering that as a short course in Australia and the USA - if you're interested, DM one of us... Plus - a big announcement coming next week! Watch this space... #engai Zoë Vaughan Zina Kaye Richard McBride Gau Kurman Nicola Bridle Orlena Steele-Prior Joy Hanawa Viveka Weiley 🦉 Hugo O'Connor Anne-Marie Elias Catherine Ball Josh Butt Zac Pullen Tony Parisi Owen Rowley Murray Hurps
36
4 Comments -
Philip Johnston
Starcloud • 66K followers
Tomorrow Starcloud is gonna be using a cyclotron to smash protons into GPUs at 30,000 kilometres per second! '𝐂𝐲𝐜𝐥𝐨𝐭𝐫𝐨𝐧' may be the most epic-sounding word in the dictionary, but do you know what it is and how it works?

Here's how it works:
1️⃣ 𝐈𝐧𝐣𝐞𝐜𝐭𝐢𝐨𝐧: A charged particle, in our case a proton, is introduced at the center of the cyclotron.
2️⃣ 𝐀𝐜𝐜𝐞𝐥𝐞𝐫𝐚𝐭𝐢𝐨𝐧: An alternating electric field kicks the particle every time it crosses the gap between two D-shaped electrodes ("dees"), boosting its speed.
3️⃣ 𝐌𝐚𝐠𝐧𝐞𝐭𝐢𝐜 𝐅𝐢𝐞𝐥𝐝: A steady magnetic field forces the particle into a circular, spiraling path. Each lap, the particle gains more energy.
4️⃣ 𝐄𝐱𝐭𝐫𝐚𝐜𝐭𝐢𝐨𝐧: When it reaches the desired speed, the particle is steered out and directed toward a target — in our case, state-of-the-art terrestrial data center grade NVIDIA GPUs!
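The synchronised "kicks" in steps 2–3 work because a non-relativistic particle orbits at a fixed cyclotron frequency f = qB/(2πm), independent of its speed, so a single alternating voltage stays in phase lap after lap. A back-of-envelope check for a proton (the 1 T field below is an illustrative value, not Starcloud's actual setup; at 30,000 km/s, about 10% of light speed, relativistic corrections are still small):

```python
import math

Q_PROTON = 1.602e-19  # proton charge, coulombs
M_PROTON = 1.673e-27  # proton mass, kilograms

def cyclotron_frequency(b_tesla: float, q: float = Q_PROTON, m: float = M_PROTON) -> float:
    """Non-relativistic cyclotron frequency f = qB / (2*pi*m), in Hz."""
    return q * b_tesla / (2 * math.pi * m)

def orbit_radius(speed_ms: float, b_tesla: float, q: float = Q_PROTON, m: float = M_PROTON) -> float:
    """Radius of the circular path r = m*v / (q*B), in metres."""
    return m * speed_ms / (q * b_tesla)

f = cyclotron_frequency(1.0)    # ~15.2 MHz in a 1 T field
r = orbit_radius(3.0e7, 1.0)    # ~0.31 m radius at 30,000 km/s
```

Note the radius grows with speed while the frequency stays constant — that is exactly the outward spiral described in step 3.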
301
57 Comments -
Lava Kafle
Kathmandu University • 13K followers
Google AI chatbot, Gemini, to be available to Aussie kids under 13 within months - by national technology reporter Ange Lavoipierre.

In short: Google will launch its AI chatbot, Gemini, for Australian children under 13 in the coming months. The program is rolling out in the US this week, as Google warned parents Gemini "can make mistakes". What's next? Experts are alarmed, with some calling for a ban on AI chatbots for children, similar to the teen social media ban passed into law last year.

Google will launch its Gemini AI chatbot for Australian children under 13 within months, the ABC can reveal. The tech giant is rolling out the program in the US this week, with a worldwide launch to follow in the coming months, although no date has yet been specified. The announcement has prompted calls for the government to consider banning AI chatbots for children, in the same way it banned social media for children under 16.

"It would've been better if we'd erred on the side of caution with social media, and we didn't," said Professor Toby Walsh, a leading artificial intelligence expert at the University of New South Wales. He is urging leaders to "seriously consider putting age limits on this technology."

The ABC understands the chatbot will be automatically available to children after the launch, although parents will have the option to switch it off in Google's Family Link app.

"It's unusual to me that this would be turned on by default," said Professor Lisa Given, an expert in the social impact of technology at RMIT University. "It relies on parents … or the child themselves, having the skill to navigate the controls and turn things off. And it may only be turned off at the point that it raises problems … but in a way it's too late at that point."

Google isn't the only company whose AI chatbot is available to younger children. For example, OpenAI's website states that ChatGPT is "not meant for" people younger than 13, even though it's free on the open web. But Google's Gemini tool is one of the few mainstream tools explicitly targeted at users that age. https://lnkd.in/d2gcw7Rq
-
Russell Yardley
AirHealth – Merging AirRater… • 9K followers
“Age assurance can be done in Australia.” That line from the government’s new tech trial on age verification sounds definitive—but masks a more complex reality. There’s no universal solution. Usability problems remain. And some systems are collecting children’s data just in case regulators want it. As I’ve written in recent essays, this is a pattern: performative enforcement in place of moral reasoning, cultural trust, and parental presence. We’re not solving the real problem—we’re building better tools to avoid it. 🔗 Read the InnovationAUS article here https://lnkd.in/ggJsB-Bp #DigitalGovernance #ChildSafety #AI #Policy #Trust
2
1 Comment -
Mahir Sahin
Cloudberry Ventures • 8K followers
Australia is emerging as a hotbed for innovation in quantum computing, a trend the Financial Times spotlighted today: https://lnkd.in/g5J_UdF3 Great to see Cloudberry Pioneer Investments portfolio company Quantum Brilliance featured alongside standout Australian players including PsiQuantum, Diraq, Q-CTRL, Silicon Quantum Computing & Deteqt. And it's not just Australia making waves: last week was a blockbuster across the global quantum landscape, with landmark raises underscoring the accelerating pace of progress:
- PsiQuantum (AU/US) raised an incredible $1B, bringing its valuation to $7B.
- IQM Quantum Computers (Finland) raised $320M in Series B.
- Quantinuum (UK/US) secured $600M through an equity raise at a $10B valuation.
- Infleqtion (US) announced it will go public via SPAC, valuing the company at $1.8B.
This is a thrilling time for the entire quantum computing industry, and we look forward to the continued progress and breakthroughs ahead. #QuantumComputing #QuantumTech #VentureCapital #Startups #DeepTech
75
8 Comments -
Mona Thind
eHealth NSW • 5K followers
Paging Dr. AI: Australia's Prescription for a Smarter, Safer Future! Big news! Australia has launched its National AI Plan, a roadmap to capture opportunities, share benefits, and keep Australians safe.

Key highlights:
✅ Driving economic growth through AI innovation
✅ Ensuring transparency and safety in AI systems
✅ Equipping Australians with the skills to thrive in an AI-powered world

And for healthcare, this is a game-changer. Imagine AI as the ultimate health assistant:
✅ Predicting outbreaks before they happen
✅ Powering smarter diagnostics and personalised treatments
✅ Freeing clinicians from paperwork so they can focus on patients

This plan isn't just about tech, it's about trust, ethics, and equipping our workforce to thrive in an AI-powered world. Doctors won't be replaced (phew!), but they might just become superheroes with data-driven insights. AI isn't the future, it's here now. And with this plan, Australia is making sure it works for everyone. Read about the plan: https://lnkd.in/g-Ui56VY

What's your take? Could AI be the next big health breakthrough? #HealthTech #ArtificialIntelligence #AustraliaAI #FutureOfHealthcare #ResponsibleAI

disclaimer: AI was used to craft the content, with a human in the loop to validate. I am the "loopy human"
14
5 Comments -
Michael Harmer
GBG Plc • 621 followers
I really enjoyed talking with James Riley about ID verification and its future. We chatted about a wide range of issues including how AI can be both a challenge and a game-changer for digital identity, why understanding user behaviour is key to spotting fraud, and how mobile credentials are set to shape the future of secure, privacy-first identity verification.
37
-
Phyllian Kipchirchir
Charted Growth • 3K followers
Melbourne-based AI startup NexusMD.ai has secured A$6.3 million in seed funding. NexusMD provides enterprise-grade AI agents that integrate directly with existing hospital systems to automate end-to-end workflows. By ingesting clinical content and medical information from sources like ambient listening and image-based text, the agents compile structured clinical notes and generate accurate coding, streamlining documentation and improving operational efficiency. Australian venture capital firm Square Peg led the seed funding. With the new backing, the company is now scaling its technology across more hospitals and expanding its suite of AI agents beyond the emergency department into other critical hospital units. Congratulations to the NexusMD team. Tech Partner News: https://lnkd.in/dXV2iieH #HealthTech #AI #DigitalHealth #ClinicalDocumentation #AmbientAI #SeedFunding #VentureCapital #AussieTech
1
-
Fiona Wilhelm
Wilhelm Group • 5K followers
I could not agree more with this sentiment: "There's fresh harms that AI is bringing into our lives that will need fresh laws," Toby Walsh said at today's National Press Club address. "There are huge financial incentives for the tech industry to move fast and break things — to break things like the mental health of our youth." Most of the Australian CEOs involved in AI whom I speak to on a weekly basis are saying this - but what are we doing about it? Our workforce will soon be dependent on the right tools - but if we don't address this now, the cost of doing nothing will be severe. https://lnkd.in/gaQDCxns
3
-
Kristen Shaughnessy
Self-employed • 7K followers
$NVDA $AMD $INTC This is significant, but for those following along it's not "new" news. (See below.)

"The Chinese government has issued guidance requiring new data centre projects that have received any state funds to only use domestically-made artificial intelligence (AI) chips, two sources familiar with the matter told Reuters… …Besides Nvidia, other foreign chipmakers that sell data centre chips to China include Advanced Micro Devices Inc (AMD) and Intel…"
November 2025: https://lnkd.in/eBeyZ6ju

November 2025: Reuters has just published, citing its "sources", what I gave you as an "exclusive" two months ago: "China just ordered its companies to stop buying any $NVDA GPU and trust me that's a broad ban, those companies won't dare to go behind the government's back." https://lnkd.in/ewyc9u-e

August 2025: "Nvidia's H20 chips are raising security concerns in China, according to a social media post linked to the country's state media." https://lnkd.in/ex29R65E

August 2025: "Nvidia orders suppliers to halt work on China-focussed H20 AI chip" https://lnkd.in/ebErDWMw

August 2025: Domestic development is more important than getting foreign products. https://lnkd.in/eW65xpaV

February 2025: The creative accounting language in Nvidia's financials only raises more questions about its sales, and whether it is/was bypassing the US-imposed restrictions on chip sales to China to get to the numbers it has reported. https://lnkd.in/ekdniW8F

December 2024: "China launches anti-monopoly probe into Nvidia" https://lnkd.in/eMyeprKV

https://lnkd.in/e-rqwcSF
3
-
John Barrington AM
Quantum Australia • 5K followers
Pleased to be at SXSW Sydney, where Michelle Simmons has just announced Australia’s first AI quantum chip. Telstra and Silicon Quantum Computing’s collaboration shows what’s possible when quantum meets industry — their Watermelon system delivered deep learning accuracy in days, not weeks, without heavy GPU and power demands. A major step toward a quantum-enabled digital future. Critically, it’s happening right here in Australia. #SXSW #Quantum #AI #Australia #Telstra #SQC
34
2 Comments -
Anthony Stevens
6clicks • 33K followers
Sovereign AI at scale: Australia's next big test

Australia has committed more than A$1B in public funding for AI. Microsoft has added A$5B. By 2030, our national data centre capacity will have doubled. At the same time, the government is proposing guardrails for high-risk AI. The scale is impressive, but what really matters is the architecture we are putting in place.

🔍 Federal agencies recorded a 9x increase in AI use cases in just one year (GAO, 2025)
💡 Globally, 75% of organisations now use AI-driven analytics for risk management

The challenge is not whether we can build faster or bigger. It is whether our governance frameworks can keep pace with the systems they are meant to guide — adapting to different realities, not forcing one-size-fits-all controls. Australia is not just building for itself. Moves like this will set precedents for how other nations approach sovereign AI at scale.

This post by NewMind AI captures the essence of Australia's balancing act between innovation and ethics: (link: https://hubs.li/Q03C4hPw0)

Sovereign AI will succeed when our governance fabric is as adaptive and scalable as the infrastructure beneath it. Otherwise, we risk building monoliths in a world that demands agility. #Sovereignty #AIGovernance #GRC
15
3 Comments