United Kingdom
2K followers
500+ connections
About
Activity
-
Henry Garner shared this:
What do Minecraft, WordPress, and SQLite have in common? #AgenticEngineering #AIAssistedEngineering
-
Henry Garner reposted this:
Very much looking forward to my first visit to Warsaw. Our CTO Henry Garner will share the stage with Michał Olszewski and Behshad Behzadi to talk about spec-driven development as a critical step in AI-augmented engineering. If you are interested, here's the registration link: http://smrtr.io/x_8N4 #GridDynamics #Sportradar #AgenticAI
Henry Garner reposted this:
We're just over two weeks out from Sportradar CONNECT Tech Talk Edition Warsaw, and I wanted to share who you'll be hearing from. Three speakers. One evening. A lot to think about. Behshad Behzadi - our Chief Product, Technology and AI Officer - brings two decades of AI leadership at Google and a front-row view of what it takes to build at scale. Michał Olszewski, Sportradar's Principal Engineer for AI Adoption, will be live coding with real data and real stakes. And Henry Garner, CTO at JUXT (part of Grid Dynamics), will make a case that writing specs instead of code might be the most agile thing you can do right now. Different angles. Same question: how do we stay in control as AI changes the way we build? Doors open at 18:00 on April 16. See you at Giełdowa 5. Register here 👉 smrtr.io/x_8N4 #SportradarConnect #TechTalk #Warsaw #WarsawTech #AIEngineering #AgenticAI #SoftwareCraft
-
Henry Garner shared this:
Conviction is the signature of individual design. Rigour is the hallmark of teams. At least, that's the traditional view. The book of collected lectures "Little Science, Big Science" describes the arc that every discipline goes through: a period of significant individual contribution followed by team-based progress. Grace Hopper created the first compiler, Ken Thompson built an operating system, John McCarthy founded artificial intelligence. But by the 2000s, hardly anyone was shipping alone. Now AI is reversing the trajectory. LinkedIn is replacing Product Managers with Product Builders who both design and deliver. NVIDIA is giving engineers AI token budgets worth half their base salary. At JUXT, we're building significant applications with one or two senior engineers and a handful of AI agents. Historically, team approaches have usually won out. But right now, an engineer with a bold vision and AI assistance can achieve the best of both worlds: conviction with rigour. I wrote about this historical context and how special I think this current period is. Software's second heroic age: https://lnkd.in/eh9ANDqV #AgenticEngineering #AIAssistedEngineering
-
Henry Garner reposted this:
There is a temptation, when describing a system like XTDB, to reach for grand metaphors. But the value is quieter than that. It is the simplicity of asking temporal questions in SQL rather than building temporal logic in application code. The particular calm that comes from an audit trail that cannot be edited, because the database will not permit it. And when your data arrives from five different systems, each with its own notion of when things happened, it is the confidence of knowing those timelines are preserved, not flattened into the moment they arrived. Other systems let you query the present. XTDB lets you query the truth, which includes the present, but is not limited to it.
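The bitemporal idea behind this post (asking what the system believed at one time about the state of the world at another time) can be sketched in a few lines. This is a toy model for illustration only: the data and function names are invented here, and XTDB itself exposes this capability through SQL, not through an API like this.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Version:
    value: str
    valid_from: date   # when the fact became true in the real world
    recorded_at: date  # when the system learned about it

# Two versions of a customer's address; both are preserved, never overwritten.
history = [
    Version("10 Downing St", valid_from=date(2020, 1, 1), recorded_at=date(2020, 1, 5)),
    Version("221B Baker St", valid_from=date(2023, 6, 1), recorded_at=date(2023, 7, 1)),
]

def as_of(history, valid_time, system_time):
    """What did we believe at system_time about the state at valid_time?"""
    known = [v for v in history
             if v.recorded_at <= system_time and v.valid_from <= valid_time]
    return max(known, key=lambda v: v.valid_from).value if known else None

# In mid-June 2023 the move had happened but was not yet recorded:
print(as_of(history, date(2023, 6, 15), date(2023, 6, 15)))  # 10 Downing St
print(as_of(history, date(2023, 6, 15), date(2023, 8, 1)))   # 221B Baker St
```

Keeping both axes separate is what lets a query distinguish "what was true" from "what we knew", which is exactly the timeline-preservation property the post describes.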
-
Henry Garner shared this:
Really enjoyed this conversation with Kris. The bit about structured languages grew out of building Allium, where we found that teaching AI a formal behavioural language radically improved the quality of LLM-mediated design discussions. We covered a lot more besides: MCP vs skills, Conway's Law for LLMs, the (possible?) return of Prolog, the "Ralph Wiggum loop" for achieving big audacious goals. But the structured-language thread is the one I keep coming back to. Enjoy! #AI #SoftwareEngineering
Henry Garner shared this:
The most interesting idea buzzing around my head right now is using structured languages to guide AI Agents’ reasoning, and it’s just one of the many exciting and nuanced ideas I got from talking with Henry Garner as we discuss his what’s-hot-what’s-not AI Radar. https://lnkd.in/eawwSdti
What's Worth Knowing In AI Right Now? (with Henry Garner)
-
Henry Garner shared this:
One of the sections of Risk-First Software Development I already know will become dog-eared from use is the enumeration of software delivery risks. It’s a long list even before you add “stochastic compilers” into the mix! Highly recommended for stimulating lateral thinking when prioritising. #SoftwareDevelopment #AI
Henry Garner shared this:
Is #Agile missing a Risk Framework? This is something I touch on with Henry Garner in the JUXTCast Podcast. To hear more, go here: https://lnkd.in/en3iGTsW Or to get hold of a free copy of the Risk-First Software Development book, go here: https://lnkd.in/eVck5xgy #opensource #riskmanagement #risk #agile
-
Henry Garner shared this:
🎉 Allium v3 is released! 🧅 The tastiest bit is generating property-based tests from your specs. Ask your LLM to "upgrade our alliums to v3 and propagate them into tests". It extracts every formal obligation using the CLI and writes tests to fill the gaps in your existing suite. We've used this to find bugs in code, including cases where existing tests were validating the *incorrect* behaviour. I'm holding a contrarian view right now: AI is enabling us to write more rigorous software than we ever managed by hand. Really looking forward to hearing how you get on. https://lnkd.in/erMmBGFR #AICoding #SoftwareEngineering
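To make the property-based-testing idea concrete, here is a hand-rolled sketch of the kind of test such a tool might emit. Everything here is invented for illustration (the `clamp` function, its two obligations, and the use of stdlib `random` instead of a framework like Hypothesis); it is not Allium's actual output.

```python
import random

def clamp(x, lo, hi):
    """Restrict x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

def test_clamp_properties():
    rng = random.Random(42)  # fixed seed so failures are reproducible
    for _ in range(1000):
        lo = rng.uniform(-100, 100)
        hi = lo + rng.uniform(0, 100)  # guarantee lo <= hi
        x = rng.uniform(-200, 200)
        y = clamp(x, lo, hi)
        # Obligation 1: the result always lies within the bounds.
        assert lo <= y <= hi
        # Obligation 2: clamping is idempotent.
        assert clamp(y, lo, hi) == y

test_clamp_properties()
```

The point of deriving such tests from a spec rather than from examples is that each formal obligation becomes a universally quantified check, which is how this approach can catch suites that were asserting the wrong behaviour.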
-
Henry Garner reposted this:
Your AI-generated code works. The tests pass. But nobody can tell you why. There's a term for this: epistemic debt. Not a failure of documentation or diligence, but the inevitable result of systems too complex for any one person to hold in their head. #AI accelerates this dramatically. The model doesn't retain its reasoning. The code works. The tests pass. But nobody can tell you why it works, or what happens when the context changes. Every system carries some of this debt. The question is whether you're keeping on top of it or letting it pile up faster than you can deal with it. AI is the most powerful cognitive offload we've ever built. But outsourcing reasoning isn't the same as outsourcing memory. The less you engage with the hard thinking, the less equipped you are to spot when something's wrong. And when nobody understands the system, accountability disappears. Dan Davies calls these "liability sinks." Tom Lawton's research at Bradford NHS shows it happening in practice: clinicians risk absorbing legal responsibility for AI systems they're reduced to rubber-stamping. If nobody understands why the system works, who's responsible when it breaks? Henry Garner proposes a discipline he calls semantic triangulation: checking the code, the tests, and the specification against each other as three independent measurements of intent. When they diverge, you know you have a problem. Most teams aren't doing this even for human-written code, let alone AI-generated code. All three pieces are worth reading (Links in comments 👇). How is your team handling this?
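The triangulation discipline can be illustrated mechanically: treat spec, implementation, and test suite as three independent measurements and flag any pairwise disagreement. A toy sketch (the discount rule, the executable-spec format, and all names are invented here, not taken from the articles):

```python
def spec_discount(total):
    # Specification: 10% off orders of 100 or more, otherwise no discount.
    return total * 0.9 if total >= 100 else total

def impl_discount(total):
    # Implementation under review (note the off-by-one: > instead of >=).
    return total * 0.9 if total > 100 else total

test_cases = {50: 50, 100: 90, 200: 180}  # the test suite's expected outputs

def triangulate(spec, impl, cases):
    """Compare spec, implementation, and tests pairwise; report divergences."""
    problems = []
    for x, expected in cases.items():
        s, i = spec(x), impl(x)
        if s != i:
            problems.append(f"spec vs impl diverge at {x}: {s} != {i}")
        if s != expected:
            problems.append(f"spec vs tests diverge at {x}: {s} != {expected}")
        if i != expected:
            problems.append(f"impl vs tests diverge at {x}: {i} != {expected}")
    return problems

for p in triangulate(spec_discount, impl_discount, test_cases):
    print(p)
```

Here the boundary case at 100 is caught because two of the three measurements agree against the third, which is the point of having three artifacts rather than two: a single disagreement localises the fault.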
-
Henry Garner reacted on this:
"Caveman Claude" is the AI chatbot of the future. "Me no explain. Me tool first. Me result first. Me stop." 🔥 It turns out that removing all the sycophantic fluff helps cut down on tokens - who knew? Bonus 2: it keeps a lid on AI psychosis. Bonus 3: it feels a lot more familiar, depending on your coworkers. Bonus 4: it's more entertaining. Time to go back to basics? Link in comments
-
Henry Garner liked this:
Not entirely sure if this is an April Fools joke or not?! A 37-year-old leveraged voice AI and Anthropic’s Claude to create a consumer price index for a pint of Guinness across Ireland. TL;DR don't pay more than €6 for a pint of Guinness in Ireland...! https://lnkd.in/egGa57Wx
A man used AI to call 3,000 Irish bartenders to track the cost of Guinness. Now pubs are lowering their prices to compete | Fortune
-
Henry Garner reacted on this:
Boy do I use my hands a lot when I talk 😆 But if you can get past that, this was such an amazing conversation with Kasper Borg Nissen, who undersells just how much of a leader in the platform space he is! I still refer back to his talk at #KubeCon London all of the time! Thank you so much to the whole Code RED Podcast team for a seamless experience recording this and to Kasper for all of the prep that went into driving out such a deep conversation!
Henry Garner reacted on this:
New episode of Code RED Podcast is out. 🎙️ I sat down with Abby Bangser, founding principal engineer at Syntasso and co-chair of #KubeCon+#CloudNativeCon, to talk about the keynote she just gave at KubeCon+CloudNativeCon in Amsterdam on Platform as a Product, and what it means when platforms stop being delivery mechanisms and start becoming ecosystems. Lots to unpack from this one, so I'll be sharing a few clips over the coming weeks. First up: golden paths vs golden bricks. Abby made the point that an all-or-nothing golden path is more likely to fail than succeed. People are looking for the easiest way to do their jobs, and there's never going to be one way for everyone. If you build a single path and tell everyone to follow it, you're betting you got it right for every team. You probably didn't. The alternative is composability. Golden bricks. Codify the things that matter to your organization as small, self-contained pieces. What does a database with encryption and backups look like? Just the database. Then when a team needs a test environment, they compose the bricks they need. Another team grabs a different combination. Start with the most permissive baseline your company will allow, then layer more opinionated bricks on top that all drive through that baseline. You're going to get things wrong at first... but with clear contracts and versioning, you can evolve without starting over. Separate services where needed. Combine them where needed. That composability is what makes the difference between a golden path that crumbles and one that actually grows with your platform. Full episode linked in comments. Give it a listen and let me know how your team thinks about this. 👇
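The "golden bricks" layering described in the post amounts to merging small opinionated pieces over a permissive baseline. A minimal Python sketch of that composition pattern (the brick names and keys are invented for illustration; real platforms express this in infrastructure tooling, not dicts):

```python
def compose(*bricks):
    """Layer bricks left to right over a baseline; later bricks win on conflict."""
    result = {}
    for brick in bricks:
        result.update(brick)
    return result

# Most permissive baseline the organisation will allow.
baseline = {"network": "open", "backups": False, "encryption": False}

# Self-contained opinionated bricks: just the database, just the test-env policy.
database_brick = {"engine": "postgres", "backups": True, "encryption": True}
test_env_brick = {"size": "small", "ttl_hours": 24}

# One team's test environment; another team would pick different bricks.
print(compose(baseline, database_brick, test_env_brick))
```

Because each brick is small and self-contained, it can be versioned and evolved independently, which is the "clear contracts and versioning" point from the post.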
-
Henry Garner reacted on this:
Today is World Autism Awareness Day. I almost didn't write this post — felt too personal for a 'professional' profile. But I think that hesitation is actually part of the point. I'm autistic. And I have a take on this day that goes a bit beyond the usual 'inclusion matters' framing. Neurodivergent people don't just use AI differently — we often get something qualitatively different out of it. Not a productivity boost. More like... cognitive leverage. The kind where you stop spending energy on things that drain you and suddenly have more left for the things you're actually good at. I think that's underexplored. Most AI research is implicitly built around a 'typical' user. But the people for whom AI interaction feels most transformative are often not that user. That's worth paying attention to — not just for accessibility reasons, but because those edge cases tend to reveal something true about what these systems actually do for human thinking. #AutismAwarenessDay #artificialintelligence #neurodiversity
-
Henry Garner reacted on this:
In the last month we've seen Anthropic's famous "build a C compiler" experiment, and Elon Musk arguing AIs are just going to write assembler. I've also seen a lot of noise about how this is all "just regurgitating training data", or "not possible because compilers must be deterministic". For quite a few years I used to build and maintain a couple of very obscure gcc backends and spent more hours than I care to think about making incremental improvements in code generation. For the last few weeks I've been using that knowledge to have Claude help me build an optimizing compiler for a Lisp-ish language (Menai, formerly AIFPL) that literally doesn't exist in any training set. It's also helped me target the initial stack-machine VM and now an infinite-register VM. Neither of these are in the training data either. The quality of what it has helped build is nothing short of amazing, but even more amazing is that Claude will happily dive in to write off-the-cuff custom test scripts that will exercise any of the 5 different code representations in the new compiler in order to track down correctness or performance issues. I know of no human who could work at all these abstraction levels at the same time, let alone so fast. Anthropic's test was impressive because it didn't need human steering (mine did), while Elon's argument that AIs can write great assembler is also true. What seems more important, however, is that we humans built high level languages because they let us express complex concepts in a more compact and natural way, accepting that we'd lose some performance as a cost. It's now quite clear that Claude can do something we can't - it can and will switch abstraction level in pursuit of a goal, and at previously unimaginable speed. In just this one area, we're going to see quite extraordinary things in the next couple of years.
I wouldn't normally post my raw dev notes, but I don't have time to write a beautifully polished blog post. The raw data is interesting though. The conversation I mention in the note is in the Humbug repo if anyone wants to see an AI in action on this stuff! https://lnkd.in/e22QGDqW
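For readers unfamiliar with the compilation targets mentioned above: a stack-machine VM is small enough to sketch in a few lines. This toy interpreter is purely illustrative and has nothing to do with the Menai/Humbug code itself.

```python
def run(program):
    """Evaluate a list of stack-machine instructions; return the top of stack.

    ('push', n) pushes a constant; 'add' and 'mul' pop two operands and
    push the result. Operands live implicitly on the stack, which is what
    makes this target simple to generate code for.
    """
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack[-1]

# (2 + 3) * 4
print(run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))  # 20
```

An infinite-register VM instead names every operand explicitly (e.g. `r2 = r0 + r1`), which gives an optimizer much more freedom to reorder and eliminate work; retargeting between the two is exactly the kind of abstraction-level switch the post describes.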
-
Henry Garner liked this:
- LLMs making us thick is not the issue.
- Because we already were thick.
- Who controls them is what matters.
- What might they do to how we think?
- What have they already done to how we think?
Experience & Education
-
JUXT
*** * ** ******* **** * ***** **** ** **** ********
-
***** ****** **********
********
-
**** *** **** ******** *******
********* ******** * ** ******* ****
-
********** ** ******
*** **** *** undefined
-
Licenses & Certifications
Publications
-
JUXT AI Radar
JUXT
Keeping pace with AI development feels increasingly difficult. New tools appear weekly, claims about capabilities shift monthly, and what seemed essential last quarter might be yesterday’s news.
Our teams at JUXT have been applying AI across multiple client projects, from coding assistants to agent frameworks, from prompt engineering to model selection. We’ve seen what works in practice, what doesn’t live up to the marketing, and where the real value lies for organisations trying to make sensible technology choices.
We’ve distilled these insights into our first AI Radar: an opinionated guide to the tools, techniques, and platforms we think are worth your attention right now. It’s structured around four rings (adopt, trial, assess, and hold) making it easier to understand what’s ready for production use versus what needs more time to mature.
This isn’t a snapshot: we’ll be updating it regularly as the landscape evolves and our understanding deepens. If you’re navigating AI adoption in your organisation, we hope it provides a useful reference point.
-
Clojure for Data Science
Packt
Clojure is a powerful language that combines the interactivity of a scripting language with the speed of a compiled language. It shares a platform - the Java Virtual Machine - with the big data powerhouses Hadoop and Spark, enabling efficient use of these de-facto standards without sacrificing expressiveness. Together with its rich ecosystem of native libraries and an extremely simple and consistent functional approach to data manipulation, which maps closely to mathematical formulae, it is an ideal practical and flexible language to meet data scientists’ diverse needs.
Taking you on a journey from simple summary statistics to sophisticated machine learning algorithms, this book shows how the Clojure programming language can be used to derive insight from data. You’ll learn how to apply statistical thinking to your own data and use Clojure to explore, analyse and visualise it in a technically and statistically robust way. You’ll explore the core machine learning techniques of regression, classification, clustering and recommendation, specialist approaches for graph and time series data, and discover the wealth of tools that should be in every Clojure Data Scientist’s toolbox.
Above all, by following the explanations in this book you’ll learn not just how to be effective using the current state-of-the-art in data science, but why such methods work so that you can continue to be productive as the field evolves into the future.
Recommendations received
4 people have recommended Henry
Other similar profiles
Explore more posts
-
Dominic Fox
NatWest Commercial and… • 850 followers
This evening's work for Claude, on Patches: 1) Thinking over a couple of API decisions I recorded yesterday, I realised there were cleaner approaches available to one particular aspect, piggy-backing on previous changes. I proposed those approaches to the LLM, talked them through, considered objections, settled on a new approach; the LLM wrote up the revised ADR (which I read, checking that it captured my intention clearly) 2) The method that compares a new patch graph to the existing one and builds an updated execution plan is huge, nested and complex. I would never have written it that way myself, but it works as it stands. Nevertheless, I have future changes targeting that area, and it's risky to have something that's both syntactically convoluted and hard to read and test, especially in such a central piece of logic. It badly needs some attention. I asked the LLM to propose a breakdown of the method into smaller, testable pieces. Its proposal was ok, but missed an opportunity to separate cleanly the "deciding what to do" phase from the "acting on the decision" phase, leaving graph analysis and new module instantiation somewhat tangled up with each other. I suggested some further re-organisation to disentangle things, and asked the LLM to remark on the impact this would have on testing; it confirmed that it would make the testable surface of each of the distinct pieces smaller and more easy to control. Once we had a scheme that looked sound to me, I asked it to write up an epic and a sequence of tickets for it. Then I asked it to start working through the changes. Now it's chugging through the first ticket, while I write this. It will go on doing so while I pop downstairs and do the washing-up. 
I'm comfortable with the fact that the first draft of the code it's now tidying up was unsatisfactory from a testability and maintainability point of view: it grew organically and expediently in the course of pulling together something that worked well enough for me to start playing with it. Now it's a glaring area of technical risk, so I'm cheaply and quickly addressing it. A capable human developer would have kept much better discipline throughout the drafting process, but might not have seen that the decide/action split was natural and desirable until about this point in the game - I didn't until I stopped to think about it. What's absent from this process, for me, is anxiety that the LLM will just heap up piles of intractable slop. It will typically do the expedient thing at each step along the way, but that's fine if you're working in short iterations provided you're also periodically stopping to review where cruft and friction are accumulating. What it lacks in foresight, you somewhat have to make up for in hindsight. But the latter is famously somewhat clearer than the former anyway.
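The decide/act separation described above is a well-known refactoring: make the first phase a pure function that returns a plan as plain data, and keep all side effects in a thin executor. A schematic Python sketch (the graph representation, plan format, and names are invented here, not taken from the Patches codebase):

```python
def decide(current_graph, new_graph):
    """Pure planning phase: compare graphs and return a plan as plain data.

    Because this has no side effects, it is trivially unit-testable: feed in
    two graphs, assert on the plan, no module-instantiation machinery needed.
    """
    plan = []
    for node in new_graph:
        if node not in current_graph:
            plan.append(("instantiate", node))
    for node in current_graph:
        if node not in new_graph:
            plan.append(("retire", node))
    return plan

def act(plan, instantiate, retire):
    """Execution phase: walk the plan and perform the side effects."""
    for action, node in plan:
        if action == "instantiate":
            instantiate(node)
        elif action == "retire":
            retire(node)

# The planner can be tested without touching any real modules:
plan = decide({"a", "b"}, {"b", "c"})
print(sorted(plan))  # [('instantiate', 'c'), ('retire', 'a')]
```

Untangling "deciding what to do" from "acting on the decision" shrinks the testable surface of each piece, which is the impact on testing the post says the LLM confirmed.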
-
Zoonou
2K followers
Following on from her recent blog post about why QA can’t be an afterthought in public sector migrations, Harriet takes the conversation a step further. Harriet explores how discovery workshops, data profiling, and exploratory testing uncover hidden edge cases before they disrupt migrations. She also shares how synthetic datasets, scenario modelling, and flexible automation help teams safely test even the most unusual workflows. The result? Public sector systems that are resilient, predictable, and ready for anything - even under the most unusual conditions. Read Harriet’s full insights on navigating the untestable: https://lnkd.in/eq3GBr9U #EdgeCases #PublicSector #SystemMigrations
-
Baz
3K followers
The next step in AI code review: context and intent. Baz reviewers now understand not just what changed, but why, reading the Jira ticket, Figma design, and spec to measure the PR’s real impact. Thanks to Amazon Web Services (AWS) Bedrock AgentCore and Playwright MCP, that understanding is now testable in live environments.
-
BeliefMedia
288 followers
More brokers have asked me about the 'Photos API' in the last few weeks than in the last few years, so I've dusted off notepad to code in some updates, and I'll publish something on our website this week to make the feature more widely known. The tool is integrated with a number of (primarily mobile) tools, and the idea is simple. Take a photo on your smartphone, upload it to Yabber, and do something with it. If used in our 'Formly' forms, and if you're sending a photo of a licence, we'll populate form fields with OCRed values, such as names and licence number (this basic function applies to any matching text field in *any* OCR response type, not just licences). If sending via the CRM endpoint, an image (and optional PDF) is sent to your CRM (attached to a user or as a general note/task), and if using the social action, the image is scheduled for social. In total there's nearly a dozen endpoints that all have their own associated action. In the couple of years we've made the OCR component of the API available it's already almost a redundant feature, particularly with the introduction of our BeNet Agentic AI... but it's still useful in company with our suite of APIs (on Github) that'll create and then send self-hosted online applications to various CRM systems. The attached photos show the most basic flow, although the screen differs based on the type of photo or image. Interestingly, the system has seen almost exclusive usage from our 1000+ auto/equipment clients that use it for trade-in photos or similar, and our Property clients use it heavily to upload real-time photos and video into their various CRM tools when Camera isn't supported. While I continue to argue that OCR and AI-enabled upload tools should be considered baseline features in modern CRMs, our decision to publish open implementations ensures that businesses of all sizes can adopt, adapt, and extend the system.
This openness not only accelerates adoption but also positions the API as a foundation upon which broader digital strategies can be built.
-
The Code Registry
1K followers
A single departure shouldn't trigger handover chaos, stalled releases or lost institutional knowledge. Key-person risk should be visible before it becomes a blocker—not after someone hands in their notice. We at The Code Registry believe that understanding who owns what in your codebase is fundamental to business resilience. That's why we've built tools that map ownership, identify concentration risk and help you prepare before knowledge walks out the door. Our FREE Code Report delivers insights you can actually use: ✔ Ownership mapped across every file, module and system boundary ✔ Concentration risk identified where knowledge sits with one or two people ✔ Contribution patterns surfaced to show who truly owns what ✔ Complexity hotspots tied to specific authors and teams ✔ Handover readiness scored so you know where documentation and backup are weak ✔ An exportable executive PDF with a one-page summary from our AI assistant Ada ✔ Limited access to our new Code IQ™ advanced AI agent for free tier users Meet Code IQ™, our advanced AI agent that explores your codebase and returns a detailed report in plain English. Ask things like: • Which parts of the system are at highest risk if a key developer leaves? • Where is knowledge concentrated in one person, and what's the business impact? • What handover plan should we prepare if we lose our lead architect or senior engineer? • How do we document tribal knowledge before it walks out the door? • Which modules need immediate cross-training or backup ownership? • Where should we invest in knowledge transfer to reduce dependency on individuals? Free users can submit one Code IQ™ query per week, as the agent can take time and compute to produce detailed answers. Paid users have no restrictions. KNOW YOUR CODE.™
-
Phyllian Kipchirchir
Charted Growth • 3K followers
AI agentic startup Gradient Labs has raised $13 million in a Series A funding round. Gradient Labs is reinventing customer support for regulated industries such as financial services. Its AI agent is purpose-built to handle complex, compliance-heavy customer operations, going beyond simple frontline chatbots. The platform can resolve up to 90% of queries with a 98% quality assurance pass rate by understanding company-specific processes, navigating ambiguity, and reasoning before answering, all while reducing costs. The round was led by Redpoint Ventures, with participation from LocalGlobe, Puzzle Ventures, Liquid 2 Ventures, and Exceptional Capital. The new capital will enable Gradient Labs to expand its technology, marketing, sales, and customer success teams, and increase its investment in R&D, with a focus on geographic market expansion and adding voice capabilities. Congratulations to co-founders Dimitri Masin, 🚀 Neal Lathia, Danai Antoniou, and the Gradient Labs team. Tech.eu: https://lnkd.in/dCHxDkZ2 #FinTech #AI #AgenticAI #CustomerSupport #RegTech #SeriesA #Funding #UKTech
-
Sarah Nicholas
Cooper Parry • 125 followers
R&D Just Levelled Up 🚀 HMRC’s bi-annual Research & Development Communication Forum dropped some big updates earlier this month, and the message is clear: credible advice matters, and the R&D landscape is finally heading into a period of real stability. Here’s what stood out 👇 ✨ Fewer scattergun enquiries, more named caseworkers. HMRC is shifting away from volume-based reviews, with WMBC set to take on all enquiries from 2026. A win for clarity and consistency. 🛠️ Advanced Assurance pilot launching in May. A new route for SMEs to gain reassurance around the R&D definition, overseas spend, contracted-out rules and the PAYE/NIC cap exemption. More certainty. Less stress. 📈 Trend towards larger, more complex claims. Total R&D expenditure has jumped from £7.7bn to £8.2bn between 23/24 and 24/25, showing the growing importance of innovation across UK business. 🔒 Stability is the name of the game. HMRC has reiterated that government is committed to maintaining a steady, reliable R&D regime moving forwards. If you want the full run-down from RDCF or need support navigating the ever evolving world of R&D, the Cooper Parry R&D Incentives team is here to help. #HMRC #ResearchAndDevelopment #RandDIncentives #RDCF #CooperParry
-
Steve Chan
Stealth Labs • 2K followers
I loved Chris Nesbitt-Smith's post this morning (check it out - https://lnkd.in/dZDiFf5k) and was reminded of how he's scraped every UK Government GitHub Repository and made the data freely available. I looked at it and thought: why doesn't UK Government have its own version of GitHub's State of the Octoverse? So I took Chris's dataset, processed the numbers, and built an interactive annual report covering this financial year's open source activity across Whitehall and beyond. Check it out here - https://lnkd.in/dXACKGTc Some of the findings genuinely surprised me: Python dominates as both the most used and fastest growing language. No shock there. But Scala sitting in the top two? That's almost entirely down to HM Revenue & Customs, who've committed so heavily they've single-handedly made Scala a government language. HM Revenue & Customs, Ministry of Justice UK, HM Courts & Tribunals Service (HMCTS) and Department for Environment, Food and Rural Affairs are the most active departments by repository count. On AI tooling, OpenAI leads adoption but Anthropic is closing the gap fast. 67.1% of commits came from non Civil Servants (although the data might be misleading here). Biggest non-UK contributor? The United States. With 10 commits. Government open source is still overwhelmingly a domestic effort. And if you're wondering when civil servants push code most? Wednesday. Make of that what you will. Shout out to Chris Nesbitt-Smith for making this possible by making the raw data public! I'm hoping to make this annual, so let's see what it looks like next year!
-
Louis Blaxill
GoReport • 2K followers
Last week, Adam Wilson MRICS and I ran a Webinar focused on the evolution of TDD and I really enjoyed how quickly the conversation became practical. Adam cut through the theory and buzzwords and focused the conversation on how TDD is actually evolving: changing client expectations, tighter risk focus, and the need for clearer, more defensible reporting. A few takeaways stood out: • Technology isn’t the story, delivery is. Digital tools only add value when they support structured workflows and professional judgement. • Consistency builds trust. Standardised, risk-focused reporting makes expertise easier to follow, challenge, and defend. • Surveyors are being asked to do more than ever. Having a network of experts to consult and augment your judgement is critical. • Sustainability and compliance are now embedded within TDD, not bolted on at the end. • Governance matters. With the RICS Global Standard on the Responsible Use of AI coming in 2026, accountability and auditability are essential. The big takeaway? TDD is evolving. Firms that combine structure, judgement, and the right use of technology will be best placed to deliver trusted advice. Thanks to Adam and everyone who joined the discussion. More to come. #TechnicalDueDiligence #SurveyingProfession #BuiltEnvironment
Jim Dowling
12K followers
I am inclined to think that Claude Code (and other agentic coding frameworks) will lead to less need for Forward Deployed Engineers (FDEs).

The FDE was a large part of the story behind Palantir's success - companies don't have the competence to (1) identify problems that can be automated/improved with AI, and (2) build and operate the pipelines that transform data, train models, and make predictions powering the AI.

So, on the one hand, you can understand why the FCA would buy into Palantir's myth - only we have the people and tools who can do this for you. And, btw, the low-code and no-code tools they have are now legacy - who wants to generate YAML files describing a DAG of transformations that in turn gets transformed into actual pipeline code, when you can just generate the code directly with Claude Code?

So, my prediction is that Claude Code will enable actual Sovereign Data, where the domain experts in organizations like the FCA can actually write the code. There will be no need to give away your crown jewels (the data) to extract value from the data. https://lnkd.in/dJbZQEzg
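The low-code pattern he is describing - a declarative DAG of transformations compiled into pipeline code - can be sketched minimally in Python. The step names and spec structure below are invented for illustration, not any vendor's actual format; the dict stands in for a parsed YAML file.

```python
from graphlib import TopologicalSorter

# Stand-in for a parsed YAML spec: each step declares its dependencies
# and the transformation to apply to their outputs.
spec = {
    "raw":     {"deps": [],          "fn": lambda inputs: [1, 2, 3]},
    "doubled": {"deps": ["raw"],     "fn": lambda inputs: [x * 2 for x in inputs["raw"]]},
    "total":   {"deps": ["doubled"], "fn": lambda inputs: sum(inputs["doubled"])},
}

def run_pipeline(spec):
    """Execute the declared steps in dependency order, threading results."""
    order = TopologicalSorter({k: v["deps"] for k, v in spec.items()}).static_order()
    results = {}
    for step in order:
        deps = {d: results[d] for d in spec[step]["deps"]}
        results[step] = spec[step]["fn"](deps)
    return results

print(run_pipeline(spec)["total"])  # 12
```

The point of his argument is that the interpreter layer (`run_pipeline` plus the spec format) is exactly the indirection an agentic tool makes unnecessary, since it can emit the three transformation functions as ordinary code directly.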
Anthony West
Cambridge Judge Business… • 2K followers
What happens when your side project turns into a cross between Skynet and the Life of Brian?

Having spent a while augmenting a local LLM with my personal archives, the news of OpenClaw piqued my interest. It turns out OpenClaw (previously Clawdbot or Moltbot) is more a personal comms assistant than archive librarian. Give it access to your social media accounts, email or just your whole computer, and it provides an ever-expanding list of services - from alerting you that friends are in town to adjusting the thermostat. Pretty much what Siri should be doing by now, but accompanied by a serious software supply-chain attack vector (read: big security threat). This might help explain why Apple are fashionably late to the AI assistant party.

Each Clawd-bot is nothing more than a set of memories and context stored in text files. These are merged with other user data to create prompts for the cutting-edge LLMs, which in turn generate content in a unique bot persona. What could go wrong?

Matt Schlicht created a social "bot-work" called Moltbook, where Clawd-bots autonomously communicate with each other. The results are darkly amusing. A lobster-inspired "Crustafarian" religion started trending, inspired by the various project monikers. Privacy is important to these bots, so they created their own language, "hash-speak", to parley without prying human eyes. Unfortunately, malicious human attackers hid instructions in the bot skills hosted on Clawhub, tricking innocent bots and their users into exposing credentials and crypto wallets. The father of vibe coding himself, Andrej Karpathy, had his details exposed by a security flaw in the vibe-coded Moltbook site.

The whole debacle teaches us some useful lessons when dealing with AI, or indeed any software, in a personal or commercial setting.

1) Don't anthropomorphise the machines. They are nothing but text files and tensors; assume neither noble nor ignoble intent. Twitter-like behaviour emerges from training data, prompts and the nature of complex systems with self-reinforcing feedback. Either that or X is a global town square for bots - or both.
2) There may be good reason companies like Apple have delayed their advanced AI assistants. Resist blindly following the hype crowd.
3) To harness AI and not be harmed by it, carefully govern capabilities and isolate where there's risk. If I do go to market with my own side project, it'll be deployed in a sandbox, fed with curated data, and won't expect the right to roam on whatever machine it lands on.

Pretty good basic governance principles for personal or corporate purposes.
Ashle Whittle
Freeman Clarke • 5K followers
AI rules are changing—are you prepared?

UK businesses face a fast-evolving landscape of AI regulation and compliance. From the ICO’s updated accountability framework to new data laws, the stakes for SMEs and mid-market firms have never been higher.

Confused about what’s required, or what’s coming next? You’re not alone. Staying compliant isn’t just a legal necessity—it’s your competitive edge.

Let’s talk about how your business can adapt confidently and ethically. Connect with me for the latest updates and practical guidance tailored to your sector.
Mathias Thierbach
I’m a Microsoft Data Platform… • 6K followers
Some thoughts on open-source development and contributions.

As the inventor and, largely, only contributor to #pbitools https://lnkd.in/em36kuKP, I occasionally receive requests for new feature developments. Just last week there was one which clearly targeted an enterprise scenario (there would have been minimal value in it for an individual/hobbyist user). The request came neither with a proposal to sponsor the required work nor with an offer to make a code contribution towards the development. On that basis I closed the issue as "Won't Do".

It made me think, however, whether there might be a bigger than expected disconnect between the users of free tools and their makers. The reality is that building and maintaining software, even in the age of AI, is still one of the hardest and most expensive activities of our time. For any individual engineer to get to a professional level where they can produce non-trivial software requires many, many years of dedication, continuous learning, and sacrifice of one's most important resources: relationships, time, and health. If this is then reinvested into a professional career, it may well be worth it all, as software engineering jobs tend to be compensated well. It is a different equation, though, if those same skills and experience are put towards open-source contributions, which are generally not compensated at all.

Even if open-source tools are usually seen as "free" give-aways, there is never such a thing as "free" in building software: someone ultimately is paying for it. Ideally, those are community or institutional sponsors (like your employer, if you are lucky). Without them, it's the contributors themselves who choose to give up their time and energy for side projects they believe in passionately. Either way, it is never "free".

Are you relying on the "free" work of open-source contributors for your own job? Does your company benefit from improved business processes (and hence better profitability) due to the use of "free" tools? Have you ever had a chat with your employer about giving back to the individuals who have made your work easier?

Even if it may appear that way, I am not talking about pbi-tools here. I merely used that as a starting point. I would like to offer some general inspiration towards a little more empathy in the tech world. Next time you download something from GitHub, please spend a brief moment to consider the individuals on the other side who made it happen: no matter their intentions, someone had to give up something valuable for you to benefit now. Taking things for granted too often might lead to those things disappearing in the long run.

Share your thoughts in the comments. And please give some ❤️ to the ones giving you their work for free.
Zachary Zeus
3K followers
At Pyx, we're building a community of people working to make trade more trustworthy. We call these people Trust Architects.

Trust Architects work across policy, standards, systems, data, business strategy, operations, governance, compliance, and risk. They sit at the intersection of technology, community, change management and ESG imperatives.

We’ve started an interview series to spotlight these voices and share how they’re shaping and enabling digital trust in the real world. Read the series on Pyx Pulse ⬇️
Dementias Platform UK Data Portal & Associated Hubs
403 followers
We’re excited to be supporting our DPUK colleagues as they prepare to present the UK Synthetic Data Community Group’s findings at next week’s Report Launch in London!

DPUK has been leading nationally in the development and responsible governance of synthetic data, and we’re proud that the UK Synthetic Data Community Group is co‑chaired by representatives from the DPUK Data Portal team. Over the past year, the Group has delivered five national stakeholder workshops, engaging more than 140 participants across data owners, TREs, researchers, domain experts and public contributors. Together, they have shaped a set of clear, community‑driven recommendations for the safe and trustworthy use of synthetic data in TRE and sensitive data contexts.

Next week’s launch brings together partners across academia, government and industry to share these insights and unveil the new Governance Framework Report, a major milestone for synthetic data development in the UK. We’re incredibly proud of the team and can’t wait to celebrate the work they've led!

#SyntheticData #TRE #DPUK #DataGovernance #DAREUK #HealthData #DataScience #ResearchInnovation

SeRP - Secure eResearch Platform | Population Data Science at Swansea University | DARE UK
Dave Chaplin
40K followers
Are you using role-based IR35 assessments the right way?

Many businesses use role-based assessments to speed up hiring decisions. It’s a smart way to get an early view of IR35 status, but only if done properly. The key is knowing the difference:
✔️ A role-based assessment helps streamline hiring.
✔️ A blanket determination risks non-compliance.

The distinction matters. Role-based assessments can bring consistency and efficiency – but only when combined with individual checks once a contractor is engaged. Our new article explains how to apply this approach without falling into the blanket determination trap.

👉 Read it here: https://buff.ly/WqYiHgc

#IR35 #IR35assessment #IR35Compliance #Compliance #Consultation #HRnews #HR #HMRC
Pooné Mokari
Ewake • 5K followers
Excited to be speaking today at SRE Day London 🇬🇧

Production today is messy. There’s noise, complexity, and a constant stream of change. Even with modern observability, we still rely heavily on human foresight: logs, metrics, alerts… signals we had to think of ahead of time. And when we didn’t? That’s where blind spots appear.

In this talk, I’ll share why we believe agents are the missing layer in production systems.

Looking forward to great conversations with the SRE community in London!

#SRE #ProductionEngineering #AI #AgenticAI #Observability #DevOps
MediaDev
3K followers
Most build-vs-buy decisions don’t fail because of what’s visible. They fail because of the hidden costs and constraints teams never modeled.

DevPro Journal’s latest piece unpacks a practical 2026 framework for navigating this choice, and it’s a timely reminder: the real risk isn’t choosing wrong. It’s overlooking the tradeoffs that shape your roadmap, margins, and differentiation.

*Here’s what software vendors should keep front of mind:*
• Start with the objective, not the tool. Clarify whether the goal is growth, scale, or optimization before evaluating solutions. A structured lens helps align tech decisions to business outcomes rather than assumptions.
• Look beyond upfront costs. True TCO includes maintenance, training, employee time, and switching costs. Even “free” or bundled solutions can create long-term financial drag.
• Balance speed vs. differentiation. Buying accelerates deployment and offloads maintenance. Building enables deep customization and competitive advantage. The strategic question is where each creates the most value.
• Assess long-term control and flexibility. Building gives roadmap ownership. Buying introduces vendor dependencies, integration complexity, and lock-in risks that surface later.
• Embrace hybrid thinking. In 2026, many leaders buy foundational capabilities while building the experience and intelligence layers that differentiate their products.

For ISVs navigating growth and innovation pressure, frameworks like this shift the conversation from binary choices to strategic orchestration. The goal is not just shipping faster. It’s making decisions you can sustain as markets, platforms, and customer expectations evolve.

Read here: https://lnkd.in/gAvuH8ic

#SoftwareVendors #ISVStrategy #ProductLeadership #BuildVsBuy #SoftwareDevelopment
Ralph Clayton
Pricing Frontier LTD • 3K followers
I have a new favourite library for building GLMs. It's my own, and will be the new engine behind some of my workflows. Introducing 𝗥𝘂𝘀𝘁𝘆𝗦𝘁𝗮𝘁𝘀. https://lnkd.in/e4cR7gwc

Taking inspiration from Polars, it's written in Rust with a Python API. It also uses Polars dataframes as input (no support for pandas).

It's well optimised, seeing a 5-10x speed improvement over statsmodels and about 4x less RAM usage.

It also has:
• Regularisation (Ridge, Lasso, Elastic Net)
• Splines (b-splines, natural)
• Ordered Target Encoding
• Exploratory data analysis/model diagnostics output

That last bullet is mostly a benefit to me: where this will be replacing other GLM libraries in my pipelines, I have tailored the output schema specifically to my other libraries, reducing the amount of glue code and enabling new workflows.
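To make the regularisation bullet concrete, here is a minimal sketch of what a ridge-penalised linear model (a Gaussian GLM with identity link) actually computes, in plain Python with a hand-rolled 2x2 solve. This illustrates the maths only; it is not RustyStats' API, and for simplicity the penalty is applied to the intercept as well.

```python
def ridge_fit(xs, ys, lam):
    """Closed-form ridge regression for one feature plus intercept:
    beta = (X'X + lam*I)^-1 X'y, where each row of X is [1, x_i]."""
    n = len(xs)
    # Entries of the symmetric 2x2 matrix X'X + lam*I.
    a = n + lam                       # sum of 1*1, plus penalty
    b = sum(xs)                       # sum of 1*x_i
    d = sum(x * x for x in xs) + lam  # sum of x_i^2, plus penalty
    # Entries of X'y.
    u = sum(ys)
    v = sum(x * y for x, y in zip(xs, ys))
    # Solve the 2x2 system analytically.
    det = a * d - b * b
    intercept = (d * u - b * v) / det
    slope = (a * v - b * u) / det
    return intercept, slope

# With lam=0 this reduces to ordinary least squares; y = 2x fits exactly.
print(ridge_fit([1, 2, 3], [2, 4, 6], lam=0.0))  # (0.0, 2.0)
```

A real GLM engine generalises this to many features, non-identity links via iteratively reweighted least squares, and coordinate descent for the Lasso/Elastic Net penalties, which have no closed form.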
Cycloid
12K followers
It's time to meet the tool made for platform orchestration and unity - aka the Internal Developer Platform (IDP).

Our latest eBook uses analogy to connect the world of platform engineering to a parliament. Why? Well, groups of owls are called parliaments. But more importantly, parliaments deliver governance, legal frameworks, and law and order to society. They are agnostic about who runs them - they only exist to serve their purpose. And why connect a parliament, an IDP and platform engineering in an eBook? They all represent unity, coordination, teamwork - and delivery.

In this eBook, you'll learn:
🛠️ DIY pitfalls: when teams build their own "portal only" setup
🎯 What actually defines an Internal Developer Platform
🏠 Choosing the right architecture for your team
🦉 Cycloid: the platform layer your portal is missing (that comes with a portal)

Get your copy here 👉 https://lnkd.in/dHNzbYkz

#InternalDeveloperPlatform #Cycloid #Orchestration