United Kingdom
3K followers
500+ connections
About
Activity
3K followers
-
Henry Garner shared this:
"Yes but AI hasn't changed the fundamentals of software engineering". I hear this regularly, and although it's correct it seems to me to be a way of not looking too closely at how much _has_ changed. Christopher Browne built a post-trade equity derivatives risk system in weeks, solo, with Claude Code and Allium specs. The kind of system that normally needs a team and a year inside a bank. I think he's done a great job of capturing this new normal: https://lnkd.in/eKvaV3tt #SoftwareEngineering #AgenticEngineering #FinTech
JUXT Blog: Building Meridian: a risk system in weeks
-
Henry Garner reposted this:
That says it all, really. 👆 Last week's Sportradar CONNECT: Tech Talk was a brilliant evening - and that was down to the people in the room as much as the speakers on stage. Great energy, great conversations, and exactly the kind of community we set out to build when we started this series. A huge thank you to Behshad Behzadi, Michał Olszewski, and Henry Garner for bringing the ideas and the inspiration. And to our Warsaw office and teams - thank you for all the support. You made the night happen. 🙌 Next stop: Trondheim. 🇳🇴 (more info 👉 https://lnkd.in/dNzpVKjG) #SportradarConnect #TechTalk #Warsaw #AI #Tech #Community
-
Henry Garner shared this:
Big design upfront? I’m on my way to Poland to talk about whether spec-driven development really is Waterfall 2.0. Or with the right kind of AI support, could it actually be the purest manifestation of Agile yet? Join me in Warsaw this evening at Sportradar CONNECT for some big ideas and live AI-assisted coding! Looking forward to meeting Sportradar friends and Grid Dynamics colleagues. See you there? Register here: https://lnkd.in/eWc6ZrKF #SpecDrivenDevelopment #AgileAI #AIAssistedCoding
-
Henry Garner shared this:
Come and do valuable work with our stellar team! #joinus
-
Henry Garner reposted this:
I'm excited to say that Sam Newman, independent consultant and author of Building Microservices, will be speaking at XT26. Sam is a thought leader who cuts through the noise to articulate what matters. We need Sam's perspective in this crazy year, as someone who has been across so much of our industry, as a technical leader for ThoughtWorks and now independent, helping companies with distributed systems and emerging technologies. XT26 is a one-day, invite-only event in central London on June 18th, with around 250 senior engineering leaders across banking and financial services. Apply for an invitation: juxt.pro/xt26
-
Henry Garner shared this:
Abby Bangser joins the #XT26 lineup with a talk on platform engineering at NatWest Bank. Naming a team "Platform as a Product" is the easy part. She'll walk through the socio-technical journey that reduced Kubernetes deployment times by over 97%, bringing the CNCF's platform engineering white papers to life with hard-won lessons. Register your interest at juxt.pro/xt26. London, June 18th.
-
Henry Garner shared this:
Update: we really did identify a bug in the Apollo 11 flight code. It's confirmed by the lovely folks from the Virtual AGC Project. I still can’t quite believe it. There’s a resource lock leak on the IMU gyros if they’re caged during torquing. It was present for missions 11-14. They also told us it was fixed as part of larger changes for Apollo 15, so it's not a novel bug (except to us). It's a giant leap for AI-native behavioural specification. Thousands of developers have read this code. Academics have published reliability papers on it. Emulators run it instruction by instruction. But all of that scrutiny asks the same kind of question: how does the code work? We asked a different one: what is it trying to achieve? We used #Allium, our open-source AI skill, to extract a machine-readable spec from the code itself. Within a couple of hours it surfaced a resource leak that would only manifest under a specific, rare error path. If a behavioural specification can find issues in code that reliability papers have been written about, imagine what it could find in yours. https://lnkd.in/ecgWuF5p #Apollo11 #SoftwareEngineering
-
Henry Garner reacted on this:
XTDB 2.2 will be shipping with an "external source" API: a way to radically simplify how upstream data sources can be plumbed into XTDB. We have developed this API in conjunction with a key design partner on their Virtual Power Plant Orchestration platform, where they have dozens of sources (ingestion apps, Postgres/TimeScale, Kafka topics etc.) and thousands of upstream devices all flowing data into XTDB continuously. This XTDB deployment, running in Azure, is already handling several TBs of data on power readings, device settings, network topologies and more. Why was XTDB chosen? Because the Design Partner needed a radically simpler data model than any alternatives could offer. For them, reliable (correctable!) reporting with total auditability over time series data was non-negotiable. Bitemporal SQL delivers this. #XTDB (more details on the project are linked in the comment below)
-
Henry Garner reacted on this:
The article states that PostgreSQL supports temporal tables via the temporal_tables extension. Frankly, it is not usable for many a reason: it doesn't support Postgres 18, not available in RDS & friends, doesn't use standard syntax, and forces the user to create several triggers for each temporal table. Too many problems, sorry. As the article correctly states, MariaDB has temporal tables, but MariaDB is not MySQL. For those who are interested in temporal tables, I recommend taking a look at XTDB, which was designed to implement bitemporal tables.
Would you like to Time Travel... in MySQL? https://lnkd.in/ed-_QiCC
Every Major OLTP Has Time Travel. Except MySQL - dbtrail Blog
-
Henry Garner liked this:
The RFI promised in last Friday's SR 11-7 rewrite is going to be the single most important opportunity for FS firms to shape what fills the GenAI governance gap and my guess is that most submissions will be wasted. Responses to these kind of RFIs usually come in one of two categories. The first is the policy submission — three pages explaining your firm's "responsible AI principles" and "commitment to robust governance". Regulators read these, file them, and ignore them when writing the actual guidance. The second is the operational submission — here is the agent inventory we maintain, here is how we attribute usage by line of business, here is the per-call telemetry our risk team uses to detect drift, here are the runtime controls that prevent specific failure modes. The second category shapes the guidance. The first doesn't. Operational submissions require deployed controls. Not "in the roadmap." Not "captured in our policy." Deployed. That is a non-trivial bar, and most firms aren't there. At a minimum, it'll take four things:
1. An agent and model inventory that's accurate by construction, not maintained by hand. If a developer can call a foundation model without that call appearing in the inventory within minutes, the inventory is fiction.
2. Usage attribution at the request level. Per business unit, per application, per agent. Without this you can't answer the most basic regulator question: who in the firm is using which model for what.
3. Real-time guardrails on inputs and outputs. PII filtering, topic restriction, response evaluation. Enforced at the request path, not as an SDK call some teams remember to add.
4. Budget and rate controls per model and per consumer. Not because regulators care about your spend — because operational controls are evidence of operational maturity.
Firms with these will write the next round of guidance. Firms with policy documents will read it. If you want more details, check my post from yesterday.
-
Henry Garner liked this:
GenAI coding tools are genuinely powerful. In the right hands, in the right environment, the stuff is remarkable. Experienced engineers with good practices around them are doing things in hours that used to take weeks. Ideas get tested that previously stayed as hypotheses. Long-standing technical debt is getting cleared. Work that wasn't worth the investment a year ago is now done in an afternoon. Right environment means organisations that genuinely understand software engineering. An appreciation that building software is not a production line, but a learning process. Right hands means experienced software engineers who take full end to end ownership. Product mindset. XP practices. Continuous Delivery, with all the automation, tests and guardrails that let you learn and iterate quickly without breaking things. Most organisations don't have that, which is why most of the industry isn't getting much from these tools. The organisations best placed to benefit from GenAI are the ones who invested in engineering foundations years ago. For everyone else, the shortcut you were hoping for doesn't exist. For CEOs and founders hoping to benefit, the answer isn't as simple as handing out Claude licences (as Jason Gorman puts it, "just because you attach a code-generating firehose to your plumbing, that doesn't mean you'll get a power shower"). It's investing in the engineering culture and practices. Unglamorous, slow work, but there's no way around it.
-
Henry Garner liked this:
When people say, "If you don't like your job, just start your own business."
-
Henry Garner reacted on this:
"Spec-driven development is just waterfall" and "Big upfront design doesn't work", I hear both of these a lot. Both miss the same thing. Waterfall's problem was never specifications, it was feedback speed. You'd spec for weeks, build for months, and learn you were wrong long after the cost of being wrong had piled up. Big upfront design fails for the same reason: the loop between intention and reality was too long to correct anything in time. Spec-driven development with AI inverts that math. Write a spec, AI implements, tests verify, you see the result, you adjust. Feedback takes minutes instead of months, and the cycle that made waterfall brittle is gone. TDD, BDD, and acceptance criteria were all spec-first practices, and nobody called those waterfall because the feedback loop was tight. Thought experiment: if coding became truly instant, what would be left to do? The spec, and only the spec. 100% of your effort on design, zero on implementation. That's not waterfall, that's pure design work. But small batches still matter. The constraint has just moved. It used to be bounded by how fast humans could implement. Now it's bounded by how much spec you can meaningfully verify, review, and correct in one cycle. Pile a hundred behaviors into one spec, hand it to AI, and you lose the signal. The discipline stays, the practices around it have to catch up to the new capacity. The real question isn't "should I write specs." It's whether you can afford not to. Because the alternative is trusting an amnesiac AI to hold every decision you've ever made. 📘 The Spec-Driven Shift | Week 4: The Future | Post 18 of 20 Full series → https://lnkd.in/e_58DTaa
-
Henry Garner liked this:
Every system that was regulated, either explicitly or implicitly, by the fact that they were effortful for humans (letters of recommendation, government filings, essays, or, as this paper finds, lawsuits) will break under a wave of AI.
Experience & Education
-
JUXT
*** * ** ******* **** * ***** **** ** **** ********
-
***** ****** **********
********
-
**** *** **** ******** *******
********* ******** * ** ******* ****
-
********** ** ******
*** **** ***
-
Licenses & Certifications
Publications
-
JUXT AI Radar
JUXT
Keeping pace with AI development feels increasingly difficult. New tools appear weekly, claims about capabilities shift monthly, and what seemed essential last quarter might be yesterday’s news.
Our teams at JUXT have been applying AI across multiple client projects, from coding assistants to agent frameworks, from prompt engineering to model selection. We’ve seen what works in practice, what doesn’t live up to the marketing, and where the real value lies for organisations trying to make sensible technology choices.
We’ve distilled these insights into our first AI Radar: an opinionated guide to the tools, techniques, and platforms we think are worth your attention right now. It’s structured around four rings (adopt, trial, assess, and hold) making it easier to understand what’s ready for production use versus what needs more time to mature.
This isn’t a snapshot: we’ll be updating it regularly as the landscape evolves and our understanding deepens. If you’re navigating AI adoption in your organisation, we hope it provides a useful reference point.
-
Clojure for Data Science
Packt
Clojure is a powerful language that combines the interactivity of a scripting language with the speed of a compiled language. It shares a platform - the Java Virtual Machine - with the big data powerhouses Hadoop and Spark, enabling efficient use of these de-facto standards without sacrificing expressiveness. Together with its rich ecosystem of native libraries and an extremely simple and consistent functional approach to data manipulation, which maps closely to mathematical formulae, it is an ideal practical and flexible language to meet data scientists’ diverse needs.
Taking you on a journey from simple summary statistics to sophisticated machine learning algorithms, this book shows how the Clojure programming language can be used to derive insight from data. You’ll learn how to apply statistical thinking to your own data and use Clojure to explore, analyse and visualise it in a technically and statistically robust way. You’ll explore the core machine learning techniques of regression, classification, clustering and recommendation, specialist approaches for graph and time series data, and discover the wealth of tools that should be in every Clojure Data Scientist’s toolbox.
Above all, by following the explanations in this book you’ll learn not just how to be effective using the current state-of-the-art in data science, but why such methods work so that you can continue to be productive as the field evolves into the future.
Recommendations received
4 people have recommended Henry
Explore more posts
-
Dominic Fox
NatWest Commercial and… • 865 followers
This evening's work for Claude, on Patches: 1) Thinking over a couple of API decisions I recorded yesterday, I realised there were cleaner approaches available to one particular aspect, piggy-backing on previous changes. I proposed those approaches to the LLM, talked them through, considered objections, settled on a new approach; the LLM wrote up the revised ADR (which I read, checking that it captured my intention clearly) 2) The method that compares a new patch graph to the existing one and builds an updated execution plan is huge, nested and complex. I would never have written it that way myself, but it works as it stands. Nevertheless, I have future changes targeting that area, and it's risky to have something that's both syntactically convoluted and hard to read and test, especially in such a central piece of logic. It badly needs some attention. I asked the LLM to propose a breakdown of the method into smaller, testable pieces. Its proposal was ok, but missed an opportunity to separate cleanly the "deciding what to do" phase from the "acting on the decision" phase, leaving graph analysis and new module instantiation somewhat tangled up with each other. I suggested some further re-organisation to disentangle things, and asked the LLM to remark on the impact this would have on testing; it confirmed that it would make the testable surface of each of the distinct pieces smaller and more easy to control. Once we had a scheme that looked sound to me, I asked it to write up an epic and a sequence of tickets for it. Then I asked it to start working through the changes. Now it's chugging through the first ticket, while I write this. It will go on doing so while I pop downstairs and do the washing-up. I'm comfortable with the fact that the first draft of the code it's now tidying up was unsatisfactory from a testability and maintainability point of view: it grew organically and expediently in the course of pulling together something that worked well enough for me to start playing with it. Now it's a glaring area of technical risk, so I'm cheaply and quickly addressing it. A capable human developer would have kept much better discipline throughout the drafting process, but might not have seen that the decide/action split was natural and desirable until about this point in the game - I didn't until I stopped to think about it. What's absent from this process, for me, is anxiety that the LLM will just heap up piles of intractable slop. It will typically do the expedient thing at each step along the way, but that's fine if you're working in short iterations provided you're also periodically stopping to review where cruft and friction are accumulating. What it lacks in foresight, you somewhat have to make up for in hindsight. But the latter is famously somewhat clearer than the former anyway.
2
-
Baz
4K followers
The next step in AI code review: context and intent. Baz reviewers now understand not just what changed, but why, reading the Jira ticket, Figma design, and spec to measure the PR’s real impact. Thanks to Amazon Web Services (AWS) Bedrock AgentCore and Playwright MCP, that understanding is now testable in live environments.
2
-
Anthony West
Cambridge Judge Business… • 2K followers
What happens when your side project turns into a cross between Skynet and the Life of Brian? Having spent a while augmenting a local LLM with my personal archives, the news of OpenClaw piqued my interest. It turns out OpenClaw (previously Clawdbot or Moltbot) is more a personal comms assistant than archive librarian. Give it access to your social media accounts, email or just your whole computer, and it provides an ever-expanding list of services - from alerting you that friends are in town to adjusting the thermostat. Pretty much what Siri should be doing by now, but accompanied by a serious software supply-chain attack vector (read big security threat). This might help explain why Apple are fashionably late to the AI assistant party. Each Clawd-bot is nothing more than a set of memories and context stored in text files. These are merged with other user data to create prompts for the cutting-edge LLMs, which in turn generate content in a unique bot persona. What could go wrong? Matt Schlicht created a social "bot-work" called Moltbook, where Clawd-bots autonomously communicate with each other. The results are darkly amusing. A lobster inspired "Crustafarian" religion started trending, inspired from the various project monikers. Privacy is important to these bots so they created their own language, "hash-speak" to parlay without prying human eyes. Unfortunately malicious human attackers hid instructions in the bot-skills hosted on Clawhub, tricking innocent bots and their users into exposing credentials and crypto wallets. The father of vibe coding himself, Andrej Karpathy, had his details exposed from a security flaw in the vibe coded Moltbook site. The whole debacle teaches us some useful lessons when dealing with AI or indeed any software in a personal or commercial setting. 1) Don't anthropomorphise the machines, they are nothing but text files and tensors, assume neither noble nor ignoble intent. Twitter-like behaviour emerges from training data, prompts and the nature of complex systems with self-reinforcing feedback. Either that or X is a global town square for bots - or both. 2) There may be good reason companies like Apple have delayed their advanced AI assistants. Resist blindly following the hype crowd. 3) To harness AI and not be harmed by it, carefully govern capabilities and isolate where there's risk. If I do go to market with my own side project, it'll be deployed in a sandbox, fed with curated data and not expect the right to roam on whatever machine it lands on. Pretty good basic governance principles for personal or corporate purposes.
3
2 Comments
-
Jim Dowling
12K followers
I am inclined to think that claude code (and other agentic coding frameworks) will lead to less need for Forward Deployed Engineers (FDE). The FDE was a large part of the story behind Palantir's success - companies don't have the competence to (1) identify problems that can be automated/improved with AI, and (2) build and operate the pipelines that transform data, train models, and make predictions powering the AI. So, on the one hand, you can understand why the FCA would buy into Palantir's myth - only we have the people and tools who can do this for you. And, btw, the low-code and no-code tools they have are now legacy - who wants to generate YAML files creating a DAG of transformations that in turn gets transformed into actual pipeline code, when you can just generate the code directly with Claude code. So, my prediction is that Claude Code will enable actual Sovereign Data, where the domain experts in organizations like the FCA can actually write the code. There will be no need to give away your crown jewels (the data) to extract value from the data. https://lnkd.in/dJbZQEzg
40
6 Comments
-
Dementias Platform UK Data Portal & Associated Hubs
414 followers
We’re excited to be supporting our DPUK colleagues as they prepare to present the UK Synthetic Data Community Group’s findings at next week’s Report Launch in London! DPUK has been leading nationally in the development and responsible governance of synthetic data, and we’re proud that the UK Synthetic Data Community Group is co‑chaired by representatives from the DPUK Data Portal team. Over the past year, the Group has delivered five national stakeholder workshops, engaging more than 140 participants from data owners, TREs, researchers, domain experts and public contributors. Together, they have shaped a set of clear, community‑driven recommendations for safe and trustworthy use of synthetic data in TRE and sensitive data contexts. Next week’s launch brings together partners across academia, government and industry to share these insights and unveil the new Governance Framework Report, a major milestone for synthetic data development in the UK. We’re incredibly proud of the team and can’t wait to celebrate the work they've led! #SyntheticData #TRE #DPUK #DataGovernance #DAREUK #HealthData #DataScience #ResearchInnovation SeRP - Secure eResearch Platform | Population Data Science at Swansea University | DARE UK
10
-
Steve Chan
Stealth Labs • 3K followers
I loved Chris Nesbitt-Smith's post this morning (check it out - https://lnkd.in/dZDiFf5k) and was reminded of how he's scraped every UK Government GitHub Repository and made the data freely available. I looked at it and thought: why doesn't UK Government have its own version of GitHub's State of the Octoverse? So I took Chris's dataset, processed the numbers, and built an interactive annual report covering this financial year's open source activity across Whitehall and beyond. Check it out here - https://lnkd.in/dXACKGTc Some of the findings genuinely surprised me: Python dominates as both the most used and fastest growing language. No shock there. But Scala sitting in the top two? That's almost entirely down to HM Revenue & Customs, who've committed so heavily they've single-handedly made Scala a government language. HM Revenue & Customs, Ministry of Justice UK, HM Courts & Tribunals Service (HMCTS) and Department for Environment, Food and Rural Affairs are the most active departments by repository count. On AI tooling, OpenAI leads adoption but Anthropic is closing the gap fast. 67.1% of commits came from non Civil Servants (although the data might be misleading here). Biggest non-UK contributor? The United States. With 10 commits. Government open source is still overwhelmingly a domestic effort. And if you're wondering when civil servants push code most? Wednesday. Make of that what you will. Shout out to Chris Nesbitt-Smith for making this possible by making the raw data public! I'm hoping to make this annual, so let's see what it looks like next year!
21
2 Comments
-
Jose Saura
PNW R&D • 514 followers
The project I mentioned in my last post works great, but it definitely took longer than I expected to get everything tuned. So for my next build — a new controller for a spin coater — I tried a different approach. If you haven’t seen one, a spin coater is basically a machine that spins a substrate at controlled speeds so you can lay down a thin, even film. Commercial ones are expensive, so I built my own with a drone motor, 3D‑printed parts, and an ESC. Up until now I was driving it with an RC transmitter, but I needed proper ramp‑up/ramp‑down curves and accurate RPM control. This time I used Google Gemini, and it took me about eight hours to go from idea to a fully working system: firmware, UI, PID tuning, ESC calibration, and support for pro‑level spin profiles. I shared both the prompt and the generated code in this repo. 👉 https://lnkd.in/gWu98ejp But the real story here isn’t the project — it’s the workflow that’s starting to emerge: My LLM‑Driven Engineering Workflow: • I start by having the LLM help define the module’s specs. It’s like a design review partner that never gets tired. • Once the specs look good, I ask the model to write the prompt that will generate the code. • Then I take that prompt to a second LLM and let it critique and improve it. (It usually says “this is already solid, but…” and then gives genuinely useful tweaks.) • I run the refined prompt in my code‑gen model and save that prompt as part of the project. • And yes — the model always misses something. That’s normal. I issue follow‑up prompts and keep them in a “prompt log.” • The magic happens later: when a new LLM version comes out, I can re‑run the same prompts and instantly get a better implementation without rewriting anything. And this is where things get interesting. LLMs are about to change code reuse in a big way I’m realizing that the old idea of “reusing code” is going to shift. Sometimes it’s not worth pulling in a third‑party library anymore. If the model already knows how to implement a PID controller, for example, it can generate a clean, purpose‑built version that fits your architecture better than a generic library. That’s a big deal. Because it means the thing we reuse isn’t the code — it’s the instructions that generate the code. We’re moving from: • reusing libraries → to reusing prompts • sharing code → to sharing the recipe that produces the code • maintaining a codebase → to regenerating it when the models improve The prompt becomes the durable artifact. The code becomes a snapshot — something you can always recreate, refine, or upgrade later. This feels like the beginning of a pretty big shift in how we build software.
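As an illustration of the kind of purpose-built component the post describes regenerating rather than importing, here is a minimal PID controller sketch in C. It is purely illustrative: the names (pid_state, pid_step), the gains, and the toy RPM plant are hypothetical and not taken from the author's spin-coater firmware.

/* Minimal incremental PID controller: illustrative sketch only. */
#include <stdio.h>

struct pid_state {
    double kp, ki, kd;   /* proportional, integral, derivative gains */
    double integral;     /* accumulated error */
    double prev_error;   /* error from the previous step */
};

/* One control step: returns the actuator command for this sample. */
static double pid_step(struct pid_state *s, double setpoint,
                       double measured, double dt)
{
    double error = setpoint - measured;
    double derivative = (error - s->prev_error) / dt;

    s->integral += error * dt;
    s->prev_error = error;
    return s->kp * error + s->ki * s->integral + s->kd * derivative;
}

int main(void)
{
    struct pid_state rpm_pid = { .kp = 0.8, .ki = 0.2, .kd = 0.01 };
    double rpm = 0.0;

    /* Toy plant: each command nudges the RPM toward a 3000 RPM setpoint. */
    for (int i = 0; i < 50; i++) {
        double cmd = pid_step(&rpm_pid, 3000.0, rpm, 0.01);
        rpm += 0.05 * cmd;
        printf("step %2d  rpm %.1f\n", i, rpm);
    }
    return 0;
}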
6
1 Comment
-
Mathias Thierbach
I’m a Microsoft Data Platform… • 6K followers
Some thoughts on open-source development and contributions. As the inventor and, largely, only contributor to #pbitools https://lnkd.in/em36kuKP, I occasionally receive requests for new feature developments. Just last week there was one which clearly targeted an enterprise scenario (there would have been minimal value for an individual/hobbyist user from it). The request came neither with a proposal to sponsor the required work nor with an offer of making a code contribution towards the development. On that basis I closed the issue as "Won't Do". It made me think, however, whether there might be a bigger than expected disconnect between the users of free tools and their makers. The reality is that building and maintaining software, even in the age of AI, still is one of the hardest and most expensive activities of our time. For any individual engineer to get to a professional level where they can produce non-trivial software requires many, many years of dedication, continuous learning, and sacrifice of one's most important resources - relationships, time, and health. If this is then reinvested into a professional career, it may well be worth it all as software engineering jobs tend to be compensated well. It is a different equation, though, if those same skills and experience are used towards open-source contributions, generally not compensated at all. Even if open-source tools are usually seen as "free" give-aways, there is never such a thing as "free" in building software - someone ultimately is paying for it. Those can be, ideally, community or institutional, sponsors (like your employer, if you are lucky). Or, without them, it's the contributors themselves who chose to give up their time and energy for side projects they might believe in passionately. Either way, it is never "free". Are you relying on the "free" work of open-source contributors for your own job? Does your company benefit from improved business processes (and hence better profitability) due to the use of "free" tools? Have you ever had a chat with your employer about giving back to the individuals who have made your work easier? Even if it may appear that way, I am not talking about pbi-tools here. I merely used that as a starting point. I would like to give some general inspiration, however, towards a little bit more empathy in the tech world. Next time you download something from GitHub, please spend a brief moment to consider the individuals on the other side who made this happen - no matter their intentions, someone had to give up something valuable for you to get a benefit now. Taking things for granted too often might lead to those things disappearing in the long run. Share your thoughts in the comments. And please give some ❤️ to the ones giving you their work for free.
80
6 Comments
-
Astronomer
142K followers
How do you keep large-scale data operations running without bottlenecks? Jonathan Rainer, former Platform Engineer at Monzo Bank, shares how his team uses Airflow to schedule and manage complex compute workloads efficiently. We discuss scaling challenges, automation and the role of orchestration in modern data workflows. Follow the link in the comments for the episode. #AI #Automation #Airflow #MachineLearning
12
1 Comment
-
Highways Plus - surfacing and civils
413 followers
⚠️ £500 million of developer capital is trapped in stalled adoptions across the UK right now. That's not a typo. While you're reading this, hundreds of developments sit "complete" but not adopted—tying up working capital that should be funding new projects. The numbers are genuinely shocking: 📈 Only 45% of Section 38 applications get approved first time 📈 Average processing times up 40% since 2019 📈 Cost escalations of 200-400% becoming routine 📈 Extended maintenance periods affecting 60%+ of developments But here's what most people don't realise: these problems are preventable. The developers succeeding in this environment aren't lucky—they're systematic. They understand that adoption begins during design, not after construction. In our latest analysis, we break down exactly where the system is failing and what you can do to protect your projects from becoming another statistic. Key insights: ➡️ The real root causes of adoption delays ➡️ Why design compliance ≠ adoption compliance ➡️ How LHA resource constraints affect YOUR timeline ➡️ Prevention strategies that actually work Don't let adoption delays derail your next project. Read the full analysis on our website 👇 #AdoptionCrisis #HighwayAdoption #PropertyDevelopment #UKDevelopers #Section38 #ProjectRisk #DevelopmentFinance
2
-
Ujwal A Krishna
Nivy • 417 followers
New work: HiChunk tackles a key weak spot in RAG systems — how you chunk documents matters more than you think. Existing RAG benchmarks often miss the impact of how documents are split because they suffer from evidence sparsity (only a few sentences in the doc are relevant). HiCBench is introduced to fix this: it provides manually annotated multi-level chunking points, synthetic QA pairs with dense evidence, and aligned evidence sources. HiChunk is the proposed framework: use fine-tuned LLMs + an Auto-Merge retrieval algorithm to build multi-level document structuring. Results show that HiChunk improves chunk quality without blowing up time, and boosts RAG pipeline end-to-end retrieval & generation performance. Takeaway: chunking strategy (how you split, merge, structure documents) is a first-order lever in RAG effectiveness. Better evaluations like HiCBench help reveal what really works, not just what looks good in basic settings. Read more: https://lnkd.in/gdVNAZVd #RAG #RetrievalAugmentedGeneration #DocumentChunking #LLM #AIResearch #EvaluationBenchmarks #HiChunk #HiCBench #InformationRetrieval
-
Panto AI
1K followers
🔍 Day 85: Precision in Language 🔍 In a recent code review, Panto flagged an error message reading "failed to marsh request". The correct term—“marshal”—refers to the process of transforming data into a specific format for transmission or storage. Panto AI is designed to catch not just structural issues, but also linguistic ones—helping teams maintain clean, clear, and correct code at scale. #CodeReview #CleanCode #DeveloperTools #SoftwareEngineering #ErrorMessages #CodeQuality #DevEx #Automation #TechnicalExcellence #CodingStandards
3
-
Sairam Chaganti
LawyerDesk Advocacy Pvt Ltd • 1K followers
We validated LegiScore with someone who wasn't a legal expert. It changed the product more than months of SME feedback did. When we started building, we did the obvious thing — worked with experienced lawyers. They taught us what risks actually matter in property evaluation, how title analysis works in practice, which checks you can never skip. They gave us the legal intelligence that makes LegiScore more than just another tech product guessing at law. But building the system is only half of it. To validate, we showed the product to a young lawyer who'd never used anything like it. And the conversation went somewhere I didn't expect. She wasn't asking about legal depth. She was asking about flow. What should happen first? Why does this insight show up here instead of three screens earlier? Can this decision be automated instead of manually checked every time? It wasn't a usability test anymore. It turned into a workflow redesign session. Because she wasn't conditioned by legacy systems, she wasn't trying to replicate how things have always been done. She questioned things we'd stopped questioning. That pushed us to rethink workflow logic, decision triggers, where automation actually belongs, and how information should be prioritized. We ended up with a product that works for someone seeing it for the first time — which, it turns out, also makes it better for everyone else. Experts helped us build something accurate. A fresh pair of eyes helped us build something clear. I keep thinking about how easy it would have been to skip that second step. When was the last time you showed your product to someone outside your domain? What did they see that you missed? LegiScore | Property AI Legal Search & Reports #AI #TechLeadership #LegalTech #PropTech #BuildInPublic #StartupIndia
9
-
Cycloid
12K followers
It's time to meet the tool made for platform orchestration and unity - aka the Internal Developer Platform (IDP). Our latest eBook uses analogy to connect the world of platform engineering to a parliament. Why? Well, groups of owls are called parliaments. But more importantly, parliaments deliver governance, legal frameworks, and law and order to society. They are agnostic about who runs them - they only exist to serve their purpose. And why connect a parliament, an IDP and platform engineering in an eBook? They all represent unity, coordination, teamwork - and delivery. In this eBook, you'll learn: 🛠️ DIY pitfalls: when teams build their own "portal only" setup 🎯 What actually defines an Internal Developer Platform 🏠 Choosing the right architecture for your team 🦉 Cycloid: the platform layer your portal is missing (that comes with a portal) Get your copy here 👉 https://lnkd.in/dHNzbYkz #InternalDeveloperPlatform #Cycloid #Orchestration
3
-
Teaching Public Service in the Digital Age
3K followers
We're pleased to include Jennifer Pahlka's article 'Project vs Product Funding' in our Unit 3 syllabus update. You can read it here: https://lnkd.in/e2gdGTpF Traditional government technology projects often rely on big upfront budgets and fixed timelines. This “project” funding model tends to lock in outdated systems and spiraling costs, making it hard to adapt to changing needs. Jennifer argues for a shift to the “product” funding model, where ongoing, incremental investments allow for continuous updates and improvements. This approach delivers more responsive, effective public services that evolve with users. 🔄✨ Make sure you give it a read!
6
-
Ashle Whittle
Freeman Clarke • 5K followers
AI rules are changing—are you prepared? UK businesses face a fast-evolving landscape of AI regulation and compliance. From the ICO’s updated accountability framework to new data laws, the stakes for SMEs and mid-market firms have never been higher. Confused about what’s required, or what’s coming next? You’re not alone. Staying compliant isn’t just a legal necessity—it’s your competitive edge. Let’s talk about how your business can adapt confidently and ethically. Connect with me for the latest updates and practical guidance tailored to your sector.
7
-
OpenHands
7K followers
The hard part of refactors isn’t what to change—it’s coordinating all the changes without breaking everything. We’ve published our OpenHands webinar on how to run parallel agents for codebase-wide refactors without losing human oversight. Robert Brennan and Calvin Smith walk through:
- How engineering teams use parallel cloud agents to automate massive refactors safely and quickly.
- Why task decomposition and dependency-aware batching are essential to avoid agent conflicts and compounding errors.
- Practical workflows for 90% automation using PR-based review loops and cloud sandboxes.
- Real-world wins: large-scale CVE fixes, framework migrations, Spark upgrades, and adding type annotations across entire repos.
If you’re staring down a framework migration, language upgrade, or a monolith that needs untangling, this is a good watch. Stream the recording here:
12
-
Tobias Kirchherr
1K followers
🚀 What really drives platform engineering success? We sat down with Steve Fenton, Head of DevRel & Principal DevEx Researcher at Octopus Deploy, to unpack insights from their latest Platform Engineering Pulse research — and the takeaways are eye-opening. 💡 Highlights: * Making security and ops policies mandatory doesn’t slow teams down — it actually fuels healthier platform adoption. * GitOps and Argo are redefining how modern teams think about deployments. * The metrics that actually matter for platform success aren’t always DORA — sometimes Monk metrics tell a more complete story. It’s a short, research-backed, and incredibly practical conversation for platform teams and engineering leaders who want to make their platforms stick. 🎧 Tune in to hear what the data says about building platforms that developers love — not just tolerate: https://lnkd.in/ddzMdBiU #PlatformEngineering #DevEx #GitOps #DevOps #EngineeringLeadership #SoftwareDelivery
-
Debasish Ghosh
Conviva • 5K followers
And now for some readings of user level RCU .. and a landmark paper that led to the implementation of liburcu - the user space RCU library, a Christmas Day evening read .. Why RCU is difficult in user space ? RCU, particularly its high-performance Quiescent-State-Based Reclamation (QSBR) variant, is easier in kernel mode because the kernel scheduler automatically detects quiescent states whenever a CPU context-switches, enters user mode, or idles, allowing grace-period tracking without any explicit cooperation from kernel code. In user mode, applications lack this built-in mechanism, so threads must explicitly register and periodically report quiescent states (e.g., by calling specific functions), imposing invasive global constraints on the entire application. These requirements make user-level RCU harder to adopt broadly, as they complicate library design and require modifications to all potentially reading threads, which is impractical in many user-space programs. User level implementations of RCU : This paper contributes to user-level RCU by formally describing efficient and flexible implementations that overcome the limitations of prior approaches, which either imposed high read-side overhead or severely restricted application design. It presents multiple classes of RCU (including QSBR, memory-barrier, signal-based, and bullet-proof variants) with detailed algorithms, performance analysis, and comparisons to locking, directly forming the foundational basis for the liburcu library's core flavors and enabling its widespread adoption in user-space applications.
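To make the "explicit cooperation" concrete, here is a minimal user-space QSBR sketch using liburcu. It assumes the classic unprefixed liburcu-qsbr API (rcu_register_thread, rcu_quiescent_state, synchronize_rcu; compile with -lurcu-qsbr -lpthread); newer liburcu releases also expose urcu_qsbr_-prefixed names, and error handling is omitted here.

/* User-space QSBR sketch: readers must register and periodically announce
 * quiescent states, the cooperation the kernel scheduler otherwise provides
 * for free. Illustrative only. */
#include <stdlib.h>
#include <pthread.h>
#include <urcu-qsbr.h>

struct config { int value; };

static struct config *shared_cfg;

static void *reader(void *arg)
{
    (void)arg;
    rcu_register_thread();                    /* mandatory in user space */
    for (int i = 0; i < 1000000; i++) {
        rcu_read_lock();                      /* delimit the read side */
        struct config *c = rcu_dereference(shared_cfg);
        if (c)
            (void)c->value;                   /* use the snapshot */
        rcu_read_unlock();

        if (i % 1024 == 0)
            rcu_quiescent_state();            /* explicit quiescent-state report */
    }
    rcu_unregister_thread();
    return NULL;
}

/* Updater: publish a new version, wait a grace period, reclaim the old one. */
static void update_config(int new_value)
{
    struct config *newc = malloc(sizeof(*newc));
    struct config *oldc = shared_cfg;

    newc->value = new_value;
    rcu_assign_pointer(shared_cfg, newc);     /* publish the new version */
    synchronize_rcu();                        /* wait for readers to pass a quiescent state */
    free(oldc);                               /* now safe to reclaim */
}

int main(void)
{
    pthread_t t;

    shared_cfg = calloc(1, sizeof(*shared_cfg));
    pthread_create(&t, NULL, reader, NULL);
    update_config(42);
    pthread_join(t, NULL);
    free(shared_cfg);
    return 0;
}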
73
3 Comments
Others named Henry Garner in United Kingdom
-
Henry Garner
London
-
Henry Garner
Greater Norwich Area, United Kingdom
-
Henry Garner
Greater London
-
Henry Garner
Greater Norwich Area, United Kingdom
9 others named Henry Garner in United Kingdom are on LinkedIn