Andreas Thurn
Sydney, New South Wales, Australia
1K followers
500+ connections
Activity
-
Andreas Thurn liked this:
What a night! Rokt the Rooftop 2025 brought together an incredible community under the Sydney skyline at our beautiful Surry Hills office. From inspiring insights on AI-powered innovation and the future of retail during our panel to themed cocktails, meaningful conversations, and live rooftop entertainment - the energy was incredible! A huge thank you to our speakers Michael Dunlop (Rokt), Arti S. (THE ICONIC), Claire Southey (Rokt), and Ian Jankelowitz (Woolworths Group), and a special shout out to our marketing extraordinaire Natalie Moskovska, who continues to impress with her creativity and event excellence. Finally, thank you to everyone who joined us in celebrating the season Rokt-style: bold, bright, and full of connection! #Rokt #RokttheRooftop #AI #Innovation #Networking #Sydney #Tech
-
Andreas Thurn liked this:
At Airwallex, we chose the harder path: building our own infrastructure — from the ground up. Today, we hold 60+ financial licenses and regulatory registrations globally, across the US, UK, EU, China, APAC, and Latin America. This includes everything from money transmitter licenses and EMIs to broker-dealer and payment institution licenses. Why does this matter? Because controlling our stack means we can offer faster, more reliable, and truly global financial services — with less friction and more flexibility for our customers. It’s not the flashy part of fintech. But in the long run, this is our moat. Global finance isn’t built on wrappers — it’s built on infrastructure. #Airwallex #Fintech #RegTech #Infrastructure #GlobalBanking #Compliance #FinancialInnovation #CrossBorder
-
Andreas Thurn liked this:
Proof-of-Stake networks depend on reliable infrastructure to stay secure and stable. Chorus One uses DataPacket.com servers to keep their staking operations running smoothly, with no downtime. 👉 Read the full case study: https://lnkd.in/e-sAD-Ne
-
Andreas Thurn reacted to this:
3 cities. 30+ new hires. 45+ individual learning hours. In this era of macroeconomic uncertainty, it’s been a privilege to witness the rapid development of this brand new team and to have the opportunity to support them through a cross-regional enablement program. It’s been fantastic to meet and learn from APAC colleagues and customers across Sydney, Singapore & Bangalore! Samit Gorai Howard Toh Lambert K. I really appreciate you spending the time and sharing insights for our new team. Thank you again! Tony J. Hughes Julie Marshall massive thank you to both of you and SalesIQ for the partnership and collaboration in the program! 🙏 #learning #learningculture #grateful #thankyou #alwayslearning #coaching #collaboration #enablement #teambuilding #teamdevelopment Thomas Parsons Isabelle Velzen Marmiel Salinas Anna Bender Upasana Sharma Anindita Veluri Matt Holst Michelle Stephenson Duncan Egan Varun Dandona Isabella Calvi Chaming Teow Marissa Teo Pooja Kumar Gale Dembecki, CPCC Aya Yoshii
-
Andreas Thurn liked this:
We are very excited to be hosting the Sydney #Serverless meetup in our Sydney office. This is going to be our first in-person meetup of the year, don't forget to register! Thank you Peter Hanssens for organising!
-
Andreas Thurn liked this:
Hey folks, Rokt are hosting Sydney #Serverless this coming month - a big shout out to Seth Bell and Kanishka Mohaia... speaker details to be announced shortly... reach out if you'd like to get involved! https://lnkd.in/gnuke7uc
-
Andreas Thurn liked this:
Loved being able to organise the Rokt 2021 Holiday Party! Despite all the other events we had to cancel this year, finally it was so fantastic to have a bunch of our Sydney team together for a beautiful day out on the Harbour! 🤗
Experience & Education
-
Stealth
Licenses & Certifications
Languages
-
German
Native or bilingual proficiency
-
English
Full professional proficiency
Explore more posts
-
Matt Cook
Scouut • 20K followers
6-12 months ago, companies would want to know that you were open to AI use. Now, as a Senior, Staff or Principal Engineer, it's a fundamental requirement. Generally speaking, the startups and scaleups we work with (who naturally have the most capable engineering teams in Australia) won't consider Senior+ engineering talent that isn't using AI to achieve more. Most of them think about AI usage in 2 ways, with most also wanting to see evidence of both in anyone they hire at this level. The 1st is that it's now a non-negotiable expectation for senior engineers to be using Claude, Cursor, or similar to move faster. It's not a preference anymore. They'll test for it during interviews, sometimes obviously, sometimes through technical challenges that are essentially impossible without AI help. But make no mistake, your ability to use AI *is* being tested. These companies have seen what their best engineers are doing compared to a couple of years ago, and the improvements are truly too good to ignore. Some engineers may feel that AI slows them down... They would say that if you're slowed by AI, that's a skill issue, and a you problem - one they aren't interested in hiring. To be clear though, using AI to write code in languages you don't know, or to architect systems you couldn't build yourself, is *not* what they want to see. The best engineering environments view AI as an insanely fast-working, incredibly stupid assistant. If you can't explain step by step how something needs to be built, you should never ask AI to build it. The 2nd is that they want you to think about how AI can improve the product and the customer experience. Truth is, the best engineers have always had a product mindset anyway - but again, it's becoming non-negotiable now. AI, and every new and improved model released, opens up new opportunities for companies to improve their product, or to increase the value they provide to their customers.
As an engineer who's close to the code and the technology, the expectation is now on you to spot these opportunities. Again, if you can't do it, it's viewed as a you problem, and ultimately someone else will end up being hired. This is why when most engineers talk about a 'bad market' with 'no opportunities', you'll hear of other engineers getting inundated with huge demand and multiple offers. There are now engineers who can do these things, and engineers who can't. The former gets hired. The latter does not. Also, if you're still comfortable with hiring the latter - good luck I guess...
16
11 Comments -
Sam Agre
People In AI • 17K followers
A lot of smart engineers from big tech are struggling in AI-native orgs. Not because they aren’t talented. I don’t think anybody would argue with the talent of these engineers. But because the environment changed. AI teams today don’t have clear specs, stable tooling, or long timelines. They need people who move fast through ambiguity, make calls without full context, and build before everything is figured out. How are you changing your hiring processes to test for "scrappy" or "working in ambiguity" when the big flashy resume comes across your desk? That’s not a knock on anyone and there are times for both, but the skill set is different. And it’s becoming more obvious every week. Happy Monday!
10
3 Comments -
Steve Bonomo
SCALRR • 31K followers
What a “10x Software Engineer” Really Is
We’ve all heard the phrase “10x engineer.” It’s usually misunderstood. A 10x engineer isn’t someone who types faster, ships more lines of code, or works unsustainable hours. The real 10x multiplier comes from something far rarer—and far more valuable.
1. They create leverage, not code volume. A true 10x engineer builds systems, abstractions, and tools that make everyone around them more effective. Their impact compounds through the team.
2. They have ruthless clarity. They see the essence of a problem quickly—what matters, what doesn’t, and what the simplest workable solution looks like. This clarity prevents months of wasted effort.
3. They manage complexity with taste. Taste is the underappreciated trait of elite engineers. It’s the instinct for clean architecture, thoughtful tradeoffs, and code that’s built to evolve, not just to work today.
4. They raise the bar by how they work. Their communication is crisp. Their reviews teach, not nitpick. Their sense of ownership is contagious. They model a standard others naturally adopt.
5. They make the whole team move faster. This is the real multiplier. The best engineers unlock speed across product, design, infra, and leadership by reducing uncertainty and increasing confidence in decisions.
A 10x engineer isn’t a solo superhero—they’re a force amplifier. They don’t outwork the team; they elevate it.
8
-
Matt Cook
Scouut • 20K followers
The traditional path to becoming a senior engineer is closing, and the industry has no incentive to build a new one. The best engineers don't write code anymore - they direct AI to write it. But this only works if you already understand software deeply. You need years of writing code to know what good looks like, to make the right technical tradeoffs, and to recognise when AI is wrong. Within a year, most startups and scaleups will only be hiring engineers that build software this way, or they'll be replaced by new ones that do. And when that happens, the model that currently trains Juniors - watching Seniors, absorbing patterns, making mistakes on low stakes work - will disappear. How on earth do you learn to write good code when nobody around you writes code at all? And what incentive do companies even have to find an answer to that question, when in truth, we probably have all the senior engineers we're ever going to need? AI-augmented engineers are currently 4-10x more productive, but that productivity won't translate to 4-10x the revenue. It translates to fewer people needed for the same output. In other words, the need for 20 Seniors becomes a need for 2-5. If you're a junior, this means the industry won't train you—so you need to train yourself. But you have something previous generations didn't. The very same AI that's closing the traditional path can become your personal mentor that the industry is no longer providing. In your pocket or on your desktop, you now have a Senior Engineer who never gets frustrated with your questions, never gets tired of explaining the same concept a different way, and never makes you feel stupid for not knowing something. They're available at midnight, on weekends. They're a frontend specialist and a backend specialist. They've seen big teams, small teams, and everything in between. Of course, for AI to become your mentor, you have to use it like a mentor, not like a shortcut. When AI generates code, interrogate it.
Ask why this approach and not another. Ask how it could fail. Ask what a senior engineer would criticise. When you're using a database or framework you don't fully understand, ask it to explain what's actually happening until you understand completely. When you're building something, ask who it's for and what happens to them when it's slow, or broken, or confusing. Develop product sense and taste, and learn the difference between something that can be built, and something that should be built. (AI can help but this is the hardest part). And then find a real human to fill in the gaps and tell you where you're still getting it wrong. Truth is, if you're a Junior Engineer nobody is coming to save you. But it's also never been easier to save yourself.
18
20 Comments -
Craig Sturgis
VibeCTO.ai • 2K followers
It's potentially a really big deal that Atlassian is buying DX in their acquisition spree. Great for their team, great for their investors. You can quibble with their metrics and I do a bit. But, if it becomes the default to get some kind of half-decent metrics in play instead of just story point velocity with a Jira instance, it gives a lot of teams a start at a better understanding of the whole picture of their work. And maybe that will open the door for more conversations and more dialed-in understanding as a result that leads to more sophistication from there. I hope so.
10
-
Jason Tame
OfferZen • 826 followers
I started playing around with Convex (and TanStack Start) over the weekend. I think it's the right abstraction for interacting with databases in the AI era: - Everything is code: Database schema, queries, and auth policies are all written in TypeScript and stored in your version control. This makes it extremely easy for AI to understand your backend and database setup and to extend it. - Realtime updates: All writes are reflected instantly in client apps. No need for websockets or manually polling state - if the database state changes, your frontend will know about it - Everything in one place: Your database, backend and frontend are all TypeScript and all in one app. As we are learning with our own apps at OfferZen, this is hugely beneficial for moving quickly. While they have their place in enterprise or larger apps, I predict we'll start to see far fewer 'traditional' MVC + relational DB apps. That architecture was designed primarily to help humans move quickly and efficiently, but it hides too much context from LLMs. https://www.convex.dev/
8
-
Pragyan Tripathi
Amperity • 4K followers
How Clojure’s standard library saved our real-time pipeline (and why you should revisit transduce)
We were 3 months into building a real-time analytics engine for a D2C brand. Goal: recommend better products based on live user behavior. Reality: the stream was slow, memory-hungry, and deteriorating every week. Most teams would throw infra at the problem. We threw the REPL and the standard lib at it instead. Here’s what helped and why Clojure was the best fit:
🔹 keep over filter + map: one traversal, zero allocation overhead. → Reduced heap churn by 60%.
🔹 partition-by for session windows: group events per user in-stream. → No more custom state machines.
🔹 juxt to run parallel aggregates: minimized intermediate data structures. → 12s → 4s for top-k category scoring.
🔹 transduce with comp: streamlined batch → insight flow, no intermediate seqs. → 4x faster on 5M events/day.
🔹 group-by + merge-with: simple composable reducers that scale. → Cut 200+ LOC to 47 LOC.
The surprising part? We didn’t need Kafka, Flink, or Spark. Just functional purity + the power of transducers.
The result:
• Conversion rate: 1% → 4.7%
• Order completion: 2.3% → 5.6%
• Latency: 45s → 18s
• Code: readable, testable, future-proof
Clojure isn’t just elegant, it’s an ops advantage. If you're in the community and haven’t revisited the standard lib lately, this is your sign. Read full blog here: https://lnkd.in/dTfTEMz5
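The post names Clojure functions without showing code. As a rough illustration of the single-traversal idea behind keep (versus a filter pass followed by a map pass), here is a small Python sketch; the function names and example data are mine, not from the post:

```python
from functools import reduce

def keep(f, xs):
    # Clojure-style `keep`: apply f to each element and drop None
    # results, in one traversal -- no intermediate list is built,
    # unlike a separate filter pass followed by a map pass.
    for x in xs:
        y = f(x)
        if y is not None:
            yield y

def square_if_even(n):
    # Folds the predicate (even?) and the transform (square)
    # into the single function that `keep` expects.
    return n * n if n % 2 == 0 else None

squares = list(keep(square_if_even, range(10)))   # [0, 4, 16, 36, 64]

# A transduce-like fold over the same single pass, no intermediate seq:
total = reduce(lambda acc, y: acc + y, keep(square_if_even, range(10)), 0)  # 120
```

Python generators give the laziness but not the full transducer protocol (reusable across channels, reducers, and streams), which is part of why the post's gains are specific to Clojure.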
15
-
Farhan L.
Lennar • 1K followers
Skipping a token budget is often the most expensive line of code a team never writes.
This failure pattern shows up repeatedly in API-first RAG projects, particularly teams calling LLM APIs directly (OpenAI, Anthropic, Cohere) instead of using managed platforms like AWS Bedrock or Azure OpenAI. With direct API usage, there are usually no default spend limits. Every token, every retry, every runaway agent loop goes straight to the invoice.
What makes this failure mode dangerous is how quietly it compounds. The context window fills to 28,000+ tokens against a 16,384 limit. HTTP 400 fires. Retries kick in automatically. Costs quietly triple. Latency climbs past 14 seconds. With no ceiling set, agents loop indefinitely. Eventually the system stops serving requests entirely. Five compounding failures. One missing constraint.
What's interesting is the timeline:
• The HTTP 400 appears instantly
• The cost spike shows up on billing dashboards 48 hours later
• Latency degradation becomes visible after that
• Without tracing, teams often can't see where the cascade started
The patterns that stop this cascade:
• Budget every segment (system prompt, history, retrieved chunks, tools) before the request leaves the orchestrator
• Summarize history and rerank chunks; more context is not the same as better context
• Stream responses with SSE; users shouldn't wait on a blank screen while tokens are being consumed
• Instrument everything with OTel: token counts, chunk counts, latency, tool calls per request
These aren't premature optimizations. In production RAG systems, they're the baseline.
Repost if you've seen this failure pattern in production. #RAG #LLMOps #AIEngineering #SoftwareArchitecture #GenerativeAI #MachineLearning
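The "budget every segment" step can be made concrete. A minimal Python sketch of pre-flight allocation, assuming the post's 16,384-token window; the segment names, requested sizes, and floors are hypothetical, and a real system would count tokens with the model's tokenizer rather than trust caller-supplied numbers:

```python
def fit_to_budget(segments, limit):
    """Allocate a context-window budget across request segments.

    segments: ordered dict of name -> (want, floor) token counts;
    earlier entries get leftover budget first. Raises if even the
    floors alone exceed the window, so an oversized request never
    leaves the orchestrator (no HTTP 400, no retry loop).
    """
    floors = sum(floor for _, floor in segments.values())
    if floors > limit:
        raise ValueError("minimum segment sizes exceed the context window")
    spare = limit - floors
    granted = {}
    for name, (want, floor) in segments.items():
        extra = min(want - floor, spare)  # top up in priority order
        granted[name] = floor + extra
        spare -= extra
    return granted

# 20k tokens of history requested against a 16,384-token window:
plan = fit_to_budget(
    {"system": (1200, 1200), "history": (20000, 2000),
     "chunks": (8000, 1500), "tools": (600, 600)},
    limit=16384,
)
# history is trimmed so the total never exceeds the window.
```

The granted figure for history is what the summarizer or reranker is asked to fit into, which is where the post's second bullet takes over.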
1
-
𝗬𝗔𝗠𝗜𝗧𝗘𝗥𝗨 [ Miroslav Vršecký ]
Spectoda • 8K followers
I would never have thought that OCR could be so challenging. Recently I've been playing with a LOT of OCR tools (both self-hosted and cloud-based). What's really frustrating is that you cannot believe their benchmarks. There are some standardised benchmarks (e.g. OCRBench) but they don't include all popular OCR tools. So the only way is to build your own evaluation pipeline and spend a lot of money and time on benchmarking instead of on the actual product you're trying to build. Some OCRs are LLM-based, which introduces hallucinations and is pretty resource-intensive. Other OCRs use simpler (less resource-intensive) algorithms, but in comparison they can sometimes be pretty dumb. Some OCRs excel at raw text parsing, some at table parsing, some at math formula parsing, but rarely if ever do they excel at all of those at once. Which leads either to accepting inaccurate results in some areas or pushing PDFs through several OCR tools and then merging their results (which is usually prohibitively expensive). This tradeoff especially hurts if the PDFs you want to process are academic papers that act as the source of truth for your science-based application, because any small inaccuracy could lead to the application showing false information and misleading people (ultimately defeating its purpose and selling point). Since most PDFs are digitally born, I really don't understand why researchers haven't created some kind of structured (JSON/XML/etc.) format from which the final PDF would be generated. Then both the structured format and the PDF would get published, letting people choose if they want a nice and readable PDF they can print or a structured format that they can process easily.
-
Dasith Wijesiriwardena
Microsoft • 2K followers
Come catch my session on Context Engineering at DDD Melbourne this Saturday. We've all been there: GitHub Copilot promises to be your coding companion, but instead feels more like that overeager intern who confidently writes brilliant code for the wrong problem. As AI-assisted development tools become ubiquitous, the industry narrative promises revolutionary productivity gains. Yet many practitioners find themselves playing an exhausting game of context whack-a-mole—constantly explaining, re-explaining, and fixing what their AI "partner" confidently got wrong. Having spent considerable time wrestling with this gap between AI promises and reality, I've discovered that the real challenge isn't prompt engineering—it's context engineering. This talk explores why "vibe coding" is fundamentally broken and introduces the systematic discipline that separates magical AI experiences from expensive disappointments. Through the constraint-context matrix and real examples like the Breadcrumb Protocol, I'll demonstrate why human expertise in scaffolding, steering, and domain understanding isn't just relevant—it's the secret ingredient that makes AI actually work. You'll leave with practical strategies for engineering context systematically and a framework for building sustainable human-AI collaboration that leverages both your skills and the machine's capabilities.
48
2 Comments -
Shane Marks
Carry1st • 2K followers
I was thinking about some threads on LinkedIn about the place for leetcode-style interviews. I think these sorts of interviews are optimised for people under 30 who are still inwardly focused and can prioritise several hours a day to drill patterns. If I think about the persona of a typical 40-year-old (extrapolating from myself), they are probably spending their evenings: - managing their kids for 3 hours before work and 3 hours after work - filling out insurance paperwork for the shortfall from their kids' last stay in hospital - possibly helping an elderly, frail parent with their banking, or worse, dealing with their deceased estate - and for good measure, maybe dealing with a random audit from the tax man for a minor claim 4.5 years ago. For this persona there really isn't time to spend drilling leetcode, and this style of interview is probably going to filter out people who are deeply experienced, have 20 years of work behind them, and have already mastered juggling life and work.
10
1 Comment -
Gueri Segura
Tenmas.Tech LATAM • 9K followers
Openclaw joining OpenAI feels like the closest signal yet that the “one-person unicorn” era might not be fiction. If this model scales, headcount stops signaling capability, leverage shifts to agentic systems and architecture, and founders win on system design—not org size. We’re moving from “build a team” to “build a machine.” #AI #Startups #Founders #AgenticAI #FutureOfWork
2
3 Comments -
Nam D.
Lyra • 5K followers
Lyra's hiring 100 eng by eoy, now at 40, to join our new barangaroo office. - i've been coding >12 yrs now - i hated stupid meetings. hated maintaining buttons. hated getting BORED - i just wanted to lock-in w airpods max/xm5 and build cool shit so I built Lyra everyday we build & ship cool products for generational companies like Paraform, 88RISING, ReadMe (YC), OpenCall.ai (YC W24), Clarion (YC), ProSights (YC), Elsa Fertility backed by giants like Y Combinator, Accel, Felicis, Greylock Partners, Pear VC, A*, Blackbird, Soma Capital, Slow Ventures imagine ur 1st day - airpods max noise cancelling, cursor, m4 mac and 2 monitors. unlimited snacks drinks catered lunch. LOCKED IN. Lyra is a place I wish I had growing up as a die-hard coder. anyways tldr come build w me -> bit.ly/applytolyra [📸 Winston T.] *note: vid is old office, new one is getting built out till mid-Aug ❤️
268
47 Comments -
Sam Taggart
SAS Workshops • 3K followers
The better you know the basics the more advanced you are. Flashy tools only work if you actually know what you are doing with them. Handing a 16 year old a Ferrari race car is probably a bad idea. Handing most software dev teams an AI agent is pretty much the same thing. You are just helping them to crash faster and more spectacularly.
9
3 Comments -
Eli Gündüz
The Careersy Community • 15K followers
Most senior software engineers in Australia are underpaid by $30-50k. Not because they lack the skill. Because the market can't see their seniority. After years of coaching engineers, my take is this: The engineers who move up fastest aren’t always the strongest, they’re the easiest to trust from the outside. Here are 4 mistakes that cost senior engineers money, scope, and access to the roles they actually deserve. 1. Confusing experience with evidence. "10 years of backend development" could mean you designed distributed systems serving millions of users at Atlassian. Or it could mean you maintained the same internal service at a mid-tier consultancy for a decade. From the outside, those look identical. Hiring managers don't hire time served. They hire demonstrated impact. What changed because you were there? What scale did you operate at? If your CV doesn't answer those questions in the first minute or so, you're being assessed below your level. 2. Describing work instead of results. I see this constantly. Resumes that read like task logs. "Built APIs. Worked on microservices. Maintained infrastructure." That shows activity. It doesn't show seniority. A Staff Engineer at Canva isn't measured by what tools they used. They're measured by business impact, system risk, and decision ownership. The fix is consequences, not duties. Reduced latency by 40%. Owned the migration strategy that unblocked delivery for 3 squads. If nothing changed because of your work, it doesn't read as senior enough. 3. Disappearing inside the team. "We built..." "Our team implemented..." "We migrated..." Strong engineers are collaborative. But at senior and Staff levels, companies need to know who drove the decisions, handled the trade-offs, and carried accountability. I worked with an engineer in Melbourne last year. Brilliant. Led the rearchitecture of a core payments service at a Series C fintech. His resume said "contributed to platform improvements."
You can honour the team and still show leadership. "I led the architecture decisions." "I owned the migration strategy." These aren't arrogant. They're accurate. 4. Assuming the process will discover your value. A lot of engineers expect interviews to infer that you know your craft. In my experience, that's not how it works. They can only make a decision based on the hour you had and the signals they got. Everything else is guessing. If your impact requires explanation, context, or insider knowledge to understand, a weaker but clearer candidate will move ahead of you. Here's the question I'd sit with: 📌 If someone who has never met you looked at your profile for 60 seconds, what evidence would convince them you operate at your target level? In this market, your resume, LinkedIn, and first 90 seconds must make your value obvious, because if people have to figure it out, they may never see how capable you are. Hit 👍 if this resonated, it lets me know and helps it reach others who might need it.
20
4 Comments -
Zuhayeer Musa
Levels.fyi • 63K followers
You don’t have to be an engineer to earn FAANG-level pay. A Stripe Strategy & Ops role hit $466K and a Spotify Client Partner reached $453K in recent Levels.fyi submissions. We pulled recent US new-offer submissions for non-engineering roles and compared them against median senior SWE pay at FAANG. To keep this grounded, we filtered for mid-career IC roles (so no VPs, Directors, or SVPs). These are experienced ICs, not executives, receiving offers that compete with some of the best-known engineering compensation benchmarks. A few highlights: At Stripe, a Strategy & Operations hire came in at $466K, nearly identical to the $467K median senior SWE at Meta. Close by, a Client Partner at Spotify reached $453K, showing how revenue-driving roles can rival engineering comp when incentives and scale align. Design also shows up near the top. A Product Designer at Snowflake hit $440K, and in the broader dataset designers actually dominated the upper end with several Meta and Google design offers clearing $500K. And this isn’t limited to a single discipline. Across the data we also saw roles in recruiting, marketing, legal, and project management, landing in the $330K–$410K range, right alongside senior engineers at some of the largest tech companies. The real hierarchy isn’t "Engineering > everything else" It’s closer to "Top companies > top roles > everything else" Engineering remains one of the most reliable ladders to high compensation in record time. But this data is a useful reminder that tech’s economic upside isn’t reserved exclusively for engineers. At the right company, with the right leverage, many different functions participate in the same value creation. Work in HR and interested in more break-downs like these across roles and peer groups? Check out the Levels.fyi data explorer where we pull insights like this and more: https://lnkd.in/gvgc56FE
148
3 Comments -
Yanislava Hristova 🌎
Remote IT World • 26K followers
Funding doesn’t create growth. It amplifies your weaknesses. After a raise, I see the same pattern repeatedly: Open roles stay unfilled. Product velocity drops. Technical debt compounds. Founders become decision bottlenecks. The issue isn’t funding. It’s hiring structure. Post-funding teams don’t need more resumes. They need: Senior ownership layers. Engineers who scale processes. Remote access to wider talent pools. Hiring locally for global growth targets creates friction fast. If you’ve recently raised capital, what’s your biggest bottleneck right now?
26
15 Comments -
Mohammed Khaleed
MBN Solutions • 16K followers
I’m no neuroscientist, far from it. But I’ve always felt there’s some form of logic behind how we think. Almost like a series of binary decisions based on what we know. One thing I keep coming back to with AI: - It doesn’t think the way humans do. LLMs are great at generating answers. They’re not great at staying consistent when there are strict rules, constraints, or edge cases. That’s usually where things break. I came across logic programming today, things like Prolog, and it clicked. You define facts and rules, then let the system reason. It got me thinking: If you combine LLMs with logic programming, do you end up with something far more reliable? Less hallucination. More consistency. Genuinely curious, is anyone here using logic-based systems in production? If yes, what are you seeing? If not, what’s stopping you? https://lnkd.in/eumFa4z2
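The idea of pairing an LLM with a logic-based system can be sketched without Prolog itself. A toy forward-chaining rule engine in Python (the facts and rules below are illustrative, not from the post): an LLM might propose the base facts extracted from text, while the rules derive consequences deterministically, with no chance of a hallucinated inference step.

```python
def forward_chain(facts, rules):
    # rules: list of (premises, conclusion) pairs. Keep firing any
    # rule whose premises are all known facts until no new fact can
    # be derived (a fixed point is reached).
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "freezing"}, "ice"),
    ({"ice"}, "grit_roads"),
]

# Base facts could come from an LLM's reading of a weather report;
# the engine then reasons over them consistently every time.
derived = forward_chain({"rain", "freezing"}, rules)
# derived contains "wet_ground", "ice", and "grit_roads"
```

Production systems in this space typically use a real logic engine (Prolog, Datalog, or an SMT solver) rather than a loop like this, but the division of labour is the same: the LLM handles fuzzy extraction, the rules handle the strict constraints and edge cases the post mentions.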
4
2 Comments -
Adam Warski
SoftwareMill • 9K followers
What kind of guidance does an LLM need to write a direct-style #Scala 3 application? At the baseline - not a lot, e.g. Claude is quite good both in Scala 3 & direct-style. For finer details - some additions to the prompt might be useful. Read more: https://lnkd.in/dxwy-WdD
20
5 Comments