Solving Coding Challenges With LLM Tools

Explore top LinkedIn content from expert professionals.

Summary

Solving coding challenges with LLM (large language model) tools means using AI assistants to help write, debug, and improve computer code. These tools can automate parts of the coding process, making it faster and helping developers tackle complex tasks with more creativity.

  • Refine your prompts: Give the AI clear, specific instructions and break tasks down into smaller parts to avoid confusion and boost accuracy.
  • Mix in human review: Always check and test the code the AI generates, treating it like a draft that needs your oversight to ensure reliability and security.
  • Experiment and iterate: Try different prompts, switch between models, and ask for multiple options to discover better solutions and new ideas.
Summarized by AI based on LinkedIn member posts
  • View profile for Ado Kukic

    Community, Claude, Code

    11,817 followers

    I've been using AI coding tools for a while now & it feels like every 3 months the paradigm shifts. Anyone remember putting "You are an elite software engineer..." at the beginning of your prompts or manually providing context? The latest paradigm is Agent Driven Development & here are some tips that have helped me get good at taming LLMs to generate high quality code.

    1. Clear & focused prompting
    ❌ "Add some animations to make the UI super sleek"
    ✅ "Add smooth fade-in & fade-out animations to the modal dialog using the motion library"
    Regardless of what you ask, the LLM will try to be helpful. The less it has to infer, the better your result will be.

    2. Keep it simple, stupid
    ❌ Add a new page to manage user settings, also replace the footer menu from the bottom of the page to the sidebar, right now endless scrolling is making it unreachable & also ensure the mobile view works, right now there is weird overlap
    ✅ Add a new page to manage user settings, ensure only editable settings can be changed.
    Trying to have the LLM do too many things at once is a recipe for bad code generation. One-shotting multiple tasks has a higher chance of introducing bad code.

    3. Don't argue
    ❌ No, that's not what I wanted, I need it to use the std library, not this random package, this is the 4th time you've failed me!
    ✅ Instead of using package xyz, can you recreate the functionality using the standard library
    When the LLM fails to provide high quality code, the problem is most likely the prompt. If the initial prompt is not good, follow-on prompts will just make a bigger mess. I will usually allow one follow-up to try to get back on track & if it's still off base, I will undo all the changes & start over. It may seem counterintuitive, but it will save you a ton of time overall.

    4. Embrace agentic coding
    AI coding assistants have a ton of access to different tools, can do a ton of reasoning on their own, & don't require nearly as much hand-holding, so you won't feel like a babysitter instead of a programmer. Your role as a dev becomes much more fun when you can focus on the bigger picture and let the AI take the reins writing the code.

    5. Verify
    With this new ADD paradigm, a single prompt may result in many files being edited. Verify that the code generated is what you actually want. Many AI tools will now auto-run tests to ensure that the code they generated is good.

    6. Send options, thx
    I had a boss that would always ask for multiple options & often email saying "send options, thx". With agentic coding, it's easy to ask for multiple implementations of the same feature. Whether it's UI or data models, asking for a 2nd or 10th opinion can spark new ideas on how to tackle the task at hand & an opportunity to learn.

    7. Have fun
    I love coding, been doing it since I was 10. I've done OOP & functional programming, SQL & NoSQL, PHP, Go, Rust & I've never had more fun or been more creative than coding with AI. Coding is evolving, have fun & let's ship some crazy stuff!

  • View profile for Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    241,712 followers

    𝗖𝗮𝗻 𝗟𝗟𝗠𝘀 𝘄𝗿𝗶𝘁𝗲 𝗯𝗲𝘁𝘁𝗲𝗿 𝗰𝗼𝗱𝗲 𝗶𝗳 𝘆𝗼𝘂 𝗸𝗲𝗲𝗽 𝗮𝘀𝗸𝗶𝗻𝗴 𝘁𝗵𝗲𝗺 𝘁𝗼 “𝗪𝗥𝗜𝗧𝗘 𝗕𝗘𝗧𝗧𝗘𝗥 𝗖𝗢𝗗𝗘”? 💡

    𝗧𝗵𝗲 𝘀𝗵𝗼𝗿𝘁 𝗮𝗻𝘀𝘄𝗲𝗿: 𝗬𝗘𝗦!

    Interesting experiment by Max Woolf. He gave Claude 3.5 Sonnet a Python challenge: generate 1 million random integers and find the smallest and largest numbers with a digit sum of 30. The goal? Optimize the code over multiple iterations 𝗯𝘆 𝘀𝗶𝗺𝗽𝗹𝘆 𝗮𝘀𝗸𝗶𝗻𝗴 𝗶𝘁 𝘁𝗼 “𝘄𝗿𝗶𝘁𝗲 𝗯𝗲𝘁𝘁𝗲𝗿 𝗰𝗼𝗱𝗲.”

    𝗥𝗲𝘀𝘂𝗹𝘁𝘀:
    1️⃣ Initial implementation: basic, functional, but slow (657ms).
    2️⃣ Optimized iteration: precomputes digit sums, adds parallelism → 2.7x faster.
    3️⃣ Enterprise overengineering: added multiprocessing, rich metrics, and JIT optimization → 100x faster!

    𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀:
    - Iterative prompting works! Performance improved significantly with each iteration of "write better code".
    - LLMs introduce unique optimizations (e.g., vectorization, JIT compilation), but also subtle bugs that require human review.
    - Over time, the LLM started adding unnecessary “enterprise” features, a comical form of scope creep for code.

    𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆:
    LLMs can significantly improve code performance with simple prompts, but they’re not perfect. The experiment showed that while LLMs can suggest great optimizations, they also miss the mark or add unnecessary complexity without clear guidance. This is where human oversight comes in. Subtle errors? Misaligned logic? That’s why code specifications and test-driven development are critical when using LLMs. So, next time you’re stuck, just try: “𝘄𝗿𝗶𝘁𝗲 𝗯𝗲𝘁𝘁𝗲𝗿 𝗰𝗼𝗱𝗲.” You might be surprised at what it delivers. 😜 Here is the experiment: https://lnkd.in/d4gBsk-y
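The challenge in this experiment is easy to reproduce. Below is a minimal sketch, assuming integers in the range 1 to 100,000 (the post does not state a range): first a naive baseline like the initial implementation, then a precomputed digit-sum table in the spirit of the optimized iterations.

```python
import random

def digit_sum(n: int) -> int:
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

random.seed(0)  # fixed seed so reruns are reproducible
nums = [random.randint(1, 100_000) for _ in range(1_000_000)]

# Naive baseline: compute each digit sum on the fly, once per element.
matches = [n for n in nums if digit_sum(n) == 30]

# Optimization in the spirit of the later iterations: precompute the
# digit sum of every possible value once, then do cheap table lookups.
table = [digit_sum(i) for i in range(100_001)]
fast_matches = [n for n in nums if table[n] == 30]

assert matches == fast_matches
print(min(matches), max(matches))
```

The precomputation trades 100,001 table entries for avoiding repeated string conversions across a million elements; the experiment's later iterations layered vectorization and JIT compilation on top of the same idea.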

  • View profile for Ross Dawson

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,564 followers

    We know LLMs can substantially improve developer productivity. But the outcomes are not consistent. An extensive research review uncovers specific lessons on how best to use LLMs to amplify developer outcomes.

    💡 Leverage LLMs for Improved Productivity. LLMs enable programmers to accomplish tasks faster, with studies reporting up to a 30% reduction in task completion times for routine coding activities. In one study, users completed 20% more tasks using LLM assistance compared to manual coding alone. However, these gains vary based on task complexity and user expertise; for complex tasks, time spent understanding LLM responses can offset productivity improvements. Tailored training can help users maximize these advantages.

    🧠 Encourage Prompt Experimentation for Better Outputs. LLMs respond variably to phrasing and context, with studies showing that elaborated prompts led to 50% higher response accuracy compared to single-shot queries. For instance, users who refined prompts by breaking tasks into subtasks achieved superior outputs in 68% of cases. Organizations can build libraries of optimized prompts to standardize and enhance LLM usage across teams.

    🔍 Balance LLM Use with Manual Effort. A hybrid approach, blending LLM responses with manual coding, was shown to improve solution quality in 75% of observed cases. For example, users often relied on LLMs to handle repetitive debugging tasks while manually reviewing complex algorithmic code. This strategy not only reduces cognitive load but also helps maintain the accuracy and reliability of final outputs.

    📊 Tailor Metrics to Evaluate Human-AI Synergy. Metrics such as task completion rates, error counts, and code review times reveal the tangible impacts of LLMs. Studies found that LLM-assisted teams completed 25% more projects with 40% fewer errors compared to traditional methods. Pre- and post-test evaluations of users' learning showed a 30% improvement in conceptual understanding when LLMs were used effectively, highlighting the need for consistent performance benchmarking.

    🚧 Mitigate Risks in LLM Use for Security. LLMs can inadvertently generate insecure code, with 20% of outputs in one study containing vulnerabilities like unchecked user inputs. However, when paired with automated code review tools, error rates dropped by 35%. To reduce risks, developers should combine LLMs with rigorous testing protocols and ensure their prompts explicitly address security considerations.

    💡 Rethink Learning with LLMs. While LLMs improved learning outcomes in tasks requiring code comprehension by 32%, they sometimes hindered manual coding skill development, as seen in studies where post-LLM groups performed worse in syntax-based assessments. Educators can mitigate this by integrating LLMs into assignments that focus on problem-solving while requiring manual coding for foundational skills, ensuring balanced learning trajectories.

    Link to paper in comments.
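The "unchecked user inputs" failure mode mentioned above can be made concrete with a small, hypothetical sketch (not from the review itself): the same SQLite lookup written the interpolated way LLM output often contains, then the parameterized way a reviewer should insist on.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "x' OR '1'='1"  # attacker-controlled string

# Vulnerable pattern: user input interpolated directly into the SQL text,
# so the quoting in the input becomes part of the query itself.
leaked = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safer pattern: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(leaked)  # injection matches every row: [('alice',)]
print(safe)    # no user is literally named "x' OR '1'='1": []
```

Automated review tools and tests catch this class of bug mechanically, which is consistent with the 35% error-rate drop the review reports for LLMs paired with code review tooling.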

  • View profile for Esco Obong

    Sr SWE @ Airbnb | Follow for LLMs, LeetCode + System Design & Career Growth (ex-Uber)

    36,390 followers

    I work at Airbnb where I write 99% of my production code using LLMs. Spotify's CEO recently announced something similar. I mention my employer not because my workflow is sponsored by them, but to establish a baseline for the massive scale, reliability constraints, and code quality standards this approach has to survive. Many engineers abandon LLMs because they run into problems instantly, but these problems have solutions. If you're a skeptic, please read and let me know what you think.

    𝗧𝗵𝗲 𝘁𝗼𝗽 𝗽𝗿𝗼𝗯𝗹𝗲𝗺𝘀 𝗮𝗿𝗲:
    • 𝗖𝗼𝗻𝘀𝘁𝗮𝗻𝘁 𝗿𝗲𝗳𝗮𝗰𝘁𝗼𝗿𝘀 (generated code is really bad or broken)
    • 𝗟𝗮𝗰𝗸 𝗼𝗳 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 (the model doesn’t know your codebase, libraries, APIs, etc.)
    • 𝗣𝗼𝗼𝗿 𝗶𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝗶𝗼𝗻 𝗳𝗼𝗹𝗹𝗼𝘄𝗶𝗻𝗴 (the model doesn’t implement what you asked for)
    • 𝗗𝗼𝗼𝗺 𝗹𝗼𝗼𝗽𝘀 (the model can’t fix a bug and tries random things over and over again)
    • 𝗖𝗼𝗺𝗽𝗹𝗲𝘅𝗶𝘁𝘆 𝗹𝗶𝗺𝗶𝘁𝘀 (inability to modify large codebases or create complex logic)

    In this article, I show how to solve each of these problems by using the LLM as a force multiplier for your own engineering decisions instead of a random number generator for syntax.

  • View profile for Paolo Perrone

    No BS AI/ML Content | ML Engineer with a Plot Twist 🥷100M+ Views 📝

    127,319 followers

    How to actually code with LLMs in 2026. Not the hype. What's working for engineers who ship:

    1️⃣ 𝗦𝗽𝗲𝗰 𝗯𝗲𝗳𝗼𝗿𝗲 𝗰𝗼𝗱𝗲
    Don't throw wishes at the LLM.
    → Describe the idea
    → Let the AI ask questions until requirements are clear
    → Compile into spec.md
    → Generate step-by-step plan
    → Then code
    It's "waterfall in 15 minutes."

    2️⃣ 𝗦𝗺𝗮𝗹𝗹 𝗰𝗵𝘂𝗻𝗸𝘀
    Ask for too much = jumbled mess. "Like 10 devs worked on it without talking." One function. One bug. One feature. Then next.

    3️⃣ 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗽𝗮𝗰𝗸𝗶𝗻𝗴
    LLMs are only as good as what you show them.
    → Relevant code
    → API docs
    → Known pitfalls
    → Preferred approaches
    Don't make the AI guess.

    4️⃣ 𝗠𝗼𝗱𝗲𝗹 𝗺𝘂𝘀𝗶𝗰𝗮𝗹 𝗰𝗵𝗮𝗶𝗿𝘀
    Each model has blind spots. Stuck? Copy the same prompt to another model. Sometimes a second opinion is all you need.

    5️⃣ 𝗛𝘂𝗺𝗮𝗻 𝗶𝗻 𝘁𝗵𝗲 𝗹𝗼𝗼𝗽
    AI writes with complete conviction. Including bugs. Including nonsense. Treat every snippet like junior dev code. Read it. Run it. Test it.

    6️⃣ 𝗖𝗼𝗺𝗺𝗶𝘁 𝗹𝗶𝗸𝗲 𝘀𝗮𝘃𝗲 𝗽𝗼𝗶𝗻𝘁𝘀
    AI generates fast. Veers off course fast. Commit after each small task. Your safety net when AI goes sideways.

    7️⃣ 𝗥𝘂𝗹𝗲𝘀 𝗳𝗶𝗹𝗲𝘀
    Use CLAUDE.md, GEMINI.md, or .cursorrules.
    → Your coding standards
    → Your patterns
    → Your constraints
    Train it once. Enforce everywhere.

    The mental model: LLMs are over-confident junior devs. You're the senior engineer. They're the force multiplier.

    💾 Save this before your next AI coding session.
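The "context packing" tip amounts to assembling one prompt from the task, the relevant files, and your known pitfalls. A minimal sketch of that idea follows; `pack_context` is a hypothetical helper, not something from the post, and the file used is a throwaway stand-in for "relevant code".

```python
import tempfile
from pathlib import Path

def pack_context(task: str, paths: list[str], notes: str = "") -> str:
    """Assemble a single prompt from the task, relevant source files,
    and known pitfalls / preferred approaches, so the model never guesses.
    Hypothetical sketch; real tooling would send this to the model."""
    parts = [f"Task: {task}"]
    if notes:
        parts.append(f"Known pitfalls / preferred approaches:\n{notes}")
    for p in paths:
        parts.append(f"--- {p} ---\n{Path(p).read_text()}")
    return "\n\n".join(parts)

# Usage: a temporary file stands in for the relevant code.
with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "utils.py"
    src.write_text("def slug(s): return s.lower().replace(' ', '-')\n")
    prompt = pack_context(
        "Add unit tests for slug()",
        [str(src)],
        notes="We use pytest; avoid mocking.",
    )

print(prompt.splitlines()[0])  # "Task: Add unit tests for slug()"
```

Rules files (tip 7) serve the same purpose persistently: they are context that gets packed into every session instead of being re-typed each time.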

  • View profile for Mihail Eric

    Head of AI at Monaco | Lecturer at Stanford | Helping 30K+ software engineers uplevel with AI | themodernsoftware.dev | 12+ years building production AI systems

    17,136 followers

    One AI coding hack that helped me 15x my development output: using design docs with the LLM.

    Whenever I’m starting a more involved task, I have the LLM first fill in the content of a design doc template. This happens before a single line of code is written. The motivation is to have the LLM show me it understands the task, create a blueprint for what it needs to do, and work through that plan systematically.

    As the LLM is filling in the template, we go back-and-forth clarifying its assumptions and implementation details. The LLM is the enthusiastic intern, I’m the manager with the context. Again, no code written yet.

    Then, when the doc is filled in to my satisfaction with an enumerated list of every subtask to do, I ask the LLM to complete one task at a time. I tell it to pause after each subtask is completed for review. It fixes things I don’t like. Then, when it’s done, it moves on to the next subtask. Do until done.

    Is it vibe coding? Nope. Does it take a lot more time at the beginning? Yes. But the outcome: I’ve successfully built complex machine learning pipelines that run in production in 4 hours. Building a similar system took 60 hours in 2021 (15x speedup). Hallucinations have gone down. I feel more in control of the development process while still benefitting from the LLM’s raw speed. None of this would have been possible with a sexy 1-prompt-everything-magically-appears workflow.

    How do you get started using LLMs like this? @skylar_b_payne has a really thorough design template: https://lnkd.in/ewK_haJN

    You can also use shorter ones. The trick is just to guide the LLM toward understanding the task, providing each of the subtasks, and then completing each subtask methodically. Using this approach is how I really unlocked the power of coding LLMs.
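A shorter template can be as small as the sketch below. This is a hypothetical skeleton for illustration (the linked template is more thorough), and all the filled-in values are made-up examples; the point is only that the sections get completed and reviewed before any code is written, with the subtask list becoming the one-at-a-time work queue.

```python
# Hypothetical minimal design-doc skeleton; section names and example
# values are illustrative, not from the post or the linked template.
DESIGN_DOC_TEMPLATE = """\
# Design Doc: {title}

## Problem
{problem}

## Proposed approach
{approach}

## Subtasks (completed one at a time, pausing for review)
{subtasks}

## Open questions / assumptions
{questions}
"""

doc = DESIGN_DOC_TEMPLATE.format(
    title="Feature extraction pipeline",
    problem="Raw events need to become model-ready features nightly.",
    approach="Batch job: read events, aggregate per user, write output.",
    subtasks="1. Define schemas\n2. Implement aggregation\n3. Add tests",
    questions="Which event fields are stable across versions?",
)
print(doc)
```

The back-and-forth then happens on the doc itself: the LLM proposes content for each section, you correct its assumptions, and only the agreed subtask list drives code generation.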

  • View profile for Philipp Schmid

    AI Developer Experience at Google DeepMind 🔵 prev: Tech Lead at Hugging Face, AWS ML Hero 🤗 Sharing my own views and AI News

    165,288 followers

    Reasoning Models 2.0: combine reasoning with tool use! ✨

    START teaches LLMs to use tools, such as a code interpreter, to improve reasoning and problem-solving. Self-taught Reasoner with Tools (START) integrates tool usage with chain-of-thought reasoning by enabling tool calls, self-check, exploration, and self-debug while reasoning, using a self-learning framework.

    👀 Implementation
    1️⃣ Collect math problems (AIME, MATH) and coding tasks (Codeforces, LiveCodeBench)
    2️⃣ Create context-specific hints like "Maybe using Python here is a good idea"
    3️⃣ Generate tool-assisted reasoning data (insert hints after conjunctions like "Wait" and before stop tokens)
    4️⃣ Score trajectories, remove repetitive patterns, and create a seed dataset with successful tool-assisted reasoning examples
    5️⃣ Fine-tune the model on the seed dataset, then self-distill to generate more diverse reasoning trajectories
    6️⃣ Fine-tune the base model using rejection sampling fine-tuning (RFT) on the extended dataset

    Insights
    💡 Improves math accuracy by +15% (AMC23: 95.0%) and coding by +38.6% on medium problems.
    📈 Test-time scaling via sequential hints boosts AIME24 performance by 12%.
    🐞 Code template modification reduces debug errors by 41% in training data.
    💡 Adding tools (Python interpreter) improves performance more than adding more training data.
    🧠 Large models already possess latent tool-using abilities that can be activated through hints.
    🛠️ Two-phase training (Hint-RFT then RFT) allows the model to learn effective tool usage.
    📍 Hint placement matters: after a conjunction token and before the stop token.

    Paper: https://lnkd.in/emF_m8Qz
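The hint-insertion step can be illustrated with a toy function. This is a sketch of the idea only, not the paper's implementation: the conjunction token and hint text are just examples, and the real system works on model token streams rather than plain strings.

```python
def insert_hint(trace: str, hint: str, conjunction: str = "Wait") -> str:
    """Insert a tool-use hint right after the first conjunction token.

    Toy illustration of START's hint placement: hints go after
    conjunctions like "Wait", or are appended at the end of the trace
    (i.e. before the stop token) when no conjunction appears.
    """
    idx = trace.find(conjunction)
    if idx == -1:
        # No conjunction found: append the hint at the end of the trace.
        return f"{trace} {hint}"
    end = idx + len(conjunction)
    # Splice the hint in immediately after the conjunction token.
    return trace[:end] + " " + hint + trace[end:]

trace = "The sum telescopes. Wait, let me double-check the bound."
hinted = insert_hint(trace, "Maybe using Python here is a good idea.")
print(hinted)
```

Trajectories produced from such hinted traces are then scored and filtered into the seed dataset described in the steps above.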

  • View profile for Phillip Carter

    tech pm @ salesforce

    2,279 followers

    I wrote a bit about how I use LLMs for coding these days. The TL;DR is:
    - Use Claude and pay for it, and radically update your priors on LLM coding capabilities. Benchmarks mean little; it's the champ in this arena right now
    - Rapid feedback cycles matter more than ever, because you really do need to run your code as you build it
    - Reconsider the use of libraries, since generating code that solves a scoped problem is now cheap, but dependency management is still hard
    - Build durable context for projects. Think up front a bit, build up a description of a codebase, plant it in that codebase, and use the LLM to update it as you go
    - Small diffs make for happy coders
    - Agents aren't good yet, not unless you handhold them through narrowly-scoped work
    - Who knows exactly what's next, but software engineering will undoubtedly be changed forever as a discipline
    https://lnkd.in/gBqgHhDs
