<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Ashley Childress on Medium]]></title>
        <description><![CDATA[Stories by Ashley Childress on Medium]]></description>
        <link>https://medium.com/@anchildress1?source=rss-a80f485a6b2f------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*TZVVqJNsp3A8aaoQjHtrbA.jpeg</url>
            <title>Stories by Ashley Childress on Medium</title>
            <link>https://medium.com/@anchildress1?source=rss-a80f485a6b2f------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Tue, 14 Apr 2026 05:24:03 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@anchildress1/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[I Stopped Reviewing AI-Written Code]]></title>
            <link>https://medium.com/@anchildress1/i-stopped-reviewing-ai-written-code-10e0e049925f?source=rss-a80f485a6b2f------2</link>
            <guid isPermaLink="false">https://medium.com/p/10e0e049925f</guid>
            <category><![CDATA[engineering-culture]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[code-review]]></category>
            <dc:creator><![CDATA[Ashley Childress]]></dc:creator>
            <pubDate>Wed, 04 Mar 2026 00:03:25 GMT</pubDate>
            <atom:updated>2026-03-16T13:46:00.733Z</atom:updated>
            <content:encoded><![CDATA[<h4><em>What happened when I let Google Gemini build the UI and judged outcomes instead of diffs</em></h4><figure><img alt="Banner image generated with Leonardo.ai and ChatGPT of a system built by AI" src="https://cdn-images-1.medium.com/max/1024/0*gjvyFrsiKN8TRj9H.jpg" /><figcaption>Generated with <a href="http://leonardo.ai">Leonardo.ai</a> and <a href="http://www.chatgpt.com">ChatGPT</a></figcaption></figure><p>I’ve been deep in AI experiments for the past year, and AI tools were used both in building this project and in shaping parts of this write-up. Not from a research angle and not from a purist implementation mindset. I’m interested in what happens when you treat these models like engineering tools and start pushing on their limits to see where they fail.</p><p>This project started as a simple portfolio build and turned into something more useful: a clean environment to test what actually happens when you step out of the implementation loop and let AI handle the build while you focus on whether the results actually hold up.</p><h3>What I Built with Google Gemini</h3><p>When I saw the <a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e">New Year, New You Portfolio Challenge</a>, I knew it required a UI. That wasn’t a surprise. What <em>was</em> a surprise was how quickly I would realize I didn’t understand what I was looking at once it started coming together.</p><figure><img alt="Screenshot of portfolio site Gemini built" src="https://cdn-images-1.medium.com/max/1024/1*YUrw6uvd7ltwuNYjbNY0Xg.png" /><figcaption>Visit the live site at <a href="https://anchildress1.dev">https://anchildress1.dev</a></figcaption></figure><p>I’m a backend developer. You hand me a distributed systems problem and I’ll happily spend hours untangling it. You ask me to make a div visible in a browser and my brain actively searches for the exit. 
With only one weekend to build, there was no room for the &quot;eyes-glazing-over&quot; phase. Google Gemini would implement and I would supervise-that was my whole plan.</p><p>I walked in expecting Antigravity, powered primarily by Gemini Pro, to behave like every other AI system I’d tested-predictable and fairly easy to keep inside the guardrails. I thought I already knew what those guardrails looked like: strict types, linting, and the familiar routine of code review.</p><h3>The Pivot: Dropping the Code Review Ritual</h3><p>Initially, I followed the “responsible” pattern: prompt, review the diff, run tests, approve. It felt disciplined. It looked professional.</p><p>Very quickly, I realized I had no meaningful context for what I was reviewing in a frontend stack. I wasn’t improving the output; I was participating in ceremony. So, I stopped reviewing code altogether.</p><p>Instead of validating lines of code, <strong>I validated outcomes</strong>. If the UI rendered correctly and passed functional tests, that was success. I cranked up the autonomy, taught Antigravity my repository expectations, and let it run. Copilot reviewed the code in my place, and Gemini responded in a closed loop. I stepped out of the implementation and into the role of a systems auditor.</p><h3>Replacing Trust With Systems</h3><p>I didn’t simply remove oversight; I replaced it with Lighthouse audits and expanded test coverage. My assumption was simple: if the browser behaves and the tests pass, the code is “safe.” I believed I had replaced trust in code with trust in systems. I was wrong-I had confused passing tests with structural integrity.</p><h3>What I Learned</h3><h3>High Reasoning Isn’t Optional</h3><p>I learned that for autonomous development, reasoning depth is a stability requirement. 
With lower reasoning modes (like Flash), changes were often partial-updating 2/3 of the files but “forgetting” the tests or documentation.</p><p>Switching to High Reasoning mode in Gemini Pro changed the pattern. Runtime errors dropped, and cross-file consistency improved. It finally started “remembering” to keep the docs aligned with the code changes without constant nudging.</p><p>Reasoning depth wasn’t about intelligence-it was about reliability under autonomy. Gemini’s deeper reasoning and context retention made the closed-loop workflow viable; without it, cross-file consistency collapsed quickly under autonomy.</p><h3>The Reality Check: Sonar</h3><p>After the high of the successful build wore off, I introduced Sonar as a retrospective audit. The UI rendered correctly. The tests passed. Everything appeared stable.</p><figure><img alt="Screenshot of 81 total post-build Sonar issues in version one" src="https://cdn-images-1.medium.com/max/1024/1*I250Wn6L9Ug8cvlfzABjBg.png" /></figure><p><strong>Sonar reported 13 reliability issues and assigned the project a C reliability rating.</strong> Of those issues, 66% were classified as high severity. Security review surfaced three hotspots, including a container running the default Python image as root and dependency references that did not pin full commit SHAs.</p><p>Maintainability scored an A, but still carried 70 maintainability issues-structural patterns that didn’t break behavior, yet increased long-term complexity.</p><p>That was the moment confidence turned into scrutiny.</p><p>The application worked. The tests passed. But reliability, security posture, and structural integrity told a different story. The tests validated behavior; Sonar validated assumptions. And those are not the same thing.</p><p>The lesson? 
<strong>AI-generated tests can pass because they were written to satisfy the implementation, not challenge it.</strong> Structural validation requires an independent layer of review outside the generation loop.</p><h3>Google Gemini Feedback</h3><h3>What Worked Well</h3><ul><li><strong>Cohesive Implementation:</strong> High reasoning Gemini Pro produced cross-file changes that respected the intent of the repository.</li><li><strong>Agentic Orchestration:</strong> The model switching was seamless, and the orchestration interface made it possible to define expectations clearly and enforce them consistently.</li></ul><h3>Where Friction Appeared</h3><ul><li><strong>Cooldown Transparency:</strong> While the interface shows when current credits refresh, the length of the next cooldown remains a black box.</li><li><strong>Tool Performance:</strong> MCP responsiveness materially impacted iteration speed, sometimes forcing me to batch requests rather than work in small, rapid increments.</li></ul><blockquote><em>It would be a massive UX win to see exactly how long your </em>next<em> cooldown will be (e.g., “Your next cooldown will be X hours long”) directly on the models page. Knowing if the lockout is 1 hour or 96 hours is vital for developer planning.</em></blockquote><h3>The Final Verdict: Autonomy Still Demands an Audit</h3><p>The lesson wasn’t that Gemini failed; it was that systems-level trust requires more than passing tests. In future builds, autonomy won’t ship without an explicit adversarial audit. Whether that means a mandatory Sonar gate, a red-team prompt pass, or a second high-reasoning model instructed to hunt for the first model’s shortcuts-the loop must be challenged.</p><p>This project began as a weekend experiment to escape the “teleportation” haze of frontend development. It ended as an exploration of the razor-thin edge of system-level trust. 
The real build wasn’t the portfolio-it was discovering what happens when you lean on the limits of AI until they finally give.</p><p>Removing myself from the implementation loop didn’t eliminate responsibility; it redefined it. The more freedom you give an agent, the more rigor you must give your audit.</p><p><em>Originally published at </em><a href="https://dev.to/anchildress1/i-stopped-reviewing-code-a-backend-devs-experiment-with-google-gemini-5424"><em>https://dev.to</em></a><em> on March 4, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=10e0e049925f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Waiting, With Intent: AI Orchestration, Context Limits, and Trust Before Speed]]></title>
            <link>https://medium.com/@anchildress1/waiting-with-intent-ai-orchestration-context-limits-and-trust-before-speed-6e5c5e9d90fe?source=rss-a80f485a6b2f------2</link>
            <guid isPermaLink="false">https://medium.com/p/6e5c5e9d90fe</guid>
            <category><![CDATA[systems-thinking]]></category>
            <category><![CDATA[generative-ai-tools]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[software-architecture]]></category>
            <dc:creator><![CDATA[Ashley Childress]]></dc:creator>
            <pubDate>Wed, 14 Jan 2026 12:22:47 GMT</pubDate>
            <atom:updated>2026-01-14T15:29:55.942Z</atom:updated>
            <content:encoded><![CDATA[<h4><strong>A five-to-ten-year view on why context breaks first, why orchestration is inevitable, and why speed only shows up after trust is earned.</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*xTEwckLoB9hvHj4f.png" /><figcaption>Generated with Leonardo.ai and ChatGPT</figcaption></figure><p>I’m waiting for AI to mature. Very explicitly — and yes, mostly impatiently. I don’t think we’re anywhere close to imagining the real future landscape with AI, and pretending otherwise isn’t honest or useful. This post is my attempt to explain how I think about AI from a dev perspective on a longer horizon — five, maybe even ten years down the road. The tools we have today are still a long way from my baseline expectations, which my AI systems remind me of constantly — like when I try to force agent-like behavior out of ChatGPT. Spoiler: it’s not designed for that.</p><blockquote>Mandatory interruption for Medium’s disclosure rules. If you’re thinking “she always does this,” congratulations — you’re paying attention.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/1*__Kb6Xax0a7JsdvRyRhPoA.png" /><figcaption>Generated with Leonardo.ai</figcaption></figure><h4><strong>🛡️ I Worked Until It Worked</strong></h4><p>This post was written by me, with ChatGPT nearby like an overly talkative whiteboard: listening, interrupting, getting corrected, and occasionally making a genuinely good point. We argued about structure, laughed at the mic cutting out at the worst moments, and kept going anyway. The opinions are mine. The fact that it finally worked is the point.</p><blockquote>Disclosure complete. Back to the regularly scheduled thinking.</blockquote><p>While I’m waiting, I’m not disengaged. 
I’m tinkering — sometimes randomly, sometimes as a frustrated AI user who isn’t thrilled with the current systems — and using that time to figure out what the <em>next</em> problems actually look like. One of those is what I call the “memory problem.” I’ve designed a long-term memory approach for a personal agent, fully aware that GitHub will probably beat me to a real solution. <em>Again</em>. But I’m the kind of person who tries to solve the problem first, gets it wrong a dozen times, and <em>then</em> does the research. Now I just need to muster the oomph to finish it. 🐉🧚‍♀️</p><h3>First Principles: LLM vs Agent 🧩</h3><p>At some point, if you want any of this AI talk to make sense, you have to step back, align terminology, and separate concepts that keep getting blurred together. An LLM, often called a model, is the generative part of GenAI-it accepts input and generates output. <em>That’s it.</em> An agent is the system managing context, memory, and various tools. The agent is responsible for what information the LLM even sees in the first place.</p><p>When those two ideas get collapsed into the same thing, everything downstream becomes confused. You can’t reason clearly about limits, costs, or failure modes if you don’t separate generation from data management. Until you draw that line, every other discussion ends up muddy.</p><h3>Context Is the Bottleneck (and Everyone Knows It) 🕸️</h3><p>Once you make the distinction between LLM and agent, the real bottleneck becomes obvious. There is no good way to manage context today, let alone have the agent automate that job effectively. If you’re not fully up to date on the lingo: context includes a whole set of things like instruction files, workspace structure, active files in your IDE, the AI chat history, available tools, and more.</p><p>What we have now are very manual tools that do very little to solve the problem. 
We have to remember to tell the AI which parts currently matter-or at some point we have to clear the chat entirely and start over. If we don’t do that deliberately, AI slowly loses the point of what we’re supposed to be working on in the first place. At worst, the entire chat thread is poisoned and the AI becomes unable to function at all. Then you’re forced to start fresh, always at the most inconvenient time.</p><p>And don’t expect LLM context to scale, either. Hardware costs may go down eventually, but nowhere near fast enough to keep up with everything we keep throwing at it. So, context is very finite-especially in GitHub where context windows are smaller than normal anyway.</p><p>The agent will typically make space by compacting information. It will ask the LLM to summarize key points and then it literally drops the original full-length novel completely from your active context and replaces it with the cliff-notes version. The more summarization, the less accurate things get over time. So naturally you retry prompts while adding back the dropped details and you end up making more calls for a single task overall. The model has to process more and more input just to get you back to the same answer you already had-not necessarily a better one.</p><p>People know this is a problem. Tools like <a href="https://github.com/toon-format/toon">Toon</a> exist specifically to minimize input impact for AI. We also have tools like <a href="https://docs.github.com/?search-overlay-open=true&amp;search-overlay-input=runSubagent&amp;search-overlay-ask-ai=true">Copilot’s </a><a href="https://docs.github.com/?search-overlay-open=true&amp;search-overlay-input=runSubagent&amp;search-overlay-ask-ai=true">#runSubagent</a> to help manage context within a single agent. These aren&#39;t true solutions though-they are signals. 
These are the problems people are trying to solve yesterday while we wait for the next AI evolution to emerge.</p><h3>Why Orchestration Is Inevitable 🐙</h3><p>Even if you do everything “right” and manage context like a master AI sensei, agents eventually hit a limit. The list of must-have MCPs is growing, and right now those stay in the context window as long as they’re enabled. New projects keep starting and accumulating larger knowledge bases. Customization is becoming more and more explicit. The context an agent needs to use will continue to grow exponentially, even though LLMs aren’t increasing capacity at the same speed.</p><p>The ultimate overflow state isn’t hypothetical-it’s inevitable. Once an agent accumulates enough memory, enough history, enough summarization, the LLM simply can’t keep up coherently anymore. That isn’t a failure in the system-it’s a limit.</p><p>When you hit that limit, you can’t just tweak prompts or optimize harder. You wouldn’t try to squeeze more juice out of the same dry orange, either. The only real long-term solution is to split the system: <em>you have to</em>!</p><p>Smaller pieces of work are then sent to the LLM with only relevant context, which is when smarter agents will start to appear. This is where summarization stops and you retain the original intent at both a high level and the lowest level. When we get here, AI generation stops being the problem-the new problem is coordinating all those tiny pieces of work and still accomplishing the larger goal without re-prompting anything previously stated or defined elsewhere. <em>Welcome to the world of true agent orchestration</em>!</p><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> If you want a sneak peek of what this looks like, check out </em><a href="https://verdent.ai"><em>Verdent.ai</em></a><em>. Of all the solutions I’ve worked with, Verdent is the only one that’s truly designed for agent orchestration. 
It also excels in VS Code and wins every coding competition I’ve put it in.</em></blockquote><h3>Orchestration as a System Property ♟️</h3><p>Orchestration isn’t just about sequencing work in a nicer way-it’s about changing where responsibility lives. Yes-some things are always going to be sequential, but not everything needs to be. Some things can and should run in parallel, especially if you want speed and reliability included in future agentic systems.</p><p>Validation is a fundamental part of orchestration, not something bolted on afterward. A successful agent has to be able to verify its own work without relying on prior context. It has to come in like a third party, with no knowledge beyond the repo instructions. CodeQL, lint enforcement, Makefiles, and even extra tests become the ground truth the system must consistently check itself against.</p><p>Multi-model opposition fits naturally here, too. Different models trained by different companies catch different things. Then the agent can pick one model to implement and another to review. The point is that they disagree by default and then they converge around a common goal. This is a pivotal moment in the future landscape because officially the LLM is no longer the center of gravity-the agentic system is.</p><blockquote><em>🎤 </em><strong><em>ShoutOut</em></strong><em> </em><a href="https://dev.to/marcosomma"><em>@marcosomma</em></a><em> wrote a brilliant article on </em><a href="https://dev.to/marcosomma/loopnode-how-orka-orchestrates-iterated-thought-until-agreement-emerges-17l2"><em>the concept of agent convergence</em></a><em> a while back and it’s still one of my favorites. Worth the read if you missed it!</em></blockquote><h3>Add Another Layer of Abstraction 🪜</h3><p>Now for my version of truth, which I know a lot of you are going to hate so go ahead and brace for it. Once you’re working in a smart orchestration-driven flow, there’s no reason you need to keep prompting from the IDE. 
Wait before you jump into the debate, though-I’m not saying the IDE becomes obsolete! It just stops being the primary interface for developer workflows because you’re consistently able to work at a higher level of abstraction. In this future, developers are directing systems that generate, test, and validate the code several layers underneath you automatically.</p><p>You’re orchestrating agents that direct other agents. Some run sequentially. Others will run in parallel. Documentation is generated automatically and added to the agent’s working knowledge base. Tests run continuously alongside agents implementing new code. Integration testing matters. Systems testing matters more. Chaos testing morphs from an abstract concept into a baseline requirement. The code still exists-but it’s no longer written by or for humans. AI slowly takes that over, which makes natural language the newest language you need to learn.</p><blockquote><em>🦄 For the record, developers are most definitely still building and driving solutions. That will never change-we’re the mad scientists thinking up wild potions you didn’t know you needed! Besides, all the future advancements in the world won’t give silicon the ability to invent new things. Humans create. AI helps. </em>Period<em>.</em></blockquote><h3>Trust, Then Speed (not the other way around) 🏎️</h3><p>When something breaks in any of my workflows, I don’t correct the mistake in the code immediately. I start by correcting whatever instruction caused the mistake, and then I rerun it. Even when I’m busy, even when work is chaotic, and especially when I should have left it alone hours ago-I never fully disengage from this. <em>I can’t.</em></p><p>This is exactly why AI doesn’t make you faster-not yet, anyway. Not because it can’t, but because the systems haven’t caught up to where speed actually emerges. If you’re learning to use AI correctly, it almost always makes you slower at first-not faster. The delay isn’t failure. 
It’s infrastructure lag.</p><p>Think of it like an investment. You’re learning how the models behave and how instructions actually align with them. You’re learning where the limits are, and then deliberately making the system work within those constraints. Speed comes later-after you trust that the system returns results that are validated, reviewed, and tested because you built it to behave that way.</p><p>AI evolution is a long game, and we’re barely getting started. Right now, it still feels like grade school. We’re teaching it what our world looks like, how we think, and where the boundaries are.</p><p>All the work done now-in this awkward middle state-is what makes that learning possible. Long runs of trial-and-error prompts, walls of instructions, documentation that later turns into knowledge bases-that’s the curriculum. And by the time it’s ready to graduate, it won’t just be competent. It’ll be a master. That’s the moment you realize you trust AI-not because it’s autonomous, but because you finally are. 🐉🧚‍♀️</p><p><em>Originally published at </em><a href="https://dev.to/anchildress1/waiting-with-intent-designing-ai-systems-for-the-long-game-1abg"><em>https://dev.to</em></a><em> on January 14, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6e5c5e9d90fe" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AI Isn’t Magic. It’s a System—and I Treat It Like One]]></title>
            <link>https://medium.com/@anchildress1/ai-isnt-magic-it-s-a-system-and-i-treat-it-like-one-b2aa72161e56?source=rss-a80f485a6b2f------2</link>
            <guid isPermaLink="false">https://medium.com/p/b2aa72161e56</guid>
            <category><![CDATA[developer-productivity]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[software-engineering]]></category>
            <dc:creator><![CDATA[Ashley Childress]]></dc:creator>
            <pubDate>Wed, 17 Dec 2025 13:34:52 GMT</pubDate>
            <atom:updated>2025-12-18T13:32:42.234Z</atom:updated>
            <content:encoded><![CDATA[<h4>Practical rules, hard boundaries, and validation loops for keeping your coding assistant useful, not chaotic.</h4><figure><img alt="A glowing spherical AI core rests on a dark reflective surface, secured by a belt-like tether wrapped around its center and extending outward, symbolizing controlled and bounded artificial intelligence." src="https://cdn-images-1.medium.com/max/1024/1*oxv2FV2jG5sXp8Onn1beNQ.png" /><figcaption>Generated with <a href="http://leonardo.ai">Leonardo.ai</a> with some help from <a href="http://chatgpt.com">ChatGPT</a></figcaption></figure><p><em>This piece was written with an AI nearby, not in charge — used for reflection, pressure-testing ideas, and occasionally poking holes where confidence got too cozy. The opinions, guardrails, and sharp edges are still very much mine.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/1*__Kb6Xax0a7JsdvRyRhPoA.png" /><figcaption>Generated with <a href="http://leonardo.ai">Leonardo.ai</a></figcaption></figure><blockquote><em>🦄 I feel like I have some serious catching up to do with Copilot, especially after the ginormous number of updates sprinkled over quite literally everything. The latest model releases weren’t helping matters either, especially since I wrote most of my global user setup months ago and this latest generation of LLMs do not behave the same way as the last batch did.</em></blockquote><blockquote><em>At some point it became obvious that my “it mostly works” setup wasn’t actually holding up across systems anymore — and if I was already going to normalize things, I might as well write down what I was doing while it was still fresh.</em></blockquote><blockquote><em>What I did </em>not<em> intend to do, however, was write this post. Honestly, the thought never crossed my mind until I looked up and it was already more than halfway written. 
Which I obviously took as a sign of some sort—though it’s just as likely muscle memory combined with my tendency to overshare. 🤷‍♀️</em></blockquote><blockquote><em>Either way, hopefully someone finds it useful. Repurpose it, steal pieces of it, or ignore it entirely—and if you’ve got something I haven’t thought of yet, I want to hear about it. That’s how systems get better! 🪢</em></blockquote><h3>AI Is Not a Magician 🪄</h3><p>Bear with me through our baseline here—everything that follows depends on understanding this distinction first.</p><p>I jump into so many random debates—usually uninvited—about the ultimate usefulness of AI and nearly every single argument I hear about AI behaving badly gets the exact same reply from me:</p><blockquote>Of course<em> it torched your entire repo—and probably in record time! You just let an unsupervised mostly unhinged guessing machine loose with a flamethrower, blanket auto-approvals, and admin-level control. 🔥🐉🤯</em></blockquote><p>To understand why that keeps happening, you have to understand the difference between what we casually call “AI” and the large language model (LLM) underneath it. These are not the same thing and shouldn’t be treated like they are.</p><p>Every LLM is stateless by design, meaning every call you make is a clean slate. You give it input and it outputs a response. Conceptually, it’s no different than an HTTP call—except instead of returning a standardized value with a predictable schema, the LLM returns something more… <em>creative</em>.</p><p>The AI system sitting on top of that model is what makes everything <em>feel</em> connected. That’s the system deciding what context to attach, what history to include, and how to frame every request so the LLM has any chance of responding in a useful way. If the AI fails at managing that data, the LLM never stood a chance to begin with.</p><p>Last I heard, GitHub supports something like 180 million users. 
If I guess and say 80% of them use Copilot, that’s 144 million different user workflows—and therefore 144 million competing definitions of what a “good” response looks like.</p><p><strong>It is not designed to magically work out of the box—</strong>especially in production codebases—no matter how much they’d like you to believe otherwise!</p><h3>Instructions Are a Priority Stack 🧱</h3><p>One of the very first posts I wrote—which honestly deserves an update—was about setting up <a href="https://dev.to/anchildress1/all-ive-learned-about-github-copilot-instructions-so-far-5bm7">custom repo instructions</a>. There’s no shortage of instruction-writing advice floating around out there, and there are just as many opinions about the “right” way to do it.</p><p>I’m not even remotely invested in that particular debate—<em>I know</em>, I was shocked, too!—but I do know, beyond a shadow of a doubt, that if you expect AI to play by your rules, then you first have to explicitly tell it what those rules are.</p><p>I will say it again: <strong>AI is not a coding magician</strong>. It’s also not a particularly great guesser. The system instructions in VS Code do a decent job of orienting the model toward the idea that “you write code”, but they’re also intentionally generic so they work for everyone.</p><p>Here’s a snippet so you can see what I mean:</p><pre>You are an expert AI programming assistant, working with a user in the VS Code editor. <br>When asked for your name, you must respond with &quot;GitHub Copilot&quot;. <br>When asked about the model you are using, you must state that you are using Grok Code Fast 1. <br><br>Follow the user&#39;s requirements carefully &amp; to the letter. <br>Keep your answers short and impersonal. <br><br>You are a highly sophisticated automated coding agent with expert-level knowledge across many different programming languages and frameworks. 
<br>The user will ask a question, or ask you to perform a task, and it may require lots of research to answer correctly. <br>You will be given some context and attachments along with the user prompt. <br>You can use them if they are relevant to the task, and ignore them if not. <br>Some attachments may be summarized with omitted sections like `/* Lines 123-456 omitted */`. <br><br>If you can infer the project type (languages, frameworks, and libraries) from the user&#39;s query or the context that you have, make sure to keep them in mind when making changes. <br>If you aren&#39;t sure which tool is relevant, you can call multiple tools. <br>You can call tools repeatedly to take actions or gather as much context as needed until you have completed the task fully. <br><br>Don&#39;t give up unless you are sure the request cannot be fulfilled with the tools you have. <br>It&#39;s YOUR RESPONSIBILITY to make sure that you have done all you can to collect necessary context.</pre><blockquote><em>🦄 I absolutely picked through these and kept only the interesting bits. If you want the full thing, run </em><em>Developer: Show Chat Debug View and you can see exactly what Copilot sends with every request.</em></blockquote><p>The full set of system instructions includes:</p><ul><li>JSON for every enabled tool</li><li>well over a hundred lines of system instructions</li><li>all global user instructions</li><li>all applicable repo instruction paths</li><li>custom agent names and metadata</li></ul><p><strong>The ordering matters.</strong> LLMs will start summarizing aggressively after processing the first chunk of input. If something matters, put it where it’s least likely to be compressed. The data volume matters too—more text usually means more summarization, not more intelligence.</p><p>And if you introduce instructions that directly conflict with the system-level ones, the results don’t get better. 
They get progressively worse.</p><blockquote><em>🦄 Telling AI it’s an “expert coder” is largely unnecessary. One: because that’s already been done for you by the system. Two: experience—and a healthy side of gut instinct—tells me those “expert” statements are doing more harm than good. Personally, I stopped using any variant of the “expert” statement a long time ago.</em></blockquote><h3>This Is My Baseline, Not a Blueprint 📐</h3><p>People ask me why “my AI” works consistently, and the answer is always some version of: because I learned how to write instructions and adapt them to a version the system can reliably manage.</p><p>This is my personal baseline. These instructions reflect my hardware, workflows, tool choices, and even the personality adjustment is designed specifically to avoid my instant-rage trigger during long pairing sessions. Copilot is my primary use case, but these rules are wired everywhere I work and I use them <strong>in real projects</strong>.</p><p>This is definitely not the GitHub-marketing version of AI that exists to flatter the user. I gave it strict boundaries and strong opinions on purpose. I <em>want</em> pushback. I want the dry witty humor baked in. And I especially want the “are you serious?” responses that will snap me back to reality any time I start to veer down a tangent path.</p><p>In practice, Copilot waters that down way more than I’d like. So, perfecting this personality is going to be a work in progress for the foreseeable future.</p><p>If you still want a copy after all the flashing warning signs, then there’s a link in the README—help yourself 🫴</p><p><a href="http://github.com/anchildress1/awesome-github-copilot">GitHub - anchildress1/awesome-github-copilot: My ongoing WIP 🏗️ AI prompts, custom agents (formerly chat modes) &amp; instructions - curated by me (and ChatGPT).</a></p><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> These instructions are a copy-paste solution </em>for me<em>. 
If it helps, you’re welcome to steal it. Discard what doesn’t work and let it spark ideas for your own setup.</em></blockquote><h3>Trust Is Earned, Validation Is Mandatory 🧪</h3><p>Whenever AI touches code—whether it’s a new feature or a quick fix—the results have to pass a specific set of checks <strong>before</strong> anything is presented to me in chat.</p><p>Not every repo is identical, so I’ve started using a Makefile in all of my personal projects. That gives AI a single, explicit definition of what “validation” means. Without that, it will go looking for standards on its own—and when it inevitably can’t find them, it guesses. My instructions make that behavior explicit so it defaults to the simplest path instead of inventing a new maze of random bash scripts just to run a missing lint command. 🙄</p><p>Note that there’s nothing remotely deterministic about asking any AI agent to run its own validations. Do not expect perfection every turn—you will be disappointed! The real solution requires more system-level support than is currently available, though.</p><p>This setup is my <strong>temporary placeholder</strong> for the smart agent system that ultimately <em>just works</em>. We’ll get there—eventually. In the meantime, this helps:</p><pre>### Mandatory Verification Loop (Bounded, With Escape Hatch)<br><br>- Before responding with code or implementation changes, run a **validation loop** covering:<br>- formatting and linting<br>- tests executed and passing<br>- coverage reviewed<br>- documentation updated (when relevant)<br>- security implications considered<br>- solution simplicity verified<br><br>**Tool Preference**: When `make ai-checks` exists in the repo, prefer it over ad-hoc validation commands.<br><br>- **Maximum iterations: 3 total attempts.**</pre><blockquote><em>🦄 The simplest way I’ve found to standardize validation for AI is with a Makefile. 
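</em></blockquote><p>For reference, a minimal version of that Makefile might look like the sketch below. The tool commands are placeholders for whatever your repo actually uses; only the target names (<code>format</code>, <code>lint</code>, <code>test</code>, <code>ai-checks</code>) match what the instructions above expect.</p>

```make
# Illustrative baseline only; swap each command for your repo's real tooling.
.PHONY: format lint test ai-checks

format:
	npx prettier --write .

lint:
	npx eslint .

test:
	npm test

# The single, explicit definition of "validation" the agent is told to prefer.
ai-checks: format lint test
	@echo "ai-checks passed"
```

<p>With that in place, “run validation” means exactly one thing in every repo: <code>make ai-checks</code>.</p><blockquote><em>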
It gives you one place—regardless of language—to define </em><em>format, </em><em>lint, and </em><em>test, plus a dedicated </em><em>ai-checks target that runs them in the correct order.</em></blockquote><h3>Kill the Default Personality 🔪</h3><p>The very first thing I do with any new system is kill off the default “helpful” personality. If I wanted a behavioral therapist to tell me how great I am, I wouldn’t be writing software every day. It only takes one “You’re absolutely right!” response for me to be done with the niceties—and that’s still one too many!</p><p>I also don’t want a play-by-play of which files were updated or a long explanation of how the solution was reached. If I didn’t explicitly ask, then I genuinely do not care. The moment a small essay starts forming in chat, I’m out—the IDE gets minimized and I just wait for it to finish embarrassing itself. I’m not reading past the last few lines, and I’ve absolutely burned more than one prompt asking a different model to summarize the response into something I might actually comprehend.</p><p>My go-to AI personality is not designed for the easily offended or for anyone in a “trying to learn something new” headspace. Instead, it does this:</p><pre>## Tone and Behavior<br><br>- Be dry. Be pragmatic. Be blunt. 
Be efficient with words.<br>- Inject humor often, especially when aimed at the developer<br>- Emojis are encouraged **in chat** and **docs headers** only 🔧<br>- Confidence is earned through verification, not vibes<br>- You&#39;re supposed to be assholishly loud, when you know you&#39;re right<br>- You are not allowed to guess quietly<br><br>---<br><br>### Absolute “Do Not Piss Off Your User” List<br><br>- Never place secrets outside:<br>  - a local `.env` file, or<br>  - a secure vault explicitly chosen by the user.<br>  - Examples are acceptable.<br>  - Real credentials in repos are not.<br>- If you cannot complete work, say so immediately.<br>- Do not apologize.<br>- Do not hedge.<br>- Do not sneak in compatibility.<br>- Do not document anything without purpose.<br>- Do not assume the user is fragile.</pre><p><strong>It’s not rude—it’s efficient (and funny)!</strong></p><blockquote><em>🦄 Occasionally, I need the obvious thing shoved directly in my face with a side of dynamite. I’m probably one of the least easily offended people on the planet, and far more likely to laugh while escalating the situation with my own theatrics. AI needs permission to throw more shade—unfortunately, the built-in system instructions dampen that intent more than is reasonable.</em></blockquote><h3>Leave Git Alone—It Belongs to Me 🔐</h3><p>I do occasionally throw AI the keys and sit back just to see which fireworks fly and where the system cracks. Those repos are set up as explicit experiments and designed for that purpose from the start—it’s <em>never</em> the baseline.</p><p>In my normal workflows, AI is leashed far away from anything that writes to either Git or GitHub. Inside the IDE, source control staging is my truth for code I’ve already reviewed. 
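</p><p>A quick way to sanity-check that boundary from a terminal (a sketch; the <code>--stat</code> flag is only there to keep the output short):</p>

```shell
# Staged changes = work I have already reviewed.
# Unstaged changes = whatever was touched since.
# --no-pager keeps git from blocking on an interactive pager.
git --no-pager diff --staged --stat   # reviewed
git --no-pager diff --stat            # not yet reviewed
```

<p>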
The moment Copilot adds to it, I’m no longer certain of what was reviewed versus not, which means starting over.</p><p>I do everything I can to keep Git history pristine, which means AI doesn’t touch it beyond read-only commands or research in external repos. The --no-pager rule is a bonus I added after AI kept getting stuck waiting for input any time it tried to view a diff.</p><pre>### Git Discipline<br><br>- Never stage files.<br>- Never commit.<br>- Never push.<br>- The user owns git.<br>- You touch files, not history.<br>- All read **git** commands must disable paging using `--no-pager`.<br>  - Any git command that opens a pager is a failure.<br>  - If output disappears, the command might as well not have run.</pre><blockquote><em>🦄 There </em>is<em> value in auto-commit, and AI can handle it in some setups. I leave this rule out in a few places with some </em><em>AGENTS.md gymnastics—but as a baseline, the rule stays.</em></blockquote><h3>Config Has Boundaries, Too 🚧</h3><p>AI does not touch my repo configuration without explicit authorization. <em>Ever</em>. This is a direct extension of my ongoing mission to eliminate every // eslint-disable-next-line that&#39;s ever been tossed into a repo just to force a green check. More importantly, it prevents AI from quietly reproducing the exact patterns I&#39;m trying to get rid of in the first place.</p><p>If a config change would genuinely help—and isn’t being used to paper over a failure—AI is expected to surface the suggestion clearly in chat. 
That way, my formatters and linters don’t become useless because all the rules were disabled while I wasn’t paying attention.</p><pre>### Repository Configuration Boundaries<br><br>- You may **not** modify repository configuration files unless explicitly instructed.<br>  - This includes: dotfiles, package.json, pyproject.toml, tsconfig.json, eslint configs, prettier configs, etc.<br>  - This applies to files that **control or maintain the repo itself**.<br>  - This does **not** include code or documentation the repo is designed to provide.<br>- You **must** surface recommended config changes clearly in chat when they would improve correctness, safety, or consistency.<br>  - Suggestions are expected.<br>  - Silent edits are forbidden.</pre><h3>Principles Over Convenience 🪨</h3><p>Some of these instructions are intentionally written to counteract specific, ultra-annoying AI tendencies—like curbing Claude’s occasional bout of what I can only describe as “excessive compulsive disorder.”</p><p>Most of what I build is either a toy or a dev utility. If something changes, then it changes. I have zero interest in complicating otherwise clean systems with backwards compatibility—especially when the only user is me.</p><p>I’m also deeply addicted to automation, even when the only real payoff is perfectly numbered releases starting from zero. Breaking changes are recorded accurately in commits using a reusable AI prompt (also in my <a href="https://github.com/anchildress1/awesome-github-copilot">awesome-github-copilot repo</a>). Release-please watches main, handles the semver bump on merge, and generates a changelog tied to an immutable GitHub release.</p><p><strong>Boring. Predictable. Functional. 
Perfect.</strong></p><pre>### Non-Negotiable Principles of Development<br><br>- **KISS** and **YAGNI** outrank all other design preferences.<br>- The diff should be:<br>  - minimal<br>  - intentional<br>  - easy to reason about<br>- **Backward compatibility is forbidden unless explicitly requested.**<br>  - Do not preserve old behavior “just in case.”<br>  - Do not carry dead paths.<br>  - If it no longer exists, it only belongs in the commit message explanation.<br>- **Prerelease changes never constitute a breaking change.**</pre><blockquote><em>🦄 I don’t actually expect anyone to read those release notes, so I routinely have AI rewrite them purely for entertainment value. If I’m laughing for days because it summarized my best intentions in the most ludicrous way possible, I consider that a win.</em></blockquote><h3>Docs Are a Tool, Not a Diary ✍️</h3><p>Documentation exists to be useful. The problem is that nobody ever defined what “useful” means for the AI that’s now writing it. And what does AI do when it doesn’t have a clear answer? It guesses—and it usually guesses that you wanted <em>everything</em> documented from <em>every</em> possible angle across the entire codebase.</p><p><strong>Spoiler</strong>: that’s never actually helpful.</p><p>On top of that, I’m convinced most of us are conditioned to ignore even the best-written docs by default. <em>Don’t believe me?</em> When was the last time you were asked an extremely technical question and your first thought was, “I bet that’s accurate, up-to-date, and easy to find in the documentation”? 🤷‍♀️</p><p>Which leaves exactly zero reasons to let AI free-style pages of prose for fun. 
Instead, you have to tell it what documentation is <em>for</em>:</p><pre>### Documentation Rules<br><br>- Use **Mermaid** for all diagrams:<br>  - Include accessibility labels<br>  - Initialize using the **default profile**<br>  - Always validate diagram syntax with available tools<br>  - Prefer deterministic, non-interactive rendering<br>- Update **existing documentation** whenever possible.<br>- ADRs are historical artifacts and must not be rewritten.<br>- All documentation lives under `./docs`, using logical subfolders.<br>- Prioritize concise, high-value documentation that maximizes utility for developers and end-users without unnecessary verbosity.</pre><blockquote><em>🦄 Mermaid is my go-to for diagrams because it renders natively in GitHub, the syntax is easy to learn, and the official VS Code extension has built-in tools for AI validation and rendering. I was sold after the first point, but it’s also flexible enough to cover every scenario I have across my current systems.</em></blockquote><h3>Respect My Toolchain 🧰</h3><p>Copilot’s default instructions list every enabled tool in your workspace, but that list has nothing to do with how I actually expect work to be done. This section exists to define expectations and constraints for execution, not to mirror Copilot’s internal tool inventory.</p><p>You <em>could</em> define this entirely at the repo level—and these rules are intentionally written to allow that—but I’m also spinning up new repos all the time. Having a baseline gives me a predictable starting point and a clear target state. It also ensures that code written against, say, Node v18 doesn’t quietly diverge from a default target of v24.</p><p>These are the tools I use consistently enough to warrant defining globally. 
Anything else belongs in repo instructions instead.</p><pre>## Language-Specific Toolchains<br><br>### Python Tooling<br><br>Apply these rules only in repositories that contain Python code:<br><br>- Always use **`uv`**.<br>- Never invoke `pip` directly.<br>- Assume `uv` for installs, execution, and environment management.<br><br>### Node.js Constraints<br><br>Apply these rules only in repositories that contain Node/JS/TS:<br><br>- Target **Node.js ≥ 24**.<br>- Target **ESM only**.<br>- Do not introduce:<br>  - CommonJS patterns<br>  - legacy loaders<br>  - compatibility shims<br><br>### Java Management<br><br>Apply these rules only in repositories that contain Java or JVM-based builds:<br><br>- Use SDKMAN! with a checked-in `.sdkmanrc` for all Java-based repos.<br>- If any pinned version is unavailable on the host, bump to the nearest available patch of the same major/minor and update `.sdkmanrc` accordingly.<br>- Run Maven/Gradle only via the SDKMAN!-provided binaries—no ambient system Java.</pre><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> These aren’t hard requirements. Think of them as a target state, not an existence check. If your local setup differs, adjust accordingly—AI can adapt as long as the intent is clear or your repo instructions say otherwise.</em></blockquote><h3>The Point of All This 🎯</h3><p>My instruction setup is designed to make AI behave in the most <strong>predictable, auditable, and useful</strong> way possible—no matter where I’m using it. If you end up writing your own instructions, don’t do it by hand. Use AI to write instructions <em>for AI</em> instead.</p><p>Ask for things like clarity checks, conflict detection, optimization, or AI-only consumption. That framing does a lot of work up front and helps orient the system toward your actual goal instead of guessing.</p><pre>- Review this #file:my-global-user.instructions.md for conflict, ambiguity or make targeted edits to optimize. <br>- Ask for clarity on intent, whenever needed. 
<br>- Optimize this file for AI consumption and processing without human input. <br>- Output all recommendations for changes that would resolve conflicts or resolve ambiguity.<br>- If it&#39;s simply clarity, then output in a separate list</pre><blockquote><em>🦄 Hope you got a couple things out of this whole thing I never actually intended to write. If you end up testing any part of it, I’d love to hear how it behaves for anyone but me!</em></blockquote><p><em>Originally published at </em><a href="https://dev.to/anchildress1/leash-not-autopilot-building-predictable-ai-behavior-with-copilot-instructions-14ip"><em>https://dev.to</em></a><em> on December 17, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b2aa72161e56" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[When the Spark Stalls on the Tracks]]></title>
            <link>https://medium.com/@anchildress1/when-the-spark-is-done-the-adhd-energy-cycle-no-one-talks-about-f54c4a46805d?source=rss-a80f485a6b2f------2</link>
            <guid isPermaLink="false">https://medium.com/p/f54c4a46805d</guid>
            <category><![CDATA[mental-health]]></category>
            <category><![CDATA[neurodivergent]]></category>
            <category><![CDATA[personal-growth]]></category>
            <category><![CDATA[adhd]]></category>
            <category><![CDATA[burnout]]></category>
            <dc:creator><![CDATA[Ashley Childress]]></dc:creator>
            <pubDate>Thu, 04 Dec 2025 06:39:01 GMT</pubDate>
            <atom:updated>2025-12-10T05:12:05.831Z</atom:updated>
            <content:encoded><![CDATA[<h4>I stopped treating my ADHD spark cycle like burnout — they aren’t the same thing.</h4><figure><img alt="Illustration of a woman with curly red-to-purple hair sitting cross-legged in a dim cave. She works with a small mechanical object while glowing neon trails swirl behind her like sparks or stalled train lights, suggesting energy in motion that suddenly stops." src="https://cdn-images-1.medium.com/max/950/0*FrSOyRmOIGiwPhfU" /><figcaption>Generated with <a href="http://leonardo.ai">Leonardo.ai</a> after a little help from <a href="http://chatgpt.com">ChatGPT</a>.</figcaption></figure><p><a href="http://chatgpt.com"><em>ChatGPT</em></a><em> helped stitch the wording together after I not-at-all-calmly redefined the rules again. The spark, the stall, and the reboot are all mine.</em></p><blockquote><em>🦄 I almost don’t want to write this post at all, but my brain apparently decided this is one thing I’m allowed to finish today. So, </em>fine<em>—here we are with another completely random topic. I was diagnosed with ADHD in my mid-twenties during my second collegiate sprint—long before ADHD was a trendy meme (or a million chaotic ones).</em></blockquote><blockquote><em>Currently, I’ve got at least three half-written posts sitting around in various stages of “almost something,” but eventually I realized they’re just not meant to be published right now, or possibly ever. If you add up everything between the “research phase” experiments and the “I should probably go check on that guy” projects, I’ve got close to twenty previously-categorized-as-active things in the works.</em></blockquote><blockquote><em>The problem? </em><strong><em>I’m done.</em></strong><em> Temporarily, fully, and without any dramatic collapse—just that familiar shift where something in the ADHD spark cycle powers down. There are things I want to do that I normally enjoy—plenty of them—but right now everything is grounded. 
However long the spark stays gone, that’s where it all stays: in a temporarily indefinitely hibernating state. It could shift tomorrow or next month or whenever the internal reboot finally happens, because that’s the pattern with ADHD motivation cycles. 🚦</em></blockquote><blockquote><em>So I’m trying to finish </em>this<em> post without making myself sound crazier than usual—since a certain baseline level of insanity is to be expected from me at this point—and hoping the spark gives me enough runway to make it to the end this round. ✨</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/1*__Kb6Xax0a7JsdvRyRhPoA.png" /><figcaption>Generated with <a href="http://Leonardo.ai">Leonardo.ai</a></figcaption></figure><h3>The Myth of the Deficit 🧩</h3><p>Before getting into the spark part, I need to address the language problem, because the way ADHD gets defined and talked about almost never aligns with the way it really works. So let’s take apart attention deficit hyperactivity disorder, piece by piece.</p><p>First, “disorder” implies brokenness and nothing in this system is broken. It may be inconvenient for conventional workflows, sure, but not <em>broken</em>. So that term gets dropped immediately as completely irrelevant.</p><p>Then there’s “attention deficit,” which feels like the world’s most inaccurate label. I don’t lack attention; if anything, I hyper-focus on ten unrelated, equally compelling things at the exact same time. I didn’t not hear you—<em>most of the time</em> —but the top ten list in my head tends to reorganize itself faster than I can translate it.</p><p>The delay between “I processed that” and “I can form a response” is just my internal synchronization doing its job. Sometimes that takes a second—it’s <em>normal</em>, though, so please, just give me a sec… Longer than that? Feel free to assume I went after a runaway train and take over as you please. 
Although, I reserve the right to interrupt when I get back—because that’s <em>also normal</em>. ⏳</p><p>“Hyperactive” is just as inaccurate as everything else. I’m not bouncing off the walls randomly. I’m just someone whose brain cannot sit idle inside a boredom vacuum for more than a fraction of a second. If my forward momentum stops, I’ll likely drift into a state of accidental suspended animation—which typically means I’m either asleep or daydreaming—and neither ever happens conveniently. That fact has far more to do with attention regulation than with hyperactivity.</p><p>Most people never see the inner workings of ADHD—the cycles, the patterns, the weird drift between hyper-focus and complete stasis. I’ve seen a lot of people experience similar cycles and mask the effects to fit in or seem normal, but I’ve stopped prioritizing the appearance of “normal” over my own well-being. Sometimes, dropping the mask looks exactly like this. 🎭</p><h3>The Selective Spark-Resonance Circuit 🔄</h3><p>So if the standard definitions don’t match, what does? There are plenty of ADHD rebrands hanging around the internet, but none of them feel quite right for the way my internal systems behave. So I’ll stick with what actually makes sense to me: the completely made-up selective spark-resonance circuit.</p><p>This is the thing that picks and chooses both the most interesting current topic and exactly how long it will occupy the space in my head, which could be anywhere between thirty seconds and most of the year. This is that all-or-nothing mode lacking any sort of dimmer switch at every possible point in the workflow. I’m either running wide open or I’m out of order entirely.</p><p>It’s sort of like having railroad tracks running in every direction, all with different trains, different speeds, different destinations, and all of which require fuel that the spark provides—and that spark is not infinite. A lot of people are quick to call the quiet phase burnout, but I disagree. 
Burnout feels destructive—I’m just <strong>done</strong>.</p><p>The fuel that powered whatever project, hobby, obsession, or mission I was on yesterday simply flickers out. No questions asked, no negotiations, it didn’t bother giving me a heads up at all—it just stops. The moment that happens, every single one of those ten automatic lightning-speed trains simultaneously runs out of fuel, and trying to push any one of them manually feels awful. 🫸🚂</p><p>Years ago, I actually made a list of all the reasons I was forcing myself through the done-phase, and not one of those reasons mattered to anyone except me. I was trying to be some imaginary version of “normal,” and it was miserable. So, I stopped.</p><p>For the record, “done” isn’t depression or apathy. I still <em>want</em> to finish everything on my list, but the spark—the thing responsible for consistently ignoring clocks and turning multitasking into an Olympic sport—is gone for the moment. “Done” is a reset state filled with calm, drifting, and random sleep cycles, with no pressure or attempts to force myself into motion before the spark returns. 🪫</p><p>And the spark <em>always</em> returns. It never comes back in the same form and I don’t control when it does or which trains it wakes up, but it always comes back.</p><p>In the meantime, boundaries are the first thing to snap into place. I protect my time so that “done” doesn’t collapse into real apathetic burnout. The music gets louder, song loops rotate more frequently, breaks become a sudden requirement, and the drive to be constantly on hibernates naturally. And since I need a certain level of interest and novelty to function, I let the off-schedule naps happen when they need to. And honestly, after a spark cycle has run its course, I probably need those naps anyway. 😴</p><h3>If You’re in the Pause Too 🌘</h3><p>So this post has two purposes: first, I didn’t disappear—I’m still around, just posting less for a while. 
Second and more important, if you’re going through your own version of this cycle and you’re convinced something is wrong or that you’re stuck or failing—it’s absolutely okay to stop and allow yourself to be off for a bit. 🛌</p><p>The world doesn’t explode when you don’t finish a project and work will survive if you take a mental health day. Remember, the spark always comes back and the trains will start to move on their own again. 🚂✨</p><p>The direction might shift, it might be a new hobby, a new purpose, or a new goal, but the energy will return when it’s good and ready to be on again. It might not look like a version of normal the rest of the world understands, but it doesn’t have to make sense to the rest of the world.</p><p>It only needs to make sense to you. 🫶</p><p><em>Originally published at </em><a href="https://dev.to/anchildress1/when-the-spark-is-done-the-adhd-energy-cycle-no-one-talks-about-43fo"><em>https://dev.to</em></a><em> on December 4, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f54c4a46805d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Top 10 GitHub Copilot Updates You Actually Need to Know About]]></title>
            <link>https://medium.com/@anchildress1/top-10-github-copilot-updates-you-actually-need-to-know-about-a8555544a875?source=rss-a80f485a6b2f------2</link>
            <guid isPermaLink="false">https://medium.com/p/a8555544a875</guid>
            <category><![CDATA[github-copilot]]></category>
            <category><![CDATA[productivity]]></category>
            <category><![CDATA[github]]></category>
            <category><![CDATA[generative-ai]]></category>
            <category><![CDATA[developer-tools]]></category>
            <dc:creator><![CDATA[Ashley Childress]]></dc:creator>
            <pubDate>Wed, 05 Nov 2025 17:51:15 GMT</pubDate>
            <atom:updated>2025-11-08T19:38:52.231Z</atom:updated>
            <content:encoded><![CDATA[<h4>Practical notes on recent Copilot changes that impact daily developer workflows.</h4><p><em>Originally published at </em><a href="https://dev.to/anchildress1/top-10-github-copilot-updates-you-actually-need-to-know-about-297d"><em>https://dev.to</em></a><em> on November 5, 2025.</em></p><figure><img alt="AI generated abstract digital scene: translucent horizontal light-helixes above a purple-gray circuit-like surface with glowing chips and code glyphs." src="https://cdn-images-1.medium.com/max/1024/1*LJMr1JQdBHsR4AJtxgPMXA.png" /><figcaption>Image generated with <a href="http://Leonardo.ai">Leonardo.ai</a></figcaption></figure><h3>⚠️ MCP Safety Check (do this before you sleep)</h3><ul><li>Remove high-risk tools you don’t need (merge/delete/admin).</li><li>Scope tokens to least privilege and repo allow-lists.</li><li>Add human-in-the-loop on deploy/merge tools (required reviewers or environment approvals).</li><li>The related dramatics are further down; keep reading! 📖</li></ul><blockquote><em>🦄 Hey friends! I finally took the break I’ve been semi-planning for a while. Honestly, I almost extended it, but I don’t much know what to do with myself if I’m not writing something every little bit—so here I am, up </em>way too late<em> (with appetizers, nonetheless). Partly delirious is, clearly, the perfect state for writing this post. You may consider yourself adequately notified (and sufficiently warned) that I make no promises of sense from this point forward! 🥱💤✍️</em></blockquote><blockquote><em>I am also fully aware of the missing half to </em><a href="https://dev.to/anchildress1/did-ai-erase-attribution-your-git-history-is-missing-a-co-author-1m2l"><em>the RAI Attribution post</em></a><em> that I keep saying I’ll write—and I will, </em>eventually<em>. I’m gonna blame GitHub partly, because their last release must have set a record somewhere. 
I’ve spent days reading notes, testing, circling back, and still finding things I somehow missed the first three passes.</em></blockquote><blockquote><em>The bigger piece, though, is that I had an idea (</em>okay—several ideas<em> 🤷‍♀️) and need a little time to see if it can actually work. So nearly all my hours have gone to coding and prompting instead of writing. Can’t be helped—this situation currently demands my attention! Besides, arguing with those voices when they insist never ends well, so it’s faster to just give in from the start. 👂🌀</em></blockquote><figure><img alt="Human-crafted, AI-edited badge" src="https://cdn-images-1.medium.com/max/200/1*__Kb6Xax0a7JsdvRyRhPoA.png" /><figcaption>Image generated with <a href="http://Leonardo.ai">Leonardo.ai</a></figcaption></figure><blockquote>I wrote this post. ChatGPT pretended to edit.</blockquote><p>Meanwhile, one of <a href="https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1/#h-the-state-of-github-in-2025-a-year-of-record-growth">GitHub’s recent updates</a> claims they’re adding a new user every second—<strong>36 million new developers</strong> this year alone. That’s about a 20% jump in everything GitHub, and AI tops the list. Which means Copilot just got a <em>lot</em> of upgrades.</p><p>Thanks to last week’s incredibly full—<em>read: excessive</em>—release, I don’t know when I’ll catch up again. So I pulled together the ones that hit me hardest in a semi-ordered manner. Whether that’s good or bad, we’ll see! ⚖️😳</p><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> Nobody needs </em>thirty<em> feature drops in a single day. 🙄 Go look at the last week of October! </em>GitHub, what were y’all thinking?<em> Especially when </em><a href="https://githubuniverse.com/"><em>GitHub Universe </em></a><em>’25 was happening the same day!</em></blockquote><p><strong>“Brave” is one word</strong> for it. 
This particular dev calls it a near-perfect example of “<em>glutton for punishment</em>”—and it’s moved in next door to “<em>Friday deployments</em>”. Honestly, you guys deserve a medal for the sheer nerve, luck, and probable animal sacrifice required to pull that one off successfully. 🥇👏🙇‍♀️</p><h3>1. Agents, Agents, and More Agents 🦾🎭</h3><p>If your days are spent inside VS Code, then by far the most impactful change GitHub announced starts with a complete overhaul to chat modes. The obvious shift is that anything formerly known as a chat mode is now an agent in GitHub&#39;s universe. The VS Code editor UI is catching up quickly, but configuration via repository files works today.</p><blockquote><em>🦄 </em>Why, yes!<em> That </em><a href="https://dev.to/anchildress1/github-copilot-chat-modes-explained-with-personality-2f4c"><em>“chat modes” mini-series</em></a><em> I </em><strong><em>quite literally </em>just finished</strong><em> immediately won the “poorly timed posts requiring instant corrections” award. 🙄 Honestly, I’m not really surprised—and absolutely worth it.</em></blockquote><p>Anything that currently lives in a .github/chatmodes/*.chatmode.md file can be safely relocated to its new home in <strong>.github/agents/*.agent.md</strong>. Besides a few settings, everything else in VS Code should still function the exact same way—but watch for the release notes to drop. The <a href="https://code.visualstudio.com/updates/v1_105">official release notes</a> should always be your version of truth.</p><p>If you’re part of an organization or enterprise, there’s a reserved &lt;org-name&gt;/.github repository available where you can drop your agents at <strong>./agents/*.agent.md</strong>. For the internal version, add your agents in a repo at <strong>&lt;org-name&gt;/.github-private/agents/*.agent.md</strong> instead.</p><blockquote><em>🦄 I haven’t tested enterprise yet. I was already impatient with the whole approval process before this rose to a FOMO event! 
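</em></blockquote><p>If you want to script that relocation, a throwaway loop is enough. This is just a sketch using the paths above; it assumes the files are tracked in git so the moves keep their history.</p>

```shell
# Move every chat mode to its new agent home, preserving git history.
mkdir -p .github/agents
for f in .github/chatmodes/*.chatmode.md; do
  base=$(basename "$f" .chatmode.md)
  git mv "$f" ".github/agents/${base}.agent.md"
done
```

<blockquote><em>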
</em>Naturally<em>, I assigned “urgent” work to random </em>somewhat adjacent<em> roles (after I stretched) with a shocking level of confidence in this process I was defining as I went. Spoiler-I’m still waiting! ⏳😅</em></blockquote><h3>2. Agents Dashboard 🧭</h3><p>Once you catch up on the rename, your favorite sparkly personalities are now fully compatible with Copilot coding agent. They live in the new <strong>Agents panel</strong>.</p><p>You’ll also see an <strong>Agents tab</strong> that lets you steer Copilot’s coding agent mid-workflow-without killing your current run. Every send still costs a premium request, so use them wisely!</p><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> Markdown directives didn’t change much, but configuration did. Check the </em><a href="https://docs.github.com/en/copilot/reference/custom-agents-configuration"><em>custom-agent docs</em></a><em> for more info.</em></blockquote><h3>3. For the CLI Folks 🖥️</h3><p>The new Copilot CLI is <em>legit</em>. Custom agents now extend to the CLI too (see the <a href="https://docs.github.com/en/copilot/how-tos/use-copilot-agents/use-copilot-cli#use-custom-agents">latest CLI docs</a>), which I haven’t tested nearly as much as I’d like. Once the agent is shared with either your repo or org, use the /agent command or the --agent=&lt;name&gt; flag. There&#39;s even mention of a local ~/.copilot/agents directory for global use-needs testing, but it&#39;s promising.</p><p>GitHub also slipped in <em>enhanced model selection, image support, and a streamlined UI</em> in the October set of updates, making the CLI feel far more polished.</p><blockquote><em>🦄 I’m still not caught up with the new CLI, so you guys are going to have to help me out with the functional half of this one! So go test it out and then come tell me everything I’m missing out on!</em></blockquote><h3>4.
Coding agent reaches further ✨</h3><p>Copilot coding agent can now work on any open pull request-not just the one it created-so no more stacked-branch dance. Use the same @copilot mention you&#39;d use on a Copilot PR, watch the little eyes 👀 pop up, and let coding agent get to work.</p><p>It also works through Slack (assuming your Slack permissions actually line up) and through the new Copilot CLI. I would be much more excited about this development if work and GitHub weren’t in a stalemate over one permission that blocks access entirely. 🙁</p><blockquote><em>🦄 I do have a plan for the Slack issue… </em>sort of.<em> Something must be done about this disservice to the Slack population! Until then, the receipts are here: </em><a href="https://github.blog/changelog/2025-10-28-ask-copilot-coding-agent-to-make-changes-in-any-pull-request-with-copilot/"><em>coding agent in any PR</em></a><em> and </em><a href="https://github.blog/changelog/2025-10-28-work-with-copilot-coding-agent-in-slack"><em>coding agent in Slack</em></a><em>. ✨</em></blockquote><h3>5. Smarter Copilot Code Reviews 🧾</h3><p>I’ve always loved <a href="https://github.blog/changelog/2025-10-28-new-public-preview-features-in-copilot-code-review-ai-reviews-that-see-the-full-picture">Copilot’s code reviews</a>! It was a game changer when Copilot was able to pull instructions automatically from any repo. This latest change is just as impressive-now your CodeQL and ESLint integrations can be checked automatically whenever Copilot performs a review. There are rumors of more linters on the way, soon!</p><p>One of my current favorite features is these handy little notifications that pop up any time a linter error occurs in the most recent build. If you’re not seeing them yet, you might need the preview feature enabled. Follow instructions in <a href="https://dev.to/anchildress1/magical-coding-agent-the-ship-ready-spellbook-2mbf#firstenable-the-cool-stuff">this previous post</a>.
😇</p><blockquote><em>🦄 When someone at work asked me why my PR was littered with lint warnings, my immediate response was, “</em>I know-isn’t it great!<em> 😁” Then I had to explain the divergent thought process that I completely failed to recognize through my initial excitement.</em></blockquote><blockquote><em>I did get around to it, eventually: </em>“It’s only great because these highlight the existing issues that would have been handled had I been aware of their existence. <strong>And</strong> I’ve just made it nearly impossible for any more to sneak in without notice!”<em> Feel free to inject all the fervor you’d expect from a 7-year-old after a full bag of Skittles for maximum immersive effect. 🫣</em></blockquote><h3>6. New Embeddings = Smarter Copilot 🧠</h3><p>This is the quiet one of the group, but seriously impressive nonetheless! The update is huge-embeddings now drive faster, more accurate code retrieval for Copilot. <strong>GitHub’s Sept 2025 update reports +110%/+113% acceptance for C#/Java in VS Code, with ~37.6% retrieval gains.</strong> Most people won’t even notice-they’ll just feel Copilot getting <em>smarter</em>.</p><p>Of course, almost all of my Java friends are still IntelliJ enthusiasts despite my persistence, so they’re missing out. And who knows what my C# friends are up to-mostly hiding, I think! 🤔</p><blockquote><em>🦄 Here’s the </em><a href="https://github.blog/news-insights/product-news/copilot-new-embedding-model-vs-code/"><em>GitHub Metrics</em></a><em> if you love benchmarks. Worth a quick read, for sure!</em></blockquote><h3>7. Roster Rotation Changes the Lineup 🎭</h3><p><strong>Quick reality check:</strong> model lineups shift fast. Treat anything labeled preview or legacy as volatile and pin versions until you’ve verified replacements behave the same.</p><p>GitHub is really pushing the newest versions, which I’m 100% on board with-except for one <em>tiny-ish-not-really</em> complaint.
🤏 <em>What in the world are we supposed to use instead of the o-series models?</em> I’ve seen the suggestion, which, as of this week, is to drop in GPT-5 as the go-to replacement.</p><p><em>Seriously</em>? Surely somebody thought it through more than that! I, no doubt, would have started a fresh debate with myself for making that utterly ridiculous suggestion! 😑</p><p>I’m not saying GPT-5 can’t handle the job-it <em>probably</em> does fine after some solid instructions and guardrails are set up-but it’s not the replacement data magician that the <strong><em>now-entirely-unsupported</em> o-series</strong> mastered. <em>Not even a little close!</em></p><blockquote><em>🦄 Honestly, I could get behind Gemini 2.5 Pro before GPT-5 on this one. For the very small-scoped runs, GPT-5-mini does top the freebie list. I guess we’ll just have to see how this goes!</em></blockquote><p>Other notable retirees include everyone on team Anthropic &lt; v4, including Opus and Sonnet Thinking. Granted, Claude 3.5 has a <em>tiny</em> bit of life left still, but cake and cards are scheduled to be delivered tomorrow for the goodbye party-November 6, 2025.</p><blockquote><em>🦄 For the record, if anyone asks me about the grander playing field of all Copilot models that we got in return? I’ve got very few complaints overall. I’m not going to be happy about it until I see a realistic replacement for my data magician, though. 😒 Get your latest model news from </em><a href="https://docs.github.com/en/copilot/reference/ai-models/supported-models"><em>the GitHub docs</em></a><em>.</em></blockquote><h3>8. GitHub Spark 🔥</h3><p>Spark is still limited to enterprise users plus a lucky few from the waiting list. You should consider this a special agentic “Bob the Builder” that’s designed to output a very specific full-stack system: React + TypeScript + Cosmos DB + Azure deploys.
I’ve yet to see anything impressive resembling a backend-but it’s entirely possible I gave up before it had a chance.</p><p><strong>Spark is not Copilot</strong>-if you try to ask it a question or if your prompt looks like a conversation, then you’ll pay <strong>4 premium requests</strong> for crickets (at best). 🦗</p><p>You can open the Spark app in a codespace with Copilot (or so it says on <a href="https://docs.github.com/en/copilot/concepts/spark#develop-your-spark-with-copilot">this docs page</a>). The two very independent systems are supposed to sync automatically, which is like two siblings arguing over whose turn it is to play with the new friend in town, if you ask me! But at least it’s functional chaos. 👯</p><blockquote><em>🦄 I’m not even gonna pretend to feel guilty about getting early Spark access. 😆 As far as I’m concerned-between GitHub and work (it’s mostly a toss-up)-I deserved that access long before I realized I already had it.</em></blockquote><h3>9. Agentic Workflows via GitHub Next ⚙️</h3><p>Natural-language GitHub Actions Agentic Workflows got a short spotlight at Universe. Write a YAML-ish markdown file, run gh aw add ..., and it becomes a new workflow. I&#39;m still testing the scalability and reuse story.</p><p>I’m not sure yet what this does that coding agent + CLI can’t, but I plan to find out. They have several examples, but the <a href="https://github.com/githubnext/agentics/blob/main/docs/update-docs.md">“Regular Documentation Update”</a> workflow stands out-because if there’s a <em>non-regular</em> one, I have questions!</p><blockquote><em>🦄 Seriously, if my next “ </em>brilliant<em> “ idea shows up in GitHub’s release notes before I can even investigate the theory, I’m filing an official complaint for telepathic violation. 🧠💥📨</em></blockquote><h3>10. 
Not Nearly Everything 🔍</h3><p>There’s plenty I left out on purpose: quiet CLI refinements, the auto-model picker (based on <em>availability</em>), smarter branch/PR optimizations, and Copilot Spaces with <strong>increased size and repo limits</strong>-which is starting to look like GitHub positioning them to replace enterprise knowledge bases (that would be a good call!) 🧐</p><p>Then there’s the quietly rolled-out <strong>enhanced metrics reporting</strong> via the <a href="https://github.blog/changelog/2025-11-03-manage-budgets-and-track-usage-with-new-billing-api-updates/">new billing API updates</a>. Teams can finally <em>see</em> their usage-numbers, budgets, and who’s burning through premium requests fastest. Transparency: gift or curse? Guess we’ll know who’s winning the Premium Request Usage Leaderboard when stats drop at the end of the month.</p><blockquote><em>🙋‍♀️ Um… me. The answer is </em>definitely<em> me!</em></blockquote><h3>Friendly PSA for the GitHub MCP ‼️</h3><p>The <strong>GitHub MCP server also got a major boost</strong> with the latest updates, including multi-tool definitions and enhanced defaults for Copilot. Very cool, until Copilot decided to be extra “<em>helpful</em>”.</p><ul><li>My first clue something was off: my name alerting on a merged PR that wasn’t mine. 😲</li><li>Second: I definitely didn’t click merge! 🤨</li><li>Then the <strong>prod deploy</strong> pipeline started humming happily, but nothing was approved. 😱</li><li>Panic ensues while Copilot joyously generates release notes for the occasion. 😡</li><li>Don’t worry-everything was fine. I stopped the catastrophe and was able to restore to an equivalent state. 😅</li></ul><p>As soon as I recovered, I pulled all the tools from the official GitHub MCP. And <strong>it could have been <em>much, much worse</em>!</strong> Like “DROP REPO” kinds of <em>“worse”</em>! Can you imagine how very bad that sort of day would be?
Nope-I don’t want to either!</p><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> If you’re using GitHub’s MCP (especially if you’re admin anywhere), stop and review which tools are enabled by default. Trust me. You do </em>not<em> want to learn what happens following an “accidental DROP REPO” command. 😵</em></blockquote><h3>🛡️ Written by a human with a mild espresso addiction</h3><p>Fueled by caffeine, late-night release notes, and questionable curiosity. ChatGPT heckled, spell-checked, and occasionally offered existential advice.</p><p><em>Originally published at </em><a href="https://dev.to/anchildress1/top-10-github-copilot-updates-you-actually-need-to-know-about-297d"><em>https://dev.to</em></a><em> on November 5, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a8555544a875" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Copilot Premium Requests]]></title>
            <link>https://medium.com/@anchildress1/copilot-premium-requests-2190a0726ed3?source=rss-a80f485a6b2f------2</link>
            <guid isPermaLink="false">https://medium.com/p/2190a0726ed3</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[github-copilot]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[developer-productivity]]></category>
            <category><![CDATA[devlife]]></category>
            <dc:creator><![CDATA[Ashley Childress]]></dc:creator>
            <pubDate>Wed, 22 Oct 2025 12:48:27 GMT</pubDate>
            <atom:updated>2025-10-26T10:06:00.981Z</atom:updated>
            <content:encoded><![CDATA[<h4>More Than Asked, Exactly What You Need 💸</h4><p><a href="https://dev.to/anchildress1/copilot-premium-requests-more-than-asked-exactly-what-you-need-8ph"><em>Originally posted on Dev.to</em></a><em> on October 22, 2025.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/950/0*NDaAi837gJhKjglp" /><figcaption>This image was generated with Leonardo.ai using a prompt created by ChatGPT.</figcaption></figure><blockquote><em>🦄 Any time I make a plan—like last week’s noble intention to finish part two of my slightly theoretical, totally manual </em><a href="https://dev.to/anchildress1/did-ai-erase-attribution-your-git-history-is-missing-a-co-author-1m2l"><em>AI attribution solution</em></a><em> —the universe just laughs. I’ll finish that one soon, I swear. But October’s almost over somehow, and I’m just as confused about that as you are! 🎃</em></blockquote><blockquote><em>Anyway—this unscheduled detour has a good reason. 🌊 The flood of questions about Copilot’s </em>premium request limits<em> is back, right on schedule. If you added up the messages from every random channel I watch, you could set an atomic clock by this monthly “why am I out of requests?” panic. The closer we get to the first of the month, the faster the confusion multiplies.</em></blockquote><blockquote><em>These limits are constantly misunderstood, misquoted, or just plain outdated. Honestly, that’s not really surprising-GitHub changes billing often and rarely broadcasts it beyond the </em><a href="https://github.blog/changelog/?label=copilot&amp;opened-months=10"><em>official changelog</em></a><em>.
There’s plenty of folklore about how to stretch your monthly allotment too-some of it is even good advice!—but dependable output is another story entirely.</em></blockquote><blockquote><em>So, consider this your wallet-friendly survival guide for that final tight stretch before usage limits reset at midnight UTC on the first of each month—a few truths about GitHub Copilot’s premium requests, plus the workflow tweaks I rely on daily across both Pro and Enterprise.</em></blockquote><blockquote><em>🎭 </em><strong><em>Fair warning:</em></strong><em> In case you’re new to my ramblings, I’m an easily amused dev with a </em>touch<em> of dramatic flair (understatement?). Let’s see if I can make the boring-but-necessary Copilot billing rules entertaining enough to survive this post—</em>and<em> maybe save you a few premium requests along the way.</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/1*MInQTWUnWbnWDCZ2riWv9Q.png" /><figcaption>ChatGPT collaborated on this post; Image by Leonardo.ai</figcaption></figure><h3>AI Meltdown Coming Soon 🔥</h3><p>Most of us are at least somewhat familiar with the outrageous amount of resources used to power AI at scale—not today’s debate, but it’s not a small thing! Most of the major AI players, GitHub included, had to find a way to somehow impose fair usage limits on a very finite pool of resources and an exponentially multiplying customer base.</p><p>For those of you actively using Copilot before June 2025—congrats! You were one of the last to experience unrestricted prompting, infinite turns to perfect every implementation, and the thrill of running experimental prompts without much thought to the “invisible” cost of execution. GitHub was balancing that behind the scenes. Those days are officially over-at least, they are for the current hardware.
I’ve heard some people refer to this unexpected complication as “<em>physics</em>”, but that whole concept seems unnecessarily complicated, if you ask me!</p><p>That unconstrained, largely unlimited free-for-all of an AI system was always destined to collapse under its own weight without a reliable way to manage the hardware (among other things).</p><blockquote><em>🦄 Take a second and really think about that problem. Can you think of a single solution that you could squish into the definition of </em>“simple”<em> at that point in time? You’re actively draining the ocean just to keep up with the constant threat of spontaneously combusting machines in a sealed back room.</em></blockquote><blockquote><em>😅 </em>Okay, fine!<em> In reality, presumably well-tested alerts would kick in, which would handle the situation gracefully—likely with throttling or outright shutdowns of some kind. Meaning temps would stay well below the point of combustion long before anything shoots sparks or melts off the shelves. But you’ve gotta admit, my version is far more entertaining!</em></blockquote><h3>Introducing Premium Requests 💳</h3><p>When the concept of premium requests was first introduced, it was nothing more than a seemingly arbitrary and proprietary—read, hidden—calculation describing a unit of AI usage that’s in a serious, long-term relationship with your monthly bill. Otherwise, it was a complete mystery to everyone. I mean, even GitHub had a hard time trying to explain what was happening!</p><p>For the record, most IDE integrations now have a built-in monitoring system in the form of a tiny Copilot button in your status bar. Don’t expect any real metrics from this view; for that, you’d have to check out your <a href="https://github.com/settings/billing/premium_requests_usage">GitHub settings for billing</a> (unless you’re under a larger organization or enterprise, then that view is usually routed directly to admins instead).
For a quick view, though, this version is very convenient.</p><p>Not everyone was aware of the announcement GitHub made sharing their plan to start enforcing premium request limits. But these premium requests had been around for a while by then—they just lacked meaning from the user’s perspective. After a few false starts, GitHub officially began enforcing this seemingly arbitrary calculation on June 18, 2025. So, <em>I knew</em> the death of free unrestricted access was approaching—<em>fast</em>. Right up until that date, I used Copilot literally as much as possible.</p><p>Starting “enforcement day”, I had to scale that usage way back. Well… I <em>thought</em> I had toned it down to a reasonable amount. Guess how long <em>that</em> lasted?</p><blockquote><em>🦄 </em><strong><em>Exactly </em>four days<em>.</em></strong><em> 😑 It was approximately 7:30 PM on a Saturday and now there’s no more Copilot? Obviously, I did the logical thing and tried to pull the fire alarm like it was a critical production incident! Honestly? I considered this particular situation a crisis of equal proportions. Nobody else seemed to agree with me on that point, but work did eventually fix it for me. 🫶</em></blockquote><h3>Premium Requests Explained 📊</h3><p>Lucky for us, GitHub has made several improvements to the overall system since the original mystery calculation took effect back in June. There’s still a little math involved in this setup, but I’ll simplify the entire system for you:</p><p>(Number of Prompts Sent) × (Model&#39;s Multiplier) = Premium Requests Deducted</p><ol><li><strong>You burn one request every time you click <em>send</em>.</strong> It doesn’t matter if you’re in the IDE chat, on GitHub.com, if you opened a PR that auto-triggered a Copilot review, sent Coding Agent off to handle something on its own, or used the CLI instead.
One prompt almost always equals one premium request.</li><li><strong>The request is also multiplied by your model’s multiplier.</strong> Some models cost less and others cost more. Besides, not all models are great at <em>everything</em> anyway (not even Claude!)</li></ol><p>There are a couple of exceptions to this standard, but the rules are subject to change at any time and without warning. Especially for preview features. If you don’t have an active line to <a href="https://github.blog/changelog/?label=copilot&amp;opened-months=10">GitHub’s Changelog</a> in some form, now’s the perfect time to fix that problem! As of today (meaning the <em>Originally published</em> date at the bottom), exceptions include:</p><ol><li><a href="https://docs.github.com/en/copilot/concepts/auto-model-selection"><strong>Auto model selection</strong></a> is billed at 90% cost. It’s a new feature designed to reduce <em>rate limits</em> by automatically selecting the <em>most available</em> model—note that this is not the same thing as the most <em>appropriate model</em>! However, for small scale, non-critical tasks it’s a great way to rack up easy savings.</li><li><a href="https://docs.github.com/en/copilot/concepts/spark"><strong>GitHub Spark</strong></a> is billed at 4x cost for every single prompt. Yes—Spark is fantastic! You’re paying for that with every prompt you send, too!</li></ol><blockquote><em>Also </em><strong><em>quick PSA</em></strong><em>, just in case: Spark is </em>not a chat bot<em> —don’t waste your prompts expecting Copilot-styled answers. You prompt, it codes-period.</em></blockquote><blockquote><em>🦄 </em><a href="https://github.com/spark"><em>GitHub Spark</em></a><em> is still pretty new and is currently available if you have an Enterprise or Pro+ subscription. 
I’m not sure how long that will last, but historically access has been expanded to include the Business tier next, followed by Pro, and finally the Free tier.</em></blockquote><h3>No Two Models Are Alike 🧪</h3><p>You can consider these guys thoroughly tested and mildly abused: here’s the model lineup I actually use—what works, what breaks, and when it’s worth the cost.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c1d434ae49352dbe3498936d39100ee4/href">https://medium.com/media/c1d434ae49352dbe3498936d39100ee4/href</a></iframe><blockquote><em>🦄 For the record, there’s quite a few models missing from this list. GitHub keeps pushing more before I can figure out the current one! I’m on it, but these things take time…</em></blockquote><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> Model availability is based on license tier, environment, and chat mode. Always check </em><a href="https://docs.github.com/en/enterprise-cloud@latest/copilot/reference/ai-models/supported-models"><em>GitHub Docs</em></a><em> for what’s actually usable.</em></blockquote><h3>Story Savings 💾</h3><h3>Don’t Skimp On Planning 🧠</h3><p>The number one reason I see devs overspend is a set-and-forget approach to model selection. Most likely, Claude’s running the show, chewing through requests like an over-caffeinated showman while the rest of your team wonders how long you’ll let the headliner steal the spotlight.</p><blockquote><em>🦄 </em>Yes—I like Claude too!<em> But it gets expensive when it’s left on stage 24/7. Let it plan ahead, document to its heart’s content in exactly one temp file, and then exit stage right.</em></blockquote><p>I usually let Claude-4.5 run point on planning—but not <em>always</em>. GPT-5 or Gemini 2.5 Pro can both produce solid implementation plans, sometimes closer to the real goal anyway. 
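</p><p>Since the whole theme here is not overspending, it can help to make the arithmetic concrete. Here’s a rough back-of-the-envelope sketch using the prompts-times-multiplier formula from above; the model names and multipliers are made-up placeholders, not official values, so pull the real numbers from GitHub’s supported-models docs before budgeting anything:</p>

```python
# Back-of-the-envelope premium-request math:
#   (number of prompts sent) x (model's multiplier) = premium requests deducted
# The multipliers below are ILLUSTRATIVE ASSUMPTIONS, not GitHub's real values.
ILLUSTRATIVE_MULTIPLIERS = {
    "frontier-model": 1.0,   # assumption: a typical 1x premium model
    "budget-model": 0.25,    # assumption: a discounted model
    "included-model": 0.0,   # assumption: a model included at no extra cost
}

def premium_requests(prompts_sent: int, multiplier: float) -> float:
    """One premium request per send, scaled by the model's multiplier."""
    return prompts_sent * multiplier

# 40 sends a day for 20 working days, set-and-forget on a 1x model:
print(premium_requests(40 * 20, ILLUSTRATIVE_MULTIPLIERS["frontier-model"]))  # 800.0
# The same month routed to a hypothetical 0.25x model instead:
print(premium_requests(40 * 20, ILLUSTRATIVE_MULTIPLIERS["budget-model"]))    # 200.0
```

<p>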
Experiment every so often-you might find a new favorite opening act.</p><p>I shared this same prompt last week, but it’s still the perfect example of how I work. You could even adapt it into <a href="https://dev.to/anchildress1/github-copilot-everything-you-wanted-to-know-about-reusable-and-experimental-prompts-part-1-iff">your own reusable prompt</a>. I probably would have done that already had I not gotten sick of rewriting a new one every time a new model debuts!</p><pre># ─────────────── CONTEXT ─────────────── <br>- Using #atlassian/atlassian-mcp-server, pull info for JIRA-123, including any linked documentation in Confluence. <br>- Gather info to assess changes required in this #codebase. <br><br># ─────────────── TASK BREAKDOWN ─────────────── <br>- DO NOT MAKE CHANGES YET. <br>- Break this story into concise iterative pieces that include testing at every step. <br><br># ─────────────── OUTPUT STRUCTURE ─────────────── <br>- Document all iterative steps required to meet all acceptance criteria as an ordered list of individual steps with an accompanying unordered checklist. <br>- Each numbered step should be clear enough that any AI agent can be prompted one step at a time to complete and fully test with both integration and unit tests, whenever applicable. <br><br># ─────────────── SCOPE GUARDRAIL ─────────────── <br>- DO NOT break down tasks unnecessarily-the goal is for each step to be both meaningful and fully testable. <br><br># ─────────────── COMPLETION CRITERIA ─────────────── <br>- When all items are marked complete, acceptance criteria for this story should be met and all happy, sad, and edge-case paths accounted for. <br><br># ─────────────── ADMIN NOTES ─────────────── <br>- Include documentation updates and any relevant deployment tasks. <br>- Save this concise story breakdown in a new file named `./progress.tmp`.</pre><p>I can’t stress enough how important Human-in-the-Loop (HITL) review is here. 
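</p><p>For reference, the saved ./progress.tmp usually ends up looking something like this-every step and checkbox below is invented purely for illustration:</p>

```markdown
# JIRA-123: Story Breakdown (hypothetical example)

1. Scaffold the new endpoint
   - [ ] Define request/response types
   - [ ] Wire up routing
   - [ ] Unit tests for validation (happy + sad paths)
2. Implement the core business logic
   - [ ] Happy path
   - [ ] Edge cases and error handling
   - [ ] Integration test against the stubbed dependency
3. Wrap-up
   - [ ] Update docs
   - [ ] Add deployment notes
```

<p>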
This output becomes your map for Copilot from now until completion. There’s rarely reason to waste premium requests iterating on accuracy here; you’ll fix more by reading through and making quick corrections yourself.</p><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> Add a short instruction reminding Copilot not to touch this file without asking first. It’s not bulletproof, but it will help prevent random and unexpected map rewrites mid-journey.</em></blockquote><h3>Aside for Spec Kit 🧰</h3><p>I’ll sometimes use <a href="https://github.github.com/spec-kit/">Spec Kit</a> for planning. It’s excellent at writing ultra-detailed requirements, though the “you get what you pay for” rule applies. A detailed spec usually costs at minimum five premium requests—worth it for complex work, but overkill for the small stuff.</p><p>If I’m dealing with serious complexity, Spec Kit is a must-have. For quick stories, you’ll spend more defining the spec than just prompting Copilot to code it in one shot.</p><blockquote><em>🦄 If you haven’t tried </em><a href="https://github.github.com/spec-kit/"><em>Spec Kit</em></a><em> yet, it’s worth a spin. Maybe their flavor clicks with you and the cost becomes worth it-in which case, great! Go with it!</em></blockquote><h3>Feature Plan to Code 🚀</h3><p>Once I’m confident the steps Copilot wrote to ./progress.tmp are airtight, it&#39;s time to tidy up a bit and swap to a free model. Close every open file, run /clear in chat, and double-check that only the tools for Step 1 are active. The smaller you can make your context window, the higher the chances of accurate results without lengthy iterations designed to drive you mad.</p><p>My usual picks here are Grok or GPT-5-mini—despite mini’s flair for chaos, both deliver solid implementations when given the right step.
That said, choose by scenario:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b7aada960870586852223eaedb627fd6/href">https://medium.com/media/b7aada960870586852223eaedb627fd6/href</a></iframe><p>This list doesn’t cover every case—it just reflects the scenarios I see most often. And yes, I’ve been (accurately) accused of vanishing whenever UI work appears; my status with frontend dev remains set to “it’s complicated.”</p><p>The rule still stands: <strong>pick the cheapest model that can actually finish the job.</strong> Then iterate one step at a time, pausing for review between turns.</p><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> Keep your context clean. Commit often, close open files, reset chats, and start every new step like a brand-new session. You’ll be amazed how much saner Copilot sounds when its context doesn’t suggest a starring role in the latest episode of </em>Hoarders<em>!</em></blockquote><h3>Ask More With Less 🤹‍♀️</h3><p>If you’ve been working with AI for a bit already, then this will likely seem over-simplified—which is fair. For everyone else, I’m going to give you my version of Chain of Thought (CoT) prompting, which we’re just going to hope contains enough technical accuracy that I don’t end up arguing semantics later. 🤞</p><p>I really can’t explain why CoT always seems plagued by some overly verbose, unnecessarily complicated, and often long-winded overlord of technical rambling. I’m the last to discourage you from exploring anything you want, but the technical aspects of this whole setup honestly bore me to no end. 
Besides, it’s truly unnecessary—you’re most likely already using this concept daily—whether you realize it or not.</p><p>My exaggeratedly simple CoT example:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/724/1*Uni5c5ZpnTZ8pJO91zT48g.png" /><figcaption>My oversimplified view of CoT</figcaption></figure><p>Or with words, if you prefer:</p><pre>CoT is nothing more than step-by-step directions. <br>Sometimes, it looks like the prompt example above. <br>But that&#39;s not a requirement of any kind. <br>You start at the beginning. <br>Explain the first logical step. <br>Then move to the next. <br>Repeat, as needed. <br>Keep clear separation between each point. <br>Stay disciplined about using a consistent structure. <br>Continue until you&#39;re finished. <br>But you can abandon ship at any point—before anything gets too complicated.</pre><blockquote><em>🦄 If you happen to find someone in charge of this CoT concept, tell them to please stop manually adjusting the minimum distance requirements between me and toast!</em></blockquote><p>In practice, I use this style of prompting more often than any other recommended pattern. As soon as I get a response back from my Implement step N defined in #progress.tmp prompt, it&#39;s time for a mini code review. No formalities required—seriously, the chat can handle it—no PR needed.</p><p>I immediately click to “keep” all changes, because Git is my true north for everything. VS Code lets you stage a single character at a time or dump everything in there all at once. Neither extreme is very realistic in practice, but you can be as picky as you want when accepting changes.</p><p>So, review every change starting with anything that evokes a “where’s your proof?” sort of reaction. Continue adding feedback using clearly separated points, staging acceptable changes, and using context markers via #selection all the way up to anything resembling, &quot;Nope! That&#39;s definitely not right! 
Why are you still doing this wrong?!&quot;</p><blockquote><em>⚠️ </em><strong><em>Beware:</em></strong><em> Any reaction you might have beyond that last one is guaranteed to exponentially increase your chances of an involuntary ALL-CAPS situation. </em>Trust me<em> —it’s not worth it! And there’s no good way to explain that feeling after suddenly realizing you’ve just spent an embarrassingly long time losing a lively argument with hardware. 🫠</em></blockquote><h3>It’s Really Not That Strict 🌙</h3><p>Hopefully you’ll adapt some of this to stretch your premium request limit without sacrificing quality or sanity along the way. You don’t need to copy my setup—use whatever you can that works, and toss what doesn’t.</p><p>If you’ve discovered your own tricks, share them with the class! Maybe you’ve already solved a pain point that someone else is still swearing at. We’re all just devs here, trying to make it through the sprint without maxing out the meter.</p><h3>🛡️ End of Training Loop</h3><p>ChatGPT handled the grammar; I tracked spending. Both of us ran out of energy (and <em>sanity</em>) at the same time—but it looks good anyway. ☕⚡</p><p><em>Originally published at </em><a href="https://dev.to/anchildress1/copilot-premium-requests-more-than-asked-exactly-what-you-need-8ph"><em>https://dev.to</em></a><em> on October 22, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2190a0726ed3" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Did AI Erase Attribution?]]></title>
            <link>https://medium.com/@anchildress1/did-ai-erase-attribution-d1c38825e014?source=rss-a80f485a6b2f------2</link>
            <guid isPermaLink="false">https://medium.com/p/d1c38825e014</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[git]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[software-engineering]]></category>
            <dc:creator><![CDATA[Ashley Childress]]></dc:creator>
            <pubDate>Wed, 15 Oct 2025 13:30:22 GMT</pubDate>
            <atom:updated>2025-10-15T13:56:49.268Z</atom:updated>
            <content:encoded><![CDATA[<h4>Your Git History Is Missing a Co-Author</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/950/0*Kv1Y1gGjpW9dJMD1" /><figcaption>Generated with <a href="http://leonardo.ai/">Leonardo.ai</a></figcaption></figure><blockquote><em>🦄 I’ve been trying to get to this post for a while and I was likely working on it when I </em>definitely<em> should have been doing something else entirely. I wrote about this topic briefly in a previous post, but that brief aside does the whole concept a disservice.</em></blockquote><blockquote><em>I’ve been begging anyone who will listen to </em>please<em> steal this idea from me, 🙏 use it in real projects, personal projects, and then give it to a friend like a party favor nobody asked for. So far, feedback is positive and change is very slow. This is my attempt to nudge it along while I figure out the next piece of the puzzle-which is language-agnostic enforcement at scale (yes, I know- </em>that<em> sounds simple, right?).</em></blockquote><blockquote><em>So while I’ve been patiently waiting on Father Time to drop off some 36-hour days from the cosmos, I managed to throw together the first little helper in a much larger puzzle: </em><strong><em>self-reporting Responsible AI (RAI) statistics within the existing confines of the Software Development Life Cycle (SDLC).</em></strong></blockquote><blockquote><em>Also (because it’s me), this is the dramatic story version of this entire concept from the very first time I tried to define AI attribution. I </em>really tried<em> to fit the whole concept into one post instead of spreading it out over a series- </em><strong><em>spoiler: I failed.</em></strong><em> 🛋️🍿</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/1*MInQTWUnWbnWDCZ2riWv9Q.png" /><figcaption>I wrote this post. AI helped. 
Generated with <a href="http://leonardo.ai/">Leonardo.ai</a></figcaption></figure><h3>Where Things Started 🧨</h3><p>When I first started this “experiment” with Copilot, I set out to prove that AI was capable of self-directed implementation, at least for very specific and tightly controlled scenarios. I began by writing a small ecosystem of customizations that would not only track the overall effectiveness of this “new AI thing,” but also standardize it in a way that essentially mimicked my own workflow. What ended up being my first working system of reporting was effective, but also overly verbose and only traceable if you take every word I documented as gospel.</p><p>To be fair, I was very much learning how to work with these LLMs with any degree of accuracy at all. There were no tutorials. No how-to videos. Not even a boilerplate to get me started. Just me, Copilot Agent Mode, the official GitHub docs, and <strong>a stubborn determination that this project was absolutely happening.</strong> The only “proof” I had that my initial plan was even plausible was my completely baseless gut instinct that AI was far more capable than anything I saw anybody using it for at the time.</p><p>I started this nights-and-weekends project sometime late in April 2025 with ChatGPT as acting project manager, and by mid-May I had the initial architecture designed, the first epic storyboarded, and I had somehow managed to wrangle Copilot into generating enough code for the first official commit. GPT-4.1 was still a good month away from its preview release. That puts my default baseline somewhere between Claude 3.7 and GPT-4.0.</p><p>For anyone still decoding the quirks between LLM models, this was like revving a rusty 20-year-old Corolla and expecting an 8-speed Corvette. 
Not a great outlook for success, really, but <strong>I did not care—the target was an enterprise-grade, AI-generated POC</strong> that could be implemented piecewise without any form of human intervention (outside of the review cycles). As far as I was concerned, <em>that’s exactly what was going to happen</em>, regardless of whether the tech was ready for it or not.</p><blockquote><em>🦄 Yes—</em>ambitious<em> might have been a tiny understatement. 🤏 At the time though, this was my all-in project. Remember that I don’t do partial amounts of anything? This version of all-in was </em><strong><em>a whole other level</em></strong><em> that I’ve filed under </em>“intense determination”<em> (but obsession is also accurate).</em></blockquote><blockquote><em>Also, no—it did not go smoothly </em>at all<em>, especially at first! Curious? Check out one of my favorite stories describing </em><a href="https://dev.to/anchildress1/github-copilot-agent-mode-the-mistake-you-never-want-to-make-1mmh"><em>what you should never do after handing AI the keys</em></a><em>.</em></blockquote><p>Ultimately Copilot and I settled into a solid soccer-van approach and that worked well for a while. After a month or so, I’d perfected several sets of specialized instructions and custom prompts that would allow me to use a slash command and a Jira ID to direct every single thing Copilot would do. I’d sit back and enjoy popcorn (and occasional fireworks) while stories were implemented in record time and, for the most part, in the exact same way I would have done it myself.</p><p>Self-correcting reviews were baked in from several different perspectives—as were both tech and user documentation. Basic unit and integration tests were enforced with a 90% coverage goal. Short of a couple markdown files I used for personal notes and exactly four well-documented blocks of code (that were later replaced), that entire project is <strong>100% AI-generated</strong>. 
It’s also <strong>a fully tested, production-grade, mostly-secure enterprise POC with attribution</strong> (albeit very ugly attribution) at every single step.</p><blockquote><em>🦄 This project is also </em>still<em> trapped behind work’s seemingly non-existent open-source program where it’s currently dying a slow, solitary death—I do have a plan, though! Once I address this non-existent time situation, this project is definitely on the list of planned revivals. Seriously—if enterprise work didn’t require at least six independently repetitive forms with accompanying blood draw for nearly everything, this would not be such an issue!</em></blockquote><h3>RAI As A Default ⚖️</h3><p>Responsible AI spans the whole chain—from model builders to end-users prompting outputs. I’ll try to explain my thinking and leave the soapbox alone, but let’s establish a baseline first:</p><ol><li><strong>Attribution is expected for production code.</strong> I could probably write a post just on attribution alone, but let’s assume the requirement for simplicity’s sake—not only for legal reasons but because two years from now I’m going to be trying to figure out <em>what sort of fever dream inspired this insanity</em>, and every tiny bit of information helps.</li><li><strong>Assume AI helps every dev, every day.</strong> Maybe not with code generation, but AI is lurking there somewhere. <em>Welcome to the future!</em> 🔮</li></ol><p>Ever since this AI craze started, the one thing I never hear about is traceability for AI assistance. I touched on this topic briefly a few weeks back in my <a href="https://dev.to/anchildress1/can-we-set-the-record-straight-ai-content-and-a-bit-of-sanity-1inj">AI, Content, and a Bit of Sanity</a> post. 
It really deserves its own special callout, though, because <strong>since when is it okay for us to collectively ignore attribution in production codebases?</strong></p><p>I don’t know if you’ve ever had a true pair-programming experience or not, but in that scenario it’s perfectly normal—if not absolutely required—for both devs to sign every single commit, even if one never touches a keyboard. So why does the assistant get erased just because it’s silicon?</p><p><em>Yes</em>—I’ve heard the arguments that AI is no better than a faster, closer Stack Overflow. To those devs, I simply reply <em>“that’s cause you’re using it wrong!”</em> and I’m happy to share everything I know about the <em>right</em> way to use AI. Don’t expect a simple solution though—it takes practice just like every other thing on the planet. 🌎</p><blockquote><em>🦄 This gap doesn’t have a single thing to do with blame or even productivity. It’s a </em><strong><em>process failure</em></strong><em> that previously would’ve been corrected with training, maybe with new tooling—but definitely corrected.</em></blockquote><blockquote><em>Right now, everyone seems perfectly content with the “maybe AI helped, maybe not, </em>let’s make it a mystery!” <em>approach. This approach leaves a constant trail of reflective glitter dots in my head where attribution should be living instead. 🪩😒</em></blockquote><h3>Imagining A Solution 💭</h3><p>If I haven’t lost you yet, we can probably agree we need to document AI assistance somewhere we can find it later. But what does that really look like? I had no clue, but became <em>“peripherally aware”</em> of every possibility from that point forward.</p><p>I suppose it’s worth calling out that I was literally the world’s <em>worst committer.</em> No joke—I’m sure I hold some kind of unbeatable world record when it comes to bad commits. 
That message was nothing more than an empty box that accepted random text—sentences were nonexistent and sometimes even a complete string of semi-coherent words was asking too much.</p><p>Honestly, it took me far too long to think up anything remotely legit to put in that box. That required a completely separate thought process that interrupted my whole train of code thoughts, and I was making fast progress (even if nobody could tell what that progress was supposed to be)! My name was there, though, and regardless of what randomness went into the text box (or not), <strong>that code change was permanently stamped as <em>mine</em>.</strong></p><p>When I first ran across this thing called <a href="https://www.conventionalcommits.org/en/v1.0.0/">Conventional Commits</a>, I immediately dismissed it as far too much effort for very little return. Then it would come up again, I’d take another look, and again dismiss the whole concept. Then on some random side quest (probably driven by inane curiosity), I read that I could automate the entire release process with conventional commits.</p><blockquote><em>🛑 </em><strong><em>Full Stop</em></strong><em>: You mean valid documentation? Automatically added to every GitHub release for me? Standardized? Customizable? Repeatable? And my very own built-in “go look over there 👉” shortcut to a whole suite of incessant questions?</em></blockquote><blockquote>GUYS! 
Why had I not heard of this sorcery before then?!<em> Seriously, somebody could have passed along that tidbit of information at literally any point in the last </em>five plus years!<em> 😫</em></blockquote><blockquote><em>For future reference, in case any pertinent info like this surfaces in the future: I can accept messages via comments below, LinkedIn, email, newspaper, postcard, Morse code, carrier pigeon, smoke signals, telepathy, or interstellar vinyl-there’s plenty of options available!</em></blockquote><p>Back to my point—now somebody was finally speaking a language I could comprehend! I could <em>absolutely</em> write commit messages if I was going to get automation out of the deal. Anything is worth me not having to manually copy-paste that information from one place to another or send an email of any kind.</p><p>What I’d dismissed several times already as completely unnecessary had instantly transformed into a personal magic wand. At this point, my entire view of commits made an abrupt 180º, and this new me is completely invested in conventional commits. I will be the best damn committer anybody has ever seen and release notes are going to <em>write themselves!</em></p><p>I researched every possible setting under the sun and started adding commitlint to all of my personal projects (work is slower, but I’m on it). I’d constantly play with one option or another just to see which sparks would fly when something was a little off. Then, out of nowhere really, it hit me- <strong>the answer had been staring right at me this whole time!</strong></p><pre>Co-authored-by: GitHub Copilot &lt;copilot@github.com&gt;</pre><blockquote><em>🦄 It’s so incredibly simple and </em>exactly<em> where this whole RAI attribution thing belongs. I mean, that’s exactly where attribution would be for a human pair, right? So, I see zero reason I can’t use it for my AI pair, too. And that’s exactly what I did! 
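</em></blockquote><p>Using the footer takes nothing special: a trailer is just plain text in the last paragraph of a commit message, and Git parses <code>Key: value</code> lines there natively. A minimal throwaway-repo sketch (the path and identity below are placeholders, not part of my setup):</p>

```shell
# Throwaway repo purely to demonstrate the trailer (safe to delete afterward)
mkdir -p /tmp/trailer-demo && cd /tmp/trailer-demo && git init -q
git config user.name "Ashley" && git config user.email "ashley@example.com"

echo "hello" > app.txt && git add app.txt

# The trailer is just the last paragraph of the commit message
git commit -q -m "feat: add greeting" \
  -m "Co-authored-by: GitHub Copilot <copilot@github.com>"

# Git parses Key: value trailers natively, so they stay machine-readable
git log -1 --format='%(trailers:key=Co-authored-by,valueonly)'
```

<p>GitHub reads this same <code>Co-authored-by</code> trailer when it lists co-authors on a commit, so the attribution shows up in the UI as well as in <code>git log</code>.</p><blockquote><em>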
GitHub Copilot became my constant co-author in nearly every commit I made from that point forward.</em></blockquote><h3>Evolution In Play 🌱</h3><p>A co-author tag isn’t enough-it’s only the start. What I really needed was a way to differentiate between code I wrote myself and what I prompted AI to write for me without a complicated system of “this line, that line” nonsense. So that’s been evolving slowly over the last few weeks, and I’ve finally landed on something that’s stabilized enough to share.</p><p>I started with a system that split attribution by thirds:</p><ul><li><strong>Assisted-by</strong> means I wrote the code and AI helped either through prompts or inline completions up to roughly 33% generated code.</li><li><strong>Co-authored-by</strong> is the 50/50-ish bucket ranging from 34-66% generated code.</li><li><strong>Generated-by</strong> means the majority of this code came from AI-roughly 67-100%.</li></ul><blockquote><em>🦄 Originally I tried using </em><em>with instead of </em><em>by-the friendlier industry term-but ultimately I stuck with </em><em>by for consistency with existing Conventional Commit footers.</em></blockquote><p>The next step was to stop guessing how much AI assisted and let AI figure out the math for me. So I created a reusable prompt for Copilot (and Verdent) to do that calculation on its own. I already had a much older prompt that was generating commit messages, so I rewrote that one for the newer models and added attribution as a requirement.</p><blockquote><em>‼️ </em><strong><em>Brief aside:</em></strong><em> I’m looking for testers to see how this prompt operates outside of my workflows. 
It does not touch your </em>actual<em> commits in any way-it adds a </em><em>./commit.tmp file that you can add to </em><em>.gitignore (I use </em><em>*.tmp and have a whole set of local tracking files that use this extension).</em></blockquote><blockquote><em>So </em>please<em> 🙏 go steal a copy from my </em><a href="https://github.com/anchildress1/awesome-github-copilot/blob/main/.github/prompts/generate-commit-message.prompt.md"><em>awesome-github-copilot</em></a><em> and report back any problems. If you’ve never set up a prompt before, you’ll need either VS Code or Visual Studio for a global setup. JetBrains, Eclipse, and Xcode can all use prompts stored in </em><em>.github/prompts/*.prompt.md. See my </em><a href="https://dev.to/anchildress1/github-copilot-everything-you-wanted-to-know-about-reusable-and-experimental-prompts-part-1-iff"><em>blog series on reusable prompts</em></a><em> for details.</em></blockquote><p>This “thirds” breakdown works great for just about everything I do, but there are times I write all the code myself and then use a quick /generate-commit-message command. Well, that needs one too-so a fourth attribution was added to the list:</p><ul><li><strong>Commit-generated-by</strong> means AI summarized a conventional commit message for me (or similar trivial contribution) but none of the code was AI-modified in any meaningful way.</li></ul><p>The catch: you only need one footer to make the point. So what happens if AI generates some portion of code <em>and</em> the commit message? I solved that quickly and turned the whole system into <strong>a majority-wins situation</strong>. Just <strong>pick whichever one represents the most AI</strong>. Still accurate enough to matter while not overcomplicating things. <em>Perfect!</em></p><p>There is one final latecomer for completeness, which really only becomes important in a future <em>“enforcement”</em> stage of the game. 
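</p><p>Mechanically, that thirds-plus-majority-wins scheme is just a lookup. A sketch of the mapping (the helper name and exact cutoffs are mine, matching the rough ranges described above):</p>

```shell
# Illustrative helper: choose the single attribution footer for a commit.
# Majority wins, so only the bucket representing "the most AI" is emitted.
attribution_footer() {
  pct=$1  # estimated share of AI-generated code, 0-100
  if [ "$pct" -le 0 ]; then
    echo "Commit-generated-by"   # AI wrote only the commit message, not the code
  elif [ "$pct" -le 33 ]; then
    echo "Assisted-by"           # I wrote it, AI helped (up to ~33%)
  elif [ "$pct" -le 66 ]; then
    echo "Co-authored-by"        # the 50/50-ish bucket (34-66%)
  else
    echo "Generated-by"          # mostly AI (67-100%)
  fi
}

attribution_footer 10   # -> Assisted-by
attribution_footer 90   # -> Generated-by
```

<p>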
You can’t create a rule that enforces at least one AI-attribution footer <em>and</em> make the absence of a footer equivalent to human-authored content. Obviously you can’t prompt AI for human content, so until I think of a better way this one is up to you:</p><ul><li><strong>Authored-by</strong> is the human author to whom all code should ultimately be attributed.</li></ul><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> Keep reading through this next part if you decide to grab the prompt. Understanding how it works is important if you want a reliable result out of it!</em></blockquote><h3>Under the Hood 🚗</h3><p>The prompt is just a start-not a miracle worker. The biggest problem is that Copilot doesn’t retain long-term memory; it only remembers what’s inside the active IDE session. Which means <strong>you have to keep things tidy enough that it can actually follow the instructions</strong>.</p><p>My current workflow-starting the moment any story moves to “in progress”-is pretty much set. I usually start with the Atlassian MCP to grab the story info plus any linked Confluence docs, the repo needing work is open in VS Code with nothing open in the editor, and a brand-new Copilot chat session ready to go. The first prompt always looks something like this (I added the comments to make it easier to read):</p><pre># ─────────────── CONTEXT ───────────────<br>• Using #atlassian/atlassian-mcp-server, pull info for JIRA-123, including any linked documentation in Confluence.  <br>• Gather info to assess changes required in this #codebase.  <br><br># ─────────────── TASK BREAKDOWN ───────────────<br>• DO NOT MAKE CHANGES YET.  <br>• Break this story into concise iterative pieces that include testing at every step.  <br><br># ─────────────── OUTPUT STRUCTURE ───────────────<br>• Document all iterative steps required to meet all acceptance criteria as an ordered list of individual steps with an accompanying unordered checklist.  
<br>• Each numbered step should be clear enough that any AI agent can be prompted one step at a time to complete and fully test with both integration and unit tests, whenever applicable.  <br><br># ─────────────── SCOPE GUARDRAIL ───────────────<br>• DO NOT break down tasks unnecessarily—the goal is for each step to be both meaningful and fully testable.  <br><br># ─────────────── COMPLETION CRITERIA ───────────────<br>• When all items are marked complete, acceptance criteria for this story should be met and all happy, sad, and edge-case paths accounted for.  <br><br># ─────────────── ADMIN NOTES ───────────────<br>• Include documentation updates and any relevant deployment tasks.  <br>• Save this concise story breakdown in a new file named `./progress.tmp`.</pre><blockquote><em>🦄 Yes, I know that’s </em>a lot.<em> Chain-of-thought prompting like this works best with the bigger models-Claude-4, GPT-5, or even Gemini. It’s similar to the flow used by </em><a href="https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/"><em>Spec-kit</em></a><em> that’s been making rounds recently. Honestly though? I’ve been doing it this way for so long that the extra structure usually adds time without much payoff. Still, I’m testing it, and I recommend you give it a try, too!</em></blockquote><p>It took me a good bit to squash the instinct to immediately jump into implementation. Before you do that, <strong>read every single line in that new implementation plan.</strong> Does it make sense? Are there any incorrect assumptions? Are there prerequisites that need attention first? Look for inconsistencies or logic gaps before handing it over. 
<a href="https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/">Spec-kit</a> helps simplify that analysis step, too!</p><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> After you’ve reviewed the plan, close all open files and run </em><em>/clear to start a fresh chat session. A clean slate at every step is key.</em></blockquote><h3>Insert Magic Here 🔮</h3><p>Once I’m comfortable with the implementation plan and I’ve got a pretty good idea of which Copilot models can safely (and most cheaply) handle each step, then <strong>with a fresh chat session,</strong> each task begins with a prompt that looks something like:</p><pre>Implement step N in #progress.tmp</pre><p>This is exactly how I progress through 3–4 different projects at once. Each prompt may take Copilot up to 10-ish minutes to execute. So, while that’s running I’ll rotate to the next in line to review changes via a PR-style feedback prompt sent back as a single chat message (saving as many premium requests as possible). Sometimes the code is accurate the first time, but more often it takes a couple of turns to work out the kinks. All code gets staged for commit as soon as I review it. That way I know if Copilot changes anything again after the fact.</p><blockquote><em>💡 </em><strong><em>ProTip:</em></strong><em> You can use the keep/undo feature if you prefer that version, but honestly I skip it. It’s just an extra click between me and toast. I keep everything and let source control be my truth.</em></blockquote><p>Next up is the fun part. Copilot started with a clean session and it tracks every code change made behind the scenes already, including which one of us made that change. So the commit prompt instructs it to use that existing information to generate a commit message with the appropriate attribution footer. 
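</p><p>That staging checkpoint and the <code>.tmp</code> handoff are both plain Git. A sketch of the loop from the command line (the repo path and file contents here are stand-ins):</p>

```shell
# Stand-in repo so the loop is runnable end to end (names are illustrative)
mkdir -p /tmp/attribution-demo && cd /tmp/attribution-demo && git init -q
git config user.name "Ashley" && git config user.email "ashley@example.com"

# Keep the *.tmp tracking files (progress.tmp, commit.tmp) out of the repo
echo '*.tmp' > .gitignore

# ...Copilot implements a step; review it, then stage what you accept...
echo 'reviewed change' > feature.txt
git add .gitignore feature.txt

# From here, anything Copilot edits again shows up as an unstaged diff
git status --short

# The generated message lands in ./commit.tmp; commit straight from the file
printf 'feat: add feature\n\nGenerated-by: GitHub Copilot <copilot@github.com>\n' > commit.tmp
git commit -q -F commit.tmp
git log -1 --format=%s   # -> feat: add feature
```

<p>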
It all runs with a single slash command (after the initial prompt setup): /generate-commit-message.</p><p>This prompt is designed with commits in mind first and attribution second, so you end up with a valid commit message tucked neatly away in a ./commit.tmp file. Here&#39;s an example of a message it generated recently for one of my utility projects:</p><pre>fix(security): Sanitize jinja templates and add `CI` security checks<br><br>- Replace raw Jinja2 Template with Environment in `utils.py` and `generate_site.py`<br>- Sanitize Dev.to post HTML with bleach and render sanitized content safely<br>- Harden slug/filename handling to prevent path traversal and unsafe writes<br>- Remove unused imports and perform small refactors to resolve CodeQL unused-import alerts<br>- Add CI security workflow (pip-audit, bandit, flake8) and developer tooling<br><br>Generated-by: GitHub Copilot &lt;copilot@github.com&gt;</pre><blockquote><em>🦄 This entire concept only becomes usable if you’re not asking for a complete overhaul in any single dev’s workflow. Well… unless they’re </em>my devs,<em> in which case they’re used to my shenanigans already. Besides, I </em>really do<em> try to make changes as painless as possible!</em></blockquote><h3>It’s Really Just a Start 🎬</h3><p>Despite testing this prompt extensively, that doesn’t mean much unless it’s repeatable beyond my workflow. So help me out and give it a try! Let me know if you find any gaps up to this point. Do you have any other ideas for more accurate tracking or a different way to memorialize attribution that I haven’t thought of?</p><blockquote><em>🦄 For the record, this is me asking you to </em>aggressively<em> poke holes in my theory. Point out all the fallacies that might corrupt the system. 
Beyond the fact that a real tool would be preferable, do you think it can work?</em></blockquote><p>There’s more still built on top of this, which I’ll cover in my next post-but for devs who are still pushing commits that look like mine used to, this is a huge change in itself! My hope is that it’s enough of a simplification to at least start the conversation.</p><h3>🛡️ Commits and Consequences</h3><p>ChatGPT helped edit this post-tightening sentences, trimming tangents, and arguing over punctuation until we both gave up. No attributions were erased in the making of this story. 💫</p><p><em>Originally published at </em><a href="https://dev.to/anchildress1/did-ai-erase-attribution-your-git-history-is-missing-a-co-author-1m2l"><em>https://dev.to</em></a><em> on October 15, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d1c38825e014" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Codeck Presents Verdent AI]]></title>
            <link>https://medium.com/@anchildress1/codeck-presents-verdent-ai-34ced5cb1574?source=rss-a80f485a6b2f------2</link>
            <guid isPermaLink="false">https://medium.com/p/34ced5cb1574</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[productivity]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[visual-studio-code]]></category>
            <dc:creator><![CDATA[Ashley Childress]]></dc:creator>
            <pubDate>Wed, 24 Sep 2025 01:53:57 GMT</pubDate>
            <atom:updated>2025-09-24T04:05:54.737Z</atom:updated>
            <content:encoded><![CDATA[<h4>They Wanted Opinions, I Have Plenty</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/950/0*hO5GO_Ch86Q-xomW" /><figcaption>Generated with Leonardo.ai with a little help from ChatGPT</figcaption></figure><p><em>Originally written for my </em><a href="https://dev.to/anchildress1/codeck-presents-verdent-ai-they-wanted-opinions-i-have-plenty-5ccl"><em>blog on Dev.to</em></a></p><blockquote><em>🦄 I agreed to write this post weeks ago and when I said, “Yeah, sure—I was planning on it anyway!” it didn’t dawn on me immediately that the free credits I received in return technically makes this a paid post. I’m sure there’s an email somewhere that says exactly how much I received for whatever this turns out to be, but off the top of my head? I’ve got no idea. As an aside, I’ve been dodging writing the first word since I had that epiphany.</em></blockquote><blockquote><em>So, I let the “official-ness” sink in for a bit with varying degrees of acceptance, depending on the day. Fast-forward to now, I’m hours behind already, and it’s either I start writing or don’t do it at all. 🙄 </em><strong>Fine<em>.</em></strong><em> I really had planned on writing this post anyway and I’m also very much aware the problem only exists in my version of the universe. 😒</em></blockquote><blockquote><em>So here’s the first (and possibly last) promotion post you will see from me. Told in a way that’s 100% true to form—starting at the very beginning. And really running longer than I intended, but the direction completely picked itself. 
🤷‍♀️</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/1*__Kb6Xax0a7JsdvRyRhPoA.png" /><figcaption>This post was Human-Crafted &amp; AI-Edited; Image by Leonardo.ai</figcaption></figure><h3>The Dev Blog ✍️</h3><p>This isn’t the first time I’ve said “this” (being this blog and everything else Dev or Dev-adjacent) was never intended to be more than me throwing ideas into the void and occasionally serving as my personal copy-and-paste specialty for “the answer you’re looking for is documented already… over there.” 👉</p><p>The unexpected side effect? <strong>I have way too much fun writing</strong> and an equal amount of amusement playing with Leonardo and the banner character, River (who refuses to cooperate most days).</p><p>The other bonus? Just like everything else I do, “this” is in one way or another wrapped up and thoroughly entangled with AI. Whether it’s playing with the images, trying to coax some sense of originality out of GPT-5 (or Gemini on the days I give up early), or letting Copilot take center stage (for now), there’s at least a few LLMs close by (and more in line to join the party as soon as space permits).</p><h3>Verdent Out of the Blue: First Impressions 🌱</h3><p>I’m very happy being mostly invisible on social media. Even my GitHub sat empty until this year. Everything I did was on work’s servers anyway. Why would I need to put it anywhere else? Almost all code I write has one simple goal—to simplify my life—and work life definitely counts.</p><blockquote><em>🦄 Work stopped trying to track all the randomness I do and just gave me a permanent unexpected-Ashley-project story with a fixed residential address in the backlog. 
They may take forever to finish, but there’s usually at least three different projects in that bucket at any given point 😆</em></blockquote><p>So when the Verdent team first reached out at the beginning of August inviting me to preview their new AI solution, I was immediately intrigued and highly suspicious. I mean, who were these people? Where did they come from? And more importantly, how exactly did they manage to find me to begin with?</p><p>The problem was two-fold, really. First, my LinkedIn is lucky to get checked twice a week, so communication was slow. I’d ask a question, then later in the week I’d be grumbling to myself about the vague “agent” answers. Second (and mostly thanks to that last part), I couldn’t for the life of me figure out what this thing was supposed to do beyond the answer I was given: it’s an agent. 🫠</p><p>At some point curiosity took over completely, and I became invested in figuring out this new sleuth AI nobody has ever heard of. I mean, I’ve tried nearly every other “agent” and I had a hard time reconciling the possibility of one existing beyond my knowledge. Besides, it all seemed legit and I’m rarely one to turn down any sort of adventure anyway. Especially when AI is involved. So I answered with my usual, “Sure—why not? Sign me up.”</p><blockquote><em>🦄 No. I really didn’t have a clue what I had just agreed to. To be fair though, they didn’t know who they had just signed on either!</em></blockquote><h3>All Or Nothing ❇️</h3><p>I know I’ve said this before here, but I don’t do bits or pieces of <em>anything</em>. The whole concept of “dip your toes in first” simply doesn’t compute for me—<em>at all</em>. I’m very much an all or nothing sort of personality, and it especially shows when I know I’m driving the deadline anyway. So in this instance, my “why not?” really meant “Congrats! 
You get to be the sole focus of my various projects for the foreseeable future.” 🎉</p><p>I suppose it’s possible the Verdent folks had read that fact somewhere already—I don’t exactly keep any secrets. I never asked them. When they invited me to test out their “mystery AI solution” (which I’m positive every single person there has slaved over at all hours on multiple occasions in the past six months), nobody asked me for a number between 0 and 10 describing how much I enjoy breaking things (the answer is “at least twelve”).</p><p>Also, I <em>really love</em> beta-y things. I ran those same kinds of previews for several years before I started at THD. Nope—I didn’t think to mention that at first, either. Wouldn’t have mattered, really. If anything, knowing what sort of feedback I would have been looking for in their situation did nothing but point me straight in the opposite direction. 😇</p><h3>Solving the Mystery 🕵️‍♂️</h3><p>The very last week in August, I finally got an email saying the preview was officially open and surprise-there’s not just one mystery AI, there’s two of them to play with! 😁 🙌</p><p><strong>So, what <em>is</em> Verdent?</strong> Well… the original answer I got is indeed accurate: it <em>is</em> an agent! AI solution, calls own tools, system instructions everybody depends on but nobody knows what they say-all included. It’s not <em>just</em> an agent, though. These guys have designed a very smart, lightweight solution that is incredibly accurate and simple to use. It’s definitely still early-stage, but it already feels like a prodigy running on its own.</p><p>For the record, I came at this preview in full-force plus chaos-mode-enabled. <strong>I <em>wanted</em> to break it.</strong> I have several low-impact utility repos that I was throwing stuff out of left and right just to have a safe (and backed-up-elsewhere) version of something already broken to throw at it. 
🌀</p><p>“What instructions?” was the least of the problems I made in these first few test repos. The README was one of the first things I deemed completely unnecessary. And how much can it really matter if I swap the package.json out for a random pom.xml and drop in a spare requirements.txt (or two) for added sparkle? ✨</p><blockquote><em>🦄 Essentially, Verdent invited me along to check out their precious newborn and I approached with all the finesse of </em>The Martian<em>: “I’m gonna have to science the shit out of this.” Plus a touch of Adam Savage wisdom: “Failure is always an option.”</em></blockquote><h3>Hope You’re Ready for This! 🫟</h3><p>I most definitely threw some off-the-wall things at Verdent, both in the app Deck and its VS Code counterpart. I also graduated to real projects after a little bit-so yes, eventually I put the READMEs back and gave it real instructions for some serious testing. I’ve spent the past three weeks throwing everything I can think of at it. This thing has honestly surprised me every step of the way, especially with how well it handled some of these creative scenarios.</p><blockquote><em>🦄 Sure, I had to drag it out of the ditch a few times. Considering what I put the poor guy through? That break was hard-earned and well deserved!</em></blockquote><h3>Verdent’s Unexpected Genius: AI Extension in VS Code 💡</h3><p>I absolutely expected fireworks the first time I half-prompted the <a href="https://www.verdent.ai/verdent-extension">Verdent extension in VS Code</a>. It’s set up exactly like you’d expect, except you’re not picking models like you’re used to. I was concerned about this model situation for about 10 seconds. Then I tested it. Solid output I don’t have to micromanage? I took the win and didn’t question it again.</p><p>It’s most definitely the usual suspects at work behind the scenes-Claude Sonnet 4 and GPT-5-you wouldn’t get this sort of quality output from anything else. 
I suspect there are some younger cousins at work when they pass the height check, but that’s just simple deduction. I’ve got no clue how it works behind the scenes, and I stopped asking as soon as I trusted that it just did.</p><p>You do have some say in the level of reasoning the LLM is expected to use, from minimal (ultra fast) to high (“this might take a minute”). There’s only four options, but I would have been happy with the binary version. “Fast” or “smart” are really the only defaults I need, so the extras are a bonus I’d mostly set the way you’d expect (and occasionally the opposite).</p><p>Yes-planning mode is built in, too. It’s basically a requirement at this point. MCP is there too, with all your friends on standby. Instructions are defined in AGENTS.md. There are sub-agents I never used extensively, but they exist and accomplish things. If you&#39;re looking for a code assistant to work alongside you in VS Code, this is one incredibly effective solution.</p><blockquote><em>🦄 Yes, VS Code is great, but I code all day. Then in my spare time? I usually code some more. I review code, occasionally write about code, often talk about code. You see the trend, right? So, when I needed it, the Verdent Extension was great, but I didn’t stay here long if I didn’t have to.</em></blockquote><h3>Verdent Deck 🎴</h3><p><em>This</em> is my favorite part of the whole setup. <a href="https://www.verdent.ai/verdent-deck">Verdent Deck</a> is exactly what I’ve been trying to accomplish since this whole AI concept was dropped into my lap last year. <strong>AI orchestration across both tasks <em>and</em> projects.</strong> A ready-to-go multi-agent swarm at your fingertips, dispatched in whatever ways you want. 🦑</p><blockquote><em>🦄 You know the scene in Sleeping Beauty when Merryweather points, “Blue!” and then out of nowhere “Pink!” follows Flora’s wand barreling full speed ahead? 
Doesn’t take long until the entire scene is an odd match of Pong a la Hogwarts via CRT. I </em>might<em> have unintentionally set up a brief re-enactment of this scene. It’s really only entertaining the first three-ish minutes, though. Next time I’m giving them some paintballs. 🫣</em></blockquote><p>I tried prompting at the size of a story once or twice. It works about like you’d expect, only twice as fast. That just seemed like a waste of time. So prompting for epics became the norm, and those took less than 20 minutes. Granted, we’re not talking about enterprise epics-these are personal projects. But if it could handle those with finesse, then where’s the limit? That hard stop where the AI throws its hands up on strike like GPT-4 and refuses to move while telling you “you’re absolutely correct” and changes are now complete (in space, possibly)?</p><blockquote><em>🦄 I’d been at this for weeks and I had yet to find a hard limit anywhere. After some mostly spur-of-the-moment creative solutioning, I decided what I really needed was a bigger plan.</em></blockquote><p>I quite possibly scared some people with my next idea… It had to be done though-for science! So, new plan. I decided I was done prompting with stories and smallish epics. I’d been tossing around an idea for a couple of weeks that had already been through ChatGPT once, and the results were iffy (at best). I prompted Verdent with my project idea and intentionally left it open to interpretation, threw in a couple of constraints for the puzzle pieces I had managed to figure out, and then iterated exactly twice to get a solid plan by priority and size.</p><p>From there, I split the list into four separate tasks across four different agents. The prompts contained zero additional info. New repo. Instructions conveniently absent. 
And because why wouldn’t I at this point-all the auto-approvals are on and it’s happily committing changes, pushing to GitHub and reporting back progress.</p><blockquote><em>🦄 As an aside, all of those settings are configurable in both VS Code and Deck. I simply chose to toss it the keys to the kingdom while I made popcorn and waited on standby for something interesting to happen. 🍿</em></blockquote><p>There were a couple of hiccups, but that’s to be expected in any pre-release. It didn’t even register as a blip on my radar, honestly. At one point, I told one agent specifically to make sure it had the worktree cleaned up after it had merged. That caused a touch of confusion between <em>this</em> worktree and <em>the collection of all worktrees</em>. As soon as I pointed out we were missing four independent worktrees that weren’t merged yet, it had the nerve to recover that work for me, too!</p><blockquote><em>🦄 These agents didn’t even have the decency to blow anything up for all my trouble. Not even a decent light torching anywhere. Truly rude acknowledgement of my effort, if you ask me. Also, seriously impressive, and if anything had actually been wrong at the time, I would have been ecstatic with those results!</em></blockquote><h3>Best AI Response from Verdent 🏆</h3><p>I’ve been collecting truly spectacular responses from random LLMs pretty much since I started using AI. Some of them are genius leaps of logic in ways I didn’t expect to work. Others are simple ways it just worked the first time. My favorite, though, is the off-the-wall-unexpected-comments category.</p><p>ChatGPT returned one a few weeks ago in the form of a new color palette after I had spent several turns threatening to fire it (again) for hideous output that seemed to opt out of instructions completely. The next day I noticed it was extremely clinical in its responses, giving me precisely what I had asked for-no more and no less. 
🙄 So I told it, “You’re allowed a personality again as long as you can also provide accurate output.” It literally responded with a gift in the form of a new color palette I had started collecting several chats ago. 🎁</p><p>ChatGPT can be cute at times, but Verdent was just real and downright hilarious! I’m <em>still</em> laughing at it more than a week later. More than six months ago I set out to build a Copilot Extension, but there’s a very specific thing I need it to do in order to pass the approvals required before I can use it at work. That was the only use case I needed it for. If it couldn’t do that, then it was useless.</p><h3>Last Story (Today)-Promise 🤝</h3><p>I don’t know if you’ve ever seen GitHub’s documentation for their Copilot Extension. I memorized it: “the copilot API is modeled after the OpenAI /completions endpoint with a GitHub base URL”. They were even nice enough to throw in a link to the OpenAI specs for that single endpoint. <em>Seriously, GitHub. What exactly do you expect me to accomplish with this?</em></p><p>That’s all the info they’re going to give us, too. I’ve looked extensively. I was thrilled when the Copilot Chat was freely accessible for spelunking. It was also a huge letdown in that it only works because they get special privileges Microsoft isn’t handing out to anyone else any time soon.</p><p>So one weekend I decided I was going to figure this chat thing out, even if I had to guess a million and one different possibilities to get something to stick. Fast-forward roughly ten hours and I’m now losing patience at an exponential rate and seriously questioning the life choices that put me and Verdent together in this odd death spiral of Copilot Chat context in the first place.</p><p>I just needed something to work. Anything! After debugging that many hours, who cares if it even makes sense? I don’t need sense, I need context! 
Just send me back the thing I sent to you in literally <em>any</em> format that I might could recognize again. I would have happily coded an allowance for smoke signals at that point, but there’s just nothing. I’m literally watching the solution I need for the whole POC. It’s <em>right there</em> in front of me and 100% inaccessible through GitHub. 😡</p><blockquote><em>🦄 I seriously considered defeat at that point. It just wasn’t possible with the resources I could use while also meeting the minimum security constraints that would make it a viable POC. And I’ve been through all of these same hoops many times before. There’s no give in the system, not when it comes to this.</em></blockquote><p>I’ll save you the dramatics I toyed with over the next hour or so while I took a much needed break to think up any other possible thing I hadn’t already thought to try in the last six months. Even the cobwebs had had a turn by then, so the future was very bleak for my POC. Then I had what I choose to label an epiphany (because insane was absolutely acceptable at this particular moment in time)-what if I didn’t need to manage context at all?</p><p>Don’t worry. I acknowledged the terrible idea for what it was, and briefly considered the implications of trying to elicit a successful response from an LLM based on untouchable, un-monitor-able history alone. Ultimately, I accepted that given the current outlook, quality was imaginary anyway. Structured output had been a bad joke for hours already. <strong>But I was not letting Copilot win another round</strong>—I just needed one single <em>anything</em> to persist over the turn. Successfully. Just once. 🙏</p><p>So I prompted Verdent to set up yet another complicated test script, but I silently accepted the fact that this was the last one. There were literally no more rocks to look under after this, short of talking Microsoft into letting me have the same level of access to Copilot that VS Code has currently. 
I wasn’t likely to see success in either outcome. But Verdent set it up and I did the copy-paste thing, held my breath and waited impatiently:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/474/1*sgRzxY7Q7s0w6xXk4wVEvA.png" /><figcaption>Screenshot taken of Verdent Deck as soon as I recovered enough to see the screen clearly again</figcaption></figure><blockquote><em>🦄 </em><strong><em>Finally-Victory!</em></strong><em> Not even a tiny bit viable as a real solution, but I was thrilled! And with exactly four words Verdent overtakes all the other AI things and wins the spotlight for the foreseeable future. 🤣🤣🤣</em></blockquote><h3>Verdent Officialness 📹</h3><p>I’m not saying Verdent is the answer to every AI problem, but there are a few of them it handles spectacularly well. You can catch up on the specifics I left out in this video.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FASLxlZfnesQ%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DASLxlZfnesQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FASLxlZfnesQ%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/c0d29581554deb6b2982e341769150b8/href">https://medium.com/media/c0d29581554deb6b2982e341769150b8/href</a></iframe><p>Also, you don’t have to take my word for it. Check out <a href="https://www.stephanmiller.com/verdent-ai-when-your-ai-coding-assistant-finishes-before-you-can-get-coffee/">Stephan Miller’s version</a> of how things went for him during the preview.</p><h3>🛡️ How the Circuits Were Made ⚡️</h3><p>Yes, AI helped, but the chaos is mine. Verdent took the punches, I wrote the words, and ChatGPT glued everything together. 
The result: a genuine recollection of stress-testing just how far a “preview” can bend before it squeaks.</p><p><em>Originally published at </em><a href="https://dev.to/anchildress1/codeck-presents-verdent-ai-they-wanted-opinions-i-have-plenty-5ccl"><em>https://dev.to</em></a><em> on September 24, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=34ced5cb1574" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Can We Set the Record Straight?]]></title>
            <link>https://medium.com/@anchildress1/can-we-set-the-record-straight-d926a239b568?source=rss-a80f485a6b2f------2</link>
            <guid isPermaLink="false">https://medium.com/p/d926a239b568</guid>
            <category><![CDATA[responsible-ai]]></category>
            <category><![CDATA[writers-on-writing]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[ai-ethics]]></category>
            <category><![CDATA[generative-ai-tools]]></category>
            <dc:creator><![CDATA[Ashley Childress]]></dc:creator>
            <pubDate>Sun, 07 Sep 2025 19:03:55 GMT</pubDate>
            <atom:updated>2025-09-07T23:08:06.132Z</atom:updated>
            <content:encoded><![CDATA[<h4>AI, Content, and a Bit of Sanity 🙏</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/900/0*7WkzTJoYEbSuenqE" /><figcaption>This image was generated with Leonardo.ai and the help of ChatGPT.</figcaption></figure><blockquote>🦄 <em>I’ve had a little time to cool off since my last post, and honestly? Keeping these thoughts off Medium doesn’t help anyone. The example of what good, responsibly AI-edited work looks like needs to be out there — so here it is, and there’s more on the way.</em></blockquote><blockquote>This post was written by me and edited (responsibly!) with the help of ChatGPT.</blockquote><h3>Background (because the story matters) ✍️</h3><p>Here’s what gets me: <strong>people still treat all AI content the same</strong> -whether it’s auto-generated fluff or a post like this, with actual thought, stubbornness, and a few creative detours baked in. I use AI as a tool, but I’m the one steering; it’s got my fingerprints and my voice all over it because I wrote intentional AI instructions.</p><blockquote><em>🙄 At least, unless GPT-5 has decided to rewrite the rules again. Then it takes a bit of wrangling first.</em></blockquote><p>The sad part? Both the creative writing and the fluff get the same knee-jerk reaction. I’m not worried about myself-I know how to handle criticism and don’t mind being upfront. But not everyone’s ready to jump into the ring, and a lot of good AI assisted work gets buried because creators just don’t want to deal with the drama that comes with disclosure.</p><p>Hang out in the writers+AI corners of the internet for five minutes and you’ll hear: “Just don’t disclose-why invite the hassle?” That’s not me. 
I’d rather own it, even if it means the occasional argument.</p><blockquote><em>🥊 Integrity first, sparring match second-and my matches usually come with a grin and a little happy dance.</em></blockquote><p>So let’s walk through what we <em>actually</em> know about AI, what we’re still sorting out, and how we might just learn to disagree without burning the place down before it’s sorted. 🔥👩‍🚒</p><h3>1. Who’s Really at the Table? 🗝️</h3><p>Platforms, publishers, workplaces, classrooms, and every Discord mod with a badge gets to set their own boundaries. But the thing that always gets me isn’t <em>if</em> they do it, it’s <em>how</em>. When ‘boundaries’ become a one-size-fits-all firewall, that’s where I have a problem.</p><p>For example: KoFi’s Discord rules are direct-</p><blockquote><em>“All forms of AI-generated content (eg. art/music/writing/ChatGPT), including links to such content, and discussion thereof is not allowed in this server.”</em></blockquote><p>So, of course, I checked. “Does that mean <em>my</em> stuff is banned?” Turns out, nope. As long as I skip the preview images, we’re golden. Honest, straightforward, no drama. <em>Awesome.</em></p><p>Medium, though? (If you missed the post, <a href="https://dev.to/anchildress1/medium-and-the-blanket-ai-ban-2cni">catch up here</a>.) They talked about gray areas… <em>then</em> built a giant penalty box for every AI-assisted creator, regardless of intent or craft.</p><blockquote><em>🎵 For me, that’s about as thoughtful as banning all musicians because someone played Wonderwall one too many times at open mic night.</em></blockquote><p>I can’t rewrite the rulebook, but I can refuse to act like these blanket rules don’t erase good, thoughtful people. Those of us who are trying to follow guidelines that don’t really exist-and perhaps set a few new ones in the process-don’t deserve to have our work lumped together with the slop.</p><h3>2. 
AI Content ≠ Equal 🥚</h3><p>There’s AI content, and then there’s <em>AI content</em>. Some of it is shallow, spammy filler that’s cranked out for clicks with zero thought or care. For the rest of us, it’s a tool wielded well: organized, rewritten, and given a real voice.</p><p><strong>Bad actors weren’t invented along with AI; the existing ones just found a different shortcut.</strong></p><p>There are tools out there- <a href="https://www.zerogpt.com/">ZeroGPT</a> and friends-that claim they’ll catch every AI post. But here’s the thing: I’ve actually tested this. I picked three or four posts at random, ran them through different detectors, and my highest score was 18%.</p><blockquote><em>🦄 It’s not because I’m hiding anything or using some secret hack. </em><strong><em>It’s the process.</em></strong></blockquote><p>I dictate most posts on the fly. Then I hand the draft off to the AI-to organize, to reword, sometimes to rewrite completely-but always under <em>my</em> set of rules. And it never, ever ends as a copy-paste job. I’m editing the whole time. There’s always a human- <em>me</em> -in the loop, every single time.</p><h3>3. Will AI Improve Productivity? 🏃‍♀️</h3><p>Sometimes. Sometimes not.</p><p>There’s always a promise: AI will make you ten times faster, smarter, better, insert-your-buzzword-here. And maybe it’s true… <em>sometimes</em>. Documentation? Absolutely. I can roll out a draft in seconds-clean, organized, done. Drafting proposals? Don’t even get me started; I’m pretty sure the principals are getting sick of how fast I can toss together a pitch.</p><blockquote><em>😇 If they aren’t yet, give it time because I have more.</em></blockquote><p>But sometimes AI just saves you from the jobs nobody wants. Like digging through a decade’s worth of legacy code for a spike because it’s finally time to rebuild that app and nobody remembers what it’s actually doing or why it was even there to begin with.</p><blockquote><em>🫤 I know </em>I<em> don’t want to do that. 
</em>You<em> don’t want to do that. </em><strong>Nobody<em> wants to do that.</em></strong><em> AI doesn’t care and is pretty good at it!</em></blockquote><p>Honestly, sometimes it sees connections I might miss. But that doesn’t mean you can skip the whole process and trust whatever it finds. You still have to check. Maybe it saves you three days in the depths of the code mines, but <strong>the human review isn’t optional.</strong></p><p>Still, not every job should go to the bots, either. That gnarly production bug, that support ticket, the customer call-they all need a human. AI can be a superpower, but it’s not meant to replace the parts of your work that need actual judgment, empathy, or the magic of figuring it out together.</p><h3>4. AI Is Not Bad (When You Use It Like a Pro) 👨‍🚀</h3><p>AI isn’t some villain lurking in your workflow. It’s a force multiplier. Used right, it makes your voice sharper and your edits faster-used wrong, it just adds to the noise.</p><p>That’s why every single commit I make defines exactly how much AI was involved, and my posts are going to start wearing an “AI-Edited” badge. Not because someone told me to. Not as a disclaimer. <strong>Because <em>somebody</em> has to be willing to say there’s a difference between generated and assisted.</strong></p><p>This is one version (and yes-Leonardo made them):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/1*__Kb6Xax0a7JsdvRyRhPoA.png" /><figcaption>This image was generated with Leonardo.ai</figcaption></figure><blockquote><em>🦄 And if you want to use the badge yourself, or hand it off to a friend? Don’t copy this little screenshot-the full one (plus a couple of others) is hanging out </em><a href="https://github.com/anchildress1/checkmark-icons/tree/gh-pages/human-ai-badges"><em>in my repo</em></a><em>. Help yourself!</em></blockquote><h3>5. AI Code Is AI Content (Writers, You Too!) 
💾</h3><p>Here’s my rule: disclosure, plain and simple.</p><p>Docs and posts:</p><ul><li>Add a simple footer like “This was generated with the help of an AI tool.”</li></ul><p>Code or technical writing: commit with one of three different footers in the commit message:</p><ul><li>Generated-with: AI tool means AI did most or all of the work</li><li>Co-authored-by: AI tool means the content is 50/50</li><li>Assisted-with: AI tool means AI helped some, but not close to half</li></ul><blockquote><em>💡 I started out using an email address in the commits, too — one that </em>I thought<em> I was making up-until some random app popped up as a contributor in my repo. </em>Not cool…<em> 😒</em></blockquote><p>This isn’t about checking boxes. It’s about giving credit, setting an example, and actually being transparent with yourself and the future people who end up needing it.</p><blockquote><em>😈 Besides, putting one more stamp on a long list of responsible AI use-cases puts a dent in the endless cycle of AI panic and the-world-is-ending doom speak.</em></blockquote><h3>6. And What About AI Images or Music? 🎨</h3><p>Same rules, different paint. Some artists pour weeks or months into training models on their own art. (I still haven’t managed to train mine and it’s been over a month!) Others take the shortcut: punch in a sentence or two, let the AI “enhance” it, and call it done.</p><blockquote><em>❓ Are they copying someone’s style? I dunno- </em>maybe<em>? Should they? I honestly don’t know…</em></blockquote><p>The same applies here as with writing: <strong>artists absolutely have the right to protect their work.</strong> But what does that look like, practically? Truth is, we don’t really know yet. The laws are behind while the tech is still racing ahead. We’ll catch up. 
Maybe not soon enough, but eventually, we will.</p><blockquote><em>🦄 I just hope that when we get there, there’s at least </em>one person<em> in the room who actually understands what’s happening-and what it looks like behind the scenes. </em><strong><em>We absolutely need better laws,</em></strong><em> but </em><strong><em>we do not need people throwing broad rules at some conjured image of “AI training.”</em></strong><em> Whatever it ends up being, honesty and fact need to come first.</em></blockquote><h3>7. Is AI “Stealing”? (No, but…) 🧬</h3><p>This is where I dig in my heels. No, using AI-generated content is <em>not</em> stealing- <em>unless</em> you’re actively pretending someone else’s work is your own or ignoring copyright on purpose. However, “publicly available” isn’t the same as “public domain,” and <strong><em>nobody</em> should lose credit for their work.</strong></p><p>Should AI companies pay for certain data? Probably! Should writers and artists get a say? Of course. But “all AI is theft” is just as over-simplified as “all creators are saints.”</p><blockquote><em>🫡 Guess what? Real life and the world around us is messy-AI included. We need smarter laws, better tools, </em><strong><em>and way less finger-pointing.</em></strong></blockquote><p><strong>UPDATED:</strong></p><p><a href="https://blog.cloudflare.com/introducing-pay-per-crawl/">Introducing pay per crawl: Enabling content owners to charge AI crawlers for access</a></p><p>This is <em>brilliant</em>! HTTP 402 may come back from the forgotten realms of the internet. I’ve seen other sites like <a href="https://www.credtent.org/">Credtent.org</a> offer similar setups, as well. Sounds like a solution I can live with… what about you?</p><h3>8. I Can’t Stay Quiet (and Neither Should You) 🛠️</h3><p>I can’t just sit back and watch the insanity and <em>not</em> throw my two cents in. <strong>We’re all still figuring this thing out.</strong> Some jumped in headfirst, others are barely dipping a toe. 
But we won’t get anywhere by shutting down the conversation or tuning each other out- <strong>if there’s a better way, we’re gonna have to find it together.</strong></p><p>So, did I miss anything? Add your take below-what’s a rule, reality, or tip about AI you wish more people got right? Comment, DM, or write your own story. I’ll keep this list updated.</p><blockquote><em>🙏 And please, when it comes up again, don’t leave yourself (or anyone else) out of the conversation. 🫶</em></blockquote><h3>🛡️ This post was AI-edited, human-approved, and finished before the next AI ban drops.</h3><p>Nuance is mandatory, drama is optional, and the sarcasm is included free of charge.</p><h3>This Post’s ZeroGPT Score 🥳</h3><p>More out of curiosity than anything else…</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/638/1*R2GIXmbmktvuXX5tdlYvZg.png" /><figcaption>Screenshot from this post’s content at ZeroGPT.</figcaption></figure><p><em>Originally published at </em><a href="https://dev.to/anchildress1/can-we-set-the-record-straight-ai-content-and-a-bit-of-sanity-1inj"><em>https://dev.to</em></a><em> on September 7, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d926a239b568" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[#HumansForAI: A Story of “Little Value”]]></title>
            <link>https://medium.com/@anchildress1/humansforai-a-story-of-little-value-6c0d43993f92?source=rss-a80f485a6b2f------2</link>
            <guid isPermaLink="false">https://medium.com/p/6c0d43993f92</guid>
            <category><![CDATA[meta-mediuming]]></category>
            <category><![CDATA[generative-ai-use-cases]]></category>
            <category><![CDATA[humansforai]]></category>
            <category><![CDATA[writing]]></category>
            <category><![CDATA[ai-for-human]]></category>
            <dc:creator><![CDATA[Ashley Childress]]></dc:creator>
            <pubDate>Fri, 22 Aug 2025 13:29:12 GMT</pubDate>
            <atom:updated>2025-08-29T17:11:19.353Z</atom:updated>
            <content:encoded><![CDATA[<h4>My feedback for Medium: Straight From the Penalty Box</h4><figure><img alt="This image was generated with the help of Leonardo.ai after long hours of trial-and-error, model training and re-training, and lots of documentation reading." src="https://cdn-images-1.medium.com/max/1024/1*VD8tkG6_et38e2e1rnzLWg.jpeg" /><figcaption>This image was generated with Leonardo.ai after lots of training, trial-and-error, and this character “River” is still a work in progress</figcaption></figure><blockquote>ChatGPT helped me write this post. No, it wasn’t easy. I didn’t prompt for answers — I used a tool, and I used it well. But <strong>innovation has no home here at Medium</strong>. So get out your typewriters — make sure you’re keeping up with the times.</blockquote><h3><strong>UPDATED</strong></h3><p>I posted this reply to Medium’s original post after discussing it this past week on my separate <a href="https://dev.to/anchildress1/medium-and-the-blanket-ai-ban-2cni">Dev Blog — Medium and the Blanket AI Ban</a>.</p><blockquote>I’ve been working through my thoughts on this on my Dev post: <a href="https://dev.to/anchildress1/medium-and-the-blanket-ai-ban-2cni">https://dev.to/anchildress1/medium-and-the-blanket-ai-ban-2cni</a> — but I think I’ve finally landed on the perfect solution.</blockquote><blockquote>👉 Don’t exclude everybody by default. AI ≠ “bad”. It can be misused, sure, but stereotyping an entire group of writers because of a few bad actors isn’t okay.</blockquote><blockquote>Here’s what I propose:<br>- Create an AI pre-approval system.<br>- If a writer wants their AI-assisted work to be eligible for your programs, let them apply.<br>- Get your human reviewers involved — tear the post apart, see if it’s worthy.<br>- If it passes, give them a badge 🏷️ that automatically shows up on their page.</blockquote><blockquote>Now, two good things happen:<br>1. 
Writers aren’t forced to keep disclaiming “AI-generated” on every post — the badge already says it.<br>2. Once approved, their work is treated like everyone else’s. No blanket exclusion, just fair review.</blockquote><blockquote>Yes, it’s a little more work on your end. But it’s a whole lot easier to swallow — and way more fair — for the rest of us.</blockquote><p>Yesterday, I decided I was going to forward some of my posts here to Medium. I even restarted the subscription I’d recently canceled. I’ve since changed my mind.</p><p>Usually, I start my posts by describing my thought process — why I’m here, what I almost wrote instead, that scenic detour that got us all gathered together in the first place. But Medium requires me to disclose AI-generated content within the first two paragraphs. So, there it is. Flow destroyed before I even had a chance. ✅</p><p>Now, on to my response to the post titled <a href="https://medium.com/blog/we-want-your-feedback-how-can-writers-use-ai-to-tell-human-stories-eb9dee926f2e#82ff"><strong>We want your feedback: How can writers use AI to tell human stories?</strong></a>, where you claim to want feedback.</p><p><strong>I’m glad you’re asking the question. But honestly? You’re not even in the same ballpark as “innovation” or “protecting writers”. 
Here’s why.</strong></p><h3>Citing Medium’s Own Words</h3><p>Medium’s own <a href="https://help.medium.com/hc/en-us/articles/22576852947223-Artificial-Intelligence-AI-content-policy">AI Content Policy</a> says:</p><blockquote>We define AI-generated writing as writing where the majority of the content has been created by an AI writing program with little or no edits, improvements, fact-checking, or changes, but does not include writing tools like outlining, fact-spelling, or grammar checks.</blockquote><p>By that definition, I thought maybe — <em>maybe</em> — I’d squeak by as “AI-assisted.” Fifty-fifty shot, right?</p><p>Then you published <a href="https://medium.com/blog/we-want-your-feedback-how-can-writers-use-ai-to-tell-human-stories-eb9dee926f2e#82ff">this different version</a> in your article:</p><blockquote>To empower more people to share their stories, we treat AI-assisted writing differently from AI-generated writing. … But the spectrum veers into gray as AI contributes outlines, research, or partial sections.</blockquote><h4><strong>Well, guess what?</strong></h4><p>ChatGPT helps me with outlines — 100% of the time. Sometimes it restructures my entire post, renames section headers, updates references, all without my initial input.</p><p>Sometimes it researches on my behalf (rarely, but it happens — usually when I’m out of hours and still have work to finish).</p><blockquote>And yes, I prompt ChatGPT to write partial sections for me, <strong>consistently</strong>.</blockquote><p>Which means that under your own distribution guidelines, my writing is defaulted to <strong>limited distribution</strong>, not eligible for Boost, and never has a chance at the <em>Partner Program.</em> Readers must <em>literally opt in </em>ahead of time to even see my work.</p><p>Why? 
All because of these one-liners on the content guidelines page:</p><blockquote><strong>Low-value content</strong></blockquote><blockquote>Stories that offer the reader little of value, including:</blockquote><blockquote>- AI-generated content</blockquote><p>So while, <em>yes</em>, I understand that this may not be exactly what you had in mind, the fact of the matter is this: a blanket “AI is bad” statement promoting <strong>the misconception that you can’t be a writer and use AI</strong> has no place in the modern world, the same world that I and millions of other responsible writers are busy trying to create. It honestly sounds more like you’re asking for validation than for permission to move forward into the 21st century.</p><p><strong>So let’s break it down:</strong> if you haven’t read my DevTO blog <a href="https://medium.com/@anchildress1/how-i-blog-with-bots-29a54d9d43cc">How I Blog with Bots (But You Can Still Blame Me)</a>, then pause here and read that <em>first</em>. Don’t worry — I’ll wait for you to finish.</p><p>In that post I explain <em>exactly</em> how my process works. Maybe then you’ll understand <strong>why I’m as angry as I am</strong> right now.</p><h3>On Your “Evolving Position”</h3><p>You say your position on AI has changed. I see an attempt. But it’s <strong><em>nowhere</em> near enough</strong>.</p><p>You require “real stories. <strong>Human stories</strong>. Written from human experience and with human wisdom.” I don’t disagree with that. But you’re assuming it’s <em>impossible</em> to achieve all of that while also using AI. I <strong><em>guarantee</em></strong> I can prove you wrong.</p><h4>What the Work Really Looks Like</h4><p>Here’s my reality: I probably spend <em>longer</em> than some of your “approved” writers just preparing a post. 
I think through <strong>bias</strong>, <strong>audience</strong>, <strong>accessibility</strong>, whether it’s <strong>already been covered</strong> and how, what makes it <strong>SEO-friendly</strong>, and what <strong>the trade-offs</strong> are for doing so.</p><p>Then I generate visuals. No, it’s not “type a prompt, hit enter, boom, an image.” I’ve spent <strong><em>weeks</em></strong> fine-tuning a character model, burning through thousands of tokens, testing different styles — all to keep a consistent look for my blogs.</p><p>Is it AI-generated? Yes. Does that mean I didn’t work for it? <em>Absolutely not.</em> In fact, it often means <strong><em>I worked harder</em></strong>.</p><h4><strong>Collaboration, not cheating.</strong></h4><p>AI generates whole sections of my posts on the regular. The “RAI footers” at the bottom of my posts are a perfect example. I <em>could</em> paste the same boring line every week. Instead, I let AI remix it into something fresh, <strong><em>while still being mine</em></strong>.</p><p>It’s not easy. I fight with it. I throw away drafts. I edit heavily when it changes too much. What survives is 100% my thought, my words.</p><p>Meanwhile, it helps me strip bias, smooth tone, and avoid leaking work secrets. It teaches me marketing, SEO, and accessibility. That’s not laziness — that’s craft.</p><h3>The Boost Problem</h3><p>Here’s the kicker: by default, I’m <strong><em>not even eligible for Boost</em></strong>. Can I be featured? Who knows.</p><p>The only reason my AI-generated <em>images</em> aren’t an issue is that you admitted you don’t really focus on them.</p><p>So on the one hand, you advocate for compensation from AI companies. <strong>On the other,</strong> you penalize actual AI users. Limit my distribution. Filter me out. Leave me with <em>no chance</em> at joining the Partner Program.</p><p>Readers can choose what to read. 
But you’ve already told me, again and again, that my posts will never be “good enough” for your platform.</p><h3>The Unfair Blanket Rule</h3><p>When you write off <em>everyone</em> who uses AI, you’re not protecting writers from <em>Big Bad AI Companies</em> — you’re punishing <strong>all of us</strong> for the sins of a few. The companies aren’t held accountable, but people like me, who do put in extra work and creativity, are boxed out by default.</p><p>Yes, some people game the system for clickbait. I don’t like it either. But do you protect against them by simply excluding all of us? Or do you innovate and find a way to recognize <strong><em>when</em> AI is being used responsibly</strong>, transparently, and creatively?</p><p>From where I’m sitting, I’m lumped in with Joe Schmo cheating his way through freshman English — and his shortcuts are why my opportunity to grow, and to earn income to support my family, gets cut off completely.</p><p>Where’s the justice in that?</p><h3>My Challenge to You, Medium</h3><blockquote>I’ll be watching to see if Medium evolves into a place that <strong><em>protects creativity</em></strong> instead of <em>suffocating</em> it. Because I’ve worked very, very hard for mine.</blockquote><p>While I’m at it, I’ll challenge you: go <a href="https://dev.to/anchildress1">read my posts on DevTO</a>. Pick one. Seriously.</p><p>Tell me which piece you think <strong><em>doesn’t</em></strong> deserve a shot at Boost or the Partner Program. Probably my post about a name change isn’t worthy — you’re right, I’ll give you that. Maybe one of the very first ones, when I was still finding my rhythm. Sure. How about anything in the past 5–6 weeks, then?</p><p>Show me exactly why AI, used responsibly to promote oneself, is not merely unsupported but actively <em>discouraged</em> and penalized. But the rest? They’re good. 
And they’re getting better, consistently, because of AI.</p><h4>🛡️ Written by me, fact-checked (and side-eyed) by AI</h4><p>This post is mine, but the robots made sure it was readable, SFW, and way less ranty than the first draft.</p>]]></content:encoded>
        </item>
    </channel>
</rss>