<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by proflead on Medium]]></title>
        <description><![CDATA[Stories by proflead on Medium]]></description>
        <link>https://medium.com/@proflead?source=rss-793771e1fd56------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*n6-bDN1SPsy8VDGY9pO9Dg.jpeg</url>
            <title>Stories by proflead on Medium</title>
            <link>https://medium.com/@proflead?source=rss-793771e1fd56------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 13 Apr 2026 17:40:38 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@proflead/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The only GitHub Copilot CLI tutorial you will ever need]]></title>
            <link>https://medium.com/@proflead/the-only-github-copilot-cli-tutorial-you-will-ever-need-3e66d5392d7c?source=rss-793771e1fd56------2</link>
            <guid isPermaLink="false">https://medium.com/p/3e66d5392d7c</guid>
            <category><![CDATA[copilot-cli]]></category>
            <category><![CDATA[ai-coding]]></category>
            <category><![CDATA[github-copilot]]></category>
            <category><![CDATA[vibe-coding]]></category>
            <category><![CDATA[github-copilot-cli]]></category>
            <dc:creator><![CDATA[proflead]]></dc:creator>
            <pubDate>Thu, 02 Apr 2026 11:57:08 GMT</pubDate>
            <atom:updated>2026-04-03T10:33:46.253Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OL8XUIFz2u1Z2q-6xCHVUg.png" /><figcaption>GitHub Copilot CLI tutorial</figcaption></figure><p>GitHub Copilot CLI brings Copilot directly into your terminal. You can ask questions, understand a project, write and debug code, review changes, and interact with GitHub without leaving the command line. GitHub says Copilot CLI is available on all Copilot plans. It means you can try it with me today! ;)</p><h3>What GitHub Copilot CLI is</h3><p>The easiest way to think about <a href="https://github.com/features/copilot/cli">Copilot CLI</a> is this: it is an AI coding assistant designed for terminal-based work. It is not just a chat box. It can inspect your local project, help you edit code, debug problems, summarize changes, and support GitHub workflows from the terminal. GitHub describes it as a terminal-native assistant with agentic capabilities and GitHub workflow integration.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TpUvAXiYaGrnfuGRv2uhAw.png" /><figcaption>GitHub Copilot CLI</figcaption></figure><p>This matters because much real development already happens in the terminal. You run builds, start servers, use git, run tests, inspect logs, and work with Docker there. Copilot CLI embeds AI into that workflow, so you don&#39;t have to switch tools. 
That is the main reason it is useful.</p><p>If you want more details, watch my video:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fp7LakGgyb8M%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dp7LakGgyb8M&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fp7LakGgyb8M%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/1c4412c35cc0a80016e86f9e6706741e/href">https://medium.com/media/1c4412c35cc0a80016e86f9e6706741e/href</a></iframe><p><strong><em>Watch on YouTube: </em></strong><a href="https://youtu.be/p7LakGgyb8M"><strong><em>GitHub Copilot CLI</em></strong></a></p><h3>How to Install GitHub Copilot CLI</h3><p>You can install Copilot CLI with npm on all platforms, with Homebrew on macOS and Linux, with WinGet on Windows, or with an install script on macOS and Linux. If you install it with npm, you need Node.js 22 or later.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Xseq4-6XdXzazWSBWcOn-g.png" /><figcaption>How to Install GitHub Copilot CLI</figcaption></figure><p><strong>Install with npm</strong></p><pre>npm install -g @github/copilot</pre><p><strong>Install with Homebrew</strong></p><pre>brew install copilot-cli</pre><p><strong>Install with WinGet</strong></p><pre>winget install GitHub.Copilot</pre><p>This is the official Windows install option in the GitHub docs.</p><h3>How to Launch Copilot CLI</h3><p>After installation, start the interactive interface in <strong>your project folder (not the root folder)</strong> in a terminal window with this command:</p><pre>copilot</pre><p>When you first launch the CLI, you can use /login and follow the prompts to authenticate with your GitHub account. 
You usually only need to do this once.</p><p>When you start the Copilot CLI in a project folder, GitHub prompts you to confirm that you trust the files in that folder. This is important because during the session, Copilot may try to read, modify, and execute files in that folder and below it. So, only continue if you trust that location.</p><h3>How Copilot CLI works</h3><p>You can type prompts in normal English, like:</p><ul><li>Explain this project</li><li>Find where authentication is handled</li><li>Add validation to this form</li><li>Help me debug this error</li></ul><p>That is the easy part. The more important part is learning the control commands inside the interactive session. These slash commands help you manage the session, switch models, review changes, share results, and use more advanced workflows.</p><h3>The core slash commands you should know</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BN6kpbbDz9OQ-TWfVKyPrw.png" /><figcaption>core slash commands you should know</figcaption></figure><p><strong>/help</strong></p><p>Use /help to view available commands. This should be your first stop whenever you forget a command or want to see what has changed. GitHub’s docs recommend checking the in-product help because the CLI evolves over time.</p><p><strong>/models</strong></p><p>Use /models to choose a model. This matters because different models may feel different in speed, reasoning depth, and output style. GitHub also has auto model selection concepts that can reduce the need to choose manually in some cases.</p><p><strong>/plan</strong></p><p>GitHub’s best practices guide says models do better when given a concrete plan. In Copilot CLI, you can press <strong>Shift + Tab</strong> to toggle between normal mode and plan mode, or you can use the /plan command from normal mode. In plan mode, Copilot creates a structured implementation plan before any code is written.</p><p>This is a very good habit for beginners. 
Instead of telling Copilot to build something immediately, first ask it to create a plan. That usually makes the next steps cleaner and easier to review. GitHub explicitly recommends planning before coding.</p><p><strong>/context</strong></p><p>Use /context to inspect how much context is being used. This is especially helpful in longer sessions when you want to avoid running out of room.</p><p><strong>/compact</strong></p><p>Use /compact to summarize the current conversation and free up space without fully starting over. This is useful when the session gets long but you still want to preserve the important context.</p><p><strong>/clear</strong></p><p>Use /clear to fully reset the current session. This is the fastest way to start fresh if the conversation has gone off track.</p><p><strong>/resume</strong></p><p>Use /resume to reopen an earlier session and continue from where you left off. This is one of the best workflow features because you do not always need to restart from zero.</p><p><strong>/diff</strong></p><p>Use /diff to get a summary of changes made during the session. This is a good habit before committing, reviewing, or explaining the work to someone else.</p><p><strong>/review</strong></p><p>Use /review to run the code review agent on the changes. This is one of the most important commands in real use because it helps you check maintainability, bugs, edge cases, and general code quality after generation.</p><p><strong>/share</strong></p><p>Use /share to export the session to a Markdown file or GitHub gist. This is useful for documentation, debugging, mentoring, and team collaboration.</p><p><strong>/session</strong></p><p>Use /session to inspect information about the session, such as checkpoints, files, and the current plan, or to rename the session. 
This becomes useful when the workflow gets more complex.</p><p><strong>/delegate</strong></p><p>Use /delegate when you want to hand the task off to the Copilot coding agent on GitHub instead of keeping all work local.</p><p><strong>/fleet</strong></p><p>The /fleet command lets Copilot CLI break down a complex request into smaller tasks and run them in parallel. This is especially useful for bigger maintenance tasks or multi-part work where a single linear flow is slower.</p><p><strong>/agent</strong></p><p>GitHub also supports <strong>custom agents</strong>. When using Copilot CLI, you can choose a custom agent with the /agent command, or reference the agent in a prompt or command-line argument.</p><p>In simple English, a custom agent is like a specialist teammate. You might create a frontend agent, a documentation agent, a testing agent, or a refactoring agent. This becomes very useful when you want Copilot to behave in a more repeatable and opinionated way for a certain kind of task.</p><p><strong>/skills</strong></p><p>Skills are reusable capability packages that can add instructions and specialized behavior. You can manage skills with /skills, and the supported subcommands include list, info, add, remove, and reload.</p><p>You might create skills for frontend design, documentation generation, PDF handling, release note formatting, or internal coding rules. Skills are especially useful when you do the same kind of work often and do not want to repeat the same long instructions every time.</p><h3>A simple beginner workflow</h3><p>If you are new to Copilot CLI, do not start with a giant prompt. Start with a small, safe workflow.</p><p>First, move into a project:</p><pre>cd your-project</pre><pre>copilot</pre><p>In other words, navigate to a folder with code you want to work on, then run copilot to start the session.</p><p>Then start with project understanding:</p><pre>Give me a simple overview of this project. 
Explain the folder structure, main entry points, and what I should read first.</pre><p>This fits GitHub’s positioning of Copilot CLI as a tool to answer questions and help you understand and work with code from the terminal.</p><p>Next, narrow the scope:</p><pre>Find where authentication is handled and explain it in simple English.</pre><p>Then plan before editing:</p><pre>/plan Add basic validation to the login form and keep the current style.</pre><p>After reviewing the plan, let Copilot implement the small change.</p><p>Then inspect the result:</p><pre>/diff</pre><pre>/review focus on bugs, maintainability, and edge cases</pre><h3>Autopilot mode</h3><p><strong>Autopilot, or YOLO mode,</strong> lets Copilot CLI work autonomously on a task, carrying out multiple steps until the task is complete. In normal use, you usually go back and forth step by step. In autopilot mode, Copilot keeps working after the initial instruction instead of waiting for you after each step.</p><p>This is powerful, but it should be used carefully. A practical beginner rule is simple: use normal mode for learning, use plan mode for clearer structure, and move to autopilot only when you understand the task and the repo well.</p><p>You can start Copilot in autopilot mode with this command:</p><pre>copilot --yolo</pre><p>Or, from inside a running session, use:</p><pre>/yolo on</pre><h3>Common mistakes beginners make</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hykXjImrolv41kyIMFZe-Q.png" /><figcaption>Common mistakes beginners make</figcaption></figure><p>The most common mistake is asking for too much at once. Copilot CLI usually works better when the task is small and clear. GitHub’s best-practices guide strongly supports planning first, which is another way of saying: do not jump straight into a giant, vague request.</p><p>Another common mistake is accepting generated code without review. That is why /diff and /review should become normal habits. 
GitHub gives you those commands for a reason.</p><p>A third mistake is letting the session become messy. Long sessions are useful, but only if you manage them well with /context, /compact, /clear, and /resume.</p><p>A fourth mistake is being too relaxed about permissions. GitHub’s trust warnings are there because Copilot may read, modify, and execute files in the working directory. So take those prompts seriously.</p><h3>Useful Resources to Learn More about GitHub Copilot</h3><ul><li>Copilot CLI overview and guides: <a href="https://docs.github.com/en/copilot/how-tos/copilot-cli">https://docs.github.com/en/copilot/how-tos/copilot-cli</a></li><li>Getting started: <a href="https://docs.github.com/en/copilot/how-tos/copilot-cli/cli-getting-started">https://docs.github.com/en/copilot/how-tos/copilot-cli/cli-getting-started</a></li><li>Full list of CLI commands: <a href="https://docs.github.com/en/copilot/reference/cli-command-reference">https://docs.github.com/en/copilot/reference/cli-command-reference</a></li><li>GitHub Copilot CLI Free Course (Microsoft): <a href="https://developer.microsoft.com/blog/get-started-with-github-copilot-cli-a-free-hands-on-course">https://developer.microsoft.com/blog/get-started-with-github-copilot-cli-a-free-hands-on-course</a></li><li>Copilot Agent Library: <a href="https://github.com/proflead/copilot-agent-library">https://github.com/proflead/copilot-agent-library</a></li><li><a href="https://www.youtube.com/shanselman"><strong>Scott Hanselman</strong></a> (Microsoft): practical demos, real developer workflows</li><li><a href="https://www.youtube.com/c/danwahlin"><strong>Dan Wahlin</strong></a> (Microsoft): Great explanations + clean teaching style</li><li><a href="https://www.youtube.com/user/burkeholland">Burke Holland</a>: Great content about VS Code and GitHub Copilot</li></ul><h3>Conclusion</h3><p>GitHub Copilot CLI is a terminal-native AI assistant that can help you understand projects, plan work, make changes, review code, 
manage sessions, connect with VS Code, and grow into more advanced workflows with skills, custom agents, and autonomous execution. If you have a Copilot subscription, then definitely give it a shot.</p><p>Cheers, proflead! ;)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3e66d5392d7c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Run OpenClaw Locally with Ollama: The Ultimate Guide]]></title>
            <link>https://medium.com/@proflead/run-openclaw-locally-with-ollama-the-ultimate-guide-d717e403a197?source=rss-793771e1fd56------2</link>
            <guid isPermaLink="false">https://medium.com/p/d717e403a197</guid>
            <category><![CDATA[openclaw]]></category>
            <category><![CDATA[ollama]]></category>
            <category><![CDATA[openclaw-with-ollama]]></category>
            <category><![CDATA[openclaw-locally]]></category>
            <category><![CDATA[ai-agent]]></category>
            <dc:creator><![CDATA[proflead]]></dc:creator>
            <pubDate>Wed, 11 Mar 2026 12:25:16 GMT</pubDate>
            <atom:updated>2026-03-11T12:25:16.048Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JM5COCC-ShL4neLsgVwh1g.png" /><figcaption>Run OpenClaw Locally with Ollama: The Ultimate Guide</figcaption></figure><p>Imagine having a personal AI agent running on your computer. It can read files, run commands, automate tasks, and remember your workflows.</p><p>In this guide, you will learn how to run OpenClaw with Ollama locally and choose the best local LLM models.</p><p>This setup allows you to:</p><p>• run AI agents locally<br> • keep your data private<br> • avoid cloud API costs<br> • build powerful automation workflows</p><p>By the end of this tutorial, you will have OpenClaw running with a local model using Ollama.</p><h3>What is OpenClaw?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vL4NfIs2hgRepc87duvw5w.png" /></figure><p><a href="https://openclaw.ai/">OpenClaw</a> is an open-source AI agent framework. Unlike a normal chatbot, OpenClaw can perform real actions on your computer.</p><p>For example, it can:</p><p>• run terminal commands<br> • read and edit files<br> • automate workflows<br> • control browsers<br> • remember tasks using local memory</p><p>OpenClaw acts as a bridge between LLM reasoning models and your operating system.</p><h3>Why Run OpenClaw with Ollama?</h3><p>Running OpenClaw with Ollama gives you a fully local AI agent.</p><p>1. Full Privacy. All data stays on your computer.</p><p>2. No API Costs. You don’t need OpenAI or cloud providers.</p><p>3. Faster Performance. Local models remove network latency.</p><p>4. Persistent Memory. OpenClaw stores conversations in local Markdown files, allowing long-term memory.</p><p>5. Messaging Interface. 
You can control OpenClaw through:</p><p>• Telegram<br> • Slack<br> • WhatsApp</p><p>This allows you to trigger workflows from your phone.</p><h3>Best Local Models for OpenClaw</h3><p>Choosing the right local model is important for reliable agent behavior.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/723/1*l3naUuESL6meiUAF0uyuCA.png" /><figcaption>Best Local Models for OpenClaw</figcaption></figure><p>For reliable tool usage, use models 14B or larger. Small models often fail when executing multi-step commands.</p><h3>How to Install OpenClaw with Ollama</h3><h4>Step 1 — Install Ollama</h4><p>Install Ollama:</p><pre>curl -fsSL https://ollama.com/install.sh | sh</pre><p>Verify installation:</p><pre>curl http://localhost:11434/api/tags</pre><p>Then download one of these models from the <a href="https://ollama.com">ollama.com</a> website:</p><ul><li><a href="https://ollama.com/library/qwen3-coder">qwen3-coder</a> — Optimized for coding tasks</li><li><a href="https://ollama.com/library/glm-4.7">glm-4.7</a> — Strong general-purpose model</li><li><a href="https://ollama.com/library/gpt-oss:20b">gpt-oss:20b</a> — Balanced performance and speed</li><li><a href="https://ollama.com/library/gpt-oss:120b">gpt-oss:120b</a> — Improved capability</li></ul><p>For example:</p><pre>ollama run qwen3-coder</pre><h4>Step 2 — Install OpenClaw</h4><p>Install OpenClaw:</p><pre>curl -fsSL https://openclaw.ai/install.sh | bash</pre><p>Run OpenClaw with Ollama using this command:</p><pre>ollama launch openclaw</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/560/1*XJ5KAPqji8Nmm0WwytPevg.png" /><figcaption>Run OpenClaw with Ollama</figcaption></figure><h4>Video Walkthrough</h4><iframe 
src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FdRXWkHSTJG4%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DdRXWkHSTJG4&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FdRXWkHSTJG4%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/a1a9cfbf09e64632d0a6ac0960a4afff/href">https://medium.com/media/a1a9cfbf09e64632d0a6ac0960a4afff/href</a></iframe><p><strong><em>Watch on YouTube: </em></strong><a href="https://youtu.be/dRXWkHSTJG4"><strong><em>How to Set Up OpenClaw with Ollama</em></strong></a></p><h3>Security: The “Kernel Module” Warning</h3><p>As of the March 2026 security updates, OpenClaw’s broad permissions are a double-edged sword, because it operates at the kernel/OS level:</p><ul><li><strong>Disable Web Search:</strong> For a fully local workflow, set search to false in your config to ensure no data snippets are sent to search engines.</li><li><strong>Audit Your Logs:</strong> OpenClaw saves every action in a local log. Periodically check these to ensure your agent isn’t performing “ghost actions.”</li><li><strong>Human in the Loop:</strong> Always keep tool permissions set to “ask” for sensitive commands like rm -rf or sending external emails.</li></ul><h3>Conclusion</h3><p>If you follow the steps in this guide, you should now have a working OpenClaw setup running with a local model.</p><p>Try it out, experiment with different models, and see what kinds of workflows you can automate.</p><p>And if you discover something interesting, feel free to share it. I’m always curious to see how people are using these tools.</p><p>Cheers, proflead! ;)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d717e403a197" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[This AI Uses Spaced Repetition to Help You Remember More]]></title>
            <link>https://medium.com/@proflead/this-ai-uses-spaced-repetition-to-help-you-remember-more-1b2ed2c4ce13?source=rss-793771e1fd56------2</link>
            <guid isPermaLink="false">https://medium.com/p/1b2ed2c4ce13</guid>
            <category><![CDATA[recall]]></category>
            <category><![CDATA[ai-for-studies]]></category>
            <category><![CDATA[active-recall]]></category>
            <category><![CDATA[how-to-study-effectively]]></category>
            <category><![CDATA[spaced-repetition]]></category>
            <dc:creator><![CDATA[proflead]]></dc:creator>
            <pubDate>Fri, 27 Feb 2026 13:00:22 GMT</pubDate>
            <atom:updated>2026-03-02T10:39:31.691Z</atom:updated>
<content:encoded><![CDATA[<p>Think back for a moment: how much do you really remember from what you took in last week?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*nZ__KdwUYEgNaUcoPnOxnw.png" /><figcaption>This AI Uses Spaced Repetition to Help You Remember More</figcaption></figure><p>Maybe you watched some educational videos, saved a few articles, bookmarked tutorials, or highlighted sections of a long report. It felt productive at the time, but recalling those details days later is usually much harder.</p><p>Getting information isn’t the hard part anymore; there’s plenty of it everywhere. The real challenge is keeping it in your memory. If we don’t review what we learn, our brains just let it slip away.</p><p><em>Research shows we forget most of what we learn within days unless we actively revisit it. Spaced repetition is a learning technique that resurfaces material at increasing intervals, right before you’re likely to forget it.</em></p><p>Modern apps make it super easy to save content. With one click, it’s stored for “later.” But over time, these saved items pile up into big personal libraries that most of us rarely revisit.</p><p>Saving things can feel productive, but it’s not the same as actually learning.</p><p>This gap between taking in information and actually remembering it is what got me interested in <a href="https://www.getrecall.ai/?t=proflead"><strong>Recall</strong></a><strong>. 
</strong>It’s an AI-powered knowledge base designed not just to organize what you find, but to help you remember it.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FE5AoeSj3_a0%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DE5AoeSj3_a0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FE5AoeSj3_a0%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/59e6383689c6b038441ccf30056918ac/href">https://medium.com/media/59e6383689c6b038441ccf30056918ac/href</a></iframe><p><strong><em>Watch on YouTube: </em></strong><a href="https://youtu.be/E5AoeSj3_a0?si=qAhjUcTWs-dWtYNI"><strong><em>This AI Uses Spaced Repetition to Help You Remember More</em></strong></a></p><h3>From Saving Content to Building Knowledge</h3><p>If you work in a field where learning is constant, you know the routine. Developers keep up with new tech, students juggle big reading lists, and professionals try to stay up to date in fast-changing industries.</p><p>It’s natural to save anything that seems useful: a technical article, a conference talk, or a research paper you want to check out later. Over time, these add up to a personal archive. But building a library isn’t the same as building real knowledge. Even if we read or watch something closely, if we don’t review it, most of the details fade in just a few days. What’s left is usually just a vague sense of familiarity, not something you can use with confidence.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gbMFRR3yIB5T-RfKIYgclQ.png" /></figure><p>Some people try to solve this with note-taking systems or flashcards. These can work, but they usually take a lot of manual effort: organizing notes, making cards, and connecting ideas.</p><p><strong>Recall</strong> takes a different approach. 
Instead of making you build a learning system yourself, it automates much of the process and helps you go back to what you’ve saved.</p><h3>What Recall Does</h3><p>Recall is more of an AI-powered knowledge base than a regular note-taking app. You can save content from lots of sources: YouTube videos, articles, PDFs, podcasts, and your own notes. After you save something, the platform creates summaries and organizes them for you.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tF1iTO8m-i-pA-WQYEW-Mw.png" /><figcaption>You can save content from lots of sources: YouTube videos, articles, PDFs, podcasts, and your own notes</figcaption></figure><p>Over time, Recall links related topics together, creating a structured network rather than just separate notes. You can even chat with your knowledge base to bring up ideas from the content you’ve collected.</p><p>While other AI tools are adding similar features, Recall focuses more on helping you remember what you’ve saved. That’s where Quiz 2.0 comes in.</p><h3>A Closer Look at Quiz 2.0</h3><p>Recall already had a quiz feature, but the new version adds more options. Now, Quiz 2.0 lets you use open-ended questions and flashcards, so you practice recalling information from memory instead of just picking the right answer.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hm3WrQb2qLFBOPeHfeciNg.png" /><figcaption>A Closer Look at Quiz 2.0</figcaption></figure><p>The goal is to help you use <strong>active recall</strong>, a learning method that improves memory by making your brain work harder. Studies show that this effort is important for building long-term memory.</p><p>One helpful feature is that you can quiz yourself on almost anything. You are not limited to online videos or articles. You can also create quizzes from your own notes, which is great for studying lectures or personal research. 
If you prefer to stay organized, you can write notes in Recall and turn them into quizzes whenever you want to review.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1017/1*7UrKJtRtw1k_LcDve5Lksw.png" /><figcaption>You can quiz yourself on almost anything</figcaption></figure><p>The system uses <a href="https://www.getrecall.ai/post/supercharge-your-memory-using-spaced-repetition-2023">spaced repetition</a> in the background. This means material comes back at longer intervals, usually just before you might forget it. If you get a question right, you will see it less often. If you have trouble, it comes back sooner. This way, you can focus on what you need to practice most without repeating what you already know.</p><p>You can customize your quiz setup in many ways. Choose the topics, set the difficulty, pick how many questions you want, and use a timer for focused sessions. This makes each review fit your schedule and learning goals, so the experience feels flexible instead of fixed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FxUTZmOdPjRgqjTWoiE7JA.png" /><figcaption>You can customize your quiz setup</figcaption></figure><h3>Using Recall in Practice</h3><p>To really test the platform, I saved content I actually wanted to remember, mostly technical videos and long articles. <a href="https://chromewebstore.google.com/detail/recall-summarize-anything/ldbooahljamnocpaahaidnmlgfklbben">The browser extension</a> made it quick to capture items, and after saving, <strong>Recall</strong> automatically generated a summary and added tags.</p><p>After I had a few items saved, I tried out Quiz 2.0. That’s when it changed from just storing things to actually interacting with them.</p><p>Some questions were direct, while others forced me to hesitate and think. Open-ended questions, in particular, required me to retrieve ideas without prompts — something that is noticeably harder than identifying the correct answer. 
That effort is important because learning tends to deepen when the brain has to work for the information.</p><p>There are features to help you stay consistent, too. A streak tracker motivates you to review regularly, and optional timers can cut down on distractions during short study sessions. You can also share quizzes to challenge a friend or coworker. For students or teams, this social side could make things more motivating.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-J_AymwMwW3QiI0_1RM23w.png" /></figure><h3>Does It Actually Help You Remember?</h3><p>No tool can promise perfect memory, and <a href="https://www.getrecall.ai/?t=proflead"><strong>Recall</strong></a> doesn’t claim to. What it does offer is structure, and that often makes the difference in whether you remember things. The quizzes acted as gentle reminders, turning saved content into something you use again.</p><p>Without those reminders, many knowledge tools just end up as archives rather than real learning spaces. Quiz 2.0 makes it easier to review what you’ve saved, and that lower effort could help you stick with learning over time.</p><p>This approach reflects a broader shift toward using AI to support <a href="https://www.getrecall.ai/post/rediscover-the-joy-of-learning-with-ai-tools-to-help-you-actively-learn">active learning </a>instead of passive consumption. Rather than just collecting information, you interact with it, test your understanding, and revisit key ideas when needed.</p><h3>Where It Fits</h3><p>There are lots of knowledge tools out there, each with its own approach. Some let you customize everything, while others focus on manual notes or flashcards.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/719/1*kIQiXDnR-vB12c32zvGg8w.png" /><figcaption>Competitor</figcaption></figure><p><strong>Recall</strong> is all about automation, summaries, organizing, connecting ideas, and making quizzes, all with very little setup. 
If you like moving quickly from saving to reviewing, this style might feel right for you.</p><p>Instead of replacing all your other tools, it’s better to see getrecall.ai as something that can fit alongside what you already use.</p><h3>Conclusion</h3><p>The toughest part of learning today isn’t finding information; it’s holding onto it. Getrecall.ai tries to solve this by mixing knowledge storage with regular review, and Quiz 2.0 is a big part of that. By supporting active recall and smart review timing, it helps turn what you save into lasting knowledge.</p><p>It’s not a magic fix, and you still have to be consistent. But tools that make it easier to review what you’ve learned can help you remember more over time. If you often feel like you forget most of what you take in, this might be worth a try.</p><p>You can find out more at <strong>Recall</strong>. There’s a free version to try and a premium plan with extra features. If you want to upgrade, use the promo code <strong>prof25</strong> for 25% off a subscription until April 1, 2026.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1b2ed2c4ce13" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Codex GPT-5.3 vs Claude Opus 4.6: Which $20 Subscription Should You Buy in 2026?]]></title>
            <link>https://medium.com/@proflead/codex-gpt-5-3-vs-claude-opus-4-6-which-20-subscription-should-you-buy-in-2026-9459735ca898?source=rss-793771e1fd56------2</link>
            <guid isPermaLink="false">https://medium.com/p/9459735ca898</guid>
            <category><![CDATA[ai-coding-agent]]></category>
            <category><![CDATA[gpt35]]></category>
            <category><![CDATA[claude-opus-4-6]]></category>
            <category><![CDATA[codex-cli]]></category>
            <category><![CDATA[claude-code]]></category>
            <dc:creator><![CDATA[proflead]]></dc:creator>
            <pubDate>Mon, 16 Feb 2026 12:48:37 GMT</pubDate>
            <atom:updated>2026-02-16T12:48:37.718Z</atom:updated>
            <content:encoded><![CDATA[<p>If you are a new developer, you have probably hit the “subscription wall.” You have $20 a month to spend on an AI coding assistant, but you don’t know which one to pick.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WCNIsFvQTe_CtiycU-dq_g.png" /><figcaption>Codex GPT-5.3 vs Claude Opus 4.6: Which $20 Subscription Should You Buy in 2026?</figcaption></figure><p>Just a few years ago, developers were debating whether AI could help them write code. Today, that question is gone. AI is already part of the workflow.</p><p>Now the real question is: Which AI coding assistant should you use?</p><p>Two tools are currently dominating conversations among developers: <strong>GPT-5.3 Codex</strong> and <strong>Claude Opus 4.6</strong>. Both promise faster development, smarter debugging, and the ability to build real applications in minutes. Both are available on roughly similar entry-level plans.</p><p>But if you are a <strong>new developer</strong>, choosing the wrong tool can slow your progress. The right one, on the other hand, can dramatically accelerate your learning.</p><p>To understand the real differences, I tested both models across multiple developer tasks, from building apps to debugging code to creating a playable game.</p><p>So what actually happened? 
Please read this article or watch my YouTube video:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FqsYcOn1fKIk%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DqsYcOn1fKIk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FqsYcOn1fKIk%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/2ba35f5e6357bfb52b277ff335148ff7/href">https://medium.com/media/2ba35f5e6357bfb52b277ff335148ff7/href</a></iframe><p><strong><em>Watch on YouTube: </em></strong><a href="https://youtu.be/qsYcOn1fKIk"><strong><em>Claude Opus 4.6 or Codex CLI GPT-5.3</em></strong></a></p><h3>Testing Approach</h3><p>For this comparison, I intentionally avoided complex prompts.</p><p>Why? Because beginner developers rarely write perfect instructions. Most people simply describe what they want and expect the AI to figure it out.</p><p>So I used simple, realistic prompts and ran both tools side by side:</p><ul><li>Claude Opus 4.6 inside Claude Code</li><li>GPT-5.3 through Codex CLI</li></ul><p>The goal was not to chase benchmarks. It was to observe how these models behave in real development scenarios.</p><h3>Building a Full-Stack Application</h3><p>The first test was straightforward: create a full-stack to-do application.</p><p>At first, Codex looked faster. It immediately started generating code, while Claude paused to produce a structured plan.</p><p>This difference is more important than it might seem.</p><p>Claude’s plan acts like a blueprint. If something is wrong, you can fix the direction early — before hundreds of lines of code are written. For beginners, especially, this reduces confusion later.</p><p>Surprisingly, Claude finished the entire app in about four minutes. 
Codex followed roughly two minutes later.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kJ323L-9JVycY8rVPWAWLQ.png" /><figcaption>Claude at the left, Codex at the right</figcaption></figure><p>Codex delivered a more polished interface and even included basic form validation. Claude’s version looked simpler and skipped validation, but its internal structure felt more deliberate.</p><p>What I’ve noticed is that Codex is often optimized for results, whereas Claude is optimized for the process.</p><p>Neither approach is inherently better; it depends on what you value.</p><h3>The Hidden Reality: Usage Limits</h3><p>Pricing between these tools appears similar at first glance, but the real difference shows up in daily usage limits.</p><p>Eventually, both platforms may push you toward API usage, which adds extra cost. However, Codex typically allows larger daily workloads before hitting restrictions.</p><p>For beginners who are experimenting, breaking things, and trying again, this matters more than most people realize.</p><p>Running into a limit in the middle of a project is not just annoying; it interrupts learning.</p><h3>Debugging a Broken Application</h3><p>Next, I deliberately removed the backend from the app and asked both models to diagnose the problem.</p><p>Claude identified the issue in about thirty seconds and fixed it using minimal context.</p><p>Codex also solved it correctly, but took roughly twice as long and used significantly more tokens.</p><p>The difference highlights something fundamental: Debugging is not about speed; it is about reasoning. Claude clearly demonstrated strength in that area.</p><h3>Analyzing Architecture</h3><p>When asked to review the codebase and suggest improvements, both models performed well.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FQ8KJd4jP64m1-9RvWx0LA.png" /><figcaption>Analyzing Architecture</figcaption></figure><p>However, Claude produced more detailed feedback. 
Its suggestions were clearer, better formatted, and easier to follow.</p><p>Codex was not far behind; it simply felt less granular.</p><p>For an experienced engineer, that gap might not matter. For a beginner, clarity can make a huge difference.</p><p>You can see more examples in my video:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FqsYcOn1fKIk%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DqsYcOn1fKIk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FqsYcOn1fKIk%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/2ba35f5e6357bfb52b277ff335148ff7/href">https://medium.com/media/2ba35f5e6357bfb52b277ff335148ff7/href</a></iframe><p><strong><em>Watch on YouTube: </em></strong><a href="https://youtu.be/qsYcOn1fKIk"><strong><em>Claude Opus 4.6 or Codex CLI GPT-5.3</em></strong></a></p><h3>The Most Important Insight</h3><p>After all the tests, one pattern became impossible to ignore.</p><p><strong>Codex behaves like an executor.</strong> It moves quickly, writes code immediately, and focuses on momentum.</p><p><strong>Claude behaves like an architect.</strong> It plans, clarifies requirements, and carefully structures solutions.</p><p>This is not a matter of one being superior. They are built for different ways of working.</p><h3>So Which One Should You Choose?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_6wvOPGTA858dYCceyNopQ.png" /><figcaption>So Which One Should You Choose?</figcaption></figure><p>If you already subscribe to ChatGPT Plus, sticking with Codex makes sense. It is fast, capable, and its larger limits support frequent experimentation.</p><p>If you are already using Claude Pro, there is little reason to switch. 
Claude excels at planning, architecture, and producing structured code.</p><p>However, be aware that heavier usage may require higher-tier plans.</p><p>For developers starting from zero, the decision becomes more nuanced.</p><p>Codex is often the easier entry point simply because you can use it more without hitting limits. More usage means more practice, and practice is what builds skill.</p><p>Later, as your projects grow more complex, Claude becomes incredibly valuable for deeper engineering tasks.</p><p>In simple terms:</p><ul><li><strong>Codex helps you move faster.</strong></li><li><strong>Claude helps you think better.</strong></li></ul><p>The strongest developers will likely use both.</p><p>Use them wisely, keep learning, and focus on fundamentals.</p><p>Let me know which one you choose in the comments below!</p><p>Cheers, proflead! ;)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9459735ca898" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[This AI No-Code Tool Builds REAL Apps, Not Just Prototypes]]></title>
            <link>https://medium.com/@proflead/this-ai-no-code-tool-builds-real-apps-not-just-prototypes-ee7f15b2050c?source=rss-793771e1fd56------2</link>
            <guid isPermaLink="false">https://medium.com/p/ee7f15b2050c</guid>
            <category><![CDATA[no-code-app-builder]]></category>
            <category><![CDATA[vibe-coding]]></category>
            <category><![CDATA[ai-app-development]]></category>
            <category><![CDATA[no-code-development]]></category>
            <category><![CDATA[softr]]></category>
            <dc:creator><![CDATA[proflead]]></dc:creator>
            <pubDate>Tue, 10 Feb 2026 11:14:01 GMT</pubDate>
            <atom:updated>2026-02-10T11:22:15.158Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hak4EY9dMSFGd1hqZi8tBQ.png" /><figcaption>This AI No-Code Tool Builds REAL Apps, Not Just Prototypes</figcaption></figure><p>AI app builders have become very good at generating interfaces. From a technical perspective, that part is mostly solved. The harder problem is what comes next: connecting real data, enforcing user permissions, handling workflows, and deploying the app after it is generated.</p><p>I was recently introduced to Softr, an AI no-code platform for building full-stack web apps. You can start from scratch using Softr’s database or build on existing data. At first, I thought it was another AI web app builder, but I was wrong. So give me about 5–10 minutes, and I will show you what you can do with it.</p><h3>What Softr Is</h3><p><a href="http://softr.io/?utm_source=vlad_guzey&amp;utm_medium=influencer&amp;utm_campaign=build_an_ai_app_vibe_coding_block&amp;utm_content=vlad_guzey_yt_article">Softr</a> is a full-stack no-code app builder for turning real data into real applications. 
It’s built for teams and builders who want to create internal tools, dashboards, and operational apps without writing code, while keeping structure, security, and control.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LpsPrarWy_D1ufru-Juz-A.png" /><figcaption>What Softr Is</figcaption></figure><p>Unlike many tools that focus only on the front end, Softr natively provides the three core components of an app: the interface, the database, and workflows.</p><p>The interface is where users log in and interact, the database stores structured data, and workflows make the app dynamic by triggering automations and integrations.</p><h3>What You Can Build with Softr</h3><p>For example, you can build an internal sales dashboard connected to a spreadsheet, a client portal where each customer sees only their own data, an operations tool for managing records and statuses, or a reporting app that updates automatically as data changes.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/795/1*ZKEeSmZ2N6864KDInH4p8w.png" /><figcaption>What You Can Build with Softr</figcaption></figure><p>You can start from scratch using Softr Databases as your backend, or you can migrate or use existing data from other tools. Softr supports more than 17 data sources, including:</p><ul><li>Google Sheets</li><li>Notion</li><li>HubSpot</li><li>Supabase</li><li>SQL</li><li>Airtable</li><li>Softr’s own database</li><li>And more</li></ul><p>Once the data is connected, you build pages that display, filter, and update that data. Users can log in, see different views based on their role, and interact with the app in a controlled way. All of this you can do without writing a single line of code!</p><h3>Workflows and Integrations</h3><p>Beyond data and UI, Softr includes native workflows that are deeply integrated into the apps you build. 
These workflows let your app respond to events and interact directly with your app’s data.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/871/1*5NdO1qal2r5rkkAysC-0jQ.png" /><figcaption>Workflows and Integrations</figcaption></figure><p><strong>For example, a workflow can:</strong></p><ul><li>Update app data (e.g. when all tasks are marked Done, automatically mark the project as Done in the database)</li><li>Connect to other tools (e.g. send reminder emails before a task’s due date, notify your team on Slack)</li><li>Use AI (e.g. generate an AI summary from a record and send it by email)</li></ul><p>Workflows can be triggered by user actions such as button clicks or form submissions. While the workflow runs, the user can see a loading state, then be redirected to another page or shown a personalized message. This tight integration is a key advantage compared to using external automation tools alone.</p><p>Softr also integrates with third-party automation platforms such as Zapier, Make, and n8n, enabling your app to connect to larger systems rather than operate in isolation.</p><h3>AI in Softr</h3><p>On top of this foundation, Softr adds several AI-powered features, including the Vibe Coding block, Database AI Agents, Ask AI, and other AI-assisted capabilities.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/871/1*F7fSls3GGpvWmTC2vYl3gA.png" /><figcaption>AI in Softr</figcaption></figure><p>Each of these serves a different purpose, but the most interesting for building custom apps is AI for the builder, including the Vibe Coding block and AI-assisted database and workflow creation. This is where the rest of this article will focus.</p><h3>Vibe Coding Block</h3><p>Most no-code platforms rely on predefined components. These components are useful and fast, and for most business apps, they should not be reinvented. 
Tables, kanban boards, forms, and simple charts are common patterns that users already understand.</p><p>But this approach also creates a ceiling. When you reach the small set of blocks that need deeper customization, predefined components are no longer enough. The Vibe Coding block is Softr’s answer to that limitation.</p><p>It lets you describe custom UI and application logic in natural language and generate it directly inside a Softr app. Instead of working around existing blocks, you can define new behavior where it’s needed.</p><p>Vibe Coding blocks are fully dynamic. They natively and safely connect to real data sources, such as Softr Databases, Airtable, or Google Sheets, and respond to filters, user input, and state changes in real time. They support conditional logic and visibility rules, so interfaces behave differently based on the user, role, or context.</p><p>You can also use Vibe Coding to generate custom forms, internal tools, or interactive components that don’t exist as default blocks. This is where the platform starts to feel less like a template-based builder and more like a flexible application framework.​</p><p>Because the Vibe Coding block lives within Softr’s system, it uses the same data model, permission rules, theme, and workflows as the rest of the app. 
Any logic you generate respects access controls, follows the app’s visual theme, and can trigger the same automations and integrations, making it suitable for real, production use.</p><h3>How to Use the Vibe Coding Block</h3><p>Let’s look at a few concrete examples to see how the Vibe Coding block behaves inside a real Softr app.​</p><p>You can find the “Vibe coding” block in the “Browse blocks” section.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9aYEWLUtrXnzIf2ZH-USFQ.png" /><figcaption>Vibe coding block</figcaption></figure><p>Click the yellow plus button in order to add the block to the canvas.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VRWaHq_R9XnL_LdB9PWNXA.png" /><figcaption>Add the Vibe coding block to the canvas</figcaption></figure><p>Then select your data source from the “Source” tab</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uF7aX_0pU_JUUlyz8kSLOg.png" /><figcaption>Select your data source from the source tab</figcaption></figure><h4>Use case: Interactive Quiz</h4><p>My first example is an interactive quiz. Here is the prompt:</p><blockquote>Based on the database listing questions, generate an interactive quiz block allowing to pick an answer from the list of answers, reveal a hint to get help, and goes through the questions one by one before showing a final result screen with the score, also allowing to review all answers</blockquote><p>And this is the result:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/918/1*sD2KoWLZVOFYewe-BJnh_Q.png" /><figcaption>Interactive Quiz</figcaption></figure><h4>Use Case 2: Mini CRM</h4><p>In my second example, I showed a mini CRM. We built a “Deal pipeline”. The prompt:</p><blockquote>Create a lightweight CRM interface. Allow adding contact. Display deal stages in a Kanban view. Enable drag-and-drop to update stages. 
Show deal value totals per column.</blockquote><p>In about 2 minutes, I got my mini CRM.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*04UFAI8QMn8RZlgypQLOpQ.png" /></figure><h4>Use Case 3: Sales Dashboard</h4><p>In my final example, I created a sales dashboard using data from my Google Sheet. This is the prompt:</p><blockquote>Generate an executive sales dashboard using the sales database.</blockquote><blockquote>At the top, create KPI cards showing:<br>- Total Revenue<br>- Deals Closed This Month<br>- Average Deal Size<br>- Win Rate</blockquote><blockquote>Add visual indicators showing growth or decline compared to last month.</blockquote><blockquote>Below the KPI cards, create:</blockquote><blockquote>1. A revenue trend line chart by month.<br>2. A bar chart showing revenue by sales rep.<br>3. A funnel visualization for pipeline stages.<br>4. A table listing high-value deals over $25,000.</blockquote><blockquote>Highlight negative trends in red and positive trends in green.</blockquote><blockquote>Make the layout clean, modern, and optimized for quick executive review.</blockquote><p>The result:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WlVkNEBeRWzcK8jvWIWZBQ.png" /><figcaption>Sales Dashboard</figcaption></figure><p>And all of these without writing a single line of code!</p><h3>Video Example of Vibe Coding Block in Action</h3><p>In the video below, I’ll walk you through the process step by step. 
You will see how I’m using the Vibe coding blocks, and by the end, we’ll publish the app so it’s ready for real end users.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FVq-RStDETog%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DVq-RStDETog&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FVq-RStDETog%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/cffbe3a0f17b3e4521954897b1e484a6/href">https://medium.com/media/cffbe3a0f17b3e4521954897b1e484a6/href</a></iframe><p>Watch on YouTube: <a href="https://youtu.be/Vq-RStDETog?si=ad5yeNtVNECp27dB">Softr Tutorial — Build A Real APP, Not a Prototype</a></p><h3>Deploying the App</h3><p>Once the app is ready, deployment is straightforward. You just click a couple of buttons in the interface, and that’s it. You’re not exporting code or moving to another environment. The app you build is the app that gets published.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dcWRo5I5km4iQUIzYQ5k6Q.png" /><figcaption>Deploying the App</figcaption></figure><p>You can choose whether the app is internal, restricted to logged-in users, or publicly accessible. For internal tools, access is usually limited to specific roles. 
For public apps, you can still control which parts require authentication.</p><p>Deployment here means making the app available to real users, not sharing a prototype link.</p><h3>Softr Pricing</h3><p>You can start with <a href="http://softr.io/?utm_source=vlad_guzey&amp;utm_medium=influencer&amp;utm_campaign=build_an_ai_app_vibe_coding_block&amp;utm_content=vlad_guzey_yt_article">Softr</a> for free or choose a package below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1001/1*te7O_oIQUE66YqKMthCymQ.png" /><figcaption>Softr Pricing</figcaption></figure><h3>Conclusion</h3><p>If you already have data and want to build a real app on top of it without writing code, Softr is worth exploring. Most business apps don’t need to be reinvented, and asking AI to generate everything end-to-end only increases the risk of errors and debugging.</p><p>By using AI where it makes sense, like vibe-coding custom blocks on top of a reliable, modern no-code infrastructure, you get the flexibility of AI with the stability needed to ship robust, production-ready business apps.</p><p><a href="http://softr.io/?utm_source=vlad_guzey&amp;utm_medium=influencer&amp;utm_campaign=build_an_ai_app_vibe_coding_block&amp;utm_content=vlad_guzey_yt_article">Give it a shot</a> and share your feedback in the comments below.</p><p>Cheers, proflead! ;)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ee7f15b2050c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[OpenClaw Tutorial: How to Install & Secure Your Personal AI Bot]]></title>
            <link>https://medium.com/@proflead/openclaw-tutorial-how-to-install-secure-your-personal-ai-bot-0dde8dc71624?source=rss-793771e1fd56------2</link>
            <guid isPermaLink="false">https://medium.com/p/0dde8dc71624</guid>
            <category><![CDATA[openclaw-security]]></category>
            <category><![CDATA[openclaw]]></category>
            <category><![CDATA[openclaw-tutorial]]></category>
            <category><![CDATA[clawdbot]]></category>
            <category><![CDATA[openclaw-bot]]></category>
            <dc:creator><![CDATA[proflead]]></dc:creator>
            <pubDate>Wed, 04 Feb 2026 11:53:50 GMT</pubDate>
            <atom:updated>2026-02-04T12:00:22.163Z</atom:updated>
            <content:encoded><![CDATA[<p>This guide covers how to set up OpenClaw (formerly Clawdbot) on your local machine and, most importantly, how to secure it so strangers can’t access your computer. If you are ready, then let’s get started! :)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OtXelnNhFB2G37HLje_1QA.png" /><figcaption>OpenClaw Tutorial: How to Install &amp; Secure Your Personal AI Bot</figcaption></figure><h3>How to Set Up OpenClaw</h3><h4>Install OpenClaw</h4><p>First, open your terminal (Command Prompt or Terminal on Mac/Linux). You need to install the tool globally. Run this command:</p><pre>curl -fsSL https://openclaw.ai/install.sh | bash</pre><p>OR if using npm directly:</p><pre>npm install -g openclaw</pre><h4>Run the Onboarding Wizard</h4><p>Once installed, start the configuration process:</p><pre>openclaw onboard</pre><ul><li><strong>Security Warning:</strong> You will see a warning that the bot works on your local machine. Read it and accept.</li><li><strong>Quick Start:</strong> Select “Quick Start” for the easiest setup.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/763/1*qMHMT8pL0rmFtLVXc1iOGg.png" /></figure><p><strong>Model Selection:</strong> Choose your AI provider (e.g., OpenAI Codex or GPT-4). You will need to log in to your provider account.</p><p><strong>Connect a chat platform</strong> — After the model is selected, OpenClaw asks you to set up a chat interface. 
Select your preferred platform (e.g., <strong>Telegram</strong>).</p><ol><li>Open Telegram and search for <strong>@BotFather</strong>.</li><li>Send the command /newbot.</li><li>Give your bot a name and a username (must end in _bot).</li><li><strong>Copy the Token</strong> provided by BotFather.</li><li>Paste this token into your terminal when OpenClaw asks for it.</li></ol><p>A similar process applies to WhatsApp, Discord, and other chat platforms.</p><p><strong>Get Your User ID</strong></p><p>You need to tell OpenClaw <em>who</em> is allowed to talk to it.</p><ol><li>Search for <strong>@userinfobot</strong> in Telegram.</li><li>Click “Start” to see your ID (a number).</li><li>Copy and paste this ID into the OpenClaw terminal.</li></ol><p><strong>Pair Your Bot</strong></p><p>Restart your gateway to apply changes:</p><pre>openclaw gateway restart</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/428/1*TSjeg5jbhGDM_if3eJ5Uwg.png" /><figcaption>Pair Your Bot</figcaption></figure><p><strong>Configure skills (optional)</strong> — OpenClaw can install skills (tools) to perform tasks such as sending emails or editing files. During onboarding, you can skip or install skills. If you choose to install, use <strong>npm</strong> as the node manager; otherwise, select <strong>Skip for now</strong>.</p><p><strong>Provide API keys (optional)</strong> — Some skills require API keys (e.g., Brave Search API). During setup, you can say <strong>No</strong> if you don’t have keys yet.</p><p><strong>Choose UI</strong> — OpenClaw offers a web‑based <strong>Control UI</strong> or a <strong>TUI</strong>. The TUI keeps everything in the command line and is recommended for first‑time setup. When ready, select <strong>Hatch in TUI</strong> to start the bot’s personality configuration. The bot will ask for its name and how to address you. 
After that, OpenClaw is ready to chat via the terminal and your chosen chat platform.</p><p>If you get stuck, please watch my YouTube tutorial:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FD9j2t_w5lps%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DD9j2t_w5lps&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FD9j2t_w5lps%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/449de2efba5c4e3e4a65793c6400dec1/href">https://medium.com/media/449de2efba5c4e3e4a65793c6400dec1/href</a></iframe><p><strong><em>Watch on YouTube: </em></strong><a href="https://youtu.be/D9j2t_w5lps?si=UjSh5YFC-16u8fbv"><strong><em>How to Set Up OpenClaw</em></strong></a></p><h4>Extending capabilities</h4><p>OpenClaw can perform additional tasks after the initial setup.</p><ul><li><strong>Web searches</strong> — If you ask the bot how to perform web searches, it will guide you through obtaining an API key (for example, from the Brave Web Search API) and sending it to the bot via chat. Once the key is set, OpenClaw can search the web and return results.</li><li><strong>File operations</strong> — You can instruct your bot to research a topic and save the results to a Markdown file. The bot will generate the file and include citations.</li></ul><p>Remember that each new capability increases the bot’s permissions, so enable them carefully and keep security in mind.</p><h3>How to Secure OpenClaw</h3><p>By default, giving an AI access to your computer carries risks. 
Follow these steps to lock it down.</p><h4>Restrict Gateway Access</h4><p>Your bot shouldn’t be visible to the whole internet.</p><ul><li>Open your config file: ~/.openclaw/openclaw.json</li><li>Find the gateway section.</li><li>Change the address 0.0.0.0 to 127.0.0.1 (loopback). This ensures only <em>you</em> (localhost) can access the gateway.</li></ul><h4>Enable Authentication</h4><p>Make sure your gateway requires a token:</p><ul><li>In the same config file, ensure authentication is set to mode: &quot;token&quot;.</li><li>Verify a token is present. Treat this token like a password.</li></ul><h4>Set Channel Policies</h4><p>Don’t let your bot talk to strangers.</p><ul><li><strong>DM Policy:</strong> Set to &quot;pairing&quot; (requires approval).</li><li><strong>Group Policy:</strong> Set to &quot;disabled&quot; so the bot can&#39;t be added to public groups where it might leak data.</li></ul><pre>...<br>  &quot;channels&quot;: {<br>    &quot;telegram&quot;: {<br>      &quot;dmPolicy&quot;: &quot;pairing&quot;,<br>      &quot;groupPolicy&quot;: &quot;disabled&quot;<br>    }<br>  }<br>...</pre><h4>Secure Your Credentials</h4><p>Protect the files that store your API keys. Run this command to make sure only <em>your user</em> can read the credentials file:</p><pre>chmod 700 ~/.openclaw/credentials</pre><h4>Run a Security Audit</h4><p>OpenClaw has a built-in tool to check for holes. Run this regularly:</p><pre>openclaw security audit --deep --fix</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jzPTRtpBXB5hC6U2ansduQ.png" /><figcaption>Run a Security Audit</figcaption></figure><p>If it finds issues, you can often fix them automatically with:</p><pre>openclaw doctor --fix</pre><h4>Watch Out for “Prompt Injection”</h4><p>Be careful when asking your bot to browse the web or read untrusted files. Bad actors can hide commands in text that trick the AI. 
Always use the Sandbox environment when experimenting with untrusted data.</p><h4>Final Step</h4><p>After applying these security fixes, always restart your gateway:</p><pre>openclaw gateway restart</pre><p>If you want a simple walkthrough, please check my video tutorial:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Frep62KFHtRE%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Drep62KFHtRE&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Frep62KFHtRE%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/09fc0424948803f5a02aeb410513e932/href">https://medium.com/media/09fc0424948803f5a02aeb410513e932/href</a></iframe><p><strong><em>Watch on YouTube: </em></strong><a href="https://youtu.be/rep62KFHtRE?si=FONdBK7aoKCoEddD"><strong><em>How to secure OpenClaw Bot</em></strong></a></p><h3>Conclusion</h3><p>OpenClaw gives you the power of a personal AI assistant that runs on your own hardware. When configured correctly, it can search the web, manage files, and respond to your chat messages across multiple platforms. However, because it uses tools that can execute commands on your system, security must be a first‑class concern.</p><p>Stay safe! Cheers! :)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0dde8dc71624" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Running Claude Code with Local Models Using Ollama: A Comprehensive Guide]]></title>
            <link>https://medium.com/@proflead/running-claude-code-with-local-models-using-ollama-a-comprehensive-guide-8772ad9f2df0?source=rss-793771e1fd56------2</link>
            <guid isPermaLink="false">https://medium.com/p/8772ad9f2df0</guid>
            <category><![CDATA[coding-agents]]></category>
            <category><![CDATA[claude-code-offline]]></category>
            <category><![CDATA[local-llm]]></category>
            <category><![CDATA[claude-code]]></category>
            <category><![CDATA[ollama]]></category>
            <dc:creator><![CDATA[proflead]]></dc:creator>
            <pubDate>Sat, 24 Jan 2026 13:43:29 GMT</pubDate>
            <atom:updated>2026-01-24T13:45:03.196Z</atom:updated>
<content:encoded><![CDATA[<p>In January 2026, Ollama added <strong>support for the Anthropic Messages API</strong>, enabling Claude Code to connect directly to any Ollama model. This tutorial explains how to install Claude Code, pull and run local models using Ollama, and configure your environment for a seamless local coding experience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RiBt9qzaxek8cuv1kEYb3Q.png" /><figcaption>Running Claude Code with Local Models Using Ollama</figcaption></figure><h3>Installing Ollama</h3><p><strong>Ollama</strong> is a locally deployed AI model runner that lets you download and run large language models on your own machine. It provides a command-line interface and an API, supports open models such as Mistral and Gemma, and uses quantization to make models run efficiently on consumer hardware. A <em>Modelfile</em> allows you to customise base models, system prompts, and parameters (temperature, top-p, top-k). Running models locally gives you offline capability and protects sensitive data.</p><p>To use Claude Code with local models, you need <strong>Ollama v0.14.0 or later</strong>. The January 2026 blog notes that this version implements Anthropic Messages API compatibility. For streaming tool calls (used when Claude Code executes functions or scripts), a pre-release such as 0.14.3‑rc1 may be required.</p><pre>curl -fsSL https://ollama.com/install.sh | sh</pre><p>After installation, verify the version with ollama --version.</p><h4>Pulling a model</h4><p>Choose a local model suitable for coding tasks. You can browse the full list at <a href="https://ollama.com/search">https://ollama.com/search</a>. Pulling a model downloads and configures it. 
For example:</p><pre># Pull the 20B-parameter GPT‑OSS model<br>ollama pull gpt-oss:20b</pre><pre># Pull Qwen Coder (a general coding model)<br>ollama pull qwen3-coder</pre><p>To use Claude Code’s advanced tool features locally, the article <strong>Running Claude Code fully local</strong> recommends <strong>GLM-4.7-flash</strong> because it supports tool-calling and provides a 128K context length. Pull it with:</p><pre>ollama pull glm-4.7-flash:latest</pre><h3>Installing Claude Code</h3><p><strong>Claude Code</strong> is Anthropic’s agentic coding tool. It can read and modify files, run tests, fix bugs, and even handle merge conflicts across your entire code base. It uses large language models to act as a pair of autonomous hands in your terminal, letting you <strong>vibe-code</strong> (describing what you want in plain language and letting the AI generate the code).</p><pre>curl -fsSL https://claude.ai/install.sh | bash</pre><p>From your terminal, run:</p><pre>export ANTHROPIC_AUTH_TOKEN=ollama<br>export ANTHROPIC_BASE_URL=http://localhost:11434</pre><pre># Launch the integration interactively<br>ollama launch claude</pre><p>You will then see the list of models you installed in the previous step. Select the one you want to test, then hit Enter.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/797/1*LWgghoZoluWfGStXU8jTAw.png" /><figcaption>Model list</figcaption></figure><p>And that’s it! 
Now Claude Code works with Ollama and local models.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4yYsVJ6PdTj_HHSMj_DAbg.png" /></figure><h3>Video Tutorial</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FCOpg79ab6ug%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DCOpg79ab6ug&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FCOpg79ab6ug%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/b6ef3a003994885fd7a5b148c0aa0970/href">https://medium.com/media/b6ef3a003994885fd7a5b148c0aa0970/href</a></iframe><p><strong><em>Watch on YouTube: </em></strong><a href="https://youtu.be/COpg79ab6ug"><strong><em>Claude Code with Ollama</em></strong></a></p><h3>Summary</h3><p>By pairing <strong>Claude Code</strong> with <strong>Ollama</strong>, you can run agentic coding workflows entirely on your own machine. Don’t expect the same experience as with Anthropic’s hosted models, though!</p><p>Experiment with different models and let me know which one works best for you!</p><p>Cheers! ;)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8772ad9f2df0" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Stop Copy-Pasting Code: How to “Teleport” Your Claude Sessions]]></title>
            <link>https://medium.com/@proflead/stop-copy-pasting-code-how-to-teleport-your-claude-sessions-058d50cf5024?source=rss-793771e1fd56------2</link>
            <guid isPermaLink="false">https://medium.com/p/058d50cf5024</guid>
            <category><![CDATA[claude]]></category>
            <category><![CDATA[anthropic-claude]]></category>
            <category><![CDATA[claude-code]]></category>
            <category><![CDATA[claude-code-tips]]></category>
            <category><![CDATA[claude-ai]]></category>
            <dc:creator><![CDATA[proflead]]></dc:creator>
            <pubDate>Mon, 19 Jan 2026 09:54:13 GMT</pubDate>
            <atom:updated>2026-01-19T09:54:13.503Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*iFUYV55Uy5ZdlO3oCd_icw.png" /><figcaption>Stop Copy-Pasting Code: How to “Teleport” Your Claude Sessions</figcaption></figure><p>Modern software development rarely happens in one place. You might start a coding session at the office, but later need to finish the job from a different computer.</p><p>The usual workaround is painful: you have to push code to GitHub, pull it down on your other machine, and, worst of all, you lose your entire conversation history with your AI assistant.</p><p>Recently, I started using Session Teleportation in Claude Code. It allows you to move an entire conversation, including context, history, and the working branch, between the web and your local terminal.</p><p>In this tutorial, I show you how it works and how to use it to make your workflow seamless.</p><h3>First-Time Setup (Do This First)</h3><p>Before you can teleport anything, you need to connect your local environment to Claude’s cloud.</p><p><strong>Install and update.</strong> First, make sure you have the latest version of Claude Code.</p><pre>npm install -g @anthropic-ai/claude-code</pre><p>or use this command:</p><pre>claude update</pre><h4><strong>Turn on Claude Code on Web</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*n74G1nZ3uTZNkh4_Ks_WrA.png" /><figcaption>Turn on Claude Code on Web</figcaption></figure><p>Open the website <a href="https://claude.ai/code">https://claude.ai/code</a> and finish the onboarding.</p><p>You <strong>must</strong> connect your GitHub account. This is critical because Claude needs access to your repositories to “teleport” the code changes between devices.</p><p><em>Note: If you use an organization’s repository (like at work), you might need to click “Grant” next to your organization’s name in the GitHub permissions screen.</em></p><p>Then set up cloud environments. 
Give it a name and network access.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BCoVR1GHciIk8dIp3yR5uA.png" /><figcaption>Set up cloud environments</figcaption></figure><h3>How Session Teleportation Works</h3><p>Navigate to your project folder in the terminal and run claude. Claude will detect your git repository and make sure it has permission to access it.</p><p>The teleportation is built around two simple commands.</p><p><strong>1. The </strong><strong>&amp; Prefix (Send to Web)</strong> This is how you start a &quot;background session.&quot; If I type &amp; before my prompt in the CLI or VS Code, Claude runs the task on its cloud infrastructure.</p><ul><li><em>Example:</em> &amp; Refactor the authentication module to use JWT tokens</li></ul><p><strong>2. The </strong><strong>/teleport Command (Bring to Local):</strong> This is how you resume work. You can pull that web session into your local terminal or VS Code using claude --teleport &lt;session-id&gt;.</p><p><strong>Important Note:</strong> This process is <strong>one-way</strong>. You can pull a web session down to your terminal, but you cannot “push” an existing local session up to the web. If you think you might need to switch devices later, always start your task with the &amp; prefix!</p><h3>Moving Tasks from VS Code to the Web</h3><p>Install the Claude Code extension in VS Code or Cursor (via the Extensions panel). Once installed, you can send tasks to the web directly from within your editor.</p><p>Compose your prompt. For example, if you want Claude to refactor authentication logic, start your prompt with &amp;:</p><pre>&amp; Refactor the authentication module to use JWT tokens instead of sessions</pre><p>This creates a background web session and returns a session ID. 
The task continues even if you close VS Code or shut down your laptop.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/795/1*31QPWfrS0NA2ilXRTiRXRA.png" /><figcaption>creates a background web session</figcaption></figure><p>Monitor the session. Use <strong><em>/tasks</em></strong> in the CLI or click on the task in the web interface to see status.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*K1MotgAWtaYtMNPmBfa_Jg.png" /><figcaption>Monitor the session</figcaption></figure><p>You can also run <strong><em>claude --status &lt;session-id&gt;</em></strong> from any device.</p><h3>Pulling Web Sessions Back to Your Terminal (VS Code or Cursor)</h3><p>Locate your session. In the Claude chat, run <strong><em>/teleport (or /tp)</em></strong> to see all active web sessions. From the command line, run claude --teleport for an interactive picker or <strong><em>claude --teleport &lt;session-id&gt;</em></strong> to resume a specific session.</p><pre>claude --teleport session_01RyZ89nysBFFZnqFMZ4KpkZ</pre><p>Meet the requirements. Before teleporting, Claude checks several conditions:</p><ul><li><strong>Clean Git state:</strong> You must have no uncommitted changes. Teleport will prompt you to stash them if necessary.</li><li><strong>Correct repository: </strong>You must be in a checkout of the same repository used on the web.</li><li><strong>Branch availability: </strong>The branch created during the web session must be pushed to the remote; teleport will fetch and check it out.</li><li><strong>Same account: </strong>You must be authenticated as the same Claude.ai user.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mKbNHhr2GCreExcHRxn7IQ.png" /><figcaption>Teleport the session</figcaption></figure><p>Teleport the session. Once these conditions are satisfied, Claude will fetch the branch, load the conversation history, and attach the session to your local environment. 
You can then continue the conversation and review code in Cursor or the terminal as if you never left.</p><h3>Pro Tips for Getting the Most Out of Claude Code Teleport</h3><ul><li><strong>Parallel Work Streams:</strong> Sometimes I run multiple &amp; commands at once to start several tasks simultaneously.</li><li><strong>Team Collaboration:</strong> This is a hidden gem. I can share a Session ID with a teammate, and they can teleport into my session on their machine. It is perfect for async pair programming.</li><li><strong>One-Way Only: </strong>Remember, you can pull a web session down to your terminal, but you cannot “push” an existing local session up to the web. Always start with &amp; if you think you might need to move!</li><li><strong>Maintain a clean Git state.</strong> Teleportation requires a clean working directory. Use Git stashes or commit your changes before pulling sessions.</li></ul><h3>Claude Code Teleportation Tutorial</h3><p>I also have a video with step-by-step instructions on how to use Claude Code teleportation. Please make sure to check it out.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F2j93xjmtI9U%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D2j93xjmtI9U&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F2j93xjmtI9U%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/4b3524e8598637920bbd69f461961d61/href">https://medium.com/media/4b3524e8598637920bbd69f461961d61/href</a></iframe><p><strong><em>Watch on YouTube: </em></strong><a href="https://youtu.be/2j93xjmtI9U?si=KFnRJUOtp_K7aNJW"><strong><em>Claude Code Tutorial: Teleportation</em></strong></a></p><h3>Conclusion</h3><p>Session teleportation blurs the line between local and remote development. 
It allows you to offload compute‑heavy tasks to the cloud, then seamlessly resume work locally without losing context. This cross‑device mobility is valuable for distributed teams and individuals who switch machines throughout the day.</p><p>I hope you found this tutorial helpful. If so, please leave your comments and subscribe to <a href="https://www.youtube.com/@proflead/videos?sub_confirmation=1"><strong>my YouTube channel</strong></a>, where I share a lot of useful tutorials for devs ;).</p><p>Cheers!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=058d50cf5024" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Codex Skills Explained: The Complete Guide to Automating Your Prompts]]></title>
            <link>https://medium.com/@proflead/codex-skills-explained-the-complete-guide-to-automating-your-prompts-26dd5a89d580?source=rss-793771e1fd56------2</link>
            <guid isPermaLink="false">https://medium.com/p/26dd5a89d580</guid>
            <category><![CDATA[skill-md]]></category>
            <category><![CDATA[codex-skills]]></category>
            <category><![CDATA[claude-code-skills]]></category>
            <category><![CDATA[codex-cli]]></category>
            <category><![CDATA[openai-codex]]></category>
            <dc:creator><![CDATA[proflead]]></dc:creator>
            <pubDate>Tue, 13 Jan 2026 14:48:57 GMT</pubDate>
            <atom:updated>2026-01-13T14:48:57.600Z</atom:updated>
            <content:encoded><![CDATA[<p>If you are using the Codex CLI and find yourself writing the same instructions over and over again, you are not using the tool to its full potential. Codex offers a powerful feature called Skills that allows you to package reusable workflows and give your AI agent new capabilities on demand. If you want to know more about it, then read this article until the end.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gQLj8HWL_0fArBl3zmYtqQ.png" /><figcaption>Codex Skills Explained: The Complete Guide to Automating Your Prompts</figcaption></figure><h3>What Are Codex Skills?</h3><p>A Codex Skill is a reusable workflow packaged into a folder. Instead of rewriting the same instructions every time, you write them once inside the skill and let Codex handle the work.</p><p>Skills help you extend Codex with specific expertise and save time.</p><h3>How Skills Work — Progressive Disclosure</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Bc3G64K9fwl1LbVPbXHffA.png" /><figcaption>How Skills Work — Progressive Disclosure</figcaption></figure><p>Skills use a method called Progressive Disclosure:</p><ul><li>Startup: Codex loads only the names and descriptions of all skills.</li><li>On Demand: When you request a skill, Codex loads the full SKILL.md file.</li><li>Efficient: Tokens are used only when needed.</li></ul><p>This keeps performance fast and context clean.</p><h3>Where Skills Live (Skill Scopes)</h3><p>Skills can be stored in different places:</p><ul><li><strong>Global Level:</strong> Across all projects</li><li><strong>User Level:</strong> Available to your user</li><li><strong>Repository Level:</strong> Inside a specific project</li><li><strong>System Level:</strong> Default built-in skills</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/915/1*_crQmSYq6rlo7uNUWIcv0g.png" /><figcaption>Where Skills Live (Skill Scopes)</figcaption></figure><h3>How to Install Existing 
Skills</h3><p>Before using or creating skills, make sure your Codex CLI is updated to the latest version. The <strong>Skill Creator</strong> and <strong>Skill Installer</strong> options depend on the latest CLI features. If your version is outdated, these options may not appear in the terminal.</p><p><strong>To install an existing skill:</strong></p><ul><li>Open Codex in the terminal</li><li>Type $</li><li>Choose Skill Installer</li><li>Enter a skill name or paste a GitHub URL</li><li>Codex installs it</li><li>Restart Codex</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*V0bxZyP-p5bWO-Mzdd2bNg.png" /><figcaption>How to Install Existing Skills</figcaption></figure><p>After a restart, type $ again, and you will see your installed skills.</p><h3>How to Create Custom Skills</h3><p>There are two ways:</p><h4>Method A: Using the CLI Creator</h4><ul><li>Start Codex and type $</li><li>Choose Skill Creator</li><li>Enter the name</li><li>Enter the instruction</li><li>Codex asks follow-up questions and builds the skill</li></ul><p>If the skill ends up outside the .codex/skills folder, you must install it manually.</p><p>Simply follow the instructions above, “How to Install Existing Skills”.</p><h4>Method B: Manual Creation (Recommended)</h4><p>A skill has a simple folder structure:</p><ul><li>skill.md (Required): Main instruction file</li><li>scripts/ (Optional): Code scripts for logic</li><li>references/ (Optional): Docs or templates</li><li>assets/ (Optional): Extra resources</li></ul><p>The template of SKILL.md file:</p><pre>---<br>name: skill-name<br>description: Description that helps Codex select the skill<br>metadata:<br>  short-description: Optional user-facing description<br>---<br><br>Skill instructions for the Codex agent to follow when using this skill.</pre><p>The example of SKILL.md file:</p><pre>---<br>name: prompt-optimization<br>description: Improve and rewrite user prompts to reduce ambiguity and improve LLM output quality. 
Use when a user asks to optimize, refine, clarify, or rewrite a prompt for better results, or when the request is about prompt optimization or prompt rewriting.<br>---<br><br># Prompt Optimization<br><br>## Goal<br><br>Improve the user&#39;s prompt so Codex (or any LLM) produces better output while preserving intent.<br><br>## Workflow<br><br>1. Read the user&#39;s original prompt carefully.<br>2. Identify ambiguity, missing context, or unclear intent.<br>3. Rewrite the prompt to remove ambiguity and provide clear instructions.<br>4. Retain the core intention of the user&#39;s request.<br>5. Add relevant constraints (format, length, style) when helpful.<br><br>## Output format<br><br>Provide:<br>- Improved prompt<br>- Short explanation of what was improved<br><br>## Constraints<br><br>- Do not assume domain knowledge not in the original prompt.<br>- Preserve user intent.<br><br>## Example triggers<br>- “Draft me an email asking for feedback.”<br>- “Turn this into a daily to-do list.”<br>- $automating-productivity</pre><p>In order to create a new skill, follow these steps:</p><ol><li>Go to .codex/skills/</li><li>Create a new folder</li><li>Inside it, create skill.md</li><li>Add:</li></ol><ul><li>Name</li><li>Short description</li><li>Full instructions</li><li>Trigger examples (optional)</li></ul><p>Once the folder exists in .codex/skills/, Codex automatically recognizes it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_x-5Vy6t9FEOCia7eHoGQA.png" /><figcaption>How to Create Custom Skills</figcaption></figure><h3>How Skills Are Detected and Triggered</h3><p>You do not always have to invoke a skill manually. Inside skill.md, you can add trigger examples. 
When you type a prompt that matches one of those examples, Codex automatically runs the correct skill.</p><p><strong>For example:</strong></p><p>If a Writing Assistant skill has examples like:</p><ul><li>“Help me write a blog post”</li><li>“Draft an introduction for a video script”</li></ul><p>And you type:</p><ul><li>Help me write an article about Codex skills</li></ul><p>Codex understands the intent and triggers the Writing Assistant.</p><p>If it doesn’t trigger automatically, you can call the skill with the <em>$[skill-name]</em> command.</p><h3>Best Practices for Creating Codex Skills</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pZ74z_3qiz5pNfbJ7tOI0w.png" /><figcaption>Best Practices for Creating Codex Skills</figcaption></figure><p>Follow these guidelines:</p><ul><li><strong>One Skill, One Job. </strong>Keep each skill focused on a single task.</li><li><strong>Zero Context Assumption. </strong>Skills should not rely on previous messages — they must be self-contained.</li><li><strong>Refine Descriptions. </strong>If a skill doesn’t trigger, adjust its description and examples.</li><li><strong>Prefer Instructions Over Scripts. 
</strong>Use text instructions before complex code scripts.</li></ul><h3>GitHub Skills Library (Ready to Use)</h3><p>To help you get started quickly, I created a curated repository of ready-to-use Codex skills:</p><p><a href="https://github.com/proflead/codex-skills-library/tree/master">https://github.com/proflead/codex-skills-library/tree/master</a></p><p>This library includes:</p><ul><li>Developer-focused skills</li><li>Team-oriented workflows</li><li>Example skills you can install or adapt</li></ul><p>Use these skills to improve your workflow or as templates for your own ideas.</p><h3>Video Tutorial On Codex Skills</h3><p>I recommend watching my video tutorial where I demonstrate everything step-by-step.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fd3Ydt6LyGeY%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dd3Ydt6LyGeY&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fd3Ydt6LyGeY%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/9e5f944b796a64b1191bf2a468a2938d/href">https://medium.com/media/9e5f944b796a64b1191bf2a468a2938d/href</a></iframe><p><strong><em>Watch on YouTube: </em></strong><a href="https://youtu.be/d3Ydt6LyGeY"><strong><em>Codex Skills 101</em></strong></a></p><h3>FAQ</h3><p><strong>Q: Do I need to use $ every time I run a skill?</strong><br>A: No. If you provide trigger examples in skill.md, Codex can automatically detect and run skills based on your prompt.</p><p><strong>Q: What happens if Codex doesn’t recognize my skill prompt?</strong><br>A: Check your skill.md description and trigger examples. Make them clearer and more specific.</p><p><strong>Q: Where should I store my skills?</strong><br>A: Store them in .codex/skills/ for project-level use. 
For global use, place them in your user skills folder or system skills folder.</p><p><strong>Q: Are scripts required inside a skill?</strong><br>A: No. Skills can work with just instructions. Only use scripts when necessary for logic that cannot be expressed in text instructions.</p><p><strong>Q: Can skills be shared with others?</strong><br>A: Yes. You can share skill folders directly or publish them on GitHub. Others can install them using the Skill Installer.</p><p><strong>Q: Will skills slow down Codex?</strong><br>A: No. Because Codex only loads names and descriptions at startup, and loads full skill content only when needed, performance remains fast.</p><h3>Conclusion</h3><p>Codex Skills are a powerful way to automate your prompts, save time, and standardize workflows. You can start using ready-to-use skills from my GitHub library to make Codex work smarter for you.</p><p>If you find this article useful, please make sure to like and share it.</p><p>Cheers! ;)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=26dd5a89d580" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Ollama Tutorial: Run LLMs locally with Ollama — CLI, Cloud, Python]]></title>
            <link>https://medium.com/@proflead/ollama-tutorial-run-llms-locally-with-ollama-cli-cloud-python-78392fa0afd7?source=rss-793771e1fd56------2</link>
            <guid isPermaLink="false">https://medium.com/p/78392fa0afd7</guid>
            <category><![CDATA[ollama]]></category>
            <category><![CDATA[local-llm]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[ow-to-use-ollama]]></category>
            <category><![CDATA[ollamatutorial]]></category>
            <dc:creator><![CDATA[proflead]]></dc:creator>
            <pubDate>Sun, 04 Jan 2026 04:12:15 GMT</pubDate>
            <atom:updated>2026-01-04T04:32:12.095Z</atom:updated>
<content:encoded><![CDATA[<h3>Ollama Tutorial: Run LLMs locally with Ollama — CLI, Cloud, Python</h3><p>Ollama has become the standard for running Large Language Models (LLMs) locally. In this tutorial, I want to show you the most important things you should know about Ollama.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FAGAETsxjg0o%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DAGAETsxjg0o&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FAGAETsxjg0o%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/8e73bb8039c3f08b4ad0cfc59d74fcad/href">https://medium.com/media/8e73bb8039c3f08b4ad0cfc59d74fcad/href</a></iframe><p><strong><em>Watch on YouTube: </em></strong><a href="https://youtu.be/AGAETsxjg0o"><strong><em>Ollama Full Tutorial</em></strong></a></p><h3>What is Ollama?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ubCZjTRx6BYlTMprZt5-hw.png" /><figcaption>Ollama Tutorial: Run LLMs locally with Ollama — CLI, Cloud, Python</figcaption></figure><p>Ollama is an open-source platform for running and managing large language models (LLMs) entirely on your local machine. It bundles model weights, configuration, and data into a single Modelfile package. Ollama offers a command-line interface (CLI), a REST API, and a Python/JavaScript SDK, allowing users to download models, run them offline, and even call user-defined functions. Running models locally gives users privacy, removes network latency, and keeps data on the user’s device.</p><h3>Install Ollama</h3><p>Download Ollama from the official website: <a href="https://ollama.com/">https://ollama.com/</a>. 
It’s available for <strong>Mac, Windows, and Linux</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sD4Wo9G8NBFtuEprBy1vTQ.png" /><figcaption>Install Ollama</figcaption></figure><p>Linux:</p><pre>curl -fsSL https://ollama.com/install.sh | sh</pre><p>macOS:</p><pre>brew install ollama</pre><p>Windows: download the .exe installer and run it.</p><h3>How to Run Ollama</h3><p>Before running models, it is essential to understand Quantization. Ollama typically runs models quantized to 4 bits (q4_0), which significantly reduces memory usage with minimal loss in quality.</p><p><strong>Recommended Hardware:</strong></p><ul><li>7B Models (e.g., Llama 3, Mistral): Require ~8GB RAM (run on most modern laptops).</li><li>13B — 30B Models: Require 16GB — 32GB RAM.</li><li>70B+ Models: Require 64GB+ RAM or dual GPUs.</li><li>GPU: An NVIDIA GPU or Apple Silicon (M1/M2/M3) is highly recommended for speed.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5J_dKRksdIU9k2gboOJmKQ.png" /><figcaption>Select the model</figcaption></figure><p>Go to<a href="https://ollama.com/search"> the Ollama website</a>, click “Models”, and select a model to test.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sJdomySQpBjGQqhSybNICA.png" /></figure><p>After that, click on the model name and copy the terminal command:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VjjDsRWwQXDVEhrhxiip6A.png" /></figure><p>Then, open the terminal window and paste the command:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hoonwzhIzPhZNrTWOEyJzQ.png" /></figure><p>This will download the model and let you chat with it immediately.</p><h4>Ollama CLI — Core Commands</h4><p>Ollama’s CLI is central to model management. 
Common commands include:</p><ul><li>ollama pull &lt;model&gt; — Download a model</li><li>ollama run &lt;model&gt; — Run a model interactively</li><li>ollama list or ollama ls — List downloaded models</li><li>ollama rm &lt;model&gt; — Remove a model</li><li>ollama create &lt;name&gt; -f &lt;Modelfile&gt; — Create a custom model</li><li>ollama serve — Start the Ollama API server</li><li>ollama ps — Show running models</li><li>ollama stop &lt;model&gt; — Stop a running model</li><li>ollama help — Show help</li></ul><h3>Advanced Customization: Custom model with Modelfiles</h3><p>You can “fine-tune” a model’s personality and constraints using a Modelfile. This is similar to a Dockerfile.</p><ul><li>Create a file named Modelfile</li><li>Add the following configuration:</li></ul><pre># 1. Base the model on an existing one<br>FROM llama3<br># 2. Set the creative temperature (0.0 = precise, 1.0 = creative)<br>PARAMETER temperature 0.7<br># 3. Set the context window size (default is 4096 tokens)<br>PARAMETER num_ctx 4096<br># 4. Define the System Prompt (The AI’s “brain”)<br>SYSTEM &quot;&quot;&quot;<br>You are a Senior Python Backend Engineer.<br>Only answer with code snippets and brief technical explanations.<br>Do not be conversational.<br>&quot;&quot;&quot;</pre><p><strong>FROM</strong> defines the base model</p><p><strong>SYSTEM</strong> sets a system prompt</p><p><strong>PARAMETER</strong> controls inference behavior</p><p>After that, build the model with this command:</p><pre>ollama create [change-to-your-custom-name] -f Modelfile</pre><p>This wraps the model + prompt template together into a reusable package.</p><p>Then run it:</p><pre>ollama run [change-to-your-custom-name]</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XflzwpGn3lcImgou9uBCFw.png" /><figcaption>Advanced Customization: Custom model with Modelfiles</figcaption></figure><h3>Ollama Server (Local API)</h3><p>Ollama can run as a local server that apps can call. 
To start the server, use:</p><pre>ollama serve</pre><p>It listens on http://localhost:11434 by default.</p><h4>Raw HTTP</h4><pre>import requests<br>r = requests.post(<br>    &quot;http://localhost:11434/api/chat&quot;,<br>    json={<br>        &quot;model&quot;: &quot;llama3&quot;,<br>        &quot;messages&quot;: [{&quot;role&quot;:&quot;user&quot;,&quot;content&quot;:&quot;Hello Ollama&quot;}],<br>        # /api/chat streams by default; disable streaming to get one JSON object<br>        &quot;stream&quot;: False<br>    }<br>)<br>print(r.json()[&quot;message&quot;][&quot;content&quot;])</pre><p>This lets you embed Ollama into apps or services.</p><h3>Python Integration</h3><p>Use Ollama inside Python applications with the official library. Run these commands:</p><p>Create and activate a virtual environment:</p><pre>python3 -m venv .venv<br>source .venv/bin/activate</pre><p>Install the official library:</p><pre>pip install ollama</pre><p>Use this simple Python code:</p><pre>import ollama<br><br># This sends a message to the model &#39;gemma:2b&#39;<br>response = ollama.chat(model=&#39;gemma:2b&#39;, messages=[<br>  {<br>    &#39;role&#39;: &#39;user&#39;,<br>    &#39;content&#39;: &#39;Write a short poem about coding.&#39;<br>  },<br>])<br><br># Print the AI&#39;s reply<br>print(response[&#39;message&#39;][&#39;content&#39;])</pre><p>This works over the local API automatically when Ollama is running.</p><h3>Using Ollama Cloud</h3><p>Ollama also supports cloud models — useful when your machine can’t run very large models.</p><p>First, create an account on <a href="https://ollama.com/cloud">https://ollama.com/cloud</a> and sign in. 
Then, on the Models page, click the cloud link and select any model you want to test.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9Vu4chZaQbDtsMBMWxwvoQ.png" /><figcaption>Using Ollama Cloud</figcaption></figure><p>In the models list, you will see models with the <strong>-cloud</strong> suffix, which means they are available in the Ollama cloud.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mX4x3WSW54tdrqE1bYNTbw.png" /></figure><p>Click on it and copy the CLI command. Then, inside the terminal, sign in to your Ollama account:</p><pre>ollama signin</pre><p>Once you are signed in, you can run cloud models:</p><pre>ollama run nemotron-3-nano:30b-cloud</pre><h4>Your Own Model in the Cloud</h4><p>While Ollama is local-first, Ollama Cloud allows you to push your custom models (the ones you built with Modelfiles) to the web to share with your team or use across devices.</p><ul><li>Create an account at ollama.com.</li><li>Add your public key (found in ~/.ollama/id_ed25519.pub).</li><li>Push your custom model:</li></ul><pre>ollama push your-username/change-to-your-custom-model-name</pre><h3>Conclusion</h3><p>That is the complete overview of Ollama! It is a powerful tool that gives you total control over AI. If you liked this tutorial, please give it a like and share your feedback in the comments below.</p><p>Cheers! ;)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=78392fa0afd7" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>