Integrating Ollama: Bringing Self‑Hosted AI to WordPress with ClassifAI

From Curiosity to Integration: Why Self‑Hosted AI? 🤔

When I first discovered Ollama – an open-source tool that lets you run large language models (LLMs) on your own machine – I was ecstatic. The idea of running AI locally (right on a server or laptop) resonated with me as both a WordPress engineer and an open-source enthusiast working at 10up. Why? Because it promised two huge benefits that any developer can appreciate: cost savings and privacy. With local AI, there’s no need to pay per API call or send sensitive content to a third-party cloud. All the processing happens under your roof, giving you complete control over your data. For someone like me working on enterprise WordPress projects, those advantages were too exciting to ignore.

At the same time, the broader tech community has been buzzing about open-source LLMs. Tools like Ollama have made it much easier to experiment with models like Llama 3, Mistral, DeepSeek, etc., without relying on proprietary APIs. This growing interest in self-hosted AI aligns perfectly with WordPress’s own open-source spirit. WordPress empowered individuals and organizations to own their web platform; now open-source AI is doing the same for machine intelligence. 

I realized that integrating Ollama into our AI plugin ClassifAI could unlock something special: bringing the power of generative AI to WordPress on our terms. No cloud fees, no data leaving the site – just WordPress and AI working together in a more private, flexible way.

Building the Ollama Integration for ClassifAI 🛠️

As an Associate Director of Engineering at Fueled+10up, I’ve led development on ClassifAI, our open-source WordPress plugin that adds AI capabilities to sites. Historically, ClassifAI relied on cloud services like IBM Watson, Azure, or OpenAI for its smarts. The prospect of adding Ollama as a new “provider” got me energized, because it meant WordPress admins could run AI features with zero cloud dependence. But making this a reality was an adventure of its own – one that combined technical work with some creative problem-solving.

First, why Ollama? We chose Ollama as our initial foray into self-hosted AI because it’s purpose-built for managing local LLMs. It packages models neatly and provides a simple way to run them on macOS, Linux, or Windows. In other words, it lowers the barrier to entry for WordPress developers who might not be machine learning experts. Once I had Ollama running on my machine (with a few test models downloaded), the next step was teaching ClassifAI to “talk” to it.

ClassifAI’s architecture treats AI services as pluggable providers. We already support OpenAI, Azure, and others, so the goal was to implement Ollama as another provider option. Behind the scenes, this meant writing code to send the content (prompts, images, etc.) from WordPress to the local Ollama API and then parse the response. For example, when an editor wants to generate a post excerpt using ClassifAI, and they’ve selected the Ollama provider, the plugin will send the post content to http://localhost:11434/api/chat with a prompt for the model (e.g. “Summarize this post in one sentence”). The Ollama service then returns the AI-generated excerpt, which we capture and show in WordPress. It’s essentially the same flow as using a cloud API – except the AI model can run on the same box as WordPress! 🚀
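For readers curious what that call looks like on the wire, a request to Ollama's chat endpoint is a small JSON payload along these lines (the model name and prompt text here are illustrative – the exact payload ClassifAI builds may differ):

```json
{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "Summarize this post in one sentence: <post content here>" }
  ],
  "stream": false
}
```

Setting `"stream": false` tells Ollama to return one complete JSON response instead of streaming tokens as they’re generated.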

One of the challenges we faced was ensuring our integration could support different types of AI tasks with appropriate models. Open-source models tend to be specialized; one model might be great at text generation, another at image analysis, another at embeddings (for categorizing or comparing text). We made our Ollama integration flexible: if you’ve loaded a general language model, ClassifAI can now do things like Title Generation, Excerpt Generation, Content Resizing, and Key Takeaways with it. If you install a vision model (for images), you unlock features like Descriptive Image Alt-Text, Image Tagging, and Image Text Extraction. And with an embeddings model, you can run Content Classification locally. In plain terms, a whole swath of ClassifAI’s features – from suggesting SEO-friendly titles to auto-tagging images – can now run via local AI models powered by Ollama. This was a huge milestone: it dramatically expands what WordPress users can do with AI without cloud services.
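As an illustration of the embeddings case: Ollama exposes an embeddings endpoint at http://localhost:11434/api/embeddings that takes a model and some text and returns a numeric vector. A request body looks roughly like this (model name and text are illustrative):

```json
{ "model": "nomic-embed-text", "prompt": "Post content to classify goes here" }
```

The response contains an `embedding` array of floats, which can then be compared (e.g. by cosine similarity) against embeddings of your taxonomy terms – a sketch of the general approach to local classification, not necessarily ClassifAI's exact implementation.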

Of course, no project is without its hiccups. One puzzle was performance. During development, I noticed that the first AI request to Ollama could be slow. The reason: the model has to load into memory the first time you use it. After that, Ollama keeps it in RAM for a while, so subsequent requests are much faster. We had to make sure this “warm-up” delay was handled gracefully – perhaps by informing the user if a request is taking longer initially. It’s a different user experience than the near-instant responses from cloud APIs, but an acceptable trade-off given you’re not paying per query. We also acknowledged that the quality of responses from smaller, open models might not yet match the likes of GPT-4. Frankly, I was impressed with how far open models have come (many are shockingly good for everyday tasks), but they can occasionally be less fluent or accurate. 
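If that warm-up delay matters for your setup, Ollama lets you control how long a model stays loaded in memory via the `keep_alive` parameter on a request (or the `OLLAMA_KEEP_ALIVE` environment variable). For example, a request fragment like this keeps the model resident for 30 minutes after the call (the model and prompt are illustrative):

```json
{ "model": "llama3.1", "prompt": "Summarize this post in one sentence.", "keep_alive": "30m" }
```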

Our approach has been to be transparent about this. In documentation and settings, we remind users that “performance [with local models] isn’t as great and results may also be less accurate” compared to big cloud AIs. In practice, for many use cases like generating an excerpt or tags, the local models do just fine – and the benefits (no cost, no external data sharing) often outweigh the minor quality differences.

Another interesting challenge was distribution: how do we help users get started with local AI? We decided to provide clear guidance in the plugin’s UI and docs. For example, we prompt users with steps to install Ollama (if not already installed) and even suggest specific model names for each feature (e.g. “nomic-embed-text” for classification tasks). It’s a bit more involved than simply pasting an API key, but we tried to streamline it. And for developers spinning up sites on their own servers, running ollama pull <model> and flipping a setting in ClassifAI is pretty straightforward.

One highlight for me was when we got the first successful response back from an Ollama model inside WordPress. I had that “it works!” moment – watching an AI-generated excerpt appear in the editor that came entirely from a model running on my local machine. No external calls, just WordPress and a local AI having a conversation. That felt like a small glimpse of the future.

Technical Deep Dive: How the Integration Works ⚙️

(For those interested in the nitty-gritty, this section offers a peek under the hood.)

Ollama’s Magic in a Nutshell: Ollama runs a lightweight server on your machine (by default at http://localhost:11434/) that exposes a RESTful API. ClassifAI leverages this API to send requests to the local model. Essentially, whenever you invoke an AI feature in WordPress (say, “Generate Tags” or “Summarize this post”), if you’ve set the provider to Ollama, the plugin makes an HTTP call to the Ollama server instead of an external API. The request includes the model name and a prompt or data. For example, using curl you might do something like:

curl http://localhost:11434/api/generate \
  -d '{ "model": "llama3.1", "prompt": "What is water made of?" }'        

This is exactly how you can query a local model via Ollama’s API. In our PHP code, we utilize WordPress’s HTTP functions to do the same. A simplified snippet could look like:

// Prepare the request for Ollama's local API
$request = [
    'model'  => 'llama2-uncensored', // the model to use
    'prompt' => 'Summarize the following article: ' . $post_content,
    'stream' => false, // return one complete JSON response instead of a token stream
];
$response = wp_remote_post( 'http://localhost:11434/api/generate', [
    'headers' => [ 'Content-Type' => 'application/json' ],
    'body'    => wp_json_encode( $request ),
    'timeout' => 60, // local generation can easily exceed the default 5s timeout
]);

if ( ! is_wp_error( $response ) ) {
    $data   = json_decode( wp_remote_retrieve_body( $response ), true );
    $result = $data['response'] ?? '';
    // $result now holds the AI-generated summary
}

Under the hood, Ollama streams the response by default (since models generate text token by token), but we configure it to return a complete result once ready. The key parameters we send are the model name and the prompt (or the chat messages, for chat-style requests). ClassifAI’s UI lets the admin choose which model to use for a given feature – this gets saved in the settings so we know, for instance, to call model "llama3.1" for text generation or "llava" for image analysis tasks.
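To make the streaming behavior concrete, here is a minimal sketch in plain Python (illustrative only – not ClassifAI's actual PHP code, and the response values are made up). By default Ollama's body is newline-delimited JSON, one chunk per token batch, while `"stream": false` yields a single object:

```python
import json

# Illustrative shape of a streamed Ollama /api/generate body:
# newline-delimited JSON, one object per chunk, last one has "done": true.
streamed_body = "\n".join([
    json.dumps({"response": "Water is ", "done": False}),
    json.dumps({"response": "H2O.", "done": True}),
])

# Reassemble the full text by concatenating each chunk's "response" field.
full_text = "".join(
    json.loads(line)["response"]
    for line in streamed_body.splitlines()
    if line.strip()
)
print(full_text)  # Water is H2O.

# With "stream": false, the body is one JSON object and the whole
# completion arrives in a single "response" field.
non_streamed_body = json.dumps({"response": "Water is H2O.", "done": True})
print(json.loads(non_streamed_body)["response"])  # Water is H2O.
```

Either way, the consumer ends up with the same text; opting out of streaming just simplifies the parsing on the WordPress side.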

We also had to handle different response formats. Text generation is straightforward (you get a text completion back). But for image tagging or OCR, ClassifAI needs to interpret structured data or ensure images are accessible to the model. In our case, we found that some of the vision models require a publicly accessible image URL to analyze (they can’t directly read a file from your local disk via the API). So during development, I sometimes hacked around this by providing a direct URL for testing. In a real installation, if your site is online, images you want to analyze are already reachable to the model. These little details were important to get right so that features “just work” for end users.
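For context on the image case: Ollama's own API takes image data inline, as base64 strings in an `images` array, rather than fetching URLs itself – which is why an integration has to be able to retrieve the image bytes (presumably by downloading from the image's URL) before handing them to the model. A multimodal request body looks roughly like this (the model name is illustrative):

```json
{
  "model": "llava",
  "prompt": "Describe this image for use as alt text.",
  "images": ["<base64-encoded image data>"]
}
```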

From a code perspective, adding Ollama meant writing a new provider class that hooks into our existing feature modules. We tried to keep the implementation clean – if you look at the ClassifAI code on GitHub, you’ll see that adding Ollama didn’t require a complete overhaul, but rather plugging into our extensible architecture. We register Ollama as an available provider and define how to handle requests for each AI task when that provider is selected. This modular approach made development smoother and will make future maintenance easier.

One optional technical hurdle we considered was packaging. Some users run WordPress on remote servers where installing Ollama might be non-trivial (since it’s a separate service). For now, our approach assumes that if you enable Ollama in ClassifAI, you or your sysadmin will install it on the server (or your local environment). We provided links and instructions to assist with that. In the future, I can imagine even more seamless setups – but even today, if you’re comfortable installing a program on your server, you can get this working without touching any code.

In summary, the deep integration with Ollama basically transforms ClassifAI into a conduit between WordPress and the world of open-source AI models. Instead of calling out to Azure or OpenAI, it calls “in” to your own machine. Everything else – from the user’s perspective – remains as easy as clicking a button in the WordPress admin.

Looking Ahead: What’s Next & How to Get Involved 🚀

The journey of integrating Ollama into ClassifAI has been both challenging and rewarding. It opened my eyes to the potential of local AI and how it can complement the WordPress experience. So, what’s next? For one, I’m excited to see more developers and content creators give this a try. If you’re reading this and maintain a WordPress site, I encourage you to experiment with ClassifAI’s Ollama integration. It’s as simple as installing the ClassifAI plugin (free and open-source), and following our docs to set up Ollama. Kick the tires, run some AI tasks locally, and let us know what you think!

I anticipate a growing community around open-source AI in WordPress. There’s already been interest in other self-hosted AI tools (like GPT4All or LMStudio) and you might see those on our roadmap down the line. For now, starting with Ollama gave us a strong foundation – but it’s certainly not the end of the road. If you’re a developer who’s passionate about this space, we welcome contributions on ClassifAI’s GitHub, be it in code, testing, or ideas. Open source isn’t just a license for us; it’s a collaborative mindset. Personally, I’d love to hear feedback or war stories from anyone implementing local AI on their WP sites – it helps guide our next steps.

Finally, a little call to action: Organizations that are serious about AI and want more control over their data are in a great position to benefit from self-hosted models. At Fueled+10up, we’ve been helping some of our clients navigate this area – whether it’s integrating a private model into their publishing workflow or configuring on-premise AI services for high-security environments. If your team is looking to adopt AI on your own terms (maybe fine-tune a model just for your content, or ensure no third-party sees your data), feel free to reach out. We’re always excited to partner on cutting-edge solutions that make sense for your needs.

Working on the Ollama integration for ClassifAI has been one of those projects that reminds me why I got into engineering in the first place. It combined innovation with the open-source ethos, and it solved a real problem (how to use AI affordably and securely in WordPress). I’m proud of what our team accomplished and eager to keep pushing the boundaries. The intersection of WordPress and AI is just starting to heat up – and by leaning into open-source, self-hosted tools, we can ensure that this revolution benefits everyone, not just those who can pay hefty API bills. Here’s to a future where your WordPress site can be both a powerful publishing platform and a smart, AI-assisted ally – all under your own control. 🙌

(P.S. If you try out the ClassifAI + Ollama setup, let me know how it goes! I’d love to hear about your use cases and any ideas you have to make it even better.)
