Nav, Norway’s Labor and Welfare Administration, operates at a scale where logging is anything but small:
➡️ 400GB of log data generated every day
➡️ 60TB of data retained
➡️ 4,000 production changes per week
Its on-premises Elasticsearch logging stack was becoming harder to scale, maintain, and justify, so Nav made the shift to Aiven for OpenSearch® on Google Cloud Platform. The results:
✅ Zero production issues during and after migration
✅ High-throughput log processing at scale
✅ Fully managed service integrated with Kafka
✅ Consumption-based cost model aligned to actual usage
✅ Engineers freed to focus on improvements, not maintenance
Explore the full case study: https://bit.ly/4uQId0S
Hans Kristian Flaatten 🕊️🍉
More Relevant Posts
Many of the existing customers I talk to these days remark on how stable Aiven is. Aiven offers a 99.99% SLA, and even if an issue does occur, our support team is ready to get your services back to stable quickly. Don't just take my word for it: ask our existing customers.

Read our customer success story (https://bit.ly/4uQId0S) about a zero-production-issues experience on Aiven at more than 400GB of data ingestion per day.

Still worried about migration costs? Need migration downtime to be zero? I'm here to help you reduce migration costs and avoid downtime. Ask Tommy Widyastama to set up a session with us, as he handles both of our calendars. Between us we cover Southeast Asia, Japan, South Korea, and ANZ.
Ever had your NGINX Ingress controllers go into CrashLoopBackOff because of MaxMind GeoIP rate limits?

I recently ran into a subtle but critical issue while working with GeoIP databases packaged into NGINX ingress controllers on Kubernetes. Using the same MaxMind license across clusters, combined with frequent pod restarts, can quickly exhaust the daily download quota and bring your ingress layer down when you least expect it. A crash loop in staging or dev can even take down your production controllers, because the quota is per license, not per IP.

The key insight: GeoIP databases don't change frequently, yet the default setup downloads them on every controller startup.

In the blog, I've shared a practical and production-tested approach to:
- Decouple database downloads from the ingress controller lifecycle
- Use scheduled jobs + S3 as a centralized source
- Leverage init containers to inject GeoIP DBs at runtime
- Completely avoid hitting MaxMind rate limits

(A rough sketch of the scheduled-job half of this pattern follows below.)

Result: more stable clusters, better license utilization, and zero surprise outages. If you're running NGINX Ingress at scale, this is one of those edge cases worth fixing early.

👉 Read more: https://lnkd.in/dbgnjkye

#Kubernetes #DevOps #NGINX #Cloud #PlatformEngineering #SRE
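For the scheduled-job half of this pattern, here is a minimal Python sketch (not the blog's actual code): it assumes boto3 and requests are available, reads the MaxMind license key from an environment variable, and uses a hypothetical GEOIP_BUCKET. The init container would then copy the object out of S3 on startup instead of ever calling MaxMind.

```python
"""Scheduled job: fetch the GeoIP database once and publish it to S3,
so ingress controllers never talk to MaxMind directly. Bucket name,
edition ID, and env var names are illustrative assumptions."""
import os
import tempfile

import boto3
import requests

# MaxMind's documented download endpoint; the license key comes from the
# environment rather than being baked into the image.
MAXMIND_URL = "https://download.maxmind.com/app/geoip_download"
EDITION = "GeoLite2-City"                      # assumed edition
BUCKET = os.environ["GEOIP_BUCKET"]            # hypothetical bucket
LICENSE_KEY = os.environ["MAXMIND_LICENSE_KEY"]


def refresh_geoip_db() -> None:
    params = {"edition_id": EDITION, "license_key": LICENSE_KEY, "suffix": "tar.gz"}
    resp = requests.get(MAXMIND_URL, params=params, timeout=60)
    resp.raise_for_status()

    # Stage to a temp file, then upload; init containers later pull this
    # S3 object instead of hitting MaxMind on every pod start.
    with tempfile.NamedTemporaryFile(suffix=".tar.gz", delete=False) as tmp:
        tmp.write(resp.content)
        path = tmp.name
    boto3.client("s3").upload_file(path, BUCKET, f"geoip/{EDITION}.tar.gz")


if __name__ == "__main__":
    refresh_geoip_db()
```

Run this on a CronJob schedule and every controller restart becomes a cheap S3 read against your own quota, not MaxMind's.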
Over the past few days, I have been discussing ways to improve AWS Lambda performance, including:
- Provisioned Concurrency
- Memory tuning

In my Medium article, I take a broader and more practical approach by exploring how production-grade serverless applications can be optimized for performance.

The article covers:
- Memory tuning (cost vs. performance)
- Bundle size optimization
- Efficient database connectivity with RDS Proxy
- Provisioned Concurrency (with real calculations)
- Initialization best practices (see the sketch below for the core idea)

For those working with AWS Lambda in production, this article provides a detailed, real-world perspective on improving both latency and cost efficiency. Feel free to check it out and share your thoughts; I would love to hear your ideas.

#AWS #Lambda #Serverless #CloudArchitecture #Performance #DevOps #Backend
https://lnkd.in/gp-pMwg2
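On the initialization point specifically, the core idea is to pay for expensive setup once, during the init phase, rather than on every invocation. A minimal Python sketch, with a hypothetical DynamoDB table standing in for any costly resource:

```python
"""Illustrative sketch of Lambda initialization best practice: do
expensive setup outside the handler so warm invocations reuse it.
The table name and event shape are hypothetical."""
import boto3

# Created once during the init phase, before the first invocation; every
# warm invocation after that reuses the same client and connection pool.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name


def handler(event, context):
    # Only per-request work happens inside the handler.
    resp = table.get_item(Key={"order_id": event["order_id"]})
    return resp.get("Item", {})
```

Warm invocations skip the module-level code entirely, which is where most of the latency win comes from.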
A database existing in the cloud does not mean the system is ready. That distinction matters more than it sounds.

I recently worked on a MongoDB Atlas setup where the goal was not just to provision the database, but to make it application-ready. That meant covering the full path:
• Terraform-based Atlas project and cluster creation
• Standard service users
• Scoped custom roles
• Bootstrap configuration
• Base data seeding
• Service-specific connection strings
• Secret delivery into GCP Secret Manager (sketched below)

Because from an application team's perspective, "the cluster is up" is not useful if they still cannot authenticate, access the right collections, or consume the required secrets. That gap between infrastructure-ready and application-ready is where a lot of delivery time disappears.

This work reinforced something I care about more and more: platform engineering should reduce uncertainty, not create more of it. The best automation is the kind that leaves downstream teams with fewer questions, not more.

How does your team define "ready" in infrastructure projects?

#PlatformEngineering #Terraform #MongoDBAtlas #DevOps #CloudInfrastructure #InfrastructureAsCode #Automation #GCP
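The post drives this with Terraform; purely for illustration, here is what the secret-delivery step looks like sketched in Python with the google-cloud-secretmanager client (project ID, secret ID, and connection string are made up):

```python
"""Sketch of the secret-delivery step: push a service-specific
connection string into GCP Secret Manager so application teams
consume it instead of asking for credentials."""
from google.cloud import secretmanager


def deliver_connection_string(project_id: str, secret_id: str, conn_str: str) -> None:
    client = secretmanager.SecretManagerServiceClient()
    parent = f"projects/{project_id}"

    # Create the secret container (one-time), then add the value as a version.
    client.create_secret(
        request={
            "parent": parent,
            "secret_id": secret_id,
            "secret": {"replication": {"automatic": {}}},
        }
    )
    client.add_secret_version(
        request={
            "parent": f"{parent}/secrets/{secret_id}",
            "payload": {"data": conn_str.encode("utf-8")},
        }
    )


# Hypothetical usage: each service gets its own scoped secret.
deliver_connection_string("my-project", "orders-svc-mongodb-uri", "mongodb+srv://...")
```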
Quarterly close used to mean one thing for DBAs: survival mode. Three pages before lunch. Memory pressure alerts stacking up. Some app team running a load test nobody told you about. And somewhere in the background, a blocking/locking issue quietly preparing to ruin someone's 2 a.m.

Microsoft just showed something at Build that I think changes that picture entirely. The new Databases hub in Microsoft Fabric isn't a dashboard. It's a control plane. One place for SQL, PostgreSQL, Cosmos DB; across on-prem, IaaS, PaaS, and beyond; not after you've finished the migration, not after you've standardized everything. As your estate exists today.

What got me wasn't the UI. It was the philosophy behind it. The alerts surface decisions, not just data. A Teams notification drops, and the agent has already correlated the queries, waits, locks, and workload patterns. It's found the root cause, an open transaction holding a schema modification lock, and it's staged the fix. The DBA reviews it, adjusts if needed, approves. Nothing executes without her sign-off. The agent doesn't improvise. It operates inside guardrails. The human stays in control.

And because she's not firefighting, she has time to actually do her job: provision the new hyperscale database for the e-commerce team, with governance and security baked in from the start. The developer picks it up in their tool. No ticket. No "which database should I use?" The right defaults are already there.

That last part is what sticks with me. The developer doesn't slow down to be safe. The platform already is safe. Velocity as the reward, not the risk.

If quarterly close feeling like a fire drill is something you've made peace with, maybe you don't have to.
Day 76: Volumes

Containers are ephemeral. When a container stops, everything written inside it disappears. That's fine for stateless apps. Not fine for databases.

Volumes solve this. A volume is storage that lives outside the container but gets mounted inside it.

docker run -v mydata:/var/lib/postgresql/data postgres

The data writes to the mydata volume on the host, and the container mounts it at that path. The container can stop, restart, or get replaced; the data is still there.

Two types worth knowing:
1. Named volumes: Docker manages the location. You just give it a name. Good for databases.
2. Bind mounts: you specify an exact path on your host machine. The container sees that folder directly. Good for local development where you want code changes to reflect instantly without rebuilding.

In production, you'd rarely use bind mounts. You'd use named volumes or cloud storage (like EBS or EFS on AWS) instead.

This is also where containers connect back to the OS layer: volumes are just directories on the host filesystem, managed by Docker. (The same named-volume example is sketched below with the Docker SDK for Python.)
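Here is the same named-volume example, sketched with the Docker SDK for Python (docker-py), under the assumption that the SDK is installed and a Docker daemon is reachable:

```python
"""Named-volume example via docker-py; volume and image names mirror the
post's CLI example, the Postgres password is a placeholder."""
import docker

client = docker.from_env()

# Named volume: Docker manages where "mydata" lives on the host.
client.volumes.create(name="mydata")

# Mount it at Postgres's data directory; the container can be removed and
# recreated, and the data in "mydata" survives.
client.containers.run(
    "postgres",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},  # required by the image
    volumes={"mydata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```

Removing the container and running it again reattaches the same volume, which is exactly the persistence guarantee the post describes.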
The Model Context Protocol (MCP) lets you manage cloud infrastructure through natural language commands by connecting AI tools to external services. Instead of clicking through dashboards and running manual commands, you provision databases, deploy applications, and scale resources by describing your intent to an AI assistant.

In this tutorial, written by Néstor Daza and Anish Singh Walia, you will build a task management API using Node.js and MongoDB, then deploy the database and application to DigitalOcean using the DigitalOcean MCP server. You will use a single MCP server to automate infrastructure provisioning: creating a MongoDB database cluster, deploying your application to App Platform, and managing both services through conversational commands.

The article shows developers how to build and deploy an application by combining DigitalOcean's Managed MongoDB and App Platform through DigitalOcean's MCP automation.

Read it here 👉 https://lnkd.in/e2JqfZ58
Building a Scalable App with MongoDB Using DigitalOcean's MCP Server | DigitalOcean