25 pull requests. One week. Here's what changed in RisingWave.

Most changelogs are boring. This one isn't. The item that deserves the most attention: you can now pass secrets directly into function call arguments. RisingWave already supported secrets in connector definitions, but not in user-defined functions, so if your UDF needed an API key, you were either hardcoding it or working around it. That gap is now closed.

Here's the rest of what shipped this week (April 6-12):

→ jsonb_agg(*) wildcard support: aggregate entire rows into JSON without listing every column
→ Configurable join cache eviction: tune memory behavior per job instead of living with defaults
→ Vnode key stats for materialized views: finally see data skew across streaming fragments
→ CSV and XML encoding for file sinks: writing to S3/GCS in tabular format is now possible (POC)
→ Iceberg hardening: type mismatch fixes and primary key restrictions that prevent silent bugs

Looking ahead to v2.9, the team is pushing hard on Iceberg table maintenance: garbage collection, compaction memory protection, and manifest rewrites. This is what production-ready Iceberg integration actually looks like.

Full breakdown: https://lnkd.in/dWRMTkVY

What streaming or Iceberg challenge are you dealing with right now? Drop it below for us! 👇
RisingWave 25 PRs in 1 Week: Secrets, JSON, and Iceberg Updates
-
Some prompt instructions that immediately reduced the amount of context tokens used when working on a large legacy codebase...

• Keep API request size small: context bloat causes failures. Follow these rules strictly.
• Read files with offset+limit: never read an entire large file. Read only the 20-50 lines you need.
• Use grep/glob first: find the exact line numbers, then read only that range.
• Pipe bash output through `| tail -N` or `| head -N`: never dump full command output. 5-15 lines is usually enough.
• Don't echo file contents back: after reading a file, note what you learned; don't quote it back.
• Avoid redundant reads: if you already read a section, don't re-read it. Take notes in your response.
• Minimize screenshot frequency: one screenshot to verify, not one per change.
• Prefer Edit over Write: it sends only the diff, not the whole file.
• Suppress noisy output: `2>&1 | grep -v "warning" | tail -5` for builds/tests.
• Update a CONTEXT.md file instead of building up conversation context: future sessions read the file, not the thread.
-
🚨 Breaking: this CLI proxy cuts your Claude Code token usage by 60-90%.

It sits between Claude Code and your terminal. When Claude runs a command, the proxy strips all the noise from the output before sending it back.

Normal terminal output is full of junk Claude doesn't need: progress bars, warnings, formatting, verbose logs. All of that eats tokens. This tool filters it down to just the information Claude actually needs to do its job.

10M tokens saved across sessions, with 89% reduction. And it's just a single Rust binary with zero dependencies. Plus, it's open source.

If your usage limits have been burning faster than expected, a huge chunk of that is Claude reading terminal output it doesn't even use. This fixes that.
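The post doesn't include the tool's code. As a rough sketch of the idea only (the patterns and names here are hypothetical, not the actual proxy's rules), filtering noisy lines before they reach the model might look like:

```javascript
// Hypothetical sketch: strip noise from command output before the model sees it.
// The filter patterns below are illustrative, not the real proxy's rules.
const NOISE = [
  /^warning:/i,                 // compiler/tool warnings
  /^\s*$/,                      // blank lines
  /[\u2580-\u259F#=\-]{10,}/,   // progress bars drawn with block/line characters
];

function stripNoise(output, maxLines = 15) {
  const kept = output
    .split("\n")
    .filter((line) => !NOISE.some((re) => re.test(line)));
  // Keep only the tail: errors and summaries usually come last.
  return kept.slice(-maxLines).join("\n");
}

const raw = "warning: unused variable\n\nBuild OK\nTests: 12 passed";
console.log(stripNoise(raw)); // "Build OK\nTests: 12 passed"
```

The real binary is in Rust, but the principle is the same: the model only ever pays tokens for the lines that survive the filter.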
-
I wanted a simple way to know when Claude finishes a task, and finally solved it. Turns out, it's super simple:
• use afplay (macOS built-in)
• pick any sound you like
• add a small JSON config in Claude for notification + stop
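The post doesn't show the config itself. Assuming Claude Code's hooks settings (in `~/.claude/settings.json`), one version of it might look like this; the sound files and the choice of events are my guess at the author's setup, not their actual config:

```json
{
  "hooks": {
    "Notification": [
      { "hooks": [ { "type": "command", "command": "afplay /System/Library/Sounds/Glass.aiff" } ] }
    ],
    "Stop": [
      { "hooks": [ { "type": "command", "command": "afplay /System/Library/Sounds/Funk.aiff" } ] }
    ]
  }
}
```

Any .aiff under /System/Library/Sounds works, so you can pick different sounds for "needs attention" vs "done".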
-
For 27 days, every time I resumed a Claude Code session, I was paying 11.5x more than I should have. Not a crash. Not an error message. Just a silent token drain, the kind that shows up as: "Why am I hitting my rate limit so fast?"

I run Claude Code with 8 MCP servers and about 110 deferred tools per session, which puts me right in the population this regression hit hardest. My local stats file told the story:
- 24.2B total tokens
- 97.2% healthy cache hit ratio
- about 258 affected resumes over 27 days
- about 12.6M extra tokens burned
- about $43.56 API-equivalent cost
- about 11x faster rate-limit burn

What broke: a session-writing path in cli.js stripped records like deferred_tools_delta and mcp_instructions_delta from saved session files. So on --resume, Claude Code had to reconstruct tool state from scratch. Different tool array → different cache key → full miss on the first resumed request.

At scale, with reasonable assumptions, this likely burned something like $85K to $570K in extra API spend. Midpoint: about $285K.

The bigger issue is trust. The community reverse-engineered the cause, named the function, explained the mechanism, and shipped a test suite. The official fix was a black box. When the failure mode is invisible and the cost is real, "we fixed it" isn't enough. Users need a way to verify it. That's not a Claude Code feature problem. That's an observability gap.

Full breakdown: https://lnkd.in/gZk_Y6Hh
-
🚨 How I Track Production Bugs in Real-Time Systems (Node.js + Winston)

In high-concurrency systems like gaming 🎮 or betting platforms, debugging production issues is not easy. Logs become your only source of truth. So I built a centralized logging system using Winston to track and debug issues efficiently.

🔍 Why logging matters
👉 Without proper logs:
- You can't trace user actions
- Debugging becomes guesswork
- Production issues take hours to resolve
👉 With structured logging:
- You can track every request flow
- Identify failures instantly
- Debug real-time issues faster

⚙️ How my logging system works
1️⃣ User hits an API / socket event
2️⃣ Winston captures logs with timestamps
3️⃣ Logs are stored in structured JSON (.jsonl) format
4️⃣ Files rotate daily to manage size
5️⃣ Logs are used to trace and debug issues

🧠 What I track in logs
✔ Request & response data
✔ User ID / transaction ID
✔ Error messages & stack traces
✔ Game state & event flow

💡 Why this approach works
📉 Reduces debugging time in production
⚡ Helps identify performance bottlenecks
🔁 Makes system behavior traceable
🔐 Improves reliability in real-time systems
-
API Status Codes Simplified for Developers

Understanding API responses shouldn't slow you down. Here's a quick cheat sheet to help you debug faster and build smarter.

✅ 2xx — Everything is working as expected
🔁 3xx — Check redirects or caching issues
⚠️ 4xx — Something's wrong on the client side
🔥 5xx — Server-side issue, time to investigate

Save this for your next debugging session; it might just save you hours.
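The four buckets above can be captured in a toy helper (illustrative only, not any library's API):

```javascript
// Toy helper mapping an HTTP status code to the cheat-sheet bucket above.
function statusBucket(code) {
  if (code >= 200 && code < 300) return "success";        // 2xx: all good
  if (code >= 300 && code < 400) return "redirect/cache"; // 3xx: follow the Location / check caching
  if (code >= 400 && code < 500) return "client error";   // 4xx: fix the request
  if (code >= 500 && code < 600) return "server error";   // 5xx: investigate the server
  return "unknown";
}

console.log(statusBucket(201)); // "success"
console.log(statusBucket(404)); // "client error"
console.log(statusBucket(503)); // "server error"
```

Handy in fetch wrappers: branch on the bucket first, then on the specific code only where it matters (401 vs 403, 429 retry-after, and so on).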
-
Your API isn’t always failing. Sometimes your error handling is.

I shipped a tiny Supabase Edge Function and started breaking things on purpose. The dashboard logs told me one story. The client told me another. That’s where the real bugs showed up.

Returning 200 for everything hid the problem. Returning real status codes changed the game:
• 400 for malformed input
• 401 for missing/invalid auth
• 403 for authenticated but forbidden
• 404 when the resource isn’t there
• 422 for validation failures
• 429 when requests are rate-limited
• 500 for unhandled server-side errors

Pair that with a clear JSON error body, and log failures inside the function with console.error(...).

On the client, handle Supabase errors separately: FunctionsHttpError vs FunctionsRelayError vs FunctionsFetchError. That helps you tell apart:
- your function returning an HTTP error
- problems between the client and Supabase
- cases where the function couldn’t be reached at all

Takeaway: treat errors as part of your API contract. Predictable failures are easier to debug.

How are you handling Edge Function errors today?
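A minimal sketch of the "real status codes + clear JSON body" pattern, assuming a Response-based handler (Edge Functions run on Deno, and Node 18+ has the same global Response); the function, field names, and error codes here are illustrative, not the author's actual code:

```javascript
// Sketch: every failure gets an honest status code and a structured JSON body.
function jsonError(status, code, message) {
  return new Response(JSON.stringify({ error: { code, message } }), {
    status,
    headers: { "Content-Type": "application/json" },
  });
}

// Inside a handler, instead of returning 200 for everything:
function handler(body) {
  if (!body || typeof body !== "object") {
    console.error("malformed input:", body); // shows up in the function logs
    return jsonError(400, "bad_request", "Body must be a JSON object");
  }
  if (!body.name) {
    return jsonError(422, "validation_failed", "Field 'name' is required");
  }
  return new Response(JSON.stringify({ ok: true }), { status: 200 });
}

console.log(handler(null).status);          // 400
console.log(handler({}).status);            // 422
console.log(handler({ name: "x" }).status); // 200
```

On the client side, supabase-js surfaces these as FunctionsHttpError (your function answered with an error status), as opposed to FunctionsRelayError or FunctionsFetchError, so the three failure modes stay distinguishable.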
-
I ran Ghost Open on its own source code today. $0.23. 27 files. One finding came back Critical.

The redaction engine — the module that strips API keys and secrets before sending code to Claude — has a pointer bug. On files with 50+ environment variables, it stops redacting halfway through. Users see "Redacted 12 patterns" and assume they're safe. Pattern 13 was their database password.

That's what Ghost does. It finds the thing you didn't know to look for. The bug was fixed the same day. That's the point: you can't fix what you can't see.

Bring your own Anthropic API key. New accounts get a $5 credit, more than enough to run your first scan.
-
Claude Code's entire source code leaked today — via a .map file accidentally bundled into their npm package.

The technical mistake: Bun generates sourcemaps by default. If you don't add *.map to .npmignore, your full source ships with every npm publish. Happened to Anthropic. Twice.

But the interesting finds inside the leak:
• A hidden model family codenamed "Capybara" (capybara / capybara-fast / capybara-fast[1m])
• Telemetry that tracks when users swear at the model — a literal frustration metric
• A system called "Undercover Mode" designed to prevent internal info from leaking

The last one aged particularly well today.

If you ship CLI tools to npm: check your .npmignore. Right now.
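The fix is a one-line ignore rule (or, more defensively, an allowlist via the "files" field in package.json, which publishes only what you explicitly name):

```
# .npmignore: keep generated sourcemaps out of the published package
*.map
```

Worth pairing with `npm publish --dry-run` or `npm pack` before a release to see exactly which files would ship.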
-
I'm making Dali, an MCP server that gives Claude Code persistent long-term memory across sessions, available as a public repo. It stores memories with automatic vector embeddings for semantic search, plus a full audit trail of tool invocations. https://lnkd.in/g_4XyZNe