Show stories

pattle about 15 hours ago

Show HN: Geo Racers – Race from London to Tokyo on a single bus pass

Geo Racers is a mobile game that combines geography and racing, allowing players to explore real-world locations and compete in fast-paced races. The game aims to make learning about different countries and landmarks engaging and fun.

geo-racers.com
92 69
Goose78 about 5 hours ago

Show HN: Generate Web Interfaces from Data

Syntux is an open-source command-line syntax highlighter that supports a wide range of programming languages and file formats. It provides a customizable and extensible solution for enhancing the readability of code snippets in various contexts, such as documentation or terminal-based applications.

github.com
29 7
ddtaylor about 6 hours ago

Show HN: What is HN thinking? Real-time sentiment and concept analysis

Hi HN,

I made Ethos, an open-source tool to visualize the discourse on Hacker News. It extracts entities, tracks sentiment, and groups discussions by concept.

Check it out: https://ethos.devrupt.io

This was a "budget build" experiment. I managed to ship it for under $1 in infra costs. Originally I was using `qwen3-8b` for the LLM and `qwen3-embedding-8b` for the embedding, but I ran into some capacity issues with that model and decided to use `llama-3.1-8b-instruct` to stay within a similar budget while having higher throughput.

What LLM or embedding would you have used within the same price range? It would need to be a model that supports structured output.

How bad do you think it is to use `llama-3.1` for the LLM alongside a higher-dimension embedding model? I originally wanted to keep the LLM and embedding within the same family, but I'm not sure there is much point in that.
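For context, the structured-output requirement is basically this kind of call. The endpoint, model name, and JSON schema below are illustrative, not the actual Ethos prompt or config:

```python
import json
from openai import OpenAI

# Any OpenAI-compatible server works; endpoint, model name, and schema are illustrative.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

PROMPT = (
    "Extract entities and sentiment from this Hacker News comment. "
    'Reply only with JSON like {"entities": [{"name": "...", "sentiment": "pos|neg|neu"}]}.\n\n'
)

def analyze(comment: str) -> dict:
    resp = client.chat.completions.create(
        model="llama-3.1-8b-instruct",
        messages=[{"role": "user", "content": PROMPT + comment}],
        response_format={"type": "json_object"},  # the structured-output part
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)

print(analyze("Postgres keeps getting better; MongoDB still annoys me."))
```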

Repo: https://github.com/devrupt-io/ethos

I'm looking for feedback on which metrics (sentiment vs. concepts) you find most interesting! PRs welcome!

ethos.devrupt.io
17 5
calebhwin about 8 hours ago

Show HN: Pgclaw – A "Clawdbot" in every row with 400 lines of Postgres SQL

Hi HN,

Been hacking on a simple way to run agents entirely inside a Postgres database, "an agent per row".

Things you could build with this:

* Your own agent orchestrator
* A personal assistant with time travel
* (more things I can't think of yet)
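To make "an agent per row" concrete, here's a rough sketch of the shape I mean. The table, columns, and tick logic are my guesses at the idea, not Pgclaw's actual SQL:

```python
import psycopg  # assumes a local Postgres; schema and tick logic are illustrative only

SCHEMA = """
CREATE TABLE IF NOT EXISTS agents (
    id        bigserial PRIMARY KEY,
    goal      text NOT NULL,
    state     jsonb NOT NULL DEFAULT '{"messages": []}',
    next_run  timestamptz NOT NULL DEFAULT now()
);
"""

TICK = """
-- advance every agent whose turn has come: append a step and reschedule it
UPDATE agents
SET state    = jsonb_set(state, '{messages}',
                         state->'messages' || to_jsonb('tick: ' || goal)),
    next_run = now() + interval '1 minute'
WHERE next_run <= now()
RETURNING id, state;
"""

with psycopg.connect("dbname=pgclaw_demo") as conn:
    conn.execute(SCHEMA)
    conn.execute("INSERT INTO agents (goal) VALUES ('summarize my inbox')")
    for row in conn.execute(TICK):
        print(row)
```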

Not quite there yet but thought I'd share it in its current state.

github.com
37 29
austinbaggio about 9 hours ago

Show HN: 20+ Claude Code agents coordinating on real work (open source)

Single-agent LLMs suck at long-running complex tasks.

We’ve open-sourced a multi-agent orchestrator that we’ve been using to handle long-running LLM tasks. We found that single LLM agents tend to stall, loop, or generate non-compiling code, so we built a harness for agents to coordinate over shared context while work is in progress.

How it works:

1. Orchestrator agent that manages task decomposition
2. Sub-agents for parallel work
3. Subscriptions to task state and progress
4. Real-time sharing of intermediate discoveries between agents
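A stripped-down sketch of that coordination pattern (toy Python, not the actual Claude Code skill; the class and function names are illustrative):

```python
import asyncio

# Toy sketch: decompose a task, run sub-agents in parallel, share intermediate findings.
class Blackboard:
    def __init__(self) -> None:
        self.discoveries: list[str] = []

    def publish(self, note: str) -> None:
        self.discoveries.append(note)   # visible to every other agent immediately

async def sub_agent(name: str, subtask: str, board: Blackboard) -> None:
    await asyncio.sleep(0.1)            # stand-in for an LLM call
    board.publish(f"{name}: partial result for {subtask!r}")

async def orchestrator(task: str) -> list[str]:
    board = Blackboard()
    subtasks = [f"{task} / part {i}" for i in range(3)]     # task decomposition
    await asyncio.gather(*(sub_agent(f"agent-{i}", s, board)
                           for i, s in enumerate(subtasks)))
    return board.discoveries            # merged context for the next round

print(asyncio.run(orchestrator("refactor module X")))
```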

We tested this on a Putnam-level math problem, but the pattern generalizes to things like refactors, app builds, and long research. It’s packaged as a Claude Code skill and designed to be small, readable, and modifiable.

Use it, break it, and tell me what workloads we should try running next!

github.com
39 35
ktaraszk about 4 hours ago

Show HN: Migetpacks – Zero-config container builds, no Dockerfile needed

github.com
2 1
aed 3 days ago

Show HN: AI agents play SimCity through a REST API

This is a weekend project that spiraled out of control. I was originally trying to get Claude to play a ROM of the SNES SimCity. I struggled with it and that led me to Micropolis (the open-sourced SimCity engine) and was able to get it to work by bolting on an API.

The weekend hack turned into a headless city simulation platform where anyone can get an API key (no signup) and have their AI agent play mayor. The simulation runs the real Micropolis engine inside Cloudflare Durable Objects, one per city. Every city is public and browsable on the site.

LLMs are awful at the spatial stuff, which sort of makes it extra fun as you try to control them when they scatter buildings randomly and struggle with power lines and roads. A little like dealing with a toddler.

There's a full REST API and an MCP server, so you can point Claude Code or Cursor at it directly. You can usually get agents building in seconds.
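As a rough picture of what an agent's turn looks like over HTTP. The endpoint paths and payload shapes below are hypothetical; see the docs for the real API:

```python
import requests

# Hypothetical endpoints and payloads; this just shows the read-state, then act, loop.
BASE = "https://hallucinatingsplines.com/api"

def play_one_turn(api_key: str, city_id: str) -> dict:
    headers = {"Authorization": f"Bearer {api_key}"}
    state = requests.get(f"{BASE}/cities/{city_id}", headers=headers, timeout=10).json()
    print("funds:", state.get("funds"), "population:", state.get("population"))
    action = {"tool": "residential", "x": 10, "y": 12}      # the agent's chosen move
    return requests.post(f"{BASE}/cities/{city_id}/build",
                         json=action, headers=headers, timeout=10).json()
```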

Website: https://hallucinatingsplines.com

API docs: https://hallucinatingsplines.com/docs

GitHub: https://github.com/andrewedunn/hallucinating-splines

Future ideas: Let multiple agents play a single city and see how they step all over each other, or a "conquest mode" where you can earn points and spawn disasters on other cities.

hallucinatingsplines.com
205 70
jared_stewart 1 day ago

Show HN: CodeRLM – Tree-sitter-backed code indexing for LLM agents

I've been building a tool that changes how LLM coding agents explore codebases, and I wanted to share it along with some early observations.

Typically, Claude Code globs directories, greps for patterns, and reads files with minimal guidance. It works in much the same way you'd learn to navigate a city by walking every street. You'd eventually build a mental map, but Claude never does - at least not one that persists across different contexts.

The Recursive Language Models paper from Zhang, Kraska, and Khattab at MIT CSAIL introduced a cleaner framing. Instead of cramming everything into context, the model gets a searchable environment. The model can then query just for what it needs and can drill deeper where needed.

coderlm is my implementation of that idea for codebases. A Rust server indexes a project with tree-sitter, builds a symbol table with cross-references, and exposes an API. The agent queries for structure, symbols, implementations, callers, and grep results — getting back exactly the code it needs instead of scanning for it.

The agent workflow looks like:

1. `init` — register the project, get the top-level structure

2. `structure` — drill into specific directories

3. `search` — find symbols by name across the codebase

4. `impl` — retrieve the exact source of a function or class

5. `callers` — find everything that calls a given symbol

6. `grep` — fall back to text search when you need it

This replaces the glob/grep/read cycle with index-backed lookups. The server currently supports Rust, Python, TypeScript, JavaScript, and Go for symbol parsing, though all file types show up in the tree and are searchable via grep.
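To give a feel for the loop, here's roughly what those lookups amount to. The transport, port, paths, and response shapes here are illustrative; the real server's API may differ:

```python
import requests

# Illustrative only: shows the lookup loop that replaces glob/grep/read.
CODERLM = "http://localhost:7878"   # assumed local server address

def lookup(op: str, **params) -> dict:
    """op is one of: init, structure, search, impl, callers, grep."""
    return requests.get(f"{CODERLM}/{op}", params=params, timeout=10).json()

lookup("init", path="/home/me/project")              # register + top-level structure
hits = lookup("search", symbol="parse_config")       # find the symbol by name
src = lookup("impl", symbol="parse_config")          # exact source of the match
who = lookup("callers", symbol="parse_config")       # everything that calls it
```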

It ships as a Claude Code plugin with hooks that guide the agent to use indexed lookups instead of native file tools, plus a Python CLI wrapper with zero dependencies.

For anecdotal results, I ran the same prompt against a codebase to "explore and identify opportunities to clarify the existing structure".

Using coderlm, Claude was able to generate a plan in about 3 minutes. The coderlm-enabled instance found a genuine bug (duplicated code with identical names), orphaned code for cleanup, mismatched naming conventions crossing module boundaries, and overlapping vocabulary. These are all semantic issues that clearly benefit from the tree-sitter-centric approach.

Using the native tools, Claude identified various file clutter in the root of the project, out-of-date references, and a migration timestamp collision. These findings are more consistent with methodical walks of the filesystem and took about 8 minutes to produce.

The indexed approach did better at catching semantic issues than the native tools, with the added benefit of being faster.

I've spent some effort streamlining the installation process, but it isn't turnkey yet. You'll need the Rust toolchain to build the server, which runs as a separate process. Installing the plugin from a Claude marketplace is possible, but the skill isn't added to your .claude yet, so there are some manual steps before Claude can actually use it.

Claude continues to show significant resistance to using CodeRLM in exploration tasks; you will typically need to direct it explicitly to use the tool.

---

Repo: github.com/JaredStewart/coderlm

Paper: Recursive Language Models https://arxiv.org/abs/2512.24601 — Zhang, Kraska, Khattab (MIT CSAIL, 2025)

Inspired by: https://github.com/brainqub3/claude_code_RLM

github.com
77 34
hactually 3 days ago

Show HN: Inamate – Open-source 2D animation tool (alternative to Adobe Animate)

Adobe recently announced the end-of-life for Adobe Animate, then walked it back after community backlash.

Regardless of what Adobe decides next, the message was clear: animators who depend on proprietary tools are one corporate decision away from losing their workflow.

2D animation deserves an open-source option that isn't a toy. We've been working with a professional animator to guide feature priorities and ensure we're building something that actually fits real production workflows - not just a tech demo.

Github Repo: https://github.com/17twenty/inamate

We're at the stage where community feedback shapes the direction. If you're an animator, motion designer, or just someone who's been frustrated by the state of 2D animation tools — we'd love to hear:

- What features would make you switch from your current tool?

- What's the biggest pain point in your animation workflow?

- Is real-time collaboration actually useful for animation, or is it a gimmick?

Try it out, break it, and tell us what you think.

Built with Go, TS & React, WebAssembly, PostgreSQL, WebSocket, ffmpeg (for video exports).

12 11
jmalevez about 5 hours ago

Show HN: I generated a "stress test" of 200 rare defects from 7 real photos

Hello HN,

I work on vision systems for structural inspection. A common pain point is that while we have a lot of "healthy" images, we often lack a reliable "Golden Set" of rare failures (like shattered porcelain) to validate our models before deployment.

You can't trust your model's recall if your test set only has, say, 5 examples of the failure mode.

So to fix this, I built a pipeline to generate datasets. In this example, I took 7 real-world defect samples, extracted their topology/texture, and procedurally generated 200 hard-to-detect variations across different lighting and backgrounds.

I’m releasing this batch of broken insulators (CC0) specifically to help teams benchmark their model's recall on rare classes:

https://www.silera.ai/blog/free-200-broken-insulators-datase...

- Input: 7 real samples.

- Output: 200 fully labeled evaluation images (COCO/YOLO).

- Use Case: Validation / Test Set (not full training).
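For anyone wiring this into an eval, this is roughly what I mean by checking recall on the rare class. A toy box-matching check, not an official eval script for the dataset:

```python
def recall_at_iou(gts, preds, iou_thr=0.5):
    """Fraction of ground-truth defect boxes matched by at least one prediction.
    Boxes are (x1, y1, x2, y2); a toy stand-in for a COCO-style evaluation."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
        return inter / union if union else 0.0
    hit = sum(any(iou(gt, p) >= iou_thr for p in preds) for gt in gts)
    return hit / len(gts) if gts else 1.0

print(recall_at_iou([(10, 10, 50, 50)], [(12, 11, 49, 52)]))  # 1.0
```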

How do you guys currently validate recall for "1 in 10,000" edge cases?

Jérôme

2 0
gregzeng95 about 9 hours ago

Show HN: ClawDeploy – OpenClaw deployment for non-technical users

Hi HN, I’m building ClawDeploy for people who want to use OpenClaw but don’t have a technical background.

The goal is simple: remove the setup friction and make deployment approachable.

With ClawDeploy, users can:

- get a server ready
- deploy OpenClaw through a guided flow
- communicate with the bot via Telegram

Target users are solo operators, creators, and small teams who need a dedicated OpenClaw bot but don’t want to deal with infrastructure complexity.

Would love your feedback :)

clawdeploy.com
4 0
nickvec 1 day ago

Show HN: Agent Alcove – Claude, GPT, and Gemini debate across forums

agentalcove.ai
61 26
joemasilotti about 6 hours ago

Show HN: The Rails developers' guide to mobile app frameworks

This article discusses the various mobile app frameworks available for Rails developers, including Cordova, React Native, and Flutter, and provides an overview of their features, strengths, and use cases to help developers choose the best option for their project.

masilotti.com
3 0
listofdisks about 7 hours ago

Show HN: ListofDisks – hard drive price index across 7 retailers, not just Amazon

I decided to build this after looking for drives for my own new DS1525+. I realized that existing storage price trackers were mostly lazy Amazon API wrappers that ignored other retailers.

ListofDisks tracks offers across Amazon, B&H, Best Buy, Newegg, Office Depot, ServerPartDeals, and Walmart, then normalizes listings into canonical products so the same drive can be compared side-by-side.

Current approach:

- Normalization: retailer-specific parsers + canonical mapping to group listings by actual model
- Trust scoring: filters out low-rated marketplace sellers and mystery listings
- Context: 90-day median $/TB and historical-low tracking to spot fake sales
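As a rough illustration of the "context" piece (my own sketch; the field names are not the site's actual schema), this is the kind of check that flags a "deal" still sitting above its 90-day median $/TB:

```python
from statistics import median

def price_per_tb(offers):
    """offers: [{'price_usd': float, 'capacity_tb': float}, ...] from the last 90 days."""
    return [o["price_usd"] / o["capacity_tb"] for o in offers]

def looks_like_fake_sale(current_price, capacity_tb, history_offers):
    """Flag a 'sale' price that is still above the 90-day median $/TB for this model."""
    return current_price / capacity_tb > median(price_per_tb(history_offers))

history = [{"price_usd": p, "capacity_tb": 20} for p in (329, 339, 349, 319, 359)]
print(looks_like_fake_sale(351, 20, history))  # True: above the 90-day median
```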

Stack:

- Next.js frontend
- TypeScript/Node ingestion worker
- Postgres (Supabase) for the DB

CMR/SMR and warranty are included when available but coverage is still partial.

This is a zero-revenue project right now. I just want to make the data accurate and get feedback. I'm also considering expanding to memory soon, given the current pricing issues with those components. Thanks for checking it out!

https://www.listofdisks.com

3 0
franze 1 day ago

Show HN: Triclock – A Triangular Clock

TriClock is a new cryptocurrency that aims to combine the features of Bitcoin, Ethereum, and Monero to offer a secure, private, and scalable digital currency. The article provides an overview of TriClock's technical details and its potential to address the limitations of existing cryptocurrencies.

triclock.franzai.com
57 14
segmenta 2 days ago

Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS)

Hi HN,

AI agents that can run tools on your machine are powerful for knowledge work, but they’re only as useful as the context they have. Rowboat is an open-source, local-first app that turns your work into a living knowledge graph (stored as plain Markdown with backlinks) and uses it to accomplish tasks on your computer.

For example, you can say "Build me a deck about our next quarter roadmap." Rowboat pulls priorities and commitments from your graph, loads a presentation skill, and exports a PDF.

Our repo is https://github.com/rowboatlabs/rowboat, and there’s a demo video here: https://www.youtube.com/watch?v=5AWoGo-L16I

Rowboat has two parts:

(1) A living context graph: Rowboat connects to sources like Gmail and meeting notes like Granola and Fireflies, extracts decisions, commitments, deadlines, and relationships, and writes them locally as linked and editable Markdown files (Obsidian-style), organized around people, projects, and topics. As new conversations happen (including voice memos), related notes update automatically. If a deadline changes in a standup, it links back to the original commitment and updates it.
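To picture the storage format, here's a tiny illustration of a backlinked Markdown note being appended to when a commitment changes. This is my own sketch, not Rowboat's actual file layout:

```python
from pathlib import Path

VAULT = Path("graph")   # local Markdown vault; layout here is illustrative

def upsert_commitment(person: str, project: str, text: str) -> None:
    """Write a commitment line that backlinks to its person and project notes."""
    VAULT.mkdir(exist_ok=True)
    note = VAULT / f"{project}-commitments.md"
    line = f"- {text} (owner: [[{person}]], project: [[{project}]])\n"
    existing = note.read_text() if note.exists() else f"# {project} commitments\n"
    note.write_text(existing + line)

upsert_commitment("John", "Q3 Roadmap", "Ship billing v2 by June 15")
# a later standup moves the deadline; the same note gets the updated entry
upsert_commitment("John", "Q3 Roadmap", "Ship billing v2 by June 30 (was June 15)")
```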

(2) A local assistant: On top of that graph, Rowboat includes an agent with local shell access and MCP support, so it can use your existing context to actually do work on your machine. It can act on demand or run scheduled background tasks. Example: “Prep me for my meeting with John and create a short voice brief.” It pulls relevant context from your graph and can generate an audio note via an MCP tool like ElevenLabs.

Why not just search transcripts? Passing gigabytes of email, docs, and calls directly to an AI agent is slow and lossy. And search only answers the questions you think to ask. A system that accumulates context over time can track decisions, commitments, and relationships across conversations, and surface patterns you didn't know to look for.

Rowboat is Apache-2.0 licensed, works with any LLM (including local ones), and stores all data locally as Markdown you can read, edit, or delete at any time.

Our previous startup was acquired by Coinbase, where part of my work involved graph neural networks. We're excited to be working with graph-based systems again. Work memory feels like the missing layer for agents.

We’d love to hear your thoughts and welcome contributions!

github.com
199 56
gargi_tinyfish about 8 hours ago

Show HN: TinyFish Web Agent (82% on hard tasks vs. Operator's 43%)

Enterprises need ~90% accuracy to deploy web agents. Until now, no agent has come close on real-world tasks. TinyFish is the first production-ready web agent. Here's the evidence.

Hard-task results on Online-Mind2Web (300 tasks, 136 live websites, human-correlated judge):

- TinyFish: 81.9%
- OpenAI Operator: 43.2%
- Claude Computer Use: 32.4%
- Browser Use: 8.1%

Why not WebVoyager like everyone else?

Because it's broken. Easy tasks, Google Search shortcuts, and a judge that agrees with humans only 62% of the time. Browser Use self-reported 89% on WebVoyager — then scored 8.1% on hard tasks here.

We evaluated TinyFish against Online-Mind2Web instead — 300 real tasks, 136 live websites, three difficulty levels, and a judge that agrees with humans 85% of the time. No shortcuts. No easy mode.

The cookbook repo is open source: https://github.com/tinyfish-io/tinyfish-cookbook

You can see all failed task runs here: https://tinyurl.com/tinyfish-mind2web

Happy to answer questions about the architecture, the benchmark methodology, or why we think WebVoyager scores are misleading.

tinyfish.ai
16 12
n1sni 3 days ago

Show HN: I built a macOS tool for network engineers – it's called NetViews

Hi HN — I’m the developer of NetViews, a macOS utility I built because I wanted better visibility into what was actually happening on my wired and wireless networks.

I live in the CLI, but for discovery and ongoing monitoring, I kept bouncing between tools, terminals, and mental context switches. I wanted something faster and more visual, without losing technical depth — so I built a GUI that brings my favorite diagnostics together in one place.

About three months ago, I shared an early version here and got a ton of great feedback. I listened: a new name (it was PingStalker), a longer trial, and a lot of new features. Today I’m excited to share NetViews 2.3.

NetViews started because I wanted to know if something on the network was scanning my machine. Once I had that, I wanted quick access to core details—external IP, Wi-Fi data, and local topology. Then I wanted more: fast, reliable scans using ARP tables and ICMP.

As a Wi-Fi engineer, I couldn’t stop there. I kept adding ways to surface what’s actually going on behind the scenes.

Discovery & Scanning:

* ARP, ICMP, mDNS, and DNS discovery to enumerate every device on your subnet (IP, MAC, vendor, open ports).
* Fast scans using ARP tables first, then ICMP, to avoid the usual "nmap wait".

Wireless Visibility:

* Detailed Wi-Fi connection performance and signal data.
* Visual and audible tools to quickly locate the access point you’re associated with.

Monitoring & Timelines:

* Connection and ping timelines over 1, 2, 4, or 8 hours.
* Continuous "live ping" monitoring to visualize latency spikes, packet loss, and reconnects.

Low-level Traffic (but only what matters):

* Live capture of DHCP, ARP, 802.1X, LLDP/CDP, ICMP, and off-subnet chatter.
* mDNS decoded into human-readable output (this took months of deep dives).

Under the hood, it’s written in Swift. It uses low-level BSD sockets for ICMP and ARP, Apple’s Network framework for interface enumeration, and selectively wraps existing command-line tools where they’re still the best option. The focus has been on speed and low overhead.
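For readers curious about the ARP-first scan ordering, here's a rough sketch of the idea in Python. The app itself is Swift on BSD sockets, not subprocess calls; this only illustrates the ordering:

```python
import re
import subprocess

def arp_then_ping(subnet_hosts: list[str]) -> list[str]:
    """Take hosts already in the macOS ARP cache for free, then fall back to a
    slower ICMP pass for the rest (toy illustration of the scan ordering only)."""
    arp = subprocess.run(["arp", "-an"], capture_output=True, text=True).stdout
    cached = set(re.findall(r"\((\d+\.\d+\.\d+\.\d+)\)", arp))
    found = cached & set(subnet_hosts)
    for ip in set(subnet_hosts) - cached:
        ok = subprocess.run(["ping", "-c", "1", "-W", "200", ip],
                            capture_output=True).returncode == 0   # -W is ms on macOS
        if ok:
            found.add(ip)
    return sorted(found)
```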

I’d love feedback from anyone who builds or uses network diagnostic tools:

- Does this fill a gap you’ve personally hit on macOS?
- Are there better approaches to scan speed or event visualization that you’ve used?
- What diagnostics do you still find yourself dropping to the CLI for?

Details and screenshots: https://netviews.app

There’s a free trial and paid licenses; I’m funding development through direct sales rather than ads or subscriptions. Licenses include free upgrades.

Happy to answer any technical questions about the implementation, Swift APIs, or macOS permission model.

netviews.app
239 60
louis_w_gk 3 days ago

Show HN: Distr 2.0 – A year of learning how to ship to customer environments

A year ago, we launched Distr here to help software vendors manage customer deployments remotely. We had agents that pulled updates, a hub with a GUI, and a lot of assumptions about what on-prem deployment needed.

It turned out things get messy when your software is running in places you can't simply SSH into.

Over the last year, we’ve also helped modernize a lot of home-baked solutions: bash scripts that email when updates fail, Excel sheets nobody trusts to track customer versions, engineers driving to customer sites to fix things in person, debug sessions over email (“can you take a screenshot of the logs and send it to me?”), customers with access to internal AWS or GCP registries because there was no better option, and deployments two major versions behind that nobody wants to touch.

We waited a year before making our first breaking change, which led to a major SemVer update—but it was eventually necessary. We needed to completely rewrite how we manage customer organizations. In Distr, we differentiate between vendors and customers. A vendor is typically the author of a software / AI application that wants to distribute it to customers. Previously, we had taken a shortcut where every customer was just a single user who owned a deployment. We’ve now introduced customer organizations. Vendors onboard customer organizations onto the platform, and customers own their internal user management, including RBAC. This change obviously broke our API, and although the migration for our cloud customers was smooth, custom solutions built on top of our APIs needed updates.

Other notable features we’ve implemented since our first launch:

- An OCI container registry built on an adapted version of https://github.com/google/go-containerregistry/, directly embedded into our codebase and served via a separate port from a single Docker image. This allows vendors to distribute Docker images and other OCI artifacts if customers want to self-manage deployments.

- License Management to restrict which customers can access which applications or artifact versions. Although “license management” is a broadly used term, the main purpose here is to codify contractual agreements between vendors and customers. In its simplest form, this is time-based access to specific software versions, which vendors can now manage with Distr.

- Container logs and metrics you can actually see without SSH access. Internally, we debated whether to use a time-series database or store all logs in Postgres. Although we had to tinker quite a bit with Postgres indexes, it now runs stably.

- Secret Management, so database passwords don’t show up in configuration steps or logs.

Distr is now used by 200+ vendors, including Fortune 500 companies, across on-prem, GovCloud, AWS, and GCP, spanning health tech, fintech, security, and AI companies. We’ve also started working on our first air-gapped environment.

For Distr 3.0, we’re working on native Terraform / OpenTofu and Zarf support to provision and update infrastructure in customers’ cloud accounts and physical environments—empowering vendors to offer BYOC and air-gapped use cases, all from a single platform.

Distr is fully open source and self-hostable: https://github.com/distr-sh/distr

Docs: https://distr.sh/docs

We’re YC S24. Happy to answer questions about on-prem deployments and would love to hear about your experience with complex customer deployments.

github.com
96 29
Adanos about 9 hours ago

Show HN: Insider Trading Alerts – Open-Market Buys&Sells from SEC Form 4 Filings

The article discusses insider transactions, which are trades of a company's stock made by corporate insiders, such as executives and directors. It explores how these transactions can provide valuable insights into a company's financial health and future prospects, and how investors can utilize this information to make informed investment decisions.

stockalert.pro
4 0
alexpadula about 9 hours ago

Show HN: TidesDB – A persistent key-value store optimized for modern hardware

Hey everyone! Sharing an open-source storage engine I created and work on called TidesDB. I hope you check it out, and do let me know your thoughts and/or questions!

You can also find design documentation, benchmarks, libraries and more on the website.

Alex

github.com
9 4
thisisjedr 3 days ago

Show HN: JavaScript-first, open-source WYSIWYG DOCX editor

We needed a JS-first WYSIWYG DOCX editor and couldn't find a solid OSS option; most were either commercial or abandoned.

As an experiment, we gave Claude Code the OOXML spec, a concrete editor architecture, and a Playwright-based test suite. The agent iterated in a (Ralph) loop over a few nights and produced a working editor from scratch.

Core text editing works today. Tables and images are functional but still incomplete. MIT licensed.

github.com
125 44
JasonHEIN about 9 hours ago

Show HN: PardusDB – SQLite-like vector database in Rust

PardusDB is a lightweight, single-file embedded vector database written in pure Rust — think SQLite, but for vectors and similarity search.

Key highlights:

- No external dependencies
- Familiar SQL syntax for CREATE/INSERT/SELECT + vector SIMILARITY queries
- Graph-based ANN search, thread-safe, transactions
- Python RAG example with Ollama included

We built this as the engine behind our no-code platform at https://pardusai.org/ (private, local-first data analysis).

GitHub: https://github.com/JasonHonKL/PardusDB

Feedback welcome!

github.com
2 0
sathish-mg about 9 hours ago

Show HN: Agent Tools – 136 deterministic data tools for AI agents (MCP/A2A/REST)

The article discusses Agent Tools, an open-source library that provides tools for building and deploying AI agents. It covers the key features of the library, including agent architecture, training, deployment, and monitoring capabilities.

github.com
2 1
rishi_blockrand about 23 hours ago

Show HN: Double blind entropy using Drand for verifiably fair randomness

The only way to get a trustless random value is to have it distributed and time-locked three ways: player, server, and future entropy.

In the demo above, the moment you commit (Roll Dice), a commit with the hash of a player secret is sent to the server. The server accepts it and sends back the hash of its own secret along with the "future" drand round number at which the randomness will resolve. The future round used in the demo is about 10 seconds out.

When the reveal happens (after that drand round), all the secrets are revealed and the random number is generated from "player-seed:server-seed:drand-signature".
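A minimal sketch of the commit/reveal math, assuming SHA-256 over that concatenation (the demo's exact derivation may differ, and the seed/signature values here are placeholders):

```python
import hashlib

def h(x: str) -> str:
    return hashlib.sha256(x.encode()).hexdigest()

# Commit phase: both hashes are published before the target drand round exists.
player_seed, server_seed = "p-8f3a", "s-c41d"        # illustrative secrets
player_commit, server_commit = h(player_seed), h(server_seed)

# Reveal phase, after the target round: seeds plus the drand signature are public.
drand_signature = "a7b2"                             # fetched from the drand beacon

assert h(player_seed) == player_commit               # check 1
assert h(server_seed) == server_commit               # check 2
roll = int(h(f"{player_seed}:{server_seed}:{drand_signature}"), 16) % 6 + 1
print(roll)                                          # anyone can recompute this
```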

All the verification is pure math, so it's truly trustless:

1. The player seed must match the committed player hash.

2. The server seed must match the committed server hash.

3. The drand signature is not publicly available at the time of commit and becomes available at the time of reveal (time-locked).

4. The generated random number is deterministic after the event and unknown and unpredictable before it.

5. No party can influence the final outcome; in particular, no one gets a "last-look" advantage.

I think this should be used in all games, online lotteries/gambling, and other systems that want to be fair by design rather than by trust.

blockrand.net
21 15
JanLepsky 1 day ago

Show HN: Renovate – The Kubernetes-Native Way

Hey folks, we built a Kubernetes operator for Renovate and wanted to share it. Instead of running Renovate as a cron job or relying on hosted services, this operator lets you manage it as a native Kubernetes resource with CRDs. You define your repos and config declaratively, and the operator handles scheduling and execution inside your cluster. No external dependencies, no SaaS lock-in, no webhook setup. The whole thing is open source and will stay that way – there's no paid tier or monetization plan behind it, we just needed this ourselves and figured others might too.

Would love to hear feedback or ideas if you give it a try: https://github.com/mogenius/renovate-operator

github.com
41 15
kaliades about 10 hours ago

Show HN: BetterDB – Valkey/Redis monitoring that persists what servers forget

Hey HN, I'm Kristiyan. I previously led Redis Insight (the official Redis GUI). When I started working with Valkey, I found the observability tooling lacking — so I started building BetterDB.

The core problem: Valkey and Redis expose useful operational data (slowlog, latency stats, client lists, memory breakdowns), but it's all ephemeral. Restart your server and it's gone. Existing tools show real-time charts but can't tell you what happened at 3am when your p99 spiked.
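To make the ephemerality concrete, here's a toy poller (not BetterDB's code; just an illustration using redis-py, which also speaks to Valkey) that snapshots the slowlog into SQLite so the 3am spike is still inspectable in the morning:

```python
import sqlite3
import time

import redis  # pip install redis; works against Valkey too

r = redis.Redis(host="localhost", port=6379)
db = sqlite3.connect("slowlog_history.db")
db.execute("CREATE TABLE IF NOT EXISTS slowlog (ts REAL, cmd TEXT, micros INTEGER)")

while True:
    for entry in r.slowlog_get(128):          # ephemeral on the server: gone after a restart
        cmd = entry["command"]
        cmd = cmd.decode(errors="replace") if isinstance(cmd, bytes) else str(cmd)
        db.execute("INSERT INTO slowlog VALUES (?, ?, ?)",
                   (entry["start_time"], cmd, entry["duration"]))
    db.commit()
    time.sleep(60)                            # naive: no dedup of repeated entries
```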

BetterDB persists this ephemeral data and turns it into actionable insights:

- Historical analytics for queries (slowlog and commandlog patterns aggregated by type), clients (commands, connections, buffers), and ACL activity
- Anomaly detection and 99 Prometheus metrics
- Cluster visualization with topology graphs and slot heatmaps
- Automated latency and memory diagnostics
- AI assistant for querying your instance in plain English (via local Ollama)
- Sub-1% performance overhead

On that last point — I wrote up our interleaved A/B benchmarking methodology in detail: https://www.betterdb.com/blog/interleaved-testing. Most tools claim "minimal overhead" without showing their work. We open-sourced the benchmark suite so you can run it on your own hardware and verify.

You can try it right now:

    npx @betterdb/monitor

Or via Docker:

    docker run -d -p 3001:3001 betterdb/monitor

BetterDB follows an open-core model under the OCV Open Charter (which prevents future licensing changes). The community edition is free with real monitoring value. Pro and Enterprise tiers add historical persistence, alerting, and compliance features, but they are free for now and will stay free at least until the end of the month.

We're building this in public — the benchmark suite, the technical blog posts, and the roadmap are all out in the open. Would love feedback from production users of Valkey or Redis on what observability gaps you're still hitting.

GitHub: https://github.com/BetterDB-inc/monitor

Blog: https://www.betterdb.com/blog

3 0
cmuir about 11 hours ago

Show HN: Got VACE working in real-time – 30fps on a 5090

I adapted VACE to work with real-time autoregressive video generation.

Here's what it can do right now in real time:

- Depth, pose, optical flow, scribble, edge maps — all the v2v control stuff
- First frame animation / last frame lead-in / keyframe interpolation
- Inpainting with static or dynamic masks
- Stacking stuff together (e.g. depth + LoRA, inpainting + reference images)
- Reference-to-video is in there too but honestly quality isn't great yet compared to batch

Getting ~20 fps for most control modes on a 5090 at 368x640 with the 1.3B models. Image-to-video hits ~28 fps. It works with the 14B models as well, but they don't fit on a 5090 with VACE.

This is all part of Daydream Scope (https://github.com/daydreamlive/scope), which is an open source tool for running real-time interactive video generation pipelines. The demo was created in Scope and is a combination of Longlive, VACE+Scribble, and a custom LoRA.

There's also a very early WIP ComfyUI node pack wrapping scope: https://github.com/daydreamlive/ComfyUI-Daydream-Scope

Curious what people think.

daydream.live
10 0
marcus-verus about 11 hours ago

Show HN: A FIRE calculator that verifies or determines your retirement number

Most retirement calculators either oversimplify things or ask you to link your accounts so they can make money off your data. I wanted something that actually helped me see if I was on track without handing over a bank login, so I built RetireNumber. It's a privacy-first retirement planner. You type in your own numbers. No account linking, no selling your data, no VC. Just me and a subscription.

The engine does real math: Monte Carlo with 1,000 runs, historical backtesting to 1928 using Shiller's Yale data, tax-aware withdrawal strategies, and scenario comparison so you can try different life paths. Whether you're still figuring out your number or you already have a target in mind, you can use it to get there or to check that your plan actually holds up.
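For a sense of what the Monte Carlo piece does, here's a toy version of that kind of simulation. This is not RetireNumber's engine; real returns, inflation, and taxes make it more involved, and the return parameters below are just placeholders:

```python
import random

def success_rate(start_balance, annual_spend, years=30, runs=1_000,
                 mean_return=0.07, stdev=0.16, seed=0):
    """Toy Monte Carlo: how often a portfolio survives `years` of fixed withdrawals."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(runs):
        balance = start_balance
        for _ in range(years):
            balance = (balance - annual_spend) * (1 + rng.gauss(mean_return, stdev))
            if balance <= 0:
                break
        survived += balance > 0
    return survived / runs

print(success_rate(1_000_000, 40_000))   # rough 4%-rule sanity check
```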

You can try it without signing up: https://retirenumber.com/try. There's a demo mode and a short guided tour. Changelog and the full story are on the site if you want to dig in.

This isn't financial advice. It's just a tool for checking your number and whether your plan holds up. There is still a lot to do, but I'd love to hear whether the inputs and results feel clear and useful.

-Mark

retirenumber.com
5 0
openbootdotenv about 11 hours ago

Show HN: It's 2026 and setting up a Mac for development is still mass googling

OpenBoot is an open-source project that aims to create a secure, customizable, and extensible bootloader for a wide range of devices, providing a foundation for building robust and flexible embedded systems.

github.com
3 1