Show HN: LookTake – Try anyone's makeup, outfit, or hairstyle on your photo
Hi HN, I'm Taemin. I built LookTake, a social platform where users share beauty, fashion, and hair looks, and anyone can "Take" that look onto their own photo using AI.
I worked at a game company in Korea doing AI research — graphics, vision, and image generation. I built the in-house image gen service there. While reading generative AI papers, I came across virtual try-on research and had a realization: people will eventually shop by seeing products on themselves, not just browsing photos of models. I started experimenting on weekends. The early results were rough, but promising enough that I left my job.
The core technical challenge: when you use image generation models to transfer someone's look onto another person, they either lose your identity or drop the style details. You ask it to transfer a specific makeup look and it gives you a completely different face, or an outfit loses its pattern and texture, or the hairstyle comes out flat. A prompt-only approach just isn't precise enough.
So I built a multi-stage pipeline — object detection, inpainting, and several other steps — to preserve your identity while accurately transferring style details.
Unlike preset filters or brand catalog try-ons, users share styles from their own everyday photos and anyone in the community can try that look on themselves with one tap. It works across three categories: beauty (makeup transfer), fashion (outfit try-on), and hair (style and color).
I launched in the US and Korea about a month ago. Still early and plenty to improve — would love honest feedback. Does the try-on quality feel convincing?
Demo: https://youtube.com/shorts/mDLkiV3D4rI iOS: https://apps.apple.com/app/looktake-share-style-with-ai/id67... Android: https://play.google.com/store/apps/details?id=io.looktake.ap...
Show HN: Quantifying opportunity cost with a deliberately "simple" web app
Hi HN,
A while ago I had a mildly depressing realization.
Back in 2010, I had around $60k. Like a "responsible" person, I used it as a down payment on an apartment. Recently, out of curiosity, I calculated what would have happened if I had instead put that money into NVIDIA stock.
I should probably add some context.
For over 10 years I've worked as a developer on trading platforms and financial infrastructure. I made a rule for myself - never trade on the market.
In 2015, when Bitcoin traded at about $300, my brother and I were talking about whether it was a bubble. He made a bold claim that one day it might reach $100k per coin. I remember thinking it sounded unrealistic - and even if it wasn't, I wasn't going to break my rule.
That internal tension - building systems around markets while deliberately staying out of them - is probably what made the "what if?" question harder to ignore years later.
The result was uncomfortable. The opportunity cost came out to tens of millions of dollars.
That thought stuck with me longer than it probably should have, so I decided to build a small experiment to make this kind of regret measurable: https://shouldhavebought.com
At its core, the app does one basic thing: you enter an asset, an amount, and two dates, and it gives you a plain numeric result - essentially a receipt for a missed opportunity.
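That core step is just unit math; a minimal sketch of the calculation (my illustration, not the site's actual code):

```python
def opportunity_cost(amount, price_then, price_now):
    """What the position would be worth now, minus what you put in."""
    units = amount / price_then        # how many units the cash bought then
    return units * price_now - amount  # the "receipt" for the miss

# $60k into a stock that went from $4 to $400 a share:
print(opportunity_cost(60_000, 4, 400))  # prints 5940000.0
```

The real work, as described below, is not this formula but feeding it clean historical prices.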
I intentionally designed the UI to feel raw and minimal, almost like a late-90s terminal. No charts, no images, no emotional cushioning - just a number staring back at you.
What surprised me wasn't the result, but how much modern web infrastructure it took to build something that looks so simple.
Although the app is a single page with almost no UI elements, it still required:
- Client-side reactivity for a responsive terminal-like experience (Alpine.js)
- A traditional backend (Laravel) to validate inputs and aggregate historical market data
- Normalizing time-series data across different assets and events (splits, gaps, missing days)
- Dynamic OG image generation for social sharing (with color/state reflecting gain vs loss)
- A real-time feed showing recent calculations ("Wall of Pain"), implemented with WebSockets instead of a hosted service
- Caching and performance tuning to keep the experience instant
- Dealing with mobile font rendering and layout quirks, despite the "simple" UI
- Cron and queueing for historical data updates
All of that just to show a number.
Because markets aren't one-directional, I also added a second mode that I didn't initially plan: "Bullet Dodged". If someone almost bought an asset right before a major crash, the terminal flips state and shows how much capital they preserved by doing nothing. In practice, this turned out to be just as emotionally charged as missed gains.
Building this made me reflect on how deceptive "simplicity" on the web has become. As a manager I know says: "Just add a button". But even recreating a deliberately primitive experience today requires understanding frontend reactivity, backend architecture, real-time transport, social metadata, deployment, and performance tradeoffs.
I didn't build this as a product so much as an experiment - part personal curiosity, part technical exploration.
I'd be very interested to hear how others think about:
Where they personally draw the line on stack complexity for small projects?
Whether they would have gone fully static + edge functions for something like this?
How much infrastructure is "too much" for a deliberately minimal interface?
And, optionally, what your worst "should have bought" moment was?
Happy to answer any technical questions or dig into specific implementation details if useful.
Show HN: Moonshine Open-Weights STT models – higher accuracy than WhisperLargev3
I wanted to share our new speech-to-text models, and the library to use them effectively. We're a small startup (six people, sub-$100k monthly GPU budget), so I'm proud of the work the team has done to create streaming STT models with lower word-error rates than OpenAI's largest Whisper model. Admittedly, Large v3 is a couple of years old, but we're near the top of the HF OpenASR leaderboard, even up against Nvidia's Parakeet family. Anyway, I'd love to get feedback on the models and software, and hear about what people might build with it.
Show HN: Scheme-langserver – Digest incomplete code with static analysis
Scheme-langserver digests incomplete Scheme code to serve real-world programming needs, including goto-definition, auto-completion, type inference, and many other LSP-defined language features. The project is based here: https://github.com/ufo5260987423/scheme-langserver.
I built it because I was tired of Scheme/Lisp's ragged development environment, especially the lack of an IDE-like, highly customized programming experience. DrRacket and many REPL-based counterparts have done a lot, but common cases like the following still don't reach the level of other modern languages:

(let* ([ready-for-reference 1]
       [call-reference (+ ready-for-)]))

The `ready-for-` after `call-reference` should trigger an auto-completion offering `ready-for-reference` as a candidate. The server also knows that both bindings have type number, and that their scope is bounded by the `let*`'s outer brackets. I wished for an IDE with features like these, and such small wishes gradually accumulated over the past ten years until no ready-made product satisfied me. For more, see my GitHub repository, which has a screen recording showing how your code gets help from this project, plus detailed documentation, so don't hesitate to use it.
Here're some other things sharing to Hacker News readers:
1. Why I don't use DrRacket: LSP follows the KISS (Keep It Simple, Stupid) principle, and I don't want to get involved with font issues like the ones I just read about in its GitHub issues.
2. The current state of scheme-langserver: it has achieved a kind of self-hosting, in that I can continue developing it with the help of its own VS Code plugin. However, I directly used Chez Scheme's tokenizer, and this led to several uncaught exceptions, which I promise to fix, though for now I'm occupied with developing new features. If something seems wrong with scheme-langserver, rebooting VS Code generally works.
3. Technology roadmap: I'm now developing a new macro expander so that users can customize LSP behavior by coding their own macros, without altering this project. After that, I plan to improve efficiency and fix bugs.
4. Do I need any help? Yes. And I'd like to say that talking about scheme-langserver with me is also a kind of help.
5. Long-term view: I suspect that in 2 or 3 years I will lose focus on this project, but according to some of my friends, I may integrate it with other fantastic work.
Show HN: Emdash – Open-source agentic development environment
Hey HN! We’re Arne and Raban, the founders of Emdash (https://github.com/generalaction/emdash).
Emdash is an open-source and provider-agnostic desktop app that lets you run multiple coding agents in parallel, each isolated in its own git worktree, either locally or over SSH on a remote machine. We call it an Agentic Development Environment (ADE).
You can see a 1 minute demo here: https://youtu.be/X31nK-zlzKo
We are building Emdash for ourselves. While working on a cap-table management application (think Stripe Atlas + Pulley), we found our development workflow to be messy: lots of terminals, lots of branches, and too much time spent waiting on Codex.
Emdash puts the terminal at the center and makes it easy to run multiple agents at once. Each agent runs as a task in its own git worktree. You can start one or a few agents on the same problem, test, and review.
Emdash works over SSH so you can run agents where your code lives and keep the parallel workflow. You can assign tickets to agents, edit files manually, and review changes.
We also spent time making task startup fast. Each task can be created in a worktree, and creating worktrees on demand was taking 5s+ in some cases. We now keep a small reserve of worktrees in the background and let a new task claim one instantly. That brought task start time down to ~500–1000ms depending on the provider. We also spawn the shell directly and avoid loading the shell environments on startup.
We believe using the providers’ native CLIs is the right approach. It gives you the full capabilities of each agent, always. If a provider starts supporting plan mode, we don't have to add that first.
We support 21 coding agent CLIs today, including Claude Code, Codex, Gemini, Droid, Amp, Codebuff, and more. We auto-detect what you have installed and we’re provider-agnostic by design. If there’s a provider you want that we don’t support yet, we can add it. We believe that in the future, some agents will be better suited for task X and others for task Y. Codex, Claude Code, and Gemini all have fans. We want to be agnostic and enable individuals and teams to freely switch between them.
Beyond orchestration, we try to pull most of the development loop into Emdash. You can review diffs, commit, open PRs, see CI/CD checks, and merge directly from Emdash once checks pass. When starting a task, you can pass issues from Linear, GitHub, and Jira to an agent. We also support convenience variables and lifecycle scripts so it’s easy to allocate ports and test changes.
Emdash is fully open-source and MIT-licensed.
Download for macOS, Linux, or Windows (as of yesterday!), or install via Homebrew: brew install --cask emdash.
We’d love your feedback. What does your coding agent development setup look like, especially when working with multiple agents? We want to learn more about it. Check out our repository here: https://github.com/generalaction/emdash
We’ll be around in the comments — thanks!
Show HN: WinterMute Local-first OSINT workbench with native Tor and AI analysis
Desktop app for OSINT and darknet investigations. Everything runs locally on your machine; no evidence is sent to the cloud. A built-in Tor browser lets you access .onion sites directly from the workbench without juggling separate tools. An AI copilot can analyze screenshots and web pages as you work. Every piece of evidence gets SHA-256 hashed with a tamper-evident custody chain so your collection holds up. IOC tracking works across cases, with automatic cross-case correlation to link shared infrastructure between investigations. STIX 2.1 export and MITRE ATT&CK mapping support structured reporting. Currently only available on macOS.
Show HN: ArcticKey – Managed Redis (Valkey) Hosted in the EU
Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code
Every MCP tool call dumps raw data into Claude Code's 200K context window. A Playwright snapshot costs 56 KB, 20 GitHub issues cost 59 KB. After 30 minutes, 40% of your context is gone.
I built an MCP server that sits between Claude Code and these outputs. It processes them in sandboxes and only returns summaries. 315 KB becomes 5.4 KB.
It supports 10 language runtimes, SQLite FTS5 with BM25 ranking for search, and batch execution. Session time before slowdown goes from ~30 min to ~3 hours.
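The FTS5/BM25 search piece can be reproduced with the standard library alone. A minimal sketch, assuming your sqlite3 build ships FTS5 (most modern builds do); the table and sample rows here are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# FTS5 virtual table; bm25() returns a relevance score (lower = better match)
con.execute("CREATE VIRTUAL TABLE outputs USING fts5(tool, body)")
con.executemany(
    "INSERT INTO outputs VALUES (?, ?)",
    [("playwright", "snapshot of login page with submit button"),
     ("github", "open issues mentioning login timeout"),
     ("github", "pull request adding retry logic")],
)
rows = con.execute(
    "SELECT tool, body FROM outputs WHERE outputs MATCH ? ORDER BY bm25(outputs)",
    ("login",),
).fetchall()
print(rows)  # the two rows mentioning "login", best match first
```

Searching summaries in a side table like this, instead of re-reading raw tool output, is what keeps the context window small.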
MIT licensed, single command install:
/plugin marketplace add mksglu/claude-context-mode
/plugin install context-mode@claude-context-mode
Benchmarks and source: https://github.com/mksglu/claude-context-mode
Would love feedback from anyone hitting context limits in Claude Code.
Show HN: Recursively apply patterns for pathfinding
I've been begrudgingly working on autorouters for 2 years, looking for new techniques or modern methods that might allow AI to create circuit boards.
One of the biggest problems, in my view, with training an AI to do autorouting is the traditional grid-based representation of autorouting problems, which challenges spatial understanding. But we know that vision models are very good at classifying, so I wondered if we could train a model to output a path as a classification. But then how do you represent the path? This led me down the track of trying to build an autorouter that represents paths as a bunch of patterns.
More details: https://blog.autorouting.com/p/the-recursive-pattern-pathfin...
Show HN: Workz – Zoxide for Git worktrees (auto node_modules and .env, AI-ready)
workz fixes the #1 pain with git worktrees in 2026:
When you spin up a new worktree for Claude/Cursor/AI agents you always end up:
- Manually copying .env* files
- Re-running npm/pnpm install (or cargo build) and duplicating gigabytes
workz does it automatically:
- Smart symlinking of 22 heavy dirs (node_modules, target, .venv, etc.) with project-type detection
- Copies .env*, .npmrc, secrets, docker overrides
- Zoxide-style fuzzy switch: just type `w` → beautiful skim TUI + auto `cd`
- `--ai` flag launches Claude/Cursor directly in the worktree
- Zero-config for Node/Rust/Python/Go; custom .workz.toml only if you want
Install: brew tap rohansx/tap && brew install workz # or cargo install workz
Demo in README → https://github.com/rohansx/workz
Feedback very welcome, especially from people running multiple AI agents in parallel!
Show HN: Tag Promptless on any GitHub PR/Issue to get updated user-facing docs
Hi HN! I'm Prithvi—my co-founder Frances and I launched Promptless almost a year ago here (https://news.ycombinator.com/item?id=43092522). It's an AI teammate that watches your workflows—code changes, support tickets, Slack threads, etc.—and automatically drafts doc updates when it spots something that should be documented.
Frances and I really appreciated the feedback from our first launch. Today we’re launching Promptless 1.0, which addresses our biggest learnings from the last 12 months.
I also made it way easier to try out. You can tag @promptless on any open-source GitHub PR or Issue with a doc update request, and Promptless will create a fork and open a PR for your docs to help. Feel free to use our own docs as a playground: https://github.com/Promptless/docs/issues
Or, you can sign up at https://promptless.ai to get free access for your own docs for the next 30 days. Here's a demo video: https://youtu.be/IWwimHCEY7Y
For me, the coolest part of the last year has been seeing how users got creative with Promptless. One user has Promptless listening in to all their Slack Connect channels, so whenever they answer a customer question, Promptless figures out if their docs should be updated and drafts an update if so. Another user has Promptless processing every customer meeting transcript and updating their internal docs after each meeting: customer dashboards, feature request pages, etc.
Some of the biggest things that are new with version 1.0:
- Automatically updating screenshots: this was by far our most requested feature. The need here was always clear. People would exclude screenshots from docs because they’d get stale quickly, even though they knew screenshots would be helpful to users. A year ago, we just couldn't ship a good enough solution, but given how much LLMs' visual grounding has improved in the last year, now we've got something we're proud of.
- Slop-free writing: The most common critique on early Promptless suggestions was that even though they were accurate, they could sound generic or verbose, or might just reek of AI slop. Promptless 1.0 is 3.5x better at this (measured by voice-alignment compared to what users actually published), through a combination of fine-tuned models, sub-agents, and alignment on user-defined preferences.
- Open-source program: We're especially proud of this—Promptless is now free for CNCF/Linux Foundation projects (reach out if you’re a maintainer!). You can take a look at how Promptless is supporting Vitess (a CNCF-graduated project) with their docs here: https://github.com/vitessio/website/commits
Check it out and let us know if you have any questions, feedback, or criticism!
Show HN: enveil – hide your .env secrets from prAIng eyes
Show HN: PgDog – Scale Postgres without changing the app
Hey HN! Lev and Justin here, authors of PgDog (https://pgdog.dev/), a connection pooler, load balancer and database sharder for PostgreSQL. If you build apps with a lot of traffic, you know the first thing to break is the database. We are solving this with a network proxy that works without requiring application code changes or database migrations.
Our post from last year: https://news.ycombinator.com/item?id=44099187
The most important update: we are in production. Sharding is used a lot, with direct-to-shard queries (one shard per query) working pretty much all the time. Cross-shard (or multi-database) queries are still a work in progress, but we are making headway.
Aggregate functions like count(), min(), max(), avg(), stddev() and variance() are working, without refactoring the app. PgDog calculates the aggregate in-transit, while transparently rewriting queries to fetch any missing info. For example, multi-database average calculation requires a total count of rows to calculate the original sum. PgDog will add count() to the query, if it’s not there already, and remove it from the rows sent to the app.
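The avg() case shows why the count has to ride along: per-shard averages alone can't be merged. A sketch of the merge step (my illustration, not PgDog's code):

```python
def merge_avg(shard_results):
    """Merge per-shard (avg, count) pairs into the global average.

    avg alone can't be combined across shards, but avg * count recovers
    each shard's sum, which is why count() gets injected into the query.
    """
    total_sum = sum(avg * n for avg, n in shard_results)
    total_count = sum(n for _, n in shard_results)
    return total_sum / total_count

# shard A: avg 10 over 2 rows; shard B: avg 40 over 6 rows
print(merge_avg([(10, 2), (40, 6)]))  # prints 32.5, not the naive 25.0
```

The same decomposition trick covers stddev and variance, which merge via sums and sums of squares.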
Sorting and grouping work too, including DISTINCT, if the column(s) are referenced in the result. Over 10 data types are supported, like timestamp(tz), all integers, varchar, etc.
Cross-shard writes, including schema changes (CREATE/DROP/ALTER), are now atomic and synchronized between all shards with two-phase commit. PgDog keeps track of the transaction state internally and will rollback the transaction if the first phase fails. You don’t need to monkeypatch your ORM to use this: PgDog will intercept the COMMIT statement and execute PREPARE TRANSACTION and COMMIT PREPARED instead.
Omnisharded tables, a.k.a replicated or mirrored (identical on all shards), support atomic reads and writes. That’s important because most databases can’t be completely sharded and will have some common data on all databases that has to be kept in-sync.
Multi-tuple inserts, e.g., INSERT INTO table_x VALUES ($1, $2), ($3, $4), are split by our query rewriter and distributed to their respective shards automatically. They are used by ORMs like Prisma, Sequelize, and others, so those now work without code changes too.
Sharding keys can be mutated. PgDog will intercept and rewrite the update statement into 3 queries, SELECT, INSERT, and DELETE, moving the row between shards. If you’re using Citus (for everyone else, Citus is a Postgres extension for sharding databases), this might be worth a look.
If you’re like us and prefer integers to UUIDs for your primary keys, we built a cross-shard unique sequence, directly inside PgDog. It uses the system clock (and a couple other inputs), can be called like a Postgres function, and will automatically inject values into queries, so ORMs like ActiveRecord will continue to work out of the box. It’s monotonically increasing, just like a real Postgres sequence, and can generate up to 4 million numbers per second with a range of 69.73 years, so no need to migrate to UUIDv7 just yet.
INSERT INTO my_table (id, created_at) VALUES (pgdog.unique_id(), now());
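The quoted figures (about 4M ids per second, a ~70-year range) are consistent with a Snowflake-style 64-bit layout: 41 bits of milliseconds plus a 12-bit per-millisecond counter. Here's a guess at the shape in Python; the exact bit layout is an assumption, not PgDog's actual implementation:

```python
import threading, time

EPOCH_MS = 1_700_000_000_000  # hypothetical custom epoch, in milliseconds

class UniqueSequence:
    """Snowflake-style ids: (ms since epoch << 22) | (node << 12) | counter.

    41 bits of milliseconds cover ~69.7 years, and a 12-bit per-ms
    counter yields ~4.1M ids/second, matching the quoted figures.
    """
    def __init__(self, node_id=0):
        self._node = node_id & 0x3FF
        self._lock = threading.Lock()
        self._last_ms = 0
        self._counter = 0

    def next_id(self):
        with self._lock:
            # never let a clock step move ids backwards
            now = max(int(time.time() * 1000) - EPOCH_MS, self._last_ms)
            if now != self._last_ms:
                self._last_ms, self._counter = now, 0
            else:
                self._counter += 1  # a real impl waits for the next ms at 4096
            return (self._last_ms << 22) | (self._node << 12) | self._counter

seq = UniqueSequence(node_id=1)
ids = [seq.next_id() for _ in range(5)]
print(ids == sorted(ids))  # prints True: ids only increase
```

Because the high bits are time, ids sort by creation order, which is what keeps B-tree index inserts append-only, the same property UUIDv7 is usually reached for.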
Resharding is now built-in. We can move gigabytes of tables per second, by parallelizing logical replication streams across replicas. This is really cool! Last time we tried this at Instacart, it took over two weeks to move 10 TB between two machines. Now, we can do this in just a few hours, in big part thanks to the work of the core team that added support for logical replication slots to streaming replicas in Postgres 16.
Sharding hardly works without a good load balancer. PgDog can monitor replicas and move write traffic to a promoted primary during a failover. This works with managed Postgres, like RDS (incl. Aurora), Azure Pg, GCP Cloud SQL, etc., because it just polls each instance with “SELECT pg_is_in_recovery()”. Primary election is not supported yet, so if you’re self-hosting with Patroni, you should keep it around for now, but you don’t need to run HAProxy in front of the DBs anymore.
The load balancer is getting pretty smart and can handle edge cases like SELECT FOR UPDATE and CTEs with INSERT/UPDATE statements, but if you still prefer to handle your read/write separation in code, you can do that too with manual routing. This works by giving PgDog a hint at runtime: a connection parameter (-c pgdog.role=primary), SET statement, or a query comment. If you have multiple connection pools in your app, you can replace them with just one connection to PgDog instead. For multi-threaded Python/Ruby/Go apps, this helps by reducing memory usage, I/O and context switching overhead.
Speaking of connection pooling, PgDog can automatically rollback unfinished transactions and drain and re-sync partially sent queries, all in an effort to preserve connections to the database. If you’ve seen Postgres go to 100% CPU because of a connection storm caused by an application crash, this might be for you. Draining connections works by receiving and discarding rows from abandoned queries and sending the Sync message via the Postgres wire protocol, which clears the query context and returns the connection to a normal state.
PgDog is open source and welcomes contributions and feedback in any form. As always, all features are configurable and can be turned off/on, so should you choose to give it a try, you can do so at your own pace. Our docs (https://docs.pgdog.dev) should help too.
Thanks for reading and happy hacking!
Show HN: Chaos Monkey but for Audio Video Testing (WebRTC and UDP)
It takes an input video and converts it into H.264/Opus RTP streams that you can blast at your video call systems (WebRTC, SFUs, etc.). It also injects network chaos like packet loss, jitter, and bitrate throttling to see how things break.
It scales from 1 to n participants, depending on the compute and memory of the host system. Best part? It’s packaged with Nix, so it builds the same everywhere (Linux, macOS, ARM, x86). No dependency hell.
It supports both UDP (with a relay chain for Kubernetes) and WebRTC (with containerized TURN servers). Chaos spikes can be distributed evenly, randomly, or front/back-loaded for different test scenarios. To change this, just edit the values in a single config file.
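The loss/jitter part of this kind of chaos can be approximated with a tiny UDP forwarder. A sketch of the idea only; the function names, default rates, and ports are mine, not the tool's:

```python
import random, socket, threading

def chaos_decision(loss=0.05, jitter_ms=30, rng=random.random):
    """Per-datagram fate: (drop, delay_seconds)."""
    if rng() < loss:
        return True, 0.0                       # simulated packet loss
    return False, rng() * jitter_ms / 1000.0   # simulated jitter

def chaos_forward(listen_port, dst, loss=0.05, jitter_ms=30):
    """Blocking UDP forwarder that applies chaos_decision to each datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", listen_port))
    while True:
        data, _ = sock.recvfrom(65535)
        drop, delay = chaos_decision(loss, jitter_ms)
        if not drop:
            # delayed send models jitter; reordering falls out for free
            threading.Timer(delay, sock.sendto, (data, (dst[0], dst[1]))).start()
```

Randomly delaying datagrams also reorders them, which is exactly the kind of condition jitter buffers and RTP sequence handling need to survive.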
Show HN: A free tool to turn your boring screenshots brutalist in seconds
Show HN: Babyshark – Wireshark made easy (terminal UI for PCAPs)
Hey all, I built babyshark, a terminal UI for PCAPs aimed at people who find Wireshark powerful but overwhelming.
The goal is “PCAPs for humans”:
- Overview dashboard answers what’s happening + what to click next
- Domains view (hostnames first) → select a domain → jump straight to relevant flows (works even when DNS is encrypted/cached by using observed IPs from flows)
- Weird stuff view surfaces common failure/latency signals (retransmits/out-of-order hints, resets, handshake issues, DNS failures when visible)
- From there you can drill down: Flows → Packets → Explain (plain-English hints) / follow stream
Commands: Offline: babyshark --pcap capture.pcap
Live (requires tshark): babyshark --list-ifaces then babyshark --live en0
Repo + v0.1.0 release: https://github.com/vignesh07/babyshark
Would love feedback on UX + what “weird detectors” you’d want next.
Show HN: Sowbot – Open-hardware agricultural robot (ROS2, RTK GPS)
Sowbot is an open-hardware agricultural robot designed to close the "prototype gap" that kills most agri-robotics startups and research projects — the 18+ months spent on drivers, networking, safety watchdogs, and UI before you can even start on the thing you actually care about.
The hardware is built around a stackable 10×10cm compute module with two ARM Cortex-A55 SBCs — one for ROS 2 navigation/EKF localisation, one dedicated to vision/YOLO inference — connected via a single ethernet cable.
Centimetre-level positioning via dual RTK GNSS, CAN bus for field comms, and real-time motor control via ESP32 running Lizard firmware.
Everything — schematics, PCB layouts, firmware — is under open licences. The software stack runs on RoSys/Field Friend (for teams who want fast iteration) or DevKit ROS (for teams already in the ROS ecosystem). The idea is that a lab in one country can reproduce another lab's experiment by sharing a Docker image.
Current status: the Open Core brain is largely fabricated, the full-size Sowbot body has a detailed BOM but isn't yet assembled, and we have two smaller dev platforms (Mini and Pico) in various stages of testing.
We're a small volunteer team and we're looking for contributors — hardware, ROS, firmware, docs, whatever you can offer.
The best place to start is our Discord: https://discord.gg/SvztEBr4KZ — we have a weekly call if you'd prefer to just show up and chat.
GitHub: https://github.com/Agroecology-Lab/feldfreund_devkit_ros/tre...
Show HN: X86CSS – An x86 CPU emulator written in CSS
Show HN: Declarative open-source framework for MCPs with search and execute
Hi HN,
I’m Samrith, creator of Hyperterse.
Today I’m launching Hyperterse 2.0, a schema-first framework for building MCP servers directly on top of your existing production databases.
If you're building AI agents in production, you’ve probably run into this: agents need access to structured, reliable data, but wiring your business logic to MCP tools is tedious. Most teams end up writing fragile glue code. Or worse, giving agents unsafe, overbroad access.
There isn’t a clean, principled way to expose just the right data surface to agents.
Hyperterse lets you define a schema over your data and automatically exposes secure, typed MCP tools for AI agents.
Think of it as: Your business data → controlled, agent-ready interface.
Key properties include a schema-first access layer, typed MCP tool generation, support for existing Postgres, MySQL, MongoDB, and Redis databases, fine-grained exposure of queries, and a design built for production agent workloads.
v2.0 focuses heavily on MCP, with first-class MCP server support, cleaner schema ergonomics, better type safety, and faster tool surfaces.
All of this, with only two tools - search & execute - reducing token usage drastically.
Hyperterse is useful if you are building AI agents/copilots, adding LLM features to existing SaaS, trying to safely expose internal data to agents or are just tired of bespoke MCP glue layers.
I’d love feedback, especially from folks running agents in production.
GitHub: https://github.com/hyperterse/hyperterse
Show HN: Steerling-8B, a language model that can explain any token it generates
Show HN: AI Timeline – 171 LLMs from Transformer (2017) to GPT-5.3 (2026)
Interactive timeline of every major Large Language Model. Filterable by open/closed source, searchable, 54 organizations tracked.
Show HN: A Visual Editor for Karabiner
The Karabiner Config Editor is a user-friendly graphical interface tool that allows users to easily create and manage custom configurations for the Karabiner-Elements keyboard customization utility on macOS.
Show HN: Cellarium: A Playground for Cellular Automata
Hey HN, just wanted to share a fun, vibe-coded Friday night experiment: a little playground for writing cellular automata in a subset of Rust, which is then compiled into WGSL.
Since it lets you dynamically change parameters while the simulation is running via a TUI, it's easy to discover weird behaviors without remembering how you got there. If you press "s", it will save the complete history to a JSON file (a timeline of the parameters that were changed at given ticks), so you can replay it and regenerate the discovery.
You can pan/zoom, and while the main simulation window is in focus, the arrow keys can be used to update parameters (which are shown in the TUI).
Claude deserves all the credit and criticism for any technical elements of this project (beyond rough guidelines). I've just always wanted something like this, and it's a lot of fun to play with. Who needs video games these days.
Show HN: StreamHouse – S3-native Kafka alternative written in Rust
Hey HN,
I built StreamHouse, an open-source streaming platform that replaces Kafka's broker-managed storage with direct S3 writes. The goal: same semantics, fraction of the cost.
How it works: Producers batch and compress records, a stateless server manages partition routing and metadata (SQLite for dev, PostgreSQL for prod), and segments land directly in S3. Consumers read from S3 with a local segment cache. No broker disks to manage, no replication factor to tune — S3 gives you 11 nines of durability out of the box.
What's there today:
- Producer API with batching, LZ4 compression, and offset tracking (62K records/sec)
- Consumer API with consumer groups, auto-commit, and multi-partition fanout (30K+ records/sec)
- Kafka-compatible protocol (works with existing Kafka clients)
- REST API, gRPC API, CLI, and a web UI
- Docker Compose setup for trying it locally in 5 minutes
The cost model is what motivated this. Kafka's storage costs scale with replication factor × retention × volume. With S3 at $0.023/GB/month, storing a TB of events costs ~$23/month instead of hundreds on broker EBS volumes.
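The arithmetic behind that comparison, with the EBS price and replication factor as assumptions on my part:

```python
def monthly_storage_cost(gb, price_per_gb_month, replication=1):
    """Monthly storage bill: replicas multiply the bytes you pay for."""
    return gb * price_per_gb_month * replication

# 1 TB of retained events: S3 standard vs. triple-replicated EBS
# ($0.08/GB-month and RF=3 are assumed, typical gp3-style numbers)
s3 = monthly_storage_cost(1000, 0.023)
ebs = monthly_storage_cost(1000, 0.08, replication=3)
print(round(s3), round(ebs))  # roughly $23 vs. $240 per month
```

The gap widens with retention, since S3 has no replication factor to multiply by.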
Written in Rust, ~50K lines across 15 crates. Apache 2.0 licensed.
GitHub: https://github.com/gbram1/streamhouse
Happy to answer questions about the architecture, tradeoffs, or what I learned building this.
Show HN: ProdRescue AI – Turn Slack war-rooms and raw logs into incident reports
Hi HN,
Most of us have been there: It’s 3 AM, there’s an outage, and the #incident channel is exploding with 200+ messages. Once the fix is deployed, the real pain begins—spending 4 hours reconstructing the timeline for the post-mortem.
I built ProdRescue AI to automate this. It’s an incident intelligence engine that correlates technical logs with human context from Slack.
How it works:
Native Slack Integration: Connect via OAuth 2.0. We only access channels you explicitly invite the bot to.
Contextual Correlation: It maps Slack timestamps to log events, identifying not just what failed, but who made which decision and why.
4-Layer Intelligence: We use a pipeline to Sanitize (mask PII), Correlate (logs + chat), Infer (RCA), and Verify (link every claim to a source log line).
Security: We use ephemeral processing. No log retention, no training on your data.
I’m really interested in your thoughts on the "Evidence-Backed" approach. Instead of just generating a narrative, we link every finding to a specific evidence tag ([1], [2], etc.), so any claim the model makes can be checked against its source log line rather than taken on faith.
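The correlate-and-tag steps above could look roughly like this. The field names and the 30-second window are assumptions for the sketch, not ProdRescue's actual schema:

```python
from datetime import datetime, timedelta

def correlate(slack_msgs, log_events, window=timedelta(seconds=30)):
    """Pair Slack messages with log events that occur within a time window,
    tagging each pairing as citable evidence ([1], [2], ...)."""
    findings = []
    for msg in slack_msgs:
        for i, event in enumerate(log_events, start=1):
            if abs(msg["ts"] - event["ts"]) <= window:
                findings.append({
                    "claim": f'{msg["user"]}: {msg["text"]}',
                    "evidence": f"[{i}] {event['line']}",  # claim -> source log line
                })
    return findings

t = datetime(2024, 1, 1, 3, 0, 0)
slack = [{"ts": t + timedelta(seconds=5), "user": "alice", "text": "rolling back"}]
logs  = [{"ts": t, "line": "ERROR: deploy 4121 failed health check"}]
print(correlate(slack, logs))
```

A real implementation would need fuzzier matching than a fixed window, but the invariant is the same: no finding without a tagged source.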
Check it out here: https://prodrescueai.com
Would love to hear your feedback on the Slack-to-Timeline flow!
Show HN: Ghist – Task management that lives in your repo
Show HN: CIA World Factbook Archive (1990–2025), searchable and exportable
A structured archive of CIA World Factbook data spanning 1990–2025. It currently includes:
- 36 editions
- 281 entities
- ~1.06M parsed fields
- full-text + boolean search
- country/year comparisons
- map/trend/ranking analysis views
- CSV/XLSX/PDF export
The goal is to preserve long-horizon public-domain government data and make cross-year analysis practical.
Live: https://cia-factbook-archive.fly.dev
About/method details: https://cia-factbook-archive.fly.dev/about
Data source is the CIA World Factbook (public domain). Not affiliated with the CIA or U.S. Government.
Show HN: Mnemosyne – Cognitive memory OS for AI agents (zero LLM calls)
Show HN: Open-source KYC plugin for Claude – 95min→27min, £85K→£240/year
Hi HN,
Just launched an open-source compliance plugin for Claude Cowork after seeing fintech teams pay £60K+ for platforms that orchestrate free public data.
UK fintech pilot (30 days, 5 analysts):
• 95 minutes → 27 minutes per case
• £85K annual platform cost → £240/year (Claude Pro)
• Uses only free data: OFAC, UN, EU, Companies House, OpenSanctions
17 mandatory human-in-the-loop checkpoints. No auto-approvals. Deterministic risk scoring (MLR 2017 formulas). MIT licensed.
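"Deterministic" here means the same inputs always produce the same score: a fixed formula, no model in the loop. An illustrative sketch in that spirit (the factors, weights, and thresholds below are invented for the example and are NOT the plugin's actual MLR 2017 formulas):

```python
# Invented weights for illustration only.
RISK_WEIGHTS = {
    "pep_match": 40,        # politically exposed person hit
    "sanctions_match": 50,  # OFAC/UN/EU list hit
    "high_risk_country": 20,
    "opaque_ownership": 15,
}

def risk_score(flags):
    """Same flags in, same score out: no model, no randomness."""
    return sum(RISK_WEIGHTS[f] for f in flags if f in RISK_WEIGHTS)

def risk_band(score):
    if score >= 50: return "high"    # escalate to a human analyst
    if score >= 20: return "medium"
    return "low"

print(risk_band(risk_score({"pep_match", "high_risk_country"})))  # high
```

The appeal in a regulated setting is auditability: every score can be recomputed by hand, and every "high" band routes to one of the human checkpoints rather than an auto-approval.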
Launching today because Claude just announced Cowork plugin updates: https://www.linkedin.com/posts/claude-ai_were-introducing-up...
Testing if foundation models can replace compliance middleware for standard workflows (~70% of cases).
Demo slides: https://github.com/vyayasan/kyc-analyst/blob/main/docs/demo-... GitHub: https://github.com/vyayasan/kyc-analyst
Happy to answer questions about LLMs in regulated environments.
Show HN: Praxis, my personal take on Compound Engineering with AI
Hey HN! I really enjoy Every's approach to Compound Engineering (https://every.to/guides/compound-engineering), but their plugin is tightly tied to their project (Cora) and stack (Ruby/Rails). I also found the files too big; they used more of the context window than I'd like for my personal use.
So, with the help of Amp Code CLI, I built my own take on the compound engineering workflow. I kept it agnostic to project stack and as lean as possible, so the context window is spent where it matters. I wanted it to be extendable (for example, you can drop in your own project-specific review subagents) and easy to set up and update, so I made a simple CLI tool that tracks the files in the `.agents` directory, pulls in new versions when they land in the repository, and shows a diff in the terminal before overwriting any customisations.
I feel this matches well with my personal preferences when working with AI agents, but I would love to have feedback from more people.