Show stories

simedw about 1 hour ago

Show HN: I trained a 9M speech model to fix my Mandarin tones

Built this because tones are killing my spoken Mandarin and I can't reliably hear my own mistakes.

It's a 9M-parameter Conformer-CTC model trained on ~300h (AISHELL + Primewords), quantized to INT8 (11 MB), and it runs 100% in-browser via ONNX Runtime Web.

Grades per-syllable pronunciation + tones with Viterbi forced alignment.
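
To make the in-browser part concrete, here is a minimal sketch of what client-side inference with ONNX Runtime Web generally looks like. The model path, input name, and tensor shape are illustrative placeholders, not this model's actual interface:

  // Minimal sketch of in-browser ONNX inference with onnxruntime-web.
  // The model path, input/output names, and feature shape are placeholders.
  import * as ort from "onnxruntime-web";

  async function scoreUtterance(features: Float32Array, frames: number, dims: number) {
    // Load the quantized model once; the WASM backend runs entirely client-side.
    const session = await ort.InferenceSession.create("/models/tones.int8.onnx", {
      executionProviders: ["wasm"],
    });

    // Shape [batch, time, feature_dim] is an assumption for illustration.
    const input = new ort.Tensor("float32", features, [1, frames, dims]);
    const outputs = await session.run({ input });

    // A CTC model emits per-frame log-probabilities over tokens; forced
    // alignment (e.g. Viterbi) then maps frames to syllables for grading.
    return outputs;
  }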

Try it here: https://simedw.com/projects/ear/

simedw.com
37 5
omarisbuilding about 4 hours ago

Show HN: I built an AI conversation partner to practice speaking languages

Hi,

I built TalkBits because most language apps focus on vocabulary or exercises, but not actual conversation. The hard part of learning a language is speaking naturally under pressure.

TalkBits lets you have real-time spoken conversations with an AI that acts like a native speaker. You can choose different scenarios (travel, daily life, work, etc.), speak naturally, and the AI responds with natural speech back.

The goal is to make it feel like talking to a real person rather than doing lessons.

Tech-wise, it uses real-time speech input, transcription, LLM responses, and TTS streaming to keep latency low so the conversation feels fluid.
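
For a rough sense of the browser half of such a pipeline (a generic sketch, not TalkBits' actual implementation), streaming microphone audio to a transcription backend could look like this; the WebSocket endpoint, codec, and chunk interval are made up:

  // Generic sketch of streaming mic audio from the browser; the endpoint
  // and message format are hypothetical, not TalkBits' API.
  async function streamMicAudio() {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const recorder = new MediaRecorder(stream, { mimeType: "audio/webm;codecs=opus" });
    const socket = new WebSocket("wss://example.invalid/speech"); // placeholder endpoint

    recorder.ondataavailable = (event) => {
      // Ship small chunks as they arrive so transcription can start before
      // the user stops speaking, which keeps perceived latency low.
      if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
        socket.send(event.data);
      }
    };

    recorder.start(250); // emit a chunk roughly every 250 ms
    return () => {
      recorder.stop();
      stream.getTracks().forEach((t) => t.stop());
    };
  }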

I’m especially interested in feedback about:

- Does it feel natural?

- Where does the conversation break immersion?

- What would make you use this regularly?

Happy to answer technical questions too.

Thanks

apps.apple.com
43 32
snapmotion 42 minutes ago

Show HN: OpenVideo – A self-hostable, open-source video editor in the browser

Open-source, browser-based video editor inspired by CapCut. Timeline editing, runs on modern web APIs, and can be self-hosted. Looking for feedback from devs and video folks.

github.com
2 1
getfoundry about 2 hours ago

Show HN: Foundry – Turns your repeated workflows into one-click commands

github.com
3 0
souvik1997 about 12 hours ago

Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents

WASM sandbox for running LLM-generated code safely.

Agents get a bash-like shell and can only call tools you provide, with constraints you define. No Docker, no subprocess, no SaaS — just `pip install amla-sandbox`.

github.com
124 71
EastLondonCoder 1 day ago

Show HN: Kolibri, a DIY music club in Sweden

We’re Maria and Jonatan, a married couple running a small music night in Norrköping, Sweden, called Kolibri.

It’s not a software project. We run it through our own small Swedish company, pay artists, and do the operations ourselves. We do one night a month (usually the last Friday) in a restaurant venue called Mitropa. A typical night is about 50–70 paying guests. The first years it was DJs only, but last year we started doing live bands as well.

We made a simple site with schedule plus photos/video so you can see what it looks like: https://kolibrinkpg.com/

On the site:

  * photos and short videos (size/atmosphere)

  * the kind of acts we book (post-punk, darkwave, synth, adjacent electronic)

  * enough context to copy parts of the format if you’re building something similar locally

  * for the tech-curious: we built our own ticketing system (first used in February) and a media ingestion pipeline for Instagram and external photographers

How it started was accidental. I was doing remote music sessions with a friend in London (Ableton projects back and forth on FaceTime), ran out of beer, and walked into the nearest place. I got talking to Nahir, who runs Mitropa, and floated the idea of running a DIY music night there. He was up for it.

What made it take off was doing things in person. People will show up alone if they trust the room. Maria ended up doing a lot of that work: greeting newcomers, noticing who looks uncertain, and setting a tone where people treat each other decently.

Maria didn’t come from a DJ background. Klubbvärdinnan started as a joke name at Kolibri and then became her DJ moniker. She got good quickly, and after a first gig outside our own night she started getting booked elsewhere too.

Marketing-wise, what worked best was very analogue: walking around town, visiting local businesses we genuinely like, buying something, introducing ourselves, and asking if we could leave a flyer.

In the beginning we weren’t sure how to present it on social media. So we filmed headphone walks: one person walking through town listening to a track we picked. It looked good, people wanted to be in them, and afterwards we’d buy them a couple of drinks and actually talk. That turned a social media interaction into a real connection. It was a bit of luck, but it worked.

Questions welcome about what worked, what failed, costs/logistics, and what we’d do differently if we started over.

kolibrinkpg.com
126 23
briancr about 14 hours ago

Show HN: Cicada – A scripting language that integrates with C

I wrote a lightweight scripting language that runs together with C. Specifically, it's a C library, you run it through a C function call, and it can callback your own C functions. Compiles to ~250 kB. No dependencies beyond the C standard library.

Key language features:

  * Uses aliases not pointers, so it's memory-safe

  * Arrays are N-dimensional and resizable

  * Runs scripts or its own 'shell'

  * Error trapping

  * Methods, inheritance, etc.

  * Customizable syntax

github.com
51 29
abraham about 4 hours ago

Show HN: Daily Cat

Seeing HTTP Cats on the home page reminded me to share a small project I made a couple of months ago. It displays a different cat photo from Unsplash every day and will send you notifications if you opt in.

daily.cat
3 0
echelon about 5 hours ago

Show HN: Using World Models for Consistent AI Filmmaking

The article discusses the growing demand for world models, which are computer-generated environments used in filmmaking, and how they are becoming increasingly sophisticated and realistic. It explores the technical and creative challenges involved in creating these virtual worlds, which are essential for modern visual effects and storytelling in the film industry.

getartcraft.com
2 0
Flux159 3 days ago

Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser)

Hi HN, I've been building Mystral Native — a lightweight native runtime that lets you write games in JavaScript/TypeScript using standard Web APIs (WebGPU, Canvas 2D, Web Audio, fetch) and run them as standalone desktop apps. Think "Electron for games" but without Chromium. Or a JS runtime like Node, Deno, or Bun but optimized for WebGPU (and bundling a window / event system using SDL3).

Why: I originally started building a new game engine in WebGPU, and I loved the iteration loop of writing TypeScript & instantly seeing the changes in the browser with hot reloading. After getting something working and shipping a demo, I realized that shipping a whole browser doesn't really work if I also want the same codebase to work on mobile. Sure, I could use a webview, but that's not always a good or consistent experience for users - Safari on iOS supports WebGPU, but not the same features that Chrome does on desktop. What I really wanted was a WebGPU runtime that is consistent & works on any platform. I was inspired by Deno's --unsafe-webgpu flag, but I realized that Deno probably wouldn't be a good fit long term because it doesn't support iOS or Android & doesn't bundle a window / event system (they have "bring your own window", but that means writing a lot of custom code for events, dealing with windowing, not to mention more specific things like implementing a WebAudio shim, etc.). So that got me down the path of building a native runtime specifically for games & that's Mystral Native.

So now with Mystral Native, I can have the same developer experience (write JS, use shaders in WGSL, call requestAnimationFrame) but get a real native binary I can ship to players on any platform without requiring a webview or a browser. No 200MB Chromium runtime, no CEF overhead, just the game code and a ~25MB runtime.
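
To make the "write JS, call requestAnimationFrame" point concrete, here is a minimal clear-screen loop written against only the standard WebGPU API (nothing Mystral-specific); any runtime that exposes navigator.gpu and a canvas should run something like it:

  // Minimal WebGPU clear-screen loop using only standard Web APIs.
  // Assumes a <canvas id="game"> and navigator.gpu (types via @webgpu/types).
  async function main() {
    const canvas = document.getElementById("game") as HTMLCanvasElement;
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) throw new Error("WebGPU not available");
    const device = await adapter.requestDevice();

    const context = canvas.getContext("webgpu") as GPUCanvasContext;
    context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });

    function frame() {
      const encoder = device.createCommandEncoder();
      const pass = encoder.beginRenderPass({
        colorAttachments: [{
          view: context.getCurrentTexture().createView(),
          clearValue: { r: 0.1, g: 0.1, b: 0.2, a: 1 }, // a real game would draw here
          loadOp: "clear",
          storeOp: "store",
        }],
      });
      pass.end();
      device.queue.submit([encoder.finish()]);
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);
  }

  main();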

What it does:

- Full WebGPU via Dawn (Chrome's implementation) or wgpu-native (Rust)

- Native window & events via SDL3

- Canvas 2D support (Skia), Web Audio (SDL3), fetch (file/http/https)

- V8 for JS (same engine as Chrome/Node), also supports QuickJS and JSC

- ES modules, TypeScript via SWC

- Compile to single binary (think "pkg"): `mystral compile game.js --include assets -o my-game`

- macOS .app bundles with code signing, Linux/Windows standalone executables

- Embedding API for iOS and Android (JSC/QuickJS + wgpu-native)

It's early alpha — the core rendering path works well & I've tested on Mac, Linux (Ubuntu 24.04), and Windows 11, and some custom builds for iOS & Android to validate that they can work, but there's plenty to improve. Would love to get some feedback and see where it can go!

MIT licensed.

Repo: https://github.com/mystralengine/mystralnative

Docs: https://mystralengine.github.io/mystralnative/

github.com
44 16
eshaangulati 4 days ago

Show HN: Ourguide – OS wide task guidance system that shows you where to click

Hey! I'm Eshaan and I'm building Ourguide, an on-screen task guidance system that can show you where to click, step by step, when you need help.

I started building this because whenever I didn’t know how to do something on my computer, I found myself constantly tabbing between chatbots and the app, pasting screenshots, and asking “what do I do next?” Ourguide solves this with two modes. In Guide mode, the app overlays your screen and highlights the specific element to click next, eliminating the need to leave your current window. There is also Ask mode, a vision-integrated chat that captures your screen context (which you can toggle on and off anytime), so you can ask, "How do I fix this error?" without having to explain what "this" is.

It’s an Electron app that works OS-wide, is vision-based, and isn't restricted to the browser.

Figuring out how to show the user where to click was the hardest part of the process. I originally trained a computer vision model with 2300 screenshots to identify and segment all UI elements on a screen and used a VLM to find the correct icon to highlight. While this worked extremely well—better than SOTA grounding models like UI Tars—the latency was just too high. I'll be making that CV+VLM pipeline OSS soon, but for now, I’ve resorted to a simpler implementation that achieves <1s latency.

You may ask: if I can show you where to click, why can't I just click too? While trying to build computer-use agents during my job in Palo Alto, I hit the core limitation of today’s computer-use models where benchmarks hover in the mid-50% range (OSWorld). VLMs often know what to do but not what it looks like; without reliable visual grounding, agents misclick and stall. So, I built computer use—without the "use." It provides the visual grounding of an agent but keeps the human in the loop for the actual execution to prevent misclicks.

I personally use it for the AWS Console's "treasure hunt" UI, like creating a public S3 bucket with specific CORS rules. It’s also been surprisingly helpful for non-technical tasks, like navigating obscure settings in Gradescope or Spotify. Ourguide really works for any task when you’re stuck or don't know what to do.

You can download and test Ourguide here: https://ourguide.ai/downloads

The project is still very early, and I’d love your feedback on where it fails, where you think it worked well, and which specific niches you think Ourguide would be most helpful for.

ourguide.ai
51 22
tullie 4 days ago

Show HN: ShapedQL – A SQL engine for multi-stage ranking and RAG

Hi HN,

I’m Tullie, founder of Shaped. Previously, I was a researcher at Meta AI, worked on ranking for Instagram Reels, and was a contributor to PyTorch Lightning.

We built ShapedQL because we noticed that while retrieval (finding 1,000 items) has been commoditized by vector DBs, ranking (finding the best 10 items) is still an infrastructure problem.

To build a decent "for you" feed or a RAG system with long-term memory, you usually have to put together a vector DB (Pinecone/Milvus), a feature store (Redis), an inference service, and thousands of lines of Python to handle business logic and reranking.

We built an engine that consolidates this into a single SQL dialect. It compiles declarative queries into high-performance, multi-stage ranking pipelines.

HOW IT WORKS:

Instead of just SELECT, ShapedQL operates in four stages native to recommendation systems:

RETRIEVE: Fetch candidates via Hybrid Search (Keywords + Vectors) or Collaborative Filtering.

FILTER: Apply hard constraints (e.g., "inventory > 0").

SCORE: Rank results using real-time models (e.g., p(click) or p(relevance)).

REORDER: Apply diversity logic so your Agent/User doesn’t see 10 nearly identical results.

THE SYNTAX: Here is what a RAG query looks like. This replaces about 500 lines of standard Python/LangChain code:

  SELECT item_id, description, price
  FROM
    -- Retrieval: Hybrid search across multiple indexes
    search_flights("$param.user_prompt", "$param.context"),
    search_hotels("$param.user_prompt", "$param.context")
  WHERE
    -- Filtering: Hard business constraints
    price <= "$param.budget" AND is_available("$param.dates")
  ORDER BY
    -- Scoring: Real-time reranking (Personalization + Relevance)
    0.5 * preference_score(user, item) +
    0.3 * relevance_score(item, "$param.user_prompt")
  LIMIT 20

If you don’t like SQL, you can also use our Python and TypeScript SDKs. I’d love to know what you think of the syntax and the abstraction layer!

playground.shaped.ai
78 23
callmeed 2 days ago

Show HN: I'm building an AI-proof writing tool. How would you defeat it?

auth-auth.vercel.app
21 30
dhravya about 9 hours ago

Show HN: We added memory to Claude Code. It's powerful now

The article discusses the addition of Supermemory, a powerful memory model, to the Claude AI system, which has significantly enhanced its abilities. It highlights the impressive performance improvements and expanded capabilities that this integration has brought to the Claude AI.

supermemory.ai
4 0
prasoonds about 9 hours ago

Show HN: Stripe-no-webhooks – Sync your Stripe data to your Postgres DB

Hey HN,

stripe-no-webhooks is an open-source library that syncs your Stripe payments data to your own Postgres database: https://github.com/pretzelai/stripe-no-webhooks

Here's a demo video: https://youtu.be/cyEgW7wElcs

It creates a webhook endpoint in your Stripe account and forwards webhooks to your backend, where a webhook listener stores all the data in a new stripe.* schema. You define your plans in TypeScript, run a sync command, and the library takes care of creating Stripe products and prices, handling webhooks, and keeping your database in sync. We also let you backfill your Stripe data for existing accounts.

It supports pre-paid usage credits, account wallets and usage-based billing. It also lets you generate a pricing table component that you can customize. You can access the user information using the simple API the library provides:

  billing.subscriptions.get({ userId });
  billing.credits.consume({ userId, key: "api_calls", amount: 1 });
  billing.usage.record({ userId, key: "ai_model_tokens_input", amount: 4726 });

Effectively, you don't have to deal with either the Stripe dashboard or the Stripe API/SDK any more if you don't want to. The library gives you a nice abstraction on top of Stripe that should cover most subscription payment use-cases.

Let's see how it works with a quick example. Say you have a billing plan like Cursor (the IDE) used to have: $20/mo, you get 500 API completions + 2000 tab completions, you can buy additional API credits, and any additional usage is billed as overage.

You define your plan in TypeScript:

  {
    name: "Pro",
    description: "Cursor Pro plan",
    price: [{ amount: 2000, currency: "usd", interval: "month" }],
    features: {
      api_completion: {
        pricePerCredit: 1,              // 1 cent per unit
        trackUsage: true,               // Enable usage billing
        credits: { allocation: 500 },
        displayName: "API Completions",
      },
      tab_completion: {
        credits: { allocation: 2000 },
        displayName: "Tab Completions",
      },
    },
  }

Then on the CLI, you run the `init` command, which creates the DB tables + some API handlers. Run `sync` to sync the plans to your Stripe account and create a webhook endpoint. When a subscription is created, the library automatically grants the 500 API completion credits and 2000 tab completion credits to the user. Renewals and up/downgrades are handled sanely.

Consume code would look like this:

  await billing.credits.consume({
    userId: user.id,
    key: "api_completion",
    amount: 1,
  });

And if you want to allow manual top-ups by the user:

  await billing.credits.topUp({
    userId: user.id,
    key: "api_completion",
    amount: 500,     // buy 500 credits, charges $5.00
  });

Similarly, we have APIs for wallets and usage.

This would be a lot of work to implement by yourself on top of Stripe. You need to keep track of all of these entitlements in your own DB and deal with renewals, expiry, ad-hoc grants, etc. It's definitely doable, especially with AI coding, but you'll probably end up building something fragile and hard to maintain.

This is just a high-level overview of what the library is capable of. It also supports seat-level credits, monetary wallets (with micro-cent precision), auto top-ups, robust failure recovery, tax collection, invoices, and an out-of-the-box pricing table.

I vibe-coded a little toy app for testing: https://snw-test.vercel.app

There's no validation so feel free to sign up with a dummy email, then subscribe to a plan with a test card: 4242 4242 4242 4242, any future expiry, any 3-digit CVV.

Screenshot: https://imgur.com/a/demo-screenshot-Rh6Ucqx

Feel free to try it out! If you end up using this library, please report any bugs on the repo. If you're having trouble / want to chat, I'm happy to help - my contact is in my HN profile.

github.com
34 4
pigless72 about 9 hours ago

Show HN: Xmrcheckout – self-hosted, non-custodial Monero checkout

Hi HN — I built xmrcheckout, an open-source, self-hostable checkout UI + API for accepting Monero (XMR) payments where funds go directly from the customer to the merchant’s own wallet.

What it does:

- Creates invoices (defined in XMR), hosts a public invoice page, and shows payment instructions (address + amount + QR).

- Observes the chain to detect incoming payments and update invoice status.

- Exposes an API + optional webhooks so you can plug it into an existing order flow.

Trust model:

- No private spend keys (it never requests or stores them).

- No transaction signing, no fund-moving automation.

- View-only wallet access (wallet address + private view key).

Stack + deploy:

- UI: Next.js

- API: Python

- Postgres

- Uses monerod + monero-wallet-rpc for on-chain observation

- Optional nginx/TLS via docker compose

Repo + screenshots: https://github.com/xmrcheckout/xmrcheckout

xmrcheckout.com
3 1
chrismoos about 10 hours ago

Show HN: Apple II(e) emulator in Rust for native and web

Hey all,

Spent some time over my winter break and built this Apple II emulator. I had previously done a C64 one but somehow stumbled on the Apple II and decided it would be a fun project.

Spent a good amount of the time on the Disk II implementation -- there is a lot to it, as the software had pretty direct control over what controller firmware would normally do. Dealing with copy-protection schemes and all the timing around them was a bit challenging.

There is a WASM version you can try on the web, please check it out!

https://emu.chrismoos.com/

github.com
3 0
cigol about 10 hours ago

Show HN: HN Zeitgeist – what 40M HN comments reveal about 20 years of tech

Hi HN! I analyzed 40M comments from 2006-2026 and organized them into ~10k topics using AI.

You can explore:

- Rising/falling trends over time

- Individual topics with sample comments + links to original threads

- Deep-dive reports (Bitcoin, Nvidia, self-driving, etc.)

Built in about a week. Feedback welcome!

hn.mrzepa.com
3 0
luthiraabeykoon about 10 hours ago

Show HN: Julie Zero – my screen-aware desktop AI that works out of the box

I’ve posted here before about Julie, an open-source desktop AI assistant I’ve been building in public. The OSS version is local-first and powerful, but it assumes you’re comfortable bringing your own models or API keys.

A lot of people told me the same thing: “I just want to install it and have it work.”

So I built Julie Zero.

Julie Zero is the premium tier that works straight out of the box. No API keys, no setup. Install it and start using it immediately.

What Julie Zero does:

- Sees your screen and understands what you’re looking at in real time

- Helps across apps by clicking, typing, navigating, and automating workflows

- Uses on-screen context, not just text prompts, so responses are actually relevant

- Fast and low-latency, so it feels usable during real work

- Built with a local-first mindset and tuned for everyday workflows

The open-source version is still there and always will be. Julie Zero is just about removing friction for people who don’t want to configure anything.

Pricing: Julie Zero is $9.99/month, which is a fraction of the price of similar screen-aware tools like Cluely’s premium tier.

Giveaway: I’m giving 3 months of unlimited Julie Zero access to 10 people.

To get it:

Star the open-source Julie repo on my GitHub: https://github.com/Luthiraa/julie (make sure a social account is connected to your GitHub so I can reach out!)

I’ll send you a one-time code for 3-month premium access

I’m building this very hands-on and iterating fast. Would love feedback from people using it in real workflows. Happy to answer questions.

github.com
4 0
stefanostraus about 11 hours ago

Show HN: Claude Commander: runtime model switching in Claude Code via hooks/API

Hi HN, I built Claude Commander, a small wrapper around Claude Code that lets you issue commands programmatically from inside Claude Code (via hooks or scripts).

Main feature: switch model at runtime.

Why: start with an expensive model for planning or hard debugging, then downgrade to a cheaper one for execution to cut cost.

github.com
2 0
yuppiepuppie 3 days ago

Show HN: The HN Arcade

I love seeing all the small games that people build and post to this site.

I don't want to forget any, so I have built a directory/arcade for the games here that I maintain.

Feel free to check it out, add your game if it's missing, and let me know what you think. Thanks!

andrewgy8.github.io
346 114
lcolucci 3 days ago

Show HN: LemonSlice – Upgrade your voice agents to real-time video

Hey HN, we're the co-founders of LemonSlice (try our HN playground here: https://lemonslice.com/hn). We train interactive avatar video models. Our API lets you upload a photo and immediately jump into a FaceTime-style call with that character. Here's a demo: https://www.loom.com/share/941577113141418e80d2834c83a5a0a9

Chatbots are everywhere and voice AI has taken off, but we believe video avatars will be the most common form factor for conversational AI. Most people would rather watch something than read it. The problem is that generating video in real-time is hard, and overcoming the uncanny valley is even harder.

We haven’t broken the uncanny valley yet. Nobody has. But we’re getting close and our photorealistic avatars are currently best-in-class (judge for yourself: https://lemonslice.com/try/taylor). Plus, we're the only avatar model that can do animals and heavily stylized cartoons. Try it: https://lemonslice.com/try/alien. Warning! Talking to this little guy may improve your mood.

Today we're releasing our new model* - Lemon Slice 2, a 20B-parameter diffusion transformer that generates infinite-length video at 20fps on a single GPU - and opening up our API.

How did we get a video diffusion model to run in real-time? There was no single trick, just a lot of them stacked together. The first big change was making our model causal. Standard video diffusion models are bidirectional (they look at frames both before and after the current one), which means you can't stream.

From there it was about fitting everything on one GPU. We switched from full to sliding window attention, which killed our memory bottleneck. We distilled from 40 denoising steps down to just a few - quality degraded less than we feared, especially after using GAN-based distillation (though tuning that adversarial loss to avoid mode collapse was its own adventure).

And the rest was inference work: modifying RoPE from complex to real (this one was cool!), precision tuning, fusing kernels, a special rolling KV cache, lots of other caching, and more. We kept shaving off milliseconds wherever we could and eventually got to real-time.
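
As an aside on the rolling KV cache: the idea is to keep only the most recent steps' keys and values so memory stays bounded during infinite-length generation. A toy ring-buffer version (an illustration of the general idea, not the actual implementation) looks roughly like this:

  // Toy ring-buffer KV cache for sliding-window attention: only the most
  // recent `windowSize` steps are kept, so memory stays constant however
  // long the video runs. Illustration of the general idea only.
  class RollingKVCache {
    private keys: Float32Array[] = [];
    private values: Float32Array[] = [];

    constructor(private windowSize: number) {}

    push(k: Float32Array, v: Float32Array): void {
      this.keys.push(k);
      this.values.push(v);
      if (this.keys.length > this.windowSize) {
        // Evict the oldest step; attention never looks past the window anyway.
        this.keys.shift();
        this.values.shift();
      }
    }

    // Everything currently in the window, oldest first.
    window(): { keys: Float32Array[]; values: Float32Array[] } {
      return { keys: this.keys, values: this.values };
    }
  }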

We set up a guest playground for HN so you can create and talk to characters without logging in: https://lemonslice.com/hn. For those who want to build with our API (we have a new LiveKit integration that we’re pumped about!), grab a coupon code in the HN playground for your first Pro month free ($100 value). See the docs: https://lemonslice.com/docs. Pricing is usage-based at $0.12-0.20/min for video generation.

Looking forward to your feedback!

EDIT: Tell us what characters you want to see in the comments and we can make them for you to talk to (e.g. Max Headroom)

*We did a Show HN last year for our V1 model: https://news.ycombinator.com/item?id=43785044. It was technically impressive but so bad compared to what we have today.

127 130
rafa_rrayes 3 days ago

Show HN: SHDL – A minimal hardware description language built from logic gates

Hi, everyone!

I built SHDL (Simple Hardware Description Language) as an experiment in stripping hardware description down to its absolute fundamentals.

In SHDL, there are no arithmetic operators, no implicit bit widths, and no high-level constructs. You build everything explicitly from logic gates and wires, and then compose larger components hierarchically. The goal is not synthesis or performance, but understanding: what digital systems actually look like when abstractions are removed.

SHDL is accompanied by PySHDL, a Python interface that lets you load circuits, poke inputs, step the simulation, and observe outputs. Under the hood, SHDL compiles circuits to C for fast execution, but the language itself remains intentionally small and transparent.

This is not meant to replace Verilog or VHDL. It’s aimed at:

- learning digital logic from first principles

- experimenting with HDL and language design

- teaching or visualizing how complex hardware emerges from simple gates.

I would especially appreciate feedback on:

- the language design choices

- what feels unnecessarily restrictive vs. educationally valuable

- whether this kind of “anti-abstraction” HDL is useful to you.

Repo: https://github.com/rafa-rrayes/SHDL

Python package: PySHDL on PyPI

To make this concrete, here are a few small working examples written in SHDL:

1. Full Adder

component FullAdder(A, B, Cin) -> (Sum, Cout) {

    x1: XOR; a1: AND;
    x2: XOR; a2: AND;
    o1: OR;

    connect {
        A -> x1.A; B -> x1.B;
        A -> a1.A; B -> a1.B;

        x1.O -> x2.A; Cin -> x2.B;
        x1.O -> a2.A; Cin -> a2.B;
        a1.O -> o1.A; a2.O -> o1.B;

        x2.O -> Sum; o1.O -> Cout;
    }
}

2. 16 bit register

# clk must be high for two cycles to store a value

component Register16(In[16], clk) -> (Out[16]) {

    >i[16]{
        a1{i}: AND;
        a2{i}: AND;
        not1{i}: NOT;
        nor1{i}: NOR;
        nor2{i}: NOR;
    }
    
    connect {
        >i[16]{
            # Capture on clk
            In[{i}] -> a1{i}.A;
            In[{i}] -> not1{i}.A;
            not1{i}.O -> a2{i}.A;
            
            clk -> a1{i}.B;
            clk -> a2{i}.B;
            
            a1{i}.O -> nor1{i}.A;
            a2{i}.O -> nor2{i}.A;
            nor1{i}.O -> nor2{i}.B;
            nor2{i}.O -> nor1{i}.B;
            nor2{i}.O -> Out[{i}];
        }
    }
}

3. 16-bit Ripple-Carry Adder

use fullAdder::{FullAdder};

component Adder16(A[16], B[16], Cin) -> (Sum[16], Cout) {

    >i[16]{ fa{i}: FullAdder; }

    connect {
        A[1] -> fa1.A;
        B[1] -> fa1.B;
        Cin -> fa1.Cin;
        fa1.Sum -> Sum[1];

        >i[2,16]{
            A[{i}] -> fa{i}.A;
            B[{i}] -> fa{i}.B;
            fa{i-1}.Cout -> fa{i}.Cin;
            fa{i}.Sum -> Sum[{i}];
        }

        fa16.Cout -> Cout;
    }
}

github.com
46 21
ogandreakiro 4 days ago

Show HN: Build Web Automations via Demonstration

Hey HN,

We’ve been building browser agents for a while. In production, we kept converging on the same pattern: deterministic scripts for the happy path, agents only for edge cases. So we built Demonstrate Mode.

The idea is simple: You perform your workflow once in a remote browser. Notte records the interactions and generates deterministic automation code.

How it works:

- Record clicks, inputs, navigations in a cloud browser

- Compile them into deterministic code (no LLM at runtime)

- Run and deploy on managed browser infrastructure

Closest analog is Playwright codegen but:

- Infrastructure is handled (remote browsers, proxies, auth state)

- Code runs in a deployable runtime with logs, retries, and optional agent fallback
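
For a sense of what the generated deterministic code means in practice, this is the kind of Playwright-style script such a recording typically compiles to (a generic sketch with made-up URL and selectors, not Notte's actual output format):

  // Generic example of deterministic replay code in the Playwright style;
  // the URL and selectors are made up, and this is not Notte's output format.
  import { chromium } from "playwright";

  async function run() {
    const browser = await chromium.launch({ headless: true });
    const page = await browser.newPage();

    // Each step mirrors one recorded interaction: no LLM calls at runtime,
    // so cost and behavior are predictable from run to run.
    await page.goto("https://example.com/login");
    await page.fill("#email", process.env.DEMO_EMAIL ?? "");
    await page.fill("#password", process.env.DEMO_PASSWORD ?? "");
    await page.click("button[type=submit]");
    await page.waitForSelector("text=Dashboard");

    await browser.close();
  }

  run();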

Agents are great for prototyping and dynamic steps, but for production we usually want versioned code and predictable cost/behavior. Happy to dive into implementation details in the comments.

Demo: https://www.loom.com/share/f83cb83ecd5e48188dd9741724cde49a

-- Andrea & Lucas, Notte Founders

notte.cc
32 20
jmuncor 2 days ago

Show HN: A MitM proxy to see what your LLM tools are sending

I built this out of curiosity about what Claude Code was actually sending to the API. Turns out, watching your tokens tick up in real-time is oddly satisfying.

Sherlock sits between your LLM tools and the API, showing you every request with a live dashboard and auto-saved copies of every prompt as Markdown and JSON.
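
The core move, stripped down (a generic illustration, not Sherlock's implementation): a tiny local proxy that logs each request body and forwards it upstream. The upstream URL, port, and header handling are placeholders; a real proxy would also forward auth headers:

  // Generic illustration of the MitM idea: point a tool's base URL at
  // localhost:8080, log every request body, forward it to the real API.
  import http from "node:http";

  const UPSTREAM = "https://api.example.com"; // placeholder upstream

  const server = http.createServer(async (req, res) => {
    const chunks: Buffer[] = [];
    for await (const chunk of req) chunks.push(chunk as Buffer);
    const body = Buffer.concat(chunks);

    // A real tool would pretty-print prompts, count tokens, save markdown, etc.
    console.log(`${req.method} ${req.url} (${body.length} bytes)`);

    const upstream = await fetch(UPSTREAM + (req.url ?? "/"), {
      method: req.method,
      // NOTE: auth and other headers are omitted here for brevity.
      headers: { "content-type": req.headers["content-type"] ?? "application/json" },
      body: ["GET", "HEAD"].includes(req.method ?? "GET") ? undefined : body,
    });

    res.writeHead(upstream.status, {
      "content-type": upstream.headers.get("content-type") ?? "application/json",
    });
    res.end(Buffer.from(await upstream.arrayBuffer()));
  });

  server.listen(8080, () => console.log("proxy listening on :8080"));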

github.com
215 119
tsvoboda 1 day ago

Show HN: Autonomous recovery for distributed training jobs

Hi HN! We’re TensorPool. We help companies access and optimize large scale compute for training foundation models.

The Problem

It’s been almost a year since we finished YC, and we’ve just crossed 100,000 multinode training GPU hours run on our platform.

On those training runs, we’ve seen countless 3am job crashes because of issues like an Xid error from a flaky GPU or an S3 timeout that corrupted a checkpoint save. By the time you wake up and notice, you've lost 8+ hours of compute. You scramble to diagnose the issue, manually restart from the last checkpoint, and hope it doesn't happen again. Rinse and repeat.

For training runs that take days to weeks, this constant babysitting is exhausting and expensive. The research iteration cycles lost can also make or break a model release (especially for short reservations).

What We Built

This agent monitors your training jobs and autonomously recovers them when things go wrong. It works with Kubernetes, Slurm, and TensorPool Jobs.

We originally built the TensorPool Agent as an internal tool to help us debug failures with our own customers. Over time, we realized its performance was so good that we could automate the entire triage process. We're now releasing a public beta for people to use.

Best case: The TensorPool Agent detects the failure, diagnoses the root cause, fixes it, and restarts your job from the last checkpoint – all while you sleep ;)

Worst case: If the TensorPool agent can't fix the issue automatically, it delivers a preliminary RCA and a list of actions it attempted, giving you a head start on debugging.

How It Works

1) Registration – You provide credentials to your job scheduler via our dashboard. Perms are granted on a whitelist basis; you explicitly control what actions the agent can take.

2) Monitoring – The agent continuously monitors your job for failure conditions.

3) Recovery – On failure, the agent analyzes logs and attempts to diagnose the issue. If successful, it restarts the job from the last checkpoint and resumes monitoring. If not, you get an alert with full context.
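
A heavily simplified sketch of that monitor-and-recover loop (an illustration, not the actual agent): tail the job log, match known failure signatures, and call a restart-from-checkpoint command that you define yourself:

  // Simplified monitor-and-recover loop: tail a training log, match known
  // failure signatures, shell out to a user-defined restart command.
  import { spawn, exec } from "node:child_process";

  const FAILURE_PATTERNS = [
    { name: "CUDA OOM", re: /CUDA out of memory/i },
    { name: "Xid error", re: /NVRM: Xid/i },
    { name: "NCCL failure", re: /NCCL (error|timeout)/i },
    { name: "Checkpoint I/O", re: /checkpoint.*(timeout|corrupt)/i },
  ];

  // Placeholder: in practice this would resubmit to Kubernetes/Slurm,
  // pointing the job at the latest good checkpoint.
  const RESTART_CMD = "./restart_from_latest_checkpoint.sh";

  function monitor(logPath: string) {
    const tail = spawn("tail", ["-F", logPath]);
    tail.stdout.on("data", (data: Buffer) => {
      const line = data.toString();
      const hit = FAILURE_PATTERNS.find((p) => p.re.test(line));
      if (hit) {
        console.error(`detected ${hit.name}, attempting restart`);
        exec(RESTART_CMD, (err) => {
          if (err) console.error("restart failed, escalating to a human", err);
        });
      }
    });
  }

  monitor("/var/log/training/job.log");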

Target Failure Modes

The agent is specifically designed for runtime errors that occur deep into training, like:

- CUDA OOM: Memory leaks, gradient explosions

- Xid errors: GPU hardware faults (Xid 79, 63, 48, etc.)

- Distributed communication failures: NCCL timeouts, rank failures

- Storage I/O errors: Checkpoint corruption

- Network issues: S3 request timeouts on mounted object storage

docs.tensorpool.dev
9 3
crediblejhj 3 days ago

Show HN: I built a small browser engine from scratch in C++

Hi HN! Korean high school senior here, about to start CS in college.

I built a browser engine from scratch in C++ to understand how browsers work. First time using C++, 8 weeks of development, lots of debugging—but it works!

Features:

- HTML parsing with error correction

- CSS cascade and inheritance

- Block/inline layout engine

- Async image loading + caching

- Link navigation + history

Hardest parts:

- String parsing (HTML, CSS)

- Rendering

- Image Caching & Layout Reflowing

What I learned (beyond code):

- Systematic debugging is crucial

- Ship with known bugs rather than chase perfection

- The Power of "Why?"

~3,000 lines of C++17/Qt6. Would love feedback on code architecture and C++ best practices!
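
For readers curious what the block layout part involves at its simplest, here is a generic toy sketch (in TypeScript rather than the project's C++/Qt, and not its actual code): block boxes take the containing block's width and stack vertically, and a parent's height grows to contain its children:

  // Toy block-layout pass: children get the parent's width and stack
  // vertically; leaf boxes keep an intrinsic height, parents are sized
  // to contain their children. Generic illustration only.
  interface Box {
    children: Box[];
    x: number;
    y: number;
    width: number;
    height: number; // intrinsic for leaves, computed for parents
  }

  function layoutBlock(box: Box, containerX: number, containerWidth: number, startY: number): number {
    box.x = containerX;
    box.y = startY;
    box.width = containerWidth;

    let cursorY = startY;
    for (const child of box.children) {
      cursorY = layoutBlock(child, containerX, containerWidth, cursorY);
    }
    if (box.children.length > 0) {
      box.height = cursorY - startY;
    }
    return box.y + box.height; // bottom edge, where the next sibling starts
  }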

GitHub: https://github.com/beginner-jhj/mini_browser

github.com
144 45
embedding-shape 4 days ago

Show HN: One Human + One Agent = One Browser From Scratch in 20K LOC

Related: https://simonwillison.net/2026/Jan/27/one-human-one-agent-on...

emsh.cat
315 151
tekkie00 2 days ago

Show HN: Shelvy Books

Hey HN! I built a little side project I wanted to share.

Shelvy is a free, visual bookshelf app where you can organize books you're reading, want to read, or have finished. Sign in to save your own collection.

Not monetized, no ads, no tracking beyond basic auth. Just a fun weekend project that grew a bit.

Live: https://shelvybooks.com

Would love any feedback on the UX or feature ideas!

shelvybooks.com
48 17
cmkr 4 days ago

Show HN: We Built the 1st EU-Sovereignty Audit for Websites

The article discusses an audit of the European Union's policies and institutions, highlighting the need for greater transparency, accountability, and efficiency in the EU's governance. It emphasizes the importance of addressing concerns about the EU's democratic legitimacy and decision-making processes.

lightwaves.io
104 88