Show stories

Show HN: React-Kino – Cinematic scroll storytelling for React (1KB core)
bilater 2 days ago

I built react-kino because I wanted Apple-style scroll experiences in React without pulling in GSAP (33KB for ScrollTrigger alone).

The core scroll engine is under 1KB gzipped. It uses CSS position: sticky with a spacer div for pinning — same technique as ScrollTrigger but with zero dependencies.
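
The pinning technique above boils down to mapping the page scroll offset into a local 0-1 progress value for the pinned scene; a minimal sketch of that mapping (plain Python for illustration; the function and parameter names are mine, not react-kino's API):

```python
def pin_progress(scroll_y: float, pin_start: float, pin_length: float) -> float:
    """Map the page scroll offset to a 0..1 progress value for a pinned scene.

    While scroll_y is inside [pin_start, pin_start + pin_length], the sticky
    element stays fixed and progress advances linearly; outside that range it
    is clamped, which is what the spacer div's height guarantees.
    """
    if pin_length <= 0:
        raise ValueError("pin_length must be positive")
    return min(1.0, max(0.0, (scroll_y - pin_start) / pin_length))
```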

Twelve declarative components, including Scene, Reveal, Parallax, Counter, TextReveal, CompareSlider, VideoScroll, HorizontalScroll, Progress, Marquee, and StickyHeader.

SSR-safe, respects prefers-reduced-motion, works with Next.js App Router.

Demo: https://react-kino.dev

GitHub: https://github.com/btahir/react-kino

npm: npm install react-kino

github.com
8 0
Show HN: I built a sub-500ms latency voice agent from scratch
nicktikhonov about 19 hours ago

I built a voice agent from scratch that averages ~400ms end-to-end latency (phone stop → first syllable). That’s with full STT → LLM → TTS in the loop, clean barge-ins, and no precomputed responses.

What moved the needle:

Voice is a turn-taking problem, not a transcription problem. VAD alone fails; you need semantic end-of-turn detection.

The system reduces to one loop: speaking vs listening. The two transitions - cancel instantly on barge-in, respond instantly on end-of-turn - define the experience.
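
The two-state loop described here can be sketched as a tiny state machine; the event names below are invented for illustration and are not from the actual project:

```python
from enum import Enum

class State(Enum):
    LISTENING = "listening"
    SPEAKING = "speaking"

class TurnLoop:
    """Toy model of the speaking/listening loop: barge-in cancels speech
    instantly, semantic end-of-turn triggers a response instantly."""

    def __init__(self) -> None:
        self.state = State.LISTENING
        self.cancelled = 0
        self.responses = 0

    def on_event(self, event: str) -> None:
        if self.state == State.SPEAKING and event == "user_speech_started":
            self.cancelled += 1          # barge-in: stop TTS immediately
            self.state = State.LISTENING
        elif self.state == State.LISTENING and event == "end_of_turn":
            self.responses += 1          # end-of-turn detected: start replying
            self.state = State.SPEAKING
```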

STT → LLM → TTS must stream. Sequential pipelines are dead on arrival for natural conversation.

TTFT dominates everything. In voice, the first token is the critical path. Groq’s ~80ms TTFT was the single biggest win.

Geography matters more than prompts. Colocate everything or you lose before you start.

GitHub Repo: https://github.com/NickTikhonov/shuo

Follow whatever I next tinker with: https://x.com/nick_tikhonov

ntik.me
501 146
Show HN: Diarize – CPU-only speaker diarization, 7x faster than pyannote
loookas about 1 hour ago

Diarize is an open-source library for speaker diarization: segmenting an audio recording by who is speaking. It provides tools for speaker clustering, speech activity detection, and other audio processing tasks.

github.com
2 3
Show HN: LazyTail – Terminal log viewer with built-in MCP server for AI analysis
raaymax about 2 hours ago

github.com
3 0
foxfoxx about 23 hours ago

Show HN: Govbase – Follow a bill from source text to news bias to social posts

Govbase tracks every bill, executive order, and federal regulation from official sources (Congress.gov, Federal Register, White House). An AI pipeline breaks each one down into plain-language summaries and shows who it impacts by demographic group.

It also ties each policy directly to bias-rated news coverage and politician social posts on X, Bluesky, and Truth Social. You can follow a single bill from the official text to how media frames it to what your representatives are saying about it.

Free on web, iOS, and Android.

https://govbase.com

I'd love feedback from the community, especially on the data pipeline or what policy areas/features you feel are missing.

govbase.com
201 84
Show HN: Qast – Cast anything (files, URLs, screen) to any TV from the CLI
narragansett about 2 hours ago

Hi HN,

I built qast because I couldn’t find a tool that “just works” for casting content to a TV. Some TVs support YouTube natively, some do screen mirroring, and only a handful actually show up in Chrome's cast menu. Even when you do get a connection, one TV might accept MKV but not WebM, while another just drops the audio entirely.

qast sidesteps the compatibility problem. It takes whatever you give it -- a local file, a YouTube URL, your desktop screen, a specific window, or a webpage rendered via headless Chromium -- and transcodes it on the fly to H.264/AAC. Because practically every smart TV in the last decade supports this lowest common denominator, it just works.

(Note: You currently need to be running Linux to use it. macOS/Windows support is on the roadmap).

Under the hood:

Written in Python.

Relies on ffmpeg for the heavy lifting (transcoding, window capture).

Uses yt-dlp for extracting web video streams.

Uses Playwright to render web dashboards in a headless browser before casting.

Auto-discovers Chromecast, Roku, and DLNA devices on your local network.
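
The transcode-to-H.264/AAC step could be sketched as an ffmpeg invocation like the one below; these flags are a plausible baseline I chose, not qast's actual command line:

```python
def build_transcode_cmd(source: str) -> list[str]:
    """Build an ffmpeg command that re-encodes any input to H.264/AAC,
    the lowest common denominator most smart TVs accept.
    (Illustrative flags only; the real pipeline is more involved.)"""
    return [
        "ffmpeg",
        "-i", source,            # any container/codec ffmpeg can read
        "-c:v", "libx264",       # H.264 video
        "-preset", "veryfast",   # favor speed for on-the-fly transcoding
        "-c:a", "aac",           # AAC audio
        "-f", "mpegts",          # streamable container
        "pipe:1",                # write to stdout for the casting layer
    ]
```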

Mostly, I want to get some early feedback. If you have experience wrestling with this problem (especially the endless DLNA quirks) or have ideas for other useful features, that would be fantastic as well.

github.com
2 0
Show HN: Pianoterm – Run shell commands from your Piano. A Linux CLI tool
vustagc about 19 hours ago

A little weekend project, made so I can pause/play/rewind directly on the piano, when learning a song by ear.

github.com
56 21
Show HN: Omni – Open-source workplace search and chat, built on Postgres
prvnsmpth 1 day ago

Hey HN!

Over the past few months, I've been working on building Omni - a workplace search and chat platform that connects to apps like Google Drive/Gmail, Slack, Confluence, etc. Essentially an open-source alternative to Glean, fully self-hosted.

I noticed that some orgs find Glean to be expensive and not very extensible. I wanted to build something that small to mid-size teams could run themselves, so I decided to build it all on Postgres (ParadeDB to be precise) and pgvector. No Elasticsearch, or dedicated vector databases. I figured Postgres is more than capable of handling the level of scale required.

To bring up Omni on your own infra, all it takes is a single `docker compose up`, and some basic configuration to connect your apps and LLMs.

What it does:

- Syncs data from all connected apps and builds a BM25 index (ParadeDB) and HNSW vector index (pgvector)

- Hybrid search combines results from both

- Chat UI where the LLM has tools to search the index - not just basic RAG

- Traditional search UI

- Users bring their own LLM provider (OpenAI/Anthropic/Gemini)

- Connectors for Google Workspace, Slack, Confluence, Jira, HubSpot, and more

- Connector SDK to build your own custom connectors
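
Hybrid fusion of BM25 and vector rankings is commonly done with reciprocal rank fusion; here is a minimal sketch of that idea (not necessarily Omni's actual scoring):

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked result lists (e.g. one from BM25, one from
    vector search) into one, scoring each document by sum of 1/(k + rank).
    k damps the influence of any single list's top ranks."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```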

Omni is in beta right now, and I'd love your feedback, especially on the following:

- Has anyone tried self-hosting workplace search and/or AI tools, and what was your experience like?

- Any concerns with the Postgres-only approach at larger scales?

Happy to answer any questions!

The code: https://github.com/getomnico/omni (Apache 2.0 licensed)

github.com
162 42
Show HN: uBlock filter list to blur all Instagram Reels
shraiwi about 20 hours ago

A filter list for uBO that blurs all video and non-follower content from Instagram. Works on mobile with uBO Lite.

related: https://news.ycombinator.com/item?id=47016443

gist.github.com
120 48
Show HN: Run any Google Chrome version(+116) in Docker for web automation
sam_march about 1 hour ago

BlitzBrowser packages Google Chrome versions 116 and newer in Docker images for web automation, letting you run a specific browser version without installing it on the host.

github.com
2 2
Show HN: Timber – Ollama for classical ML models, 336x faster than Python
kossisoroyce 1 day ago

Timber aims to be an Ollama-style runner for classical ML models, with a claimed 336x speedup over the equivalent Python workflow.

github.com
191 32
Show HN: Visual Lambda Calculus – a thesis project (2008) revived for the web
bntr 3 days ago

Originally built as my master's thesis in 2008, Visual Lambda is a graphical environment where lambda terms are manipulated as draggable 2D structures ("Bubble Notation"), and beta-reduction is smoothly animated.

I recently revived and cleaned up the project and published it as an interactive web version: https://bntre.github.io/visual-lambda/

GitHub repo: https://github.com/bntre/visual-lambda

It also includes a small "Lambda Puzzles" challenge, where you try to extract a hidden free variable (a golden coin) by constructing the right term: https://github.com/bntre/visual-lambda#puzzles

github.com
46 9
Show HN: Web Audio Studio – A Visual Debugger for Web Audio API Graphs
alexgriss 1 day ago

Hi HN,

I’ve been working on a browser-based tool for exploring and debugging Web Audio API graphs.

Web Audio Studio lets you write real Web Audio API code, run it, and see the runtime graph it produces as an interactive visual representation. Instead of mentally tracking connect() calls, you can inspect the actual structure of the graph, follow signal flow, and tweak parameters while the audio is playing.

It includes built-in visualizations for common node types — waveforms, filter responses, analyser time and frequency views, compressor transfer curves, waveshaper distortion, spatial positioning, delay timing, and more — so you can better understand what each part of the graph is doing. You can also insert an AnalyserNode between any two nodes to inspect the signal at that exact point in the chain.

There are around 20 templates (basic oscillator setups, FM/AM synthesis, convolution reverb, IIR filters, spatial audio, etc.), so you can start from working examples and modify them instead of building everything from scratch.

Everything runs fully locally in the browser — no signup, no backend.

The motivation came from working with non-trivial Web Audio graphs and finding it increasingly difficult to reason about structure and signal flow once things grow beyond simple examples. Most tutorials show small snippets, but real projects quickly become harder to inspect. I wanted something that stays close to the native Web Audio API while making the runtime graph visible and inspectable.

This is an early alpha and desktop-only for now.

I’d really appreciate feedback — especially from people who have used Web Audio API in production or built audio tools. You can leave comments here, or use the feedback button inside the app.

https://webaudio.studio

webaudio.studio
64 7
Show HN: Giggles – A batteries-included React framework for TUIs
ajz317 about 14 hours ago

i built a framework that handles focus and input routing automatically for you -- something born out of the things that ink leaves to you, and inspired by charmbracelet's bubbletea

- hierarchical focus and input routing: the hard part of terminal UIs, solved. define focus regions with useFocusScope, compose them freely -- a text input inside a list inside a panel just works. each component owns its keys; unhandled keypresses bubble up to the right parent automatically. no global handler like useInput, no coordination code

- 15 UI components: Select, TextInput, Autocomplete, Markdown, Modal, Viewport, CodeBlock (with diff support), VirtualList, CommandPalette, and more. sensible defaults, render props for full customization

- terminal process control: spawn processes and stream output into your TUI with hooks like useSpawn and useShellOut; hand off to vim, less, or any external program and reclaim control cleanly when they exit

- screen navigation, a keybinding registry (expose a ? help menu for free), and theming included

- react 19 compatible!
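
the bubbling behaviour described above can be modelled as a chain of focus scopes, each getting first refusal on a keypress; a conceptual sketch in Python (not giggles' actual API):

```python
class FocusScope:
    """A focus region that consumes the keys it handles and lets
    everything else bubble up to its parent scope."""

    def __init__(self, name, parent=None, handles=()):
        self.name = name
        self.parent = parent
        self.handles = set(handles)   # keys this scope consumes

    def dispatch(self, key: str):
        """Offer the key to the focused scope; unhandled keys bubble up.
        Returns the name of the scope that handled the key, or None."""
        scope = self
        while scope is not None:
            if key in scope.handles:
                return scope.name
            scope = scope.parent
        return None
```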

docs and live interactive demos in your browser: https://giggles.zzzzion.com

quick start: npx create-giggles-app

github.com
20 8
Show HN: Rriftt_ai.h – A bare-metal, dependency-free C23 tensor engine
Rriftt about 8 hours ago

Hi HN, I built rriftt_ai.h because I hit my breaking point with the modern deep learning stack.

I wanted to train and run Transformers, but I was exhausted by gigabyte-sized Python environments, opaque C++ build systems, and deep BLAS dependency trees. I wanted to see what it actually takes to execute a forward and backward pass from absolute scratch.

The result is a single-header, stb-style C library written in strict C23.

Architectural decisions I made:

- *Zero dependencies:* It requires nothing but a C compiler and the standard math library.

- *Strict memory control:* You instantiate a `RaiArena` at boot. The engine operates entirely within that perimeter. There are zero hidden `malloc` or `free` calls during execution.

- *The full stack:* It natively implements Scaled Dot-Product Attention, RoPE, RMSNorm, and SwiGLU. I also built the backprop routines, Cross-Entropy loss, AdamW optimizer, and a BPE tokenizer directly into the structs.
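
The `RaiArena` idea, one pre-sized region with bump allocation and no per-object free, can be illustrated conceptually (Python here purely for brevity; the real engine is strict C23):

```python
class Arena:
    """Bump allocator over one pre-allocated buffer: allocation is just a
    pointer increment, and the whole region is released at once."""

    def __init__(self, size: int):
        self.buf = bytearray(size)
        self.offset = 0

    def alloc(self, nbytes: int, align: int = 8) -> int:
        start = (self.offset + align - 1) // align * align  # round up to alignment
        if start + nbytes > len(self.buf):
            raise MemoryError("arena exhausted: no hidden malloc fallback")
        self.offset = start + nbytes
        return start           # offset into the arena, standing in for a pointer

    def reset(self) -> None:
        self.offset = 0        # free everything in O(1)
```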

It is currently public domain (or MIT, your choice). The foundation is stable and deterministic, but it is currently pure C math. I built this architecture to scale, so if anyone wants to tear apart my C23 implementation, audit the memory alignment, or submit SIMD/hardware-specific optimizations for the matmul operations, I'm actively reviewing PRs.

github.com
4 0
Show HN: Gapless.js – gapless web audio playback
switz about 21 hours ago

Hey HN,

I just released v4 of my gapless playback library that I first built in 2017 for https://relisten.net. We stream concert recordings, where gapless playback is paramount.

It's built from scratch, backed by a rigid state machine (the sole dependency is xstate) and is already running in production over at Relisten.

The way it works is by preloading future tracks as raw buffers and scheduling them via the web audio API. It seamlessly transitions between HTML5 and web audio. We've used this technique for the last 9 years and it works fairly well. Occasionally it will blip slightly from HTML5->web audio, but there's not much to be done to avoid that (just when to do it - lotta nuance here). Once you get on web audio, everything should be clean.
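
Gapless scheduling of preloaded buffers reduces to queuing each decoded track at the exact moment the previous one ends; a sketch of that timing arithmetic (plain Python, not Gapless.js itself):

```python
def schedule_starts(now: float, durations: list[float]) -> list[float]:
    """Compute Web-Audio-style start times for a queue of decoded tracks:
    track n begins exactly where track n-1 ends, so there is no gap."""
    starts, t = [], now
    for d in durations:
        starts.append(t)
        t += d
    return starts
```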

Unfortunately, Web Audio support is still lacking on mobile, in which case you can just disable Web Audio and it'll fall back to full HTML5 playback (sans gapless). But if you drive a largely desktop experience, this is fine. On mobile, most people use our native app.

You can view a demo of the project at https://gapless.saewitz.com - just click on "Scarlet Begonias", seek halfway into the track (as it won't preload until >15s) and wait for "decoding" on "Fire on the Mountain" to switch to "ready". Then tap "skip to -2s" and hear the buttery-smooth segue.

github.com
35 10
Show HN: wo; a better CD for repo management
itsagamer124 about 10 hours ago

This is something that I've been wanting to make and use myself for the longest time.

If you're anything like me, you have a million projects in a million places (I have 56 repositories!) and they're all from different people. I'm a big CLI and Neovim user, so for the longest time I've had to do the following:

cd some/long/path/foo/project

nvim .

This gets really infuriating after a while.

wo changes this to wo project

and you're already cd'd into your project.

running wo scan --root ~/workspaces --depth <depth>

will automatically scan for git repos (or .wo files, if you choose not to track your repo) and add them to your repo list. Project names are inferred from the repo's remote URL, so repos can live anywhere.
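
Inferring a project name from a repo's remote URL might look like this; this is a guess at the behaviour, not wo's actual code:

```python
def project_name_from_remote(remote_url: str) -> str:
    """Extract 'repo' from common git remote URL shapes, e.g.
    git@github.com:owner/repo.git or https://github.com/owner/repo."""
    tail = remote_url.rstrip("/").split("/")[-1]
    tail = tail.split(":")[-1]          # handle scp-style git@host:repo.git
    if tail.endswith(".git"):
        tail = tail[: -len(".git")]
    return tail
```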

If your repo is local, the project owner is inferred from the enclosing folder (e.g. I have a local folder, so the project owner will be called local).

But I think the killer feature is hooks.

remember that nvim .?

Now you can create custom hooks. On enter, we can automatically bring up nvim, so wo project brings up Neovim with all your files loaded.

You can define a hook called claude, and call it like this: wo project claude

Your hook can automatically bring up Claude Code or any other CLI tool of your choice. You can do cursor ., code ., or zen ., or run any arbitrary script of your liking. Hooks can be global as well; no need to redefine them for each directory.

I've been using this for a few weeks and it's been exactly what I needed. There are a ton of cool features I didn't mention that are in the README. Feel free to star it! (I need more talking points for my resume.) Also feel free to ask me any questions, tell me about similar implementations, or suggest features you'd like added!

Whole thing is open source and MIT licensed. Let me know if you guys like it!

github.com
3 0
ddesposito 1 day ago

Show HN: Try Archetype 360 – AI‑powered personality test, 3× deeper than MBTI

Hi there, are you familiar with MBTI, DiSC, Big Five? Well I'm experimenting with a new kind of personality test, Archetype 360, and I'd love for you to try it for free and tell me what you think.

- 24 traits across 12 opposing pairs -- that's three times more dimensions than MBTI or DiSC, so you get a much more nuanced profile.

- A unique narrative report generated with AI (Claude), written in natural language instead of generic type blurbs.

- Your role, goals, and current challenges are blended into the analysis, so the report feels relevant to your real-life context, not just abstract traits.

It's an "ephemeral app" so your report only lives in your browser, there's no login, and we don't store your data. Make sure you save the report as a PDF before you close the page.

What I'm looking for is honest feedback on your archetype and report:

- Did it feel accurate and "wow", or just meh?

- Did you learn anything unexpected about yourself?

- What did it miss or not go deep enough on?

I'll use your feedback to refine the prompts and the underlying model. Just comment here or use the feedback form in the app.

If there's enough interest, the next step is to combine Archetype 360 with a variation of Holland Codes / RIASEC (vocational interest areas) to create a full‑power professional orientation report.

What else would you love to see? Ideas welcome!

Best wishes, Daniel

archetype360.app
9 5
rendernos about 10 hours ago

Show HN: Trade Stocks and Crypto On-Chain with Full Transparency

aulico.com
3 1
PantheonOS about 11 hours ago

Show HN: PantheonOS–An Evolvable, Distributed Multi-Agent System for Science

We are thrilled to share our preprint on PantheonOS, an evolvable, distributed multi-agent system for automatic genomics discovery.

Preprint: https://www.biorxiv.org/content/10.64898/2026.02.26.707870v1

Website (online platform, free to everyone): https://pantheonos.stanford.edu/

PantheonOS unites LLM-powered agents, reinforcement learning, and agentic code evolution to push beyond routine analysis — evolving state-of-the-art algorithms to super-human performance.

Applications:

- Evolved batch correction (Harmony, Scanorama, BBKNN)

- RL-augmented gene panel design

- Intelligent routing across 22+ virtual cell foundation models

- Autonomous discovery from newly generated 3D early mouse embryo data

- Integrated human fetal heart multi-omics with 3D whole-heart spatial data

Pantheon is highly extensible: although it is currently showcased with applications in genomics, the architecture is very general. The code is now open-sourced, and we hope to build a new-generation AI data science ecosystem.

pantheonos.stanford.edu
3 0
PrateekRao01 about 11 hours ago

Show HN: Cortexa – Bloomberg terminal for agentic memory

Hi HN — I’m Prateek Rao. My cofounders and I built Cortexa, which we describe as a Bloomberg terminal for agentic memory.

A pattern I keep seeing: when agents misbehave, most teams iterate on prompts and then “fix” it by plugging in a memory layer (vector DB + RAG). That helps sometimes — but it doesn’t guarantee correctness. In practice it often introduces a new failure mode: the agent retrieves something dubious, writes it back to memory as if it’s truth, and that mistake becomes sticky. Over time you get memory pollution, circular hallucination loops, and debugging turns into log archaeology.

What Cortexa does:

1. Agent decision forensics (end-to-end “why”): trace outputs/actions back to the exact retrievals, memory writes, and tool calls that caused them.

2. Memory write governance: intercept and score memory writes (0–1), and optionally block/quarantine ungrounded entries before they poison future runs.

3. Memory hygiene + vector store noise control: automatically detect and remove near-duplicate / low-signal entries so retrieval stays high-quality and storage + inference costs don’t creep up.
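
Write governance of this sort reduces to a gate between the agent and its memory store; a toy version (the thresholds and labels below are invented, not Cortexa's):

```python
def govern_write(entry: str, grounding_score: float,
                 block_below: float = 0.3, quarantine_below: float = 0.7) -> str:
    """Route a proposed memory write based on a 0-1 grounding score:
    well-grounded entries are accepted, dubious ones quarantined for
    review, and ungrounded ones blocked before they poison future runs."""
    if not 0.0 <= grounding_score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if grounding_score < block_below:
        return "blocked"
    if grounding_score < quarantine_below:
        return "quarantined"
    return "accepted"
```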

Why this matters: Observability is the missing layer for agentic AI. Without it, autonomy is fragile: small errors silently compound, deployments become risky, and engineering cost goes up because failures aren’t reproducible or attributable.

Who this is for:

1. Teams shipping agentic workflows in production

2. Anyone fighting “unknown why” failures, memory pollution, or runaway context costs

3. Engineers who want auditability + faster debugging loops

Site: https://cortexa.ink/

Would love feedback from anyone running agents at scale:

1. What’s the most painful agent failure mode you’ve seen in production?

2. What signals would you want in an “agent terminal” (retrieval diffs, memory blame, tool-call traces, alerts, etc.)?

cortexa.ink
8 1
Show HN: Starcraft2 replay rendering engine and AI coach
tomkit about 12 hours ago

Starcraft2 is an old game, but it's always lacked a way to visualize game replays outside of the game itself.

I built a replay rendering engine from scratch using the replay files and Claude Code.

The replay files contain sampled position coordinates and the commands the player inputs. So I built an isometric view using the map, overlaid unit icons on it, and interpolated the positions units move through over time.
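
Interpolating unit positions between sampled replay frames is a straightforward lerp; a sketch with hypothetical data shapes:

```python
def interpolate_position(samples: list[tuple[float, float, float]],
                         t: float) -> tuple[float, float]:
    """Given (time, x, y) samples sorted by time, linearly interpolate
    the unit's position at time t (clamped to the sampled range)."""
    if t <= samples[0][0]:
        return samples[0][1:]
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)   # fraction of the way between samples
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    return samples[-1][1:]
```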

I also extracted additional metrics from the game data as well - some are derived on top of other metrics.

Finally, I pass all this context into a LLM for it to critique gameplay and offer strengths and improvements per player.

It's not perfect, but it's a good starting point to iterate and improve on.

Let me know what you think!

starcraft2.ai
4 0
behrlich about 21 hours ago

Show HN: Punch card simulator and Fortran IV interpreter

Code: https://github.com/ehrlich-b/punchcards

Just for fun, I've only spent a few hours on it so far. What are everyone's punch card emulation needs?

punch.ehrlich.dev
5 0
alius about 13 hours ago

Show HN: A Puzzle Game Based on Non-Commutative Operations

While solving a Skewb (https://en.wikipedia.org/wiki/Skewb) I thought it would be interesting to present its subproblems as puzzle games; one thing led to another, and here is the result.

I definitely have some UX problems, so I'm looking for feedback and thoughts.

The best part of this game is that level generation and difficulty analysis can be automated. I have 15 tested and 5 experimental levels here.

I enjoy the 15th level the most; it has an intuitive solution.

You can try the competitive mode with a friend; you just need to share the link with them.

If I can bring the level count to thousands, I will add a ranking system.

My mind keeps racing about the possibilities, but I kind of cannot prioritize at the moment.

All kinds of feedback and collaboration requests are welcome!

commutators.games
3 0
GustyCube about 13 hours ago

Show HN: GitHub Commits Leaderboard

I made a public leaderboard for all time GitHub commit contributions.

https://ghcommits.com

You can connect your GitHub account and see where you rank by total commit contributions.

It uses GitHub’s contribution data through GraphQL, so it is based on GitHub’s counting rules rather than raw git history. Private contributions can be included. Organization contributions only count if you grant org access during auth.

There is also a public read only API.

https://ghcommits.com/api

The main thing I learned building it is that commit counting sounds straightforward until you try to match how GitHub actually attributes contributions.

I’d be interested in feedback on whether commit contributions are the right ranking metric, and whether I should also support other contribution types.

ghcommits.com
3 0
Show HN: Audio Toolkit for Agents
stevehiehn 2 days ago

An audio toolkit that exposes processing operations such as normalization, equalization, and conversion between audio formats as tools that AI agents can call.

github.com
57 9
adamscottthomas about 14 hours ago

Show HN: An Auditable Decision Engine for AI Systems

maelstrom.ghostlogic.tech
3 1
Show HN: Writing App for Novelist
oknoorap about 20 hours ago

Novelos Studio is a writing app for novelists.

novelos.studio
9 5
LukeB42 2 days ago

Show HN: Vertex.js – A 1kloc SPA Framework

Vertex is a 1kloc SPA framework containing everything you need from React, Ractive-Load and jQuery while still being jQuery-compatible.

vertex.js is a single, self-contained file with no build step and no dependencies.

Also exhibits the curious quality of being faster than over a decade of engineering at Facebook in some cases: https://files.catbox.moe/sqei0d.png

lukeb42.github.io
33 19
Show HN: Logira – eBPF runtime auditing for AI agent runs
melonattacker 1 day ago

I started using Claude Code (claude --dangerously-skip-permissions) and Codex (codex --yolo) and realized I had no reliable way to know what they actually did. The agent's own output tells you a story, but it's the agent's story.

logira records exec, file, and network events at the OS level via eBPF, scoped per run. Events are saved locally in JSONL and SQLite. It ships with default detection rules for credential access, persistence changes, suspicious exec patterns, and more. Observe-only – it never blocks.

https://github.com/melonattacker/logira

github.com
25 3