Show stories

Show HN: Marmot – Single-binary data catalog (no Kafka, no Elasticsearch)
charlie-haley about 3 hours ago

Marmot is an open-source data catalog that ships as a single binary, requiring neither Kafka nor Elasticsearch to run. It offers a simple way to catalog and discover data assets across an organization without standing up heavyweight infrastructure.

github.com
56 11
Show HN: RunMat – runtime with auto CPU/GPU routing for dense math
nallana about 3 hours ago

Hi, I’m Nabeel. In August I released RunMat as an open-source runtime for MATLAB code that was already much faster than GNU Octave on the workloads I tried. https://news.ycombinator.com/item?id=44972919

Since then, I’ve taken it further with RunMat Accelerate: the runtime now automatically fuses operations and routes work between CPU and GPU. You write MATLAB-style code, and RunMat runs your computation across CPUs and GPUs for speed. No CUDA, no kernel code.

Under the hood, it builds a graph of your array math, fuses long chains into a few kernels, keeps data on the GPU when that helps, and falls back to CPU JIT / BLAS for small cases.
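
RunMat emits fused GPU/CPU kernels from its expression graph; as a rough illustration of what fusion buys you (not RunMat's actual code), here is a plain-Python sketch of an elementwise chain evaluated with and without materialized intermediates:

```python
import math

def unfused(xs):
    # Naive evaluation: each step materializes a full intermediate list,
    # the way three separate array-library calls would.
    a = [math.sin(x) for x in xs]
    b = [math.exp(x) for x in a]
    return [math.cos(x) for x in b]

def fused(xs):
    # Fused evaluation: one pass over the data, no intermediates --
    # the kind of single kernel a fusion pass can emit for the chain.
    return [math.cos(math.exp(math.sin(x))) for x in xs]
```

The results are identical; the win is memory traffic, since the fused version touches each element once instead of three times.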

On an Apple M2 Max (32 GB), here are some current benchmarks (median of several runs):

* 5M-path Monte Carlo: RunMat ≈ 0.61 s, PyTorch ≈ 1.70 s, NumPy ≈ 79.9 s → ~2.8× faster than PyTorch and ~130× faster than NumPy on this test.

* 64 × 4K image preprocessing pipeline (mean/std, normalize, gain/bias, gamma, MSE): RunMat ≈ 0.68 s, PyTorch ≈ 1.20 s, NumPy ≈ 7.0 s → ~1.8× faster than PyTorch and ~10× faster than NumPy.

* 1B-point elementwise chain (sin / exp / cos / tanh mix): RunMat ≈ 0.14 s, PyTorch ≈ 20.8 s, NumPy ≈ 11.9 s → ~140× faster than PyTorch and ~80× faster than NumPy.

If you want more detail on how the fusion and CPU/GPU routing work, I wrote up a longer post here: https://runmat.org/blog/runmat-accel-intro-blog

You can run the same benchmarks yourself from the GitHub repo in the main HN link. Feedback, bug reports, and “here’s where it breaks or is slow” examples are very welcome.

github.com
16 3
Show HN: Open-source full-stack starter built on TanStack Start
ivandalmet about 2 hours ago

Start UI Web is an open-source full-stack starter built on TanStack Start that provides an opinionated project setup plus a collection of reusable UI components and tools to help developers build modern, responsive web applications quickly, from basic elements like buttons and forms to features like modals and tooltips.

github.com
8 1
peterwoodman about 3 hours ago

Show HN: A simple Markdown note app with templates and PDF support

A lightweight note-taking app built with Go + HTMX, featuring nested pages, templates, shared spaces, and OCR to make scanned PDFs searchable. The web app uses responsive design to make it scale to desktop or mobile.

panto.app
3 2
Show HN: CoThou – Control what AI search engines say about your business
MartyD about 3 hours ago

I built CoThou after seeing search and AI answer engines give completely incorrect information about my company. Turns out, they prioritize structured, citable content, so I reverse-engineered how they choose sources and built CoThou to become the source of truth.

How it works

For businesses: Create a company profile. When search and AI answer engines are asked about your company, they’ll cite your company profile and its content, not Wikipedia or outdated info.

For publishers and knowledge workers: Publish to your personal profile with proper citations (300M+ academic papers indexed). When someone asks search and AI answer engines about your topic, they’ll cite your work, linking back to your profile and enabling citation tracking.

Try it now (unlimited during beta): → https://cothou.com

It’s v0.01 and rough around the edges. Try it and let me know what breaks.

What’s next: I’m currently training a custom 32B MoE (Mixture of Experts) LLM with 3B active parameters, scheduled to go live in Q1 2026. The key difference: it breaks down complex queries into parallel subtasks that execute live on an infinite canvas. You’ll see agents plan and build in real time, instead of waiting for a progress bar.

Examples:

- “Write a 300-page book on the history of computing”

- “Create a 60-second TikTok ad for my SaaS”

It handles research, outlines, storyboarding, asset generation, voice-overs, and music simultaneously.

Since only ~3B parameters are active per token, it runs 8–10× cheaper and faster than dense 32B models, while still matching or outperforming premium models on reasoning, coding, and long-context tasks.

Building through partnerships with NVIDIA Inception and Microsoft for Startups.

Would love HN feedback on:

- Improving citation accuracy

- Building trust with AI parsers

- What sources to add next (currently 100M companies + 300M academic papers)

- Anything else

Marty (Founder)

cothou.com
2 0
Show HN: Webclone.js – A simple tool to clone websites
jadesee about 13 hours ago

I needed a lightweight way to archive documentation from a website. wget and similar tools failed to clone the site reliably (missing assets, broken links, etc.), so I ended up building a full website-cloning tool using Node.js + Puppeteer.

Repo: https://github.com/jademsee/webclone

Feedback, issues, and PRs are very welcome.

github.com
16 3
tlhunter 1 day ago

Show HN: RFC Hub

I've worked at several companies during the past two decades and I kept encountering the same issues with internal technical proposals:

- Authors would change a spec after I started writing code

- It's hard to find what proposals would benefit from my review

- It's hard to find the right person to review my proposals

- It's not always obvious if a proposal has reached consensus (e.g. buried comments)

- I'm not notified if a proposal I approved is now ready to be worked on

And that's just scratching the surface. The most popular solutions (like Notion or Google Drive + Docs) mostly lack semantics. For example, it's easy for a human to see a table in a document with rows representing reviewers and a checkbox representing review acceptance, but it's hard to formally extract that meaning and prevent a document from "being published" when the criteria aren't met.
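
To make the "missing semantics" concrete: once reviewer state is explicit data rather than a table in a doc, a publish gate becomes machine-checkable. A minimal sketch (illustrative only, not RFC Hub's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    title: str
    required_reviewers: set[str]
    approvals: set[str] = field(default_factory=set)

    def approve(self, reviewer: str) -> None:
        if reviewer not in self.required_reviewers:
            raise ValueError(f"{reviewer} is not a listed reviewer")
        self.approvals.add(reviewer)

    def can_publish(self) -> bool:
        # Publishing is blocked until every required reviewer has approved:
        # the machine-checkable version of a checkbox table in a document.
        return self.required_reviewers <= self.approvals
```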

RFC Hub aims to solve these issues by building an easy-to-use interface around all the metadata associated with technical proposals instead of containing it textually within the document itself.

The project is still under heavy development as I work on it most nights and weekends. The next big feature I'm planning is proposal templates and the ability to refer to documents as something other than RFCs (Request for Comments). E.g. a company might have a UIRFC for GUI work (User Interface RFCs), a DBADR (Database Architecture Decision Record), etc. And while there's a built-in notification system I'm still working on a Slack integration. Auth works by sending tokens via email but of course RFC Hub needs Google auth.

Please let me know what you think!

rfchub.app
28 9
Show HN: An AI zettelkasten that extracts ideas from articles, videos, and PDFs
schoblaska 1 day ago

Hey HN! Over the weekend (leaning heavily on Opus 4.5) I wrote Jargon - an AI-managed zettelkasten that reads articles, papers, and YouTube videos, extracts the key ideas, and automatically links related concepts together.

Demo video: https://youtu.be/W7ejMqZ6EUQ

Repo: https://github.com/schoblaska/jargon

You can paste an article, PDF link, or YouTube video to parse, or ask questions directly and it'll find its own content. Sources get summarized, broken into insight cards, and embedded for semantic search. Similar ideas automatically cluster together. Each insight can spawn research threads - questions that trigger web searches to pull in related content, which flows through the same pipeline.

You can explore the graph of linked ideas directly, or ask questions and it'll RAG over your whole library plus fresh web results.
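
The "similar ideas automatically cluster together" step can be sketched as pairwise cosine similarity over the card embeddings with a link threshold (the threshold and shapes here are illustrative, not Jargon's actual settings):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def link_cards(embeddings, threshold=0.8):
    # Link every pair of insight cards whose embeddings are close enough.
    # In practice this runs over pgvector indexes; this is a toy scan.
    links = []
    ids = list(embeddings)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if cosine(embeddings[a], embeddings[b]) >= threshold:
                links.append((a, b))
    return links
```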

Jargon uses Rails + Hotwire with Falcon for async processing, pgvector for embeddings, Exa for neural web search, crawl4ai as a fallback scraper, and pdftotext for academic papers.

github.com
34 7
Show HN: Doomscrolling Research Papers
davailan about 8 hours ago

Hi HN,

Would love your thoughts on Open Paper Digest. It’s a mobile feed that lets you “doomscroll” through summaries of popular papers that were published recently.

Backstory

A combination of factors led me to build this:

1. The quality of content on social media apps has decreased, but I find it harder than ever to stay away from them.

2. I’ve been saying for a while now that I should start reading papers to keep up with what’s going on in the AI world.

Initially, I set out to build something solely for point 2. That version was more search-focused, and focused on simplifying the whole text of a paper, not summarizing it. Still, I wasn’t using it. After yet another 30-minute doomscroll on a bus last month, point 1 came into the picture and I changed how Open Paper Digest worked. That’s what you can see today!

How it works

It checks Hugging Face Trending Papers and the large research labs daily to find papers to add to the index. Each PDF gets converted to markdown using Mistral OCR, which is then given to Gemini 2.5 to create a 5-minute summary.
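
The daily ingest step boils down to "only run the expensive OCR + summarize pipeline for papers not already indexed." A sketch of that loop (all function and variable names here are hypothetical, not the site's code):

```python
def ingest(trending, index, summarize):
    # `trending` is the day's paper ids from the polled feeds;
    # `index` maps paper id -> stored summary. Only papers not yet in
    # the index go through the (expensive) OCR + summarization step.
    added = []
    for paper_id in trending:
        if paper_id not in index:
            index[paper_id] = summarize(paper_id)
            added.append(paper_id)
    return added
```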

I notice that I am now going to the site daily, so that’s a good sign. I’m curious what you all think, and what feedback you might have.

Cheers, Arthur

openpaperdigest.com
10 5
Show HN: Elf – A CLI Helper for Advent of Code
cak about 8 hours ago

I built a CLI tool called elf to streamline Advent of Code workflows. It removes a lot of the repetitive steps around fetching inputs, submitting answers safely, and checking private leaderboards.

The tool focuses on:

- Input fetching with caching (no repeated downloads, works offline)

- Safe answer submissions with guardrails to prevent duplicate or invalid guesses

- Private leaderboard viewer (table or JSON)

- Status calendar and guess history viewer

- Optional Python API for scripting or automation
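
The "no repeated downloads, works offline" behaviour comes down to a read-through cache in front of the fetch. A generic sketch with an injectable fetcher (not elf's actual code; names are illustrative):

```python
from pathlib import Path

def get_input(year, day, fetch, cache_dir=Path(".aoc-cache")):
    # Return the puzzle input from the on-disk cache if present; otherwise
    # call `fetch` exactly once and store the result for next time.
    cache_dir.mkdir(exist_ok=True)
    path = cache_dir / f"{year}-{day:02d}.txt"
    if path.exists():
        return path.read_text()
    data = fetch(year, day)
    path.write_text(data)
    return data
```

Because the fetcher is injected, the second call never hits the network, which is also what makes offline use possible.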

It’s built with Typer, httpx, Pydantic, and Rich, and aims to be clean, predictable, and easy to extend.

Repo: https://github.com/cak/elf

PyPI: https://pypi.org/project/elf/

Feedback and questions are welcome.

github.com
3 3
Show HN: I wrote a book for software engineers, based on 11 years at Uber
ten-fold about 3 hours ago

Hi HN, I recently left Uber after an intense decade as Senior and then Staff Engineer.

Coming from a small startup, it took me years to learn how to be successful in tech. When I left, I decided to write down the raw, unfiltered advice that you rarely hear from managers.

It’s a fun, quick read through 7 playbooks.

Enjoy the free PDF for the next 48 hours.

Ask me anything! :)

rfonti.gumroad.com
4 2
rafferty97 about 8 hours ago

Show HN: Visual, local-first data tool

I've spent the past two years building an app for quick, ad-hoc data manipulation because I was dissatisfied with the existing landscape of tools. I thought others might find it useful too, so I'm throwing it out into the world.

Currently, if you want to grab some CSV or JSON data and do a sequence of operations on it (filter, sort, aggregate, etc.), the path of least resistance is to open an IDE or notebook and write code. This is fine for simple tasks, but can get messy quickly and doesn't offer the same immediate visual feedback loop that something like a spreadsheet does.
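
The "open a notebook and write code" path being described looks something like this for a filter/aggregate/sort task (made-up data and field names, purely illustrative):

```python
from collections import defaultdict

rows = [
    {"city": "Oslo", "amount": 120.0},
    {"city": "Bergen", "amount": 80.0},
    {"city": "Oslo", "amount": 45.5},
]

# Filter: keep rows with amount over 50.
big = [r for r in rows if r["amount"] > 50]

# Aggregate: total amount per city.
totals = defaultdict(float)
for r in big:
    totals[r["city"]] += r["amount"]

# Sort: cities by total, descending.
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

Each step works, but there is no visual feedback until you print, which is the gap a spreadsheet-like UX fills.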

I thought there ought to exist a tool that offers a similar UX to a spreadsheet but with the power of a dataframe library, so I built one.

There's no signup and it runs entirely within the browser, using Rust compiled to WebAssembly for the data processing and SolidJS for the UI. Projects are persisted to IndexedDB and files are read directly from disk using the File System Access API, so nothing sensitive ever leaves your computer.

I know documentation is currently scarce, but if there's enough interest, I'm happy to work on this.

Any questions or feedback are of course welcome. I'm just curious whether anyone would actually find this tool useful.

columns.dev
3 1
Show HN: Boing
gregsadetsky 3 days ago

boing.greg.technology
765 144
b44rd about 9 hours ago

Show HN: I want food – Simple swipe based restaurant discovery app

iwant.food
4 2
Show HN: I was reintroduced to computers: Raspberry Pi
observer2022 about 9 hours ago

The article recounts the author's reintroduction to computers through the Raspberry Pi, a small, affordable, and versatile single-board computer. It highlights the author's enthusiasm for exploring the capabilities of the Raspberry Pi and its potential for various applications.

airoboticist.blog
4 1
Show HN: FFmpeg Engineering Handbook
endcycles 1 day ago

The article provides a comprehensive guide to the engineering practices and tools used in the development of FFmpeg, a popular multimedia framework. It covers topics such as build systems, testing, and continuous integration, offering insights into the project's technical processes.

github.com
14 0
antiochIst 7 days ago

Show HN: Real-time system that tracks how news spreads across 200k websites

I built a system that monitors ~200,000 news RSS feeds in near real-time and clusters related articles to show how stories spread across the web.

It uses Snowflake’s Arctic model for embeddings and HNSW for fast similarity search. Each “story cluster” shows who published first, how fast it propagated, and how the narrative evolved as more outlets picked it up.
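
The clustering idea can be sketched as a greedy single pass over articles in publication order: join the first cluster whose seed embedding is similar enough, otherwise start a new cluster (the production system uses Arctic embeddings with an HNSW index instead of this linear scan; the threshold is illustrative):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster(articles, threshold=0.85):
    # Greedy clustering over articles sorted by publish time, so each
    # cluster naturally records who published the story first.
    clusters = []
    for art in sorted(articles, key=lambda a: a["published"]):
        for c in clusters:
            if cosine(art["emb"], c["seed"]) >= threshold:
                c["members"].append(art["url"])
                break
        else:
            clusters.append({"seed": art["emb"],
                             "first": art["url"],   # earliest publisher
                             "members": [art["url"]]})
    return clusters
```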

Would love feedback on the architecture, scaling approach, and any ways to make the clusters more accurate or useful.

Live demo: https://yandori.io/news-flow/

yandori.io
251 70
Show HN: Nano PDF – A CLI Tool to Edit PDFs with Gemini's Nano Banana
GavCo 3 days ago

The new Gemini 3 Pro Image model (aka Nano Banana) is incredible at generating slides, so I thought it would be fun to build a CLI tool that lets you edit PDF presentations using plain English. The tool converts the page you want to edit into an image, sends it to the model API together with your prompt to generate an edited image, then converts the updated image back and stitches it into the original document.

Examples:

- `nano-pdf edit deck.pdf 5 "Update the revenue chart to show Q3 at $2.5M"`

- `nano-pdf add deck.pdf 15 "Create an executive summary slide with 5 bullet points"`

Features:

- Edit multiple pages in parallel

- Add entirely new slides that match your deck's style

- Google Search enabled by default so the model can look up current data

- Preserves text layer for copy/paste and search

It can work with any kind of PDF but I expect it would be most useful for a quick edit to a deck or something similar.

GitHub: https://github.com/gavrielc/Nano-PDF

github.com
171 39
Show HN: Fixing Google Nano Banana Pixel Art with Rust
HugoDz 6 days ago

The article describes the Spritefusion Pixel Snapper, a tool that allows users to create and export pixel art from a variety of image sources. The tool offers a range of features, including image cropping, pixel scaling, and color palette customization, aimed at simplifying the pixel art creation process.

github.com
187 34
ben8888 about 14 hours ago

Show HN: Explicode – Write Markdown in code comments

Explicode is a VS Code extension that lets you write Markdown directly inside your code comments. It provides a live preview of both code and documentation, making it easy to create clean, structured docs—almost like Overleaf, but for your code editor. Great for open source projects and academia.

Key features:

- Write Markdown in multiline comments

- Live preview renders Markdown and code side by side

- Supports most popular programming languages

- Export documentation to Markdown or HTML

- Docs live in the code and update automatically with Git
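
The core trick, pulling Markdown bodies out of block comments, can be sketched with a regex over C-style `/** ... */` comments (a toy illustration of the idea, not the extension's implementation):

```python
import re

BLOCK_COMMENT = re.compile(r"/\*\*(.*?)\*/", re.DOTALL)

def extract_markdown(source: str) -> list[str]:
    # Find each /** ... */ block comment and strip the leading " * "
    # gutter from every line, leaving plain Markdown.
    docs = []
    for body in BLOCK_COMMENT.findall(source):
        lines = [re.sub(r"^\s*\*? ?", "", ln) for ln in body.strip().splitlines()]
        docs.append("\n".join(lines))
    return docs
```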

Demo GIF: https://raw.githubusercontent.com/benatfroemming/explicode-e...

Download on the VS Code Marketplace: https://marketplace.visualstudio.com/items?itemName=Explicod...

I’m looking for feedback on usability, bugs, and new feature ideas. If you’re a developer and want to contribute to making Explicode even better, please reach out!

2 0
flx1012 about 15 hours ago

Show HN: Watsn.ai – Scarily accurate lie detector

No signup required—just upload or record a video to verify its truthfulness. You can test it on anyone: internet clips, your significant other, or even yourself.

I know there are tons of scammy 'lie detector' apps out there, but I built this using SOTA multimodal models in hopes of creating a genuine breakthrough in the space. It analyzes micro-expressions, voice patterns, and context. In my own testing (over 50 trials), it reached about 85% accuracy, which honestly felt a bit scary.

It’s also fun to test on famous YouTube clips (like Obama talking about UFOs). I’d love to hear what you think and will be improving Watsn.ai every day based on your feedback!

watsn.ai
3 2
mikeayles 7 days ago

Show HN: KiDoom – Running DOOM on PCB Traces

I got DOOM running in KiCad by rendering it with PCB traces and footprints instead of pixels.

Walls are rendered as PCB_TRACK traces, and entities (enemies, items, player) are actual component footprints - SOT-23 for small items, SOIC-8 for decorations, QFP-64 for enemies and the player.

How I did it:

Started by patching DOOM's source code to extract vector data directly from the engine. Instead of trying to render 64,000 pixels (which would be impossibly slow), I grab the geometry DOOM already calculates internally - the drawsegs[] array for walls and vissprites[] for entities.

Added a field to the vissprite_t structure to capture entity types (MT_SHOTGUY, MT_PLAYER, etc.) during R_ProjectSprite(). This lets me map 150+ entity types to appropriate footprint categories.

The DOOM engine sends this vector data over a Unix socket to a Python plugin running in KiCad. The plugin pre-allocates pools of traces and footprints at startup, then just updates their positions each frame instead of creating/destroying objects. Calls pcbnew.Refresh() to update the display.
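
The pre-allocated pool pattern described here is the classic way to avoid per-frame allocation churn. A stripped-down sketch (plain dicts standing in for pcbnew PCB_TRACK objects; not the plugin's actual code):

```python
class TracePool:
    # Allocate a fixed pool of drawing objects once, then reuse them
    # every frame by moving them instead of creating/destroying objects.
    def __init__(self, size):
        self.tracks = [{"start": (0, 0), "end": (0, 0), "visible": False}
                       for _ in range(size)]

    def draw_frame(self, segments):
        # Assign each wall segment to a pooled track...
        for track, seg in zip(self.tracks, segments):
            track["start"], track["end"] = seg
            track["visible"] = True
        # ...and hide any leftover tracks instead of deleting them.
        for track in self.tracks[len(segments):]:
            track["visible"] = False
```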

Runs at 10-25 FPS depending on hardware. The bottleneck is KiCad's refresh, not DOOM or the data transfer.

Also renders to an SDL window (for actual gameplay) and a Python wireframe window (for debugging), so you get three views running simultaneously.

Follow-up: ScopeDoom

After getting the wireframe renderer working, I wanted to push it somewhere more physical. Oscilloscopes in X-Y mode are vector displays - feed X coordinates to one channel, Y to the other. I didn't have a function generator, so I used my MacBook's headphone jack instead.

The sound card is just a dual-channel DAC at 44.1kHz. Wired 3.5mm jack → 1kΩ resistors → scope CH1 (X) and CH2 (Y). Reused the same vector extraction from KiDoom, but the Python script converts coordinates to ±1V range and streams them as audio samples.

Each wall becomes a wireframe box, the scope traces along each line. With ~7,000 points per frame at 44.1kHz, refresh rate is about 6 Hz - slow enough to be a slideshow, but level geometry is clearly recognizable. A 96kHz audio interface or analog scope would improve it significantly (digital scopes do sample-and-hold instead of continuous beam tracing).
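
Converting wireframe points into the stereo stream is just a scale-and-clip per axis: the left channel drives the scope's X input, the right drives Y. A minimal sketch of that mapping (names and scale are illustrative):

```python
def points_to_stereo(points, scale):
    # Map wireframe (x, y) points into the [-1, 1] range a sound-card
    # DAC expects, interleaved as left (CH1: X) / right (CH2: Y).
    # `scale` is half the width of the coordinate space being rendered.
    samples = []
    for x, y in points:
        samples.append(max(-1.0, min(1.0, x / scale)))  # left  -> scope X
        samples.append(max(-1.0, min(1.0, y / scale)))  # right -> scope Y
    return samples
```

At 44,100 samples/s, streaming ~7,000 points per frame gives the roughly 6 Hz refresh the post mentions.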

Links:

KiDoom GitHub: https://github.com/MichaelAyles/KiDoom, writeup: https://www.mikeayles.com/#kidoom

ScopeDoom GitHub: https://github.com/MichaelAyles/ScopeDoom, writeup: https://www.mikeayles.com/#scopedoom

mikeayles.com
361 49
martincsweiss about 17 hours ago

Show HN: NeurIPS 2025 Poster Navigator

I woke up Sunday morning ready to schedule my week at NeurIPS. To my immediate horror, the NeurIPS.cc poster sessions have 1k+ posters in a stupid little dropdown. So I built a little app to help navigate them by research area/keywords/etc. Built it in a few hours with codex, gemini-cli, and Claude code. Same stack that produced 50% of the papers at NeurIPS ;)

Free to use, no signup.

neurips2025.tiptreesystems.com
3 0
Show HN: Glasses to detect smart-glasses that have cameras
nullpxl 5 days ago

Hi! Recently, smart-glasses with cameras like the Meta Ray-Bans seem to be getting more popular, as does some people's desire to remove or cover up the recording indicator LED. I wanted to see if there's a way to detect when people are recording with these types of glasses, so a little while ago I started working on this project. I've hit a bit of a wall though, so I'm very much open to ideas!

I've written a bunch more on the link (+photos are there), but essentially this uses 2 fingerprinting approaches:

- retro-reflectivity of the camera sensor, by looking at IR reflections (mixed results here)

- wireless traffic (primarily BLE, also looking into Bluetooth Classic and Wi-Fi)

For the latter, I'm currently just using an ESP32, and I can consistently detect when the Meta Raybans are 1) pairing, 2) first powered on, 3) (less consistently) when they're taken out of the charging case. When they do detect something, it plays a little jingle next to your ear.
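
The BLE side of this kind of fingerprinting amounts to walking an advertisement's AD structures (`[length][type][payload]...`) and checking the Manufacturer Specific Data (type 0xFF) for a known company ID. A sketch of the parsing, where 0x1234 in the test is a placeholder, not Meta's actual assigned ID:

```python
def manufacturer_ids(adv: bytes) -> set[int]:
    # Walk the advertisement's AD structures and collect the 16-bit
    # company identifiers from Manufacturer Specific Data (type 0xFF).
    # Company IDs are transmitted little-endian on air.
    ids = set()
    i = 0
    while i < len(adv):
        length = adv[i]
        if length == 0:
            break
        ad_type = adv[i + 1]
        payload = adv[i + 2:i + 1 + length]
        if ad_type == 0xFF and len(payload) >= 2:
            ids.add(int.from_bytes(payload[:2], "little"))
        i += 1 + length
    return ids
```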

Ideally I want to be able to detect them when they're in use, and not just at boot. I've come across the nRF52840, which seems like it can follow directed BLE traffic beyond the initial broadcast, but from my understanding it would still need to catch the first CONNECT_REQ event regardless. On the bluetooth classic side of things, all the hardware looks really expensive! Any ideas are appreciated. Thanks!

github.com
504 193
hilti 1 day ago

Show HN: Furnace – the ultimate chiptune music tracker

It's that time of year to discover cool projects that bring back memories of the good old days.

I am still learning ImGui, and this is a masterpiece in my opinion.

https://github.com/tildearrow/furnace

15 0
BigBigMiao about 17 hours ago

Show HN: Net RazorConsole – Build Interactive TUI with Razor and Spectre.Console

Finally, after landing component preview support and moving the codebase under the RazorConsole org, we think it’s the right time to introduce RazorConsole to Hacker News.

# RazorConsole

RazorConsole is a library for building interactive terminal applications using Razor components, rendered through Spectre.Console. If you’ve used React Ink, the idea will feel familiar: a declarative component model that stays cleanly separated from your application logic. If you like how Blazor/Razor expresses UI but want to target the terminal, RazorConsole might be a good fit.

# Highlights

- Author terminal UI using familiar Razor/Component syntax

- Render Razor components directly into Spectre.Console renderables

- Keep your UI declarative and composable, similar to Blazor and React Ink

# Links

- GitHub: https://github.com/RazorConsole/RazorConsole

- Website: https://razorconsole.github.io/RazorConsole

A special shout-out to Nick Chapsas, who created an excellent introduction video: https://www.youtube.com/watch?v=1C1gTRm7BB4. His coverage brought a huge boost during RazorConsole’s cold-start phase, and we sincerely appreciate it. If you want a quick, clear overview of what the project does, his video is the perfect starting point.

# What’s next

- More interaction: mouse and scroll-wheel events

- More layouts & styling: additional layout primitives (e.g., flex-like patterns), potential CSS-style syntax

- More components: a component registry experience similar to shadcn

razorconsole.github.io
3 0
Show HN: Identify test coverage gaps in your Go projects
alien_ 2 days ago

github.com
8 1
keooodev about 21 hours ago

Show HN: My pushback against ANPR carparks in the UK

In my area there are 8 ANPR car parks within a 10-minute radius that are free to park in, but you need to remember to enter your registration plate; if not, you are hit with a £70+ fine. This is easy to forget for older people, people who are with their kids, etc. So I have made an app that sends a push notification after you enter one. It's free; I will add pro features in the future to keep it alive and cover server costs. I'm in the process of populating the car parks, but users can still add their own in the local area if they want.

getstung.io
2 0
Show HN: An LLM-Powered Tool to Catch PCB Schematic Mistakes
wafflesfreak 4 days ago

Netlist.io is an LLM-powered tool for reviewing PCB schematics, aimed at catching design mistakes before a board goes to fabrication.

netlist.io
54 29
Show HN: Cm-colors – I got tired of manually fixing WCAG contrast, so I made this
lalithaar about 22 hours ago

I usually look up palettes, and the UI comes out nice except that some color pairs don't pass WCAG color contrast and just aren't readable. I ended up writing a tiny library that automatically nudges your text color just enough to pass AA or AAA while keeping it visually similar to the original, along with a color contrast linter so we can use it in CI. Thought you guys might find it useful too; it's FOSS :>
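
The WCAG side of this is well defined: relative luminance from linearized sRGB, then a contrast ratio of (L_lighter + 0.05) / (L_darker + 0.05), with 4.5:1 as the AA target for normal text. Here is a crude sketch of the "nudge until it passes" idea; cm-colors' actual algorithm stays much closer to the original color than this simple darkening loop does:

```python
def relative_luminance(rgb):
    # sRGB relative luminance per WCAG 2.x.
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg, bg):
    # WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05).
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def nudge_to_aa(fg, bg, target=4.5):
    # Darken the text color step by step until the AA ratio is met.
    # Crude on purpose: it only works for light backgrounds and drifts
    # further from the original color than a real solver would.
    while contrast(fg, bg) < target and any(fg):
        fg = tuple(max(0, c - 5) for c in fg)
    return fg
```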

github.com
5 0