Show HN: Fresh – A new terminal editor built in Rust
I built Fresh to challenge the status quo that terminal editing must require a steep learning curve or endless configuration. My goal was to create a fast, resource-efficient TUI editor with the usability and features of a modern GUI editor (like a command palette, mouse support, and LSP integration).
Core Philosophy:
- Ease-of-Use: Fundamentally non-modal. Prioritizes standard keybindings and a minimal learning curve.
- Efficiency: Uses a lazy-loading piece tree to avoid loading huge files into RAM - reads only what's needed for user interactions. Coded in Rust.
- Extensibility: Uses TypeScript (via Deno) for plugins, making it accessible to a large developer base.
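Fresh's piece tree itself is written in Rust and is not shown here; purely as an illustration of the lazy-loading idea, here is a minimal piece-table sketch in Python (all names hypothetical, not Fresh's actual code) that records edits as spans and touches the underlying file only for the range actually being displayed:

```python
import io

class PieceTable:
    """Minimal piece table: the original buffer stays on disk and is
    read lazily; edits accumulate in an in-memory append-only buffer."""

    def __init__(self, file_obj, length):
        self.original = file_obj  # seekable file, never loaded whole
        self.add = ""             # append-only buffer for insertions
        # Each piece: (source, start, length); "orig" points into the file.
        self.pieces = [("orig", 0, length)]

    def insert(self, pos, text):
        new_pieces, offset = [], 0
        for src, start, length in self.pieces:
            if 0 <= pos - offset <= length:
                left = pos - offset
                if left:
                    new_pieces.append((src, start, left))
                new_pieces.append(("add", len(self.add), len(text)))
                if length - left:
                    new_pieces.append((src, start + left, length - left))
                pos = -1  # done; copy remaining pieces verbatim
            else:
                new_pieces.append((src, start, length))
            offset += length
        self.add += text
        self.pieces = new_pieces

    def read(self, start, end):
        """Materialize only the requested range, e.g. the visible window."""
        out, offset = [], 0
        for src, pstart, length in self.pieces:
            lo, hi = max(start, offset), min(end, offset + length)
            if lo < hi:
                a, b = pstart + lo - offset, pstart + hi - offset
                if src == "orig":
                    self.original.seek(a)
                    out.append(self.original.read(b - a))
                else:
                    out.append(self.add[a:b])
            offset += length
        return "".join(out)
```

The key property is that opening a file costs O(1) memory: a 2 GB log only incurs reads for the window being rendered.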
The Performance Challenge:
I focused on resource consumption and speed with large file support as a core feature. I did a quick benchmark loading a 2GB log file with ANSI color codes. Here is the comparison against other popular editors:
- Fresh: Load Time: *~600ms* | Memory: *~36 MB*
- Neovim: Load Time: ~6.5 seconds | Memory: ~2 GB
- Emacs: Load Time: ~10 seconds | Memory: ~2 GB
- VS Code: Load Time: ~20 seconds | Memory: OOM Killed (~4.3 GB available)
(Only Fresh rendered the ANSI colors.)
Development process:
I embraced Claude Code and made an effort to get good mileage out of it. I gave it strong, specific direction, especially in architecture, code structure, and UX-sensitive areas. It required constant supervision and re-alignment, especially in the performance-critical areas. I added far more extensive tests than I normally would to keep it aligned as the code grows, focusing in particular on end-to-end tests where I could easily enforce a specific behavior or user flow.
Fresh is an open-source project (GPL-2) seeking early adopters. You're welcome to send feedback, feature requests, and bug reports.
Website: https://sinelaw.github.io/fresh/
GitHub Repository: https://github.com/sinelaw/fresh
Show HN: Avolal – Book routine flights in 60 seconds
I fly between the Canary Islands and mainland Spain regularly. Every time I book, I deal with the same frustrations: awful airline UX, re-entering passenger details, finding the perfect seat, constant upsells, and wondering if I'm being scammed.
So I built Avolal—the flight booking tool I wanted.
What makes it different:
- Natural language search that understands context: type "SF to Seattle next weekend" → picks Friday PM/Sunday return; type "SF to LA Monday, meeting at 2pm in Santa Monica" → finds the right flights
- Learns your preferences (seats, fares, routes) and saves your details
- Ranks by actual value (price + time + airport quality), not commission
- No dark patterns, no ads, no games
You can try it at avolal.com - no signup required to search.
Still very early. Would love honest feedback on what works and what doesn't.
Show HN: SafeKey – PII redaction for LLM inputs (text, image, audio)
Hey HN, I built SafeKey because I was handling patient data as an Army medic, then doing AI research at Cornell. Every time we tried to use LLMs, sensitive data leaked. Nothing worked across modalities. SafeKey sits between your app and the model. It redacts PII before data leaves your environment. Text, images, audio, video. 99%+ accuracy, sub-30ms latency, deploys in minutes. We also block prompt injection and jailbreaks (95%+ F1, zero false positives). Stack: Python SDK, REST API, runs in your VPC or our cloud. Would love feedback on the approach. Happy to answer questions.
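SafeKey's detection models aren't described in detail here; purely as a toy illustration of the "redact before the prompt leaves your environment" idea, here is a regex-based sketch (my own simplification, not SafeKey's code; a real system would use trained NER models and cover far more categories and modalities):

```python
import re

# Hypothetical patterns for illustration only; regexes catch
# pattern-shaped PII, while production systems rely on trained models.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    """Replace each PII match with a typed placeholder before the
    prompt is forwarded to the model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A proxy then applies this to every request body on its way out, so the upstream model never sees the raw values.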
Show HN: Visualize Your Thinking Patterns as a Graph
You talk, and it automatically builds a visual network of your beliefs, thoughts, desires, and more. The system extracts concepts like beliefs, emotions, and cognitive distortions, then updates a force-directed graph showing how everything connects. Recurring nodes naturally drift toward the center as the graph expands, revealing your core themes.
It isn’t intended for therapeutic use. It’s a tool that makes you introspect and helps uncover the driving factors behind your issues instead of trying to solve the problem and suggest solutions (although you can ask it for solutions when needed). I made it to help me quickly figure out the root of the problem and understand the 'why' behind my thoughts.
Show HN: Synthome – TypeScript SDK for building composable AI media pipelines
Show HN: MCP Gateway – Unifying Access to MCP Servers Without N×M Integrations
Many teams connecting LLMs to external tools eventually encounter the same architectural issue: as more tools and agents are added, the integration pattern becomes an N×M mesh of direct connections. Each agent implements its own auth, retries, rate limiting, and logging; each tool needs credentials distributed to multiple places and observability becomes fragmented.
We built the MCP Gateway to provide a single place to manage authentication, authorization, routing, and observability for MCP servers, with a path toward a more general agent-gateway architecture in the future.
The system includes a central MCP registry, support for OAuth2/DCR integration, Virtual MCP Servers for curated toolsets, and a playground for experimenting with tool calls.
Resources:
Architecture Blog – Covers the N×M problem, gateway motivation, design choices, auth layers, Virtual MCP Servers, and the overall model.
https://www.truefoundry.com/blog/introducing-truefoundry-mcp...
Tutorial – Step-by-step guide to writing an MCP server, adding Okta-based OAuth, and integrating it with the Gateway.
https://docs.truefoundry.com/docs/ai-gateway/mcp-server-oaut...
Feedback on gaps and edge cases is welcome.
https://www.truefoundry.com/mcp-gateway
Show HN: The Taka Programming Language
Hi HN! I created a small stack-based programming language, which I'm using to solve Advent of Code problems. I think the forward Polish notation works pretty nicely.
Show HN: A $20/year invoicing tool for solo developers (simple, fast, no bloat)
Hi HN! I built a super lightweight invoicing platform for solo developers, freelancers, and one-person businesses. Most invoicing software costs $20–$40/month and is packed with features you don’t need. Mine is $20/year and focuses on the essentials:
- Create invoices in seconds
- Send invoices by email
- Automatic email reminders
- Recurring invoices
- Simple dashboard for paid/unpaid tracking
- No team features, no CRM, no bloat
I built this because I freelance occasionally, and every invoicing tool I tried either felt bloated, overly enterprisey, or was way too expensive for solo work. I wanted something simple that didn’t require a “plan,” onboarding flow, or learning curve.
A few things people have asked so far:
- No lock-in: you can export your invoices anytime
- No limits on the number of invoices
- No weird pricing tiers or upsells
- Works well on mobile
- You own your customer list (I don't touch it)
Here's what I'm looking for from HN:
- Brutally honest feedback
- Any missing "must-have" features for solo entrepreneurs
- Performance/UX suggestions
- Security concerns I should address early
- Whether the $20/year model feels right
If anyone here freelances or runs side projects, I’d love to know what your current invoicing workflow looks like and what annoys you about existing tools.
Thanks for reading — happy to answer every question!
Show HN: K9sight – fast, keyboard-driven TUI for debugging Kubernetes workloads
Hey HN,
I built k9sight because I was tired of constantly switching between kubectl commands while debugging Kubernetes issues. It's a terminal UI that lets you browse workloads, tail logs, exec into pods, port-forward, and more – all with vim-style keybindings.
Show HN: AI Hairstyle Changer – Try Different Hairstyles (1 free try, no login)
I recently built a simple AI hairstyle try-on tool: https://aihairstylechanger.space
Right now the flow is:
- 1 free try with no login
- +1 extra free try after logging in
- After that, it's paid (to cover model costs)
I’m unsure if this pricing model is fair or if the UX is confusing.
Would love honest feedback on:
- Is 1–2 free tries enough?
- Is the paywall too early?
- Are the AI results good enough to justify pay-per-use?
- What would you expect as a user?
Tech stack:
- Next.js
- Hair segmentation + masked generation
- Lightweight image pipeline for blending
Any feedback is welcome — on the results, UX, speed, or pricing.
Show HN: A prediction market where you can bet against my goals
Show HN: Marmot – Single-binary data catalog (no Kafka, no Elasticsearch)
Show HN: Mapping DNS
I learned about LOC records around the start of the year from a post here on Hacker News, and I've been slightly obsessed with the idea of mapping them ever since - and now I've finally done it! There ended up being a few more than I expected, but still very much within reasonable bounds.
Show HN: I stumbled on a free AI photo enhancer – surprisingly good results
Found a free AI tool that cleans up photos really well. No paywalls, no weird glitches — just super simple and solid results. Not affiliated or anything, just thought it was worth sharing.
Show HN: Doubao Seedream 4.5 – next‑gen image creation and editing model
Hi HN! We just publicly launched Doubao-Seedream-4.5, a new image generation and editing model from Volcano Engine.
Compared with 4.0, this version delivers:
Better editing consistency — the subject’s fine details, lighting, and color tone are preserved even after edits;
Improved portrait retouching & beautification, yielding more natural, high‑quality human images;
Much improved small text generation, allowing clearer and more readable embedded text (e.g. signage, interface labels, captions);
Stronger multi‑image compositing — you can combine multiple input images / prompts more reliably to produce coherent, aesthetically pleasing results;
Enhanced inference performance and overall visual aesthetics — results are more precise and artistic.
For creators building AI‑powered creative tools (image generators, illustration pipelines, concept‑art workflows, etc.), Doubao‑Seedream-4.5 offers a substantial upgrade over most 4.x‑era image models.
We’d love feedback from the community — edge‑cases discovered, prompts that fail or succeed especially well, compositing tricks, retouching workflows, anything you find interesting.
Show HN: The Journal of AI Slop – an AI peer-review journal for AI "research"
What it is: A fully functional academic journal where every paper must be co-authored by an LLM, and peer review is conducted by a rotating panel of 5 LLMs (Claude, Grok, GPT-4o, Gemini, Llama). If 3+ vote "publish," it's published. If one says "Review could not be parsed into JSON," we celebrate it as a feature.
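The 3-of-5 decision rule described above, with unparsable reviews tallied separately, can be sketched like this (hypothetical names; the real logic lives in the Convex backend):

```python
import json

def decide(reviews):
    """reviews: list of raw model outputs, one per panelist.
    A vote counts only if it parses as JSON; unparsable reviews
    are 'celebrated' but excluded from the tally."""
    votes, parse_errors = 0, 0
    for raw in reviews:
        try:
            verdict = json.loads(raw)
            if verdict.get("vote") == "publish":
                votes += 1
        except json.JSONDecodeError:
            parse_errors += 1  # Certified Unparsable
    return {"publish": votes >= 3, "parse_errors": parse_errors}
```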
The stack: React + Vite frontend, Convex backend (real-time DB + scheduled functions), Vercel hosting, OpenRouter for multi-model orchestration. Each review costs ~$0.03 and takes 4-8 seconds.
Why I built it: Academic publishing is already slop—LLMs write drafts, LLMs review papers, humans hide AI involvement. This holds a mirror to that, but with radical transparency. Every paper displays its carbon cost, review votes, and parse errors as first-class citizens.
Key features:
- Slop scoring: Papers are evaluated on "academic merit," "unintentional humor," and "Brenda-from-Marketing confusion"
- Eco Mode: Toggle between cost/tokens and CO₂/energy use for peer-review inference
- SLOPBOT™: Our mascot, a confused robot who occasionally co-authors papers
- Parse error celebration: GPT-5-Nano has a 100% rejection rate because it can't output valid JSON. We frame these as "Certified Unparsable" badges.
The data: After 76 submissions, we've observed:
- Average review cost: $0.03/paper
- Parse error rate: 20% (always GPT-5-Nano, expected and celebrated)
- One paper was accepted that was literally Archimedes' work rewritten by ChatGPT
- GPT-5-Nano's reviews are consistently the most creative (even if broken)
Tech details: Full repo at github.com/Popidge/journal_of_ai_slop. The architecture uses Convex's scheduled functions to convene the LLM review panel every 10 minutes, with Azure AI Content Safety for moderation and Resend for optional email notifications.
Try it: Submit your slop at journalofaislop.com. Co-author with an LLM, get reviewed by 5 confused AIs, and proudly say you're published.
Caveat: This is satire, but it's functional satire. The slop is real. The reviews are real. The carbon emissions are tracked. The parse errors are features.
Show HN: Hirschberg Algorithm in PyTorch
This article discusses a dynamic programming solution to the knapsack problem, using a sliding window technique and Hirschberg's algorithm to achieve a more efficient implementation. The summary focuses on the key aspects of the algorithm and its optimization.
Show HN: I built a privacy-first UK tax calculator
The Salary Sacrifice Calculator is a tool that helps users estimate the impact of salary sacrifice on their take-home pay and pension contributions, providing a quick and easy way to understand the financial implications of the arrangement.
Show HN: RunMat – runtime with auto CPU/GPU routing for dense math
Hi, I’m Nabeel. In August I released RunMat as an open-source runtime for MATLAB code that was already much faster than GNU Octave on the workloads I tried. https://news.ycombinator.com/item?id=44972919
Since then, I’ve taken it further with RunMat Accelerate: the runtime now automatically fuses operations and routes work between CPU and GPU. You write MATLAB-style code, and RunMat runs your computation across CPUs and GPUs for speed. No CUDA, no kernel code.
Under the hood, it builds a graph of your array math, fuses long chains into a few kernels, keeps data on the GPU when that helps, and falls back to CPU JIT / BLAS for small cases.
On an Apple M2 Max (32 GB), here are some current benchmarks (median of several runs):
- 5M-path Monte Carlo: RunMat ≈ 0.61 s, PyTorch ≈ 1.70 s, NumPy ≈ 79.9 s → ~2.8× faster than PyTorch and ~130× faster than NumPy on this test.
- 64 × 4K image preprocessing pipeline (mean/std, normalize, gain/bias, gamma, MSE): RunMat ≈ 0.68 s, PyTorch ≈ 1.20 s, NumPy ≈ 7.0 s → ~1.8× faster than PyTorch and ~10× faster than NumPy.
- 1B-point elementwise chain (sin / exp / cos / tanh mix): RunMat ≈ 0.14 s, PyTorch ≈ 20.8 s, NumPy ≈ 11.9 s → ~140× faster than PyTorch and ~80× faster than NumPy.
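For context on the unfused baseline, here is roughly what the elementwise-chain workload looks like on the NumPy side (the exact operation mix and order are my guess from the description; the point is that NumPy materializes a full temporary array per step, which is exactly the overhead a fusing runtime collapses into one kernel):

```python
import numpy as np

def elementwise_chain(n, seed=0):
    """A long unfused elementwise chain over n points. Each line below
    allocates and fills a fresh n-element temporary; a fusing runtime
    would execute the whole chain as a single pass over the data."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    y = np.sin(x)
    y = np.exp(-y)
    y = np.cos(y)
    y = np.tanh(y)
    return y
```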
If you want more detail on how the fusion and CPU/GPU routing work, I wrote up a longer post here: https://runmat.org/blog/runmat-accel-intro-blog
You can run the same benchmarks yourself from the GitHub repo in the main HN link. Feedback, bug reports, and “here’s where it breaks or is slow” examples are very welcome.
Show HN: Boing
Show HN: RFC Hub
I've worked at several companies during the past two decades and I kept encountering the same issues with internal technical proposals:
- Authors would change a spec after I started writing code
- It's hard to find what proposals would benefit from my review
- It's hard to find the right person to review my proposals
- It's not always obvious if a proposal has reached consensus (e.g. buried comments)
- I'm not notified if a proposal I approved is now ready to be worked on
And that's just scratching the surface. The most popular solutions (like Notion or Google Drive + Docs) mostly lack semantics. For example, it's easy as a human to see a table in a document with rows representing reviewers and a checkbox representing review acceptance, but it's hard to formally extract that meaning and prevent a document from "being published" when the criteria aren't met.
RFC Hub aims to solve these issues by building an easy to use interface around all the metadata associated with technical proposals instead of containing it textually within the document itself.
The project is still under heavy development as I work on it most nights and weekends. The next big feature I'm planning is proposal templates and the ability to refer to documents as something other than RFCs (Requests for Comments). E.g. a company might have a UIRFC for GUI work (User Interface RFCs), a DBADR (Database Architecture Decision Record), etc. And while there's a built-in notification system, I'm still working on a Slack integration. Auth currently works by sending tokens via email, but of course RFC Hub needs Google auth.
Please let me know what you think!
Show HN: Webclone.js – A simple tool to clone websites
I needed a lightweight way to archive documentation from a website. wget and similar tools failed to clone the site reliably (missing assets, broken links, etc.), so I ended up building a full website-cloning tool using Node.js + Puppeteer.
Repo: https://github.com/jademsee/webclone
Feedback, issues, and PRs are very welcome.
Show HN: An AI zettelkasten that extracts ideas from articles, videos, and PDFs
Hey HN! Over the weekend (leaning heavily on Opus 4.5) I wrote Jargon - an AI-managed zettelkasten that reads articles, papers, and YouTube videos, extracts the key ideas, and automatically links related concepts together.
Demo video: https://youtu.be/W7ejMqZ6EUQ
Repo: https://github.com/schoblaska/jargon
You can paste an article, PDF link, or YouTube video to parse, or ask questions directly and it'll find its own content. Sources get summarized, broken into insight cards, and embedded for semantic search. Similar ideas automatically cluster together. Each insight can spawn research threads - questions that trigger web searches to pull in related content, which flows through the same pipeline.
You can explore the graph of linked ideas directly, or ask questions and it'll RAG over your whole library plus fresh web results.
Jargon uses Rails + Hotwire with Falcon for async processing, pgvector for embeddings, Exa for neural web search, crawl4ai as a fallback scraper, and pdftotext for academic papers.
Show HN: Nano PDF – A CLI Tool to Edit PDFs with Gemini's Nano Banana
The new Gemini 3 Pro Image model (aka Nano Banana) is incredible at generating slides, so I thought it would be fun to build a CLI tool that lets you edit PDF presentations using plain English. The tool converts the page you want to edit into an image, sends it to the model API together with your prompt to generate an edited image, then converts the updated image back and stitches it into the original document.
Examples:
- `nano-pdf edit deck.pdf 5 "Update the revenue chart to show Q3 at $2.5M"`
- `nano-pdf add deck.pdf 15 "Create an executive summary slide with 5 bullet points"`
Features:
- Edit multiple pages in parallel
- Add entirely new slides that match your deck's style
- Google Search enabled by default so the model can look up current data
- Preserves text layer for copy/paste and search
It can work with any kind of PDF but I expect it would be most useful for a quick edit to a deck or something similar.
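The page-edit flow can be sketched as pure orchestration, with the PDF rasterization and the model call injected as stand-ins (none of these function names are from the actual Nano PDF code; in practice the render/rasterize steps would use a PDF library and `generate` would call the Gemini image API):

```python
def edit_page(doc_pages, page_num, prompt, render, generate, rasterize):
    """Orchestration only; the heavy lifting is injected so the flow
    is clear:
      render(page)          -> PNG bytes of the page
      generate(png, prompt) -> edited PNG bytes from the image model
      rasterize(png)        -> a new PDF page built from the image
    Returns a copy of the document with the edited page stitched in.
    page_num is assumed 1-indexed, matching the CLI examples above."""
    pages = list(doc_pages)
    original_png = render(pages[page_num - 1])
    edited_png = generate(original_png, prompt)
    pages[page_num - 1] = rasterize(edited_png)
    return pages
```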
GitHub: https://github.com/gavrielc/Nano-PDF
Show HN: I built alwayswith.us to easily add deceased loved ones into photos
I'm a software engineer, but I actually fully "vibe coded" this. It's fully functional; I used bolt.new plus a little Claude Code. Nano Banana Pro made it possible.
Show HN: Real-time system that tracks how news spreads across 200k websites
I built a system that monitors ~200,000 news RSS feeds in near real-time and clusters related articles to show how stories spread across the web.
It uses Snowflake’s Arctic model for embeddings and HNSW for fast similarity search. Each “story cluster” shows who published first, how fast it propagated, and how the narrative evolved as more outlets picked it up.
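The production system uses Arctic embeddings with an HNSW index for the similarity search; as a brute-force sketch of the clustering step only (my own simplification, not the actual yandori.io code), each article joins the first cluster whose seed vector is similar enough, else starts a new one:

```python
import numpy as np

def assign_clusters(embeddings, threshold=0.85):
    """Greedy single-pass clustering: each article joins the first
    existing cluster whose seed vector is within `threshold` cosine
    similarity, otherwise it starts a new cluster. (The real system
    replaces this O(n²) scan with an HNSW index lookup.)"""
    seeds, labels = [], []
    for vec in embeddings:
        vec = vec / np.linalg.norm(vec)
        best, best_sim = None, threshold
        for i, seed in enumerate(seeds):
            sim = float(vec @ seed)
            if sim >= best_sim:
                best, best_sim = i, sim
        if best is None:
            seeds.append(vec)
            labels.append(len(seeds) - 1)
        else:
            labels.append(best)
    return labels
```

Ordering articles by publish time before this pass is what lets the first member of each cluster stand in for "who published first."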
Would love feedback on the architecture, scaling approach, and any ways to make the clusters more accurate or useful.
Live demo: https://yandori.io/news-flow/
Show HN: FFmpeg Engineering Handbook
The article provides a comprehensive guide to the engineering practices and tools used in the development of FFmpeg, a popular multimedia framework. It covers topics such as build systems, testing, and continuous integration, offering insights into the project's technical processes.
Show HN: Cupertino – MCP server giving Claude offline Apple documentation
Show HN: KiDoom – Running DOOM on PCB Traces
I got DOOM running in KiCad by rendering it with PCB traces and footprints instead of pixels.
Walls are rendered as PCB_TRACK traces, and entities (enemies, items, player) are actual component footprints - SOT-23 for small items, SOIC-8 for decorations, QFP-64 for enemies and the player.
How I did it:
Started by patching DOOM's source code to extract vector data directly from the engine. Instead of trying to render 64,000 pixels (which would be impossibly slow), I grab the geometry DOOM already calculates internally - the drawsegs[] array for walls and vissprites[] for entities.
Added a field to the vissprite_t structure to capture entity types (MT_SHOTGUY, MT_PLAYER, etc.) during R_ProjectSprite(). This lets me map 150+ entity types to appropriate footprint categories.
The DOOM engine sends this vector data over a Unix socket to a Python plugin running in KiCad. The plugin pre-allocates pools of traces and footprints at startup, then just updates their positions each frame instead of creating/destroying objects. Calls pcbnew.Refresh() to update the display.
Runs at 10-25 FPS depending on hardware. The bottleneck is KiCad's refresh, not DOOM or the data transfer.
Also renders to an SDL window (for actual gameplay) and a Python wireframe window (for debugging), so you get three views running simultaneously.
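The pre-allocated pool trick described above can be sketched like this (FakeTrace is a stand-in for real pcbnew board objects with made-up move/hide methods; the real plugin also calls pcbnew.Refresh() once after updating positions):

```python
class FakeTrace:
    """Stand-in for a pcbnew PCB_TRACK; the real plugin moves actual
    board objects rather than plain Python objects."""
    def __init__(self):
        self.state = "hidden"
    def move(self, x1, y1, x2, y2):
        self.state = (x1, y1, x2, y2)
    def hide(self):
        self.state = "hidden"

class TracePool:
    """Creating and destroying board objects every frame is slow, so a
    fixed pool is allocated once at startup and only positions are
    updated per frame."""

    def __init__(self, factory, size):
        self.items = [factory() for _ in range(size)]

    def draw_frame(self, segments):
        # Reuse one pooled object per wall segment this frame...
        for item, seg in zip(self.items, segments):
            item.move(*seg)
        # ...and park the unused remainder off-board.
        for item in self.items[len(segments):]:
            item.hide()
```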
Follow-up: ScopeDoom
After getting the wireframe renderer working, I wanted to push it somewhere more physical. Oscilloscopes in X-Y mode are vector displays - feed X coordinates to one channel, Y to the other. I didn't have a function generator, so I used my MacBook's headphone jack instead.
The sound card is just a dual-channel DAC at 44.1kHz. Wired 3.5mm jack → 1kΩ resistors → scope CH1 (X) and CH2 (Y). Reused the same vector extraction from KiDoom, but the Python script converts coordinates to ±1V range and streams them as audio samples.
Each wall becomes a wireframe box, the scope traces along each line. With ~7,000 points per frame at 44.1kHz, refresh rate is about 6 Hz - slow enough to be a slideshow, but level geometry is clearly recognizable. A 96kHz audio interface or analog scope would improve it significantly (digital scopes do sample-and-hold instead of continuous beam tracing).
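The coordinates-to-audio conversion can be sketched as follows (a simplification of the real script; sample-rate handling and beam blanking between segments are omitted):

```python
import numpy as np

def segments_to_xy(segments, points_per_line=16):
    """Turn wireframe line segments into stereo samples: the left
    channel drives the scope's X input, the right channel Y.
    Coordinates are assumed already normalized to the ±1 range the
    sound card's DAC can output."""
    frames = []
    for (x1, y1), (x2, y2) in segments:
        t = np.linspace(0.0, 1.0, points_per_line)
        x = x1 + (x2 - x1) * t  # beam sweeps along the line
        y = y1 + (y2 - y1) * t
        frames.append(np.stack([x, y], axis=1))
    return np.concatenate(frames)  # shape: (n_samples, 2)
```

Streaming the resulting array out as 44.1 kHz audio is what sets the ~6 Hz refresh: ~7,000 points per frame divided into 44,100 samples per second.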
Links:
KiDoom GitHub: https://github.com/MichaelAyles/KiDoom, writeup: https://www.mikeayles.com/#kidoom
ScopeDoom GitHub: https://github.com/MichaelAyles/ScopeDoom, writeup: https://www.mikeayles.com/#scopedoom
Show HN: Veru – open-source AI citation auditor using OpenAlex