Show HN: Omni – Open-source workplace search and chat, built on Postgres
Hey HN!
Over the past few months, I've been working on building Omni - a workplace search and chat platform that connects to apps like Google Drive/Gmail, Slack, Confluence, etc. Essentially an open-source alternative to Glean, fully self-hosted.
I noticed that some orgs find Glean to be expensive and not very extensible. I wanted to build something that small to mid-size teams could run themselves, so I decided to build it all on Postgres (ParadeDB to be precise) and pgvector. No Elasticsearch and no dedicated vector database. I figured Postgres is more than capable of handling the level of scale required.
To bring up Omni on your own infra, all it takes is a single `docker compose up`, and some basic configuration to connect your apps and LLMs.
What it does:
- Syncs data from all connected apps and builds a BM25 index (ParadeDB) and an HNSW vector index (pgvector)
- Hybrid search combines results from both
- Chat UI where the LLM has tools to search the index (not just basic RAG)
- Traditional search UI
- Users bring their own LLM provider (OpenAI/Anthropic/Gemini)
- Connectors for Google Workspace, Slack, Confluence, Jira, HubSpot, and more
- Connector SDK to build your own custom connectors
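One common way to merge BM25 and vector rankings is Reciprocal Rank Fusion. Whether Omni uses RRF specifically is an assumption on my part, but the merge step can be sketched like this:

```python
# Sketch of hybrid-search merging via Reciprocal Rank Fusion (RRF).
# Doc ids and k=60 are illustrative; Omni's actual fusion may differ.
def rrf_merge(rankings, k=60):
    """rankings: list of ranked doc-id lists; returns one fused ranking."""
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked):
            # Each list contributes 1/(k + rank + 1) to a doc's score.
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["doc_a", "doc_b", "doc_c"]    # e.g. ParadeDB BM25 results
vector = ["doc_b", "doc_d", "doc_a"]  # e.g. pgvector HNSW results
fused = rrf_merge([bm25, vector])
print(fused)  # ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

Docs that rank well in both lists float to the top, which is why RRF is a popular default for exactly this BM25-plus-vector setup.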
Omni is in beta right now, and I'd love your feedback, especially on the following:
- Has anyone tried self-hosting workplace search and/or AI tools? What was your experience like?
- Any concerns with the Postgres-only approach at larger scales?
Happy to answer any questions!
The code: https://github.com/getomnico/omni (Apache 2.0 licensed)
Show HN: Sairo – Self-hosted S3 browser with 2.4ms search across 134K objects
I built Sairo because searching for a file in the AWS S3 console takes ~14 seconds on a large bucket, and MinIO removed their web console from the free edition last year.
Sairo is a single Docker container that indexes your bucket into SQLite FTS5 and gives you full-text search in 2.4ms (p50) across 134K objects / 38 TB. No external databases, no microservices, no message queues.
What it does:
- Instant search across all your objects (SQLite FTS5, 1,300 objects/sec indexing)
- File preview for 45+ formats: Parquet schemas, CSV tables, PDFs, images, code
- Password-protected share links with expiration
- Version management: browse, restore, purge versions and delete markers
- Storage analytics with growth trend charts
- RBAC, 2FA, OAuth, LDAP, audit logging
- CLI with 24 commands (brew install ashwathstephen/sairo/sairo)
Works with AWS S3, MinIO, Cloudflare R2, Wasabi, Backblaze B2, Ceph, and any S3-compatible endpoint.
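The core indexing trick is small enough to sketch with Python's bundled SQLite (hypothetical schema; Sairo's actual schema almost certainly has more columns):

```python
import sqlite3

# Index object keys into an FTS5 table, then full-text search them.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE objects USING fts5(key, bucket)")
rows = [
    ("reports/2024/q3-revenue.parquet", "analytics"),
    ("logs/app/2024-06-01.json", "prod"),
    ("images/logo.png", "assets"),
]
conn.executemany("INSERT INTO objects VALUES (?, ?)", rows)

# FTS5's default tokenizer splits on punctuation, so "revenue"
# matches the middle of a slash-and-hyphen-separated key.
hits = conn.execute(
    "SELECT key FROM objects WHERE objects MATCH ? ORDER BY rank",
    ("revenue",),
).fetchall()
print(hits)  # [('reports/2024/q3-revenue.parquet',)]
```

Because the index lives in one SQLite file next to the app, queries stay in-process, which is where the low-millisecond latencies come from.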
docker run -d -p 8000:8000 \
-e S3_ENDPOINT=https://your-endpoint.com \
-e S3_ACCESS_KEY=xxx -e S3_SECRET_KEY=xxx \
stephenjr002/sairo
Site: https://sairo.dev
GitHub: https://github.com/AshwathStephen/sairo
I'd love honest feedback: what's missing, and what would make you actually switch to this?
Show HN: Web Audio Studio – A Visual Debugger for Web Audio API Graphs
Hi HN,
I’ve been working on a browser-based tool for exploring and debugging Web Audio API graphs.
Web Audio Studio lets you write real Web Audio API code, run it, and see the runtime graph it produces as an interactive visual representation. Instead of mentally tracking connect() calls, you can inspect the actual structure of the graph, follow signal flow, and tweak parameters while the audio is playing.
It includes built-in visualizations for common node types — waveforms, filter responses, analyser time and frequency views, compressor transfer curves, waveshaper distortion, spatial positioning, delay timing, and more — so you can better understand what each part of the graph is doing. You can also insert an AnalyserNode between any two nodes to inspect the signal at that exact point in the chain.
There are around 20 templates (basic oscillator setups, FM/AM synthesis, convolution reverb, IIR filters, spatial audio, etc.), so you can start from working examples and modify them instead of building everything from scratch.
Everything runs fully locally in the browser — no signup, no backend.
The motivation came from working with non-trivial Web Audio graphs and finding it increasingly difficult to reason about structure and signal flow once things grow beyond simple examples. Most tutorials show small snippets, but real projects quickly become harder to inspect. I wanted something that stays close to the native Web Audio API while making the runtime graph visible and inspectable.
This is an early alpha and desktop-only for now.
I’d really appreciate feedback — especially from people who have used Web Audio API in production or built audio tools. You can leave comments here, or use the feedback button inside the app.
https://webaudio.studio
Show HN: Timber – Ollama for classical ML models, 336x faster than Python
Timber is a lightweight, high-performance logging library for Java and Kotlin that provides a simple and flexible API for logging messages. It supports multiple logging backends, including Logcat, Timber, and SLF4J, and offers features such as tree-structured logging and custom tag generation.
Show HN: IDAssist – AI augmented reverse engineering for IDA Pro
I realize there may be some AI fatigue in the HN community, but I've genuinely seen a marked productivity boost using these tools - hence the desire to share them.
With the releases of my GhidrAssist (Ghidra) and BinAssist (Binary Ninja) LLM reverse engineering plugins over the past year, a number of people have reached out to ask "where's the IDA Pro plugin?"
Well - as of today, both IDAssist and IDAssistMCP are live on Github!
As you'd expect, they all share the same look and feel, workflow and feature set, including:
- AI-Powered Function Analysis: One-click function explanation with security assessment
- Interactive Chat: Context-aware chat with macros to inject functions, variables, and disassembly as well as builtin MCP client
- Smart Suggestions: AI-generated rename proposals for functions and variables
- ReAct Agent: Autonomous multi-step investigation using MCP tools
- Semantic Knowledge Graph: Visual graph of function relationships with similarity search
- RAG Document Search: Query your own docs (.txt, .md, .pdf) alongside binary analysis
- MCP Integration: Connect external tool servers for augmented LLM capabilities
- Multi-Provider Support: Anthropic, OpenAI, Gemini, X.ai, Ollama, LMStudio, LiteLLM, etc.
These aren't just jump-on-the-AI-bandwagon contributions. The GhidrAssist and BinAssist plugins represent over a year of steady development and refinement and are battle-tested in day-to-day use. I use them daily to score significant productivity gains in my RE day job as well as on personal projects.
If that sounds interesting, give them a try.
Show HN: Augno – a Stripe-like ERP for manufacturing
I’ve spent the past 4 years building software for a knitting factory, including selecting and implementing an ERP. That experience was really painful.
Most manufacturing ERPs are hard to evaluate before signing an enterprise contract, poorly documented, and clearly not designed with serious API users in mind. Even when APIs exist, they’re often inconsistent or bolted on as an afterthought.
We built Augno to change that. The goal is to provide a Stripe-like experience for manufacturing ERP. We want it to be a usable out-of-the-box product and a well-designed, cohesive API that developers can actually build on.
We put a lot of effort into API design, documentation, and sandboxing. You can create a free account, explore the sandbox, and only move to production when you’re ready. There’s a free tier to make evaluation straightforward, without sales calls or contracts. Our focus is to let teams spend their engineering time on things that drive revenue - like custom quoting, order workflows, or integrations - instead of fighting their ERP.
We’re actively expanding the public API and rolling out additional endpoints over the next few months.
I made an account that you can check out. You can log in at https://www.augno.com/auth/login username: hackernews password: aveGLZ9Nn4MA7cg!
Docs are here: https://docs.augno.com/
I’d really appreciate feedback from anyone who’s dealt with manufacturing systems or ERPs before!
Show HN: RDAP API – Normalized JSON for Domain/IP Lookups (Whois Replacement)
I discovered RDAP while working on a hobby project where I needed to check if IP addresses were residential or not. RDAP was giving me more data than WHOIS, and it returns JSON instead of plain text. WHOIS is going away anyway. ICANN now requires RDAP for all gTLDs, and many registries are returning less data over port 43 or dropping it entirely. But RDAP is not easy to work with directly.
There is no single server. You have to check the IANA bootstrap registry to find which server handles each TLD, and some ccTLDs have working RDAP servers that are not even listed there. For .com and .net, the registry only has basic data. You need a second request to the registrar server to get contacts and abuse info. Then there is vcardArray, a deeply nested array-of-arrays format for contact data. And every server has its own rate limits.
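For context on how awkward vcardArray is: jCard encodes each contact field as a [name, params, type, value] tuple nested inside arrays. A sketch of flattening it into plain JSON (the helper name is mine, not this API's actual code):

```python
# jCard (RFC 7095) shape: ["vcard", [[name, params, type, value], ...]]
def flatten_vcard(vcard_array):
    out = {}
    for name, _params, _type, value in vcard_array[1]:
        out.setdefault(name, value)  # keep the first value per field
    return out

# Illustrative registrant entity, roughly what a registrar server returns.
vcard = ["vcard", [
    ["version", {}, "text", "4.0"],
    ["fn", {}, "text", "Google LLC"],
    ["kind", {}, "text", "org"],
]]
print(flatten_vcard(vcard))
# {'version': '4.0', 'fn': 'Google LLC', 'kind': 'org'}
```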
I built an API that does all of that and gives you clean JSON back. One endpoint, same schema for every TLD. Here is what you get for google.com with ?follow=true (follows the registrar link automatically):
{
"domain": "google.com",
"registrar": { "name": "MarkMonitor Inc.", "iana_id": "292" },
"dates": { "registered": "1997-09-15T04:00:00Z", "expires": "2028-09-14T04:00:00Z" },
"nameservers": ["ns1.google.com", "ns2.google.com", "ns3.google.com", "ns4.google.com"],
"entities": { "registrant": { "organization": "Google LLC", "country_code": "US" } }
}
You also get status codes, DNSSEC, abuse contacts, etc. There is a free lookup tool on the homepage to try it, no signup needed.
Supplemental servers. The IANA bootstrap only covers ~1,200 TLDs. I keep a list of 30 extra RDAP servers (for TLDs like .io, .de, .me, .us) that work but are not registered with IANA. Synced daily.
Registrar follow-through. For thin registries like .com, the registry only has dates and nameservers. The registrar has the rest on a different server. The API follows that link and merges both.
SDKs. Open source clients for Python, Node.js, PHP, Go and Java.
Responses are cached for 24 hours to reduce load on upstream RDAP servers.
This is my first SaaS, just launched. Would love honest feedback.
The API starts at $9/mo with a 7-day free trial.
Show HN: Atrium – An open-source, self-hosted client portal
I started a solo software engineering lab earlier this year and wanted a professional foundation for my clients from day one.
I looked at platforms like HoneyBook, but they are expensive and didn't feel built for developers. I wanted a lightweight, self-hosted, and white-label solution that handled the essentials—file sharing, project tracking, and invoicing—without the SaaS tax. So I built Atrium.
It’s a clean interface that gives clients a single place to track progress and handle billing under my own branding, rather than me stitching together fragmented tools.
Core Features:
* White-Labeling: Fully customizable branding for the client-facing UI.
* Updates & Collaboration: Status updates and file sharing for clients, plus internal notes for team-only coordination.
* Asset Management: Support for S3, MinIO, R2, or local storage.
* Invoicing: Integrated PDF generation and billing.
I’m using this to run my lab’s client operations and would love technical feedback, contributions or feature requests from the community.
GitHub: https://github.com/Vibra-Labs/Atrium
Show HN: Photon – Rust pipeline that embeds/tags/hashes images locally w SigLIP
Open-source Rust-based image processing pipeline that takes images and outputs structured JSON — 768-dim vector embeddings, semantic tags from a 68K-term vocabulary, EXIF metadata, content hashes, and thumbnails.
Everything runs locally via SigLIP + ONNX Runtime. Single binary, no Python, no Docker, no cloud dependency. Optional BYOK LLM descriptions (Ollama, Anthropic, OpenAI).
Show HN: Ragtoolina – MCP tool that adds codebase RAG to AI coding agents
Ragtoolina is an MCP server that pre-indexes your codebase and provides targeted context to AI coding agents instead of letting them scan files one by one.
I benchmarked it on Cal.com (~300K LOC, 40K GitHub stars):
- 63% token reduction across 5 diverse query types
- 43% fewer tool calls
- Cost: $3.01 vs $7.81 per 5-query session
The benchmark covers different complexity levels — from simple call-chain tracing to architectural understanding queries. RAG overhead doesn't pay off on trivial lookups (one query was actually worse with Ragtoolina), but on complex multi-file tasks the savings are substantial (up to 79% token reduction).
Quality evaluation was done via blind AI-judge scoring (the judge didn't know which answer came from which system) across correctness, completeness, specificity, and conciseness. Ragtoolina matched or exceeded baseline quality on 4 out of 5 tasks.
Works with any MCP-compatible client: Claude Code, Cursor, Windsurf, Claude Desktop.
Free tier available, would appreciate any feedback.
Show HN: Logira – eBPF runtime auditing for AI agent runs
I started using Claude Code (claude --dangerously-skip-permissions) and Codex (codex --yolo) and realized I had no reliable way to know what they actually did. The agent's own output tells you a story, but it's the agent's story.
logira records exec, file, and network events at the OS level via eBPF, scoped per run. Events are saved locally in JSONL and SQLite. It ships with default detection rules for credential access, persistence changes, suspicious exec patterns, and more. Observe-only – it never blocks.
https://github.com/melonattacker/logira
Show HN: Commitdog – Git on steroids CLI (pure Go, ~3MB binary)
This article explores the benefits of using Commit Dog, a tool that helps developers track and visualize their Git commit history. It discusses how Commit Dog can improve code management, collaboration, and developer productivity.
Show HN: OpenBerth – Deploy AI-built apps and tools to your own server
OpenBerth is an open-source web development framework that provides a set of tools and libraries for building modern web applications quickly and efficiently. It offers a modular design, advanced routing, and seamless integration with popular front-end frameworks like React, Angular, and Vue.js.
Show HN: Audio Toolkit for Agents
The article describes a SAS audio processor, a tool that allows users to process audio files and perform various operations such as normalization, equalization, and conversion between different audio formats. The processor is built using the SAS programming language and is designed to be a powerful and flexible tool for audio processing tasks.
Show HN: Vibe Code your 3D Models
Hi HN,
I’m the creator of SynapsCAD, an open-source desktop application I've been building that combines an OpenSCAD code editor, a real-time 3D viewport, and an AI assistant.
You can write OpenSCAD code, compile it directly to a 3D mesh, and use an LLM (OpenAI, Claude, Gemini, ...) to modify the code through natural language.
Demo video: https://www.youtube.com/watch?v=cN8a5UozS5Q
A bit about the architecture:
- It’s built entirely in Rust.
- The UI and 3D viewport are powered by Bevy 0.15 and egui.
- It uses a pure-Rust compilation pipeline (openscad-rs for parsing and csgrs for constructive solid geometry rendering) so there are no external tools or WASM required.
- Async AI network calls are handled by Tokio in the background to keep the Bevy render loop smooth.
Disclaimer: This is a very early prototype. The OpenSCAD parser/compiler doesn't support everything perfectly yet, so you will definitely hit some rough edges if you throw complex scripts at it.
I mostly just want to get this into the hands of people who tinker with CAD or Rust.
I'd be super happy for any feedback, architectural critiques, or bug reports—especially if you can drop specific OpenSCAD snippets that break the compiler in the GitHub issues!
GitHub (Downloads for Win/Mac/Linux): https://github.com/ierror/synaps-cad
Happy to answer any questions about the tech stack or the roadmap!
Show HN: Axiom – structural OCR for handwritten STEM notes
I built Axiom after repeatedly running into the same problem with my own handwritten STEM notes.
On paper, everything looks clean — equations aligned, steps grouped properly, tables laid out clearly. But the moment I scanned those pages and ran them through OCR (including LLM-based tools), the structure would fall apart. The characters were mostly correct, but the layout — which is what actually makes math readable — was gone.
Aligned equations would lose alignment. Multi-step derivations would collapse into a single paragraph. Numbered problems would merge together. Tables would turn into plain text. Technically it was “extracted,” but practically it was unusable without manually fixing everything in LaTeX.
That gap is what Axiom tries to solve.
Instead of focusing purely on transcription accuracy, I focused on structural preservation. The current pipeline looks roughly like this:
1. OCR from image or PDF.
2. A structural prompt tuned specifically for math alignment, derivation grouping, numbered block preservation, and table detection.
3. A post-processing layer that normalizes LaTeX/Markdown output, merges math blocks, protects numbering tokens, and stabilizes table columns.
4. Export as compile-ready LaTeX, Markdown, or searchable PDF.
The hardest part wasn’t getting the characters right. It was preventing structural drift — especially with aligned equations and multi-line derivations. I added alignment pattern detection, atomic pagination for LaTeX environments, and normalization passes to keep math blocks intact across pages.
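A toy version of that alignment-pattern detection might look like this (a hypothetical heuristic of my own, not Axiom's actual pipeline):

```python
import re

# If several consecutive lines share an "&=" alignment column, group
# them into one align* environment instead of separate display blocks,
# so a multi-step derivation survives as a single unit.
def group_aligned(lines):
    aligned = [ln for ln in lines if re.search(r"&\s*=", ln)]
    if len(aligned) >= 2:  # looks like one multi-line derivation
        body = " \\\\\n".join(lines)
        return "\\begin{align*}\n" + body + "\n\\end{align*}"
    # Otherwise emit each line as its own display equation.
    return "\n".join(f"$${ln}$$" for ln in lines)

steps = [r"f(x) &= x^2 + 2x + 1", r"&= (x + 1)^2"]
print(group_aligned(steps))
```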
The goal isn’t “AI transcription.” It’s making handwritten STEM notes survive digitization without losing their mathematical structure.
It runs entirely in the browser:
https://www.useaxiomnotes.com
Show HN: Vertex.js – A 1kloc SPA Framework
Vertex is a 1kloc SPA framework containing everything you need from React, Ractive-Load and jQuery while still being jQuery-compatible.
vertex.js is a single, self-contained file with no build step and no dependencies.
Also exhibits the curious quality of being faster than over a decade of engineering at Facebook in some cases: https://files.catbox.moe/sqei0d.png
Show HN: Xpandas – running Pandas-style computation directly in pure C++
Hi HN,
I’ve been exploring whether pandas can be used as a computation description, rather than a runtime.
The idea is to write data logic in pandas / NumPy, then freeze that logic into a static compute graph and execute it in pure C++, without embedding Python.
This is not about reimplementing pandas or speeding up Python. It’s about situations where pandas-style logic is useful, but Python itself becomes a liability (latency, embedding, deployment).
The project is still small and experimental, but it already works for a restricted subset of pandas-like operations and runs deterministically in C++.
Repo: https://github.com/CVPaul/xpandas
I’d love feedback on whether this direction makes sense, and where people think it would break down.
Show HN: OxyJen – Java framework to orchestrate LLMs in a graph-style execution
For the past few months, I've been building OxyJen, an open-source framework for building reliable, multi-step AI pipelines in Java. In most Java LLM projects, everything is still just strings: you build a prompt, make a call, and then reach for regex and hope it works on the "almost-JSON" that comes back. It's brittle, untestable, and feels like the opposite of Java's "contract-first" philosophy.
OxyJen's approach is different. It's a graph-based orchestration framework (currently sequential) where LLMs are treated as native, reliable nodes in a pipeline, not as magical helper utilities. The core idea is to bring deterministic reliability to probabilistic AI calls. Everything is a node in the graph: the LLMNode, LLMChain, and LLM APIs build a simple LLM node with retry/fallback, jitter/backoff, and timeout enforcement (currently supports OpenAI).
PromptTemplate, Variable (required/optional), and PromptRegistry build and store reusable prompts, saving you from rewriting them.
JSONSchema and SchemaGenerator build a schema from your POJOs/records, which enforces JSON structure on LLM outputs. SchemaNode<T> wraps the SchemaEnforcer and a validator to map LLM output directly to your classes; the enforcer also verifies the output is correct, retrying up to a maximum number of attempts.
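The enforce-and-retry idea is language-agnostic; here is a minimal sketch in Python for brevity (OxyJen itself is Java, and these helper names are illustrative, not OxyJen's API):

```python
import json

# Validate LLM output against a required-keys "schema", retrying on
# malformed or incomplete responses before mapping to a typed object.
def enforce(call_llm, required_keys, max_retries=3):
    for attempt in range(max_retries):
        raw = call_llm(attempt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # almost-JSON: retry instead of regex-parsing it
        if all(k in data for k in required_keys):
            return data  # contract satisfied
    raise ValueError("schema not satisfied after retries")

# Stub LLM: returns broken JSON first, then a valid response.
replies = ['{"name": "Ada"', '{"name": "Ada", "age": 36}']
print(enforce(lambda i: replies[i], {"name", "age"}))
# {'name': 'Ada', 'age': 36}
```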
Currently working on the Tool API to help users build custom tools in OxyJen. I'm a solo builder and the project is in a very early stage, so I would really appreciate any feedback and contributions (even a small improvement to the docs would be valuable).
OxyJen: https://github.com/11divyansh/OxyJen
Show HN: Workz–Git worktrees with zero-config dep sync and a built-in MCP server
I built workz to solve a daily frustration: git worktree add drops you into a directory with no .env, no node_modules, and 2GB of disk wasted per branch if you reinstall.
workz does three things:
- Auto-syncs: symlinks heavy dirs (node_modules, target, .venv) and copies .env files into every new worktree
- Fuzzy switching: skim-powered TUI to jump between worktrees, with shell cd integration like zoxide
- MCP server: workz mcp exposes 6 tools so Claude Code/Cursor can create and manage worktrees autonomously without human intervention

Written in Rust, single binary, zero config for Node/Rust/Python/Go/Java projects.
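The auto-sync idea fits in a few lines; a Python sketch for illustration (workz itself is Rust and does more bookkeeping than this):

```python
import os
import tempfile

# Symlink a heavy dependency dir from the main checkout into a new
# worktree instead of reinstalling, so both branches share one copy.
root = tempfile.mkdtemp()
main = os.path.join(root, "main")
worktree = os.path.join(root, "feature-x")
os.makedirs(os.path.join(main, "node_modules"))
os.makedirs(worktree)

target = os.path.join(main, "node_modules")
link = os.path.join(worktree, "node_modules")
os.symlink(target, link)  # zero-copy: no second 2GB install

print(os.path.islink(link), os.path.isdir(link))  # True True
```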
cargo install workz or brew install rohansx/tap/workz
https://github.com/rohansx/workz
Show HN: Steward – an ambient agent that handles low-risk work
I built Steward because most AI assistants still have to be summoned.
The idea here is different: Steward runs in the background, watches signals from tools like GitHub, email, calendar, chat, and screen context, and tries to move low-risk work forward before the user explicitly asks.
The core mechanism is a policy gate between perception and execution. Low-risk and reversible actions can be handled automatically with an audit trail. Higher-risk or irreversible actions must be escalated for explicit approval. Instead of constant notifications, the system is designed to brief the user periodically on what was done, what is pending, and what actually needs judgment.
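A minimal sketch of such a policy gate (the action names and risk sets here are invented for illustration, not Steward's actual policy):

```python
# Gate between perception and execution: low-risk, reversible actions
# run automatically with an audit entry; everything else escalates.
LOW_RISK = {"label_email", "draft_reply", "comment_on_pr"}

audit_log = []

def gate(action, execute):
    if action in LOW_RISK:
        result = execute(action)             # auto-handled
        audit_log.append((action, "auto", result))
        return result
    audit_log.append((action, "escalated", None))  # needs approval
    return "pending_approval"

print(gate("draft_reply", lambda a: "drafted"))  # runs automatically
print(gate("merge_pr", lambda a: "merged"))      # held for approval
```

A periodic brief then just reads audit_log: what ran, what's pending, what needs judgment.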
Right now it is an early local-first prototype. It runs with a simple `make start`, opens a local dashboard, and uses an OpenAI-compatible API key.
I’d love feedback on a few things:
* whether “policy-gated autonomy” is the right abstraction for this kind of agent * where the boundary should be between silent automation and interruption * how people would structure connectors and context aggregation for a system like this
Show HN: Now I Get It – Translate scientific papers into interactive webpages
Understanding scientific articles can be tough, even in your own field. Trying to comprehend articles from others? Good luck.
Enter, Now I Get It!
I made this app for curious people. Simply upload an article and after a few minutes you'll have an interactive web page showcasing the highlights. Generated pages are stored in the cloud and can be viewed from a gallery.
Now I Get It! uses the best LLMs out there, which means the app will improve as AI improves.
Free for now - it's capped at 20 articles per day so I don't burn cash.
A few things I (and maybe you will) find interesting:
* This is a pure convenience app. I could just as well use a saved prompt in Claude, but sometimes it's nice to have a niche-focused app. It's just cognitively easier, IMO.
* The app was built for myself and colleagues in various scientific fields. It can take an hour or more to read a detailed paper so this is like an on-ramp.
* The app is a place for me to experiment with using LLMs to translate scientific articles into software. The space is pregnant with possibilities.
* Everything in the app is the result of agentic engineering, e.g. plans, specs, tasks, execution loops. I swear by Beads (https://github.com/steveyegge/beads) by Yegge and also make heavy use of Beads Viewer (https://news.ycombinator.com/item?id=46314423) and Destructive Command Guard (https://news.ycombinator.com/item?id=46835674) by Jeffrey Emanuel.
* I'm an AWS fan and have been impressed by Opus' ability to write good CFN. It still needs a bunch of guidance around distributed architecture but way better than last year.
Show HN: I built an AI tool that walks you through Toyota's 5 Whys method
FiveWhys.ai is an AI-powered platform that helps businesses uncover the root causes of problems and make data-driven decisions. The platform uses natural language processing and causal inference to analyze data and guide users through a structured problem-solving process.
Show HN: Interactive 3D WebGL Globe for real-time daylight cycles
Oclock is an interactive 3D WebGL globe that visualizes real-time daylight cycles. Built with Globe.gl and pre-processed spatial data pipeline.
Processing Pipeline
The data generation is handled by timezone_data_generator.py. This script performs the following:
1. Geometry Analysis: Uses shapely to find a representative point inside each country's boundary.
2. Timezone Mapping: Uses timezonefinder to look up the specific IANA timezone string for those coordinates.
3. Data Injection: Injects the timezone and coordinates back into the GeoJSON properties for use by the frontend.
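The three steps above can be sketched stdlib-only, with shapely's representative point and timezonefinder's lookup stubbed out (the real script uses those libraries, so both stand-ins below are simplifications):

```python
# Step 1 stand-in: vertex average instead of shapely's
# representative_point(); good enough for a convex toy polygon.
def centroid(ring):
    xs = [p[0] for p in ring]
    ys = [p[1] for p in ring]
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Step 2 stand-in: TimezoneFinder().timezone_at(lng=..., lat=...)
def lookup_tz(lon, lat):
    return "Europe/Paris" if -5 < lon < 10 and 41 < lat < 52 else "Etc/UTC"

feature = {
    "type": "Feature",
    "properties": {"name": "France"},
    "geometry": {"type": "Polygon",
                 "coordinates": [[(2, 49), (3, 47), (1, 46)]]},
}

# Step 3: inject the timezone back into the GeoJSON properties.
lon, lat = centroid(feature["geometry"]["coordinates"][0])
feature["properties"]["timezone"] = lookup_tz(lon, lat)
print(feature["properties"]["timezone"])  # Europe/Paris
```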
Live Demo: https://azialle.github.io/Oclock/
Show HN: Visualize Git commit histories as animated force-directed graphs
Visualize and analyze complete Git commit histories as animated force-directed graphs. See how commit density, branch activity, and contributor participation evolve over time.
Live site: https://nshcr.github.io/git-commits-threadline/
This project helps you quickly inspect:
- repository growth over long time ranges
- branch structure and active thread distribution
- contribution patterns across maintainers and collaborators
Show HN: Unfucked - version all changes (by any tool) - local-first/source avail
I built unf after I pasted a prompt into the wrong agent terminal and it overwrote hours of hand-edits across a handful of files. Git couldn't help because I hadn't finished or committed my in-progress work. I wanted something that recorded every save automatically so I could rewind to any point in time, and that made it difficult for an agent to permanently screw anything up, even with an errant rm -rf.
unf is a background daemon that watches directories you choose (via CLI) and snapshots every text file on save. It stores file contents in an object store, tracks metadata in SQLite, and gives you a CLI to query and restore any version. The install includes a UI, as well to explore the history through time.
The tool skips binaries and respects `.gitignore` if one exists. The interface borrows from git so it should feel familiar: unf log, unf diff, unf restore.
I say "UN-EF" vs U.N.F, but that's for y'all to decide: I started by calling the project Unfucked and got unfucked.ai, which if you know me and the messes I get myself into, is a fitting purchase.
The CLI command is `unf` and the Tauri desktop app is titled "Unfudged" (kids safe name).
How it works: https://unfucked.ai/tech (summary below)
The daemon uses FSEvents on macOS and inotify on Linux. When a file changes, `unf` hashes the content with BLAKE3 and checks whether that hash already exists in the object store — if it does, it just records a new metadata entry pointing to the existing blob. If not, it writes the blob and records the entry. Each snapshot is a row in SQLite. Restores read the blob back from the object store and overwrite the file, after taking a safety snapshot of the current state first (so restoring is itself reversible).
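The dedup scheme above can be sketched in a few lines (SHA-256 standing in for BLAKE3, a dict standing in for the on-disk object store):

```python
import hashlib
import sqlite3
import time

# Content-addressed snapshots: identical content is stored once;
# every save still gets its own metadata row.
store = {}  # hash -> bytes (a directory of blobs in the real tool)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE snapshots (path TEXT, hash TEXT, ts REAL)")

def snapshot(path, content: bytes):
    h = hashlib.sha256(content).hexdigest()
    if h not in store:          # new content: write the blob once
        store[h] = content
    db.execute("INSERT INTO snapshots VALUES (?, ?, ?)",
               (path, h, time.time()))
    return h

snapshot("app.py", b"print('v1')")
snapshot("app.py", b"print('v2')")
snapshot("copy.py", b"print('v1')")  # dedup: reuses the first blob
print(len(store))  # 2 blobs for 3 snapshots
```

A restore is then just: look up the row nearest the requested time, read the blob for its hash, and write it back.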
There are two processes. The core daemon does the real work of managing FSEvents/inotify subscriptions across multiple watched directories and writing snapshots. A sentinel watchdog supervises it, kept alive and aligned by launchd on macOS and systemd on Linux. If the daemon crashes, the sentinel respawns it and reconciles any drift between what you asked to watch and what's actually being watched. It was hard to build the second daemon because it felt like conceding that the core wasn't solid enough, but I didn't want to ship a tool that demanded perfection to deliver on the product promise, so the sentinel is the safety net.
Fingers crossed, I haven’t seen it crash in over a week of personal usage on my Mac. But, I don't want to trigger "works for me" trauma.
The part I like most: On the UI, I enjoy viewing files through time. You can select a time section and filter your projects on a histogram of activity. That has been invaluable in seeing what the agent was doing.
On the CLI, the commands are composable. Everything outputs to stdout so you can pipe it into whatever you want. I use these regularly and AI agents are better with the tool than I am:
# What did my config look like before we broke it?
unf cat nginx.conf --at 1h | nginx -t -c /dev/stdin
# Grep through a deleted file
unf cat old-routes.rs --at 2d | grep "pub fn"
# Count how many lines changed in the last 10 minutes
unf diff --at 10m | grep '^[+-]' | wc -l
# Feed the last hour of changes to an AI for review
unf diff --at 1h | pbcopy
# Compare two points in time with your own diff tool
diff <(unf cat app.tsx --at 1h) <(unf cat app.tsx --at 5m)
# Restore just the .rs files that changed in the last 5 minutes
unf diff --at 5m --json | jq -r '.changes[].file' | grep '\.rs$' | xargs -I{} unf restore {} --at 5m
# Watch for changes in real time
watch -n5 'unf diff --at 30s'
What was new for me: I came to Rust in Nov. 2025 honestly because of HN enthusiasm and some FOMO. No regrets. I enjoy the language enough that I'm now working on custom clippy lints to enforce functional programming practices. This project was also my first Apple-notarized DMG, my first Homebrew tap, and my second Tauri app (first one I've shared).

Install & Usage:
> brew install cyrusradfar/unf/unfudged
Then unf watch in a directory. unf help covers the details (or ask your agent to coach).

EDIT: Folks are asking for the source. If you're interested, watch https://github.com/cyrusradfar/homebrew-unf -- I'll migrate there if you want it.
Show HN: I built open source Gmail organizer because I refused to pay $30/month
For the past few weeks I was looking for a decent Gmail tool, but everything good costs $25-30/month and forces you to leave your Gmail inbox entirely. I also didn't trust where my email data was going. So I built NeatMail. It lives inside your Gmail, no new inbox to learn.

What it does:
- Auto-labels incoming emails instantly (Payments, University, Work, etc.; custom or pre-made)
- Drafts replies automatically for emails that need a response, right inside Gmail
- Everything is customizable: fonts, signature, labels, privacy settings

The model is built in-house. Open source so you can read every line. Your data never hits a third-party server. It's in beta. Looking for honest feedback from people who live in their inbox.

GitHub: https://github.com/Lakshay1509/NeatMail
Try it: https://www.neatmail.app/
Show HN: OpenTamago – P2P GenAI Tamagotchi
I grew up with Tamagotchis and wanted to reimagine that experience for the generative AI era. The result is OpenTamago.
It is an experimental MVP where you can share AI character cards and chat using P2P connections. Because it communicates peer-to-peer and bypasses external servers, it is designed to be private and safe.
I'm exploring the potential of P2P architecture in AI chat interfaces. Feedback is always welcome. Code is open sourced.
Show HN: Qeltrix-V6, Encryption Gateway
V6 is the network-native evolution of the Qeltrix encrypted archiving system. It turns any data stream into a live, seekable, encrypted V6 container — without needing the entire file first.
Show HN: RetroTick – Run classic Windows EXEs in the browser
RetroTick parses PE/NE/MZ binaries, emulates an x86 CPU, and stubs enough Win32/Win16/DOS APIs to run classics like FreeCell, Minesweeper, Solitaire and QBasic, entirely in the browser. Built with Preact + Vite + TypeScript.
Demo: https://retrotick.com
GitHub: https://github.com/lqs/retrotick