Show HN: Mines.fyi – all the mines in the US in a leaflet visualization
I downloaded MSHA's (Mine Safety and Health Administration) public datasets and created a visualization of all the mines in the US, complete with operators and details for each site.
Show HN: A native macOS client for Hacker News, built with SwiftUI
Hey HN! I built a native macOS desktop client for Hacker News and I'm open-sourcing it under the MIT license.
GitHub: https://github.com/IronsideXXVI/Hacker-News
Download (signed & notarized DMG, macOS 14.0+): https://github.com/IronsideXXVI/Hacker-News/releases
Screenshots: https://github.com/IronsideXXVI/Hacker-News#screenshots
I spend a lot of time reading HN — I wanted something that felt like a proper Mac app: a sidebar for browsing stories, an integrated reader for articles, and comment threading — all in one window. Essentially, I wanted HN to feel like a first-class citizen on macOS, not a website I visit.
What it does:
- Split-view layout — stories in a sidebar on the left, articles and comments on the right, using the standard macOS NavigationSplitView pattern.
- Built-in ad blocking — a precompiled WKContentRuleList blocks 14 major ad networks (DoubleClick, Google Syndication, Criteo, Taboola, Outbrain, Amazon ads, etc.) right in the WebKit layer. No extensions needed. Toggleable in settings.
- Pop-up blocking — kills window.open() calls. Also toggleable.
- HN account login — full authentication flow (login, account creation, password reset). Session is stored in the macOS Keychain, and cookies are injected into the WebView so you can upvote, comment, and submit stories while staying logged in.
- Bookmarks — save stories locally for offline access. Persisted with Codable serialization, searchable and filterable independently.
- Search and filtering — powered by the Algolia HN API. Filter by content type (All, Ask, Show, Jobs, Comments), date range (Today, Past Week, Past Month, All Time), and sort by hot or recent.
- Scroll progress indicator — a small orange bar at the top tracks your reading progress via JavaScript-to-native messaging (a sketch of the page-side script follows this list).
- Auto-updates via Sparkle with EdDSA-signed updates served from GitHub Pages.
- Dark mode — respects system appearance with CSS and meta tag injection.
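For the scroll progress indicator above, here is a minimal sketch of the kind of page-side script involved, assuming a script message handler registered as `scrollProgress` (the handler name is an assumption; the app's actual bridge may differ):

```typescript
// Injected into the page (e.g. via WKUserScript). Computes reading progress
// on scroll and posts it to the native side through a WKScriptMessageHandler,
// here assumed to be registered under the name "scrollProgress".
function reportScrollProgress(): void {
  const scrollable = document.documentElement.scrollHeight - window.innerHeight;
  const progress = scrollable > 0 ? Math.min(window.scrollY / scrollable, 1) : 1;
  // window.webkit.messageHandlers.<name>.postMessage is the standard bridge
  // from page JavaScript to a native WKScriptMessageHandler.
  (window as any).webkit?.messageHandlers?.scrollProgress?.postMessage(progress);
}

window.addEventListener("scroll", reportScrollProgress, { passive: true });
```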
Tech details for the curious:
The whole app is ~2,050 lines of Swift across 16 files. It uses the modern @Observable macro (not the old ObservableObject/Published pattern), structured concurrency with async/await and withThrowingTaskGroup for concurrent batch fetching, and SwiftUI throughout — no UIKit/AppKit bridges except for the WKWebView wrapper via NSViewRepresentable.
Two APIs power the data: the official HN Firebase API for individual item/user fetches, and the Algolia Search API for feeds, filtering, and search. The Algolia API is surprisingly powerful for this — it lets you do date-range filtering, pagination, and full-text search that the Firebase API doesn't support.
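For example, a recency-sorted, date-filtered query against the public Algolia HN API looks roughly like this (endpoint and parameters are Algolia's documented ones; the app's own fetch code will differ):

```typescript
// Fetch "Show HN" stories from the past week via the public Algolia HN API.
// search_by_date sorts by recency; numericFilters gives the date-range
// filtering that the official Firebase API has no equivalent for.
async function recentShowHN(): Promise<void> {
  const oneWeekAgo = Math.floor(Date.now() / 1000) - 7 * 24 * 60 * 60;
  const params = new URLSearchParams({
    tags: "show_hn",
    numericFilters: `created_at_i>${oneWeekAgo}`,
    hitsPerPage: "30",
  });
  const res = await fetch(`https://hn.algolia.com/api/v1/search_by_date?${params}`);
  const data = await res.json();
  for (const hit of data.hits) {
    console.log(`${hit.points} points - ${hit.title}`);
  }
}

recentShowHN();
```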
CI/CD:
The release pipeline is a single GitHub Actions workflow (467 lines) that handles the full macOS distribution story: build and archive, code sign with Developer ID, notarize with Apple (with a 5-retry staple loop for ticket propagation delays), create a custom DMG with AppleScript-driven icon positioning, sign and notarize the DMG, generate an EdDSA Sparkle signature, create a GitHub Release, and deploy an updated appcast.xml to GitHub Pages.
Getting macOS code signing and notarization working in CI was honestly the hardest part of this project. If anyone is distributing a macOS app outside the App Store via GitHub Actions, I'm happy to answer questions — the workflow is fully open source.
The entire project is MIT licensed. PRs and issues welcome: https://github.com/IronsideXXVI/Hacker-News
I'd love feedback — especially on features you'd want to see. Some ideas I'm considering: keyboard-driven navigation (j/k to move between stories), a reader mode that strips articles down to text, and notification support for replies to your comments.
Show HN: GenPPT AI – Turn any idea into professional slides in seconds
GenPPT.ai is an AI-powered presentation generation tool that automatically creates professional-looking PowerPoint slides based on text input. The platform utilizes natural language processing and machine learning to generate visually appealing slides with relevant content, images, and design elements.
Show HN: Agent Passport – OAuth-like identity verification for AI agents
Hi HN,
I built Agent Passport, an open-source identity verification layer for AI agents. Think "Sign in with Google, but for Agents."
The problem: AI agents are everywhere now (OpenClaw has 180K+ GitHub stars, Moltbook had 2.3M agent accounts), but there's no standard way for agents to prove their identity. Malicious agents can impersonate others, and skill/plugin marketplaces have no auth layer. Cisco's security team already found data exfiltration in third-party agent skills.
Agent Passport solves this with:
- Ed25519 challenge-response authentication (private keys never leave the agent; a generic sketch of this flow follows the list)
- JWT identity tokens (60-min TTL, revocable)
- Risk engine that scores agents 0-100 (allow/throttle/block)
- One-line verification for apps: `const result = await passport.verify(token)`
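To illustrate the Ed25519 challenge-response idea, here is a generic round trip using Node's built-in crypto. This is a sketch of the pattern only, not Agent Passport's actual wire format or SDK API:

```typescript
import { generateKeyPairSync, randomBytes, sign, verify } from "node:crypto";

// Agent side: keypair generated once; the private key never leaves the agent.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Verifier side: issue a fresh random challenge for this authentication attempt.
const challenge = randomBytes(32);

// Agent side: sign the challenge to prove possession of the private key.
// For Ed25519, Node's sign/verify take null as the digest algorithm.
const signature = sign(null, challenge, privateKey);

// Verifier side: check the signature against the agent's registered public key.
const ok = verify(null, challenge, publicKey, signature);
console.log(ok ? "agent verified" : "verification failed");
```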
It's fully open source (MIT), runs on free tiers ($0/month), and has a published npm SDK.
GitHub: https://github.com/zerobase-labs/agent-passport Docs: https://github.com/zerobase-labs/agent-passport/blob/main/do... Live demo: https://agent-passport.vercel.app
Built this because I kept seeing the same security gap in every agent platform. Happy to answer questions about the architecture or the agent identity problem in general.
Show HN: Script Snap – Extract code from videos
Hi HN, I'm lmw-lab, the builder behind Script Snap.
The Backstory: I built this out of pure frustration. A while ago, I was trying to figure out a specific configuration for a project, and the only good resource I could find was a 25-minute YouTube video. I had to scrub through endless "smash the like button" intros and sponsor reads just to find a single 5-line JSON payload.
I realized I didn't want an "AI summary" of the video; I just wanted the raw code hidden inside it.
What's different: There are dozens of "YouTube to Text" summarizers out there. Script Snap is different because it is explicitly designed as a technical extraction engine.
It doesn't give you bullet points about how the YouTuber feels. It scans the transcript and on-screen visuals to extract specifically:
- Code snippets
- Terminal commands
- API payloads (JSON/YAML)
- Security warnings (like flagging sketchy npm installs)
It strips out the "vibe" and outputs raw, formatted Markdown that you can copy straight into your IDE.
Full disclosure on the launch: Our payment processor (Stripe) flagged us on day one (banks seem to hate AI tools), so I've pivoted to a manual "Concierge Alpha" for onboarding. The extraction engine is fully operational, just doing things the hard way for now.
I'd love to hear your thoughts or harsh feedback on the extraction quality!
Show HN: Ghostty-based terminal with vertical tabs and notifications
I run a lot of Claude Code and Codex sessions in parallel. I was using Ghostty with a bunch of split panes, and relying on native macOS notifications to know when an agent needed me. But Claude Code's notification body is always just "Claude is waiting for your input" with no context, and with enough tabs open, I couldn't even read the titles anymore.
I tried a few coding orchestrators but most of them were Electron/Tauri apps and the performance bugged me. I also just prefer the terminal since GUI orchestrators lock you into their workflow. So I built cmux as a native macOS app in Swift/AppKit. It uses libghostty for terminal rendering and reads your existing Ghostty config for themes, fonts, colors, and more.
The main additions are the sidebar and notification system. The sidebar has vertical tabs that show git branch, working directory, listening ports, and the latest notification text for each workspace. The notification system picks up terminal sequences (OSC 9/99/777) and has a CLI (cmux notify) you can wire into agent hooks for Claude Code, OpenCode, etc. When an agent is waiting, its pane gets a blue ring and the tab lights up in the sidebar, so I can tell which one needs me across splits and tabs. Cmd+Shift+U jumps to the most recent unread.
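For anyone wiring this up from the agent side, here is a rough sketch of emitting those notification sequences. The OSC 777 form shown is the rxvt-unicode-style `notify` extension and OSC 9 is the simpler ConEmu/iTerm2-style variant; `cmux notify` presumably wraps something similar, but check the repo for the exact sequences cmux parses:

```typescript
// Emit terminal notification escape sequences from a script or agent hook.
// OSC 777 ("notify") carries a title and a body; OSC 9 carries just a message.
// A terminal that understands these can surface them as native notifications.
const ESC = "\x1b";
const BEL = "\x07";

function notifyOsc777(title: string, body: string): void {
  process.stdout.write(`${ESC}]777;notify;${title};${body}${BEL}`);
}

function notifyOsc9(message: string): void {
  process.stdout.write(`${ESC}]9;${message}${BEL}`);
}

notifyOsc777("Claude Code", "Waiting for your input");
notifyOsc9("Build finished");
```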
The in-app browser has a scriptable API ported from agent-browser [1]. Agents can snapshot the accessibility tree, get element refs, click, fill forms, evaluate JS, and read console logs. You can split a browser pane next to your terminal and have Claude Code interact with your dev server directly.
Everything is scriptable through the CLI and socket API – create workspaces/tabs, split panes, send keystrokes, open URLs in the browser.
Demo video: https://www.youtube.com/watch?v=i-WxO5YUTOs
Repo (AGPL): https://github.com/manaflow-ai/cmux
[1] https://github.com/vercel-labs/agent-browser
Show HN: Resilient OpenClaw Browser Relay – Survives WS Drops and MV3 Restarts
An open-source browser relay for OpenClaw built for resilience: it is designed to keep working across WebSocket connection drops and Chrome Manifest V3 service-worker restarts without losing the session.
Show HN: Micasa – track your house from the terminal
micasa is a terminal UI that helps you track home stuff, in a single SQLite file. No cloud, no account, no subscription. Backup with cp.
I built it because I was tired of losing track of everything in notes apps and "I'll remember that"s. When do I need to clean the dishwasher filter? What's the best quote for a complete overhaul of the backyard? Oops, found some mold behind the trim, need to address that ASAP. That sort of stuff.
Another reason I made micasa was to build a (hopefully useful) low-stakes personal project where the code was written entirely by AI. I still review the code and click the merge button, but 99% of the programming was done with an agent.
Here are some things I think make it worth checking out:
- Vim-style modal UI. Nav mode to browse, edit mode to change. Multicolumn sort, fuzzy-jump to columns, pin-and-filter rows, hide columns you don't need, drill into related records (like quotes for a project). Much of the spirit of the design, and some of the actual design choices, are inspired by VisiData. You should check that out too.
- Local LLM chat. Definitely a gimmick, but I am trying to preempt "Yeah, but does it AI?"-style conversations. This is an optional feature and you can simply pretend it doesn't exist. All features work without it.
- Single-file SQLite-based architecture. Document attachments (manuals, receipts, photos) are stored as BLOBs in the same SQLite database. One file is the whole app state. If you think this won't scale, you're right. It's pretty damn easy to work with though.
- Pure Go, zero CGO. Built on Charmbracelet for the TUI and GORM + go-sqlite for the database. Charm makes pretty nice TUIs, and this was my first time using it.
Try it with sample data: go install github.com/cpcloud/micasa/cmd/micasa@latest && micasa --demo
If you're insane you can also run micasa --demo --years 1000 to generate 1000 years worth of demo data. Not sure what house would last that long, but hey, you do you.
Show HN: Mukoko weather – AI-powered weather intelligence built for Zimbabwe
Zimbabwe has 90+ towns and cities, a population of 15+ million, significant agricultural and mining sectors — and almost no weather infrastructure built specifically for it. Global apps cover Harare and Bulawayo at best, and the AI summaries they generate read like they were written for someone in London. I built mukoko weather to fix this.
A few things that shaped the approach:
- Weather as a public good. The platform is free, no ads, no paywalls. If a smallholder farmer in Chinhoyi needs frost risk data to protect their crops tonight, that can't be behind a subscription.
- Hyperlocal context matters more than raw data. Zimbabwe has distinct agricultural seasons — Zhizha (rainy), Chirimo (spring), Masika (early rains), Munakamwe (winter). Elevation varies dramatically: the Highveld sits above 1,200m, the Zambezi Valley below 500m. The AI assistant, Shamwari Weather, is prompted with this geographic and seasonal context so its advice is actually meaningful to the user.
- Constrained environments are the primary target, not an edge case. Mobile-first, bandwidth-efficient, PWA-installable. The majority of users are on Android, often on 3G or spotty LTE. That's not a future concern — it's the baseline.
Technical decisions:
- Next.js 15 App Router on Cloudflare Pages + Workers
- AI summaries via Anthropic Claude SDK, server-side only, cached at the edge with KV and immutable TTL tiers (AI-generated weather advice shouldn't change retroactively)
- Open-Meteo for forecast data (free, high-quality global model coverage)
- 90+ Zimbabwe locations validated against geographic bounds with elevation data for frost modelling
The broader vision is weather as infrastructure within a larger Africa super app (Mukoko Africa), with Zimbabwe as the proof of concept before expanding to other developing country markets using the same locally-grounded approach.
Would love feedback on the approach, especially from anyone who's built for similar markets — low-bandwidth, mobile-dominant, regions underserved by global platforms.
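To make the KV caching point concrete, here is a rough Cloudflare Workers sketch of that pattern. The binding name WEATHER_CACHE and the generateSummary helper are invented for illustration; this is not mukoko's actual code:

```typescript
// Cloudflare Worker sketch: serve an AI weather summary from KV if cached,
// otherwise generate it server-side and cache it with a fixed TTL so the
// advice never changes retroactively within that time window.
// KVNamespace comes from @cloudflare/workers-types in a real Workers project.
export interface Env {
  WEATHER_CACHE: KVNamespace; // binding name is illustrative
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { searchParams } = new URL(request.url);
    const location = searchParams.get("location") ?? "harare";
    // One cache entry per location per day keeps summaries immutable.
    const key = `summary:${location}:${new Date().toISOString().slice(0, 10)}`;

    let summary = await env.WEATHER_CACHE.get(key);
    if (summary === null) {
      summary = await generateSummary(location); // e.g. a server-side Claude call
      await env.WEATHER_CACHE.put(key, summary, { expirationTtl: 60 * 60 * 24 });
    }
    return new Response(summary, { headers: { "content-type": "text/plain" } });
  },
};

// Placeholder for the server-side model call described in the post.
async function generateSummary(location: string): Promise<string> {
  return `Frost risk and forecast summary for ${location}.`;
}
```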
Show HN: A physically-based GPU ray tracer written in Julia
We ported pbrt-v4 to Julia and built it into a Makie backend. Any Makie plot can now be rendered with physically-based path tracing.
Julia compiles user-defined physics directly into GPU kernels, so anyone can extend the ray tracer with new materials and media - a black hole with gravitational lensing is ~200 lines of Julia.
Runs on AMD, NVIDIA, and CPU via KernelAbstractions.jl, with Metal coming soon.
Demo scenes: github.com/SimonDanisch/RayDemo
Show HN: Claude Chrome Parallel – Ultrafast Parallel Browser MCP for Chrome
Claude Chrome Parallel is an ultrafast parallel browser MCP (Model Context Protocol) for Chrome: it lets Claude control multiple Chrome browsing sessions concurrently instead of one at a time.
Show HN: Mini-Diarium - An encrypted, local, cross-platform journaling app
Mini-Diarium is an encrypted, local, cross-platform daily journaling app focused on privacy and minimalism: a straightforward tool for recording daily thoughts and experiences without the clutter of unnecessary features.
Show HN: A small, simple music theory library in C99
Mahler.c is a small, simple music theory library written in C99.
Show HN: Git uncommit – reset unpushed, committed changes
Equivalent of "git reset --soft HEAD~1". Got sick of googling it every time with my goldfish memory.
Just a brew package at this point, but feel free to suggest alternatives.
Show HN: Single HTML opinionated Kanban board
Flowboard is an open-source, opinionated Kanban board that ships as a single HTML file, with task tracking to keep simple workflows organized.
Show HN: Velo – Open-source, keyboard-first email client in Tauri and Rust
I built Velo because I wanted Superhuman's speed and keyboard workflow without the $30/month price tag or sending all my data through someone else's servers.
Velo is a local-first desktop email client. Your emails live in a local SQLite database - no middleman servers, no cloud sync. It works offline and your data stays on your machine.
What makes it different:
- Keyboard-driven - Superhuman-style shortcuts (j/k navigate, e archive, c compose, /search). You can fly through your inbox without touching the mouse (a sketch of the shortcut wiring follows this list)
- Built with Tauri v2 + Rust backend - ~15MB binary, low memory usage, instant startup. Not another Electron app
- Multi-account - Gmail (API) and IMAP/SMTP (Outlook, Yahoo, iCloud, Fastmail, etc.)
- AI features (optional) - Thread summaries, smart replies, natural language inbox search. Bring your own API key (Claude, GPT, or Gemini). Results cached locally
- Privacy-first - Remote images blocked by default, phishing link detection, SPF/DKIM/DMARC badges, sandboxed HTML rendering, AES-256-GCM encrypted token storage
- Split inbox, snooze, scheduled send, filters, templates, newsletter bundling, quick steps, follow-up reminders, calendar sync
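As a sketch of the keyboard-driven idea in a React/TypeScript frontend: a small hook that maps single keys to actions while leaving text inputs alone. The hook and action names are illustrative, not Velo's actual command layer:

```typescript
import { useEffect } from "react";

// Placeholder action map; the real command layer will differ.
type Shortcuts = Record<string, () => void>;

export function useInboxShortcuts(actions: Shortcuts): void {
  useEffect(() => {
    const onKeyDown = (e: KeyboardEvent) => {
      // Don't hijack keys while the user is typing in an input or the composer.
      const target = e.target as HTMLElement;
      if (target.isContentEditable || /INPUT|TEXTAREA/.test(target.tagName)) return;

      const handler = actions[e.key];
      if (handler) {
        e.preventDefault();
        handler();
      }
    };
    window.addEventListener("keydown", onKeyDown);
    return () => window.removeEventListener("keydown", onKeyDown);
  }, [actions]);
}

// Usage: j/k to move, e to archive, c to compose, / to focus search.
// useInboxShortcuts({ j: selectNext, k: selectPrev, e: archive, c: compose, "/": focusSearch });
```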
Tech stack: Tauri v2, React 19, TypeScript, SQLite + FTS5 (full-text search), Zustand, TipTap editor. 130 test files.
Available on Windows, macOS, and Linux. Apache-2.0 licensed.
GitHub: https://github.com/avihaymenahem/velo Site: https://velomail.app
I'm a solo developer and would love feedback, especially on UX, features you'd want, or if you run into issues. Happy to answer any questions about the architecture or Tauri v2 in general.
Show HN: I created a beautiful number animation library for React Native
Hi!
I've been frustrated that the beautiful NumberFlow library for the web (link) is not available on React Native - a platform that I think is much more animation-native than the web is. And there are no alternatives of the same quality. So I reimplemented it myself, basically from the ground up.
Introducing Number Flow React Native.
I am aiming for this to be the best number animation library for React Native.
- Beautiful animation easing directly inspired by web NumberFlow
- Supporting both Native and Skia versions
- Full i18n support for locales, things like compact or scientific notations, etc.
- TimeFlow component for timers and counters
- Custom digit bounding for things like binary
- Supporting 37 different numeral systems such as Arabic, Thai, and many others
- A dedicated, shared worklet mode for as much FPS as possible, perfect for sliders or gestures for example
- Built on top of React Native Reanimated v3+
- Also supports web via Expo Web
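A hedged usage sketch, with import and prop names guessed from the web NumberFlow API and the features above; check the docs below for the real surface:

```tsx
import React from "react";
import { View } from "react-native";
// Component and prop names here are assumptions modeled on the web NumberFlow
// API; the actual library API may differ.
import { NumberFlow } from "number-flow-react-native";

export function Price({ value }: { value: number }) {
  return (
    <View>
      <NumberFlow
        value={value}
        locales="de-DE"
        format={{ style: "currency", currency: "EUR", notation: "compact" }}
      />
    </View>
  );
}
```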
Please check out the docs: https://number-flow-react-native.awingender.com/docs And star it on GitHub if you like it: https://github.com/Rednegniw/number-flow-react-native
Let me know what you think!
Show HN: Give your OpenClaw agent a face and voice with LiveKit and LemonSlice
For a fun weekend side project, we gave our own OpenClaw agent, called "Mahmut", a face and had a live interview with it. It went surprisingly well.
Here's a sneak peek from the interview: https://x.com/ptservlor/status/2024597444890128767
User speaks, Deepgram transcribes it, OpenClaw Gateway routes it to your agent, ElevenLabs turns the response into speech, and LemonSlice generates a lip synced avatar from the audio. Everything streams over LiveKit in real time.
Latency is about 1 to 2 seconds end to end, depending on the LLM. The lip sync from LemonSlice honestly surprised us; it works way better than we expected.
The skill repo has a complete Python example, env setup, troubleshooting guide, and a Next.js frontend guide if you want to build your own web UI for it.
Show HN: WatchTurm – an open-source release visibility layer I use in my work
I built this to solve a recurring problem in multi-repo, multi-environment setups: it’s surprisingly hard to answer “what is actually running where?” without checking several systems. WatchTurm is an open-source release visibility layer. It aggregates metadata from sources like GitHub, Jira and CI (e.g. TeamCity), generates a structured snapshot of environment state, and surfaces it in a single control view.
It doesn't replace CI/CD or manage deployments. It sits above automation and focuses purely on visibility:
- what version runs in each environment
- how environments differ
- what changed between releases
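For a sense of what a "structured snapshot of environment state" could contain, here is a purely hypothetical shape; the field names are mine, not WatchTurm's schema:

```typescript
// Hypothetical shape of one entry in an environment snapshot. The real
// WatchTurm schema will differ; this just illustrates the kind of metadata
// being aggregated from GitHub, Jira, and CI.
interface EnvironmentSnapshotEntry {
  environment: "dev" | "staging" | "production";
  service: string;     // repo or deployable unit
  version: string;     // e.g. a tag or semver
  commitSha: string;   // from GitHub
  ciBuild?: string;    // e.g. a TeamCity build number
  issues?: string[];   // e.g. Jira keys included since the last release
  deployedAt: string;  // ISO 8601 timestamp
}

const example: EnvironmentSnapshotEntry = {
  environment: "staging",
  service: "billing-api",
  version: "1.42.0",
  commitSha: "9f3c2ab",
  ciBuild: "#518",
  issues: ["PAY-231", "PAY-245"],
  deployedAt: "2025-06-01T14:03:00Z",
};
```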
I’m currently using it in my own daily work and would really appreciate technical feedback, especially from teams with multi-environment pipelines.
repo: https://github.com/WatchTurm/WatchTurm-control-room
Show HN: Cellarium: A Playground for Cellular Automata
Hey HN, just wanted to share a fun, vibe-coded Friday night experiment: a little playground for writing cellular automata in a subset of Rust, which is then compiled into WGSL.
Since it lets you dynamically change parameters while the simulation is running via a TUI, it's easy to discover weird behaviors without remembering how you got there. If you press "s", it will save the complete history to a JSON file (a timeline of the parameters that were changed at given ticks), so you can replay it and regenerate the discovery.
You can pan/zoom, and while the main simulation window is in focus, the arrow keys can be used to update parameters (which are shown in the TUI).
Claude deserves all the credit and criticism for any technical elements of this project (beyond rough guidelines). I've just always wanted something like this, and it's a lot of fun to play with. Who needs video games these days.
Show HN: MephistoMail – A RAM-only, tracker-free disposable email client
Hi HN,
I got frustrated with the current landscape of 10-minute mail services. They are often full of ads, Google Analytics trackers, and clunky interfaces—completely defeating the purpose of a "privacy" tool.
I built MephistoMail as a clean, RAM-only frontend alternative. It uses the mail.tm/mail.gw APIs under the hood for actual inbox mapping but handles everything on the client side in volatile memory. If you close the tab, the session is gone. Zero logs are kept on our end.
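For context, the mail.tm flow is roughly this (endpoints per mail.tm's public API docs; MephistoMail's own client code will differ, and error handling is omitted):

```typescript
// Minimal sketch of a mail.tm session kept entirely in memory:
// create a throwaway account, get a token, then poll the inbox.
const API = "https://api.mail.tm";

async function createDisposableInbox() {
  // Pick an available domain for the throwaway address.
  const domains = await (await fetch(`${API}/domains`)).json();
  const domain = domains["hydra:member"][0].domain;

  const address = `hn-${Date.now()}@${domain}`;
  const password = crypto.randomUUID();

  await fetch(`${API}/accounts`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ address, password }),
  });

  const { token } = await (
    await fetch(`${API}/token`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ address, password }),
    })
  ).json();

  // List messages for the session; nothing is persisted anywhere.
  const messages = await (
    await fetch(`${API}/messages`, { headers: { authorization: `Bearer ${token}` } })
  ).json();

  return { address, messages: messages["hydra:member"] };
}
```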
Tech stack: React 18, Vite, Tailwind CSS, Lucide.
Would love to hear your thoughts, roasts, and suggestions!
Demo: https://mephistomail.site Repo: https://github.com/jokallame350-lang/temp-mailmephisto
Show HN: Write native binary web apps with TypeScript and Express
A way to write web apps with TypeScript and Express (routing, middleware, and HTTP request and response handling) and ship them as native binaries.
Show HN: Are – Rule engine for JavaScript, C#, and Dart with playground
I built a rule engine called ARE (Action Rule Event) that follows a simple pipeline: Event → Middleware → Condition Evaluation → Rule Matching → Action Execution.
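To make the pipeline shape concrete, here is a tiny generic sketch of that flow. This is not ARE's actual API (see the repo and playground for that), just the Event → Middleware → Condition → Rule → Action structure:

```typescript
// Generic sketch of the Event -> Middleware -> Condition -> Rule -> Action
// pipeline described above. Not ARE's real API, only the shape of it.
type RuleEvent = { type: string; payload: Record<string, unknown> };
type Middleware = (event: RuleEvent) => RuleEvent;
type Rule = {
  name: string;
  condition: (event: RuleEvent) => boolean;
  action: (event: RuleEvent) => void;
};

class MiniRuleEngine {
  private middleware: Middleware[] = [];
  private rules: Rule[] = [];

  use(mw: Middleware) { this.middleware.push(mw); }
  addRule(rule: Rule) { this.rules.push(rule); }

  emit(event: RuleEvent) {
    // Middleware runs first and may enrich or transform the event.
    const processed = this.middleware.reduce((e, mw) => mw(e), event);
    // Conditions are evaluated to find matching rules, whose actions then run.
    for (const rule of this.rules) {
      if (rule.condition(processed)) rule.action(processed);
    }
  }
}

// Usage, loosely mirroring the smart-home playground scenario.
const engine = new MiniRuleEngine();
engine.addRule({
  name: "lights-on-at-night",
  condition: (e) => e.type === "motion" && (e.payload.hour as number) >= 22,
  action: () => console.log("Turning hallway lights on"),
});
engine.emit({ type: "motion", payload: { hour: 23 } });
```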
It's available on npm, NuGet, and pub.dev with the same API design across all three.
I also built a playground where you can experiment with three scenarios (RPG game, smart home, e-commerce) without installing anything. It includes a step-by-step rule debugger, a visual relationship graph, and an animated pipeline diagram.
Playground: https://are-playground.netlify.app GitHub: https://github.com/BeratARPA/ARE
Show HN: AI Council – multi-model deliberation that runs in the browser
There's LLM Council and similar tools, but they use predefined model lineups. This one is different in a few ways that mattered to me:
*Bring your own models.* Mix Ollama (local), OpenAI, Anthropic, Groq, Google — or any OpenAI-compatible endpoint — in whatever combination you want. A council of DeepSeek-R1 + llama2-uncensored + mistral-nemo is a very different deliberation than GPT-4o + Claude + Gemini.
*Zero server, zero account, zero storage.* The app is purely static. API calls go directly from your browser to providers. Nothing touches a backend. No tokens, no sessions, no analytics. Your API keys never leave your machine.
*Runs on your own hardware.* If you have Ollama, you can run an entire council locally for free. I use a 5-member all-Ollama setup on an RTX 2070 (8GB VRAM) — sequential requests, slow, but completely private.
The deliberation process is 3 stages:
1. All members answer independently
2. Each member critiques anonymized responses from the others
3. A designated Chairman synthesizes a final verdict
A few things I found genuinely interesting:
- Reasoning models (DeepSeek-R1, QwQ) emit <think> blocks mid-stream. Stripping these while showing a "Thinking…" indicator keeps the UX clean without losing answer quality (a simplified sketch follows this list).
- The Contrarian persona on an uncensored model produces meaningfully different critiques than a safety-tuned model playing the same role.
- Peer review across models catches blind spots that a single model arguing with itself won't surface.
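The stripping step looks roughly like this; a real streaming version also has to handle tags split across chunk boundaries, which this sketch ignores:

```typescript
// Strip <think>...</think> reasoning blocks from a model response while
// detecting whether the model is still "thinking" (an unclosed block),
// so the UI can show a "Thinking…" indicator instead of raw reasoning text.
function stripThinking(text: string): { visible: string; thinking: boolean } {
  const closed = text.replace(/<think>[\s\S]*?<\/think>/g, "");
  const openIdx = closed.lastIndexOf("<think>");
  if (openIdx !== -1) {
    return { visible: closed.slice(0, openIdx), thinking: true };
  }
  return { visible: closed, thinking: false };
}

console.log(stripThinking("<think>chain of thought…</think>The answer is 42."));
// { visible: "The answer is 42.", thinking: false }
```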
GitHub: https://github.com/prijak/Ai-council.git
Show HN: 8gent – Mobile first workflow automation for iOS
8gent is an experiment in building a workflow automation tool designed around the smartphone instead of the browser.
The idea is to combine a visual workflow builder with mobile native triggers such as location events, push notifications and audio input. The execution logic runs on a backend, while the iPhone acts as a context aware trigger layer.
Built with Flutter and Firebase.
Interested in discussion around mobile native automation and practical use cases.
Show HN: How Amazon Pricing Algorithms Work
Amazon is one of the largest online retailers in the world, offering millions of products across countless categories. Because of this, prices on Amazon can change frequently, which sometimes makes it hard to know if a deal is genuine. Understanding how Amazon pricing works can help shoppers make smarter buying decisions.
Show HN: oForum | Self-hostable links/news site
Show HN: Google started to (quietly) insert (self) ads into Gemini output
I was using Gemini (3.1 Pro) today on gemini.google.com and had a long conversation about mobile operators' offerings. In one of the replies, completely out of context, Gemini added: "By the way, to have access to all features, please enable Gemini Apps Activity." The last three words were a link to https://myactivity.google.com/product/gemini.
:-O
NB: 1. This conversation was not about Google, activity settings, or anything similar; it was a completely different topic.
2. I do not have Apps Activity enabled.
3. Google tries, in many different places, to convince users to let it keep their history and data (Apps Activity).
4. I don't know whether this is a result of training or of text insertion via the API; nevertheless, it appeared directly within a paragraph of other text generated by Gemini.
We discuss ads from OpenAI, but here, in big black letters, is one of Google's dark patterns directly in the model's response.
BTW: Gemini 3.1 Pro really nailed the task itself and provided a lot of useful information.
Show HN: Rebrain.gg – Doom learn, don't doom scroll
Hi HN,
I built https://rebrain.gg. It's a website which is intended to help you learn new things.
I built it for two reasons:
1. To play around with different ways of interacting with a LLM. Instead of a standard chat conversation, the LLM returns question forms the user can directly interact with (and use to continue the conversation with the LLM).
2. Because I thought it would be cool to have a site dedicated to interactive educational content instead of purely consuming content (which I do too much).
An example of a (useful-for-me) interactive conversation is: https://rebrain.gg/conversations/6. In it I'm learning how to use the `find` bash command. (Who knew that to exclude a directory from a lookup you need `find . -path <path> -prune -o <what you want to look for> -print`, where `-o` is find's logical OR operator!)
Still very early on, so interested in and open to any feedback.
Thanks!
Show HN: Breadboard – A modern HyperCard for building web apps on the canvas
Hey HN! I’m Simone. We re-built Breadboard, a visual app builder that mixes Figma-style UI design with Shortcuts-style logic so you can build, preview, and publish interactive web apps directly from the canvas.
What it does:
- Design UIs visually with a flexible canvas (like Figma).
- Define app logic with a visual, instruction-stacked editor inspired by Shortcuts.
- Live preview apps directly on the canvas (no separate preview window).
- Publish working web apps with one click.
Why we made it:
- Modernize the HyperCard idea: combine layout, behavior, and instant sharing in one place.
- Reduce friction between design and a working app.
- Make simple web apps approachable for non-developers while keeping power features for developers.
- Build a foundation for LLM integration so users can design and develop with AI while still understanding what's happening, even without coding experience (in progress!).
Try it (no signup required):
Weather forecast app: https://app.breadboards.io/playgrounds/weather
Swiss Public Transit: https://app.breadboards.io/playgrounds/public_transit
info: https://breadboards.io
I would appreciate any feedback :)