Show HN: CIA World Factbook Archive (1990–2025), searchable and exportable
A structured archive of CIA World Factbook data spanning 1990–2025. It currently includes:

- 36 editions
- 281 entities
- ~1.06M parsed fields
- full-text + boolean search
- country/year comparisons
- map/trend/ranking analysis views
- CSV/XLSX/PDF export

The goal is to preserve long-horizon public-domain government data and make cross-year analysis practical.

Live: https://cia-factbook-archive.fly.dev
About/method details: https://cia-factbook-archive.fly.dev/about

Data source is the CIA World Factbook (public domain). Not affiliated with the CIA or U.S. Government.
Show HN: A geometric analysis of Chopin's Prelude No. 4 using 3D topology
OP here.
This is a geometric decoding of Chopin's Prelude No. 4.
I built a 3D MIDI music visualizer (https://github.com/jimishol/cholidean-harmony-structure) and realized that standard music theory couldn't explain the shapes I was seeing. So I developed the Umbilic-Surface Grammar to map the topology of the harmony.
This document demonstrates that the prelude's tension isn't random but arises from a rigorous conflict between 'Gravity' (Station Shifts) and 'Will' (Pivots).
I am looking for feedback on the logic—specifically from anyone with a background in topology or music theory. Does this geometric proof hold up?
Show HN: Local-First Linux MicroVMs for macOS
Shuru is a lightweight sandbox that spins up Linux VMs on macOS using Apple's Virtualization.framework. Boots in about a second on Apple Silicon, and everything is ephemeral by default. There's a checkpoint system for when you do want to persist state, and sandboxes run without network access unless you explicitly allow it. Single Rust binary, no dependencies. Built it for sandboxing AI agent code execution, but it works well for anything where you need a disposable Linux environment.
Show HN: WARN Firehose – Every US layoff notice in one searchable database
Hi HN,
I built WARN Firehose because I was frustrated trying to track layoff data across the US. The WARN Act requires companies with 100+ employees to file public notices 60 days before mass layoffs — but the data is scattered across 50 state websites with different formats, broken links, and no API.
WARN Firehose scrapes every state workforce agency daily and normalizes the data into a single database going back to 1988. It now has 131,000+ notices covering 14 million workers.
*What you can do:*
- Browse interactive charts and data tables (no account needed): https://warnfirehose.com/data
- Drill into any state, city, company, or industry: https://warnfirehose.com/data/layoffs
- Query the REST API (free tier: 100 calls/day): https://warnfirehose.com/docs
- Export in CSV, JSON, NDJSON, Parquet, or JSON-LD
- Set up webhooks for real-time alerts on new filings
*Who uses this:*
- Journalists breaking layoff stories before press releases
- Quant funds using WARN filings as an alternative data signal (filings happen ~60 days before layoffs)
- Recruiters sourcing from displaced talent pools
- Researchers studying labor market dynamics
- Workforce development boards doing rapid response planning
*Tech stack:* Python/FastAPI, SQLite, scrapers for all 50 states, static HTML generation for SEO pages, Claude Haiku for AI analysis, deployed on EC2.
Free tier is genuinely useful (100 API calls/day, dashboard access, charts). Paid plans start at $19/mo for full historical data and bulk exports.
Would love feedback on the API design, data quality, or anything else. Happy to answer questions.
Show HN: 3D Mahjong, Built in CSS
Show HN: AIO Checker – See what ChatGPT and Claude see on your website
The article discusses a new AI-powered tool called AIoChecker that helps developers quickly identify and fix issues in their AI models. It highlights the tool's ability to perform comprehensive checks, provide detailed feedback, and support a wide range of AI frameworks and model types.
Show HN: CS – indexless code search that understands code, comments and strings
I initially built cs (codespelunker) to answer a question: can BM25 relevance search work without building an index?
Turns out it can, so I iterated on the idea and built it into a full CLI tool. Recently I wanted to add the kind of relevance ranking you get from tools like Sourcegraph or Zoekt, again without adding an index.
cs uses scc (https://github.com/boyter/scc) to understand the structure of each file on the fly. As such it can filter searches to code, comments, or strings. It also applies a weighted BM25 algorithm where matches in actual code rank higher than matches in comments (by default).
I also added a complexity gravity weight using the cyclomatic complexity output from scc as it scans. So if you're searching for a function, the implementation should rank higher than the interface.
cs "authenticate" --gravity=brain # Find the complex implementation, not the interface
cs "FIXME OR TODO OR HACK" --only-comments # Search only in comments, not code or strings
cs "error" --only-strings # Find where error messages are defined
cs "handleRequest" --only-usages # Find every call site, skip the definition
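To make the ranking idea concrete, here's a toy sketch of how context weighting and complexity gravity could combine with BM25. This is my own illustration in Python, not cs's actual implementation; all weights, names, and constants are invented.

```python
import math

# Hypothetical context weights: matches in code count more than comments/strings.
CONTEXT_WEIGHT = {"code": 1.0, "comment": 0.6, "string": 0.8}

def bm25(tf, df, n_docs, doc_len, avg_len, k1=1.2, b=0.75):
    """Standard BM25 term score for one term in one document."""
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))

def score(tf, df, n_docs, doc_len, avg_len, context, complexity, gravity=0.1):
    """BM25 weighted by where the match sits, boosted by cyclomatic complexity."""
    base = bm25(tf, df, n_docs, doc_len, avg_len)
    # "Gravity": complex implementations get pulled above trivial interfaces.
    boost = 1.0 + gravity * math.log1p(complexity)
    return base * CONTEXT_WEIGHT[context] * boost

# A match inside a complex function outranks the same match in a comment:
s_code = score(3, 5, 1000, 200, 150, "code", complexity=12)
s_comment = score(3, 5, 1000, 200, 150, "comment", complexity=0)
```

The log on complexity is one plausible damping choice so that a 100-branch function doesn't dominate everything; the real tool may scale differently.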
v3.0.0 adds a new ranker, along with an interactive TUI, HTTP mode, and MCP support for use with LLMs (Claude Code/Cursor). Since it's doing analysis and complexity math on the fly, it's slower than any grep. However, on an M1 Mac it can scan and rank the entire 40M+ line Linux kernel in ~6 seconds.
Live demo (running over its own source code in HTTP mode): https://codespelunker.boyter.org/
GitHub: https://github.com/boyter/cs
Show HN: TLA+ Workbench skill for coding agents (compat. with Vercel skills CLI)
The article provides an overview of the TLA+ Workbench, a tool for writing, analyzing, and verifying formal specifications. It covers the key features and benefits of the Workbench, making it a valuable resource for developers and engineers working on complex systems.
Show HN: AgentLint v0.5 – 42 rules, stack-aware guardrails for AI agents
Follow-up to my post 3 days ago. AgentLint went from 10 rules to 42 across 7 packs.
The interesting technical bits since last time:
Stack auto-detection. AgentLint inspects project files (pyproject.toml, package.json, framework dependencies) and activates relevant rule packs. Python pack catches bare excepts, unsafe subprocess calls, SQL injection patterns. Frontend pack checks accessibility (alt text, form labels, heading hierarchy). React and SEO packs activate when their dependencies are present. No config needed — drop agentlint.yml if you want to override.
All 17 hook events. Claude Code exposes more lifecycle hooks than most people realize: PreToolUse, PostToolUse, Stop, UserPromptSubmit, SubagentStop, Notification, SessionEnd, and 10 others. AgentLint now handles all of them. The interesting one is UserPromptSubmit — you can validate what the user asks before the agent acts on it.
File content caching for diffs. PreToolUse caches the file's content before an Edit/Write. PostToolUse receives the "before" snapshot so diff-based rules work (e.g., detecting when error handling gets removed from a file).
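A simplified sketch of that caching pattern in Python (illustrative only; the hook names mirror Claude Code's events, but the function signatures and rule here are invented, not AgentLint's actual API):

```python
# Cache of file contents keyed by path, filled by the PreToolUse hook.
_before_cache = {}

def pre_tool_use(tool_input):
    """Before an Edit/Write, snapshot the file's current content."""
    path = tool_input.get("file_path")
    if path:
        try:
            with open(path) as f:
                _before_cache[path] = f.read()
        except FileNotFoundError:
            _before_cache[path] = ""  # brand-new file, nothing to snapshot

def post_tool_use(tool_input):
    """After the edit, compare snapshots; flag removed error handling."""
    path = tool_input.get("file_path")
    before = _before_cache.pop(path, "")
    with open(path) as f:
        after = f.read()
    # A crude diff-based rule: error handling vanished entirely.
    if "except" in before and "except" not in after:
        return "warning: a try/except block was removed"
    return None
```

The real rules presumably do proper diffing rather than substring checks, but the pre/post snapshot handoff is the core mechanism.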
Binary resolution problem. Claude Code runs hooks via /bin/sh with a minimal PATH. On macOS, pip installs console_scripts to /Library/Frameworks/Python.framework/Versions/3.13/bin/ which isn't on that PATH. shutil.which() fails. The fix was a 5-step probe chain: PATH → ~/.local/bin (pipx) → uv tools dir → sysconfig.get_path("scripts") → python -m fallback. The sysconfig call is the key — it returns exactly where pip put the binary. Also had to add __main__.py since the python -m fallback was broken without it.
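The probe chain can be sketched roughly like this (simplified to four steps; I've omitted the uv tools directory probe, whose location I'm not certain of, and the function name is mine):

```python
import shutil
import sys
import sysconfig
from pathlib import Path

def resolve_binary(name):
    """Return an argv prefix that invokes a console script, even when
    the caller (e.g. /bin/sh with a minimal PATH) can't find it."""
    # 1. Normal PATH lookup; fails under the stripped hook PATH on macOS.
    found = shutil.which(name)
    if found:
        return [found]
    # 2. pipx-style user bin directory.
    candidate = Path.home() / ".local" / "bin" / name
    if candidate.is_file():
        return [str(candidate)]
    # 3. Wherever pip actually installed console scripts for this interpreter.
    candidate = Path(sysconfig.get_path("scripts")) / name
    if candidate.is_file():
        return [str(candidate)]
    # 4. Last resort: run the package as a module (needs a __main__.py).
    return [sys.executable, "-m", name]
```

The `sysconfig.get_path("scripts")` step is the one the post calls out: it asks the running interpreter for its scripts directory, which is exactly where pip put the entry point.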
Quality pack (always-active). Validates commit messages against conventional commits format. Detects dead imports. Warns when try/except or .catch blocks get removed entirely (not refactored — removed). Injects a self-review prompt at session end. Tracks token budget across the session.
741 tests, 96% coverage. Still Python 3.11+, still no dependencies beyond click and pyyaml.
The custom rules API hasn't changed — subclass Rule, implement evaluate(), drop a .py file. But the engine now provides richer context: file diffs, prompt content, subagent output, notification metadata.
https://github.com/mauhpr/agentlint
Show HN: Llama 3.1 70B on a single RTX 3090 via NVMe-to-GPU bypassing the CPU
Hi everyone. I'm kinda involved in retrogaming, and while running some experiments I hit the following question: "Would it be possible to run transformer models while bypassing the CPU/RAM, connecting the GPU directly to the NVMe?"
This is the result of that question and some weekend vibecoding (the linked library repository is in the README as well). It seems to work, even on consumer GPUs, though it should work better on professional ones.
Show HN: A portfolio that re-architects its React DOM based on LLM intent
Hi HN,
Added a raw 45-second demo showing the DOM re-architecture in real-time: https://streamable.com/vw133i
I got tired of the "Context Problem" with static portfolios: recruiters want a resume, founders want a pitch deck, and engineers want to see architecture.
Instead of building three sites, I hooked up my React frontend to Llama-3 (via Groq for <100ms latency). It analyzes natural language intent from the search bar and physically re-architects the Component Tree to prioritize the most relevant modules using Framer Motion.
The hardest part was stabilizing the Cumulative Layout Shift (CLS) during the DOM mutation, but decoupling the layout state from the content state solved it.
The Challenge: There is a global CSS override hidden in the search bar. If you guess the 1999 movie reference, it triggers a 1-bit terminal mode.
Happy to answer any questions on the Groq implementation or the layout engine!
Show HN: Rendering 18,000 videos in real-time with Python
Pysaic is a Python library that allows users to create and manipulate ASCII art with ease. The article discusses the key features and capabilities of Pysaic, including the ability to generate random ASCII art and customize existing designs.
Show HN: Aeterna – Self-hosted dead man's switch
Hey HN, I built something I actually needed myself: a dead man's switch that doesn't require trusting some random SaaS with my unencrypted secrets.

Aeterna is a self-hosted digital vault + dead man's switch. You store password exports, seed phrases, legal docs, farewell messages, files – whatever – encrypted. If I stop checking in (because something bad happened), it automatically decrypts and sends everything to the people I trust.

Why I made it:
- I didn't want to hand my master password / recovery keys to a third-party service
- Most existing tools are either paid, closed-source, or feel over-engineered
- I wanted something I could just docker-compose up and forget about (mostly)
Core flow:
- Single docker-compose (Go backend + SQLite, React/Vite + Tailwind frontend)
- You set a check-in interval (30/60/90 days etc.)
- It emails you a simple "Still alive?" link (uses your own SMTP server – no external deps)
- Miss the grace period → switch triggers
- Decrypts vault contents and emails them to your nominated contacts, or hits webhooks you define
Security highlights:
- Everything at rest uses AES-256-GCM
- Master password → PBKDF2 hash (never stored plaintext)
- Sensitive config (SMTP creds etc.) encrypted in DB
- No cloud APIs required – bring your own email
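For readers unfamiliar with the scheme, the key-derivation step looks roughly like this (an illustration of PBKDF2 in Python, not Aeterna's actual Go code; the iteration count and salt size are example values):

```python
import hashlib
import os

def derive_key(master_password, salt=None, iterations=600_000):
    """Derive a 32-byte key from a master password via PBKDF2-HMAC-SHA256.

    Only the salt and iteration count need storing; the password never is.
    The 32-byte output is the right size for an AES-256-GCM key.
    """
    if salt is None:
        salt = os.urandom(16)  # unique random salt per vault
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)
    return key, salt

key, salt = derive_key("correct horse battery staple")
# Same password + same stored salt reproduces the same key at unlock time.
key2, _ = derive_key("correct horse battery staple", salt=salt)
```

The point of the high iteration count is to make brute-forcing the master password expensive even if the database leaks.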
It's deliberately minimal and boringly secure rather than feature-heavy. Zero vendor lock-in.

Repo: https://github.com/alpyxn/aeterna

Would really value brutal feedback:
- Security model / crypto usage – anything smell wrong?
- Architecture – single SQLite ok long-term?
- UI/UX – is it intuitive enough?
- Missing must-have features for this kind of tool?
- Code – roast away if you want
Thanks for looking – happy to answer questions or iterate based on comments.
Show HN: GitHub Issues in the Terminal
Show HN: Upti – Track cloud provider incidents and get alerts
Hi HN! I built Upti, a small app that monitors major cloud provider status pages and notifies you when there are outages/incidents.
Why I made it:
- I wanted a simple way to track service disruptions across providers in one place
- Official status pages are useful but fragmented
- I needed quick, actionable notifications
What Upti does:
- Scrapes provider status/incident pages
- Sends outage/incident alerts
- Keeps the experience lightweight and fast
I’d love feedback on:
- Which providers/services I should prioritize next
- Alert quality (too noisy vs too late)
- What would make this genuinely useful for SRE/DevOps workflows
Happy to share implementation details if useful.
Show HN: Iron-Wolf – Wolfenstein 3D source port in Rust
The goal is a pixel-perfect, mod-friendly recreation of Wolfenstein 3D in Rust.
Show HN: G13u.com – An automated SRE for AI apps
Show HN: spdx2dep – Converting SPDX metadata to debian/copyright (dep5)
The article discusses the spdx2dep tool, which is used to convert SPDX files into a format suitable for dependency management systems. The tool aims to simplify the process of managing software dependencies and licenses by providing a standardized way to represent and exchange software package information.
Show HN: Saga – SQLite project tracker for AI coding agents
The article provides an overview of the SAGA MCP (Saga Microfrontend Composition Protocol), a framework for building scalable and maintainable microfrontend applications. It discusses the key features of SAGA MCP, including its focus on simplicity, flexibility, and support for both reactive and imperative approaches.
Show HN: MuJoCo React
MuJoCo physics simulation in the browser using React.
This is made possible by DeepMind's mujoco-wasm (mujoco-js), which compiles MuJoCo to WebAssembly. We wrap it with React Three Fiber so you can load any MuJoCo model, step physics, and write controllers as React components, all running client-side in the browser.
Show HN: Vexp – graph-RAG context engine, 65-70% fewer tokens for AI agents
I've been building vexp for the past months to solve a problem that kept bugging me: AI coding agents waste most of their context window reading code they don't need.
The problem
When you ask Claude Code or Cursor to fix a bug, they typically grep around, cat a bunch of files, and dump thousands of lines into the context. Most of it is irrelevant. You burn tokens, hit context limits, and the agent loses focus on what matters.
What vexp does
vexp is a local-first context engine that builds a semantic graph of your codebase (AST + call graph + import graph + change coupling from git history), then uses a hybrid search — keyword matching (FTS5 BM25), TF-IDF cosine similarity, and graph centrality — to return only the code that's actually relevant to the current task.
The core idea is Graph-RAG applied to code:
Index — tree-sitter parses every file into an AST, extracts symbols (functions, classes, types), builds edges (calls, imports, type references). Everything stored in a single SQLite file (.vexp/index.db).
Traverse — when the agent asks "fix the auth bug in the checkout flow", vexp combines text search with graph traversal to find the right pivot nodes, then walks the dependency graph to include callers, importers, and related files.
Capsule — pivot files are returned in full, supporting files as skeletons (signatures + type defs only, 70-90% token reduction). The result is a compact "context capsule" that gives the agent everything it needs in ~2k-4k tokens instead of 15-20k.
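To illustrate the skeleton idea, here's a toy sketch in Python (vexp itself is Rust + tree-sitter and handles far more cases; this only covers top-level definitions): keep signatures, drop bodies.

```python
import ast
import textwrap

def skeletonize(source):
    """Return only top-level signatures, replacing bodies with '...'."""
    tree = ast.parse(source)
    lines = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}: ...")
    return "\n".join(lines)

src = textwrap.dedent("""
    def checkout(cart, user):
        total = sum(i.price for i in cart)
        return charge(user, total)

    class Cart:
        def add(self, item): ...
""")
print(skeletonize(src))
# def checkout(cart, user): ...
# class Cart: ...
```

The agent still sees what `checkout` takes and that `Cart` exists, at a fraction of the tokens; only the pivot files keep their bodies.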
Session Memory (v1.2)
The latest addition is session memory linked to the code graph. Every tool call is auto-captured as a compact observation. When the agent starts a new session, relevant memories from previous sessions are auto-surfaced inside the context capsule. If you refactor a function that a memory references, the memory is automatically flagged as stale. Think of it as a knowledge base that degrades gracefully as the code evolves.
How it works technically
- Rust daemon (vexp-core) handles indexing, graph storage, and query execution
- TypeScript MCP server (vexp-mcp) exposes 10 tools via the Model Context Protocol
- VS Code extension (vexp-vscode) manages the daemon lifecycle and auto-configures AI agents
- Supports 12 agents: Claude Code, Cursor, Windsurf, GitHub Copilot, Continue.dev, Augment, Zed, Codex, Opencode, Kilo Code, Kiro, Antigravity
- 12 languages: TypeScript, JavaScript, Python, Go, Rust, Java, C#, C, C++, Ruby, Bash
- The index is git-native — .vexp/index.db is committed to your repo, so teammates get it without re-indexing
- Local-first, no data leaves your machine
Everything runs locally. The index is a SQLite file on disk. No telemetry by default (opt-in only, and even then it's just aggregate stats like token savings %). No code content is ever transmitted anywhere.
Try it
Install the VS Code extension: https://marketplace.visualstudio.com/items?itemName=Vexp.vex...
The free tier (Starter) gives you up to 2,000 nodes and 1 repo — enough for most side projects and small-to-medium codebases. Open your project, vexp indexes automatically, and your agent starts getting better context on the next task. No account, no API key, no setup.
Docs: https://vexp.dev/docs
I'd love to hear feedback, especially from people working on large codebases (50k+ lines) where context management is a real bottleneck. Happy to answer any questions about the architecture or the graph-RAG approach.
Show HN: Seafruit – Share any webpage to your LLM instantly
Hi HN,
This weekend I built seafruit.pages.dev to privately share any webpage with my LLM. More sites are (rightfully) blocking AI crawlers, but as a reader with the page already open, it's frustrating that my AI assistant can't "see" what I'm already reading.
One click → clean Markdown → copied to clipboard. No extension, no tracking.
Existing solutions like AI browsers or extensions felt too intrusive. I wanted something surgical, fast, and private.
How it works: It's a bookmarklet. Click it on any page → it extracts clean text as Markdown → copies an AI-optimized link to your clipboard. No extension needed.
Key details:
- Zero friction: Drag the bookmark to your bar. Works on mobile too.
- Privacy-first: Links are ephemeral (24 hrs on Free). PRO links self-destruct the moment an AI bot finishes reading them.
- LLM-optimized: Clean Markdown, not raw HTML — no wasted context window.
- Fast everywhere: Built on Cloudflare Workers.
Would love feedback on the workflow or ideas for other anti-friction features.
https://seafruit.pages.dev
P.S. Thanks to mods Daniel and Tom for helping me recover my account!
Show HN: A native macOS client for Hacker News, built with SwiftUI
Hey HN! I built a native macOS desktop client for Hacker News and I'm open-sourcing it under the MIT license.
GitHub: https://github.com/IronsideXXVI/Hacker-News
Download (signed & notarized DMG, macOS 14.0+): https://github.com/IronsideXXVI/Hacker-News/releases
Screenshots: https://github.com/IronsideXXVI/Hacker-News#screenshots
I spend a lot of time reading HN — I wanted something that felt like a proper Mac app: a sidebar for browsing stories, an integrated reader for articles, and comment threading — all in one window. Essentially, I wanted HN to feel like a first-class citizen on macOS, not a website I visit.
What it does:
- Split-view layout — stories in a sidebar on the left, articles and comments on the right, using the standard macOS NavigationSplitView pattern.
- Built-in ad blocking — a precompiled WKContentRuleList blocks 14 major ad networks (DoubleClick, Google Syndication, Criteo, Taboola, Outbrain, Amazon ads, etc.) right in the WebKit layer. No extensions needed. Toggleable in settings.
- Pop-up blocking — kills window.open() calls. Also toggleable.
- HN account login — full authentication flow (login, account creation, password reset). Session is stored in the macOS Keychain, and cookies are injected into the WebView so you can upvote, comment, and submit stories while staying logged in.
- Bookmarks — save stories locally for offline access. Persisted with Codable serialization, searchable and filterable independently.
- Search and filtering — powered by the Algolia HN API. Filter by content type (All, Ask, Show, Jobs, Comments), date range (Today, Past Week, Past Month, All Time), and sort by hot or recent.
- Scroll progress indicator — a small orange bar at the top tracks your reading progress via JavaScript-to-native messaging.
- Auto-updates via Sparkle with EdDSA-signed updates served from GitHub Pages.
- Dark mode — respects system appearance with CSS and meta tag injection.
Tech details for the curious:
The whole app is ~2,050 lines of Swift across 16 files. It uses the modern @Observable macro (not the old ObservableObject/Published pattern), structured concurrency with async/await and withThrowingTaskGroup for concurrent batch fetching, and SwiftUI throughout — no UIKit/AppKit bridges except for the WKWebView wrapper via NSViewRepresentable.
Two APIs power the data: the official HN Firebase API for individual item/user fetches, and the Algolia Search API for feeds, filtering, and search. The Algolia API is surprisingly powerful for this — it lets you do date-range filtering, pagination, and full-text search that the Firebase API doesn't support.
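For example, a date-filtered feed query against the Algolia HN Search API can be built like this (a small Python sketch of mine; the endpoint and parameter names follow Algolia's public HN API):

```python
from urllib.parse import urlencode

def hn_search_url(query, tag="story", since_unix=None, page=0):
    """Build an Algolia HN Search API URL, optionally filtered by date."""
    params = {"query": query, "tags": tag, "page": page}
    if since_unix is not None:
        # numericFilters enables the date-range filtering Firebase lacks.
        params["numericFilters"] = f"created_at_i>{since_unix}"
    return "https://hn.algolia.com/api/v1/search?" + urlencode(params)

url = hn_search_url("swiftui", since_unix=1700000000)
```

Swapping the path to `/search_by_date` sorts by recency instead of relevance, which maps neatly onto the app's hot/recent toggle.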
CI/CD:
The release pipeline is a single GitHub Actions workflow (467 lines) that handles the full macOS distribution story: build and archive, code sign with Developer ID, notarize with Apple (with a 5-retry staple loop for ticket propagation delays), create a custom DMG with AppleScript-driven icon positioning, sign and notarize the DMG, generate an EdDSA Sparkle signature, create a GitHub Release, and deploy an updated appcast.xml to GitHub Pages.
Getting macOS code signing and notarization working in CI was honestly the hardest part of this project. If anyone is distributing a macOS app outside the App Store via GitHub Actions, I'm happy to answer questions — the workflow is fully open source.
The entire project is MIT licensed. PRs and issues welcome: https://github.com/IronsideXXVI/Hacker-News
I'd love feedback — especially on features you'd want to see. Some ideas I'm considering: keyboard-driven navigation (j/k to move between stories), a reader mode that strips articles down to text, and notification support for replies to your comments.
Show HN: Voted.dev – Vote on New Startups
I built this because HN has become a general discussion board and Product Hunt turned into a launch marketing game.
I wanted something dead simple for discovering new products.
Submit a domain (each one can only be posted once), add a one-liner, and people vote. No hunters, no badges, no collections, no blog posts.
What would you add (or deliberately leave out)?
Show HN: Cryphos – no-code crypto signal bot with Telegram alerts
I built a platform where you configure your own technical indicators and get trading signals straight to Telegram — no code required. Looking for feedback: what works, what's missing, what would you add?
Show HN: I quit MyNetDiary after 3 years of popups and built a calorie tracker
After three years of hitting the same upgrade popup every time I opened MyNetDiary just to log lunch, I finally gave up searching for an alternative and built one myself.
The whole thing is a single HTML file. No server, no account, no login, no cloud. Data lives on your device only. You open it in a browser, bookmark it, and it works — offline, forever.
The feature I'm most proud of is real-time pacing: it knows your eating window, the current time, and how much you've consumed, and tells you whether you're actually on track — not just what your total is.
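The pacing idea is simple enough to sketch (my own illustrative math, not the app's actual code): compare what you've eaten against a linear pace through the eating window.

```python
def pacing(consumed_kcal, budget_kcal, window_start_h, window_end_h, now_h):
    """Return (expected_kcal_by_now, status) for a daily eating window.

    Assumes a linear pace across the window; all numbers are examples.
    """
    window = window_end_h - window_start_h
    elapsed = min(max(now_h - window_start_h, 0), window)  # clamp to window
    expected = budget_kcal * elapsed / window
    if consumed_kcal <= expected:
        return expected, "on track"
    return expected, "ahead of pace"

# 2000 kcal budget, eating window 08:00-20:00, it's 14:00 and 1200 kcal logged:
expected, status = pacing(1200, 2000, 8, 20, 14)
# halfway through the window → 1000 kcal expected, so 1200 is "ahead of pace"
```

The real app may weight meals non-linearly, but even this linear version answers "am I on track?" rather than just showing a running total.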
Free trial, no signup required: calories.today/app.html
Built this for myself after losing weight and just wanting to maintain without an app trying to sell me something every day. If that sounds familiar, give the trial a shot.
Show HN: How to Verify USDC Payments on Base Without a Payment Processor
The Problem Nobody Talks About

You want to accept a $10,000 USDC payment. You have two options:
Option A: Integrate a payment processor like Coinbase Commerce. Set up an account, embed their checkout widget, handle their SDK. Pay $100 in fees (1%).
Option B: Build your own blockchain listener. Learn ethers.js, subscribe to USDC transfer events, handle reorgs, confirmations, edge cases. Two weeks of work, minimum.
There's no middle ground. No service that just tells you: "Yes, this specific payment arrived."
Until now.
https://paywatcher.dev?utm_source=hackernews
Show HN: Never Ending Novel
Show HN: I made a Chrome extension that blocks websites, with a mindful twist
Hey HN. A few months ago I noticed I was spending way too much time on YouTube and Reddit (like 4–5 hours per day). I tried a bunch of blockers, but most of them just hard-block everything, and when I actually need a YouTube video for debugging, I end up disabling the blocker and never turning it back on.
So I built ZenBlock to solve my problem: it blocks distracting sites, but when you try to open one, it shows a short breathing exercise. After that you can choose to get temporary access (5–30 min). The goal is to make you aware of the distraction.
Tech-wise: it’s built using Chrome extension blocking rules (Manifest V3 / declarativeNetRequest) and a local timer to handle the “allow for X minutes” part.
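For those unfamiliar with Manifest V3, a declarativeNetRequest rule that sends a blocked site to an interstitial page looks roughly like this (an illustrative rule of mine, not ZenBlock's actual ruleset; the extension path and filter are hypothetical):

```json
{
  "id": 1,
  "priority": 1,
  "action": {
    "type": "redirect",
    "redirect": { "extensionPath": "/breathe.html" }
  },
  "condition": {
    "urlFilter": "||youtube.com",
    "resourceTypes": ["main_frame"]
  }
}
```

The "allow for X minutes" part can then be done by removing the rule via chrome.declarativeNetRequest.updateDynamicRules and restoring it when a timer fires.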
This might sound a bit funny, but for me it genuinely helped: my watch time dropped from 4 hrs to 2.5 hrs/day, mostly because I got tired of waiting. It has analytics too, which store all data in local storage only.
Would love feedback on:
does the breathing pause feel helpful or just annoying?
what would make you keep an extension like this installed long-term?
Show HN: ZkzkAgent now has safe, local package management
I built zkzkAgent as a fully offline, privacy-first AI assistant for Linux (LangGraph + Ollama, no cloud). It already does natural language file/process/service management, Wi-Fi healing, voice I/O, and human-in-the-loop safety for risky actions.
Just added package management with these goals:
- 100% local/offline capable (no web search required for known packages)
- Human confirmation for every install/remove/upgrade
- Smart fallback order to avoid conflicts:
  1. Special cases (Postman → snap, VSCode → snap --classic, Discord → snap/flatpak, etc.)
  2. Flatpak preferred for GUI apps
  3. Snap when flatpak unavailable
  4. apt only for CLI/system tools
- Checks if already installed before proposing anything
- Dry-run style preview + full command output shown
- No blind execution — always asks "yes/no" for modifications
Example flow for "install postman":

→ Detects OS & internet (once)
→ Recognizes snap path → proposes "sudo snap install postman"
→ Shows preview & asks confirmation
→ Runs only after "yes"
→ Verifies with "postman --version"
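The fallback order can be sketched in Python like this (illustrative only, not zkzkAgent's actual code; the package tables are invented examples):

```python
# Hypothetical special-case table; the real agent's table differs.
SPECIAL_CASES = {
    "postman": ["sudo", "snap", "install", "postman"],
    "code": ["sudo", "snap", "install", "code", "--classic"],
}
GUI_APPS = {"discord", "gimp", "spotify"}  # example set

def propose_install(pkg, has_flatpak=True, has_snap=True):
    """Pick an install command following the fallback order.

    The returned command is only a proposal: the agent shows it and
    waits for an explicit "yes" before executing anything.
    """
    if pkg in SPECIAL_CASES:                      # 1. special cases first
        return SPECIAL_CASES[pkg]
    if pkg in GUI_APPS and has_flatpak:           # 2. flatpak for GUI apps
        return ["flatpak", "install", "-y", "flathub", pkg]
    if pkg in GUI_APPS and has_snap:              # 3. snap when no flatpak
        return ["sudo", "snap", "install", pkg]
    return ["sudo", "apt", "install", "-y", pkg]  # 4. apt for CLI/system tools

cmd = propose_install("postman")
```

Keeping the policy as a pure function like this makes the "no blind execution" guarantee easy to test separately from the confirmation UI.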
Repo: https://github.com/zkzkGamal/zkzkAgent
Would love feedback, especially: - What other packages/tools should have special handling? - Should it prefer flatpak even more aggressively? - Any scary edge cases I missed?
Thanks!