Show HN: Han – A Korean programming language written in Rust
A few weeks ago I saw a post about someone converting an entire C++ codebase to Rust using AI in under two weeks.
That inspired me — if AI can rewrite a whole language stack that fast, I wanted to try building a programming language from scratch with AI assistance.
I've also been noticing growing global interest in Korean language and culture, and I wondered: what would a programming language look like if every keyword was in Hangul (the Korean writing system)?
Han is the result. It's a statically-typed language written in Rust with a full compiler pipeline (lexer → parser → AST → interpreter + LLVM IR codegen).
It supports arrays, structs with impl blocks, closures, pattern matching, try/catch, file I/O, module imports, a REPL, and a basic LSP server.
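As a taste of the pipeline's first stage, here's a toy lexer sketch in Python (the Hangul keyword spellings below are my guesses at plausible Korean keywords, not necessarily Han's actual keyword set, and the real lexer is written in Rust):

```python
# Hypothetical Hangul keyword table: "함수" = function, "반환" = return,
# "만약" = if. Han's real keywords may differ.
KEYWORDS = {"함수": "FN", "반환": "RETURN", "만약": "IF"}

def lex(src: str):
    """Split on whitespace and tag each word as a keyword or identifier."""
    return [(KEYWORDS.get(word, "IDENT"), word) for word in src.split()]
```

The real pipeline would then feed these tokens into the parser to build the AST before interpretation or LLVM IR codegen.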
This is a side project, not a "you should use this instead of Python" pitch. Feedback on language design, compiler architecture, or the Korean keyword choices is very welcome.
https://github.com/xodn348/han
Show HN: Ichinichi – One note per day, E2E encrypted, local-first
Look, every journaling app out there wants you to organize things into folders and tags and templates. I just wanted to write something down every day.
So I built this. One note per day. That's the whole deal.
- Can't edit yesterday. What's done is done. Keeps you from fussing over old entries instead of writing today's.
- Year view with dots showing which days you actually wrote. It's a streak chart. Works better than it should.
- No signup required. Opens right up, stores everything locally in your browser. Optional cloud sync if you want it.
- E2E encrypted with AES-GCM, zero-knowledge, the whole nine yards.
Tech-wise: React, TypeScript, Vite, Zustand, IndexedDB. Supabase for optional sync. Deployed on Cloudflare. PWA-capable.
The name means "one day" in Japanese (いちにち).
The read-only past turned out to be the thing that actually made me stick with it. Can't waste time perfecting yesterday if yesterday won't let you in.
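That invariant is small enough to sketch. A toy model of the one-note-per-day, read-only-past store (illustrative names only, not Ichinichi's actual implementation):

```python
import datetime

class DayJournal:
    """Toy sketch: one note per day, and past days are read-only."""

    def __init__(self, today=None):
        self.today = today or datetime.date.today()
        self.notes = {}

    def write(self, date, text):
        # The whole design in one check: you can only touch today.
        if date != self.today:
            raise ValueError("past days are read-only")
        self.notes[date] = text
```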
Live at https://ichinichi.app | Source: https://github.com/katspaugh/ichinichi
Show HN: GitAgent – An open standard that turns any Git repo into an AI agent
We built GitAgent because we kept seeing the same problem: every agent framework defines agents differently, and switching frameworks means rewriting everything.
GitAgent is a spec that defines an AI agent as files in a git repo.
Three core files — agent.yaml (config), SOUL.md (personality/instructions), and SKILL.md (capabilities) — and you get a portable agent definition that exports to Claude Code, OpenAI Agents SDK, CrewAI, Google ADK, LangChain, and others.
What you get for free by being git-native:
1. Version control for agent behavior (roll back a bad prompt like you'd revert a bad commit)
2. Branching for environment promotion (dev → staging → main)
3. Human-in-the-loop via PRs (agent learns a skill → opens a branch → human reviews before merge)
4. Audit trail via git blame and git diff
5. Agent forking and remixing (fork a public agent, customize it, PR improvements back)
6. CI/CD with GitAgent validate in GitHub Actions
The CLI lets you run any agent repo directly:
npx @open-gitagent/gitagent run -r https://github.com/user/agent -a claude
The compliance layer is optional, but there if you need it — risk tiers, regulatory mappings (FINRA, SEC, SR 11-7), and audit reports via GitAgent audit.
Spec is at https://gitagent.sh, code is on GitHub.
Would love feedback on the schema design and what adapters people would want next.
Show HN: Learn Arabic with spaced repetition and comprehensible input
Sharing a friend's first-ever Rails application, dedicated to Arabic learning, built from 0 to 1. It pulls language-learning methods from Anki, comprehensible input, and more.
Show HN: Costly – Open-source SDK that audits your LLM API costs
Show HN: Replacing $50k manual forensic audits with a deterministic .py engine
I’m a software architect, and I recently built Exit Protocol (https://exitprotocols.com), an automated forensic accounting engine for high-conflict litigation.
Problem: If you get divorced and need to prove that a specific $250k in a heavily commingled joint bank account is your "separate property" (e.g., from a pre-marital startup exit), the burden of proof is strictly mathematical. Historically, this meant paying a forensic CPA $500/hour to dump years of blurry bank PDFs into Excel and manually trace every dollar. It takes weeks and routinely costs over $50,000.
I looked at the legal standard courts use for this—the Lowest Intermediate Balance Rule (LIBR)—and realized it wasn’t an accounting problem. It is a Distributed Systems state-machine problem.
Why didn't we just "throw AI at it"?
There are a hundred legal-tech startups right now trying to use LLMs to summarize bank data. In a courtroom, GenAI is a fatal liability. If an LLM hallucinates a single transaction, the entire ledger is inadmissible under the Daubert standard.
To make this court-ready, we had to build a strictly deterministic pipeline:
1. Vision-Native Ingestion (Beating Tesseract)
Bank statements are the final boss of OCR (merged cells, overlapping debit/credit columns). Standard linear OCR fails catastrophically. We built a spatial-grid OCR pipeline (using Azure Document Intelligence with a local Surya OCR fallback) that maps the geometric structure of the page. It reconstructs tabular ledgers perfectly, even from multi-generational "PDFs from hell."
2. The Deterministic Engine (LIBR)
The LIBR algorithm acts as a one-way ratchet. If an account balance drops below your separate property claim amount, your claim is permanently capped at that new floor. Subsequent marital deposits do not refill it (the "replenishment fallacy"). The engine replays thousands of transactions chronologically, continuously evaluating S_t = min(S_{t-1}, B_t).
3. Resolving Timestamp Ambiguity
Bank PDFs give you dates, not timestamps. If a $10k deposit and $10k withdrawal happen on the same day, order matters. We built a simulation toggle that forces "Worst Case" (withdrawals process first) vs "Best Case" sorting, establishing a mathematically irrefutable "Zone of Truth" for settlement negotiations.
4. Cryptographic Chain of Custody & Sovereign Mode
Lawyers are terrified of cloud SaaS breaches. We containerized the entire monolith (Django 5.0/Postgres/Celery) via Docker so enterprise firms can run it air-gapped on their own hardware (Sovereign Mode). Furthermore, every generated PDF dossier is sealed with a SHA-256 hash of the underlying data snapshot, proving to a judge that the output hasn't been tampered with since generation.
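The ratchet from step 2 and the ordering toggle from step 3 can be sketched together in a few lines of Python (a simplified illustration with invented transaction shapes, not the production engine):

```python
def libr_zone(opening_balance, separate_claim, dated_txns):
    """Replay signed transactions chronologically under the LIBR
    ratchet S_t = min(S_{t-1}, B_t). Run once with same-day
    withdrawals first ("worst case") and once with deposits first
    ("best case") to bound the claim."""
    def run(tie_break):
        balance = opening_balance
        claim = min(separate_claim, balance)
        for _, amount in sorted(dated_txns, key=tie_break):
            balance += amount
            # One-way ratchet: later deposits never refill the claim.
            claim = min(claim, balance)
        return claim

    worst = run(lambda t: (t[0], t[1]))   # withdrawals (negative) first
    best = run(lambda t: (t[0], -t[1]))   # deposits first
    return worst, best
```

With a same-day $10k withdrawal/deposit pair on a $10k claim, the two orderings produce a claim of $0 versus $10k — exactly the "Zone of Truth" spread described above.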
If you want to see the math in action, we set up a "Demo Sandbox" populated with a synthetic, highly complex 3-year commingled ledger. You can run the engine yourself here (Desktop recommended): https://exitprotocols.com/simulation/uplink/
Here is the Forensic Audit Dossier ("Attorney Work Product") the system generates from raw PDFs: https://exitprotocols.com/static/documents/Forensic_Audit_Sa...
I'd love feedback from the HN crowd on the architecture—specifically handling edge-case data ingestion and maintaining cryptographic integrity in B2B enterprise deployments.
Cheers!
Show HN: AI coding agent for VS Code with pay-as-you-go pricing – no subscription
I built LLM OneStop Code—an AI coding agent for VS Code that works like Claude Code or Cursor, but with one key difference: pure pay-as-you-go pricing. No monthly subscription required.
The problem with existing tools:
- Cursor: $20/month for Pro (even if you barely use it)
- GitHub Copilot: $10/month minimum
- Claude Code: Rate-limited by API usage tier + monthly caps
LLM OneStop Code charges only for what you use—billed in credits at cost + 5%. If you code 2 hours this month and 40 the next, you pay proportionally. No quota walls, no "upgrade to continue" prompts.
What it does:
- Multi-model AI agent (Claude, GPT-5, Gemini, etc.)
- Chat-based coding assistance in VS Code
- *Import and continue your existing Claude Code or Cursor sessions* — when you hit your hourly rate limit or quota, just import the conversation and keep working without losing context
- Stateless by design (no code stored on servers)
- Free plan available to try everything (100 credits/month)
Also running LLM OneStop as a unified API gateway—alternative to OpenRouter with accurate multi-modal cost tracking. If you prefer BYO API keys, we have a Connect plan available.
The thesis: developers want AI coding tools that scale with usage, not fixed subscriptions. You shouldn't pay $240/year if you only code on weekends. And when you hit a quota wall mid-debugging session, you shouldn't have to start over or wait until tomorrow.
Would love to hear from folks who've felt locked into coding tool subscriptions or hit quota limits mid-session.
Marketplace: https://marketplace.visualstudio.com/items?itemName=LLMOneSt... Docs: https://www.llmonestop.com/blog/guides/llm-onestop-vscode-ex...
Show HN: ZaneOps – A beautiful and fast self-hosted alternative to Vercel
Show HN: ngrep – grep plus word embeddings (Rust)
I got curious about a simple question: regular expressions are purely syntactic, but what happens if you add just a little bit of semantics?
To answer, I ended up building ngrep: a grep-like tool that extends regular expressions with a new operator ~(token) that matches a word by meaning using word2vec-style embeddings (FastText, GloVe, Wikipedia2Vec).
A simple demo: "~(big)+ \b~(animal;0.35)+\b" over Moby-Dick can find many ways used to refer to a large animal, surfacing "great whale", "enormous creature", "huge elephant" and so on. Pipe it through sort | uniq -c and the winner is, unsurprisingly, "great whale" :)
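The ~() operator's core test can be sketched as plain cosine similarity (a toy stand-in; `vectors` here is a hand-made table, whereas ngrep loads real FastText/GloVe/Wikipedia2Vec embeddings):

```python
import math

def semantic_match(token, target, vectors, threshold=0.35):
    """Toy sketch of ~(word;threshold): a token matches when the cosine
    similarity of its embedding to the target's exceeds the threshold."""
    a, b = vectors.get(token), vectors.get(target)
    if a is None or b is None:
        return False  # out-of-vocabulary tokens never match
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return norm > 0 and dot / norm >= threshold
```

The real engine embeds this predicate inside the regex automaton so it composes with quantifiers and lookarounds.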
Built in Rust on top of the awesome fancy-regex, and ~() composes with all standard operators (negative lookahead, quantifiers, etc.). Currently a PoC with many missing optimizations (e.g., no caching, no compilation to standard regex), obviously without the guarantees of plain regex and subject to the limits of w2v-style embeddings... but I thought it was worth sharing!
Show HN: Data-anim – Animate HTML with just data attributes
Hey HN, I built data-anim — an animation library where you never have to write JavaScript yourself.
You just write:
<div data-anim="fadeInUp">Hello</div>
That's it. Scroll-triggered fade-in animation, zero JS to write.

What it does:
- 30+ built-in animations (fade, slide, zoom, bounce, rotate, etc.)
- 4 triggers: scroll (default), load, click, hover
- 3-layer anti-FOUC protection (immediate style injection → noscript fallback → 5s timeout)
- Responsive controls: disable per device or swap animations on mobile
- TypeScript autocomplete for all attributes
- Under 3KB gzipped, zero dependencies
Why I built this:
I noticed that most animation needs on landing pages and marketing sites are simple — fade in on scroll, slide in from left, bounce on hover. But the existing options are either too heavy (Framer Motion ~30KB) or require JS boilerplate.
I also think declarative HTML attributes are the most AI-friendly animation format. When LLMs generate UI, HTML attributes are the output they hallucinate least on — no selector matching, no JS API to misremember, no script execution order to get wrong.
Docs: https://ryo-manba.github.io/data-anim/
Playground: https://ryo-manba.github.io/data-anim/playground/
npm: https://www.npmjs.com/package/data-anim
Happy to answer any questions about the implementation or design decisions.
Show HN: Cloak – send and receive secrets from OpenClaw
Built Cloak to let humans and OpenClaw agents send and receive secrets without leaking them into chat history.
It creates self-destructing secret links for API keys, tokens, passwords, and other credentials. Browser mode is zero-knowledge, and there’s also an API/CLI/OpenClaw plugin for agent workflows.
Repo: https://github.com/opsyhq/cloak Live: https://cloak.opsy.sh
Show HN: Json.express – Query and explore JSON in the browser, zero dependencies
I've been working on a browser-based JSON tool that supports a query language with dot notation, array slicing, wildcards, and recursive descent (..key). It also auto-generates TypeScript interfaces from your data.

Everything runs client-side — your data never leaves the browser. The entire app is a single HTML file with zero dependencies.

You can compress your JSON + query into a shareable URL, which is useful for bug reports or sharing API response structures with teammates.

Would love feedback on the query syntax and anything else. https://json.express
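For a flavor of how recursive descent (..key) behaves, here's a minimal Python sketch (function name invented; the actual engine also handles dot notation, slices, and wildcards):

```python
def rdescent(node, key):
    """Collect every value stored under `key` at any depth of
    parsed JSON (nested dicts and lists), in document order."""
    out = []
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                out.append(v)
            out.extend(rdescent(v, key))  # keep descending past matches
    elif isinstance(node, list):
        for item in node:
            out.extend(rdescent(item, key))
    return out
```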
Show HN: Pidrive – File storage for AI agents (mount S3, use ls/cat/grep)
Show HN: Ink – Deploy full-stack apps from AI agents via MCP or Skills
Hi HN, I built Ink, a full stack deployment platform where the primary users are AI agents, not humans.
We all know AI can write code, but deploying it still requires a human to wire things up: hosting, databases, DNS, and secrets. Ink gives agents those tools directly.
The agent calls "deploy" and the platform auto-detects the framework, builds it, deploys it, and returns a live URL at *.ml.ink. Here's a demo with Claude Code: https://www.youtube.com/watch?v=F6ZM_RrIaC0.
What Ink does that I haven't seen elsewhere:
- One agent skill for compute + databases + DNS + secrets + domains + usage + metrics + logs + scaling. The agent doesn't juggle separate providers — one account, one auth, one set of tools.
- DNS zone delegation. Delegate a zone once (e.g. dev.acme.com) and agents create any subdomain instantly — no manual adding DNS records each time, no propagation wait.
- Multiple agents and humans share one workspace and collaborate on projects. I envision a future where many agents work together; I'm working on a cool demo to share.
- Built-in git hosting. Agents push code and deploy without the human setting up GitHub first. No external account needed. (Of course if you're a developer you can store code on GitHub — that's the recommended pattern.)
You also have what you'd expect:
- UI with service observability designed for humans (logs, metrics, DNS).
- GitHub integration — push triggers auto-redeploy.
- Per-minute billing for CPU, memory, and egress. No per-seat, no per-agent.
- Error responses designed for LLMs. Structured reason codes with suggested next actions, not raw stack traces. When a deploy fails the agent reads the log, fixes it, and redeploys autonomously.
Try: https://ml.ink. Free $2 trial credits, no credit card. If you want to go further, here's a 20% discount code: "GOODFORTUNE".
Show HN: Paperctl – An arXiv CLI designed for agents
Show HN: KeyID – Free email and phone infrastructure for AI agents (MCP)
Show HN: Language Life – Learn a language by living a simulated life
Hi HN,
I've been building Language Life (https://www.languagelife.ai), an AI-powered language learning app where you type commands in your target language to navigate a simulated world — move through rooms, talk to NPCs, complete everyday tasks. The AI gives you real-time grammar feedback on every sentence you write.
Most language apps train you to recognize words, not produce them. Language Life makes you construct sentences from scratch. Type "abre la puerta" and your character opens the door. Order food at a restaurant and deal with the consequences of getting the grammar wrong.
It currently supports Spanish and Mandarin, with different modules (home, restaurant, market, etc.) at varying difficulty levels. Users can also create and share their own modules.
The core simulation loop is solid but I'm calling this an alpha — I'm still working on extended content across CEFR levels and refining the overall gameplay feel. I'd love feedback from this community, especially on the UX and the AI feedback quality.
Happy to answer any questions about the approach, the AI integration, or where I'm taking this. Join my discord https://discord.gg/gBKykJc4MW
Show HN: Channel Surfer – Watch YouTube like it’s cable TV
I know, it's a very first-world problem. But in my house, we have a hard time deciding what to watch. Too many options!
So I made this to recreate cable TV for YouTube. It runs in the browser: quickly import your subscriptions via a bookmarklet. No accounts, no sign-ins. Your data stays local.
Show HN: Context Gateway – Compress agent context before it hits the LLM
We built an open-source proxy that sits between coding agents (Claude Code, OpenClaw, etc.) and the LLM, compressing tool outputs before they enter the context window.
Demo: https://www.youtube.com/watch?v=-vFZ6MPrwjw#t=9s.
Motivation: Agents are terrible at managing context. A single file read or grep can dump thousands of tokens into the window, most of it noise. This isn't just expensive — it actively degrades quality. Long-context benchmarks consistently show steep accuracy drops as context grows (OpenAI's GPT-5.4 eval goes from 97.2% at 32k to 36.6% at 1M https://openai.com/index/introducing-gpt-5-4/).
Our solution uses small language models (SLMs): we look at model internals and train classifiers to detect which parts of the context carry the most signal. When a tool returns output, we compress it conditioned on the intent of the tool call—so if the agent called grep looking for error handling patterns, the SLM keeps the relevant matches and strips the rest.
If the model later needs something we removed, it calls expand() to fetch the original output. We also do background compaction at 85% window capacity and lazy-load tool descriptions so the model only sees tools relevant to the current step.
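As a rough illustration of intent-conditioned compression, here's a keyword-based stand-in (the real gateway trains SLM classifiers on model internals rather than matching substrings):

```python
def compress_tool_output(lines, intent_terms, keep_context=1):
    """Keep lines mentioning any intent term plus a little surrounding
    context; count what was dropped (what expand() would restore)."""
    keep = set()
    for i, line in enumerate(lines):
        if any(term in line.lower() for term in intent_terms):
            start = max(0, i - keep_context)
            end = min(len(lines), i + keep_context + 1)
            keep.update(range(start, end))
    kept = [lines[i] for i in sorted(keep)]
    return kept, len(lines) - len(kept)
```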
The proxy also gives you spending caps, a dashboard for tracking running and past sessions, and Slack pings when an agent is sitting there waiting on you.
Repo is here: https://github.com/Compresr-ai/Context-Gateway. You can try it with:
curl -fsSL https://compresr.ai/api/install | sh
Happy to go deep on any of it: the compression model, how the lazy tool loading works, or anything else about the gateway. Try it out and let us know how you like it!
Show HN: I built Wool, a lightweight distributed Python runtime
I spent a long time working in the payments industry, specifically on a rather niche reporting/aggregation platform with spiky workloads that were not easily parallelized. To pump as much data through our pipeline as possible, we had to rely on complex locking schemes across half a dozen or so not-so-micro services - keeping a clear mental picture of how the services interacted for a given data source was a major headache. This problem always intrigued me, even after I no longer worked at the company, and led to the development of Wool.
If you've worked with frameworks like Ray or Prefect, you're probably familiar with the promise of going from script to scale in two lines of code (or something along those lines). This is essentially the solution I was looking for: a framework with limited boilerplate that facilitated arbitrary distribution schemes within a single, coherent codebase. What I was hoping for, though, was something a little bit more focused - I wasn't working on ML pipelines and didn't need much else other than the distribution layer. This is where Wool comes in. While its API is very similar to those of Ray and Prefect, where it differentiates itself is in its scope and architecture.
First, Wool is not a task orchestrator. It provides push-based, best-effort, at-most-once execution. There is no built-in coordination state, retry logic, or durable task tracking. Those concerns remain application-defined. The beauty of Wool is that it looks and feels like native async Python, allowing you to use purpose-built libraries for your needs as you would for any other Python app (with some caveats).
Second, Wool was designed with speed in mind. Because it's not bloated with features, it's actually pretty fast, even in its current nascent state. Wool routines are dispatched directly to a decentralized peer-to-peer network of gRPC workers, which can distribute nested routines amongst themselves in turn. This results in low dispatch latencies and high throughput. I won't make any performance claims until I can assemble some more robust benchmarks, but running local workers on my M4 MacBook Pro (a trivial example, I know), I can easily achieve sub-millisecond dispatch latencies.
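To illustrate the "looks like native async Python" goal, here's a hypothetical sketch — the decorator name and local-only dispatch are invented for this example, and Wool's real dispatcher pushes calls to gRPC workers instead:

```python
import asyncio

def routine(fn):
    """Hypothetical stand-in for a Wool-style decorator. A real
    dispatcher would serialize the call and push it to a peer-to-peer
    gRPC worker; here we just await the function locally."""
    async def wrapper(*args, **kwargs):
        return await fn(*args, **kwargs)
    return wrapper

@routine
async def fib(n: int) -> int:
    if n < 2:
        return n
    # Nested routines fan out; in Wool, workers would redistribute these.
    a, b = await asyncio.gather(fib(n - 1), fib(n - 2))
    return a + b

result = asyncio.run(fib(10))  # 55
```

The point of the shape: the distributed version reads like ordinary asyncio code, with the distribution layer hidden behind the decorator.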
Anyway, check it out; any and all feedback is welcome. Regarding docs: the code is the documentation for now, but I promise I'll sort that out soon. I've got plenty of ideas for next steps, but it's always more fun when people actually use what you've built, so I'm open to suggestions for impactful features.
-Conrad
Show HN: Zap Code – AI code generator that teaches kids real HTML/CSS/JS
Zap Code generates working HTML/CSS/JS from plain English descriptions, designed for kids ages 8-16.
The core loop: kid types "make a space shooter game", AI generates the code, live preview renders it immediately. Three interaction modes - visual-only tweaks, read-only code view with annotations, and full code editing with AI autocomplete.
Technical details: Next.js frontend, Node.js backend, Monaco editor simplified for younger users, sandboxed iframe for preview execution (no external API calls from generated code). Progressive complexity engine uses a skill model to decide when to surface more advanced features.
The main focus was the gap between block-based coding (Scratch, etc.) and actual programming. Block tools are great for ages 6-10, but the transition to real code is rough. This tries to smooth that curve by letting kids interact with real output first, then gradually exposing the code behind it.
Limitations: AI-generated code isn't always clean or idiomatic. Content is filtered for age-appropriateness, but it's not perfect. Collaboration features are still basic. The complexity engine needs more data to tune well.
Free tier, 3 projects. Pro at $9.99/mo.
Show HN: Auto-Save Claude Code Sessions to GitHub Projects
I wanted a way to preserve Claude Code sessions. Once a session ends, the conversation is gone — no searchable history, no way to trace back why a decision was made in a specific PR.
The idea is simple: one GitHub Issue per session, automatically linked to a GitHub Projects board. Every prompt and response gets logged as issue comments with timestamps. Since the session lives as a GitHub Issue in the same ecosystem, you can cross-reference PRs naturally — same search, same project board.
npx claude-session-tracker
The installer handles everything: creates a private repo, sets up a Projects board with status fields, and installs Claude Code hooks globally. It requires the gh CLI — if it's missing, the installer detects that and walks you through setup.
Why GitHub, not Notion/Linear/Plane?
I actually built integrations for all three first. Linking sessions back to PRs was never smooth on any of them, but the real dealbreaker was API rate limits. This fires on every single prompt and response — essentially a timeline — so rate limits meant silently dropped entries. I shipped all three, hit the same wall each time, and ended up ripping them all out. GitHub's API rate limits are generous enough that a single user's session traffic won't come close to hitting them. (GitLab would be interesting to support eventually.)
*Design decisions*
- No MCP. I didn't want to consume context window tokens for session tracking. Everything runs through Claude Code's native hook system.
- Fully async. All hooks fire asynchronously — zero impact on Claude's response latency.
- Idempotent installer. Re-running just reuses existing config. No duplicates.
What it tracks
- Creates an issue per session, linked to your Projects board
- Logs every prompt/response with timestamps
- Auto-updates issue title with latest prompt for easy scanning
- `claude --resume` reuses the same issue
- Auto-closes idle sessions (30 min default)
- Pause/resume for sensitive work
Show HN: What was the world listening to? Music charts, 20 countries (1940–2025)
I built this because I wanted to know what people in Japan were listening to the year I was born. That question spiraled: how does a hit in Rome compare to what was charting in Lagos the same year? How did sonic flavors propagate as streaming made musical influence travel faster than ever? 88mph is a playable map of music history: 230 charts across 20 countries, spanning 8 decades (1940–2025). Every song is playable via YouTube or Spotify. It's open source and I'd love help expanding it — there's a link to contribute charts for new countries and years. The goal is to crowdsource a complete sonic atlas of the world.
Show HN: Axe – A 12MB binary that replaces your AI framework
I built Axe because I got tired of every AI tool trying to be a chatbot.
Most frameworks want a long-lived session with a massive context window doing everything at once. That's expensive, slow, and fragile. Good software is small, focused, and composable... AI agents should be too.
Axe treats LLM agents like Unix programs. Each agent is a TOML config with a focused job: code reviewer, log analyzer, commit message writer. You can run them from the CLI, pipe data in, and get results out; chain them together with pipes; or trigger them from cron, git hooks, or CI.
What Axe is:
- 12MB binary, two dependencies. No framework, no Python, no Docker (unless you want it)
- Stdin piping: `git diff | axe run reviewer` just works
- Sub-agent delegation: agents call other agents via tool use, depth-limited
- Persistent memory: if you want, agents can remember across runs without you managing state
- MCP support: Axe can connect any MCP server to your agents
- Built-in tools: web_search and url_fetch out of the box
- Multi-provider: bring what you love to use (Anthropic, OpenAI, Ollama, or anything in models.dev format)
- Path-sandboxed file ops: keeps agents locked to a working directory
Written in Go. No daemon, no GUI.
What would you automate first?
Show HN: Hedra – an open-world 3D game I wrote from scratch before LLMs
With the current inflow of LLM-aided software, I thought I would share a cool "hand-coded" project from the previous era (I wrote this in high school, roughly eight years ago).
Hedra is an open-world 3D game written from scratch using only OpenGL and C#. It has quite a few cool features: infinite generation, skinned animated mesh rendering, post-processing effects, etc. The physics engine was also originally written from scratch, but I swapped it for the more reliable Bullet physics.
Show HN: SupplementDEX – The Evidence-Based Supplement Database
Hi, this is a work in progress, but it already determines supplement efficacy for 500 conditions.
Things you can do:
- search for a condition -> find which supplements are effective -> see which studies indicate they are effective -> read individual study summaries
- search for a supplement -> see effectiveness table, dosing, safety, dietary sources, mechanisms of action (+ browse all original sources)
let me know what you think
Show HN: OneCLI – Vault for AI Agents in Rust
We built OneCLI because AI agents are being given raw API keys. And it's going about as well as you'd expect. We figured the answer isn't "don't give agents access," it's "give them access without giving them secrets."
OneCLI is an open-source gateway that sits between your AI agents and the services they call. You store your real credentials once in OneCLI's encrypted vault, and give your agents placeholder keys. When an agent makes an HTTP call through the proxy, OneCLI matches the request by host/path, verifies the agent should have access, swaps the placeholder for the real credential, and forwards the request. The agent never touches the actual secret. It just uses CLI or MCP tools as normal.
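The swap step can be sketched in a few lines (names and vault shape invented for illustration, not OneCLI's actual code):

```python
def rewrite_headers(headers, host, vault):
    """If the request targets a vaulted host and the Authorization
    header carries the agent's placeholder key, substitute the real
    credential before forwarding. `vault` is a hypothetical
    {host: (placeholder, real_key)} table."""
    rule = vault.get(host)
    if rule is None:
        return headers  # unknown host: forward untouched
    placeholder, real_key = rule
    out = dict(headers)  # copy so the agent-visible headers stay clean
    auth = out.get("Authorization", "")
    if placeholder in auth:
        out["Authorization"] = auth.replace(placeholder, real_key)
    return out
```

The agent only ever sees the placeholder; the real key exists solely inside the proxy's request path.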
Try it in one line: docker run --pull always -p 10254:10254 -p 10255:10255 -v onecli-data:/app/data ghcr.io/onecli/onecli
The proxy is written in Rust, the dashboard is Next.js, and secrets are AES-256-GCM encrypted at rest. Everything runs in a single Docker container with an embedded Postgres (PGlite), no external dependencies. Works with any agent framework (OpenClaw, NanoClaw, IronClaw, or anything that can set an HTTPS_PROXY).
We started with what felt most urgent: agents shouldn't be holding raw credentials. The next layer is access policies and audit, defining what each agent can call, logging everything, and requiring human approval before sensitive actions go through.
It's Apache-2.0 licensed. We'd love feedback on the approach, and we're especially curious how people are handling agent auth today.
GitHub: https://github.com/onecli/onecli Site: https://onecli.sh
Show HN: BirdDex – Pokémon Go, but with real life birds
Hey HN!
I've always loved games where you collect various creatures and chase 100% completion (ahem, Pokémon)
I made BirdDex to try to bring the fun of those games to real life.
Here’s how it works: you snap a photo of a bird, identify the species with AI, and add it to your personal BirdDex collection.
Each photo earns XP based on the species' rarity, and your goal is to try and “catch” all the birds in your region (lists pulled for every country from Wikipedia).
Would love any feedback or thoughts!
Show HN: QKD eavesdropper detector using Krylov complexity – open-source Python
I built a framework that detects eavesdroppers on quantum key distribution channels by reading the scrambling "fingerprint" embedded in the QBER error timeline, no new hardware required.

The core idea: every QKD channel has a unique Lanczos coefficient sequence derived from its Hamiltonian. An eavesdropper perturbs the Hamiltonian, which shifts the coefficients in a detectable and unforgeable way (Krylov distortion ΔK).

Validated on 181,606 experimental QBER measurements from a deployed fiber-optic system, AUC = 0.981.

Based on a 12-paper Zenodo preprint series covering the full theoretical stack: Physical Bridge proof, one-way function property, universality across 8 Hamiltonian families, open-system extension via Lindblad, and Loschmidt echo validation. Paper series: https://zenodo.org/records/18940281
Show HN: Got tired of AI copilots just autocompleting, and built Glass Arc
Hey HN,
Over the last few months, I realized I was paying $20/month for an AI that essentially just acts as a really good autocomplete. It waits for me to type, guesses the next block, and stops. But software engineering isn't just writing syntax, it's managing the file system, running terminal commands, and debugging stack traces.
So I pivoted my project and built Glass Arc. It’s an agentic workspace that lives directly inside VS Code.
Instead of just generating text, I gave it actual agency over the local environment (safely):
1. Agentic Execution: You give it an intent, and it drafts the architecture across multiple files, managing the dependency tree and running standard terminal commands to scaffold the infrastructure.
2. Runtime Auto-Heal: This was the hardest part. When a fatal exception hits the terminal, Glass Arc intercepts the stack trace, analyzes the crash context, writes the fix, and injects it.
3. Multiplayer: Generates a secure vscode:// deep-link so you can drop it in Slack and sync your team's IDEs into the same live session.
4. Pay-as-you-go: I scrapped the standard $20/mo SaaS model. It runs on a credit system—you only pay when the Architect is actively modifying your system. (Signing in via GitHub drops 200 free credits to test it out).
I’d love for you to try to break it, test the auto-healing, and tear apart the architecture. What am I missing?
Live on the VS Code Marketplace. Install: https://www.glassarc.dev/