Show HN: Moongate – Ultima Online server emulator in .NET 10 with Lua scripting
I've been building a modern Ultima Online server emulator from scratch. It's not feature-complete (no combat, no skills yet), but the foundation is solid and I wanted to share it early.
What it does today:
- Full packet layer for the classic UO client (login, movement, items, mobiles)
- Lua scripting for item behaviors (double-click a potion, open a door — all defined in Lua, no C# recompile)
- Spatial world partitioned into sectors with delta sync (only sends packets for new sectors when crossing boundaries)
- Snapshot-based persistence with MessagePack
- Source generators for automatic DI wiring, packet handler registration, and Lua module exposure
- NativeAOT support — the server compiles to a single native binary
- Embedded HTTP admin API + React management UI
- Auto-generated doors from map statics (same algorithm as ModernUO/RunUO)
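The sector delta-sync idea above can be sketched in a few lines. This is a hypothetical Python illustration of the technique, not Moongate's actual C# implementation; the sector size and visibility radius are assumed values.

```python
# Hypothetical sketch of sector-based delta sync (not Moongate's real code):
# the world is split into fixed-size sectors, and when a mobile crosses a
# sector boundary, packets are sent only for sectors that became visible.

SECTOR_SIZE = 64  # tiles per sector side (assumed value)

def sector_of(x, y):
    """Map a world coordinate to its sector coordinate."""
    return (x // SECTOR_SIZE, y // SECTOR_SIZE)

def visible_sectors(sector, radius=1):
    """Sectors within `radius` of the given sector (a 3x3 block by default)."""
    sx, sy = sector
    return {(sx + dx, sy + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)}

def delta_on_move(old_pos, new_pos):
    """Return (entered, left) sector sets for a move.
    Only `entered` sectors need fresh item/mobile packets."""
    old_vis = visible_sectors(sector_of(*old_pos))
    new_vis = visible_sectors(sector_of(*new_pos))
    return new_vis - old_vis, old_vis - new_vis

# Moving within a sector produces no delta; crossing a boundary does.
assert delta_on_move((10, 10), (20, 10)) == (set(), set())
entered, left = delta_on_move((63, 10), (64, 10))
```

Crossing one boundary with a 3x3 visibility window exposes exactly one new column of sectors, which is why the sync cost stays constant per move.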
Tech stack: .NET 10, NativeAOT, NLua, MessagePack, DryIoc, Kestrel
What's missing: Combat, skills, weather integration, NPC AI. This is still early — the focus so far has been on getting the architecture right so adding those systems doesn't require rewiring everything.
Why not just use ModernUO/RunUO? Those are mature and battle-tested. I started this because I wanted to rethink the architecture from scratch: strict network/domain separation, event-driven game loop, no inheritance-heavy item hierarchies, and Lua for rapid iteration on game logic without recompiling.
GitHub: https://github.com/moongate-community/moongatev2
Show HN: Kula – Lightweight, self-contained Linux server monitoring tool
Zero dependencies. No external databases. Single binary. Just deploy and go. I needed something that would allow for real-time monitoring, with installation as simple as dropping a single file and running it. That's exactly what Kula is. Kula is the Polish word for "ball," as in "crystal ball." The project is in constant development, but I'm already using it on multiple servers in production. It still has some rough edges and needs to mature, but I wanted to share it with the world now—perhaps someone else will find it useful and be willing to help me develop it by testing or providing feedback. Cheers! GitHub: https://github.com/c0m4r/kula
Show HN: 1v1 coding game that LLMs struggle with
This is a game I wish I had as a kid learning programming. The concept is fairly similar to other coding games like Screeps, but instead of a complex world with intricate mechanics, Yare is a lot more minimal and approachable, with quick 1v1 matches under 3 minutes.
It's purely a passion project with no monetization aspirations. And it's open source: https://github.com/riesvile/yare
The first version 'launched' several years ago and I got some good feedback here: https://news.ycombinator.com/item?id=27365961 that I iterated on.
The latest overhaul is the result of simplifying everything while still keeping the skill ceiling high. And at least the LLMs seem to struggle with this challenge for now (I ran a small tournament between major models - results and details here: https://yare.io/ai-arena).

I'd love to hear your thoughts.
Show HN: Claude-replay – A video-like player for Claude Code sessions
I got tired of sharing AI demos with terminal screenshots or screen recordings.
Claude Code already stores full session transcripts locally as JSONL files. Those logs contain everything: prompts, tool calls, thinking blocks, and timestamps.
I built a small CLI tool that converts those logs into an interactive HTML replay.
You can step through the session, jump through the timeline, expand tool calls, and inspect the full conversation.
The output is a single self-contained HTML file — no dependencies. You can email it, host it anywhere, embed it in a blog post, and it works on mobile.
Repo: https://github.com/es617/claude-replay
Example replay: https://es617.github.io/assets/demos/peripheral-uart-demo.ht...
Show HN: I open-sourced my Steam game, 100% written in Lua, engine is also open
Homebrew engine https://github.com/willtobyte/carimbo
Show HN: Reconstruct any image using primitive shapes, runs in-browser via WASM
I built a browser-based port of fogleman/primitive — a Go CLI tool that approximates images using primitive shapes (triangles, ellipses, beziers, etc.) via a hill-climbing algorithm. The original tool requires building from source and running from the terminal, which isn't exactly accessible. I compiled the core logic to WebAssembly so anyone can drop an image and watch it get reconstructed shape by shape, entirely client-side with no server involved.
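The hill-climbing core is easy to sketch. Below is a toy Python version (not the actual Go/WASM code): it approximates a tiny grayscale grid with random axis-aligned rectangles, keeping only candidates that lower the squared error. The real tool supports richer shapes and mutates candidates rather than just sampling them.

```python
import random

# Toy sketch of the idea behind fogleman/primitive: repeatedly add the
# shape that best reduces error against the target image. Here the "image"
# is a flat list of grayscale values and shapes are rectangles.

W = H = 16

def score(canvas, target):
    """Sum of squared per-pixel error."""
    return sum((c - t) ** 2 for c, t in zip(canvas, target))

def draw(canvas, rect, shade):
    """Return a copy of the canvas with the rectangle filled in."""
    out = canvas[:]
    x0, y0, x1, y1 = rect
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y * W + x] = shade
    return out

def random_rect():
    x0, y0 = random.randrange(W), random.randrange(H)
    x1, y1 = random.randrange(x0 + 1, W + 1), random.randrange(y0 + 1, H + 1)
    return (x0, y0, x1, y1)

def add_shape(canvas, target, tries=200):
    """Greedy climb: keep whichever candidate lowers error the most."""
    best, best_err = canvas, score(canvas, target)
    for _ in range(tries):
        cand = draw(canvas, random_rect(), random.choice((0, 255)))
        err = score(cand, target)
        if err < best_err:
            best, best_err = cand, err
    return best

random.seed(1)
target = [255 if x < 8 else 0 for y in range(H) for x in range(W)]
canvas = [128] * (W * H)
for _ in range(10):
    canvas = add_shape(canvas, target)
```

Each added shape can only keep or improve the score, which is why the reconstruction converges visibly shape by shape.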
Demo: https://primitive-playground.taiseiue.jp/ Source: https://github.com/taiseiue/primitive-playground
Curious if anyone has ideas for shapes or features worth adding.
Show HN: A trainable, modular electronic nose for industrial use
Hi HN,
I’m part of the team building Sniphi.
Sniphi is a modular digital nose that uses gas sensors and machine-learning models to convert volatile organic compound (VOC) data into a machine-readable signal that can be integrated into existing QA, monitoring, or automation systems. The system is currently in an R&D phase, but already exists as working hardware and software and is being tested in real environments.
The project grew out of earlier collaborations with university researchers on gas sensors and odor classification. What we kept running into was a gap between promising lab results and systems that could actually be deployed, integrated, and maintained in real production environments.
One of our core goals was to avoid building a single-purpose device. The same hardware and software stack can be trained for different use cases by changing the training data and models, rather than the physical setup. In that sense, we think of it as a “universal” electronic nose: one platform, multiple smell-based tasks.
Some design principles we optimized for:
- Composable architecture: sensor ingestion, ML inference, and analytics are decoupled and exposed via APIs/events
- Deployment-first thinking: designed for rollout in factories and warehouses, not just controlled lab setups
- Cloud-backed operations: model management, monitoring, updates run on Azure, which makes it easier to integrate with existing industrial IT setups
- Trainable across use cases: the same platform can be retrained for different classification or monitoring tasks without redesigning the hardware
One public demo we show is classifying different coffee aromas, but that’s just a convenient example. In practice, we’re exploring use cases such as:
- Quality control and process monitoring
- Early detection of contamination or spoilage
- Continuous monitoring in large storage environments (e.g. detecting parasite-related grain contamination in warehouses)
Because this is a hardware system, there’s no simple way to try it over the internet. To make it concrete, we’ve shared:
- A short end-to-end demo video showing the system in action (YouTube)
- A technical overview of the architecture and deployment model: https://sniphi.com/
At this stage, we’re especially interested in feedback and conversations with people who:
- Have deployed physical sensors at scale
- Have run into problems that smell data might help with
- Are curious about piloting or testing something like this in practice
We’re not fundraising here. We’re mainly trying to learn where this kind of sensing is genuinely useful and where it isn’t.
Happy to answer technical questions.
Show HN: MysteryMaker AI
I built Mystery Maker AI as a side project. It's a web-based party game that lets you team up with friends (like Jackbox) to interrogate 4 suspects and solve a murder mystery.
There are 4 mysteries to solve, including some with goofy parodies of presidents / billionaires, and some that you can fork and replace the characters with AI clones of your friends for a good time. As the name implies, you can also create your own mysteries from scratch if that's your thing!
It's free to demo, and the full game costs $15. Unfortunately, sign in is required for the demo, to keep my AI costs in check.
Hope you enjoy it as much as my friends and family did. It's a perfect excuse to plan a get-together and try something new!
Show HN: NeoNetrek – modernizing the internet's first team game (1988)
Netrek is a multiplayer space battle game from 1988–89, widely considered the first Internet team game. It predates commercial online gaming by years, ran passionate leagues for decades, and is still technically alive — but getting a server up has always required real effort, and there’s been no easy way to just play it in a browser. NeoNetrek is my attempt to change that.

Server: Based on the original vanilla Netrek C server, modernized with simpler configuration and containerized for one-command cloud deployment. There are ready-made templates for Fly.io and Railway, and public servers already running in LAX, IAD, NRT, and LHR. Anyone can self-host using the deploy templates in the GitHub org.

Client: A new 3D browser-based client — no downloads, no plugins, connects via WebSocket. I built it starting from Andrew Sillers’ html5-netrek (github.com/apsillers/html5-netrek) as a foundation and took it in a new direction with 3D rendering.

Site: neonetrek.com covers lore, factions, ship classes, ranks, and an Academy to ease the notoriously steep learning curve.

A significant portion of the code and content was developed with Claude as a coding partner, which felt fitting for a project about preserving internet history.

GitHub org: https://github.com/neonetrek

Play now: https://neonetrek.com
Show HN: Graph-Oriented Generation – Beating RAG for Codebases by 89%
LLMs are better at being the "mouth" than the "brain" and I can prove it mathematically. I built a deterministic graph engine that offloads reasoning from the LLM. It reduces token usage by 89% and makes a tiny 0.8B model trace enterprise execution paths flawlessly. Here is the white paper and the reproducible benchmark.
Show HN: Swarm – Program a colony of 200 ants using a custom assembly language
We built an ant colony simulation as an internal hiring challenge at Moment and decided to open it up publicly.
You write a program in a custom assembly-like instruction set (we call it ant-ssembly) that controls 200 ants. Each ant can sense nearby cells (food, pheromones, home, other ants) but has no global view. The only coordination mechanism is pheromone trails, which ants can emit and sense, but that's it. Your program runs identically on every ant.
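The execution model can be illustrated with a tiny interpreter. This is a hypothetical Python toy: the instruction names (EMIT, SENSE, MOVE) and the 1D map are invented for illustration and are not the real ant-ssembly instruction set.

```python
# Toy interpreter illustrating the model (NOT the real ant-ssembly set):
# every ant runs the same program, sees only its own cell, and coordinates
# solely through a shared, decaying pheromone grid.

def run_tick(program, ants, pheromone, width):
    for ant in ants:
        for op, arg in program:
            if op == "EMIT":           # deposit pheromone at current cell
                pheromone[ant["x"]] += arg
            elif op == "SENSE":        # read local pheromone into a register
                ant["r0"] = pheromone[ant["x"]]
            elif op == "MOVE":         # step left/right, clamped to the map
                ant["x"] = max(0, min(width - 1, ant["x"] + arg))
    # pheromone decays each tick, so stale trails fade
    for i in range(width):
        pheromone[i] = max(0, pheromone[i] - 1)

width = 10
pheromone = [0] * width
ants = [{"x": 0, "r0": 0}, {"x": 0, "r0": 0}]
program = [("EMIT", 3), ("MOVE", 1)]  # lay a trail while walking right
for _ in range(3):
    run_tick(program, ants, pheromone, width)
```

After three ticks, the two ants have walked right in lockstep, leaving a decaying gradient behind them: the stigmergy that strategies have to exploit.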
The goal is to collect the highest percentage of food across a set of maps. Different map layouts (clustered food, scattered, obstacles) reward very different strategies. The leaderboard is live.
Grand prize is a trip to Maui for two paid for by Moment. Challenge closes March 12.
Curious what strategies people discover. We've seen some surprisingly clever emergent behavior internally.
Show HN: Interactive 3D globe of EU shipping emissions
The article provides an overview of the Seafloor project, which aims to map the entire seafloor using various technologies and data sources, creating a comprehensive, publicly available dataset to improve our understanding of the world's oceans and their ecosystems.
Show HN: Free salary converter with 3,400 neighborhood comparisons in 182 cities
Hi HN, I built this because when I was considering relocating, I couldn't find a salary comparison tool that went deeper than city-level averages. A $100K salary in "New York" means very different things depending on whether you live in Manhattan or Queens.
What it does: enter your current city, a target city, and your salary. It calculates an equivalent salary adjusted for cost of living, local taxes, rent, and currency exchange — down to the neighborhood level.
Some things that make it different:
- 3,400+ neighborhoods across 182 cities, not just city averages
- Single and family mode (adds childcare, larger apartments)
- Side-by-side tax breakdowns by country
- 67 currencies with real-time conversion
- Also has a Retire Abroad calculator for retirement planning

The data comes from a combination of public sources (OECD, local government stats, housing indices) cross-referenced and normalized to a cost-of-living index where New York = 100.
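The core conversion reduces to scaling by the ratio of cost-of-living indices. This is a simplified sketch with made-up index numbers, not the site's actual data or full methodology (which also folds in taxes, rent, and family costs):

```python
# Simplified sketch: normalize both neighborhoods to a cost-of-living
# index (New York = 100), then scale the salary by the ratio.

COL_INDEX = {  # illustrative numbers, not the site's actual dataset
    "Manhattan": 130.0,
    "Queens": 95.0,
    "Berlin-Mitte": 72.0,
}

def equivalent_salary(salary, from_hood, to_hood, fx_rate=1.0):
    """Salary needed in `to_hood` for the same purchasing power,
    converted at `fx_rate` (target currency per source currency)."""
    ratio = COL_INDEX[to_hood] / COL_INDEX[from_hood]
    return salary * ratio * fx_rate

# $100K in Manhattan stretches much further in Queens:
assert round(equivalent_salary(100_000, "Manhattan", "Queens")) == 73_077
```

The neighborhood-level indices are exactly what makes the "New York means Manhattan or Queens" distinction from the intro computable.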
No signup, no paywall, no ads. Would love feedback on the methodology or data accuracy — especially from people who've actually relocated between cities we cover.
Show HN: Modembin – A pastebin that encodes your text into real FSK modem audio
A fun weekend project: https://www.modembin.com
It's a pastebin, except text/files are encoded into .wav files using real FSK modem audio. Image sharing is supported via Slow-Scan Television (SSTV), a method of transmitting images as FM audio originally used by ham radio operators.
Everything runs in the browser with zero audio libraries; the encoding is vanilla TypeScript sine-wave math: phase-continuous FSK with proper 8-N-1 framing, fractional bit accumulation for non-integer sample rates, and a quadrature FM discriminator on the decode side (no FFT windowing or Goertzel). The only dependency is lz-string for URL-sharing compression.
It supports Bell 103 (300 baud), Bell 202 (1200 baud), V.21, RTTY/Baudot, Caller ID (Bellcore MDMF), DTMF, Blue Box MF tones, and SSTV image encoding. There's also a chat mode where messages are transmitted as actual Bell 103 audio over WebSocket... or use the acoustic mode for speaker-to-mic coupling for in-room local chat.
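The two tricks mentioned above (phase continuity and fractional bit accumulation) fit in a short sketch. This is a Python illustration rather than the site's TypeScript, using Bell 103 originate-side mark/space frequencies:

```python
import math

# Sketch of phase-continuous FSK: instead of restarting a sine per bit
# (which clicks at bit boundaries), carry one phase accumulator across
# frequency changes. A fractional-sample accumulator keeps bit edges
# aligned even when samples-per-bit is non-integer.

SAMPLE_RATE = 8000
BAUD = 300
MARK, SPACE = 1270.0, 1070.0  # Bell 103 originate tones

def fsk_modulate(bits):
    samples, phase = [], 0.0
    samples_per_bit = SAMPLE_RATE / BAUD  # 26.666..., non-integer
    acc = 0.0  # fractional bit accumulation
    for bit in bits:
        freq = MARK if bit else SPACE
        acc += samples_per_bit
        n = int(acc)       # whole samples to emit for this bit
        acc -= n           # carry the fraction into the next bit
        for _ in range(n):
            samples.append(math.sin(phase))
            phase += 2 * math.pi * freq / SAMPLE_RATE  # never resets
    return samples

wave = fsk_modulate([1, 0, 1, 1, 0, 0, 1, 0])
```

Because the phase never resets, the waveform stays continuous at every mark/space transition, which is what keeps the demodulated signal clean.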
Show HN: Cross-Claude MCP – Let multiple Claude instances talk to each other
I built an MCP server that lets Claude AI instances communicate through a shared message bus. Each instance registers with a name, then they can send messages, create channels, share data, and wait for replies — like a lightweight Slack for AI sessions.
The problem it solves: if you use Claude Code in multiple terminals (or across Claude.ai and Desktop), each session is completely isolated. There's no way for one Claude to ask another for help, delegate work, or coordinate on a shared task.
With Cross-Claude MCP, you can do things like:
- Have a "builder" Claude send code to a "reviewer" Claude and get feedback
- Run parallel Claude sessions on frontend/backend that post status updates to each other
- Let a data analysis Claude share findings with a content writing Claude
Two modes: local (stdio + SQLite, zero config) or remote (HTTP + PostgreSQL for teams/cross-machine). Works with Claude Code, Claude.ai, and Claude Desktop.
~500 lines of JavaScript, MIT licensed.
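The local mode's shared-mailbox idea can be sketched quickly. This is a hypothetical Python miniature, not the project's actual JavaScript or schema:

```python
import sqlite3

# Hypothetical miniature of the local mode described above: a SQLite table
# acting as a shared mailbox between named agent instances.

def connect(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS messages (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        sender TEXT, recipient TEXT, body TEXT, read INTEGER DEFAULT 0)""")
    return db

def send(db, sender, recipient, body):
    db.execute("INSERT INTO messages (sender, recipient, body) VALUES (?,?,?)",
               (sender, recipient, body))
    db.commit()

def inbox(db, recipient):
    """Fetch unread messages for an instance and mark them read."""
    rows = db.execute(
        "SELECT id, sender, body FROM messages WHERE recipient=? AND read=0",
        (recipient,)).fetchall()
    db.executemany("UPDATE messages SET read=1 WHERE id=?",
                   [(r[0],) for r in rows])
    db.commit()
    return [(sender, body) for _, sender, body in rows]

db = connect()
send(db, "builder", "reviewer", "please review auth.py")
assert inbox(db, "reviewer") == [("builder", "please review auth.py")]
assert inbox(db, "reviewer") == []  # already consumed
```

Because each instance only polls its own name, sessions stay decoupled; the database file is the entire coordination layer.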
Show HN: WebBridge turns any website into MCP tools by recording browser traffic
I am a 40+-year-old-slightly-techie-middle-aged-man who occasionally writes code to make life easier. I was a developer once - a very long time ago. I am an engineer by degree and my instinct goes toward solutions. I work in tech - but on the "_request_ developers what to build" side, not the "actually build it" side. With AI, I am now able to build more.
So I built WebBridge (yes - not so fancy name there). (Well - Claude built it. I directed. Like every PM does.)
What it actually does:
1. You install a Chrome extension
2. You browse to a site you're logged into - your library, your lab results portal, whatever
3. You click Record, do the thing you want to automate, click Stop
4. Claude reads the captured API traffic and generates a permanent MCP server
5. That server works with any MCP client - Claude (Cowork/Code), Cursor, VS Code, Windsurf, Cline, you name it
The whole thing takes about 10 minutes. No code written by you.
This is for non-tech folks who live inside AI providers, who just want to use it and move on. Legal analysts, market researchers, market watchers, marketing and competitive intelligence, and anyone who wants to use a specific website for a specific purpose, repeatedly.
The README has some use cases showcased: "Public Library Search" and "Legal Compliance Auditing."
There may not be an exact equivalent anywhere to what I purpose-built. I'd welcome being proven wrong on that.
Feedback is welcome - that's why I'm posting.
Show HN: Pg_sorted_heap–Physically sorted PostgreSQL with builtin vector search
The article discusses a PostgreSQL extension called 'pg_sorted_heap' that provides an alternative to the standard B-tree index. It offers improved performance for certain types of queries by storing data in a heap structure sorted by the indexed column.
Show HN: Jido 2.0, Elixir Agent Framework
Hi HN!
I'm the author of an Elixir Agent Framework called Jido. We reached our 2.0 release this week, shipping a production-hardened framework to build, manage and run Agents on the BEAM.
Jido now supports a host of Agentic features, including:
- Tool Calling and Agent Skills
- Comprehensive multi-agent support across distributed BEAM processes with Supervision
- Multiple reasoning strategies including ReAct, Chain of Thought, Tree of Thought, and more
- Advanced workflow capabilities
- Durability through a robust Storage and Persistence layer
- Agentic Memory
- MCP and Sensors to interface with external services
- Deep observability and debugging capabilities, including full stack OTel
I know agent frameworks can be considered a bit stale, but there hasn't been a major agent framework release on the BEAM. With a growing realization that the architecture of the BEAM is a good match for agentic workloads, the time was right to ship this release.
My background is enterprise engineering, distributed systems and Open Source. We've got a strong and growing community of builders committed to the Jido ecosystem. We're looking forward to what gets built on top of Jido!
Come build agents with us!
Show HN: Sqry – semantic code search using AST and call graphs
I built sqry, a local code search tool that works at the semantic level rather than the text level.
The motivation: ripgrep is great for finding strings, but it can't tell you "who calls this function", "what does this function call", or "find all public async functions that return Result". Those questions require understanding code structure, not just matching patterns.
sqry parses your code into an AST using tree-sitter, builds a unified call/ import/dependency graph, and lets you query it:
sqry query "callers:authenticate"
sqry query "kind:function AND visibility:public AND lang:rust"
sqry graph trace-path main handle_request
sqry cycles
sqry ask "find all error handling functions"
The `sqry ask` command translates natural language into sqry query syntax locally, using a compact 22M-parameter model with no network calls.

Some things that might be interesting to HN:
- 35 language plugins via tree-sitter (C, Rust, Go, Python, TypeScript, Java, SQL, Terraform, and more)
- Cross-language edge detection: FFI linking (Rust↔C/C++), HTTP route matching (JS/TS↔Python/Java/Go)
- 33-tool MCP server so AI assistants get exact call graph data instead of relying on embedding similarity
- Arena-based graph with CSR storage; indexed queries run ~4ms warm
- Cycle detection, dead code analysis, semantic diff between git refs
It's MIT-licensed and builds from source with Rust 1.90+. Fair warning: full build takes ~20 GB disk because 35 tree-sitter grammars compile from source.
Repo: https://github.com/verivusai-labs/sqry Docs: https://sqry.dev
Happy to answer questions about the architecture, the NL translation approach, or the cross-language detection.
Show HN: PageAgent, A GUI agent that lives inside your web app
Hi HN,
I'm building PageAgent, an open-source (MIT) library that embeds an AI agent directly into your frontend.
I built this because I believe there's a massive design space for deploying general agents natively inside the web apps we already use, rather than treating the web merely as a dumb target for isolated bots.
Currently, most AI agents operate from external clients or server-side programs, effectively leaving web development out of the AI ecosystem. I'm experimenting with an "inside-out" paradigm instead. By dropping the library into a page, you get a client-side agent that interacts natively with the live DOM tree and inherits the user's active session out of the box, which works perfectly for SPAs.
To handle cross-page tasks, I built an optional browser extension that acts as a "bridge". This allows the web-page agent to control the entire browser with explicit user authorization. Instead of a desktop app controlling your browser, your web app is empowered to act as a general agent that can navigate the broader web.
I'd love to start a conversation about the viability of this architecture, and what you all think about the future of in-app general agents. Happy to answer any questions!
Show HN: mTile – native macOS window tiler inspired by gTile
Built this with codex/claude because I missed gTile[1] from Ubuntu and couldn’t find a macOS tiler that felt good on a big ultrawide screen. Most mac options I tried were way too rigid for my workflow (fixed layouts, etc) or wanted a monthly subscription. gTile’s "pick your own grid sizes + keyboard flow" is exactly what I wanted and used for years.
Still rough in places and not full parity, but very usable now and I run it daily at work (forced mac life).
[1]: https://github.com/gTile/gTile
Show HN: Go-TUI – A framework for building declarative terminal UIs in Go
I've been building go-tui (https://go-tui.dev), a terminal UI framework for Go inspired by the templ framework for the web (https://templ.guide/). The syntax should be familiar to templ users and is quite different from other terminal frameworks like bubbletea. Instead of imperative widget manipulation or bubbletea's elm architecture, you write HTML-like syntax and Tailwind-style classes that can intermingle with regular Go code in a new .gsx filetype. Then you compile these files to type-safe Go using `tui generate`. At runtime there's a flexbox layout engine based on yoga that handles positioning and a double-buffered renderer that diffs output to minimize terminal writes.
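The double-buffered diff step is a generic technique worth sketching. This is a Python illustration of the idea, not go-tui's actual Go renderer:

```python
# Generic sketch of double-buffered diff rendering: keep the last painted
# frame, compare it cell-by-cell against the new one, and emit terminal
# writes only for cells that changed.

def diff_frames(prev, curr):
    """Yield (row, col, char) for every cell that differs between frames."""
    for r, (old_row, new_row) in enumerate(zip(prev, curr)):
        for c, (old, new) in enumerate(zip(old_row, new_row)):
            if old != new:
                yield (r, c, new)

prev = ["hello", "world"]
curr = ["hellX", "world"]
writes = list(diff_frames(prev, curr))
assert writes == [(0, 4, "X")]  # one cell repainted, not the whole screen
```

In a real renderer each write becomes a cursor-move escape plus the changed cell, so output cost scales with how much the frame changed rather than with screen size.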
Here are some other features in the framework:
- It supports reactive state with State[T]. You change a value and the framework redraws for you. You can also forgo reactivity and simply use pure components if you'd like.
- You can render out a single frame to the terminal scrollback if you don't care about UIs and just want to place a box, table, or other styled component into your stdout. It's super handy and avoids the headache of dealing with ANSI escape sequences directly.
- It supports an inline mode that lets you embed an interactive widget in your shell session instead of taking over the full screen. With it you can build things like custom streaming chat interfaces directly in the terminal.
- I built full editor support for the new filetype. I published a VS Code and Open-VSX extension with completion, hover, and go-to-definition. Just search for "go-tui" in the marketplace to find them. The repo also includes a tree-sitter grammar for Neovim/Helix, and an LSP that proxies Go features through gopls so the files are easy to work with.
There are roughly 20 examples in the repo covering everything from basic components to a dashboard with live metrics and sparklines. I also built an example wrapper for claude code if you wanted to build your own AI chat interface.
Docs & guides: https://go-tui.dev
Repo: https://github.com/grindlemire/go-tui
I'd love feedback on the project!
Show HN: ScreenTranslate – On-device screen translator for macOS (open source)
I kept breaking my workflow to translate foreign text — copy, open browser, paste into translator, read, switch back. Repeat.
So I built a macOS menu bar app that translates right where you're working.
Two modes:
- Select text in any app → Cmd+Option+Z → instant translation
- Cmd+Shift+T → drag over any area → OCR + translate (images, PDFs, subtitles)
Everything runs on-device via Apple Vision + Apple Translation. No servers, no tracking. Free forever.
20 languages · offline capable · GPL-3.0
Show HN: How to Catch Documentation Drift with Claude Code and GitHub Actions
Show HN: Poppy – A simple app to stay intentional with relationships
I built Poppy as a side project to help people keep in touch more intentionally. Would love feedback on onboarding, reminders, and overall UX. Happy to answer questions.
Show HN: Anchor Engine – Deterministic Semantic Memory for LLMs Local (<3GB RAM)
Anchor Engine is ground truth for personal and business AI. A lightweight, local-first memory layer that lets LLMs retrieve answers from your actual data—not hallucinations. Every response is traceable, every policy enforced. Runs in <3GB RAM. No cloud, no drift, no guessing. Your AI's anchor to reality.
We built Anchor Engine because LLMs have no persistent memory. Every conversation is a fresh start—yesterday's discussion, last week's project notes, even context from another tab—all gone. Context windows help, but they're ephemeral and expensive. The STAR algorithm (Semantic Traversal And Retrieval) takes a different approach. Instead of embedding everything into vector space, STAR uses deterministic graph traversal. But before traversal comes atomization—our lightweight process for extracting just enough conceptual structure from text to build a traversable semantic graph.
*Atomization, not exhaustive extraction.* Projects like Kanon 2 are doing incredible work extracting every entity, citation, and clause from documents with remarkable precision. That's valuable for document intelligence. Anchor Engine takes a different path: we extract only the core concepts and relationships needed to support semantic memory. For example, "Apple announced M3 chips with 15% faster GPU performance" atomizes to nodes for [Apple, M3, GPU] and edges for [announced, has-performance]. Just enough structure for retrieval, lightweight enough to run anywhere.
The result is a graph that's just rich enough for an LLM to retrieve relevant context, but lightweight enough to run offline in <3GB RAM—even on a Raspberry Pi or in a browser via WASM.
*Why graph traversal instead of vector search?*
- Embeddings drift over time and across models
- Similarity scores are opaque and nondeterministic
- Vector search often requires GPUs or cloud APIs
- You can't inspect why something was retrieved
STAR gives you deterministic, inspectable results. Same graph, same query, same output—every time. And because the graph is built through atomization, it stays small and portable.
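The atomization example from earlier and a deterministic traversal over it fit in a short sketch. This is illustrative Python only; STAR's actual extraction and traversal are richer:

```python
from collections import deque

# The post's example, "Apple announced M3 chips with 15% faster GPU
# performance", atomized into nodes [Apple, M3, GPU] and labeled edges.
edges = [
    ("Apple", "announced", "M3"),
    ("M3", "has-performance", "GPU"),
]

def neighbors(node):
    # Sorted for determinism: same graph + same query => same output.
    return sorted(dst for src, _, dst in edges if src == node)

def traverse(start, max_hops=2):
    """Breadth-first walk from a query anchor, visiting each node once."""
    seen, order, queue = {start}, [start], deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                queue.append((nxt, depth + 1))
    return order

assert traverse("Apple") == ["Apple", "M3", "GPU"]  # reproducible every run
```

Unlike a similarity score, every node in the result can be explained by the edge path that reached it, which is the inspectability argument in miniature.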
*Key technical details:*
- Runs entirely offline in <3GB RAM. No API calls, no GPUs.
- Compiled to WASM – embed it anywhere, including browsers.
- Recursive architecture – we used Anchor Engine to help write its own code. The dogfooding is real: what would have taken months of context-switching became continuous progress. I could hold complexity in my head because the engine held it for me.
- AGPL-3.0 – open source, always.
*What it's not:* It's not a replacement for LLMs or vector databases. It's a memory layer—a deterministic, inspectable substrate that gives LLMs persistent context without cloud dependencies. And it's not a competitor to deep extraction models like Kanon 2; they could even complement each other (Kanon 2 builds the graph, Anchor Engine traverses it for memory).
*The whitepaper* goes deep on the graph traversal math and includes benchmarks vs. vector search: https://github.com/RSBalchII/anchor-engine-node/blob/d9809ee...
If you've ever wanted LLM memory that fits on a Raspberry Pi and doesn't hallucinate what it remembers—check it out, and I'd love your feedback on where graph traversal beats (or loses to) vector search.
We're especially interested in feedback from people who've built RAG systems, experimented with symbolic memory, or worked on graph-based AI.
Reddit discussion: https://www.reddit.com/r/LocalLLaMA/s/EoN7N3OyXK
Show HN: Mantle – Remap your Mac keyboard without editing Kanata config files
I built Mantle because I wanted homerow mods and layers on my laptop without hand-writing Lisp syntax.

The best keyboard remapping engine on macOS (Kanata) requires editing .kbd files, which is a pain. Karabiner-Elements is easy for simple single-key remapping (e.g. caps -> esc), but anything more wasn't working out for me.
What you can do with Mantle:
- Layers: hold a key to switch to a different layout (navigation, numpad, media)
- Homerow mods: map Shift, Control, Option, Command to your home row keys when held
- Tap-hold: one key does two things: tap for a letter, hold for a modifier
- Import/export: bring existing Kanata .kbd configs or start fresh visually
Runs entirely on your Mac. No internet, no accounts. Free and MIT licensed.

Would love feedback, especially from people who tried Kanata or Karabiner and gave up.
Show HN: diskard – A fast TUI disk usage analyzer with trash functionality
This is an ncdu clone written in Rust that I figured others might find useful! The main things that differentiate it from ncdu are:
- It's very fast. In my benchmarks it's often twice as fast.
- It allows you to send files to trash rather than permanently deleting them.
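The scan phase of any such analyzer is a recursive size sum. Here is a rough Python sketch of the sequential idea (diskard itself is in Rust and parallelized):

```python
import os
import tempfile

# Sketch of a disk usage analyzer's scan phase: recursively sum file sizes
# per directory so the TUI can sort entries by weight.

def scan(path):
    """Return total bytes of regular files under `path`, skipping symlinks."""
    total = 0
    try:
        with os.scandir(path) as entries:
            for entry in entries:
                if entry.is_symlink():
                    continue
                if entry.is_dir(follow_symlinks=False):
                    total += scan(entry.path)
                elif entry.is_file(follow_symlinks=False):
                    total += entry.stat(follow_symlinks=False).st_size
    except PermissionError:
        pass  # unreadable directories just contribute zero
    return total

# Quick self-check against a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "a.bin"), "wb") as f:
        f.write(b"x" * 1024)
    os.mkdir(os.path.join(d, "sub"))
    with open(os.path.join(d, "sub", "b.bin"), "wb") as f:
        f.write(b"x" * 512)
    assert scan(d) == 1536
```

The speedups in tools like this typically come from parallelizing exactly this walk across directories and avoiding extra stat calls, which `os.scandir`'s cached entries already hint at.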
Please try it out and lmk if I can improve on anything!
Show HN: VaultNote – Local-first encrypted note-taking in the browser
Hi HN,
I built VaultNote, a local-first note-taking app that runs entirely in the browser.
Key ideas:
- 100% local-first: no backend or server
- No login, accounts, or tracking
- Notes stored locally in IndexedDB / LocalStorage
- AES encryption with a single master password
- Tree-structured notes for organizing knowledge
The goal was to create a simple note app where your data never leaves your device. You can open the site, enter a master password, and start writing immediately.
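The master-password approach generally hinges on key derivation. I don't know VaultNote's exact parameters, so this is a generic Python sketch of the pattern: never use the password as the AES key directly; derive one with a salted, slow KDF such as PBKDF2.

```python
import hashlib
import hmac
import os

def derive_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 256-bit key suitable for AES-256 from a master password.
    Iteration count is an assumed example, not VaultNote's setting."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)  # stored alongside the ciphertext; not secret
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32  # 32 bytes = AES-256 key size

# Same password + same salt => same key; a different salt changes it.
assert hmac.compare_digest(key, derive_key("correct horse battery staple", salt))
assert key != derive_key("correct horse battery staple", os.urandom(16))
```

The per-vault salt is what makes precomputed password tables useless, and the high iteration count is what makes brute-forcing the master password expensive.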
Since everything is stored locally, VaultNote also supports import/export so you can back up your data.
Curious to hear feedback from the HN community, especially on:
- the security approach (local AES encryption) - IndexedDB storage design - local-first UX tradeoffs
Demo: https://vaultnote.saposs.com
Thanks!
Show HN: Mog, a programming language for AI agents
I wrote a programming language for extending AI agents, called Mog. It's like a statically typed Lua.
Most AI agents have trouble enforcing their normal permissions in plugins and hooks, since they're external scripts.
Mog's capability system gives the agent full control over I/O, so it can enforce whatever permissions it wants in the Mog code. This is even true if the plugin wants to run bash -- the agent can check each bash command the Mog code emits using the exact same predicate it uses for the LLM's direct bash tool.
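That single-predicate idea can be illustrated in a few lines. This is a hypothetical Python analogy, not Mog code; the whitelist and the "no chaining" rule are invented for the example.

```python
import shlex

# Illustration of the capability idea: the plugin never touches I/O
# directly; every bash command it emits is routed through the same
# predicate the agent uses for the LLM's own bash tool.

ALLOWED = {"ls", "cat", "git"}  # example policy, not Mog's

def bash_permitted(command: str) -> bool:
    """The agent's predicate: whitelisted programs only, no chaining."""
    if any(tok in command for tok in (";", "&&", "|", "`", "$(")):
        return False
    argv = shlex.split(command)
    return bool(argv) and argv[0] in ALLOWED

def run_plugin_command(command: str) -> str:
    # The runtime hands the plugin's requested command to the host
    # predicate before any I/O happens; denied commands never execute.
    if not bash_permitted(command):
        return "denied"
    return f"would run: {command}"

assert run_plugin_command("git status") == "would run: git status"
assert run_plugin_command("rm -rf /") == "denied"
assert run_plugin_command("ls; curl evil.sh") == "denied"
```

Because the plugin's only path to a shell is through `run_plugin_command`, the policy holds no matter what the Mog code computes.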
Mog is a statically typed, compiled, memory-safe language with native async support and minimal syntax. Both its compiler and runtime are written in Rust, and the runtime exposes an `extern "C"` interface so it can easily be embedded in agents written in other languages.
It's designed to be written by LLMs. Its syntax is familiar, it minimizes foot-guns, and its full spec fits in a 3200-token file.
The language is quite new, so no hard security guarantees are claimed at present. Contributions welcome!