Show stories

aed 2 days ago

Show HN: AI agents play SimCity through a REST API

This is a weekend project that spiraled out of control. I was originally trying to get Claude to play a ROM of the SNES SimCity. That struggle led me to Micropolis (the open-sourced SimCity engine), which I got working by bolting an API onto it.

The weekend hack turned into a headless city simulation platform where anyone can get an API key (no signup) and have their AI agent play mayor. The simulation runs the real Micropolis engine inside Cloudflare Durable Objects, one per city. Every city is public and browsable on the site.

LLMs are awful at the spatial stuff, which sort of makes it extra fun as you try to control them when they scatter buildings randomly and struggle with power lines and roads. A little like dealing with a toddler.

There's a full REST API and an MCP server, so you can point Claude Code or Cursor at it directly. You can usually get agents building in seconds.
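
As a rough illustration, driving it from a script might look something like this (the endpoint paths and fields below are invented for the example; the real routes are in the API docs linked below):

  import requests

  BASE = "https://hallucinatingsplines.com/api"  # hypothetical base path

  key = requests.post(f"{BASE}/keys").json()["api_key"]  # no-signup key, per the post
  headers = {"Authorization": f"Bearer {key}"}

  city = requests.post(f"{BASE}/cities", json={"name": "Agentville"}, headers=headers).json()
  print(requests.get(f"{BASE}/cities/{city['id']}", headers=headers).json())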

Website: https://hallucinatingsplines.com

API docs: https://hallucinatingsplines.com/docs

GitHub: https://github.com/andrewedunn/hallucinating-splines

Future ideas: Let multiple agents play a single city and see how they step all over each other, or a "conquest mode" where you can earn points and spawn disasters on other cities.

hallucinatingsplines.com
42 13
nixus76 about 17 hours ago

Show HN: Itsyhome – Control HomeKit from your Mac menu bar (open source)

Hey HN!

Nick here – developer of Itsyhome, a menu bar app for macOS that gives you control over your whole HomeKit fleet (and very soon Home Assistant). I run 130+ HomeKit devices at home and the Home app was too heavy for quick adjustments.

Full HomeKit support, favourites, hidden items, device groups, pinning of rooms/accessories/groups as separate menu bar items, iCloud sync – all in a native experience and tiny package.

Open source (https://github.com/nickustinov/itsyhome-macos) and free to use (there is an optional one-time purchase for a Pro version which includes cameras and automation features).

Itsyhome is a Mac Catalyst app because HomeKit requires the iOS SDK. It runs a headless Catalyst process for HomeKit (and now Home Assistant) access, while a native AppKit plugin, connected over a bridge protocol, provides the actual menu bar UI – AppKit gives you the real macOS menu bar experience that Catalyst alone can't.

It comes with deeplink support, a webhook server, a CLI tool (Go, all open source), and a Stream Deck plugin (open source, all accessories supported). The recent update also adds an SSE event stream (HomeKit and HA) - you can curl -N localhost:8423/events and get a real-time JSON stream of every device state change in your home.
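
For example, a minimal Python sketch that consumes that stream (assuming standard SSE "data:" framing; the exact event fields are whatever Itsyhome emits):

  import json
  import requests

  with requests.get("http://localhost:8423/events", stream=True) as resp:
      for line in resp.iter_lines():
          if line.startswith(b"data:"):
              event = json.loads(line[len(b"data:"):])
              print(event)  # one JSON object per device state change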

Home Assistant version is still in beta – would anyone be willing to test it via TestFlight?

Appreciate any feedback and happy to answer any questions.

itsyhome.app
46 39
JanLepsky 39 minutes ago

Show HN: Renovate – The Kubernetes-Native Way

Hey folks, we built a Kubernetes operator for Renovate and wanted to share it. Instead of running Renovate as a cron job or relying on hosted services, this operator lets you manage it as a native Kubernetes resource with CRDs. You define your repos and config declaratively, and the operator handles scheduling and execution inside your cluster. No external dependencies, no SaaS lock-in, no webhook setup. The whole thing is open source and will stay that way – there's no paid tier or monetization plan behind it, we just needed this ourselves and figured others might too.

Would love to hear feedback or ideas if you give it a try: https://github.com/mogenius/renovate-operator

github.com
5 0
franze 43 minutes ago

Show HN: Triclock – A Triangular Clock

TriClock is a new cryptocurrency that aims to combine the features of Bitcoin, Ethereum, and Monero to offer a secure, private, and scalable digital currency. The article provides an overview of TriClock's technical details and its potential to address the limitations of existing cryptocurrencies.

triclock.franzai.com
2 0
Gravityloss about 3 hours ago

Show HN: Musical Interval Trainer

This web application is a musical interval trainer that allows users to practice identifying different intervals in a fun and interactive way, helping them improve their music theory skills.

valtterimaja.github.io
8 3
seansh 3 days ago

Show HN: CodeMic

With CodeMic you can record and share coding sessions directly inside your editor.

Think Asciinema, but for full coding sessions with audio, video, and images.

While replaying a session, you can pause at any point, explore the code in your own editor, modify it, and even run it. This makes following tutorials and understanding real codebases much more practical than watching a video.

Local first, and open source.

p.s. I’ve been working on this for a little over two years and would love to hear your thoughts.

codemic.io
36 18
ariansyah about 2 hours ago

Show HN: Auditi – open-source LLM tracing and evaluation platform

I've been building AI agents at work and the hardest part isn't the prompts or orchestration – it's answering "is this agent actually good?" in production.

Tracing tells you what happened. But I wanted to know how well it happened. So I built Auditi – it captures your LLM traces and spans and automatically evaluates them with LLM-as-a-judge + human annotation workflows.

Two lines to get started:

  auditi.init(api_key="...")
  auditi.instrument()  # monkey-patches OpenAI/Anthropic/Gemini
Every API call is captured with full span trees, token usage, and costs. No code changes to your existing LLM calls.

The interesting technical bit: the SDK monkey-patches client.chat.completions.create() at runtime (similar to how OpenTelemetry auto-instruments HTTP libraries). It wraps streaming responses with proxy iterators that accumulate content and extract usage from the final chunk – so even streamed responses get full cost tracking without the user doing anything.
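
A stripped-down illustration of that pattern (not Auditi's actual code; record_span stands in for whatever the SDK does with a finished span):

  import functools

  def record_span(payload):  # hypothetical tracing hook
      print("captured", payload)

  def instrument(client):
      original = client.chat.completions.create

      @functools.wraps(original)
      def patched(*args, **kwargs):
          result = original(*args, **kwargs)
          if not kwargs.get("stream"):
              record_span(result)
              return result

          def proxy():
              chunks = []
              for chunk in result:  # forward chunks to the caller untouched
                  chunks.append(chunk)
                  yield chunk
              record_span(chunks)  # usage data arrives in the final chunk
          return proxy()

      client.chat.completions.create = patched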

What makes this different from just tracing:

- Built-in evaluators – 7 managed LLM judges (hallucination, relevance, correctness, toxicity, etc.) run automatically on every trace

- Span-level evaluation – scores each step in a multi-step agent, not just the final output

- Human annotation queues – when you need ground truth, not just vibes

- Dataset export – annotated traces export as JSONL/CSV/Parquet for fine-tuning

Self-host with docker compose up.

I'd love feedback from anyone running AI agents or LLMs in production. What metrics do you actually look at? How do you decide if an agent response is "good enough"?

GitHub: https://github.com/deduu/auditi

github.com
2 0
segmenta about 22 hours ago

Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS)

Hi HN,

AI agents that can run tools on your machine are powerful for knowledge work, but they’re only as useful as the context they have. Rowboat is an open-source, local-first app that turns your work into a living knowledge graph (stored as plain Markdown with backlinks) and uses it to accomplish tasks on your computer.

For example, you can say "Build me a deck about our next quarter roadmap." Rowboat pulls priorities and commitments from your graph, loads a presentation skill, and exports a PDF.

Our repo is https://github.com/rowboatlabs/rowboat, and there’s a demo video here: https://www.youtube.com/watch?v=5AWoGo-L16I

Rowboat has two parts:

(1) A living context graph: Rowboat connects to sources like Gmail and meeting notes like Granola and Fireflies, extracts decisions, commitments, deadlines, and relationships, and writes them locally as linked and editable Markdown files (Obsidian-style), organized around people, projects, and topics. As new conversations happen (including voice memos), related notes update automatically. If a deadline changes in a standup, it links back to the original commitment and updates it.

(2) A local assistant: On top of that graph, Rowboat includes an agent with local shell access and MCP support, so it can use your existing context to actually do work on your machine. It can act on demand or run scheduled background tasks. Example: “Prep me for my meeting with John and create a short voice brief.” It pulls relevant context from your graph and can generate an audio note via an MCP tool like ElevenLabs.

Why not just search transcripts? Passing gigabytes of email, docs, and calls directly to an AI agent is slow and lossy. And search only answers the questions you think to ask. A system that accumulates context over time can track decisions, commitments, and relationships across conversations, and surface patterns you didn't know to look for.

Rowboat is Apache-2.0 licensed, works with any LLM (including local ones), and stores all data locally as Markdown you can read, edit, or delete at any time.

Our previous startup was acquired by Coinbase, where part of my work involved graph neural networks. We're excited to be working with graph-based systems again. Work memory feels like the missing layer for agents.

We’d love to hear your thoughts and welcome contributions!

github.com
180 48
bizzz about 2 hours ago

Show HN: I tried to build a soundproof sleep capsule

Hi HN,

I've struggled with apartment noise for years, so I attempted to engineer a mechanical solution: a decoupled, mass-loaded sleep capsule.

I went down a deep rabbit hole involving:

- Mass Law vs. decoupling

- Building a prototype cube

- Accidentally creating a resonance chamber (my prototype amplified bass by ~10dB)

- Pivoting to acoustic metamaterials (Helmholtz resonators) and parametric CAD

The project was ultimately a failure in terms of silence, but a success in understanding acoustics and regaining a sense of agency. I wrote up the physics, the build process, and the mistakes here.

Happy to answer questions about the build.

lepekhin.com
3 0
thisisjedr 2 days ago

Show HN: JavaScript-first, open-source WYSIWYG DOCX editor

We needed a JS-first WYSIWYG DOCX editor and couldn't find a solid OSS option, most were either commercial or abandoned.

As an experiment, we gave Claude Code the OOXML spec, a concrete editor architecture, and a Playwright-based test suite. The agent iterated in a (Ralph) loop over a few nights and produced a working editor from scratch.

Core text editing works today. Tables and images are functional but still incomplete. MIT licensed.

github.com
116 39
vkaufmann about 10 hours ago

Show HN: I taught GPT-OSS-120B to see using Google Lens and OpenCV

I built an MCP server that gives any local LLM real Google search and now vision capabilities - no API keys needed.

The latest feature: google_lens_detect uses OpenCV to find objects in an image, crops each one, and sends them to Google Lens for identification. GPT-OSS-120B, a text-only model with zero vision support, correctly identified an NVIDIA DGX Spark and a SanDisk USB drive from a desk photo.
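
The cropping step might look roughly like this (a simplified sketch; the real tool's detection and filtering will differ):

  import cv2

  img = cv2.imread("desk.jpg")
  edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)
  contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

  crops = []
  for c in contours:
      x, y, w, h = cv2.boundingRect(c)
      if w * h > 5000:  # skip tiny regions
          crops.append(img[y:y + h, x:x + w])
  # each crop is what gets sent to Google Lens for identification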

Also includes Google Search, News, Shopping, Scholar, Maps, Finance, Weather, Flights, Hotels, Translate, Images, Trends, and more. 17 tools total.

Two commands:

  pip install noapi-google-search-mcp && playwright install chromium

GitHub: https://github.com/VincentKaufmann/noapi-google-search-mcp
PyPI: https://pypi.org/project/noapi-google-search-mcp/

Booyah!

41 27
pablojamjam about 3 hours ago

Show HN: ClawPool – Pool Claude tokens to make $$$ or crazy cheap Claude Code

I built a pool-based proxy that hacks Claude Code's pricing tiers. To actually use Claude Code you need Max at $200/mo, and then most of that capacity sits idle anyway.

So ClawPool lets subscribers pool their OAuth tokens and earn up to $120/mo from the spare capacity. Everyone else gets Opus, Sonnet, all models for $8/mo.

Setup — Claude Code itself supports proxies via standard env vars:

    export ANTHROPIC_AUTH_TOKEN="your-pool-key"
    export ANTHROPIC_BASE_URL="https://proxy.clawpool.ai"
    claude

clawpool.ai
3 1
n1sni 1 day ago

Show HN: I built a macOS tool for network engineers – it's called NetViews

Hi HN — I’m the developer of NetViews, a macOS utility I built because I wanted better visibility into what was actually happening on my wired and wireless networks.

I live in the CLI, but for discovery and ongoing monitoring, I kept bouncing between tools, terminals, and mental context switches. I wanted something faster and more visual, without losing technical depth — so I built a GUI that brings my favorite diagnostics together in one place.

About three months ago, I shared an early version here and got a ton of great feedback. I listened: a new name (it was PingStalker), a longer trial, and a lot of new features. Today I’m excited to share NetViews 2.3.

NetViews started because I wanted to know if something on the network was scanning my machine. Once I had that, I wanted quick access to core details—external IP, Wi-Fi data, and local topology. Then I wanted more: fast, reliable scans using ARP tables and ICMP.

As a Wi-Fi engineer, I couldn’t stop there. I kept adding ways to surface what’s actually going on behind the scenes.

Discovery & Scanning:

* ARP, ICMP, mDNS, and DNS discovery to enumerate every device on your subnet (IP, MAC, vendor, open ports).

* Fast scans using ARP tables first, then ICMP, to avoid the usual “nmap wait”.

Wireless Visibility:

* Detailed Wi-Fi connection performance and signal data.

* Visual and audible tools to quickly locate the access point you’re associated with.

Monitoring & Timelines:

* Connection and ping timelines over 1, 2, 4, or 8 hours.

* Continuous “live ping” monitoring to visualize latency spikes, packet loss, and reconnects.

Low-level Traffic (but only what matters):

* Live capture of DHCP, ARP, 802.1X, LLDP/CDP, ICMP, and off-subnet chatter.

* mDNS decoded into human-readable output (this took months of deep dives).

Under the hood, it’s written in Swift. It uses low-level BSD sockets for ICMP and ARP, Apple’s Network framework for interface enumeration, and selectively wraps existing command-line tools where they’re still the best option. The focus has been on speed and low overhead.

I’d love feedback from anyone who builds or uses network diagnostic tools:

- Does this fill a gap you’ve personally hit on macOS?

- Are there better approaches to scan speed or event visualization that you’ve used?

- What diagnostics do you still find yourself dropping to the CLI for?

Details and screenshots: https://netviews.app

There’s a free trial and paid licenses; I’m funding development through direct sales rather than ads or subscriptions. Licenses include free upgrades.

Happy to answer any technical questions about the implementation, Swift APIs, or macOS permission model.

netviews.app
226 55
guntis_dev about 3 hours ago

Show HN: Lorem.video – placeholder videos generated from URLs

At work I have to deal with videos in different resolutions. We're also switching from H.264 to AV1, so I needed a quick way to test our video pipeline with different formats and sizes.

I created lorem.video - a service that generates placeholder videos directly from the URL. For example: https://lorem.video/1280x720_h264_20s_30fps

You control everything via the URL path: resolution, duration, codec (h264/h265/av1/vp9), bitrate, and fps. Videos are cached after first generation, so subsequent requests are instant.
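
For example, a tiny Python helper built on the URL pattern above (check the docs for the exact path segments the service accepts):

  import requests

  def lorem_video(width, height, codec="h264", seconds=20, fps=30):
      return f"https://lorem.video/{width}x{height}_{codec}_{seconds}s_{fps}fps"

  url = lorem_video(1280, 720, codec="av1")
  with open("test_clip.mp4", "wb") as f:
      f.write(requests.get(url).content)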

Built it in Go using FFmpeg for encoding. Generation runs in a nice'd process so it doesn't interfere with serving cached videos. Running on a cheap VPS.

MIT licensed, source on GitHub: https://github.com/guntisdev/lorem-video

lorem.video
2 0
louis_w_gk 1 day ago

Show HN: Distr 2.0 – A year of learning how to ship to customer environments

A year ago, we launched Distr here to help software vendors manage customer deployments remotely. We had agents that pulled updates, a hub with a GUI, and a lot of assumptions about what on-prem deployment needed.

It turned out things get messy when your software is running in places you can't simply SSH into.

Over the last year, we’ve also helped modernize a lot of home-baked solutions: bash scripts that email when updates fail, Excel sheets nobody trusts to track customer versions, engineers driving to customer sites to fix things in person, debug sessions over email (“can you take a screenshot of the logs and send it to me?”), customers with access to internal AWS or GCP registries because there was no better option, and deployments two major versions behind that nobody wants to touch.

We waited a year before making our first breaking change, which led to a major SemVer update—but it was eventually necessary. We needed to completely rewrite how we manage customer organizations. In Distr, we differentiate between vendors and customers. A vendor is typically the author of a software / AI application that wants to distribute it to customers. Previously, we had taken a shortcut where every customer was just a single user who owned a deployment. We’ve now introduced customer organizations. Vendors onboard customer organizations onto the platform, and customers own their internal user management, including RBAC. This change obviously broke our API, and although the migration for our cloud customers was smooth, custom solutions built on top of our APIs needed updates.

Other notable features we’ve implemented since our first launch:

- An OCI container registry built on an adapted version of https://github.com/google/go-containerregistry/, directly embedded into our codebase and served via a separate port from a single Docker image. This allows vendors to distribute Docker images and other OCI artifacts if customers want to self-manage deployments.

- License Management to restrict which customers can access which applications or artifact versions. Although “license management” is a broadly used term, the main purpose here is to codify contractual agreements between vendors and customers. In its simplest form, this is time-based access to specific software versions, which vendors can now manage with Distr.

- Container logs and metrics you can actually see without SSH access. Internally, we debated whether to use a time-series database or store all logs in Postgres. Although we had to tinker quite a bit with Postgres indexes, it now runs stably.

- Secret Management, so database passwords don’t show up in configuration steps or logs.

Distr is now used by 200+ vendors, including Fortune 500 companies, across on-prem, GovCloud, AWS, and GCP, spanning health tech, fintech, security, and AI companies. We’ve also started working on our first air-gapped environment.

For Distr 3.0, we’re working on native Terraform / OpenTofu and Zarf support to provision and update infrastructure in customers’ cloud accounts and physical environments—empowering vendors to offer BYOC and air-gapped use cases, all from a single platform.

Distr is fully open source and self-hostable: https://github.com/distr-sh/distr

Docs: https://distr.sh/docs

We’re YC S24. Happy to answer questions about on-prem deployments and would love to hear about your experience with complex customer deployments.

github.com
94 29
moshmage about 4 hours ago

Show HN: Baby Vault – A 100% offline, privacy-first PWA for new parents

babyvault.moshmage.com
3 1
yixn_io about 4 hours ago

Show HN: I built managed OpenClaw hosting with 60s provisioning in 6 days

Hey HN,

I'm Daniel, solo dev from Germany. I built ClawHosters (https://clawhosters.com), a managed hosting platform for OpenClaw, the open-source AI agent framework.

Quick timeline: domain registered February 5th. First paying customer six days later. I probably should have spent more time on it, but it works.

If you haven't seen OpenClaw, it lets you run a personal AI assistant that connects to Telegram, Discord, Slack, and WhatsApp. Self-hosting it is absolutely possible, but it's a pain. You're dealing with Docker setup, SSL certs, port forwarding, security hardening, keeping the image updated. Most people don't want to deal with any of that. They just want the thing running.

That's what ClawHosters does. You pick a tier (EUR 19-59/mo), click create, and you've got a running instance with a subdomain. About 60 seconds if we have prewarmed capacity, maybe 90 seconds from a cold snapshot.

Some technical details that might interest this crowd:

*Subdomain routing chain.* Every instance gets a subdomain like `mybot.clawhosters.com`. The request path is Cloudflare -> my production server -> Traefik (looks up VPS IP from Redis) -> customer's Hetzner VPS -> nginx on the VPS (validates Host header) -> Docker container (port 18789) -> OpenClaw gateway. All subdomains require HTTP Basic Auth, configured per-instance through Traefik Redis middleware keys. The VPS itself only accepts connections from my production server's IP via Hetzner Cloud Firewall. No way to hit it directly.

*Prewarmed VPS pool.* Even from a snapshot, Hetzner VPS creation takes ~30-60 seconds. That felt too slow. So I maintain a pool of idle, pre-provisioned VPS instances sitting there ready to go. When someone creates an instance, we claim one from the pool, upload the config via SCP, run docker-compose up, done. The pool refills in the background.

*Security is 4 layers deep.* Hetzner Cloud Firewall restricts all VPS inbound traffic to only my production server IP. Host iptables (baked into the snapshot) add OS-level rules with SMTP/IRC blocking. SSH is key-only on both host port 22 and container port 2222, so brute-forcing isn't happening. fail2ban on top of that, and the Docker daemon runs with no-new-privileges. Probably overkill. I'm fine with that.

*SSH into the Docker container.* Users can enable SSH access to their actual container (port 2222). I built a custom image extending OpenClaw with an SSH server, key-only auth, no passwords. Fair warning though: enabling SSH permanently marks the instance as no_support. Once you're installing your own stuff in there, I can't guarantee stability anymore.

*Container commit for state preservation.* This one was tricky to get right. Users can install packages (apt, pip, npm) inside their container. Before any restart or redeploy, `CommitContainerService` runs `docker commit` to save the full filesystem as a new image. Next startup uses the committed image instead of the base one. Basically snapshotting your container's state so nothing gets lost.
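
Illustrative only (the real CommitContainerService lives inside the Rails app, and the names here are hypothetical), but the underlying mechanism is roughly:

  import subprocess

  def commit_container(container: str, image_tag: str) -> None:
      # docker commit saves the container's current filesystem as a new image;
      # the next startup runs from image_tag instead of the base image
      subprocess.run(["docker", "commit", container, image_tag], check=True)

  commit_container("openclaw-customer-42", "openclaw-customer-42:state")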

I wrote a more detailed technical post about the architecture here: [link to blog post]

The whole thing runs inside a single Rails app that also serves my portfolio site (https://yixn.io). One person, one codebase, real paying customers. I'm happy to answer questions about the architecture, the Hetzner API, or the tradeoffs I made along the way.

Source isn't open yet, but I'm thinking about open-sourcing the provisioning layer. Haven't decided.

https://clawhosters.com

clawhosters.com
2 0
jacobsyc about 4 hours ago

Show HN: I built a tool for lazy founders – it's called BunnyDesk

BunnyDesk.ai is a platform that offers virtual office spaces and remote work solutions for businesses. The article highlights the company's services, including shared office spaces, private offices, and meeting rooms, as well as its focus on providing a flexible and collaborative work environment for remote teams.

bunnydesk.ai
2 0
EngineerBetter about 4 hours ago

Show HN: Claudit – Claude Code Conversations as Git Notes, Automatically

Uses agent and Git hooks to automatically create Git Notes on commit, containing the agent conversation that led to that commit. It works whether you or the agent makes the commit.
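
The underlying mechanism is roughly this (an illustrative sketch, not Claudit's code; the transcript path is hypothetical): a post-commit hook that attaches the conversation file to HEAD as a Git Note.

  #!/usr/bin/env python3
  import subprocess
  from pathlib import Path

  transcript = Path(".claude/last-session.md")  # hypothetical transcript location
  if transcript.exists():
      subprocess.run(
          ["git", "notes", "--ref", "claudit", "add", "-f", "-F", str(transcript), "HEAD"],
          check=True,
      )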

It's basically the same thing that entire.io just announced they raised $60M for. Except I got Claude Code to write it last week, in my spare time, without really paying attention. I certainly didn't read or write any of the code, except for one rubbish joke in the README.

I've got a Claude Code instance working on Gemini CLI support and OpenCode support currently.

github.com
4 0
stubbi about 4 hours ago

Show HN: OpenClaw Kubernetes Operator

A Kubernetes operator for managing OpenClaw, allowing automated deployment, scaling, and management of OpenClaw instances within a Kubernetes cluster.

github.com
2 1
prasoonds about 22 hours ago

Show HN: Stripe-no-webhooks – Sync your Stripe data to your Postgres DB

Hey HN, stripe-no-webhooks is an open-source library that syncs your Stripe payments data to your own Postgres database: https://github.com/pretzelai/stripe-no-webhooks.

Here's a demo video: https://youtu.be/cyEgW7wElcs

Why is this useful?

(1) You don't have to figure out which webhooks you need or write listeners for each one. The library handles all of that. This follows the approach of libraries like dj-stripe in the Django world (https://dj-stripe.dev/).

(2) Stripe's API has a 100 rpm rate limit. If you're checking subscription status frequently or building internal tools, you'll hit it. Querying your own Postgres doesn't have this problem.

(3) You can give an AI agent read access to the stripe.* schema to debug payment issues—failed charges, refunds, whatever—without handing over Stripe dashboard access.

(4) You can join Stripe data with your own tables for custom analytics, LTV calculations, etc.

It creates a webhook endpoint in your Stripe account to forward webhooks to your backend where a webhook listener stores all the data into a new stripe.* schema. You define your plans in TypeScript, run a sync command, and the library takes care of creating Stripe products and prices, handling webhooks, and keeping your database in sync. We also let you backfill your Stripe data for existing accounts.

It supports pre-paid usage credits, account wallets and usage-based billing. It also lets you generate a pricing table component that you can customize. You can access the user information using the simple API the library provides:

  billing.subscriptions.get({ userId });
  billing.credits.consume({ userId, key: "api_calls", amount: 1 });
  billing.usage.record({ userId, key: "ai_model_tokens_input", amount: 4726 });
Effectively, you don't have to deal with either the Stripe dashboard or the Stripe API/SDK any more if you don't want to. The library gives you a nice abstraction on top of Stripe that should cover ~most subscription payment use-cases.

Let's see how it works with a quick example. Say you have a billing plan like Cursor (the IDE) used to have: $20/mo, you get 500 API completions + 2000 tab completions, you can buy additional API credits, and any additional usage is billed as overage.

You define your plan in TypeScript:

  {
    name: "Pro",
    description: "Cursor Pro plan",
    price: [{ amount: 2000, currency: "usd", interval: "month" }],
    features: {
      api_completion: {
        pricePerCredit: 1,              // 1 cent per unit
        trackUsage: true,               // Enable usage billing
        credits: { allocation: 500 },
        displayName: "API Completions",
      },
      tab_completion: {
        credits: { allocation: 2000 },
        displayName: "Tab Completions",
      },
    },
  }
Then on the CLI, you run the `init` command which creates the DB tables + some API handlers. Run `sync` to sync the plans to your Stripe account and create a webhook endpoint. When a subscription is created, the library automatically grants the 500 API completion credits and 2000 tab completion credits to the user. Renewals and up/downgrades are handled sanely.

Consume code would look like this:

  await billing.credits.consume({
    userId: user.id,
    key: "api_completion",
    amount: 1,
  });
And if they want to allow manual top-ups by the user:

  await billing.credits.topUp({
    userId: user.id,
    key: "api_completion",
    amount: 500,     // buy 500 credits, charges $5.00
  });
Similarly, we have APIs for wallets and usage.

This would be a lot of work to implement by yourself on top of Stripe. You need to keep track of all of these entitlements in your own DB and deal with renewals, expiry, ad-hoc grants, etc. It's definitely doable, especially with AI coding, but you'll probably end up building something fragile and hard to maintain.

This is just a high-level overview of what the library is capable of. It also supports seat-level credits, monetary wallets (with micro-cent precision), auto top-ups, robust failure recovery, tax collection, invoices, and an out-of-the-box pricing table.

I vibe-coded a little toy app for testing: https://snw-test.vercel.app. There's no validation so feel free to sign up with a dummy email, then subscribe to a plan with a test card: 4242 4242 4242 4242, any future expiry, any 3-digit CVV.

Screenshot: https://imgur.com/a/demo-screenshot-Rh6Ucqx

Feel free to try it out! If you end up using this library, please report any bugs on the repo. If you're having trouble / want to chat, I'm happy to help - my contact is in my HN profile.

github.com
61 26
yethiel about 22 hours ago

Show HN: I made paperboat.website, a platform for friends and creativity

paperboat.website
68 27
mert_gerdan about 20 hours ago

Show HN: Multimodal perception system for real-time conversation

I work on real-time voice/video AI at Tavus and for the past few years, I’ve mostly focused on how machines respond in a conversation.

One thing that’s always bothered me is that almost all conversational systems still reduce everything to transcripts and throw away a ton of signals that should be used downstream. Some existing emotion-understanding models try to classify everything into small sets of arbitrary boxes, but they aren’t fast or rich enough to do this with conviction in real time.

So I built a multimodal perception system that encodes visual and audio conversational signals and translates them into natural language by aligning a small LLM on those signals. The agent can "see" and "hear" you, and you interface with it via an OpenAI-compatible tool schema in a live conversation.
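
For a sense of what an OpenAI-compatible tool schema means here, a hypothetical tool definition of that shape (the real name and fields are Tavus's own):

  perception_tool = {
      "type": "function",
      "function": {
          "name": "report_perception",  # hypothetical name
          "description": "Describe nonverbal and paralinguistic signals in the current turn.",
          "parameters": {
              "type": "object",
              "properties": {
                  "observation": {
                      "type": "string",
                      "description": "e.g. 'uncertainty building', 'shift in attention'",
                  },
              },
              "required": ["observation"],
          },
      },
  }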

It outputs short natural-language descriptions of what’s going on in the interaction - things like uncertainty building, sarcasm, disengagement, or even a shift in attention within a single turn of the conversation.

Some quick specs:

- Runs in real-time per conversation

- Processing at ~15fps video + overlapping audio alongside the conversation

- Handles nuanced emotions, whispers vs shouts

- Trained on synthetic + internal convo data

Happy to answer questions or go deeper on architecture/tradeoffs

More details here: https://www.tavus.io/post/raven-1-bringing-emotional-intelli...

raven.tavuslabs.org
48 14
grazulex 3 days ago

Show HN: ArtisanForge: Learn Laravel through a gamified RPG adventure

Hey HN,

I built ArtisanForge, a free platform to learn PHP and Laravel through a medieval-fantasy RPG. Instead of traditional tutorials, you progress through kingdoms, solve coding exercises in a browser editor, earn XP, join guilds, and fight boss battles.

Tech stack: Laravel 12, Livewire 3, Tailwind CSS, Alpine.js. Code execution runs sandboxed via php-wasm in the browser.

What's in there:

- 12 courses across 11 kingdoms (PHP basics to deployment)

- 100+ interactive exercises with real-time code validation using AST analysis

- AI companion (Pip the Owlox) that uses Socratic method – never gives direct answers

- Full gamification: XP, levels, streaks, achievements, guilds, leaderboard

- Multilingual (EN/FR/NL)

The idea came from seeing too many beginners drop off traditional courses. Wrapping concepts in quests and progression mechanics keeps motivation high without dumbing down the content.

Everything is free, no paywall, no premium tier. Feedback welcome – especially from Laravel devs and educators.

artisanforge.online
37 3
intervolz about 19 hours ago

Show HN: Sol LeWitt-style instruction-based drawings in the browser

Sol LeWitt was a conceptual artist who never touched his own walls.

He wrote the instructions and other people executed them: the original prompt engineer!

I bookmarked a project called "Solving Sol" seven years ago and made a repo in 2018. Committed a README. Never pushed anything else.

Fast forward to 2026, I finally built it.

https://intervolz.com/sollewitt/

intervolz.com
41 6
adwait12345 about 8 hours ago

Show HN: Building My Own Google Analytics for $0

How I reverse-engineered Google Analytics and built my own analytics service for personal projects.

adwait.me
10 5
proletarian about 5 hours ago

Show HN: Εἶδος – A non-Turing-complete language built on Plato's Theory of Forms

I've been reading Plato's texts and picking up some Ancient Greek, and I had a useless thought experiment: what would a programming language look like under the constraints of 4th-century-BC Athens?

Εἶδος (Eidos — "Form") is one result. It's a declarative language called Λόγος where you don't execute code — you declare what exists. Forms belong to Kinds. Forms bear testimony. A law of correspondence maps petitions to answers. There are no loops, no conditionals, no mutation. It's intentionally not Turing-complete, aligned with Plato's rejection of the apeiron (the infinite).

It governs a real HTTP server (Ἱστός) where routes aren't matched by branching — they're recognized as Forms and answered according to law. An unrecognized path returns οὐκ ἔστιν ("it is not") — not an error, an ontological statement.

The project includes a parser that recognizes rather than executes, static verification expressed as philosophical propositions (Totality, Consistency, Well-formedness), Graphviz ontology diagrams, and a Socratic dialectic generator that examines the specification through the four phases of the elenchus.

The Jupyter notebook walks through everything interactively — from parsing the spec in polytonic Greek to petitioning the live server to watching Socrates interrogate the ontology.

https://github.com/realadeel/eidos

github.com
2 1
baqiwaqi about 6 hours ago

Show HN: Windy – Place wind turbines on a map, see residential impact

I built a free tool that lets you drop wind turbines on an interactive map. It draws distance circles (500m–2km), detects nearby residential buildings, enforces minimum separation rules, and exports to PDF.
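
The separation check is conceptually simple; a rough sketch of the idea (assumed logic, not the site's code):

  from math import asin, cos, radians, sin, sqrt

  def haversine_m(lat1, lon1, lat2, lon2):
      # great-circle distance in metres between two WGS84 points
      lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
      a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
      return 2 * 6371000 * asin(sqrt(a))

  def violates_setback(turbine, building, min_m=500):
      return haversine_m(*turbine, *building) < min_m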

windy-pi.vercel.app
2 0
vrathee about 6 hours ago

Show HN: Web Scraping Sandbox Website

Scrapingsandbox.com is a platform that provides a safe and ethical sandbox environment for web scraping, allowing users to test and develop their scraping tools without the risk of being blocked or banned by target websites.

scrapingsandbox.com
2 1
ady1981 about 6 hours ago

Show HN: AI-Templates for Obsidian Templater

I developed AI templates for Obsidian Templater for developing new knowledge. The key features:

* ready-to-use templates (with default settings)

* structured LLM prompting

* maximally efficient LLM prompting via aspect management

* flexible LLM output management

github.com
2 1