Show HN: Pdfwithlove – PDF tools that run 100% locally (no uploads, no back end)
Most PDF web tools make millions by uploading documents that never needed to leave your computer.
pdfwithlove does the opposite:
1. 100% local processing
2. No uploads, no backend, no tracking
Features include merge/split/edit/compress PDFs, watermarks & signatures, and image/HTML/Office → PDF conversion.
Show HN: I quit coding years ago. AI brought me back
Quick background: I used to code. Studied it in school, wrote some projects, but eventually convinced myself I wasn't cut out for it. Too slow, too many bugs, imposter syndrome — the usual story. So I pivoted, ended up as an investment associate at an early-stage angel fund, and haven't written real code in years.
Fast forward to now. I'm a Buffett nerd — big believer in compound interest as a mental model for life. I run compound interest calculations constantly. Not because I need to, but because watching numbers grow over 30-40 years keeps me patient when markets get wild. It's basically meditation for long-term investors.
The problem? Every compound interest calculator online is terrible. Ugly interfaces, ads covering half the screen, can't customize compounding frequency properly, no year-by-year breakdowns. I've tried so many. They all suck.
When vibe coding started blowing up, something clicked. Maybe I could actually build the calculators I wanted? I don't have to be a "real developer" anymore — I just need to describe what I want clearly.
So I tried it.
Two weeks and ~$100 in API costs (Opus 4.5, thinking model) later: I somehow have 60+ calculators. Started with compound interest, naturally. Then thought "well, while I'm here..." and added mortgage, loan amortization, savings goals, retirement projections. Then it spiraled — BMI calculator, timezone converter, regex tester. Oops.
The AI (I'm using Claude via Windsurf) handled the grunt work beautifully. I'd describe exactly what I wanted — "compound interest calculator with monthly/quarterly/yearly options, year-by-year breakdown table, recurring contribution support" — and it delivered. With validation, nice components, even tests.
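For reference, the math behind such a calculator is standard: with a rate compounded n times a year and an equal contribution at the end of each period, the future value is the principal grown forward plus a geometric-series sum of the contributions. A quick sketch (my own illustration, not Calquio's code):

```python
def future_value(principal, annual_rate, years,
                 periods_per_year=12, contribution=0.0):
    """Future value with periodic compounding and an equal contribution
    added at the end of every compounding period."""
    r = annual_rate / periods_per_year      # rate per period
    n = periods_per_year * years            # total number of periods
    growth = (1 + r) ** n
    fv = principal * growth
    if r > 0:
        # geometric-series sum of n contributions, each compounded forward
        fv += contribution * (growth - 1) / r
    else:
        fv += contribution * n              # zero interest: contributions only
    return fv

# $10,000 at 7% for 30 years, compounded monthly, $500 added each month
print(f"{future_value(10_000, 0.07, 30, 12, 500):,.0f}")
```

A year-by-year breakdown table is just this function evaluated at years 1..N.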
What I realized: my years away from coding weren't wasted. I still understood architecture, I still knew what good UX looked like, I still had domain expertise (financial math). I just couldn't type it all out efficiently. AI filled that gap perfectly.
Vibe coding didn't make me a 10x engineer. But it gave me permission to build again. Ideas I've had for years suddenly feel achievable. That's honestly the bigger win for me.
Stack: Next.js, React, TailwindCSS, shadcn/ui, four languages (EN/DE/FR/JA). The AI picked most of this when I said "modern and clean."
Site's live at https://calquio.com. The compound interest calculator is still my favorite page — finally exactly what I wanted.
Curious if others have similar stories. Anyone else come back to building after stepping away?
Show HN: AWS-doctor – A terminal-based AWS health check and cost optimizer in Go
The article provides a guide for setting up and using AWS Doctor, an open-source tool that automates the process of diagnosing and troubleshooting issues with AWS resources, helping users identify and resolve problems more efficiently.
Show HN: Dock – Slack minus the bloat, tax, and 90-day memory loss
Hey HN – I built Dock after years of team chat frustrations as a founder. Free forever for teams up to 5. Unlimited search, unlimited history. No "upgrade to see messages older than 90 days" nonsense. Built for teams who work async and switch to sync/real-time when it matters. Runs on SOC 2-compliant infrastructure on Cloudflare, with encryption in transit and at rest.
Early stage – would love feedback from anyone who's felt the same pain.
Show HN: Lume 0.2 – Build and Run macOS VMs with unattended setup
Hey HN, Lume is an open-source CLI for running macOS and Linux VMs on Apple Silicon. Since launch (https://news.ycombinator.com/item?id=42908061), we've been using it to run AI agents in isolated macOS environments. We needed VMs that could set themselves up, so we built that.
Here's what's new in 0.2:
*Unattended Setup* – Go from IPSW to a fully configured VM without touching the keyboard. We built a VNC + OCR system that clicks through macOS Setup Assistant automatically. No more manual setup before pushing to a registry:
lume create my-vm --os macos --ipsw latest --unattended tahoe
You can write custom YAML configs to set up any macOS version your way.

*HTTP API + Daemon* – A REST API on port 7777 that runs as a background service. Your scripts and CI pipelines can manage VMs that persist even if your terminal closes:
curl -X POST localhost:7777/lume/vms/my-vm/run -d '{"noDisplay": true}'
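If it helps, here's the same call driven from Python instead of curl — a sketch assuming only the route and payload shown above (check the docs for the full API):

```python
import json
import urllib.request

def run_vm_request(name: str, no_display: bool = True) -> urllib.request.Request:
    """Build the POST request for the Lume daemon's run endpoint
    (assumes the /lume/vms/<name>/run route shown above)."""
    return urllib.request.Request(
        f"http://localhost:7777/lume/vms/{name}/run",
        data=json.dumps({"noDisplay": no_display}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it requires a running daemon:
# with urllib.request.urlopen(run_vm_request("my-vm")) as resp:
#     print(resp.status)
```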
*MCP Server* – Native integration with Claude Desktop and AI coding agents. Claude can create, run, and execute commands in VMs directly:

# Add to Claude Desktop config
"lume": { "command": "lume", "args": ["serve", "--mcp"] }
# Then just ask: "Create a sandbox VM and run my tests"
*Multi-location Storage* – macOS disk space is always tight, so from user feedback we added support for external drives. Add an SSD, move VMs between locations:

lume config storage add external-ssd /Volumes/ExternalSSD/lume
lume clone my-vm backup --source-storage default --dest-storage external-ssd
*Registry Support* – Pull and push VM images from GHCR or GCS. Create a golden image once, share it across your team.

We're seeing people use Lume for:
- Running Claude Code in an isolated VM (your host stays clean, reset mistakes by cloning)
- CI/CD pipelines for Apple platform apps
- Automated UI testing across macOS versions
- Disposable sandboxes for security research
To get started:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/trycua/cua/main/libs/lume/scripts/install.sh)"
lume create sandbox --os macos --ipsw latest --unattended tahoe
lume run sandbox --shared-dir ~/my-project
Lume is MIT licensed and Apple Silicon only (M1/M2/M3/M4) since it uses Apple's native Virtualization Framework directly—no emulation.

Lume runs on EC2 Mac instances and Scaleway if you need cloud infrastructure. We're also working on a managed cloud offering for teams that need macOS compute on demand—if you're interested, reach out.
We're actively developing this as part of Cua (https://github.com/trycua/cua), our Computer Use Agent SDK. We'd love your feedback, bug reports, or feature ideas.
GitHub: https://github.com/trycua/cua
Docs: https://cua.ai/docs/lume
We'll be here to answer questions!
Show HN: Beats, a web-based drum machine
Hello all!
I've been an avid fan of Pocket Operators by Teenage Engineering [0] since I found out about them. I even own an EP-133 K.O. II today, which I love.
A couple of months ago, Reddit user andiam03 shared a Google Sheet with some drum patterns [1]. I thought it was a very cool way to share and understand beats.
Over the weekend I coded a basic version of the app I'm sharing today. I iterated on it in my free time, and yesterday I felt like I had a pretty good version to share with y'all.
It's not meant to be a sequencer but rather a way to experiment with beats and basic sounds, save them, and use them in your songs. It also has a sharing feature with a link.
It was built using Tone.js [2] and Stimulus [3], and deployed on Render [4] as a static website. I used an LLM to read the Tone.js documentation and generate the sounds, since I have no knowledge of sound production, and modified them from there.
Anyway, hope you like it! I had a blast building it.
[0]: https://teenage.engineering
[1]: https://docs.google.com/spreadsheets/d/1GMRWxEqcZGdBzJg52soe...
[2]: https://tonejs.github.io
[3]: https://stimulus.hotwired.dev
[4]: http://render.com
Show HN: Intent Layer: A context engineering skill for AI agents
This article explores the concept of the Intent Layer, a crucial aspect of building robust and scalable software applications. The article discusses how the Intent Layer decouples the user interface from the application logic, allowing for improved flexibility, maintainability, and testability.
Show HN: CodeAnswr – Stack Overflow alternative with E2E encryption
Built this after seeing developers leak credentials on SO daily. Key features: (1) Privacy scanner with 20+ detection patterns (2) Client-side RSA encryption for sensitive questions (3) AI job matching based on actual contributions. Stack: Cloudflare Workers + D1 + Vectorize. Runs on 100% free tier. Looking for feedback on the crypto implementation.
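For anyone curious what a pattern-based privacy scanner looks like in miniature, here's a hedged sketch; the patterns below are common illustrative examples, not CodeAnswr's actual rule set:

```python
import re

# A few well-known credential shapes (illustrative; a real scanner has many more).
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str) -> list[str]:
    """Return the names of patterns that match the given snippet."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(scan("aws_key = 'AKIAABCDEFGHIJKLMNOP'"))  # → ['aws_access_key']
```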
Show HN: Xenia – A monospaced font built with a custom Python engine
I'm an engineer who spent the last year fixing everything I hated about monospaced fonts (especially that double-story 'a').
I built a custom Python-based procedural engine to generate the weights because I wanted more logical control over the geometry. It currently has 700+ glyphs and deep math support.
Regular weight is free for the community. I'm releasing more weights based on interest.
Show HN: Run LLMs in Docker for any language without prebuilding containers
I've been looking for a way to run LLMs safely without needing to approve every command. There are plenty of projects out there that run the agent in docker, but they don't always contain the dependencies that I need.
Then it struck me. I already define project dependencies with mise. What if we could build a container on the fly for any project by reading the mise config?
I've been using agent-en-place for a couple of weeks now, and it's working great! I'd love to hear what y'all think
Show HN: HTTP:COLON – A quick HTTP header/directive inspector and reference
Hi HN -- I built HTTP:COLON, a small, open-source web tool for quickly checking a site’s HTTP response headers and learning what they mean as you go.
Link: https://httpcolon.dev/
What it does
- Enter a URL and fetch its response headers
- Groups common headers into handy buckets (cache, content, security)
- Includes short docs/tooltips for headers and directives so you can look things up while debugging. I find hovering on highlighted headers quite useful!
- Supports different HTTP methods (GET/POST/PUT/DELETE)
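The header grouping can be pictured as a simple lookup; this bucket mapping is my guess at the idea, not the tool's actual taxonomy:

```python
# Hypothetical bucket mapping; HTTP:COLON's real taxonomy may differ.
BUCKETS = {
    "cache": {"cache-control", "etag", "expires", "age"},
    "content": {"content-type", "content-length", "content-encoding"},
    "security": {"strict-transport-security", "content-security-policy",
                 "x-frame-options"},
}

def bucket_headers(headers: dict[str, str]) -> dict[str, dict[str, str]]:
    """Group response headers into the buckets above; unknown names are skipped."""
    out: dict[str, dict[str, str]] = {b: {} for b in BUCKETS}
    for name, value in headers.items():
        lower = name.lower()
        for bucket, names in BUCKETS.items():
            if lower in names:
                out[bucket][lower] = value
    return out

print(bucket_headers({"Cache-Control": "max-age=3600",
                      "Content-Type": "text/html"}))
```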
Deep links
- You can link directly to a host, e.g. https://httpcolon.dev/www.google.com (or any domain), to jump straight into inspecting it.
Why I made it
- I kept bouncing between DevTools, MDN, and random blog posts while debugging caching + security headers. I wanted one place that’s quick for “what am I getting back?” and “what does this header/directive do?”
It’s in beta, and I’d love feedback on:
- Missing features you’d want for day-to-day debugging (export/share formats, comparisons, presets, etc.)
Thanks!
Show HN: Figma-use – CLI to control Figma for AI agents
I'm Dan, and I built a CLI that lets AI agents design in Figma.
What it does: 100 commands to create shapes, text, frames, components, modify styles, export assets. JSX importing that's ~100x faster than any plugin API import. Works with any LLM coding assistant.
Why I built it: The official Figma MCP server can only read files. I wanted AI to actually design — create buttons, build layouts, generate entire component systems. Existing solutions were either read-only or required verbose JSON schemas that burn through tokens.
Demo (45 sec): https://youtu.be/9eSYVZRle7o
Tech stack: Bun + Citty for CLI, Elysia WebSocket proxy, Figma plugin. The render command connects to Figma's internal multiplayer protocol via Chrome DevTools for extra performance when dealing with large groups of objects.
Try it: bun install -g @dannote/figma-use
Looking for feedback on CLI ergonomics, missing commands, and whether the JSX syntax feels natural.
Show HN: Opal Editor, free Obsidian alternative for markdown and site publishing
A fully featured markdown editor and publisher. Free, open-source and browser-first (no backend required). Built with modern technologies like React, TypeScript, Shadcn/UI, and Vite. (thoughtfully crafted, not vibe coded)
Show HN: GibRAM an in-memory ephemeral GraphRAG runtime for retrieval
Hi HN,
I have been working with regulation-heavy documents lately, and one thing kept bothering me. Flat RAG pipelines often fail to retrieve related articles together, even when they are clearly connected through references, definitions, or clauses.
After trying several RAG setups, I subjectively felt that GraphRAG was a better mental model for this kind of data. The Microsoft GraphRAG paper and reference implementation were helpful starting points. However, in practice, I found one recurring friction point: graph storage and vector indexing are usually handled by separate systems, which felt unnecessarily heavy for short-lived analysis tasks.
To explore this tradeoff, I built GibRAM (Graph in-buffer Retrieval and Associative Memory). It is an experimental, in-memory GraphRAG runtime where entities, relationships, text units, and embeddings live side by side in a single process.
GibRAM is intentionally ephemeral. It is designed for exploratory tasks like summarization or conversational querying over a bounded document set. Data lives in memory, scoped by session, and is automatically cleaned up via TTL. There are no durability guarantees, and recomputation is considered cheaper than persistence for the intended use cases.
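As a toy illustration of that design (session-scoped state, TTL sweep, graph and vectors living in one process), not GibRAM's actual API:

```python
import time

class Session:
    """Everything for one analysis session lives side by side in memory."""
    def __init__(self, ttl_seconds: float):
        self.expires_at = time.monotonic() + ttl_seconds
        self.entities: dict[str, dict] = {}
        self.edges: list[tuple[str, str, str]] = []   # (src, relation, dst)
        self.embeddings: dict[str, list[float]] = {}  # text-unit id -> vector

class Store:
    def __init__(self):
        self.sessions: dict[str, Session] = {}

    def create(self, sid: str, ttl: float = 3600.0) -> Session:
        self.sessions[sid] = Session(ttl)
        return self.sessions[sid]

    def sweep(self) -> None:
        """Drop expired sessions: recomputation is cheaper than persistence."""
        now = time.monotonic()
        self.sessions = {k: s for k, s in self.sessions.items()
                         if s.expires_at > now}

store = Store()
store.create("demo", ttl=0.0)
time.sleep(0.01)
store.sweep()
print(len(store.sessions))  # → 0
```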
This is not a database and not a production-ready system. It is a casual project, largely vibe-coded, meant to explore what GraphRAG looks like when memory is the primary constraint instead of storage. Technical debt exists, and many tradeoffs are explicit.
The project is open source, and I would really appreciate feedback, especially from people working on RAG, search infrastructure, or graph-based retrieval.
GitHub: https://github.com/gibram-io/gibram
Happy to answer questions or hear why this approach might be flawed.
Show HN: ChunkHound, a local-first tool for understanding large codebases
ChunkHound’s goal is simple: local-first codebase intelligence that helps you pull deep, core-dev-level insights on demand, generate always-up-to-date docs, and scale from small repos to enterprise monorepos — while staying free + open source and provider-agnostic (VoyageAI / OpenAI / Qwen3, Anthropic / OpenAI / Gemini / Grok, and more).
I’d love your feedback — and if you’ve already tried ChunkHound, thank you for being part of the journey!
Show HN: Streaming gigabyte medical images from S3 without downloading them
WSIStreamer is an open-source platform for real-time whole-slide image (WSI) visualization and analysis. It enables the streaming and exploration of large-scale pathology slides, allowing users to view, annotate, and analyze digital histology samples remotely.
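The usual mechanism behind viewing gigabyte slides without downloading them is HTTP Range requests against a tiled pyramidal format: the viewer fetches only the byte span of each tile it needs, and S3 serves partial content. A minimal sketch of the range math (offsets hypothetical; a real viewer reads them from the format's tile index):

```python
def range_header(start: int, length: int) -> dict[str, str]:
    """HTTP Range header covering `length` bytes from `start`
    (the end offset is inclusive per the Range spec)."""
    return {"Range": f"bytes={start}-{start + length - 1}"}

# e.g. a 1 MiB tile block starting 1 MiB into a remote slide file
print(range_header(1 << 20, 1 << 20))  # → {'Range': 'bytes=1048576-2097151'}
```

Pass the header with any HTTP client; S3 replies with 206 Partial Content containing just those bytes.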
Show HN: Auto-switch keyboard layout per physical keyboard (Rust, Linux/KDE)
The post describes a keyboard layout daemon for Linux/KDE, written in Rust, that automatically switches the active layout depending on which physical keyboard is in use, with an interface for managing the per-device layout mappings.
Show HN: Speed Miners – A tiny RTS resource mini-game
I've always loved RTS games and wanted to make a game similar for a long time. I thought I'd just try and build a mini / puzzle game around the resource gathering aspects of an RTS.
Objective: You have a base at the center and you need to mine and "refine" all of the resources on the map in as short a time as possible.
By default, the game plays automatically, but not optimally (moving drones and buying upgrades). You can disable that with the buttons. You can select drones and right-click to move them to specific resource patches, and buy upgrades as you earn upgrade points.
I've implemented three different levels and some basic sounds. I used Phaser as the game library (first time using it). It won't work well on mobile.
Show HN: Kling.to – Self-hosted email marketing with full data ownership
I'm Caesar, and I built kling.to because I believe marketing automation should be accessible to everyone without sacrificing data ownership.
Kling is a self-hosted email marketing platform that lets you run campaigns, build automation workflows, segment audiences, and track attribution while keeping full control of your customer data. It deploys via Docker and uses MongoDB with BullMQ for job scheduling.
Core capabilities:
- Visual workflow builder for abandoned cart recovery, win-back campaigns, and post-purchase sequences
- Multi-channel messaging (email, SMS, WhatsApp, push notifications)
- Last-touch and multi-touch attribution tracking
- Customer segmentation with purchase history and engagement data
- Template management with personalization variables
- Real-time deliverability monitoring (sent, delivered, opened, clicked, bounced)
I started this project because existing marketing automation tools either lock you into their infrastructure or cost too much for small teams. We're giving businesses the choice to self-host and own their data.
The platform is built for e-commerce stores, engineering-led teams, and privacy-conscious organizations that want control over their marketing stack.
I'd value feedback on:
- Architecture choices (MongoDB + BullMQ for job scheduling)
- Deliverability infrastructure for self-hosted setups
- Automation workflow UX and trigger logic
- Attribution model implementation
Demo video: https://youtu.be/hbFjX525AwA
Website: https://kling.to
Happy to answer questions about the technical implementation or deployment process.
Show HN: Mist – a lightweight, self-hosted PaaS
Hi HN,
We’re building Mist, a lightweight, self-hosted PaaS for running and managing applications on your own servers.
We started Mist because we wanted a middle ground between raw VPS management and heavy, all-in-one platforms. Existing PaaS solutions often felt too complex or resource-intensive for small servers, homelabs, and side projects. We wanted something that keeps the PaaS experience but stays simple and predictable.
Our goals with Mist are: - A simple PaaS to deploy and manage apps on your own infrastructure - HTTPS, routing, and app access handled automatically - Low resource usage so it runs well on small VPSes - Self-hosted and transparent, with minimal magic
Mist focuses on being an opinionated but lightweight layer on top of your server. It doesn’t try to hide everything behind abstractions, but it does aim to remove the repetitive operational work that comes with managing apps by hand.
Mist is still early, and this is where we really need help. We’re actively looking for:
- Users who want a simple self-hosted PaaS and can share real-world feedback
- Contributors who want to help shape the core features, architecture, and UX
Website: https://trymist.cloud
Repo: https://github.com/corecollectives/mist
Show HN: Map of illegal dumping reports in Oakland
The article presents an interactive map that allows users to report and view incidents of illegal dumping in Oakland, California. The map aims to help the community and local authorities address the issue of illegal waste disposal in the city.
Show HN: Terravision – Generate Cloud architecture diagrams from Terraform code
Show HN: WebTerm – Browser-based terminal emulator
WebTerm.app is an online terminal emulator that allows users to access a Linux-based command-line interface directly in their web browsers, providing a convenient way to perform various tasks and experiments without the need for local software installation.
Show HN: TinyCity – A tiny city SIM for MicroPython (Thumby micro console)
Show HN: App to spoof GPS location on iOS without jailbreaking
The article describes an iOS Location Spoofer, a tool that allows users to spoof their GPS location on iOS devices. This can be useful for testing location-based apps or bypassing location restrictions, but may also have privacy implications.
Show HN: I built a tool to assist AI agents to know when a PR is good to go
I've been using Claude Code heavily, and kept hitting the same issue: the agent would push changes, respond to reviews, wait for CI... but never really know when it was done.
It would poll CI in loops. Miss actionable comments buried among 15 CodeRabbit suggestions. Or declare victory while threads were still unresolved.
The core problem: no deterministic way for an agent to know a PR is ready to merge.
So I built gtg (Good To Go). One command, one answer:
$ gtg 123
OK PR #123: READY
  CI: success (5/5 passed)
  Threads: 3/3 resolved
It aggregates CI status, classifies review comments (actionable vs. noise), and tracks thread resolution. Returns JSON for agents or human-readable text.
The comment classification is the interesting part — it understands CodeRabbit severity markers, Greptile patterns, Claude's blocking/approval language. "Critical: SQL injection" gets flagged; "Nice refactor!" doesn't.
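A stripped-down version of that classification idea might look like this; the patterns are illustrative guesses, not gtg's actual rules:

```python
import re

# Illustrative severity/noise markers (real reviewers like CodeRabbit or
# Greptile have their own conventions; gtg's actual patterns differ).
ACTIONABLE = [
    re.compile(r"\b(critical|major|blocking|security|bug)\b", re.I),
    re.compile(r"\bmust\b", re.I),
]
NOISE = [
    re.compile(r"\b(nit|nitpick|optional|nice|lgtm)\b", re.I),
]

def classify(comment: str) -> str:
    """Label a review comment as actionable, noise, or unknown."""
    if any(p.search(comment) for p in ACTIONABLE):
        return "actionable"
    if any(p.search(comment) for p in NOISE):
        return "noise"
    return "unknown"

print(classify("Critical: SQL injection in query builder"))  # → actionable
print(classify("Nice refactor!"))                            # → noise
```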
MIT licensed, pure Python. I use this daily in a larger agent orchestration system — would love feedback from others building similar workflows.
Show HN: Hekate – A Zero-Copy ZK Engine Overcoming the Memory Wall
Most ZK proving systems are optimized for server-grade hardware with massive RAM. When scaling to industrial-sized traces (2^20+ rows), they often hit a "Memory Wall" where allocation and data movement become a larger bottleneck than the actual computation.
I have been developing Hekate, a ZK engine written in Rust that utilizes a Zero-Copy streaming model and a hybrid tiled evaluator. To test its limits, I ran a head-to-head benchmark against Binius64 on an Apple M3 Max laptop using Keccak-256.
The results highlight a significant architectural divergence:
At 2^15 rows: Binius64 is faster (147ms vs 202ms), but Hekate is already 10x more memory efficient (44MB vs ~400MB).
At 2^20 rows: Binius64 hits 72GB of RAM usage, entering swap hell on a laptop. Hekate processes the same workload in 4.74s using just 1.4GB of RAM.
At 2^24 rows (16.7M steps): Hekate finishes in 88s with a peak RAM of 21.5GB. Binius64 is unable to complete the task due to OOM/Swap on this hardware.
The core difference is "Materialization vs. Streaming". While many engines materialize and copy massive polynomials in RAM during Sumcheck and PCS operations, Hekate streams them through the CPU cache in tiles. This shifts the unit economics of ZK proving from $2.00/hour high-memory cloud instances to $0.10/hour commodity hardware or local edge devices.
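The materialize-vs-stream tradeoff is easy to show in miniature: both passes below compute the same result, but one allocates the whole evaluation table while the other keeps only a tile in memory. (A toy analogy, not Hekate's binary-field sumcheck.)

```python
N = 1 << 16     # number of evaluation points
TILE = 1 << 10  # cache-sized tile

def f(i: int) -> int:
    # stand-in for evaluating one trace-column entry
    return (i * i + 3) & 0xFFFFFFFF

def materialized_sum() -> int:
    values = [f(i) for i in range(N)]   # O(N) memory: the whole table
    return sum(values)

def streamed_sum() -> int:
    total = 0
    for start in range(0, N, TILE):     # O(TILE) memory per pass
        total += sum(f(i) for i in range(start, start + TILE))
    return total

assert materialized_sum() == streamed_sum()
```

Same arithmetic, radically different peak memory, which is the whole point at 2^20+ rows.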
I am looking for feedback from the community, especially those working on binary fields, GKR, and memory-constrained SNARK/STARK implementations.
Show HN: Sparrow-1 – Audio-native model for human-level turn-taking without ASR
For the past year at Tavus, I've been working to rethink how AI manages timing in conversation. I've spent a lot of time listening to conversations. Today we're announcing the release of Sparrow-1, the most advanced conversational flow model in the world.
Some technical details:
- Predicts conversational floor ownership, not speech endpoints
- Audio-native streaming model, no ASR dependency
- Human-timed responses without silence-based delays
- Zero interruptions at sub-100ms median latency
- In benchmarks, Sparrow-1 beats all existing models on real-world turn-taking baselines
I wrote more about the work here: https://www.tavus.io/post/sparrow-1-human-level-conversation...
Show HN: Webctl – Browser automation for agents based on CLI instead of MCP
Hi HN, I built webctl because I was frustrated by the gap between curl and full browser automation frameworks like Playwright.
I initially built this to solve a personal headache: I wanted an AI agent to handle project management tasks on my company’s intranet. I needed it to persist cookies across sessions (to handle SSO) and then scrape a Kanban board.
Existing AI browser tools (like current MCP implementations) often force unsolicited data into the context window—dumping the full accessibility tree, console logs, and network errors whether you asked for them or not.
webctl is an attempt to solve this with a Unix-style CLI:
- Filter before context: You pipe the output to standard tools. webctl snapshot --interactive-only | head -n 20 means the LLM only sees exactly what I want it to see.
- Daemon Architecture: It runs a persistent background process. The goal is to keep the browser state (cookies/session) alive while you run discrete, stateless CLI commands.
- Semantic targeting: It uses ARIA roles (e.g., role=button name~="Submit") rather than fragile CSS selectors.
Disclaimer: The daemon logic for state persistence is still a bit experimental, but the architecture feels like the right direction for building local, token-efficient agents.
It’s basically "Playwright for the terminal."
Show HN: Tusk Drift – Turn production traffic into API tests
Hi HN! In the past few months my team and I have been working on Tusk Drift, a system that records real API traffic from your service, then replays those requests as deterministic tests. Outbound I/O (databases, HTTP calls, etc.) gets automatically mocked using the recorded data.
Problem we're trying to solve: Writing API tests is tedious, and hand-written mocks drift from reality. We wanted tests that stay realistic because they come from real traffic.
Versus mocking libraries: tools like VCR/Nock intercept HTTP within your tests. Tusk Drift records full request/response traces externally (HTTP, DB, Redis, etc.) and replays them against your running service; no test code or fixtures to write or maintain.
How it works:
1. Add a lightweight SDK (we currently support Python and Node.js)
2. Record traffic in any environment.
3. Run `tusk run`; the CLI sandboxes your service and serves mocks via a Unix socket
We run this in CI on every PR. We've also been using it as a test harness for AI coding agents: they can make changes, run `tusk run`, and get immediate feedback without needing live dependencies.
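The record/replay core can be sketched in a few lines; this toy is my illustration of the idea, not Tusk Drift's SDK:

```python
class RecordReplay:
    """Toy record/replay for one outbound dependency (illustrative only;
    Tusk Drift instruments real clients and replays over a Unix socket)."""
    def __init__(self, mode: str):
        self.mode = mode                 # "record" or "replay"
        self.tape: dict[str, str] = {}

    def call(self, key: str, real_fn):
        if self.mode == "replay":
            return self.tape[key]        # serve the recording, skip the network
        result = real_fn()
        self.tape[key] = result          # record request -> response
        return result

rr = RecordReplay("record")
rr.call("GET /users/1", lambda: '{"id": 1}')   # hits the "real" dependency
rr.mode = "replay"
print(rr.call("GET /users/1", lambda: "never called"))  # → {"id": 1}
```

In replay mode the real function is never invoked, which is what makes the tests deterministic.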
Source: https://github.com/Use-Tusk/tusk-drift-cli
Demo: https://github.com/Use-Tusk/drift-node-demo
Happy to answer questions!