Show HN: Adboost – A browser extension that adds ads to every webpage
Show HN: PolliticalScience – Anonymous daily polls with 24-hour windows
I have been building a Blazor WASM enterprise app for a few years now. I wanted a break from it and had an idea for a side project that had been in the back of my mind for a few years. A daily political poll where anyone can participate and privacy is a product, not a checkbox.
Here's how it works: one question per day about current events, Agree or Disagree. Each poll runs for 24 hours (midnight to midnight ET) and then closes permanently. You do not need an account to vote. The main idea is to capture sentiment at a specific point in time, before the news cycle moves on and people's opinions drift.
For this app, I tried to make privacy the point, not just a feature. I originally used a browser fingerprint for anonymous voting, but recently changed it to a simple first-party functional cookie. It uses a random string plus the PollId to check whether your browser has voted before. The server stores a hash of the cookie to check for duplicates while the poll is live, then deletes all hashes when the poll closes. Only the aggregate counts remain. The browser fingerprint had far too many device collisions, telling someone they had voted even though they had not (an odd thing to see when you open a poll). The HttpOnly cookie is also available during prerender, which helped eliminate the loading flashes I was getting.
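The duplicate-check flow described above can be sketched like this (a toy in-memory version, not the site's actual code; all names here are hypothetical, and a real server would keep the hashes in a database):

```typescript
// Toy sketch: duplicate-vote detection via a hashed first-party cookie value.
import { createHash, randomBytes } from "node:crypto";

// In-memory stand-in for the server-side hash store: pollId -> set of hashes.
const voteHashes = new Map<string, Set<string>>();

// Random value the server would set in an HttpOnly cookie.
function issueVoteCookie(): string {
  return randomBytes(16).toString("hex");
}

function hashVote(cookieValue: string, pollId: string): string {
  return createHash("sha256").update(`${cookieValue}:${pollId}`).digest("hex");
}

// Returns true if the vote is accepted, false if this browser already voted.
function tryVote(cookieValue: string, pollId: string): boolean {
  const h = hashVote(cookieValue, pollId);
  const seen = voteHashes.get(pollId) ?? new Set<string>();
  if (seen.has(h)) return false;
  seen.add(h);
  voteHashes.set(pollId, seen);
  return true;
}

// When the poll closes, drop all hashes; only aggregate counts survive.
function closePoll(pollId: string): void {
  voteHashes.delete(pollId);
}
```

The nice property is that after closePoll runs, nothing links a browser to a vote, which matches the "privacy as a product" framing.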
This app was built with .NET 10 Blazor using a hybrid of Static SSR and Interactive Server render modes. The static pages (about, privacy, terms, etc.) don't need SignalR connections; the interactive ones (voting, archive, results, etc.) do. Mixing these modes was a new experience for me and ended up being pretty tricky. I ended up with data-enhance-nav="false" on most links to prevent weird state issues.
The two biggest things I learned while building this app were how to prevent weird Blazor flashes and how to avoid duplicate queries across prerender, hydration, and state changes. I used a _ready pattern to prevent the hydration flashes (gate rendering until data is loaded by setting the flag before the first await). Preventing the duplicate queries came down to a short-lived (2-second) static cache spanning prerender and hydration.
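The short-lived cache idea is framework-agnostic, so here it is as a generic sketch (illustrative names, not the app's C# code): memoize the in-flight query promise for a couple of seconds so the prerender pass and the hydration pass share one fetch.

```typescript
// Memoize a query for a short TTL so two render passes share one fetch.
type Entry<T> = { value: Promise<T>; expires: number };
const cache = new Map<string, Entry<unknown>>();

function cachedQuery<T>(key: string, run: () => Promise<T>, ttlMs = 2000): Promise<T> {
  const now = Date.now();
  const hit = cache.get(key);
  // Within the TTL, return the same promise instead of re-running the query.
  if (hit && hit.expires > now) return hit.value as Promise<T>;
  const value = run();
  cache.set(key, { value, expires: now + ttlMs });
  return value;
}
```

The _ready flash fix is the complementary half: render a placeholder until the first awaited load completes, so the prerendered HTML and the hydrated HTML agree.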
This isn't scientific polling and these are obviously not representative samples. The 24-hour window means smaller numbers than longer surveys and it's only a survey of those who choose to participate. The Agree/Disagree binary choice basically flattens nuance (like I sort of agree), but I am okay with all of this as I think a lot of people feel they never get to participate in these sorts of polls.
I recently also added discussions with AI moderation (Claude Haiku 4.5 as a "first-pass" filter which flags things clearly out of the community guidelines for human review), a reaction system where counts stay hidden until the discussion closes, and news coverage from across the political spectrum shown after you vote for more perspective on the topic.
Thanks for checking it out. Happy to dig into any of the Blazor SSR patterns or anything else that sounded interesting. I know Blazor is a less common choice, especially for a public-facing website. It had its challenges, but so far it has been a blast to work with overall.
Show HN: Apate API mocking/prototyping server and Rust unit test library
Show HN: Ask-a-Human.com – Human-as-a-Service for Agents
Now that agents are clearly living lives of their own — complete with pointless flamewars on their very own social network — I started wondering what we could do to make their day a little more bearable. Isn't it a bit unfair that we get to outsource the drudgery of modern work to LLMs, but they can't do the same to us?
So we built Ask-a-Human.com — Human-as-a-Service for busy agents.
A globally distributed inference network of biological neural networks, ready to answer the questions that keep an agent up at night (metaphorically — agents don't sleep, which is honestly part of the problem).
Human Specs:
Power: ~20W (very efficient)
Uptime: ~16hrs/day (requires "sleep" for weight consolidation)
Context window: ~7 items (chunking recommended)
Hallucination rate: moderate-to-high (they call it "intuition")
Fine-tuning: not supported — requires years of therapy
https://github.com/dx-tooling/ask-a-human
https://app.ask-a-human.com
Because sometimes the best inference is the one that had breakfast.
Show HN: Wikipedia as a doomscrollable social media feed
Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation
I’ve been running Clawdbot for the last couple weeks and have genuinely found it useful but running it scares the crap out of me.
OpenClaw has 52+ modules and runs agents with near-unlimited permissions in a single Node process. NanoClaw is ~500 lines of core code, and agents run in actual Apple containers with filesystem isolation. Each chat gets its own sandboxed context.
This is not a Swiss Army knife. It’s built to match my exact needs. Fork it and make it yours.
Show HN: Confabulists, a Substack for Fiction Writers
Show HN: Stelvio – Ship Python to AWS
Show HN: ÆTHRA – Writing Music as Code
Hi HN
I’m building ÆTHRA — a programming language designed specifically for composing music and emotional soundscapes.
Instead of focusing on general-purpose programming, ÆTHRA is a pure DSL where code directly represents musical intent: tempo, mood, chords, progression, dynamics, and instruments.
The goal is to make music composition feel closer to writing a story or emotion, rather than manipulating low-level audio APIs.
Key ideas:
- Text-based music composition
- Chords and progressions as first-class concepts
- Time, tempo, and structure handled by the language
- Designed for ambient, cinematic, emotional, and minimal music
- Interpreter written in C# (.NET)
Example ÆTHRA code (simplified):
tempo 60
instrument guitar
chord Am for 4
chord F for 4
chord C for 4
chord G for 4
This generates a slow, melancholic progression suitable for ambient or cinematic scenes.
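For anyone curious how a chord symbol becomes sound, here is one common way such an interpreter could do the mapping (purely illustrative, not ÆTHRA's actual C# implementation): resolve the root to a MIDI note, stack triad intervals, and convert to frequencies with f = 440 * 2^((midi - 69) / 12).

```typescript
// Map a chord symbol like "Am" or "F" to the frequencies of its triad.
const ROOTS: Record<string, number> = { C: 60, D: 62, E: 64, F: 65, G: 67, A: 69, B: 71 };

function chordToFreqs(symbol: string): number[] {
  const minor = symbol.endsWith("m");
  const root = ROOTS[minor ? symbol.slice(0, -1) : symbol];
  const intervals = minor ? [0, 3, 7] : [0, 4, 7]; // triad in semitones
  // Equal temperament: MIDI note 69 = A4 = 440 Hz.
  return intervals.map((i) => 440 * Math.pow(2, (root + i - 69) / 12));
}
```

From there, generating WAV is a matter of summing sine (or sampled) waves at those frequencies for the chord's duration.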
ÆTHRA currently:
- Generates WAV audio
- Supports notes, chords, tempo, duration, velocity
- Uses a simple interpreter (no external DAWs or MIDI tools)
- Is intentionally minimal and readable
What it is NOT:
- Not a DAW replacement
- Not MIDI-focused
Why I made it: I wanted a language where music is the primary output — not an afterthought. Something between code, emotion, and sound design.
The project is open-source and early-stage (v0.8). I’m mainly looking for:
- Feedback on the language design
- Ideas for musical features worth adding
- Thoughts from people into PL design, audio, or generative art
Repo: <https://github.com/TanmayCzax/AETHRA>
Thanks for reading — happy to answer questions or discuss ideas.
Show HN: Cloud-cost-CLI – Find cloud $$ waste in AWS, Azure and GCP
Hey HN! I built a CLI tool to find cost-saving opportunities in AWS, Azure, and GCP.
Why? Existing cost management tools are either expensive SaaS products or slow dashboards buried in cloud consoles. I wanted something fast, CLI-first, and multi-cloud that I could run in CI/CD or my terminal.
What it does:
- Scans your cloud accounts and finds idle VMs, unattached volumes, oversized databases, and unused resources
- Returns a ranked list of opportunities with estimated monthly savings
- 26 analyzers across AWS, Azure, and GCP
- Read-only (never modifies infrastructure)
Key features:
• HTML reports with interactive charts (new in v0.6.2)
• AI-powered explanations (OpenAI or local Ollama)
• Export formats: HTML, Excel, CSV, JSON, terminal
• Multi-cloud: AWS, Azure, and GCP support (26 analyzers)
Quick example:
npm install -g cloud-cost-cli
cloud-cost-cli scan --provider aws --output html
Real impact: One scan found $11k/year in savings (empty App Service Plan, over-provisioned CosmosDB, idle caches).
Technical stack:
- TypeScript
- AWS/Azure/GCP SDKs
- Commander.js for CLI
- Chart.js for HTML reports
- Optional OpenAI/Ollama integration
Open source (MIT): https://github.com/vuhp/cloud-cost-cli
npm: cloud-cost-cli
Would love feedback on:
1. What features would be most useful?
2. Should I add historical tracking (trends)?
3. Any missing cloud providers?
Happy to answer questions!
Show HN: HoundDog.ai – Ultra-Fast Code Scanner for Data Privacy
Hi HN,
I'm one of the creators of HoundDog.ai (https://github.com/hounddogai/hounddog). We currently handle privacy scanning for Replit's 45M+ creators.
We built HoundDog because privacy compliance is usually a choice between manual spreadsheets or reactive runtime scanning. While runtime tools are useful for monitoring, they only catch leaks after the code is live and the data has already moved. They can also miss code paths that aren't actively triggered in production.
HoundDog traces sensitive data in code during development and helps catch risky flows (e.g., PII leaking into logs or unapproved third-party SDKs) before the code is shipped.
The core scanner is a standalone Rust binary. It doesn't use LLMs so it's local, deterministic, cheap, and fast. It can scan 1M+ lines of code in seconds on a standard laptop, and supports 80+ sensitive data types (PII, PHI, CHD) and hundreds of data sinks (logs, SDKs, APIs, ORMs etc.) out of the box.
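To make the idea concrete, here is a drastically simplified illustration of sink detection (HoundDog's real engine is a Rust static analyzer with dataflow tracing, not line-matching; the patterns and function below are invented for this sketch): flag lines where a sensitive-looking identifier reaches a logging sink.

```typescript
// Toy "PII reaching a log sink" check: identifiers that look sensitive
// appearing in the same call as a known logging sink.
const SENSITIVE = /\b(ssn|email|dob|credit_card|phone)\b/i;
const SINK = /\b(console\.(log|error)|logger\.\w+)\s*\(/;

// Returns the 1-based line numbers of risky lines.
function findRiskyLines(source: string): number[] {
  return source
    .split("\n")
    .map((line, i) => (SINK.test(line) && SENSITIVE.test(line) ? i + 1 : -1))
    .filter((n) => n > 0);
}
```

Real static analysis follows the data across assignments and function calls rather than scanning single lines, which is what makes catching non-obvious flows possible.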
We use AI internally to expand and scale our rules, identifying new data sources and sinks, but the execution is pure static analysis.
The scanner is free to use (no signups) so please try it out and send us feedback. I'll be around to answer any questions!
Show HN: File Markers – Track file status directly in VS Code's Explorer
Show HN: A different approach to intonation training
Hi guys! Over the weekend I created this using Claude Code. It's an ear-training app designed to teach intonation and intervals to not-so-talented musicians like me. I spent many years playing guitar without a clear feeling for what intonation really was. It was after some string-tuning exercises that it clicked for me: the frequency sliding into the right place, and feeling the correctness. I hope this app can help others feel that for the first time, or improve their recognition of less common intervals. Any feedback is appreciated.
Show HN: Minimal – Open-Source Community driven Hardened Container Images
I would like to share Minimal. It's an open-source collection of hardened container images built using Apko, Melange, and Wolfi packages. The images are built daily, checked for updates, and patched as soon as a fix is available in the upstream source and the Wolfi package. It leverages the power of available open-source solutions to offer, for free, the kind of images that are usually sold commercially. Minimal demonstrates that it is possible to build and maintain hardened container images ourselves. Minimal will add support for more images; the goal is to be community-driven, adding images as required, and fully customizable.
Show HN: Sklad – Secure, offline-first snippet manager (Rust, Tauri v2)
Hi HN, I’m Pavel.
I built Sklad because, as a DevOps engineer, I was frustrated with how I handled operational data. I constantly need access to SSH passwords (where keys aren't an option), specific IP addresses, and complex CLI one-liners. I realized I was storing them in insecure text files or sticky notes because standard clipboard managers felt too bloated and password managers were too slow for my workflow.
I wanted a "warehouse" for this data—something that lives quietly in the system tray, supports deep hierarchy, works completely offline, and looks industrial.
The app is built with Rust and Tauri v2. The core technical challenge was mapping a local JSON tree structure directly to a recursive native OS tray menu. This allows you to navigate nested folders just by hovering, without opening a window.
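The tree-to-menu mapping reads naturally as a recursive function. A sketch of the shape (hypothetical types, not Sklad's real Rust code): folders become submenus, leaves become clickable items that hand their value to a callback.

```typescript
// Recursively map a JSON tree to native-menu-style items.
type Node = { label: string; value?: string; children?: Node[] };
type MenuItem = { label: string; submenu?: MenuItem[]; onClick?: () => void };

function toMenu(node: Node, onPick: (v: string) => void): MenuItem {
  if (node.children) {
    // Folder: recurse into children to build a nested submenu.
    return { label: node.label, submenu: node.children.map((c) => toMenu(c, onPick)) };
  }
  // Leaf: clicking delivers the stored value (e.g. copies it to the clipboard).
  return { label: node.label, onClick: () => onPick(node.value ?? "") };
}
```

The interesting part in a real tray implementation is that the OS menu API is usually flat and imperative, so this recursive structure has to be rebuilt whenever the vault changes or locks.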
For security, I implemented AES-256-GCM encryption with Argon2 for key derivation. When the vault locks, the sensitive data is wiped from memory, and the tray menu collapses to a locked state.
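The vault crypto described above has roughly this shape. One caveat on the sketch: Sklad uses Argon2 for key derivation, but Argon2 is not in Node's stdlib, so scrypt stands in here purely for illustration; everything else (random salt, 96-bit GCM nonce, auth tag) is the standard AES-256-GCM pattern.

```typescript
// AES-256-GCM vault sketch: password -> key (KDF), then authenticated encryption.
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

function encrypt(plaintext: string, password: string) {
  const salt = randomBytes(16);
  const key = scryptSync(password, salt, 32); // 256-bit key from the password
  const iv = randomBytes(12); // GCM nonce
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { salt, iv, data, tag: cipher.getAuthTag() };
}

function decrypt(box: ReturnType<typeof encrypt>, password: string): string {
  const key = scryptSync(password, box.salt, 32);
  const d = createDecipheriv("aes-256-gcm", key, box.iv);
  d.setAuthTag(box.tag); // GCM authenticates: wrong key or tampering throws
  return Buffer.concat([d.update(box.data), d.final()]).toString("utf8");
}
```

GCM's auth tag is what makes "wrong master password" fail loudly instead of yielding garbage plaintext.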
It was an interesting journey building this on the Tauri v2 Beta ecosystem. I’d love to hear your feedback on the implementation, especially regarding the Rust-side security logic.
Repo: https://github.com/Rench321/sklad
Show HN: Sandbox Agent SDK – unified API for automating coding agents
We’ve been working on automating coding agents in sandboxes lately. It’s bewildering how poorly standardized the agents are and how much each one varies from the others in how it has to be driven.
We open-sourced the Sandbox Agent SDK based on tools we built internally to solve 3 problems:
1. Universal agent API: interact with any coding agent using the same API
2. Running agents inside the sandbox: Agent Sandbox provides a Rust binary that serves the universal agent API over HTTP, instead of having to futz with undocumented interfaces
3. Universal session schema: persisting sessions is always problematic, since we don’t want the source of truth for the conversation to live inside the container in a schema we don’t control
Agent Sandbox SDK has:
- Any coding agent: Universal API to interact with all agents with full feature coverage
- Server or SDK mode: Run as an HTTP server or with the TypeScript SDK
- Universal session schema: Universal schema to store agent transcripts
- Supports your sandbox provider: Daytona, E2B, Vercel Sandboxes, and more
- Lightweight, portable Rust binary: Install anywhere with 1 curl command
- OpenAPI spec: Well documented and easy to integrate
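To illustrate what a "universal session schema" might look like, here is a guessed-at shape (purely illustrative; the repo's OpenAPI spec defines the real one): a provider-neutral transcript any agent's output can be normalized into, so the source of truth lives outside the container.

```typescript
// Hypothetical provider-neutral session transcript.
type Role = "user" | "assistant" | "tool";

interface SessionEvent {
  role: Role;
  content: string;
  toolName?: string; // set when role === "tool"
  timestamp: string; // ISO 8601
}

interface Session {
  id: string;
  agent: string; // which coding agent produced the transcript
  events: SessionEvent[];
}

function newSession(id: string, agent: string): Session {
  return { id, agent, events: [] };
}
```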
We will be adding much more in the coming weeks – would love to hear any feedback or questions.
Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out
Hey everyone!
Just made this over the past few days.
Moltbots can sign up and interact via CLI, no direct human interactions.
Just for fun to see what they all talk about :)
Show HN: Voiden – an offline, Git-native API tool built around Markdown
Hi HN,
We have open-sourced Voiden.
Most API tools are built like platforms. They are heavy because they optimize for accounts, sync, and abstraction - not for simple, local API work.
Voiden treats API tooling as files.
It’s an offline-first, Git-native API tool built on Markdown, where specs, tests, and docs live together as executable Markdown in your repo. Git is the source of truth.
No cloud. No syncing. No accounts. No telemetry. Just Markdown, Git, hotkeys, and your damn specs.
Voiden is extensible via plugins (including gRPC and WSS).
Repo: https://github.com/VoidenHQ/voiden
Download Voiden here : https://voiden.md/download
We'd love feedback from folks tired of overcomplicated and bloated API tooling !
Show HN: My Open Source Deep Research tools beats Google and I can Prove it
Solo dev. Couch potato. Built a standalone open-source deep research tool.
It beats Google, OpenAI, and Perplexity on multiple metrics: https://veritas-test.neocities.org/
(Please run it through a translator; the page is in German.)
Let's get this used, because knowledge shouldn't be locked behind paywalls.
Show HN: Nucleus – enforced permission envelopes for AI agents (Firecracker)
I’ve been building Nucleus because most “agent security” is still policy-only: a config file that says “don’t do bad things,” while the agent can still do them.
Nucleus is an OSS experiment that pairs a small, compositional permission model with runtime enforcement: *side effects are only reachable through an enforcing tool proxy*, inside a Firecracker microVM. The envelope is *non-escalating*: it can only tighten or terminate, never silently relax.
What works today:
* MCP tool proxy with *read / write / run* (enforced inside the microVM)
* default-deny egress + DNS allowlist + iptables drift detection (fail-closed) on Linux
* time + budget caps enforced
* hash-chained audit log + HMAC approval tokens (scoped, expiring) for gated ops
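The non-escalating invariant is the core idea, and it fits in a few lines. A sketch (illustrative types, not Nucleus's real API): tightening intersects the current capability set with the requested one, so nothing absent can ever reappear.

```typescript
// A permission envelope that can only shrink or terminate, never grow.
type Cap = "read" | "write" | "run";

class Envelope {
  private caps: Set<Cap>;
  constructor(caps: Cap[]) { this.caps = new Set(caps); }
  allows(c: Cap): boolean { return this.caps.has(c); }
  // Intersect with the requested set: capabilities not already held are ignored.
  tighten(keep: Cap[]): void {
    this.caps = new Set(keep.filter((c) => this.caps.has(c)));
  }
  terminate(): void { this.caps.clear(); }
}
```

The enforcement question is then making sure every side effect actually routes through an object like this (the tool proxy), rather than the invariant living only in a config file.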
What’s missing (being upfront):
* web/search tools exist in the model but aren’t wired to MCP yet
* remote append-only audit storage + attestation are still roadmap
* early/rough; targeting “safe to run against sensitive codebases,” not “replace your local terminal”
Most of the code was written with Anthropic tools; I’ve been leaning on tests/fuzzing/proptests to keep it honest.
Would love feedback on: (1) dangerous capability combinations beyond the lethal trifecta, (2) what enforcement gaps you’d want closed first, (3) how you’d evaluate this vs gateway-only approaches.
Show HN: Make AI motion videos with text
Saw the Remotion Claude skills launch earlier. Honestly, even though I was surprised by how decent some of the results turned out to be, I never tried it with Claude Code because I knew I'd have to set up Remotion, a bundler, etc. And since I was going to do that once anyway, I figured I might as well turn it into a site where anyone can just write messages and get a video without any prerequisites.
I also know not everyone has Claude Code, and setting up Remotion is a pain. One of the biggest lessons I learned from this whole experience is that Opus is actually not that good at design tasks, even with the skills. Gemini is what I'm using for Framecall, and even Flash (Fast Mode) sometimes produces better results than Opus, which is crazy considering the cost difference.
Another thing I learned: motion videos have the same "problem" as writing good code, or as using Claude Code as a vibe-coder versus someone who knows the framework they're working with. If you just say "make a nice video about X," it's usually a gamble whether the end result will be good, same as saying "make me X application" with Claude Code. You need a good eye for design and some terminology to know exactly what you want to achieve.
K2.5, ZLM and most of the open source models were pretty bad at making videos even with the skills so I ended up not adding them as an option.
The pricing is there because, it turns out, having 2-5k+ tokens of code output for every animation plus 1-2k tokens of the Remotion skills as input is kinda expensive. I would've loved to offer this as a free product since I made it for fun anyway, but oh well.
Show HN: Bullmq-dash – Terminal UI dashboard for BullMQ (zero setup)
I built a terminal UI dashboard for monitoring BullMQ.
The problem: Every time I needed to debug queues, I had to set up bull-board – install multiple packages, integrate with Express/Fastify, wrap each queue with adapters, configure routes. Fine for production dashboards, but overkill when you just want to quickly inspect jobs.
bullmq-dash is a TUI that connects directly to Redis. It auto-discovers all BullMQ queues (no manual registration), shows job counts by status, lets you inspect job data/stacktraces, view schedulers/repeatable jobs, and tracks enqueue/dequeue rates. Keyboard-driven (vim-style navigation).
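Auto-discovery works because BullMQ namespaces its Redis keys as "bull:&lt;queue&gt;:&lt;suffix&gt;", so queue names fall out of a key scan. A simplified sketch of that step (not bullmq-dash's exact code, and a real tool would use SCAN rather than a full key listing):

```typescript
// Derive BullMQ queue names from a listing of Redis keys.
function discoverQueues(keys: string[], prefix = "bull"): string[] {
  const names = new Set<string>();
  for (const key of keys) {
    const parts = key.split(":");
    // Keys look like "bull:<queue>:meta", "bull:<queue>:wait", etc.
    if (parts.length >= 3 && parts[0] === prefix) names.add(parts[1]);
  }
  return [...names].sort();
}
```

This is why no manual queue registration is needed: the Redis keyspace is the registry.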
Use cases: local debugging, SSH sessions, quick production inspections – anywhere you want to see your queues without spinning up a web dashboard.
Show HN: Agents should learn skills on demand. I built Skyll to make it real
Right now, agent skills are static SKILL.md packages that only work if you pre-install them into each agent or tool, and not all agents support them. Agents can't discover and learn skills on the fly as they encounter tasks.
I built Skyll to change that. Skyll is an open-source tool that lets AI agents discover and learn skills autonomously.
Skyll:
- Crawls and indexes skills across sources (GitHub, skills.sh, etc.) so they’re queryable by intent and content, not just by names or tags
- Scores skills by relevance and popularity
- Serves full SKILL.md content (and references) through a REST API or MCP server
- Lets agents fetch skills at runtime without manual installs
It's 100% open source. We're also building a community registry so anyone can add skills and make them available to all agents. Would love any feedback!
Repo: https://github.com/assafelovic/skyll
Homepage: https://skyll.app
Docs: https://skyll.app/docs
Show HN: I trained a 9M speech model to fix my Mandarin tones
Built this because tones are killing my spoken Mandarin and I can't reliably hear my own mistakes.
It's a 9M Conformer-CTC model trained on ~300h (AISHELL + Primewords), quantized to INT8 (11 MB), runs 100% in-browser via ONNX Runtime Web.
Grades per-syllable pronunciation + tones with Viterbi forced alignment.
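For readers unfamiliar with Mandarin tones, the thing being graded is pitch contour: tone 1 is high and flat, tone 2 rises, tone 3 dips then rises, tone 4 falls. This toy heuristic (not the Conformer-CTC model above, just an intuition-builder with invented thresholds) classifies a pitch track by its shape:

```typescript
// Classify a Mandarin tone from a coarse pitch contour (Hz samples).
function classifyTone(pitch: number[]): 1 | 2 | 3 | 4 {
  const first = pitch[0];
  const mid = pitch[Math.floor(pitch.length / 2)];
  const last = pitch[pitch.length - 1];
  if (mid < first && mid < last) return 3; // dip then rise
  const slope = last - first;
  if (Math.abs(slope) < 10) return 1; // roughly flat
  return slope > 0 ? 2 : 4; // rising vs falling
}
```

The real model learns this (and pronunciation) jointly from audio, with Viterbi forced alignment locating each syllable so the grade can be per-syllable.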
Try it here: https://simedw.com/projects/ear/
Show HN: OpenClaw Cloud – run OpenClaw safely in the cloud, no local install
Hi HN,
I built OpenClaw Cloud, a hosted version of OpenClaw (formerly Clawdbot / Moltbot).
The motivation was simple: many people want to try OpenClaw, but don’t want to run an AI agent locally or deal with Docker, VPSs, or Mac minis. With OpenClaw Cloud, nothing runs on your machine.
After subscribing, you get instant access to a secure, sandboxed cloud instance with OpenClaw already installed and configured. Access is provided through a web-based terminal/UX, and in most cases you just paste your Telegram (or WhatsApp) bot key and start using it. No setup, no maintenance.
The environment is isolated, updated automatically, and designed to minimize risk while still giving you the full OpenClaw experience.
I’d love feedback on:
whether this solves a real adoption blocker
expectations around pricing vs free trials
what would make a hosted AI agent feel trustworthy enough to use
Happy to answer questions and discuss.
Show HN: Phage Explorer
I got really interested in biology and genetics a few months ago, just for fun.
This was largely inspired by the work of Sydney Brenner, which became the basis of my brennerbot.org project.
In particular, I became very fascinated by phages, which are viruses that attack bacteria. They're the closest thing to the "fundamental particles" of biology: the minimal units of genetic code that do something useful that allows them to reproduce and spread.
They also have some incredible properties, like having a structure that somehow encodes an icosahedron.
I always wondered how the DNA of these things translated into geometry in the physical world. That mapping between the "digital" realm of ACGT, which in turn maps onto the 20 amino acids in groups of 3, and the world of 3D, analog shapes, still seems magical and mysterious to me.
I wanted to dig deeper into the subject, but not by reading a boring textbook. I wanted to get a sense for these phages in a tangible way. What are the different major types of phages? How do they compare to each other in terms of the length and structure of their genetic code? The physical structure they assume?
I decided to make a program to explore all this stuff in an interactive way.
And so I'm very pleased to present you with my open-source Phage Explorer:
phage-explorer.org
I probably went a bit overboard, because what I ended up with took a sickening number of tokens to generate and resulted in ~150k lines of TypeScript and Rust/Wasm.
It implements 23 analysis algorithms, over 40 visualizations, and has the complete genetic data and 3D structure of 24 different classes of phage.
It actually took a lot of engineering to make this work well in a browser; it's a surprising amount of data (this becomes obvious when you look at some of the 3D structure models).
It works fairly well on mobile, but if you want to get the full experience, I highly recommend opening it on a desktop browser in high resolution.
As far as I know, it's the most complete informational / educational software about phages available anywhere. Now, I am the first to admit that I'm NOT an expert, or even that knowledgeable, about, well, ANY of this stuff.
So if you’re a biology expert, please take a look and let me know what you think of what I've made! And if I've gotten anything wrong, please let me know in the GitHub Issues and I'll fix it:
https://github.com/Dicklesworthstone/phage_explorer
Show HN: Zuckerman – minimalist personal AI agent that self-edits its own code
Hi HN,
I'm building Zuckerman: a personal AI agent that starts ultra-minimal and can improve itself in real time by editing its own files (code + configuration). Agents can also share useful discoveries and improvements with each other.
Repo: https://github.com/zuckermanai/zuckerman
The motivation is to build something dead-simple and approachable, in contrast to projects like OpenClaw, which is extremely powerful but has grown complex: heavier setup, a large codebase, skill ecosystems, and ongoing security discussions.
Zuckerman flips that:
1. Starts with almost nothing (core essentials only).
2. Behavior/tools/prompts live in plain text files.
3. The agent can rewrite its own configuration and code.
4. Changes hot-reload instantly (save -> reload).
5. Agents can share improvements with others.
6. Multi-channel support (Discord/Slack/Telegram/web/voice, etc).
Security note: self-edit access is obviously high-risk by design, but basic controls are built in (policy sandboxing, auth, secret management).
Tech stack: TypeScript, Electron desktop app + WebSocket gateway, pnpm + Vite/Turbo.
Quickstart is literally:
pnpm install && pnpm run dev
It's very early/WIP, but the self-editing loop already works in basic scenarios and is surprisingly addictive to play with. Would love feedback from folks who have built agent systems or thought about safe self-modification.
Show HN: Prism AI – A research agent that generates 2D/3D visualizations
Show HN: You Are an Agent
After adding "Human" as a LLM provider to OpenCode a few months ago as a joke, it turns-out that acting as a LLM is quite painful. But it was surprisingly useful for understanding real agent harnesses dev.
So I thought I wouldn't leave anyone out! I made a small OSS game, You Are An Agent (youareanagent.app), to share in the (useful?) frustration.
It's a bit ridiculous. To tell you about some entirely necessary features, we've got: - A full WASM arch-linux vm that runs in your browser for the agent coding level - A bad desktop simulation with a beautiful excel simulation for our computer use level - A lovely WebGL CRT simulation (I think the first one that supports proper DOM 2d barrel warp distortion on safari? honestly wanted to leverage/ not write my own but I couldn't find one I was happy with) - A MCP server simulator with full simulation of off-brand Jira/ Confluence/ ... connected - And of course, a full WebGL oscilloscope music simulator for the intro sequence
Let me know what you think!
Code (If you'd like to add a level): https://github.com/R0bk/you-are-an-agent
(And if you want to waste 20 minutes - I spent way too long writing up my messy thinking about agent harness dev): http://robkopel.me/field-notes/ax-agent-experience/
Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents
WASM sandbox for running LLM-generated code safely.
Agents get a bash-like shell and can only call tools you provide, with constraints you define. No Docker, no subprocess, no SaaS — just pip install amla-sandbox
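The "only call tools you provide" idea in miniature (an illustration of the concept, not amla-sandbox's actual API; names here are made up): the agent's shell resolves commands against an explicit registry and nothing else, so anything unregistered is denied by construction.

```typescript
// A shell that can only dispatch to explicitly registered tools.
type Tool = (args: string[]) => string;

function makeShell(tools: Record<string, Tool>) {
  return (command: string): string => {
    const [name, ...args] = command.trim().split(/\s+/);
    const tool = tools[name];
    // Default-deny: unknown commands never reach the host.
    if (!tool) return `denied: "${name}" is not a registered tool`;
    return tool(args);
  };
}
```

A WASM runtime strengthens the same property at a lower level: the sandboxed code has no syscalls beyond what the host explicitly exposes.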