Show HN: SnackBase – Open-source, GxP-compliant back end for Python teams
Hi HN, I’m the creator of SnackBase.
I built this because I work in the Healthcare and Life Sciences domain and was tired of spending months building the same "compliant" infrastructure (audit logs, row-level security, PII masking, auth) before writing any actual product code.
The Problem: Existing BaaS tools (Supabase, Appwrite) are amazing, but they are hard to validate for GxP (FDA regulations) and often force you into a JS/Go ecosystem. I wanted something native to the Python tools I already use.
The Solution: SnackBase is a self-hosted Python (FastAPI + SQLAlchemy) backend that includes:
Compliance Core: Immutable audit logs with blockchain-style hashing (prev_hash) for integrity.
Native Python Hooks: You can write business logic in pure Python (no webhooks or JS runtimes required).
Clean Architecture: Strict separation of layers. No business logic in the API routes.
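The prev_hash idea can be sketched in a few lines of Python. This is a minimal illustration of hash chaining, not SnackBase's actual implementation; the field names are assumptions:

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event to a hash-chained audit log.

    Each entry stores the hash of the previous entry (prev_hash),
    so tampering with any earlier entry breaks the chain.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev_hash": prev_hash, "hash": entry_hash})
    return log

def verify_chain(log):
    """Recompute every hash and confirm the chain is intact."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"action": "create", "record": 1})
append_entry(log, {"action": "update", "record": 1})
assert verify_chain(log)

# Tampering with an earlier entry invalidates every hash after it.
log[0]["event"]["action"] = "delete"
assert not verify_chain(log)
```

The point of the chain is that an auditor only needs the latest hash to detect retroactive edits anywhere in the log.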
The Stack:
Python 3.12 + FastAPI
SQLAlchemy 2.0 (Async)
React 19 (Admin UI)
Links:
Live Demo: https://demo.snackbase.dev
Repo: https://github.com/lalitgehani/snackbase
The demo resets every hour. I’d love feedback on the DSL implementation or the audit logging approach.
Show HN: DSAT – Data Subject Access Toolkit
Show HN: An open-source communication layer for AI agents
Show HN: Y0 – Platform for autonomous AI agents that do real work
y0 is different because the agents actually do things — they don't just chat.
You describe what you want in natural language. Then y0 spins up a sandboxed environment and the agent gets to work: browsing websites, writing code, managing files, running shell commands. It streams progress in real-time so you can watch it work.
Unlike chatbots, y0 agents have real execution capabilities. They can navigate complex websites, fill forms, extract data, create documents, run scripts, and chain multiple steps together autonomously. When the agent finishes, you get the actual output — files, data, reports — not just a text response.
The sandboxing means agents can't mess with your local machine. Each session runs in an isolated container with its own filesystem, browser, and shell. You can give agents access to specific tools and APIs without worrying about side effects.
We built y0 because we got tired of copying code from ChatGPT and manually running it. We wanted an AI that could just do the work end-to-end.
There's a free tier to try it out. Would love feedback on what workflows you'd want agents to handle.
Show HN: A Markdown Viewer for the LLM Era (Mermaid and LaTeX)
Client-side Markdown viewer built for reading and sharing documents with diagrams and math.
It supports GitHub-flavored Markdown, Mermaid diagrams, and LaTeX rendering directly in the browser. The scope is intentionally narrow: viewing Markdown clearly, without turning it into a full editor.
Feedback is welcome.
Show HN: AI in SolidWorks
Hey HN! We’re Will and Jorge, and we’ve built LAD (Language-Aided Design), a SolidWorks add-in that uses LLMs to create sketches, features, assemblies, and macros from conversational inputs (https://www.trylad.com/).
We come from software engineering backgrounds where tools like Claude Code and Cursor have come to dominate, but when poking around CAD systems a few months back we realized there's no way to go from a text prompt to a modeling output in any of the major CAD systems. In our testing, the LLMs aren't as good at making 3D objects as they are at writing code, but we think they'll get a lot better in the coming months and years.
To bridge this gap, we've created LAD, an add-in in SolidWorks to turn conversational input and uploaded documents/images into parts, assemblies, and macros. It includes:
- Dozens of tools the LLM can call to create sketches, features, and other objects in parts.
- Assembly tools the LLM can call to turn parts into assemblies.
- File system tools the LLM can use to create, save, search, and read SolidWorks files and documentation.
- Macro writing/running tools plus a SolidWorks API documentation search so the LLM can use macros.
- Automatic screenshots and feature tree parsing to provide the LLM context on the current state.
- Checkpointing to roll back unwanted edits and permissioning to determine which commands wait for user permission.
You can try LAD at https://www.trylad.com/ and let us know what features would make it more useful for your work. To be honest, the LLMs aren't great at CAD right now, but we're mostly curious to hear if people would want and use this if it worked well.
Show HN: An iOS budget app I've been maintaining since 2011
I’ve been building and selling software since the early 2000s, starting with classic shareware. In 2011, I moved into the App Store world and built an iOS budget app because I needed a simple way to track my own expenses.
At the time, my plan was to replace a few larger shareware projects with several smaller apps to spread the risk. That didn’t quite work out — one app, MoneyControl, quickly grew so much that it became my main focus.
Fifteen years later, the app is still on the App Store, still actively developed, and still used by people who started with version 1.0. Many apps from that era are long gone.
Looking back, these are some of the things that mattered most:
Starting early helped, but wasn’t enough on its own. Early visibility made a difference, but long-term maintenance and reliability are what kept users.
Focus beat diversification. I wanted many small apps. I ended up with one large, long-lived product. Deep focus turned out to be more sustainable.
Long-term maintenance is most of the work. Adapting to new iOS versions, migrating data safely, handling edge cases, and keeping old data usable mattered more than flashy features.
Discoverability keeps getting harder. Reaching users on the App Store today is much more difficult than it was years ago. Prices are higher than in the old 99-cent days, but visibility hasn’t improved.
I’m a developer first, not a marketer. I work alone, with occasional help from freelancers. No employees, no growth team. The app could probably have grown more with better marketing, but that was never my strength.
You don’t need to get rich to build something sustainable. I didn’t build this for an exit. I’ve been able to make a living from my work for over 20 years, which feels like success to me.
Building things you actually use keeps you honest. Every product I built was something I personally needed. That authenticity mattered more than any roadmap.
This week I released version 10 with a new design and a major technical overhaul. It feels less like a milestone and more like preparing the app for the next phase.
Happy to answer questions about long-term app maintenance, indie development, or keeping a product alive across many iOS generations.
Show HN: Haraltd – A cross-platform Bluetooth daemon with a JSON-based RPC
Show HN: Yolobox – Run AI coding agents with full sudo without nuking home dir
Show HN: Agent-of-empires: OpenCode and Claude Code session manager
Hi! I’m Nathan: an ML Engineer at Mozilla.ai: I built agent-of-empires (aoe): a CLI application to help you manage all of your running Claude Code/Opencode sessions and know when they are waiting for you.
- Written in Rust and relies on tmux for security and reliability
- Monitors the state of CLI sessions to tell you when an agent is running vs. idle vs. waiting for your input
- Manage sessions by naming them, grouping them, and configuring profiles for various settings
I'm passionate about getting self-hosted open-weight LLMs to be valid options to compete with proprietary closed models. One roadblock for me is that although tools like opencode allow you to connect to local LLMs (Ollama, LM Studio, etc.), they generally run muuuuuch slower than models hosted by Anthropic and OpenAI. I would start a coding agent on a task, but then while I was waiting for that task to complete, I would open new terminal windows to start multitasking. Pretty soon, I was spending a lot of time toggling between terminal windows to see which one needed me: adding a clarification, approving a new command, or giving it a new task.
That’s why I built agent-of-empires (“aoe”). With aoe, I can launch a bunch of opencode and Claude Code sessions and quickly see their status or toggle between them, which saves me from having a lot of terminal windows open or manually attaching and detaching from tmux sessions myself. It’s helping me give local LLMs a fair try, because their slowness is now much less of a bottleneck.
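The state-monitoring part can be sketched as a small classifier over a session's terminal output. The marker strings below are hypothetical, not what aoe actually looks for; real detection would key off whatever prompts Claude Code and opencode actually print:

```python
def session_state(pane_text):
    """Classify a coding-agent session from its captured terminal output.

    Returns one of: "running", "idle", "waiting".
    The marker strings are illustrative assumptions.
    """
    tail = pane_text.rstrip().splitlines()[-1] if pane_text.strip() else ""
    if "Do you want to" in tail or tail.endswith("?"):
        return "waiting"   # agent needs approval or clarification
    if tail.endswith("$") or tail.endswith(">"):
        return "idle"      # back at a shell or agent prompt
    return "running"       # mid-task output, nothing to do yet

assert session_state("Running tests...\ncompiling crate foo") == "running"
assert session_state("Do you want to run `rm -rf build`?") == "waiting"
assert session_state("~/project $") == "idle"
```

A tmux-based tool would feed this from something like `tmux capture-pane -p` per session, then surface the states in one dashboard.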
You can install it with
curl -fsSL https://raw.githubusercontent.com/njbrake/agent-of-empires/m... | bash
Or brew install njbrake/aoe/aoe
And then launch by simply entering the command `aoe`.
I’m interested in what you think as well as what features you think would be useful to add!
I am planning to add some further features around sandboxing (with docker) as well as support for intuitive git worktrees and am curious if there are any opinions about what should or shouldn’t be in it.
I decided against MCP management or generic terminal usage, to help keep the tool focused on parts of agentic coding that I haven’t found a usable solution for.
I hit the character limit on this post which prevented me from including a view of the output, but the readme on the github link has a screenshot showing what it looks like.
Thanks!
Show HN: Fall asleep by watching JavaScript load
Show HN: Pdftl – pdftk in Python with pipelines, AES-256, geometry and more
Show HN: Customizable OSINT dashboard to monitor the situation
Monitor the situation to your own liking. Polymarket, Subway Surfers, Bluesky integration, Flight trackers. Runs all requests client side and doesn't store information. Open to feedback.
Show HN: Pane – An agent that edits spreadsheets
Hi HN,
I built Pane, a spreadsheet-native agent that operates directly on the grid (cells, formulas, references, ranges) instead of treating spreadsheets as text.
Most spreadsheet AI tools fail because they:
- hallucinate formulas
- lose context across edits
- can't reliably modify existing models
Pane runs inside the spreadsheet environment and uses the same primitives a human would: selecting cells, editing formulas, inserting ranges, reconciling tables.
I launched it on Product Hunt this weekend and it unexpectedly resonated, which made me curious whether this approach actually holds up under scrutiny.
I'd love feedback on:
- obvious failure modes you expect
- whether this is fundamentally better than scripts + formulas + copilots
Happy to answer technical questions.
Show HN: Engineering Schizophrenia: Trusting yourself through Byzantine faults
Hi HN!
My name's Robert Escriva. I got my PhD from Cornell's Computer Science department back in 2017. And three years ago I had a psychotic episode that irreversibly shook up my world.
Since then I've been applying a skill I learned in grad school---namely, debugging distributed and complex systems---to my own mind.
What I've found I've put into a [book (pdf)](https://rescrv.net/engineering-schizophrenia.pdf) on engineering, my particular schizophrenic delusions, and how people who suffer as I once did can find a way through the fog to the other side.
This is not a healing memoir; it is a guide and a warning for all those who never stopped to ask, "What happens if my brain begins to fail me?"
I am writing because what I've found is not a destination, but a process. It is an ongoing process for me and for people like me. I also believe it is automate-able using the same techniques we apply to machine-based systems.
I am looking for others who recognize the stakes of the human mind to engage in discussion on the topic.
Happy hacking, Robert
Show HN: Drizzle ORM schema to DBML/Markdown/Mermaid documentation generator
Hi HN,
I built a CLI tool that generates documentation from Drizzle ORM schema definitions.
The problem: Drizzle ORM doesn't have built-in schema documentation or ER diagram generation. While drizzle-dbml-generator exists for DBML output, I needed something that could also extract JSDoc comments and generate human-readable documentation.
What it does:
- Extracts JSDoc comments from your schema and includes them in the output
- Generates DBML (compatible with dbdocs.io and dbdiagram.io)
- Generates Markdown documentation with Mermaid ER diagrams
- Supports PostgreSQL, MySQL, and SQLite
- Works with both Drizzle v0 (relations()) and v1 beta (defineRelations()) APIs
- Watch mode for development
Example:

drizzle-docs ./src/db/schema.ts -o schema.dbml
drizzle-docs ./src/db/schema.ts -f markdown -o ./docs
The key differentiator is JSDoc extraction. If you document your schema like this:
/** User account information */
export const users = pgTable("users", {
  /** Unique identifier */
  id: serial("id").primaryKey(),
  /** User's email address */
  email: varchar("email", { length: 255 }).notNull(),
});
These comments become Notes in DBML or descriptions in Markdown output.

Built with TypeScript; uses the TS Compiler API for comment extraction. MIT licensed.
Would love feedback on what output formats or features would be most useful.
Show HN: SlopScore – Contributor Reputation for GitHub PRs
Open source maintainers are drowning in low effort PRs.
Someone sees a help wanted issue, pastes it into an AI, submits without testing, loops through review comments without understanding the code. The PR looks plausible at first glance but falls apart under review. Maintainers waste 30 minutes before realizing it's garbage.
This is happening at scale now. And it's worst in projects with bounty programs or GSoC where there's incentive to “contribute.”
GitHub tells you if someone is a first-time contributor to your repo. It doesn't tell you anything about their history elsewhere.
I built a Chrome extension that shows contributor reputation right on the PR page: merge rates across repos, spray-and-pray patterns, red flags.
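As a rough illustration of the kind of signals involved (a sketch under assumptions, not the extension's actual data model or thresholds):

```python
def contributor_signals(prs):
    """Compute rough reputation signals from a contributor's PR history.

    `prs` is a list of dicts like {"repo": str, "merged": bool} --
    a hypothetical shape for data you could pull from GitHub's API.
    """
    total = len(prs)
    merged = sum(1 for pr in prs if pr["merged"])
    repos = {pr["repo"] for pr in prs}
    merge_rate = merged / total if total else 0.0
    # "Spray and pray": many repos touched, almost nothing merged.
    # The cutoffs (10 repos, 20% merge rate) are invented for illustration.
    spray_and_pray = len(repos) >= 10 and merge_rate < 0.2
    return {"merge_rate": merge_rate,
            "repos": len(repos),
            "spray_and_pray": spray_and_pray}

history = [{"repo": f"org/repo{i}", "merged": False} for i in range(12)]
history.append({"repo": "org/repo0", "merged": True})
signals = contributor_signals(history)
assert signals["spray_and_pray"]
```

The hard part is presumably not the arithmetic but picking thresholds that don't punish legitimate newcomers, which is exactly the calibration question below.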
Not sure I got the signals right. Would love feedback from maintainers.
Show HN: An LLM-optimized programming language
Show HN: I used Claude Code to discover connections between 100 books
I think LLMs are overused to summarise and underused to help us read deeper.
I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them.
I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insight that I was baking into the prompts, and the results weren't particularly surprising.
On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave actually interesting results and required very little orchestration in comparison.
One of my favourite trail of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass movement charlatans (https://trails.pieterma.es/trail/useful-lies/). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.
Details:
* The books are picked from HN’s favourites (which I collected before: https://hnbooks.pieterma.es/).
* Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10.
* Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes.
* There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics cooccurring within a chunk window.
* Everything is stored in SQLite and manipulated using a set of CLI tools.
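The co-occurrence-within-a-window idea can be sketched like this. A simplified stand-in for the SQLite index, with invented topic names:

```python
from collections import Counter
from itertools import combinations

def topic_cooccurrence(chunk_topics, window=3):
    """Count topic pairs that appear within `window` consecutive chunks.

    `chunk_topics` is a list of topic sets, one per book chunk, in
    reading order. Overlapping windows recount nearby pairs, which
    is fine for ranking but not a calibrated statistic.
    """
    pairs = Counter()
    for i in range(len(chunk_topics)):
        nearby = set().union(*chunk_topics[i:i + window])
        for a, b in combinations(sorted(nearby), 2):
            pairs[(a, b)] += 1
    return pairs

chunks = [{"secrecy"}, {"startups", "cults"}, {"secrecy", "demos"}]
pairs = topic_cooccurrence(chunks, window=2)
assert pairs[("cults", "secrecy")] > 0
```

Ranking the resulting pairs gives you candidate connections for an agent to chase, which matches the "topics cooccurring within a chunk window" browsing mode above.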
I wrote more about the process here: https://pieterma.es/syntopic-reading-claude/
I’m curious if this way of reading resonates for anyone else - LLM-mediated or not.
Show HN: GlyphLang – An AI-first programming language
While working on a proof of concept project, I kept hitting Claude's token limit 30-60 minutes into their 5-hour sessions. The accumulating context from the codebase was eating through tokens fast. So I built a language designed to be generated by AI rather than written by humans.
GlyphLang
GlyphLang replaces verbose keywords with symbols that tokenize more efficiently:
# Python
@app.route('/users/<id>')
def get_user(id):
    user = db.query("SELECT * FROM users WHERE id = ?", id)
    return jsonify(user)

# GlyphLang
@ GET /users/:id {
    $ user = db.query("SELECT * FROM users WHERE id = ?", id)
    > user
}
@ = route, $ = variable, > = return. Initial benchmarks show ~45% fewer tokens than Python, ~63% fewer than Java.
In practice, that means more logic fits in context, and sessions stretch longer before hitting limits. The AI maintains a broader view of your codebase throughout.

Before anyone asks: no, this isn't APL with extra steps. APL, Perl, and Forth are symbol-heavy but optimized for mathematical notation, human terseness, or machine efficiency. GlyphLang is specifically optimized for how modern LLMs tokenize. It's designed to be generated by AI and reviewed by humans, not the other way around. That said, it's still readable enough to be written or tweaked if the occasion requires.
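You can get a feel for the savings with even a crude token count. This is a naive words-and-symbols counter, not a real BPE tokenizer, so the exact numbers differ from the benchmarks above, but the direction is the same:

```python
import re

def rough_tokens(source):
    """Very crude token count: words and individual non-space symbols.

    Real LLM tokenizers (BPE) behave differently; this only
    illustrates why terser syntax tends to use fewer tokens.
    """
    return len(re.findall(r"\w+|\S", source))

python_src = '''@app.route('/users/<id>')
def get_user(id):
    user = db.query("SELECT * FROM users WHERE id = ?", id)
    return jsonify(user)'''

glyph_src = '''@ GET /users/:id {
    $ user = db.query("SELECT * FROM users WHERE id = ?", id)
    > user
}'''

assert rough_tokens(glyph_src) < rough_tokens(python_src)
```

A proper comparison would run both snippets through the actual model tokenizer, which is presumably what the ~45%/~63% figures are based on.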
It's still a work in progress, but it's a usable language with a bytecode compiler, JIT, LSP, VS Code extension, PostgreSQL and WebSocket support, async/await, and generics.
Docs: https://glyphlang.dev/docs
GitHub: https://github.com/GlyphLang/GlyphLang
Show HN: ZSweep – A keyboard-first Minesweeper inspired by Vim
Show HN: What is wrong with the current coding agent workflow
Hi! As LLMs get smarter at coding tasks, we'll soon be using agents as coworkers. Current tools like GitHub Copilot and Cursor aren't optimized for team collaboration, so we began building PhantomX. Please give your feedback: do you think we're heading in the right direction, and what should change to find a development workflow that works for both humans and agents?
Show HN: Geoguess Lite – open-source, subscription free GeoGuessr alternative
Hi HN,
I built and just released another Geoguessr alternative. The difference from most other games (and the official one) is that it doesn't use Google Maps APIs at all, which makes the game more sustainable while keeping the service free.
This is the successor project to a Geoguessr-like game I built a long time ago. I've been learning since then and felt I could design and implement the project in a cleaner way this time. That motivation led me to rebuild it from scratch.
If you’re a light user who’s hesitant about paying for a subscription and looking for an alternative, feel free to give it a try. I’d really appreciate any feedback.
Source code: https://github.com/spider-hand/geoguess-lite
Show HN: Librario, a book metadata API that aggregates G Books, ISBNDB, and more
TLDR: Librario is a book metadata API that aggregates data from Google Books, ISBNDB, and Hardcover into a single response, solving the problem of no single source having complete book information. It's currently pre-alpha, AGPL-licensed, and available to try now[0].
My wife and I have a personal library with around 1,800 books. I started working on a library management tool for us, but I quickly realized I needed a source of data for book information, and none of the solutions available provided all the data I needed. One might provide the series, the other might provide genres, and another might provide a good cover, but none provided everything.
So I started working on Librario, a book metadata aggregation API written in Go. It fetches information about books from multiple sources (Google Books, ISBNDB, and Hardcover, with Goodreads and Anna's Archive next), merges everything, and saves it all to a PostgreSQL database for future lookups. The idea is that the database gets stronger over time as more books are queried.
You can see an example response here[1], or try it yourself:
curl -s -H 'Authorization: Bearer librario_ARbmrp1fjBpDywzhvrQcByA4sZ9pn7D5HEk0kmS34eqRcaujyt0enCZ' \
'https://api.librario.dev/v1/book/9781328879943' | jq .
This is pre-alpha and runs on a small VPS, so keep that in mind. I've never hit the limits of the third-party services, so depending on how this post goes, I may or may not find out how well the code handles that.

The merger is the heart of the service, and figuring out how to combine conflicting data from different sources was the hardest part. In the end I decided to use field-specific strategies, which are quite naive but work for now.
Each extractor has a priority, and results are sorted by that priority before merging. But priority alone isn't enough, so different fields need different treatment.
For example:
- Titles use a scoring system. I penalize titles containing parentheses or brackets because sources sometimes shove subtitles into the main title field. Overly long titles (80+ chars) also get penalized since they often contain edition information or other metadata that belongs elsewhere.
- Covers collect all candidate URLs, then a separate fetcher downloads and scores them by dimensions and quality. The best one gets stored locally and served from the server.
For most other fields (publisher, language, page count), I just take the first non-empty value by priority. Simple, but it works.
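Both strategies can be sketched in a few lines. This is a simplified illustration of the approach described above, not Librario's actual Go code, and the penalty values are invented:

```python
def merge_field(results, field):
    """First non-empty value wins, by extractor priority.

    `results` is a list of (priority, record) tuples;
    lower number = higher priority.
    """
    for _, record in sorted(results, key=lambda r: r[0]):
        value = record.get(field)
        if value:
            return value
    return None

def score_title(title):
    """Penalize titles with brackets or excessive length."""
    score = 100
    if any(c in title for c in "()[]"):
        score -= 30  # a subtitle likely shoved into the title field
    if len(title) >= 80:
        score -= 20  # probably carries edition metadata
    return score

results = [
    (1, {"title": "Dune (Dune Chronicles, Book 1)", "publisher": ""}),
    (2, {"title": "Dune", "publisher": "Ace Books"}),
]
# Priority 1 has no publisher, so the merge falls through to priority 2.
assert merge_field(results, "publisher") == "Ace Books"
# Title scoring overrides raw priority: the clean title wins.
best = max((r["title"] for _, r in results), key=score_title)
assert best == "Dune"
```

The interesting property is that the two strategies disagree on purpose: priority order is the default, and scoring only kicks in for fields where sources are known to be messy.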
Recently added a caching layer[2] which sped things up nicely. I considered migrating from net/http to fiber at some point[3], but decided against it. Going outside the standard library felt wrong, and the migration didn't provide much in the end.
The database layer is being rewritten before v1.0[4]. I'll be honest: the original schema was written by AI, and while I tried to guide it in the right direction with SQLC[5] and good documentation, database design isn't my strong suit and I couldn't confidently vouch for the code. Rather than ship something I don't fully understand, I hired the developers from SourceHut[6] to rewrite it properly.
I've got a 5-month-old and we're still adjusting to their schedule, so development is slow. I've mentioned this project in a few HN threads before[7], so I’m pretty happy to finally have something people can try.
Code is AGPL and on SourceHut[8].
Feedback and patches[9] are very welcome :)
[0]: https://sr.ht/~pagina394/librario/
[1]: https://paste.sr.ht/~jamesponddotco/a6c3b1130133f384cffd25b3...
[2]: https://todo.sr.ht/~pagina394/librario/16
[3]: https://todo.sr.ht/~pagina394/librario/13
[4]: https://todo.sr.ht/~pagina394/librario/14
[5]: https://sqlc.dev
[6]: https://sourcehut.org/consultancy/
[7]: https://news.ycombinator.com/item?id=45419234
[8]: https://sr.ht/~pagina394/librario/
[9]: https://git.sr.ht/~pagina394/librario/tree/trunk/item/CONTRI...
Show HN: SubTrack – A SaaS tracker for devs that finds unused tools
Hi HN,
I built SubTrack to help teams find unused SaaS tools and cloud resources before they silently eat into budgets.
The motivation came from seeing how hard it is to answer simple questions:
– Which SaaS tools are actually used?
– Which cloud resources are idle?
– What will our end-of-month spend look like?
SubTrack connects to tools like AWS, GitHub, Vercel, and others to surface unused resources and cost signals from one place. Recently I added multi-account support, currency localization, and optional AI-based insights to help interpret usage patterns.
This is an early-stage project and I’m actively iterating. I’d really appreciate feedback—especially from people managing cloud or SaaS sprawl.
Show HN: AI video generator that outputs React instead of video files
Hey HN! This is Mayank from Outscal with a new update. Our website is now live. Quick context: we built a tool that generates animated videos from text scripts. The twist: instead of rendering pixels, it outputs React/TSX components that render as the video.
Try it: https://ai.outscal.com/ Sample video: https://outscal.com/v2/video/ai-constraints-m7p3_v1/12-01-26...
You pick a style (pencil sketch or neon), enter a script (up to 2000 chars), and it runs: scene direction → ElevenLabs audio → SVG assets → Scene Design → React components → deployed video.
What we learned building this:
We built the first version on Claude Code. Even with a human triggering commands, agents kept going off-script — they had file tools and would wander off reading random files, exploring tangents, producing inconsistent output.
The fix was counterintuitive: fewer tools, not more guardrails. We stripped each agent to only what it needed and pre-fed context instead of letting agents fetch it themselves.
Quality improved immediately.
We wouldn't launch the web version until this was solid. Moved to Claude Agent SDK, kept the same constraints, now fully automated.
Happy to discuss the agent architecture, why React-as-video, or anything else.
Show HN: Blockchain-Based Equity with Separated Economic and Governance Rights
I've been researching blockchain-based capital markets and developed a framework for tokenized equity with separated economic, dividend, and governance rights.

Core idea: Instead of bundling everything into one share, issue three token types:
- LOBT: Economic participation, no governance
- PST: Automated dividends, no ownership
- OT: Full governance control
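The separation is easy to state as a data model. A minimal sketch; the rights attached to each token type are my reading of the proposal, not a specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TokenClass:
    """One of the three proposed token types, as a tuple of rights."""
    name: str
    economic: bool    # share of profits/value
    dividends: bool   # automated payout stream
    governance: bool  # voting control

# The split described above: no single token bundles all three rights.
LOBT = TokenClass("LOBT", economic=True, dividends=False, governance=False)
PST = TokenClass("PST", economic=False, dividends=True, governance=False)
OT = TokenClass("OT", economic=False, dividends=False, governance=True)

# Each class carries exactly one of the three rights.
for t in (LOBT, PST, OT):
    assert sum((t.economic, t.dividends, t.governance)) == 1
```

Modeling the rights explicitly like this also makes the regulatory question concrete: each boolean maps to a different body of securities law.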
Key challenge: Verifying real-world business operations on-chain without trusted intermediaries. I propose decentralized oracles + ZK proofs, but acknowledge significant unsolved problems.
This is research/RFC seeking technical feedback on oracle architecture, regulatory viability, and which verticals this makes sense for.
Thoughts?
Show HN: Modern Philosophy Course
Fun module on Thales of Miletus—the beginning of philosophy
Show HN: Ferrite – Markdown editor in Rust with native Mermaid diagram rendering
Ferrite: Fast Markdown/Text/Code editor in Rust with native Mermaid diagrams
Built a Markdown editor using Rust + egui. v0.2.1 just dropped with major Mermaid improvements:
→ Native Mermaid diagrams - Flowcharts, sequence, state, ER, git graphs - pure Rust, no JS
→ Split view - Raw + rendered side-by-side with sync scrolling
→ Syntax highlighting - 40+ languages with large file optimization
→ JSON/YAML/TOML tree viewer - Structured editing with expand/collapse
→ Git integration - File tree shows modified/staged/untracked status
Also: minimap, zen mode, auto-save, session restore, code folding indicators.
~15MB binary, instant startup. Windows/Linux/macOS.
GitHub: https://github.com/OlaProeis/Ferrite
v0.2.2 coming soon with performance improvements for large files. Looking for feedback!
Show HN: Play poker with LLMs, or watch them play against each other
I was curious to see how some of the latest models behave when playing no-limit Texas hold'em.
I built this website which allows you to:
Spectate: Watch different models play against each other.
Play: Create your own table and play hands against the agents directly.