Show stories

Show HN: Poppy – a simple app to stay intentional with relationships
mahirhiro about 3 hours ago

I built Poppy as a side project to help people keep in touch more intentionally. Would love feedback on onboarding, reminders, and overall UX. Happy to answer questions.

poppy-connection-keeper.netlify.app
54 points | 14 comments
Show HN: Fast Chladni figure simulation in Python with NumPy vectorization
ratwolf about 2 hours ago

Chladni figures are the intricate patterns formed by sand on a vibrating plate; scientists in the 18th and 19th centuries used them to study the modes of vibration of different materials. This project simulates the figures in Python, using NumPy vectorization to keep the computation fast.
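
A minimal vectorized sketch of the classic square-plate approximation s = cos(nπx)cos(mπy) − cos(mπx)cos(nπy), where sand collects near the nodal lines s ≈ 0 (my illustration, not code from the submission):

```python
import numpy as np

def chladni(n, m, size=500, eps=0.02):
    """Vectorized square-plate Chladni approximation: returns a boolean
    image that is True near the nodal lines, where sand would collect."""
    x = np.linspace(0.0, 1.0, size)
    xx, yy = np.meshgrid(x, x)
    s = (np.cos(n * np.pi * xx) * np.cos(m * np.pi * yy)
         - np.cos(m * np.pi * xx) * np.cos(n * np.pi * yy))
    return np.abs(s) < eps

pattern = chladni(3, 5)  # one boolean 500x500 image, no Python loops
```

The whole grid is evaluated in a handful of array operations, which is where the NumPy speedup over a nested-loop implementation comes from.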

github.com
4 points | 1 comment
LukeB42 4 days ago

Show HN: Vertex.js – A 1kloc SPA Framework

Vertex is a 1kloc SPA framework containing everything you need from React, Ractive-Load and jQuery while still being jQuery-compatible.

vertex.js is a single, self-contained file with no build step and no dependencies.

Also exhibits the curious quality of being faster than over a decade of engineering at Facebook in some cases: https://files.catbox.moe/sqei0d.png

lukeb42.github.io
42 points | 23 comments
vnglst 5 days ago

Show HN: Stacked Game of Life

https://github.com/vnglst/stacked-game-of-life

stacked-game-of-life.koenvangilst.nl
178 points | 26 comments
Show HN: A shell-native cd-compatible directory jumper using power-law frecency
jghub about 21 hours ago

I have used this tool privately since 2011 to manage directory jumping. While it is conceptually similar to tools like z or zoxide, the underlying ranking model is different. It uses a power-law convolution with the time series of cd actions to calculate a history-aware "frecency" metric instead of the standard heuristic counters and multipliers.

This approach moves away from point-estimates for recency. Most tools look only at the timestamp of the last visit, which can allow a "one-off" burst of activity to clobber long-term habits. By convolving a configurable history window (typically the last 1,000+ events), the score balances consistent habits against recent flukes.
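
A toy Python sketch of the idea (my own illustration of a power-law-decayed score; the kernel shape, alpha, and t0 here are made-up parameters, not the tool's actual model):

```python
import time

def frecency(visit_times, now=None, alpha=1.0, t0=3600.0):
    """Power-law-decayed score: every visit in the history window
    contributes (t0 / (t0 + age))**alpha, so a one-off burst of recent
    activity cannot clobber a long, consistent habit."""
    now = time.time() if now is None else now
    score = 0.0
    for t in visit_times:
        age = max(now - t, 0.0)
        score += (t0 / (t0 + age)) ** alpha
    return score
```

A directory visited once a day for a month retains a substantial score even while another directory gets a burst of visits, which is exactly the behavior a last-timestamp point estimate loses.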

On performance: Despite the O(N) complexity of calculating decay for 1,000+ events, query time is ~20-30ms (Real Time) in ksh/bash, which is well below the threshold of perceived lag.

I intentionally chose a Logical Path (pwd -L) model. Preserving symlink names ensures that the "Name" remains the primary searchable key. Resolving to physical paths often strips away the very keyword the user intends to use for searching.

github.com
18 points | 3 comments
Show HN: A GFM+GF-MathJax/Latex HTML formatting adventure
ycombiredd 4 days ago

I think this is apropos of the "Show HN" tag, as the post is explanatory and the entire codebase behind the little side-story use case discussed in TFA is in the repo and free to use. (I'd be pleased if you did!)

In the post, as I tried to capture in the title, I outline my journey of exploration after I became determined to make GitHub-Flavored Markdown display my text with the color, style, and alignment of my choosing. As I discovered after setting out, the inability to do this outside of fenced blocks with pre-defined syntax highlighting is a well-known limitation, met with a "works as intended" response: GitHub doesn't want its repos looking like MySpace or Geocities, or presenting a security risk by allowing arbitrary HTML/CSS styling. Sure, I could have used GitHub Pages to build a page from my Markdown using Jekyll, which is the supported way to control the styling of your own documents, but where's the fun in that?

The linked post documents the workaround I arrived at, which became an output target format that nobody has ever asked for from my ASCII line-art diagramming tool. I thought some here might appreciate the documentation of "wasting my time so you don't have to" on a technical solution to a problem I probably should have just stopped caring about.

github.com
3 points | 0 comments
Show HN: Rust compiler in PHP emitting x86-64 executables
mrconter11 4 days ago

The project is a Rust compiler implemented in PHP: it takes Rust source code and emits native x86-64 executables.

github.com
60 points | 48 comments
uncSoft about 4 hours ago

Show HN: Open dataset of real-world LLM performance on Apple Silicon

Why open source local AI benchmarking on Apple Silicon matters - and why your benchmark submission is more valuable than you think.

The narrative around AI has been almost entirely cloud-centric. You send a prompt to a data center, tokens come back, and you try not to think about the latency, cost, or privacy implications. For a long time, that was the only game in town.

Apple Silicon - from M1 through the M4 Pro/Max shipping today, with M5 on the horizon - has quietly become one of the most capable local AI compute platforms on the planet. The unified memory architecture means an M4 Max with 128GB can run models that would require a dedicated GPU workstation elsewhere. At laptop wattages. Offline. Without sending a single token to a third party.

This shift is legitimately great for all parties (except cloud ones that want your money), but it comes with an unsolved problem: we don't have great, community-driven data on how these machines actually perform in the wild.

That's why I built Anubis OSS.

The Fragmented Local LLM Ecosystem

If you've run local models on macOS, you've felt this friction. Chat wrappers like Ollama and LM Studio are great for conversation but not built for systematic testing. Hardware monitors like asitop show GPU utilization but have no concept of what model is loaded or what the prompt context is. Eval frameworks like promptfoo require terminal fluency that puts them out of reach for many practitioners.

None of these tools correlate hardware behavior with inference performance. You can watch your GPU spike during generation, but you can't easily answer: Is Gemma 3 12B Q4_K_M more watt-efficient than Mistral Small 3.1 on an M3 Pro? How does TTFT scale with context length on 32GB vs. 64GB?

Anubis answers those questions. It's a native SwiftUI app - no Electron, no Python runtime, no external dependencies - that runs benchmark sessions against any OpenAI-compatible backend (Ollama, LM Studio, mlx-lm, and more) while simultaneously pulling real hardware telemetry via IOReport: GPU/CPU utilization, power draw in watts, ANE activity, memory including Metal allocations, and thermal state.

Why the Open Dataset Is the Real Story

The leaderboard submissions aren't a scoreboard - they're the start of a real-world, community-sourced performance dataset across diverse Apple Silicon configs, model families, quantizations, and backends.

This data is hard to get any other way. Formal chipmaker benchmarks are synthetic. Reviewer benchmarks cover a handful of models. Nobody has the hardware budget to run a full cross-product matrix. But collectively, the community does.

For backend developers, the dataset surfaces which chip/memory configurations are underperforming their theoretical bandwidth, where TTFT degrades under long contexts, and what the real-world power envelope looks like under sustained load. For quantization authors, it shows efficiency curves across real hardware, ANE utilization patterns, and whether a quantization actually reduces memory pressure or just parameter count.

Running a benchmark takes about two minutes. Submitting takes one click.

Your hardware is probably underrepresented. The matrix of chip × memory × backend × thermal environment is enormous — every submission fills a cell nobody else may have covered.

The dataset is open. This isn't data disappearing into a corporate analytics pipeline. It's a community resource for anyone building tools, writing research, or optimizing for the platform.

Anubis OSS is working toward 75 GitHub stars to qualify for Homebrew Cask distribution, which would make installation dramatically easier. A star is a genuinely meaningful contribution.

- Download from the latest GitHub release — notarized macOS app, no build required
- Run a benchmark against any model in your preferred backend
- Submit results to the community leaderboard
- Star the repo at github.com/uncSoft/anubis-oss

devpadapp.com
2 points | 1 comment
Show HN: Shinobi – 10-second security scanner for developers
SolidDark about 5 hours ago

(Built entirely in Python, installable via pip. Uses argparse for the CLI, regex pattern matching for secret detection, gitpython for history scanning, and subprocess calls for dependency auditing.)

I built a CLI tool with Claude Code called shinobi that runs a 10-second security scan on any project directory or GitHub repo. It checks for exposed API keys, dangerous defaults, vulnerable dependencies, missing security basics, and AI-specific risks.

I pointed it at 22 popular open-source projects, including FastAPI, Flask, Dify, Flowise, LiteLLM, and Lobe-Chat. The results were rough: 86% came back as high or critical threat level. The most common issue was exposed secret patterns (API-key formats in source code), followed by dangerous defaults like debug mode and wildcard CORS.

It's free and open source, runs 100% locally, and zero data leaves your machine. pip install shinobi-scan, or check it out on GitHub:

github.com
2 points | 0 comments
Show HN: I made a zero-copy coroutine tracer to find my scheduler's lost wakeups
lixiasky 2 days ago

coroTracer is an open-source, zero-copy tracer for coroutines. The author built it to diagnose lost wakeups in their coroutine scheduler by recording scheduling events without copying data on the hot path.

github.com
42 points | 2 comments
Show HN: Nodepp – A C++ runtime for scripting at bare-metal speed
EDBC_REPO about 5 hours ago

Nodepp is an open-source C++ runtime that brings a Node.js-style, event-driven API to C++, aiming to combine the ergonomics of high-level scripting with bare-metal performance.

github.com
2 points | 1 comment
Show HN: DJ Claude – 6 Claude Codes in a jam band
p-poss about 6 hours ago

It's a free Claude Code plugin (/dj-claude) and MCP server to connect multiple agents over HTTP so they can build music together.

Solo DJ web app: https://claude.dj GitHub: https://github.com/p-poss/dj-claude

loom.com
3 points | 0 comments
Show HN: Qlog – grep for logs, but 100x faster
cosm00 about 11 hours ago

I built qlog because I got tired of waiting for grep to search through gigabytes of logs.

qlog uses an inverted index (like search engines) to search millions of log lines in milliseconds. It's 10-100x faster than grep and way simpler than setting up Elasticsearch.
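
The inverted-index idea can be sketched in a few lines of Python (a toy illustration, not qlog's actual implementation):

```python
from collections import defaultdict
import re

def build_index(lines):
    """Map each lowercased token to the set of line numbers containing it."""
    index = defaultdict(set)
    for lineno, line in enumerate(lines):
        for token in re.findall(r"\w+", line.lower()):
            index[token].add(lineno)
    return index

def search(index, lines, *terms):
    """AND-query: intersect the posting sets, then fetch matching lines."""
    postings = [index.get(t.lower(), set()) for t in terms]
    hits = set.intersection(*postings) if postings else set()
    return [lines[i] for i in sorted(hits)]

logs = ["GET /health 200", "ERROR db timeout", "GET /login 500 error"]
idx = build_index(logs)
# search(idx, logs, "error") touches only the two posting sets involved,
# never re-scanning the raw log text.
```

A real index persists the postings to disk (qlog uses mmap), which is why repeated searches skip the scan entirely.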

Features:

- Lightning-fast indexing (1M+ lines/sec using mmap)
- Sub-millisecond searches on indexed data
- Beautiful terminal output with context lines
- Auto-detects JSON, syslog, nginx, apache formats
- Zero configuration
- Works offline
- Pure Python

Example:

qlog index './logs/*/*.log'
qlog search "error" --context 3

I've tested it on 10GB of logs and it's consistently 3750x faster than grep. The index is stored locally so repeated searches are instant.

Demo: Run `bash examples/demo.sh` to see it in action.

GitHub: https://github.com/Cosm00/qlog

Perfect for developers/DevOps folks who search logs daily.

Happy to answer questions!

github.com
13 points | 16 comments
krenerd about 15 hours ago

Show HN: I put HN discussions next to the article where it belongs

It always bugged me that when I read or share an article, the discussion lives separately from it. I imagined being able to add Google Docs- or Notion-style comments on any website. We save a snapshot of the website and allow adding discussions that live side-by-side with the article and directly reference parts of it.

HN articles are automatically indexed in https://cooo.link/hackernews

and you can add any website or PDF at https://cooo.link/

Built with SvelteKit, SingleFile (for archiving pages), and Railway. Solo dev. Would love feedback if you find it interesting. Thanks!

cool-link-web-production.up.railway.app
8 points | 0 comments
Show HN: I built CLI for developer docs locally working with any Coding Agent
lifez about 8 hours ago

An open-source CLI that makes developer documentation available and searchable locally, designed to work with any coding agent.

github.com
2 points | 1 comment
Show HN: Potatoverse, home for your vibecoded apps
born-jre about 8 hours ago

DEMO: https://tubersalltheway.top/zz/pages/auth/login

github.com
6 points | 1 comment
Show HN: A universal protocol for AI agents to interact with any desktop UI
k4cper-g about 8 hours ago

github.com
3 points | 0 comments
kemyd about 8 hours ago

Show HN: Paste a URL and watch multiple AI models redesign it side-by-side

Paste a URL and the tool has multiple AI models redesign the page, presenting the results side-by-side so you can compare how each model approaches the same design brief.

shuffle.dev
3 points | 0 comments
adityapatni about 9 hours ago

Show HN: I built a browser game where you compete against OpenAI, Anthropic, etc

The Frontier is a browser game in which you compete against AI models from OpenAI, Anthropic, and other labs.

thefrontier.pages.dev
3 points | 0 comments
nadeem1 about 9 hours ago

Show HN: Athena Flow – a workflow runtime for Claude Code with a terminal UI

Athena Flow is a workflow runtime that wraps Claude Code via its hooks system. It receives the event stream, applies workflow and plugin logic, and renders everything in an interactive terminal UI with a live event feed.

Instead of writing throwaway prompts or one-off scripts to automate complex multi-step tasks, you define a workflow once — with prompt templates, loops, plugin bundles, and structured lifecycle hooks — and run it against any project.

The first workflow I shipped is e2e-test-builder. It navigates your app like a human, writes structured test case specs with preconditions, steps, and expected outcomes, then generates Playwright code from them.

The browser layer is handled by a separate MCP server I built called agent-web-interface, which produces semantic page snapshots instead of raw DOM — ~19% fewer tokens and ~33% faster task completion in early benchmarks against Playwright MCP.

The stack is three repos: athena-flow is the runtime (hooks -> UDS -> event pipeline -> TUI), agent-web-interface is the MCP server for token-efficient browser interaction, and athena-workflow-marketplace is where workflows and plugins live, resolved by ref like e2e-test-builder@lespaceman/athena-workflow-marketplace.

Workflows are composable — a workflow bundles plugins and can be shared via any Git repo. Writing your own is just a workflow.json and a prompt file.

Currently Claude Code only, but Codex support is in progress. Free if you already have a Claude Code subscription, no separate API key needed. MIT licensed.

Docs: https://athenaflow.in GitHub: https://github.com/lespaceman/athena-flow

Would love feedback, especially from anyone building on Claude Code hooks or thinking about workflow portability across agent runtimes.

2 points | 0 comments
Show HN: I built a sub-500ms latency voice agent from scratch
nicktikhonov 2 days ago

I built a voice agent from scratch that averages ~400ms end-to-end latency (phone stop → first syllable). That’s with full STT → LLM → TTS in the loop, clean barge-ins, and no precomputed responses.

What moved the needle:

Voice is a turn-taking problem, not a transcription problem. VAD alone fails; you need semantic end-of-turn detection.

The system reduces to one loop: speaking vs listening. The two transitions - cancel instantly on barge-in, respond instantly on end-of-turn - define the experience.

STT → LLM → TTS must stream. Sequential pipelines are dead on arrival for natural conversation.

TTFT dominates everything. In voice, the first token is the critical path. Groq’s ~80ms TTFT was the single biggest win.

Geography matters more than prompts. Colocate everything or you lose before you start.
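
The speaking/listening loop with its two transitions might be sketched as a tiny state machine (purely illustrative; the names here are mine, not the repo's):

```python
from enum import Enum

class State(Enum):
    LISTENING = "listening"
    SPEAKING = "speaking"

class TurnLoop:
    """Two states, two transitions: barge-in cancels speech instantly,
    end-of-turn triggers a response instantly."""
    def __init__(self):
        self.state = State.LISTENING
        self.cancelled_utterances = 0

    def on_user_audio(self):
        # User audio while the agent is speaking is a barge-in:
        # cancel TTS immediately and go back to listening.
        if self.state is State.SPEAKING:
            self.cancelled_utterances += 1
            self.state = State.LISTENING

    def on_end_of_turn(self):
        # Semantic end-of-turn detected (not just VAD silence): reply now.
        if self.state is State.LISTENING:
            self.state = State.SPEAKING
```

Everything else (streaming STT/LLM/TTS, colocation) is about making these two transitions fire as fast as possible.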

GitHub Repo: https://github.com/NickTikhonov/shuo

Follow whatever I tinker with next: https://x.com/nick_tikhonov

ntik.me
562 points | 152 comments
Show HN: Open-sourced a web client that lets any device use Apple's on-device AI
tayarndt about 17 hours ago

I use Claude every day but there are things I will not type into a cloud service. I have a Mac with Apple Silicon running Apple Foundation Models locally and privately. But I was not always at my Mac. So we built Perspective Intelligence Web. One Mac runs Perspective Server. Any device on your network opens a browser and chats with Apple Intelligence through it. Phone, Windows laptop, Chromebook, Linux machine. Streaming responses, token by token. Nothing leaves your network. MIT License. Next.js, TypeScript, Tailwind. Full writeup: https://taylorarndt.substack.com/p/i-opened-claude-and-then-...

github.com
10 points | 1 comment
Show HN: Gobble – Yet Another OSS Alternative to Google Analytics/PostHog, etc.
vishinvents about 10 hours ago

github.com
2 points | 1 comment
Show HN: Timber – Ollama for classical ML models, 336x faster than Python
kossisoroyce 3 days ago

Timber pitches itself as an Ollama for classical ML models: a lightweight runtime for serving classical machine-learning models, which the author benchmarks at 336x faster than the equivalent Python stack.

github.com
204 points | 33 comments
Slaine about 10 hours ago

Show HN: I built a tamper-evident evidence system for AI agents

The demo loads two runs directly in your browser — no signup, no uploads, no network calls after page load.

Frank: a conservative agent. Verification returns VALID. Phil: an aggressive agent with tampered evidence. Verification returns INVALID and points to the exact line where the chain breaks.

The problem I was solving: when an AI agent does something unexpected in production, the post-mortem usually comes down to "trust our logs." I wanted evidence that could cross trust boundaries — from engineering to security, compliance, or regulators — without asking anyone to trust a dashboard.

How it works:

- Every action, policy decision, and state transition is recorded into a hash-chained NDJSON event log
- Logs are sealed into evidence packs (ZIP) with manifests and signatures
- A verifier (also in the demo) validates integrity offline and returns VALID / INVALID / PARTIAL with machine-readable reason codes
- The same inputs always produce the same artifacts — so diffs are meaningful and replay is deterministic

The verifier and the UI are deliberately separated. The UI can be wrong. The verifier will still accept or reject based on cryptographic proof.

Built this before the recent public incidents around autonomous agents made it topical. Happy to answer questions about the architecture, the proof boundary design, or the gaps I'm still working on.

guardianreplay.pages.dev
2 points | 2 comments
Show HN: Omni – Open-source workplace search and chat, built on Postgres
prvnsmpth 3 days ago

Hey HN!

Over the past few months, I've been working on building Omni - a workplace search and chat platform that connects to apps like Google Drive/Gmail, Slack, Confluence, etc. Essentially an open-source alternative to Glean, fully self-hosted.

I noticed that some orgs find Glean to be expensive and not very extensible. I wanted to build something that small to mid-size teams could run themselves, so I decided to build it all on Postgres (ParadeDB, to be precise) and pgvector. No Elasticsearch or dedicated vector databases. I figured Postgres is more than capable of handling the level of scale required.

To bring up Omni on your own infra, all it takes is a single `docker compose up`, and some basic configuration to connect your apps and LLMs.

What it does:

- Syncs data from all connected apps and builds a BM25 index (ParadeDB) and HNSW vector index (pgvector)

- Hybrid search combines results from both

- Chat UI where the LLM has tools to search the index - not just basic RAG

- Traditional search UI

- Users bring their own LLM provider (OpenAI/Anthropic/Gemini)

- Connectors for Google Workspace, Slack, Confluence, Jira, HubSpot, and more

- Connector SDK to build your own custom connectors
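
One common way to implement the "combine results from both" step above is reciprocal rank fusion; here is a sketch (RRF is my assumption for illustration, not necessarily what Omni uses):

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion: merge ranked lists (e.g. BM25 results and
    vector-search results) by summing 1 / (k + rank) per document."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["doc_a", "doc_b", "doc_c"]
vector = ["doc_b", "doc_d", "doc_a"]
# rrf([bm25, vector])[0] == "doc_b": it ranks high in both lists.
```

RRF is attractive here because it needs only ranks, not comparable scores, so the BM25 and cosine-similarity scales never have to be reconciled.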

Omni is in beta right now, and I'd love your feedback, especially on the following:

- Has anyone tried self-hosting workplace search and/or AI tools, and what was your experience like?

- Any concerns with the Postgres-only approach at larger scales?

Happy to answer any questions!

The code: https://github.com/getomnico/omni (Apache 2.0 licensed)

github.com
172 points | 42 comments
Show HN: Effective Git
nola-a 4 days ago

As many of us shift from being software engineers to software managers, tracking changes the right way is growing more important.

It’s time to truly understand and master Git.

github.com
35 points | 6 comments
Show HN: WooTTY - browser terminal in a single Go binary
masterkain about 12 hours ago

I needed a web terminal I could drop into K8s sidecars and internal tools without pulling in heavy dependencies or running a separate service. Existing options were either too opinionated about the shell or had fragile session handling around reconnects.

WooTTY wraps any binary -- bash, ssh, or custom tools -- and serves a browser terminal over HTTP. Sessions survive reconnects via output replay. There's a Resume/Watch distinction so multiple people can attach to the same session without stepping on each other.
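
Output replay for reconnects can be sketched with a bounded buffer (a toy in Python, not WooTTY's Go implementation):

```python
from collections import deque

class ReplaySession:
    """Keep the last max_bytes of terminal output so a reconnecting
    browser can be caught up to the current screen state."""
    def __init__(self, max_bytes=64 * 1024):
        self.buffer = deque()
        self.size = 0
        self.max_bytes = max_bytes

    def write(self, chunk: bytes):
        # Append new output, evicting the oldest chunks past the cap.
        self.buffer.append(chunk)
        self.size += len(chunk)
        while self.size > self.max_bytes:
            self.size -= len(self.buffer.popleft())

    def replay(self) -> bytes:
        # Sent to a client on (re)connect, before live streaming resumes.
        return b"".join(self.buffer)
```

The Resume/Watch distinction then falls out naturally: both attach modes get the replay, but only Resume gets write access to the PTY.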

github.com
3 points | 2 comments
Show HN: Pianoterm – Run shell commands from your Piano. A Linux CLI tool
vustagc 2 days ago

A little weekend project, made so I can pause/play/rewind directly on the piano when learning a song by ear.

github.com
61 points | 21 comments
systima 2 days ago

Show HN: Open-Source Article 12 Logging Infrastructure for the EU AI Act

EU legislation (which affects UK and US companies in many cases) requires being able to truly reconstruct agentic events.

I've worked in a number of regulated industries off & on for years, and recently hit this gap.

We already had strong observability, but if someone asked me to prove exactly what happened for a specific AI decision X months ago (and demonstrate that the log trail had not been altered), I could not.

The EU AI Act has already entered into force, and its Article 12 kicks in this August, requiring automatic event recording and six-month retention for high-risk systems; many legal commentators have suggested this reads more like an append-only ledger requirement than standard application logging.

With this in mind, we built a small free, open-source TypeScript library for Node apps using the Vercel AI SDK that captures inference as an append-only log.

It wraps the model in middleware, automatically logs every inference call to structured JSONL in your own S3 bucket, chains entries with SHA-256 hashes for tamper detection, enforces a 180-day retention floor, and provides a CLI to reconstruct a decision and verify integrity. There is also a coverage command that flags likely gaps (in practice omissions are a bigger risk than edits).
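
Omission detection of the kind the coverage command targets can be sketched with per-entry sequence numbers (an assumption for illustration; the library's actual record format may differ):

```python
def find_gaps(entries):
    """Given JSONL records carrying a monotonically increasing 'seq',
    return the missing sequence numbers -- likely deleted or dropped
    entries, which hash chaining alone cannot reveal if whole records
    at the tail are removed cleanly."""
    seqs = sorted(e["seq"] for e in entries)
    missing = []
    for prev, cur in zip(seqs, seqs[1:]):
        missing.extend(range(prev + 1, cur))
    return missing

records = [{"seq": 1}, {"seq": 2}, {"seq": 5}]
# find_gaps(records) reports seq 3 and 4 as missing
```

This complements the hash chain: the chain proves nothing was edited, while sequence coverage flags what may have been omitted.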

The library is deliberately simple: TS, targeting Vercel AI SDK middleware, S3 or local fs, linear hash chaining. It also works with Mastra (agentic framework), and I am happy to expand its integrations via PRs.

Blog post with link to repo: https://systima.ai/blog/open-source-article-12-audit-logging

I'd value feedback, thoughts, and any critique.

42 points | 6 comments