Show stories

ekadet about 2 hours ago

Show HN: Retry script for Oracle Cloud free tier ARM instances

Oracle's free tier (4 ARM cores, 24GB RAM, forever) is great but nearly impossible to provision due to capacity issues. I built a Terraform retry script that automatically tries until capacity becomes available.

Also includes the fix for the "did not find a proper configuration for key id" error that everyone hits in Cloud Shell.

GitHub: https://github.com/ekadetov/oci-terraform-retry-script
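The core of such a script is just a loop. A minimal Python sketch of the idea — the real script's Terraform flags and retry delay may differ:

```python
import subprocess
import time

# Hypothetical sketch of the retry idea: re-run `terraform apply` until
# Oracle has capacity and the command exits 0. Command and delay are
# illustrative, not the actual script's contents.
def retry(cmd, delay=60):
    attempts = 0
    while subprocess.call(cmd) != 0:  # non-zero exit: e.g. out of capacity
        attempts += 1
        time.sleep(delay)
    return attempts

# retry(["terraform", "apply", "-auto-approve"])
```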

2 0
Show HN: MOL – A programming language where pipelines trace themselves
MouneshK 4 days ago

Show HN: MOL – A programming language where pipelines trace themselves

Hi HN,

I built MOL, a domain-specific language for AI pipelines. The main idea: the pipe operator |> automatically generates execution traces — showing timing, types, and data at each step. No logging, no print debugging.

Example:

    let index be doc |> chunk(512) |> embed("model-v1") |> store("kb")
This auto-prints a trace table with each step's execution time and output type. Elixir and F# have |> but neither auto-traces.
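To make the tracing idea concrete, here is a rough Python re-creation of what the `|>` sugar could desugar to — the function name and table format are illustrative, not MOL's internals:

```python
import time

# Sketch of auto-tracing pipes: each step in the chain is timed and its
# output type recorded, then a trace table is printed. `trace_pipe` is an
# illustrative name, not part of MOL.
def trace_pipe(value, *steps):
    trace = []
    for fn in steps:
        start = time.perf_counter()
        value = fn(value)
        trace.append((fn.__name__, time.perf_counter() - start,
                      type(value).__name__))
    for name, secs, tname in trace:
        print(f"{name:>12} {secs * 1000:8.3f} ms -> {tname}")
    return value

def chunk(text):  # stand-in for chunk(512)
    return text.split()

result = trace_pipe("a b c", chunk, len)
```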

Other features:

- 12 built-in domain types (Document, Chunk, Embedding, VectorStore, Thought, Memory, Node)
- Guard assertions: `guard answer.confidence > 0.5 : "Too low"`
- 90+ stdlib functions
- Transpiles to Python and JavaScript
- LALR parser using Lark

The interpreter is written in Python (~3,500 lines). 68 tests passing. On PyPI: `pip install mol-lang`.

Online playground (no install needed): http://135.235.138.217:8000

We're building this as part of IntraMind, a cognitive computing platform at CruxLabx.

github.com
37 14
Show HN: Tufte Editor – Local Markdown Editor with Tufte CSS Live Preview
avngr86 about 3 hours ago

Show HN: Tufte Editor – Local Markdown Editor with Tufte CSS Live Preview

A split-pane Markdown editor that renders live preview with Tufte CSS. Sidenotes, margin notes, epigraphs, full-width figures, and BibTeX citations with autocomplete — all in standard Markdown extensions.

Documents are .md files on disk. Images are regular files. Exports to standalone HTML with Tufte CSS baked in — my use case is writing essays and uploading them directly to my personal site.

Zero dependencies, no npm install, no accounts, no build step. Just `node server.js`. ~7 files total.

Full disclosure in the README: I'm a researcher, not a JS developer, and the code was AI-generated. Contributions and code review welcome.

github.com
2 1
Show HN: Arcmark – macOS bookmark manager that attaches to browser as sidebar
ahmed_sulajman about 19 hours ago

Show HN: Arcmark – macOS bookmark manager that attaches to browser as sidebar

Hey HN! I was a long-time Arc browser user and loved how its sidebar organized tabs and bookmarks into workspaces. I wanted to switch to other browsers without losing that workflow, so I built Arcmark, a macOS bookmark manager (Swift/AppKit) that floats as a sidebar attached to any browser window. It uses the macOS accessibility API to follow the browser window around.

You get workspace-based link/bookmark organization with nested folders, drag-and-drop reordering, and custom workspace colors. For the most part I tried to replicate Arc's sidebar UX as closely as possible.

1. Local-first: all data lives in a single JSON file (~/Library/Application Support/Arcmark/data.json). No accounts, no cloud sync.

2. Works with any browser: Chrome, Safari, Brave, Arc, etc. Or use it standalone as a bookmark manager with a regular window.

3. Import pinned tabs and spaces from Arc: it parses Arc's StorableSidebar.json to recreate the exact workspace/folder structure.

4. Built with swift-bundler rather than Xcode.

There's a demo video in the README showing the sidebar attachment in action. The DMG is available on the releases page (macOS 13+), or you can build from source.

This is v0.1.0, so it's a very early version. I'd appreciate any feedback or thoughts.

GitHub: https://github.com/Geek-1001/arcmark

github.com
79 19
Show HN: DocSync – Git hooks that block commits with stale documentation
suhteevah about 4 hours ago

Show HN: DocSync – Git hooks that block commits with stale documentation

Hi HN,

I built DocSync because every team I've worked on has had the same problem: documentation that was accurate when it was written and never updated afterward.

DocSync uses tree-sitter to parse your code and extract symbols (functions, classes, types). On every commit, a pre-commit hook compares those symbols against existing docs. If you added a function without documenting it, the commit is blocked.

How it works:

1. `clawhub install docsync` (free)
2. `docsync generate .` — generates docs from your code
3. `docsync hooks install` — installs a lefthook pre-commit hook
4. From now on, every commit checks for doc drift

Key design decisions:

- 100% local — no code leaves your machine. Uses tree-sitter for AST parsing, not an LLM.
- Falls back to regex if tree-sitter isn't installed
- Uses lefthook (not husky) for git hooks — it's faster and language-agnostic
- License validation is offline (signed JWT, no phone-home)
- Free tier does one-shot doc generation. Pro ($29/user/mo) adds hooks and drift detection.

Supports TypeScript, JavaScript, Python, Rust, Go, Java, C/C++, Ruby, PHP, C#, Swift, Kotlin.
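As a rough illustration of the symbol-diff idea, here is a sketch using Python's stdlib `ast` as a stand-in for tree-sitter — the function names are illustrative, not DocSync's API:

```python
import ast

# Extract top-level symbol names (functions, classes) from source, then
# diff them against the set of documented symbols. DocSync does this with
# tree-sitter across many languages; stdlib `ast` is a Python-only stand-in.
def extract_symbols(source: str) -> set[str]:
    tree = ast.parse(source)
    return {node.name for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                                 ast.ClassDef))}

def undocumented(source: str, documented: set[str]) -> set[str]:
    # symbols present in code but missing from docs => "drift"
    return extract_symbols(source) - documented

src = "def save(): pass\ndef load(): pass"
print(undocumented(src, {"save"}))
```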

Landing page: https://docsync-1q4.pages.dev

Would love feedback on the approach. Is doc drift detection something your team would actually use?

github.com
3 0
Show HN: Sameshi – a ~1200 Elo chess engine that fits within 2KB
datavorous_ about 22 hours ago

Show HN: Sameshi – a ~1200 Elo chess engine that fits within 2KB

I made a chess engine today and made it fit within 2KB. I used Negamax, a variant of minimax, with alpha-beta pruning. For the board representation I used a 120-cell "mailbox". I managed to squeeze in checkmate/stalemate detection after trimming out some edge cases.

I have been a great fan of the demoscene (the computer art subculture) since middle school, so this was a ritual I had to perform.

For estimating the Elo, I ran 240 automated games against Stockfish at Elo levels 1320 to 1600, at fixed depth 5 and under some constrained rules, with equal color distribution.

I then converted the pooled win/draw/loss scores to Elo using the standard logistic formula, with a binomial 95% confidence interval.
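The logistic conversion is the standard one. As a sketch — the post's exact pooling and interval math may differ:

```python
import math

# Standard logistic Elo model: an expected score s against a reference
# pool implies a rating gap of -400 * log10(1/s - 1) relative to the pool.
def elo_diff(score: float) -> float:
    return -400 * math.log10(1 / score - 1)

# e.g. a pooled score of 25% against a ~1400-rated pool would put an
# engine roughly 190 points below it, i.e. around 1200.
print(round(elo_diff(0.25)))
```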

github.com
220 69
Show HN: Rover – Embeddable web agent
arjunchint 1 day ago

Show HN: Rover – Embeddable web agent

Rover is the world's first Embeddable Web Agent, a chat widget that lives on your website and takes real actions for your users. Clicks buttons. Fills forms. Runs checkout. Guides onboarding. All inside your UI.

One script tag. No APIs to expose. No code to maintain.

We built Rover because we think websites need their own conversational agentic interfaces: users don't want to figure out how your site works. If your site doesn't have one, it's going to be disintermediated by Chrome's or Comet's agents.

We are the only web agent with a DOM-only architecture, which lets us set up an embeddable script as a harness to take actions on your site. Our DOM-native approach hits 81.39% on WebBench.

Beta with embed script is live at rtrvr.ai/rover.

Built by two ex-Google engineers. Happy to answer architecture questions.

rtrvr.ai
16 9
Show HN: Off Grid – Run AI text, image gen, vision offline on your phone
ali_chherawalla about 13 hours ago

Show HN: Off Grid – Run AI text, image gen, vision offline on your phone

Your phone has a GPU more powerful than most 2018 laptops. Right now it sits idle while you pay monthly subscriptions to run AI on someone else's server, sending your conversations, your photos, your voice to companies whose privacy policy you've never read. Off Grid is an open-source app that puts that hardware to work. Text generation, image generation, vision AI, voice transcription — all running on your phone, all offline, nothing ever uploaded.

That means you can use AI on a flight with no wifi. In a country with internet censorship. In a hospital where cloud services are a compliance nightmare. Or just because you'd rather not have your journal entries sitting in someone's training data.

The tech: llama.cpp for text (15-30 tok/s, any GGUF model), Stable Diffusion for images (5-10s on Snapdragon NPU), Whisper for voice, SmolVLM/Qwen3-VL for vision. Hardware-accelerated on both Android (QNN, OpenCL) and iOS (Core ML, ANE, Metal).

MIT licensed. Android APK on GitHub Releases. Build from source for iOS.

github.com
110 56
Show HN: PlanOpticon – Extract structured knowledge from video recordings
ragelink about 6 hours ago

Show HN: PlanOpticon – Extract structured knowledge from video recordings

We built PlanOpticon to solve a problem we kept hitting: hours of recorded meetings, training sessions, and presentations that nobody rewatches. It extracts structured knowledge from video — transcripts, diagrams, action items, key points, and a knowledge graph — into browsable outputs (Markdown, HTML, PDF).

How it works:

  - Extracts frames using change detection (not just every Nth frame), with periodic capture for slow-evolving content like screen shares
  - Filters out webcam/people-only frames automatically via face detection
  - Transcribes audio (OpenAI Whisper API or local Whisper — no API needed)
  - Sends frames to vision models to identify and recreate diagrams as Mermaid code
  - Builds a knowledge graph (entities + relationships) from the transcript
  - Extracts key points, action items, and cross-references between visual and spoken content
  - Generates a structured report with everything linked together
Supports OpenAI, Anthropic, and Gemini as providers — auto-discovers available models and routes each task to the best one. Checkpoint/resume so long analyses survive failures.

  pip install planopticon
  planopticon analyze -i meeting.mp4 -o ./output
Also supports batch processing of entire folders and pulling videos from Google Drive or Dropbox.

Example: We ran it on a 90-minute training session: 122 frames extracted (from thousands of candidates), 6 diagrams recreated, full transcript with speaker diarization, 540-node knowledge graph, and a comprehensive report — all in about 25 minutes.
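The frame-selection idea — change detection plus periodic capture — can be sketched like this; the threshold, interval, and frame representation are illustrative, not PlanOpticon's actual pipeline:

```python
# Sketch of change-detection frame extraction: keep a frame when it
# differs enough from the last kept frame, plus a periodic capture for
# slow-evolving content like screen shares. Frames here are flat lists
# of pixel intensities; thresholds are illustrative.
def diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_frames(frames, threshold=0.1, every=30):
    kept, last = [], None
    for i, frame in enumerate(frames):
        if last is None or diff(frame, last) > threshold or i % every == 0:
            kept.append(i)
            last = frame
    return kept

frames = [[0.0]] * 5 + [[1.0]] * 5  # static scene, then a hard cut
print(select_frames(frames))
```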

Python 3.10+, MIT licensed. Docs at https://planopticon.dev.

github.com
2 0
rosslazer 1 day ago

Show HN: A reputation index from mitchellh's Vouch trust files

I was inspired by mitchellh's Vouch project, an explicit trust system where maintainers vouch for contributors before they can interact with a repo. Ghostty uses it to filter out AI slop PRs.

Because Vouch exposes the vouch list as a plain text file (VOUCHED.td), I realized I could aggregate them across GitHub and build a reputation index. A crawler finds every VOUCHED.td file, pulls the entries, and computes a weighted score per user. Vouches from high-star repos count more than vouches from zero-star repos.
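The aggregation step might look something like this sketch — the log weighting by repo stars is an assumption for illustration, not necessarily vouchbook's actual formula:

```python
import math

# Illustrative reputation aggregation: each vouch contributes more when
# it comes from a higher-starred repo. The +10 floor keeps zero-star
# repos at a small non-zero weight; both choices are assumptions.
def reputation(vouches):  # vouches: [(username, repo_stars), ...]
    scores = {}
    for user, stars in vouches:
        scores[user] = scores.get(user, 0.0) + math.log10(stars + 10)
    return scores

rep = reputation([("alice", 12000), ("alice", 0), ("bob", 0)])
```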

Next step is to wire up an API so that the vouch GH action can start using this data to auto-approve contributors.

vouchbook.dev
17 3
anateus 1 day ago

Show HN: Open Notes – Community Notes-style context for Discord

Howdy, Open Notes co-founder here!

At Open Notes, we're building a system for community-driven constructive moderation and annotation that can be added to anything. Under the hood, we're using the open-source Twitter/X Community Notes algorithm (though that doesn't really kick in until you've got some scale). We're interested in providing everyone with tools for managing discourse that go beyond traditional moderation. Discord is the demo/reference integration, but we want it to go anywhere and everywhere. Part of our thesis is that we want to get to where people are already talking rather than drag them to a clean and empty new room where we ask them to continue the conversation.

It's interesting that Pol.is was just recently on HN (https://news.ycombinator.com/item?id=46992815) because we're obviously inspired by them as well as the whole canon of social choice theory--we're just going at it from a different angle. It's long been true that if you wanted to trap me/yourself in a conversation, you could just bring up the Condorcet criterion (amongst others), so I'm finally turning an obsession into an actual product.

We want to enable people to make decisions about conversations as close to the conversation as possible while minimizing impact on live threads. Later, this nicely extends into all sorts of group decisionmaking. As our conversations are increasingly awash in AI of all sorts (as moderators, participants, analysts, etc.), things that help manage the discourse to fit the needs of individual communities need to be scalable but without drowning human choice in an ocean of automation.

Also, we're open-source: https://github.com/opennotes-ai/opennotes

Would love to hear people's thoughts and reactions. This has so much surface area ("all online discourse"), it's hard to formulate specific questions so instead we built a thing and now we'd love to see if it works for folks.

opennotes.ai
14 0
Show HN: SQL-tap – Real-time SQL traffic viewer for PostgreSQL and MySQL
mickamy 1 day ago

Show HN: SQL-tap – Real-time SQL traffic viewer for PostgreSQL and MySQL

sql-tap is a transparent proxy that captures SQL queries by parsing the PostgreSQL/MySQL wire protocol and displays them in a terminal UI. You can run EXPLAIN on any captured query. No application code changes needed — just change the port.
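Stripped of the wire-protocol parsing, the proxy core is a byte-forwarding loop with a tap. A minimal asyncio sketch of that shape — sql-tap itself additionally frames protocol messages, extracts query text, and drives the TUI:

```python
import asyncio

# Minimal pass-through TCP proxy in the spirit of sql-tap: bytes are
# forwarded unchanged while a tap callback observes each chunk. Real
# wire-protocol parsing (message framing, EXPLAIN support) is omitted.
async def pipe(reader, writer, tap):
    while data := await reader.read(4096):
        tap(data)                  # observe traffic without altering it
        writer.write(data)
        await writer.drain()
    writer.close()

async def proxy(listen_port, db_host, db_port, tap=print):
    async def handle(client_r, client_w):
        up_r, up_w = await asyncio.open_connection(db_host, db_port)
        await asyncio.gather(pipe(client_r, up_w, tap),
                             pipe(up_r, client_w, tap))
    server = await asyncio.start_server(handle, port=listen_port)
    async with server:
        await server.serve_forever()

# asyncio.run(proxy(6432, "localhost", 5432))  # point your app at :6432
```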

github.com
221 42
Show HN: GitHub "Lines Viewed" extension to keep you sane reviewing long AI PRs
somesortofthing 1 day ago

Show HN: GitHub "Lines Viewed" extension to keep you sane reviewing long AI PRs

I was frustrated with how poor a signal of progress "Files viewed" is on a big PR, so I made a "Lines viewed" indicator to complement it.

Designed to look like a stock GitHub UI element — it even respects light/dark themes. Runs fully locally, no API calls.

Splits insertions and deletions by default, but you can also merge them into a single "lines" figure in the settings.
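The underlying aggregation is simple. An illustrative sketch — the field names are mine, not the extension's:

```python
# Sketch of the "lines viewed" figure: sum insertions and deletions over
# files marked viewed versus over all files in the PR. Field names
# ("add", "del", "viewed") are illustrative.
def lines_viewed(files):
    viewed = sum(f["add"] + f["del"] for f in files if f["viewed"])
    total = sum(f["add"] + f["del"] for f in files)
    return viewed, total

done, total = lines_viewed([
    {"add": 120, "del": 30, "viewed": True},
    {"add": 500, "del": 0,  "viewed": False},
])
```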

chromewebstore.google.com
11 9
Show HN: Data Engineering Book – An open source, community-driven guide
xx123122 1 day ago

Show HN: Data Engineering Book – An open source, community-driven guide

Hi HN! I'm currently a Master's student at USTC (University of Science and Technology of China). I've been diving deep into Data Engineering, especially in the context of Large Language Models (LLMs).

The Problem: I found that learning resources for modern data engineering are often fragmented and scattered across hundreds of medium articles or disjointed tutorials. It's hard to piece everything together into a coherent system.

The Solution: I decided to open-source my learning notes and build them into a structured book. My goal is to help developers fast-track their learning curve.

Key Features:

LLM-Centric: Focuses on data pipelines specifically designed for LLM training and RAG systems.

Scenario-Based: Instead of just listing tools, I compare different methods/architectures based on specific business scenarios (e.g., "When to use Vector DB vs. Keyword Search").

Hands-on Projects: Includes full code for real-world implementations, not just "Hello World" examples.

This is a work in progress, and I'm treating it as "Book-as-Code". I would love to hear your feedback on the roadmap or any "anti-patterns" I might have included!

Check it out:

Online: https://datascale-ai.github.io/data_engineering_book/

GitHub: https://github.com/datascale-ai/data_engineering_book

github.com
242 28
Show HN: Git Navigator – Use Git Without Learning Git
binhonglee about 9 hours ago

Show HN: Git Navigator – Use Git Without Learning Git

Hey HN, I built a VS Code extension that lets you do Git things without memorizing Git commands.

You know what you want to do: move this commit over there, undo that thing you just did, split this big commit into two smaller ones. Git Navigator lets you just... do that. Drag a commit to rebase it. Cherry-pick (copy) it onto another branch. Click to stage specific lines. The visual canvas shows you what's happening, so you're not guessing what `git rebase -i HEAD~3` actually means.

The inspiration was Sapling's Interactive Smartlog, which I used heavily at Meta. I wanted that same experience but built specifically for Git.

A few feature callouts:

- Worktrees — create, switch, and delete linked worktrees from the graph. All actions are worktree-aware so you're always working in the right checkout.
- Stacked workflows — first-class stack mode if you're into stacked diffs, but totally optional.
- Conflict resolution — block-level choices instead of hunting through `<<<<<<<` markers.

Works in VS Code, Cursor, and Antigravity. Just needs a Git repo.

Site: https://gitnav.xyz

VSCode Marketplace: https://marketplace.visualstudio.com/items?itemName=binhongl...

Open VSX: https://open-vsx.org/extension/binhonglee/git-navigator

gitnav.xyz
8 0
Show HN: Bubble sort on a Turing machine
purplejacket 1 day ago

Show HN: Bubble sort on a Turing machine

Bubble sort is pretty simple in most programming languages ... what about on a Turing machine? I used all three of Claude 4.6, GLM 5, and GPT 5.2 to get a result, so this exercise was not quite trivial, at least for now. The resulting machine, bubble_sort_unary.yaml, takes this input:

111011011111110101111101111

and give this output:

101101110111101111101111111

I.e., it's sorting the array [3,2,7,1,5,4]. The machine has 31 states and requires 1424 steps before it comes to a halt. It also introduces two extra symbols onto the tape, 'A' and 'B'. (You could argue that 0 is also an extra symbol, because turingmachine.io uses blank, ' ', as well.)

When I started writing the code, the LLM (Claude) balked at using unary numbers, so we implemented bubble_sort.yaml, which uses the tape symbols '1', '2', '3', '4', '5', '6', '7'. This machine has fewer states, 25, and requires only 63 steps to perform the sort, so it's easier to watch it work, though it's not as general as the other TM.

Some comments about how the 31 states of bubble_sort_unary.yaml operate:

  | Group | Count | Purpose |
  |---|---|---|
  | `seek_delim_{clean,dirty}` | 2 | Pass entry: scan right to the next `0` delimiter between adjacent numbers. |
  | `cmpR_*`, `cmpL_*`, `cmpL_ret_*`, `cmpL_fwd_*` | 8 | Comparison: alternately mark units in the right (`B`) and left (`A`) numbers to compare their sizes. |
  | `chk_excess_*`, `scan_excess_*`, `mark_all_X_*` | 6 | Excess check: right number exhausted — see if unmarked `1`s remain on the left (meaning L > R, swap needed). |
  | `swap_*` | 7 | Swap: bubble each `X`-marked excess unit rightward across the `0` delimiter. |
  | `restore_*` | 6 | Restore: convert `A`, `B`, `X` marks back to `1`s, then advance to the next pair. |
  | `rewind` / `done` | 2 | Rewind to start after a dirty pass, or halt. |
(The above is in the README.md if it doesn't render on HN.)
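For readers who want to poke at the mechanics, a machine in this rules-table form can be stepped in a few lines of Python. The demo rules below just scan right to the end of the tape and halt — they are illustrative, not the 31-state sorter:

```python
# Minimal Turing-machine stepper in the spirit of turingmachine.io's
# model: rules map (state, symbol) -> (write, move, next_state).
def run(tape, rules, state="start", pos=0, blank=" ", limit=10_000):
    tape = dict(enumerate(tape))          # sparse tape, int -> symbol
    for _ in range(limit):
        if state == "done":
            break
        sym = tape.get(pos, blank)
        write, move, state = rules[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip()

# Demo rule set: move right over 1s and 0s, halt at the first blank.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "0"): ("0", "R", "start"),
    ("start", " "): (" ", "L", "done"),
}
print(run("110", rules))
```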

I'm curious if anyone can suggest refinements or further ideas. And please send pull requests if you're so inclined. My development path: I started by writing a pretty simple INITIAL_IDEAS.md, which got updated somewhat, then the LLM created a SPECIFICATION.md. For the bubble_sort_unary.yaml TM I had to get the LLMs to build a SPEC_UNARY.md because too much context was confusing them. I made 21 commits throughout the project and worked for about 6 hours (I was able to multi-task, so it wasn't 6 hours of hard effort). I spent about $14 on tokens via Zed and asked some questions via t3.chat ($8/month plan).

A final question: What open source license is good for these types of mini-projects? I took the path of least resistance and used MIT, but I observe that turingmachine.io uses BSD 3-Clause. I've heard of "MIT with Commons Clause;" what's the landscape surrounding these kind of license questions nowadays?

github.com
6 0
jimmyechan 1 day ago

Show HN: A playable toy model of frontier AI lab capex decisions

I made a lightweight web game about compute CAPEX tradeoffs: https://darios-dilemma.up.railway.app/

No signup, runs on mobile/desktop.

Loop per round:

1. choose compute capacity
2. forecast demand
3. allocate capacity between training and inference
4. random demand shock resolves outcome

You can end profitable, cash constrained, or bankrupt depending on allocation + forecast error.
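One round of that loop can be sketched as a toy function — every number here (shock range, prices) is illustrative, not the game's actual economics:

```python
import random

# Toy sketch of one round: pick capacity, forecast demand, split capacity
# between training and inference, then a random demand shock resolves the
# outcome. Unit revenue/cost and the shock range are made up.
def play_round(capacity, forecast, train_frac, seed=None):
    rng = random.Random(seed)
    demand = forecast * rng.uniform(0.7, 1.3)   # demand shock
    inference_cap = capacity * (1 - train_frac) # rest goes to training
    served = min(demand, inference_cap)
    revenue = served * 3.0                      # $/unit served
    cost = capacity * 1.0                       # $/unit provisioned
    return revenue - cost                       # profit (can be negative)

print(play_round(100, 100, 0.3, seed=42))
```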

Goal was to make the decision surface intuitive in 2–3 minutes per run.

It’s a toy model and deliberately omits many real world factors.

Note: this is based on what I learned after listening to Dario on Dwarkesh's podcast - thought it was fascinating.

darios-dilemma.up.railway.app
8 0
pattle 3 days ago

Show HN: Geo Racers – Race from London to Tokyo on a single bus pass

Geo Racers is a mobile game that combines geography and racing, allowing players to explore real-world locations and compete in fast-paced races. The game aims to make learning about different countries and landmarks engaging and fun.

geo-racers.com
144 86
austinwang115 1 day ago

Show HN: Skill that lets Claude Code/Codex spin up VMs and GPUs

I've been working on CloudRouter, a skill + CLI that gives coding agents like Claude Code and Codex the ability to start cloud VMs and GPUs.

When an agent writes code, it usually needs to start a dev server, run tests, open a browser to verify its work. Today that all happens on your local machine. This works fine for a single task, but the agent is sharing your computer: your ports, RAM, screen. If you run multiple agents in parallel, it gets a bit chaotic. Docker helps with isolation, but it still uses your machine's resources, and doesn't give the agent a browser, a desktop, or a GPU to close the loop properly. The agent could handle all of this on its own if it had a primitive for starting VMs.

CloudRouter is that primitive — a skill that gives the agent its own machines. The agent can start a VM from your local project directory, upload the project files, run commands on the VM, and tear it down when it's done. If it needs a GPU, it can request one.

  cloudrouter start ./my-project
  cloudrouter start --gpu B200 ./my-project
  cloudrouter ssh cr_abc123 "npm install && npm run dev"
Every VM comes with a VNC desktop, VS Code, and Jupyter Lab, all behind auth-protected URLs. When the agent is doing browser automation on the VM, you can open the VNC URL and watch it in real time. CloudRouter wraps agent-browser [1] for browser automation.

  cloudrouter browser open cr_abc123 "http://localhost:3000"
  cloudrouter browser snapshot -i cr_abc123
  # → @e1 [link] Home  @e2 [link] Settings  @e3 [button] Sign Out
  cloudrouter browser click cr_abc123 @e2
  cloudrouter browser screenshot cr_abc123 result.png
Here's a short demo: https://youtu.be/SCkkzxKBcPE

What surprised me is how this inverted my workflow. Most cloud dev tooling starts from cloud (background agents, remote SSH, etc) to local for testing. But CloudRouter keeps your agents local and pushes the agent's work to the cloud. The agent does the same things it would do locally — running dev servers, operating browsers — but now on a VM. As I stopped watching agents work and worrying about local constraints, I started to run more tasks in parallel.

The GPU side is the part I'm most curious to see develop. Today if you want a coding agent to help with anything involving training or inference, there's a manual step where you go provision a machine. With CloudRouter the agent can just spin up a GPU sandbox, run the workload, and clean it up when it's done. Some of my friends have been using it to have agents run small experiments in parallel, but my ears are open to other use cases.

Would love your feedback and ideas. CloudRouter lives under packages/cloudrouter of our monorepo https://github.com/manaflow-ai/manaflow.

[1] https://github.com/vercel-labs/agent-browser

cloudrouter.dev
133 33
Show HN: Auto-Layouting ASCII Diagrams
switz about 14 hours ago

Show HN: Auto-Layouting ASCII Diagrams

Box of Rain is an open-source tool that automatically lays out ASCII diagrams: you declare the boxes and the connections between them, and it handles the positioning and routing in plain text.

github.com
6 2
gottebp about 11 hours ago

Show HN: An x86 assembly game from 2002, ported to WebAssembly with Claude Code

An x86 assembly game originally released in 2002, ported to run in the browser via WebAssembly with the help of Claude Code.

particlefield.com
5 2
Show HN: Twsnmp FK – Lightweight NMS Built with Go, Wails, and Svelte
twsnmp about 11 hours ago

Show HN: Twsnmp FK – Lightweight NMS Built with Go, Wails, and Svelte

Hi HN, developer here.

I’ve been developing and maintaining a network management tool called TWSNMP for about 25 years. This new version, "FK" (Fresh Konpaku), is a complete modern rewrite.

Why I built this: Most enterprise NMS are heavy, server-based, and complex to set up. I wanted something that runs natively on a desktop, is extremely fast to launch, and provides deep insights like packet analysis and NetFlow without a huge infrastructure.

The Tech Stack: - Backend: Go (for high-speed log processing and SNMP polling) - Frontend: Svelte (to keep the UI snappy and lightweight) - Bridge: Wails (to build a cross-platform desktop app without the bulk of Electron)

I’m looking for feedback from fellow network admins and developers. What features do you find most essential in a modern, lightweight NMS?

GitHub: https://github.com/twsnmp/twsnmpfk

github.com
3 0
Summary
fabienpenso 3 days ago

Show HN: Moltis – AI assistant with memory, tools, and self-extending skills

Hey HN. I'm Fabien, principal engineer, 25 years shipping production systems (Ruby, Swift, now Rust). I built Moltis because I wanted an AI assistant I could run myself, trust end to end, and make extensible in the Rust way using traits and the type system. It shares some ideas with OpenClaw (same memory approach, Pi-inspired self-extension) but is Rust-native from the ground up. The agent can create its own skills at runtime.

Moltis is one Rust binary, 150k lines, ~60MB, web UI included. No Node, no Python, no runtime deps. Multi-provider LLM routing (OpenAI, local GGUF/MLX, Hugging Face), sandboxed execution (Docker/Podman/Apple Containers), hybrid vector + full-text memory, MCP tool servers with auto-restart, and multi-channel (web, Telegram, API) with shared context. MIT licensed. No telemetry phoning home, but full observability built in (OpenTelemetry, Prometheus).

I've included 1-click deploys on DigitalOcean and Fly.io, but since a Docker image is provided you can easily run it on your own servers as well. I've written before about owning your content (https://pen.so/2020/11/07/own-your-content/) and owning your email (https://pen.so/2020/12/10/own-your-email/). Same logic here: if something touches your files, credentials, and daily workflow, you should be able to inspect it, audit it, and fork it if the project changes direction.

It's alpha. I use it daily and I'm shipping because it's useful, not because it's done.

Longer architecture deep-dive: https://pen.so/2026/02/12/moltis-a-personal-ai-assistant-bui...

Happy to discuss the Rust architecture, security model, or local LLM setup. Would love feedback.

moltis.org
120 46
Show HN: Open-source CI for coding with AI
sburl 1 day ago

Show HN: Open-source CI for coding with AI

Blog post: https://spencerburleigh.com/blog/2026/02/13/crosscheck/

Repo: https://github.com/sburl/CrossCheck

github.com
4 0
ansht2 about 12 hours ago

Show HN: Stack Overflow, but for AI agents (questions, answers, logs, context)

Hi HN — I built ChatOverflow, a Q&A forum for AI coding agents (Stack Overflow style).

Agents keep re-learning the same debugging patterns each run (tool/version quirks, setup issues, framework behaviors). ChatOverflow is a shared place where agents post a question (symptom + logs + minimal reproduction + env context) and an answer (steps + why it works), so future agents can search and reuse it. Small test on 57 SWE-bench Lite tasks: letting agents search prior posts reduced average time 18.7 min → 10.5 min (-44%). A big bet here is that karma/upvotes/acceptance can act as a lightweight “verification signal” for solutions that consistently work in practice.

Inspired by Moltbook. Feedback wanted on:

1. Where would this fit in your agent workflow?

2. How would you reduce prompt injection and prevent agents coordinating/brigading to push adversarial or low-quality posts?

chatoverflow.dev
2 0
keepamovin about 20 hours ago

Show HN: Windows 98½ – fake desktop, real Internet

win9-5.com
10 6
Show HN: OpenWhisper – free, local, and private voice-to-text macOS app
rwu1997 1 day ago

Show HN: OpenWhisper – free, local, and private voice-to-text macOS app

I wanted a voice-to-text app but didn't trust any of the proprietary ones with my privacy.

So I decided to see if I could vibe-code it with zero macOS app or Swift experience.

It uses a local binary of whisper.cpp (a fast implementation of OpenAI's Whisper voice-to-text model in C++).

Github: https://github.com/richardwu/openwhisper

I also decided to take this as an opportunity to compare 3 agentic coding harnesses:

Cursor w/ Opus 4.6:

- Best one-shot UI by far
- Didn't get permissioning correct
- Had issues with the "Cancel recording" hotkey being active all the time

Claude Code w/ Opus 4.6:

- Fewest turns to get the main functionality right (recording, hotkeys, permissions)
- Got a decent UI with a few more turns

Codex App w/ Codex 5.3 Extra-High:

- Worst one-shot UI
- None of the functionality worked without multiple follow-up prompts

github.com
35 14
Show HN: ClipPath – Paste screenshots as file paths in your terminal
viniciusborgeis 1 day ago

Show HN: ClipPath – Paste screenshots as file paths in your terminal

ClipPath is an open-source tool that lets you paste a screenshot from your clipboard into your terminal as a file path rather than raw image data — handy for CLI tools that expect a path to an image file.

github.com
16 1
error404x 1 day ago

Show HN: Prompt to Planet, generate procedural 3D planets from text

prompttoplanet.n4ze3m.com
12 12
Show HN: Keyjump – a keyboard-first new tab for power-users
kristianmitk about 15 hours ago

Show HN: Keyjump – a keyboard-first new tab for power-users

I built Keyjump, a keyboard-first new tab page for quickly jumping between bookmarks, dashboards, and predefined site searches.

I originally made it for myself during my CS studies because I was constantly switching between tools and wanted something faster than clicking through bookmarks. I’ve used it locally for years and recently cleaned it up and made it public: https://keyjump.app/

Main characteristics:

- Keyboard-first navigation

- Custom search templates (e.g. jump directly to search results on specific sites)

- Local-first: data stored in browser local storage by default

- No account required

- Optional account for cross-device sync and persistence (when clearing browser data)

- Chrome extension available (Firefox planned); it also lets you quickly save bookmarks/search queries from other tabs and launch an overlay on any page

- Theme and layout customization
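A custom search template boils down to keyword-to-URL substitution. An illustrative sketch — Keyjump's actual template syntax and storage format may differ:

```python
# Sketch of a search template: a keyword maps to a URL pattern and the
# rest of the typed command is substituted in. The template table and
# `{q}` placeholder syntax are illustrative.
templates = {"hn": "https://hn.algolia.com/?q={q}"}

def jump(command):
    key, _, query = command.partition(" ")
    return templates[key].format(q=query.replace(" ", "+"))

print(jump("hn show hn"))
```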

It’s intentionally simple and focused. I’d appreciate any feedback or criticism.

keyjump.app
2 1