Show stories

cannoneyed about 18 hours ago

Show HN: isometric.nyc – giant isometric pixel art map of NYC

Hey HN! I wanted to share something I built over the last few weeks: isometric.nyc is a massive isometric pixel art map of NYC, built with nano banana and coding agents.

I didn't write a single line of code.

Of course no-code doesn't mean no-engineering. This project took a lot more manual labor than I'd hoped!

I wrote a deep dive on the workflow and some thoughts about the future of AI coding and creativity:

http://cannoneyed.com/projects/isometric-nyc

cannoneyed.com
915 184
tsanummy 4 days ago

Show HN: Txt2plotter – True centerline vectors from Flux.2 for pen plotters

I’ve been working on a project to bridge the gap between AI generation and my AxiDraw, and I think I finally have a workflow that avoids the usual headaches.

If you’ve tried plotting AI-generated images, you probably know the struggle: generic tracing tools (like Potrace) trace the outline of a line, resulting in double-strokes that ruin the look and take twice as long to plot.

What I tried previously:

- Potrace / Inkscape Trace: Great for filled shapes, but results in "hollow" lines for line art.

- Canny Edge Detection: Often too messy; it picks up noise and creates jittery paths.

- Standard SDXL: Struggled with geometric coherence, often breaking lines or hallucinating perspective.

- A bunch of projects that claimed to be txt2svg but which produced extremely poor results, at least for pen plotting. (Chat2SVG, StarVector, OmniSVG, DeepSVG, SVG-VAE, VectorFusion, DiffSketcher, SVGDreamer, SVGDreamer++, NeuralSVG, SVGFusion, VectorWeaver, SwiftSketch, CLIPasso, CLIPDraw, InternSVG)

My Approach:

I ended up writing a Python tool that combines a few specific technologies to get a true "centerline" vector:

1. Prompt Engineering: An LLM rewrites the prompt to enforce a "Technical Drawing" style optimized for the generator.

2. Generation: I'm using Flux.2-dev (4-bit). It seems significantly better than SDXL at maintaining straight lines and coherent geometry.

3. Skeletonization: This is the key part. Instead of tracing contours, I use Lee’s Method (via scikit-image) to erode the image down to a 1-pixel-wide skeleton, which recovers the actual stroke path (a minimal sketch of this step follows the list).

4. Graph Conversion: The pixel skeleton is converted into a graph to identify nodes and edges, pruning out small artifacts/noise.

5. Optimization: Finally, I feed it into vpype to merge segments and sort the paths (TSP) so the plotter isn't jumping around constantly.
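To make the skeletonization step concrete, here is a minimal sketch of step 3, assuming a black-on-white line-art image; the file name and Otsu thresholding are illustrative choices, not necessarily what the tool does:

  import numpy as np
  from skimage import filters, io
  from skimage.morphology import skeletonize

  # Load the generated line art and binarize it (True = ink).
  img = io.imread("lineart.png", as_gray=True)
  ink = img < filters.threshold_otsu(img)

  # Lee's method erodes the ink mask down to a 1-pixel-wide centerline.
  skeleton = skeletonize(ink, method="lee")

  # The remaining True pixels trace the stroke paths; these feed the
  # graph-conversion step (nodes at junctions/endpoints, edges between them).
  ys, xs = np.nonzero(skeleton)
  print(f"{len(xs)} skeleton pixels")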

You can see the results in the examples inside the GitHub repo.

The project is currently quite barebones, but it produces better results than the other options I've tested, so I'm publishing it. I'm interested in implementing better pre/post-processing, API-based generation, and identifying shapes for cross-hatching.

github.com
14 5
schopra909 about 18 hours ago

Show HN: Text-to-video model from scratch (2 brothers, 2 years, 2B params)

Writeup (includes good/bad sample generations): https://www.linum.ai/field-notes/launch-linum-v2

We're Sahil and Manu, two brothers who spent the last 2 years training text-to-video models from scratch. Today we're releasing them under Apache 2.0.

These are 2B param models capable of generating 2-5 seconds of footage at either 360p or 720p. In terms of model size, the closest comparison is Alibaba's Wan 2.1 1.3B. From our testing, we get significantly better motion capture and aesthetics.

We're not claiming to have reached the frontier. For us, this is a stepping stone towards SOTA - proof we can train these models end-to-end ourselves.

Why train a model from scratch?

We shipped our first model in January 2024 (pre-Sora) as a 180p, 1-second GIF bot, bootstrapped off Stable Diffusion XL. Image VAEs don't understand temporal coherence, and without the original training data, you can't smoothly transition between image and video distributions. At some point you're better off starting over.

For v2, we use T5 for text encoding, Wan 2.1 VAE for compression, and a DiT-variant backbone trained with flow matching. We built our own temporal VAE but Wan's was smaller with equivalent performance, so we used it to save on embedding costs. (We'll open-source our VAE shortly.)
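For readers unfamiliar with flow matching, here is a minimal PyTorch-style sketch of the training objective; it is purely illustrative, since the actual model class, latent shapes, and training loop aren't shown in this post:

  import torch

  def flow_matching_loss(model, x1, text_emb):
      """x1: clean video latents (B, C, T, H, W); text_emb: T5 text embeddings."""
      x0 = torch.randn_like(x1)                      # noise endpoint
      t = torch.rand(x1.shape[0], device=x1.device)  # per-sample time in [0, 1]
      t_ = t.view(-1, 1, 1, 1, 1)
      x_t = (1 - t_) * x0 + t_ * x1                  # point on the straight-line path
      v_target = x1 - x0                             # target velocity along that path
      v_pred = model(x_t, t, text_emb)               # the DiT predicts the velocity field
      return torch.mean((v_pred - v_target) ** 2)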

The bulk of development time went into building curation pipelines that actually work (e.g., hand-labeling aesthetic properties and fine-tuning VLMs to filter at scale).

What works: Cartoon/animated styles, food and nature scenes, simple character motion. What doesn't: Complex physics, fast motion (e.g., gymnastics, dancing), consistent text.

Why build this when Veo/Sora exist? Products are extensions of the underlying model's capabilities. If users want a feature the model doesn't support (character consistency, camera controls, editing, style mapping, etc.), you're stuck. To build the product we want, we need to update the model itself. That means owning the development process. It's a bet that will take time (and a lot of GPU compute) to pay off, but we think it's the right one.

What’s next?
- Post-training for physics/deformations
- Distillation for speed
- Audio capabilities
- Model scaling

We kept a “lab notebook” of all our experiments in Notion. Happy to answer questions about building a model from 0 → 1. Comments and feedback welcome!

huggingface.co
81 15
felarof about 18 hours ago

Show HN: BrowserOS – "Claude Cowork" in the browser

Hey HN! We're Nithin and Nikhil, twin brothers building BrowserOS (YC S24). We're an open-source, privacy-first alternative to the AI browsers from big labs.

The big differentiator: on BrowserOS you can use local LLMs or BYOK and run the agent entirely on the client side, so your company/sensitive data stays on your machine!

Today we're launching filesystem access... just like Claude Cowork, our browser agent can read files, write files, run shell commands! But honestly, we didn't plan for this. It turns out the privacy decision we made 9 months ago accidentally positioned us for this moment.

The architectural bet we made 9 months ago: Unlike other AI browsers (ChatGPT Atlas, Perplexity Comet) where the agent loop runs server-side, we decided early on to run our agent entirely on your machine (client side).

But building everything on the client side wasn't smooth. We initially built our agent loop inside a Chrome extension, but we kept hitting walls -- the service worker being single-threaded JS, and no access to Node.js libraries. So we made the hard decision 2 months ago to throw away everything and start from scratch.

In the new architecture, our agent loop sits in a standalone binary that we ship alongside our Chromium. And we use gemini-cli for the agent loop with some tweaks! We wrote a neat adapter to translate between Gemini format and Vercel AI SDK format. You can look at our entire codebase here: https://git.new/browseros-agent

How we give browser access to filesystem: When Claude Cowork launched, we realized something: because Atlas and Comet run their agent loop server-side, there's no good way for their agent to access your files without uploading them to the server first. But our agent was already local. Adding filesystem access meant just... opening the door (with your permissions ofc). Our agent can now read and write files just like Claude Code.

What you can actually do today:

a) Organize files in my desktop folder https://youtu.be/NOZ7xjto6Uc

b) Open top 5 HN links, extract the details and write summary into a HTML file https://youtu.be/uXvqs_TCmMQ

Where we are now: If you haven't tried us since the last Show HN (https://news.ycombinator.com/item?id=44523409), give us another shot. The new architecture unlocked a ton of new features, and we've grown to 8.5K GitHub stars and 100K+ downloads:

c) You can now build more reliable workflows using n8n-like graph https://youtu.be/H_bFfWIevSY

d) You can also use BrowserOS as an MCP server in Cursor or Claude Code https://youtu.be/5nevh00lckM

We are very bullish on the browser being the right platform for a Claude Cowork-like agent. The browser is the most commonly used app by knowledge workers (emails, docs, spreadsheets, research, etc.). And even Anthropic recognizes this -- for Claude Cowork, they have a janky integration with the browser via a Chrome extension. But owning the entire stack allows us to build differentiated features that wouldn't be possible otherwise. Ex: Browser ACLs.

Agents can do dumb or destructive things, so we're adding browser-level guardrails (think IAM for agents): "role(agent): can never click buy" or "role(agent): read-only access on my bank's homepage."

Curious to hear your take on this and the overall thesis.

We’ll be in the comments. Thanks for reading!

GitHub: https://github.com/browseros-ai/BrowserOS

Download: https://browseros.com (available for Mac, Windows, Linux!)

github.com
69 25
lilouartz about 20 hours ago

Show HN: I've been using AI to analyze every supplement on the market

Hey HN! This has been my project for a few years now. I recently brought it back to life after taking a pause to focus on my studies.

My goal with this project is to separate fluff from science when shopping for supplements. I am doing this in 3 steps:

1.) I index every supplement on the market (extract each ingredient, normalize by quantity – a sketch of the normalization step follows this list)

2.) I index every research paper on supplementation (rank every claim by effect type and effect size)

3.) I link data between supplements and research papers
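As a rough illustration of the normalization in step 1 (the actual pipeline isn't shown in this post, so the unit table and field names below are made up; real data also needs IU and CFU handling):

  # Convert ingredient quantities to a common unit (mg) so products can be compared.
  UNIT_TO_MG = {"mg": 1.0, "g": 1000.0, "mcg": 0.001}

  def normalize_ingredient(name: str, amount: float, unit: str) -> dict:
      if unit not in UNIT_TO_MG:
          raise ValueError(f"unhandled unit: {unit}")
      return {"ingredient": name.strip().lower(), "amount_mg": amount * UNIT_TO_MG[unit]}

  print(normalize_ingredient("Vitamin D3", 25.0, "mcg"))  # {'ingredient': 'vitamin d3', 'amount_mg': 0.025}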

Earlier last year, I put the project on pause because I ran into a few issues:

Legal: Shady companies send cease-and-desist letters demanding that their products be taken down from the website. It is not something I had the mental capacity to respond to while also going through my studies. Not coincidentally, these are usually brands with big marketing budgets and a poor ingredients-to-price ratio.

Technical: I started this project when the first LLMs came out. I built extensive internal evals to understand how LLMs were performing. The hallucinations at the time were simply too frequent to pass this data through to visitors. However, I recently re-ran my evals with Opus 4.5 and was very impressed. I am running out of scenarios I can think of (or find) where LLMs are bad at interpreting the data.

Business: I still haven't figured out how to monetize it or even who the target customer is.

Despite these challenges, I decided to restart my journey.

My mission is to bring transparency (science and price) to the supplement market. My goal is NOT to increase the use of supplements, but rather to help consumers make informed decisions. Oftentimes, supplementation is not necessary, or there are natural ways to supplement (that's my focus this quarter – better education about natural supplementation).

Some things that are helping my cause – Bryan Johnson's journey has drawn a lot more attention to healthy supplementation (blueprint). Thanks to Bryan's efforts, I had so many people in recent months reach out to ask about the state of the project – interest I've not had before.

I am excited to restart this journey and to share it with HN. Your comments on how to approach this would be massively appreciated.

Some key areas of the website:

* Example of navigating supplements by ingredient https://pillser.com/search?q=%22Vitamin+D%22&s=jho4espsuc

* Example of research paper analyzed using AI https://pillser.com/research-papers/effect-of-lactobacillus-...

* Example of looking for very specific strains or ingredients https://pillser.com/probiotics/bifidobacterium-bifidum

* Example of navigating research by health-outcomes https://pillser.com/health-outcomes/improved-intestinal-barr...

* Example of product listing https://pillser.com/supplements/pb-8-probiotic-663

pillser.com
69 32
anticlickwise 4 days ago

Show HN: Interactive physics simulations I built while teaching my daughter

I started teaching my daughter physics by showing her how things actually work - plucking guitar strings to explain vibration, mixing paints to understand light, dropping objects to see gravity in action.

She learned so much faster through hands-on exploration than through books or videos. That's when I realized: what if I could recreate these physical experiments as interactive simulations?

Lumen is the result - an interactive physics playground covering sound, light, motion, life, and mechanics. Each module lets you manipulate variables in real-time and see/hear the results immediately.

Try it: https://www.projectlumen.app/

projectlumen.app
80 21
SerafimKorablev about 17 hours ago

Show HN: First Claude Code client for Ollama local models

Just to clarify the background a bit. This project wasn’t planned as a big standalone release at first. On January 16, Ollama added support for an Anthropic-compatible API, and I was curious how far this could be pushed in practice. I decided to try plugging local Ollama models directly into a Claude Code-style workflow and see if it would actually work end to end.

Here is the release note from Ollama that made this possible: https://ollama.com/blog/claude

Technically, what I do is pretty straightforward:

- Detect which local models are available in Ollama (a minimal sketch of this step follows the list).

- When internet access is unavailable, the client automatically switches to Ollama-backed local models instead of remote ones.

- From the user’s perspective, it is the same Claude Code flow, just backed by local inference.
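A minimal sketch of the model-detection step, using Ollama's local HTTP API (the client itself may do this differently; error handling is kept to the bare minimum):

  import requests

  def list_local_models(base_url: str = "http://localhost:11434") -> list[str]:
      """Ask the local Ollama daemon which models are already pulled."""
      resp = requests.get(f"{base_url}/api/tags", timeout=2)
      resp.raise_for_status()
      return [m["name"] for m in resp.json().get("models", [])]

  try:
      print("local models:", list_local_models())  # e.g. ['qwen3-coder:30b', ...]
  except requests.ConnectionError:
      print("Ollama not running; fall back to remote models")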

In practice, the best-performing model so far has been qwen3-coder:30b. I also tested glm-4.7-flash, which was released very recently, but it struggles with reliably following tool-calling instructions, so it is not usable for this workflow yet.

twitter.com
38 19
tevans3 1 day ago

Show HN: Synesthesia, make noise music with a colorpicker

This is a (silly, little) app which lets you make noise music using a color picker as an instrument. When you click on a specific point in the color picker, a bit of JavaScript maps the binary representation of the clicked-on color's hex code to a "chord" in the 24-tone equal temperament (24-TET) scale. That chord is then played back using a throttled audio-generation method implemented with Tone.js.
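For anyone curious about the math, here is a rough Python illustration of a hex-color-to-24-TET mapping; the site does this in JavaScript with Tone.js, and the exact bit-to-chord scheme below is a guess, not the author's implementation:

  def hex_to_chord(hex_color: str, base_freq: float = 220.0) -> list[float]:
      """Map a 24-bit color to three pitches on a 24-note-per-octave scale."""
      value = int(hex_color.lstrip("#"), 16)
      # Split into R, G, B bytes and fold each into one of 24 quarter-tone steps.
      steps = [(value >> shift) & 0xFF for shift in (16, 8, 0)]
      return [round(base_freq * 2 ** ((s % 24) / 24), 2) for s in steps]

  print(hex_to_chord("#3fa7c2"))  # three frequencies in Hz, played together as a "chord"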

NOTE! Turn the volume way down before using the site. It is noise music. :)

visualnoise.ca
33 13
schappim about 14 hours ago

Show HN: CLI for working with Apple Core ML models

The CoreML-CLI is a command-line tool that simplifies the integration of Core ML models into iOS and macOS applications. It provides an easy-to-use interface for converting various model formats, including TensorFlow, PyTorch, and ONNX, into the Core ML format.

github.com
43 4
baobabmeeko about 4 hours ago

Show HN: Wake – Terminal Session Context for Claude Code via MCP

I kept copy-pasting terminal output into Claude Code. Built this instead.

Wake spawns your shell in a PTY, captures commands and output via shell hooks, stores everything in SQLite, and exposes it through an MCP server. Claude Code can then query your terminal history directly.
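Wake itself is written in Rust and uses shell hooks, but the core capture idea (spawn the shell in a PTY, mirror its output into SQLite) fits in a few lines of Python as a sketch (POSIX only; the schema below is illustrative):

  import os
  import pty
  import sqlite3
  import time

  db = sqlite3.connect("terminal_log.db")
  db.execute("CREATE TABLE IF NOT EXISTS output (ts REAL, chunk BLOB)")

  def log_output(fd: int) -> bytes:
      """Called by pty.spawn whenever the child shell produces output."""
      data = os.read(fd, 1024)
      if data:
          db.execute("INSERT INTO output VALUES (?, ?)", (time.time(), data))
          db.commit()
      return data

  # Run an interactive shell; everything it prints is also written to SQLite.
  pty.spawn([os.environ.get("SHELL", "/bin/bash")], master_read=log_output)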

  - `wake shell` to start a session
  - Work normally
  - Claude sees what happened

Written in Rust. Zsh/bash supported. All data stays in ~/.wake/

github.com
2 0
epsteingpt about 18 hours ago

Show HN: Bible translated using LLMs from source Greek and Hebrew

Built an auditable AI (Bible) translation pipeline: Hebrew/Greek source packets -> verse JSON with notes rolling up to chapters, books, and testaments. Final texts compiled with metrics (TTR, n-grams).

This is the first full-text example as far as I know (Gen Z bible doesn't count).

There are hallucinations and issues, but the overall quality surprised me.

LLMs show a lot of promise for translating ancient texts and rendering them 'accessible'.

The technology has a lot of benefit for the faithful, and I think it is only beginning to be explored.

biblexica.com
43 59
williamzeng0 1 day ago

Show HN: Sweep, Open-weights 1.5B model for next-edit autocomplete

Hey HN, we trained and open-sourced a 1.5B model that predicts your next edits, similar to Cursor. You can download the weights here (https://huggingface.co/sweepai/sweep-next-edit-1.5b) or try it in our JetBrains plugin (https://plugins.jetbrains.com/plugin/26860-sweep-ai-autocomp...).

Next-edit autocomplete differs from standard autocomplete by using your recent edits as context when predicting completions. The model is small enough to run locally while outperforming models 4x its size on both speed and accuracy.

We tested against Mercury (Inception), Zeta (Zed), and Instinct (Continue) across five benchmarks: next-edit above/below cursor, tab-to-jump for distant changes, standard FIM, and noisiness. We found exact-match accuracy correlates best with real usability because code is fairly precise and the solution space is small.

Prompt format turned out to matter more than we expected. We ran a genetic algorithm over 30+ diff formats and found simple `original`/`updated` blocks beat unified diffs. The verbose format is just easier for smaller models to understand.
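To illustrate the difference, here is a hypothetical rendering of the same edit as a unified diff versus an `original`/`updated` block; the exact delimiters the model was trained on aren't given in this post, so treat these as placeholders:

  unified_diff = """\
  @@ -1,2 +1,2 @@
   def add(a, b):
  -    return a - b
  +    return a + b
  """

  original_updated = """\
  <original>
  def add(a, b):
      return a - b
  </original>
  <updated>
  def add(a, b):
      return a + b
  </updated>
  """

  # The second form repeats full lines instead of encoding +/- context,
  # which is the kind of verbose format the post says smaller models handle better.
  print(original_updated)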

Training was SFT on ~100k examples from permissively-licensed repos (4hrs on 8xH100), then RL for 2000 steps with tree-sitter parse checking and size regularization. The RL step fixes edge cases SFT can’t, like generating code that doesn’t parse or overly verbose outputs.

We're open-sourcing the weights so the community can build fast, privacy-preserving autocomplete for any editor. If you're building for VSCode, Neovim, or something else, we'd love to see what you make with it!

huggingface.co
518 138
death_eternal about 14 hours ago

Show HN: I'm writing an alternative to Lutris

It's free and open source. The aim is to have more transparent access to Wine prefixes and the surrounding tooling (winetricks, Proton configuration, etc.) per game, compared to Lutris. The same features, like per-game statistics (time played, times launched, times crashed, and so on), are available in the app.

github.com
13 2
albertsikkema about 5 hours ago

Show HN: Extracting React apps from Figma Make's undocumented binary format

The article explores methods for reverse-engineering Figma design files, allowing users to extract and modify the underlying data, such as vector graphics, text elements, and layer structures, without directly accessing the Figma application.

albertsikkema.com
2 3
Mokshgarg003 about 14 hours ago

Show HN: Figr – AI that thinks through product problems before designing

I built Figr AI because I got tired of AI builder tools marketing themselves as design tools and then skipping the hard part.

Every tool I tried would jump straight to screens. But that's not how product design actually works. You don't just design screens. You think through the problem first: the flows, the edge cases, the user journey, where people will get stuck. Only then does the design come.

Figr does that thinking layer first. It parses your existing product via a chrome extension or takes in screen-records, then works through the problem with you before designing. Surfaces edge cases, maps flows, generates specs, reviews UX. The design comes after the thinking.

It is able to do so because we trained it on more than 200k real UX patterns and UX principles. Our major focus is on helping build the right UX by understanding the product.

The difference from Lovable/Bolt/V0: I think those are interface builders. They are good when you know exactly what you want to build but they don't truly help in finding the right solution to the problem. Our aim with Figr is to be more like an AI PM that happens to also design.

Some difficult UX problems we've worked through with it: https://figr.design/gallery

Would love feedback, especially from folks who've hit the same wall with other AI builder/design tools.

figr.design
10 4
justalever 1 day ago

Show HN: Rails UI

RailsUI is a comprehensive open-source library of UI components and design tools for building modern, responsive web applications with Ruby on Rails. It provides a range of pre-built, visually appealing components that can be easily integrated into Rails projects to accelerate development and enhance the user experience.

railsui.com
201 108
crazyguitar about 6 hours ago

Show HN: C/C++ Cheatsheet – a modern, practical reference for C and C++

Hi HN,

I’m the creator of C/C++ Cheatsheet — a modern, practical reference for both C and C++ developers. It includes concise snippet-style explanations of core language features, advanced topics like coroutines and constexpr, system programming sections, debugging tools, and useful project setups. You can explore it online at https://cppcheatsheet.com/.

I built this to help both beginners and experienced engineers quickly find clear examples and explanations without digging through fragmented blogs or outdated docs. It’s open source, regularly maintained, and contributions are welcome on GitHub.

If you’ve ever wanted a lightweight, example-focused guide to:
- Modern C++ (templates, lambdas, concepts)
- C fundamentals and memory handling
- System programming
- Debugging & profiling
…this site aims to be that resource.

Any feedback is welcome. Thank you.

github.com
3 0
reutinger about 6 hours ago

Show HN: The firmware that got me detained by Swiss Intelligence

github.com
5 16
huntergemmer 2 days ago

Show HN: ChartGPU – WebGPU-powered charting library (1M points at 60fps)

Creator here. I built ChartGPU because I kept hitting the same wall: charting libraries that claim to be "fast" but choke past 100K data points.

The core insight: Canvas2D is fundamentally CPU-bound. Even WebGL chart libraries still do most computation on the CPU. So I moved everything to the GPU via WebGPU:

- LTTB downsampling runs as a compute shader (a CPU reference sketch follows this list)
- Hit-testing for tooltips/hover is GPU-accelerated
- Rendering uses instanced draws (one draw call per series)
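For intuition, LTTB (Largest-Triangle-Three-Buckets) is a standard downsampling algorithm; a plain CPU reference looks roughly like the sketch below, while ChartGPU runs the same idea as a WebGPU compute shader:

  def lttb(xs, ys, threshold):
      """Downsample (xs, ys) to `threshold` points, keeping visually important ones."""
      n = len(xs)
      if threshold >= n or threshold < 3:
          return list(zip(xs, ys))
      out = [(xs[0], ys[0])]
      bucket = (n - 2) / (threshold - 2)
      a = 0  # index of the previously selected point
      for i in range(threshold - 2):
          start, end = int(i * bucket) + 1, int((i + 1) * bucket) + 1
          nxt_end = min(int((i + 2) * bucket) + 1, n)
          avg_x = sum(xs[end:nxt_end]) / (nxt_end - end)   # average of the next bucket
          avg_y = sum(ys[end:nxt_end]) / (nxt_end - end)
          best, best_area = start, -1.0
          for j in range(start, end):                      # largest triangle wins
              area = abs((xs[a] - avg_x) * (ys[j] - ys[a]) - (xs[a] - xs[j]) * (avg_y - ys[a]))
              if area > best_area:
                  best, best_area = j, area
          out.append((xs[best], ys[best]))
          a = best
      out.append((xs[-1], ys[-1]))
      return out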

The result: 1M points at 60fps with smooth zoom/pan.

Live demo: https://chartgpu.github.io/ChartGPU/examples/million-points/

Currently supports line, area, bar, scatter, pie, and candlestick charts. MIT licensed, available on npm: `npm install chartgpu`

Happy to answer questions about WebGPU internals or architecture decisions.

github.com
658 207
calcsam 3 days ago

Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs

Hi HN, we're Sam, Shane, and Abhi.

Almost a year ago, we first shared Mastra here (https://news.ycombinator.com/item?id=43103073). It’s kind of fun looking back since we were only a few months into building at the time. The HN community gave a lot of enthusiasm and some helpful feedback.

Today, we released Mastra 1.0 in stable, so we wanted to come back and talk about what’s changed.

If you’re new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect in a local studio, and emit observability.

Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It’s now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity.

Agent development is changing quickly, so we’ve added a lot since February:

- Native model routing: You can access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`) with TS autocomplete and fallbacks.

- Guardrails: Low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part.

- Scorers: An async eval primitive for grading agent outputs. Users were asking how they should do evals. We wanted to make it easy to attach to Mastra agents, runnable in Mastra studio, and save results in Mastra storage.

- Plus a few other features like AI tracing (per-call costing for Langfuse, Braintrust, etc), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra within an existing Express/Hono server.

(That last one took a bit of time, we went down the ESM/CJS bundling rabbithole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.)

Anyway, we'd love for you to try Mastra out and let us know what you think. You can get started with `npm create mastra@latest`.

We'll be around and happy to answer any questions!

github.com
213 69
borenstein 2 days ago

Show HN: yolo-cage – AI coding agents that can't exfiltrate secrets

I made this for myself, and it seemed like it might be useful to others. I'd love some feedback, both on the threat model and the tool itself. I hope you find it useful!

Backstory: I've been using many agents in parallel as I work on a somewhat ambitious financial analysis tool. I was juggling agents working on epics for the linear solver, the persistence layer, the front-end, and planning for the second-generation solver. I was losing my mind playing whack-a-mole with the permission prompts. YOLO mode felt so tempting. And yet.

Then it occurred to me: what if YOLO mode isn't so bad? Decision fatigue is a thing. If I could cap the blast radius of a confused agent, maybe I could just review once. Wouldn't that be safer?

So that day, while my kids were taking a nap, I decided to see if I could put YOLO-mode Claude inside a sandbox that blocks exfiltration and regulates git access. The result is yolo-cage.

Also: the AI wrote its own containment system from inside the system's own prototype. Which is either very aligned or very meta, depending on how you look at it.

github.com
59 72
neo2006 about 7 hours ago

Show HN: gRPC Transport for HashiCorp/Raft

A gRPC-based transport implementation for the Raft distributed consensus protocol, enabling efficient and scalable communication between Raft nodes in a distributed system.

github.com
2 0
Zemmouri about 8 hours ago

Show HN: CleanAF – One-click Desktop cleaner for Windows

Hi HN,

I built CleanAF because my Windows Desktop kept turning into a dumping ground for downloads and screenshots.

CleanAF is a tiny one-click tool that:

- keeps system icons intact

- moves everything else into a timestamped “Current Desktop” folder

- auto-sorts files by type

- requires no install, no internet, no background service

It’s intentionally simple and does one thing only.

Source + download: https://github.com/TheZemmouri/CleanAF

I’m considering adding undo/restore, scheduling, and exclusion rules if people find it useful.

Feedback welcome.

github.com
2 0
Bayram 1 day ago

Show HN: Retain – A unified knowledge base for all your AI coding conversations

Hey HN! I built Retain as the evolution of claude-reflect (github.com/BayramAnnakov/claude-reflect).

The original problem: I use Claude Code/Codex daily for coding, plus claude.ai and ChatGPT occasionally. Every conversation contains decisions, corrections, and patterns I forget existed weeks later. I kept re-explaining the same preferences.

claude-reflect was a CLI tool that extracted learnings from Claude Code sessions. Retain takes this further with a native macOS app that:

- Aggregates conversations from Claude Code, claude.ai, ChatGPT, and Codex CLI
- Instant full-text search across thousands of conversations (SQLite + FTS5; see the sketch below)
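The FTS5 part is standard SQLite; here's a minimal sketch of the search idea (Retain's actual schema isn't shown in the post, so the table and column names are made up):

  import sqlite3

  con = sqlite3.connect("conversations.db")
  con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS messages USING fts5(source, role, content)")
  con.executemany(
      "INSERT INTO messages VALUES (?, ?, ?)",
      [("claude-code", "user", "refactor the billing module"),
       ("chatgpt", "assistant", "here is the refactored billing code ...")],
  )
  # Full-text query with a highlighted snippet from the `content` column.
  query = "SELECT source, snippet(messages, 2, '[', ']', '…', 8) FROM messages WHERE messages MATCH ?"
  for row in con.execute(query, ("billing",)):
      print(row)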

It's local-first - all data stays in a local SQLite database. No servers, no telemetry. Web sync uses your browser cookies to fetch conversations directly.

github.com
45 16
bhushanwtf about 17 hours ago

Show HN: A Node Based Editor for Three.js Shading Language (TSL)

Three.js recently introduced TSL (Three.js Shading Language), a way to write shaders in pure JavaScript/TypeScript that compiles to both GLSL and WGSL. I built this editor to provide a visual interface for the TSL ecosystem. It allows developers to prototype shaders for WebGPU/WebGL and see the results in real time. This is a beta release and I'm looking for feedback.

tsl-graph.xyz
4 1
quantbagel 1 day ago

Show HN: High speed graphics rendering research with tinygrad/tinyJIT

I saw a tweet that tinygrad is so good that you could make a graphics library that wraps tg. So I’ve been hacking on gtinygrad, and honestly it convinced me it could be used for legit research.

The JIT + tensor model ends up being a really nice way to express light transport all in simple Python, so I reimplemented some new research papers from SIGGRAPH, like ReSTIR PG and SZ, and it just works. Instead of complicated C++, it’s just ~200 LOC of Python.

github.com
28 10
hkh 2 days ago

Show HN: See the carbon impact of your cloud as you code

Hey folks, I’m Hassan, one of the co-founders of Infracost (https://www.infracost.io). Infracost helps engineers see and reduce the cloud cost of each infrastructure change before they merge their code. The way Infracost works: we gather pricing data from Amazon Web Services, Microsoft Azure and Google Cloud into what we call a ‘Pricing Service’, which now holds around 9 million live price points (!!). Then we map these prices to infrastructure code. Once the mapping is done, it enables us to show the cost impact of a code change before it is merged, directly in GitHub, GitLab etc. Kind of like a checkout screen for cloud infrastructure.

We’ve been building since 2020 (we were part of the YC W21 batch), iterating on the product, building out a team, etc. However, back in 2020 one of our users asked if we could also show the carbon impact alongside costs.

It has been itching my brain since then. The biggest challenge has always been the carbon data. The mapping of carbon data to infrastructure is time consuming, but it is possible since we’ve done it with cloud costs. But we need the raw carbon data first. The discussions that have happened in the last few years finally led me to a company called Greenpixie in the UK. A few of our existing customers were using them already, so I immediately connected with the founder, John.

Greenpixie said they have the data (AHA!!), and their data is verified (ISO-14064 & aligned with the Greenhouse Gas Protocol). As soon as I talked to a few of their customers, I asked my team to see if we could actually, finally, do this and build it.

My thinking is this: some engineers will care, and some will not (or maybe some will love it and some will hate it!). For those who care, cost and carbon are actually linked; meaning if you reduce the carbon, you usually reduce the cost of the cloud too. It can act as another motivation factor.

And now, it is here, and I’d love your feedback. Try it out by going to https://dashboard.infracost.io/, create an account, set up with the GitHub app or GitLab app, and send a pull request with Terraform changes (you can use our example terraform file). It will then show you the cost impact alongside the carbon impact, and how you can optimize it.

I’d especially love to hear your feedback on if you think carbon is a big driver for engineers within your teams, or if carbon is a big driver for your company (i.e. is there anything top-down about carbon).

AMA - I’ll be monitoring the thread :)

Thanks

66 27
xinbenlv 1 day ago

Show HN: Dotenv Mask Editor: No more embarrassing screen leaks of your .env

Hi HN,

I built this because I often work in coworking spaces or do screen sharing, and I've always had this fear of accidentally flashing my .env file with production secrets to the whole room (or recording).

It’s a simple VS Code extension that opens .env files in a custom grid editor. It automatically masks any value longer than 6 characters so I can safely open the file to check keys without exposing the actual secrets.
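The masking rule is simple enough to sketch; the extension itself is a VS Code (TypeScript) project, so this Python snippet only illustrates the "mask anything longer than 6 characters" idea:

  def mask_env_line(line: str, threshold: int = 6) -> str:
      """Mask values longer than `threshold`; leave comments and short values alone."""
      if "=" not in line or line.lstrip().startswith("#"):
          return line
      key, value = line.split("=", 1)
      return f"{key}={'•' * len(value)}" if len(value) > threshold else line

  print(mask_env_line("STRIPE_SECRET_KEY=sk_live_abcdef123456"))  # key stays readable, value is hidden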

It runs 100% locally with zero dependencies (I know how sensitive these files are). It just reads the file, renders the grid, and saves it back as standard text.

It's open source (MIT) and I'd love any feedback on the masking logic or other features that would make it safer to use.

Marketplace: https://marketplace.visualstudio.com/items?itemName=xinbenlv... Github https://github.com/xinbenlv/dotenv-mask-editor

marketplace.visualstudio.com
25 23
brendonmatos 5 days ago

Show HN: Open-source certificate from GitHub activity

I built this as a small side project to learn and experiment, and I ended up with this!

I used a subdomain from my personal portfolio, and everything else runs on free tiers.

The project uses Nuxt, SVG, Cloudflare Workers, D1 (SQL), KV, Terraform, and some agentic coding with OpenAI Codex and Claude Code.

What started as a joke among friends turned into a fun excuse to build something end to end, from zero to production, and to explore a few things I’d never touched before.

I’d really appreciate any feedback or suggestions.

certificate.brendonmatos.com
42 14
deofoo about 16 hours ago

Show HN: LaReview, local open-source CodeRabbit alternative

hihi,

LaReview is a dev-first code review workbench for complex changes.

You give it a PR (GitHub/GitLab) or a diff, and it builds a structured review plan grouped by flows (auth, API, billing) and ordered by risk. The goal is to make big reviews feel like a plan you can actually follow, not an endless scroll.

It runs locally and is designed to work with your existing AI coding agent (bring your own agent). No bot comment spam. You decide what feedback gets posted back to the PR.

Highlights:

  - AI review planning: flow-based tasks + risk ordering
  - Task-focused diffs: isolate only the hunks relevant to one concern
  - Custom rules: enforce standards like “DB queries must have timeouts”
  - Optional diagrams to understand flows before reading code (requires D2)
  - GitHub/GitLab sync to submit selected feedback + generate a summary
  - Export summary to Markdown
CLI:

  lareview
  lareview pr owner/repo#123
  git diff | lareview
  lareview --agent claude
Install:

  brew install --cask puemos/tap/lareview
Repo:

  https://github.com/puemos/lareview
I would love feedback on the workflow, what you would want it to catch, and what would make you trust it in real reviews.

github.com
3 0