Show stories

dnhkng about 8 hours ago

Show HN: How I Topped the HuggingFace Open LLM Leaderboard on Two Gaming GPUs

dnhkng.github.io
206 73
Show HN: 2D RPG base game client recreated in modern HTML5 game engine with AI
erkok about 1 hour ago

Show HN: 2D RPG base game client recreated in modern HTML5 game engine with AI

When I was much younger, I used to play a Korean MMORPG called Helbreath, and I also hosted a bunch of private servers for it. I eventually moved on, but I always loved the game’s aesthetics, its 2D nature, and its atmosphere. That may just be nostalgia talking.

The community-maintained private server and client, which to my knowledge were based on leaked official files, were written in fairly archaic C++. If you’re interested in the original sources, I’ve included the main client and server files, Client.cpp and Server.cpp, in the reference folder. I always felt that if the project were rewritten in something more modern and better structured, a lot more could be done with it. But rewriting an MMORPG client and server from scratch is not exactly the kind of thing you do on a whim. That said, one developer got pretty far with a C# rewrite and an XNA-based client, though that project has since been discontinued.

Now that AI has become quite capable, I decided to see how far I could get by hooking up the original assets in a modern HTML5 game engine. I wanted HTML5 because I figured a nearly 30-year-old 2D game should run just fine in a browser. I ended up choosing Phaser 3 for a few reasons: it's 2D-only, free, HTML5-first (JS/TS), and code-first, which mattered because I wanted good Cursor integration for AI assistance. Another thing I liked was its integration with React, which let me build the UI using browser technologies and render it at native resolution on top of the WebGL canvas, rather than building the UI inside the game engine itself, which runs at 1024x576. The original game ran at 640x480.

After about 1.5 months of talking to AI on evenings and weekends, and roughly $200 worth of Cursor usage later, I finished hooking up the original assets in a modern game engine that seems to run just fine in a browser.

By "base game client", I mean that it's not fully hooked up in terms of how the full (MMO)RPG should function, but it does include all the original assets and core mechanics needed to provide a solid foundation if you want to build your own 2D (MMO)RPG on top of it. Continuing to build with AI should also work just fine, since this is how I managed to get that far. The asset library is quite rich, if you ask me, but there is one caveat: these assets are not in the public domain. They are still the property of someone, or some entity, that inherited the IP from the original developer, which is no longer in business. You can read more about that on the GitHub page.

github.com
3 0
Show HN: DD Photos – open-source photo album site generator (Go and SvelteKit)
dougdonohoe about 8 hours ago

Show HN: DD Photos – open-source photo album site generator (Go and SvelteKit)

I was frustrated with photo sharing sites. Apple's iCloud shared albums take 20+ seconds to load, and everything else comes with ads, cumbersome UIs, or social media distractions. I just want to share photos with friends and family: fast, mobile-friendly, distraction-free.

So I built DD Photos. You export photos from whatever you already use (Lightroom, Apple Photos, etc.) into folders, run `photogen` (a Go CLI) to resize them to WebP and generate JSON indexes, then deploy the SvelteKit static site anywhere that serves files. Apache, S3, whatever. No server-side code, no database.
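
The index-generation half of that pipeline boils down to walking album folders and emitting JSON the static site can fetch. A stdlib-only sketch of the idea (the file layout and field names here are illustrative, not photogen's actual format):

```python
import json
import pathlib

IMAGE_SUFFIXES = {".webp", ".jpg", ".jpeg", ".png"}

def build_index(album_dir):
    """Write a JSON index of the images in one album folder."""
    album = pathlib.Path(album_dir)
    photos = sorted(
        p.name for p in album.iterdir()
        if p.suffix.lower() in IMAGE_SUFFIXES
    )
    index = {"album": album.name, "photos": photos}
    # the static frontend reads this file instead of hitting a database
    (album / "index.json").write_text(json.dumps(index, indent=2))
    return index
```

Since the output is plain files, "deploying" really is just copying the folder to any static host.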

Built over several weeks with heavy use of Claude Code, which I found genuinely useful for this kind of full-stack project spanning Go, SvelteKit/TypeScript, Apache config, Docker, and Playwright tests. Happy to discuss that experience too.

Live example: https://photos.donohoe.info Repo: https://github.com/dougdonohoe/ddphotos

github.com
52 14
aidanadd about 21 hours ago

Show HN: A retention mechanic for learning that isn't Duolingo manipulation?

I've spent the last few years shipping learning products at scale - Andrew Ng's AI upskilling platform, my MIT Media Lab spinoff focused on AI coaching. The retention problem was the same everywhere: people would engage with content once and not return. Not because the content was bad - rather because there was no mechanism or motivation to make it a habit.

The standard industry answer is gamification: streaks, points, badges. Duolingo has shown this works for language, but I'm skeptical it generalizes. Duolingo's retention is built on a very specific anxiety loop that feels increasingly manipulative and doesn't translate well to topics like astrophysics or reading dense research papers.

I've been building Daily - 5 min/day structured social learning on any topic, personalized by knowledge level. Early and small (20 users). The interesting design question I keep running into: what actually drives someone to return to learn something they want to learn but don't need to learn? No external accountability, no credential at the end, no job pressure. Pure intrinsic motivation is notoriously hard to sustain.

My current hypothesis: the return trigger isn't gamification, it's social - knowing someone else is learning the same thing, or that someone will notice if you stop. Testing this in month 1.

Has anyone built in this space or thought carefully about the retention mechanic for purely intrinsic learning? Curious what the HN crowd has seen work.

dailylabs.co
5 3
Show HN: Satellite imagery object detection using text prompts
eyasu6464 1 day ago

Show HN: Satellite imagery object detection using text prompts

I built a browser-based tool for detecting objects in satellite imagery using vision-language models (VLMs). You draw a polygon on the map and enter a text prompt such as "swimming pools", "oil tanks", or "buses". The system scans the selected area tile-by-tile and returns detections projected back onto the map as GeoJSON.

Pipeline: select area and zoom level, split the region into mercantile tiles, run each tile with the prompt through a VLM, convert predicted bounding boxes to geographic coordinates (WGS84), and render the results back on the map.
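The tile-pixel-to-WGS84 step of that pipeline can be sketched with the standard Web Mercator math (an illustration of the projection, not the site's actual code; 256px tiles assumed):

```python
import math

TILE = 256  # pixels per XYZ tile edge

def pixel_to_lnglat(x, y, z, px, py):
    """Convert a pixel position inside XYZ tile (x, y, z) to WGS84 lon/lat."""
    n = 2 ** z
    fx = x + px / TILE  # fractional tile coordinates
    fy = y + py / TILE
    lon = fx / n * 360.0 - 180.0
    lat = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * fy / n))))
    return lon, lat

def bbox_to_geojson(x, y, z, bbox):
    """Project a detector bbox (x0, y0, x1, y1 in tile pixels) to a GeoJSON Polygon."""
    x0, y0, x1, y1 = bbox
    w, n_lat = pixel_to_lnglat(x, y, z, x0, y0)
    e, s = pixel_to_lnglat(x, y, z, x1, y1)
    return {
        "type": "Feature",
        "geometry": {
            "type": "Polygon",
            "coordinates": [[[w, n_lat], [e, n_lat], [e, s], [w, s], [w, n_lat]]],
        },
        "properties": {},
    }
```

Note GeoJSON uses lon/lat order, and the latitude mapping is non-linear (the inverse Mercator), which is why a plain linear interpolation over the tile would be slightly off.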

It works reasonably well for distinct structures in a zero-shot setting. Occluded objects are still better handled by specialized detectors like YOLO models.

There is a public demo and no login required. I am mainly interested in feedback on detection quality, performance tradeoffs between VLMs and specialized detectors, and potential real-world use cases.

useful-ai-tools.com
6 0
jacomoRodriguez about 2 hours ago

Show HN: Don't share code. Share the prompt

Hey HN, I'm Mario. I recently talked to a colleague about AI, agents and how software development will change in the future. We were wondering why we should even share code anymore when AI agents are already really good at implementing software, just through prompts. Why can't everyone get customized software with prompts?

"Share the prompt, not the code."

Well, I thought, great idea, let's do that. That's why I built Open Prompt Hub: https://openprompthub.io.

Think GitHub, just for prompts.

The idea is simple: users can upload prompts that can then be used by you and your AI tools to generate a script, app, or web service (or prime their agent for a certain task). Just paste one into your agent or IDE and watch it build for you. If the prompt doesn't fully cover your use case, fork it, tweak it, et voilà: tailor-made software ready to use!

The prompts are simple Markdown files with a frontmatter block for meta information. (The spec can be found here: https://openprompthub.io/docs) They are versioned, carry information on which AI models have built them successfully, and include instructions on how the AI agent can test the resulting software.
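
A prompt file in this scheme might look like the following sketch; the field names are made up for illustration and are not the actual spec (see the docs link for that):

```markdown
---
title: csv-to-json converter
version: 1.2.0
builds:
  - model: claude-sonnet
    result: pass
  - model: gpt-4o
    result: pass
test: |
  Run the generated script on a sample CSV and check it emits valid JSON.
---

Write a small CLI tool that reads a CSV file passed as the first argument
and prints the rows as a JSON array of objects to stdout.
```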

Users can mention with which models they have successfully or unsuccessfully executed a prompt (builds or fail). This helps in assessing whether a prompt provides reliable output or not.

Want to create an open prompt file? Here is a prompt that will guide you through it: https://openprompthub.io/open-prompt-hub/create-open-prompt

Security! Always a topic when dealing with AI and prompts. I've added several security checks that screen every prompt for injections and malicious behavior: statistical analysis, plus two LLM-based checks for behaviour classification and prompt-injection detection.

It's an MVP for now. But all the mentioned features are already included.

If this sounds good, let me know. Try a prompt, fork it, or tell me what you'd change in the spec or security scanner. I'm really curious about what would make you trust and reuse prompts. Or if you like the general idea...

openprompthub.com
2 1
Show HN: A playable version of the Claude Code Terraform destroy incident
cdnsteve about 7 hours ago

Show HN: A playable version of the Claude Code Terraform destroy incident

youbrokeprod.com
17 7
Show HN: A modern React onboarding tour library
bilater about 5 hours ago

Show HN: A modern React onboarding tour library

react-tourlight is a modern React tour library: zero dependencies, WCAG 2.1 AA accessible, under 5 kB gzipped, and it works with React 19.

github.com
4 1
mrktsm__ about 16 hours ago

Show HN: I Was Here – Draw on street view, others can find your drawings

Hey HN, I made a site where you can draw on street-level panoramas. Your drawings persist and other people can see them in real time.

Strokes get projected onto the 3D panorama so they wrap around buildings and follow the geometry, not just a flat overlay. Uses WebGL2 for rendering, Mapillary for the street imagery.

The idea is for it to become a global canvas, anyone can leave a mark anywhere and others stumble onto it.

washere.live
59 44
data-leon about 1 hour ago

Show HN: An on-device Mac app for real-time posture reminders

Built this because I wanted a simple posture app I could leave running all day without extra friction. It runs locally on Mac, watches for posture drift in real time, and nudges you when you start slouching.

I’d especially love feedback on accuracy, privacy expectations, and whether the reminders feel useful in practice.

Cheers!

apps.apple.com
3 0
rubenflamshep about 4 hours ago

Show HN: Agentic Data Analysis with Claude Code

Hey HN, as a former data analyst, I’ve been tooling around trying to get agents to do my old job. The result is this system that gets you maybe 80% of the way there. I think this is a good data point for what the current frontier models are capable of and where they are still lacking (in this case — hypothesis generation and general data intuition).

Some initial learnings:

- Generating web app-based reports goes much better if there are explicit templates/pre-defined components for the model to use.
- Claude can “heal” broken charts if you give it access to chart images and run a separate QA loop.

Would love either feedback from the community or to hear from others that have tried similar things!

rubenflamshepherd.com
4 0
belisarius222 1 day ago

Show HN: The Mog Programming Language

Hi, Ted here, creator of Mog.

- Mog is a statically typed, compiled, embedded language (think statically typed Lua) designed to be written by LLMs -- the full spec fits in 3,200 tokens.
- An AI agent writes a Mog program, compiles it, and dynamically loads it as a plugin, script, or hook.
- The host controls exactly which functions a Mog program can call (capability-based permissions), so permissions propagate from agent to agent-written code.
- Compiled to native code for low-latency plugin execution -- no interpreter overhead, no JIT, no process startup cost.
- The compiler is written in safe Rust so the entire toolchain can be audited for security. Even without a full security audit, Mog is already useful for agents extending themselves with their own code.
- MIT licensed, contributions welcome.

Motivations for Mog:

1. Syntax Only an AI Could Love: Mog is written for AIs to write, so the spec fits easily in context (~3200 tokens), and it's intended to minimize foot-guns to lower the error rate when generating Mog code. This is why Mog has no operator precedence: non-associative operations have to use parentheses, e.g. (a + b) * c. It's also why there's no implicit type coercion, which I've found over the decades to be an annoying source of runtime bugs. There's also less support in Mog for generics, and there's absolutely no support for metaprogramming, macros, or syntactic abstraction.

When asking people to write code in a language, these restrictions could be onerous. But LLMs don't care, and the less expressivity you trust them with, the better.

2. Capabilities-Based Permissions: There's a paradox with existing security models for AI agents. If you give an agent like OpenClaw unfettered access to your data, that's insecure and you'll get pwned. But if you sandbox it, it can't do most of what you want. Worse, if you run scripts the agent wrote, those scripts don't inherit the permissions that constrain the agent's own bash tool calls, which leads to pwnage and other chaos. And that's not even assuming you run one of the many OpenClaw plugins with malware.

Mog tries to solve this by taking inspiration from embedded languages. It compiles all the way to machine code, ahead of time, but the compiler doesn't output any dangerous code (at least it shouldn't -- Mog is quite new, so that could still be buggy). This allows a host program, such as an AI agent, to generate Mog source code, compile it, and load it into itself using dlopen(), while maintaining security guarantees.

The main trick is that a Mog program on its own can't do much. It has no direct access to syscalls, libc, or memory. It can basically call functions, do heap allocations (but only within the arena the host gives it), and return something. If the host wants the Mog program to be able to do I/O, it has to supply the functions that the Mog program will call. A core invariant is that a Mog program should never be able to crash the host program, corrupt its state, or consume more resources than the host allows.

This allows the host to inspect the arguments to any potentially dangerous operation that the Mog program attempts, since it's code that runs in the host. For example, a host agent could give a Mog program a function to run a bash command, then enforce its own session-level permissions on that command, even though the command was dynamically generated by a plugin that was written without prior knowledge of those permission settings.
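
The pattern described above can be sketched generically (a Python illustration of capability-based hosting, not Mog's actual host API; the guarded-bash helper is hypothetical):

```python
class Host:
    """A host that exposes only explicitly granted functions to guest code."""

    def __init__(self):
        self._caps = {}

    def grant(self, name, fn):
        """Give the guest the capability to call `fn` under `name`."""
        self._caps[name] = fn

    def call(self, name, *args):
        """The guest's only doorway to the outside world."""
        if name not in self._caps:
            raise PermissionError(f"guest has no capability '{name}'")
        return self._caps[name](*args)


def make_guarded_bash(allowed_prefixes):
    """The host inspects arguments before any dangerous operation runs."""
    def run(cmd):
        if not any(cmd.startswith(p) for p in allowed_prefixes):
            raise PermissionError(f"blocked command: {cmd!r}")
        return f"would run: {cmd}"  # a real host would exec here
    return run
```

The key point is that policy lives in the host function, so it applies even to commands a plugin generated with no knowledge of that policy.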

(There are a couple other tricks that PL people might find interesting. One is that the host can limit the execution time of the guest program. It does this using cooperative interrupt polling, i.e. the compiler inserts runtime checks that check if the host has asked the guest to stop. This causes a roughly 10% drop in performance on extremely tight loops, which are the worst case. It could almost certainly be optimized.)
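
The cooperative polling idea can be illustrated with a host-managed budget that compiler-inserted checks consult (a Python analogy of the mechanism, not Mog's implementation):

```python
class Budget:
    """Host-side execution budget that the guest's inserted checks consult."""

    def __init__(self, max_polls):
        self.remaining = max_polls

    def poll(self):
        self.remaining -= 1
        if self.remaining < 0:
            raise TimeoutError("host asked the guest to stop")


def guest_loop(budget, n):
    """A 'guest' loop; imagine poll() inserted by the compiler at each back-edge."""
    total = 0
    for i in range(n):
        budget.poll()  # cooperative interrupt check
        total += i
    return total
```

In the compiled setting the check is a cheap flag read rather than an exception, which is where the ~10% tight-loop overhead comes from.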

3. Self Modification Without Restart: When I try to modify my OpenClaw from my phone, I have to restart the whole agent. Mog fixes this: an agent can compile and run new plugins without interrupting a session, which makes it dynamically responsive to user feedback (e.g., you tell it to always ask you before deleting a file and without any interruption it compiles and loads the code to... actually do that).

Async support is built into the language, by adapting LLVM's coroutine lowering to our Rust port of the QBE compiler, which is what Mog uses for compilation. The Mog host library can be slotted into an async event loop (tested with Bun), so Mog async calls get scheduled seamlessly by the agent's event loop. Another trick is that the Mog program uses a stack inside the memory arena that the host provides for it to run in, rather than the system stack. The system tracks a guard page between the stack and heap. This design prevents stack overflow without runtime overhead.

Lots of work still needs to be done to make Mog a "batteries-included" experience like Python. Most of that work involves fleshing out a standard library to include things like JSON, CSV, Sqlite, and HTTP. One high-impact addition would be an `llm` library that allows the guest to make LLM calls through the agent, which should support multiple models and token budgeting, so the host could prevent the plugin from burning too many tokens.

I suspect we'll also want to do more work to make the program lifecycle operations more ergonomic. And finally, there should be a more fully featured library for integrating a Mog host into an AI agent like OpenClaw or OpenAI's Codex CLI.

moglang.org
160 75
smith-kyle 4 days ago

Show HN: Remotely use my guitar tuner

realtuner.online
246 59
Show HN: DenchClaw – Local CRM on Top of OpenClaw
kumar_abhirup 1 day ago

Show HN: DenchClaw – Local CRM on Top of OpenClaw

Hi everyone, I am Kumar, co-founder of Dench (https://denchclaw.com). We were part of YC S24, an agentic workflow company that previously worked with sales floors automating niche enterprise tasks such as outbound calling, legal intake, etc.

Building consumer / power-user software always gave me more joy than FDEing into an enterprise. Manually adding AI tools to a cloud harness for every small new thing brought me far less joy than building completely local software that is open source and has all the powers of OpenClaw (I can now talk to my CRM on Telegram!).

A week ago, we launched Ironclaw, an Open Source OpenClaw CRM Framework (https://x.com/garrytan/status/2023518514120937672?s=20) but people confused us with NearAI’s Ironclaw, so we changed our name to DenchClaw (https://denchclaw.com).

OpenClaw today feels like early React: the primitive is incredibly powerful, but the patterns are still forming, and everyone is piecing together their own way to actually use it. What made React explode was the emergence of frameworks like Gatsby and Next.js that turned raw capability into something opinionated, repeatable, and easy to adopt.

That is how we think about DenchClaw. We are trying to make it one of the clearest, most practical, and most complete ways to use OpenClaw in the real world.

Demo: https://www.youtube.com/watch?v=pfACTbc3Bh4#t=43

  npx denchclaw
I use DenchClaw daily for almost everything I do. It also works as a coding agent like Cursor - DenchClaw built DenchClaw. I am addicted now that I can ask it, “hey, in the companies table only show me the ones who have more than 5 employees,” and have it update live, rather than having to manually add a filter.

On Dench, everything (the table filters, views, column toggles, calendar/Gantt views, etc.) sits in a file system, so OpenClaw can work with it directly using Dench’s CRM skill.

The CRM is built on top of DuckDB, the smallest, most performant, and at the same time most feature-rich database we could find. Thank you, DuckDB team!

It creates a new OpenClaw profile called “dench” and opens a new OpenClaw Gateway… that means you can run all your usual openclaw commands by just prefixing every command with `openclaw --profile dench`. It will start your gateway in the 19001 port range. You will be able to access the DenchClaw frontend at localhost:3100. Once you open it in Safari, just add it to your Dock to use it as a PWA.

Think of it as Cursor for your Mac (it also works on Linux and Windows), based on OpenClaw. DenchClaw has a file-tree view, so you can use it as an elevated Finder tool to do anything on your Mac. I use it to create slides and do LinkedIn outreach using MY browser.

DenchClaw finds your Chrome profile and copies it fully into its own, so you won’t have to log in to all your websites again. DenchClaw sees what you see, does what you do. It’s an everything app that sits locally on your Mac.

Just ask it “hey import my notion”, “hey import everything from my hubspot”, and it will literally go into your browser, export all objects and documents and put it in its own workspace that you can use.

We would love you all to break it, stress test its CRM capabilities, how it streams subagents for lead enrichment, hook it into your Apollo, Gmail, Notion and everything there is. Looking forward to comments/feedback!

github.com
135 125
ethan_zhao about 8 hours ago

Show HN: A usage circuit breaker for Cloudflare Workers

I run 3mins.news (https://3mins.news), an AI news aggregator built entirely on Cloudflare Workers. The backend has 10+ cron triggers running every few minutes: RSS fetching, article clustering, LLM calls, email delivery.

The problem: Workers Paid Plan has hard monthly limits (10M requests, 1M KV writes, 1M queue ops, etc.). There's no built-in "pause when you hit the limit"; CF just starts billing overages. KV writes cost $5/M over the cap, so a retry-loop bug can get expensive fast.

AWS has Budget Alerts, but those are passive notifications; by the time you read the email, the damage is done. I wanted active, application-level self-protection.

So I built a circuit breaker that faces inward: instead of protecting against downstream failures (the Hystrix pattern), it monitors my own resource consumption and gracefully degrades before hitting the ceiling.

Key design decisions:

- Per-resource thresholds: Workers Requests ($0.30/M overage) only warns at 80%. KV Writes ($5/M overage) can trip the breaker at 90%. Not all resources are equally dangerous, so some are configured as warn-only (trip=null).

- Hysteresis: Trips at 90%, recovers at 85%. The 5% gap prevents oscillation; without it the system flaps between tripped and recovered every check cycle.

- Fail-safe on monitoring failure: If the CF usage API is down, maintain last known state rather than assuming "everything is fine." A monitoring outage shouldn't mask a usage spike.

- Alert dedup: Per-resource, per-month. Without it you'd get ~8,600 identical emails for the rest of the month once a resource hits 80%.
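
The hysteresis rule is small enough to sketch (an illustration of the state transition, not the author's actual code):

```python
def evaluate(usage_pct, tripped, trip_at=90.0, recover_at=85.0):
    """Return the new breaker state given current usage and prior state.

    Trips at trip_at, but only recovers once usage falls below recover_at;
    inside the gap the previous state is held, which prevents flapping.
    """
    if not tripped and usage_pct >= trip_at:
        return True
    if tripped and usage_pct < recover_at:
        return False
    return tripped
```

A warn-only resource in this scheme would simply never feed `evaluate` (trip=null), only the alerting path.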

Implementation: every 5 minutes, queries CF's GraphQL API (requests, CPU, KV, queues) + Observability Telemetry API (logs/traces) in parallel, evaluates 8 resource dimensions, caches state to KV. Between checks it's a single KV read — essentially free.

When tripped, all scheduled tasks are skipped. The cron trigger still fires (you can't stop that), but the first thing it does is check the breaker and bail out if tripped.

It's been running in production for two weeks. Caught a KV reads spike at 82% early in the month, got one warning email, investigated, fixed the root cause, never hit the trip threshold.

The pattern should apply to any metered serverless platform (Lambda, Vercel, Supabase) or any API with budget ceilings (OpenAI, Twilio). The core idea: treat your own resource budget as a health signal, just like you'd treat a downstream service's error rate.

Happy to share code details if there's interest.

Full writeup with implementation code and tests: https://yingjiezhao.com/en/articles/Usage-Circuit-Breaker-for-Cloudflare-Workers

16 7
oah about 7 hours ago

Show HN: Find Engineering Manager Jobs Efficiently

I built a site to find engineering manager jobs and make the manager job search less painful. This was created out of my own job search last year. I was using various job aggregator sites and spreadsheets and wanted a more efficient method.

The main value of the site is high-quality job listings and data, updated daily. It saves countless hours of sifting through irrelevant or stale job listings.

Give it a try and feel free to share any feedback.

rolebeaver.com
2 0
gbro3n 1 day ago

Show HN: VS Code Agent Kanban: Task Management for the AI-Assisted Developer

Agent Kanban has 4 main features:

- GitOps & team-friendly kanban board integration inside VS Code
- Structured plan / todo / implement via @kanban commands
- Leverages your existing agent harness rather than trying to bundle a built-in one
- .md task format provides a permanent (editable) source of truth, including considerations, decisions, and actions, that is resistant to context rot

appsoftware.com
93 48
Show HN: Get AI to write code that it can read
elijahlucian about 7 hours ago

Show HN: Get AI to write code that it can read

Not a new concept, but here's something Ana and I made for our cross-system, cross-agent management tool: our own proprietary web design DSL, heh.

Like any codebase, this will change over time, and if you have tried something similar, your ideas, PRs are welcome, let's build some cool shit!

github.com
2 0
Show HN: Smux – Terminal Multiplexer built for AI agents
garymiklos about 8 hours ago

Show HN: Smux – Terminal Multiplexer built for AI agents


github.com
5 0
asabil about 8 hours ago

Show HN: Local-first firmware analyzer using WebAssembly

Hi HN,

I just wanted to share what I have been working on for the past few months: a firmware analyzer for embedded Linux systems that helps uncover security issues and runs entirely in the browser.

This is a very early Alpha. It is going to be rough around the edges. But I think it provides quite a lot of value already.

So please go ahead and drop a firmware (only .tar rootfs archives for now) and try to break it :)

xray.boldwark.com
8 0
Show HN: Hopalong Attractor. An old classic with a new perspective in 3D
ratwolf 4 days ago

Show HN: Hopalong Attractor. An old classic with a new perspective in 3D

This article introduces the Hopalong attractor, a visually striking mathematical fractal, and provides a Python implementation for generating and visualizing it, including example code for readers to explore and experiment with.
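
The underlying iteration is Barry Martin's classic Hopalong map; a minimal generator for the orbit points (visualization omitted):

```python
import math

def hopalong(a, b, c, n, x0=0.0, y0=0.0):
    """Iterate Barry Martin's Hopalong map and return the orbit points.

    x' = y - sign(x) * sqrt(|b*x - c|)
    y' = a - x
    """
    pts = []
    x, y = x0, y0
    for _ in range(n):
        x, y = y - math.copysign(1.0, x) * math.sqrt(abs(b * x - c)), a - x
        pts.append((x, y))
    return pts
```

Scatter-plotting a few hundred thousand of these points (coloring by iteration index) produces the familiar Hopalong imagery.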

github.com
22 2
Show HN: AI agent that runs real browser workflows
heavymemory about 9 hours ago

Show HN: AI agent that runs real browser workflows

I’ve been experimenting with letting an AI agent execute full workflows in a browser.

In this demo I gave it my CV and asked it to find matching jobs. It scans my inbox, opens the listings, extracts the details and builds a Google Sheet automatically.

ghostd.io
4 8
gepheum 2 days ago

Show HN: Skir – like Protocol Buffer but better

Why I built Skir: https://medium.com/@gepheum/i-spent-15-years-with-protobuf-t...

Quick start: npx skir init

All the config lives in one YML file.

Website: https://skir.build

GitHub: https://github.com/gepheum/skir

Would love feedback especially from teams running mixed-language stacks.

skir.build
111 65
Show HN: I built a real-time OSINT dashboard pulling 15 live global feeds
vancecookcobxin 2 days ago

Show HN: I built a real-time OSINT dashboard pulling 15 live global feeds

Sup HN,

So I got tired of bouncing between Flightradar, MarineTraffic, and Twitter every time something kicked off globally, so I wrote a dashboard to aggregate it all locally. It’s called Shadowbroker.

I’ll admit I leaned way too hard into the "movie hacker" aesthetic for the UI, but the actual pipeline underneath is real. It pulls commercial/military ADS-B, the AIS WebSocket stream (about 25,000+ ships), N2YO satellite telemetry, and GDELT conflict data into a single MapLibre instance.

Getting this to run without melting my browser was the hardest part. I'm running this on a laptop with an i5 and an RTX 3050, and initially, dumping 30k+ moving GeoJSON features onto the map just crashed everything. I ended up having to write pretty aggressive viewport culling, debounce the state updates, and compress the FastAPI payloads by like 90% just to make it usable.
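
Viewport culling here just means filtering features to the visible bounding box before handing them to the renderer; a minimal sketch of the idea (not the repo's actual code, which would also handle the antimeridian and non-point geometries):

```python
def cull(features, viewport):
    """Keep only point features whose coordinates fall inside the viewport.

    viewport is (west, south, east, north) in lon/lat degrees.
    """
    w, s, e, n = viewport
    kept = []
    for f in features:
        lon, lat = f["geometry"]["coordinates"]
        if w <= lon <= e and s <= lat <= n:
            kept.append(f)
    return kept
```

Run on every debounced map-move event, this keeps the renderer's working set at hundreds of features instead of 30k+.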

My favorite part is the signal layer—it actually calculates live GPS jamming zones by aggregating the real-time navigation degradation (NAC-P) of commercial flights overhead.

It’s Next.js and Python. I threw a quick-start script in the releases if you just want to spin it up, but the repo is open if you want to dig into the backend.

Let me know if my MapLibre implementation is terrible, I'm always looking for ways to optimize the rendering.

github.com
304 120
Show HN: A mission-based game to help students apply math in real life
firepegasus11 about 11 hours ago

Show HN: A mission-based game to help students apply math in real life

I was the weakest at maths and still get nightmares about it. After years of trying, shipping, and playing, I realised that rather than turning each concept into a module, it's much better to provide a place where students can apply their learning and see the outcome.

With this, I present Owster Labs. This is a game-based learning module where you go through missions and find solutions. Unlike normal games, this is a very watered-down version of CFD. You have a UAV to work on: understand the situation, prepare your UAV to fly below mountains and evade enemy aircraft. Do your calculations, find the best path, and learn the art of trade-offs and how maths is used in real life.

The system will play the outcome based on your solution and provide feedback. The whole mechanism is focused on making it look great and less fearful for kids.

Kindly try it on your PC and let me know what you think.

owsterlabs.com
2 0
Show HN: I built a site where strangers leave kind voice notes for each other
thepaulthomson 2 days ago

Show HN: I built a site where strangers leave kind voice notes for each other

kindvoicenotes.com
54 30
Show HN: Hotwire Club – A Learning Community for Hotwire (Turbo/Stimulus/Rails)
julianrubisch about 12 hours ago

Show HN: Hotwire Club – A Learning Community for Hotwire (Turbo/Stimulus/Rails)

Hotwire Club publishes free technical tutorials on Hotwire, each linking to a solution on Patreon (around 2/3 free, the rest available on a paid plan at $5 per month). We just open‑sourced our tooling stack.

Recent articles include:

- Turbo Frames - Using External Forms
- Turbo Frames - Loading Spinner
- Faceted Search with Stimulus and Turbo Frames

hotwire.club
3 0
valryon about 5 hours ago

Show HN: We're making an inventory autobattler game and released a WebGL demo


pixelnest.itch.io
2 0
benja123 about 13 hours ago

Show HN: I wrote an application to help me practice speaking slower

I’ve spoken fast and a bit unclearly my entire life. It’s one of those small curses you’re probably just born with. Speech therapy didn’t help and practicing on my own never really stuck. The worst part is worrying people won’t understand me, especially when presenting something important. I often wear a headset when presenting that plays my voice back to me in real time so I can hear myself speak and try to slow down. It hasn’t stopped me from doing things. I have a great career and good friends. But I still end up repeating myself a lot.

I’ve never found a system that fits into a normal schedule with work, kids, and everything else. So yesterday I built a small tool to help me practice pacing my speech in short sessions whenever I have a few minutes. It gives you paced stories to read, with tongue twisters mixed in, plus a free practice mode where it calculates how fast you were speaking at the end.
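
The end-of-session speed calculation is essentially words per minute against a target band; a rough sketch of the idea (the 140 wpm target and 15% band are my assumptions, not the app's):

```python
def pace_feedback(transcript, seconds, target_wpm=140):
    """Return (rounded wpm, verdict) for a speech sample."""
    wpm = len(transcript.split()) / (seconds / 60)
    if wpm > target_wpm * 1.15:
        verdict = "too fast"
    elif wpm < target_wpm * 0.85:
        verdict = "too slow"
    else:
        verdict = "on pace"
    return round(wpm), verdict
```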

It runs fully client-side, uses Google’s speech API, and is open source: https://github.com/interactivecats/steady

Note: I just saw there is a bug on mobile where it double-counts; trying to fix it (on desktop, where I use it, it works fine).

steady.cates.fm
2 0
steeleduncan 2 days ago

Show HN: Eyot, A programming language where the GPU is just another thread


cowleyforniastudios.com
78 18