Color.io Is Going Offline
Color.io will continue running until December 31, 2025. After that date, the web application and all online services will go offline permanently.
Ask HN: Hearing aid wearers, what's hot?
One of my Phonak Audeo 90s (RIC) died the other day after 5 years, and I'm shopping for a new pair. What's your go-to hearing aid currently, if you've upgraded recently or have been thinking of doing so?
Moderate loss, have worn them for many years, enjoy listening to music and nature, but also need help in meetings and noisy environments.
Not worried about cost; I want to get one more good deal out of my work insurance before I retire.
Ask HN: Should account creation/origin country be displayed on HN profiles?
Would it be beneficial for a platform to display the country of account origin on each user’s profile? I’m curious how the HN community thinks about this from angles like privacy, moderation, transparency, anti-abuse, and whether it meaningfully improves discussion quality. Are there strong reasons for or against showing this kind of metadata publicly?
Should the R ecosystem be a choice for longer-term projects?
There are R packages for scientific papers that were developed around 2014. These packages still work, with their original base code (including the C code), in the newest R version and on CRAN. So I wonder: for much longer-term projects, is R a better choice than Python?
Ask HN: Scheduling stateful nodes when MMAP makes memory accounting a lie
We’re hitting a classic distributed systems wall and I’m looking for war stories or "least worst" practices.
The Context: We maintain a distributed stateful engine (think search/analytics). The architecture is standard: a Control Plane (Coordinator) assigns data segments to Worker Nodes. The workload involves heavy use of mmap and lazy loading for large datasets.
The Incident: We had a cascading failure where the Coordinator got stuck in a loop, DDoS-ing a specific node.
The Signal: Coordinator sees Node A has significantly fewer rows (logical count) than the cluster average. It flags Node A as "underutilized."
The Action: Coordinator attempts to rebalance/load new segments onto Node A.
The Reality: Node A is actually sitting at 197GB RAM usage (near OOM). The data on it happens to be extremely wide (fat rows, huge blobs), so its logical row count is low, but physical footprint is massive.
The Loop: Node A rejects the load (or times out). The Coordinator ignores the backpressure, sees the low row count again, and retries immediately.
The Core Problem: We are trying to write a "God Equation" for our load balancer. We started with row_count, which failed. We looked at disk usage, but that doesn't correlate with RAM because of lazy loading.
Now we are staring at mmap. Because the OS manages the page cache, the application-level RSS is noisy and doesn't strictly reflect "required" memory vs "reclaimable" cache.
The Question: Attempting to enumerate every resource variable (CPU, IOPS, RSS, Disk, logical count) into a single scoring function feels like an NP-hard trap.
How do you handle placement in systems where memory usage is opaque/dynamic?
Dumb Coordinator, Smart Nodes: Should we just let the Coordinator blind-fire based on disk space, and rely 100% on the Node to return a hard 429 Too Many Requests based on local pressure? (A sketch of this follows the list.)
Cost Estimation: Do we try to build a synthetic "cost model" per segment (e.g., predicted memory footprint) and schedule based on credits, ignoring actual OS metrics?
Control Plane Decoupling: Separate storage balancing (disk) from query balancing (mem)?
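To make the first option concrete, here's a rough sketch of the node-side check (not our actual code, just an illustration): it leans on Linux PSI (pressure stall information) instead of RSS, since PSI measures time tasks actually stall on memory and so isn't inflated by reclaimable mmap page cache. Requires Linux 4.20+ with PSI enabled; the threshold and function names are made up.

```
# Node-side admission check for "dumb coordinator, smart nodes".
# Reads /proc/pressure/memory (PSI) rather than RSS, so reclaimable
# page cache from mmap'd segments doesn't distort the signal.

def memory_pressure_avg10() -> float:
    """Return the 'some' avg10 figure: % of the last 10s in which at
    least one task stalled waiting on memory."""
    with open("/proc/pressure/memory") as f:
        for line in f:
            if line.startswith("some"):
                # Line format: some avg10=1.23 avg60=0.45 avg300=0.10 total=...
                fields = dict(kv.split("=") for kv in line.split()[1:])
                return float(fields["avg10"])
    return 0.0

PRESSURE_LIMIT = 10.0  # illustrative; tune per workload

def handle_assign_segment(load_segment) -> int:
    """Accept a new segment, or return 429 so the coordinator backs off."""
    if memory_pressure_avg10() > PRESSURE_LIMIT:
        return 429  # hard backpressure; the coordinator must honor it, not hot-retry
    load_segment()
    return 200
```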
Feels like we are reinventing the wheel. References to papers or similar architecture post-mortems appreciated.
Ask HN: Good resources to learn financial systems engineering?
I work mainly in energy market communications and systems that facilitate energy trading, balancing and such. Currently most parties there take minutes to process messages and I think there could be a lot to learn from financial systems engineering. Any good resources you can recommend?
Ask HN: What did Stripe change (Value Add)?
What was the revolutionary thing Stripe enabled that changed payments & commerce? From what I understand, people could already make payments via credit cards & PayPal.
What was the value-add from Stripe that differentiated it from the solutions/providers that came before?
Ask HN: Opinions on facial recognition at airports?
Both the EU and the US have introduced face scanning at airports to "increase security". EU rules are currently stricter, and US rules allow some opt-outs for people who are uncomfortable with it. But it's only a matter of time before it becomes de facto mandatory for everyone. They claim that data is not retained or shared with other parties. Yeah, right, I totally believe that... Can something be done about this? I'm convinced that very few customers think face scanning is an improvement.
Google attacking human thought with Gemini in Google Keep
The blue box question that has been added to the blank-slate note-taking app is perhaps the most insidious short-circuiting of the natural human thought process I've ever seen in a note-taking app.
Why would I use an app that lets me track my thoughts when it actively tries to derail my thought process at the most critical moment (the blank-slate moment)?
Ask HN: What work problems would your company pay to solve?
I’m researching ideas for a new B2B product and want to understand real bottlenecks teams face.
What problems, inefficiencies, or recurring frustrations do you or your team deal with at work—where, if a solid solution existed, your company would actually pay for it?
Examples could include:
manual workflows
data or reporting pain points
communication gaps
compliance or documentation hassles
tools your team keeps hacking together internally
anything expensive, slow, or annoying
Would love to hear your role/industry (optional) and the specific problem you face.
Ask HN: Have major security breaches been less common lately?
A few years ago, it felt like we had another news story about a major security breach every other day. (I'm exaggerating of course, but the stories were a regular occurrence.)
It occurred to me today that I couldn't remember the last time I'd seen a story like this.
Have news stories about major security breaches been less common during the (approximately) past two years compared to the two years before that?
I don't know how I would go about verifying this--I'd have to find a way to classify a "big news story" and "major security breach" and then go back through the news--but I'm wondering if others have noticed it.
If it's not just me, the next question would be why. Have actual security breaches gone down, or just the reporting on them?
Tell HN: Happy Thanksgiving
Ask HN: Is Techmeme getting paid to boost certain articles?
https://imgur.com/a/EcBT3Cc
I noticed that Techmeme is currently boosting some guy's podcast to the very top of their website as a headline. I've also noticed the website overall becoming deafeningly pro-AI in the past six months, often using the same tactic as in my screenshot to boost pro-AI content to the headline spot, or at least "above the fold".
Are they taking kickbacks somehow? I don't see how a random podcast deserves headline treatment like this unless they were paid or got some kind of kickback to promote it. And even then, it's not labeled as promoted content. The whole thing smells fishy to me.
Ask HN: Hetzner asking for passport for new account? Just me, or everyone?
Just made a Hetzner account, activated 2FA, the usual.
Then I go to buy a storage box, and I get this:
> Our automated system check indicates that your account information has an increased level of risk. Please choose one of the following verification methods:
And you can pay 20 EUR up front by PayPal, or hand over your passport (fat chance!)
Is this genuine, or does everyone get this and it's a fake reason?
(I've signed up to pay by bank transfer, so I'm also wondering why they don't ask me for pre-payment by bank transfer. As it is, there's no way on God's clean earth they're getting a passport, and I'm not on PayPal, so I'll try to use a friend's account. It seems my second try to board the Hetzner train has bounced; the first time I left almost immediately, when I saw spaces weren't permitted in passwords.)
Ask HN: How does one move from Big Tech to more fulfilling places?
I have moved to smaller companies but still found the talent lacking, and people make life hard for others, especially in the Bay Area. I just want to be mission-driven and code, but at most places the product, founder, or mission, and hence the people, seem flawed, without good incentives. Most simply chase promos, and managers often take advantage of this to create a bad environment. Ignoring this behavior for too long has led me to burnout, with no recognition for doing good work.
At this point, I'm willing to take a pay cut for a more fulfilling place, and a non-profit or OSS project seems like a good fit, since the people working there are likely aligned with the mission. OSS has better odds of being high quality and technical. But I have no idea how to break in.
Ask HN: What is your monitor setup?
I'm in the market for a new monitor. My setup is for both gaming and working: I'm using a Gigabyte 27" 1440p/144Hz monitor and an old Dell 23" monitor. While the refresh rate on the Gigabyte is great, it leaves a lot to be desired in terms of picture quality; the old Dell looks even better than the newish Gigabyte.
I wonder what everyone else is using these days? I could take some inspiration from your posts.
A logging loop in GKE cost me $1,300 in 3 days – 9.2x my actual infrastructure
Last month, a single container in my GKE cluster (São Paulo region) entered an error loop, writing to stdout at ~2k logs/second. I discovered the hard way that GKE's default behavior is to ingest 100% of this into Cloud Logging with no rate limiting. My bill jumped nearly 1000% before alerts caught it.
Infrastructure (Compute): ~$140 (R$821 BRL)
Cloud Logging: ~$1,300 (R$7,554 BRL)
Ratio: Logging cost 9.2x the actual servers.
https://imgur.com/jGrxnkh
I fixed the loop and paused the `_Default` sink immediately.
I opened a billing ticket requesting a "one-time courtesy adjustment" for a runaway resource—standard practice for first-time anomalies on AWS/Azure.
I have been rejected twice.
The latest response: "The team has declined the adjustment request due to our internal policies."
If you run GKE, the `_Default` sink in Log Router captures all container stdout/stderr.
There is NO DEFAULT CAP on ingestion volume, which is absurd!
A simple `while true; do echo "error"; done` can bankrupt a small project.
Go to Logging -> Log Router. Edit _Default sink.
Add an exclusion filter: resource.type="k8s_container" severity=INFO (or exclude specific namespaces).
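If you'd rather script it, something like this should add the same exclusion via the CLI (assuming a gcloud version that supports `--add-exclusion`; the exclusion name is arbitrary):

```
gcloud logging sinks update _Default \
  --add-exclusion=name=drop-k8s-info,filter='resource.type="k8s_container" severity=INFO'
```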
Has anyone successfully escalated a billing dispute past Tier 1 support recently?
It seems their policy is now to enforce full payment even on obvious runaway/accidental usage, which is absurd since it's LOGS! TEXT!
Tell HN: Wanted to give dang appreciation
Reddit has drifted over time, but HN has kept a high signal-to-noise ratio.
Just wanted to thank dang and the moderation team for making this community what it is.
Tell HN: Cursor charged me for 19 subscriptions, won't refund
I got a fraud warning from my bank a few days ago at 7:04 PM. When I logged into my bank I found 19 pending Cursor subscription charges.
I called the Cursor billing phone number I found on my real Cursor account. It was outside of working hours, so I got an automated message.
I promptly fired off an email at 7:16 PM making it clear I did not authorize these purchases.
After a few days of painfully slow email responses, the conclusion I am getting from them is "the compute resources are fully consumed and cannot be returned or refunded".
Anyone have advice on how to proceed?
Edit: I plan to file a dispute with my bank.
Also curious if others have experienced something similar, because this is clearly a stock "we basically won't ever refund money" response.
Don't obsess with security and privacy unless they are your core business
Making a simple sandwich from ingredients is a full-time job that takes roughly 6 months. You raise chickens, fetch sea water, make bread from scratch, and so on. Unless you sell a lot of the sandwiches you make from scratch, you will bleed a lot of money and time.
Only God can make a sandwich instantly. If you try to make a simple pencil from ingredients, it will probably take more than 6 months.
Now consider security and privacy. Just constructing what seems to be a reasonably private and robust Linux computer took at least a year of full-time effort. It is genuinely more difficult than making a simple sandwich from ingredients, and making a simple sandwich "from ingredients" is a full-time business by itself. So-called system crafting is a full-time business that doesn't pay.
The cost of constructing a private Linux computer with your "personal" labor is your business, your job, your health, your relationships, and everything else in your life. The cost of privacy is extremely high. You need to be okay with rough edges in your computing environment.
If you force yourself to make sandwiches and pencils from ingredients, make your own furniture, build a house, grow food, run an e-commerce store, construct a private Linux computer, and so on, then you will not be good at any one thing, and you won't be paid much. You are only paid as much as your best expertise. Only specialization can make you rich. If you scatter your energy across multiple things, including security and privacy, you will remain poor. Even Linus Torvalds, a rich computer programmer, avoids fiddling with kernel options on his own Linux computers. He just uses Fedora without modification. Linus Torvalds doesn't care that his AMD CPU has a hardware backdoor, and he certainly can't be bothered to "manually" construct a backdoor-free router that blocks AMD PSP and Intel ME. But he may "buy" computers with Intel ME disabled by others.
If you want to become rich, you should keep a laser focus on your core business and sacrifice other things, like excellent privacy.
Now you know what it means to sacrifice. Sacrifice may even mean using a Mac Pro instead of a personally hardened Linux desktop. The creator of Linux can't be bothered to "manually" harden his own Linux computers.
If you want to be rich and have a good life, you should be ready to buy everything outside your core business. Buying things takes infinitely less time than building things from scratch.
Spending time on things outside your core business is basically financial suicide.
NeuroCode – A Structural Neural IR for Codebases
I’ve built NeuroCode, a Python engine that builds a structural intermediate representation (IR) of codebases — including call graphs, module dependencies, and control flow — and stores it in a neural-ready format designed for LLMs.
Most tools treat code as text. NeuroCode treats it as structure. It gives you a CLI (and library) to:
Build the IR with neurocode ir . and store it as .neurocode/ir.toon
Explain files using call/import graphs: neurocode explain path/to/file.py
Run structural checks and generate LLM-ready patch plans (coming soon)
The goal is to bridge static analysis and AI reasoning. You can plug NeuroCode into agents, editors, or pipelines — or use it standalone to get structure-aware insights into your codebase.
No runtime deps, tested with Python 3.10–3.12. Still early (v0.3.0), feedback and contributions welcome.
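Putting the pieces together, a minimal session looks like this (just the two subcommands listed above):

```
$ neurocode ir .                       # writes the IR to .neurocode/ir.toon
$ neurocode explain path/to/file.py    # explain a file via call/import graphs
```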
GitHub: https://github.com/gabrielekarra/neurocode
Tell HN: Google increased existing finetuned model latency by 5x
Since 5 days ago, the latency of our Finetuned 2.5 Flash models has suddenly jumped by 5x. For those less familiar, such finetuned models are often used to get close to the performance of a big model at one specific task with much less latency and cost. This means they're usually used for realtime, production use cases that see a lot of use and where you want to respond to the user quickly. Otherwise, finetuning generally isn't worth it. Many spend a few thousand dollars (at a minimum) on finetuning a model for one such task.
Five days ago, Google released Nano Banana Pro (Gemini 3.0 Image Preview) to the world. And since five days ago, the latency of our existing finetuned models has suddenly quintupled. We've talked with other startups who also make use of finetuned 2.5 Flash models, and they're seeing the exact same, even those in different regions. Obviously this has a big impact on all of our products.
From Google's side, nothing but silence, and that's with paid support. The reply to the initial support ticket was a request for basic information that had already been provided in that ticket or was trivially obvious. Since then, it's been more than 48 hours of nothing.
Of course the timing could be pure coincidence - though we've never seen any such latency instability before - but we can all see what's most likely here: Nano Banana Pro and Gemini 3 Preview are consuming a huge amount of compute, and Google is simply sacrificing finetuned model throughput for them. It's impossible to take them seriously for business use after this; who knows what they'll do next time. For all their faults, OpenAI has been a bastion of stability, despite being the most B2C-focused of all the frontier model providers. Google with Vertex claims to be all about enterprise, and then breaks its business customers' products to get consumers their Ghibli images 1% faster. They've surely gotten plenty of tickets about this, and given Google's engineering, they must have automated monitoring that catches such a huge latency increase immediately. Temporary outages are understandable and happen everywhere, see AWS and Cloudflare recently, but 5+ days - if they even fix it - of 5x latency is effectively a 5+ day outage of the service.
I'm posting this mostly as a warning to other startups here to not rely on Google Vertex for user-facing model needs going forward.
Ask HN: Is America in Recession?
Official numbers say the U.S. isn’t in recession—does real life feel different?
Ask HN: What tools do you pay for today that feel overpriced or frustrating?
Hello everyone,
I’d love to hear about:
1. Tools you pay for that feel overpriced or frustrating (especially if you’d replace them immediately if something better existed), and
2. Anything that routinely costs you time, money, or attention (and how much money and time it costs you).
I’m most interested in problems that are painful enough that you’d gladly pay to fix them.
If you’re open to sharing, it'd be nice to know:
1. the exact problem
2. how you solve it now
3. the approximate budget or cost
Thank you. The more concrete and specific, the better.
Ask HN: How do you balance creativity, love for the craft, and money?
Considering that with AI an experienced engineer can build anything much faster, we had a discussion around the "single-person unicorn". How are you balancing your love for the craft and creativity? I see copycats of copycats generating decent $ per month, and sometimes I wonder: should I do the same and leave my job to pursue the unicorn dream? Every 2 years there is a layoff, AI is taking over... not sure if this makes sense or if it's just me, bored on a weekend.
GhostBin: A lightweight pastebin built with Go and Redis
Hi HN,
I built GhostBin, a lightweight pastebin designed to replace the simplicity and speed that services like ix.io used to offer. ix.io has been down for a long time, and most existing pastebins are either bloated, slow, or not CLI-friendly. I needed something minimal that “just works,” especially for piping logs and command outputs during debugging or writing content. So I made my own.
GhostBin focuses on:
Simplicity: Clean interface and a straightforward API.
Performance: Go + Redis for fast reads/writes.
CLI-first workflow: curl and shell pipelines work out of the box.
Privacy & control: Self-hostable with Docker; no vendor lock-in.
Burn-after-read + expiration: Useful for ephemeral snippets.
Optional deletion secret: Allows secure deletion via API.
Demo: https://www.youtube.com/shorts/RINJI_Q5048
Source: https://github.com/0x30c4/GhostBin
CLI script: https://raw.githubusercontent.com/0x30c4/GhostBin/main/gbin.sh
```
$ curl -F "f=@file.txt" gbin.me
```
```
dmesg | curl -F "f=@-" gbin.me
```
Ask HN: Photos corrupted on Google Pixel phones over time?
I have had this problem for years now: Scrolling through photos on my phone that are maybe a year old or older I notice grey squares here and there in my Google Photos app (used without an account). The files are properly corrupted - can't view them on other devices either. I checked some out in a hex editor and sure enough there is a good chunk of null bytes in the beginning. Sometimes it's just the first byte and changing it from 00 to FF fixes the image. But oftentimes it's a whole lot more 00s to the point where I don't know how to recover the image.
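In case anyone wants to scan their own library for this, here is roughly that check as a script (a sketch; the DCIM path is illustrative, and intact JPEGs should start with the SOI marker FF D8 FF):

```
# Flag JPEGs whose leading bytes are nulls instead of the JPEG SOI
# marker (FF D8 FF), matching the corruption described above.
import pathlib

for p in pathlib.Path("DCIM/Camera").rglob("*.jpg"):
    with p.open("rb") as f:
        head = f.read(16)
    if not head.startswith(b"\xff\xd8\xff"):
        zeros = len(head) - len(head.lstrip(b"\x00"))
        print(f"{p}: header {head[:4].hex()} ({zeros} leading null bytes)")
```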
I've been using only Pixel a-series phones for the last couple of years (3a, 6a, 9a) and have had this happen on all of them. It surely can't be bad storage, can it? I feel like there is a bug in some part of Google's Android OS.
Have any of you encountered this issue? I can't believe I'm alone in this.
Malicious Bun Script Found in NPM Package Bumps
*`package.json` includes a `preinstall` script running `node setup_bun.js`, along with `setup_bun.js` and `bun_environment.js` files that appear to contain the malware.*
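For reference, a preinstall hook like the one described would sit in the package's `package.json` roughly like this (reconstructed from the description above, not copied from the package):

```
{
  "scripts": {
    "preinstall": "node setup_bun.js"
  }
}
```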
Ask HN: Working in a language that isn't your native one. How hard was it?
I'm currently interviewing for roles in another language, and it's so difficult. I'm wondering if this is universal? I'm struggling to even imagine the daily work at a company: handling meetings, understanding requirements, standing up for my solutions... I sound like a child. Has anyone lived through this? How?
Boring Laser Eyes Simulator: Add laser beams to your eyes with your webcam
https://winterx.github.io/laser-eyes-simulator/
Needed a break from my main project, so I threw together this Laser Eyes Simulator. It's a silly little thing that adds laser beams to your eyes using your webcam. Downloadable images. Hope you enjoy!
Gemini 3 Prompt:
- Use the computer's front-facing camera
- Use `mediapipe` lib to capture facial landmarks
- Use `threejs` to apply a LASER EYE effect to the face captured by the camera, based on the real-time 3D landmark information provided by `mediapipe`