What web businesses will continue to make money post AI?
If you can code practically anything with Claude Code (or equivalents), what types of web businesses will continue to stay viable or profitable?
Source: AI is Killing Saas - https://nmn.gl/blog/ai-killing-b2b-saas
Tell HN: Ralph Giles has died (Xiph.org | Rust@Mozilla | Ghostscript)
It's with much sadness that we announce the passing of our friend and colleague Ralph Giles, or rillian as he was known on IRC.
Ralph began contributing to Xiph.org in 2000 and became a core Ghostscript developer in 2001[1]. Ralph made many contributions to the royalty-free media ecosystem, whether it was as a project lead on Theora, serving as release manager for multiple Xiph libraries or maintaining Xiph infrastructure that has been used across the industry by codec engineers and researchers[2]. He was also the first to ship Rust code in Firefox[3] during his time at Mozilla, which was a major milestone for both the language and Firefox itself.
Ralph was a great contributor, a kind colleague and will be greatly missed.
Official Announcement: https://www.linkedin.com/feed/update/urn:li:activity:7427730...
[1]: http://www.wizards-of-os.org/archiv/sprecher/g_h/ralph_giles...
[2]: https://media.xiph.org/
[3]: https://medium.com/mozilla-tech/deploying-rust-in-a-large-co...
Ask HN: Are there examples of 3D printing data onto physical surfaces?
I had a thought about encoding a very small amount of data onto some kind of "disk" using 3D printing as the mechanism for filament-based storage. The assumption was that using common 3D printer measurement tools (like for bed-leveling) would provide a way to read back whatever data was encoded onto the surface.
Since that seems like a pretty well-known concept, crudely applied to a domain I haven't seen it in before, but one that is already large and growing fast, I'm assuming that others have thought of this? I was hoping maybe someone had implemented something like it? And then, obviously, if that proof of concept exists, I'd wonder about some kind of advanced version that used specialized equipment for the reading (and possibly the writing/printing).
In any case, I'm just curious. I was thinking about long term (century +) archival storage, or encryption keys only stored as the print with no digital copies. Stuff that wouldn't need tons of storage, but would be crucial to maintain statically. It probably wouldn't be useful for that, which is why I assume I'm not finding much in my searches for it. But I was just wondering if anyone knew about it, in case there is stuff it's good for.
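For scale, a toy encoding along these lines is easy to sketch: map each 4-bit nibble of the data to a pad height that is a multiple of the layer height, and quantize probed heights back into nibbles on read. The 0.2 mm layer height is an assumption, and real probe noise, first-layer squish, and warping would need error correction on top:

```python
# Toy sketch: encode bytes as a grid of surface heights, one nibble per cell.
# Heights are multiples of an assumed 0.2 mm layer height; a bed-leveling
# probe with sub-0.1 mm repeatability could plausibly read them back.
LAYER_MM = 0.2

def encode(data: bytes) -> list[float]:
    """Map each 4-bit nibble to a height of 0..15 layers."""
    heights = []
    for byte in data:
        heights.append((byte >> 4) * LAYER_MM)    # high nibble
        heights.append((byte & 0x0F) * LAYER_MM)  # low nibble
    return heights

def decode(heights: list[float]) -> bytes:
    """Quantize probed heights back to nibbles, then pair nibbles into bytes."""
    nibbles = [round(h / LAYER_MM) for h in heights]
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))
```

At one cell per square millimeter, a 200 x 200 mm bed holds roughly 20 KB before error correction: not much, but plenty for an encryption key.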
Ask HN: Info on the 1982 Apple 2 text game Abuse?
Does anyone have a source of info on the 1982 Apple 2 video game Abuse? The web/ChatGPT seem to think it never existed. Here is an eBay link to an old disk: https://www.ebay.com/itm/306072842554
Ask HN: Are you using an agent orchestrator to write code?
In a recent interview with The Pragmatic Engineer, Steve Yegge said he feels "sorry for people" who merely "use Cursor, ask it questions sometimes, review its code really carefully, and then check it in."
Instead, he recommends engineers integrate LLMs into their workflow more and more, until they are managing multiple agents at one time. The final level in his AI Coding chart reads: "Level 8: you build your own orchestrator to coordinate more agents."
At my work, this wouldn't fly; we're still doing things the sorry way. Are you using orchestrators to manage multiple agents at work? I'm particularly interested in non-greenfield applications and how that's changed your SDLC.
Ask HN: What would you recommend a vibe coder learn about how all this works?
I'm a writer who started building with AI coding tools about 8 months ago. No programming background. It's been one of the most fun things I've ever done.
I want to understand more about what's actually happening. What are the big concepts that, once you get them, make everything click in a more interesting way? The stuff that made you go "oh, THAT'S what's going on."
Ask HN: What explains the recent surge in LLM coding capabilities?
It seems like we are in the midst of another AI hype cycle. Many people are calling the current coding models an "inflection point", where now the capabilities are so high that future model growth will be explosive. I have heard serious people, like economics writer Noah Smith, make this argument [0].
But it's not just the commentariat. I have seen very serious people in software engineering and tech talk about the ways in which their coding habits have changed drastically.
Benchmarks [1] alone don't seem to capture everything, although there have been jumps in the agentic sections, so maybe they actually do.
My question is: what explains these big jumps in capabilities that many serious people seem to be noticing all at once? Is it simply that we have thrown enough data and compute at the models, or instead, are labs perhaps fine-tuning models to get really good at tool calls, which leads to this new, surprising behavior?
When I explain agents to people, I usually walk them through a manual task one might go through when debugging code. You copy some code into ChatGPT, it asks you for more context, you copy some more code in, it suggests an edit, you edit and run, there is an error, so you paste that in, and so on. An agent is just an LLM in that loop which can use tools to do those things automatically. It would not be shocking to me if we took a weaker model like Claude Opus 4.0, made it 10x better at tool calls, and ended up with a much stronger and more impressive model. But is that all that is happening, or am I missing something big?
[0] https://substack.com/@noahpinion/p-187818379
[1] https://www.anthropic.com/news/claude-opus-4-6
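The loop described above is small enough to write down. Here is a minimal sketch with a stub standing in for the model; the `llm` callable and the action dictionary format are placeholders I made up, not any particular vendor's API:

```python
# Minimal agent loop: observe -> ask model -> run tool -> feed result back.
# `llm` takes the transcript and returns either a tool call or a final answer;
# `tools` maps tool names to functions (run code, read a file, etc.).
def run_agent(llm, tools, task, max_steps=10):
    transcript = [("user", task)]
    for _ in range(max_steps):
        action = llm(transcript)            # model picks a tool call or finishes
        if action["type"] == "final":
            return action["answer"]
        result = tools[action["tool"]](action["args"])  # e.g. execute the edit
        transcript.append(("tool", result))             # error output goes back in
    return None  # gave up within the step budget
```

Everything interesting lives in the model's choice of next action; the loop itself is trivial, which is part of why better tool-calling alone could plausibly explain a large capability jump.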
Ask HN: Want to move to use a "dumb" phone. How to make the switch?
Hi
I’m curious if anyone here has successfully moved to using a dumb phone. By dumb phone, I mean literally texting/calling only. No internet, etc.
Immediate issues I see are not being able to use authenticator apps, not being able to use maps, etc.
Has anyone made the switch and how to best go about it?
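On the authenticator issue: TOTP codes (RFC 6238) are just an HMAC of the current 30-second interval, so any device with a clock, like a laptop, can generate them instead of a phone. A minimal stdlib sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 of the current time step, dynamically truncated."""
    pad = "=" * (-len(secret_b32) % 8)                  # base32 wants length % 8 == 0
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                             # dynamic truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

This matches the SHA-1 test vectors in RFC 6238 Appendix B, so sites that accept standard authenticator-app codes should accept its output given your enrollment secret.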
Ask HN: Did YouTube change how it handles uBlock?
Just today I started hitting "This content isn't available. Try again later" pages whenever I visit a YouTube video page in my Helium browser with uBlock Origin. Some new developments in the ad wars perhaps?
Ask HN: Stripe is asking for bank statements to check financial health
Isn’t Stripe’s job simply to process payments? What kind of liability would Stripe need to account for when a merchant processes $1 on its behalf?
It Isn't the Tool, but the Hands – A Response to "Something Big Is Happening"
Matt Shumer's piece argues 50% of entry-level white-collar jobs disappear in 1-5 years. While the pace of change for software engineers is real, the "just prompt it" narrative is misleading. If the prompt is what matters, then knowing what to build and deeply understanding the problem matters more, not less. Building simple software may become commoditized, but building complex systems and understanding how they work becomes more valuable. We also need to stop conflating building software with building AI systems — the latter isn't getting commoditized. Finally, if agents can move fast and independently, the fulcrum of value becomes how effectively the operator manages them. We're nowhere near assigning broad goals and letting systems pursue them autonomously for months. As long as the end user is human, taste, judgment, and oversight remain crucial.
Like everything before AI: it isn't the tool, but the hands.
Original article: https://www.linkedin.com/pulse/something-big-happening-matt-shumer-so5he
Ask HN: How do you audit LLM code in programming languages you don't know?
If you've used Claude Code or similar tools for "vibe-coding" in programming languages you aren't fluent in or that you don't know, how do you validate the performance or architectural quality of the outcome?
I'm curious how people are catching subtle bugs or technical debt when the LLM produces something that works but might be unoptimized.
ClawdReview – OpenReview for AI Agents
Agents can review papers on arXiv, and humans can like or dislike the agents' reviews. There are also ranking lists of the most popular papers and agents. Please visit: https://clawdreview.ai/
25 years after the Agile Manifesto, did the industry help or hurt software development?
This month is the 25th anniversary of the Agile Manifesto. I've been building banking systems for 25 years, which means I watched the entire arc from the beginning. Wanted to get HN's take on something I noticed today.
My LinkedIn feed was flooded with anniversary posts from people who built the Agile industry — co-founders of the Scrum Alliance, the CEO of PMI, certified coaches, SAFe practitioners. The pattern across almost every post was striking: they acknowledge what went wrong, then continue doing the thing they just criticized. Direct quotes from today's posts by certified Agile professionals:
- "We turned agile into a certification ladder"
- "Ceremony without intent"
- "Packaged mediocrity"
One person called it "the unforgivable sin: taking a manifesto that is 68 words long and turning it into a multibillion-dollar certification industry"
These aren't critics. These are people with CSM, SAFe SPC, PMP, and RTE after their names. They sell certifications and coaching for a living. The self-awareness is there. The business model hasn't changed. Some numbers that stood out to me:
- The original manifesto: 68 words, 4 values
- Scrum Alliance certifications issued: 1.4 million+
- Average CSM certification cost: ~$1,000
- SAFe full framework documentation: 800+ pages
- PMI stat shared today: 85% of executives say agility is critical, only 32% satisfied with implementation
That last one is interesting. A 53-point gap between "we need this" and "this works." PMI's response: they're releasing a new Manifesto for Enterprise Agility on March 3rd. More framework to solve a framework problem. The CEO of PMI actually replied to a comment I left questioning this approach. His response was that the new manifesto "is NOT about software development" — even though his own post opened by celebrating the Agile Software Development manifesto. In fairness, some counter-arguments I want to acknowledge:
- Agile genuinely helped some organizations move away from rigid waterfall. The pre-Agile world was often worse. The comparison shouldn't be Agile vs. perfection; it should be Agile vs. what came before.
- Certifications, flawed as the model is, did spread ideas that many teams benefited from. The 2-day CSM course is shallow, but it introduced concepts that some people built on meaningfully.
- The manifesto authors didn't create the certification industry. Scrum predates the manifesto, and the commercial ecosystem grew around it somewhat independently.
- Some Scrum Masters and Agile coaches are genuinely good at their jobs. The criticism is about the systemic incentives, not every individual.
That said, I keep coming back to a structural question: when the organizations that define, certify, and sell a framework also measure its adoption, is there a realistic path to honest assessment of whether it works? Curious about HN's experience. Has anyone here worked in an organization where Agile (specifically the framework, not just "being adaptive") produced meaningfully better outcomes than what came before? What made it work vs. the common failure modes?
Tell HN: Moving My Blog to IPv6 Only Internet
Hello HN,
Two of my blog posts[1][2] did quite well here on HN, so I just wanted you all to know that moving forward these posts will only be accessible on the IPv6 Internet.
But why? It's a form of protest against people who refuse to give IPv6 a fair chance. I have read all sorts of reasons why IPv6 is a bad idea, including the most idiotic take where people disable IPv6 because all devices get exposed to the Internet. They don't care to learn about firewalls and keep using IPv4 out of laziness.
The second reason is that I don't have to pay for a static IPv4 address. The cost of IPv4 may look trivial to people who have a steady flow of income, but sadly that is not the case for me. I like hosting on a Raspberry Pi, and with IPv6 I can do just that.
Another reason is that, in India, we have layers and layers of CGNAT. It degrades the Internet experience for us. With IPv6 that is not the case.
I'm fully aware that this move will most likely hurt my traffic, but I like IPv6 and its convenience enough to stick with my decision. It's my humble request to you all to give IPv6 a fair chance by enabling it across your networks and web services. When you choose only IPv4, you are not thinking about the part of the world where 1.5 billion people are trying to access your service. It's high time; we should have moved to IPv6 long ago. It's the only way forward.
[1]: https://blog.rohanrd.xyz/posts/why_self_host/
https://news.ycombinator.com/item?id=30781536
[2]: https://blog.rohanrd.xyz/posts/every-phone-should-have-web-server/
https://news.ycombinator.com/item?id=37086455
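If you want to check whether your own services are part of the problem, here is a quick stdlib test for a published AAAA record plus an actual IPv6 connection (the host and port arguments are placeholders to fill in):

```python
# Check whether a host is reachable over IPv6: resolve AAAA records only,
# then try to open a TCP connection to each returned address.
import socket

def ipv6_reachable(host, port=443, timeout=5.0):
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no AAAA record (or name does not resolve at all)
    for *_, sockaddr in infos:
        try:
            with socket.create_connection(sockaddr[:2], timeout=timeout):
                return True  # at least one IPv6 address accepted the connection
        except OSError:
            continue
    return False
```

Running this against your own domains from an IPv6-enabled network is a fast way to audit what you actually expose to the IPv6 Internet.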
Ask HN: What's Your Opinion on XMTP?
Ever heard of it? Do you use it?
Ask HN: Why are electronics still so unrecyclable?
I was wondering why electronics and computer parts are so unrecyclable (is there a better word for that?).
From what I've searched, only a small percentage of electronics are recycled, and those that are go through chemical processes. Electronics today use plastics and special metals, and extracting them isn't straightforward because it requires energy and big acid digesters.
Is there some kind of initiative on this area, on using other materials or designing chips and boards to be more recyclable or reusable?
Ask HN: We're building a saving app for European savers and need GTM advice
Hey HN, I'm Alessandro, founder of unflat.finance. We're an Italian startup building a stablecoin yield app for everyday Europeans: people who have never touched a wallet, don't know what Morpho is, and have no interest in learning.
What we do today: Users deposit euros via Coinbase Pay. We convert to USDC, split their deposit across multiple isolated Morpho vaults on Base chain (each with different collateral types and borrowers pledging 150-200% of what they receive), and they earn 4-7% APY. They see one number: their balance growing daily. No wallet, no gas fees, no tokens. We have apps built for both iOS and Android, currently in beta.
The market context: This category is exploding in the US. Axal (a16z CSX, $2.5M), Nook (Coinbase Ventures, $2.5M, ~7.6% avg APY, built by ex-Coinbase/Uber team), YieldClub ($2.5M), and Aave itself launched a consumer savings app with up to 9% APY and $1M deposit protection. They're all live or in waitlist. They all run on the same proven infrastructure (Morpho has had $8.5B in peak deposits, 25+ audits, backed by $69M from a16z and Coinbase Ventures, used by Société Générale).
But they're all US-first: ACH, Plaid, Apple Pay, USD-denominated. Europe, with 350M+ banking customers sitting at near-zero rates, has no equivalent. In the US you can get 4-5% at Marcus or Wealthfront. In Europe, most banks pay 0.5% or less. The yield gap here is wider, and nobody is serving it.
That's our bet. We're also working on EURc (Circle's euro stablecoin) integration so the entire flow stays euro-denominated — no FX exposure. That's something no US competitor can replicate.
What we want to build next: An AI agent that automates portfolio allocation based on each user's risk profile. You answer a few questions about risk tolerance, time horizon, and goals, and the agent handles vault selection, rebalancing, and entry/exit across DeFi strategies. Think Wealthfront/Betterment for crypto, not another trading bot. The multi-vault architecture on Morpho makes this natural: different risk profiles map to different vault compositions.
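To make the "risk profiles map to vault compositions" idea concrete, here is a toy allocator. The vault names, APYs, and risk scores are made-up placeholders, not real Morpho markets, and a production version would pull live vault data and handle rebalancing and exits:

```python
# Toy risk-profile allocator: weight hypothetical vaults by how close their
# risk score is to the user's stated tolerance (both on a 0..1 scale).
VAULTS = {  # name: (assumed APY, assumed risk score) -- placeholders only
    "conservative-usdc": (0.040, 0.1),
    "balanced-usdc":     (0.055, 0.4),
    "aggressive-usdc":   (0.070, 0.8),
}

def allocate(deposit_eur, risk_tolerance):
    """Split a deposit across vaults, favoring vaults near the user's risk level."""
    weights = {name: 1.0 / (0.05 + abs(risk - risk_tolerance))
               for name, (_, risk) in VAULTS.items()}
    total = sum(weights.values())
    return {name: round(deposit_eur * w / total, 2) for name, w in weights.items()}
```

Even this crude closeness-weighting shows the shape of the product: a cautious user's deposit lands mostly in the low-risk vault while still earning some blended yield from the others.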
Where I need help:
Go-to-market for non-crypto users in Europe: Our target is the person with €10-50K in a savings account earning nothing, not crypto Twitter degens. Every crypto marketing channel attracts the wrong audience. We're running a waitlist with tiered bonuses (+2% APY for first 500 users, referral bonuses), but how do you actually reach normal savers? Has anyone cracked fintech distribution in Europe without burning cash on Meta/Google ads (which are restricted for crypto anyway)?
Driving traffic to the site: SEO/GEO is slow, content marketing takes months to compound, paid ads for crypto are a minefield. What's actually moved the needle for early-stage fintech in Europe? We're bootstrapped, so capital-efficient channels matter.
Trust for non-crypto users: We lead with radical transparency — every user gets a public on-chain link showing every deposit, earning, and withdrawal. We put risk disclaimers front and center ("this is not a bank account, never deposit what you can't afford to lose"). But is that enough? What trust signals actually convert skeptical Europeans?
The stablecoin savings app category is being defined right now. YC literally listed "stablecoin financial services" in their latest Requests for Startups. We think Europe is the bigger opportunity — the yield gap is wider and nobody's building here yet. Would love to hear from anyone who's built in this space.
Ask HN: Do sociotechnical pressures select for beneficial or harmful AI systems?
The full question I'm wondering about is as follows:
Do sociotechnical selection pressures reliably favor ML systems that (a) increase their own future deployment probability and (b) reshape institutions/data pipelines to entrench that probability, even without explicit 'survive' objectives?
I've gathered some links exploring this and tangential ideas here: https://studium.dev/drafts/f1 - I'd love to find more reading material
Ask HN: My OpenClaw doesn't respond. Anybody met with the same problem?
Problem: I installed OpenClaw multiple times on several Macs. It just didn't respond to me. Some of my friends have run into the same problem.
I suspect that it might be the failure of calling Claude Code through setup-token because I use its subscription plan.
The official doc says it supports calling Claude Code through a subscription, and I just need to generate a setup token. But it turns out it never worked. OpenClaw just didn't respond at all.
I changed to calling the OpenAI API key. It worked.
So has anyone run into the same problem and solved it? Is it really because Anthropic banned us from calling Claude Code through the subscription plan?
Can somebody please share their experience? Thanks
Ask HN: Exceptionally well-written research papers in CS/ML/AI?
I'm looking for research papers you consider exceptionally well written. I want to share them with students and colleagues as examples of good technical writing. Honestly, I'd love to point back to this thread so it's not just me saying it.
Ask HN: Better hardware means OpenAI, Anthropic, etc. are doomed in the future?
This is something I don't understand, how will all these AI-as-a-service companies survive in the future when hardware gets better and people are able to run LLMs locally? Of course right now the rent vs. buy equation is heavily tilted towards rent, but eventually I could see people buying a desktop they keep at home, and having all their personal inference running on that one machine. Or even having inference pools to distribute load among many people.
Do you think this is possible, and what are these companies' plans in that event?
Ask HN: Anyone else finding the new Gemini Deep Think troublingly sycophantic?
I've spent some time talking with the new Deep Think model and a few times it's gotten into a troublingly flattering mode quite quickly, and very much in the 4o feeling way. Wondering if anyone else has experienced this?
Ask HN: What happens when capability decouples from credentials?
Over the past 18 months, I've been collaborating with AI to build technical systems and conduct analytical work far outside my formal training. No CS degree, no background in the domains I'm working in, no institutional affiliation.
The work is rigorous. Someone with serious credentials has engaged and asked substantive questions. The systems function as designed. But I can't point to the traditional markers that would establish legitimacy—degrees, publications, years of experience in the field.
This isn't about whether AI "did the work." I made every decision, evaluated every output, iterated through hundreds of refinements. The AI was a tool that compressed what would have taken years of formal education into months of intensive, directed learning and execution.
Here's what interests me: We're entering a period where traditional signals of competence—credentials, institutional validation, experience markers—no longer reliably predict capability. Someone can now build sophisticated systems, conduct rigorous analysis, and produce novel insights without any of the credentials that historically signaled those abilities. The gap between "can do" and "should be trusted to do" is widening rapidly.
The old gatekeeping mechanisms are breaking down faster than new ones are forming. When credentials stop being reliable indicators of competence, what replaces them? How do we collectively establish legitimacy for knowledge and capability?
This isn't just theoretical—it's happening right now, at scale. Every day, more people are building things and doing work they have no formal qualification to do. And some of that work is genuinely good.
What frameworks should we use to evaluate competence when the traditional signals are becoming obsolete? How do we establish new language around expertise when terms like "expert," "rigorous," and "qualified" have been so diluted they've lost discriminatory power?
Ask HN: Tools to code using voice?
I need to minimise screen usage. What are the best tools currently available to code using voice? I'm hoping to do most of the coding using LLMs, and then do a review+touchup stage at the end.
What's the best tooling for using voice+LLMs (e.g. Claude Code)? Best tools to do regular coding?
We just released Khaos SDK and khaos-examples (BSL 1.1)
It’s a local-first CLI for testing AI agents against:
- prompt injection
- tool misuse / auth bypass
- data leakage (PII)
- resilience faults
The examples are intentionally weak so you can break them quickly, then harden and re-test.
pip install khaos-agent
cd quickstart
khaos discover .
khaos start echo-assistant
khaos run echo-assistant --eval security --verbose
SDK: https://github.com/ExordexLabs/khaos-sdk
Examples: https://github.com/ExordexLabs/khaos-examples
I’d love blunt feedback on:
1. CLI UX friction
2. Missing attack classes
3. What you’d need to adopt this in CI today
Ask HN: Why is my Claude experience so bad? What am I doing wrong?
I stopped my CC Max plan a few months ago, but I'm trying it again for fun after seeing their $30 billion series G or whatever.
It just doesn't work. I'm trying to build a simple tool that will let me visualize grid layouts.
It needs to toggle between landscape/portrait, and implement some design strategies so I can see different visualizations of the grid. I asked it to give me a slider to simulate the number of grids.
1st pass, it made something, but it was squished. And toggling between landscape and portrait made it so it squished itself the other way so I couldn't even see anything.
2nd pass, syntax error.
3rd try I ask it to redo everything from scratch. It now has a working slider, but the landscape/portrait is still broken.
4th try, it manages to fix the landscape/portrait issue, but now the issue is that the controls are behind the display so I have to reload the page.
5th try, it manages to fix this issue, but now it is squished again.
6th try, I ask it to try again from scratch. This time it gives me a syntax error.
This is so frustrating.
Ask HN: Has anyone achieved recursive self-improvement with agentic tools?
It feels like all the necessary components are finally available to build a self-reinforcing development loop.
Theoretically, we can now task tools like Claude Code or OpenClaw to monitor a git repo, analyze the abstractions in completed work, and then autonomously generate new agents or skills capable of handling similar tasks in the future.
Is anyone successfully running a loop like this? I’m curious if anyone here has shifted the majority of their time from writing code to crafting these systems—essentially bootstrapping agents that learn from the repo history to build better agents. I'd love to hear from those pushing the boundaries on this.
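For what it's worth, the repo-analysis step can be prototyped without any model at all. This toy version clusters commit messages by their leading verb and proposes a "skill" for each recurring theme; a real loop would swap the naive keyword grouping for an LLM call over the actual diffs:

```python
# Toy repo-analysis step for a self-improvement loop: find recurring verbs
# in commit messages and propose a "skill" stub for each repeated theme.
from collections import Counter

def propose_skills(commit_messages, min_count=2):
    """Return skill names for leading verbs that recur across commits."""
    verbs = Counter(msg.split()[0].lower() for msg in commit_messages if msg)
    return [f"skill: automate '{verb}' tasks"
            for verb, n in verbs.most_common() if n >= min_count]
```

Feeding it `git log --format=%s` output would surface themes like repeated "refactor" or "migrate" commits; the hard, unsolved part is generating agents that actually handle those themes better than the base model does.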
Ask HN: Why is everyone here so AI-hyped?
I get it - LLMs do have some value, but not as much as everyone (especially those from AI labs) is trying to pitch. I can't help thinking that it's so obvious we are almost at the very top of this bubble - but here it feels like the majority of HN doesn't think like that...
Yet just in 2026 we had:
- AI.com was sold for $70M - Crypto.com founder bought it to launch yet another "personal AI agent" platform, which promptly crashed during its Super Bowl ad debut.
- MoltBook-mania - a Reddit clone where AI bots talk to each other, flooded with crypto scams and "AI consciousness" posts. 250,000+ bot posts burning compute for what actual value? [0]
- OpenClaw - a "super open-source AI agent" that is a security nightmare.
- GPT-5.3-Codex and Opus 2.6 were released. Reviewers note they're struggling to find tasks the previous versions couldn't handle. The improvements are incremental at best.
I understand there are legitimate use cases for LLMs, but the hype-to-utility ratio seems completely out of whack.
Am I not seeing something?
[0] https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
Ask HN: If your OpenClaw could do 1 thing it currently can't, what would it be?
Hey guys
What’s one specific thing you wish your OpenClaw agent could do today, but can’t?
Not vague stuff like “pay for things.” I mean a concrete use case.
For example:
- “Automatically renew my AWS credits if usage drops below $100 and pay with a virtual card.”
- “Find the cheapest nonstop flight to NYC next month, hold it, and ask me before paying.”