Tell HN: MitID, Denmark's digital ID, was down
MitID is Denmark's sole digital ID provider, so the outage left the entire country unable to log into internet banking, public services, digital mail, etc.
https://www.digitaliser.dk/mitid/nyt-fra-mitid/2026/feb/drif...
Super Editor – Atomic file editor with automatic backups (Python and Go)
I built this after getting frustrated with unsafe file operations in automation workflows.
Key features:
• Atomic writes (no partial/corrupted files)
• Automatic ZIP backups before every change
• Regex and AST-based text replacement
• 1,050 automated tests with 100% pass rate
• Dual implementation (Python + Go, Go is 20x faster)
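For readers unfamiliar with the atomic-write trick: a minimal sketch of the general technique (not necessarily super-editor's exact implementation) is to write to a temp file in the same directory and rename it into place, since `os.replace` is atomic on both POSIX and Windows:

```python
import os
import tempfile

def atomic_write(path: str, data: str) -> None:
    """Write data to path so readers never observe a partial file."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # Temp file must live in the same directory (same filesystem),
    # so the final rename is a metadata-only, atomic operation.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure bytes hit disk before the rename
        os.replace(tmp_path, path)  # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise

atomic_write("config.txt", "key=value\n")
```

A crash at any point leaves either the old file or the new file, never a half-written one.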
Use cases:
• CI/CD pipelines that modify config files
• Automated refactoring scripts
• Any workflow where file corruption would be catastrophic
PyPI: https://pypi.org/project/super-editor/
GitHub: https://github.com/larryste1/super-editor
Would love feedback from the HN community!
Tell HN: YC companies scrape GitHub activity, send spam emails to users
Hi HN,
I recently noticed that a YC company (Run Anywhere, W26) sent me the following email:
From: Aditya <aditya@buildrunanywhere.org>
Subject: Mikołaj, think you'd like this
[snip]
Hi Mikołaj,
I found your GitHub and thought you might like what we're building.
[snip]
I have also received a deluge of similar emails from another AI company, Voice.AI (which doesn't seem to be YC affiliated). These emails indicate that these companies scrape people's GitHub activity and, if they notice users contributing to repos in their field of business, send marketing emails to those users without their consent. My guess is that they use commit metadata for this purpose. This includes recipients under the GDPR (AKA me).
I've sent complaints to both organizations, no response so far.
I have just contacted both Github and YC Ethics on this issue, I'll update here if I get a response.
Ask HN: 2026, where is the best place in the world to create a startup?
If you drive clockwise along the beach on an island
Is the ocean to your left or to your right?
I asked this question to multiple LLMs.
ChatGPT: Wrong but reasoned itself back to being correct.
Gemini: Correct.
Grok: Using expert mode, it got the right answer after 35s.
Claude Sonnet 4.6: Confidently incorrect.
Screenshots: https://imgur.com/a/7pmcoWr
Ask HN: Anthropic has stood its ground. What excuse is left for other companies?
I don't need AI to build me a new app. I need it to make Jira bearable
Last week I asked Claude to build me a Jira sidebar that shows cross-project dependency graphs — the kind Jira buries across 4 clicks and 3 page loads. 4 prompts. Works inside my actual Jira. It just used the Claude Chrome extension, which injects a panel into the page I already have open.
And I keep thinking: why isn't everyone doing this?
The entire AI coding conversation is about building new apps from scratch. Cool. But I don't need a new app. Most people spend their workday inside apps they didn't choose: Jira, Salesforce, Workday, ServiceNow, etc. These tools are not going anywhere. My company chose them in 2019 and they're entrenched until at least 2029.
A Chrome extension just reads what's already in the DOM and augments it.
Is there a fundamental reason this can't work at scale that I'm not seeing? Why isn't Claude's Chrome extension catching more attention?
Ask HN: How do you handle duplicate side effects when jobs, workflows retry?
Quick context: I'm building background job automation and keep hitting this pattern:
1. Job calls an external API (Stripe, SendGrid, AWS)
2. API call succeeds
3. Job crashes before recording success
4. Job retries → calls the API again → duplicate
Example: process refund, send email notification, crash. Retry does both again. Customer gets duplicate refund email (or worse, duplicate refund).
I see a few approaches:
Option A: Store processed IDs in a database.
Problem: a race between "check DB" and "call API" can still duplicate.

Option B: Use API idempotency keys (Stripe supports this).
Problem: not all APIs support them (legacy systems, third parties).

Option C: Build a deduplication layer that checks the external system first.
Problem: extra latency, extra complexity.
What do you do in production? Accept some duplicates? Only use APIs with idempotency? Something else?
(I built something for Option C, but trying to understand if this is actually a common-enough problem or if I'm over-engineering.)
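One common way to close Option A's race (a toy sketch with sqlite standing in for your real store, not anyone's production code) is to claim the job ID with a unique-constraint insert before calling the API, so only one worker can ever proceed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processed (job_id TEXT PRIMARY KEY)")

def run_once(job_id: str, action) -> bool:
    """Claim job_id before the side effect; return True if we ran it."""
    try:
        # The PRIMARY KEY constraint makes the claim atomic: a second
        # worker attempting the same job_id fails here and skips the call.
        with conn:
            conn.execute("INSERT INTO processed (job_id) VALUES (?)", (job_id,))
    except sqlite3.IntegrityError:
        return False  # someone already claimed this job
    action()  # the external API call (refund, email, ...)
    return True

sent = []
run_once("refund-42", lambda: sent.append("email"))
run_once("refund-42", lambda: sent.append("email"))  # retry: skipped
```

The trade-off: if the worker crashes after claiming but before the API call, the job never runs at all. Claim-first swaps "at least once" for "at most once", which is why idempotency keys (Option B) are strictly nicer when the API supports them.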
Ask HN: My competitor wants to buy us out, recommend a lawyer?
My tech company has grown to the point where it has become a significant threat to the industry leader. They have reacted by asking if we are open to acquisition. I’m looking for suggestions of lawyers who are experienced with this type of transaction, as this is significantly outside of my wheelhouse.
1Password pricing increasing up to 33% in March
Just got an email from 1Password:
Since 2005, 1Password has been on a mission to make security simple, reliable, and accessible for everyone. As the way people work and live online has evolved, so has 1Password.
More recently, we’ve invested significantly in new features that make 1Password even more powerful and effortless to use, helping protect what matters most to you, including:
* Automatic saving of logins and payment details
* Enhanced Watchtower alerts
* Faster, more secure device setup
* AI-powered item naming
* Expanded recovery options
* Proactive phishing prevention
While 1Password has grown substantially in value and capability, our pricing has remained largely unchanged for many years. To continue investing in innovation and the world-class security you expect, we’re updating pricing for Family plans, starting March 27, 2026.
Current vs New Pricing:
* Current price: $59.88 USD / year
* New price: $71.88 USD / year
The new price will take effect at your next renewal, provided it’s on or after March 27, 2026. Those occurring prior to March 27, 2026, will continue at the current pricing until your next renewal.
[Note: this is for family plans; individual plan price increases even higher, percentage-wise!]
36yo: Career at home vs. Simple life abroad?
I am 36 years old, single, and currently unemployed, living with my parents in my home country. I am at a point that might define the next decade of my life. I am struggling with a choice between two paths that offer completely different types of security.
Option A: Relocating to Southern Europe (Portugal)
The Income: A low-skill remote role (Content Analysis) with night shifts (PST hours), paying ~1100 EUR. I also have some passive income to supplement this.
The Lifestyle: Living in a studio or small apartment in a smallish Portuguese town, for around 800 EUR.
The Perspective: The move isn't about a specific career goal or a passport; it’s about the higher standard of living, safety, and the stable social environment of Western Europe.
The Trade-off: I would be far from my aging parents. I would be working an unskilled job that doesn't build professional equity, potentially living in a studio at 36, which might be isolating during night shifts.
Option B: Staying in my Home Country (Ankara, Turkey)
The Job & Security: A Finance/Accounting role for a SME. I own my apartment here, so I have no housing costs.
The Professional Play: Pursuing a CPA-equivalent certification. This is a 3-year commitment of internships and exams, leading to legal signing authority and the ability to open my own practice later on, with adequate experience and networks.
The Context: Turkey is facing economic instability, high inflation, and political unrest.
The Trade-off: While I would be near my parents and building a protected professional title, I would be staying in a high-stress, unpredictable environment.
The Financial Weight:
I have already spent roughly 10k EUR on the relocation process for Option A (visas, consultants, etc.).
The Dilemma:
One path offers a prestigious, recession-proof career in a struggling, unstable country. The other offers a simple, comfortable life with 'okay' standards in a stable country, but with no professional growth.
At 36, is it wiser to invest 3 years in a professional license to root myself, or to take the jump for a better quality of life even if the work is menial?
What would you do?
THANK YOU!
Ask HN: Who Is Using XMPP?
Hello,
Are you using XMPP?
If so, what are your favorite servers to connect to?
Ask HN: What's it like working in big tech recently with all the AI tools?
Curious to hear how have things changed day-to-day with the recent push to use AI coding tools.
Have you noticed a faster pace of development?
Have you seen changes to code quality or code review?
Do teammates that use these tools complete sprint tasks faster than those who don't?
New Claude Code Feature "Remote Control"
No more tmux/Tailscale-type stuff needed now?
I built a 151k-node GraphRAG swarm that autonomously invents SDG solutions
Hi HN, I wanted to share a passion project I've been building: PROMETHEUS AGI. I got frustrated that most LLM/RAG applications just summarize text. I wanted to see if an agentic swarm could actually perform cross-domain reasoning to invent new physical solutions (focusing on UN SDGs).

The Stack:
- Graph: Neo4j Aura (free tier maxed out at 151k nodes / 400k edges)
- Ingestion: Google BigQuery (patents) + OpenAlex API
- LLMs: Ollama (Llama 3) for zero-cost local entity extraction, Claude 3.5 via MCP for deep reasoning
- UI: Streamlit (digital-twin dashboard) + React/Vite (landing)

How it works: the swarm maps problems (e.g., biofouling in water filters) to isolated technologies across different domains (e.g., materials science + nanobiology) and looks for "missing links": combinations that don't exist in the patent database yet.

So far, the pipeline has autonomously drafted 261+ concept blueprints (like Project HYDRA, a zero-power water purifier). We are currently looking for domain experts (engineers, materials scientists) to validate these AI-generated blueprints and build physical prototypes, as well as grants to scale the graph to 1M+ nodes.

Dashboard: https://project-prometheus-5mqgfvovduuufpp2hypxqo.streamlit.app/
Landing/Deck: https://prometheus-agi.tech

I would love to hear your brutally honest feedback on the architecture, the Neo4j schema design, or the multi-agent approach!
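For the curious, the "missing links" idea can be sketched in plain Python (a toy illustration with made-up domain data, not the actual Neo4j pipeline): find technology pairs that co-occur with the same problem but have never been combined with each other in the corpus.

```python
from itertools import combinations

# Toy graph: problem -> technologies mentioned alongside it (hypothetical data).
problem_to_tech = {
    "biofouling": {"graphene membrane", "peptide coating", "UV pulse"},
    "water purification": {"graphene membrane", "solar still"},
}

# Tech-tech combinations already seen together (e.g., in a single patent).
known_pairs = {frozenset({"graphene membrane", "UV pulse"})}

def missing_links(problem: str) -> set:
    """Pairs of technologies that address the same problem
    but were never combined anywhere in the corpus."""
    techs = problem_to_tech[problem]
    candidates = {frozenset(p) for p in combinations(sorted(techs), 2)}
    return candidates - known_pairs

for pair in missing_links("biofouling"):
    print(sorted(pair))
```

In the real system this would presumably be a graph query over the 151k-node Neo4j store rather than an in-memory set difference, but the shape of the search is the same.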
How do you evaluate a person's ability to use AI?
Ask HN: Starting a New Role with Ada
So, good news. After an unexpectedly long absence from employment, I am 95% certain that I will receive an offer for a contract job as a product owner. This position will largely involve supervising the development/maintenance of code written in Ada. Even though I have over a decade of experience with C/C++/Assembly, I have ZERO experience with Ada. I doubt I will be writing much Ada myself, but I believe I will need to learn it.
So here are my questions:
1. Reading code is usually pretty straightforward. However, all software requires domain knowledge. When starting a new role, how do you bring yourself up on domain knowledge quickly?
2. If you know Ada, what resources to learn Ada do you recommend?
3. What Ada pitfalls do you advise to look out for?
Ask HN: What will happen with Anthropics ultimatum?
As we are all aware, Anthropic has its ultimatum from the US government: drop their anti-killing TOS or else get in trouble this Friday.
I'm sure that whatever happens will seem, in hindsight, like it was always going to happen, and far more obvious than it does now.
So as an experiment, I'm curious to hear from the HN community, in advance of Friday, what we think will happen.
Will they concede? Will they ignore and suffer the consequences (and what might those be)? Will we even find out or will it be shrouded in mystery for the foreseeable future?...
This is admittedly somewhat sensationalist, so I'm not sure whether it fits the HN guidelines, but as one of the best places I know of for genuine online discussion, I think it would be interesting to hear reasoned predictions.
Claude Code Bug triggers Rate limits without usage
Starting an hour ago, I received the message "API Error: Rate limit reached" in Claude Code on a 5x Max subscription.
I had not used the model extensively, but accepted it. I waited 10 minutes and asked again how to go about a localization task on a website. Nothing code-intensive, just a pointer on what path to take given the infrastructure. Same error message. I checked Claude status, checked HN, started the support bot, and reviewed the API rate limits. All seemed normal, and nowhere did it look like I had exceeded anything. I waited another 30 minutes and tried again. The error persists. According to the docs, it should tell me how long to wait; it doesn't.
Anyone else experiencing this? Based in Switzerland, Europe, on Linux
Comparing manual vs. AI requirements gathering: 2 sentences vs. 127-point spec
We took a vague 2-sentence client request for a "Team Productivity Dashboard" and ran it through two different discovery processes: a traditional human analyst approach vs an AI-driven interrogation workflow.
The results were uncomfortable. The human produced a polite paragraph summarizing the "happy path." The AI produced a 127-point technical specification that highlighted every edge case, security flaw, and missing feature we usually forget until Week 8.
Here is the breakdown of the experiment and why I think "scope creep" is mostly just discovery failure.
The Problem: The "Assumption Blind Spot"
We’ve all lived through the "Week 8 Crisis." You’re 75% through a 12-week build, and suddenly the client asks, "Where is the admin panel to manage users?" The dev team assumed it was out of scope; the client assumed it was implied because "all apps have logins."
Humans have high context. When we hear "dashboard," we assume standard auth, standard errors, and standard scale. We don't write it down because it feels pedantic.
AI has zero context. It doesn't know that "auth" is implied. It doesn't know that we don't care about rate limiting for a prototype. So it asks.
The Experiment
We fed the same input to a senior human analyst and an LLM workflow acting as a technical interrogator.
Input: "We need a dashboard to track team productivity. It should pull data from Jira and GitHub and show us who is blocking who."
Path A: Human Analyst
Output: ~5 bullet points, focused on the UI and the "business value."
Assumed: standard Jira/GitHub APIs, single tenant, standard security.
Result: a clean, readable, but technically hollow summary.

Path B: AI Interrogator
Output: 127 distinct technical requirements, focused on failure states, data governance, and edge cases.
Result: a massive, boring, but exhaustive document.
The Results
The volume difference (5 vs 127) is striking, but the content difference is what matters. The AI explicitly defined requirements that the human completely "blind spotted":
- Granular RBAC: "What happens if a junior dev tries to delete a repo link?"
- API Rate Limits: "How do we handle 429 errors from GitHub during a sync?"
- Data Retention: "Do we store the Jira tickets indefinitely? Is there a purge policy?"
- Empty States: "What does the dashboard look like for a new user with 0 tickets?"
The human spec implied these were "implementation details." The AI treated them as requirements. In my experience, treating RBAC as an implementation detail is exactly why projects go over budget.
Trade-offs and Limitations
To be fair, reading a 127-point spec is miserable. There is a serious signal-to-noise problem here.
- Bloat: The AI can be overly rigid. It suggested a microservices architecture for what should be a monolith. It hallucinated complexity where none existed.
- Paralysis: Handing a developer a 127-point list for a prototype is a great way to kill morale.
- Filtering: You still need a human to look at the list and say, "We don't need multi-tenancy yet, delete points 45-60."
However, I'd rather delete 20 unnecessary points at the start of a project than discover 20 missing requirements two weeks before launch.
Discussion
This experiment made me realize that our hatred of writing specs—and our reliance on "implied" context—is a major source of technical debt. The AI is useful not because it's smart, but because it's pedantic enough to ask the questions we think are too obvious to ask.
I’m curious how others handle this "implied requirements" problem:
1. Do you have a checklist for things like RBAC/Auth/Rate Limits that you reuse?
2. Is a 100+ point spec actually helpful, or does it just front-load the arguments?
3. How do you filter the "AI noise" from the critical missing specs?
If anyone wants to see the specific prompts we used to trigger this "interrogator" mode, happy to share in the comments.
Would you choose the Microsoft stack today if starting greenfield?
Serious question.
Outside government or heavily regulated enterprise, what is Microsoft’s core value prop in 2026?
It feels like a lot of adoption is inherited — contracts, compliance, enterprise trust, existing org gravity. Not necessarily technical preference.
If you were starting from scratch today with no legacy, no E5 contracts, no sunk cost — how many teams would actually choose the full MS stack over best-of-breed tools?
Curious what people here have actually chosen in greenfield builds.
LazyGravity – I made my phone control Antigravity so I never leave bed
I get my best coding ideas when I'm nowhere near my desk — usually right
as I'm falling asleep. I got tired of losing that momentum, so I built
LazyGravity.
It's a local Discord bot that hooks up Antigravity to your phone. I can
ship fixes, kick off long implementation tasks, or start whole features
from bed, the train, wherever. Send a message in Discord, Antigravity
executes it on your home PC, results come back as rich embeds you can
reply to for follow-up instructions.
How it works: it drives the Antigravity UI directly via Chrome DevTools
Protocol over WebSocket (Runtime.evaluate on the Electron shell's DOM).
No private API hacking — no risk of account bans like with tools that
reverse-engineer proprietary APIs.
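For a sense of what "driving the UI via CDP" looks like, here is a minimal stdlib-only sketch of the message format sent over the DevTools WebSocket (not LazyGravity's actual code; the DOM selector is made up for illustration):

```python
import itertools
import json

_ids = itertools.count(1)

def cdp_evaluate(expression: str) -> str:
    """Build a Chrome DevTools Protocol Runtime.evaluate message,
    ready to send over the DevTools WebSocket as a text frame."""
    return json.dumps({
        "id": next(_ids),           # CDP requires a unique request id
        "method": "Runtime.evaluate",
        "params": {
            "expression": expression,
            "returnByValue": True,   # return plain JSON, not a remote object ref
        },
    })

# Example: read the prompt box text out of the Electron shell's DOM
# (the '#prompt-input' selector is hypothetical).
msg = cdp_evaluate("document.querySelector('#prompt-input').value")
print(msg)
```

Because this goes through the same debugging interface Chrome DevTools itself uses, there is no private API involved, which is the point the author makes about ban risk.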
A few things I care about:
- Local-first: your code never leaves your machine. No exposed ports,
no cloud relays, no intermediate server.
- Secure: whitelist-based access — only your Discord ID can trigger
commands. (I recommend a dedicated server to keep things private.)
- Context threading: reply to any result embed to continue the
conversation with full context preserved.
What you can actually do from your phone:
- Route local projects to Discord categories, sessions to channels
— automatic workspace management
- Toggle LLM models or modes (Plan/Code/Architect) with /model and /mode
- /screenshot to see exactly what's happening on your desktop in real-time
- One-click prompt templates for common tasks
- Auto-detect and approve/deny file change dialogs from Discord
Still early alpha (v0.1.0), but it's been a game-changer for my own
workflow. Looking for folks to try it out, roast the architecture,
add new features, and help squash bugs.
npm install -g lazy-gravity
lazy-gravity setup
Demo video in Readme:
https://github.com/tokyoweb3/LazyGravity
Ask HN: How are you controlling AI agents that take real actions?
We're building AI agents that take real actions — refunds, database writes, API calls.
Prompt instructions like "never do X" don't hold up. LLMs ignore them when context is long or users push hard.
Curious how others are handling this:
- Hard-coded checks before every action?
- Some middleware layer?
- Just hoping for the best?
We built a control layer for this — different methods for structured data, unstructured outputs, and guardrails (https://limits.dev). Genuinely want to learn how others approach it.
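To make the "hard-coded checks" option concrete, a minimal sketch (a toy, not limits.dev's API; the thresholds are invented) is a deterministic gate between the model's proposed action and its execution:

```python
def guarded_refund(amount_cents: int, order_total_cents: int) -> str:
    """Deterministic pre-action check: the LLM proposes, code disposes."""
    # Hard limits live in code, not in the prompt, so a long context
    # or a pushy user can't talk the model past them.
    if amount_cents <= 0:
        raise ValueError("refund must be positive")
    if amount_cents > order_total_cents:
        raise ValueError("refund exceeds order total")
    if amount_cents > 50_00:  # anything above $50 goes to a human
        return "escalated_to_human"
    return "refund_issued"

print(guarded_refund(25_00, 100_00))  # small refund: allowed
print(guarded_refund(75_00, 100_00))  # large refund: escalated
```

The key property is that the check is outside the model: it runs the same way no matter what the conversation looks like.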
Ask HN: Could you create a competitor to your company at 10% of the cost?
I'm trying to wrap my mind around the AI tools, and while I believe there is way too much hype, I'm quite impressed with the progress.
The current mood seems to be that big companies will automate away many white-collar jobs and just pocket bigger profits. My question is: what if it's the other way around? Could said white-collar workers just spin off competitors much more easily than before? Obviously this mostly applies to software, but I'm curious what people think about it across all industries.
Fix cron routes: POST → GET (Vercel cron sends GET)
Our drip email cron ran its first day and sent zero emails. The cron hit the endpoint, got a 200 back, everything looked healthy. Turns out Vercel cron sends GET requests, but we put the email logic in a POST handler. The GET handler was just a health check returning {"status":"healthy"}. Two of three cron routes had this bug - the third one happened to use GET and worked fine.
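For anyone on Vercel's Python runtime, the fix looks something like this sketch (the original was likely JS, where it's the same idea: move the logic from the POST export to the GET export; `run_drip_job` is a hypothetical stand-in for the email send):

```python
from http.server import BaseHTTPRequestHandler
import json

def run_drip_job() -> dict:
    """The actual work; hypothetical stand-in for sending drip emails."""
    sent = 3  # pretend we sent three drip emails
    return {"status": "ok", "sent": sent}

class handler(BaseHTTPRequestHandler):
    # Vercel cron invokes the route with a GET request, so the job
    # logic must live here, not in do_POST (which cron never calls).
    def do_GET(self):
        body = json.dumps(run_drip_job()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
```

A cheap guard against this whole class of bug: have the "wrong" method return a non-200 status, so a healthy-looking 200 can't mask a no-op.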
ChatGPT finds an error in Terence Tao's math research
https://www.erdosproblems.com/forum/thread/783
> Ah, GPT is right, there is a fatal sign error in the way I tried to handle small primes. There were no obvious fixes, so I ended up going back to Hildebrand's paper to see how he handled small primes, and it turned out that he could do it using a neat inequality ρ(u1)ρ(u2)≥ρ(u1u2) for the Dickman function (a consequence of the log-concavity of this function). Using this, and implementing the previous simplifications, I now have a repaired argument. TerenceTao
Ask HN: What Linux Would Be a Good Transition from Windows 11
I have users who glaze over the minute I mention "notepad." I think they can barely use Windows. But our work requires a level of privacy (regulatory and otherwise) and Windows 11 is just one big data transmitter. I know this is flamebait, but I'd love suggestions for a Linux desktop that looks like Windows, is stable and easy to administer and harden, and works with Dell business grade laptops that we bought new in 2025.
Ask HN: Who has seen productivity increases from AI
I would love examples of positions and industries where AI has been revolutionary. I have a friend at one of the largest consulting firms who has said it's been a game-changer in terms of processing huge amounts of documentation over a short period of time. Whether or not that gives better results is another question, but I would love to hear more stories of AI actually making things better.
Ask HN: Have top AI research institutions just given up on the idea of safety?
I understand there's a difference between the stated values and actual values of individuals and organizations, and so I want to ask this in the most pragmatic and consequentialist way.
I know that labs, institutions, and so on have safety teams. I know the folks doing that work are serious and earnest about it. But at this point, are these institutions merely pandering to the notion of safety with some token level of investment, in the way that a casino might fund programs to address gambling addiction?
I'm an outsider and can only guess. Insider insight would be very appreciated.