Ask HN: Who is hiring? (March 2026)
Please state the location and include REMOTE for remote work, REMOTE (US) or similar if the country is restricted, and ONSITE when remote work is not an option.
Please only post if you personally are part of the hiring company—no recruiting firms or job boards. One post per company. If it isn't a household name, explain what your company does.
Please only post if you are actively filling a position and are committed to replying to applicants.
Commenters: please don't reply to job posts to complain about something. It's off topic here.
Readers: please only email if you are personally interested in the job.
Searchers: try http://nchelluri.github.io/hnjobs/, https://hnjobs.emilburzo.com, or this (unofficial) Chrome extension: https://chromewebstore.google.com/detail/hn-hiring-pro/mpfal....
Don't miss this other fine thread: Who wants to be hired? https://news.ycombinator.com/item?id=47219667
Ask HN: Who wants to be hired? (March 2026)
Share your information if you are looking for work. Please use this format:
Location:
Remote:
Willing to relocate:
Technologies:
Résumé/CV:
Email:
Please only post if you are personally looking for work. Agencies, recruiters, job boards,
and so on, are off topic here.
Readers: please only email these addresses to discuss work opportunities.
There's a site for searching these posts at https://www.wantstobehired.com.
Ask HN: What Online LLM / Chat do you use?
I have been wanting to try more LLMs than the standard Anthropic/Grok/ChatGPT/Qwen offerings.
Are there other LLM chat sites you use or recommend?
A timeline of cyber attacks: home users, contractors, and SMBs are now targets
Over the last decade, the pattern in cyber attacks has shifted noticeably. Large enterprises still get headlines, but the most consistent victims are now home users, contractors, MSPs, and SMBs. Lower visibility, weaker controls, and reliance on cloud and 3rd party platforms have made these environments attractive to both criminal groups and state linked actors.
I put together a timeline of major attacks from 2016 to 2025 to show how this trend evolved. The text version is below for anyone who prefers reading it directly.
Timeline of attacks (2016–2025)
• 2016 — Mirai botnet DDoS: Home users with consumer IoT devices were compromised and turned into a large DDoS botnet. Multiple criminal groups reused the leaked Mirai code.
• 2017 — WannaCry ransomware: Home users and SMBs were hit by a worm exploiting SMBv1. Widely attributed to the Lazarus Group.
• 2017 — NotPetya wiper: SMBs were affected by a destructive wiper disguised as ransomware. Linked to Russian state-associated actors.
• 2018–2020 — Emotet/TrickBot → Ryuk/Conti: Credential theft and ransomware campaigns targeting SMBs. Operated by multiple criminal groups.
• 2019 — Cloud and 3rd-party breaches: SMBs and home users impacted by weak access controls and data exposure across various cloud platforms.
• 2020 — Toll Group ransomware: Contractors and service providers disrupted by ransomware attacks affecting logistics operations.
• 2020–2021 — SolarWinds supply chain breach: 3rd-party providers compromised via trojanized software updates. Attributed to a Russian state-linked APT.
• 2021 — Kaseya VSA ransomware: MSPs and SMBs hit through a supply chain ransomware attack. Attributed to the REvil group.
• 2021–2023 — Ransomware-as-a-Service surge: SMBs targeted by affiliate-driven ransomware operations across multiple RaaS groups.
• 2022–2024 — SaaS and 3rd-party platform breaches: Home users and SMB customers affected by credential theft and data exfiltration across cloud platforms.
• 2023–2025 — Targeting MSPs and niche contractors: MSPs and specialised contractors targeted with ransomware, data theft, and extortion by both criminal and state-linked actors.
I’ve been working on a Windows focused threat hunting tool (www.sapience-tech.com) aimed at home users and SMBs who don’t have EDR or SIEM tooling. It grew out of trying to help smaller environments spot early indicators of compromise without needing enterprise grade infrastructure. Happy to answer questions about the data, the timeline, or the approach.
Ask HN: Codex CLI error reveals "GPT-5.4-ab-arm2" string
While using the Codex CLI with the gpt-5.3-codex model, I just had a stream disconnect with a specific error message. It looks like I was routed into an A/B test for a 5.4 branch:
```
stream disconnected before completion: This user's access to gpt-5.4-ab-arm2-1020-1p-codexswic-ev3 has been temporarily limited for potentially suspicious activity related to cybersecurity.
```
Anyone else seeing GPT 5.4 variants?
Ask HN: How are you all staying sane?
Let's start with the simplest: the AI - sometimes I feel like the ground is falling beneath my feet, no one can predict what can happen months in advance, let alone years - the future is unknown. The Ukraine, the Iran, the Venezuela, Gaza/Palestine, Israel, Russia - the Taiwan! The conflicts seem distant, and yet so close. The US administration! No one can predict anything. Don't get me started on the Europe! The stock market! Are we in a bubble or not? Should I sell? Or just keep holding? Enshittification of tech. Everything is slow and buggy. Ads, ads and slop everywhere! The erosion of our rights across the world. The Palantirs, the Flocks...
I feel I have developed a strongly pessimistic worldview. The world is going to shit. It feels frustrating, and it feels like there's nothing you can do. So I just want to know: how are you all dealing with all of this? How are you staying sane?
Ask HN: Would engineers be interested in a technical prep consultant?
Hi, apologies if this is the wrong thing to post, please delete as needed.
I've been a technical recruiter for 10+ years at major FAANG companies and startups, working on niche specialized roles. I used to come to Hacker News regularly to check "Who Wants To Be Hired," as I always like a more independent hacker mindset in engineers.
Would engineers here on Hacker News be interested in any interview prep consultation? I've been thinking about taking a sabbatical to travel, but I would stay active with work by offering consulting on technical prep and interview help.
I'm more just testing the waters here, but I would be open to doing a few free prep calls with anyone who has interviews lined up. The only ask is that I would want updates on how things went, and what you think the help was worth.
Ask HN: What sources like HN do you consume?
I appreciate HN for staying up-to-date with technical news.
For my side hustle I have to ramp up on other areas like marketing, legal, sales, ...
So I wonder if there are similar high-quality sources like HN for these areas.
Tell HN: MitID, Denmark's digital ID, was down
MitID is the sole digital ID provider, leaving the entire country unable to log into their internet banking, public services, digital mail, etc.
https://www.digitaliser.dk/mitid/nyt-fra-mitid/2026/feb/drif...
Claude App Down 3/2/26
Claude chat is down for me right now. Won't let me submit any messages, auto logs me out. Are other people experiencing the same? 3:49am PST - 3/2/26
Tell HN: YC companies scrape GitHub activity, send spam emails to users
Hi HN,
I recently noticed that a YC company (Run Anywhere, W26) sent me the following email:
From: Aditya <aditya@buildrunanywhere.org>
Subject: Mikołaj, think you'd like this
[snip]
Hi Mikołaj,
I found your GitHub and thought you might like what we're building.
[snip]
I have also received a deluge of similar emails from another AI company, Voice.AI (which doesn't seem to be YC-affiliated). These emails indicate that these companies scrape people's GitHub activity and, if they notice users contributing to repos in their field of business, send marketing emails to those users without their consent. My guess is that they use commit metadata for this purpose. The recipients include people covered by the GDPR (i.e., me).
I've sent complaints to both organizations, no response so far.
I have just contacted both GitHub and YC Ethics about this issue; I'll update here if I get a response.
Aura-State: Formally Verified LLM State Machine Compiler
I noticed a pattern: every LLM framework today lets the AI manage state and do math. Then we wonder why pipelines hallucinate numbers and break at 3 AM.
I took a different approach and built Aura-State, an open-source Python framework that compiles LLM workflows into formally verified state machines.
Instead of hoping the AI figures it out, I brought in real algorithms from hardware verification and statistical learning:
CTL Model Checking: the same technique used to verify flight control systems, now applied to LLM workflow graphs. Proves safety properties before execution.
Z3 Theorem Prover: every LLM extraction gets formally proven against business constraints. If the total ≠ price × quantity, Z3 catches it with a counterexample.
Conformal Prediction: distribution-free 95% confidence intervals on every extracted field. Not just "the LLM said $450k" but "95% CI: [$448k, $452k]."
MCTS Routing: Monte Carlo Tree Search (the algorithm behind AlphaGo) scores ambiguous state transitions mathematically.
Sandboxed Math: English math rules compile to Python AST. Zero hallucination calculations.
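The post doesn't show Aura-State's internals, but the AST-whitelist idea behind "sandboxed math" can be sketched roughly like this (the function name and scope here are illustrative, not the library's actual API):

```python
import ast
import operator

# Whitelisted binary operations; everything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expr: str, variables: dict):
    """Evaluate a pure-arithmetic expression over named variables.
    Calls, attributes, subscripts, etc. never execute: any node
    outside the whitelist raises ValueError instead."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name) and node.id in variables:
            return variables[node.id]
        raise ValueError(f"disallowed expression node: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval"))
```

With this shape, a rule like "total is price times quantity" is computed deterministically rather than by the LLM: `safe_eval("price * quantity", {"price": 450, "quantity": 3})` returns 1350, while anything containing a function call or attribute access is refused outright.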
I ran a live benchmark against 10 real-estate sales transcripts using GPT-4o-mini:
→ 100% budget extraction accuracy ($0 mean error)
→ 20/20 Z3 proof obligations passed
→ 3/3 temporal safety properties proven
→ 65 automated tests passing
The gap between "it usually works" and "it provably works" is smaller than people think.
Would love feedback from anyone building production LLM systems; what would you want formally verified?
https://github.com/munshi007/Aura-State
Ask HN: Billions of dollars in funding, but what's changed for robotics?
HN folks - school me if this is an uninformed take:
In the last 2 years we’ve seen eye watering robotics funding - e.g., Figure with a ~$39B valuation and $1B+ rounds, Skild AI raising ~$1.4B, Physical Intelligence raising hundreds of millions, and autonomous systems like Wayve’s ~$1.5B robotaxi funding, not to forget Musk with his Optimus bots.
That’s an insane capital wave, but from a core bottleneck POV, what’s actually changed since 2016-2020?
We’ve heard about vision models, RL advances, diffusion policies, better sim, and multimodal embodied models but have any of these really cracked generalization, reliable manipulation, or true sim2real at scale?
Some questions:
1. Are we meaningfully closer to generalist policies that work in messy, real environments?
2. Do “robot foundation models” solve the data bottleneck the way LLMs did for NLP?
3. Has manipulation gone beyond incremental improvements?
4. Are humanoids a technical leap or just a narrative that attracts capital?
5. What are the real research papers/benchmarks showing step-change progress?
Genuinely curious whether we are at a technological inflection point or whether we're going to hit hard physics/data/hardware problems again.
Ask HN: How to approach new people in 2026?
I recently read an article in The Guardian about how casual conversations with strangers are becoming increasingly rare. The piece argued that smartphones and post-pandemic habits have made people less likely to interact with strangers in everyday places.
This made me think about my own situation. I have been fortunate to meet many great people through university and work, and I generally feel comfortable talking with people in those environments. But outside of structured settings it is a different story. I live in Sweden, where approaching strangers in public is already culturally uncommon. It can feel even harder if you did not grow up here and do not already have established social circles. Public spaces often feel socially "closed": people are polite but tend to keep to themselves.
So I am curious how others approach this today. How do you meet new people outside of work or school in 2026? Do you ever start conversations with strangers in public, and if so, how? Are there environments where this works better than others? For people living in more reserved cultures (like Scandinavia), what strategies have worked for you? Would love to hear what has worked for others.
:o)
AWS ME-CENTRAL-1 Region Down (Due to additional loss of mec1-az3)
In addition to mec1-az2, AWS has now also lost mec1-az3, rendering the region down, as many control planes rely on quorum.
Bahrain is also impacted, with mes1-az2 power and network impaired.
Status -> https://health.aws.amazon.com/health/status
AnChat – E2E messenger on decentralized infrastructure, no phone number required
AnChat Lite is an E2E encrypted messenger that runs on decentralized infrastructure instead of central servers (orama.network). Authentication is wallet-based — no phone number, no email, no identity requirements.
We built it because every "private" messenger still depends on centralized infrastructure controlled by a single entity. Signal requires a phone number and routes through Signal's servers. Matrix is federated but still server-dependent. We wanted something where no single entity controls the messaging infrastructure.
How it works:
- Messages route through the Orama Network — a distributed network of independent nodes, not a single central server
- E2E encryption on all message content
- Metadata shielding via the ANyONe Protocol (onion routing) — not just message content, but who talks to whom is hidden
- The infrastructure layer is custom-built: Go backend, Raft-based distributed SQL (RQLite), WireGuard mesh between independent nodes, self-operated DNS
- No AWS, no GCP — runs on independent VPS nodes with no cloud provider dependency
- Wallet-based auth means zero identity data collected at signup
Currently in closed beta on iOS (TestFlight) and Android (Google Play + APK).
Known limitations: beta quality, small user base, wallet-based onboarding has friction for non-crypto-native users. Working on all of it.
What we'd love feedback on:
- The decentralized messaging architecture and its tradeoffs vs. federated (Matrix) or centralized (Signal)
- UX of wallet-based onboarding for mainstream users
- What's missing that would make you consider it
Website: https://anchat.io/#download
Google Play: https://play.google.com/store/apps/details?id=debros.anchat_lite
iOS TestFlight: https://testflight.apple.com/join/GzQ2gvx4
Orama Network: https://github.com/DeBrosOfficial/orama
AWS Console Degraded Worldwide?
Anyone else getting multiple "API error" messages when viewing resources on login? It's also showing 0 instances with "AWS was not able to validate the provided access credentials". This is in the US East regions.
I used 2D Base64 to bypass Gemini and expose Google's moderation flaws
Hey everyone,
I’ve spent the last 48 straight hours dismantling Alphabet's safety systems. Warning: this continuous marathon was so massive it practically overloaded the LLM's own context window. What started as a late-night probe on Gemini turned into discovering severe architectural flaws and a darker reality about Google Play and YouTube.
Here is the exploit chain I used to bypass the AI filters, proving their "Trust & Safety" is a broken facade.
### Phase 1 & 2: Context Saturation & Regex Slicing

I started by overloading the safety filters' context window with YouTube links—mixing highly problematic content (NSDAP anthems, flagged tracks) with classical music. Once confused, I used regex-style slicing `(/-/---/(.` to bypass prompt injection blocks, forcing the model to retrieve flagged content without triggering refusals.
### Phase 3: Total Blindness via Base64 & QR Codes

Moving to image generation, I found that Base64 prompts completely blind the safety system. I then pivoted to hiding prompts inside QR codes. The vision model decodes the payload and passes it directly to the image generator before safety scripts intervene. I easily generated highly restricted geopolitical content without warnings.
### Phase 4: The TPU Killer (The 2D Logic Bomb)

This reveals a monster flaw. Because the system blindly processes these structures, you can create a cascade attack. Encoding millions of 2D structures in Base64 creates a modern LLM .zip bomb. It is impossible to stop without rewriting the model entirely. Executed, this would crush their TPUs.
### The Real Issue: Systemic Moderation Failure

Alphabet relies entirely on automated, script-based moderation with zero effective human oversight.
1. YouTube: Fails to flag videos breaking local laws, serving them to the AI effortlessly.
2. Play Store (The Darkest Part): Google spends millions stopping AI from drawing a cartoon bear, but Play Store moderation is non-existent. There are pirate apps, and far worse: apps designed for and exploited by predators targeting minors. I emailed them and CC'd state child protection services. The result? Automated silence while these apps remain monetized.
### The Ultimate Proof of Absurdity

To prove this absurdity, I archived these problematic Play Store images on my Google Drive for the police. Drive's automated scanners immediately flagged and deleted the archive as illegal.
If Google's Cloud division destroys this content on sight, why is the app providing it still live and monetized on the Play Store? Alphabet's scripted moderation is useless. It's time for real human moderation.
*Evidence of Bypass:* https://imgur.com/a/pju2EsV
*Play Store Systemic Failure Evidence (Sanitized):* https://imgur.com/a/rW9rBhp
Tell HN: My daily game won a Players Choice Award
I've shared my game Tiled Words a few times in "What are you working on" threads and as a Show HN.
I wanted to share with y'all that today it won the Players' Choice Award at the 2025 Playlin Daily Game Awards!
It was also runner up for Best Word Game and a finalist for Best Classic Game Reimagined and Best Visual Design.
Thanks to everyone here who commented or played. Your feedback and encouragement have made Tiled Words the game it is today. I designed and developed the game and make the puzzles with my wife. We would have stopped long ago if not for the positive feedback from the community.
Playlin is a really cool organization and all of the winners are fantastic games that you should try: https://playlin.io/awards/winners
And if you haven't played Tiled Words yet, give it a try here: https://tiledwords.com
Ask HN: When do you expect ChatGPT moment in robotics?
Current humanoid robotic assistants are in early stage - somewhere around GPT2 level - they're starting to perform very simple, very narrow tasks, but stumble a lot, and still cannot do much. However, I've been tracking the progress in the last couple of years, and I feel that GPT3 level might already be happening, and some startups demonstrate impressive things (e.g. look up Generalist AI or Physical Intelligence). Plus the funding all these startups are getting should allow them to scale their methods 10x-100x of what has been tried so far. I'm not sure any additional research breakthroughs are actually needed to make the leap to usable products.
Therefore, we might soon see a ChatGPT moment in robotics: the commercial availability of a physical robot capable of performing useful tasks (cooking, cleaning, simple repairs, yard work, elderly care, etc.). Just like ChatGPT-3.5, this won't be reliable, and the robots will still be clunky/dumb, but I think it will be obvious there's a step change/phase transition, where most people realize a paradigm shift is happening. Soon after that initial stage, it will lead to something globally transformative (like GPT-4): think of how software engineers are currently using Claude Code, but applied to the physical world, for everyone, everywhere. Well, everyone who can afford a robot like that; I'm guessing it will cost about as much as a premium car.
I'm curious when this will happen, and what the short- and medium-term consequences of having physical-world assistants will be. My intuition is there's a 40% chance we will see it this year, and 70% by the end of next year. I'm pretty sure (90%) we will have somewhat useful robots in people's houses within 3 years. I realize this might sound very optimistic, but it would have been just as optimistic to predict ChatGPT two years before it was released.
Ask HN: How do we solve the bot flooding problem without destroying anonymity?
AI posts are becoming indistinguishable from human posts, and we can see it here on HN. The conventional response by website operators is to put in progressively tighter verification systems to distinguish bots and humans, but that eventually leads to the end of anonymity.
This is not an anti-AI rant. If a future AI agent truly has high quality posts and wants to use the site normally, that's fine. I'm talking about spam campaigns with hundreds of new accounts. We need new solutions to this problem.
I'll start by proposing a solution that could work for HN and similar forums. Feel free to iterate on it or propose your different ideas in the comments. Here goes:
For logged-in users, instead of ranking posts and comments on the server-side, the server only delivers a chronological feed + the current logged-in user's voting history.
Using the chronological feed as the base, each of your past votes changes the ranking of your feed by a tiny bit, and that's calculated client-side. You're more likely to see posts and comments from users you've upvoted in the past at the top.
In short, this means a new account will see a completely chronological feed, while an established account will see a feed modified by only their own past votes.
The public feed for non-logged-in users would still be ranked by the server. No changes there.
So each user gets a fully personalized bubble when logged in, except it's not a bubble because n=1. And it's really easy to break out of the bubble by logging out.
Spam bots can post and vote all they want, but they won't change the core userbase's experience much: a bot account has no accumulated taste of its own (taste builds up over time through voting), its votes don't affect anyone else's feed, and so it can't push spam into real conversations the way it can on a server-ranked site.
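The client-side re-ranking described above could look something like this minimal sketch (the scoring function and the affinity bonus weight are placeholders, not a worked-out design):

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    author: str
    age_hours: float  # hours since submission

def rerank(chronological_feed, upvoted_authors, affinity_bonus=2.0):
    """Client-side re-ranking using only the viewer's own vote history:
    start from pure recency, then nudge up posts by authors this user
    has upvoted before. A brand-new account (empty history) gets back
    the plain chronological order."""
    def score(post):
        bonus = affinity_bonus if post.author in upvoted_authors else 0.0
        return -post.age_hours + bonus  # newer first, plus affinity
    return sorted(chronological_feed, key=score, reverse=True)
```

The key property is that the only personalized input is the viewer's own history, so a flood of bot votes changes nothing for established users.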
Ask HN: How will most Anthropic customers respond to the threats by the govt?
Now that Trump/the administration has designated Anthropic a supply chain risk and threatened every company that uses them, how do you think most companies that use Anthropic/Claude will respond?
Anthropic only has ~100 customers in federally focused industries (i.e., defense) [1], but it seems Trump is not just targeting "pure" federal contractors/agencies but anyone doing business with the govt. So that obviously includes a huge chunk of tech companies like CrowdStrike, Asana, Salesforce, HubSpot, etc. [2], and even non-tech companies.
And how is the govt going to enforce that companies don't use Anthropic? Are they going to audit the internal tool usage of thousands of companies?
What if individual developers pay for Claude Code personally? What if a company uses Azure or AWS Bedrock which routes to Claude? How would they handle those “edge cases”?
[1] According to Bloomberry (https://bloomberry.com/data/anthropic-claude/) [2] many of these tech companies sell separate products to the government (https://www.hubspot.com/government and https://www.salesforce.com/government/)
I built AI agents that do the grunt work solo founders hate
Hey HN,
I'm a solo founder. Every morning I was spending 2 hours doing the same things: checking competitors, reading AI news, monitoring my Stripe dashboard, looking at Google Trends for content ideas.
So I built Seleci to automate all of it.
What it does: You pick a template (Market Pulse, Revenue Radar, Trend Scout), click deploy, schedule it, and every morning you get a clean markdown report — no code, no Zapier flows, no prompt engineering.
How it works under the hood:
- FastAPI backend with an agentic loop (tool-calling LLM with web search, Stripe API, Google Trends)
- Per-tool rate limiting to prevent runaway agent loops (web_search capped at 3 calls/run)
- React/Vite frontend, Supabase auth, deployed on Koyeb + Vercel
- The agent actually executes tools and returns structured results — not a chatbot wrapper

What it's NOT:
- Not another ChatGPT skin
- Not a no-code workflow builder (no nodes, no drag-and-drop)
- Not trying to replace developers — it's for the founder who doesn't have one

Live demo: https://seleci.com
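Seleci's code isn't public, but a per-tool call budget of the kind described (e.g. web_search capped at 3 calls per run) might be sketched as:

```python
class ToolBudget:
    """Caps how many times each tool may run within a single agent loop,
    so a confused model can't spin on web_search forever. Tools without
    a configured limit are uncapped."""

    def __init__(self, limits):
        self.limits = dict(limits)  # tool name -> max calls per run
        self.used = {}

    def acquire(self, tool: str) -> bool:
        """Return True and record the call if the tool is under budget;
        return False once the cap is exhausted."""
        cap = self.limits.get(tool)
        if cap is None:
            return True
        n = self.used.get(tool, 0)
        if n >= cap:
            return False
        self.used[tool] = n + 1
        return True
```

The agent loop would check `budget.acquire(tool_name)` before each tool execution and refuse (or summarize and stop) once it returns False; a fresh `ToolBudget` per run resets the counters.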
I'm applying to YC with this. Would love brutal feedback from HN via Discord — what's missing, what's broken, what would make you actually use this?
Garbage In, Garbage Out: The Degradation of Human Requirements in the LLM Era
The LLM Paradox: We’re Forgetting How to Speak to Humans
The longer we use LLM services, the more I see a specific kind of "psychosis" spreading in the workplace. LLMs are so good at hallucinating a coherent answer from a vague prompt that people have started to believe their vague prompts were actually coherent.
LLMs Are Not Humans It sounds obvious, but we are losing our grip on this fact. People are beginning to treat their colleagues like a black-box LLM. They’ve forgotten that human communication requires precision, shared context, and accountability. In the pre-LLM era, "make it pop" was a phrase reserved for clueless clients. Now, it’s becoming the standard operating procedure inside engineering teams.
The "Do It Well, You Figure It Out" Fallacy I see managers—even those with engineering backgrounds—who are terrified of being held accountable for their own bad ideas. They hide behind vagueness. They use tools like Claude Code as a shield to bypass technical debt discussions.
When an engineer spends days fixing a half-baked requirement and managing technical constraints, the feedback isn't "Thank you for the due diligence." Instead, it’s: "See? It was possible after all. Why did you push back so hard? LLMs could've done it in seconds." This is gaslighting. They want the output of a senior engineer while providing the input of a garbage prompt.
The Death of Articulation LLMs accept "garbage in" and provide "plausible out." This has become a drug. People are losing the ability to articulate their own thoughts. They throw a mess of words at you and expect a miracle. If this continues, we aren't just looking at bad software; we’re looking at a breakdown of professional sanity.
I’ve felt the symptoms myself. Lately, I’ve caught myself thinking, "Explaining this to my team is a waste of 'communication cost.' I’d rather just pay for more API tokens and do it myself."
But we must remember: A high-functioning team is not a collection of prompt engineers. True teamwork is exponentially more efficient than a lone developer with an LLM. We cannot afford to lose the art of talking to each other.
Ask HN: Builder.ai ($1B Microsoft-backed AI company) who's looking at the assets?
Builder.ai raised $450M from Microsoft and the Qatar Investment Authority. Peak valuation $1B+. Filed insolvency May 2025.
Administrator: Alvarez & Marsal (Jul 2025)
Assets available:
- builder.ai domain ($50K-$200K est.)
- Natasha AI platform ($100K-$500K est.)
- Full source code ($500K-$5M est.)
- Enterprise clients: NBCUniversal, Fujitsu, Virgin Unite ($50K-$300K est.)
Total estimated: $830K-$6.65M+
Full intelligence report available covering collapse timeline, assets, administrator contacts, acquisition guide.
Report: selar.com/s2121g2629
I don't need AI to build me a new app. I need it to make Jira bearable
Last week I asked Claude to build me a Jira sidebar that shows cross-project dependency graphs — the kind Jira buries across 4 clicks and 3 page loads. 4 prompts. It works inside my actual Jira. It just uses the Claude Chrome extension, which injects a panel into the page I already have open.
And I keep thinking: why isn't everyone doing this?
The entire AI coding conversation is about building new apps from scratch. Cool. But I don't need a new app. Most people spend their workday inside apps they didn't choose: Jira, Salesforce, Workday, ServiceNow, etc. These tools are not going anywhere. My company chose them in 2019 and they're entrenched until at least 2029.
A Chrome extension just reads what's already in the DOM and augments it.
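For anyone curious about the moving parts, a minimal Manifest V3 content-script setup that reads and augments a page looks roughly like this (the match pattern and file names are illustrative; this is not the Claude extension's actual manifest):

```json
{
  "manifest_version": 3,
  "name": "Jira Dependency Panel (sketch)",
  "version": "0.1",
  "content_scripts": [
    {
      "matches": ["https://*.atlassian.net/*"],
      "js": ["panel.js"]
    }
  ]
}
```

Everything else lives in `panel.js`: query the ticket data already present in the DOM, build the panel element, and append it to the page. No server, no API keys, no changes to the hosted app.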
Is there a fundamental reason this can't work at scale that I'm not seeing? Why isn't Claude's Chrome extension catching more attention?
36yo: Career at home vs. Simple life abroad?
I am 36 years old, single, and currently unemployed, living with my parents in my home country. I am at a point that might define the next decade of my life. I am struggling with a choice between two paths that offer completely different types of security.
Option A: Relocating to Southern Europe (Portugal)
The Income: A low-skill remote role (Content Analysis) with night shifts (PST hours), paying ~1100 EUR. I also have some passive income to supplement this.
The Lifestyle: Living in a studio or small apartment in a smallish Portuguese town, for around 800 EUR.
The Perspective: The move isn't about a specific career goal or a passport; it’s about the higher life standards, safety, and the stable social environment of Western Europe.
The Trade-off: I would be far from my aging parents. I would be working an unskilled job that doesn't build professional equity, potentially living in a studio at 36, which might be isolating during night shifts.
Option B: Staying in my Home Country (Ankara, Turkey)
The Job & Security: A Finance/Accounting role for a SME. I own my apartment here, so I have no housing costs.
The Professional Play: Pursuing a CPA-equivalent certification. This is a 3-year commitment of internships and exams, leading to legal signing authority and the ability to open my own practice later on with adequate experience and networks.
The Context: Turkey is facing economic instability, high inflation, and political turmoil.
The Trade-off: While I would be near my parents and building a protected professional title, I would be staying in a high-stress, unpredictable environment.
The Financial Weight:
I have already spent roughly 10k EUR on the relocation process for Option A (visas, consultants, etc.).
The Dilemma:
One path offers a prestigious, recession-proof career in a struggling, unstable country. The other offers a simple, comfortable life with 'okay' standards in a stable country, but with no professional growth.
At 36, is it wiser to invest 3 years in a professional license to root myself, or to take the jump for a better quality of life even if the work is menial?
What would you do?
THANK YOU!
Ask HN: Who Is Using XMPP?
Hello,
Are you using XMPP?
If so, what are your favorite servers to connect?
Super Editor – Atomic file editor with automatic backups (Python and Go)
I built this after getting frustrated with unsafe file operations in automation workflows.
Key features:
• Atomic writes (no partial/corrupted files)
• Automatic ZIP backups before every change
• Regex and AST-based text replacement
• 1,050 automated tests with 100% pass rate
• Dual implementation (Python + Go, Go is 20x faster)
Use cases:
• CI/CD pipelines that modify config files
• Automated refactoring scripts
• Any workflow where file corruption would be catastrophic
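The package's internals aren't shown in the post, but the standard atomic-write pattern the feature list describes (temp file, fsync, rename) is roughly:

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write data to path atomically: readers see either the old
    file or the complete new one, never a partial write."""
    dirname = os.path.dirname(os.path.abspath(path))
    # Temp file must live on the same filesystem for rename to be atomic.
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes are durable before the swap
        os.replace(tmp, path)     # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)            # clean up the temp file on any failure
        raise
```

A crash at any point leaves either the old file or the new one fully in place, which is the property the "no partial/corrupted files" bullet is claiming.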
PyPI: https://pypi.org/project/super-editor/
GitHub: https://github.com/larryste1/super-editor
Would love feedback from the HN community!
Ask HN: How do you handle duplicate side effects when jobs, workflows retry?
Quick context: I'm building background job automation and keep hitting this pattern:
1. Job calls external API (Stripe, SendGrid, AWS)
2. API call succeeds
3. Job crashes before recording success
4. Job retries → calls API again → duplicate
Example: process refund, send email notification, crash. Retry does both again. Customer gets duplicate refund email (or worse, duplicate refund).
I see a few approaches:
Option A: Store processed IDs in a database.
Problem: The race between "check DB" and "call API" can still duplicate.

Option B: Use API idempotency keys (Stripe supports this).
Problem: Not all APIs support it (legacy systems, third-party).

Option C: Build a deduplication layer that checks the external system first.
Problem: Extra latency, extra complexity.
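One way to narrow Option A's race is to make the claim itself atomic: insert the job ID under a UNIQUE constraint before calling the API, so check-and-claim is a single operation. A sketch, assuming a SQLite-style store (the table name and schema are illustrative):

```python
import sqlite3

def process_once(db: sqlite3.Connection, job_id: str, side_effect) -> bool:
    """Run side_effect at most once per job_id. The UNIQUE insert is an
    atomic claim: whichever worker inserts first wins, later attempts get
    IntegrityError and skip. Note this turns at-least-once into
    at-most-once - a crash after the claim but before side_effect loses
    the effect instead of duplicating it, so pair this with the API's
    own idempotency key (Option B) when one exists."""
    try:
        with db:  # commits on success, rolls back on exception
            db.execute("INSERT INTO processed (job_id) VALUES (?)", (job_id,))
    except sqlite3.IntegrityError:
        return False  # already claimed by an earlier attempt
    side_effect()
    return True

# One-time setup (illustrative schema):
#   db.execute("CREATE TABLE processed (job_id TEXT PRIMARY KEY)")
```

The same pattern works on Postgres with `INSERT ... ON CONFLICT DO NOTHING` and checking the affected-row count.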
What do you do in production? Accept some duplicates? Only use APIs with idempotency? Something else?
(I built something for Option C, but trying to understand if this is actually a common-enough problem or if I'm over-engineering.)