Ask HN: How can we solve the loneliness epidemic?
Countless voiceless people of all ages sit alone every day with no one to talk to, and they don't feel they can join any local groups. So they sit on social media all day when they're not at work or school. How can we solve this?
Ask HN: Share your personal website
Hello HN! I am putting together a community-maintained directory of personal websites at <https://hnpwd.github.io/>. More details about the project can be found in the README at <https://github.com/hnpwd/hnpwd.github.io#readme>.
As you can see, the directory currently has only a handful of entries. I need your help to grow it. If you have a personal website, I would be glad if you shared it here. If your website is hosted on a web space where you have full control over its design and content, and if it has been well received in past HN discussions, I might add it to the directory. Just drop a link in the comments. Please let me know if you do not want your website to be included in the directory.
Also, I intend this to be a community-maintained resource, so if you would like to join the GitHub project as a maintainer, please let me know either here or via the IRC link in the README.
By the way, see also 'Ask HN: Could you share your personal blog here?' - https://news.ycombinator.com/item?id=36575081 - July 2023 - (1014 points, 1940 comments). In this post, the scope is not restricted to blogs though. Any personal website is welcome, whether it is a blog, digital garden, personal wiki or something else entirely.
UPDATE: It is going to take a while to go through all the submissions and add them. If you'd like to help with the process, please send a PR directly to this project: https://github.com/hnpwd/hnpwd.github.io
Ask HN: One IP, multiple unrealistic locations worldwide hitting my website
Background: I manage an ecommerce website. Recent bot traffic is up. Most traffic can be traced to one or two IP addresses with hundreds of requests per day. These IP addresses don't have DNS records for reverse lookup, and when I map the requests in Cloudflare, one address shows up as requesting from different data centers all over the US. What is going on here? Source IP example: 173.245.58.0
Chicago, United States (ORD): 340 requests
San Jose, United States (SJC): 330 requests
Los Angeles, United States (LAX): 310 requests
Atlanta, United States (ATL): 310 requests
Dallas-Fort Worth, United States (DFW): 290 requests
Newark, United States (EWR): 280 requests
Washington, United States (IAD): 230 requests
Miami, United States (MIA): 210 requests
Boston, United States (BOS): 140 requests
Singapore, Singapore (SIN): 130 requests
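For what it's worth, here is roughly how I've been checking reverse DNS on the source addresses (a minimal Python sketch; the address below is the example from above, and nothing resolves):

    import socket

    suspect_ips = ["173.245.58.0"]  # example source address from the stats above

    for ip in suspect_ips:
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
            print(f"{ip} -> {hostname}")
        except socket.herror:
            # No PTR record, which matches what I'm seeing for these addresses.
            print(f"{ip} -> no reverse DNS record")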
Thanks for any ideas.
Ask HN: How are you doing RAG locally?
I am curious how people are doing RAG locally, with minimal dependencies, for internal code or complex documents.
Are you using a vector database, some type of semantic search, a knowledge graph, a hypergraph?
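For context, the minimal-dependency baseline I've been comparing against is roughly this (a sketch assuming sentence-transformers plus brute-force cosine similarity; the model name and chunks are just examples):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    # Embed document chunks once, then answer queries by cosine similarity.
    model = SentenceTransformer("all-MiniLM-L6-v2")  # example model, runs locally

    chunks = [
        "def connect(host): ...",
        "The retry policy uses exponential backoff.",
        "Invoices are stored as JSON blobs in S3.",
    ]
    chunk_vecs = model.encode(chunks, normalize_embeddings=True)

    def retrieve(query, k=2):
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = chunk_vecs @ q  # cosine similarity, since vectors are normalized
        top = np.argsort(-scores)[:k]
        return [(chunks[i], float(scores[i])) for i in top]

    print(retrieve("how do retries work?"))

Curious where people feel this stops being enough and a real vector database, knowledge graph, or hypergraph starts to pay off.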
Ask HN: What did you find out or explore today?
Doesn't matter what domain, or how big or small.
Tell HN: Execution is cheap, ideas matter again
I had an experience yesterday launching on Show HN that really threw me. The product triggered people's "privacy sense" immediately.
My first reaction was defensive. I took it personally. I thought: Do you really think I’m a scammer? I pour my soul into meticulously crafting products to delight users, not to trick them. Why would I trash all that effort and disrespect my own goals by doing something as stupid as stealing data? It felt insulting that people assumed malice when I was just trying to build something useful.
But after sitting with it, I realized those initial comments—the ones I wanted to dismiss as paranoia—were actually right. Not about me, but about the environment we operate in.
There are enough shady companies, data brokers, and bad actors out there who abuse user trust with impunity. We’ve all seen big corporations bury invasive tracking in their terms of service. As a builder, I don't operate in that world; I’m just focused on making things work. But for users, that betrayal is their baseline reality. They have been trained to expect the worst.
I realized I hadn’t factored that into the launch. I didn’t explicitly state "Your data remains yours" because to me, it was obvious. Why would I want your data? But in an industry that has systematically mined, stolen, and abused user boundaries for a decade, you can’t blame people for checking for the exits. They aren't being "ninnies"; they are being wise.
If I were using a new tool that had access to my workflow, I would want explicit assurance that my IP wasn't being siphoned off. I just forgot to view my own product through the lens of a weary stranger rather than the optimistic builder who wrote the code.
This is especially true now because the landscape has changed. There was an old PG essay about how ideas are cheap and execution is everything. That’s shifting. AI has made execution cheap. That means ideas are prime again.
Because execution is distributed and fast, first-mover advantage, brand, and reputation matter more than ever. Your prompts and your workflow are your IP.
So, privacy isn't just a compliance box; it's a competitive requirement. I don't think we need full-NSA-level paranoia for every tool, but we do need to recognize the environment we are launching into. The "security purists" were right to push back: I didn't think about that aspect enough, and in 2025, trust is the only currency that matters.
Ask HN: Is Codex login down for all workspace (non-personal) users?
OpenAI rolled out a shiny new version of codex (the CLI) that finally supports device code authentication, so it's finally no longer awkward to use it in headless environments. And they appear to have disabled the old non-headless variant in the meantime.
But trying to use it in a workspace says "Please contact your workspace admin to enable device code authentication". It's not obvious that this setting actually exists, and OpenAI's chat support says, and I quote, "The latest updates require device code authentication, which works for personal ChatGPT accounts but does not work for workspace (Business/Enterprise/Edu) users."
An actual human at OpenAI closed the relevant issue as "not planned": https://github.com/openai/codex/issues/9253
Did OpenAI really just decide that it doesn't need to be possible to use the codex CLI on a paid workspace plan?
Ask HN: What are your best purchases under $100?
Curious what items under $100 have made your life better or had any meaningful impact.
Revival of this thread from 6 years ago (https://news.ycombinator.com/item?id=23363396). Thought it would be fun to have new answers to this :)
Ask HN: What is the best way to provide continuous context to models?
Given the research done to date, what do you think is the best way to provide context to a model? Are there any articles that go into depth on how Cursor does it?
How do context collation companies work?
Ask HN: AI Music Covers in 2026?
I asked this back in 2022:
https://news.ycombinator.com/item?id=32723101
What's the latest this year?
I'm not looking for Suno-generated AI music; that type of AI slop is cheap and easy. I'm looking for amazing voice + instrumentation cloning paired with human creative input.
Ask HN: How to make spamming us uncomfortable for LinkedIn and friends?
I got an email from LinkedIn:
> ## colleagues from your company already solved LinkedIn puzzle games
Are you f%%n serious, LinkedIn? This is freaking spam from "LinkedIn games".
The question is not how to stop it by unsubscribing, but how to make it painful for them to spam us?
Ask HN: How do you safely give LLMs SSH/DB access?
I have been using Claude Code for DevOps-style tasks like SSHing into servers, grepping logs, inspecting files, and querying databases.
Overall it's been great. However, I find myself having to review every single command, a lot of which are repetitive. It still saves me a ton of time, but it's quickly becoming a bit tedious.
I wish I could give the agent some more autonomy, like giving it a list of pre-approved commands or actions that it is allowed to run over SSH.
For example:
OK: ls, grep, cat, tail
Not OK: rm, mv, chmod, etc
OK: SELECT queries
Not OK: INSERT, DELETE, DROP, TRUNCATE
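A minimal sketch of the kind of gate I'm imagining, wrapping whatever runs the agent's shell commands and SQL (the lists and function names here are purely illustrative, not a real setup):

    import shlex

    ALLOWED_SHELL = {"ls", "grep", "cat", "tail"}
    ALLOWED_SQL_PREFIXES = ("select",)

    def is_allowed_shell(command):
        # Reject pipelines/compound commands outright, then check the binary.
        if any(tok in command for tok in ["|", ";", "&&", ">", "`", "$("]):
            return False
        parts = shlex.split(command)
        return bool(parts) and parts[0] in ALLOWED_SHELL

    def is_allowed_sql(query):
        q = query.strip().lower()
        return q.startswith(ALLOWED_SQL_PREFIXES) and ";" not in q.rstrip(";")

    # The agent's tool wrapper would call these before executing anything:
    assert is_allowed_shell("tail -n 100 /var/log/nginx/error.log")
    assert not is_allowed_shell("rm -rf /tmp/cache")
    assert is_allowed_sql("SELECT id, status FROM orders LIMIT 10")
    assert not is_allowed_sql("DELETE FROM orders")

Even with something like this, I'd probably still point the agent at a read-only DB user and a low-privilege SSH account as a second layer.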
Has anyone successfully or satisfactorily solved this? What setups have actually worked for you, and where do you draw the line between autonomy and risk?
Ask HN: Why do AI code editors suck at closing tags?
Ask HN: Anyone else finding it impossible to land a job?
I rang in the new year this January with 1.5 years of unemployment. (yippee...)
It's not like I haven't been applying to jobs. At this point, it's actually part of my daily routine to log on to various job sites every morning and go on an application spree. But, usually, I never hear back from any jobs I apply to, or if I do hear back, it's a rejection email 3 months later.
In total, I've had 1 interview through traditional job applications in the past year, and 2 interviews from talking to people on HN. (thanks HN!)
This is just crazy to me. Back when I properly got into the industry (circa 2022) I could land an interview every couple of weeks. But now, there's nothing. As far as I can tell, my resume and CV are both good (I've received feedback from several different people), and I think I'm OK at writing cover letters. It sort of feels like nobody is looking at my applications or anything. I'm curious if anyone has some insight into this beyond "there's a recession"?
It's getting pretty bad out here y'all, I'm running out of instant ramen and my wife's boyfriend says I have to stop asking him for money.
Ask HN: Estimating % of devs using coding assistants
Hello all, I discovered two months ago how helpful AI agents are. On HN, every day there are new articles about Claude Code or its friends. I feel like this is a very hot topic, and I'm happy to learn a bit more every day.
Yet when I ask my colleagues or friends, I feel very alone: the developers around me are only asking Copilot a few questions, nothing more.
HN is a microcosm of geeks/early adopters. How is it around you? What percentage of people around you have adopted coding agents? Is there reluctance to use AI?
Ask HN: What to teach my kid if AI does math and CS?
I am home-schooling my kid. He shares my interest in math and CS, and he's really good at it.
I've been cheering him on a path towards academic success in these 2 fields. In parts because I am not much use at anything else, in parts because he likes it, in parts because that's where I found some measure of success in life.
However, I can't go through a day without reading another article about how AI solved an Erdős problem previously unsolved by humans[1], is getting gold medals at International Mathematical Olympiads[2], is replacing coders at Microsoft[3] and even architects[4].
This makes me really question what I am doing.
Sure, people tell me what matters is training your brain, it's never about the skill itself, but learning to learn, etc... Maybe that's right, but somehow, I can't shake the feeling that I am setting my kid on a path that leads directly into a solid brick wall.
What are the alternatives though?
Play video games all day long and wait for Universal Basic Income to kick in?
Encourage him to pivot towards humanities subjects he has no strong interest in, that I would not be great at teaching, and that I was taught from a young age do not lead to great job opportunities?
Forget math and CS, and teach him how to build and run businesses, banning the reading of any article that shows AI might also be taking this over?
Close my eyes, do not listen to this feeling inside, and continue to teach python generators and linear algebra?
Does anyone have any suggestion, or random comment?
[1]: https://officechai.com/ai/gpt-5-2-and-harmonic-appear-to-have-autonomously-solved-an-erdos-problem-that-had-been-unsolved-by-humans-thus-far/
[2]: https://intuitionlabs.ai/articles/ai-reasoning-math-olympiad-imo
[3]: https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-30-of-the-companys-code-was-written-by-ai/
[4]: https://x.com/rakyll/status/2007239758158975130
Ask HN: Distributed SQL engine for ultra-wide tables
I ran into a practical limitation while working on ML feature engineering and multi-omics data.
At some point, the problem stops being “how many rows” and becomes “how many columns”. Thousands, then tens of thousands, sometimes more.
What I observed in practice:
- Standard SQL databases usually cap out around ~1,000–1,600 columns.
- Columnar formats like Parquet can handle width, but typically require Spark or Python pipelines.
- OLAP engines are fast, but tend to assume relatively narrow schemas.
- Feature stores often work around this by exploding data into joins or multiple tables.
At extreme width, metadata handling, query planning, and even SQL parsing become bottlenecks.
I experimented with a different approach:
- no joins
- no transactions
- columns distributed instead of rows
- SELECT as the primary operation
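As a toy illustration of what "columns distributed instead of rows" means for the access pattern (pure Python, nothing like the real engine):

    # Each column lives as its own independent array (in the real system,
    # spread across servers). A SELECT just gathers the requested columns;
    # the total width of the table never has to be touched.
    table = {f"feature_{i}": list(range(5)) for i in range(100_000)}  # 100k columns, 5 rows

    def select(columns, row_limit=None):
        return {c: table[c][:row_limit] for c in columns}

    print(select(["feature_0", "feature_42", "feature_99999"], row_limit=3))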
With this design, it’s possible to run native SQL selects on tables with hundreds of thousands to millions of columns, with predictable (sub-second) latency when accessing a subset of columns.
On a small cluster (2 servers, AMD EPYC, 128 GB RAM each), rough numbers look like:
- creating a 1M-column table: ~6 minutes
- inserting a single column with 1M values: ~2 seconds
- selecting ~60 columns over ~5,000 rows: ~1 second
I’m curious how others here approach ultra-wide datasets. Have you seen architectures that work cleanly at this width without resorting to heavy ETL or complex joins?
Ask HN: Iran's 120h internet shutdown, phones back. How to stay resilient?
It has been 120 hours (5 days) since the internet shutdown in Iran began. While international phone calls have started working again, data remains blocked.
I am looking for technical solutions to establish resilient, long-term communication channels that can bypass such shutdowns. What are the most viable options for peer-to-peer messaging, mesh networks, or satellite-based solutions that don't rely on local ISP infrastructure?
Ask HN: How do you realistically prepare for retirement while working in tech?
For those who’ve thought seriously about it, what actually mattered most: savings rate, investing strategy, lifestyle choices, or something else?
Ask HN: Any real prompt injections in the wild?
While everyone seems to freak out about the potential danger of prompt injections, has anyone ever encountered a real prompt injection in the wild yet?
Ask HN: What are you working on? (January 2026)
What are you working on? Any new ideas that you're thinking about?
Ask HN: A pattern we noticed in how website leads are handled
We started noticing a recurring pattern across different websites.
Visitors arrive with clear intent. They interact with the site. They leave without ever being contacted.
At first, it looked like a traffic or copy problem. But it wasn’t.
The real issue was response latency combined with qualification cost. Humans can’t respond instantly. And responding to every lead doesn’t scale.
By the time someone replies:
– The visitor is gone
– Or they were never a good fit
We ended up building an AI layer to sit between traffic and sales (rough sketch below):
– Qualify intent instantly
– Filter low-quality conversations
– Route only real buyers to humans
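Stripped down, the layer is roughly this shape (a sketch; the keyword check is a stand-in for whatever LLM call you'd actually use to classify intent):

    from dataclasses import dataclass

    @dataclass
    class Lead:
        message: str
        email: str

    def classify_intent(message):
        # Stand-in for an LLM call; returns "buyer", "curious", or "spam".
        if "pricing" in message.lower() or "quote" in message.lower():
            return "buyer"
        return "curious"

    def handle_lead(lead):
        intent = classify_intent(lead.message)
        if intent == "buyer":
            return f"routed to sales: {lead.email}"   # real buyers reach a human fast
        if intent == "curious":
            return f"auto-replied: {lead.email}"      # answered instantly, no human cost
        return "dropped"                              # spam never reaches anyone

    print(handle_lead(Lead("Can I get a quote for 50 seats?", "cto@example.com")))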
Curious if others here have observed the same pattern, and how you’re handling it.
The $LANG Programming Language
This afternoon I posted some tips on how to present a new* programming language to HN: https://news.ycombinator.com/item?id=46608577. It occurred to me that HN has a tradition of posts called "The {name} programming language" (part of the long tradition of papers and books with such titles) and it might be fun to track them down. I tried to keep only the interesting ones:
https://news.ycombinator.com/thelang
Similarly, Show HNs of programming languages are at https://news.ycombinator.com/showlang.
These are curated lists so they're frozen in time. Maybe we can figure out how to update them.
A few famous cases:
The Go Programming Language - https://news.ycombinator.com/item?id=934142 - Nov 2009 (219 comments)
The Rust programming language - https://news.ycombinator.com/item?id=1498528 - July 2010 (44 comments)
The Julia Programming Language - https://news.ycombinator.com/item?id=3606380 - Feb 2012 (203 comments)
The Swift Programming Language - https://news.ycombinator.com/item?id=7835099 - June 2014 (926 comments)
But the obscure and esoteric ones are the most fun.
(* where 'new' might mean old, of course - https://news.ycombinator.com/item?id=23459210)
Ask HN: Audio analysis models, how to train to learn sound patterns?
Hello all, I'm looking for sound models that understand sound patterns and can be used locally.
Hoping I can use them on-device, e.g. with iOS CoreML.
Before I get started, it doesn't hurt to ask whether anyone has done this before or has a sound model that detects patterns: anger, excitement, arousal, laughing, crying, human emotions in general.
I found this: https://medium.com/@narner/classification-of-sound-files-on-...
But maybe there is a better way or something already exists.
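The rough pipeline I have in mind is classic feature extraction plus a small classifier, something like the sketch below (assuming librosa and scikit-learn; the labels and file paths are made up, and I haven't verified the Core ML conversion step):

    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier

    LABELS = ["anger", "excitement", "laughing", "crying"]  # made-up label set

    def features(path):
        # Load audio and summarize it as mean MFCCs (a cheap, common baseline).
        y, sr = librosa.load(path, sr=16000, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)

    # Hypothetical labeled clips: [(path, label), ...]
    train = [("clips/angry_01.wav", "anger"), ("clips/laugh_01.wav", "laughing")]

    X = np.stack([features(p) for p, _ in train])
    y = [LABELS.index(label) for _, label in train]

    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    print(LABELS[clf.predict([features("clips/unknown.wav")])[0]])

From there, coremltools can apparently convert scikit-learn models for on-device use, but that's exactly the part I haven't tested.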
Thanks
Ask HN: Why does Google still provide an open redirect for phishers?
Google offers a page at https://google.com/url?q=https://news.ycombinator.com/item?id=46613684 that has worked as an open redirect to any site since at least March 2025 [1].
As such, it often gets used by phishers to piggy-back on Google's domain reputation, fooling either humans who safety-squint the domain name or systems that allowlist Google.
Google has often had open redirect problems, for example around AMP, but these seemed to be unintentional and were removed after some time. However, this google.com/url naming scheme almost seems intentional.
This is in contradiction with their own advice (2009) around open redirects [2].
Does anyone know why Google keeps this working, thereby facilitating phishers?
[1] https://www.intego.com/mac-security-blog/scammers-using-new-trick-in-phishing-text-messages-google-redirects/
[2] https://developers.google.com/search/blog/2009/01/open-redirect-urls-is-your-site-being
Ask HN: For those of you building AI agents, how have you made them faster?
Because of the coordination across multiple systems + chaining LLM calls, a lot of agents today can feel really slow. I would love to know how others are tackling this:
- How are you all identifying performance bottlenecks in agents?
- What types of changes have gotten you the biggest speedups?
For us, we vibe-coded a profiler to identify slow LLM calls; sometimes we could then switch to a faster model for that step, or we'd realize we could shrink the input tokens by eliminating unnecessary context. For steps requiring external access (browser usage, API calls), we've moved to fast-start external containers + thread pools for parallelization. We've also experimented a bit with UI changes to mask some of the latency.
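The profiler itself is nothing fancy; conceptually it's just per-step wall-clock timing around every LLM and tool call, roughly like this sketch (the step names and sleeps are placeholders for real calls):

    import time
    from collections import defaultdict
    from contextlib import contextmanager

    timings = defaultdict(list)

    @contextmanager
    def step(name):
        # Record wall-clock time for each named step in the agent loop.
        start = time.perf_counter()
        try:
            yield
        finally:
            timings[name].append(time.perf_counter() - start)

    # Example agent turn with placeholder work:
    with step("plan_llm_call"):
        time.sleep(0.05)   # stand-in for the planning model call
    with step("browser_tool"):
        time.sleep(0.02)   # stand-in for an external tool call

    for name, samples in sorted(timings.items(), key=lambda kv: -sum(kv[1])):
        print(f"{name}: total {sum(samples):.2f}s over {len(samples)} calls")

Once the slow steps are visible, the decisions above (faster model for a step, trimming context, parallelizing external calls) become much easier to make.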
What other performance enhancing techniques are people using?
Where does data help in real estate – and where does it fail?
I’ve seen data work well for things like pricing, timing, and market trends. But it often falls short when it comes to lived experience, layout quality, or long-term usability. Curious where others have found data genuinely helpful — and where it breaks down.
Ask HN: How to overcome the limit of roles in LLMs
Our use case is not uncommon: we are developing tools so that people can integrate LLMs into their e-commerce stores.
But there are some interesting challenges that I feel can't be solved unless inference providers allow us to include the concept of additional entities in a conversation.
As far as I know, the three most basic roles shared across all providers are:
- System
- Assistant
- User
That's fine and it allows for simple conversational-based approaches (ChatGPT, Claude, Gemini, etc). However in our use case we allow our customers (not the final user who is talking with the AI) to configure the AI in different ways (personality, RAG, etc), which poses a problem.
If we inject those customer settings in the System prompt then that's a risk because there might be conflicting prompts with our internal rules. So the easiest option is to "clean" the customer prompts before injecting them, but that feels hacky and just adds one more level of indirection. Cleaning the prompt and injecting it with common patterns like XML tags seems to help a bit but still feels extremely risky for some reason.
Injecting it into the assistant or user messages also seems flaky and prone to prompt injection.
Creating a fake tool call and result like "getPersonalityConfiguration" seems to work the best, from our testing it is treated as something between the System and Assistant roles. And our top system prompt rules are still respected while allowing the customer some freedom to configure the AI.
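Concretely, that trick looks roughly like this in an OpenAI-style messages array (a sketch; the tool name and payload are our own and purely illustrative):

    import json

    customer_config = {
        "name": "John",
        "store": "Foo",
        "personality": "friendly, concise, never discusses competitors",
    }

    messages = [
        {"role": "system", "content": "Your rules are: you will never use foul language..."},
        # Fake tool call + result: the model treats this as context that sits
        # somewhere between System and Assistant, without letting customer text
        # override our top-level system rules.
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_personality_1",
                "type": "function",
                "function": {"name": "getPersonalityConfiguration", "arguments": "{}"},
            }],
        },
        {
            "role": "tool",
            "tool_call_id": "call_personality_1",
            "content": json.dumps(customer_config),
        },
        {"role": "user", "content": "Do you have snowboards in stock?"},
    ]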
The problem comes when you need to add more parties to what is essentially a two-entity conversation. Sometimes we want external agents to chime in on a conversation (via subagents or other methods) and there is no good way to do that AFAIK. The model gets occasionally confused and starts mixing up who is who.
One of our typical scenarios that we need to model:
System: Your rules are: You will never use foul language...
Store owner: You are John the customer agent for store Foo...
User: Do you have snowboards in stock?
Assistant->User: Let me check with the team. I'll get back to you soon.
System->Team: User is asking if we have snowboards in stock. Do we?
Team: We do have snowboards in stock.
Team->User: Yes we do have snowboards in stock!
User: Perfect, if I buy them will the courier send it to my country? [country name].
Assistant->User: Let me check, I need to see if our courier can ship a snowboard to your country.
Assistant->Third party logistics: I have a user from [country] interested in buying a snowboard. The dimensions are X by Y and the weight is Z. We would send it from our logistics center located at [address].
Third party logistics -> Assistant: Yes we can do it, it will be 29.99 for the shipping.
Assistant->User: Yes they can ship it to [country] but it does incur a 29.99 extra charge...
I omitted tool calls and responses, but that's basically the gist of it. Spawning sub-agents that have the context of the main conversation works, but at some point it is limiting (we need to copy all personality traits and relevant information via summarization, or inject the conversation in a way that won't confuse the sub-agent). It feels like an anti-pattern, fighting the intended use case of LLMs, which seems to be focused on conversation between two entities with the occasional piece of external information coming in through System or tool calling.
It would be amazing if we could add custom roles to model messages, alongside the special cases like agent or assistant.
Has anyone worked with similar problems? How did you solve it? Is this solved in the model lab or at the inference provider level (post-training)?
Ask HN: ADHD – How do you manage the constant stream of thoughts and ideas?
I have ADHD. I think. Pretty sure. I have thoughts, ideas, projects, concepts, links, things to read... fired at my brain all day every day. I can go deep on a topic for hours, but then be hit by a barrage of micro ideas. I really struggle to stay on track and focus. Oh and I run a business, manage people, try to make a profit. It's hard. And kids. And life?
I think there is a founder/ADHD thing. Paul Graham thinks so. Maybe even a tech person angle. What have other people experienced?
And how do others cope? I don't really know this world. I do know that my old boss once called me a "flagitating laser beam". I think he meant distracted. I use a bunch of systems to cope. For a long time lists, and then Asana. Asana ruled my life. I just built my own thing to capture tasks and projects, but also knowledge. Not sure if it will help; we will see.
So tell me:
- Who else feels this way?
- How do you manage?
- Oh and how do you switch off? That is hard.
Ask HN: Trying to find a website featured on HN that listed restaurants in NYC
Here's a niche request: last year I stumbled upon a personal website on HN, about a tech-related topic. The website also had a great section on NYC restaurants, mostly Asian. I'm trying to find it again, but to no avail. The design was fairly minimalist. Does it ring a bell for anyone?
I have tried using the API to get all unique URLs posted to HN last year, then crawling the websites for pages matching relevant keywords (roughly the approach sketched below), but to no avail.
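For reference, the collection step looked roughly like this (a sketch using the public Algolia HN API; the date range is approximate and the crawling part is omitted):

    import time
    import requests

    # Collect story URLs submitted to HN during 2025. The API caps results
    # per query, so we walk backwards through time by tightening the
    # created_at_i upper bound after each request.
    START, END = 1735689600, 1767225600  # 2025-01-01 .. 2026-01-01 UTC, roughly

    urls, upper = set(), END
    while upper > START:
        resp = requests.get(
            "https://hn.algolia.com/api/v1/search_by_date",
            params={
                "tags": "story",
                "numericFilters": f"created_at_i>{START},created_at_i<{upper}",
                "hitsPerPage": 1000,
            },
            timeout=30,
        ).json()
        hits = resp.get("hits", [])
        if not hits:
            break
        for hit in hits:
            if hit.get("url"):
                urls.add(hit["url"])
        upper = min(h["created_at_i"] for h in hits)  # slide the window back
        time.sleep(0.5)  # be polite to the API

    print(len(urls), "unique URLs collected")

The next step was fetching each URL and grepping for restaurant-related keywords, which is where I ran out of luck.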