Ask HN: Do you also "hoard" notes/links but struggle to turn them into actions?
Hi HN — I’m exploring an idea and would love your feedback.
I’m a builder and user of Obsidian, validating a concept called Concerns. Today it’s only a landing page + short survey (no product yet) to test whether this pain is real.
The core idea (2–3 bullets):
- Many of us capture tons of useful info (notes/links/docs), but it rarely becomes shipped work.
- Instead of better “organization” (tags/folders), I’m exploring an “action engine” that:
1. detects what you're actively targeting/working on ("active projects")
2. surfaces relevant saved material at the right moment
3. proposes a concrete next action (ideally pushed into your existing task tool)
My own “second brain” became a graveyard of good intentions: the organizing tax was higher than the value I got back. I’m trying to validate whether the real bottleneck is execution, not capture.
Before writing code, I’m trying to pin down two things:
- Project context signals (repo/PRs? issues? tasks? calendar? a “project doc”?)
- How to close the loop: ingest knowledge → rank against active projects → emit a small set of next-actions into an existing todo tool → learn from outcomes (done/ignored/edited) and optionally write back the minimal state. The open question: what’s the cleanest feedback signal without creating noise or privacy risk? (explicit ratings vs completion events vs doc-based write-back)
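To make the "close the loop" pipeline concrete, here is one possible sketch. Everything in it is a placeholder assumption, not a design: the `Note`/`Project` shapes, the naive keyword-overlap ranking, and the action phrasing are all stand-ins for whatever context signals and ranking the survey settles on.

```python
from dataclasses import dataclass

@dataclass
class Note:
    title: str
    text: str

@dataclass
class Project:
    name: str
    keywords: set  # signals extracted from repo/PRs/issues/tasks/calendar

def rank_notes(notes, project):
    """Rank saved material by naive keyword overlap with an active project."""
    def score(note):
        words = set(f"{note.title} {note.text}".lower().split())
        return len(words & project.keywords)
    return [n for n in sorted(notes, key=score, reverse=True) if score(n) > 0]

def next_actions(notes, project, limit=3):
    """Emit a small set of concrete next-actions for an existing todo tool."""
    return [f"Review '{n.title}' for {project.name}"
            for n in rank_notes(notes, project)[:limit]]
```

The interesting part (learning from done/ignored/edited outcomes) is deliberately left out, since that feedback signal is exactly the open question.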
What I’m asking from you:
1. Where does your “second brain” break down the most?
capture / organization / retrieval / execution (If you can, share a concrete recent example.)
2. What best represents “active project context” for you today?
task project (Todoist/Things/Reminders)
issues/boards (GitHub/Linear/Jira)
a doc/wiki page (Notion/Docs)
calendar
"in my head"
Which one would you actually allow a tool to read?
3. What’s your hard “no” for an AI that suggests actions from your notes/links? (pick 1–2)
privacy/data retention
noisy suggestions / interruption
hallucinations / wrong suggestions
workflow change / migration cost
pricing
other
Ask HN: Notification Overload
I'm looking for tools or methods to better curate the deluge and cacophony of notifications, emails, texts, and phone calls I imagine we are all inundated with every day, in ever-increasing volume.
The amount of "notifications" I get every day is overwhelming to the point where I often decide to switch my phone to "silent", leave my phone in another room, and even turn it off for periods of time. The problem with this is that I miss important things and they often get buried.
I've spent hours and hours unsubscribing, deleting, uninstalling, toggling settings, but then I find myself reinstalling, resubscribing. It's just a mess, and exhausting to just think about.
The reason I'm writing this is partially to vent. I just realized that my closest friend's birthday was a few weeks ago. I had it in my calendar, but never saw the notification. Yes, I should be more organized, and yes, it's not the end of the world, but damnit, I get so much crap from this bionic appendage, and still I can't use this tool to help me remember important things.
It just seems like it's getting worse every year.
Hopefully this is helpful to others.
P.S. can we please stop with the "would you like all or some cookies" popup on every friggin website?
P.P.S. can websites stop asking for permission to invade my OS?
P.P.P.S. does anyone else ever want to run away and be an off-grid hermit?
Ask HN: Is understanding code becoming "optional"?
On Twitter, Boris Cherny (creator of Claude Code) recently said that nearly 100% of the code in Claude Code is written by Claude Code, and that he personally hasn’t written code in months. Another tweet, from an OpenAI employee, went: "programming always sucked [...] and I’m glad it’s over."
This "good riddance" attitude really annoys me. It frames programming as a necessary evil we can finally be rid of.
The ironic thing is that I’m aiming for something similar, just for different reasons. I also want to write less code.
Less code because code equals responsibility. Less code because "more code, more problems." Because bad code is technical debt. Because bugs are inevitable. Less code because fewer moving parts means fewer things can go wrong.
I honestly think I enjoy deleting code more than writing it. So maybe it’s not surprising that I’m skeptical of unleashing an AI agent to generate piles of code I don’t have a realistic chance of fully understanding.
For me, programming is fundamentally about building knowledge. Software development is knowledge work: discovering what we don’t know we don’t know, identifying what we do know we don’t know, figuring out what the real problem is, and solving it.
And that knowledge has to live somewhere.
When someone says "I don’t write code anymore," what I hear is: "I’ve shoved the knowledge work into a black box."
To me there’s a real difference between:
- knowledge expressed in language (which AI can produce ad nauseam), and
- knowledge that solidifies as connections in a human mind.
The latter isn’t a text file. It isn’t your "skills" or "beads." It isn’t hundreds of lines of Markdown slop. No. It’s a mental model: what the system is, why it’s that way, what’s safe to change, what leverage the abstractions provide, and where the fragile assumptions lie.
I’ve always carried a mental model of the codebase I’m working in. In my head it’s not "code" in the sense of language and syntax. It’s more like a "mind palace" I can step into, open doors, close doors, renovate, knock down a wall, add a new wing. It happens at a level where intuition and intellect blend together.
I'm not opposed to progress. Lately, with everything going on, I’ve started dividing code into two categories:
- code I don’t need to model in my head (low risk, follows established conventions, predictable, easy to verify), and
- code I can't help modelling in my head (business-critical, novel, experimental, or introduces new patterns).
I’m fine delegating the former to an AI agent. The latter is where domain knowledge and system understanding actually form. That’s where it gets interesting. That’s the fun part. And my "mind palace" craves to stay in sync with it.
Is the emerging notion that understanding code is somehow optional something you are worried about?
Ask HN: Junior getting lost
Hello those who still read forums.
I recently graduated from college and started working as a junior dev (trying to absorb as much knowledge from my senior colleagues as I can), and it seems the real world is a different story from college practice.
In college we were taught design patterns and all these layered responsibilities: domain, application, infrastructure, UI. The domain should never depend on the infrastructure or application layer, and so on. But one of the projects I got has a domain layer that depends on infrastructure, and another has the application layer referencing infrastructure directly, and I've been told this is the correct implementation... doh..
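For context, the textbook way to satisfy "domain never depends on infrastructure" is dependency inversion: the domain declares the interface it needs, and infrastructure implements it. A minimal sketch (the `Order`/repository names are purely illustrative):

```python
from abc import ABC, abstractmethod

# Domain layer: declares the port it needs; imports nothing from infrastructure.
class OrderRepository(ABC):
    @abstractmethod
    def save(self, order: dict) -> None: ...

def place_order(order: dict, repo: OrderRepository) -> dict:
    """Domain logic stays pure; persistence is injected from outside."""
    placed = {**order, "status": "placed"}
    repo.save(placed)
    return placed

# Infrastructure layer: depends on the domain abstraction, not vice versa.
class InMemoryOrderRepository(OrderRepository):
    def __init__(self):
        self.rows = []

    def save(self, order: dict) -> None:
        self.rows.append(order)
```

In practice, plenty of teams skip this indirection when there's only ever one implementation, which may be exactly the trade-off the senior devs are making.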
I think I was kind of good at listening to lectures, but now I'm doubting whether it was worth learning the stuff at all, lol, since it's so contested out there. I am, of course, in no position to question a senior dev, but what do you guys think: is it really normal that all the college "best practices" go straight into the trash bin, or am I just misunderstanding the real-world context?
AI has failed to replace a single software application or feature
I can’t name a single software application or software feature that has been made moot by AI. Zero. Take Excel as an example. Not only has AI failed to replace Excel in its entirety, it has also failed to replace any of its features. AI was simply appended as an additional feature in the form of an agentic chatbot. This has been the trend across the entire industry, and it’s why AI has failed to fundamentally transform any of our existing software applications.
Now you might ask: what about AI-native applications? Well, as it turns out, most of them are essentially clones of existing software with a chatbot slapped on top. Because of the error-prone nature of AI, any application that leverages it also has to surface all of the controls required to override all of its decisions. So you end up with a traditional software application plus AI.
AI promised to transform and even replace software applications, but all it did instead was augment them with an unreliable chatbot. All of the old fields and buttons are still there, but now there’s an additional text field that you can type into and hope for the best.
Waypoint 1.1, a local-first world model for interactive simulation
Over the last few weeks, world models have started to feel real for the first time. You can see coherent environments, long rollouts, and increasingly convincing visuals. At the same time, most of these systems are hard to run, hard to integrate, and trade interactivity for scale.
We started Overworld because we cared less about producing impressive videos and more about building worlds you can actually inhabit. That means low latency, continuous control, and systems that respond every time you act, not once per prompt.
Last week, we released Waypoint 1, a research preview of a real-time diffusion world model that runs locally. Next week, we’re releasing Waypoint 1.1 Small, which is designed to run on modern consumer GPUs and be easy to build on and modify.
Waypoint is built from scratch rather than fine-tuned from a large video model. We optimized heavily for control frequency, sparse attention, and fast inference so the system can maintain a persistent world state and respond to input at game-level frame rates. The goal was to make something developers can integrate today, not just watch as a demo.
We think this space will move fastest once world models follow a path similar to LLMs: local execution, open tooling, and fast community-driven iteration. Genie and similar systems show what’s possible at a massive scale. Our focus has been on making that future local and accessible.
We wrote more about the “immersion gap,” why interactivity matters more than visuals alone, and how we optimized the model in a recent blog post.
Code, demos, and release details are here: https://over.world/blog/the-immersion-gap
Ask HN: Should a software engineer have research exposure?
I am asking this question for my personal circumstances --- not a general statement about software engineering.
I am a CompSci senior focusing on ML. My university does not have applied research in ML, so doing ML in school (classes/research) is pretty much a one-way ticket to the theory/algorithms side of academia.
Last year, I had the epiphany that I am good at (and enjoy) solving problems by connecting components in a system instead of finagling a problem into a form where we can apply some mathematical law. Specifically, I have greatly enjoyed working with artists/UI/UX/frontend/non-tech people as their back-end counterpart. I have built data pipelines for MLEs, back-end for UI/UX/frontend designers, machine learning pipeline for BME researchers and projection/imagery software for artists.
I am pretty generalist and tool-agnostic, with more breadth than depth. That feels like software engineering.
That said, I do like to have an understanding of how things work and I have a decent tolerance for reading math. This is a really nerdy thing to say but I enjoyed deriving stuff like the convergence of gradient descent and I enjoyed real analysis. I also really enjoyed Nand2Tetris (open source course teaching you to build a minimal computer from NAND gates + compiler from OOP language to binary). It's extremely elegant to me, seeing the great design choices people made in the past. I feel like these are underappreciated in software engineering.
Right now, I have an opportunity to work with my RL professor, who has an amazing track record publishing at top conferences. I am really on the fence because his research is in RL algorithms and I had a very bad experience in my last algorithm research project somewhere else (I had a vague idea of what we were doing but nowhere near enough to make contributions). I am concurrently applying to jobs and Master's and I am pretty sure I will never touch this topic again if I go into industry after graduation.
I have these two questions: 1) Do I sound like the software engineers you know? What other roles do you think I am a good fit for? 2) Should I take this opportunity simply for research exposure? Do you think this is necessary in helping me keep up with trends as an applied practitioner in ML?
P.S. This is my first time posting on HN and this seems a lot longer than the average Ask HN post. I don't know if that's appropriate. Please lmk if I should go to a subreddit instead.
Thanks in advance if you read all that!
Ask HN: How do you reset an Apple ID?
So I'm trying to submit a resume to Apple, but to do so, I need an Apple ID. Okay. I try to sign up for one and discover my phone number has been used by someone else. I dropped into the local Apple Store (which supposedly has the ability to reset or create Apple IDs) but, spoiler alert, after two hours they couldn't figure out why my phone number won't work.
Does anyone know if there's a way to create an Apple ID w/o a phone number? Apple's public docs say it's impossible.
Maybe there's an email alias I can use to submit my resume instead of buying an iProduct?
I guess I could get a dirt cheap SIM for my (non-Apple) phone and use it for a day, just to sign up for a new Apple ID. But somehow this seems... wrong. It also seems unreasonable for a company to force me to buy a $1000 device just for the privilege of submitting my resume.
[edit: Forgot to mention, I own precisely zero iProducts.]
Ask HN: How do you market a side project?
I've got a side project I've worked on for a while and I'm happy with the engineering side, but I'm terrible at marketing. I've made a couple of reddit comments, shown friends who would benefit from my project, but what is the best way to get it out there?
AI creates over-efficiency. Organizations must absorb it
AI doesn’t just increase productivity: it creates *over-efficiency*.
Individuals and small teams can now generate decisions, options, and initiatives faster than existing organizations were designed to legitimize, coordinate, or absorb. The bottleneck has shifted from execution to governance.
When surplus productive capacity accumulates without an absorption layer, organizations don’t gradually adapt. Historically, they freeze: tighter rules, centralization, bans, decoupling.
We saw a similar reflex during COVID: when systems couldn’t absorb shock locally, they shut down globally.
What seems under-discussed is absorption: not "how fast can we produce" but how many decisions, options, and changes an organization can metabolize without defensive closure.
Two mechanisms seem relevant but under-theorized: (1) small, local process changes that redistribute coordination and decision load; (2) continuous skill and role shifts, as people reposition around what still needs to be decided, maintained, and legitimized.
I’ve been trying to think about this as a kind of "conduction" problem: how human decision-making and legitimacy flow between AI systems and people.
If you’ve seen organizations handle this well (or fail badly), I’d be curious: what actually lets systems absorb AI-driven over-efficiency without reverting to control, ranking, layoffs or shutdown?
Ask HN: Ergo wireless keyboard with mouse for coding?
I'm at 160wpm with my Apple keyboard [0]. It's easily my favourite keyboard of all time, but I'd love an ergo keyboard with chiclet keys and a built-in mouse. I was looking at the Cyboard Imprint [1] and the Kinesis Advantage360 [2], but the Kinesis, despite its endless good reviews, doesn't have a mouse and is too expensive for me (although it is wireless), and the same goes for the Imprint. My goal is to ziptie it to my Herman Miller Aeron armrests so I never have to move my arms, and so I can swing around to look at my other monitors without craning my neck. Are there any out-of-the-box solutions I can order that don't need soldering and assembly? There seem to be few.
[0] https://www.apple.com/ca/shop/product/mxk83ll/a/magic-keyboard-with-touch-id-and-numeric-keypad-for-mac-models-with-apple-silicon-usb%E2%80%91c-us-english-black-keys [1] https://cyboard.digital/products/imprint [2] https://kinesis-ergo.com/keyboards/advantage360/
Ask HN: Who do you follow via RSS feed?
Hello there!
I just set up TinyTinyRSS (https://tt-rss.org/) at home and I'm looking into interesting things to read as well as people/website publishing interesting stuff.
This, among the other things, to reduce the daily (doom)scrolling and avoid the recommendation algorithms by social media.
So: who or what do you follow via RSS feed, and why?
Ask HN: How are you managing secrets with AI agents?
Secrets management with Agents feels absent today. The agent needs API keys to call external services, but the usual patterns feel broken in this context. You see this clearly when writing Agent Skills.
Environment variables: The agent has shell access. It can run `env` or `echo $API_KEY` and access the secret, either through prompt injection or just by exploring or debugging.
.env files: Same problem. The agent can `cat .env`. The file is right there on the filesystem waiting for curious `print()` statements.
Proxy process / wrapper: You can stand up a separate process that holds the secret and proxies requests. The agent calls localhost, never sees the key. This works, but it's a lot of operational overhead. Now you're running infrastructure just to hide a string from your own tools. It also feels close to reinventing MCP.
What I've been experimenting with:
1. OS keychain with credential helper: The bundled or generated script calls out to the system keychain (macOS Keychain, Windows Credential Manager, etc.) at runtime. The agent can invoke the script, but can't directly query the keychain. Libraries like Python's `keyring` abstract over OS keychains and make it somewhat portable, but this all assumes certain runtime environments and requires user interaction via the OS.
2. Credential command escape hatch: Scripts accept a `--credential-cmd` flag that runs an arbitrary shell command to fetch the secret (`pass show`, `op read`, `vault kv get`, etc.). Flexible, but the agent could potentially inspect what command is being run and iterate to try to access it anyway.
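For reference, the credential-command escape hatch in (2) can be sketched in a few lines. The commands named (`pass show`, `op read`) are just examples, and note that this does nothing to stop an agent from running the very same command itself:

```python
import shlex
import subprocess

def fetch_secret(credential_cmd: str) -> str:
    """Run a user-supplied command (e.g. `pass show api/key` or
    `op read op://vault/item/field`) and treat its stdout as the secret.

    The secret never lands in an environment variable or a file, but
    any process with shell access can still execute the same command,
    which is exactly the weakness described above."""
    result = subprocess.run(shlex.split(credential_cmd),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()
```

The secret should then be passed straight into the HTTP call rather than exported, so the only way to recover it is to re-run the command.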
Neither of these feels like a real solution. An agent could still probe for credentials.
How are others handling secrets in agent workflows? Is anyone building agent runtimes with proper secrets isolation? Seems like something the official agent harnesses need to figure out and ship with.
Ask HN: Books to learn 6502 ASM and the Apple II
I want to learn assembly to make games on the Apple II. What are the classic old books for learning 6502 assembly and the Apple II itself (memory, screen management)? And is it absolutely necessary to learn BASIC before assembly?
Ask HN: Is free identity theft protection after a data breach worth the bother?
Following the most recent in a long line of data breaches (AFLAC this time), I am wondering if it is worth taking advantage of their free 24-month CyEx Medical Shield.
Has anyone ever used one of these after a breach? Any compelling reason not to use it?
The preposterous notion of AI automating "repetitive" work
This is just one of those narratives that people latch onto because it has a nice ring to it. Or maybe it’s because it makes AI sound less threatening and perhaps even palatable. “Don’t worry. AI is going to replace only the repetitive parts of your job.” But if you spend even a minute examining this narrative, then you will realize just how preposterous it is.
Humans have already figured out how to automate repetitive physical and digital labor, and we’ve been doing it for decades and even centuries by using machines and computing. Simply put: If it’s repetitive, then you don’t need AI to automate it.
In fact, the kinds of tasks we want AI to automate are precisely those that AREN'T repetitive. That was the whole goddamn point of AI.
How did we go from the original purpose of AI to claiming that it will do what we’ve already been doing for decades? Where do these narratives come from, and why do people fall for them?
Ask HN: How do you force yourself to take breaks while coding?
I'm a dev with zero self-control. "One more function" turns into 3 hours.
Tried Apple Screen Time – I just click "Ignore" every time. Tried Pomodoro apps – closed them when they got annoying.
What actually works for you? Hardware timers? Standing desks? Blocking software?
I'm building a macOS tool that uses full-screen overlays with a 30s cooldown to bypass, but curious what approaches others have found effective.
Ask HN: DDD was a great debugger – what would a modern equivalent look like?
I’ve always thought that DDD was a surprisingly good debugger for its time.
It made program execution feel visible: stacks, data, and control flow were all there at once. You could really “see” what the program was doing.
At the same time, it’s clearly a product of a different era:
– single-process
– mostly synchronous code
– no real notion of concurrency or async
– dated UI and interaction model
Today we debug very different systems: multithreaded code, async runtimes, long-running services, distributed components.
Yet most debuggers still feel conceptually close to GDB + stepping, just wrapped in a nicer UI.
I’m curious how others think about this:
– what ideas from DDD (or similar old tools) are still valuable?
– what would a “modern DDD” need to handle today’s software?
– do you think interactive debugging is still the right abstraction at all?
I’m asking mostly from a design perspective — I’ve been experimenting with some debugger ideas myself, but I’m much more interested in hearing how experienced engineers see this problem today.
Ask HN: Why the OpenClaw hype? What's so special?
OpenClaw is seemingly just another way to chat with an AI on a non-AI-centric platform instead of the CLI or the company's site. Then you have to give it so many API keys to actually utilise it, which for me has shattered the image of it enabling complete autonomy. Yes, I get that it's a one-time thing, but all these platforms have AIs of their own at this point, so why would I go through this new hassle? On top of that, some have expiring API keys or usage limits as well. All in all, AGAIN, a feature, not a product.
Ask HN: Is archive.is currently broken for WSJ links?
For the past couple of days any link I submit stays on the "Loading" spinner and never seems to make it into the queue, and it seems like HN submissions for new articles aren't getting any archive links posted.
Ask HN: How far has "vibe coding" come?
I’m trying to understand where “vibe coding” realistically stands today.
The project I’m currently working on is getting close to 60k lines of code, with fairly complex business logic. From what I’ve heard, at this scale only a few tools (like Claude’s desktop app) are genuinely helpful, so I haven’t experimented much with other AI coding services.
At the same time, I keep seeing posts about people building 20k lines of code and launching a SaaS in a single 40-hour weekend. That’s made me question whether I’m being overly cautious, or just operating under outdated assumptions.
I already rely on AI quite a bit, and one clear benefit is that I now understand parts of the codebase that I previously wrote without fully grasping. Still, at my current pace, it feels like I’ll need several more months of development, followed by several more months of testing, before this can become a real production service. And that testing doesn’t feel optional.
Meanwhile, products that are described as being “vibe coded” don’t seem to be getting particularly negative evaluations.
So I’m wondering how people here think about this now. Is “you don’t really understand the code, so it’ll hurt you later” still a meaningful criticism? Or are we reaching a point where the default approach to building software itself needs to change?
I’d especially appreciate perspectives from people working on larger or more complex systems.
Ask HN: How are devtool founders getting their paying users in 2026?
I’ve been looking at a number of devtools and AI tools launched over the last 12–18 months, and a pattern keeps repeating:
- Strong product, clear technical value
- Early users from friends / Twitter / communities
- Then things stall when it comes to converting paying users
Things that seem less effective than expected:
- Content that ranks but doesn’t convert
- Community posting that generates discussion but no revenue
- “Build in public” without a clear path to payment
Things that might be working, but inconsistently:
- Integrations / ecosystems
- Very narrow ICP + outbound
- Founder-led sales lasting far longer than planned
For founders actively shipping devtools today:
- What’s actually getting you your first 10–50 paying users?
- What looked promising but turned out to be a dead end?
- If you were starting again in 2026, where would you focus first?
Curious what’s working now, not what sounds good in theory.
Ask HN: What's the Point Anymore?
I love technology. But I'm no longer optimistic about the future. It seems like AI is not going to go away, and instead of building reliable software, managers seem to push people to use AI more, as long as they ship products. Everything else is being destroyed by AI: art, music, books, personal websites. Why read a blog post, when Google AI Summary can just give you the summary? Why read a book, when you can just get an AI summary of it? Why pay artists for music, when you can just generate endless amounts of AI music?
And even things like "doing day to day chores" are being automated away with tools like AI assistants. The only thing you are left to do is to eat and take a sh*t throughout the day. How should people make money? No idea, as in the "prosperous future", everything is replaced by AI.
So my question HN: What's the point anymore? Why keep going and where to?
Ask HN: What recent UX changes make no sense to you?
For me, it is the shift toward thin, auto-hiding scroll bars. I see it on macOS, Linux (Mint), mobile phones, and probably Windows too (though I haven't used Windows in a while).
Is this a cleaner look? I have always loved visible scroll bars because they act as useful guides for where I am on a page and how much content remains, and they're easy to drag. Now you have to hover over them first.
I am curious what UX changes have stood out to you lately, for better or worse. Maybe some designers reading this forum will take notes.
Designing programming languages beyond AI comprehension
What characteristics should a programming language have in order to make automated analysis, replication, and learning by artificial intelligence systems difficult? Any idea?
How much recurring income do you generate in 2026 and from what?
It’s always interesting to hear about the (side) hustles people are running that provide recurring revenue, whether as a good source of passive income or as their main source of income.
Tell HN: Beeper deletes inactive accounts without notice
Sending a message from the app counts as activity. If you're a part of a bridged group only for announcements, Beeper deletes your bridged account without notice.
Support did not confirm whether using third-party Matrix clients counts as app activity.
It's really unfortunate that they don't give you a heads-up. This restriction isn't visible on the Pro pricing page either.
Where can I find startups looking for fractional product leads?
I am looking for the best place to find startup founders who need to hire fractional roles to get them off the ground. I have 15 years of product building experience in the SaaS world, and am looking to connect with other founders.
Ask HN: Where to find cool companies to work for?
I class myself as a product engineer and have worked with React, NextJS, PostgreSQL, PHP, Typescript etc.
I am tired of using LinkedIn to find a new job. I am looking for smaller companies offering remote work.
Anyone know some sources to search?
Ask HN: How to prevent Claude/GPT/Gemini from reinforcing your biases?
Lately I've been experimenting with this template in Claude's default prompt:
```
When I ask a question, give me at least two plausible but contrasting perspectives, even if one seems dominant. Make me aware of assumptions behind each.
```
I find it annoying coz A) it compromises brevity, and B) sometimes the contrasting answers are so good, it forces me to think.
What have you tried so far?