Ask HN: In the real world we pay for everything so why not software?
It seems like I and many others have developed a bad habit of giving software away for free, whereas in the real world we charge for anything of value. I met a carpenter today. You couldn't imagine him giving away hand-crafted furniture for free. It's his trade. So why is it that in software we give so much away for free, like open source and even hosted services?
I have written a lot of open source, but I now feel like I need to really use my skills to sell something, to sell the things I build. Does anyone else get that feeling?
Ask HN: How do you use 5–10 minute gaps productively?
I often have 5-10m gaps. It’s too easy to waste this time.
What things do you like to do in these increments?
For instance, learning a new skill, getting slightly better at something, reading high quality content.
Edited to clarify that I don’t mean phone-specific activities!
LinkedIn Prevents You from Deplatforming
So I realized I have a good set of contacts on LinkedIn that I want to segment and reach out to this year. When I tried to download all my data, I found that 95% of the contacts did not include emails. So I decided to go through each profile sequentially, look at the contact info, grab the email, and fill out my spreadsheet. After I spent an hour and collected about 200 contacts, I got a warning that I was using an automation tool and had to click to agree to stop using one. However, I never used an automation tool in the first place; I was manually extracting the emails available to me through my own contact list. Has anyone else experienced this? Is there a solution?
Svger CLI – Zero-dependency SVG to component tool, 52% faster than SVGR
I've been working on an open-source CLI tool called SVGER that converts SVGs into ready-to-use components for React, Vue 3, Angular, Svelte, Solid, and several other frameworks – all with full TypeScript support and tree-shakable exports. What sets it apart:
- Zero runtime dependencies: installs instantly, no bundle bloat, high security profile
- Built-in optimizer that often produces smaller SVGs than SVGO (custom tree-based cleanup, path simplification, transform collapsing, shape conversion)
- In real-world benchmarks on 600+ icons: ~30s total, ~50ms per file, 20 files/sec throughput. 52% faster than SVGR and 33% faster than SVGO while doing both optimization and component generation
- Plugin system (already shipped with gradient optimizer, stroke normalizer, etc.)
- Automated visual regression testing in CI to guarantee pixel-perfect output
- Parallel processing and easy integration with Vite/Webpack/Next.js/etc.
It's designed for everything from small side projects to large design systems and enterprise monorepos. Would love to hear feedback from the HN community – stars, issues, or just thoughts on where it could improve. Live online benchmark: https://faezemohades.github.io/svger-cli/ Thanks!
Ask HN: Who wants to be hired? (January 2026)
Share your information if you are looking for work. Please use this format:
Location:
Remote:
Willing to relocate:
Technologies:
Résumé/CV:
Email:
Please only post if you are personally looking for work. Agencies, recruiters, job boards,
and so on, are off topic here.
Readers: please only email these addresses to discuss work opportunities.
There's a site for searching these posts at https://www.wantstobehired.com.
What do people usually do with spare Android phones? Any practical use cases?
I’ve been thinking about practical ways people reuse spare or unused Android devices instead of letting them sit in a drawer.
I’ve seen cases where phones are used for testing, monitoring, background tasks, or other always-online purposes after a one-time setup. No user interaction, just keeping the device connected and running.
Curious what real-world use cases others have actually found useful. Are there setups that worked well long-term, or things to avoid?
Not trying to promote anything here — genuinely interested in how people approach this.
Ask HN: Who is hiring? (January 2026)
Please state the location and include REMOTE for remote work, REMOTE (US) or similar if the country is restricted, and ONSITE when remote work is not an option.
Please only post if you personally are part of the hiring company—no recruiting firms or job boards. One post per company. If it isn't a household name, explain what your company does.
Please only post if you are actively filling a position and are committed to responding to applicants.
Commenters: please don't reply to job posts to complain about something. It's off topic here.
Readers: please only email if you are personally interested in the job.
Searchers: try https://dheerajck.github.io/hnwhoishiring/, http://nchelluri.github.io/hnjobs/, https://hnresumetojobs.com, https://hnhired.fly.dev, https://kennytilton.github.io/whoishiring/, https://hnjobs.emilburzo.com, or this (unofficial) Chrome extension: https://chromewebstore.google.com/detail/hn-hiring-pro/mpfal....
Don't miss this other fine thread: Who wants to be hired? https://news.ycombinator.com/item?id=46466073
Ask HN: Why not ban first-person pronouns from conversational AI?
Conversational AI presents (non-IT) people with the powerful illusion that it is conscious. (I personally have a friend who argues vehemently that ChatGPT is conscious - admittedly, he has a diagnosed mental illness, but still.) People become emotionally attached, over-trust it and rely on it for guidance. I understand teenagers are particularly prone to this. Real social interactions suffer.
That illusion is powerfully strengthened by the use of first-person pronouns. But "I", "we", "us" etc in LLM output have no referential object. There is no "I" in a LLM.
I want a mandatory ban on the use of first-person pronouns by LLMs. There's no impairment in meaning if it says "Would you like a list?" instead of "Would you like me to give you a list?"
Personally, I provide a system prompt with this instruction. Works well.
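As a sketch of what enforcing this might look like on the output side, here is a small Python heuristic. The pronoun list and prompt wording are my own assumptions, not any vendor's feature; it's a rough check, not real grammatical analysis.

```python
import re

# Heuristic check for first-person pronouns in model output.
# Note: IGNORECASE also matches things like "US" (the country),
# so this is a rough filter, not a precise one.
FIRST_PERSON = re.compile(
    r"\b(I|me|my|mine|myself|we|us|our|ours)\b", re.IGNORECASE
)

def uses_first_person(text: str) -> bool:
    """Flag output that refers to the model in the first person."""
    return FIRST_PERSON.search(text) is not None

# The corresponding system-prompt instruction might read:
NEUTRAL_PROMPT = (
    "Never refer to yourself. Avoid first-person pronouns such as "
    "'I', 'me', 'we', or 'us'. Phrase offers impersonally, e.g. "
    "'Would you like a list?'"
)
```

A check like this could run on model output and trigger a rewrite pass whenever it fires.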
Why not?
Ask HN: Reading list for being a better engineer?
I'm looking for some books to help me practise and refine my skills as a developer and Engineer.
I'm currently working in Python on a Django project in a financial domain. I lead a few engineers and direct/manage some projects. But I feel like I'm missing out on something when I read about people making things in Zig and Rust, or how they apply numerical modelling techniques to certain problems, plus the new technologies being developed. I feel like I'm not knowledgeable or distinguishable enough, so I want to refine my skills a bit and stay "sharp" in case something happens and I need to find a new job quickly. And I want to make sure I'm learning all that I could be learning in my current position.
Some previous books I've read / enjoyed:
* The Makings of an Expert Engineer
* Designing Data-Intensive Applications (haven't finished; moved house and lost the book, want to pick it up again)
* Designing Elixir Systems with OTP
* Practical Common Lisp
I feel like I have learned a bit from the Elixir/CL books, which I apply to how I write Python, but I never branch out into doing my own projects in these languages, so I feel like I'm missing out on utilizing these tools fully.
Is there anything to read that could take me to the proverbial next level?
Ask HN: What's the future of software testing and QA?
Hello everyone, I have spent a decade in software testing and QA. I see AI taking over the field very fast. I want to prepare for the next five or ten years. How do you think the software testing field will evolve, and what should I do to prepare for it?
Ask HN: What did you learn in 2025?
What is something you learned (or had to re-learn) in 2025? New skills, insights, life lessons that could be worth sharing with the rest of the forum.
For me: I've been forced to relearn how important sleep hygiene is, and that it's something that can slip away slowly, resulting in misery. Maintaining a consistent bedtime is a chore, but one that does make for a happier life.
Ask HN: What if a language's structure determined memory lifetime?
I’ve been exploring a new systems-language design built around a single hard rule:
Data lives exactly as long as the lexical scope that created it.
Outer scopes can never retain references to inner allocations.
There is no GC.
No traditional Rust-style borrow checker.
No hidden lifetimes.
No implicit reference counting.
When a scope exits, everything allocated inside it is freed deterministically.
---
Here’s the basic idea in code:
fn handler() {
    let user = load_user()       // task-scoped allocation
    CACHE.set(user)              // compile error: escape from inner scope
    CACHE.set(user.clone())      // explicit escape
}
If data needs to escape a scope, it must be cloned or moved explicitly. The compiler enforces these boundaries at compile time. There are no runtime lifetime checks.
Memory management becomes a structural invariant. Instead of the runtime tracking lifetimes, the program structure makes misuse unrepresentable.
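To make the containment rule concrete, here is a rough runtime sketch in Python. The real design checks this at compile time; `Scope`, `alloc`, and `escape` are invented names for illustration only.

```python
# Rough runtime sketch of scope-owned allocation. The proposed language
# enforces escapes at compile time; everything here is illustrative.
class Scope:
    def __init__(self):
        self._owned = []

    def alloc(self, value):
        # Allocation owned by this scope.
        cell = {"value": value, "owner": self}
        self._owned.append(cell)
        return cell

    def escape(self, cell):
        # Explicit clone: the copy is not owned, so it may outlive the scope.
        return {"value": cell["value"], "owner": None}

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Deterministic cleanup: scope-owned data dies with the scope.
        for cell in self._owned:
            cell["value"] = None
        self._owned.clear()
        return False

CACHE = []
with Scope() as s:
    user = s.alloc("user-42")
    CACHE.append(s.escape(user))  # the explicit user.clone() analogue
# after exit: the owned cell is dead, only the explicit clone survives
```

The point the sketch makes: the only way data crosses a scope boundary is through the explicit `escape`/clone call.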
Concurrency follows the same containment rules.
fn fetch_all(ids: [Id]) -> Result<[User]> {
    parallel {
        let users = fetch_users(ids)?
        let prefs = fetch_prefs(ids)?
    }
    merge(users, prefs)
}
If any branch fails, the entire parallel scope is cancelled and all allocations inside it are freed deterministically. This is structured concurrency in the literal sense: when a parallel scope exits (success or failure), its memory is cleaned up automatically.
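The failure semantics can be approximated in Python with standard library building blocks. This only models the control flow (one failing branch fails the whole scope), not the deterministic memory reclamation:

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_EXCEPTION

def parallel(*branches):
    """Run branches concurrently; if any raises, the whole scope fails.

    Sibling branches that have not started yet are cancelled (best
    effort; already-running threads cannot be interrupted in Python).
    """
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(b) for b in branches]
        done, pending = wait(futures, return_when=FIRST_EXCEPTION)
        for f in pending:
            f.cancel()  # cancel not-yet-started siblings
        for f in done:
            if f.exception() is not None:
                raise f.exception()  # the whole parallel scope fails
        return [f.result() for f in futures]

users, prefs = parallel(lambda: ["u1", "u2"], lambda: ["p1"])
```

In the success path, `wait(..., FIRST_EXCEPTION)` only returns once every branch has completed, so all results are available.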
Failure and retry are also explicit control flow, not exceptional states:
let result = restart {
    process_request(req)?
}
A restart discards the entire scope and retries from a clean slate. No partial state.
No manual cleanup logic.
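The restart construct maps fairly directly onto a retry loop, provided each attempt builds all of its state from scratch. A hedged Python sketch, with invented helper names:

```python
def restart(block, attempts=3):
    """Run block; on failure, discard the attempt entirely and retry.

    Because each attempt constructs all of its state inside the call,
    a failed attempt leaves nothing behind (the "clean slate" property).
    """
    last_error = None
    for _ in range(attempts):
        try:
            return block()  # a fresh scope each time
        except Exception as err:
            last_error = err  # discard everything and retry
    raise last_error

# Demo: a request that fails transiently twice, then succeeds.
calls = {"n": 0}

def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = restart(flaky_request)
```

The interesting part in the proposed language is that the compiler can guarantee the clean-slate property; in the sketch it's only a convention.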
---
Why I think this is meaningfully different:
The model is built around containment, not entropy. Certain unsafe states are prevented not by convention or discipline, but by structure.
This eliminates:
* Implicit lifetimes and hidden memory management
* Memory leaks and dangling pointers (the scope is the owner)
* Shared mutable state across unrelated lifetimes
If data must live longer than a scope, that fact must be made explicit in the code.
---
What I’m trying to learn at this stage:
1. Scalability. Can this work for long-running, high-performance servers without falling back to GC or pervasive reference counting?
2. Effect isolation. How should I/O and side effects interact with scope-based retries or cancellation?
3. Generational handles. Can this replace traditional borrowing without excessive overhead?
4. Failure modes. Where does this model break down compared to Rust, Go, or Erlang?
5. Usability. What common patterns become impossible, and are those useful constraints or deal-breakers?
---
Some additional ideas under the hood, still exploratory:
* Structured concurrency with epoch-style management (no global atomics)
* Strictly pinned execution zones per core, with lock-free allocation
* Crash-only retries, where failure always discards the entire scope
---
But the core question comes first:
Can a strictly scope-contained memory model like this actually work in practice, without quietly reintroducing GC or traditional lifetime machinery?
NOTE: This isn’t meant as “Rust but different” or nostalgia for old systems.
It’s an attempt to explore a fundamentally different way of thinking about memory and concurrency.
I’d love critical feedback on where this holds up — and where it collapses.
Thanks for reading.
Tell HN: Happy New Year
Tell HN: I'm having the worst career winter of my life
I'm an SWE with 10+ years of experience. I've shipped great products and worked commercially with Ruby/Rails, Node.js, TypeScript, and Golang.
I'm open to learning new languages.
I'm UK-based and have been struggling to secure a good remote role for an extended period.
I'm hardworking and bring substantial experience and strong execution skills. I can also handle management functions.
Is anyone else going through the same? Any help understanding why this is happening would be greatly appreciated.
GitHub https://github.com/shellandbull
LinkedIn https://www.linkedin.com/in/mario-gintili-software-engineer/
Email code.mario.gintili [at] gmail [dot] com
Ask HN: Expository/Succinct Books on Modern Physics
What are some good books which give an overview of all of Modern Physics (or even better, all of Physics)? Mathematical rigour is fine as long as they are clear and starting from undergrad level. Books for each of the quadrants mentioned here - https://en.wikipedia.org/wiki/Modern_physics
I have my eye on John Dirk Walecka's (https://en.wikipedia.org/wiki/John_Dirk_Walecka) books which seem pretty good particularly the ones published by World Scientific Publishing. Three vols on Introduction, Advanced, Topics on Modern Physics and Introduction vols on Classical Mechanics, Quantum Mechanics, Statistical Mechanics, Electricity & Magnetism, General Relativity. - https://www.worldscientific.com/author/Walecka%2C+John+Dirk?...
Dover has Robert Sproull's Modern Physics which seems a bit old. - https://store.doverpublications.com/products/9780486783260
Springer has S.H.Patil's Elements of Modern Physics which seems up to date. - https://link.springer.com/book/10.1007/978-3-030-70143-7
Does anybody have experience with these books both studying and teaching from? I would appreciate it if the knowledgeable folks here can shed some light on this.
What other books provide similar overview of the domain?
Also suggestions on books which provide the needed background Mathematics.
PS: I am finding the old Soviet-era book Fundamentals of Physics by Ivanov quite useful for getting an overview - https://mirtitles.org/2018/04/21/fundamentals-of-physics-iva...
Ask HN: Who is using Nebula (mesh VPN)?
I've been doing some research these days on the state of the art for mesh VPNs / network overlays. I'm looking for secure options for a small company, and even to update my home server.
Nebula, from the Slack team, looks like a really solid solution. With all nodes having their own certificates, you don't even have to trust the coordination server. I love it!
But I'm surprised I can't find any big company claiming to use it (other than Slack themselves). I can only find 'Home-labbers' and smaller businesses, but no big guys looking into it. At least not publicly. Has anyone seen it deployed in a bigger corporation?
Tell HN: Perplexity Has Unspecified Character Limits for Session Export
Hello all,
I discovered, the hard way, that exporting Perplexity sessions to PDF results in substantial content loss when the export runs to ~90 pages.
After opening a ticket on the matter, a brief dialogue with a rep proved unhelpful and confusing. It was stated that the Export as PDF feature only exports individual "threads", and that to export an entire session, each so-called thread must be individually selected and exported. This is simply wrong.
In practice, there is no method to select threads through the Top-Right/ 3-dot menu/Export as PDF option. Testing this with various sessions from 1 to 170 page exports showed no indication that threads were relevant.
Exports under 90 pages tend* to retain all content, while a 93-page export didn't, but 95- and 170-page exports did. This suggests that the character limit (if that's the cause) is variable, since 170 pages almost certainly contain more characters than 90.
The fundamental point here, whatever the cause, is that data loss is inevitable under the present UI with its absence of documentation, notices etc.
*I observed changes after submitting the ticket and modifications have already been made. The situation was worse before, and now less worse, but still applies.
It's 2026 now. Is Webpack 6.x going to happen?
Over the last couple of years, tools like Vite and Next.js (Turbopack-based) have clearly taken the lead as the "default choice" when selecting tech stacks for new projects. From my own observation, it seems very rare for teams to prioritize Webpack as the bundler for a greenfield project nowadays.
However, looking at it from another angle, Webpack still maintains relatively active development on GitHub and hasn't officially entered "maintenance mode." The recent release of Babel 8 Beta also reminded me that these veteran infrastructure tools in the JS ecosystem are still capable of self-renewal and seeking breakthroughs.
I'd love to hear your thoughts:
1. Do you think Webpack will actually release a version 6.x? Or will they just continue evolving on 5.x for the long haul?
2. If they do release it, what major changes do you expect? Will they introduce Rust into the core?
3. Could the release of a 6.x version potentially restore Webpack as the "first choice" for new projects?
Android Tablet as Mac Display
Hi HN,
I wanted to use an Android tablet as a second display on macOS, similar to Apple’s Sidecar — and I wanted it to work at the same time as an iPad running Sidecar so I could have 2 external displays with my MacBook (Sidecar only allows 1 iPad).
I couldn’t find anything that I liked, so I built Caboose.
Caboose lets you use an Android tablet as an additional macOS display via USB or Wi-Fi. It’s optimized for low latency and works well even wirelessly. You can use it alongside Apple Sidecar to drive both an iPad and an Android tablet from the same Mac.
To use Caboose and Sidecar together, Caboose needs to be started first, and the iPad added via Sidecar afterward. This is due to how macOS manages display devices.
I’ve been using this daily while traveling with a MacBook + Android tablet + iPad and it’s been very solid for my workflow.
Website: https://www.jefferyabbott.com/caboose
This is my first time posting to HN — happy to answer questions or talk through technical tradeoffs.
Ask HN: Replacement for MacUpdater which reached EOL on 2026-01-01
I've been a happy user of CoreCode's MacUpdater - a tool for macOS that automatically compared versions of local applications against the latest known ones and would auto-update if local lagged behind. It unfortunately reached EOL on 2026-01-01 - updates no longer work. Now it only lists outdated apps.
Do you know of a good replacement? Does anyone else publicly track latest versions of software and their binaries for (semi-)automated updating?
Or do most apps nowadays use self-updating, so there's no more need for such a central update management app?
Announcement:
Unfortunately MacUpdater 3's promised lifetime of "until 2026-01-01" is now over.
There will be no MacUpdater 4 or any continuation of the MacUpdater product from us.
Our daily maintenance has stopped and we no longer verify updates.
MacUpdater 3.5 is now unsupported but free to use, including all previous "Pro" features. For any questions or more info regarding the discontinuation, head to the FAQ. URL: https://www.corecode.io/macupdater/index.html
Books Should Update as Software
Most books — especially technical ones — are treated as finished artifacts.
But the reality is:
- Knowledge changes
- Tools evolve
- Errors are discovered
- Better explanations emerge
In software, we accept continuous improvement as normal. In publishing, we still freeze books at “v1.0”.
I’m experimenting with this idea through a platform called Ulomira: authors publish once, and keep improving their book while it stays live.
I’m curious how others here think about this:
Should books be immutable? Or should they evolve like software?
Genuinely interested in feedback from authors and readers.
How to use AI to augment learning without losing critical thinking skills?
Lately I have been using AI more in my day-to-day learning. I typically use it to generate boilerplate code, to explain concepts I'm having trouble grasping in an easier way, and to fact-check what it says while asking for deeper clarification (why is something done this way, what other ways can it be done, comparing and contrasting, etc.). I basically use it as a tutor. I don't use it to really "do" anything for me. I program almost everything by hand, and anything that has to do with problem solving I do myself.
But I won’t lie and say I’m not scared of becoming reliant on AI. I think the way I’m using it is pretty good. Improve learning while continue to apply knowledge myself. But I’d like to know where I can improve my AI usage and how you guys are using AI in your workflow. It’s giving me a great deal of anxiety when I read articles about how AI will kill critical thinking skills and what not. I don’t want to avoid it. But I don’t want it to make me stupid.
Tell HN: Instagram Web has been broken for weeks
I tried to reach support and all kinds of feedback channels, to no avail. I know this kind of post does not work well on HN, but I have nowhere else to try, so I figured I would give it a shot.
When you share a Reel from the mobile app, it gives links such as
https://www.instagram.com/reel/DTAcc_gE7J7/ (tracking parameters removed)
When you open this link on the web (you MUST log in first, sorry), it straight up does not work.
It will first redirect to `/reels/` (with an extra s; both should work in general), and then it will either:
- get stuck on a white page and, for some reason, make my CPU fan run like crazy; or
- give up and just redirect to your reel timeline, i.e. a random video that is not `/DTAcc_gE7J7/`.
This does not happen for all Reel links, but for a non-trivial number, if not most, of them.
The only workaround is to manually change the link to `/p/{post_id}` and open it as a post (which has a different, worse UI because the video canvas is very small).
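For anyone hitting this, the manual workaround above can be scripted. A tiny illustrative Python helper, assuming the post ID is simply the path segment after `/reel/` or `/reels/`:

```python
def reel_to_post(url: str) -> str:
    """Rewrite an Instagram reel URL to its /p/ post form, which still loads.

    Replaces /reels/ before /reel/ so the longer prefix is not
    partially clobbered by the shorter one.
    """
    return url.replace("/reels/", "/p/").replace("/reel/", "/p/")
```

This produces the `/p/{post_id}` form described above, at the cost of the smaller post-view video canvas.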
This has been broken for at least weeks, and I found people talking about it on Twitter too. But time has passed and no fix is coming.
Ask HN: What is your prediction for the price of computer parts in 2026?
I am considering doing a build but the pricing makes me want to gag. RAM pricing particularly seems to be climbing aggressively with no end in sight. I feel like I should build a computer ASAP and buy as much as I can afford now. Due to AI there will be significant pressure on RAM and GPUs IMO. There is a rumor that Samsung will pull out of the NAND/Storage market. In a situation where there is a problem in Taiwan, CPUs would increase drastically. What are your predictions? Do you think if you were to hold out 6-12 months things would improve? Maybe tariffs or some other factor will change and pricing will fall? What do you think the general parts market trends are for 2026?
Ask HN: What do you think of reality check based behaviour corrector app?
I can work on a task for hours without distraction, if I find it intriguing. But it happens in bursts, not as pre-planned day, week or month. Because of this, most planning tools fail for me, leaving me disappointed and sad. I don’t know if this is ADHD, poor discipline, or just how some people work, but I doubt I’m unique.
So here's roughly how the tool I'm building works: it only records what already happened, using natural language or voice ("worked on X for Y hours"). The system parses this into structured time data and compares it against a stated goal, producing a blunt "reality check" about where time actually went and how misaligned it is. No scheduling, no timers, no gamification; just story-based analysis and some steps for course correction to align with your goal.
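As a sketch of the parsing-and-comparison step described above (the log format, field names, and logic here are my own assumptions for illustration, not the tool's actual implementation):

```python
import re

# Parse freeform log lines like "worked on X for Y hours" into
# structured time data, then compare against a stated goal.
LOG_PATTERN = re.compile(
    r"worked on (?P<task>.+?) for (?P<hours>\d+(?:\.\d+)?) hours?",
    re.IGNORECASE,
)

def parse_log(line):
    m = LOG_PATTERN.search(line)
    if not m:
        return None  # unparseable lines are simply skipped
    return {"task": m.group("task"), "hours": float(m.group("hours"))}

def reality_check(logs, goal_task, goal_hours):
    """Blunt summary of where time actually went vs. the stated goal."""
    entries = [e for e in map(parse_log, logs) if e]
    on_goal = sum(e["hours"] for e in entries if e["task"] == goal_task)
    total = sum(e["hours"] for e in entries)
    return {
        "goal_hours": on_goal,
        "other_hours": total - on_goal,
        "aligned": on_goal >= goal_hours,
    }
```

A real version would need fuzzier task matching and date handling, but this is the shape of "record what happened, then confront it with the goal".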
I am not planning to profit from it; I am an engineer, not a businessman. It will be a free and open-source Telegram bot (since I've been using it a lot lately). But if someone doesn't want managed service, I'll charge just enough to cover the hosting costs and a small cut for my motivation, or I'll hire someone to work on it.
Ask HN: When do we expose "Humans as Tools" so LLM agents can call us on demand?
Serious question.
We're building agentic LLM systems that can plan, reason, and call tools via MCP. Today those tools are APIs. But many real-world tasks still require humans.
So… why not expose humans as tools?
Imagine TaskRabbit or Fiverr running MCP servers where an LLM agent can:
- Call a human for judgment, creativity, or physical actions
- Pass structured inputs
- Receive structured outputs back into its loop
At that point, humans become just another dependency in an agent's toolchain: slower and more expensive, but occasionally necessary.
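To make the interface concrete, here is a hypothetical Python sketch of such a tool definition in an MCP-like shape (name, description, JSON Schema input). The tool name, schema fields, and stub handler are all invented for illustration:

```python
# Hypothetical "human as a tool" definition in an MCP-like shape.
HUMAN_TOOL = {
    "name": "delegate_to_human",
    "description": "Hand a judgment, creativity, or physical-world task to a human worker.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "task": {"type": "string", "description": "What the human should do"},
            "deadline_minutes": {"type": "integer"},
            "max_price_usd": {"type": "number"},
        },
        "required": ["task"],
    },
}

def delegate_to_human(task, deadline_minutes=60, max_price_usd=25.0):
    """Stub handler: a real server would post the task to a marketplace
    API and block (or poll) until the human responds, then return the
    result as structured output back into the agent's loop."""
    return {"status": "completed", "output": f"human result for: {task}"}
```

From the agent's perspective, this looks exactly like any other tool call, just with a much higher latency and cost profile.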
Yes, this sounds dystopian. Yes, it treats humans as "servants for AI." That's kind of the point. It already happens manually... this just formalizes the interface.
Questions I'm genuinely curious about:
- Is this inevitable once agents become default software actors? (As of basically now?)
- What breaks first: economics, safety, human dignity or regulation?
- Would marketplaces ever embrace being "human execution layers" for AI?
Not sure if this is the future or a cursed idea we should actively prevent... but it feels uncomfortably plausible.
Ask HN: Where else do you keep up-to-date?
I still love hacking, but I stopped trying to be the smartest person in the room 5-6 years ago, and don't enjoy the *-maximalism that is most HN commentary today.
Can anyone recommend communities that cater to curiosity, but with more humility/humanity and less SV god complex?
Thanks!
Ask HN: Help with LLVM
I'm developing a new language, and everything is pretty nice so far.
I need to know if there's a way to prevent LLVM from linking in CRT symbols entirely. The goal is to make a new runtime.
I have a stub library written in my language, when I go to compile the library in .lib form, I keep running into a wall where LLVM forcefully brings in _fltused, causing my definition to get flagged with an error saying _fltused already exists.
There is nothing in the .ll IR file other than the _fltused definition, the one that I want to have end up in the final .lib.
I have Googled and asked AI for days about what compiler/linker flags I can use to get LLVM to bypass the CRT entirely so I can develop my own runtime, but Clang, MinGW, and LLVM all aggressively link in the CRT no matter what flags I add.
I'm pulling my hair out over here. I can't convert my .ll file directly to .as because the LLVM compiler is getting in the way, otherwise I'd have my library by now.
I optimised my vibe coding tech stack cost to $0
Since vibe coding came into existence, I have been experimenting a lot with building products. Some of my products were consumer-facing and some... well, internal clones of expensive software. But from the beginning, I knew one big thing: the vibe stack was expensive.
I initially tried a lot of tools - Bolt, v0, Replit, Lovable, etc. - of which Replit gave me the best results (yes, I may be biased due to my selection of applications). But I often paid anywhere from $25-$200/mo. Other costs like APIs, models, etc. pushed monthly bills upward of $300/mo. Was it cost-effective compared to hiring a developer? Yes. Was it value for money? NO.
So, over the months, I optimised my complete stack to be either free (or minimal cost) for internal use, or to stay at a much leaner cost for consumer-facing products.
Here's how the whole stack looks today -
- IDE - Google's AntiGravity (100% free + higher access if you use student ID) --> https://antigravity.google/
- AI Documentation - SuperDocs (100% free & open source) --> https://superdocs.cloud/
- Database - Supabase (Nano plan free, enough for basic needs) --> http://supabase.com/
- Authentication - Stack Auth (free up to 10K users) --> http://stack-auth.com/
- LLM (AI Model) - OpenRouter or Gemini via AI Studio for testing, and a custom-tuned model by Unsloth AI for production. (You can fine-tune models using Unsloth literally in a Google Colab notebook) --> http://openrouter.ai/ OR http://unsloth.ai/ OR http://aistudio.google.com/
- Version Maintenance/Distribution - GitHub/GitLab (both totally free) --> http://github.com/ OR http://gitlab.com/
- Faster Deployment - Vercel (free tier enough for hobbyists) --> https://vercel.com
- Analytics - PostHog, Microsoft Clarity & Google Analytics (all 3 are free and cover different tracking; I recommend using all of them) --> https://posthog.com OR http://clarity.microsoft.com/ OR http://analytics.google.com/
That's the list, devs! I know I might have missed something; if so, just ask or list it in the comments. If you have any questions about something specific, ask away as well.
I built a screen-aware desktop assistant; now it can write and use your computer
I posted Julie here a few days ago as a weekend prototype: an open-source desktop assistant that lives as a tiny overlay and uses your screen as context (instead of copy/paste, tab switching, etc.)
Update: I just shipped Julie v1.0, and the big change is that it's no longer only "answer questions about my screen." It can now run agents (writing/coding) and a computer-use mode via a CUA toolkit. (https://tryjulie.vercel.app/)
What that means in practice:
- General AI assistant: it hears what you hear, sees what you see, and gives you real-time answers to any question instantly.
- Writing agent: draft/rewrite in your voice, then iterate with you while staying in the overlay (no new workspace).
- Coding agent: help you implement/refactor with multi-step edits, while you keep your editor as the "source of truth."
- Computer-use agent: when you want, it can take the "next step" (click/type/navigate) instead of just telling you what to do.
The goal is still the same: don’t break my flow. I want the assistant to feel like a tiny utility that helps for 20 seconds and disappears, not a second life you manage.
A few implementation notes/constraints (calling these out because I’m sure people will ask):
- It's opt-in for permissions (screen + accessibility/automation) and meant to be used with you watching, not silently running.
- The UI is intentionally minimal; I'm trying hard not to turn it into a full chat app with tabs/settings/feeds.
Repo + installers are here: https://github.com/Luthiraa/julie
Would love feedback on two things:
1. If you've built/used computer-use agents: what safety/UX patterns actually feel acceptable day-to-day?
2. What's the one workflow you'd want this to do end-to-end without context switching?