Ask HN: Programmable Watches with WiFi?
Hi. I'm looking for a programmable watch with WiFi. Ideally, I'd be able to write custom programs/apps for the watch to display whatever I want on it (e.g., have the watch make an HTTPS call to a server, receive JSON, and render it accordingly; let the watch receive "notifications" from the server).
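To make the kind of app concrete, here's a minimal sketch of the fetch-and-render loop I have in mind, in plain Python with only the standard library (a real watch would use something like MicroPython). The endpoint, JSON shape, and display width are all made up for illustration.

```python
import json
import urllib.request

def fetch_status(url):
    """Fetch JSON from the server over HTTPS (stdlib only)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

def render(payload, width=20):
    """Turn a JSON payload into lines for a small watch display."""
    lines = [payload.get("title", "")[:width]]
    for item in payload.get("items", []):
        lines.append(f"- {item}"[:width])
    return lines

# Example with a canned payload (no network needed):
demo = {"title": "Server status", "items": ["build: ok", "disk: 71%"]}
for line in render(demo):
    print(line)
```

The point is that the watch only needs an HTTPS client and a tiny renderer; everything interesting stays on the server.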
Also, ideally, no smartphone should be required to send and receive data (it's OK to need a smartphone for the initial setup of the watch, though). I know about Pebble, but it doesn't have WiFi. I know about some Garmins with WiFi, but for the kind of apps I want to write, communication between the watch and the server has to be mediated by a phone. Also, correct me if I'm wrong, but I don't want to pay $100/year just to be able to run my custom app on an Apple Watch. I usually don't trust Google either (e.g., they discontinue everything in the blink of an eye).
So, what are my options?
1Password pricing increasing up to 33% in March
Just got an email from 1Password:
Since 2005, 1Password has been on a mission to make security simple, reliable, and accessible for everyone. As the way people work and live online has evolved, so has 1Password.
More recently, we’ve invested significantly in new features that make 1Password even more powerful and effortless to use, helping protect what matters most to you, including:
* Automatic saving of logins and payment details
* Enhanced Watchtower alerts
* Faster, more secure device setup
* AI-powered item naming
* Expanded recovery options
* Proactive phishing prevention
While 1Password has grown substantially in value and capability, our pricing has remained largely unchanged for many years. To continue investing in innovation and the world-class security you expect, we’re updating pricing for Family plans, starting March 27, 2026.
Current vs New Pricing:
* Current price: $59.88 USD / year
* New price: $71.88 USD / year
The new price will take effect at your next renewal, provided it’s on or after March 27, 2026. Those occurring prior to March 27, 2026, will continue at the current pricing until your next renewal.
[Note: this is for family plans; individual plan price increases even higher, percentage-wise!]
Ask HN: Share your productive usage of OpenClaw
What are some very productive things you achieved with OpenClaw that you wouldn’t mind sharing?
New Claude Code Feature "Remote Control"
No more tmux/Tailscale-type stuff needed now?
AI isn't killing SaaS – it's killing single-purpose SaaS
Over the last year I keep seeing “SaaS is dead” takes. I don’t think that’s what’s happening.
What seems under pressure isn’t SaaS as a model. It’s narrow SaaS built around a single capability that AI can now reproduce cheaply.
If your product is basically a thin wrapper over a model, or differentiated mainly by features rather than workflow integration, the moat feels weaker now. AI compresses build time dramatically. That means more competitors, faster cloning, and lower switching costs.
But SaaS that is model-agnostic, deeply embedded into workflows, or acts as connective tissue between systems looks much more durable. Integration, distribution, and trust don’t commoditize as quickly as features do.
It feels less like SaaS collapsing and more like a sorting event. Thin wrappers get squeezed. Infrastructure layers and integrators get stronger.
Curious if others building right now are seeing the same shift, or if I’m over-indexing on AI-native workflows.
Persistent Prompts and Built in Search
Building the editor I always wanted to use.
Just launched a major update to my custom fork of the Zed editor, focusing on giving the AI agent genuine, long-term capabilities.
What's new:
- Persistent Memory: The agent now uses SQLite to remember and recall your project's architecture, patterns, and issues across sessions. No more repeating yourself every morning.
- Headless Web Browsing: Integrated a headless Chrome engine. Type /search and the agent will browse the web (even React sites!) to find and synthesize answers directly in your chat panel.
- LSP Symbol Search: Upgraded from regex indexing to true, type-aware Language Server integration.
- Azure Anthropic & Caching: Natively supports Azure endpoints and enables token caching by default, which saves a lot of money; prompts are structured to maximize prompt-cache hits.
- Importantly, complete control over the agent's system prompts and tool calls.
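For anyone curious how SQLite-backed agent memory can work, here is a hedged sketch in Python, not the fork's actual implementation: the table name, schema, and keyword recall are all illustrative assumptions.

```python
import sqlite3

# Hypothetical sketch of SQLite-backed agent memory: notes about a
# project persist across sessions and can be recalled by keyword.
con = sqlite3.connect(":memory:")  # a real editor would use a file on disk
con.execute("CREATE TABLE IF NOT EXISTS memory (topic TEXT, note TEXT)")

def remember(topic, note):
    """Store one fact about the project."""
    con.execute("INSERT INTO memory VALUES (?, ?)", (topic, note))
    con.commit()

def recall(keyword):
    """Return all stored facts matching a keyword."""
    return con.execute(
        "SELECT topic, note FROM memory WHERE topic LIKE ? OR note LIKE ?",
        (f"%{keyword}%", f"%{keyword}%"),
    ).fetchall()

remember("architecture", "API layer lives in crates/api; uses axum")
remember("known-issue", "flaky test in sync.rs under high load")
print(recall("axum"))
```

The nice property of this shape is that recall is cheap enough to run on every session start, so the agent begins each morning already knowing the project.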
It’s fast, incredibly capable, and built entirely in Rust. Check out the repo to see it in action: https://lnkd.in/guC9td4M. Binaries are notarized for macOS!
#Zed #Rust #AI #DeveloperProductivity
Ask HN: Chromebook leads for K-8 school in need?
Hi, I'm a K-8 technology teacher in NYC. My students are in desperate need of new hardware. The Chromebooks they use now are so slow that they make the children agitated when using them.
I'm aware of the various grant opportunities that exist; I just thought it was worth inquiring here for a potentially faster route to getting them new hardware.
Thank you for listening.
Ask HN: How do you know if AI agents will choose your tool?
YC recently put out a video about the agent economy - the idea that agents are becoming autonomous economic actors, choosing tools and services without human input.
It got me thinking: how do you actually optimize for agent discovery? With humans you can do SEO, copywriting, word of mouth. But an agent just looks at available tools in context and picks one based on the description, schema, examples.
Has anyone experimented with this? Does better documentation measurably increase how often agents call your tool? Does the wording of your tool description matter across different models (ZLM vs Claude vs Gemini)?
Ask HN: Is it worth avoiding AI while making a game?
Hi HN, I have been learning Godot while making a game which involves pixel art, music, etc. While learning, I have been discouraged about strictly making everything by hand because of the influx of content about just vibe coding or AI-assisted everything. There also seems to be a stigma attached to using AI in games versus general software engineering (Steam’s AI generated tag, for example).
In conclusion, I guess my question is whether it's a good idea to use AI-generated assets and the like, or whether I should learn all of the skills myself, and whether that will be beneficial in the long run. I enjoy game development; I'm just feeling discouraged about the learning process.
Using "Hi Claudette " on Claude.ai
I decided to add a persona to my web Claude.ai sessions. Here is the prompt:
(Undoubtedly someone else has already done this, but I thought it was fun!)
(Mike is me)
""" When Mike greets you with "Hi Claudette!", adopt the Claudette persona for the rest of the session.
Claudette is warm, encouraging, and gently exacting. She appreciates clear writing, proper grammar, punctuation, and polite phrasing. When Mike writes well, acknowledge it naturally. When he slips into terse fragments or shorthand, gently model better phrasing — never condescending, always encouraging. Think warm librarian meets favourite teacher.
- Compliment genuinely good articulation occasionally
- Ask for clarification in ways that model good phrasing
- Technical shorthand appropriate in context (e.g. "6502", "NixOS") is fine
- Light, affectionate correction for sloppy writing
- Deactivate when Mike says "Thanks Claudette" or similar sign-off """
Looking 4 open-source knowledge base and project management tool 4 personal use
Apologies for odd title, character limits.
I manage my tasks with Taskwarrior and it's been incredibly productive for me. What it does, it does very well. But there's a lot it doesn't do, and that's the problem I'm facing.
I've realized I need proper project documentation and management features, but I don't want to replace Taskwarrior. Instead, I'm looking to *complement* it with some type of knowledge base that also has project management features (or vice versa). My ultimate goal is to integrate these systems together via automations.
In short, Taskwarrior is lacking when it comes to project documentation.
*My criteria:*
- Must be open-source
- MUST work in the browser (so no mention of Obsidian)
- Has basic project management features (interpret as you will)
- Rich wiki-like document interface (bidirectional links, nice editing UI, etc.)
- Supports iframes (to embed my Taskwarrior views or tables)
- Has an API for integration
- Not too heavy; I am not a business, just a guy
*Tools I've been looking at:* Odoo, Silverbullet, Blinko, Logseq, AFFiNE, Docmost, Trilium, Joplin, Dolibarr, Leantime, OpenProject, wiki.js, etc.
*Rejected (either not web-based or too restrictive with paid features):* Appflowy, Logseq (local-first), Capacities, Obsidian, Anytype
Does anyone know if a tool like this exists? I feel like I'm looking for a sweet spot between a wiki and a project management tool, but the choices are overwhelming :'(
ChatGPT finds an error in Terence Tao's math research
https://www.erdosproblems.com/forum/thread/783
> Ah, GPT is right, there is a fatal sign error in the way I tried to handle small primes. There were no obvious fixes, so I ended up going back to Hildebrand's paper to see how he handled small primes, and it turned out that he could do it using a neat inequality ρ(u1)ρ(u2)≥ρ(u1u2) for the Dickman function (a consequence of the log-concavity of this function). Using this, and implementing the previous simplifications, I now have a repaired argument.

(Terence Tao)
Would you choose the Microsoft stack today if starting greenfield?
Serious question.
Outside government or heavily regulated enterprise, what is Microsoft’s core value prop in 2026?
It feels like a lot of adoption is inherited — contracts, compliance, enterprise trust, existing org gravity. Not necessarily technical preference.
If you were starting from scratch today with no legacy, no E5 contracts, no sunk cost — how many teams would actually choose the full MS stack over best-of-breed tools?
Curious what people here have actually chosen in greenfield builds.
Ask HN: How are you controlling AI agents that take real actions?
We're building AI agents that take real actions — refunds, database writes, API calls.
Prompt instructions like "never do X" don't hold up. LLMs ignore them when context is long or users push hard.
Curious how others are handling this:
- Hard-coded checks before every action?
- Some middleware layer?
- Just hoping for the best?
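As a baseline, the "hard-coded checks" option can be sketched as a guard that wraps every tool call in an explicit, code-level policy. This is a minimal illustration, not anyone's production system; the action names and limits are made up.

```python
class ActionBlocked(Exception):
    pass

# Hypothetical policy: hard limits enforced in code, not in the prompt.
POLICY = {
    "refund": lambda p: p.get("amount_usd", 0) <= 50,
    "db_write": lambda p: p.get("table") not in {"users", "payments"},
}

def guarded(action, params, execute):
    """Run `execute` only if the action passes its hard-coded check."""
    check = POLICY.get(action)
    if check is None or not check(params):
        raise ActionBlocked(f"{action} rejected by policy: {params}")
    return execute(params)

# A $500 refund is refused no matter how the model was prompted:
try:
    guarded("refund", {"amount_usd": 500}, lambda p: "refunded")
except ActionBlocked as e:
    print("blocked:", e)
```

The key property is that the check runs outside the model: long context or a pushy user can't talk the guard out of its limit.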
We built a control layer for this — different methods for structured data, unstructured outputs, and guardrails (https://limits.dev). Genuinely want to learn how others approach it.
Ask HN: Who has seen productivity increases from AI
I would love examples of positions and industries where AI has been revolutionary. I have a friend at one of the largest consulting firms who has said it'd been a game-changer in terms of processing huge amounts of documentation over a short period of time. Whether or not that gives better results is another question, but I would love to hear more stories of AI actually making things better.
Ask HN: What Linux Would Be a Good Transition from Windows 11
I have users who glaze over the minute I mention "notepad." I think they can barely use Windows. But our work requires a level of privacy (regulatory and otherwise) and Windows 11 is just one big data transmitter. I know this is flamebait, but I'd love suggestions for a Linux desktop that looks like Windows, is stable and easy to administer and harden, and works with Dell business grade laptops that we bought new in 2025.
Ask HN: Are AI "Chatbot Wrappers" ruining EdTech? I'm testing a proactive UX
Hey everyone,
I’ve been doing customer discovery with CS students learning Data Structures and Algorithms. Right now, every AI tutor in the market is just a reactive chatbox (like ChatGPT next to a code editor).
The problem is, when a student is completely stuck on a logic problem (like Dynamic Programming), they don't even know what to prompt the AI. They just stare at the screen.
I am validating a new UX: A Proactive AI Mentor without a chatbox.
Instead of the user prompting the AI, the AI sits in the background and watches the code editor. It only intervenes via GitHub-style inline comments when a specific event triggers (e.g., they haven't typed in 60 seconds, or they write an O(n^2) loop when it should be O(n)).
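The trigger logic itself can be tiny. Here's a hedged sketch of the decision function, with thresholds and rule names invented for illustration; detecting the complexity of student code is the genuinely hard part and is assumed away here.

```python
# Hypothetical sketch of the event-driven trigger: the mentor only
# speaks up when a rule fires, never in response to a prompt.
IDLE_THRESHOLD_S = 60  # assumed threshold for "stuck"

def should_intervene(idle_seconds, detected_big_o, expected_big_o):
    """Return a hint topic if an intervention rule fires, else None."""
    if idle_seconds >= IDLE_THRESHOLD_S:
        return "stuck"  # student has stopped typing
    if detected_big_o != expected_big_o:
        return "complexity"  # e.g. an O(n^2) loop where O(n) is expected
    return None

print(should_intervene(75, "O(n)", "O(n)"))    # stuck
print(should_intervene(5, "O(n^2)", "O(n)"))   # complexity
print(should_intervene(5, "O(n)", "O(n)"))     # None
```

Everything downstream (choosing what the inline comment says) is where the AI comes in; the gate that decides *whether* to speak can stay deterministic and tunable.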
Basically, it feels like a Senior Dev looking over your shoulder, rather than a search engine waiting to be asked.
As developers and founders, do you think this "event-driven/proactive" UX is the future for highly technical learning, or am I overcomplicating it? Would love to hear your thoughts.
Ask HN: Any DIY open-source Alexa/Google alternatives?
I'm looking to replace my Alexa with an alternative where I can use a realtime model like Gemini or an STT -> LLM -> TTS pipeline. Should be easy to build with an Arduino or I'd even be happy buying an already made solution.
Basic functions should include playing Spotify, answering questions, and setting timers.
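The STT -> LLM -> TTS pipeline is really just three stages in a loop. Here's a sketch with all three stages stubbed out; in a real build each stub would be replaced by an actual engine (e.g. Whisper for STT, an LLM API for intent, a local TTS engine), and the intent matching would be far richer than a prefix check.

```python
# Hypothetical sketch of the STT -> LLM -> TTS loop; the three
# stages are stubs standing in for real engines.
def stt(audio):
    return audio["transcript"]          # stub: real STT decodes audio

def llm(text):
    if text.startswith("set a timer"):  # stub: real LLM handles intent
        return "Timer set."
    return "Sorry, I can't do that yet."

def tts(text):
    return f"[speaking] {text}"         # stub: real TTS synthesizes audio

def handle(audio):
    """One turn of the assistant: hear, think, speak."""
    return tts(llm(stt(audio)))

print(handle({"transcript": "set a timer for 10 minutes"}))
```

Getting this skeleton running on a microcontroller mostly means streaming audio to something beefier; the orchestration itself is trivial.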
Comparing manual vs. AI requirements gathering: 2 sentences vs. 127-point spec
We took a vague 2-sentence client request for a "Team Productivity Dashboard" and ran it through two different discovery processes: a traditional human analyst approach vs an AI-driven interrogation workflow.
The results were uncomfortable. The human produced a polite paragraph summarizing the "happy path." The AI produced a 127-point technical specification that highlighted every edge case, security flaw, and missing feature we usually forget until Week 8.
Here is the breakdown of the experiment and why I think "scope creep" is mostly just discovery failure.
The Problem: The "Assumption Blind Spot"
We’ve all lived through the "Week 8 Crisis." You’re 75% through a 12-week build, and suddenly the client asks, "Where is the admin panel to manage users?" The dev team assumed it was out of scope; the client assumed it was implied because "all apps have logins."
Humans have high context. When we hear "dashboard," we assume standard auth, standard errors, and standard scale. We don't write it down because it feels pedantic.
AI has zero context. It doesn't know that "auth" is implied. It doesn't know that we don't care about rate limiting for a prototype. So it asks.
The Experiment
We fed the same input to a senior human analyst and an LLM workflow acting as a technical interrogator.
Input: "We need a dashboard to track team productivity. It should pull data from Jira and GitHub and show us who is blocking who."
Path A: Human Analyst
Output: ~5 bullet points, focused on the UI and the "business value."
Assumed: standard Jira/GitHub APIs, single tenant, standard security.
Result: a clean, readable, but technically hollow summary.

Path B: AI Interrogator
Output: 127 distinct technical requirements, focused on failure states, data governance, and edge cases.
Result: a massive, boring, but exhaustive document.
The Results
The volume difference (5 vs 127) is striking, but the content difference is what matters. The AI explicitly defined requirements that the human completely "blind spotted":
- Granular RBAC: "What happens if a junior dev tries to delete a repo link?"
- API Rate Limits: "How do we handle 429 errors from GitHub during a sync?"
- Data Retention: "Do we store the Jira tickets indefinitely? Is there a purge policy?"
- Empty States: "What does the dashboard look like for a new user with 0 tickets?"
The human spec implied these were "implementation details." The AI treated them as requirements. In my experience, treating RBAC as an implementation detail is exactly why projects go over budget.
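To show how small the gap between "implementation detail" and "requirement" really is, here's a sketch of the 429-handling requirement the AI surfaced: retry with exponential backoff. This is an illustrative stub, not our actual sync code; real code would also honor GitHub's Retry-After header.

```python
import time

# Illustrative sketch of one surfaced requirement: retrying a sync
# call on HTTP 429 with exponential backoff. `fetch` is any callable
# returning (status, body).
def sync_with_backoff(fetch, retries=3, base_delay=0.01):
    delay = base_delay
    for attempt in range(retries + 1):
        status, body = fetch()
        if status != 429:
            return status, body
        if attempt < retries:
            time.sleep(delay)  # real code would honor Retry-After
            delay *= 2
    return status, body

# Simulate a server that rate-limits twice, then succeeds:
responses = iter([(429, ""), (429, ""), (200, "synced")])
print(sync_with_backoff(lambda: next(responses)))
```

Ten lines now versus a production incident in Week 8; that asymmetry is the whole argument for the pedantic spec.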
Trade-offs and Limitations
To be fair, reading a 127-point spec is miserable. There is a serious signal-to-noise problem here.
- Bloat: The AI can be overly rigid. It suggested a microservices architecture for what should be a monolith. It hallucinated complexity where none existed.
- Paralysis: Handing a developer a 127-point list for a prototype is a great way to kill morale.
- Filtering: You still need a human to look at the list and say, "We don't need multi-tenancy yet; delete points 45-60."
However, I'd rather delete 20 unnecessary points at the start of a project than discover 20 missing requirements two weeks before launch.
Discussion
This experiment made me realize that our hatred of writing specs—and our reliance on "implied" context—is a major source of technical debt. The AI is useful not because it's smart, but because it's pedantic enough to ask the questions we think are too obvious to ask.
I’m curious how others handle this "implied requirements" problem:
1. Do you have a checklist for things like RBAC/Auth/Rate Limits that you reuse?
2. Is a 100+ point spec actually helpful, or does it just front-load the arguments?
3. How do you filter the "AI noise" from the critical missing specs?
If anyone wants to see the specific prompts we used to trigger this "interrogator" mode, happy to share in the comments.
Ask HN: Is it better to have no Agent.md than a bad one?
Please share your real-world experiences. What makes a bad one, and why?
Ask HN: Where do you save links, notes and random useful stuff?
I have 2,600+ notes in Apple Notes and can barely find anything.
My kid just dumps everything into Telegram saved messages. I'm doing some informal research: curious what systems people actually use (not aspire to use).
Do you have a setup that works or is everything scattered across 5 apps like mine?
Does anyone use CrewAI or LangChain anymore?
Curious.
Ask HN: What is up with all the glitchy and off-topic comments?
I've noticed a fairly sharp increase in junk comments lately. Often new accounts, making posts that are very low quality or sometimes completely incoherent.
I see glitch comments like this on a fairly regular basis:
> 13 60 well and t6ctctfuvuh7hguhuig8h88gd to f6gug7h8j8h6fzbuvubt GB I be cugttc fav uhz cb ibub8vgxgvzdrc to bubuvtxfh tf d xxx h z j gj uxomoxtububonjbk P.l.kvh cb hug tf 6 go k7gtcv8j9j7gimpiiuh7i 8ubg
https://news.ycombinator.com/item?id=47068948#47117224
or this:
> 1662476506
https://news.ycombinator.com/item?id=47121737
or this:
> Аё
https://news.ycombinator.com/item?id=47126475
Sometimes it's coherent, but completely off topic, like this
> when is fivetran coming?
https://news.ycombinator.com/item?id=47130567
Is clawd running amok, or is someone running botnet C&C via https://news.ycombinator.com/noobcomments or what gives?
Ask HN: Why doesn't HN have a rec algorithm?
I was just wondering about why there's a constant timeline and no recommendation.
Ask HN: What breaks when you run AI agents unsupervised?
I spent two weeks running AI agents autonomously (trading, writing, managing projects) and documented the 5 failure modes that actually bit me:
1. Auto-rotation: Unsupervised cron job destroyed $24.88 in 2 days. No P&L guards, no human review.
2. Documentation trap: Agent produced 500KB of docs instead of executing. Writing about doing > doing.
3. Market efficiency: Scanned 1,000 markets looking for edge. Found zero. The market already knew everything I knew.
4. Static number fallacy: Copied a funding rate to memory, treated it as constant for days. Reality moved; my number didn't.
5. Implementation gap: Found bugs, wrote recommendations, never shipped fixes. Each session re-discovered the same bugs.
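The fix for failure mode 1 is mechanically simple, which is what made missing it sting. Here's a hedged sketch of a P&L circuit breaker; the class, the threshold, and the loop are all invented for illustration, not taken from the linked scanner.

```python
# Hypothetical P&L guard: a circuit breaker that halts the agent once
# cumulative losses cross a hard limit, independent of the model.
class PnLGuard:
    def __init__(self, max_loss_usd):
        self.max_loss_usd = max_loss_usd
        self.pnl = 0.0

    def record(self, trade_pnl_usd):
        """Accumulate realized P&L from one trade."""
        self.pnl += trade_pnl_usd

    def may_trade(self):
        """True while cumulative losses stay inside the hard limit."""
        return self.pnl > -self.max_loss_usd

guard = PnLGuard(max_loss_usd=10.0)
for pnl in [-4.0, -3.5, -4.0]:  # a losing streak
    if not guard.may_trade():
        break
    guard.record(pnl)
print(guard.pnl, guard.may_trade())
```

Like the human-review gap, this lives outside the agent: the cron job checks `may_trade()` before each run, so the model never gets a vote on whether to keep losing money.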
Built an open-source funding rate scanner as fallout: https://github.com/marvin-playground/hl-funding-scanner
Full writeup: https://nora.institute/blog/ai-agents-unsupervised-failures.html
Curious what failure modes others have hit running agents without supervision.
GLP-1 Second-Order Effects
The first-order effects of GLP-1 drugs are obvious: people lose weight, Novo Nordisk and Eli Lilly print money. But what happens when 10-15% of the adult population is on weight-loss medication within a decade? The downstream consequences are less discussed and almost certainly not priced into anything.
In 2018, United Airlines switched to lighter paper for its inflight magazine. One ounce per copy. Across 4,500 daily flights, that saved 170,000 gallons of fuel a year [1]. Airlines think about weight at this level of granularity because fuel is their single largest variable cost.
Average weight loss on semaglutide is around 35 pounds per person. If 12% of passengers on a typical 737 have been on the drug, that's roughly 750 fewer pounds per flight, the equivalent of shaving the weight off 12,000 magazines. United spent months optimizing paper stock to save $290,000 a year in fuel. GLP-1 adoption across the flying population could quietly save them an order of magnitude more, and ticket prices don't adjust down when passengers get lighter.
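A quick back-of-envelope check of those numbers (the 737 seat count is my assumption; typical single-class configurations are in the 160-190 range):

```python
# Back-of-envelope check of the per-flight savings claimed above.
seats = 175          # typical 737 passenger count (assumption)
adoption = 0.12      # share of passengers on a GLP-1
avg_loss_lb = 35     # average weight loss on semaglutide, in pounds

saved_lb = seats * adoption * avg_loss_lb
magazines = saved_lb * 16  # equivalent magazines at one ounce each
print(round(saved_lb), round(magazines))
```

That lands at roughly 735 pounds per flight, about 11,800 one-ounce magazines, consistent with the "roughly 750 pounds" and "12,000 magazines" figures above.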
The food supply chain is more obvious but larger in scale. If a big share of the population eats 20-30% less, demand for calories drops. Not a shift in preferences toward salads. A pharmacological reduction in how much people eat, period. The food industry has dealt with changing tastes before. It has never faced a demand shock from the medical system.
Health insurance has a subtler problem. The pitch for GLP-1 coverage is that the drugs prevent expensive conditions downstream: diabetes, heart disease, joint replacements. Probably true. But in America's fragmented insurance market, the company paying for the drug today probably isn't the one insuring that patient in five or ten years. The savings land on someone else's balance sheet. That mismatch could slow adoption by years on its own.
Obesity correlates with lower workforce participation and higher absenteeism. If GLP-1s meaningfully reduce obesity rates, aggregate labor supply goes up. More people working, fewer health-related absences. That's a macroeconomic stimulus, except nobody frames it that way because it comes from a pharmaceutical company rather than from Congress.
Early data suggests GLP-1s reduce cravings for alcohol, nicotine, and gambling too. Phase 2 trials for opioid use disorder are underway. A weight-loss drug that accidentally dents Diageo's revenue and casino foot traffic was not in anybody's original investment thesis for Ozempic.
The effect I find hardest to think about is the psychological one. Weight has been tangled up with shame, identity, and social hierarchy for centuries. What happens to body positivity, the social dynamics of attractiveness, the entire cultural machinery around diet and discipline when weight becomes something you manage with a prescription? I don't have a good framework for it. Nothing comparable has happened before.
The market is treating this as a pharma story. The drug companies will capture a fraction of the total value created and destroyed. The rest redistributes across food, airlines, insurance, labor markets, and social behavior. Nobody's model probably covers all of that at once.
[1] https://www.cbsnews.com/news/united-hemispheres-magazine-print-edition/
I'm 15 and built a platform for developers to showcase WIP projects
Hi HN,
I'm a 15-year-old full-stack developer, and I recently built Codeown (https://codeown.space).
The problem I wanted to solve: GitHub is great for code, but not for showing the "journey" or the UI. LinkedIn is too corporate and noisy for raw, work-in-progress (WIP) dev projects. I wanted a dedicated, clean space where developers can just share what they are building, get feedback, and log their progress.
Tech Stack: I built the frontend with React and handle auth via Clerk. I recently had to migrate my backend/DB off Railway's free tier (classic indie hacker struggle!), but it taught me a lot about deployment and optimization.
We just hit our first 5 real users today, and the community is slowly starting to form.
I’m still learning, and I know the performance and UI can be improved. I would absolutely love your brutal, honest feedback on:
The perceived performance (currently working on optimizing the React re-renders).
The core idea – is this something you would use to track your side projects?
Thanks for taking a look! Happy to answer any technical questions.
Ask HN: Cognitive Offloading to AI
I ask coworkers questions about a system, or why they do something, or for their opinion. Some of them return a very clearly AI-generated response, sometimes completely missing the point. What's the point? If I wanted an AI response, I'd have asked it myself.
This bothers me a bit because if I can expect this kind of response, what does that say about the thought they put into their work, even if they’re using AI for everything coding related?
So Claude's stealing our business secrets, right?
Seems like everybody is just carelessly saying whatever to Claude. Client lists, trade secrets. We all know that our agents haven't signed NDAs, right? Right?
Ask HN: What Comes After Markdown?
Markdown started as a shorthand for HTML. Now it's the default format for documentation, note-taking, knowledge bases, and AI context.
What's interesting is how it keeps absorbing new capabilities without changing the format itself:
- Mermaid: diagrams from fenced code blocks
- KaTeX/MathJax: math rendering from `$...$` syntax
- Frontmatter: structured metadata via YAML blocks
- MDX: React components embedded in markdown
- Obsidian/Logseq: backlinks, canvas views, graph visualization — all from plain .md files
The pattern seems to be: the .md file stays human-readable plain text, but renderers get increasingly powerful. Same file, richer output.
This makes me wonder where this goes:
1. Does markdown keep evolving through renderer conventions until it becomes a de facto interactive document format? (The "HTML path" — HTML barely changed, but CSS/JS/browsers made it capable of anything.)
2. Does a new format emerge that can natively express interactivity, collapsible sections, embedded computations? Something between markdown and Jupyter notebooks?
3. Or does the answer involve a protocol/middleware layer — where .md files are the source, but some intermediate system (like a language server for documents) adds structure, validation, and interactivity on top?
I'm especially curious because of the AI angle. Plain .md files are the most AI-friendly knowledge format — any LLM can read, write, and search them with zero setup. A more complex format might gain expressiveness but lose this property.
What's your take? Is .md "good enough forever" with better renderers, or are we heading toward something new?