Tell HN: AI coding is sexy, but accounting is the real low-hanging target
Working on automating small business finance (bookkeeping, reconciliation, basic reporting).
One thing I keep noticing: compared to programming, accounting often looks like the more automatable problem:
It’s rule-based. Double entry, charts of accounts, tax rules, materiality thresholds: for most day-to-day transactions you’re not inventing new logic, you’re applying existing rules.
It’s verifiable. The books either balance or they don’t. Ledgers either reconcile or they don’t. There’s almost always a “ground truth” to compare against (bank feeds, statements, prior periods).
It’s boring and repetitive. Same vendors, same categories, same patterns every month. Humans hate this work. Software loves it.
With accounting, at least at the small-business level, most of the work feels like:
normalize data from banks / cards / invoices
apply deterministic or configurable rules
surface exceptions for human review
run consistency checks and reports
The truly hard parts (tax strategy, edge cases, messy history, talking to authorities) are a smaller fraction of the total hours but require humans. The grind is in the repetitive, rule-based stuff.
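To make the grind concrete, here is a minimal sketch of the middle two steps (deterministic rules plus an exception queue) in Python. The rule table and account names are invented for illustration; real systems would load rules from config per client:

```python
from dataclasses import dataclass

@dataclass
class Txn:
    date: str
    description: str
    amount_cents: int  # signed: positive = money in, negative = money out

# Deterministic, configurable rules; first match wins. All invented examples.
RULES = [
    ("STRIPE PAYOUT", "revenue:card_payouts"),
    ("AWS", "expense:cloud_hosting"),
    ("GUSTO", "expense:payroll"),
]

def categorize(txn: Txn) -> str | None:
    """Apply the rule table; None means 'surface for human review'."""
    desc = txn.description.upper()
    for needle, account in RULES:
        if needle in desc:
            return account
    return None

def process(feed: list[Txn]) -> tuple[list[tuple[Txn, str]], list[Txn]]:
    booked, exceptions = [], []
    for txn in feed:
        account = categorize(txn)
        if account:
            booked.append((txn, account))
        else:
            exceptions.append(txn)
    # Consistency check: nothing silently dropped; the totals must reconcile
    # against the bank feed (the "ground truth" mentioned above).
    assert (
        sum(t.amount_cents for t, _ in booked)
        + sum(t.amount_cents for t in exceptions)
        == sum(t.amount_cents for t in feed)
    )
    return booked, exceptions
```

The point is the shape: a boring rule table handles most transactions, and everything the rules can't classify goes to a human.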
Ask HN: What Are You Working On? (December 2025)
What are you working on? Any new ideas that you're thinking about?
Ask HN: How are you vibe coding in an established code base?
Here’s how we’re working with LLMs at my startup.
We have a monorepo with scheduled Python data workflows, two Next.js apps, and a small engineering team. We use GitHub for SCM and CI/CD, deploy to GCP and Vercel, and lean heavily on automation.
Local development: Every engineer gets Cursor Pro (plus Bugbot), Gemini Pro, OpenAI Pro, and optionally Claude Pro. We don’t really care which model people use. In practice, LLMs are worth about 1.5 excellent junior/mid-level engineers per engineer, so paying for multiple models is easily worth it.
We rely heavily on pre-commit hooks: ty, ruff, TypeScript checks, tests across all languages, formatting, and other guards. Everything is auto-formatted. LLMs make types and tests much easier to write, though complex typing still needs some hand-holding.
GitHub + Copilot workflow: We pay for GitHub Enterprise primarily because it allows assigning issues to Copilot, which then opens a PR. Our rule is simple: if you open an issue, you assign it to Copilot. Every issue gets a code attempt attached to it.
There’s no stigma around lots of PRs. We frequently delete ones we don’t use.
We use Turborepo for the monorepo and are fully uv on the Python side.
All coding practices are encoded in .cursor/rules files. For example: “If you are doing database work, only edit Drizzle’s schema.ts and don’t hand-write SQL.” Cursor generally respects this, but other tools struggle to consistently read or follow these rules no matter how many agent.md-style files we add.
My personal dev loop: If I’m on the go and see a bug or have an idea, I open a GitHub issue (via Slack, mobile, or web) and assign it to Copilot. Sometimes the issue is detailed; sometimes a single sentence. Copilot opens a PR, and I review it later.
If I’m at the keyboard, I start in Cursor as an agent in a Git worktree, using whatever the best model is. I iterate until I’m happy, ask the LLM to write tests, review everything, and push to GitHub. Before a human review, I let Cursor Bugbot, Copilot, and GitHub CodeQL review the code, and ask Copilot to fix anything they flag.
Things that are still painful: To really know if code works, I need to run Temporal, two Next.js apps, several Python workers, and a Node worker. Some of this is Dockerized, some isn’t. Then I need a browser to run manual checks.
AFAICT, there’s no service that lets me give a prompt, have it write the code, spin up all this infra, run Playwright, handle database migrations, and then let me manually poke at the system. We approximate this with GitHub Actions, but that doesn’t help with manual verification or DB work.
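For the Playwright step specifically, the smoke check such a service would need to run is small. A minimal sketch using Playwright's Python sync API; the URL, credentials, and selectors here are placeholders, not our actual setup:

```python
from playwright.sync_api import sync_playwright

# Minimal post-deploy smoke check: log in and assert the dashboard renders.
# localhost:3000 and the selectors are illustrative placeholders.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:3000/login")
    page.fill("input[name=email]", "dev@example.com")
    page.fill("input[name=password]", "not-a-real-password")
    page.click("button[type=submit]")
    page.wait_for_selector("text=Dashboard", timeout=10_000)
    browser.close()
```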
Copilot doesn’t let you choose a model when assigning an issue or during code review. The model it uses is generally bad. You can pick a model in Copilot chat, but not in issues, PRs or reviews.
Cursor + worktrees + agents suck. Worktrees clone from the source repo including unstaged files, so if you want a clean agent environment, your main repo has to be clean. At times it feels simpler to just clone the repo into a new directory instead of using worktrees.
What’s working well: Because we constantly spin up agents, our monorepo setup scripts are well-tested and reliable. They also translate cleanly into CI/CD.
Roughly 25% of “open issue → Copilot PR” results are mergeable as-is. That’s not amazing, but better than zero, and it gets to ~50% with a few comments. This would be higher if Copilot followed our setup instructions more reliably or let us use stronger models.
Overall, for roughly $1k/month, we’re getting the equivalent of 1.5 additional junior/mid engineers per engineer. Those “LLM engineers” always write tests, follow standards, produce good commit messages, and work 24/7. There’s friction in reviewing and context-switching across agents, but it’s manageable.
What are you doing for vibe coding in a production system?
Memory Safety in C# vs. Rust
I've noticed that C# is underrated, and I've been thinking about memory safety in C#. How difficult would it be to introduce a multi-paradigm memory-safety approach like Rust's? Take the ownership model, for example: would it be possible to enforce that practice via some sort of meta-framework?
Ask HN: Is building a calm, non-gamified learning app a mistake?
I’ve been working on a small language learning app as a solo developer.
I intentionally avoided gamification, streaks, subscriptions, and engagement tricks. The goal was calm learning — fewer distractions, more focus.
I’m starting to wonder if this approach is fundamentally at odds with today’s market.
For those who’ve built or used learning tools:
– Does “calm” resonate, or is it too niche?
– What trade-offs have you seen when avoiding gamification?
Not here to promote — genuinely looking for perspective.
Ask HN: Is starting a personal blog still worth it in the age of AI?
Hi HN — I’ve wanted to start a personal blog for a few years, but I keep hesitating.
I write a lot privately (notes, mini-essays, thinking-through problems). Paul Graham’s idea that essays are a way to learn really resonates with me. But I rarely publish anything beyond occasional LinkedIn posts.
My blockers:
• “Nobody needs this” / “It’s not original”
• “AI can explain most topics better than I can”
• A bit of fear: shipping something that feels naive or low-signal
At the same time, I read a lot of personal blogs + LinkedIn and I do get real value from them — mostly from perspective, lived experience, and clear thinking, not novelty.
For those of you who blog (or used to):
• What made it worth it for you?
• What kinds of posts actually worked (for learning, career, network, opportunities)?
• Any practical format that lowers the bar (length, cadence, themes)?
• If you were starting today, what would you do differently?
I’m not trying to build a media business — more like building a “public notebook” that compounds over years.
Computer animator and Amiga fanatic Dick Van Dyke turns 100
Here's a video from 2004: https://www.youtube.com/watch?v=Y1J9kfDCAmU
It's his 100th birthday today.
Ask HN: How can I get better at using AI for programming?
I've been working on a personal project recently, rewriting an old jQuery + Django project into SvelteKit. The main work is translating the UI templates into idiomatic SvelteKit while maintaining the original styling. This includes things like using semantic HTML instead of div-spamming (no wrapping divs in divs in divs) and replacing Bootstrap with minimal Tailwind. It also includes some logic refactors that preserve the original functionality while shedding years of code debt, such as replacing templates that use boolean flags for multiple views with composable Svelte components.
I've had a fairly steady process for doing this: look at each route defined in Django, build out my `+page.server.ts`, and then split each major section of the page into a Svelte component with a matching Storybook story. It takes a lot of time to do this, since I have to ensure I'm not just copying the template but rather recreating it in a more idiomatic style.
This kind of work seems like a great use case for AI-assisted programming, but I've failed to use it effectively. At most, I can get Claude Code to recreate somewhat less spaghetti-like code in Svelte. Simple prompting just isn't able to get the AI's code quality within 90% of what I'd write by hand. Ideally, AI could get its code to something I could review manually in 15-20 minutes, which would massively speed up the time spent on this project (right now it takes me 1-2 hours to properly translate a route).
Do you guys have tips or suggestions on how to improve my efficiency and code quality with AI?
Rkik v2.0.0 – NTP, NTS, PTP diagnostics, presets and config, Docker test lab
Hi HN,
I’m excited to announce rkik v2.0.0, a major update to Rusty Klock Inspection Kit, a stateless CLI tool and library for inspecting network time protocols (NTP/NTS) and precision clocks across infrastructure. This release advances the project well beyond its original scope as a simple NTP offset inspector.
What’s new in v2.0.0
Network Time Security (NTS) support
- Fully integrated RFC 8915 NTS implementation with diagnostic detail.
- --nts flag to enable authentication and encrypted NTS sessions.
- Adjustable --nts-port, handshake timing, cookie metrics, negotiated AEAD algorithms, and certificate inspection (subject, issuer, validity, fingerprints).
- JSON export of all NTS diagnostic data.
NTS works alongside existing features like compare and plugin modes.

Precision Time Protocol (PTP) diagnostics
- New --ptp switch for querying IEEE-1588 environments (Linux only).
- Handles domain and port controls (--ptp-domain, --ptp-event-port, --ptp-general-port).
- Optional hardware timestamping (--ptp-hw-timestamp) and extensive master clock info.
- Supports text, structured JSON, compare output, and plugin lines.
- Library primitives (PtpProbeResult, PtpQueryOptions, …) for embedding in other tools.

Config & presets management
- Persistent configuration via rkik config and workspace presets via rkik preset, stored in ~/.config/rkik/config.toml (override via RKIK_CONFIG_DIR).
- Presets let you define reusable probe sets and run them by name.

Test lab & Docker environment
- New Docker-based test environment (./scripts/test-env-up.sh) to spin up multiple NTP daemons and a PTP grandmaster locally, enabling consistent QA and CI demos.

CLI redesign and documentation
- CLI v2 spec documented in docs/cli_v2.md. New subcommand layout and improved ergonomics.
rkik started as a lightweight way to inspect NTP responses without daemons or root, but with v2.0.0 it becomes a comprehensive diagnostics and observability toolkit for time-related protocols, suitable for SREs, network engineers, and infrastructure operators who need precise insight into clock behavior across distributed systems.
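rkik itself is Rust, but if you're new to the domain, the basic measurement a stateless NTP probe makes is easy to picture. A rough Python sketch using the ntplib package, purely as an illustration of the concept, not rkik's code:

```python
import ntplib  # pip install ntplib

# Query one server and print the basic numbers an NTP inspector cares about:
# local clock offset, round-trip delay, and the server's stratum.
client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)
print(f"offset:  {response.offset:+.6f} s")  # local clock vs. server
print(f"delay:   {response.delay:.6f} s")    # round-trip network delay
print(f"stratum: {response.stratum}")
```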
All sources and releases are available on GitHub: https://github.com/aguacero7/rkik
Ask HN: Bloggers, how do you manage your content?
I would like to start a personal blog. I think Substack is a good option, but I would like more control over styling (potentially custom components) and want to host the blog on my own website.
I wanted to ask what the writing and hosting process is like for people who have a personal website and blog—do you just write markdown and then use a renderer?
I would like a kind of WYSIWYG editor to see exactly how the content will appear once loaded. The issue with writing in a separate editor is that the line breaks, line lengths, font, etc. never match how the published page will actually look. Thanks!
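For what it's worth, the "write markdown, then render" pipeline many personal sites use is only a few lines. A minimal Python sketch with the markdown package; the file paths and the template string are placeholders:

```python
import pathlib
import markdown  # pip install markdown

# Render one post from Markdown to an HTML fragment.
src = pathlib.Path("posts/hello-world.md").read_text()
body = markdown.markdown(src, extensions=["fenced_code", "tables"])

# Wrap it in your real site template so previews show the actual fonts,
# line lengths, and custom components: a cheap local WYSIWYG check.
page = f"<!doctype html><html><body><main>{body}</main></body></html>"
out = pathlib.Path("out/hello-world.html")
out.parent.mkdir(exist_ok=True)
out.write_text(page)
```

Rendering into your real template locally gets you most of the way to the preview fidelity you're describing.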
Ask HN: Best back end to run models on Google TPU?
So, I got a Pixel 10 Pro, and I'd like to run the Parakeet (or Whisper) model on it for voice-to-text. I'm building an AI dictation app (aidictation.com). I'm struggling to find a way to run this model on-device, so I have to resort to using the Groq API, which is suboptimal.
Any recommendations?
Ask HN: Did anyone else notice that the OpenAI Labs website was completely gone?
I was sad to discover today that all of my Dall-E image generations are gone, along with the entire https://labs.openai.com/ site. Apparently, some users received emails earlier in the year when it was about to be taken down, but I didn't. There were quite a few images in my history that I would have liked to have saved.
Maybe worse is how much this lowers my trust in OpenAI, which was already low. Dall-E was not a small platform; it was a cultural phenomenon accessed by hundreds of millions of users. It's bewildering that OpenAI would so silently "take it behind the shed."
I'm searching, and it doesn't seem as if there was even an HN post about the shutdown. So, it didn't even hit this place's radar. How many of you are hearing this for the first time?
I don't understand how a company like OpenAI could be so reckless with user data integrity and access, particularly when sunsetting a product. All of the big-dog tech platforms have fairly robust protocols for notifying users and allowing them to download their data (even with hoops to jump through). How can they hope to be one while still acting like a "move fast and break things" startup? I liked the thing they broke.
Ask HN: How do you get comfortable with shipping code you haven't reviewed?
This is the advice I've gotten on how to adapt to AI-driven development at breakneck speed, to the point of having AI tooling write and ship projects in languages the 'operator' doesn't even know. How do you get confidence in a workflow where, e.g., a team of agents does development, another team of agents does code review and testing, and the result is shipped without a human ever verifying the implementation?
I hear stories of startup devs deploying 10-30k+ lines of code per day and that a single dev should now be able to build complete products that would ordinarily take engineer-years in under a month. Is this realistic? How do you learn to operate like this?
Ask HN: Why are modern AIs ignorant or reluctant to talk about "vibe coding"?
Is it because very little of their training data has the term "vibe coding" in it?
DietPi released a new version: v9.20
DietPi is a lightweight Debian-based Linux distribution for SBCs and server systems, with the option to install desktop environments, too. It ships as a minimal image but lets you install complete, ready-to-use software stacks through a set of console-based shell dialogs and scripts.
The source code is hosted on GitHub: https://github.com/MichaIng/DietPi
The main website can be found at: https://dietpi.com/
Wikipedia: https://de.wikipedia.org/wiki/DietPi
The project released DietPi v9.20 on December 14th, 2025.
The highlights of this version are:
- Orange Pi 5/Max/Ultra, Radxa ZERO 3: fixes for these SBCs
- RustDesk Server: new software title (self-hosted remote access server)
- DietPi-Backup: improvements for NFSv4 mounts
- Allo GUI: new major version is used for installation
- Fixes for Portainer, DietPi-Drive_Manager, BirdNET-Go, ProFTPD, LazyLibrarian, Fail2Ban, Radarr

The full release notes can be found at: https://dietpi.com/docs/releases/v9_20/
Ask HN: Thought-Provoking Books
I read many non-fiction books, but recently noticed that only a few qualify as truly heavy, thought-provoking reads: the kind you can't finish in a manageable time because you keep telling yourself, "Wait a minute," then stop to Google something, run an experiment, or just think deeply. My current example (still unfinished) is "Moonwalking with Einstein" by Joshua Foer. It's mind-blowing: an entire memory universe around us that I had never properly explored before.
Ask HN: How do I navigate the horrors of requirements gathering in product management?
Every other day I face challenges while gathering requirements from various clients.
1. When everything becomes priority number 1
2. When the stakeholder goes back on the discussed requirements
3. Requirements change after every single meeting
4. During UAT a new stakeholder appears out of nowhere and says "This is not what we wanted"
5. You rely on an SME for inputs who actually doesn't have a clue
6. Two clients from the same team give you opposite requirements
7. Scope creep is the new fashion
8. THE BIGGEST OF ALL: the client doesn't know what they want
How do you navigate the horrors of the requirement gathering process to make yourself a better product manager?
`nmrs` is officially stable: v1.0.0 released
Super excited to say I've finished `1.0.0`, which marks my library's API as stable. Breaking changes will only occur in major version updates (`2.0.0`+). All public APIs are documented and tested.
`nmrs` is a library providing NetworkManager bindings over D-Bus. Unlike `nmcli` wrappers, `nmrs` offers direct D-Bus integration with a safe, ergonomic API for managing WiFi, Ethernet, and VPN connections on Linux. It's also *runtime-agnostic* and works with any `async` runtime.
This is my first (real) open source project and I'm pretty proud of it. It's been really nice to find my love for FOSS through `nmrs`.
Hope someone derives use out of this and is kind enough to report any bugs, feature requests or general critiques!
I am more than open to contributions as well!
https://github.com/cachebag/nmrs
Docs: https://docs.rs/nmrs/latest/nmrs/
Ask HN: Any online tech spaces you hang around that don't involve AI?
I understand why AI is dominating online discourse right now. The tech is novel, it's pushing boundaries, the business side has trillions of dollars involved, and it has made its way into the everyday mainstream.
But I just truly don't find it interesting. For all those who do: great! For myself, for whatever reason, it just does not scratch that part of my brain. I'd rather spend days writing and debugging code (to create a 5-minute automation ;) ) than have AI spit something out in 10 seconds.
I just use AI like a supercharged Stack Overflow: ask it something if I have a syntax error or whatever, then move on by continuing to use my own brain to think through the logic and patterns of my project.
All that to say: I miss what HN was before AI and LLMs started dominating everything!
Anyone have other spaces, blogs, communities, or whatever where you go to learn and/or discuss interesting things that don't have anything to do with AI?
Ask HN: Please suggest a smart watch that can be customized
I’m looking for an affordable/cheap smartwatch that can be modified. Ideally I just want to put a custom image background on it and lock down/disable all other smart features, especially games and things like that. And preferably cheap enough that when my kid inevitably loses it or breaks it I won’t be crying over the wasted money.
Our "enterprise" experience with Stripe after $1B+ processed (be careful)
Hi guys,
I'm in the middle of a Stripe shakedown and feel this is something to warn others about.
We rent vehicles and implemented Stripe in 2017. We process massive volume and must have put $1B+ through Stripe so far.
We love Stripe: the tools, the software, etc. But recently they have acted more like a mob boss than a vendor, as they must know their customers are highly locked in.
Over time, our deal evolved into a multi‑year minimum annual fee commitment with “enterprise” pricing. On every renewal, the pattern has been:
Stripe pushes the minimum annual fees higher.
If we don’t naturally hit that minimum, they expect us to burn the difference on add‑on products and “nice to have” features just to satisfy the commit.
We’re warned that if we don’t find a way to hit the minimum, they can just take the full amount out of revenue.
What I would think about before picking Stripe:
1. Make your integration portable; don't use vendor form/card logic (see the sketch below).
2. Use an invoicing platform that can easily switch card providers.
3. Push back on even a small yearly minimum, as they will just raise it next time. Instead of focusing on you making more money, Stripe focuses on itself.
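On point 1, here is a minimal sketch of what a portable integration boundary can look like, in Python. All names are illustrative; none of this is Stripe's actual API:

```python
from abc import ABC, abstractmethod

class PaymentProvider(ABC):
    """Your own interface: no vendor SDK types leak past this boundary."""

    @abstractmethod
    def charge(self, customer_id: str, amount_cents: int, currency: str) -> str:
        """Charge a customer; return your internal payment id."""

class StripeProvider(PaymentProvider):
    # The only module that would import the vendor SDK. Switching
    # processors means writing one new adapter, not rewriting checkout.
    def charge(self, customer_id: str, amount_cents: int, currency: str) -> str:
        raise NotImplementedError("vendor SDK call goes here")

def checkout(provider: PaymentProvider, customer_id: str, total_cents: int) -> str:
    # Business logic depends only on the abstract interface.
    return provider.charge(customer_id, total_cents, "usd")
```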
To anyone working at Stripe: you guys built an awesome product; I just wish you could maintain the culture that got you to where you are.
Good luck
Meta "deletes" years of conversations with friends/family
It seems Meta recently migrated all Messenger chats to E2EE. But if you had created a secret conversation (E2EE) in the past, then instead of migrating the existing chat, Meta decided to archive it and switch to the secret chat.
Since I only experimented with/created secret chats in the past with close friends and family, all my old conversations with years of history were archived with no path to restore.
This is horrible UX, and it's absolutely ridiculous that this design was approved at Meta.
Why are "remote" jobs in late 2025 still limited to hiring in US/CA/UK/DE?
Throughout 2025, I've been following job boards like YC Jobs, RemoteOK, NoDesk, WeWorkRemotely, and others. Across all of them, I keep seeing a recurring pattern:
Many companies advertise "remote" roles, but hiring is limited to the US, Canada, UK, or Germany. Sometimes they add one or two more countries, but rarely anything beyond that.
Given that it's the last quarter of 2025 and remote work is more established than ever, I'm trying to understand the reasoning behind this.
A few questions I'm hoping founders, hiring managers, or people with international hiring experience can shed light on:
- Is the main blocker regulatory complexity? (employment law, compliance, local registrations, PE risk, etc.)
- Is it primarily about taxes and payroll overhead when hiring abroad?
- Are there security or liability concerns that make certain jurisdictions easier to work with?
- Is it simply the cost of maintaining compliant employment structures worldwide, or are there deeper strategic reasons?
- And finally: Is there evidence that the value produced by strong engineers abroad doesn't offset those costs, or is the issue not economic at all?
I'm asking out of genuine curiosity. From the outside, it seems like a global talent pool should be an advantage, especially for remote-first companies. But the hiring restrictions persist, even as tools like Deel, Remote, Oyster, etc. mature.
I'd love to hear perspectives from people who have dealt with this firsthand.
Ask HN: How are you educating your elementary aged children?
Hi folks! We are pretty set on continuing in public school, and I totally understand that we should focus more on the social/emotional and extracurricular front. As for academics, how have you supplemented their education besides the standard Kumon / Russian School of Math / Singapore Math?
I'm hearing lots of “stuff” about Alpha School and AI-enabled learning, but have people tried something they liked? We have tried:
1. Stuff like Tynker
2. Khan Academy Kids, which we love
3. Haven't tried any of the subscription-box stuff yet
Ask HN: End of Year Book Recommendations
What's the top book (or top 1-3 books) you read this year (2025) that you would recommend to the HN community? Note: the book itself doesn't need to have been published in 2025.