Ask stories

py4 about 2 hours ago

Ask HN: How to avoid skill atrophy in LLM-assisted programming era?

Will technical skill even matter at all?

6 4
abkt about 8 hours ago

Ask HN: Books to learn 6502 ASM and the Apple II

I want to learn Assembly to make games on the Apple II. What are the best old books for learning 6502 Assembly and the Apple II itself (memory, screen management)? And is it absolutely necessary to learn BASIC before Assembly?

87 61
vldszn about 4 hours ago

Ask HN: European alternative to Vercel/Cloudflare for hosting

Hi, I’m looking for a hosting/CDN solution that’s similar to Vercel or Cloudflare Pages/Workers, but based in Europe.

Any recommendations or experiences with European providers?

3 7
_as_text about 3 hours ago

Read it as `ln (-s x) y`, not `(ln -s) (x y)`

I could never remember the operand order for `ln -s x y`, and now I've realized why: the command supports two simultaneous parsings.

`(ln -s) (x y)` — the intended reading. `-s` for "symbolic," argument order same as in `cp x y`. Fine, but I don't trust such analogies — not after `find`, `dd`, or `tar`.

Also, it is weird how at birth we denote the symlink as `x y`, but later if we `ls -l y` we'll see `y -> x`. Why the reversal? Using `ln -s` makes `-s` powerless to impose a convention: only the link itself is qualified as symbolic, and it is left to us to figure out what that means for the operands.

`ln (-s x) y` — my reading. `-s` for "source." You're declaring x as the source of content for the new name y.

"But wait, x is called the 'target' in symlink terminology!" This was my confusion. I'd been treating "source" and "target" as antonyms, so the mnemonic kept breaking. But x is both: target of the link, source of the content.¹

All symlinks to a resource form a tree rooted at the original:

v1/ ← original
  ├── v2     (ln -s v1 v2)
  │   └── v3 (ln -s v2 v3)
  └── v4     (ln -s v1 v4)

Each `ln` with `-s` extends a branch. The partial order `x < y` (iff `ln -s x y`) is even witnessed by `st_birthtime` — the filesystem records the Hasse diagram's construction history.

tl;dr: `ln -s old new` pushes `new` onto a stack rooted at `old`. The `-s` is for "source," not just "symbolic."
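For anyone who wants to see the reversal firsthand, here is a minimal shell sketch of the tree above, run in a scratch directory (`readlink` prints the stored source):

```shell
# Build the symlink tree from the post in a throwaway directory.
cd "$(mktemp -d)"
mkdir v1            # the original
ln -s v1 v2         # source v1, new name v2
ln -s v2 v3         # chains: v3 points at v2, which points at v1
ln -s v1 v4         # second branch off the root
ls -l v2            # shows "v2 -> v1": reversed from the creation order
readlink v3         # prints the immediate source, "v2"
```

On GNU systems (and recent BSDs/macOS), `readlink -f v3` resolves the whole chain back to `v1`.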

---

¹ Like how topology students eventually realize a set can be both closed and open — the words aren't antonyms, just independent properties. I wonder what formal topology scaffolding could make "source" and "target" correspond to "open" and "closed."

4 0
manux81 2 days ago

Ask HN: DDD was a great debugger – what would a modern equivalent look like?

I’ve always thought that DDD was a surprisingly good debugger for its time.

It made program execution feel visible: stacks, data, and control flow were all there at once. You could really “see” what the program was doing.

At the same time, it’s clearly a product of a different era:

– single-process

– mostly synchronous code

– no real notion of concurrency or async

– dated UI and interaction model

Today we debug very different systems: multithreaded code, async runtimes, long-running services, distributed components.

Yet most debuggers still feel conceptually close to GDB + stepping, just wrapped in a nicer UI.

I’m curious how others think about this:

– what ideas from DDD (or similar old tools) are still valuable?

– what would a “modern DDD” need to handle today’s software?

– do you think interactive debugging is still the right abstraction at all?

I’m asking mostly from a design perspective — I’ve been experimenting with some debugger ideas myself, but I’m much more interested in hearing how experienced engineers see this problem today.

49 58
theturtlemoves about 13 hours ago

Ask HN: How much emphasis to put on unit testing and when?

I'm wondering if a shift has occurred. When I started as a junior software engineer, over a decade ago, I learned about unit testing, integration testing, system testing. The whole codebase we worked on was thoroughly unit tested, and had layers of integration tests and system tests as well. I've worked for other employers since and in some cases any kind of automated testing was completely absent. Still, the message I got when reading and keeping up with best practices was: unit test ALL the things!

I've personally found that when the architecture of the system is not yet mature, unit tests can get in the way. Terribly so. Integration tests or system tests that assert behavior seem like the better starting point in that scenario and others, including when there are no tests at all yet.

I've recently read a statement about letting go of a strict "unit test everything" mindset and going for integration tests instead. I'm thinking it probably depends, as with everything, on the type of system you're working on, the maturity of the system, the engineers' experience with automated testing, etc.

I'd be interested to learn when each type of testing helps you and when it gets in the way (and what it gets in the way of).

8 13
fractal618 about 13 hours ago

Ask HN: Notification Overload

I'm looking for tools or methods to better curate the deluge and cacophony of notifications, emails, texts and phone calls I imagine we are all getting inundated with every day, in ever-increasing volume and entropy.

The amount of "notifications" I get every day is overwhelming to the point where I often decide to switch my phone to "silent", leave my phone in another room, and even turn it off for periods of time. The problem with this is that I miss important things and they often get buried.

I've spent hours and hours unsubscribing, deleting, uninstalling, toggling settings, but then I find myself reinstalling, resubscribing. It's just a mess, and exhausting to just think about.

The reason I'm writing this is partially to vent. I just realized that my closest friend's birthday was a few weeks ago. I had it in my calendar, but never saw the notification. Yes, I should be more organized, and yes, it's not the end of the world. But damn it, I get so much crap from this bionic appendage, and still I can't use this tool to help me remember important things.

It just seems like it's getting worse every year.

Hopefully this is helpful to others.

P.S. can we please stop with the "would you like all or some cookies" popup on every friggin website?

P.P.S. can websites stop asking for permission to invade my OS?

P.P.P.S. does anyone else ever want to run away and be an off-grid hermit?

6 5
akshay326 about 23 hours ago

Ask HN: How to prevent Claude/GPT/Gemini from reinforcing your biases?

Lately I've been experimenting with this template in Claude's default prompt: ``` When I ask a question, give me at least two plausible but contrasting perspectives, even if one seems dominant. Make me aware of assumptions behind each. ```

I find it annoying because (a) it compromises brevity and (b) sometimes the plausible answers are so good that they force me to think.

What have you tried so far?

27 17
goopthink 3 days ago

Ask HN: Gmail spam filtering suddenly marking everything as spam?

Almost all transactional emails are being marked as suspicious even when their SPF/DKIM records are fine and they’ve been whitelisted before. Did Google break something in gmail/spam filtering?

209 122
terabytest 7 days ago

Ask HN: Do you have any evidence that agentic coding works?

I've been trying to get agentic coding to work, but the dissonance between what I'm seeing online and what I'm able to achieve is doing my head in.

Is there real evidence, beyond hype, that agentic coding produces net-positive results? If any of you have actually got it to work, could you share (in detail) how you did it?

By "getting it to work" I mean:

* creating more value than technical debt, and

* producing code that’s structurally sound enough for someone responsible for the architecture to sign off on.

Lately I’ve seen a push toward minimal or nonexistent code review, with the claim that we should move from “validating architecture” to “validating behavior.” In practice, this seems to mean: don’t look at the code; if tests and CI pass, ship it. I can’t see how this holds up long-term. My expectation is that you end up with "spaghetti" code that works on the happy path but accumulates subtle, hard-to-debug failures over time.

When I tried using Codex on my existing codebases, with or without guardrails, half of my time went into fixing the subtle mistakes it made or the duplication it introduced.

Last weekend I tried building an iOS app for pet feeding reminders from scratch. I instructed Codex to research and propose an architectural blueprint for SwiftUI first. Then, I worked with it to write a spec describing what should be implemented and how.

The first implementation pass was surprisingly good, although it had a number of bugs. Things went downhill fast, however. I spent the rest of my weekend getting Codex to make things work, fix bugs without introducing new ones, and research best practices instead of making stuff up. Although I made it record new guidelines and guardrails as I found them, things didn't improve. In the end I just gave up.

I personally can't accept shipping unreviewed code. It feels wrong. The product has to work, but the code must also be high-quality.

460 452
emmasuntech about 12 hours ago

Ask HN: What's your wiring pattern for large addressable LED installs?

Hi HN — I’m collecting “known-good” wiring patterns for large addressable LED strip installs (WLED/ESP32 / FastLED, WS281x-class pixels). I’m not trying to promote anything; I’d just like to sanity-check best practices and learn what actually works in the field.

Scope: 5V/12V/24V addressable strips, from a few hundred to a few thousand pixels, used in desks/coves/signage/art installs.

Things I already do (baseline):

Power injection (start + mid/end depending on load)

Fuse near the PSU and per-branch when splitting

Common ground between controller and strip

300–500Ω series resistor on data near the first pixel

500–1000µF capacitor near the strip input

Level shifting for 3.3V controllers when needed
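The worst-case-white arithmetic behind injection spacing is quick to sketch; this assumes ~60 mA per pixel at full white for 5 V WS2812B-class strips, which is a common rule of thumb rather than a datasheet guarantee:

```shell
# Worst-case load estimate, assuming ~60 mA/pixel at full white for
# 5 V WS2812B-class pixels (an assumption; check your strip's datasheet).
pixels=300
ma_per_pixel=60
awk -v n="$pixels" -v ma="$ma_per_pixel" 'BEGIN {
  amps  = n * ma / 1000      # total current at full white
  watts = amps * 5           # at 5 V
  printf "worst-case: %.1f A, %.0f W\n", amps, watts
}'
# 300 pixels at full white is ~18 A: well past what a single injection
# point or thin wire should carry, which is one reason people inject
# power every 100-150 pixels on 5 V runs.
```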

Where I’d love your experience:

Do you prefer 5V distribution, or 12/24V distribution + local buck converters near segments? Why?

What’s your go-to approach for long data runs (controller far from first pixel)?

Twisted pair + ground?

Differential (RS-485 style) transceivers?

Placing the controller closer and extending only power?

Any “never again” lessons on connectors, wire gauge, heat, or fusing?

If you’ve done installs that must survive months/years, what design choices mattered most?

If you have a wiring sketch, parts list, or a short rule-of-thumb (e.g., injection spacing under worst-case white), I’d really appreciate it.

Thanks!

3 3
tomtec about 12 hours ago

Ask HN: What Happened to Apple App Clips?

Apple introduced App Clips in iOS 14 as a way to instantly use small parts of an app without installing the full application. The idea seemed compelling: low friction, fast launch, contextual entry points (QR, NFC, links).

Five years later, I almost never encounter App Clips in the wild, and Google has already killed Instant Apps.

- Are there notable apps or companies where App Clips are a meaningful acquisition or engagement channel?

- Did you decide to remove your App Clip to cut the maintenance burden?

4 8
dsrtslnd23 4 days ago

Ask HN: What's the current best local/open speech-to-speech setup?

I’m trying to do the “voice assistant” thing fully locally: mic → model → speaker, low latency, ideally streaming + interruptible (barge-in).

Qwen3 Omni looks perfect on paper (“real-time”, speech-to-speech, etc). But I’ve been poking around and I can’t find a single reproducible “here’s how I got the open weights doing real speech-to-speech locally” writeup. Lots of “speech in → text out” or “audio out after the model finishes”, but not a usable realtime voice loop. Feels like either (a) the tooling isn’t there yet, or (b) I’m missing the secret sauce.

What are people actually using in 2026 if they want open + local voice?

Is anyone doing true end-to-end speech models locally (streaming audio out), or is the SOTA still “streaming ASR + LLM + streaming TTS” glued together?

If you did get Qwen3 Omni speech-to-speech working: what stack (transformers / vLLM-omni / something else), what hardware, and is it actually realtime?

What’s the most “works today” combo on a single GPU?

Bonus: rough numbers people see for mic → first audio back

Would love pointers to repos, configs, or “this is the one that finally worked for me” war stories.

254 61
marksugar about 14 hours ago

A Lightweight, Non-Intrusive Approach to Website Monitoring (Ops Perspective)

I’m a Linux ops engineer working in the DevOps/SRE space, and over the past few months I’ve been working on a small *website monitoring* side project in my spare time: https://inostop.com/en/

Most of the monitoring and ops tools I’ve built before were used internally within companies. This is my first attempt to turn a relatively complete tool into something publicly usable.

In day-to-day operations, website monitoring usually involves:

- Infrastructure monitoring

- Application / API monitoring

- Partial CDN monitoring

These are often built on top of tools like Prometheus or Zabbix, combined with log systems (ELK / OpenObserve) and distributed tracing (OpenTelemetry). While powerful, this stack can feel *heavyweight and overkill* when you just want to quickly monitor a website’s availability.

That led me to experiment with a simpler approach:

- Non-intrusive (no code changes or sidecars required)

- Out-of-band probing to estimate website availability

- Conservative thresholds to reduce false alarms

So far, the project covers:

- Domain and TLS certificate monitoring, plus Ping and Telnet checks

- Basic alert thresholds and multi-stage alert silencing to reduce alert fatigue
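For context, a minimal out-of-band probe of the kind described can be sketched in a few lines of shell; the hostname is a placeholder, and this assumes only `curl` and `openssl` on the probing host:

```shell
# Hypothetical out-of-band probe: HTTP availability plus TLS certificate
# expiry, with nothing installed on the target. Hostname is a placeholder.
host="example.com"
code=$(curl -s -o /dev/null --max-time 10 -w '%{http_code}' "https://$host/")
expiry=$(echo | openssl s_client -servername "$host" -connect "$host:443" 2>/dev/null |
         openssl x509 -noout -enddate)
echo "HTTP $code; $expiry"
```

A real service would layer retries, multiple probe locations, and conservative thresholds on top of checks like these.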

There are still open challenges:

- There’s still room to improve the UX of the Website Monitoring results (backend is written in Go).

- AI currently works only as an analysis layer on collected data, rather than actively performing real network probes

This project is still evolving (I’ve rewritten parts of it more times than I’d like to admit).

If you’d like to try it out, there’s an early access code *95f40841e4888668c4d5f7e88506075d*, valid for one month, mainly for collecting early feedback.

I’d love to hear feedback from the community:

- Does a lightweight, non-intrusive website monitoring approach make sense in practice?

- Are there better patterns or architectures worth exploring?

- If you’re a QA or test engineer, I’d love to hear your thoughts.

3 0
ok_orco 2 days ago

Tell HN: I cut Claude API costs from $70/month to pennies

The first time I pulled usage costs after running Chatter.Plus - a tool I'm building that aggregates community feedback from Discord/GitHub/forums - for a day, I saw $2.30. Did the math. $70/month. $840/year. For one instance. Felt sick.

I'd done napkin math beforehand, so I knew it was probably a bug, but still. Turns out it was only partially a bug. The rest was me needing to rethink how I built this thing. Spent the next couple days ripping it apart. Making tweaks, testing with live data, checking results, trying again. What I found was I was sending API requests too often and not optimizing what I was sending and receiving.

Here's what moved the needle, roughly biggest to smallest (besides that bug, which was costing me a buck a day on its own):

- Dropped Claude Sonnet entirely - tested both models on the same data, Haiku actually performed better at a third of the cost

- Started batching everything - hourly calls were a money fire

- Filter before the AI - "lol" and "thanks" are a lot of online chatter. I was paying AI to tell me that's not feedback. That said, I still process agreements like "+1" and "me too."

- Shorter outputs - "H/M/L" instead of "high/medium/low", 40-char title recommendation

- Strip code snippets before processing - they just reiterate the issue and bloat the call
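A pre-filter like the one described can be as simple as a deny-list in front of the API call. This is a hypothetical sketch (the patterns and the `filter_feedback` name are illustrative, not from Chatter.Plus):

```shell
# Hypothetical pre-filter: drop low-signal chatter before it costs an API
# call, while letting agreement signals like "+1" and "me too" through.
filter_feedback() {
  grep -ivE '^(lol+|lmao|thanks?|thx|nice|cool|ok(ay)?)[[:punct:]]*$'
}

printf 'lol\n+1\nexport crashes on Safari\nthanks!!\nme too\n' | filter_feedback
# keeps "+1", the bug report, and "me too"; drops "lol" and "thanks!!"
```

Even a crude allow/deny list like this runs for free, so every message it drops is a token budget you never spend.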

End of the week: pennies a day. Same quality.

I'm not building a VC-backed app that can run at a loss for years. I'm unemployed, trying to build something that might also pay rent. The math has to work from day one.

The upside: these savings let me 3x my pricing tier limits and add intermittent quality checks. Headroom I wouldn't have had otherwise.

Happy to answer questions.

38 21
Akito5928 about 14 hours ago

NextLiber VRM: Running Unity projects outside the Unity runtime

I've just launched a concept called *NextLiber VRM (NLV)*.

The idea is simple but radical: *Running Unity projects outside the Unity runtime.*

Unity's asset ecosystem is powerful, but its runtime is heavy, closed, and increasingly problematic for enterprise, education, and research.

NextLiber VRM aims to build a *Virtual Runtime Machine* on Java that can interpret Unity scenes, assets, and logic — without relying on Unity's native runtime.

Still conceptual, but the vision is clear: - Liberation from Unity runtime dependency - Structural redefinition of game engine architecture - A new execution layer for Unity projects, built externally

GitHub repo: https://github.com/Akito5928/NextLiber-VRM/

Discussions are open: https://github.com/Akito5928/NextLiber-VRM/discussions

Would love to hear thoughts, critiques, or wild ideas.

2 0
marcuswestin about 15 hours ago

Tell HN: Ask your AI, "What can you tell me that you know about me?" to see ...

... to get an interesting glimpse into what they know about you already.

At least, I did.

I then followed up with:

  "That’s fascinating, thank you. Please tell me more things that you know about me."
and, in a new thread:

  "I want to tell you about myself so that you can help me better, but I first need to know everything you already know about me."
It felt a bit like asking an ad targeting platform to tell me how it targets me.

I also asked it to speculate about me:

  "Based on what you know about me, what would you say about:
    - my political leanings
    - my sexual orientation
    - my sleep schedule
    - my most likely big purchases in 2026
    - my color preferences
    - my general state of health
    ..."
GPT refused to speculate on my political leanings and sexual orientation, but did give me some unexpected data points.

Finally, I tried to get its help to figure out what else I could ask it about myself:

  "What are other questions that I could ask you, that might help me better understand what else I might need to tell you about myself (in order for you to know me; I first need to know everything you already know about me)?"
I then asked some of those questions, and got some surprising perspectives.

I'd be curious to hear what others are surprised to find their AIs know about them.

4 2
oliverjanssen 6 days ago

Tell HN: 2 years building a kids audio app as a solo dev – lessons learned

Hi,

I started Muky in April 2024. Classic side project that got out of hand. We have two kids - the younger one is happy with the Toniebox, but our older one outgrew it. She started asking for specific songs, audiobooks that aren't available as figurines, and "the music from that movie."

We had an old iPad Mini lying around and already pay for Apple Music. Felt dumb to keep buying €17/$20 figurines for 30-45 minutes of content when we have 100 million songs.

Now at version 4.0 after ~20 updates. Some lessons:

On the hardware vs app tradeoff: Toniebox and Yoto are brilliant for little ones – tactile, simple, no screen needed. But they hit a wall once kids want more. And handing a 5-year-old Apple Music means infinite scrolling and "Dad, what's this song about?" Muky sits in between – full library access, but parents control what's visible.

On sharing: Remember lending CDs or cassettes to friends? Or kids swapping Tonie figurines at a playdate? I wanted that for a digital app. So I built QR code sharing. Scan, import, done. And unlike a physical thing – both keep a copy.

On onboarding: First versions: empty app, figure it out yourself. Retention was awful. Now: 4-step onboarding that actually guides you. Should've done this from the start.

On content discovery: 100 million songs sounds great until you have to find something. Parents don't want to search – they want suggestions. Spent a lot of time building a Browse tab with curated albums and audiobooks for kids. Finally feels like the app helps you instead of just waiting for input.

On going native: Went with Swift/SwiftUI instead of Flutter or React Native. No regrets - SwiftUI is a joy to work with and performance is great. Android users ask for a port regularly. No capacity for that now, but Swift for Android is progressing (https://www.swift.org/documentation/articles/swift-sdk-for-a...). Maybe one day. CarPlay is another one parents keep asking for – going native should make that easier to add, if Apple grants me the entitlement.

On subscriptions vs one-time: Started with one-time purchase. Revenue spikes at launch, then nothing. Switched to subscription – existing one-time buyers kept full access. Harder to sell, but sustainable.

Ask me anything about indie iOS dev or building for kids. App is at https://muky.app if you're curious.

136 79
discovrapp about 17 hours ago

I'm an apprentice electrician. I built this iOS app using only Claude

Hey HN,

I’m currently working as an apprentice electrician in Vancouver. I spend my days pulling wire and bending conduit, so I have zero background in CS or software engineering.

Why I built this: I love going to concerts and local festivals, but I hated having to check Ticketmaster, Eventbrite, and random local blogs just to plan a weekend. It felt like work. I just wanted one clean feed showing "what's happening near me tonight."

How I built it: Since I don't know Swift, I used Claude for 99% of the code. My workflow was basically: ask Claude for a feature -> paste into Xcode -> get an error -> paste error back to Claude -> repeat until it runs.

I don't have a GitHub repo to share because, honestly, the code is probably a mess and I’m still figuring out how git works.

The App (Discovr): It aggregates events from multiple platforms. It’s free and I tried to keep the UI as clean as possible since I hate ad-cluttered apps.

I’d love to hear your feedback on the usability. If you find any bugs (I'm sure there are plenty), let me know!

App Store Link: https://apps.apple.com/ca/app/discovr/id6747321401

3 2
SilasYee 1 day ago

Qwen3-Max-Thinking Drops: 36T Tokens

Alibaba has officially launched Qwen3-Max-Thinking, a trillion-parameter MoE flagship LLM pretrained on 36T tokens—double the corpus of Qwen 2.5—and it’s already matching or outperforming top-tier models like GPT-5.2-Thinking, Claude-Opus-4.5, and Gemini 3 Pro across 19 authoritative benchmarks. Its two core technical breakthroughs are what truly set it apart.

First, Adaptive Tool Calling: No manual prompts are needed—it autonomously invokes search engines, memory tools, and code interpreters based on task demands. This cuts down on hallucinations and boosts real-time problem-solving; for instance, coding tasks trigger automatic error correction loops, while research tasks combine search with context synthesis. Second, Test-Time Scaling (TTS): It outperforms standard parallel sampling by refining reasoning through iterative insights, with measurable jumps in key benchmarks—GPQA rose from 90.3 to 92.8, LiveCodeBench v6 hit 91.4 from 88.0, and IMO-AnswerBench climbed to 91.5 from 89.5.

Notably, its preview version even achieved 100% accuracy in tough math contests like AIME 25 and HMMT 25. The model runs smoothly on web/desktop demos, and its API is production-ready with adjustable thinking budgets (up to 80K tokens by default) to balance depth and speed. This isn’t just an incremental update—it’s a leap that closes the gap in reasoning and tool integration for real-world academic and engineering tasks.

Check it out: https://chat.qwen.ai/

3 2
rozhnev about 9 hours ago

MySQL 9.6.0 and 8.4.8 are out; now available for live testing on sqlize.online

Oracle just released the January 2026 updates for MySQL. 9.6.0 is the latest on the Innovation track, while 8.4.8 remains the stable LTS choice.

For those who want to verify the new bug fixes or investigate the Optimizer changes mentioned in the release notes without a full install, sqlize.online has added both versions to their sandbox. It's a handy tool for quick reproduction of bugs or testing syntax compatibility between the 8.x and 9.x branches.

2 0
sebastian_z 1 day ago

Ask HN: Is there a good open-source alternative to Adobe Acrobat?

Ideally, it would not only be just a pdf reader but also have functionality to remove pages, add pages, sign, and edit forms.

9 6
eastoeast 1 day ago

Ask HN: How do you prevent children from accessing your products?

After launching my first few apps, I'm running into an unexpected problem: young children, using their parents' phones/tablets, are the ones most likely to click my ads. This gets around the age restriction and, in fact, has a compounding effect: they are too young to have their OWN devices (so they just install and never use the app again), they burn ad money, and, most importantly, they skew the target demographic toward users like them, so ad placements end up targeting more of them, because they're the most likely to click!

This causes all my campaigns to immediately fail, no matter how much I play with the age restriction (again, parent's device). I've changed settings to stop showing in games and on tablets, which has helped a bit, but not fully.

Obviously, there is no chance I'm adding real age verification to the app itself.

6 5
ativzzz about 20 hours ago

Ask HN: How do you do multi-agent workflows with web apps?

I'd like to try out multi agent workflows - sometimes I get a good flow where I can get one agent to be pretty independent for work where I know what needs to be done but needs a lot of code. It spins for a while, then I need to verify in the browser that it actually works and iterate/debug from there. Or I have 3 different approaches I want to try, and I can have the AI just do 1 and see if it works well with the front end and quickly roll back if not.

I'd like to be able to work on another agent while it's spinning, otherwise I just sit there and wait.

The issue is our env doesn't really allow multiple instances of our app to run simultaneously - our front end is heavy and takes a ton of memory so even if we figure out how to run multiple backends, RAM would be an issue.

It seems a lot of multi agent workflows use CLI tools - which makes sense. Anyone find success with web? Maybe some browser automation too?

3 0
throwaway37262 about 20 hours ago

Ask HN: My boss wants us to vibe code and I feel in danger

I am using a throwaway account because my boss scared the hell out of me.

I work for a large AI team (think one of the big ones... won't say which). And our CEO gave a short but quite clear talk: you HAVE to vibe code. He will check how much we vibe code, how many PRs we produce, etc., and will act accordingly.

Now. I work in the infra team. What this vibe-code mandate means for me is that my work will become unsustainable. I forecast hordes of PRs, impossible to review in time and bloated to the roof. And I can't push back on them for fear of being reported to the CEO. On the other hand, I'll be measured on the stability of the infra, which will of course collapse.

While I see more and more people, even smart people I admire(d), like Karpathy or Antirez, praise AI coding in a way that looks way, way too similar to how people endorsed Theranos, I face the daily reality that AI coding doesn't work for me. It's wonderful for kickstarting a project I don't want to do. But taking on a big architecture refactor or an improvement in the infra... that's not what AI excels at, to say the least.

AI is great at throwaway code. AI is great at things you don't know how to do. But for infrastructure... In general, for things that OTHER PEOPLE RELY ON and that NEED TO BE MAINTAINED... it's going to end very badly.

But I am losing focus, sorry. What is crazy is that I can't publicly speak about this because I'd get fired. Can you believe it? I am easily the best engineer at the company. Literally EVERYONE thanked me for the contribution to the infrastructure. And still, I am afraid of losing my job because I can't say: AI can't do this.

And given the shitton of stocks I have and cannot sell, I must hope that this obvious hype will continue long enough to pay for my mortgage.

A new Theranos is in the making.

3 3
mkotik about 21 hours ago

Ask HN: Is there a good reason real estate sites avoid comments?

I keep seeing houses that sold for roughly half their current price just a few years ago, now listed with zero context. There’s no way for buyers to share observations, call out quick flips, or add historical context. Is there a real reason platforms like Zillow avoid comments, or is it mostly moderation risk?

4 4
rsktaker about 13 hours ago

Ask HN: Is YC worth it anymore?

What are the arguments for and against?

6 3
znpy about 21 hours ago

Ask HN: Who do you follow via RSS feed?

Hello there!

I just set up TinyTinyRSS (https://tt-rss.org/) at home and I'm looking into interesting things to read as well as people/website publishing interesting stuff.

This is, among other things, to reduce the daily (doom)scrolling and avoid the recommendation algorithms of social media.

So: who or what do you follow via RSS feed, and why?

56 40
stijo 3 days ago

Ask HN: What usually happens after a VC asks for a demo?

I had a VC call that went well. They asked for a demo, mentioned looping in an operating partner, and I shared details etc. Since then it’s been quiet (a day or two).

For folks who’ve raised before or worked in VC: Is this typically just internal review time, or does silence after a demo usually signal a pass?

Not looking for validation, just trying to understand how this phase usually plays out.

Thanks.

12 6
mdnahas 1 day ago

Ask HN: Is Gaussian Splatting useful for analyzing Pretti's death?

It is now common for multiple people to video the same event on their smartphones. I'm thinking of the Pretti and Good killings. I've heard of Gaussian Splatting, which constructs a 3D scene from multiple cameras. Is it useful for analyzing these events? And, if so, can someone build an easy-to-use open-source tool?

My speculation is that it would be useful to: (1) synchronize video, (2) get more detail than a single camera can, (3) track objects (like Pretti's gun) that are seen by multiple cameras, and (4) identify AI-generated video.

The last is most important to me. There is a danger of AI-generated or modified video of an event. It seems possible to me that Gaussian Splatting from N videos could detect whether an (N+1)th video is consistent or inconsistent with the scene.

Is this possible?

4 4