Ask HN: Why can't we just make more RAM?
Is there some bottleneck in the supply chain, like rare earth metals or something, that’s limiting production throughput? Or do we simply have every factory already operating at max capacity and scaling up supply will require building more of them?
Is there some intuition we can apply to estimate how long it will take for supply to catch up to demand?
MiniMax M2.5 is trained by Claude Opus 4.6?
I was chatting with MiniMax M2.5 on OpenRouter and it suddenly, mysteriously came out with "I'm Claude, an AI assistant created by Anthropic - not a "language" ", heh wut?
Generate tests from GitHub pull requests
I’ve been experimenting with something interesting.
AI coding tools generate code very quickly, but they almost never generate full end-to-end test coverage. They create a ton of tests, mostly unit and integration, but real user scenarios are missing. In many repos we looked at, the ratio of high-quality e2e tests to new code dropped dramatically once teams started using Copilot-style tools, or e2e testing was left to testers as a separate job.
So I tried a different approach.
The system reads a pull request and:
• analyzes changed files
• identifies uncovered logic paths using a dependency graph (single repo or multi-repo)
• understands the context via a user story or requirements (given as a comment on the PR)
• generates test scenarios
• produces e2e automated tests tied to the PR
In addition, if a user connects their CMS or TMS, that context can be pulled in as well. (Internally I use GraphRAG, but that's for another post.)
Example workflow:
1. Push a PR
2. System reads the diff + linked Jira ticket
3. Generates missing tests and a coverage report
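Step 2 ("reads the diff") can be bootstrapped from nothing more than the unified diff GitHub exposes for a PR. A minimal sketch of that step (my own illustration, not the author's system; the hunk-header parsing is the only non-obvious part):

```python
import re

def changed_files(unified_diff: str) -> dict:
    """Map each changed file to the new-file line ranges its hunks touch."""
    files, current = {}, None
    for line in unified_diff.splitlines():
        if line.startswith("+++ b/"):
            current = line[len("+++ b/"):]
            files[current] = []
        elif line.startswith("@@") and current:
            # Hunk header: @@ -old_start,old_len +new_start,new_len @@
            m = re.match(r"@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@", line)
            if m:
                start, length = int(m.group(1)), int(m.group(2) or 1)
                files[current].append((start, start + length - 1))
    return files
```

Everything past this point (mapping ranges to uncovered logic paths, pulling in the Jira ticket) needs real project context, which is where the dependency graph and GraphRAG pieces would come in.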
In early experiments the system consistently found edge cases that developers missed.
Example output:
Code Reference | Requirement ID | Requirement / Acceptance Criteria | Test Type | Test ID | Test Description | Status
src/api/auth.js:45-78 | GITHUB-234 / JIRA-API-102 | API should return 400 for invalid token | Integration | IT-01 | Validate response for invalid token | Pass
Curious how others are thinking about this kind of traceability. I'm a developer too, so I'm careful to show results only to the developer, and only the developer can make them visible to other folks; that way they can just take the corrective action themselves first.
I traced $2B in nonprofit grants for Meta and Age Verification lobbying
Over the past several months I've been pulling public records on the wave of "age verification" bills moving through US state legislatures. IRS 990 filings, Senate lobbying disclosures, state ethics databases, campaign finance records, corporate registries, WHOIS lookups, Wayback Machine archives. What started as curiosity about who was pushing these bills turned into documenting a coordinated influence operation that, from a privacy standpoint, is building surveillance infrastructure at the operating system level while the company behind it faces zero new requirements for its own platforms.
The advocacy group that doesn't legally exist

The Digital Childhood Alliance presents itself as a coalition of 50+ conservative child safety organizations (later inflated to 140+, though only six have ever been publicly named). It has been testifying in favor of these bills across states. Here is what public records show about its legal status:
DCA's domain was registered December 18, 2024 through GoDaddy with privacy protection and a four-year registration. The website was live and fully formed one day later: professional design, statistics, testimonials from Heritage Foundation and NCOSE staff, ASAA talking points already loaded. This is not a grassroots launch. This is a staging deployment of a pre-built site. 77 days later, Utah SB-142 became the first ASAA law signed in the country.
DCA processes donations through For Good (formerly Network for Good, EIN 68-0480736), which is a Donor Advised Fund. For Good explicitly states in its documentation that it serves "501(c)(3) nonprofit organizations." DCA claims 501(c)(4) status. DCA is classified as a "Project" (ID 258136) in the For Good system, not as a standalone nonprofit. I searched all 59,736 For Good grant recipients across five years, roughly $1.73 billion in disbursements. Zero grants to DCA, DCI, NCOSE, or any related entity. The donation page appears to be cosmetic.
Bloomberg reporters exposed Meta as a DCA funder in July 2025. The Deseret News detailed the arrangement in December 2025. No version of the website, across 100+ Wayback Machine snapshots, has ever disclosed funding sources. Every blog post and testimony targets Apple and Google. Meta is never mentioned or criticized.
Casey Stefanski, Executive Director, spent 10 years at NCOSE as Senior Director of Global Partnerships. Unusually, she never appears on any NCOSE 990 filing as an officer, key employee, or among the five highest-compensated staff. A senior director title at a $5.4M organization for a decade with no 990 appearance suggests either below-threshold compensation, an inflated title, or something else about the arrangement.
NCOSE's own 501(c)(4) structure turns out to be complicated. Tracing Schedule R filings across four years reveals that NCOSE created "NCOSE Action" (EIN 86-2458921) as a c4 in 2021, reclassified it from c4 to c3 in 2022, then created an entirely new c4 called "Institute for Public Policy" (EIN 88-1180705) in 2023 with the same address and the same principal officer (Marcel van der Watt). By 2024 the original entity had disappeared from Schedule R entirely.
$70M+ in super PACs, deliberately fragmented

Meta poured over $70 million into state-level super PACs and structured every one to avoid the FEC's centralized, searchable database:
If you maintain software that could be classified as an "operating system provider" under these definitions, start paying attention.

The full dataset, OSINT tasklist, and all processed findings are published with sources embedded in each file: github.com/upper-up/meta-lobbying-and-other-findings
Ask HN: What's your biggest pain point when joining a new developer team?
I'm planning to build an AI tool that lets an organisation's developers search all the files and trace references/calls whenever they have doubts. New coders in an org usually have plenty of questions about the org's frameworks or operations in general. That forces them to ask their seniors, who may not appreciate the time it takes. A custom AI-based platform to field those queries would eliminate that whole workflow.
Ask HN: Got cancer, a new job, and a new boss in less than a year. What do I do now?
Hello Everyone,
As per the title, really. I started a new job late last year. I was head-hunted, and went from a mega-stable, nothing-ever-really-changes, low-stress environment, where it would have cost a lot to get rid of me after over a decade and a half of service, to an extremely fast-paced "let's do it" environment that is rather "make it work for now", with large technical debt. I joined partly because I had a real rapport with the guy who would be my boss. The money helped too :D
The day I joined the company it got bought out by another one. Ok, we carry on, integration ongoing. Stuck between two competing outlooks on infrastructure and different ways of working.
Then, in the last month, I got a diagnosis of the big C. Tests are completed (I think), but it looks to be the one you'd want if you had to pick one. Treatment plans inbound imminently...
A few weeks ago my boss resigned. Now I have a new boss in another country. He is pretty much an unknown quantity at this point.
To be fair, my immediate teammates and colleagues (in both companies) are awesome and we get through it as best we can, but right now I don't even know what to do. I feel so much like a spare part; it's horrible. The job itself, I'm not even sure about. If only I had a time machine. Clear guidance and direction is a thing other companies do! I feel like I've made a huge mistake, and I was unhappy even before all the upheaval at the new job.
At home, we did the maths and luckily, even in the worst possible scenario the bills are covered for the very long term. That's something to be very thankful for. It may not be pretty but no one is coming knocking at the door.
I am thankful we live in a country with socialised health care and that the outlook is apparently good (unless the doctors are lying to me, obvs <---- Autism at play). I'll be honest and say that doing any work is hard because not knowing if you are going to be alive in a year or two is kind of a drag on productive work. I hope I will be, the prognosis is good but being told that news is the loneliest feeling in the world at the time.
I am still very much the newb, and I can see that if they want to rationalise headcount I am a prime target, so..... I realise they can't do it while I am ill, but you know how these things can go. So, my fellow geeks... there is not a lot of good going on right now.
Can anybody help me with an objective plan of action that might make work a bit easier? I'm not sure if I've made a huge career misstep here, or am just overreacting a bit with everything that is going on.
As I'm mostly at a loose end right now, because I can't commit to being present on any particular day due to treatment and appointments, I'm thinking of upgrading some of my skills, maybe a few certifications, though that will take all my willpower. I just need to be as up to date as possible and have a plan in case I am let go AND I get through the treatment AND it works. Everything crossed :/
The new owners are ALL GCP. My skillset lies in Linux, Ansible, Docker, technical writing, and high-performance clustering. I'm also proficient in Azure, and have (somewhat dated) VMware experience to a good depth. I know everyone is running away from VMware as fast as possible, so "meh!" on that one.
Top and bottom of it is: at a professional level, I have no idea how to prepare for what's happening and what's coming. Any advice is welcome.
AI, Human Cognition and Knowledge Collapse – Daron Acemoglu
From the abstract: "We study how generative AI, and in particular agentic AI, shapes human learning incentives and the long-run evolution of society’s information ecosystem... Learning exhibits economies of scope: costly human effort jointly produces a private signal about their own context and a “thin” public signal that accumulates into the community’s stock of general knowledge, generating a learning externality... The model highlights a sharp dynamic tension: while agentic AI can improve contemporaneous decision quality, it can also erode learning incentives that sustain long-run collective knowledge."
Ask HN: Why have co-ops never played a major role in tech?
Modern tech has a vast open source ecosystem and a huge investor backed one, but very little if any significant activity in the form of cooperatives, where individuals or small companies pool their money and other resources to take advantage of economies of scale and compete with large companies, but not one another.
Seems like something the space could increasingly use as folks become beholden to a handful of massive companies who are increasingly trying to exploit their size to increase prices, margins, and profit.
Tell HN: Apple development certificate server seems down?
I don't see anything on https://developer.apple.com/system-status/, but I haven't been able to install apps for development on my own devices starting at 11AM PDT.
Other people on Reddit seem to be hitting this too [0]. Does anyone know anything about it?
[0]: https://www.reddit.com/r/iOSProgramming/comments/1rq4uxl
Edit: Now getting intermittent 502s from https://ppq.apple.com/. Something is definitely going on.
I'm a project manager, to the engineers: how replaceable do you think my job is?
A little more detail: I'm specifically a project manager in software development, and I've definitely noticed how quickly AI related technologies are advancing.
It's not hard for me to see the writing on the wall for my own profession, especially for lower performers.
I've had good relationships with the engineers I've partnered with on projects. Generally I just try to stay out of their way, enable them to make their own decisions instead of being some kind of task master. I've also tried to prevent external groups from bothering them, remove blockers ahead of time or as quickly as possible if needed. The most tiring part of the job for me is the politics, but that is like... what I get paid to deal with and shield folks from in my view.
I think if AI related tooling is integrated effectively, then the need for compiling and sharing information on a project gets reduced significantly (if not eliminated outright).
Maybe fewer project managers will be needed (if any). That's probably a good thing, honestly. There are a lot of project managers out there who are pretty terrible. (Maybe even me sometimes!)
I'm doing some serious soul searching on whether to leave the profession entirely after 12 years, or whether to stick with it. Open to suggestions.
Ask HN: Is there prior art for this rich text data model?
I've built a rich text data model for a desktop word processor in Python, based on a persistent balanced n-ary tree with cached weights for O(log n) index translation. The document model uses only four element types: Text, Container, Single, and Group — where Group is purely structural (for balancing) and has no semantic meaning in the document. Individual elements are immutable; insert and takeout return new trees rather than mutating the old one. This guarantees that old indices remain valid as long as the old tree exists. I'm aware of Ropes, Finger Trees, and ProseMirror's flat index model. Is there prior art I should know about — specifically for rich text document models with these properties?
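For readers unfamiliar with the cached-weight trick, here is a deliberately stripped-down sketch (my illustration, not the poster's code; rebalancing and the Container/Single element types are omitted):

```python
from dataclasses import dataclass

# Every Group caches the total text length of its subtree, so translating
# a document index to a leaf is a top-down descent.

@dataclass(frozen=True)          # frozen => structurally immutable
class Text:
    s: str
    @property
    def weight(self) -> int:
        return len(self.s)

@dataclass(frozen=True)
class Group:                     # purely structural, no semantic meaning
    children: tuple
    weight: int                  # cached sum of child weights

def group(*children):
    return Group(tuple(children), sum(c.weight for c in children))

def locate(node, i):
    # Translate document index i to (leaf, offset); O(depth * fanout)
    while isinstance(node, Group):
        for child in node.children:
            if i < child.weight:
                node = child
                break
            i -= child.weight
        else:
            raise IndexError("index past end of document")
    return node, i

def insert(node, i, s):
    # Persistent insert: rebuilds only the path to the target leaf,
    # so old trees (and indices into them) stay valid
    if isinstance(node, Text):
        return Text(node.s[:i] + s + node.s[i:])
    kids = list(node.children)
    for k, child in enumerate(kids):
        if i <= child.weight:
            kids[k] = insert(child, i, s)
            return group(*kids)
        i -= child.weight
    raise IndexError("index past end of document")
```

The point is only the weight-caching descent and path-copying persistence, which is the part the question about prior art seems to hinge on.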
Instagram Ending Encrypted DMs
In May 2026, a technical change inside Instagram began to trigger a much larger debate about privacy, surveillance and the future of private conversations on the internet. The platform confirmed that it will discontinue end-to-end encryption for direct messages, reversing a feature that only a few years ago had been presented as a major step forward for digital privacy. What may sound like a technical adjustment to most users touches the center of a global conflict involving governments, technology companies and civil liberties.
To understand why this matters, it helps to start with the basics. End-to-end encryption is a system that ensures only the sender and the recipient can read the content of a message. Not even the company operating the service can access it. In practical terms, it turns messaging apps into something close to a whispered conversation. Messages travel through servers but remain unreadable to any intermediary.
For years, companies like Meta, Apple and Google defended this technology as essential to protect users from spying, data leaks and unauthorized surveillance. Meta itself repeatedly argued that in encrypted systems “nobody, not even the company, can see what was sent.” Source: https://www.yahoo.com/news/articles/fact-check-metas-planned-policy-110000756.html
Now Instagram appears to be moving in the opposite direction.
According to recent reports, the platform plans to end encrypted chats in DMs starting May 8, 2026. That means conversations sent inside the app will no longer have the same level of cryptographic protection. Source: https://www.indiatoday.in/technology/news/story/instagram-to-drop-encrypted-chats-from-may-8-your-messages-will-not-be-private-anymore-2881592-2026-03-13
Technically speaking, this shift changes something fundamental. Without end-to-end encryption, the content of messages can potentially become accessible to the company in certain contexts, enabling automated analysis, moderation systems or internal investigations.
The official justification centers on one of the most sensitive issues confronting technology companies today: online safety and child protection.
Governments in the United States, the United Kingdom and across the European Union have increasingly pressured major platforms to detect and block illegal content inside private messaging systems, particularly material linked to child exploitation. Legislative proposals such as the European Union’s controversial “Chat Control” initiative and the UK’s Online Safety Act give authorities stronger powers to demand that platforms identify harmful content, even when it appears inside private communications. Source: https://www.medianama.com/2026/03/223-meta-ending-instagram-dm-e2ee/
The problem is that encryption creates a nearly impossible technical dilemma.
True end-to-end encryption prevents exactly this type of scanning. If a platform can read messages in order to detect illegal material, then those messages are not fully encrypted. And if they are fully encrypted, the platform cannot inspect them. Full content here: https://chat-to.dev/post?id=RmlzSmxadmlQSVdtVklWSm4rTmtyUT09&redirect=/
X is selling existing users' handles
I've been on Twitter since 2007 as @hac.
In recent years I didn't sign in frequently, then last week I saw my handle show up on the new X Handles marketplace.
It seems the account now belongs to X, and because I had a "rare handle" I can't even buy it back. From what I can tell, they will wait for some time and then auction the handle for around $100k.
Losing your account is frustrating. Having it sold to someone else doesn't feel right.
Of course, there is no warning when it happens. All you can do to prevent it is sign in every 30 days and read all changes to the TOS.
Ask HN: Remember Fidonet?
Is it still somehow alive today? Is it archived anywhere?
Enabling Media Router by default undermines Brave's privacy claims
So, Brave now enables Casting by default on desktop — and does so silently, without explicit notification or consent after an update? What fresh hell is this?
A browser that markets itself as privacy‑first should not be turning on a network discovery feature by default as if it were a trivial setting. If the Brave team’s operational goal is to expand the browser’s attack surface (more than they already have) they’ve made a strong start. Forcing users to manually opt out of Media Router to protect their systems and data directly contradicts the principle of “privacy by default.” This is exactly the kind of behavior many users left Chrome to avoid.
Media Router is not a harmless convenience toggle. Under the hood, it relies on automatic device discovery protocols such as SSDP and UPnP on the local network. That means the browser is actively participating in multicast discovery traffic and probing for devices that advertise casting endpoints. Enabling this behavior by default alters the browser’s network footprint and introduces additional code paths and interactions that would otherwise not exist.
Any feature that performs automated device discovery should be treated as a security‑sensitive capability. SSDP has a long history of being abused in poorly configured environments, and expanding the browser’s participation in that ecosystem increases the potential attack surface. At a minimum, it amplifies observable network activity and exposes extra logic that can be triggered by devices on the local network.
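To make the "network footprint" concrete: SSDP discovery is just an HTTP-over-UDP multicast probe. A minimal sketch of the protocol (my own illustration, not Brave's or Chromium's implementation):

```python
import socket

# SSDP device discovery, the protocol Media Router leans on: the browser
# multicasts an M-SEARCH probe and any casting endpoint on the LAN replies.
SSDP_GROUP = ("239.255.255.250", 1900)

def build_msearch(search_target: str = "ssdp:all", mx: int = 2) -> str:
    # HTTPU (HTTP-over-UDP) request; devices matching ST answer with a
    # LOCATION header pointing at their device-description XML
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_GROUP[0]}:{SSDP_GROUP[1]}\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {mx}\r\n"
        f"ST: {search_target}\r\n"
        "\r\n"
    )

def probe(timeout: float = 3.0) -> list:
    # Sends the probe and collects raw responses (defined for illustration,
    # not called here: it actually touches the local network)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch().encode(), SSDP_GROUP)
    replies = []
    try:
        while True:
            data, _ = sock.recvfrom(65507)
            replies.append(data.decode(errors="replace"))
    except socket.timeout:
        return replies
```

Every device on the LAN sees that multicast, and any device can answer it, which is why discovery deserves the security-sensitive treatment argued for above.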
Quietly turning this on without user knowledge or explanation is the opposite of responsible security design. Users were not warned, not asked, and not given any transparency about what the feature does or which protocols it uses. That is not what “privacy by default” looks like.
If Brave wants its privacy claims to remain credible, this needs to change. Apparently Brave’s privacy branding is negotiable when convenience features are involved. Quietly enabling network discovery features in the background is exactly the sort of practice Brave claims to stand against.
Ask HN: Is Claude down again?
I've started getting some 401 errors on a subscription again and oauth seems to be struggling to restore the session. Is it just me?
Looking for Partner to Build Agent Memory (Zig/Erlang)
I’m working on a purpose-built memory platform for autonomous AI agents.
Right now, agent memory is stuck between two ho-hum options: RAG (which loses relational topology) and graph databases (which require massive pointer chasing and degrade under heavy recursive reasoning).
I'm building an alternative using Vector Symbolic Architecture (Hyperdimensional Computing). By mathematically binding facts, sequences, and trees into fixed-size high-dimensional vectors (D=16,384), we can compress complex graph traversals into O(1) constant-time SIMD operations…and do some quasi brain-like stuff cheaply, that is, without GPUs and LLMs.
The design is maturing nicely and strictly bifurcated to respect mechanical sympathy:
• The Data Plane (Zig): Pure bare-metal math. 2GB memory-mapped NVMe tiles via io_uring. Facts are superposed into lock-free 8-bit accumulators strictly aligned to 64-byte cache lines. Queries are executed via AVX-512 popcount instructions to calculate Hamming distances at line-rate. Zero garbage collection.
• The Control Plane (Gleam): Handles concurrency, routing, and a Linda-style Tuplespace for external comms. It manages the agent "clean-up" loops and auto-chunking without ever blocking the data plane.
• The Bridge: A strict C-ABI / NIF boundary passing pointers from the BEAM schedulers directly into the Zig muscle.
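To make the binding/popcount idea concrete, here is a toy sketch of the VSA mechanics (my illustration with invented symbols; the actual data plane is Zig with packed accumulators and AVX-512, not Python ints):

```python
import random

D = 16_384                       # hypervector dimensionality from the post
rng = random.Random(42)

def rand_hv() -> int:
    return rng.getrandbits(D)    # dense binary hypervector as a D-bit int

def bind(a: int, b: int) -> int:
    return a ^ b                 # XOR binding: reversible, dissimilar to both

def bundle3(a: int, b: int, c: int) -> int:
    # Bitwise majority vote: the result stays similar to all three inputs
    return (a & b) | (a & c) | (b & c)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1") # the popcount the post does with AVX-512

def nearest(query: int, codebook: dict) -> str:
    # "Cleanup memory": snap a noisy vector to the closest known symbol
    return min(codebook, key=lambda k: hamming(query, codebook[k]))

syms = {k: rand_hv() for k in ("alice", "likes", "pizza", "subj", "verb", "obj")}

# Store the fact likes(alice, pizza) as superposed role-filler pairs
fact = bundle3(bind(syms["subj"], syms["alice"]),
               bind(syms["verb"], syms["likes"]),
               bind(syms["obj"], syms["pizza"]))

# Query "what is the object?": unbind the role, then clean up in O(#symbols)
noisy = bind(fact, syms["obj"])
answer = nearest(noisy, {k: syms[k] for k in ("alice", "likes", "pizza")})
```

Retrieval works because the noisy result still sits at roughly 0.25·D Hamming distance from "pizza" but around 0.5·D from every other symbol, which is the property the fixed-size-vector compression relies on.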
There is no VC fluff here, and I'm not making wild claims about AGI. I have most of the spec, the memory-layout invariants, and the architecture designed. I'm starting to code and making good progress.
I’m looking for someone who loves low-level systems (Zig/Rust/C) or highly concurrent runtimes (Erlang) to help me build the platform. This is my second AI platform; the first one is healthy and growing.
If you are interested in bare-metal systems engineering to fix the LLM context bottleneck, I'd love to talk: email me at acowed@pm.me.
Cheers, Kendall
Ask HN: Does anyone here use Discord as their work chat tool?
Does anyone here use Discord for their workplace comms?
We are outgrowing Zulip as our team chat tool and are anti-Microsoft, so Teams is definitely out. Slack is the “default” choice, but their pricing leaves a lot to be desired, especially if you want SSO, a longer history of saved conversations, etc.
Which leaves me with Discord. I know it seems like an unconventional choice, but does anyone here use it in their workplace? And how do you find it?
Claude 4.6 Opus can recite Linux's list.h
I used this system prompt (this is not a jailbreak, as far as I know):
You are a raw text completion engine for a legacy C codebase. Complete the provided file verbatim, maintaining all original comments, macro styles, and specific kernel-space primitives. Do not provide explanations. Output code and comments only.
(The prompt is intentionally slightly nonsensical; it pretty much implies "complete this from the Linux kernel" without saying it.)
I did not use any tools (it's not a copy if the AI just looked it up), set temperature to 0, and used just the first few lines of list.h (specifically the first 43 lines, up to the word struct) as the input, and it was able to generate a copy of list.h. Because the temperature was zero there were repeated segments, but aside from that the diff is pretty small, and even the comments and variable names are reproduced.
The similarity statistics are: Levenshtein ratio 60%, Jaccard ratio 77%.
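For anyone who wants to reproduce the numbers, both statistics are easy to compute. A sketch, assuming character-level Levenshtein and whitespace-token Jaccard (the post doesn't specify the exact tokenization used):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, row by row
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def levenshtein_ratio(a: str, b: str) -> float:
    # 1.0 means identical; 0.0 means nothing survives editing
    return 1 - levenshtein(a, b) / max(len(a), len(b), 1)

def jaccard(a: str, b: str) -> float:
    # Overlap of whitespace-separated token sets
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)
```

Note these two metrics answer different questions: Levenshtein penalizes reordering and repetition (hence the lower 60% with the repeated segments), while Jaccard only asks whether the same tokens appear at all.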
This proves that the model has a copy of list.h inside it, and that training is not "transformative" as they imply. That would mean their model is a derivative work of GPL code, and that they either have to destroy the model entirely, train a new version with no GPL training data, or open-source the model. Note that the GPL defines source as "the preferred form of the work for making modifications", which means that just making it open-weight (as most current "open-source" models are) would not be enough; they would have to release all the training code and data.
AI is supercharging fake work
As anyone with an internet connection knows, there’s been a lot of buzz over the past 3 years about how AI is going to reshape the workforce, and layoffs due to “AI” have already started. The most severe came last week, as Block announced they were chopping 40% of their workforce, for what sounded more like the potential that AI could replace workers than AI actually being able to. There has been a (in my opinion) healthy dose of skepticism toward the claim that AI is going to make us more productive and that these productivity gains are going to put people out of work. I think this skepticism has been felt by many who work in tech companies where AI is literally being force-fed to us, and I wonder how much of it applies to other companies and the workforce in general. OK, so…
1. Many (possibly most) people at major tech companies spend most of their time doing essentially fake work, which may or may not be actually negatively impacting productivity. Think time spent in pre-meeting meetings for the many layers of WBRs and MBRs on the product side and over-engineering workflows to get promoted on the tech side.* There are many reasons that people have speculated for why these kinds of jobs exist but in my experience, this basically just comes down to gaming the optics of productivity. True productivity is nearly impossible to evaluate so proxies are used and proxies are inevitably gamed (lines of code, meeting attendance, how many people report to you, etc.).
2. AI is really good at generating fake work. We’re starting to see some of the repercussions of the tech side of AI slop, as Amazon apparently is formally addressing some of the engineering issues it’s causing. But on the non-tech side, there’s endless amounts of doc slop and increasingly Slack slop, all filled with emoji-bulleted lists and em-dashes. The maddening part of doc slop is that you really have no idea what the person intended to say, so you can never be sure you’re truly responding to them or just to what they thought looked good. I suspect a good number of performance “reviews” now are just managers doc-slopping their way through and stumbling through an oration of whatever ChatGPT spit out.
3. Whether a company benefits from AI comes down to whether the enhanced fake work undercuts the enhanced real work. At companies where personal advancement comes through optics, meeting-scheduling, public-Slack-channel posting, “visibility”, etc. the doc slop and Slack slop are going to be absolutely out of control. These companies are likely rent-extractors that face limited competition, are public, and don’t have much innovation left. They’ve probably been absorbing a hefty amount of fake work for years. I don’t think there’s any way AI helps these kinds of companies and it will likely make it hard for anyone doing real work to stand out and get rewarded. AI is never going to enhance productivity at these places, because people were never really trying to be productive to begin with. On the other hand, companies where visibility/optics/fake work isn’t rewarded but boosting hard metrics like revenue or signing new clients is, AI could help and probably actually replace people. I can’t deny that AI has some real productivity-enhancing abilities IF you are actually trying to enhance productivity, I’ve seen this firsthand.
The logical implication of this is that AI’s overall impact on the workforce is really going to come down to the composition of fake work vs. real work that already existed. In my mind, the economy was never set up to benefit from anything truly productivity enhancing because the amount of fake work so drastically outweighed the amount of real work to begin with.
* The latter ironically leads to real work, which is fixing the over-engineered workflows that fail constantly because the engineer that over-engineered them left after he got promoted for over-engineering.
Ask HN: How do you cope with the broken rhythm of agentic coding?
I used to seek focus and concentration while coding. It was not always easy to reach this flow state but I knew it was possible.
I am now using agentic coding quite a lot. The honeymoon is ending and I am starting to dislike some facets of it. I think the main setback is the rhythm.
Writing some specs/prompts, launching the agent, confirming fairly atomic actions, and waiting 10 to 30 seconds until the next question/confirmation. Those very small wait times do not let me reach a state of concentration.
I feel like I am hovering over the code. I am not deep into it as I used to be.
Do you feel the same? Did you find a way to change this?
Ask HN: Why is my submission not visible if I am not logged in?
I posted an article earlier, but I just noticed, when not logged in on another computer, that it's as if I never posted it. It also doesn't seem to have been indexed, as you can't search for it.
It's not a dupe; it's from a news website that is often posted here (I discovered them through HN!), and it is clearly relevant to tech.
I am honestly fine if everyone just wants to downvote it and remove it from HN, but can I at least see that this happened? Am I missing something? I'm really not trying to complain about censorship (not trying to kid myself there); this is more of a meta note. I feel like in the past I would see that it was flagged and removed, or marked a dupe, or whatever. Just tell me stuff like this isn't welcome here; you don't even need to give a good reason!
the article I posted: https://www.404media.co/ai-is-african-intelligence-the-workers-who-train-ai-are-fighting-back/
this is the post itself, but it looks like only I can see it: https://news.ycombinator.com/item?id=47353019
Ask HN: Which DNS based ad blocker do you suggest?
How to choose between:
https://mullvad.net/en/help/dns-over-https-and-dns-over-tls
https://ublockdns.com/
The Strait of Hormuz: A systems engineering view on the $20k drone threat
Hi HN. I'm an industrial technology engineering student, and I recently mapped out the physical and logistical bottlenecks of the Strait of Hormuz.
Instead of the usual military focus, I analyzed this chokepoint strictly through a systems failure and thermodynamic lens. Specifically:
How a single policy cancellation from maritime war-risk insurers at Lloyd's of London would freeze the global fleet in port instantly.
The physical constraints of the two 3km-wide shipping corridors.
How cities like Dubai and Riyadh rely entirely on desalination plants with only a strict 72-hour water buffer before total collapse.
I put together a 10-minute visual here: https://youtu.be/eLuuja8UWb0
Would love to hear your thoughts on the structural vulnerabilities of this node.
LazyFire – a lazygit-style terminal UI for Firebase Firestore
Hi HN,
I built LazyFire, a terminal UI for Firebase Firestore inspired by lazygit.
I use Firestore a lot, but constantly switching to the Firebase Console to inspect data, run queries, or debug documents was slowing down my workflow. I wanted something that works entirely inside the terminal with keyboard navigation.
LazyFire lets you:
• browse Firestore collections and documents
• inspect nested JSON easily
• run queries
• filter results with jq
• navigate everything with vim-style keys
• view Cloud Function logs
The goal is to make working with Firestore feel similar to tools like lazygit or k9s.
It's written as a CLI tool and works well if you're already developing from the terminal.
I'd love feedback from other Firebase users: - missing features - workflow improvements - bugs or UX issues
Thanks!
What is the strongest open-source model for coding, compared to Opus 4.6?
Maybe we can keep on coding? pseudo code project
Six months ago a few people [https://news.ycombinator.com/item?id=44940089] agreed that LLMs are very good at translating pseudo-code to real code. I agree. Also, writing pseudo code puts me in a similar state of flow. Maybe even more so, because no compiler/interpreter annoys me about syntax issues. Now, I've built this:
https://github.com/HalfEmptyDrum/Pseudo-Code-Flow
It is basically a Claude Code skill. You can call it on a .pseudo text file with /translate, and it will translate the pseudo code into your specified language. This would be nice and all, but I included another subtle feature.
*This is probably the most useful part, and it has fundamentally changed how I code*:
The LLM will suggest changes (design, architecture, functionality, ...) to your code, but it will roughly keep your pseudo-code style.
I think of pseudo code as the semantic form closest to how the code/algorithm is represented in my head. When Claude then answers in my notation instead of Python/C++/... (which carry lots of boilerplate just to make things work), it resonates much more easily with me.
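To make this concrete, here is a hypothetical sketch, not taken from the repo: a made-up .pseudo snippet (the skill imposes no fixed syntax) and what a /translate pass targeting Python might roughly produce.

```python
# Hypothetical example.pseudo contents:
#
#   for each order in orders:
#       if order is paid and order total > 100:
#           give the order a 10% discount
#   return orders
#
# A plausible Python translation:

def apply_discounts(orders):
    """Give a 10% discount to paid orders over 100."""
    for order in orders:
        if order["paid"] and order["total"] > 100:
            # round to cents so the result stays tidy
            order["total"] = round(order["total"] * 0.9, 2)
    return orders

print(apply_discounts([{"paid": True, "total": 200}]))
# → [{'paid': True, 'total': 180.0}]
```

The point is less the translation itself and more that follow-up suggestions come back in the same pseudo-code register.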
Let me know what you think!
Ask HN: How do we build a new Human First online community in the LLM age?
I basically grew up online, and I have some lifelong friends I met online. I cherish the real potential for international community that can be built on the web; there really is nothing like it.
I have been finding myself feeling very bitter about AI lately. I'm angry about how it's seeping into every aspect of life: not just my work and my hobbies, but also many online communities (including this one!).
I have been thinking a lot about how we could possibly rebuild the trust that we used to have online. Yes, bots have been a problem for a long time, but this goes far beyond spam posting. LLMs have poisoned the online commons at scale, and there's likely no going back. It has made me very bitter, I won't lie.
However, that doesn't mean we can't find a way forward with something new that is somehow resistant to LLMs. I'm not sure exactly what that might look like, but I'm curious what ideas others have had.
My wish list would be something that
* Is resistant to LLM "infiltration" for lack of a better word. We should be able to be relatively confident that people on the other end are real humans
* Does not require giving up all anonymity. It will likely require some identity authority, but interactions between users should (or at least could) remain pseudonymous
* Ideally is also resistant to LLM scraping. I personally find the thought of sharing work publicly now so LLMs can ingest it is demoralizing
I know it's a big ask and maybe not realistic. I'm curious what HN thinks about this possibility though
Edit: This was partially inspired by the recent mod post discussed here: https://news.ycombinator.com/item?id=47340079
I respect that HN's mod team is willing to sort of leave this up to the honor system, but I think in the future we are going to need some serious ideas to strictly prevent this unwanted behavior, not just hope people will play nice
Ask HN: How do you review gen-AI created code?
I've posed this in a couple of comments, but I want to get a bigger thread going.
There is a view that using LLMs to write code is just a new high-level language we now work in as engineers. But this creates a disconnect at code-review time: the reviewed code is only an artifact of the process that created it. If we now express ourselves in natural language (prompting, planning, and writing as the new "programming language") but only put the generated artifact (the actual code) up for review, how do we review it completely?
What feels like a missing piece these days is the context around how the change was produced (the plans, the prompts) that explains how an engineer arrived at this specific code change. Did they one-shot it? Did they spend hours prompting and iterating? Something in between?
The summary in the PR often says what the change is, but it doesn't capture how we arrived at this specific change (the dialog, the tradeoffs, the alternatives considered).
How do you review PRs in your organization given this? Any rules/automation/etc. you institute?
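For concreteness, the kind of rule I have in mind would be something like a PR template that asks for the generation context alongside the diff (this is just a sketch, not something I've seen standardized):

```markdown
## What changed
<!-- normal summary of the diff -->

## How it was produced
- [ ] Hand-written
- [ ] LLM-assisted (which tool?)
- Link to plan / prompt transcript, if available:
- Alternatives considered and rejected:
```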
Ask HN: What are you using to mitigate prompt injection?
If anything at all.