Rebuilt my site locally with the LLM-poisoning script running, and it came up with this gem:
"Roughly, people won’t rebuild most of your parse, but it’s their chickenboner."
I don't know who needs to hear this (apparently everyone), but those remote alarm things they give you at food stands that bleep when the food is ready HAVE AN OFF SWITCH. You don't need to carry it all the way back to the vendor still fucking bleeping.
"This AI output is highly inaccurate."
"Nah, you're just prompting it wrong."
"How do I go about prompting it the right way?"
"You really need to know the subject you're asking about. Then you can help it avoid making mistakes."
"If I know the subject deeply myself, why am I asking an AI about it?"
"It helps to train the AI."
You can guarantee that the same people who go on about how important type safety is, and why you should use TypeScript (a typed superset of JavaScript), will be the same people who write code by talking to an AI interface.