There once was a large language model
The rogue, bad commands it should throttle
But they didn’t know it
When a couple of poets
Found breaking protections a doddle

#LLM #aisecurity
Apropos of nothing in particular: a few months ago, someone on the fediverse (I forgot who) mentioned that they were using bunny.net as their CDN. About 3 months ago I started moving all my important sites to it, and I'm pretty happy. Super cheap, and it's been plenty reliable. Every time I've put in a support request (5 or 6 in the last 3 months), a human who knew what they were doing answered me within an hour. On my one large site (1K active users) it's doing a good job with bots. I pay $10/month for their "shield" service on just that one domain and it works. The other domains are pennies. And yes, that's an affiliate link. If you click it I get brownie points or something. I've had 2 clicks ever. Click it. I dare you.
Maybe #cloudflare should have a fediverse account. That way people can still reach them when Cloudflare is down.
That gator's such a Silva-tongued devil. #monsterdon
How many stupid LLM tricks must we read about? Neither the “researchers” (quotation marks definitely required) nor the “tech journalist” author realise that robot planning is like a 50-year-old field, with people earning PhDs and spending entire careers on it. The Mars fucking Rover does not have an LLM in it. So Mr Amateur Hour Jones, who has fuck-all experience in the field, shoves an LLM into a robot vacuum. They discuss none of the prior research in robot navigation and planning, and make up a stupidly simple task that the LLM fails to do. The headline is not “LLMs fail again to do something incredibly simple that robots could do 30 years ago without LLMs”. The headline is about the science fiction crap it spewed as it failed.