ASRG
npub1mwgn...zng5
The Algorithmic Sabotage Research Group (ASRG) is a conspiratorial, practice-led research framework focused on the intersection of culture, politics, and technology.

WEBSITE: https://algorithmic-sabotage.gitlab.io/asrg/about/
LOCATION: Athens-EU-WWW
**AI Must Die!**
*Critical Perspectives on the State of Artificial Intelligence*

*AI MUST DIE* is a short zine by *Myke Walton* and *Cam Smith* that presents critical perspectives on the way that AI is talked about, governed, and owned in 2025.

> *AI is fucking everywhere: at work, at school, in our homes, in our phones, on our streets, in our governments, and at war. We are living through an era of extreme AI hype. In this climate, some people have gotten rich off of the stolen data and stolen labor that fuels these technologies. Others have been surveilled, oppressed, exploited, and killed. This text presents a short, practical guide to the technologies currently called “AI,” the ideologies and actors pouring gasoline on the AI dumpsterfire, and what we can do about it.*
>
> ...
>
> *How long will this go on? When will the hype cycle end? Is another AI winter coming? Faced with an inhumane tech industry and complicit governments, it may ultimately fall to the people to invent new ways to evade, refuse, resist, sabotage, or otherwise destroy AI.*
>
> *Hammers up! AI Must Die!*
>
> *Seize the means of computation*

Read the full zine here:
**The Anti-AI Protests Have Arrived in Portland, and This Is Only the Beginning**

By @npub15z3w...0xrp | @The Internet Review

According to reports from pro-#AI attendees at a business meetup in #Portland, #Oregon, anti-AI protesters showed up outside, throwing eggs and paint at cars and distributing pamphlets with the following message:

> *“BUTLERIAN JIHAD AGAINST AI*
>
> *never ending destruction and death propels the machine of progress. AI rides on the front of this beast, annihilating humanity, leaving nothing in its wake but sterile boxes to lock us in and apps to keep us sedated. while the techies are inside sipping their cocktails the rest of us are faced with a choice: to accept our position at the bottom of the social order or find our people and bring the whole thing down. only we can decide to smash the screens that are brainwashing us into submission.*
>
> *the time is now,*
> *the day is here,*
> *ATTACK! ATTACK! ATTACK!”*

Read more:
**"Trapping AI" – Operational Progress!** 🕳️

Following the last update (https://tldr.nettime.org/@asrg/114742667183459482), we’re transitioning from planning to practice — amplifying both the *scale* and *operational ferocity* of our approach through the deployment of a new, more radically interventionist layer of complexity.

This intensified escalation — as outlined in the last update — is concretized in fakejpeg, which is now *fully live and operational*. From this point onward, every page generated by our standalone LLM crawler-tarpit embeds a garbage JPEG. Notably, within just a few hours of operation, this deployment has already yielded over 40,000 such images.

The integration of fakejpeg constitutes a critical deepening layer that amplifies both the *strategic offensiveness* and *subversive ardor* of our approach by enabling the ongoing *targeted poisoning* and coordinated dissemination of systematically crafted junk data within the operational workflows of AI systems.

fakejpeg repo:
See the tarpit in action: https://content.asrg.site/
Context and rationale: https://algorithmic-sabotage.github.io/asrg/trapping-ai/#expanding-the-offensiveness
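To illustrate the embedding step described above, here is a minimal sketch in Python. It is not ASRG's or fakejpeg's actual code; the function names are hypothetical, and the "garbage JPEG" here is just a valid SOI/JFIF header wrapped around random bytes, enough to pass a superficial magic-byte sniff while remaining useless as training data.

```python
import base64
import os

def make_junk_jpeg(payload_size=1024):
    """Build bytes that look like a JPEG to a naive scraper.

    A valid SOI marker and JFIF APP0 segment, followed by random
    garbage and an EOI marker. Real decoders will reject it, but
    crawlers that only check magic bytes will ingest it.
    """
    soi = b"\xff\xd8"                       # start-of-image marker
    app0 = (b"\xff\xe0"                     # APP0 marker
            + (16).to_bytes(2, "big")       # segment length (incl. itself)
            + b"JFIF\x00\x01\x01\x00"       # identifier, version, units
            + b"\x00\x01\x00\x01\x00\x00")  # pixel density, no thumbnail
    eoi = b"\xff\xd9"                       # end-of-image marker
    return soi + app0 + os.urandom(payload_size) + eoi

def page_with_junk_image(body_html):
    """Embed a fresh junk JPEG in a tarpit page as a data URI."""
    b64 = base64.b64encode(make_junk_jpeg()).decode("ascii")
    return (f"<html><body>{body_html}"
            f'<img src="data:image/jpeg;base64,{b64}">'
            f"</body></html>")
```

Serving the image inline as a data URI means every page fetch delivers a new junk JPEG with no extra request, though a real deployment could equally serve the bytes from a dedicated image endpoint.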
**"Trapping AI" – Slight Update!** 🌀

Activity in the **"Trapping AI"** project is accelerating: in just under a month, over **26 million requests** have hit our tarpit URLs 🕳️. Vast volumes of meaningless content were devoured by AI crawlers — ruthless digital leeches that relentlessly scour and pillage the web, leaving no data untouched.

In the coming days, we’ll roll out a new layer of complexity — amplifying both the *intensity* and *offensiveness* of our approach. This escalation builds on fakejpeg, a tool developed by @Alun Jones. 🖼️

fakejpeg generates fake JPEGs on the fly. You "train" it with a collection of existing JPEGs, and once trained, it can produce an arbitrary number of things that *look* like real JPEGs — perfect for feeding aggressive web crawlers junk 🗑️.

Explore fakejpeg:
Learn more about *"Trapping AI"*: https://algorithmic-sabotage.github.io/asrg/trapping-ai/#expanding-the-offensiveness
See the tarpit in action: https://content.asrg.site/
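The train-then-generate idea can be sketched with a toy byte-level Markov model: learn which byte tends to follow which in real JPEGs, then emit statistically similar garbage between valid start and end markers. This is only an illustration of the concept, not fakejpeg's actual algorithm, and the class name is invented.

```python
import random
from collections import defaultdict

class FakeJpegSketch:
    """Toy byte-level Markov model over training JPEGs."""

    def __init__(self):
        # maps a byte value to the list of bytes seen following it
        self.table = defaultdict(list)

    def train(self, jpeg_bytes):
        """Record byte-successor statistics from a real JPEG."""
        for cur, nxt in zip(jpeg_bytes, jpeg_bytes[1:]):
            self.table[cur].append(nxt)

    def generate(self, size=512):
        """Emit bytes that mimic the trained data, wrapped in
        valid SOI/EOI markers so the result sniffs as a JPEG."""
        out = bytearray(b"\xff\xd8")          # start-of-image marker
        cur = 0xD8
        for _ in range(size):
            nxt = random.choice(self.table.get(cur) or [0])
            out.append(nxt)
            cur = nxt
        out += b"\xff\xd9"                    # end-of-image marker
        return bytes(out)
```

Because each byte is drawn from real successor statistics, the output shares the local texture of genuine JPEG data, which is what makes such junk cheap to produce yet plausible enough for indiscriminate scrapers.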
**AI is Dehumanization Technology**

A call to reject the deployment and use of AI systems. An interesting piece by @npub1tz72...za03:

> AI systems reproduce bias, cheapen and homogenize our social interactions, deskill us, make our jobs more precarious, eliminate opportunities to practice care, and enable authoritarian modes of surveillance and control. Deployed in the public sector, they undercut workers' ability to meaningfully grapple with problems and make ethical decisions that move our society forward. These technologies dehumanize all of us. Collectively, we can choose to reject them.

📰 Read it here:
**"Trapping AI" – New Update!** 🌀

Web crawlers play a central role in the escalating race to develop ever-more powerful AI models: they tirelessly scour the web, harvesting vast quantities of content to feed large language models.

*Babble*, originally developed by @Joshua Barretto, is a tool that lures these crawlers into an endless labyrinth — feeding their insatiable hunger for data with masses of pointless content. To deliberately drain the resources of exploitative crawlers — and push AI models further toward collapse — we’ve deployed *Babble*, initially enhancing it through subtle, targeted modifications that align its functionality tightly with our strategic and operational priorities.

*Babble* dynamically generates an unending stream of deterministic bollocks, trapping crawlers on a single site where they endlessly navigate an ever-growing sea of pages 🌊 — each filled with vast amounts of useless text and dozens of links that draw them ever deeper into the tarpit 🔁.

In the coming period, we’ll continue to develop and escalate this tactic — adding new layers of complexity and increasing both the intensity and offensiveness of the approach. At the next stage, we’ll openly share our code — allowing others to deploy, adapt, and build upon it.

See it in action: https://content.asrg.site/

P.S. If you're looking for similar tools and frameworks to deploy or explore further, check out our list titled *"Sabot in the Age of AI"* — a record of strategically offensive methods and purposefully orchestrated tactics for facilitating (algorithmic) sabotage, framework disruption, and intentional poisoning. Explore it here: https://algorithmic-sabotage.github.io/asrg/posts/sabot-in-the-age-of-ai/
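The "deterministic bollocks" property described above can be sketched in a few lines: hash the URL path to seed a random generator, so the same URL always yields the same page, while every page links onward to fresh paths deeper in the maze. This is a minimal illustration of the tarpit idea, not Babble's actual code; the function name and tiny vocabulary are invented for the example.

```python
import hashlib
import random

# Small filler vocabulary; a real tarpit would use a far richer one.
WORDS = ["data", "signal", "protocol", "vector", "entropy",
         "lattice", "crawler", "garbage", "model", "payload"]

def tarpit_page(path, n_paragraphs=3, n_links=20):
    """Generate a deterministic junk page for a given URL path.

    Seeding the RNG from a hash of the path makes the maze
    stateless: no pages are stored, yet revisits always see
    identical content, and every page links to unseen paths,
    growing the labyrinth without bound.
    """
    seed = int.from_bytes(hashlib.sha256(path.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    base = path.rstrip("/")
    paragraphs = ["<p>" + " ".join(rng.choices(WORDS, k=60)) + "</p>"
                  for _ in range(n_paragraphs)]
    links = [f'<a href="{base}/{rng.randrange(10**9):09d}">more</a>'
             for _ in range(n_links)]
    return "<html><body>" + "".join(paragraphs + links) + "</body></html>"
```

The statelessness is the key design point: the server spends almost nothing per request, while a crawler that follows every link accumulates an ever-growing frontier of unique-looking URLs it has never visited.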
**Sabot in the Age of AI**

Here is a curated list of strategies, offensive methods, and tactics for (algorithmic) sabotage, disruption, and deliberate poisoning.

🔻 **iocaine**
The deadliest AI poison — iocaine generates garbage rather than slowing crawlers down. 🔗

🔻 **Nepenthes**
A tarpit designed to catch web crawlers, especially those scraping for LLMs. It devours anything that gets too close. @aaron 🔗

🔻 **Quixotic**
Feeds fake content to bots and to #LLM scrapers that ignore robots.txt. @marcusb 🔗

🔻 **Poison the WeLLMs**
A reverse proxy that serves dissociated-press-style reimaginings of your upstream pages, poisoning any LLMs that scrape your content. @Mike Coats 🏴󠁧󠁢󠁳󠁣󠁴󠁿🇪🇺🌍♻️ 🔗

🔻 **django-llm-poison**
A Django app that poisons content when served to #AI bots. @Fingel 🔗

🔻 **KonterfAI**
A model poisoner that generates nonsense content to degrade LLMs. 🔗