“If you think #technology will solve your problems, you don’t understand technology — and you don’t understand your problems.”[1]
— Laurie Anderson, 2020

“To compare human and machine intelligence implies also a judgement about which human behaviour or social group is more intelligent than another, which workers can be replaced and which cannot. Ultimately, #AI is not only a tool for automating labour but also for imposing standards of mechanical intelligence that propagate, more or less invisibly, social hierarchies of knowledge and skill. As with any previous form of automation, AI does not simply replace workers but displaces and restructures them into a new social order.”[2]
— Matteo Pasquinelli, 2023

Through horizontal forms of intervention that prioritise marginalised perspectives, “Algorithmic Sabotage” articulates a collective approach to challenging the ideology of “algorithms everywhere”. It shifts the focus from statistical inference to mutually constituting solidarity, and it undertakes the work necessary to generate prefigurative practices of resistance, agency and refusal: practices that disrupt the algorithmic enclosure and overturn the imposition of continuous states of exception. In doing so, it highlights the entanglement of the algorithmic harmfulness of “AI” with ongoing forms of societal disintegration, from austerity to far-right politics, and from racialised algorithmic violence to the propagation of patterns of segregation and exclusion.

---

[1] Sterling, B. (2020) Laurie Anderson, machine learning artist-in-residence, Wired.

[2] Pasquinelli, M. (2023) The eye of the master: A social history of artificial intelligence. London, UK: Verso.
The Police Officers Faces (POF) dataset is an investigative counter-surveillance artistic project that focuses on the use of facial recognition technology, with particular emphasis on its deployment by law enforcement agencies. The dataset comprises 88,783 facial images of thousands of police officers. The images have been sourced from the #internet and are used for facial recognition. The project reverses the conventional paradigm in which the powerful observe and the powerless are observed.

To address the dehumanising effects of expanding automated discrimination and segregation, the exacerbation of harm and all-encompassing correlation, the project advocates a distinct technical mentality, a collective “counter-intelligence”, one that emphasises the influence of citizens and grassroots realities in uncovering facts, the impact of collective action in revealing wrongdoing, and the collaborative production of social justice.

By employing a multifaceted approach encompassing targeted #data collection and compilation, social engineering, and rigorous analysis, the #Police Officers Faces (POF) dataset has established a comprehensive, structural alternative framework for the emancipatory empowerment of community members engaged in #copwatch and other counter-#surveillance practices.

* Please be advised that the URL below provides an initial overview only. As further information becomes available, the content will be updated accordingly.

⟶ https://algorithmic-sabotage.github.io/asrg/police_officers-faces-pof/
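The project page does not document the matching pipeline itself. As a minimal sketch of the general one-to-many face-matching technique such a dataset enables, and assuming the open-source `face_recognition` library and a hypothetical local directory `./pof_dataset/` (both assumptions, not details from the project), the workflow typically looks like this:

```python
# Illustrative sketch only: the actual POF tooling is not public.
# Assumes the open-source `face_recognition` library (dlib-based) and
# a hypothetical local copy of the dataset under ./pof_dataset/.
import os
import face_recognition

def index_dataset(root="./pof_dataset"):
    """Compute one 128-d face embedding per image; the file name serves as the label."""
    labels, encodings = [], []
    for name in os.listdir(root):
        image = face_recognition.load_image_file(os.path.join(root, name))
        found = face_recognition.face_encodings(image)
        if found:  # skip images in which no face is detected
            labels.append(name)
            encodings.append(found[0])
    return labels, encodings

def match(query_path, labels, encodings, tolerance=0.6):
    """Return dataset entries whose embedding lies within `tolerance` of the query face."""
    query = face_recognition.face_encodings(
        face_recognition.load_image_file(query_path))
    if not query:
        return []
    hits = face_recognition.compare_faces(encodings, query[0], tolerance=tolerance)
    return [label for label, hit in zip(labels, hits) if hit]
```

The `tolerance` parameter trades false positives against false negatives (the library’s default is 0.6; lower is stricter); any real deployment at the scale of 88,783 images would also replace the linear scan with an approximate nearest-neighbour index.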
“The lesson of the current wave of ‘artificial’ ‘intelligence’, I feel, is that intelligence is a poor thing when it is imagined by corporations. If your view of the world is one in which profit maximisation is the king of virtues, and all things shall be held to the standard of shareholder value, then of course your artistic, imaginative, aesthetic and emotional expressions will be woefully impoverished. We deserve better from the tools we use, the media we consume and the communities we live within, and we will only get what we deserve when we are capable of participating in them fully. And don’t be intimidated by them either – they’re really not that complicated. As the science-fiction legend Ursula K. Le Guin wrote: ‘Technology is what we can learn to do.’”[2]
— @npub1lazu...hewg

Rather than engaging in brute-force calculation, or accepting algorithms as agents of disempowerment or as recapitulations of older colonial technologies, “Algorithmic Sabotage” starts from the twin pillars of feminist and decolonial standpoints. Functioning as an ethical add-on, it mobilises its capacity to act as a counter-power, contributing fully to the techno-political processes of radicalisation: the development of strategies, aesthetics and prefigurative practices of resistance, agency and refusal as a corrective to the aggressive abstraction of #AI, whose opacity and indifference to causality reinforce social inequality and perpetuate prejudice and unjust discrimination, to the point of enabling algorithmic apartheid.

---

[1] Jahić, S. (2023) No to AI, yes to a non-fascist apparatus.

[2] Bridle, J. (2023) The stupidity of AI, The Guardian.
“If a machine is expected to be infallible, it cannot also be intelligent.”[1]
— Alan Turing

“When you’re fundraising, it’s Artificial Intelligence. When you’re hiring, it’s Machine Learning. When you’re implementing, it’s logistic regression.”[2]
— Joe Davison

#AI produces thoughtlessness in the sense that the political philosopher Hannah Arendt meant when interpreting the actions of the Nazi war criminal Adolf Eichmann[3]: the inability to critique instructions, the lack of reflection on consequences, and a commitment to the belief that a correct ordering is being carried out. Confronted with these new forms of machinic knowing and the nascent becoming of an anti-worker, anti-community computational complex, “Algorithmic Sabotage” intensifies. It makes necessary a restructuring that reorients attention away from the miasma of AI and its concomitant toxic algorithmic operations of optimisation and towards techniques for redistributing social power: starting from a feminist standpoint and progressing towards the implementation of prefigurative strategies of resistance, agency and refusal that inhibit, slow down or reverse the emergence of harmful racialised practices of exteriorisation and exclusion driven by algorithms.

---

[1] Hodges, A. (2013) Alan Turing, Stanford Encyclopedia of Philosophy.

[2] Davison, J. (2018) No, machine learning is not just glorified statistics, Medium, 27 June.

[3] Arendt, H. (2006) Eichmann in Jerusalem: A report on the banality of evil. New York, NY: Penguin Classics.