I was disappointed to read Cory Doctorow's post where he got weirdly defensive about his LLM use and started arguing with an imaginary foe.
@tante has a very thoughtful reply here:
tante.cc/2026/02/20/acting-eth…
A few further comments, 🧵>>
Acting ethically in an imperfect world
Life is complicated. Regardless of what your beliefs or politics or ethics are, the way we have set up our society and economy will often force you to act against them: You might not want to fly somewhere, but your employer will not accept another mode of transportation; you want to eat vegan, but at some point you find yourself in a situation where the best you can do is a vegetarian option.

Sometimes it’s not even our hand being forced but us not having the mental strength or priorities to do something: I could just not use WhatsApp because it is owned by Meta, but my son’s daycare organizes everything through WhatsApp. Do I really want to force my belief on all those very busy parents and caretakers, or should I just bite the bullet and use the tool that seems to work for everyone – even though it’s not perfect?
There are no one-size-fits-all solutions for this: Sometimes a belief or ethic you hold is so integral to you that you will not move. Sometimes they are held loosely enough to let go under certain conditions. There’s a multitude of factors and thoughts that go into those kinds of decisions and at some point you just gotta make a call based on what’s in front of you and your priorities.
What I am saying is: We are all doing things we know are not ideal, that are either morally questionable or do not align with our values. That’s life. I, for example, know that consuming meat is problematic for ethical and ecological reasons; I still do it sometimes. I reduce as much as I can, but I am far from perfect. Just one of the many examples of my actions not being perfectly and purely aligned with my beliefs.
And I am 100% sure each and every reader will have similar experiences. We are imperfect and messy beings. In the end all you can do is actually try to make good decisions based on your values, try to learn from your actions and ideally do better – or at least understand the forces that got you to act against your values.
Cory Doctorow, probably one of the most influential writers about digital technology and culture, celebrated the 6th anniversary of his personal blog Pluralistic – congratulations! Cory is quite the phenomenon; I know nobody with his amount of output and his consistency of publication. It is scary just how consistently he writes and publishes while also churning out books. I do admire his work ethic tremendously.
But one thing in his celebratory post rubbed me the wrong way and I think it’s worth pointing out. Not for the one specific case but because it highlights a problematic way of thinking that I see a lot in current tech discourse that stands in the way of us actually improving the world.
So Cory outlines his process of how he publishes to his blog (and then pushes the same writing out to other places). He describes how one QA step in his process is piping his writing through an LLM (using the Ollama software, it’s unclear which open weight LLM he uses) to check for typos and small grammar mistakes. He then points out how some readers might find that problematic:
“Doubtless some of you are affronted by my modest use of an LLM. You think that LLMs are “fruits of the poisoned tree” and must be eschewed because they are saturated with the sin of their origins. I think this is a very bad take, the kind of rathole that purity culture always ends up in.”
Using LLMs isn’t always popular with the cool crowd, and Cory knows that. He wants to defend his (quite modest) use, which I understand: Nobody likes having their problematic behavior pointed out to them. But as outlined: Life’s complicated. Cory could just have said “I know there are many critiques of LLMs, but right now this is the best way for me to enable my work; I try to limit the problematic aspects by using a small open weight model and checking the results in detail” and moved on. But he needed to make a stand. And that stand led him into the problematic train of thought that I want to point out here. Because many, many people listen to him and basically take his word as gospel. So great power, great responsibility and such.

The whole argument is based on a strawman. Let’s look at Cory’s words:
“Let’s start with some context. If you don’t want to use technology that was created under immoral circumstances or that sprang from an immoral mind, then you are totally fucked. I mean, all the way down to the silicon chips in your device, which can never be fully disentangled from the odious, paranoid racist William Shockley, who won the Nobel Prize for co-inventing the silicon transistor.”
Cory is right in pointing out that almost any technology we have has been touched by problematic figures. Racists, fascists, sexists, rapists – you name it. Anything you touch will have some research or engineering or product work by a person you despise in it.

The strawman is his claim that people who criticize LLM usage do so for some form of absolutist reasons. That they have a fully binary view of the world as separated into “acceptable, pure things” and “garbage”. Which is of course false – after all, they are using a computer, using warm water that’s probably heated through the use of fossil fuels, etc.
He attacks a ridiculous made-up figure to deflect from specific criticism of LLM use (criticism that many probably wouldn’t even apply that strongly to his use case). But that’s not where criticism of LLMs comes from: It is mostly specific, focusing on the material properties of these systems, their production and their use.
Cory continues:
“Refusing to use a technology because the people who developed it were indefensible creeps is a self-owning dead-end. You know what’s better than refusing to use a technology because you hate its creators? Seizing that technology and making it your own. Don’t like the fact that a convicted monopolist has a death-grip on networking? Steal its protocol, release a free software version of it, and leave it in your dust:”
Here again Cory is misrepresenting the LLM critics’ argument: Sam Altman is a scam artist and habitual liar, but that’s not one of the first 10 to 20 reasons people criticise OpenAI’s products. Sure, basically every leading figure in the “AI” space seems to be unpleasant at best, but that’s true for most of tech TBH. People criticise LLMs for their structural properties, their material impacts, for the way they make it harder to learn and grow, for the way they make products worse while creating massive negative externalities in the form of emissions, water use and e-waste. For the way these systems can only be built by taking every piece of data – regardless of whether the authors consent or even explicitly refuse – and for how the training needs ungodly amounts of harmful, exploitative labor, done mostly by people in countries of the global majority. For how they materially harm the commons.

Even if OpenAI were run by decent, ethical, friendly, trustworthy people (which would of course make them not work on the products OpenAI has, but it’s just a thought experiment) their products would need to be criticized for what they are and what they do. It’s really not about the few dudes running the companies.
Cory misrepresents the arguments (well, basically hides them) in order not to have to face any material criticism, and turns them into “you just don’t like these people”, which frames the criticism as emotional rather than rational. As if it were about not liking a bunch of rich men.
He then goes into how the path forward is to “steal the protocol”. His following paragraph goes into detail:
“That’s how we make good tech: not by insisting that all its inputs be free from sin, but by purging that wickedness by liberating the technology from its monstrous forebears and making free and open versions of it”
And here we come to the core of the problematic argument. Because Cory implicitly argues that technology is neutral and that one can just change its meaning and effect through usage. But as Langdon Winner argues in his famous essay “Do Artifacts Have Politics?”, artifacts have built-in politics deriving from their structure. A famous example is the nuclear power plant: Due to the danger of these plants and their needs with regard to resources and security, power plants imply a certain form of political arrangement, one based on having a strong security force/army and a way to force these facilities (and the facilities to store the waste in) upon communities, potentially against their will.

Artifacts and technologies have certain logics built into their structure that require certain arrangements around them or that bring forward certain arrangements. The second aspect is often illustrated by how ships are organized: Because ships are sometimes in dangerous situations where critical decisions need to be made, the existence of ships implies a hierarchy of power relationships with a captain having the final say – because democracy would be too slow at times. These politics are built into the artifact.
Understanding this, you cannot take just any technology and “make it good”. Is a torture device “good” if the plans for how to build it are Creative Commons? Do we need to answer the existence of the digital torment nexus by building an open source torment nexus? I’d argue we need to destroy it – regardless of what license it is released under.
That does not mean that it is impossible to take certain technologies or artifacts and try to reframe them, to change their meaning. In some ways computers are one such example: They were first used by governments, banks and other corporations to reach their goals but were then taken and reframed as devices intended to support personal liberation. It’s a bit more complicated (for the why, dive into the late David Golumbia’s “The Cultural Logic of Computation”) but let’s give that one to Cory. Sure, sometimes it is possible to take something originally built for nefarious purposes and find better uses for it. But is that true for everything? Very obviously not.
Let’s just look at the embedded politics of LLMs: In order to train a capable system you need data. Lots of it. AI companies keep buying books to scan them; they download everything from every legal or illegal source, claiming “fair use” (a doctrine that only applies in the US, by the way) or that “scraping is always okay”. Capable LLMs require a logic of dominance and of disregarding the consent of the people producing the artifacts that are the raw material for the system. LLMs are based on extraction, exploitation and subjugation. Their politics is violence. How does one “liberate” that? What’s the case for open source violence?
Cory uses a so-called “open source LLM”, and that is very much how he presents his values – but open source LLMs do not really exist. You can download some weights, but you cannot understand what went into them, or really change or reproduce them. “Open source AI” is just marketing and openwashing.
Cory shows his libertarian leanings here: If everything is somehow “free and open”, then we have won. But “free and open” in this context usually means that certain privileged groups have easy access to something and are not limited in what to do with it. That’s one of the core problems with the whole “open source” movement: It reduces all struggle to whether one can get their hands on the tools and whether there are any restrictions on using them.
This also shines through in Cory arguing that we need to “liberate” technology. What a strange idea: Technology doesn’t need liberation, people do. Technologies are tools, not what we actually care about. Sure, sometimes technologies can play a role in liberating people, but just as often “freeing” a technology does quite the opposite to people: Ask the women who have massive amounts of nonconsensual sexualized images and videos created of them whether they think that the “liberation” of stochastic image generators is liberating them. Technology doesn’t need to be free. It cannot be free, because freedom as a concept applies to people.
And freedom is not the only value we care about. Making everything “free” sounds cool, but who pays for that freedom? Who pays for us having access to, for example, the freedom an open weight LLM brings? Our freedom as users rests on the exploitation of and violence against the people suffering from the data centers, the people labeling the data for the training, the folks gathering the resources for NVIDIA to build chips. Freedom is not a zero-sum game, but a lot of the freedoms that wealthy people in the rich world (which I am one of) enjoy stem from other people’s lack thereof.
“Purity culture is such an obvious trap, an artifact of the neoliberal ideology that insists that the solution to all our problems is to shop very carefully, thus reducing all politics to personal consumption choices:”
Cory labels people’s values and their prioritization as “purity politics” (referring back to the black-and-white strawman he started this part of his post with) and then pulls a really interesting spin: Many people criticizing LLMs come from a somewhat leftist (in contrast to Cory’s libertarian) background. Cory intentionally frames those leftist thoughts that base politics on values as “neoliberal ideology” that reduces “all politics to personal consumption choices”. This is narratively clever: Tell those stupid leftists that they are just neoliberals, the thing they hate! Awesome.

But the argument against using LLMs is not about shopping and markets at all. My not using LLMs does not influence anything in that regard; Microsoft will just keep making the data center go BRRRRRRR.
In a way this framing says more about Cory’s thinking than about that of the people he criticises: Cory is focused on markets and market dynamics, and in that world it’s about purchasing. But moral choices only sometimes relate to markets. They do when I, for example, choose only to buy fairly produced garments. But when I, for example, refused conscription (when I was young, Germany still forced every young man to learn how to kill), that was not a shopping decision. That was politics, as well as leading an ethical life.
People do not believe that “not using LLMs” will solve the issue of OpenAI et al. existing. They do not want to build on or use products with such clearly defined harms and negative externalities, because they believe it to be wrong. Sure, there might be a utilitarian argument – “the thing exists anyway, and if it saves you time, that’s good, right?” – but many people are not utilitarians. They want to lead a life where they feel their actions align with their values. In a way that is a path to freedom: to having the freedom to make the decisions one feels are right and in alignment with one’s values.
Which Cory actually also believes and acts upon when it’s about his values: He has refused to create a Bluesky account, in spite of wanting to be there because his friends are, for (good!) ideological reasons: Bluesky was back then – and honestly still is today – mostly centralized, with the Bluesky corporation having a central chokepoint to control the network. So Cory believes that one sometimes needs to make decisions based on one’s values. He just does not think that your values, as someone not wanting to use LLMs, matter.
“I mean, it was extraordinarily stupid for the Nazis to refuse Einstein’s work because it was ‘Jewish science,’ but not merely because antisemitism is stupid.”
Everybody hates Nazis, and implying that someone is in any way like the Nazis is just a killer argument. But let’s talk about Nazis for a second. The Nazis did a lot of psychological and medical research – on people they interned and later killed in concentration camps. There actually was a massive debate, especially within psychology, about whether using the results of that kind of research is ethically possible. Utilitarianism of course argues that if it’s there, one should use it. But when your whole discipline is focused on understanding how our psyche works in order to, for example, help people with trauma, just taking research that was created through unthinkable violence and torture feels wrong. It feels contrary to what your whole discipline is there for. This reminds me of Ursula K. Le Guin’s story “The Ones Who Walk Away From Omelas”: Omelas is an almost perfect city. Rich, democratic, pleasant. But it only works by keeping one small child in perpetual torment. Okay, but if that kid is already suffering because other people chose that, should you walk away? Or just reap the fruits of that suffering?

Sometimes you need to walk away.
Cory then repeats the strawmen we already talked about and lands here:
“It’s not ‘unethical’ to scrape the web in order to create and analyze data-sets. That’s just ‘a search engine’.”
Again, this twists the argument the way the AI corporations like to do it as well: Search engines scour the web, so AI companies should be allowed to do the same. It’s the same technology! But what’s the purpose?

A search engine scrapes pages to build an “index” in order to let people find those pages. The scraping has value for the page and its owner as well, because it leads to more people finding it and therefore connecting to the writer, journalist, musician, artist, etc. Search engines create connection.
AI scrapers do not guide people towards the original maker’s work. They extract it and reproduce it (often wrongly). “AIs” don’t point you out to the web to find others’ work to relate to; they keep you in their loop and give you the answer, cutting off any connection to the original sources.
While the technology of scraping is the same, the purpose and material effects of those two systems are massively different. Again, Cory misrepresents the critique and tries to make it look unreasonable by turning it into a conversation just about tech, without regard for how that technology affects the world and the people in it.
I appreciate a lot of work Cory Doctorow has done in the last decades. But the arguments he presents here to defend his usage of LLMs for this rather trivial task (which TBH could probably be done reasonably well with traditional means) are part of why the Internet – and therefore the world – looks like it does right now. It’s a set of arguments that wants to delegitimize political and moral actions based on libertarian and utilitarian thinking.
Technologies are embedded not only in their deployment but also in their creation and conceptualization. They carry the understanding of the world that their makers believe in, and they reproduce it. A bit like an LLM reproduces the texts it learned from: It might not always be a 100% identical replica, but it’s structurally so similar that the differences are surface level.
In order to build an Internet – and a world – that is more inclusive, fairer and freer, we need to move past the dogma of unchecked innovation and technology. We need to re-politicize our conversations about technologies, their effects and their goals in order to build the structures (technological, political, social) we want. The structures that lead to conviviality in harmony with the planet we all live on and will live on till the end of our days.
That path is paved with discussions about political and moral values. Discussions about whether certain technological artifacts align with those values or not.
I do agree with Cory that demanding perfect purity leads nowhere. We are imperfect people in an imperfect world. I just do not think that this means we should go full accelerationist. Just turning the “open source” dial up to 11 does not stop the apocalypse. It’s a lot harder than that.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
What It’s Like to Be a Data Labeler Training AI
404 Media (YouTube)
Prof. Emily M. Bender(she/her)
in reply to Prof. Emily M. Bender(she/her) • • •
In this context, I feel like reminding people (again) that the stochastic parrots paper was not primarily a response to synthetic text extruding machines (not at all popular in late 2020), but an exploration of the range of harms that had already been documented in the pursuit of LM scale.
dl.acm.org/doi/10.1145/3442188…
>>
Prof. Emily M. Bender(she/her)
in reply to Prof. Emily M. Bender(she/her) • • •
I also want to point out (again) the ways in which lumping together all uses of LMs (like the lumping of technologies into "AI") obscures the issues at hand.
Language modeling is a useful component of many technologies that can be built without extractive, exploitative means. Take the automatic transcription built by and for the Māori people – there's a te reo Māori language model that's part of that.
>>
Gaëtan Perrault
in reply to Prof. Emily M. Bender(she/her) • • •
"Datasets too large to collect with care" is a great line.
Is there anyone actually doing this?
I know Mozilla is trying this (CommonVoice) and they're incredibly unpopular right now. Could we have community support for such an organization?
Bogdan Buduroiu
in reply to Prof. Emily M. Bender(she/her) • • •
> without amassing datasets too large to collect with care
I appreciate your post, and I'm a huge proponent for moving past the transformer.
However, how is worrying about this not choosing "cooperate" every single time in the Prisoner's Dilemma when you know Meta & Co will always choose "defect"?
This is worrying to me, because nefarious actors have no incentive to care about these issues (and actually have plenty of incentives to be hostile to these ideas). Also, they have institutional backing to do so.
Prof. Emily M. Bender(she/her)
in reply to Bogdan Buduroiu • • •
And you can build and use language models without turning them into the synthetic text extruding machines that are despoiling our information ecosystem.
And even if those are easily accessible, because OpenAI et al want to burn through cash with their demos, we can still refute and refuse the narrative that synthetic text is somehow a panacea to be used across social services (medicine, education) and in science, etc.
>>
Prof. Emily M. Bender(she/her)
in reply to Prof. Emily M. Bender(she/her) • • •
But instead he wrote a defensive screed, seemingly imagining someone who, knowing about his LLM use, ascribes to him all of the ills of everyone's LLM production and use.
A missed opportunity, to be sure.
Chris Real
in reply to Prof. Emily M. Bender(she/her) • • •
His position on subjects is distorted by his personal position in society.
It's a common side-effect of successful critics of society. He speaks, now, to the only people he thinks matter, but they are a narrow group of exceptionals, culled from the privileged, whom he interacts with most.
Success has its isolations, and he hasn't confronted this yet . . .
Everyday.Human Derek
in reply to Prof. Emily M. Bender(she/her) • • •
Sometimes both
Andreas Wagner
in reply to Prof. Emily M. Bender(she/her) • • •
Thank you for this thoughtful and balanced post/thread! It almost obviates a comment on @tante's piece that I have been planning to write. I am particularly happy about your points on the Māori language model, on transformers, and on how to build and use language modeling without extractive, exploitative means!
If I may thus join in there and add another thought ...
>>
Mark Hahn
in reply to Prof. Emily M. Bender(she/her) • • •
He's not making your criticism; that is not a slight to you or your ranking of what's important.
I'm not really sure what the background is here, but it reminds me of how the left so frequently winds up harming its own allies.
Jesse McClure
in reply to Prof. Emily M. Bender(she/her) • • •
This distinction about use cases is the important point in my view. So much so that I wasn't fully on board with the first paragraphs of the Smashing Frames article (though I loved the rest).
For example, the analogy to wanting to be vegan but accepting vegetarian. I am convinced of the value of reducing our meat consumption and animal farming. But personally I don't find eating meat morally objectionable on principle. If I did, I'd *not* make exceptions.
>>
Prof. Emily M. Bender(she/her)
in reply to Emma Irwin • • •
@sunnydeveloper There is a whole literature on dataset documentation, including Data Statements for NLP. We link to some of the other projects from this page and also have some sample data statements.
techpolicylab.uw.edu/data-stat…
Data Statements | Tech Policy Lab

Martin Hamilton
in reply to Prof. Emily M. Bender(she/her) • • •
Hi from a random Internet person! I wondered if you have a view on "Sovereign" models like Apertus? Per raw.githubusercontent.com/swis…
FWIW I am a genAI skeptic who started out feeling quite positive about this development, but then cooled on it rapidly once I realised that it doesn't address a) environmental impacts, or b) potential harms when genAI is used naively – or for plausible deniability by people doing bad stuff ¯\_(ツ)_/¯
For anyone reading this who hasn't come across Apertus before, there are now several models like this with characteristics such as:
Wulfy—Speaker to the machines
in reply to Three LLMs in a Trenchcoat • • •
@me
There are many ways #Aiantagonists lose credibility building inaccurate mythos.
One of which is, they assume AI is frozen in stone with zero development, and because they are outright hostile to the tech they rarely keep up with advancements.
The "Guessing text" is a case in point.
#kona is an Energy Based Model which presents MATHEMATICALLY PROVABLE answers.
If I had a cent for every time somebody in my timeline talks about stochastic parrots, I'd have 67 cents – and that's just yesterday.
Angry posts won't fix AI, political engagement will.
Get off your fat arses and activate politically, #regulateai
tuban_muzuru
in reply to 1337 • • •
As a linguist of creoles and patois: if you wish to translate with reasonable accuracy, throw out the adjectives and adverbs. The resulting translation is cleaner.
Wulfy—Speaker to the machines
in reply to Prof. Emily M. Bender(she/her) • • •
...lest you be denounced as a broligarch/techbro by the righteous!
Andrew Wigglesworth
in reply to Wulfy—Speaker to the machines • • •
@n_dimension
I'm with Lord Byron on the Luddites:
dryad
in reply to Andrew Wigglesworth • • •
@ecadre @n_dimension
Heh! I'm often reminded (and the piece below does a gorgeous job with the subject, incl referencing Byron's poem) that Luddism wasn't anti-progress / anti-tech / reactionary 😀 That's how the victors rewrote it, as they so often do.
thenib.com/im-a-luddite/
@emilymbender
I’m a Luddite (and So Can You!) | The Nib
Tom Humberstone (The Nib)
Wulfy—Speaker to the machines
in reply to mancube • • •
@mancube
The #luddites lost.
There were many reasons. Not the least being that the state deployed more soldiers against them than against Napoleon... And hanged 17 ringleaders.
But the main reason was THE LUDDITES NEVER FORMED A NATIONAL POLITICAL REPRESENTATION!
Stop posting angry memes. Get off your fat/scrawny arses and become politically active.
#regulateai
Violet Madder
in reply to Wulfy—Speaker to the machines • • •
@n_dimension @mancube
It's a little hard to mount an effective political party when the system is hellbent on murdering you (and uses all the propaganda machinery at its disposal to distort your message and memory). Status quo parties like Republicans and Democrats "fight" each other (and even the terrorists who hate us for our freedoms or whatever) like pro wrestlers putting on a cute show. Managed opposition. Anything close to a real Luddite party they sabotage, co-opt, and/or kill with the focus reserved for REAL threats.
People are trying. A lot of folks whose names you'll never know have died or been locked up and tortured, trying.
Wulfy—Speaker to the machines
in reply to Violet Madder • • •@mancube @violetmadder
Resisting the oppressor is never easy.
4 of my ancestors were killed by the Nazis.
One of whom was in a concentration camp; she died from tuberculosis.
One survived the bloody battle of Monte Cassino, storming the mountain (mainly because he was his division's baker, but it still counts). My grandfather was the only survivor of his unit fighting Nazis (he never volunteered).
My Great-Uncle was captured, incarcerated and died in a Nazi concentration camp.
#resistance is never easy.
But we don't have to go full #Luddite yet, we have not exhausted political action yet.
Cy
in reply to Wulfy—Speaker to the machines • • •The Luddites didn't even go full Luddite. They felt they had not exhausted political action, then were summarily executed and/or sent to slave colonies. They mailed letters, did public demonstrations, signed petitions, and for their efforts they were beset by bullshitters accusing them of "military-like drills," among other obvious fabrications. They wrecked machinery, then got blamed for death threats. They were blamed for the Pentrich uprising, which itself was caused by the unbearable exploitation of the working class, and not by the working class who justifiably marched on Nottingham. But nobody there was flying some imagined flag of Ned Ludd!
I wasn't there when it went down, but I call it like I see it. A city nearby to me had a lady last week who marched up to city council during a public hearing and held up a petition at them, which they'd been ignoring for months because 19,000 people wanted to revoke the city's permit that they granted to ICE to hole up in there. They freaking swarmed her with cops, and the news dutifully reported that in the wake of this vicious attack, city council members were going to have to start carrying firearms to defend themselves. She's been charged with criminal trespassing. In a public hearing, at city hall. People in power lie like that ALL the TIME. Not even a lot of power. Any "representative" of anyone anywhere in the USA is going to lie that protected, civil acts are dangerous violence that must be stopped with more violence. I can't imagine the UK is much better.
Until someone shows me the "death threats and possibly attacked" magistrates, I'm going to assume the magistrates were lying. Even when they were (supposedly) flying Ned's flag, I'm not holding a single Luddite to any extremist lens until I read some record other than that of the total slimeball Samuel Bamford, who claimed the protestors opened fire unprovoked, because when the mill owners fired on them, that was just "to intimidate" and shouldn't have been seen as, uh... shooting them.
I mean um... going "full Luddite" as you put it is probably a bad idea, since they start murdering people long before that point. So I agree. Just... don't expect them to play by any rule other than "kill people until they stop nattering at me."
#opinions #politics #ProbablyWrong #idk
CC: @emilymbender@dair-community.social @mancube@pnw.zone @violetmadder@kolektiva.social
Nantucket Lit
in reply to Prof. Emily M. Bender(she/her) • • •Robert Kingett reshared this.
Marianne
in reply to Prof. Emily M. Bender(she/her) • • •Sensitive content
small typos if useful to know:
"And that stand lead him into the problematic train of thought" (led)
"Of just reap the fruits..." (or)
And thank you for this piece!!
Stacey Cornelius 🇨🇦
in reply to Prof. Emily M. Bender(she/her) • • •Rens van der Heijden
in reply to Prof. Emily M. Bender(she/her) • • •tante.cc/2026/02/20/on-allianc…
On Alliances
tante (Smashing Frames)
Hypolite Petovan likes this.
reshared this
Alexander Bochmann, JWcph, Radicalized By Decency, craignicol, Gavin, Roy and Keith reshared this.
Scott D. Strader 😐
in reply to Prof. Emily M. Bender(she/her) • • •Thank you so much. Yours is the ethical reading of LLMs that is needed. (1) Moral beliefs lead to choices that are not black-and-white. (2) Tools are not immoral because of their creators. (3) The decision to use knowledge immorally obtained should weigh heavily on the user. (4) Tools whose use produces immoral outcomes should also weigh heavily on the user.
It's unfortunate that Doctorow went all-in on logical fallacies and presumed absolutes in order to defend his use.
mHtt
in reply to Prof. Emily M. Bender(she/her) • • •2qx
in reply to Prof. Emily M. Bender(she/her) • • •@tante
The City of Omelas we must walk away from is #petrofascism
They are building datacenters full of AI chips to burn fossil fuels. They are monetizing energy to control people with their prompt responses. They're telling people what to think and buy, the same as they've been doing since Standard Oil.
They have energy, and they are trying desperately to control people, because their energy monopoly is dying.
The only way to beat a big Company Town scam is to walk away.
Wulfy—Speaker to the machines
in reply to Prof. Emily M. Bender(she/her) • • •Ahaha Mr. "AI is asbestos in the walls" has become "#reversecentaur" himself...
What a hypocrite!
Scott D. Strader 😐
in reply to Wulfy—Speaker to the machines • • •Euclid Jerome Darlings
in reply to Prof. Emily M. Bender(she/her) • • •Prof. Emily M. Bender(she/her)
in reply to Euclid Jerome Darlings • • •Euclid Jerome Darlings
in reply to Prof. Emily M. Bender(she/her) • • •Prof. Emily M. Bender(she/her)
in reply to Euclid Jerome Darlings • • •Euclid Jerome Darlings
in reply to Prof. Emily M. Bender(she/her) • • •The Servitor
in reply to Prof. Emily M. Bender(she/her) • • •I don't find Doctorow's take disingenuous. There *are* a lot of people who don't discriminate between "good" or "bad" uses of AI and hate AI simply as a symbol for rich white fascism. (Which, easy to see how they get there...).
I see people espousing the absolutist take often. If he's misrepresenting, it's only by not breaking out and enumerating the different rationales for thinking AI is unethical, as tante did (well). The rationale he did name is an entirely real position.
aburka 🫣
in reply to Prof. Emily M. Bender(she/her) • • •grrl_aex
in reply to Prof. Emily M. Bender(she/her) • • •This is a mask-off moment for this influencer, put simply.
Intellectual dishonesty. Arguing in bad faith. Disposing of human volunteer proofreaders in favor of ai and joking about it. Doubling down. Defending and legitimizing mass scraping of human works by ai companies for gods sake.
The amount of ego operating here is blatant and repugnant.
Take a hike, grifter.
trying bear 🕊😺🕊🐼
in reply to Prof. Emily M. Bender(she/her) • • •Patrick H. Lauke
in reply to Prof. Emily M. Bender(she/her) • • •Kathy Murdoch
in reply to Prof. Emily M. Bender(she/her) • • •Vincent Sparks
in reply to Prof. Emily M. Bender(she/her) • • •tiny fat birb
in reply to Prof. Emily M. Bender(she/her) • • •Dr Ro Smith
in reply to Prof. Emily M. Bender(she/her) • • •ChookMother 🇦🇺🦘
in reply to Prof. Emily M. Bender(she/her) • • •But he's succumbed to the anxieties that afflict many who rocket to fame - what if his fans find him to be imperfect and it all falls apart?
The strength of his "enshittification" narrative is in its truth. We all recognise very quickly that it's true, that it impacts our own everyday lives. Eventually it will be common knowledge and the speaking engagements will taper off.
But the morals and ethics of Big Tech? Clearly these are not Doctorow's subject. /2
ChookMother 🇦🇺🦘
in reply to ChookMother 🇦🇺🦘 • • •That he does not produce a continuous flow of blameless ethics, pure reasoning and rare insight should not detract from the truth of his main narrative. Don't let perfection be the enemy of the good.
Prof. Emily M. Bender(she/her)
in reply to ChookMother 🇦🇺🦘 • • •ChookMother 🇦🇺🦘
in reply to Prof. Emily M. Bender(she/her) • • •ChookMother 🇦🇺🦘
in reply to Prof. Emily M. Bender(she/her) • • •dnparadice
in reply to Prof. Emily M. Bender(she/her) • • •Dr. Angus Andrea Grieve-Smith
in reply to Prof. Emily M. Bender(she/her) • • •"Cory believes that one sometimes needs to make decisions based on one’s values. He just does not think that your values as someone not wanting to use LLMs matter."
Love this, thank you!
Spitfire
in reply to Prof. Emily M. Bender(she/her) • • •around and lecturing people about enshitification while defending the most harmful technology brought into existence in recent years. the hypocracy ...
feliz
Unknown parent • • •@CptSuperlative
These are some good thoughts on why the usage of synthetic texts (AI output) undermines the process of social construction of truth on which we depend. Thanks.
funbreaker
in reply to Prof. Emily M. Bender(she/her) • • •