justaprogressive's Journal

The Luckiest Man in the World
A man woke up one morning with a horrendous hangover. When he finally pried his eyes open, there was a glass of water and two aspirin tablets on the nightstand. He took the aspirin and washed them down with the water. Stumbling into the bathroom, he looks in the mirror and realizes he has a black eye. He also sees a note from his wife on a fresh towel: "I put out a fresh towel for your shower, and breakfast is keeping warm in the oven. I'll be back later; I'm picking up a nice steak for your dinner."
This really confused him; normally his wife is angry for days after he goes out drinking with his friends. He showers and goes back into the bedroom to dress. He finds his favorite worn-out sweat suit, the one his wife hates, laid out for him. He dresses and heads downstairs.
On the way down, he notices a wet spot in the carpet and a broken little chair that was his wife's favorite decoration. He walks into the kitchen, and sure enough, his favorite breakfast is waiting. His son is sitting at the table eating. His boy looks at him, smiles, and says, "You don't remember what happened last night, do you?"
The man confesses that he doesn't. His son fills him in: "You were so drunk you couldn't stand up. Mom was helping you to the stairs when you fell and broke her chair. That's when you got the black eye. Then, halfway up the stairs, you threw up on her and the carpet. She finally got you to bed and then tried to undress you. That's when you yelled out, 'Leave me alone, lady! I'm married!'"
At that moment, he became the luckiest man in the world.
Trump's New Nuclear Nightmare in Iran - David Corn

After Trump, during his first White House stint, ripped up the Iran nuclear deal that President Barack Obama and other world leaders had negotiated with Tehran in 2015, Iran responded by enriching its uranium to a much higher level than it had been doing under the agreement. Because of that move, it now possesses an estimated 970 pounds of highly enriched uranium that's a lot closer to the level of refinement needed for bomb-grade material. And international nuclear inspectors, who were able to keep track of Iran's uranium stockpile before Trump bombed Iran's nuclear facilities in June, aren't sure where this uranium is now.
In short, with his war in Iran, Trump has created a big, possibly catastrophic problem: A half-ton of highly enriched uranium, which can be made bomb-ready, is somewhere out there, available for use by Iran's new regime, or perhaps not fully secured and susceptible to theft or expropriation.
I spoke to Joe Cirincione, a veteran nuclear policy expert, about this stockpile and the challenges it presents.
He notes that it would not take much for Iran to enrich this material (a gaseous form of uranium) from its present state of 60-percent enrichment to the 90-percent level necessary for a bomb. (Uranium at the 60-percent level can be used for a crude and large bomb that would be akin to the weapon dropped on Hiroshima, but not a bomb that could be delivered by a missile.) He points out that under the Iran deal that Trump rejected, Iran had only been enriching uranium to the 4-percent level.
https://www.motherjones.com/politics/2026/03/donald-trump-iran-nuclear-uranium-isfahan-joe-cirincione/
Elizabeth Warren's Amazingly Progressive Housing Bill - Robert Kuttner

The 303-page legislation creates new programs and federal money for housing construction, promotes manufactured housing, streamlines zoning and permitting obstacles, and improves access to mortgages. A key measure aimed at private equity prevents Wall Street from buying large numbers of single-family homes. The bill would allow construction of single-family homes as rentals, but companies with more than 350 units would have to sell them within seven years.
Even more improbably for casual observers, the bill reflected the legislative and political genius of one of the Senate's most progressive members, Elizabeth Warren. How does the senator do it? Two ways.
First, think of a Venn diagram. For the most part, Republican and Democratic principles and politics on how to contain and complement rapacious capitalism diverge. But there are some areas of potential overlap where Republicans find it expedient to embrace a bit of economic populism. The massive public outcry on behalf of affordable housing is one. And while Republicans push tax cuts and lifting regulatory restraints to help oligarchs, a symbolic kick now and then is good cover.
Warren's genius is to identify those areas and pursue legislative opportunities (this is part two of her strategy) to find a Republican partner. In this case, her lead co-sponsor was Sen. Tim Scott of South Carolina, the chair of the Senate Banking Committee on which Warren serves as ranking Democrat, and the Senate's only Black Republican.
https://prospect.org/2026/03/13/elizabeth-warrens-amazingly-progressive-housing-bill/
AI Data Center Hucksters Wouldn't Bribe Our Legislators -- Would They? By Jim Hightower
Instead of slipping cash-filled envelopes to individual politicos, tech giants like Amazon, Meta, and OpenAI are putting up hundreds of millions of dollars in this spring's midterm elections to pay for the campaigns of candidates who pledge to back their intrusive, water-sucking, energy-wasting AI schemes. For example, Mark Zuckerberg, CEO of Meta, has two super-PACs doling out $65 million to state and local politicians who will oppose any regulation of the sprawling data centers he wants to impose on rural Texas and Illinois.
Why such a barrage of corporate money in local legislative races? Because the countryside is aflame with fury that arrogant, avaricious AI profiteers think they're entitled to walk over local communities so these locals are demanding that their legislators regulate or even ban AI data centers.
Unable (or unwilling) to win political support honestly, the corporate giants intend to overpower the democratic will of the people by effectively bribing submissive legislators with campaign cash or by funding opponents for lawmakers who refuse to be bought.
Of course, bribers and bribees alike will piously pretend that the corporate ruse of buying government policy by buying legislative seats is technically not a bribe. But hello: rigging the system so billionaire donors can crush local democracy is not a "technicality." If it looks, smells and has the impact of a bribe ... it is one.
https://www.creators.com/read/jim-hightower
AI "journalists" prove that media bosses don't give a shit - Cory Doctorow

https://www.wheresyoured.at/the-ai-bubble-is-an-information-war/
That's "Ed, the financial sleuth." But Ed has another persona, one we don't get nearly enough of, which I delight in: "Ed the stunt journalist." For example, in 2024, Ed bought Amazon's bestselling laptop, "a $238 Acer Aspire 1 with a four-year-old Celeron N4500 Processor, 4GB of DDR4 RAM, and 128GB of slow eMMC storage" and wrote about the experience of using the internet with this popular, terrible machine:
https://www.wheresyoured.at/never-forgive-them/
It sucked, of course, but it sucked in a way that the median tech-informed web user has never experienced. Not only was this machine dramatically underpowered, but its defaults were set to accept all manner of CPU-consuming, screen-filling ad garbage and bloatware. If you or I had this machine, we would immediately hunt down all those settings and nuke them from orbit, but the kind of person who buys a $238 Acer Aspire from Amazon is unlikely to know how to do any of that and will suffer through it every day, forever.
Normally the "digital divide" refers to access to technology, but as access becomes less and less of an issue, the real divide is between people who know how to defend themselves from the cruel indifference of technology designers and people who are helpless before their enshittificatory gambits.
Zitron's stunt stuck with me because it's so simple and so apt. Every tech designer should be forced to use a stock configuration Acer Aspire 1 for a minimum of three hours/day, just as every aviation CEO should be required to fly basic coach at least one out of three flights (and one of two long-haul flights).
To that, I will add: every news executive should be forced to consume the news in a stock browser with no adblock, no accessibility plugins, no Reader View, none of the add-ons that make reading the web bearable:
https://pluralistic.net/2026/03/07/reader-mode/#personal-disenshittification
But in all honesty, I fear this would not make much of a difference, because I suspect that the people who oversee the design of modern news sites don't care about the news at all. They don't read the news, they don't consume the news. They hate the news. They view the news as a necessary evil within a wider gambit to deploy adware, malware, pop-ups, and auto-play video.
Rawdogging a Yahoo News article means fighting through a forest of pop-ups, pop-unders, autoplay video, interrupters, consent screens, modal dialogs, modeless dialogs: a blizzard of news-obscuring crapware that oozes contempt for the material it befogs. Irrespective of the words and icons displayed in these DOM objects, they all carry the same message: "The news on this page does not matter."
The owners of news services view the news as a necessary evil. They aren't a news organization: they are an annoying pop-up and cookie-setting factory with an inconvenient, vestigial news entity attached to it. News exists on sufferance, and if it was possible to do away with it altogether, the owners would.
That turns out to be the defining characteristic of work that is turned over to AI. Think of the rapid replacement of customer service call centers with AI. Long before companies shifted their customer service to AI chatbots, they shifted the work to overseas call centers where workers were prohibited from diverging from a script that made it all but impossible to resolve your problems:
https://pluralistic.net/2025/08/06/unmerchantable-substitute-goods/#customer-disservice
These companies didn't want to do customer service in the first place, so they sent the work to India. Then, once it became possible to replace Indian call center workers who weren't allowed to solve your problems with chatbots that couldn't resolve your problems, they fired the Indian call center workers and replaced them with chatbots. Ironically, many of these chatbots turn out to be call center workers pretending to be chatbots (as the Indian tech joke goes, "AI stands for 'Absent Indians'").
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
"We used an AI to do this" is increasingly a way of saying, "We didn't want to do this in the first place and we don't care if it's done well." That's why DOGE replaced the call center reps at US Customs and Immigration with a chatbot that tells you to read a PDF and then disconnects the call:
https://pluralistic.net/2026/02/06/doge-ball/#n-600
The Trump administration doesn't want to hear from immigrants who are trying to file their bewildering paperwork correctly. Incorrect immigration paperwork is a feature, not a bug, since it can be refined into a pretext to kidnap someone, imprison them in a gulag long enough to line the pockets of a Beltway Bandit with a no-bid contract to operate an onshore black site, and then deport them to a country they have no connection with, generating a fat payout for another Beltway Bandit with the no-bid contract to fly kidnapped migrants to distant hellholes.
If the purpose of a customer service department is to tell people to go fuck themselves, then a chatbot is obviously the most efficient way of delivering the service. It's not just that a chatbot charges less to tell people to go fuck themselves than a human being; the chatbot itself means "go fuck yourself." A chatbot is basically a "go fuck yourself" emoji. Perhaps this is why every AI icon looks like a butthole:
https://velvetshark.com/ai-company-logos-that-look-like-buttholes
So it's no surprise that media bosses are so enthusiastic about replacing writers with chatbots. They hate the news and want it to go away. Outsourcing the writing to AI is just another way of devaluing it, adjacent to the existing enshittification that sees the news buried in popups, autoplays, consent dialogs, interrupters and the eleventy-million horrors that a stock browser with default settings will shove into your eyeballs on behalf of any webpage that demands them:
https://pluralistic.net/2024/05/07/treacherous-computing/#rewilding-the-internet
Remember that summer reading list that Hearst distributed to newspapers around the country, which turned out to be stuffed with "hallucinated" titles? At first, the internet delighted in dunking on Marco Buscaglia, the writer whose byline the list ran under. But as 404 Media's Jason Koebler unearthed, Buscaglia had been set up to fail, tasked with writing most of a 64-page insert that would have normally been the work of dozens of writers, editors and fact checkers, all on his own:
https://www.404media.co/chicago-sun-times-prints-ai-generated-summer-reading-list-with-books-that-dont-exist/
When Hearst hires one freelancer to do the work of dozens, they are saying, "We do not give a shit about the quality of this work." It is literally impossible for any writer to produce something good under those conditions. The purpose of Hearst's syndicated summer guide was to bulk out the newspapers that had been stripmined by their corporate owners, slimmed down to a handful of pages that are mostly ads and wire-service copy. The mere fact that this supplement was handed to a single freelancer blares "Go fuck yourself" long before you clap eyes on the actual words printed on the pages.
The capital class is in the grips of a bizarre form of AI psychosis: the fantasy of a world without people, where any fool idea that pops into a boss's head can be turned into a product without having to negotiate its creation with skilled workers who might point out that your idea is pretty fucking stupid:
https://pluralistic.net/2026/01/05/fisher-price-steering-wheel/#billionaire-solipsism
For these AI boosters, the point isn't to create an AI that can do the work as well as a person; it's to condition the world to accept the lower-quality work that will come from a chatbot. Rather than reading a summer reading list of actual books, perhaps you could be satisfied with a summer reading list of hallucinated books that are at least statistically probable book-shaped imaginaries?
The bosses dreaming up use-cases for AI start from a posture of profound and proud ignorance of how workers who do useful things operate. They ask themselves, "If I was a ______, how would I do the job?" and then they ask an AI to do that, and declare the job done. They produce utility-shaped statistical artifacts, not utilities.
Take Grammarly, a company that offers statistical inferences about likely errors in your text. Grammar checkers aren't a terrible idea on their face, and I've heard from many people who struggle to express themselves in writing (either because of their communications style, or because they don't speak English as a first language) for whom apps like Grammarly are useful.
But Grammarly has just rolled out an AI tool that is so obviously contemptuous of writing that they might as well have called it "Go fuck yourself, by Grammarly." The new product is called "Expert Review," and it promises to give you writing advice "inspired" by writers whose writing they have ingested. I am one of these virtual "writing teachers" you can pay Grammarly for:
https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews
This is not how writing advice works. When I teach the Clarion Science Fiction and Fantasy Writers' workshop, my job isn't to train the students to produce work that is strongly statistically correlated with the sentence structure and word choices in my own writing. My job (the job of any writing teacher) is to try and understand the student's writing style and artistic intent, and to provide advice for developing that style to express that intent.
What Grammarly is offering isn't writing advice, it's stylometry, a computational linguistics technique for evaluating the likelihood that two candidate texts were written by the same person. Stylometry is a very cool discipline (as is adversarial stylometry, a set of techniques to obscure the authorship of a text):
https://en.wikipedia.org/wiki/Stylometry
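The core stylometric comparison described here can be sketched in a few lines: reduce two texts to word-frequency vectors and measure how closely those vectors align. This is a toy illustration only, assuming the crudest possible features (raw word counts and cosine similarity); real stylometry relies on richer signals such as function-word rates and sentence-length distributions, and nothing below reflects any actual product's method.

```python
# Toy stylometry sketch: compare two texts by the cosine similarity
# of their word-frequency vectors. Illustrative assumptions only.
from collections import Counter
import math

def word_freqs(text):
    """Lowercased word counts; a crude stand-in for real stylometric features."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two Counter frequency vectors (1.0 = identical)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

sample_a = "the quick brown fox jumps over the lazy dog"
sample_b = "the quick brown fox leaps over the sleepy dog"
score = cosine_similarity(word_freqs(sample_a), word_freqs(sample_b))
```

A high score here only says the two texts share a word-frequency profile, which is exactly the gap the surrounding argument points at: overlapping vocabulary statistics are not the same thing as a shared style, let alone teachable craft.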
But stylometry has nothing to do with teaching someone how to write. Even if you want to write a pastiche in the style of some writer you admire (or want to send up), word choices and sentence structure are only incidental to capturing that writer's style. To reduce "style" to "stylometry" is to commit the cardinal sin of technical analysis: namely, incinerating all the squishy qualitative aspects that can't be readily fed into a model and doing math on the resulting dubious quantitative residue:
https://locusmag.com/feature/cory-doctorow-qualia/
If you wanted to teach a chatbot to teach writing like a writer, you would at a minimum have to train that chatbot on the instruction that writer gives, not the material that writer has published. Nor can you infer how a writer would speak to a student by producing a statistical model of the finished work that writer has published. "Published work" has only an incidental relationship to "pedagogical communication."
Critics of Grammarly are mostly focused on the effrontery of using writers' names without their permission. But I'm not bothered by that, honestly. So long as no one is being tricked into thinking that I endorsed a product or service, you don't need my permission to say that I inspired it (even if I think it's shit).
What I find absolutely offensive about Grammarly is not that they took my name in vain, but rather, that they reduced the complex, important business of teaching writing to a statistical exercise in nudging your work into a word frequency distribution that hews closely to the average of some writer's published corpus. This is Grammarly's fraud: not telling people that they're being "taught by Cory Doctorow," but rather, telling people that they are being "taught" anything.
Reducing "teaching writing" to "statistical comparisons with another writer's published work" is another way of saying "go fuck yourself," not to the writers whose identities Grammarly has hijacked, but to the customers they are tricking into using this terrible, substandard, damaging product.
Preying on aspiring writers is a grift as old as the publishing industry. The world is full of dirtbag "story doctors," vanity presses, fake literary agents and other flimflam artists who exploit people's natural desire to be understood to steal from them:
https://writerbeware.blog/
Grammarly is yet another company for whom "AI" is just a way to lower quality in the hopes of lowering expectations. For Grammarly, helping writers with their prose is an irritating adjunct to the company's main business of separating marks from their money.
In business theory, the perfect firm is one that charges infinity for its products and pays zero for its inputs (you know, "scholarly publishing").
In this regard, AI is connected to the long tradition of capitalist innovation, in which new production efficiencies are used to increase quantity at the expense of quality. This has been true since the Luddite uprising, in which skilled technical workers who cared deeply about the textiles they produced using complex machines railed against a new kind of machine that produced manifestly lower quality fabric in much higher volumes:
https://pluralistic.net/2023/09/26/enochs-hammer/#thats-fronkonsteen
It's not hard to find credible, skilled people who have stories about using AI to make their work better. Elsewhere, I've called these people "centaurs": human beings who are assisted by machines. These people are embracing the socialist mode of automation: they are using automation to improve quality, not quantity.
Whenever you hear a skilled practitioner talk about how they are able to hand off a time-consuming, low-value, low-judgment task to a model so they can focus on the part that means the most to them, you are talking to a centaur. Of course, it's possible for skilled practitioners to produce bad work (some of my favorite writers have published some very bad books indeed), but that isn't a function of automation; that's just human fallibility.
A reverse centaur (a person conscripted to act as a peripheral to a machine) is trapped by the capitalist mode of automation: quantity over quality. Machines work faster and longer than humans, and the faster and harder a human can be made to work, the closer the firm can come to the ideal of paying zero for its inputs.
A reverse centaur works for a machine that is set to run at the absolute limit of its human peripheral's capability and endurance. A reverse centaur is expected to produce with the mechanical regularity of a machine, catching every mistake the machine makes. A reverse centaur is the machine's accountability sink and moral crumple-zone:
https://estsjournal.org/index.php/ests/article/view/260
AI is a normal technology, just another set of automation tools that have some uses for some users. The thing that makes AI signify "go fuck yourself" isn't some intrinsic factor of large language models or transformers. It's the capitalist mode of automation, increasing quantity at the expense of quality. Automation doesn't have to be a way to reduce expectations in the hopes of selling worse things for more money, but without some form of external constraint (unions, regulation, competition), that is inevitably how companies will wield any automation, including and especially AI.
https://pluralistic.net/2026/03/11/modal-dialog-a-palooza/#autoplay-videos
Profile Information
Gender: Do not display
Member since: Wed Aug 23, 2023, 12:40 PM
Number of posts: 6,860

