General Discussion
Is it purposeful to turn our minds to mush? I see AI mistakes in time and geography too.
https://bsky.app/profile/slothropsmap.bsky.social/post/3ldfdeaelws2j
"It is worth communicating that there is not currently a known technical fix for removing AI slop imagery and web content from search results, so we're headed for an information crisis of unprecedented proportions. Really."
Think. Again.
(19,751 posts)
...the more will be used to train A.I. and it will only get worse as A.I. feeds on itself like that.
applegrove
(123,870 posts)
to get all that slop off the internet, so it will keep feeding on itself. Maybe people will actually become more discerning instead as their senses are assaulted. Sure is a new world.
jfz9580m
(15,584 posts)
I also don't get why people find this so confusing.
I would be alarmed if PubMed or Stack Exchange or Wikipedia (the sort of more reliable standard-bearers on the net) started getting filled with AI slop.
But who are these people who turn to YouTube and TikTok etc. for information?
If you look in places like those for information, obviously you will get rubbish.
If I had a science question, I would ask Stack Exchange or look at university sites or PubMed. If those started getting filled with AI slop, then it is back to textbooks and cross-referencing.
But why would you not expect YouTube etc. to largely be garbage? Those are entertainment sites, not information sites.
I would be alarmed if PubMed went that way.
That would be scary. I am not fast enough to pick up on errors in highly technical work if it is in places you have some baseline level of trust in.
Man, I hope PubMed stays free of AI clutter. I would be scared if someone tried to enhance the critical thinking skills of the populace by filling PubMed with AI junk to see if people could distinguish between good and bad science. Malignant creativity, that.
Way to make the average drudge's life way harder. And ultimately a moronic method.
Nothing based in deception is ever worthwhile. It's just a way for pathetic douchebags to feel superior.
I have so much contempt for deception- and manipulation-based, bad-faith methods. Besides, it wouldn't even work.
Not even very bright people have expertise in every damn thing.
I am not saying that anyone has come up with ideas that bad to combat disinfo or enhance critical thinking. But sometimes when I look out there cynically and think of all the bad-faith actors with no shame, I can come up with ideas that would occur to those who channel malignant creativity.
applegrove
(123,870 posts)
that maybe it would backfire.
jfz9580m
(15,584 posts)
That may actually have the effect of making people use YouTube less trustfully and end up being a good thing.
Maybe people are conned by humans more easily than by something they know is saturated with AI.
That would be a net gain.
AI itself isn't inherently bad, but how it is used, controlled, and integrated into our information systems will determine whether it benefits society or contributes to a crisis of misinformation.
MomInTheCrowd
(338 posts)
Some comments in your link show a couple of workarounds:
Append either of the following phrases to the end of any of your searches; they modify the returned results by either removing AI-generated content or only returning content from before AI was ubiquitous:
"stable diffusion" -"ai" -"midjourney"
Or
"2021 and earlier"
AI has kilt the internets
Blue_Tires
(57,247 posts)
Ironically, this might push some people back to old-fashioned books and newspapers 🤔