There's this Calvin and Hobbes strip where Calvin's dad says that if we actually wanted more leisure time, we'd invent machines that did things more slowly, and I think about it all the time.
-
The world has been unkind and polarized since our distant ancestors were fashioning clubs from fallen branches and using them on rival tribes. None of this is new. It has come and gone in cycles, and this is one of them.
Yes, that culture of “efficiency” should be tossed out in favor of a culture of calm cooperation. Fortunately, quite a lot of people agree with you, and that is exactly what seems to be slowly happening.
Maybe this time it'll be permanent.
-
Yeah. I remember when people were blaming Doom for the Columbine shooting. It was and remains absolutely baffling.
I played Doom. The player character is a heroic Marine battling an invasion of demons from hell. At no point does the game involve shooting defenseless children or anything similarly immoral.
And yet, somehow, it was the subject of public outrage.
No doubt stoked by interests with media connections, back in the bad old days before Internet fact checking…
-
I wish it was only Boomers saying that. From what I hear, Gen Z is of that opinion now. I can't even. They of all people should know better than this; they just got finished with their own childhood!
-
@roy_calum @clemensg @camertron It's used a bit more broadly in German, I think. In English, you wouldn't say you drink a cup of tea in the afternoon to decelerate.
-
That seems to be Australia's approach, but so far it isn't working. Kids easily evade the age checks with scrunched-up faces, video game characters, fake IDs, whatever it takes.
Must every generation learn the hard way that keeping kids from chatting and exploring is impossible? I thought we learned this in the 1990s. Must we burn the entire Internet to ashes, leaving no trace of what was once humanity's crowning achievement, before we give up on this fool's errand?
-
That's what I heard! Apparently some kids figured out that they can fool the bot into thinking they're adults just by scrunching up their faces.
Among many, many other strategies.
-
But for how long? If platforms are commanded to employ AI to stop bullying, that will:
1. Not work. Bullying can and often does happen right under the noses of highly intelligent schoolteachers. If even they can't reliably identify bullying, AI doesn't stand a chance.
2. Make all of the alternatives illegal. AI is staggeringly expensive to create and use. Only billion-dollar companies can afford it. You can kiss Mastodon goodbye, even though we aren't part of the problem!
And this isn't speculation. Big platforms like TikTok are actively attempting to use AI to stop bullying, hate, etc. They are failing miserably.
All they're actually accomplishing is forcing people to make up Newspeak-ish words like “unalive” in order to discuss political issues of the day without getting banned. This protects no one.
-
@argv_minus_one @Elizafox @camertron
Bleh. Literal pedophiles describe themselves as such in Discord without getting in any trouble.
You don't need AI, you literally just have to filter messages for strings, it's cheap af.
The lack of safety isn't because it's technically hard, it's because nobody cares.
-
Seriously?!
Well, yes, sites certainly should be required to have a report button and ban people who get reported and are doing clearly illegal stuff.
But word filtering? Yeah, it's cheap, but it doesn't work. It'll drown the admins in false reports or depopulate the site with false bans; and if it simply deletes/censors content without further punishment, it's easy to evade. https://en.wikipedia.org/wiki/Scunthorpe_problem
Some think AI will solve this problem, but, well, see above.
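For anyone unfamiliar, the Scunthorpe problem is easy to demonstrate. A minimal sketch (the word list and messages here are made up purely for illustration) of why naive substring filtering fails in both directions:

```python
# Naive substring filter, as cheap as it gets.
BLOCKED = ["ass", "cunt"]

def naive_filter(message: str) -> bool:
    """Return True if the message would be blocked."""
    lower = message.lower()
    return any(word in lower for word in BLOCKED)

# False positives: innocent words contain blocked substrings.
print(naive_filter("I grew up in Scunthorpe"))   # True (contains "cunt")
print(naive_filter("Please pass the cassette"))  # True (contains "ass")

# False negative: trivial spacing evades the filter entirely.
print(naive_filter("you c u n t"))               # False
```

Fancier matching (word boundaries, fuzzy spelling) trades one failure mode for the other; it never eliminates both.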
Perhaps we could require large for-profit platforms to hire trust-and-safety teams to monitor things? I believe that's how it used to be done.
That's even more expensive than AI, though, so it has to apply only to big companies that can afford it.
Also, I'm under the impression that this kind of work tends to be traumatic for the trust-and-safety team…
-
Found it
@camertron Is this the reason I am surrounded by fountain pens and typewriters, and why I'm going to go see about a Linotype machine tomorrow?