@Fragglemuppet They talk more about politics than technical stuff. And yeah, it sort of is, but it's much less visible on mobile. Also: remember Myspace Tom? Just because some username and profile photo is shown to you, that doesn't mean they're someone you can interact with. At least not on the big corporate social media.
fastfinge@fed.interfree.ca
@fastfinge@fed.interfree.ca
Posts
Huh. Just chatted in person with someone who's been on #mastodon for a while now, who honestly thought that all the Mastodon domains were run by Mastodon, and that having a different domain was just a vanity thing to look cool. It only came up because they were complaining about an issue they were having on a smaller server (not naming it for anonymity), so I suggested contacting their server admin about the problem. I was surprised when they answered, "Dude, nobody at big companies reads those reports. It just all goes to AI or whatever." It took some actual convincing to get them to believe that the server they're on does, in fact, have a living, breathing human admin who can be talked to.
Anyway, folks, support your #fediverse server admins and moderators. With money, where you can. They're almost certainly getting messages from users who think that reporting things to an admin here is exactly like reporting stuff to Facebook or Google, i.e. screaming at a giant faceless entity who's never going to care or do anything about whatever your problem is.
Poll only for #Blind/LowVision users who rely on #AltText.

@RachelThornSub So as an actual blind user who uses AI regularly... no, not really. If you include AI-generated alt text, the odds are you're not checking it for accuracy. But I might not know that, so I'll assume the alt text is more accurate than it is. If you don't use any alt text at all, I'll use the AI tools built into my screen reader to generate a description myself if I care, and I know exactly how accurate or trustworthy those tools may or may not be. This has a few advantages:
1. I'm not just shoving images into ChatGPT or some other enormous LLM. I tend to start with deepseek-ocr, a 3B (3-billion-parameter) model. If that turns out not to be useful because the image isn't text, I move up to one of the 90B Llama vision models (sketched in code after this list). For comparison, ChatGPT and Google's LLMs are reportedly in the trillion-parameter range or larger. A model specializing in describing images can run on a single video card in a consumer PC. There is no reason to use a giant data center for this task.
2. The AI alt text is only generated if a blind person encounters your image and cares enough about it to bother. If you're generating AI alt text yourself, and not bothering to check or edit it at all, you're wasting resources on something nobody may ever read.
3. I have prompts that I've fiddled with over time to get the most accurate descriptions these things can generate. If you're just throwing images at ChatGPT, what it writes is probably not accurate anyway.
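For the curious, the escalation in point 1 looks roughly like this. It's a minimal sketch, assuming a local Ollama-style server exposing an OpenAI-compatible chat endpoint on localhost:11434; the model tags, prompts, and the crude "was there text?" heuristic are my own placeholders, not anything a particular screen reader actually ships with:

```python
import base64
import requests

# Assumed: a local OpenAI-compatible server (e.g. Ollama) on this port.
API = "http://localhost:11434/v1/chat/completions"

def describe(image_path: str, model: str, prompt: str) -> str:
    """Send one image plus a prompt to a local model; return its reply."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = requests.post(API, json={
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def alt_text(image_path: str) -> str:
    # Step 1: cheap 3B OCR model first; screenshots of text are the
    # common case, so most images never touch the big model.
    text = describe(image_path, "deepseek-ocr",
                    "Transcribe all text in this image verbatim.")
    if len(text.strip()) > 20:  # crude placeholder heuristic
        return text
    # Step 2: escalate to a larger vision model only when OCR finds
    # nothing worth keeping.
    return describe(image_path, "llama3.2-vision:90b",
                    "Describe this image factually. Do not guess at "
                    "details you cannot see; say 'unclear' instead of "
                    "inventing them.")
```

The whole point of the two-step design is that the small OCR pass covers the common case, and the big vision model only spins up when it has to.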
If you as a creator are providing alt text, you're making the implicit promise that it's accurate, and that it attempts to communicate what you meant by posting the image. If you can't, or don't want to, make that promise to your blind readers, don't paper over it with AI. We can use AI ourselves, thanks. Though it's worth noting that if you're an artist and don't want your image tossed into the AI machine by a blind reader, you'd better be providing alt text. Because if you don't, and I need or want to understand the image, into the AI it goes.