Oh, also: Forget about those "better #AI toolkits".
There is literally no way to make LLM-based assistants not full of shit. Given the way they work, it simply IS NOT TECHNICALLY POSSIBLE to build a mechanism for accuracy into the system.
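To make "the way they work" concrete: generation is just repeated next-token sampling from a learned probability distribution over tokens. Here is a minimal toy sketch of a single sampling step; the lookup-table "model" and the example prompt are made up purely for illustration, and a real LLM swaps the table for a huge neural network, but the shape of the step is the same:

```python
import random

# Toy stand-in for a language model: it maps a context to a probability
# distribution over possible next tokens. A real LLM does the same thing,
# just with billions of parameters instead of a hard-coded lookup table.
TOY_MODEL = {
    "The capital of Australia is": [
        ("Sydney", 0.6),     # plausible and common in training-style text, but wrong
        ("Canberra", 0.3),   # correct, yet less probable in this toy distribution
        ("Melbourne", 0.1),
    ],
}

def next_token(context: str) -> str:
    """Sample the next token. Note what is NOT here: no lookup against
    reality, no truth score, nothing that rewards being correct."""
    tokens, weights = zip(*TOY_MODEL[context])
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "The capital of Australia is"
    print(prompt, next_token(prompt))
```

The only quantity this step ever consults is "what text usually comes next", nothing else. There is no slot in it where a truth check could plug in without building something else entirely.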
The only way around it would be an entirely new system capable of fact-checking arbitrary text - and that's not happening anytime soon, because it would take semantic comprehension, and we have no idea how to build that.