@Purple @halfy LLMs fundamentally do not "understand" things; they cannot access meaning. as such, they also can't really get better at this in any fundamental way. all you can do is make little tweaks, and those can't resolve the core problems because the problems come from how LLMs work in the first place.