![](https://lemmy.magnor.ovh/api/v3/image_proxy?url=https%3A%2F%2Ffedia.io%2Fmedia%2F33%2F95%2F33951fd7c296b0000d1da93f5b28bad35a19034fa8e8517f7f2dee91fb6751d4.png)
![](https://lemmy.magnor.ovh/api/v3/image_proxy?url=https%3A%2F%2Flemmy.world%2Fpictrs%2Fimage%2F8aead832-799f-4d34-a20d-eae5b621a9b1.jpeg)
I’d love to agree with you - but when people say that LLMs are stochastic parrots, this is what they mean…
LLMs don’t actually know what the words they’re saying mean, they just know what words are most likely to be next to each other based on training data.
Because they don’t know the meaning of what they’re saying, they also don’t know whether it’s factual - as such they simply can’t fact-check themselves.
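To make the "stochastic parrot" point concrete, here's a toy sketch - a simple bigram model, nowhere near a real LLM (which uses a neural network over tokens), but it shows the core idea of predicting the next word purely from what followed what in the training data, with zero notion of meaning. The corpus and function names here are made up for illustration:

```python
import random
from collections import Counter, defaultdict

# Toy "training data": the only thing the model ever sees.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- pure adjacency, no meaning attached.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the word most often seen after `word` in training, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "the" was followed by "cat" twice, "mat" once, "fish" once -> "cat" wins.
print(next_word("the"))  # -> cat
```

The model will happily chain words into fluent-looking output, but it has no way to check whether the result is true - it only knows co-occurrence statistics.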
It really saddens me to see how many managers out there treat their subordinates terribly, and then act surprised when their subordinates do the same - as though employees are meant to be grateful for their terrible treatment