Stochastic parrots: so what ...
In "On the dangers of stochastic parrots: Can language models be too big?" Bender et al. state that people "mistake LM [i.e., language model]-driven performance gains for actual natural language understanding" (p. 616). The charge is that language models are trained only to predict which word is likely to follow another, so they don't actually understand language. They are nothing more than stochastic parrots. The interesting question here is the 'nothing more than'. The objection again seems rooted in the humanistic bias that only biological beings can 'really' understand. But human understanding, and the language manipulation that manifests it, may itself be largely a sophisticated stochastic process. Sam Altman tweeted on X, "I am a stochastic parrot, and so r u." Our brains are sophisticated neural networks that don't seem functionally very different from the artificial networks that they were the original...
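To make the 'stochastic parrot' notion concrete, here is a minimal sketch of next-word prediction in its simplest possible form: a bigram model that generates text purely by sampling which word followed which in its training data. The toy corpus and function names are illustrative, not drawn from any real language model; actual LMs use vastly larger networks, but the generative principle being debated is the same.

```python
import random
from collections import defaultdict

# Toy training corpus (illustrative only).
corpus = "the parrot repeats the words the parrot has heard".split()

# Record which words were observed to follow each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a sequence by repeatedly picking a random observed successor."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: no word ever followed this one
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

The output is locally fluent yet produced with no representation of meaning at all, which is exactly the sense in which the model 'parrots' its training text; the open question the passage raises is whether scale and architecture change that in kind or only in degree.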