Take a moment and bang this concept into your heads: “we found no evidence of formal reasoning in language models… Their behavior is better explained by sophisticated pattern matching—so fragile, in fact, that changing names can alter results by ~10%!” https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and STOP WITH THE WISHFUL THINKING.
Machines are epistemically landlocked. There is no such thing as “an external world” to them. Machines push around loads INSIDE them. “But-but-but we’re also machines!” some dream-addled kid may scream with teary eyes. Uh, no. Learn what a machine is. https://davidhsing.substack.com/p/reading-about-neuro-symbolic-ai-has
https://jensorensen.com/2024/10/03/ai-data-centers-carbon-climate-tech-companies-cartoon/
Neural networks never deal with knowledge in any legitimate fashion. They will always have epistemic issues, because that’s their underlying nature. Additionally, they’re a house of cards: brittle, and prone to collapse whenever anything falls out of range of the model’s data. And the actual “range” would have to be the entire real world and the entirety of human knowledge. https://davidhsing.substack.com/p/why-neural-networks-is-a-bad-technology
*THE* reason the AI folks ran to Washington to squeal about the civilization-ending capabilities of their tech was that they knew this: ChatGPT Plus has no real moat, little product differentiation (outside of its advanced voice mode, to which Meta is already working on a competitor), and faces increasing commoditization from other models, open source…