Analyzing the Contextual Shortcomings of Artificial General Intelligence
- ELM
Even in the most cutting-edge Artificial General Intelligence (AGI) endeavors, the disparity between humans and artificial systems is strikingly apparent. Although this difference fundamentally divides the capabilities of the two, human-level intelligence (HLI) has remained the aim of AGI for decades. This paper opposes the binary pass/fail framing of the Turing Test, the foundation of this ambition and the original criterion for establishing a potentially intelligent machine. It discusses how AI experts misinterpreted the Imitation Game as a means to anthropomorphize computer systems and asserts that HLI is a red herring that distracts current research from relevant problems. Despite extensive research on the potential design of an AGI application, there has been little consideration of how such a system would access and ingest data at a human-like level. Although current machines may emulate specific human attributes, AGI is developed under the presumption that these capabilities can easily be scaled up to general intelligence. This paper establishes the contextual and rational attributes that perpetuate the gap between human and AI data-collection abilities and explores the characteristics that current AGI lacks. After asserting that AGI should not be seeking HLI, the paper analyzes its current state, reevaluates the Turing Test, and discusses the future of AGI development within this framework.