Having explored the definitional complexities and historical evolution of AI, we now examine how these developments have crystallised into systematic taxonomies. The progression from symbolic systems to contemporary neural architectures, traced in the previous sections, has given rise to increasingly sophisticated attempts to classify AI systems according to their capabilities and potential. The typological frameworks examined below both reflect and inform the definitional debates discussed earlier, providing structured approaches to understanding AI's current state and future trajectory. The conventional tripartite classification into Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI) emerged from these historical developments, yet recent advances, particularly the large language models and reasoning systems discussed earlier, have complicated the distinctions between categories. The taxonomy remains useful for conceptual clarity, but it increasingly requires nuanced interpretation in light of contemporary capabilities that challenge traditional categorical boundaries.
Artificial Narrow Intelligence (ANI), often called weak AI, refers to systems that excel at specific tasks within limited domains but cannot generalise their abilities across different challenges. Examples include virtual assistants, streaming platform recommendation systems, and autonomous vehicle navigation systems (Goodfellow et al. 2016), which rely on machine learning, neural networks, and rule-based programming. ANI dominates current AI applications. IBM's Deep Blue, which defeated Garry Kasparov in 1997, illustrates ANI's nature well: it mastered chess through specialised algorithms but could not perform unrelated tasks (Campbell et al. 2002). Similarly, modern image recognition models achieve superhuman accuracy in narrow domains but cannot adapt to different challenges without retraining. Searle (1980) argues that despite their sophistication, ANI systems lack genuine understanding or consciousness, functioning merely as advanced tools. This qualitative gap raises questions about whether narrow systems can evolve into general intelligence. Recent narrow-domain advances continue to push performance limits while illustrating ANI's boundaries. ESMFold, an AI language model for protein structure prediction, enables high-throughput protein–peptide docking with acceptable accuracy in only 30 seconds per complex, yet fails on many cases outside its training distribution (Zalewski et al. 2025). In medicine, ChatGPT performed at or near the passing threshold on all three steps of the USMLE licensing exams, demonstrating strong performance in a specialised domain without corresponding general task flexibility (Kung et al. 2023). Likewise, while Claude 3 set a high bar for conversational reasoning, the subsequently released Claude 4 continues to refine these competencies without constituting general intelligence (Anthropic 2025).
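The domain-bound character of such systems can be made concrete with a minimal sketch, shown below, assuming an arbitrary off-the-shelf dataset and model (scikit-learn's bundled digit images and a logistic regression, neither of which is drawn from the sources cited above). The classifier's output space is fixed when it is trained, so any input, however unrelated, is forced into one of the ten digit labels it knows; it cannot recognise that a task lies outside its competence, let alone adapt to it.

```python
# Minimal sketch of a narrow (ANI-style) system: a classifier that performs
# well inside its training domain but is structurally incapable of answering
# anything outside it. Dataset and model choices are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import numpy as np

digits = load_digits()                       # 8x8 greyscale images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)    # one task, one specialised model
model.fit(X_train, y_train)
print("In-domain accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Outside its narrow domain the model cannot adapt: even random noise is
# mapped onto one of the ten digit classes it was trained to distinguish.
noise = np.random.default_rng(0).uniform(0, 16, size=(1, 64))
print("Prediction for random noise:", model.predict(noise))
```

Retraining on a new task would simply produce a different narrow model rather than a more general one, which is the boundary that the ESMFold, USMLE, and Claude examples above all illustrate.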
Artificial General Intelligence represents a theoretical leap beyond ANI, referring to AI systems capable of performing any intellectual task that a human can undertake. AGI would possess the ability to reason, learn, and adapt across diverse domains without requiring task-specific programming. This concept echoes Turing's (1950) vision of a machine that could convincingly simulate human intelligence across varied contexts, set out in the seminal paper that introduced what became known as the Turing Test. AGI remains a hypothetical construct, with no fully realised examples in existence. However, research efforts such as those at DeepMind and OpenAI aim to approach AGI through advances in reinforcement learning and large-scale neural networks (Silver et al. 2016). AlphaGo's mastery of the game of Go, for instance, demonstrated a degree of adaptability, though it still fell short of general intelligence because of its domain-specific focus. The pursuit of AGI raises significant theoretical and ethical questions. Kurzweil (2005) predicts that AGI could emerge by the 2030s, driven by exponential growth in computational power and algorithmic sophistication, and contemporary industry forecasts are, if anything, more aggressive than his original projection: Sam Altman, CEO of OpenAI, suggests AGI could arrive by 2025; Dario Amodei, CEO of Anthropic, predicts "powerful AI" by 2026; and Demis Hassabis, CEO of Google DeepMind, places AGI "in the next five to ten years" (Varanasi 2024).
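The reinforcement-learning route mentioned above can be illustrated in heavily simplified form with tabular Q-learning on a toy corridor environment. The sketch below is an illustrative assumption throughout: the environment, reward scheme, and hyperparameters are invented for the example, and AlphaGo itself combined deep neural networks with Monte Carlo tree search rather than a lookup table (Silver et al. 2016).

```python
# Toy illustration of reinforcement learning: tabular Q-learning on a
# 5-state corridor where reward is given only for reaching the right end.
import random

N_STATES, GOAL = 5, 4          # states 0..4, goal at state 4
ACTIONS = (-1, +1)             # step left, step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.3

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                       # training episodes
    s = random.randrange(N_STATES - 1)     # start anywhere except the goal
    for _ in range(100):                   # cap on episode length
        if random.random() < EPSILON:      # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action available from the next state.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s_next
        if s == GOAL:
            break

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)   # expected: every non-goal state maps to +1 (move right)
```

The agent becomes competent only within the environment that defines its rewards; changing the task means retraining from scratch, which is precisely the gap between such systems and the cross-domain adaptability AGI would require.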
Artificial Superintelligence represents hypothetical AI systems that would surpass human intelligence across all domains, including creativity, problem-solving, and social skills. ASI, as conceptualised by Bostrom (2014), would possess the ability to improve its own architecture through recursive self-enhancement, distinguishing it from AGI. ASI remains entirely speculative, with no consensus on feasibility or timeline. While some experts argue the complexity of human intelligence may pose insurmountable barriers (Hofstadter 1999), others caution about potential existential risks if superintelligent systems become misaligned with human values (Yudkowsky 2008). The discourse around ASI blends optimism about revolutionary advances in medicine and energy with serious concerns about unpredictable behaviour. This tension underscores the need for robust governance frameworks, as advocated by Floridi (2021), to ensure any development toward superintelligence remains beneficial and safe.
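The intuition behind recursive self-enhancement can be conveyed with a stylised toy model, loosely in the spirit of Bostrom's (2014) discussion of optimisation power and recalcitrance, though the formulation and parameters below are illustrative assumptions rather than anything taken from the cited works. If each gain in capability also lowers the difficulty of the next improvement, growth can shift from gradual to explosive, which is why theorists treat the possibility seriously despite its speculative status.

```python
# Stylised toy model of recursive self-improvement (illustrative assumptions
# only; not a model taken from Bostrom 2014 or Yudkowsky 2008). Capability
# grows each step in proportion to current capability divided by a
# "recalcitrance" term describing how hard further improvement is.

def simulate(steps=20, capability=1.0, recalcitrance=10.0, shrink=1.0):
    """Return the capability trajectory over a number of improvement steps.

    shrink < 1.0 models the worrying case: each gain in capability also
    makes subsequent improvement easier, so recalcitrance falls over time.
    """
    trajectory = [capability]
    for _ in range(steps):
        capability += capability / recalcitrance
        recalcitrance *= shrink
        trajectory.append(capability)
    return trajectory

# Constant recalcitrance: steady, tame growth.
print([round(x, 1) for x in simulate(shrink=1.0)])
# Falling recalcitrance: growth accelerates sharply within a few steps.
print([round(x, 1) for x in simulate(shrink=0.7)])
```

The value of such a sketch is not prediction but illustration: small assumptions about the feedback between capability and the difficulty of improvement yield qualitatively different trajectories, which is what motivates the governance concerns noted above.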
The ANI, AGI, and ASI framework, while useful, has limitations. The categorisation implies a linear progression that oversimplifies the complex interplay of technical, ethical, and societal factors in AI development. The boundaries between categories are increasingly blurred by adaptable systems such as large language models (Brown et al. 2020), and ASI's speculative nature raises the question of whether it represents a realistic endpoint or merely a philosophical construct. Recent scholarship has introduced intermediate framings such as "frontier AI models", denoting large-scale systems whose capabilities approach generality without full autonomy. The 2025 AI Index Report highlights frontier models such as GPT-4o and Claude 3, which demonstrate advanced reasoning and multimodal processing but remain constrained by their training regimes and require further alignment work before approaching AGI-level capability (Stanford HAI 2025). The field therefore requires interdisciplinary approaches that extend beyond technical considerations. Russell (2019) advocates value-aligned AI that prioritises human well-being regardless of intelligence level, highlighting the need to integrate insights from philosophy, sociology, and policy studies into AI research. In conclusion, these categories provide a structured lens for understanding AI's evolution from narrow systems to hypothetical superintelligent entities. While ANI dominates current applications, AGI remains an ambitious goal, and ASI presents both profound opportunities and risks. As AI advances, ongoing research and dialogue across disciplines will be essential to navigate challenges and responsibly harness these transformative technologies.
References:
1. Anthropic. 2025. 'Introducing Claude 4'. Available at: https://www.anthropic.com/news/claude-4
2. Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
3. Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. 'Language Models Are Few-Shot Learners'. Advances in Neural Information Processing Systems 33: 1877–1901.
4. Campbell, Murray, A. Joseph Hoane Jr, and Feng-hsiung Hsu. 2002. 'Deep Blue'. Artificial Intelligence 134 (1–2): 57–83.
5. Dreyfus, Hubert L. 1992. What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press.
6. Floridi, Luciano. 2021. 'Establishing the Rules for Building Trustworthy AI'. In Ethics, Governance, and Policies in Artificial Intelligence, 41–45.
7. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. Cambridge, MA: MIT Press.
8. Hofstadter, Douglas R. 1999. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books.
9. Kung, Tiffany H., Morgan Cheatham, Arielle Medenilla, et al. 2023. 'Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models'. PLOS Digital Health 2 (2): e0000198.
10. Kurzweil, Ray. 2005. 'The Singularity Is Near'. In Ethics and Emerging Technologies, 393–406. London: Palgrave Macmillan UK.
11. Russell, Stuart. 2019. Human Compatible: AI and the Problem of Control. London: Penguin UK.
12. Searle, John R. 1980. 'Minds, Brains, and Programs'. Behavioral and Brain Sciences 3 (3): 417–24.
13. Silver, David, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, et al. 2016. 'Mastering the Game of Go with Deep Neural Networks and Tree Search'. Nature 529 (7587): 484–489. https://doi.org/10.1038/nature16961
14. Stanford HAI (Human-Centered AI Institute). 2025. Artificial Intelligence Index Report 2025. Stanford University. Available at: https://hai.stanford.edu/assets/files/hai_ai_index_report_2025.pdf
15. Turing, Alan M. 1950. 'Computing Machinery and Intelligence'. Mind 59 (236): 433–460.
16. Varanasi, Lakshmi. 2024. 'Here's How Far We Are From AGI, According to the People Developing It'. Business Insider. Available at: https://www.businessinsider.com/agi-predictions-sam-altman-dario-amodei-geoffrey-hinton-demis-hassabis-2024-11
17. Yudkowsky, Eliezer. 2008. 'Artificial Intelligence as a Positive and Negative Factor in Global Risk'. In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–345. New York: Oxford University Press.
18. Zalewski, Marcin, Błażej Wallner, and Sebastian Kmiecik. 2025. 'Protein–Peptide Docking with ESMFold Language Model'. Journal of Chemical Theory and Computation 21 (6): 2817–2821.