Having examined the diverse definitional approaches to AI, we now turn to the historical trajectory that has shaped these conceptual frameworks. The theoretical foundations discussed above, particularly Turing's behavioural criterion and the Dartmouth Conference's hypothesis that intelligence is formalisable, provided the intellectual scaffolding for AI's subsequent development. The field's evolution, however, has been marked not by linear progress but by distinct paradigm shifts, alternating periods of optimism and disillusionment, and fundamental reconceptualisations of what constitutes machine intelligence. Practical implementation of these early theoretical visions began in earnest during the 1950s and 1960s, translating the definitional debates into concrete computational systems. This history reveals how abstract definitional questions became embedded in specific technological approaches, each reflecting particular assumptions about the nature of intelligence and its mechanical realisation.
During the 1960s and 1970s, symbolic AI dominated the field. Systems such as the General Problem Solver (Newell and Simon 1961) demonstrated rule-based reasoning, while ELIZA (Weizenbaum 1966) simulated therapeutic dialogue through simple pattern matching. These programs illustrated early successes but remained confined to narrow tasks and domains. The 1980s brought expert systems such as MYCIN, which could diagnose bacterial infections using encoded medical knowledge (Buchanan and Shortliffe 1984). Reliance on hand-crafted rules proved brittle, however: the field struggled with ambiguity, uncertainty, and scalability, and the resulting loss of funding brought on a so-called “AI winter.”

A significant paradigm shift occurred in the 1990s as researchers increasingly adopted statistical, data-driven approaches. Machine learning (ML), understood as the ability of systems to improve through experience, gained traction with algorithms such as decision trees and backpropagation for training neural networks (Rumelhart et al. 1986). IBM’s Deep Blue famously defeated chess champion Garry Kasparov in 1997, signalling AI’s potential in constrained domains (Campbell et al. 2002), although generalisation beyond such narrow settings remained a challenge.
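The shift this paragraph describes, from hand-written rules to systems that improve from data, is easy to see in miniature. The following sketch trains a tiny two-layer network on the XOR function using backpropagation, in the spirit of Rumelhart et al. (1986); the architecture, learning rate, and iteration count are illustrative choices of ours rather than details from the original paper.

```python
import numpy as np

# XOR: the classic function a single-layer perceptron cannot learn,
# but a two-layer network trained with backpropagation can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error toward the input,
    # applying the chain rule layer by layer (mean squared error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

Nothing in this toy example is domain knowledge: the network's behaviour is determined entirely by the data and the gradient updates, which is precisely the property that distinguished the statistical paradigm from its rule-based predecessors.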
The 2010s marked the advent of deep learning, enabled by advances in computational power, big data, and graphics processing units. A key breakthrough came with AlexNet, a deep convolutional neural network that decisively outperformed all competitors in the 2012 ImageNet recognition challenge (Krizhevsky et al. 2012). This catalysed a wave of progress across vision, speech, and natural language processing. Transformer-based models such as BERT (Devlin et al. 2019) and GPT-3 (Brown et al. 2020) demonstrated unprecedented fluency in language understanding and generation. Reinforcement learning also matured, exemplified by AlphaGo’s 2016 victory over world Go champion Lee Sedol (Silver et al. 2016). These breakthroughs not only redefined AI’s capabilities but also reignited debates about its ethical, epistemological, and societal implications, especially concerning transparency, resource concentration, and long-term risks (Crawford 2021; Marcus 2018). Contemporary developments have intensified these concerns: the rapid deployment of large language models raises new questions about misinformation, privacy, bias amplification, and the concentration of AI capabilities among a few major technology companies. The adoption of regulatory frameworks such as the EU AI Act in 2024 reflects growing institutional recognition of these challenges (European Union 2024).
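Because transformer-based models recur throughout the remainder of this account, a compact illustration of their core operation may help. The sketch below implements scaled dot-product attention, the mechanism at the heart of the transformer architecture behind BERT and the GPT series; the token count, embedding width, and random projection matrices are hypothetical values chosen purely for illustration.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted
    average of the rows of V, with weights given by how strongly the
    corresponding query matches each key."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V

# Illustrative only: a "sentence" of 5 tokens with 8-dimensional
# embeddings and randomly initialised projection matrices.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))

out = attention(tokens @ Wq, tokens @ Wk, tokens @ Wv)
print(out.shape)  # (5, 8): one context-aware vector per token
```

Stacking this operation with learned projections, feed-forward layers, and positional information yields the transformer architecture underlying the models discussed above and below.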
The period from 2020 to 2025 witnessed unprecedented acceleration in AI development, fundamentally reshaping the field. OpenAI’s release of ChatGPT on November 30, 2022 marked a watershed moment, demonstrating conversational capabilities that captured global attention and reportedly reached 100 million users within two months (OpenAI 2022). The release of GPT-4 in March 2023 represented another significant leap, exhibiting near-human performance on professional examinations and multimodal capabilities encompassing both text and images (OpenAI 2023). The introduction of reasoning-oriented models such as OpenAI’s o1 in September 2024 advanced the field further by incorporating explicit chain-of-thought reasoning, achieving 83% accuracy on a qualifying exam for the International Mathematics Olympiad compared with GPT-4o’s 13% (OpenAI 2024). These developments, alongside competing systems from Google (Gemini), Anthropic (Claude), and others, have democratised access to sophisticated AI capabilities and accelerated adoption across industries, with 78% of organisations reporting AI use in 2024, up from 55% the previous year (Stanford HAI 2024).
In conclusion, the evolution of AI embodies more than just a sequence of technological breakthroughs; it represents a shifting understanding of intelligence itself. From Turing’s theoretical propositions to today’s deep learning systems and generative models, the field has progressively redefined what it means for machines to act, learn, and decide. Yet, each advancement brings new ethical, epistemological, and political questions that cannot be resolved through technical solutions alone. As AI systems increasingly mediate human experience, shape institutions, and influence global power dynamics, it is imperative that their development be guided not only by innovation but also by critical reflection, inclusivity, and democratic accountability.
References:
1. Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. ‘Language Models Are Few-Shot Learners’. Advances in Neural Information Processing Systems 33: 1877–1901.
2. Buchanan, Bruce G., and Edward H. Shortliffe. 1984. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, MA: Addison-Wesley.
3. Campbell, Murray, A. Joseph Hoane Jr, and Feng-hsiung Hsu. 2002. ‘Deep Blue’. Artificial Intelligence 134 (1–2): 57–83.
4. Crawford, Kate. 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press. https://yalebooks.yale.edu/book/9780300209570/the-atlas-of-ai/
5. Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. ‘BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding’. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 4171–86.
6. European Union. 2024. Artificial Intelligence Act (Regulation (EU) 2024/1689). https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689
7. Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ‘ImageNet Classification with Deep Convolutional Neural Networks’. Advances in Neural Information Processing Systems 25.
8. Marcus, Gary. 2018. ‘Deep Learning: A Critical Appraisal’. arXiv preprint. https://arxiv.org/abs/1801.00631
9. Newell, Allen, and Herbert A. Simon. 1961. ‘GPS, A Program That Simulates Human Thought’. In H. Billing (ed.), Lernende Automaten, 109–124. München: R. Oldenbourg.
10. OpenAI. 2022. ‘Introducing ChatGPT’. OpenAI Blog, November 30. https://openai.com/blog/chatgpt
11. OpenAI. 2023. ‘GPT-4 Technical Report’. arXiv preprint. https://arxiv.org/abs/2303.08774
12. OpenAI. 2024. ‘Learning to Reason with LLMs’. OpenAI Research, September 12. https://openai.com/index/learning-to-reason-with-llms/
13. Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. 1986. ‘Learning Representations by Back-Propagating Errors’. Nature 323 (6088): 533–536.
14. Silver, David, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, et al. 2016. ‘Mastering the Game of Go with Deep Neural Networks and Tree Search’. Nature 529 (7587): 484–489. https://doi.org/10.1038/nature16961
15. Stanford HAI (Human-Centered AI Institute). 2024. Artificial Intelligence Index Report 2024. Stanford University. https://hai.stanford.edu/assets/files/hai_ai-index-report-2024-smaller2.pdf
16. Weizenbaum, Joseph. 1966. ‘ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine’. Communications of the ACM 9 (1): 36–45. https://doi.org/10.1145/365153.365168