Machine learning (ML), a subset of AI, focuses on algorithms that enable systems to learn patterns from data and make predictions or decisions. Mitchell (1997, 2) provides a foundational definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” This framework encapsulates ML’s core principles: task definition, performance evaluation, and experiential learning. ML algorithms are broadly categorised into three paradigms: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning involves training models on labelled datasets to predict outcomes, as seen in applications like image classification (e.g., Krizhevsky et al. 2012). Unsupervised learning identifies patterns in unlabelled data, such as clustering or dimensionality reduction (e.g., Hinton and Salakhutdinov 2006). Reinforcement learning, inspired by behavioural psychology, trains agents to optimise rewards through trial and error, exemplified by AlphaGo’s success (Silver et al. 2016).
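Mitchell’s T/E/P framing can be made concrete with a toy sketch. The following is a minimal, purely illustrative supervised learner (a hypothetical one-nearest-neighbour classifier on one-dimensional points, not any method from the cited works): the task T is classifying points, the experience E is a set of labelled examples, and the performance measure P is accuracy on held-out data, which improves as E grows.

```python
# Illustrative sketch of Mitchell's T/E/P definition with a toy
# 1-nearest-neighbour classifier. Data and thresholds are invented.

def nearest_neighbour_predict(train, x):
    """Task T: predict the label of x from the closest labelled example."""
    closest = min(train, key=lambda pair: abs(pair[0] - x))
    return closest[1]

def accuracy(train, test):
    """Performance measure P: fraction of test points classified correctly."""
    correct = sum(1 for x, y in test if nearest_neighbour_predict(train, x) == y)
    return correct / len(test)

# Experience E: labelled examples (points below 5.0 are class 0, others class 1).
data = [(x / 2, int(x / 2 >= 5.0)) for x in range(20)]
test = [(0.3, 0), (4.7, 0), (5.1, 1), (9.7, 1)]

# Performance P improves as experience E grows.
print(accuracy(data[:4], test))  # with little experience
print(accuracy(data, test))      # with more experience
```

The example is deliberately trivial; its only purpose is to show the three ingredients of the definition as separable pieces of a program.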
At the algorithmic level, ML relies on mathematical foundations, including linear algebra, probability theory, and optimisation. For instance, gradient descent underpins many ML models by iteratively minimising error in parameter estimation (Goodfellow et al. 2016). Neural networks, particularly deep learning architectures, have revolutionised ML by enabling hierarchical feature learning, significantly advancing fields like natural language processing and computer vision (LeCun et al. 2015). However, ML’s reliance on data raises challenges. Overfitting, where models memorise training data rather than generalising, and bias in datasets, which can perpetuate societal inequalities, are persistent issues (Dwork et al. 2012). Furthermore, the “black box” nature of complex models, such as deep neural networks, complicates interpretability, prompting research into explainable AI (Gunning 2017).
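The role of gradient descent described above can be sketched in a few lines. The following toy example (an assumed one-parameter least-squares problem with invented data, learning rate, and iteration count) iteratively minimises the mean squared error by stepping against its gradient:

```python
# Hedged sketch: gradient descent on loss(w) = mean over (x, y) of (w*x - y)^2.
# Data are generated by the true parameter w = 2, so the estimate should
# converge towards 2.0.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

def grad(w):
    """Gradient of the mean squared error with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

w = 0.0                      # initial parameter estimate
for _ in range(200):
    w -= 0.05 * grad(w)      # step against the gradient

print(round(w, 4))           # approaches the true parameter 2.0
```

Real ML models differ in scale, not in kind: the same update rule, applied to millions of parameters with stochastic mini-batch gradients, underlies the training of deep networks.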
The AI ecosystem comprises multiple subfields, including ML, expert systems, robotics, natural language processing (NLP), computer vision, and knowledge representation. ML’s prominence stems from its versatility and empirical success, but it does not operate in isolation. Its integration with other AI paradigms enhances system capabilities, while its limitations highlight the need for complementary approaches. In NLP, ML underpins models like transformers, enabling advancements in language generation and translation (Vaswani et al. 2017). However, symbolic AI, which relies on predefined rules and knowledge bases, remains relevant for tasks requiring explicit reasoning, such as legal expert systems (Bench-Capon 1993). Similarly, in robotics, ML facilitates perception and motion planning, but control theory and planning algorithms are critical for precise execution (Siciliano et al. 2008).
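The transformer’s core operation mentioned above, scaled dot-product attention (Vaswani et al. 2017), computes softmax(QKᵀ/√d)V. The sketch below implements this formula in plain Python; the tiny two-dimensional queries, keys, and values are illustrative assumptions, not data from the paper:

```python
import math

# Hedged sketch of scaled dot-product attention: each query is compared to
# every key, the similarities are normalised with softmax, and the result is
# a weighted average of the value vectors.

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    d = len(K[0])
    out = []
    for q in Q:
        # query-key similarity, scaled by sqrt(d) as in the transformer formula
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]                       # one query, aligned with the first key
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
result = attention(Q, K, V)
print(result)                          # output leans towards the first value vector
```

Production implementations batch this computation as matrix multiplications across many heads, but the arithmetic per query is exactly this weighted average.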
ML’s data-driven approach marks a departure from earlier rule-based systems, which faced challenges in scalability and adaptability. The transition to ML, fuelled by enhanced computational power and vast data availability, has been a key driver of AI’s recent achievements. However, ML’s focus on narrow tasks raises concerns about its limitations in achieving general intelligence. Emerging hybrid approaches that combine ML with symbolic reasoning aim to address these shortcomings, fostering more robust and adaptable systems. The pursuit of artificial general intelligence (AGI) further underscores the need to integrate ML with other paradigms, such as cognitive architectures, to develop intelligence that mirrors human versatility (Boden 2016).
Machine learning thus stands not merely as a central pillar of the AI ecosystem, but as a dynamically evolving field that continuously reshapes the possibilities and boundaries of artificial intelligence. Whilst current ML paradigms have achieved remarkable successes in pattern recognition and prediction, future AI systems will likely be founded upon hybrid approaches that combine the empirical strength of machine learning with the explicit reasoning capabilities of symbolic AI and other complementary methods. The field's continued development requires not only the resolution of technical challenges—such as interpretability, bias mitigation, and improved generalisation—but also careful consideration of ethical and societal questions, particularly on the path towards artificial general intelligence (AGI). Ultimately, machine learning is not an end in itself, but rather a tool for solving human problems and extending our capabilities, whose proper application will be decisive in determining AI's future role in society.
References:
1. Bench-Capon, Trevor. 1993. ‘Neural Networks and Open Texture’. In Proceedings of the 4th International Conference on Artificial Intelligence and Law, 292–297.
2. Boden, Margaret A. 2016. AI: Its Nature and Future. Oxford University Press.
3. Dwork, Cynthia, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. ‘Fairness through Awareness’. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214–226.
4. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. Cambridge, MA: MIT Press.
5. Gunning, David. 2017. ‘Explainable Artificial Intelligence (XAI)’. Defense Advanced Research Projects Agency (DARPA).
6. Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. 2006. ‘Reducing the Dimensionality of Data with Neural Networks’. Science 313 (5786): 504–507.
7. Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ‘ImageNet Classification with Deep Convolutional Neural Networks’. Advances in Neural Information Processing Systems 25.
8. LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. ‘Deep Learning’. Nature 521 (7553): 436–444.
9. Mitchell, Tom M. 1997. Machine Learning. New York: McGraw-Hill.
10. Siciliano, Bruno, Oussama Khatib, and Torsten Kröger, eds. 2008. Springer Handbook of Robotics. Berlin: Springer.
11. Silver, David, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, and Sander Dieleman. 2016. ‘Mastering the Game of Go with Deep Neural Networks and Tree Search’. Nature 529 (7587): 484–489. https://doi.org/10.1038/nature16961.
12. Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. ‘Attention Is All You Need’. arXiv. https://doi.org/10.48550/arXiv.1706.03762.