GenAI textbook

Main Types of Generative Models and Their Operating Principles: GANs, Diffusion Models, and Autoregressive Models

Generative models represent a fundamental paradigm in machine learning, enabling computers to create new data samples that closely mirror real-world examples. These models have become indispensable tools across diverse fields, including image creation, natural language processing, and scientific research. Three principal architectures have emerged as dominant approaches: Generative Adversarial Networks (GANs), diffusion models, and autoregressive models.
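
To make the adversarial principle concrete, below is a minimal sketch of one GAN training step in PyTorch. The network sizes, learning rates, and the random stand-in for real data are illustrative assumptions, not a prescribed recipe.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator maps random noise to synthetic samples; discriminator scores realness.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.randn(32, data_dim)  # stand-in for a batch of real training samples

# Discriminator step: label real samples 1, generated samples 0.
fake_batch = generator(torch.randn(32, latent_dim)).detach()
d_loss = bce(discriminator(real_batch), torch.ones(32, 1)) + \
         bce(discriminator(fake_batch), torch.zeros(32, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make the discriminator label fakes as real.
g_loss = bce(discriminator(generator(torch.randn(32, latent_dim))), torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

The two optimisers pull in opposite directions: the discriminator improves at telling real from fake, which in turn forces the generator to produce more realistic samples.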

The Place of GenAI in the AI Hierarchy: From Neural Networks to Large Language Models

Generative AI relies on a specialised branch of machine learning (ML), namely deep learning (DL) algorithms, which employ neural networks to detect and exploit patterns embedded within data. By processing vast volumes of information, these algorithms are capable of synthesising existing knowledge and applying it creatively. As a result, generative AI systems can move beyond classifying or predicting from existing data to producing novel content of their own.
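
The core deep-learning loop the paragraph describes, in which a neural network adjusts its weights to capture patterns in data, can be shown in a few lines. The toy task below (learning the XOR pattern) and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Four input patterns and their XOR labels: the "pattern embedded within data".
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(x), y)  # how far predictions are from the true pattern
    loss.backward()              # gradients indicate how to adjust each weight
    opt.step()

print(model(x).detach().round())  # approximates the XOR pattern after training
```

Large language models follow the same principle at vastly greater scale, with billions of weights and corpora of text in place of four toy examples.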

Benchmark-based Evaluation of Language Models and Their Limits

Benchmarking is the practice of evaluating artificial intelligence models on a standard suite of tasks under controlled conditions. In the context of large language models (LLMs), benchmarks provide a common yardstick for measuring capabilities such as factual knowledge, reasoning, and conversational coherence. They emerged because the proliferation of new models made informal, ad hoc comparisons unreliable, creating the need for standardised tests administered under identical conditions.
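
A minimal sketch of the benchmarking pattern appears below: score a model on a fixed suite of prompt-answer pairs under identical conditions. The tiny two-item benchmark, the exact-match scoring rule, and the placeholder model are all illustrative assumptions; real benchmarks contain thousands of items and more nuanced scoring.

```python
from typing import Callable

# A hypothetical miniature benchmark suite.
benchmark = [
    {"prompt": "What is the capital of France?", "answer": "Paris"},
    {"prompt": "2 + 2 =", "answer": "4"},
]

def evaluate(model: Callable[[str], str], suite: list[dict]) -> float:
    """Return exact-match accuracy of `model` over the benchmark suite."""
    correct = sum(model(item["prompt"]).strip() == item["answer"] for item in suite)
    return correct / len(suite)

# Usage with a placeholder model that always answers "Paris".
print(evaluate(lambda prompt: "Paris", benchmark))  # 0.5
```

Because every model is run on the same items with the same scoring rule, the resulting numbers are directly comparable, which is precisely what ad hoc evaluation lacks.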

Principles and Methods of Model Evaluation

Creating effective large language models (LLMs) involves two critical stages: pre-training and fine-tuning. These stages enable models to progress from capturing broad linguistic knowledge to excelling in specific tasks, powering applications such as automated translation, sentiment analysis, and conversational agents. Rigorous evaluation and performance measurement ensure LLMs meet both general and task-specific requirements.
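
One widely used general-purpose evaluation signal for language models is perplexity, the exponentiated average cross-entropy over held-out tokens: lower values mean the model assigns higher probability to the reference text. The sketch below uses random logits and token IDs purely as stand-ins for a real model's predictions and a real held-out corpus.

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len = 100, 8
logits = torch.randn(seq_len, vocab_size)           # stand-in: model predictions per position
targets = torch.randint(0, vocab_size, (seq_len,))  # stand-in: held-out reference tokens

cross_entropy = F.cross_entropy(logits, targets)    # average negative log-likelihood per token
perplexity = torch.exp(cross_entropy)
print(f"perplexity: {perplexity.item():.2f}")
```

Task-specific requirements are then checked with downstream metrics such as accuracy or F1 on the target task, complementing this general measure.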

Fine-tuning: Adapting General Models for Specific Tasks and Applications

The evolution of machine learning has led to the development of powerful general models, such as BERT, GPT-3, and Vision Transformers, which have transformed artificial intelligence applications across diverse domains. These models, pre-trained on extensive datasets like Common Crawl for natural language processing or ImageNet for computer vision, demonstrate exceptional generalisation, yet typically require fine-tuning to reach peak performance on specific tasks and applications.
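
A common fine-tuning pattern is to reuse a pre-trained backbone, freeze most or all of its weights, and train only a small task-specific head. The sketch below illustrates that pattern; the one-layer stand-in for the backbone, the feature dimension of 768, and the binary-classification task are all assumptions for the sake of a runnable example.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained encoder (e.g. a BERT-like model's final layers).
backbone = nn.Sequential(nn.Linear(768, 768), nn.GELU())
for param in backbone.parameters():
    param.requires_grad = False  # freeze: keep the general knowledge fixed

head = nn.Linear(768, 2)  # new task head, e.g. binary sentiment classification
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

features = torch.randn(16, 768)        # stand-in for a batch of encoded inputs
labels = torch.randint(0, 2, (16,))    # stand-in for task labels

logits = head(backbone(features))
loss = nn.functional.cross_entropy(logits, labels)
opt.zero_grad()
loss.backward()  # only the head's weights receive gradients
opt.step()
```

Variants unfreeze some backbone layers at a small learning rate, trading more compute for better task adaptation.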

The Pre-Training Process: Principles, Methods, and Mechanisms of Language Pattern Acquisition

Pre-training underpins the capabilities of large-scale language models like BERT and GPT, enabling them to capture linguistic patterns from extensive text corpora. This process equips models with versatile language understanding and adaptability through fine-tuning for tasks such as translation or sentiment analysis. The principles, methods, and mechanisms of pre-training reveal how models internalise the statistical structure of language at scale.
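
The self-supervised objective behind GPT-style pre-training is next-token prediction: the model learns to predict each token from the tokens before it, so the raw corpus itself supplies the labels. The sketch below uses a bare embedding layer plus a linear head as a stand-in for a full Transformer; that simplification, and the random token IDs, are assumptions.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 32
embed = nn.Embedding(vocab_size, d_model)   # stand-in for a full Transformer encoder
lm_head = nn.Linear(d_model, vocab_size)    # maps hidden states to next-token scores

tokens = torch.randint(0, vocab_size, (1, 12))   # one sequence from the corpus
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # targets are inputs shifted by one

logits = lm_head(embed(inputs))                  # shape: (1, 11, vocab_size)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # gradients push the model toward likelier next tokens
```

BERT-style pre-training differs in the objective (predicting masked tokens rather than the next one) but follows the same self-supervised principle.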

The Transformer Revolution: Breakthrough in Language Modelling and Its Impact on AI Development

Building upon the foundational principles of the attention mechanism discussed in the previous section, the Transformer architecture represents a paradigm shift by leveraging attention exclusively, completely replacing the recurrent structures that once dominated sequence modelling. This architectural innovation, first unveiled by Vaswani et al. (2017), has since catalysed a seismic shift across natural language processing and the broader field of AI.
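 
The core unit of the architecture is the Transformer block: self-attention followed by a position-wise feed-forward network, each wrapped in a residual connection and layer normalisation. Below is a minimal sketch of an encoder-style block; the dimensions and the post-norm layout are illustrative assumptions (modern variants often normalise before each sublayer).

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                nn.Linear(4 * d_model, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)  # every position attends to all others
        x = self.norm1(x + attn_out)      # residual connection, then normalise
        return self.norm2(x + self.ff(x))

x = torch.randn(2, 10, 64)          # (batch, sequence length, model dimension)
print(TransformerBlock()(x).shape)  # torch.Size([2, 10, 64])
```

Because no step depends on processing the previous position first, the whole sequence is handled in parallel, which is a key reason Transformers displaced recurrent networks.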

The Attention Mechanism: The Key to Understanding Linguistic Relationships

The attention mechanism has fundamentally reshaped natural language processing (NLP), enabling models to capture complex linguistic relationships with unprecedented accuracy. Introduced prominently in Vaswani et al. (2017), attention allows models to focus on relevant parts of input sequences, enhancing performance in tasks like machine translation and sentiment analysis. This essay examines how attention works and why it is key to modelling linguistic relationships.
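
At its core is scaled dot-product attention as defined in Vaswani et al. (2017): Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. The sketch below implements exactly that formula; the tensor shapes and the self-attention usage (queries, keys, and values drawn from the same sequence) are illustrative assumptions.

```python
import math
import torch

def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # pairwise relevance of positions
    weights = torch.softmax(scores, dim=-1)            # each row is a distribution over positions
    return weights @ v                                 # weighted mix of value vectors

q = k = v = torch.randn(1, 5, 16)  # self-attention over a 5-token sequence
print(attention(q, k, v).shape)    # torch.Size([1, 5, 16])
```

The softmax weights make the mechanism interpretable: for each token, they show which other tokens the model is "attending to" when building its representation.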

NLP Tasks and Applications: Core Techniques and Their Impact

Natural Language Processing (NLP) encompasses a variety of tasks, each with distinct methodologies and applications, including Named Entity Recognition (NER), sentiment analysis, classification, machine translation, summarisation, and information extraction. These tasks underpin numerous real-world applications, from virtual assistants to automated content analysis. Named Entity Recognition involves identifying and classifying named entities, such as people, organisations, locations, and dates, within unstructured text.
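
As a concrete illustration, one way to run NER in practice is with the spaCy library, sketched below. This assumes spaCy is installed along with its small English pipeline (`python -m spacy download en_core_web_sm`); the example sentence is arbitrary.

```python
import spacy

# Load a small pre-trained English pipeline that includes an NER component.
nlp = spacy.load("en_core_web_sm")

doc = nlp("Apple opened a new office in Paris in January 2024.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Apple ORG", "Paris GPE", "January 2024 DATE"
```

Each detected span carries a label such as ORG (organisation), GPE (geopolitical entity), or DATE, matching the entity categories described above.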

Challenges in Natural Language Processing: Linguistic Ambiguity, Context, and Cultural Differences

The transformative potential of Natural Language Processing (NLP), as a cornerstone of artificial intelligence, lies in its ability to enable machines to understand and generate human language, facilitating advanced human-computer interaction and knowledge extraction. However, the complexity of human language presents significant obstacles, particularly in managing linguistic ambiguity, contextual nuances, and cultural differences.