Chapter 3

Persona-based Prompt Patterns: Mega Prompts, Expert Prompts, and Tree of Thoughts

Persona-based approaches contextualise AI responses within specific roles, expertise domains, or cognitive frameworks, thereby improving both the relevance and the quality of generated outputs (Kong et al. 2024). These techniques encompass mega prompts, which supply extensive contextual information; expert prompts, which assign specific professional roles; and advanced reasoning frameworks such as Tree of Thoughts.
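To make these patterns concrete, the sketch below (Python, illustrative only) assembles an expert-style mega prompt and runs a minimal breadth-first Tree-of-Thoughts search. The generate and score callables are placeholders for whichever language-model interface and evaluation heuristic are available; they are not part of any particular library.

from typing import Callable, List

def build_expert_prompt(role: str, context: str, task: str) -> str:
    """Assemble a persona-based 'mega prompt': a role assignment plus rich background context."""
    return (
        f"You are {role}.\n\n"
        f"Background information:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        "Respond using the terminology and standards of your profession."
    )

def tree_of_thoughts(generate: Callable[[str], List[str]],
                     score: Callable[[str], float],
                     root_prompt: str,
                     breadth: int = 3,
                     depth: int = 2) -> str:
    """Minimal Tree-of-Thoughts search: expand each candidate line of reasoning,
    then keep only the highest-scoring `breadth` candidates at every level."""
    frontier = [root_prompt]
    for _ in range(depth):
        candidates = [thought + "\n" + step
                      for thought in frontier
                      for step in generate(thought)]
        frontier = sorted(candidates, key=score, reverse=True)[:breadth]
    return frontier[0]

In practice, generate would ask the underlying model to propose intermediate reasoning steps, and score would rate how promising each partial solution looks before the search continues.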

The Environmental Costs of Artificial Intelligence: A Growing Concern

The rapid integration of Artificial Intelligence (AI) into global economies has driven transformative advancements in sectors such as healthcare and agriculture. However, this technological revolution incurs significant environmental costs, particularly through substantial energy consumption and greenhouse gas (GHG) emissions. The carbon footprint of AI, stemming from energy-intensive processes such as hardware manufacturing, model training, and large-scale inference, has therefore become a growing concern.
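As a rough illustration of how such footprints are estimated (the power draw, runtime, overhead factor, and grid carbon intensity below are placeholder figures, not measurements), a back-of-envelope calculation multiplies the energy consumed by the carbon intensity of the electricity supplying it:

def training_emissions_kg(power_kw: float, hours: float,
                          pue: float, grid_kg_co2_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions for a training run.

    energy (kWh)       = hardware power draw * runtime * data-centre overhead (PUE)
    emissions (kg CO2e) = energy * grid carbon intensity
    """
    energy_kwh = power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Placeholder figures for illustration only.
print(training_emissions_kg(power_kw=300, hours=720,
                            pue=1.2, grid_kg_co2_per_kwh=0.4))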

Small Language Models (SLMs) and Knowledge Distillation

Small Language Models (SLMs) are compact neural networks designed to perform natural language processing (NLP) tasks with significantly fewer parameters and lower computational requirements than their larger counterparts. SLMs aim to deliver robust performance in resource-constrained environments, such as mobile devices or edge computing systems, where efficiency is paramount. A central technique for producing SLMs is knowledge distillation, in which a compact "student" model is trained to reproduce the behaviour of a larger "teacher" model.
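As a rough illustration of the idea (a sketch only, assuming PyTorch and a classification-style output; the temperature and mixing weight are arbitrary choices), the conventional distillation objective blends a soft loss against the teacher's temperature-scaled distribution with the usual supervised loss:

import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend a 'soft' loss, pulling the student towards the teacher's
    temperature-scaled distribution, with the standard supervised loss."""
    # Soft targets: KL divergence between softened student and teacher distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard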

Main Types of Generative Models and Their Operating Principles: GANs, Diffusion Models, and Autoregressive Models

Generative models represent a fundamental paradigm in machine learning, enabling computers to create new data samples that closely mirror real-world examples. These models have become indispensable tools across diverse fields including image creation, natural language processing, and scientific research. Three principal architectures have emerged as dominant approaches: Generative Adversarial Networks (GANs), diffusion models, and autoregressive models.
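The autoregressive principle in particular is easy to show in code. The sketch below (PyTorch, illustrative only) assumes a model that maps a batch of token IDs to logits of shape (batch, sequence, vocabulary); the model itself is a placeholder, not a specific library class.

import torch

@torch.no_grad()
def sample_autoregressive(model, prompt_ids: torch.Tensor,
                          max_new_tokens: int = 20,
                          temperature: float = 1.0) -> torch.Tensor:
    """Generate tokens one at a time: each step conditions on everything
    produced so far, which is the defining trait of autoregressive models."""
    ids = prompt_ids
    for _ in range(max_new_tokens):
        logits = model(ids)[:, -1, :]                      # next-token distribution
        probs = torch.softmax(logits / temperature, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)  # sample one token
        ids = torch.cat([ids, next_id], dim=1)             # feed the sample back in
    return ids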

Model Evaluation and Performance Measurement: Methods for Determining Effectiveness in Language Model Creation

Creating effective large language models (LLMs) involves two critical stages: pre-training and fine-tuning. These stages enable models to progress from capturing broad linguistic knowledge to excelling in specific tasks, powering applications such as automated translation, sentiment analysis, and conversational agents. Rigorous evaluation and performance measurement ensure LLMs meet both general and task-specific requirements across these stages.
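One widely used pre-training metric, perplexity, can be sketched as follows (PyTorch, illustrative only; the model is assumed to return logits of shape (batch, sequence, vocabulary) for the supplied token IDs):

import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def perplexity(model, token_ids: torch.Tensor) -> float:
    """Perplexity = exp(average negative log-likelihood of the held-out tokens).
    Lower values indicate the model predicts the text more confidently."""
    logits = model(token_ids)                                      # (batch, seq, vocab)
    shift_logits = logits[:, :-1, :].reshape(-1, logits.size(-1))  # predictions
    shift_labels = token_ids[:, 1:].reshape(-1)                    # next-token targets
    nll = F.cross_entropy(shift_logits, shift_labels)
    return math.exp(nll.item())

Fine-tuned models are then judged against task-specific measures such as accuracy or F1 on held-out data, complementing this general-purpose metric.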

NLP Tasks and Applications: Core Techniques and Their Impact

Natural Language Processing (NLP) encompasses a variety of tasks, each with distinct methodologies and applications, including Named Entity Recognition (NER), sentiment analysis, classification, machine translation, summarisation, and information extraction. These tasks underpin numerous real-world applications, from virtual assistants to automated content analysis. This essay explores these core NLP tasks, their underlying techniques, and their impact on practical applications.
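As a brief illustration of how such tasks are exposed in practice (a sketch assuming the Hugging Face transformers library is installed; default models are downloaded on first use and the printed outputs are indicative only):

from transformers import pipeline

# Each NLP task maps to a ready-made pipeline backed by a pretrained model.
sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

text = "Acme Corp opened a new office in Lisbon last week."
print(sentiment(text))   # e.g. [{'label': 'POSITIVE', 'score': ...}]
print(ner(text))         # e.g. entities for 'Acme Corp' (ORG) and 'Lisbon' (LOC)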

Main Types of Machine Learning: Supervised, Unsupervised, and Reinforcement Learning

Machine learning (ML), a fundamental pillar of artificial intelligence, equips computational systems with the capacity to derive insights from data and refine their performance autonomously. Its profound influence permeates diverse domains, encompassing medical diagnostics, financial modelling, and autonomous systems. This essay offers a critical examination of the three principal paradigms: supervised, unsupervised, and reinforcement learning.
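The contrast between the first two paradigms can be shown in a few lines (a sketch assuming scikit-learn; the toy data and model choices are arbitrary, and reinforcement learning is omitted because it requires an interactive environment):

from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: feature vectors X with known labels y.
X, y = make_blobs(n_samples=200, centers=3, random_state=0)

# Supervised learning: fit a classifier against the provided labels.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised learning: the same data without labels, grouped by structure alone.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments:", clusters[:10])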

Typologies of Artificial Intelligence: Narrow, General, and Superintelligent Systems

Artificial Intelligence (AI) has emerged as a transformative field within computer science, encompassing technologies that enable machines to perform tasks that typically require human intelligence, such as reasoning, learning, and problem-solving. In academic literature, AI is often categorised into three primary types: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI).