Chapter 3

Practical Applications of Research Agents and Tools

Research agents and tools represent a burgeoning field within artificial intelligence, where autonomous systems leverage large language models (LLMs) and modular architectures to facilitate scientific inquiry and innovation. These agents operate by integrating perception, reasoning, planning, and action capabilities, enabling them to perform tasks such as literature review and hypothesis generation.
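
To make this modular structure concrete, the sketch below outlines a minimal plan-act-observe loop in Python. It is an illustrative sketch only: `call_llm` and `search_literature` are hypothetical stand-ins for an LLM API and a literature-search tool, not components of any particular framework.

```python
# Minimal sketch of a research-agent loop (illustrative only).
# `call_llm` and `search_literature` are hypothetical stand-ins for an LLM API
# and a literature-search tool; any real system would replace them.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return f"[LLM response to: {prompt[:60]}...]"

def search_literature(query: str) -> list[str]:
    """Placeholder for a literature-search tool (e.g. an academic search API)."""
    return [f"Paper about {query} #{i}" for i in range(3)]

def research_agent(question: str, max_steps: int = 3) -> str:
    """Plan -> act -> observe loop: gather evidence, then draft a hypothesis."""
    observations: list[str] = []
    for step in range(max_steps):
        # Plan: ask the model what to look up next, given what it has seen so far.
        plan = call_llm(f"Question: {question}\nKnown: {observations}\nWhat should I search next?")
        # Act: use a tool (here, literature search) based on the plan.
        observations.extend(search_literature(plan))
    # Reason: synthesise the observations into a candidate hypothesis.
    return call_llm(f"Given {observations}, propose a hypothesis for: {question}")

if __name__ == "__main__":
    print(research_agent("How does model size affect reasoning ability?"))
```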

Citing Generative AI in Scientific Research: Publishing Guidelines and Ethical Requirements

Publishers have recognised both the potential and the risks of generative AI (GAI) and have formulated policies accordingly. Broadly, these policies emphasise three principles: (1) human authorship – GAI tools cannot be credited as authors; (2) transparency and disclosure – authors must disclose when and how GAI has been used; and (3) accountability – human authors remain responsible for the accuracy and integrity of any content produced with GAI assistance.

Persona-based Prompt Patterns: Mega Prompts, Expert Prompts, and Tree of Thoughts

Persona-based approaches contextualise AI responses within specific roles, expertise domains, or cognitive frameworks, thereby improving both the relevance and quality of generated outputs (Kong et al. 2024). These techniques encompass mega prompts providing extensive contextual information, expert prompts assigning specific professional roles, and advanced reasoning frameworks such as Tree of Thoughts, which explores multiple reasoning paths before committing to an answer.
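
The following sketch shows how an expert prompt might be assembled programmatically. The template and wording are assumptions for illustration rather than a prescribed pattern; the call to the language model itself is omitted.

```python
# Illustrative construction of a persona-based "expert prompt" (assumed template,
# not a prescribed standard). The LLM call itself is omitted.

def expert_prompt(role: str, task: str, context: str) -> str:
    """Compose a prompt that assigns a professional role and supplies context."""
    return (
        f"You are a {role} with extensive domain experience.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        "Explain your reasoning step by step before giving a final answer."
    )

prompt = expert_prompt(
    role="peer reviewer for a machine-learning journal",
    task="Assess whether the reported evaluation methodology is sound.",
    context="The paper fine-tunes a 7B-parameter model on a proprietary dataset.",
)
print(prompt)
```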

The Environmental Costs of Artificial Intelligence: A Growing Concern

The rapid integration of Artificial Intelligence (AI) into global economies has driven transformative advancements in sectors such as healthcare and agriculture. However, this technological revolution incurs significant environmental costs, particularly through substantial energy consumption and greenhouse gas (GHG) emissions. The carbon footprint of AI, stemming from energy-intensive processes such as hardware manufacturing and model training, has become a growing concern.
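
As a rough illustration of how such footprints are typically estimated, the sketch below multiplies hardware energy use by a data-centre overhead factor (PUE) and a grid carbon-intensity figure. All numbers are placeholders, not measurements.

```python
# Back-of-the-envelope carbon estimate for a training run (placeholder numbers,
# not measurements): emissions = energy used (kWh) x grid carbon intensity (kgCO2e/kWh).

def training_emissions_kg(power_draw_kw: float, hours: float, pue: float,
                          carbon_intensity_kg_per_kwh: float) -> float:
    """Estimate kgCO2e: hardware power x time x data-centre overhead (PUE) x grid intensity."""
    energy_kwh = power_draw_kw * hours * pue
    return energy_kwh * carbon_intensity_kg_per_kwh

# Hypothetical example: 100 kW of accelerators running for two weeks, PUE of 1.2,
# and a grid intensity of 0.4 kgCO2e/kWh.
print(round(training_emissions_kg(100, 24 * 14, 1.2, 0.4)), "kgCO2e")
```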

Small Language Models (SLMs) and Knowledge Distillation

Small Language Models (SLMs) are compact neural networks designed to perform natural language processing (NLP) tasks with significantly fewer parameters and lower computational requirements than their larger counterparts. SLMs aim to deliver robust performance in resource-constrained environments, such as mobile devices or edge computing systems, where efficiency is paramount. A central technique for building SLMs is knowledge distillation, in which a compact student model is trained to reproduce the behaviour of a larger teacher model.
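
A minimal sketch of a distillation objective is shown below, assuming a PyTorch setup in which the teacher and student both emit raw logits over the same label set. The temperature and weighting values are illustrative defaults, not recommendations.

```python
# Minimal sketch of a knowledge-distillation loss in PyTorch (assumed setup:
# teacher and student produce raw logits over the same classes).

import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend hard-label cross-entropy with a soft-label KL term from the teacher."""
    # Soft targets: match the student's softened distribution to the teacher's.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * hard + (1.0 - alpha) * soft

# Toy usage with random logits for a batch of 4 examples and 10 classes.
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels).item())
```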

Main Types of Generative Models and Their Operating Principles: GANs, Diffusion Models, and Autoregressive Models

Generative models represent a fundamental paradigm in machine learning, enabling computers to create new data samples that closely mirror real-world examples. These models have become indispensable tools across diverse fields including image creation, natural language processing, and scientific research. Three principal architectures have emerged as dominant approaches: Generative Adversarial Networks (GANs), diffusion models, and autoregressive models.
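
The autoregressive principle is the easiest of the three to show in a few lines: each new token is sampled from a distribution conditioned on what came before. The toy bigram table below is invented purely for illustration and stands in for a learned model.

```python
# Toy illustration of the autoregressive principle: each new token is sampled
# from a conditional distribution over the previous token. The bigram counts
# below are made up purely for demonstration.

import random

# Hypothetical bigram "model": P(next | current) as unnormalised counts.
BIGRAMS = {
    "the": {"cat": 3, "dog": 2, "model": 5},
    "cat": {"sat": 4, "ran": 1},
    "dog": {"ran": 3, "sat": 1},
    "model": {"generates": 6, "learns": 2},
    "sat": {"quietly": 1},
    "ran": {"quickly": 1},
    "generates": {"text": 5},
    "learns": {"patterns": 4},
}

def sample_next(token: str) -> str | None:
    """Sample the next token in proportion to the bigram counts."""
    options = BIGRAMS.get(token)
    if not options:
        return None
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts, k=1)[0]

def generate(start: str, max_len: int = 6) -> str:
    """Repeatedly append a sampled next token until the sequence ends or hits max_len."""
    tokens = [start]
    while len(tokens) < max_len:
        nxt = sample_next(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))
```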

Principles and Methods of Model Evaluation

Creating effective large language models (LLMs) involves two critical stages: pre-training and fine-tuning. These stages enable models to progress from capturing broad linguistic knowledge to excelling in specific tasks, powering applications such as automated translation, sentiment analysis, and conversational agents. Rigorous evaluation and performance measurement ensure that LLMs meet both general and task-specific requirements.
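
Two of the most common measurements are easy to state concretely: classification accuracy for discriminative tasks and perplexity for language modelling. The sketch below computes both from toy inputs and is illustrative only.

```python
# Minimal sketch of two common evaluation measures: classification accuracy
# and perplexity from per-token probabilities. Inputs are toy values.

import math

def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of predictions that match the gold labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def perplexity(token_probs: list[float]) -> float:
    """Exponential of the average negative log-likelihood of the observed tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))   # 0.75
print(perplexity([0.25, 0.5, 0.1, 0.4]))      # higher values indicate a worse fit
```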

NLP Tasks and Applications: Core Techniques and Their Impact

Natural Language Processing (NLP) encompasses a variety of tasks, each with distinct methodologies and applications, including Named Entity Recognition (NER), sentiment analysis, classification, machine translation, summarisation, and information extraction. These tasks underpin numerous real-world applications, from virtual assistants to automated content analysis. Named Entity Recognition involves identifying and classifying named entities, such as people, organisations, and locations, within unstructured text.
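
For a concrete sense of these tasks in practice, the sketch below runs sentiment analysis and NER with off-the-shelf pipelines, assuming the Hugging Face transformers library is installed and its default models can be downloaded.

```python
# Illustrative use of off-the-shelf NLP pipelines (assumes the Hugging Face
# `transformers` library is installed and can download its default models).

from transformers import pipeline

# Sentiment analysis: classify the polarity of a sentence.
sentiment = pipeline("sentiment-analysis")
print(sentiment("The new summarisation model works remarkably well."))

# Named Entity Recognition: locate and label entities such as people, organisations, and places.
ner = pipeline("ner", aggregation_strategy="simple")
print(ner("Ada Lovelace worked with Charles Babbage in London."))
```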

Main Types of Machine Learning: Supervised, Unsupervised, and Reinforcement Learning

Machine learning (ML), a fundamental pillar of artificial intelligence, equips computational systems with the capacity to derive insights from data and refine their performance autonomously. Its profound influence permeates diverse domains, encompassing medical diagnostics, financial modelling, and autonomous systems. This essay offers a critical examination of the three principal paradigms: supervised, unsupervised, and reinforcement learning.
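
A brief sketch contrasting the first two paradigms is given below, assuming scikit-learn and its bundled Iris dataset are available; reinforcement learning is omitted because it requires an interactive environment and reward signal rather than a fixed dataset.

```python
# Minimal contrast between supervised and unsupervised learning using scikit-learn
# (assumed to be installed). Reinforcement learning is omitted: it needs an
# environment/reward loop rather than a fixed labelled dataset.

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: learn a mapping from features to known labels, then test on held-out data.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: discover structure (clusters) without using the labels at all.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((km.labels_ == c).sum()) for c in range(3)])
```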

Typologies of Artificial Intelligence: Narrow, General, and Superintelligent Systems

Having explored the definitional complexities and historical evolution of AI, we now examine how these developments have crystallised into systematic taxonomies. The progression from symbolic systems to contemporary neural architectures, traced in the previous sections, has given rise to increasingly sophisticated attempts to classify AI systems according to their capabilities.