Comparing leading large language models: architectures, performance and specialised capabilities
Most contemporary LLMs employ a decoder‑only transformer architecture, which processes sequences in parallel via self‑attention. However, in a dense transformer every parameter participates in every forward pass, so computation and cost grow in step with model size. Mixture‑of‑experts (MoE) approaches address this by activating only a subset of parameters per token, decoupling total parameter count from per‑token compute. In the Switch Transformer, MoE routing is simplified to top‑1: a lightweight router sends each token to a single expert feed‑forward network and scales that expert's output by the router probability, keeping the routing decision trainable while only one expert runs per token.
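To make the routing idea concrete, the following is a minimal sketch of Switch‑style top‑1 MoE routing. It is illustrative only: the class name, layer sizes and the simple per‑expert loop are assumptions for readability, not the implementation used by any particular model (production systems batch tokens per expert and add a load‑balancing loss).

```python
# Minimal sketch of top-1 (Switch-style) mixture-of-experts routing.
# All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        # Router produces one logit per expert for each token.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)      # routing probabilities
        gate, expert_idx = probs.max(dim=-1)           # top-1: one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i                     # tokens routed to expert i
            if mask.any():
                # Scale by the gate value so the router receives gradients.
                out[mask] = gate[mask].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 64)                           # 16 tokens, d_model = 64
moe = SwitchMoE(d_model=64, d_ff=256, num_experts=4)
print(moe(tokens).shape)                               # torch.Size([16, 64])
```

The key property the sketch shows is that each token touches only one expert's weights, so per‑token compute stays roughly constant as the number of experts (and hence total parameters) grows.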