Core Prompt Types by Complexity Levels: General, Specific, and Chain of Thought prompts


Prompts are central to human-AI interaction, with their complexity directly influencing the performance of large language models (LLMs). Within prompt engineering, prompts can be categorised by their structural and functional complexity. This section focuses on three core prompt types: short, general questions or instructions; longer, specific questions with defined output requirements; and Chain of Thought (CoT) prompting.

Short, general prompts are concise and open-ended, such as “What are the main drivers of social inequality?” or “Summarise theories of political participation.” These prompts give the AI considerable latitude in generating responses, making them well suited to exploratory research. Brown et al. (2020) highlight their utility in probing broad conceptual knowledge, though their ambiguity can lead to inconsistent or overly general outputs. Wei et al. (2022) argue that general prompts struggle to capture precise intent, limiting their use in rigorous academic contexts. They remain valuable, however, for brainstorming and hypothesis generation, where a broad overview initiates deeper investigation.
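As a concrete illustration, the sketch below sends one such general prompt to an LLM. It assumes the OpenAI Python SDK and an API key in the environment; the model name is illustrative, and any comparable chat-completion API would work the same way.

```python
# Minimal sketch: sending a short, general prompt to an LLM.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()

general_prompt = "What are the main drivers of social inequality?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; substitute any chat-capable model
    messages=[{"role": "user", "content": general_prompt}],
)

print(response.choices[0].message.content)
```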

Longer, specific prompts incorporate detailed instructions and explicit constraints, such as "Provide a 1,000-word literature review on machine learning applications in bioinformatics, citing at least five peer-reviewed sources from the last three years, formatted in APA style." These prompts reduce ambiguity, guiding the AI towards precise, research-ready outputs. Liu et al. (2023) demonstrate that specific prompts enhance LLM performance on tasks demanding factual accuracy and adherence to academic standards. Such prompts align outputs with scholarly needs, producing results suitable for journal submissions or policy reports, as shown in studies on prompt specificity (Reynolds & McDonell 2021). For example, “Analyse the impact of social media on political polarisation in the EU, including statistical evidence and three case studies” yields a structured, evidence-based analysis. Crafting these prompts requires familiarity with research conventions, and overly rigid instructions may limit novel insights (Kaplan & Haenlein 2020). They are essential for tasks such as policy evaluation or ethnographic reviews, where precision is paramount.
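A minimal sketch of how such constraints can be made explicit and reusable is shown below; the template wording, field names, and constraint values are illustrative rather than prescriptive.

```python
# Minimal sketch: building a longer, specific prompt from a reusable template.
# The template wording, field names, and constraint values are illustrative.
SPECIFIC_TEMPLATE = (
    "Provide a {length}-word literature review on {topic}, "
    "citing at least {n_sources} peer-reviewed sources from the last "
    "{years} years, formatted in {style} style."
)

prompt = SPECIFIC_TEMPLATE.format(
    length=1000,
    topic="machine learning applications in bioinformatics",
    n_sources=5,
    years=3,
    style="APA",
)

# The resulting string can be sent to any chat-completion endpoint,
# e.g. via the client call shown in the previous sketch.
print(prompt)
```

Keeping the constraints in a template rather than in free text makes the output requirements explicit and repeatable across related research questions.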

Chain of Thought (CoT) prompting, introduced by Wei et al. (2022) and extended to zero-shot settings by Kojima et al. (2022), instructs the AI to articulate its reasoning step by step, as in “Evaluate the validity of rational choice theory in explaining voter turnout, detailing each assumption and supporting evidence.” CoT excels in complex analytical tasks and enhances reasoning transparency, which is crucial for fields like political economy or social network analysis. Its structured approach supports novel applications, such as evaluating theoretical models or interpreting qualitative data (Wang et al. 2023; Wang et al. 2024). Recent work shows that reasoning can be further improved by techniques such as representation engineering, where control vectors modulate LLM activations to enhance performance on reasoning tasks (Højer et al. 2025). For example, “Analyse survey data on public trust in institutions, explaining each statistical step” supports rigorous data interpretation. CoT requires carefully designed prompts, which can be time-intensive, and its computational demands may limit real-time use (Zhao et al. 2025).
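The sketch below illustrates the zero-shot variant of CoT described by Kojima et al. (2022), in which an explicit "think step by step" instruction is appended to the question. The `complete()` helper and the model name are assumptions standing in for any single-turn chat-completion call.

```python
# Minimal sketch: zero-shot Chain of Thought prompting, using the explicit
# "think step by step" instruction popularised by Kojima et al. (2022).
# The helper below and the model name are assumptions; any single-turn
# chat-completion call could be substituted.
from openai import OpenAI

client = OpenAI()


def complete(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


question = (
    "Evaluate the validity of rational choice theory in explaining voter "
    "turnout, detailing each assumption and supporting evidence."
)

# Appending an explicit reasoning instruction elicits step-by-step output.
cot_prompt = question + "\nLet's think step by step."

print(complete(cot_prompt))
```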

In sum, short, general prompts prioritise accessibility and creativity, specific prompts ensure precision, and CoT prompting excels in complex reasoning. Each type serves distinct roles in prompt engineering, with effectiveness tied to task demands and user expertise. Understanding these core complexity levels is essential for optimising human-AI collaboration.

References:

1. Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. Language Models Are Few-Shot Learners. Advances in Neural Information Processing Systems 33: 1877–1901.

2. Højer, Bertram, Oliver Jarvis, and Stefan Heinrich. 2025. Improving Reasoning Performance in Large Language Models via Representation Engineering. arXiv preprint arXiv:2504.19483. https://arxiv.org/abs/2504.19483

3. Kaplan, Andreas, and Michael Haenlein. 2020. Rulers of the World, Unite! The Challenges and Opportunities of Artificial Intelligence. Business Horizons 63 (1): 37–50. https://doi.org/10.1016/j.bushor.2019.09.003

4. Kojima, Takeshi, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large Language Models Are Zero-Shot Reasoners. Advances in Neural Information Processing Systems 35: 22199–22213. https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b81b3f78-Paper-Conference.pdf

5. Liu, Pengfei, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys 55 (9): 1–35. https://doi.org/10.1145/3564445

6. Reynolds, Laria, and Kyle McDonell. 2021. Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 1–7. https://doi.org/10.1145/3411763.3450381

7. Wang, Han, Archiki Prasad, Elias Stengel-Eskin, and Mohit Bansal. 2024. Soft Self-Consistency Improves Language Model Agents. arXiv preprint arXiv:2402.13212. https://arxiv.org/abs/2402.13212

8. Wang, Xuezhi, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, et al. 2023. Self-Consistency Improves Chain of Thought Reasoning in Language Models. arXiv preprint arXiv:2203.11171. https://arxiv.org/abs/2203.11171

9. Wei, Jason, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, et al. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv preprint arXiv:2201.11903. https://arxiv.org/abs/2201.11903

10. Zhao, Wayne Xin, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, et al. 2025. A Survey of Large Language Models. arXiv preprint arXiv:2303.18223. https://arxiv.org/abs/2303.18223