Chain-of-Thought Prompting in Generative AI and Large Language Models (LLMs)

Imagine having a conversation with an AI that doesn't just provide answers but walks you through its entire thought process, step by step. This isn't science fiction—it's the reality of Chain-of-Thought (CoT) prompting, a groundbreaking technique that has transformed how we interact with Large Language Models (LLMs). As artificial intelligence continues to reshape industries and redefine what's possible in automated reasoning, understanding and mastering Chain-of-Thought prompting has become essential for anyone looking to harness the full potential of modern AI systems.

Chain-of-Thought prompting represents a paradigm shift from traditional prompt engineering, moving beyond simple input-output interactions to create a structured dialogue that mirrors human problem-solving processes. This innovative approach has proven particularly effective in enhancing the performance of generative AI models across complex tasks, from mathematical reasoning to logical deduction and creative problem-solving. By encouraging models to break down complex problems into manageable steps, CoT prompting not only improves accuracy but also provides transparency into the AI's reasoning process, making it an invaluable tool for both developers and end-users.

Throughout this comprehensive guide, we'll explore the fundamental principles of Chain-of-Thought prompting, examine its various applications across different domains, and provide practical strategies for implementation. Whether you're a data scientist looking to optimize model performance, a business leader seeking to understand AI capabilities, or a curious individual wanting to better understand how modern AI systems think, this article will equip you with the knowledge and tools needed to effectively leverage this powerful technique.

Understanding Chain-of-Thought Prompting: The Foundation of Advanced AI Reasoning

Chain-of-Thought prompting is fundamentally about teaching AI models to think before they speak. Unlike traditional prompting methods that expect immediate answers, CoT prompting encourages models to articulate their reasoning process through a series of intermediate steps. This approach draws inspiration from human cognitive processes, where complex problems are naturally broken down into smaller, more manageable components that can be systematically addressed.

The core principle behind Chain-of-Thought prompting lies in its ability to activate the latent reasoning capabilities already present within Large Language Models. These models, trained on vast datasets containing human-written text, have absorbed countless examples of step-by-step reasoning and problem-solving approaches. However, without proper prompting, they may default to providing direct answers without demonstrating the logical pathway that led to their conclusions. CoT prompting serves as a key that unlocks these embedded reasoning patterns, allowing models to exhibit more sophisticated analytical thinking.

What makes Chain-of-Thought prompting particularly powerful is its versatility across different types of reasoning tasks. Whether dealing with mathematical calculations, logical puzzles, scientific hypotheses, or creative challenges, the fundamental approach remains consistent: break the problem down, work through each component systematically, and build toward a comprehensive solution. This methodology not only improves the accuracy of AI responses but also makes the entire process more transparent and verifiable.

The effectiveness of CoT prompting becomes even more pronounced when working with larger, more sophisticated models. As model size and complexity increase, the ability to leverage Chain-of-Thought reasoning improves dramatically, suggesting that this technique will become increasingly important as AI systems continue to evolve. This scalability makes CoT prompting not just a current best practice but a foundational skill for future AI interactions.

The Evolution and Science Behind Chain-of-Thought Prompting

The development of Chain-of-Thought prompting didn't happen overnight; it builds on years of research into human cognition, machine learning, and natural language processing. Early AI systems were designed to provide direct answers to specific questions, but researchers quickly realized that this approach limited the models' ability to handle complex, multi-step problems effectively. The breakthrough came in 2022, when researchers at Google showed that demonstrating step-by-step reasoning in the prompt, rather than bare question-answer pairs, dramatically improved the performance of large models on multi-step problems.

Research conducted by teams at leading institutions has consistently demonstrated that Chain-of-Thought prompting can improve model performance across a wide range of tasks. Studies have reported substantial gains on mathematical reasoning benchmarks, with even larger improvements on complex multi-step logical reasoning, and the gains generally grow with model scale. These improvements aren't merely statistical anomalies; they represent fundamental enhancements in the models' ability to process and synthesize information in meaningful ways.

The scientific foundation of CoT prompting rests on several key cognitive principles. First, it leverages the concept of decomposition, breaking complex problems into simpler sub-problems that can be solved independently. Second, it utilizes sequential reasoning, where each step builds logically upon the previous one, creating a coherent chain of thought. Third, it incorporates explicit reasoning, making the problem-solving process visible and verifiable rather than hidden within the model's internal computations.

Cognitive science offers a useful analogy for why Chain-of-Thought prompting works. Human problem-solving distributes the work across multiple processes, with different faculties handling different aspects of the reasoning. CoT prompting encourages loosely analogous behavior in language models: by externalizing intermediate steps, the model can draw on a broader range of reasoning patterns from its training data instead of committing to an answer in a single pass.

Types and Variations of Chain-of-Thought Prompting Techniques

The world of Chain-of-Thought prompting encompasses several distinct approaches, each tailored to specific types of problems and reasoning challenges. Understanding these variations is crucial for selecting the most appropriate technique for any given scenario. The most fundamental distinction lies between few-shot CoT prompting and zero-shot CoT prompting, each offering unique advantages and applications.

Few-shot Chain-of-Thought prompting involves providing the model with several examples of how to approach similar problems, complete with step-by-step reasoning demonstrations. This technique is particularly effective when working with domain-specific problems or when you want to establish a particular reasoning style or format. By showing the model multiple examples of how expert humans would approach similar challenges, you create a template that the AI can follow when tackling new, related problems. The key to successful few-shot CoT prompting lies in selecting high-quality examples that clearly demonstrate the desired reasoning process while covering a representative range of problem variations.
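
To make this concrete, here is a minimal sketch of a few-shot CoT prompt for arithmetic word problems. The worked examples and wording are illustrative placeholders rather than a canonical template; the helper function simply slots a new question into the same format.

```python
# A minimal few-shot CoT template: two worked examples followed by the new question.
# The examples and phrasing are illustrative, not taken from any specific paper.

FEW_SHOT_COT_PROMPT = """\
Q: A baker has 24 cupcakes and sells 3 boxes of 4. How many cupcakes are left?
A: Each box holds 4 cupcakes, so 3 boxes contain 3 * 4 = 12 cupcakes.
   24 - 12 = 12 cupcakes remain. The answer is 12.

Q: A train travels 60 km in the first hour and 45 km in the second. What is the total distance?
A: The train covers 60 km in hour one and 45 km in hour two.
   60 + 45 = 105 km in total. The answer is 105.

Q: {question}
A:"""

def build_few_shot_prompt(question: str) -> str:
    """Insert a new question into the few-shot CoT template."""
    return FEW_SHOT_COT_PROMPT.format(question=question)
```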

Zero-shot Chain-of-Thought prompting, on the other hand, relies on general instructions that encourage step-by-step thinking without providing specific examples. This approach is incredibly versatile and can be applied to novel problems where appropriate examples might not be readily available. The classic zero-shot CoT prompt "Let's think step by step" has proven remarkably effective across diverse domains, suggesting that the models have internalized general principles of systematic reasoning that can be activated with simple cues.
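
As a sketch, zero-shot CoT can be as simple as appending the trigger phrase to the question; the exact wrapper format below is an assumption, not a requirement.

```python
def build_zero_shot_cot_prompt(question: str) -> str:
    """Wrap a plain question with the zero-shot CoT trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."

# Some implementations follow up with a second call such as
# "Therefore, the answer is" to extract a clean final answer from the generated chain.
```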

Advanced variations include self-consistency CoT prompting, where the model generates multiple reasoning chains and selects the most consistent answer, and program-aided CoT prompting, which combines natural language reasoning with code execution for mathematical and computational problems. These sophisticated approaches can achieve even higher accuracy rates but require more computational resources and careful implementation.

Mathematical and Logical Reasoning: Where CoT Prompting Shines

Mathematics and logic represent some of the most compelling applications of Chain-of-Thought prompting, primarily because these domains have well-defined rules and procedures that can be systematically applied. When faced with complex mathematical problems, CoT prompting encourages models to break down calculations into individual steps, identify relevant formulas or principles, and work through the solution methodically. This approach not only improves accuracy but also makes it possible to identify and correct errors at specific points in the reasoning chain.

Consider the difference between asking a model to solve a complex word problem directly versus guiding it through a Chain-of-Thought approach. In the direct approach, the model might attempt to leap straight to an answer, potentially making calculation errors or missing crucial problem constraints. With CoT prompting, the model first identifies what information is given, determines what needs to be found, selects appropriate mathematical operations, and then performs calculations step by step. This systematic approach dramatically reduces error rates and makes the solution process transparent and verifiable.
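
A short illustrative example makes the contrast clear. The problem and prompts below are hypothetical, but they show how a CoT-style prompt nudges the model toward an explicit, checkable chain of arithmetic.

```python
question = ("Pens cost $2 each. Maria buys 4 pens and pays with a $10 bill. "
            "How much change does she receive?")

# Direct prompting: the model is expected to jump straight to a number.
direct_prompt = f"{question}\nAnswer:"

# CoT prompting: the model is asked to surface each intermediate step.
cot_prompt = (
    f"{question}\n"
    "List the given values, choose the operations needed, show each calculation, "
    "and then state the final answer."
)

# A good CoT response would read roughly:
#   4 pens * $2 = $8 spent; $10 - $8 = $2 change. The answer is $2.
```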

Logical reasoning problems present another excellent showcase for CoT prompting capabilities. When working with syllogisms, conditional statements, or complex logical puzzles, models can use Chain-of-Thought reasoning to track multiple premises, apply logical rules consistently, and build toward valid conclusions. The step-by-step nature of CoT prompting is particularly valuable in logic problems because it makes it easy to identify where reasoning might have gone astray and to verify that each logical step follows properly from the previous ones.

The effectiveness of CoT prompting in mathematical and logical domains has implications beyond these specific areas. Many real-world problems contain mathematical or logical components, and the ability to handle these systematically can improve performance across diverse applications. From financial analysis to scientific modeling, from legal reasoning to strategic planning, the systematic approach fostered by CoT prompting provides a foundation for more reliable and transparent AI assistance.

Creative Problem-Solving and Complex Analysis Through Chain-of-Thought

While Chain-of-Thought prompting is often associated with structured, logical problems, its applications in creative and analytical domains are equally impressive. Creative problem-solving requires a different type of reasoning—one that combines logical thinking with imaginative exploration, alternative perspective-taking, and innovative synthesis. CoT prompting can guide AI models through these complex creative processes by encouraging them to consider multiple approaches, explore different angles, and build ideas systematically.

In creative writing scenarios, for example, Chain-of-Thought prompting can help models develop more sophisticated narratives by working through character development, plot construction, and thematic exploration step by step. Rather than generating stories through purely associative processes, models can use CoT techniques to consider narrative elements systematically: establishing setting and characters, identifying central conflicts, exploring how characters would realistically respond to challenges, and building toward satisfying resolutions. This structured approach often results in more coherent and engaging creative outputs.

Business analysis and strategic planning represent another area where CoT prompting excels in handling complexity. When analyzing market trends, competitive landscapes, or organizational challenges, models can use Chain-of-Thought reasoning to systematically examine multiple factors, consider various scenarios, and build comprehensive assessments. The step-by-step nature of CoT prompting is particularly valuable in business contexts because it makes the reasoning process transparent and allows decision-makers to understand and evaluate the logic behind AI-generated insights.

Complex analytical tasks often require synthesis of information from multiple sources, consideration of various stakeholder perspectives, and evaluation of trade-offs between different options. CoT prompting provides a framework for managing this complexity by encouraging models to address each aspect of the analysis systematically. This approach not only improves the quality of analytical outputs but also makes it easier for human users to follow the reasoning process and identify areas where additional information or alternative perspectives might be valuable.

Practical Implementation Strategies for Maximum Effectiveness

Successfully implementing Chain-of-Thought prompting requires understanding both the technical aspects of prompt construction and the strategic considerations for different use cases. The foundation of effective CoT implementation lies in crafting prompts that clearly communicate the desired reasoning approach while providing sufficient flexibility for the model to adapt to specific problem characteristics. This balance between guidance and flexibility is crucial for achieving optimal results across diverse applications.

One of the most important implementation strategies involves developing a systematic approach to prompt engineering. Start by clearly defining the problem type and identifying the key reasoning steps that would typically be involved in solving similar problems. Then, construct prompts that guide the model through these steps while encouraging it to articulate its thinking at each stage. Effective CoT prompts often include explicit instructions to "explain your reasoning," "show your work," or "think through this step by step," combined with specific guidance about what aspects of the problem to consider.
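
One way to operationalize this is a small prompt builder that accepts the reasoning stages you want the model to walk through. This is a sketch under the assumption that your problems share a common set of stages; the function name, stage descriptions, and output format are illustrative.

```python
def build_guided_cot_prompt(problem: str, stages: list[str]) -> str:
    """Build a prompt that asks the model to reason through explicit, named stages.

    The stage descriptions are supplied by the prompt author and are purely illustrative here.
    """
    numbered = "\n".join(f"{i}. {stage}" for i, stage in enumerate(stages, start=1))
    return (
        f"Problem:\n{problem}\n\n"
        "Work through the problem using the stages below, explaining your reasoning at each stage. "
        "Finish with the final answer on its own line, prefixed with 'Answer:'.\n"
        f"{numbered}"
    )

prompt = build_guided_cot_prompt(
    "A tank holds 120 liters and drains at 8 liters per minute. How long until it is empty?",
    ["Restate what is given", "Identify what must be found",
     "Choose the relevant operation", "Perform the calculation", "State the final answer"],
)
```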

The selection of appropriate examples for few-shot CoT prompting requires careful consideration of several factors. Examples should be representative of the types of problems you want the model to solve, demonstrate clear and logical reasoning processes, and cover the range of complexity and variation you expect to encounter. Quality is more important than quantity; a few well-chosen examples that clearly illustrate the desired reasoning approach will typically outperform a larger number of less carefully selected examples.

Iterative refinement is essential for optimizing CoT prompting effectiveness. Start with basic implementations and systematically test and refine your approaches based on the results you observe. Pay attention to where the model's reasoning breaks down, identify patterns in errors or inconsistencies, and adjust your prompting strategies accordingly. This iterative approach allows you to develop increasingly sophisticated and effective CoT techniques tailored to your specific needs and applications.

Industry Applications and Real-World Case Studies

The versatility of Chain-of-Thought prompting has led to its adoption across numerous industries, each finding unique ways to leverage this technique for improved AI performance. In the healthcare sector, CoT prompting has been successfully applied to diagnostic reasoning, where models are guided through systematic consideration of symptoms, medical history, and test results to develop differential diagnoses. This approach not only improves diagnostic accuracy but also provides transparency that allows medical professionals to understand and evaluate the AI's reasoning process.

Financial services represent another domain where CoT prompting has proven particularly valuable. Investment analysis, risk assessment, and fraud detection all benefit from the systematic reasoning approaches that CoT prompting encourages. When analyzing investment opportunities, for example, models can be guided through consideration of financial metrics, market conditions, competitive factors, and risk elements in a structured manner that mirrors the approach expert analysts would take. This systematic approach helps ensure that important factors aren't overlooked and that the reasoning behind investment recommendations is clear and defensible.

The education sector has found innovative applications for CoT prompting in developing intelligent tutoring systems and automated assessment tools. When helping students learn complex subjects, AI systems can use Chain-of-Thought reasoning to break down difficult concepts into understandable steps, identify where students might be struggling, and provide targeted assistance. This approach is particularly effective in mathematics and science education, where step-by-step reasoning is fundamental to understanding and problem-solving success.

Legal applications of CoT prompting include contract analysis, legal research, and case strategy development. Legal reasoning often involves working through complex chains of precedent, statutory interpretation, and factual analysis, making it well-suited to Chain-of-Thought approaches. AI systems can be guided through systematic consideration of relevant laws, precedents, and factual circumstances to develop legal analyses that are both thorough and transparent. This capability is particularly valuable for initial case assessment and research tasks, where comprehensive analysis of multiple factors is essential.

Advanced Techniques and Optimization Strategies

As Chain-of-Thought prompting has matured, researchers and practitioners have developed increasingly sophisticated techniques for optimizing its effectiveness. Self-consistency decoding represents one of the most promising advanced approaches, where models generate multiple reasoning chains for the same problem and then select the answer that appears most frequently across these different chains. This technique can significantly improve accuracy, particularly on complex problems where multiple valid reasoning approaches might exist.
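
The control flow behind self-consistency is straightforward to sketch. The call_llm function below is a placeholder for whichever model API you use; the essential idea is sampling several chains at a non-zero temperature and taking a majority vote over the extracted final answers.

```python
from collections import Counter

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    """Placeholder for whichever model API you use; returns one full reasoning chain."""
    raise NotImplementedError

def extract_answer(chain: str) -> str:
    """Pull the final answer out of a chain (here, whatever follows the last 'Answer:')."""
    return chain.rsplit("Answer:", 1)[-1].strip()

def self_consistent_answer(prompt: str, n_samples: int = 10) -> str:
    """Sample several chains at non-zero temperature and majority-vote the extracted answers."""
    answers = [extract_answer(call_llm(prompt, temperature=0.7)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```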

Program-aided Chain-of-Thought prompting combines natural language reasoning with code execution, allowing models to perform precise calculations while maintaining clear reasoning narratives. This hybrid approach is particularly effective for problems that involve both conceptual understanding and computational precision, such as scientific modeling or financial analysis. By alternating between natural language explanation and code execution, models can provide both intuitive understanding and mathematical accuracy.
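
A minimal sketch of this pattern asks the model for executable code rather than a final answer, then runs that code to obtain the result. The prompt wording and the call_llm placeholder are assumptions, and executing model-generated code should only ever happen inside a proper sandbox.

```python
PAL_PROMPT = """\
Write Python code that solves the problem below.
Explain each step in comments and store the final result in a variable named `answer`.

Problem: {problem}
"""

def call_llm(prompt: str) -> str:
    """Placeholder for your model API; expected to return Python source code here."""
    raise NotImplementedError

def program_aided_answer(problem: str):
    code = call_llm(PAL_PROMPT.format(problem=problem))
    namespace: dict = {}
    # Executing model-generated code is unsafe outside a proper sandbox;
    # this is only a sketch of the control flow.
    exec(code, namespace)
    return namespace.get("answer")
```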

Tree-of-thought prompting extends Chain-of-Thought reasoning by exploring multiple reasoning branches simultaneously, much like how humans might consider several different approaches to a problem before selecting the most promising one. This technique is particularly valuable for creative problems or situations where the optimal reasoning path isn't immediately obvious. While computationally more expensive than linear CoT approaches, tree-of-thought prompting can achieve superior results on complex, open-ended problems.
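
In outline, the technique keeps a small frontier of partial reasoning states, asks the model to propose continuations of each, scores them, and prunes to the most promising few. Both helper functions below are placeholders for model calls, and the search parameters are illustrative choices.

```python
def propose_thoughts(state: str, k: int) -> list[str]:
    """Ask the model for k candidate next reasoning steps (placeholder)."""
    raise NotImplementedError

def score_state(state: str) -> float:
    """Rate how promising a partial solution looks, via the model or a heuristic (placeholder)."""
    raise NotImplementedError

def tree_of_thought(problem: str, depth: int = 3, breadth: int = 3, keep: int = 2) -> str:
    """Breadth-first search over partial reasoning states, pruning to the best few at each level."""
    frontier = [problem]
    for _ in range(depth):
        candidates = [
            f"{state}\n{thought}"
            for state in frontier
            for thought in propose_thoughts(state, breadth)
        ]
        frontier = sorted(candidates, key=score_state, reverse=True)[:keep]
    return frontier[0]
```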

Automated prompt optimization represents another frontier in CoT advancement, where machine learning techniques are used to systematically improve prompt effectiveness. These approaches can discover optimal prompt formulations that might not be immediately obvious to human practitioners, potentially unlocking new levels of performance across various reasoning tasks. As these techniques continue to develop, they promise to make Chain-of-Thought prompting more accessible and effective for a broader range of applications.

Common Pitfalls and How to Avoid Them

Despite its effectiveness, Chain-of-Thought prompting isn't immune to various challenges and potential pitfalls that can undermine its success. Understanding these common issues and how to address them is crucial for achieving consistent results and maximizing the benefits of CoT techniques. One of the most frequent problems occurs when models generate reasoning chains that appear logical on the surface but contain fundamental errors in logic or factual content.

Over-reliance on verbose explanations without ensuring logical consistency represents another significant pitfall. Models may produce lengthy, impressive-sounding reasoning chains that actually contain circular logic, unsupported assumptions, or invalid inferences. To address this issue, it's essential to evaluate CoT outputs not just for length or apparent sophistication but for logical validity and factual accuracy. Developing systematic approaches for validating reasoning chains can help identify and correct these issues before they impact final results.

Prompt complexity can also become counterproductive if not carefully managed. While detailed instructions can help guide model reasoning, overly complex or contradictory prompts can confuse models and lead to degraded performance. The key is finding the right balance between providing sufficient guidance and maintaining clarity and simplicity. Testing different levels of prompt complexity and measuring their impact on output quality can help identify the optimal approach for specific applications.

Context limitations present another challenge, particularly when working with very long problems or extensive reasoning chains. Models have finite context windows, and attempting to maintain coherent reasoning across extremely long sequences can lead to degraded performance or loss of important information. Strategies for managing context limitations include breaking large problems into smaller segments, using summarization techniques to maintain key information, and developing approaches for maintaining coherence across context boundaries.
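
One simple way to work within a fixed context window is to reason over a long problem in sections while carrying forward a running summary. The sketch below assumes a generic call_llm placeholder and a prompt format of your own choosing.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for your model API call."""
    raise NotImplementedError

def chunked_reasoning(problem_sections: list[str]) -> str:
    """Reason over a long problem section by section, carrying forward a running summary."""
    summary = ""
    for section in problem_sections:
        prompt = (
            f"Summary of the reasoning so far:\n{summary or '(none yet)'}\n\n"
            f"Next part of the problem:\n{section}\n\n"
            "Continue the step-by-step reasoning, then rewrite the summary so it stays under 200 words."
        )
        summary = call_llm(prompt)
    return summary
```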

Measuring Success: Metrics and Evaluation Frameworks

Evaluating the effectiveness of Chain-of-Thought prompting requires sophisticated approaches that go beyond simple accuracy metrics. While getting the right answer is important, the quality of the reasoning process is equally crucial for many applications. Comprehensive evaluation frameworks should assess multiple dimensions of performance, including logical consistency, factual accuracy, completeness of reasoning, and clarity of explanation.
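
A lightweight way to make these dimensions operational is to record a structured evaluation per reasoning chain and aggregate across a test set. The field names and the 1-5 clarity scale below are illustrative choices, not a standard.

```python
from dataclasses import dataclass

@dataclass
class CoTEvaluation:
    """One record per reasoning chain; the dimensions mirror the framework described above."""
    answer_correct: bool        # final answer matches the reference
    logically_consistent: bool  # each step follows from the previous ones
    factually_accurate: bool    # facts used in the chain check out
    complete: bool              # all relevant aspects of the problem are addressed
    clarity_score: int          # 1-5 rating from a human reviewer

def aggregate(evals: list[CoTEvaluation]) -> dict:
    """Summarize a batch of evaluations into simple rates and a mean clarity score."""
    n = len(evals)
    return {
        "accuracy": sum(e.answer_correct for e in evals) / n,
        "consistency_rate": sum(e.logically_consistent for e in evals) / n,
        "factual_rate": sum(e.factually_accurate for e in evals) / n,
        "completeness_rate": sum(e.complete for e in evals) / n,
        "mean_clarity": sum(e.clarity_score for e in evals) / n,
    }
```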

Logical consistency metrics examine whether each step in the reasoning chain follows logically from the previous steps and whether the overall argument structure is sound. This evaluation can be performed both through automated logical analysis tools and through human expert review. Identifying patterns in logical errors can help refine prompting strategies and improve overall reasoning quality.

Factual accuracy assessment focuses on verifying that the information used in reasoning chains is correct and appropriately applied. This is particularly important in domains where factual errors can have serious consequences, such as healthcare, finance, or legal applications. Developing robust fact-checking processes and maintaining current knowledge bases are essential for reliable CoT implementation.

Completeness evaluation examines whether the reasoning process addresses all relevant aspects of the problem and considers important factors that might influence the outcome. This assessment often requires domain expertise to identify what elements should be included in a comprehensive analysis. Systematic checklists and expert review processes can help ensure that CoT reasoning chains are appropriately thorough without becoming unnecessarily verbose.

Clarity and interpretability metrics assess how well human users can understand and follow the reasoning process. Even logically sound and factually accurate reasoning chains are of limited value if they're difficult for intended users to understand and evaluate. User testing and feedback collection can provide valuable insights into how to optimize CoT outputs for maximum interpretability and usability.

Integration with Existing AI Workflows and Systems

Successfully incorporating Chain-of-Thought prompting into existing AI workflows requires careful planning and consideration of how CoT techniques will interact with current systems and processes. The integration process should begin with a thorough assessment of current AI applications and identification of areas where enhanced reasoning capabilities would provide the most value. This analysis should consider both technical feasibility and potential business impact to prioritize integration efforts effectively.

API and interface considerations play a crucial role in successful CoT integration. Many existing AI systems are designed around simple input-output interactions, and adapting them to support more complex Chain-of-Thought dialogues may require significant modifications. Planning for these technical requirements early in the integration process can help avoid complications and ensure smooth implementation.

Performance and scalability implications must also be carefully considered when integrating CoT prompting into production systems. Chain-of-Thought reasoning typically requires more computational resources and processing time than traditional prompting approaches. Understanding these resource requirements and planning for appropriate infrastructure scaling is essential for maintaining system performance and user experience.

Training and change management represent important non-technical aspects of CoT integration. Users accustomed to traditional AI interactions may need education and support to effectively leverage Chain-of-Thought capabilities. Developing comprehensive training programs and providing ongoing support can help ensure that the benefits of CoT prompting are fully realized across the organization.

Future Directions and Emerging Trends

The field of Chain-of-Thought prompting continues to evolve rapidly, with exciting developments on multiple fronts promising to further enhance its capabilities and applications. Multimodal Chain-of-Thought reasoning represents one of the most promising emerging areas, where models combine textual reasoning with visual, auditory, or other sensory inputs to develop more comprehensive understanding and analysis. This capability could revolutionize fields like medical diagnosis, engineering design, and scientific research.

Automated reasoning optimization is another area of active development, where machine learning techniques are being applied to discover more effective reasoning strategies automatically. These approaches could potentially uncover reasoning patterns and optimization techniques that human researchers might not identify independently, leading to significant performance improvements across various domains.

Interactive and collaborative reasoning systems represent another frontier, where AI models engage in extended dialogues with human experts to develop increasingly sophisticated analyses and solutions. These systems could combine the systematic reasoning capabilities of AI with human creativity and domain expertise to tackle complex challenges that neither could address effectively in isolation.

The integration of Chain-of-Thought prompting with other advanced AI techniques, such as reinforcement learning and neural-symbolic reasoning, promises to create even more powerful and flexible AI systems. These hybrid approaches could combine the transparency and interpretability of CoT reasoning with the optimization capabilities of other machine learning paradigms.

Conclusion

Chain-of-Thought prompting represents a fundamental advancement in how we interact with and leverage Large Language Models, transforming AI from a black box that provides answers into a transparent reasoning partner that explains its thinking process. Throughout this comprehensive exploration, we've seen how this technique enhances accuracy, provides interpretability, and enables more sophisticated problem-solving across diverse domains from mathematics and logic to creative analysis and strategic planning.

The key insights from our investigation reveal that successful Chain-of-Thought prompting requires a thoughtful balance of guidance and flexibility, careful attention to prompt construction and example selection, and systematic evaluation of both process and outcomes. The technique's versatility across industries—from healthcare and finance to education and legal services—demonstrates its broad applicability and transformative potential for AI-assisted decision-making and analysis.

As we look toward the future, the continued evolution of Chain-of-Thought prompting promises even more exciting possibilities. The emergence of multimodal reasoning, automated optimization techniques, and collaborative AI systems suggests that we're only beginning to scratch the surface of what's possible when we teach machines to think step by step. For organizations and individuals seeking to harness the full potential of modern AI systems, mastering Chain-of-Thought prompting isn't just an opportunity—it's becoming an essential skill for navigating an increasingly AI-driven world.

The journey of understanding and implementing Chain-of-Thought prompting may require investment in learning and experimentation, but the rewards—in terms of improved AI performance, greater transparency, and enhanced problem-solving capabilities—make this effort invaluable. As AI continues to evolve and become more integrated into our daily work and decision-making processes, those who understand how to effectively guide AI reasoning through Chain-of-Thought techniques will be best positioned to unlock its transformative potential.

Frequently Asked Questions (FAQ)

Q1: What is Chain-of-Thought prompting and how does it work? Chain-of-Thought (CoT) prompting is a technique that encourages AI models to break down complex problems into step-by-step reasoning processes, mimicking human problem-solving approaches. It works by explicitly asking the model to show its thinking process, leading to more accurate and transparent results.

Q2: What are the main benefits of using Chain-of-Thought prompting? The main benefits include improved accuracy, especially on multi-step problems, enhanced transparency in AI reasoning, and the ability to identify and correct errors at specific points in the reasoning process. Organizations using CoT techniques often see significant improvements in AI model effectiveness.

Q3: Which types of problems work best with Chain-of-Thought prompting? CoT prompting is most effective for mathematical word problems, logical reasoning tasks, scientific analysis, multi-step planning, and any complex problem that benefits from systematic breakdown and analysis. The technique shows the greatest improvements on tasks requiring multi-step reasoning.

Q4: How do I implement Chain-of-Thought prompting in my AI applications? Start by adding phrases like "Let's think step by step" or "Show your reasoning" to your prompts. For better results, provide examples of step-by-step reasoning (few-shot prompting) or structure your prompts to guide the model through specific reasoning stages.

Q5: What's the difference between few-shot and zero-shot Chain-of-Thought prompting? Few-shot CoT provides examples of step-by-step reasoning before asking the model to solve a new problem, while zero-shot CoT simply instructs the model to think step-by-step without examples. Few-shot typically performs better but requires more context.

Q6: Can Chain-of-Thought prompting be used for creative tasks? Yes, CoT prompting can enhance creative tasks by encouraging systematic exploration of ideas, character development, plot construction, and thematic analysis. It helps structure creative processes while maintaining originality and can be particularly effective when combined with AI-powered creative tools.

Q7: What are common mistakes to avoid when using Chain-of-Thought prompting? Common mistakes include accepting verbose but illogical reasoning, over-complicating prompts, failing to validate reasoning steps, and not adapting the technique to specific problem types or domains. Always verify the logical consistency of the reasoning chain.

Q8: How does Chain-of-Thought prompting affect AI model performance across different domains? Improvements vary by domain: mathematical and symbolic reasoning typically see the largest gains, logical reasoning also benefits substantially, while simpler tasks such as basic reading comprehension show more modest improvements. The technique is most effective for complex analytical tasks requiring multi-step reasoning.

Q9: Is Chain-of-Thought prompting compatible with all language models? CoT prompting works with most modern large language models, but its effectiveness increases with model size and sophistication. It works best with models that have been trained on diverse examples of step-by-step reasoning.

Q10: How can I measure the effectiveness of Chain-of-Thought prompting in my applications? Measure effectiveness through accuracy improvements, reasoning quality assessments, user satisfaction scores, error reduction rates, and task completion times. Consider both quantitative metrics and qualitative evaluation of reasoning coherence to get a complete picture of performance gains.

Additional Resources

  1. "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" - The seminal research paper by Wei et al. that introduced the concept. Available through Google Research publications.

  2. "Large Language Models are Zero-Shot Reasoners" - Kojima et al.'s groundbreaking work on zero-shot Chain-of-Thought prompting. Published in Neural Information Processing Systems (NeurIPS).

  3. "Self-Consistency Improves Chain of Thought Reasoning in Language Models" - Wang et al.'s research on advanced CoT techniques. Available through the International Conference on Learning Representations (ICLR).

  4. OpenAI's GPT-4 Technical Report - Comprehensive analysis of Chain-of-Thought capabilities in state-of-the-art models. Available through OpenAI's official research publications.

  5. "Prompt Engineering Guide" - A comprehensive online resource covering various prompting techniques including Chain-of-Thought. Maintained by the open-source community at promptingguide.ai.