Guidance with Chain of Thought in Artificial Intelligence
Artificial intelligence (AI) systems have made significant progress in recent years, achieving human-level performance in areas such as image classification, speech recognition, and language translation. However, when it comes to more complex reasoning, today's AI still falls short of the versatile and adaptable human mind. A technique called "Chain of Thought Prompting" aims to advance the reasoning capabilities of AI by having models demonstrate their reasoning step by step.
Chain of thought prompting enables an AI to reach an answer by constructing a logical, sequential explanation in which each step follows from the last. The method elicits a series of coherent intermediate reasoning steps by providing examples that demonstrate how to solve a problem through a "chain of thought."
The method is actually quite simple: provide the AI with examples that show the reasoning paths leading to the answer.
Example: If Ezgi has 12 apples and gives 8 of them to her friend, how many apples will she have left? Let's look at how a model prompted this way would approach the question.
1- Ezgi had 12 apples.
2- Ezgi gave 8 apples to her friend.
3- So she had 12 - 8 = 4 apples left.
4- Therefore, the answer is 4.
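The steps above can be turned into a few-shot prompt. Below is a minimal sketch in Python; the `build_prompt` helper and the example text are illustrative, not part of any specific model's API.

```python
# One worked example in the "chain of thought" style: the question is
# followed by explicit intermediate steps and a final answer statement.
COT_EXAMPLE = (
    "Q: If Ezgi has 12 apples and gives 8 of them to her friend, "
    "how many apples will she have left?\n"
    "A: Ezgi had 12 apples. She gave 8 apples to her friend. "
    "So she had 12 - 8 = 4 apples left. Therefore, the answer is 4.\n"
)

def build_prompt(question: str) -> str:
    """Prepend the worked example so the model imitates its reasoning style."""
    return COT_EXAMPLE + "\nQ: " + question + "\nA:"

prompt = build_prompt("If a box holds 6 eggs, how many eggs are in 3 boxes?")
print(prompt)
```

The resulting string would be sent to a language model as its input; the trailing "A:" invites the model to continue with its own step-by-step chain rather than a bare answer.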
Given just a few such examples, a model can construct its own reasoning for new problems. Researchers tested this on several large language models, including PaLM, LaMDA, and GPT-3, with striking results:
- Math problems: Chain of thought prompting increased solve rates from 33% to 57% on challenging problems.
- Common-sense reasoning: It consistently improved performance on common-sense tasks, such as reasoning about historical or sporting events.
- Symbolic reasoning: Models were able to generalize from short examples and apply the learned reasoning patterns to longer test cases.
Key findings from these studies:
- The larger the model, the more coherent the reasoning chains it produces, enabling more human-like inferences.
- The method is particularly successful on complex, multi-step problems.
- Learning is robust: small differences in the examples or their wording have little effect.
- It improves understanding by breaking complex processes into intermediate steps.
- It makes the AI's thinking process more interpretable and systematic.
- It adapts to different kinds of reasoning, such as common-sense and symbolic reasoning.
- It can add reasoning ability to already-trained language models with minimal additional training.
- It can generate a broader range of ideas, much like brainstorming.
Although these developments are promising, open questions remain. How well do reasoning abilities scale with larger language models and more demonstrations? Can this approach extend beyond logical to causal reasoning? How reliably can these models make sense of real-world events through reasoning? Can AI systems learn to generate chains of thought on their own?
With proper training, artificial intelligence can have the capacity to create solutions for various situations it has never encountered before, just like how humans can reason through new problems using logic.