On a theory of hidden variables in chain of thoughts
Rasul Tutunov, Senior Research Scientist, Huawei R&D, Noah's Ark Lab, London

Chain-of-Thought (CoT)
CoT prompting: CoT is a prompting technique for large language models (LLMs) that improves their performance by providing demonstrations of several intermediate reasoning steps as exemplars. The pre-trained LLM is given an example with step-by-step reasoning alongside the prompt question; the model then also constructs a step-by-step solution, significantly improving its "reasoning" ability.

CoT (beyond math questions)
Few-shot exemplars of (input, chain of thought, output) triples for non-arithmetic tasks; the chains of thought are highlighted.

CoT is computationally efficient, as it does not require re-training or fine-tuning the model. But why does CoT work? What affects its performance?

Statistical model for natural language
Each CoT sequence generation has the following steps: a general task description describing the final goal behind the message.

Examples: arithmetic demonstration.
Context C: "Provide a simple arithmetic problem."
"Alice has 2 apples, Bob has 5 apples. Alice ate 1 apple and Bob ate 2 apples and gave 1 apple to John. How many apples do Alice and Bob have?"
"Calculate Alice's apples after she ate 1."
"Alice has 2 apples. She ate 1. Now, she has 1 apple."
"Calculate Bob's apples after he ate 2 and gave 1 apple to John."
"Bob has 5 apples. He ate 2 apples a
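The prompting setup described above amounts to prepending a worked exemplar, with its intermediate steps spelled out, to the new question; no model weights are touched. A minimal sketch, using the arithmetic demonstration above as the exemplar (`build_cot_prompt` is an illustrative helper, not part of any library, and the assembled string would be sent to any pre-trained LLM completion endpoint):

```python
# Sketch of Chain-of-Thought prompting: a few-shot exemplar with explicit
# intermediate reasoning steps is prepended to the new question.
# The resulting prompt would be passed to a pre-trained LLM as-is;
# no re-training or fine-tuning is involved.

COT_EXEMPLAR = (
    "Q: Alice has 2 apples, Bob has 5 apples. Alice ate 1 apple and Bob ate "
    "2 apples and gave 1 apple to John. How many apples do Alice and Bob have?\n"
    "A: Alice has 2 apples. She ate 1. Now, she has 1 apple. "
    "Bob has 5 apples. He ate 2 and gave 1 to John. Now, he has 2 apples. "
    "The answer is: Alice has 1 apple and Bob has 2 apples.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the step-by-step exemplar to a new question."""
    return COT_EXEMPLAR + "Q: " + question + "\nA:"

prompt = build_cot_prompt(
    "Carol has 4 pens and gives 1 pen to Dan. How many pens does Carol have?"
)
print(prompt)
```

Because the exemplar ends with an answer that walks through each intermediate quantity, the model tends to continue the final "A:" in the same step-by-step style rather than jumping straight to a number.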
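The intermediate steps in the demonstration decompose the question into one sub-calculation per person and per event. Sketched numerically (variable names are mine), each line mirrors one "Calculate ... after ..." sub-question:

```python
# Per-step decomposition of the arithmetic demonstration: each update
# corresponds to one intermediate reasoning step in the chain of thought.

alice = 2
alice -= 1   # "Alice ate 1 apple" -> now she has 1 apple

bob = 5
bob -= 2     # "Bob ate 2 apples"
bob -= 1     # "... and gave 1 apple to John" -> now he has 2 apples

print(alice, bob)  # -> 1 2
```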