Self-consistency prompting is a strategy that simply asks a model the same prompt multiple times and takes the majority result as the final answer.
It's a follow-up to CoT prompting.
Self-consistency prompting works by generating a set of reasoning paths in the AI's decision-making process.
By analyzing the consistency across these paths, it can discard any path that doesn't align with the initial premises.
Self-consistency aims "to replace the naive greedy decoding used in chain-of-thought prompting" (Wang et al.).
The idea is to sample multiple, diverse reasoning paths via few-shot CoT and use the generations to select the most consistent answer.
This helps boost the performance of CoT prompting on tasks involving arithmetic and commonsense reasoning.
Since you can often take numerous paths to arrive at the answer to a math problem, why not ask an LLM to create multiple reasoning paths?
CoT with self-consistency involves launching multiple diverse CoT threads, recording their outputs, and then taking the majority vote of those outputs as the final answer.
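The procedure above can be sketched in a few lines of Python. This is a minimal illustration, not any library's official API: `sample_fn` stands in for a real temperature > 0 LLM call, and the `answer is` regex is a hypothetical answer-extraction convention you would adapt to your prompts.

```python
import re
from collections import Counter

def extract_answer(completion: str):
    """Pull the final numeric answer out of a chain-of-thought completion."""
    match = re.search(r"answer is\s*(-?\d+)", completion)
    return match.group(1) if match else None

def self_consistency(sample_fn, prompt: str, n_paths: int = 5):
    """Sample n diverse reasoning paths and majority-vote their final answers."""
    answers = [extract_answer(sample_fn(prompt)) for _ in range(n_paths)]
    votes = Counter(a for a in answers if a is not None)
    return votes.most_common(1)[0][0] if votes else None

# Mock sampler standing in for a stochastic LLM: two of three reasoning
# paths reach 18, while one makes an arithmetic slip and lands on 19.
paths = iter([
    "16 - 3 - 4 = 9, then 9 * 2 = 18. The answer is 18",
    "3 + 4 = 7, 16 - 7 = 9, doubled is 18. The answer is 18",
    "16 - 3 = 13, 13 - 4 = 9, so the answer is 19",
])
print(self_consistency(lambda p: next(paths), "example math problem", n_paths=3))
# → 18
```

The key design choice is that votes are cast on the extracted final answers, not the full reasoning texts, so paths that reach the same conclusion by different routes still reinforce each other.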