When Can LLMs Actually Correct Their Own Mistakes? A Critical Survey of Self-Correction of LLMs
https://arxiv.org/pdf/2406.01297

Self-correction is an approach to improving responses from large language models (LLMs) by refining the responses using LLMs during inference. Prior work has proposed various self-correction frameworks using different sources of feedback, including self-evaluation and external feedback. However, there is still no consensus on the question of when LLMs can correct their own mistakes.
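
As a rough illustration of the framework the abstract describes, the sketch below shows a generic feedback-then-refine loop. The `llm` callable, the prompts, and the stopping rule are assumptions made for illustration; this is not the survey's method or any specific paper's exact algorithm.

```python
# A minimal sketch of an iterative self-correction loop, assuming a single
# hypothetical `llm(prompt) -> str` callable. The feedback stage here uses
# self-evaluation; an external verifier or tool could supply feedback instead.

def llm(prompt: str) -> str:
    """Placeholder for an LLM call; swap in a real API client."""
    raise NotImplementedError

def self_correct(task: str, max_rounds: int = 3) -> str:
    # Initial response to the task.
    response = llm(task)
    for _ in range(max_rounds):
        # Feedback stage: the model critiques its own output.
        feedback = llm(
            f"Task: {task}\nResponse: {response}\n"
            "Point out any mistakes, or reply 'OK' if there are none."
        )
        if feedback.strip() == "OK":
            break
        # Refinement stage: the model revises its response given the feedback.
        response = llm(
            f"Task: {task}\nResponse: {response}\n"
            f"Feedback: {feedback}\nRewrite the response, fixing the mistakes."
        )
    return response
```

The survey's central question is whether the feedback stage in a loop like this can actually be reliable when the feedback comes from the same model, as opposed to an external source.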