
Generative Verifiers paper review

Generative Verifiers, a paper dated August 27, 2024. PRM and ORM verifiers are used to raise the reasoning performance of LLMs. The common recipe is Best-of-N (BoN): generate several candidate solutions, rank them with a verifier, and select the best one. The verifier here is trained purely as a score-only classifier, which fails to exploit the LLM's text-generation ability. Also, unlike an off-the-shelf LLM-as-judge this verifier is trained, but because it is still an LLM-based verifier it can use strategies such as majority voting (and CoT as well).
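
To make the BoN recipe above concrete, here is a minimal Python sketch. `generate_candidates` and `p_yes` are hypothetical stubs standing in for real model calls (the paper scores a solution via the verifier's probability of a "Yes" token); the majority-voting variant simply averages that score over several sampled verification rationales. This is a sketch under those assumptions, not the paper's implementation.

```python
import random

# Hypothetical stand-ins for actual model calls.
def generate_candidates(problem: str, n: int) -> list[str]:
    """Sample n candidate solutions from the policy LLM (stubbed)."""
    return [f"solution-{i} to {problem}" for i in range(n)]

def p_yes(problem: str, solution: str) -> float:
    """Generative-verifier-style score: probability of the 'Yes' token
    when asked 'Is this solution correct?' (stubbed with a random number)."""
    return random.random()

def best_of_n(problem: str, n: int = 8) -> str:
    """Best-of-N: sample candidates, rank by verifier score, keep the best."""
    candidates = generate_candidates(problem, n)
    return max(candidates, key=lambda s: p_yes(problem, s))

def best_of_n_majority(problem: str, n: int = 8, k: int = 4) -> str:
    """Majority-voting variant: average the 'Yes' probability over k
    sampled verification CoTs per candidate before ranking."""
    candidates = generate_candidates(problem, n)
    avg_score = lambda s: sum(p_yes(problem, s) for _ in range(k)) / k
    return max(candidates, key=avg_score)

print(best_of_n("2 + 2 = ?"))
```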

ReST-MCTS paper review

In self-training, false-positive data can arise: traces whose final answer happens to be correct even though the intermediate steps contain errors (wrong or useless ones). One way to tackle this issue is a verifier or reward model (the Math-Shepherd paper, the Let's Verify Step by Step paper); in fact the method outperforms ReST, Self-Rewarding, CoT, ToT, Self-Consistency, and Best-of-N. SC: sample many reasoning traces and pick the most frequent answer. BoN: a PRM or ORM selects among the candidates. Historically, the main challenge with learni..
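
A minimal sketch of the SC baseline mentioned above: sample several reasoning traces and return the most frequent final answer. `sample_trace` and `extract_answer` are hypothetical stubs for an LLM sampling call and an answer parser.

```python
import random
from collections import Counter

# Hypothetical stubs for a sampled CoT trace and its parsed final answer.
def sample_trace(problem: str) -> str:
    return f"... therefore the answer is {random.choice(['4', '4', '5'])}"

def extract_answer(trace: str) -> str:
    return trace.rsplit(" ", 1)[-1]

def self_consistency(problem: str, n: int = 16) -> str:
    """Self-consistency: sample n reasoning traces and return the most
    frequent final answer, marginalizing over reasoning paths."""
    answers = [extract_answer(sample_trace(problem)) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("2 + 2 = ?"))
```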

V-STaR: Training Verifiers for Self-Taught Reasoners paper review

https://arxiv.org/abs/2402.06457
Common self-improvement approaches for large language models (LLMs), such as STaR, iteratively fine-tune LLMs on self-generated solutions to improve their problem-solving ability. However, these approaches discard the large amounts of incorrect solutions generated during this process, potentially neglecting valuable information in such solutions. To address this s..
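
A rough sketch of the data-collection loop the abstract describes, under the assumption (per the paper) that correct self-generated solutions become fine-tuning data for the generator while correct/incorrect pairs become preference data for a DPO-trained verifier. `generate_candidates` and `is_correct` are hypothetical stubs.

```python
# Hypothetical stubs for the policy model and the answer checker.
def generate_candidates(problem: str, n: int) -> list[str]:
    return [f"candidate-{i}" for i in range(n)]

def is_correct(problem: str, solution: str, gold: str) -> bool:
    return solution.endswith(gold)  # stub: a real check parses the final answer

def collect_vstar_data(dataset: list[tuple[str, str]], n: int = 8):
    """One V-STaR-style iteration: keep correct solutions as SFT data for
    the generator; pair correct (chosen) with incorrect (rejected)
    solutions as DPO preference data for the verifier."""
    sft_data, dpo_pairs = [], []
    for problem, gold in dataset:
        solutions = generate_candidates(problem, n)
        correct = [s for s in solutions if is_correct(problem, s, gold)]
        wrong = [s for s in solutions if not is_correct(problem, s, gold)]
        sft_data += [(problem, s) for s in correct]
        dpo_pairs += [(problem, c, w) for c in correct for w in wrong]
    return sft_data, dpo_pairs
```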

Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations paper review

https://arxiv.org/abs/2312.08935
In this paper, we present an innovative process-oriented math process reward model called Math-Shepherd, which assigns a reward score to each step of math problem solutions. The training of Math-Shepherd is achieved using automatically constructed.. In math problem solving, each step is assigned a re..
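
A minimal sketch of the automatic construction the abstract refers to, assuming (per the paper) Monte Carlo completions from each step prefix: a step's reward is the fraction of rollouts from that prefix that reach the gold answer (the paper's soft-estimation labeling). `rollout_answer` is a hypothetical stub.

```python
import random

def rollout_answer(problem: str, prefix_steps: list[str]) -> str:
    """Hypothetical stub: decode one completion from the given step
    prefix and return its final answer."""
    return random.choice(["4", "5"])

def step_rewards(problem: str, steps: list[str], gold: str, k: int = 8) -> list[float]:
    """Label each step by the fraction of k rollouts from its prefix
    that reach the gold answer."""
    rewards = []
    for i in range(1, len(steps) + 1):
        hits = sum(rollout_answer(problem, steps[:i]) == gold for _ in range(k))
        rewards.append(hits / k)
    return rewards

print(step_rewards("2 + 2 = ?", ["2 + 2", "= 4"], gold="4"))
```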