inference-time, RLHF / Process reward model (6)

Generative Verifiers (paper review)

Generative Verifiers, paper dated August 27, 2024. PRM and ORM verifiers are used to improve LLM reasoning performance; the common recipe is Best-of-N (BoN): generate several candidate solutions, rank them with a verifier, and pick the best. In that setup the verifier is trained purely as a scoring classifier, which fails to exploit the LLM's text-generation ability. In addition, unlike LLM-as-judge, this is a trained LLM-based verifier, so strategies such as majority voting can also be used (CoT verification is possible as well).
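Below is a minimal sketch of BoN selection with a generative verifier, assuming a GenRM-style setup where the verifier is a causal LM and the score is the probability of the token "Yes" after a verification prompt. The checkpoint name, prompt template, and helper names are placeholders, not taken from the paper.

```python
# Minimal sketch: Best-of-N with a generative (next-token-prediction) verifier.
# Assumes a HuggingFace causal LM fine-tuned so that P("Yes") after the
# verification prompt acts as the correctness score.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "your-genrm-checkpoint"  # hypothetical verifier checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def verifier_score(question: str, solution: str) -> float:
    """Score a candidate solution as the next-token probability of 'Yes'."""
    prompt = f"{question}\n{solution}\nIs the answer correct (Yes/No)? "
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]        # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    yes_id = tokenizer(" Yes", add_special_tokens=False).input_ids[0]
    return probs[yes_id].item()

def best_of_n(question: str, candidates: list[str]) -> str:
    """Rank self-generated candidates with the verifier and return the best."""
    scores = [verifier_score(question, c) for c in candidates]
    return candidates[max(range(len(candidates)), key=scores.__getitem__)]
```

Because the verifier stays generative, a CoT variant would first sample several verification rationales and average P("Yes") over them, which is where majority-voting-style aggregation comes in.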

V-STaR: Training Verifiers for Self-Taught Reasoners (paper review)

https://arxiv.org/abs/2402.06457
Common self-improvement approaches for large language models (LLMs), such as STaR, iteratively fine-tune LLMs on self-generated solutions to improve their problem-solving ability. However, these approaches discard the large amounts of incorrect solutions generated during this process, potentially neglecting valuable information in such solutions. To address this s..
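The excerpt cuts off before the method, but the core idea it motivates (keep the incorrect self-generated solutions instead of discarding them, and turn them into preference data for a verifier) can be sketched as below. All function names are illustrative placeholders, and the pairing scheme is a simplification.

```python
# Minimal sketch of one V-STaR-style data-collection iteration: correct
# self-generated solutions fine-tune the generator (as in STaR), while
# (correct, incorrect) pairs become preference data for a DPO-trained verifier.
import random

def collect_vstar_data(generate, is_correct, dataset, n=8):
    """Collect generator SFT data and verifier preference pairs for one iteration."""
    sft_data, preference_pairs = [], []
    for question, gold_answer in dataset:
        candidates = [generate(question) for _ in range(n)]   # self-generated solutions
        correct = [c for c in candidates if is_correct(c, gold_answer)]
        wrong = [c for c in candidates if not is_correct(c, gold_answer)]
        # STaR keeps only the correct solutions to fine-tune the generator ...
        sft_data += [(question, c) for c in correct]
        # ... while the incorrect ones are not thrown away: each correct solution
        # is paired with a wrong one as (chosen, rejected) data for the verifier.
        if correct and wrong:
            preference_pairs += [(question, c, random.choice(wrong)) for c in correct]
    return sft_data, preference_pairs
```

At inference time the trained verifier is then used to select among many candidate solutions, as in the BoN setup above.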

Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations (paper review)

https://arxiv.org/abs/2312.08935
Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations. In this paper, we present an innovative process-oriented math process reward model called Math-Shepherd, which assigns a reward score to each step of math problem solutions. The training of Math-Shepherd is achieved using automatically constructed..
In math problem solving, each step is assigned a re..
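A minimal sketch of the automatic step-labeling idea, assuming the completion-based estimation described in the paper: the label of a partial solution up to step i is estimated by rolling out completions from that point and checking how many reach the gold answer. `sample_completion` and `extract_answer` are placeholder callables.

```python
# Minimal sketch: Math-Shepherd-style automatic process supervision.
# Each step prefix is scored by the fraction of sampled completions
# that end at the gold answer (soft estimation).
def step_labels(question, steps, gold_answer, sample_completion, extract_answer, n=8):
    """Return a soft label in [0, 1] for each step prefix of one solution."""
    labels = []
    for i in range(1, len(steps) + 1):
        prefix = "\n".join(steps[:i])
        hits = 0
        for _ in range(n):
            completion = sample_completion(question, prefix)   # roll out from step i
            hits += int(extract_answer(completion) == gold_answer)
        labels.append(hits / n)   # fraction of rollouts that reach the correct answer
    return labels
```

These per-step labels are then used to train a process reward model without human step annotations; a hard-label variant would instead mark a step 1 if any rollout succeeds.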