
Step-Controlled DPO: Leveraging Stepwise Error for Enhanced Mathematical Reasoning (Paper Review)

https://arxiv.org/pdf/2407.00782
We introduce Step-Controlled DPO (SCDPO), which we empirically show improves the performance of DPO in enhancing LLMs' mathematical reasoning abilities. We also conduct qualitative analysis of credit assignment of SCDPO. • We conduct experiments on chain-of-thought and code-integrated solutions, showing that SCDPO can effectively improve mathematical problem-solvi..
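
Since the excerpt cuts off, a quick sketch for context: as I understand it, SCDPO keeps the standard DPO objective and differs mainly in how the preference pairs are built (the rejected solution is generated so that its first error appears at a controlled step, per the title). A minimal sketch of that shared DPO loss, assuming per-sequence log-probabilities have already been summed over tokens:

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Standard DPO: widen the gap between the policy's and the reference
    # model's log-ratios of chosen vs. rejected solutions.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()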

Uncategorized 2024.08.20

Self-Rewarding Language Models (Paper Review)

https://arxiv.org/pdf/2401.10020
Summary: self-alignment, AI feedback, iteratively generating DPO pair datasets. Training a reward model on human preference data is bottlenecked by the human performance level (keeping in mind this is an early-2024 paper), and the RM is frozen, so it cannot improve while the policy (LLM) trains. DPO does not require an RM, but to prepare the pair dataset the model rewards itself via LLM-as-a-Judge prompting: the LLM generates answers to prompts, the same LLM evaluates them to assign rewards, and the selected pairs are then used for DPO..
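
A minimal sketch of that loop, with generate, judge_score, and dpo_train passed in as hypothetical stand-ins for the model's sampling, LLM-as-a-Judge scoring, and DPO training step (none of these names come from the paper):

def build_preference_pairs(model, prompts, generate, judge_score, n_samples=4):
    # One self-rewarding round: sample candidates, score them with the
    # same model acting as judge, keep the best/worst as a DPO pair.
    pairs = []
    for prompt in prompts:
        candidates = [generate(model, prompt) for _ in range(n_samples)]
        scored = sorted(candidates, key=lambda c: judge_score(model, prompt, c))
        pairs.append((prompt, scored[-1], scored[0]))  # (chosen, rejected)
    return pairs

def self_rewarding_training(model, prompts, generate, judge_score, dpo_train,
                            iterations=3):
    # Each iteration trains M_{t+1} via DPO on pairs that M_t itself
    # generated and judged, so the reward signal improves with the policy.
    for _ in range(iterations):
        pairs = build_preference_pairs(model, prompts, generate, judge_score)
        model = dpo_train(model, pairs)
    return model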

Uncategorized 2024.08.18

Tree Search for Language Model Agents

https://arxiv.org/abs/2407.01476
Autonomous agents powered by language models (LMs) have demonstrated promise in their ability to perform decision-making tasks such as web automation. However, a key limitation remains: LMs, primarily optimized for natural language understanding and genera..
A best-first tree search inference algorithm; decision-making ta.. such as web automation processes
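
A minimal sketch of best-first search over agent states, assuming hypothetical expand (candidate-action proposer), value (value-function scorer), and is_goal callbacks; the paper's agent-specific details are not reproduced here:

import heapq

def best_first_search(root, expand, value, is_goal, budget=50):
    # Frontier keyed by negated value so heapq (a min-heap) pops the
    # highest-value state first; `tie` avoids comparing states directly.
    frontier = [(-value(root), 0, root)]
    tie = 1
    while frontier and budget > 0:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for child in expand(state):  # e.g. candidate actions on a web page
            heapq.heappush(frontier, (-value(child), tie, child))
            tie += 1
        budget -= 1
    return None  # search budget exhausted without reaching the goal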

Uncategorized 2024.08.18