
ARGS: Alignment as Reward-Guided Search (Paper Review)

jinuklee 2024. 11. 1. 19:57

https://arxiv.org/abs/2402.01694 

 


Aligning large language models with human objectives is paramount, yet common approaches including RLHF suffer from unstable and resource-intensive training.

 

In response to this challenge, we introduce ARGS, Alignment as Reward-Guided Search, a novel framework that integrates alignment into the decoding process, eliminating the need for expensive RL training.

 

By adjusting the model’s probabilistic predictions using a reward signal, ARGS generates texts with semantic diversity while being aligned with human preferences, offering a promising and flexible solution for aligning language models.

 

Notably, ARGS demonstrates consistent enhancements in average reward compared to baselines across diverse alignment tasks and various model dimensions.

 

For example, under the same greedy-based decoding strategy, our method improves the average reward by 19.56% relative to the baseline and secures a preference or tie score of 64.33% in GPT-4 evaluation.

 

We believe that our framework, emphasizing decoding-time alignment, paves the way for more responsive language models in the future. Code is publicly available at: https://github.com/deeplearning-wisc/args.

 

In this section, we introduce ARGS, a novel decoding framework that facilitates the alignment of generated text with human preferences by employing a reward mechanism that directly guides the text generation process of a language model.

 

Our method has two main components: (1) reward-guided scoring, which assigns scores to candidate next tokens, and (2) token selection, which chooses the next token based on those scores.

 

We detail the reward-guided scoring method in Section 2.1 and the token selection methods in Section 2.2.
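To make the two components concrete, here is a minimal sketch of how a reward-guided decoding step could look: each top-k candidate token is scored by its LM log-probability plus a weighted reward for the resulting continuation, and the highest-scoring token is selected greedily. The callables lm_next_token_logprobs and reward_model_score, and the parameters k and w, are illustrative assumptions for this sketch, not the authors' released implementation (see the linked repository for that).

```python
import torch

def args_greedy_step(tokens, lm_next_token_logprobs, reward_model_score,
                     k=10, w=1.0):
    """One reward-guided decoding step (sketch).

    tokens: list[int] of token ids generated so far (the context)
    lm_next_token_logprobs(tokens) -> 1-D tensor of log-probs over the vocab
    reward_model_score(tokens) -> float reward for the full sequence
    k: number of top LM candidates scored by the reward model
    w: weight trading off LM likelihood against reward
    """
    logprobs = lm_next_token_logprobs(tokens)        # shape: (vocab_size,)
    top_logp, top_ids = torch.topk(logprobs, k)      # restrict to top-k candidates

    best_score, best_id = -float("inf"), None
    for logp, tok in zip(top_logp.tolist(), top_ids.tolist()):
        # Reward-guided score: LM log-probability plus weighted reward
        # of the candidate continuation (context + candidate token).
        score = logp + w * reward_model_score(tokens + [tok])
        if score > best_score:
            best_score, best_id = score, tok
    return best_id

def args_decode(prompt_tokens, lm_next_token_logprobs, reward_model_score,
                max_new_tokens=64, k=10, w=1.0, eos_id=None):
    """Generate tokens autoregressively with reward-guided greedy selection."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        nxt = args_greedy_step(tokens, lm_next_token_logprobs,
                               reward_model_score, k=k, w=w)
        tokens.append(nxt)
        if eos_id is not None and nxt == eos_id:
            break
    return tokens
```

Setting w = 0 recovers ordinary greedy decoding, while larger w pushes generation toward continuations the reward model prefers, which is the trade-off the paper's reward-guided scoring controls.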