inference-time, RLHF 41

Forest-of-Thought paper summary

https://arxiv.org/html/2412.09078v1#S4
Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning (Zhenni Bi, Kai Han, Chuanjian Liu, Yehui Tang, Yunhe Wang). Abstract: "Large Language Models (LLMs) have shown remarkable abilities across various language tasks, …"
Benchmarks: GSM8K, MATH. Section 3.1: FoT framework.
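A minimal sketch of the consensus idea behind FoT: sample several independent reasoning "trees" and aggregate their answers by majority vote. The function names and the stand-in sampler are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of FoT-style consensus over independently sampled reasoning trees.
# `sample_fn` is a hypothetical stand-in for running one reasoning tree.
from collections import Counter

def majority_vote(answers):
    """Return the most common answer among the sampled candidates."""
    answer, _ = Counter(answers).most_common(1)[0]
    return answer

def solve_with_forest(sample_fn, question, n_trees=5):
    """Run the sampler n_trees times and aggregate by majority vote,
    mirroring the consensus step of test-time compute scaling."""
    candidates = [sample_fn(question) for _ in range(n_trees)]
    return majority_vote(candidates)
```

In practice each tree would be a full tree-of-thought style search; here a single callable stands in for that step so the aggregation logic is visible in isolation.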

MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark

https://arxiv.org/abs/2402.04788
Abstract: "Multimodal Large Language Models (MLLMs) have gained significant attention recently, showing remarkable potential in artificial general intelligence. However, assessing the utility of MLLMs presents considerable challenges, primarily due to the absence of …"
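A tiny sketch of the pairwise-comparison setting that LLM-as-a-judge work evaluates: build a judging prompt over two candidate answers and parse the verdict. The prompt template and the parse rule are assumptions for illustration, not the benchmark's actual protocol.

```python
# Hypothetical pairwise LLM-as-a-judge helpers: prompt construction and
# verdict parsing. The template wording is an illustrative assumption.
def build_judge_prompt(question, answer_a, answer_b):
    """Assemble a pairwise comparison prompt for a judge model."""
    return (
        "You are an impartial judge. Compare the two answers.\n"
        f"Question: {question}\n"
        f"Answer A: {answer_a}\n"
        f"Answer B: {answer_b}\n"
        "Reply with exactly 'A', 'B', or 'Tie'."
    )

def parse_verdict(reply):
    """Normalize the judge's raw reply into A / B / TIE / INVALID."""
    verdict = reply.strip().upper()
    return verdict if verdict in {"A", "B", "TIE"} else "INVALID"
```

The MLLM setting additionally attaches the image to the prompt; the text-side scaffolding is the same shape.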

GPT-4V(ision) as a Generalist Evaluator for Vision-Language Tasks paper review

https://arxiv.org/abs/2311.01361
Abstract: "Automatically evaluating vision-language tasks is challenging, especially when it comes to reflecting human judgments due to limitations in accounting for fine-grained details. Although GPT-4V has shown promising results in various multimodal tasks, leveraging GPT-4V as a generalist evaluator for these tasks has not yet been systematically explored. We comprehensively …"

VLFeedback / Silkie: Preference Distillation for Large Visual Language Models

https://arxiv.org/abs/2312.10665
Abstract: "This paper explores preference distillation for large vision language models (LVLMs), improving their ability to generate helpful and faithful responses anchoring the visual context. We first build a vision-language feedback (VLFeedback) dataset utilizing …"
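Preference distillation of this kind is commonly trained with a DPO-style objective over chosen/rejected response pairs. Below is a sketch of that per-pair loss with scalar log-probabilities standing in for model outputs; it is an illustrative simplification, not Silkie's actual training code.

```python
# Sketch of the DPO preference loss for one (chosen, rejected) pair.
# Inputs are summed log-probs of each response under the policy and a
# frozen reference model; scalars here stand in for real model outputs.
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """-log(sigmoid(beta * (policy margin - reference margin)))."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The loss shrinks as the policy prefers the chosen response more strongly than the reference does, which is the mechanism that transfers the feedback dataset's preferences into the model.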

RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness

https://arxiv.org/abs/2405.17220
Abstract: "Learning from feedback reduces the hallucination of multimodal large language models (MLLMs) by aligning them with human preferences. While traditional methods rely on labor-intensive and time-consuming manual labeling, recent approaches employing models as automatic labelers have shown promising results without human intervention. However, these methods heavily r…"

RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback

https://arxiv.org/abs/2312.00849
Abstract: "Multimodal Large Language Models (MLLMs) have recently demonstrated impressive capabilities in multimodal understanding, reasoning, and interaction. However, existing MLLMs prevalently suffer from serious hallucination problems, generating text that is not factually grounded in associated images. The problem makes existing MLLMs untrustworthy and thus impractical …"

Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement paper review

https://arxiv.org/abs/2405.15973
Abstract: "Large vision-language models (LVLMs) have achieved impressive results in various visual question-answering and reasoning tasks through vision instruction tuning on specific datasets. However, there is still significant room for improvement in the alignment …"