
RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness

https://arxiv.org/abs/2405.17220 Learning from feedback reduces the hallucination of multimodal large language models (MLLMs) by aligning them with human preferences. While traditional methods rely on labor-intensive and time-consuming manual labeling, recent approaches employing models as automatic labelers have shown promising results without human intervention. However, these methods heavily r..

RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback

https://arxiv.org/abs/2312.00849 Multimodal Large Language Models (MLLMs) have recently demonstrated impressive capabilities in multimodal understanding, reasoning, and interaction. However, existing MLLMs prevalently suffer from serious hallucination problems, generating text that is not factually grounded in associated images. The problem makes existing MLLMs untrustworthy and thus impractical ..

Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement (Paper Review)

https://arxiv.org/abs/2405.15973 Large vision-language models (LVLMs) have achieved impressive results in various visual question-answering and reasoning tasks through vision instruction tuning on specific datasets. However, there is still significant room for improvement in the alignment..

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets (Paper Review)

https://openreview.net/pdf?id=CYmF38ysDa Evaluation of Large Language Models (LLMs) is challenging because instruction-following necessitates alignment with human values and the required set of skills varies depending on the instruction. However, previous studies have mainly focused on coarse-grained evaluation (i.e. overall preference-based evaluation), which limits interpretability since it doe..

Uncategorized · 2024.10.13