
MM-Vet v2: A Challenging Benchmark to Evaluate Large Multimodal Models for Integrated Capabilities (Paper Review)

https://arxiv.org/pdf/2408.00765
https://huggingface.co/spaces/whyu/MM-Vet-v2_Evaluator

MM-Vet, with open-ended vision-language questions targeting at evaluating integrated capabilities, has become one of the most popular benchmarks for large multimodal model evaluation. MM-Vet assesses six core vision-language (VL) capabilities: r..
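Since MM-Vet-style benchmarks grade open-ended answers with an LLM scorer rather than exact string matching, a rough sketch of that protocol may help. This is a minimal illustration only, assuming a grader that emits a soft score in [0, 1]; query_llm and the prompt wording are hypothetical stand-ins, not the paper's actual few-shot template.

# Minimal sketch of MM-Vet-style open-ended grading (assumptions noted above).

def build_grading_prompt(question: str, ground_truth: str, prediction: str) -> str:
    # Illustrative prompt, not the paper's template.
    return (
        "Compare the model answer with the ground truth and output a single "
        "correctness score between 0.0 and 1.0.\n"
        f"Question: {question}\n"
        f"Ground truth: {ground_truth}\n"
        f"Model answer: {prediction}\n"
        "Score:"
    )

def mmvet_style_score(question, ground_truth, prediction, query_llm, n_samples=3):
    # Sample the grader several times and average, which smooths out
    # the grader's own variance on borderline answers.
    scores = []
    for _ in range(n_samples):
        reply = query_llm(build_grading_prompt(question, ground_truth, prediction))
        try:
            scores.append(min(max(float(reply.strip()), 0.0), 1.0))
        except ValueError:
            scores.append(0.0)  # unparseable grader output counts as wrong
    return sum(scores) / len(scores)

# Toy usage with a stub grader that always answers "0.5".
if __name__ == "__main__":
    print(mmvet_style_score("What is shown?", "a red bus", "a bus", lambda p: "0.5"))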

Uncategorized 2024.10.15

MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark

https://arxiv.org/abs/2402.04788

Multimodal Large Language Models (MLLMs) have gained significant attention recently, showing remarkable potential in artificial general intelligence. However, assessing the utility of MLLMs presents considerable challenges, primarily due to the absence of..

GPT-4V(ision) as a Generalist Evaluator for Vision-Language Tasks (Paper Review)

https://arxiv.org/abs/2311.01361

Automatically evaluating vision-language tasks is challenging, especially when it comes to reflecting human judgments due to limitations in accounting for fine-grained details. Although GPT-4V has shown promising results in various multimodal tasks, leveraging GPT-4V as a generalist evaluator for these tasks has not yet been systematically explored. We comprehensiv..

VLFeedback: Silkie: Preference Distillation for Large Visual Language Models

https://arxiv.org/abs/2312.10665

This paper explores preference distillation for large vision language models (LVLMs), improving their ability to generate helpful and faithful responses anchoring the visual context. We first build a vision-language feedback (VLFeedback) dataset utilizing..
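The excerpt cuts off before the method, but preference distillation of this kind is commonly optimized with a DPO-style objective over chosen/rejected response pairs; below is a minimal sketch under that assumption. The tensor names are illustrative, and it presumes you already have summed log-probabilities of each response under the policy and a frozen reference model.

# Minimal DPO-style preference loss sketch (assumptions noted above).

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Log-ratio of policy vs. reference for each response; the loss pushes
    # the margin (chosen - rejected) to be positive, scaled by beta.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy usage with made-up log-probabilities for a batch of two pairs.
loss = dpo_loss(torch.tensor([-10.0, -12.0]), torch.tensor([-14.0, -13.0]),
                torch.tensor([-11.0, -12.5]), torch.tensor([-13.0, -12.8]))
print(loss.item())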

RLAIF-V: Aligning MLLMs through Open-source AI Feedback for Super GPT-4V Trustworthiness

https://arxiv.org/abs/2405.17220

Learning from feedback reduces the hallucination of multimodal large language models (MLLMs) by aligning them with human preferences. While traditional methods rely on labor-intensive and time-consuming manual labeling, recent approaches employing models as automatic labelers have shown promising results without human intervention. However, these methods heavily r..

RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback

https://arxiv.org/abs/2312.00849

Multimodal Large Language Models (MLLMs) have recently demonstrated impressive capabilities in multimodal understanding, reasoning, and interaction. However, existing MLLMs prevalently suffer from serious hallucination problems, generating text that is not factually grounded in associated images. The problem makes existing MLLMs untrustworthy and thus impractical ..

Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement (Paper Review)

https://arxiv.org/abs/2405.15973

Large vision-language models (LVLMs) have achieved impressive results in various visual question-answering and reasoning tasks through vision instruction tuning on specific datasets. However, there is still significant room for improvement in the alignment..

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets (Paper Review)

https://openreview.net/pdf?id=CYmF38ysDa

Evaluation of Large Language Models (LLMs) is challenging because instruction-following necessitates alignment with human values and the required set of skills varies depending on the instruction. However, previous studies have mainly focused on coarse-grained evaluation (i.e. overall preference-based evaluation), which limits interpretability since it doe..
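FLASK's core idea is scoring each response along separate skill dimensions instead of collapsing everything into one overall preference. The toy sketch below only illustrates that per-skill aggregation; the skill names here are examples, not the paper's full skill taxonomy.

# Toy sketch of FLASK-style fine-grained, per-skill score aggregation.

from statistics import mean

def flask_style_report(per_skill_scores: dict[str, list[int]]) -> dict[str, float]:
    # Average each skill independently so weaknesses stay interpretable,
    # rather than being hidden inside a single aggregate preference score.
    return {skill: mean(scores) for skill, scores in per_skill_scores.items()}

print(flask_style_report({
    "logical_robustness": [4, 5, 3],
    "factuality": [2, 3, 2],
    "conciseness": [5, 4, 5],
}))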

Uncategorized 2024.10.13