
Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters — paper review

https://arxiv.org/abs/2408.03314 "(1) searching against dense, process-based verifier reward models; and (2) updating the model's distribution over a response adaptively, given the prompt at test time. We find that in both cases, the effectiveness of different approaches to scaling test-time compute critically varies depending on the difficulty of the prompt." Between growing the model size and spending additional computation at test time, ..
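Strategy (1) from the excerpt — searching against a verifier — can be sketched as best-of-N sampling. This is a minimal toy illustration, not the paper's method: `sample_responses` and `verifier_score` are hypothetical stand-ins for the LLM sampler and the process-based reward model.

```python
# Toy sketch of verifier-guided test-time compute: sample N candidates,
# score each with a verifier, and return the highest-scoring one.
import random

def sample_responses(prompt, n, rng):
    # Stand-in for sampling n candidate responses from an LLM.
    return [f"{prompt}-candidate-{rng.randint(0, 999)}" for _ in range(n)]

def verifier_score(response):
    # Stand-in for a dense, process-based reward model; here just a
    # deterministic toy score derived from the string.
    return sum(ord(c) for c in response) % 100

def best_of_n(prompt, n=8, seed=0):
    """Spend more test-time compute (larger n) to pick a better response."""
    rng = random.Random(seed)
    candidates = sample_responses(prompt, n, rng)
    return max(candidates, key=verifier_score)
```

The paper's finding — that the best strategy depends on prompt difficulty — would correspond here to choosing `n` (and the search strategy) adaptively per prompt rather than fixing it globally.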

Uncategorized 2024.08.08

ToRA (A Tool-Integrated Reasoning Agent for Mathematical Problem Solving) — paper review

https://openreview.net/pdf?id=Ep0TtjVoap (a) is CoT, (b) is PAL, and (c) is ToRA, whose rationale (CoT) integrates tool use (PAL). Imitation learning: train a model M on a ToRA corpus built with a model such as GPT-4. Output space shaping: sample ToRA trajectories from M, have a teacher model evaluate and validate them, then use the corrected trajectories as additional training corpus.
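The tool-integrated rationale described above alternates between natural-language reasoning steps and executed program snippets, feeding each tool output back into the next step. A minimal sketch, with `generate_step` as a hypothetical stub standing in for the trained model M:

```python
# Toy sketch of a ToRA-style reasoning loop: interleave rationales (CoT)
# with executable programs (PAL) until the model signals it is done.
def generate_step(question, history):
    # Stand-in for the model proposing (rationale, program) for the next step.
    if not history:
        return ("Compute the product first.", "result = 6 * 7")
    return ("The product is the final answer.", None)  # None => stop

def run_tool(program):
    # Execute the snippet and return its 'result' variable, playing the
    # role of the external tool (e.g. a Python interpreter).
    scope = {}
    exec(program, scope)
    return scope["result"]

def tora_loop(question, max_steps=4):
    history, answer = [], None
    for _ in range(max_steps):
        rationale, program = generate_step(question, history)
        if program is None:
            break
        answer = run_tool(program)
        history.append((rationale, program, answer))
    return answer

# tora_loop("What is 6 * 7?") → 42
```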

AgentScope (A Flexible yet Robust Multi-Agent Platform) — paper review

The key point: https://arxiv.org/pdf/2402.14034 https://github.com/modelscope/agentscope The following aspects feature the challenge: (1) agents involved in a multi-agent application can specialize at diff..

agent/multi - agent 2024.08.07

GPTSwarm (Language Agents as Optimizable Graphs) — paper review

https://arxiv.org/pdf/2402.16823 Nodes are operations (LLM queries, tool use); edges are communication between agents. "The nodes implement functions to process multimodal data or query LLMs, and the edges describe the information flow between operations." 2. GPTSwarm 2.1. Language Agents as Graphs 2.2. Graph Definition: a node receives input x and context information z from its predecessor nodes and applies a computational routine f. 2.3. Edge Optimization ..
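The graph abstraction above — each node applies a routine f to the input x plus context z gathered from its predecessors — can be sketched in a few lines. The node routines here are toy lambdas, hypothetical stand-ins for LLM queries and tool calls:

```python
# Toy sketch of "language agents as graphs": nodes are operations,
# edges carry predecessor outputs as context.
class Node:
    def __init__(self, name, f):
        self.name, self.f, self.preds = name, f, []

def run_graph(nodes_in_order, x):
    """Execute nodes in topological order; edges feed predecessor outputs."""
    outputs = {}
    for node in nodes_in_order:
        z = [outputs[p.name] for p in node.preds]  # context z from predecessors
        outputs[node.name] = node.f(x, z)          # routine f(x, z)
    return outputs

a = Node("a", lambda x, z: x + 1)    # e.g. an LLM-query operation
b = Node("b", lambda x, z: x * 2)    # e.g. a tool-use operation
c = Node("c", lambda x, z: sum(z))   # aggregates its predecessors' outputs
c.preds = [a, b]
```

Edge optimization (section 2.3) would then amount to learning which `preds` connections to keep, which this sketch leaves fixed.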

agent/multi - agent 2024.08.06

MacNet (Scaling Large-Language-Model-based Multi-Agent Collaboration) — paper review

https://arxiv.org/pdf/2406.07155 By analogy with the neural scaling law for LLMs, does increasing the number of agents produce emergent capabilities? "Inspired by the neural scaling law, a natural question arises: does increasing agents in multi-agent collaboration exhibit emergent capabilities?" 3. Multi-Agent Collaboration Network: "This arrangement ensures that each node-occupied agent ai precedes its corresponding edge-occupied agent aij, and aij p..
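The quoted ordering constraint — each node-occupied agent a_i precedes the edge-occupied agent a_ij on edge (i, j) — can be realized with a topological traversal that emits edge agents right after their source node. This is a hedged sketch of one way to satisfy that constraint, not MacNet's actual scheduling:

```python
# Toy sketch: order agents over a DAG so that a_i comes before a_ij,
# and a_ij before a_j, for every edge (i, j).
def collaboration_order(nodes, edges):
    """Return an agent sequence respecting the node/edge precedence."""
    indeg = {n: 0 for n in nodes}
    for i, j in edges:
        indeg[j] += 1
    order = []
    ready = [n for n in nodes if indeg[n] == 0]
    while ready:
        i = ready.pop(0)
        order.append(f"a_{i}")             # node-occupied agent
        for (u, v) in edges:
            if u == i:
                order.append(f"a_{u}{v}")  # edge-occupied agent on (u, v)
                indeg[v] -= 1
                if indeg[v] == 0:
                    ready.append(v)
    return order
```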

agent/multi - agent 2024.08.05

Reflexion: Language Agents with Verbal Reinforcement Learning — paper review

1. Introduction: For example, in Figure 1 the Reflexion agent learns through trial, error, and self-reflection to optimize its own behavior for solving decision-making, programming, and reasoning tasks. Generating useful reflective feedback is challenging because it requires both an understanding of where the model made its mistakes (i.e., the credit assignment problem) and the ability to produce a summary containing actionable insights for improvement. We explore three ways to do this: 1) simple binary environment feedback; 2) pre-defined heuristics for co..
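The trial, error, and self-reflect cycle above can be sketched as a loop that stores verbal reflections in episodic memory and retries with them in context. `actor`, `evaluate`, and `reflect` are hypothetical stubs standing in for the LLM policy, the binary environment feedback (option 1 from the excerpt), and the self-reflection model:

```python
# Toy sketch of the Reflexion loop: attempt, get binary feedback,
# verbally reflect on failure, and retry with reflections in memory.
def actor(task, reflections):
    # Stand-in for the policy LLM: succeeds once it has a reflection to use.
    return "good answer" if reflections else "bad answer"

def evaluate(attempt):
    # Simple binary environment feedback.
    return attempt == "good answer"

def reflect(task, attempt):
    # Stand-in for verbal self-reflection on the failed attempt.
    return f"Attempt '{attempt}' failed on '{task}'; try a different approach."

def reflexion(task, max_trials=3):
    memory = []  # episodic memory of verbal reflections
    for trial in range(1, max_trials + 1):
        attempt = actor(task, memory)
        if evaluate(attempt):
            return attempt, trial
        memory.append(reflect(task, attempt))
    return attempt, max_trials
```

The "verbal" in verbal reinforcement learning is the key design choice: the feedback signal is stored as natural-language reflections in memory rather than as gradient updates to weights.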

agent/multi - agent 2024.08.03