https://arxiv.org/pdf/2406.07155
Analogous to the power law in LLMs, does simply increasing the number of agents give rise to emergent capabilities?
Inspired by the neural scaling law, a natural question arises: does increasing agents in multi-agent collaboration exhibit emergent capabilities?
3. Multi-Agent Collaboration Network
This arrangement ensures that each node-occupied agent ai precedes its corresponding edge-occupied agent aij, and aij precedes aj, thereby guaranteeing orderly information transmission.
Specifically, ai requests feedback, aij offers optimization suggestions and requests further refinement, and aj provides the refined solution.
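The topological ordering and the three-step exchange along each edge can be sketched as follows. This is a minimal, hypothetical rendering: `llm` is a placeholder for a real model call, and the function/agent names (`interact`, `propagate`, `a{i}`) are illustrative, not the paper's implementation.

```python
from graphlib import TopologicalSorter

def llm(prompt: str) -> str:
    # Placeholder for a real model call; tags the prompt so the flow is visible.
    return f"<{prompt}>"

def interact(a_i: str, a_ij: str, a_j: str, solution: str) -> str:
    """One dual-agent exchange: a_i -> a_ij -> a_j."""
    request = llm(f"{a_i} requests feedback on {solution}")
    suggestion = llm(f"{a_ij} suggests refinements to {request}")
    return llm(f"{a_j} provides a refined solution given {suggestion}")

def propagate(dag: dict, task: str) -> dict:
    """Visit nodes in topological order, so each a_i is final before a_ij fires."""
    solutions: dict = {}
    for j in TopologicalSorter(dag).static_order():
        solutions[j] = task  # source nodes start from the raw task
        for i in dag.get(j, ()):
            solutions[j] = interact(f"a{i}", f"a{i}{j}", f"a{j}", solutions[i])
    return solutions
```

For a chain `{1: set(), 2: {1}, 3: {2}}`, node 1's draft flows through edge agent a12 into node 2, whose refined solution then flows through a23 into node 3, matching the ai → aij → aj ordering above.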
3.3 Memory Control
To address the context-length problem, a heuristic memory-control mechanism is adopted (following ChatDev).
Short-term memory captures the intra-interaction working memory during each dual-agent interaction, ensuring context-aware decision-making. Long-term memory maintains inter-interaction context continuity by transmitting only the final solutions derived from dialogues, not the entire conversational history.
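A rough sketch of this two-level memory scheme, assuming hypothetical class and function names (the actual mechanism follows ChatDev's heuristics):

```python
class DualAgentDialogue:
    """Short-term memory: the full transcript within one interaction."""

    def __init__(self) -> None:
        self.transcript: list[str] = []  # intra-interaction working memory

    def say(self, agent: str, utterance: str) -> None:
        self.transcript.append(f"{agent}: {utterance}")

    def final_solution(self) -> str:
        # Only the last utterance (the refined solution) survives the dialogue.
        return self.transcript[-1]

def long_term_context(dialogues: list[DualAgentDialogue]) -> list[str]:
    """Long-term memory: carry only final solutions, not whole transcripts."""
    return [d.final_solution() for d in dialogues]
```

The point of the split: every turn stays visible inside a dialogue (short-term), but only one line per dialogue crosses interaction boundaries (long-term), which keeps the downstream context from growing with the full conversation history.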
Finally, a convergent agent synthesizes the collected solutions, assessing each solution's strengths and weaknesses so as to retain the strengths and discard the weaknesses.
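The final synthesis step might be sketched as below. Again `llm` is a stand-in for a real model call, and `converge` is a hypothetical name for the convergent agent's assess-then-synthesize pass, not the paper's code.

```python
def llm(prompt: str) -> str:
    # Placeholder model call; tags the prompt so the synthesis step is visible.
    return f"<{prompt}>"

def converge(solutions: list[str]) -> str:
    """Convergent agent: assess each solution, then synthesize one answer."""
    assessments = [llm(f"assess strengths and weaknesses of {s}")
                   for s in solutions]
    return llm("synthesize, keeping strengths and discarding weaknesses: "
               + "; ".join(assessments))
```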