This study explores the potential of Multi-Agent Large Language Models (MALLMs) to enhance Social Network Analysis (SNA) for online education. It compares MALLMs with single-agent LLMs in conducting, interpreting and applying SNA, addressing barriers that limit its adoption.
An exploratory experiment built with AutoGen compared MALLMs and single-agent LLMs across multi-step SNA workflows on a Coursera discussion dataset. The workflows covered data exploration, analysis and visualization, with specialized agent teams assigned to analysis and interpretation. Performance was tested over 20 rounds and evaluated on comprehension, accuracy, execution and educational relevance.
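The abstract does not include implementation details; as a rough sketch only, the following shows how such a specialized agent team could be wired together with AutoGen's GroupChat API. The agent names, prompts, round cap and gpt-4o configuration are illustrative assumptions, not the authors' actual setup.

```python
import autogen

# Shared LLM configuration; model choice and key placement are assumptions.
llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

# Hypothetical specialized agents mirroring the "analysis" and "interpretation" roles.
analyst = autogen.AssistantAgent(
    name="sna_analyst",
    system_message="Run social network analysis (centrality, communities) on the forum reply graph.",
    llm_config=llm_config,
)
interpreter = autogen.AssistantAgent(
    name="sna_interpreter",
    system_message="Translate SNA metrics into educational insights for instructors.",
    llm_config=llm_config,
)

# Proxy agent that executes the analyst's generated code locally without human input.
executor = autogen.UserProxyAgent(
    name="executor",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "sna_runs", "use_docker": False},
)

# Group chat coordinating the team; max_round here caps coordination turns and is
# an assumption, unrelated to the paper's 20 evaluation rounds.
groupchat = autogen.GroupChat(agents=[executor, analyst, interpreter], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

executor.initiate_chat(
    manager,
    message="Explore the Coursera discussion dataset, build the reply network and report key participants.",
)
```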
Single agents were more effective on simpler tasks (data exploration: 85% vs 25%; visualization: 50% vs 45%). MALLMs outperformed on complex tasks, with higher SNA precision (30% vs 25%), stronger node-level analysis (95% vs 65%) and richer educational insights (55% vs 35%). However, MALLMs suffered coordination inefficiencies on linear tasks.
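For readers unfamiliar with the terminology, "node-level analysis" refers to per-participant metrics such as degree and betweenness centrality. A minimal sketch of the kind of computation the agents were asked to carry out, using NetworkX on a fabricated reply network (the edge list and library choice are illustrative assumptions, not the study's data or pipeline):

```python
import networkx as nx

# Toy reply network: an edge u -> v means learner u replied to learner v.
# These edges are fabricated for illustration only.
replies = [("alice", "bob"), ("carol", "bob"), ("bob", "alice"), ("dave", "carol")]
G = nx.DiGraph(replies)

# Node-level metrics typically used to spot central or isolated learners.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for node in G.nodes:
    print(f"{node}: degree={degree[node]:.2f}, betweenness={betweenness[node]:.2f}")
```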
Limitations include contextual forgetting, token-size constraints and coordination overhead. Results are specific to GPT-4/GPT-4o, with a 30% success rate in complex tasks, indicating LLMs are not yet sufficient for full automation.
MALLMs can advance online education by supporting personalized learning and engagement while democratizing access to advanced analytics and pedagogical feedback, thereby enhancing educational equity.
To the best of the authors’ knowledge, this study is among the first to examine how MALLMs manage multimodal, domain-specific analytics tasks, moving beyond general text-based applications. It highlights their advantages in generating educational insights, informs agent design and provides benchmarks for advancing multi-agent LLM systems.
