Spectral Graph Decomposition for Parameter Coordination in Multi-Task LoRA Adaptation
DOI: https://doi.org/10.69987/

Keywords: Multi-task learning, Low-rank adaptation, Spectral decomposition, Parameter coordination

Abstract
Multi-task learning in large language models faces significant challenges in parameter coordination and interference mitigation. This paper proposes a novel spectral graph decomposition framework for coordinating Low-Rank Adaptation (LoRA) parameters across multiple tasks. We construct parameter graphs representing relationships among LoRA weights and apply Laplacian spectral decomposition to identify coordination patterns in the frequency domain. Our approach integrates spectral regularization into the training objective, enabling gradient coordination through spectral projection and adaptive parameter updates. Experimental evaluation on the GLUE and SuperGLUE benchmarks demonstrates superior performance compared to vanilla LoRA, AdaLoRA, and MTLoRA baselines: the proposed method achieves an 11.8% improvement in average task performance while reducing parameter interference by 32.7%. Ablation studies confirm the effectiveness of each spectral decomposition component. The framework provides theoretical insight into parameter coordination mechanisms and offers a practical solution for large-scale multi-task learning.
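The abstract's core pipeline, building a graph over per-task LoRA parameters and decomposing its Laplacian, can be illustrated with a minimal sketch. This is a hypothetical toy construction, not the paper's implementation: the similarity measure (cosine), the toy dimensions, and the use of flattened LoRA updates as node features are all illustrative assumptions.

```python
import numpy as np

# Toy setup: each task's LoRA update delta_W = B @ A, flattened into a node
# feature vector (dimensions and similarity choice are illustrative assumptions).
rng = np.random.default_rng(0)
num_tasks, r, d = 4, 2, 8
deltas = [rng.normal(size=(d, r)) @ rng.normal(size=(r, d))
          for _ in range(num_tasks)]
feats = np.stack([dw.ravel() for dw in deltas])

# Non-negative cosine-similarity adjacency between task parameter vectors,
# with self-loops removed.
unit = feats / np.linalg.norm(feats, axis=1, keepdims=True)
adj = np.clip(unit @ unit.T, 0.0, None)
np.fill_diagonal(adj, 0.0)

# Graph Laplacian L = D - W and its spectral (eigen)decomposition.
laplacian = np.diag(adj.sum(axis=1)) - adj
eigvals, eigvecs = np.linalg.eigh(laplacian)

# The smallest Laplacian eigenvalue is ~0; low-frequency eigenvectors describe
# patterns shared across tasks, high-frequency ones localized disagreement,
# which is the kind of structure a spectral regularizer could act on.
print(np.round(eigvals, 6))
```

A spectral regularizer in the sense described above would then penalize, or project gradients away from, the high-frequency components of this decomposition.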