Domain Adaptation Analysis of Large Language Models in Academic Literature Abstract Generation: A Cross-Disciplinary Evaluation Study

Authors

  • Qichang Zheng, Computational Social Science, University of Chicago, IL, USA
  • Wenyan Liu, Electrical & Computer Engineering, Carnegie Mellon University, PA, USA

DOI:

https://doi.org/10.69987/

Keywords:

Large Language Models, Domain Adaptation, Academic Abstract Generation, Cross-Disciplinary Evaluation

Abstract

This study presents a comprehensive cross-disciplinary evaluation of large language models' domain adaptation capabilities in academic literature abstract generation. Through systematic analysis across the computer science, biomedical sciences, engineering, and social sciences domains, we investigate how different LLMs perform when generating abstracts for various academic disciplines. Our methodology employs a multi-dimensional evaluation framework incorporating semantic coherence, domain-specific terminology accuracy, and structural consistency metrics. We collected and analyzed 2,400 abstracts from four major academic domains, evaluating six prominent LLMs, including GPT-4, Claude-3, and domain-specific fine-tuned variants. Results demonstrate significant performance variation across disciplines: computer science achieved the highest adaptation score (0.847), while social sciences proved the most challenging (0.623). Domain-specific linguistic features and terminology density emerged as the primary factors influencing adaptation success. Our findings reveal critical insights into LLM limitations and capabilities in cross-disciplinary academic writing automation, providing foundational knowledge for developing more robust domain-adaptive text generation systems.
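
The abstract does not specify how the three evaluation metrics are aggregated into a single adaptation score. A minimal sketch in Python, assuming an equally weighted average of metric values normalized to [0, 1]; the weights, field names, and the adaptation_score helper are illustrative assumptions, not the paper's reported method:

    from dataclasses import dataclass

    @dataclass
    class AbstractMetrics:
        semantic_coherence: float      # e.g., mean inter-sentence embedding similarity, in [0, 1]
        terminology_accuracy: float    # fraction of domain terms used correctly, in [0, 1]
        structural_consistency: float  # match to the discipline's abstract structure, in [0, 1]

    def adaptation_score(m: AbstractMetrics,
                         weights: tuple[float, float, float] = (1/3, 1/3, 1/3)) -> float:
        """Combine the three metrics into one adaptation score in [0, 1].

        Equal weights are an illustrative assumption; the paper does not
        report its aggregation scheme.
        """
        w1, w2, w3 = weights
        return (w1 * m.semantic_coherence
                + w2 * m.terminology_accuracy
                + w3 * m.structural_consistency)

    # Hypothetical metric values whose average lands near the computer
    # science figure reported above (0.847)
    cs_example = AbstractMetrics(semantic_coherence=0.88,
                                 terminology_accuracy=0.85,
                                 structural_consistency=0.81)
    print(f"adaptation score: {adaptation_score(cs_example):.3f}")  # ~0.847

Under this reading, the reported per-domain scores (0.847 for computer science, 0.623 for social sciences) would each be a composite over the three metric dimensions.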

Published

2024-08-17

How to Cite

Qichang Zheng, & Wenyan Liu. (2024). Domain Adaptation Analysis of Large Language Models in Academic Literature Abstract Generation: A Cross-Disciplinary Evaluation Study. Journal of Advanced Computing Systems, 4(8), 57-71. https://doi.org/10.69987/
