Personalized Feedback Generation Using LLMs: Enhancing Student Learning in STEM Education
DOI: https://doi.org/10.69987/JACS.2023.31002

Keywords: Personalized feedback, Large language models, STEM education, Learning outcomes

Abstract
This paper presents a comprehensive framework for generating personalized feedback in STEM education using Large Language Models (LLMs). Current STEM feedback mechanisms often lack personalization and timeliness, limiting their effectiveness in addressing individual learning needs. The proposed framework integrates domain-specific knowledge with advanced LLM capabilities to deliver tailored, actionable feedback across various STEM disciplines. Experimental implementation across multiple educational settings demonstrates significant improvements in student performance metrics, with effect sizes ranging from 0.58 to 0.82 across core STEM competencies. The personalized LLM approach achieves 89.7% accuracy compared to 91.4% for human instructors while reducing response time from 1,248 seconds to 12.3 seconds. Engagement metrics reveal substantial increases in time on task (28.5% average increase), assignment completion rates (9.4 percentage point improvement), and voluntary practice behavior (3.4× increase). Qualitative analysis identifies feedback specificity, actionability, and timeliness as the most impactful characteristics, with distinctive reception patterns across demographic groups. Implementation challenges persist in disciplines requiring extensive visualization and in resource-limited environments. The framework provides a scalable solution for enhancing STEM education through personalized feedback mechanisms that approach human-quality guidance while dramatically improving response time and accessibility.
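The abstract does not specify the framework's internals, but as a rough, non-authoritative illustration of the kind of pipeline it describes (combining domain-specific material such as a grading rubric and a simple learner profile with an LLM prompt to produce tailored, actionable feedback), the following Python sketch may help. All names here (StudentAttempt, build_feedback_prompt, generate_feedback, the llm_complete callable) are hypothetical placeholders, not identifiers from the paper.

from dataclasses import dataclass

@dataclass
class StudentAttempt:
    """A single student submission plus the context used to personalize feedback."""
    student_id: str
    discipline: str                   # e.g. "physics", "calculus"
    problem_statement: str
    submitted_answer: str
    prior_misconceptions: list[str]   # drawn from a learner model (assumed to exist)

def build_feedback_prompt(attempt: StudentAttempt, rubric: str) -> str:
    """Fold the domain rubric and learner context into one LLM prompt."""
    misconceptions = "; ".join(attempt.prior_misconceptions) or "none recorded"
    return (
        f"You are a {attempt.discipline} tutor. Using the rubric below, give "
        f"specific, actionable feedback on the student's answer. Address any of "
        f"these previously observed misconceptions if relevant: {misconceptions}.\n\n"
        f"Rubric:\n{rubric}\n\n"
        f"Problem:\n{attempt.problem_statement}\n\n"
        f"Student answer:\n{attempt.submitted_answer}\n\n"
        f"Feedback:"
    )

def generate_feedback(attempt: StudentAttempt, rubric: str, llm_complete) -> str:
    """llm_complete is any text-completion callable (e.g. a hosted LLM client)."""
    return llm_complete(build_feedback_prompt(attempt, rubric))

if __name__ == "__main__":
    attempt = StudentAttempt(
        student_id="s-001",
        discipline="physics",
        problem_statement="A 2 kg mass accelerates at 3 m/s^2. Find the net force.",
        submitted_answer="F = 2 + 3 = 5 N",
        prior_misconceptions=["confuses addition with multiplication in F = ma"],
    )
    # Stub LLM so the sketch runs without network access.
    print(generate_feedback(
        attempt,
        rubric="Check that F = m * a is applied with the given values.",
        llm_complete=lambda prompt: "(model output would appear here)",
    ))

In an actual deployment the stubbed llm_complete would be replaced by a call to whatever LLM service the framework uses; the paper's reported latency and accuracy figures refer to its own system, not to this sketch.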