
Research design and dataset collection

This work employs a mixed-methods research approach, integrating both quantitative and qualitative methods to thoroughly evaluate the effectiveness of GAI in animation teaching. The rationale behind this approach is to capture both measurable outcomes (quantitative data) and deeper contextual insights (qualitative data) from students’ learning experiences. This combination enables a more comprehensive understanding of how GAI-driven innovative thinking and strategies enhance the effectiveness of animation teaching.

For the quantitative aspect, pre- and post-tests are utilized to assess students’ performance in basic knowledge and application ability. Additionally, structured questionnaires are administered to measure students’ learning satisfaction, motivation, and engagement. Statistical methods such as t-tests, analysis of variance (ANOVA), and effect size calculations are used to compare differences in knowledge retention, application ability, and other learning outcomes between the experimental and control groups.

Regarding qualitative methods, data is collected through student interviews, open-ended questionnaires, and classroom observations. These methods aim to provide an in-depth understanding of students’ learning experiences and explore the impact of personalized learning paths, intelligent teaching resources, and AI-enabled interactive learning environments. The qualitative data is analyzed using thematic analysis to identify and discuss key themes related to learning engagement, creativity, problem-solving skills, and teamwork.

To guarantee the scientific rigor and comprehensiveness of data collection, this work employs a multi-source, multi-level data collection approach, with clearly defined steps for statistical analysis. The specifics are as follows:

  1. Disney Movie Dataset: This dataset includes information such as movie titles, release dates, genres (e.g., musicals, adventure films, and dramas), film ratings assigned by film associations, total box office revenue during the original release period, and inflation-adjusted total box office revenue.

     Data source: Disney Movies Dataset on Kaggle (https://www.kaggle.com/datasets/suvroo/disney-movies-dataset).

  2. Learner Dataset: This dataset records basic information about students, such as age, gender, academic performance, and preferences. Chi-square tests are used to verify the balance of these variables between the control and experimental groups.

     Data collection method: Collected by the research team through experiments.

  3. Teaching Resource Evaluation Dataset: This dataset provides multi-dimensional ratings of the comprehensibility, attractiveness, and practicality of teaching resources, using Likert scale ratings. Reliability of the questionnaire is tested using Cronbach’s Alpha coefficient.

     Data collection method: Collected by the research team through experiments.

  4. Teaching Process Dataset: This dataset records students’ interaction frequency, learning behaviors, and the quality of teacher feedback. Independent samples t-tests are employed to analyze differences in behavioral data between the two groups.

     Data collection method: Collected by the research team through experiments.
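The questionnaire reliability check mentioned for the Teaching Resource Evaluation Dataset can be sketched directly in NumPy. The ratings matrix below is hypothetical illustrative data (five respondents, four Likert items), not the study’s actual dataset:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of questionnaire items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 respondents x 4 items on a 1-5 Likert scale
ratings = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 4, 5],
])
alpha = cronbach_alpha(ratings)  # ~0.91 for this toy matrix
```

Values above roughly 0.7 are conventionally taken as acceptable internal consistency, which is the threshold such questionnaire checks typically target.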

To enhance the transparency of the experimental design and ensure the validity and credibility of the results, a strict random grouping method is employed for the control and experimental groups, and a series of statistical tests is conducted to verify that there are no significant differences in students’ baseline characteristics prior to the experiment, minimizing potential bias. First, simple random sampling is used to assign students to the control and experimental groups: participants are drawn randomly from the total pool so that each student has an equal chance of being placed in either group, mitigating human bias in the grouping process. To confirm that the two groups are comparable before the experiment, chi-square tests are used to compare gender distribution between the groups, while independent samples t-tests are employed to assess other background variables, such as age and academic performance. The results reveal no significant differences between the control and experimental groups in terms of age, gender, or academic performance (p > 0.05), indicating that the groups are comparable at the start of the experiment and supporting the reliability of the findings.
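The baseline balance checks described above can be sketched with SciPy as follows. The group sizes, age distributions, and gender counts here are hypothetical placeholders, not the study’s data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical ages for two randomly assigned groups of 30 students each
ctrl_age = rng.normal(20, 1.2, 30)
expt_age = rng.normal(20, 1.2, 30)

# Hypothetical gender counts as a 2x2 contingency table: rows = group, cols = male/female
table = np.array([[14, 16],
                  [15, 15]])

# Chi-square test for gender balance between groups
chi2, p_gender, dof, _ = stats.chi2_contingency(table)

# Independent samples t-test for age balance
t_age, p_age = stats.ttest_ind(ctrl_age, expt_age)

# Groups are treated as comparable at baseline when all tests give p > 0.05
balanced = (p_gender > 0.05) and (p_age > 0.05)
```

The same pattern extends to academic performance or any other continuous baseline variable by adding further `ttest_ind` calls.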

To assess the impact of personalized teaching paths, intelligent teaching resources, and interactive learning experiences on student learning outcomes, various experimental scenarios are designed based on the study’s objectives and hypotheses. These scenarios include animation production tasks and interactive learning activities of varying levels of difficulty. Specifically, three levels of difficulty are established for the animation production tasks (basic, intermediate, and advanced), along with three types of interactive learning tasks (individual tasks, group discussions, and collective creation). These scenarios are intended to comprehensively evaluate students’ animation creation capabilities and collaborative skills. Prior to the formal experiment, a pilot test is conducted for all experimental tasks to ensure that tasks of varying difficulty are appropriately challenging for both groups and can effectively distinguish differences in learning progress. During the pilot test, completion time, task difficulty, and student feedback for each task are assessed. Tasks deemed too simple or too complex are then adjusted accordingly. The results of the pilot test confirm that the difficulty distribution of all tasks is well-balanced between the control and experimental groups, ensuring the fairness and reliability of the experiment.

Experimental environment and parameters setting

This work explores innovative thinking and strategies in animation teaching within a GAI environment. A comprehensive experimental environment is established, including hardware, software, and experimental conditions tailored to the study’s needs. The experimental computer is a high-performance workstation featuring an NVIDIA RTX 3090 graphics card and an Intel Core i9-10900K processor, supported by 64 GB of memory and a 1 TB SSD. This configuration enables large-scale data processing and deep learning model training, providing the necessary computational power for the study. A development environment based on the Python programming language is set up, utilizing popular deep learning frameworks such as TensorFlow and PyTorch. These frameworks offer a wide range of GAI algorithm implementations, including GAN and VAE, ensuring robust technical support for the research. For dataset selection, the open-source Disney Movies Dataset is employed, providing a diverse foundation for the experiments. In terms of experimental conditions, various scenarios are designed in alignment with the research objectives and requirements. Factors such as students’ age, gender, and learning preferences are considered when developing different teaching tasks and learning scenarios. The experimental tasks include animation production assignments of varying difficulty levels, as well as diverse animation material presentations and interactive learning experiences. These setups aim to explore the teaching effectiveness and learning outcomes across multiple conditions.

Statistical analysis method

To ensure the robustness and validity of the research findings, this work employs a more refined and systematic data analysis approach, combining both descriptive and inferential statistical analyses. In the descriptive statistical analysis, all data are comprehensively organized and summarized, focusing on the central tendencies and dispersion of the pre- and post-test scores, survey results, and behavioral data for both the experimental and control groups. Specifically, key statistical measures, including means, standard deviations, weighted means, and interquartile ranges for each group, are calculated, which helps in understanding the overall performance trends and variations of students during the experimental process. Additionally, to ensure comprehensive visualization of the data, SPSS software (version 26.0) is used to generate frequency distributions, histograms, and box plots, providing a more intuitive presentation of the data distribution, including skewness, kurtosis, and potential outliers within each group’s score distribution. This process assists in the preliminary identification of any anomalies or extreme values in the data, offering a foundation for further in-depth analysis.

In the inferential statistical analysis section, this work employs two methods: independent samples t-test and One-Way ANOVA to examine significant differences between the experimental groups. The independent samples t-test is used to compare mean differences between the experimental and control groups, testing whether differences in student performance, motivational changes, satisfaction, and other factors are statistically significant. To ensure the accuracy of the t-test, a normality test is performed on the data before analysis, using the Shapiro-Wilk test to determine whether the data follow a normal distribution. If the p-value exceeds 0.05, the data are considered to follow a normal distribution; if the p-value is less than 0.05, it suggests that the data may not follow a normal distribution, necessitating the use of non-parametric tests (such as the Mann-Whitney U test) as alternatives. Furthermore, Levene’s test is conducted to assess the homogeneity of variance between groups. If the p-value exceeds 0.05, the assumption of homogeneity of variance is considered valid; otherwise, Welch’s adjusted t-test is used to address situations with unequal variances. This multi-step validation approach ensures the reliability and robustness of the t-test results.
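The test-selection procedure described above (Shapiro-Wilk for normality, then Levene’s test for equal variances, falling back to the Mann-Whitney U test or Welch’s t-test as needed) can be sketched as a small decision function. The sample data are hypothetical:

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Pick and run the two-group comparison described above.

    Shapiro-Wilk checks normality in each group; if either rejects,
    fall back to the non-parametric Mann-Whitney U test. Otherwise,
    Levene's test decides between Student's t and Welch's t.
    """
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)
    if p_a <= alpha or p_b <= alpha:          # normality rejected
        _, p = stats.mannwhitneyu(a, b)
        return "mann-whitney", p

    _, p_lev = stats.levene(a, b)
    equal_var = p_lev > alpha                 # homogeneity of variance holds
    _, p = stats.ttest_ind(a, b, equal_var=equal_var)
    return ("student-t" if equal_var else "welch-t"), p

# Hypothetical post-test scores for experimental vs. control groups
rng = np.random.default_rng(42)
test_name, p = compare_groups(rng.normal(85, 5, 30), rng.normal(78, 5, 30))
```

With a mean gap this large relative to the spread, all three branches would report a significant difference; the function merely makes the assumption-checking order explicit.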

In the one-way ANOVA, the omnibus test is first used to assess whether there are significant differences among multiple groups. Specifically, the effects of various independent variables, such as age groups and learning preference groups, on student performance are analyzed. When the ANOVA results indicate significant differences between groups, post-hoc multiple comparisons are conducted using the Tukey HSD test. This process helps precisely identify which specific groups differ and provides targeted recommendations for instructional strategies.
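This two-stage procedure (omnibus ANOVA, then Tukey HSD only when significant) together with the η² effect size used later can be sketched as follows; the three learning-preference groups and their scores are hypothetical, and the `tukey_hsd` call is guarded because it requires SciPy 1.11 or later:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical post-test scores for three learning-preference groups
g1 = rng.normal(80, 6, 25)   # e.g., visual learners
g2 = rng.normal(84, 6, 25)   # e.g., hands-on learners
g3 = rng.normal(75, 6, 25)   # e.g., text-oriented learners
groups = [g1, g2, g3]

# Omnibus one-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(*groups)

# Eta squared: between-group sum of squares over total sum of squares
pooled = np.concatenate(groups)
grand = pooled.mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_total = ((pooled - grand) ** 2).sum()
eta_sq = ss_between / ss_total

# Post-hoc pairwise comparisons only if the omnibus test is significant
# (scipy.stats.tukey_hsd is available from SciPy 1.11 onward)
if p_anova < 0.05 and hasattr(stats, "tukey_hsd"):
    posthoc = stats.tukey_hsd(*groups)   # posthoc.pvalue is a pairwise matrix
```

Conventionally, η² values around 0.01, 0.06, and 0.14 are read as small, medium, and large effects, which is the scale against which the study’s reported η² values can be judged.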

To further interpret and supplement the significance testing results, this work also calculates the effect size to quantify the practical magnitude of the differences. For the independent samples t-test, Cohen’s d is used as the standard for effect size, calculated using Eq. (1):

$$d=\frac{M_{1}-M_{2}}{SD_{p}}$$

(1)

In Eq. (1), \(M_1\) and \(M_2\) represent the means of the experimental and control groups, respectively, and \(SD_p\) is the pooled standard deviation. The larger the Cohen’s d value, the larger the standardized difference between the two groups, indicating a stronger effect. For ANOVA, the effect size is measured using η² (eta squared); a larger η² value means that group membership explains a greater proportion of the total variance, again indicating a stronger effect. Calculating the effect size quantifies the practical magnitude of the between-group differences, further demonstrating their relevance in educational practice.

To enhance the credibility of the statistical results, this work also reports the 95% confidence interval (CI) for the mean of each group. The CI provides an estimated range within which the population mean is likely to fall at the 95% confidence level. Including the CI allows readers to better understand the error range of the mean estimate, facilitating more accurate judgments about the significance and practical meaning of the differences between groups.

Throughout the data analysis process, thorough data cleaning is conducted, including the removal of outliers and the handling of missing data; imputation methods such as mean imputation or regression imputation are applied to ensure data completeness and the accuracy of the analysis results. Additionally, corrections are applied to the multiple comparisons to avoid Type I errors (false positives) arising from repeated testing. These rigorous data processing and analysis steps ensure the scientific integrity and reliability of the research findings, providing stronger support for subsequent teaching practices.
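Eq. (1) and the reported 95% CI can be sketched in NumPy; the two score arrays are hypothetical, and the CI uses the normal approximation (z = 1.96) rather than the t-distribution:

```python
import numpy as np

def cohen_d(x1, x2):
    """Cohen's d per Eq. (1): mean difference over the pooled standard deviation."""
    n1, n2 = len(x1), len(x2)
    s1, s2 = np.var(x1, ddof=1), np.var(x2, ddof=1)
    sd_pooled = np.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (np.mean(x1) - np.mean(x2)) / sd_pooled

def mean_ci95(x):
    """Normal-approximation 95% CI for a group mean (z = 1.96)."""
    x = np.asarray(x, dtype=float)
    half = 1.96 * x.std(ddof=1) / np.sqrt(len(x))
    return x.mean() - half, x.mean() + half

# Hypothetical scores for the experimental (x1) and control (x2) groups
x1 = np.array([85.0, 87.0, 83.0, 86.0, 84.0])
x2 = np.array([78.0, 80.0, 76.0, 79.0, 77.0])

d = cohen_d(x1, x2)
lo, hi = mean_ci95(x1)
```

Conventionally, |d| values around 0.2, 0.5, and 0.8 are read as small, medium, and large effects; for real sample sizes this small, a t-based interval would be wider than the z-based sketch above.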
This comprehensive and in-depth data analysis process enables a more holistic and thorough interpretation of the experimental results, validating the application effect of GAI technology in animation teaching and offering a more rigorous analytical framework for future educational research.

Performance evaluation

Evaluation of the effectiveness of personalized teaching path design

A series of performance evaluation indicators and experiments are designed to comprehensively assess the effectiveness of innovative thinking and strategies in animation teaching within a GAI environment. The experiment involves two groups: the control group, which follows a traditional uniform teaching path, and the experimental group, which follows a personalized teaching path customized using GAI technology. Spanning 12 weeks, the experiment includes four teaching activities per week, each lasting 2 h. A basic knowledge test is administered to assess students’ understanding of foundational concepts in animation production. This test is crucial for determining whether personalized learning paths tailored by GAI contribute to a deeper understanding of the essential knowledge and skills required for animation. The test covers fundamental animation concepts, including principles of animation (e.g., timing, spacing, and squash-and-stretch techniques), narrative foundations (e.g., story structure and character development), and basic technical skills (e.g., the use of software tools and visual composition). The goal is to evaluate students’ retention and application of core animation knowledge, which is essential for producing high-quality animated works, and to assess whether AI-enabled personalized teaching paths enhance students’ understanding of these core principles more effectively than traditional, one-size-fits-all methods. This aligns with the broader aim of exploring how AI-driven personalized teaching can improve students’ retention and application of foundational knowledge in animation production.
The impact of innovative instructional strategies on students’ mastery of fundamental animation concepts can be evaluated by comparing the test results of the experimental group (using personalized AI-driven teaching) with the control group (using conventional teaching methods). The test format includes a mix of multiple-choice questions, short-answer questions, and practical problem-solving tasks, assessing both theoretical understanding and practical application abilities. Each student’s score, ranging from 0 to 100, can be used to compare knowledge retention levels between the two groups. The main evaluation indicators include students’ learning outcomes, progress, satisfaction, and motivation.

Figure 2 illustrates the comparison of learning outcomes between the experimental and control groups. The experimental group achieved an average score of 85.2 on the Basic Knowledge Test, significantly outperforming the control group, which scored 78.5. This suggests that personalized learning methods, through targeted content and adaptive feedback, enable students to more efficiently consolidate foundational concepts and reinforce key knowledge points. In the Application Skills Test, the experimental group scored an average of 81.4, notably higher than the control group’s score of 72.1. This result indicates that personalized learning paths positively impact the development of students’ practical abilities. A potential explanation for this could be that AI-driven personalized teaching dynamically adjusts instructional materials and practice content, allowing students to apply their knowledge in problem-solving scenarios that closely resemble real-world contexts. This dynamic approach may enhance their ability to transfer and apply knowledge in practice. Additionally, personalized learning paths likely fostered greater learner autonomy and motivation, encouraging students to engage more deeply in task-driven learning environments and ultimately achieve better learning outcomes.

Fig. 2 The comparison of learning outcomes.

Figure 3 presents the results of learning progress and learning satisfaction. In terms of learning progress, the experimental group achieved an average progress score of 8.2, significantly surpassing the control group’s score of 7.5. This suggests that personalized learning paths can accelerate students’ learning processes. By aligning with students’ individual learning characteristics and progress, these paths provide timely feedback and support, enabling students to complete instructional content more efficiently. The personalized learning system adapts the difficulty and complexity of tasks based on each student’s mastery level. This adaptive approach not only enhances learning efficiency but also ensures that students engage with content at the optimal time, contributing to greater progress.

Regarding learning satisfaction, the experimental group recorded an average score of 4.5, higher than the control group’s 3.8. This disparity indicates that personalized learning paths are more effective in meeting students’ learning needs and expectations. Customizing the learning path according to students’ interests, learning habits, and comprehension abilities offers individualized support, boosting engagement and motivation. Research indicates that students in personalized learning environments demonstrate higher engagement levels, as well as a stronger sense of achievement and satisfaction. The self-driven nature of this learning approach significantly enhances motivation and satisfaction. Additionally, the immediate feedback provided by personalized learning paths helps students identify learning challenges and adjust strategies accordingly, further strengthening their sense of alignment with the learning process and improving overall satisfaction.

Fig. 3 Results of learning rate and learning satisfaction.

Figure 4 further illustrates the impact of innovative thinking and strategies in animation teaching on learning outcomes. In terms of learning efficiency, animation teaching in the GAI environment demonstrated a 21.4% improvement compared to traditional methods. This indicates that GAI-based instruction can significantly enhance learning efficiency. Through personalized learning paths and real-time feedback, GAI helps students acquire animation knowledge and skills more effectively. By dynamically adjusting the teaching pace and complexity, the GAI environment ensures that students progress at an optimal rate tailored to their individual needs, ultimately improving learning efficiency. Additionally, GAI technology can analyze students’ learning performance in real time, identifying weaknesses and providing targeted guidance, addressing challenges that traditional methods may struggle to resolve due to time constraints.

Regarding learning motivation, the GAI-driven animation teaching method resulted in a 25% increase, indicating that this approach more effectively stimulates students’ enthusiasm and interest in learning. The personalized content and highly interactive learning experiences enable students to achieve a greater sense of accomplishment and satisfaction throughout the learning process, thus boosting engagement and motivation. By providing immediate feedback and tailored learning suggestions, GAI customizes content to align with students’ interests and needs, making the learning process more relevant to their goals and enhancing motivation. Furthermore, the interactive and engaging nature of the GAI environment fosters active participation, which in turn strengthens students’ intrinsic motivation.

In terms of student engagement, animation teaching in the GAI environment showed a 30% improvement compared to traditional methods. This underscores the effectiveness of GAI-driven animation instruction in capturing students’ attention and fostering active participation. The GAI environment can dynamically adjust learning content and facilitate real-time interactions, enabling students to engage more deeply in the learning process. This increased interactivity enhances classroom involvement, as students not only receive immediate feedback but also have the opportunity to interact more extensively with the instructional content. Furthermore, they can participate in more creative and collaborative activities, significantly boosting their engagement in learning. The GAI-driven environment stimulates students’ interest, enhances interactivity, and helps students focus more on learning tasks, ultimately improving both the effectiveness and quality of their learning.

Fig. 4 Comparison and analysis of the impact of innovative thinking and strategies in animation teaching on learning outcomes.

Evaluation of the effectiveness of intelligent teaching resource development

The development of intelligent teaching resources plays a crucial role in innovating animation teaching within a GAI environment. The experiment involves two groups: the control group, which utilizes traditional teaching resources, and the experimental group, which uses intelligent teaching resources developed with GAI technology. The experiment spans 8 weeks, with three 2.5-hour sessions per week. The primary evaluation indicators include students’ creation quality, creation speed, learning interest, and creativity levels. Figure 5 compares the creation quality and speed between the two groups. In terms of creation quality, the experimental group achieved an average score of 84.2, significantly outperforming the control group’s score of 76.4. This result highlights the positive impact of intelligent teaching resources on the quality of student creations. Through intelligent recommendations and personalized guidance, students can more effectively apply the knowledge they have acquired, leading to improved creativity and technical quality in their work. Intelligent teaching resources not only support creative techniques and inspiration but also assist students in avoiding common mistakes and shortcomings during the creation process, thus enhancing the overall quality of their output. Additionally, intelligent resources provide real-time evaluation, offering targeted feedback and suggestions for improvement, further boosting the quality of the creative outcomes.

In terms of creation speed, the experimental group completed their tasks in an average of 4.1 h, significantly faster than the control group’s 5.2 h. This indicates that intelligent teaching resources can greatly enhance students’ creative efficiency. By leveraging automated tools and support functions, these resources help students minimize unnecessary repetitive tasks and complex steps in the creative process, thereby accelerating progress. For example, GAI can automatically generate certain creative materials or provide instant technical support, allowing students to focus more on innovative and creative tasks. Furthermore, intelligent teaching resources can adapt the teaching content and difficulty based on students’ progress and performance, ensuring that students remain motivated and make faster progress in an efficient learning environment. As a result, students are able to produce high-quality creations in less time, demonstrating the dual advantages of intelligent teaching resources in improving both learning efficiency and creative capabilities.

Fig. 5 Comparison of creation quality and speed.

Figure 6 illustrates the comparison of learning interest and creativity levels between the two groups. Learning interest was measured through a questionnaire survey, revealing that the experimental group had an average interest score of 4.2, significantly higher than the control group’s score of 3.4. This indicates that intelligent teaching resources effectively stimulate students’ interest in learning. By offering personalized learning paths and interactive teaching methods, these resources cater to students’ diverse interests, thereby enhancing their engagement and motivation. Specifically, in animation teaching, intelligent teaching resources provide a variety of creative tools and real-time feedback systems, enabling students to visually track their progress and creative achievements. This fosters a greater enjoyment of learning. In contrast to traditional teaching methods, students are immersed in a more interactive and challenging learning environment, further boosting their motivation and interest in the subject.

In terms of creativity, as measured by standardized creativity tests, the experimental group achieved an average score of 77.5, surpassing the control group’s score of 68.9. This indicates that intelligent teaching resources significantly enhance students’ creativity. By offering personalized creative guidance, inspiration, and real-time feedback, these resources help students expand their thinking, overcome cognitive fixedness, and strengthen their innovative abilities. In an intelligent learning environment, students are encouraged to experiment with various creative styles and methods, explore diverse possibilities, and generate new ideas. Moreover, the real-time feedback provided by these resources allows students to quickly identify and correct flaws in their work, further stimulating their creative thinking. This ongoing support and feedback mechanism promotes continuous self-improvement, enhancing students’ creativity levels and reinforcing the effectiveness of intelligent teaching resources in fostering innovation.

Fig. 6 Comparison of learning interest and creativity levels.

Evaluation of the effectiveness of interactive learning experience construction

The interactive learning experience is a fundamental aspect of innovation in animation teaching within a GAI environment. It aims to enhance student engagement and learning effectiveness through immersive and interactive environments. To evaluate the effectiveness of this approach, a series of experiments were conducted, utilizing quantitative data to measure its impact on learning outcomes. The experimental subjects were divided into two groups. One group participated in traditional, non-interactive animation teaching (control group), while the other engaged in interactive animation teaching developed using GAI technology (experimental group). The experiment lasted 10 weeks, with two interactive learning sessions per week, each lasting 3 h. The primary evaluation indicators included student engagement, learning satisfaction, problem-solving skills, and teamwork abilities. Figure 7 illustrates the comparison of engagement and satisfaction levels between the two groups.

Fig. 7 Comparison of engagement and satisfaction levels.

Engagement is measured by the average number of interactions per student during the activities. The experimental group achieved an average engagement rate of 4.6, significantly higher than the control group’s 2.9. This finding indicates that interactive learning experiences substantially enhance student engagement. The higher engagement levels observed in the experimental group suggest that the personalized and highly interactive learning content provided by the intelligent teaching environment encourages students to actively participate in learning activities. In animation teaching, real-time interaction and feedback not only enable students to resolve creative challenges more efficiently but also stimulate greater interest in learning. Personalized learning paths and GAI-based recommendation systems provide students with continuous stimuli and challenges, fostering motivation and a stronger sense of involvement.

In terms of satisfaction, as assessed through a questionnaire survey, the experimental group achieved an average satisfaction score of 4.8, compared to 3.5 in the control group. This result suggests that students find the interactive learning experience more satisfying. The high satisfaction levels in the experimental group reflect the effectiveness of intelligent teaching resources in enhancing the overall learning experience. Personalized learning support and real-time interaction within the intelligent teaching environment help students maintain interest, improve learning efficiency, and reduce the complexity of learning tasks. Additionally, the interactive feedback mechanism between teachers and students on the intelligent platform strengthens communication and collaboration, further increasing students’ sense of connection to and satisfaction with the course. In this learning environment, students can adjust their learning pace based on individual needs, receive more personalized guidance, and enhance their overall learning experience. Figure 8 presents the assessment results for teamwork and problem-solving abilities.

Fig. 8 Test results of teamwork and problem-solving abilities.

Teamwork abilities and problem-solving skills are assessed using specialized evaluation tests. In terms of teamwork, the experimental group achieved an average score of 85.6, surpassing the control group’s 74.1. This result suggests that interactive learning experiences contribute to the development of students’ teamwork skills. Through GAI-based instructional design, students engage in real-time communication and collaboration on group projects via interactive platforms, strengthening coordination among team members and fostering collective thinking. Intelligent teaching resources not only facilitate interaction within teams but also encourage students to share ideas and creative insights, allowing them to maximize their potential in collaborative settings. Additionally, personalized learning paths enable students to collaborate more effectively by aligning their strengths and learning styles, ultimately improving overall team performance.

Regarding problem-solving abilities, the experimental group achieved an average score of 81.2, exceeding the control group’s 70.3. This finding indicates that interactive learning experiences significantly enhance students’ problem-solving skills. The GAI-based environment provides a diverse range of learning contexts and resources, allowing students to explore and innovate when addressing real-world challenges. Intelligent support tools offer real-time feedback and recommended solutions, enabling students to quickly identify effective approaches while fostering independent thinking and critical analysis skills. Furthermore, interactive learning methods promote discussion and collaboration, enhancing students’ confidence and ability to tackle complex problems. In summary, the GAI-based interactive learning environment serves as an effective platform that significantly strengthens students’ teamwork and problem-solving capabilities.

Robustness test

The effectiveness of personalized teaching paths is evaluated using the following indicators: learning outcomes (basic knowledge tests and application ability tests), learning progress, learning satisfaction, and learning motivation. Data on learning outcomes and progress are analyzed using independent samples t-tests, while scores for learning satisfaction and motivation are assessed using chi-square tests. Effect sizes are calculated using Cohen’s d with a 95% CI. The comparative results of learning outcomes are presented in Table 2. The experimental group demonstrates significantly better performance than the control group in the basic knowledge test (t = 4.92, p < 0.001, effect size d = 0.99), indicating the substantial advantage of personalized teaching paths in mastering foundational knowledge. Additionally, the average learning speed of the experimental group is significantly higher than that of the control group (t = 3.45, p = 0.001, effect size d = 0.69), suggesting that personalized teaching paths enhance learning efficiency.

Table 2 Comparative results of learning outcomes.
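The paper reports only summary statistics; as a self-contained illustration of how values such as those in Table 2 are obtained, the independent-samples t statistic and Cohen's d (with pooled standard deviation) can be computed as follows. The scores below are invented for demonstration and are not the study's data.

```python
import math

def independent_t_and_cohens_d(group_a, group_b):
    """Independent-samples t statistic and Cohen's d (pooled SD)."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = sum(group_a) / n1, sum(group_b) / n2
    # Sample variances (ddof = 1).
    v1 = sum((x - m1) ** 2 for x in group_a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group_b) / (n2 - 1)
    # Pooled standard deviation across both groups.
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    d = (m1 - m2) / sp
    return t, d

# Invented example scores (not the study's data).
experimental = [85, 88, 90, 84, 87, 91]
control = [78, 80, 75, 82, 79, 77]
t_stat, d = independent_t_and_cohens_d(experimental, control)
```

Note that the two statistics are linked: for two groups, d = t * sqrt(1/n1 + 1/n2), so d can also be recovered from a reported t and the group sizes.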

The effectiveness of intelligent teaching resource development is summarized in Table 3. The experimental group’s creation quality scores are significantly higher than those of the control group (F = 15.87, p < 0.001, effect size η2 = 0.22), demonstrating the substantial impact of intelligent teaching resources on improving creative output. Additionally, the experimental group completes their creations in significantly less time than the control group (F = 25.64, p < 0.001, effect size η2 = 0.31), indicating that intelligent teaching resources enhance creative efficiency.

Table 3 Evaluation of the effectiveness of intelligent teaching resource development.
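The F and η² values in Table 3 come from a one-way ANOVA, where η² is the proportion of total variance explained by group membership (SS_between / SS_total). A minimal sketch of that computation, using invented quality scores rather than the study's data, is:

```python
def one_way_anova(groups):
    """One-way ANOVA: F statistic and eta squared (SS_between / SS_total)."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    eta_sq = ss_between / (ss_between + ss_within)
    return f, eta_sq

# Invented creation-quality scores (not the study's data).
experimental_quality = [88, 91, 85, 90]
control_quality = [80, 78, 83, 79]
f_stat, eta_sq = one_way_anova([experimental_quality, control_quality])
```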

The effectiveness of interactive learning experiences is summarized in Table 4. The experimental group demonstrates significantly higher teamwork ability scores than the control group (t = 6.32, p < 0.001, effect size d = 1.20), highlighting the positive impact of interactive learning on teamwork development. Additionally, the problem-solving scores of the experimental group are significantly higher than those of the control group (t = 7.18, p < 0.001, effect size d = 1.38), indicating a substantial improvement in problem-solving abilities facilitated by interactive learning experiences.

Table 4 Effectiveness evaluation of the interactive learning experience.

Individualized difference analysis

To further investigate the instructional effectiveness of GAI across varying task complexities and individual learner differences, this work conducts a stratified analysis of the experimental data. First, animation production tasks are categorized into three levels of difficulty—basic, intermediate, and advanced—based on predefined criteria (Table 5). Performance differences between the experimental group and the control group are then analyzed within each difficulty tier (Table 6). Second, students are divided into high-, medium-, and low-ability groups according to their pre-test scores, to assess the differentiated impact of GAI on learners of varying proficiency levels (Table 7). The results reveal that the instructional effects of GAI exhibit significant dependency on task difficulty as well as individual adaptability.

Table 5 Task difficulty classification criteria in animation production.
Table 6 Performance comparison between experimental and control groups across task difficulty levels (mean ± standard deviation).
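The ability stratification underlying Table 7 (Δ = post-test − pre-test per tier) can be sketched as follows. The cut-off scores and student records here are hypothetical, since the paper does not publish its exact grouping thresholds.

```python
def stratify_and_delta(records, low_cut=60, high_cut=80):
    """Split students into low/medium/high tiers by pre-test score
    and report the mean improvement (post - pre) per tier.

    Cut-offs and records are hypothetical illustrations only.
    """
    tiers = {"low": [], "medium": [], "high": []}
    for pre, post in records:
        if pre < low_cut:
            tiers["low"].append(post - pre)
        elif pre < high_cut:
            tiers["medium"].append(post - pre)
        else:
            tiers["high"].append(post - pre)
    # Mean improvement per tier (None if a tier is empty).
    return {k: sum(v) / len(v) if v else None for k, v in tiers.items()}

# Hypothetical (pre-test, post-test) pairs.
students = [(45, 78), (55, 85), (65, 84), (72, 90), (85, 95), (90, 99)]
deltas = stratify_and_delta(students)
```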

Analysis of Table 6 indicates that the efficiency gains provided by GAI are most pronounced in basic-level tasks, with a 24.5% reduction in completion time for the experimental group. This advantage is primarily attributed to the automation of repetitive processes, such as automatic in-between frame generation. At the advanced task level, although the improvement in creativity score is relatively modest (an increase of 10.6 points), the collaboration efficiency score of the experimental group significantly exceeds that of the control group (4.5 vs. 2.9). This suggests that GAI plays a more substantial role in supporting collaborative creativity in complex projects, for instance by facilitating conflict detection and proposing resolution strategies. These findings underscore the need for differentiated instructional design when integrating GAI, emphasizing efficiency optimization in lower-level skill training and enhancing collaboration and problem-solving in high-level creative tasks.

Table 7 Improvement scores for students of different proficiency levels after using GAI (Δ = Post-test – Pre-test).

Table 7 provides further insight into the differentiated benefits of GAI across student proficiency levels. The most substantial improvement in knowledge acquisition is observed in the low-proficiency group (+32.5), which can be attributed to the gradual, scaffolded training facilitated by personalized learning paths—such as the stepwise decomposition of complex tasks like skeleton binding. In contrast, high-proficiency students demonstrate the greatest gains in creative expression (+31.2), benefiting primarily from GAI’s interdisciplinary inspiration database, which supports innovative applications like incorporating structural mechanics into mechanical character design. Notably, the medium-proficiency group experiences the most significant improvement in problem-solving ability (+19.8), likely because these students are at a transitional stage of skill development, where the real-time feedback provided by GAI proves particularly effective in overcoming learning bottlenecks.

The creative performance of the experimental group across tasks of varying difficulty is illustrated in Fig. 9. As shown, students demonstrate rapid improvement on basic tasks, reaching a performance peak by the fourth week, while progress on advanced tasks follows a slower but steady upward trajectory.

Fig. 9 Creative performance of the experimental group across task difficulty levels.

A multidimensional analysis of student competencies is presented in Fig. 10, which reveals that students with lower proficiency show the greatest improvement in knowledge acquisition. In contrast, higher-proficiency students exhibit more significant gains in creativity and problem-solving, while students with medium proficiency achieve balanced development across all dimensions. These visualized results align with the stratified analysis findings and underscore the dynamic adaptability of GAI-assisted instruction.

Fig. 10 Multidimensional analysis of student competency development.

Discussion

Current research on GAI in animation education has largely concentrated on tool development or the implementation of specific technologies. For instance, GANs have been employed to automate character design, while VAEs have been used for style transfer. However, such studies are often limited to demonstrating technical capabilities, lacking systematic integration into pedagogical strategies and failing to fully consider the synergistic effects of dynamic learning needs and multimodal interaction. The present work addresses these gaps through three key innovations. First, a real-time optimization mechanism for dynamically personalized learning pathways has been proposed. Existing research predominantly relies on static learning recommendation models, such as fixed path planning based on initial skill assessments, which are insufficient for capturing the nonlinear and iterative nature of learning in animation creation. For example, the animation teaching system developed by Cevahir et al. (2022) allocates tasks based solely on pre-test scores, thereby lacking responsiveness to real-time performance differences during collaborative projects69. In this work, a dynamic path optimization algorithm based on reinforcement learning was constructed by integrating multi-source data—including operational logs, iterative versions of creative work, and peer review feedback. This algorithm adjusts task complexity according to individual progress and identifies complementary needs within team collaboration. For instance, during the storyboard writing phase, students with weaker narrative skills are automatically assigned supplementary exercises in visual composition, while those with strong technical skills receive advanced training in rendering techniques. This dual-dimension adaptation mechanism—addressing both individual and group needs—significantly enhances the pedagogical flexibility of complex animation projects. 
Second, a multi-task-oriented intelligent teaching resource library has been developed. Existing GAI-based animation teaching tools are often developed for single-task scenarios—such as character generation or scene rendering—which results in challenges related to tool switching and knowledge fragmentation during cross-task creative processes. For example, although the animation generation tool developed by Kang et al. (2024) supports character motion design70, it lacks integration with screenplay development and sound synthesis. To address this issue, the present work introduces a modular resource library architecture that leverages a unified knowledge graph to integrate animation principles, technical tools, and creative cases. For instance, in a “sci-fi animation short film” project, the system can automatically connect character design (via GANs), physical simulation (using rigid body dynamics engines), and narrative logic (through natural language processing models), thereby generating cross-module creative suggestions. This seamless integration of multimodal resources enables students to focus on creative expression rather than technical minutiae, overcoming the functional silos typical of traditional tools.

Third, a collaborative mixed-reality interactive environment has been developed. Existing virtual laboratories often rely on unidirectional operation simulations—such as VR-based animation workflow training—and lack the social presence and real-time co-creation support required for authentic team collaboration. For example, the VR animation platform developed by Li et al. (2023) only supports individual operation71, failing to facilitate multi-user real-time editing and AI-assisted guidance. In response, this work combines AR with a distributed GAI engine to construct a mixed-reality collaborative space. During group animation projects, students can use AR glasses to overlay virtual storyboard sketches on a physical table, which are then synchronized in real time to a cloud platform. The GAI assistant, based on multi-user operation data—such as drawing strokes and voice discussion keywords—automatically generates conflict-resolution suggestions. This “human-AI-environment” triadic collaboration model not only recreates the authentic collaborative atmosphere of an animation studio but also enhances students’ critical thinking and negotiation skills through subtle AI-guided interventions.
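The paper does not detail the reinforcement-learning algorithm behind the dynamic path optimization. As a minimal, hypothetical sketch of the underlying idea, an epsilon-greedy bandit can pick the next task difficulty tier from the rewards observed so far (for example, normalized score gains); all names and the reward signal here are assumptions, not the authors' implementation.

```python
import random

DIFFICULTIES = ("basic", "intermediate", "advanced")

def choose_difficulty(reward_history, epsilon=0.2, rng=None):
    """Epsilon-greedy choice of the next task difficulty tier.

    `reward_history` maps a difficulty to the rewards observed so far.
    Hypothetical sketch only; the paper's actual algorithm is unpublished.
    """
    rng = rng or random.Random(0)
    if rng.random() < epsilon:
        return rng.choice(DIFFICULTIES)  # explore a random tier
    def mean_reward(d):
        rewards = reward_history.get(d, [])
        # Unseen tiers are tried first (optimistic initialization).
        return sum(rewards) / len(rewards) if rewards else float("inf")
    return max(DIFFICULTIES, key=mean_reward)  # exploit the best tier
```

In a full system this choice would close the loop: the selected task is assigned, the resulting score gain is appended to `reward_history`, and the selection repeats as the student progresses.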

The findings indicate that AI-based teaching methods significantly enhance student engagement, creativity, and problem-solving abilities. Students report that AI tools allow them to learn at their own pace and receive immediate feedback, fostering greater motivation and engagement—both crucial factors for successful learning outcomes. AI tools provide new avenues for students to express their ideas and explore creativity. In a personalized learning environment, students can experiment with different animation techniques, take creative risks, and achieve higher-quality results in their projects. These results suggest that AI is a valuable tool in animation teaching, particularly in promoting personalized learning and interactive teaching. Teachers can leverage AI to provide real-time feedback, tailor learning experiences to individual student needs, and design collaborative, problem-solving-oriented activities. Future research can explore how AI fosters creativity and engagement in other disciplines and how it can be effectively implemented in larger classroom settings. Moreover, these positive outcomes align with existing literature, further reinforcing the potential and value of GAI technology in education.

First, previous research has analyzed students’ preferences and responses to different styles of animated characters. Researchers successfully employed GAI technology to generate character design schemes tailored to students’ preferences, achieving positive teaching outcomes72. These findings align with the results presented here, indicating that personalized teaching path design effectively enhances students’ learning experiences and outcomes. Additionally, other studies have used intelligent algorithms and machine learning techniques to provide personalized learning paths and feedback, enabling customized animation teaching73. This work further extends these insights by demonstrating the practical effectiveness of personalized teaching path design through empirical research. It highlights the significant role of intelligent teaching resources in improving students’ creative quality and efficiency.

Through the development of intelligent teaching resources, students gain a deeper understanding and mastery of animation production principles and techniques, enhancing their creativity and expressive abilities. Furthermore, this work suggests that constructing interactive learning experiences effectively strengthens students’ teamwork and problem-solving abilities. This finding is consistent with existing literature on the application of VR and AR technologies in education. For example, some scholars have pointed out that immersive learning environments created through VR technology promote active participation and collaborative learning among students, thereby improving learning outcomes74. In conclusion, these findings support and expand upon previous research, collectively advancing the application and development of GAI in animation teaching.

However, this work relies on a high-performance computing environment and advanced deep learning frameworks, such as TensorFlow and PyTorch, for training and deploying GAI models. In practical educational settings, particularly in resource-constrained institutions or schools in developing countries, the high computational costs and hardware requirements may present significant challenges to the widespread adoption of GAI in animation teaching. Therefore, exploring the applicability of GAI-based instructional methods in low-tech environments is crucial for enhancing their scalability and broader implementation.

For schools with limited computational resources, cloud-based AI solutions offer a viable alternative. In this model, teachers and students remotely access pre-trained GAI models via cloud-based Application Programming Interfaces (APIs) such as Google Cloud AI, RunwayML, or the OpenAI API, eliminating the need to deploy complex deep learning frameworks locally. This approach significantly reduces computational resource demands while providing flexible instructional support. For example, in some experiments, we utilized RunwayML for cloud-based animation generation tests, achieving good performance in tasks such as image stylization, automatic keyframe generation, and character animation. Moreover, these services can be accessed via APIs from standard PCs or even low-spec devices such as iPads or Chromebooks. However, despite the feasibility of cloud computing, its reliance on a stable internet connection presents challenges for regions with poor network infrastructure. Therefore, future research will focus on developing offline versions of lightweight GAI tools. This can be achieved by reducing the size of GAI models through techniques such as model quantization and knowledge distillation, enabling them to run on local computing devices.
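Model quantization, mentioned above as one route to offline deployment, can be illustrated with a minimal affine 8-bit scheme. This is a generic sketch of the technique, not the implementation used by any particular framework:

```python
def quantize_uint8(weights):
    """Affine quantization of float weights to unsigned 8-bit integers.

    Generic illustration of post-training quantization; production
    frameworks use considerably more elaborate schemes.
    """
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # avoid zero scale for constant weights
    quantized = [round((w - lo) / scale) for w in weights]
    return quantized, scale, lo

def dequantize(quantized, scale, offset):
    """Map 8-bit values back to approximate float weights."""
    return [q * scale + offset for q in quantized]

weights = [-1.2, 0.0, 0.5, 2.3]  # toy weight vector
q, scale, offset = quantize_uint8(weights)
restored = dequantize(q, scale, offset)
```

Each weight now occupies one byte instead of four (float32), at the cost of a rounding error bounded by half the scale, which is the basic trade-off that makes quantized models practical on low-spec local devices.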
