Frequency and purpose of using generative AI by Chinese engineering students
Popularity of adopted generative AI tools
A significant portion of respondents reported first encountering generative AI within the past 1–2 years (i.e., 2023–2024). This reflects the rapid advancement of AI technology and its growing integration into educational and social contexts in China over the past couple of years. As the technology matures and its applications expand, more students have become aware of its potential and have started using it.
As shown in Fig. 1, among various generative AI tools, ChatGPT was by far the most popular, with 77.03% of respondents using it. Its widespread adoption can be attributed to its powerful capabilities, user-friendly interface, and effectiveness in text generation and interaction. Following ChatGPT, Baidu’s Wenxin Yiyan, which was particularly favored by Chinese engineering students, was used by 41.89% of respondents, likely due to its cultural and language advantages. Tools like DeepL (25.68%), Microsoft Bing (20.27%) and Google Bard (18.92%) also maintained a notable presence, especially for tasks like translation and search optimization. In contrast, tools such as DALL·E, Canva AI, Adobe Firefly, and others were less commonly used, each having usage rates around or below 10%, suggesting that their functions are not as aligned with the needs of engineering students in China.
It is worthwhile acknowledging that new generative AI tools have been rapidly emerging since the survey was completed in November 2024. A prominent example is DeepSeek, which has gained significant popularity, particularly in China. This development is likely to influence the AI tool preferences of Chinese engineering students. Nevertheless, the findings of this study remain valid, as the capabilities of these newly introduced tools are largely comparable to those of existing ones.
Frequency of using generative AI
Generative AI has become a frequent tool in the academic routines of many engineering students in China, with 20.95% using it daily and 41.89% using it multiple times per week, as illustrated in Fig. 2a. Only 4.73% of respondents reported never using generative AI. Figure 2c suggests that the frequency of use was higher among postgraduate students (40% reporting regular use) than among undergraduates (only 25.64%), likely due to the greater complexity of postgraduate-level tasks such as research, which require more advanced tool support. Additionally, our survey results reveal that students in computer-related disciplines used generative AI more frequently, reflecting their stronger technical alignment with the tools and a deeper understanding of AI technologies.
Application scenarios of generative AI
Figure 2d illustrates that generative AI has been widely used for various academic tasks by Chinese engineering students. The most common application was “finding learning resources or concept explanations” (55.41%), indicating that many students have turned to AI for help with complex concepts or course materials. Other popular uses included “compiling reports or documents” (54.73%) and “data analysis” (52.03%), reflecting the tool’s role in enhancing academic writing and data processing. Together with the usage frequencies reported in Section “Frequency of using generative AI”, these application scenarios suggest that the overwhelming majority of respondents have used generative AI for tasks related to their majors, such as solving engineering problems, optimizing design plans, or writing research reports.
When it comes to using generative AI for assignments, 32.43% of students used it frequently and 41.22% used it occasionally, as shown in Fig. 2b. This indicates that generative AI has become a regular part of their academic workflows. Although most surveyed students viewed AI as an important aid in their academic work, a minority (18.92% rarely and 7.43% never) were less inclined to use it, possibly due to doubts about its effectiveness or ethical concerns.
In terms of basic application scenarios of generative AI for engineering disciplines, as illustrated in Fig. 2b, 50.68% of respondents often adopted AI tools for searching literature and assisting literature review, while 47.97% turned to generative AI to generate initial ideas for their design assignments or research projects. These findings highlight the tool’s role in supporting both academic work and creative thinking.
The impact of generative AI on Chinese engineering students’ learning
Impact on learning efficiency
As shown in Fig. 3a, 36.49% of respondents reported a “significant improvement” in learning efficiency and 52.03% reported an “improvement”, meaning 88.52% felt that generative AI tools had improved their learning efficiency. Only 10.81% noted “almost no change,” and a very small fraction (0.68%) felt that AI use had “reduced” or “significantly reduced” their efficiency. These results indicate a strong consensus on the positive impact of AI on learning efficiency, and they provide additional support for previous studies (e.g.,40,55) highlighting that immediate feedback enables learners to recognize gaps in their understanding and proactively adapt their strategies. The rapid content generation, instant feedback, and extensive knowledge support offered by AI tools significantly reduce the time traditionally spent on learning and writing tasks, making them particularly valuable for time-sensitive academic activities such as literature search, report writing and data analysis.
Impact on active learning
The questionnaire results show that generative AI tools had a significant positive impact on the active learning of Chinese engineering students. 64.19% of participants reported improved learning initiative, with 41.22% stating it had “improved” and 22.97% saying it had “significantly improved”, as illustrated in Fig. 3a. This highlights that most students viewed generative AI as a valuable tool for boosting motivation and engagement. However, 6.76% reported a decline in learning initiative, suggesting that AI can be detrimental for students with certain learning habits: over-reliance on the tool, for example, may weaken critical thinking and active learning.
According to self-determination theory12, frequent use of AI to complete learning tasks can undermine students’ sense of autonomy and competence, weakening their intrinsic motivation to learn. This effect is further supported by the Overjustification Effect56, which suggests that when intrinsically motivated activities are replaced by heavy reliance on external tools, individuals may lose interest in the activity itself. In this context, students who were once engaged in learning may begin to view generative AI merely as a means to quickly finish tasks, shifting their focus from genuine learning to mere task completion, and diminishing their interest and initiative over time. This emphasizes the need for caution when integrating generative AI into inclusive education. Clear usage guidelines should be established, positioning AI as a learning aid rather than a sole crutch, and educators should monitor its impact, especially on students with low engagement.
Impact on independent thinking
While generative AI tools have been widely regarded as efficiency boosters for learning, their impact on independent thinking shows a more complex pattern, as shown in Fig. 3a. About 34.46% of respondents felt that AI had made “almost no change” to their ability to think independently, while 47.97% reported an “improvement” (23.65%) or a “significant improvement” (24.32%). This suggests that nearly half of the students viewed generative AI as a tool that enhances independent thinking by providing new perspectives, feedback, and access to additional knowledge. In addition, students typically needed to judge the accuracy of AI-generated content, which might further cultivate their independent thinking. However, 14.86% and 2.70% of respondents felt that AI had weakened or significantly weakened their ability to think independently. This reflects the potential downsides of over-reliance on AI, where students might bypass deep engagement with problems, opting instead for AI-generated solutions. Our findings further reinforce existing research, especially the studies by Zhai et al.57 and Klingbeil et al.58, which showed that when the reliability of generative AI advice is hard to assess, individuals are more inclined to trust it uncritically to avoid cognitive effort. In complex fields like engineering, this may result in a reduction of students’ problem-solving skills and a lack of understanding of the underlying concepts.
Impact on creativity
When examining the impact of generative AI on creativity, the survey data show a pattern similar to AI’s impact on independent thinking, as depicted in Fig. 3a. While 58.78% of respondents felt that AI had a positive effect on their creativity, with 35.81% noting an “improvement” and 22.97% a “significant improvement”, a sizable portion (29.73%) reported “almost no change”. This group likely used AI for more practical tasks, such as executing predefined solutions, rather than for creative inspiration. On the other hand, 11.48% of respondents believed AI had a negative impact on creativity, with 8.78% reporting a “reduction” and 2.7% a “significant reduction.” These results suggest that while generative AI is widely seen as a powerful tool for generating ideas and solving complex problems, its ability to foster genuine innovation may be limited for some users, particularly if the generated content is too formulaic or restricts creative freedom. This can be explained through the theory of multiple intelligences51. Generative AI can mobilize multiple forms of intelligence, such as natural language understanding, human-like reasoning, and visual content generation, thereby improving one’s creativity. However, differences in individuals’ intelligence profiles may lead to divergent effects of the technology on creativity: if generative AI does not stimulate the intelligence areas that users rely on, they may feel that their creativity is constrained.
Impact on academic performance
From Fig. 3b, it is notable that nearly half of the respondents did not feel that using generative AI had improved their academic performance, even though most reported enhanced learning efficiency and more active learning. This discrepancy can likely be attributed to the limited specialization of generative AI for specific engineering disciplines and to concerns over the accuracy of generated content, as discussed in Section “Challenges faced by Chinese engineering students in using generative AI”. It is important to recognize that perception does not always align with actual outcomes59. While generative AI often helps students complete tasks more quickly and smoothly, giving them a sense of “higher efficiency”, task completion does not necessarily indicate true understanding, and the satisfaction of finishing a task can mask the real extent of genuine learning. Over time, this immediate sense of accomplishment may create the illusion of “I have learned it,” which can diminish students’ continuous engagement in the learning process. These factors necessitate students’ academic judgment in evaluating the relevance and accuracy of AI-generated materials. When paired with the strengths of generative AI, such judgment has significant potential to enhance academic performance, as reflected by the 12.2% and 36.5% of respondents who reported “improved” or “slightly improved” academic outcomes, respectively.
Challenges faced by Chinese engineering students in using generative AI
Key challenges in using generative AI
Generative AI has significantly impacted learning efficiency and creativity, but its use in education presents several challenges. Key issues identified by engineering students include the accuracy of AI-generated content, over-reliance on AI tools, and technical difficulties, as shown in Fig. 4.
The most prominent challenge, reported by 62.16% of respondents, was the inaccuracy of generated content. This lack of accuracy undermined students’ confidence in using AI outputs, particularly for tasks involving complex data analysis. Despite AI’s potential, there was still considerable room for improvement in content accuracy, especially tailored to specific engineering disciplines. This finding is consistent with existing theories suggesting that frequent exposure to inaccurate outputs, such as those from generative AI tools, can lead to “negative reinforcement,” ultimately decreasing students’ willingness to use these tools40. Moreover, the inaccuracy of automated feedback systems limits their ability to effectively support students in addressing more complex issues, such as developing critical thinking skills43,44. As shown in Fig. 5a, when asked about the frequency of encountering inaccurate AI-generated content, 43.24% of respondents reported facing this issue often or very often, while 37% experienced it occasionally, highlighting the gap between students’ needs and AI-generated content.
The second major concern, cited by 39.86% of respondents, was over-reliance on AI tools. Many students worried that excessive dependence on these tools might reduce their ability to solve problems independently and effectively.
Furthermore, 20.27% of respondents reported difficulties with the usability of AI tools, noting that user interfaces can be difficult for beginners and that technical support was insufficient, especially for newly emerged AI tools. As the Technology Acceptance Model (TAM)49 suggests, when users, especially beginners, struggle with system navigation or functions, they are more likely to experience operational frustration, leading to reduced behavioral intention. Ethical concerns and privacy issues were raised by 14.19% of respondents, indicating continued unease about data protection and academic integrity. Additionally, 17.57% of students mentioned high costs as a barrier to accessing AI tools, which could limit their widespread use.
While generative AI offers clear advantages in education, the need for improvements in accuracy, usability, and cost is evident. These findings highlight areas for future development, with tool developers and educational institutions needing to focus on enhancing technical reliability, simplifying user interfaces, improving support systems, and reducing costs to make AI tools more accessible.
Adaptability and specialization in engineering education
Despite rapid advancements in generative AI, its application in engineering education is constrained by the discipline’s complexity and specialization. Our survey received mixed responses, as shown in Fig. 5b: 42.57% of respondents agreed that generative AI tools were suitable for their professional needs, and a further 18.24% strongly agreed.
However, a considerable share of respondents were less convinced of the tools’ effectiveness in addressing highly specialized problems: 28.38% were “neutral” about AI’s adaptability, and 16.81% selected “disagree” or “strongly disagree”, suggesting concerns about AI’s specialization in specific engineering disciplines. These findings are consistent with prior research showing that when generative AI tools fail to recognize the knowledge structure, contextual logic, or deep conceptual relationships within a professional domain, they struggle to effectively support students in constructing meaningful knowledge43,44.
Ethical concerns and data privacy in generative AI use
Ethical issues surrounding the use of generative AI, especially in academic assignments, were another significant concern. Figure 5c shows that 35.14% and 22.97% of respondents considered ethical issues “important” or “very important”, respectively. However, 40.54% of respondents rated ethical issues as of “average importance” and only 1.36% dismissed the importance of ethics altogether. These findings suggest that ethical concerns regarding generative AI in education are widely acknowledged and should be addressed through regulation and guidance. Meanwhile, ethics training is needed to foster broader recognition of the various ethical issues involved.
Regarding data privacy, as shown in Fig. 5d, most respondents rated AI tools’ performance as “average satisfaction” (54.05%), suggesting limited confidence in the tools’ privacy protection. In addition, 5.41% were “dissatisfied” and 2.7% were “very dissatisfied” with data privacy, reflecting room for improvement in this area. Only 25% of respondents were “satisfied” and 12.84% “very satisfied” with data privacy, a comparatively lower level of approval than their agreement on AI’s effectiveness in aiding their learning. Concerns about privacy violations or ethical issues can discourage users from engaging with generative AI, even if the technology is perceived as useful39. These findings highlight the urgent need to train students in secure practices when using generative AI.
Expectation on generative AI use from Chinese engineering students
Attitudes towards generative AI integration into engineering education
Regarding the integration of generative AI into engineering education, as shown in Fig. 6a, 20.95% of respondents supported full integration into teaching, while 43.92% favored partial integration, applying AI only to certain courses or scenarios. Additionally, 30.41% saw AI as a helpful tool, but not a necessity. Overall, the survey results show strong support for integrating generative AI into education: a large majority of respondents believed that generative AI courses should be compulsory for engineering students, indicating widespread recognition of its importance and value in engineering education.
Figure 6b shows mixed opinions on whether generative AI can replace traditional teaching models. 41.89% of respondents were “uncertain”, while 12.84% thought it would “probably not” replace traditional methods. In contrast, 12.84% believed that generative AI would eventually replace conventional teaching methods, and 30.41% supported this trend with some reservations. Overall support for complete replacement was not strong, likely due to technical limitations and ethical concerns surrounding generative AI. This finding indicates that the development and adoption of technology are not isolated processes but are deeply embedded within specific social contexts and norms. As previous studies23,24,25 have reported, technology adoption is influenced by a range of factors, including learning motivation, educational culture, and prevailing social norms.
In terms of AI-related training offerings, as shown in Fig. 7a, 43.24% and 49.32% of respondents expected institutions to provide basic introductory AI courses and in-depth practical training on the use of generative AI, respectively. Nearly half of respondents preferred online training courses.
Respondents also suggested several ways to regulate the use of generative AI, as shown in Fig. 7b. 55.41% believed that clear usage guidelines should be developed by universities, while 47.3% favored creating tailored policies based on the specific needs of each curriculum. 46.62% thought training on generative AI should be provided to both students and faculty. Interestingly, 10.14% supported a complete ban on generative AI use, highlighting the concerns of a small group of students about adopting AI for academic purposes. Overall, the survey results suggest that most respondents prefer regulation and training over prohibition.
Recommendations for educators and institutions
As illustrated in Fig. 8a, most respondents were optimistic about the future of generative AI in education, with 29.05% and 39.86% describing its potential as “broad” or “very broad”, respectively. However, 27.7% expressed a neutral stance, likely due to the current challenges and ethical concerns surrounding generative AI in engineering education. Only 3.38% held negative views, describing its prospects as “narrow” or “very narrow”.
Respondents highlighted several areas for improvement in generative AI tools, as illustrated in Fig. 8b. Over 60% emphasized the need for higher accuracy in addressing discipline-specific problems, improved literature search capabilities, and integration with professional software, reflecting the higher academic and professional demands of engineering students. 54.73% expressed a desire for AI tools that can provide deeper insights into professional questions. There was also interest in enhancing AI’s data-processing abilities, with 40.54% calling for improvements in this area.
Moving forward, schools and education policymakers should develop clear guidelines for the use of generative AI, design personalized integration plans tailored to different disciplines and curricula, and provide comprehensive training programs covering technical operation, practical applications, and ethical issues.