Among the convenience sample of students surveyed, medical and health sciences students in Saudi Arabia perceived themselves to be moderately prepared for AI. The overall mean AI readiness score of 62 out of 110 (approximately 56%) suggests that, while students possess a foundational understanding of AI, there is ample room for growth in their knowledge and application of AI in healthcare. Using Bloom’s Taxonomy of Learning Domains as a conceptual framework, we discuss these results in terms of enhancing knowledge (cognitive domain), attitudes and values (affective domain), and practical skills (psychomotor domain).
Cognitive domain: reforming the curriculum for longitudinal knowledge building
Students’ moderate scores in the cognition and ability domains imply that they have a basic understanding of AI concepts and some confidence in their technical skills. However, the finding that year level was a significant predictor of AI readiness underscores the need to introduce AI-related content early and reinforce it throughout the educational program. As students progress, increasing the depth and complexity of AI topics, ranging from foundational algorithms to clinical decision-support systems, can ensure that their knowledge keeps pace with rapidly evolving technologies [8, 32]. In addition, the perception that current training is inadequate suggests that existing curricula may need to be revamped [3, 33]. Incorporating structured, mandatory AI courses that cover core principles and emerging applications could help bridge this gap [1], raising both the floor and the ceiling of students’ cognitive preparedness.
Affective domain: fostering engaged attitudes and ethical reasoning
Students demonstrated moderate readiness in the vision and ethics domains, with the lowest scores in vision (54%), indicating that they may not fully appreciate the broader implications of AI [34]. The significant association between the belief that AI-related courses should be mandatory and higher readiness scores across all domains suggests that students who value structured AI education may also be more inclined to engage ethically and to envision AI’s long-term implications. Furthermore, the type of study program was a significant predictor of ability and ethics, indicating that students in certain fields may be more receptive to, or better prepared for, AI’s moral and professional challenges. By incorporating ethics case studies, debates on patient privacy and algorithmic bias, and discussions of AI’s future role in patient care, educators can nurture a generation of professionals who are not only knowledgeable but also ethically conscious and forward-thinking [35, 36].
To strengthen the affective domain, educators should embed ethics and vision throughout the curriculum rather than confining them to a single lecture or course. Case-based discussions, simulations, and interprofessional workshops can help students explore the complex ethical dilemmas associated with AI [35, 37]. By engaging students in conversations about data privacy, algorithmic bias, AI hallucinations, and the long-term implications for patient care, educators can foster more nuanced attitudes that transcend rote memorization and encourage critical reflection [36].
Psychomotor domain: bridging the gap between theory and practice
Although the cognition and ability scores were moderate, the perception that AI training is inadequate points to gaps in translating theoretical knowledge into hands-on skills [32]. For students to apply their understanding effectively in clinical settings, curricula must include practical exercises, simulations, and opportunities for interprofessional collaboration [3, 33]. Encouraging students to engage with AI tools, analyze clinical scenarios in which AI can optimize patient outcomes, and work collaboratively on AI-related projects can advance them through Bloom’s psychomotor domain, moving from merely knowing what AI is to using it effectively in real-world contexts.
Strengths and limitations
This study is the first to assess AI readiness among a large sample of medical and health sciences students in Saudi Arabia, and the use of the validated MAIRS-MS instrument enhances the reliability of our findings. Nevertheless, several limitations warrant consideration. We collected data using convenience sampling, a non-probability technique that may introduce selection bias, although this approach allowed us to gather data from multiple universities within a reasonable timeframe. The cross-sectional design limits our ability to establish causality between variables and outcomes [38], reliance on self-reported data may introduce bias due to inaccurate reporting [39], and the use of the English version of the MAIRS-MS scale may limit the generalizability of our results [29]. We also acknowledge that the alphas for the vision and ethics subscales were on the borderline of acceptability, which raises concerns about internal consistency. We did not explore alternative factor structures in the current study, but we encourage future research to do so to improve scale reliability, and our findings should be interpreted in light of these statistical limitations. To address these issues, future work should further validate the MAIRS-MS, or develop more robust scales, in diverse linguistic settings and Arab cultural contexts. Additionally, incorporating qualitative methods [3] can offer deeper insights into how instructors’ attitudes and students’ perceptions of AI’s influence on healthcare roles shape their readiness. Such investigations will provide a more holistic understanding of how best to prepare medical and health sciences students for an AI-driven future.