
This section of the article proposes strategies to address the challenges of applying AI in universities, tackling them from two different perspectives. First, using a macro-environment segmentation method, corrective actions and their impacts on the management of education and learning in universities are examined within the context of AI education. Then, an effective strategy, grounded in ML techniques, is presented for developing high-quality AI educational programs in higher education.

Corrective actions in university education and learning management, within the context of AI education, based on macro-environment segmentation

This section proposes a set of corrective actions in university education and learning management within the context of AI education, based on macro-environment segmentation. According to this research, the macro environment of university teaching and learning is divided into three parts: the external environment, the intermediate environment, and the internal environment. In the external environment, actions such as enacting supportive laws and regulations for AI education, collaborating with relevant industries and institutions, and utilizing external resources for the advancement of AI education are undertaken. In the intermediate environment, actions like establishing an appropriate management structure, developing human resources, and employing new teaching and learning technologies are carried out. Finally, within the internal environment, actions such as designing and offering AI-based educational programs, developing AI-based teaching and learning methods, and assessing the effectiveness of AI education are performed.

External environment

The external environment encompasses factors that are beyond the control of universities but can nevertheless influence university teaching and learning. Corrective actions in the external environment can help create conducive conditions for the advancement of AI education in universities. These actions can be undertaken by governments, relevant industries and institutions, and the universities themselves. The necessary corrective actions in the external environment for enhancing university teaching and learning in the context of AI education are as follows:

  • Establishing laws and regulations to support AI education: Governments can assist in the development of this field in universities by establishing supportive laws and regulations for AI education. These laws and regulations may include the following:

    • Financial support for universities to develop AI education: Governments can support the development of AI education in universities by allocating government funds. These funds can be used to cover financial expenses, provide human resources, and acquire necessary technologies for the advancement of AI education.

    • Facilitating collaboration between universities and related industries and institutions: Governments can support the development of AI education in universities by facilitating collaboration between universities and related industries and institutions. These facilitations may include the following:

      • Establishing laws and regulations to support collaboration between universities and related industries and institutions.

      • Providing financial and tax incentives to companies that collaborate with universities.

    • Establishing educational standards for AI education: Governments can ensure the quality of AI education in universities by establishing educational standards.

  • Collaboration with related industries and institutions: Universities can benefit from the knowledge and experience of these organizations in developing AI education through collaboration. These collaborations can lead to the formation of joint educational programs, conducting joint research, and providing financial resources for the development of AI education.

  • Utilizing external resources for the development of AI education: Universities can make use of external resources, such as government grants, private sector grants, and grants from international organizations, to develop AI education. These resources can be used to cover financial expenses, provide human resources, and acquire necessary technologies for the development of AI education.

Intermediate environment

The intermediate environment includes factors that are within universities but outside the direct control of managers and education and learning experts. These factors can influence teaching and learning in universities. Corrective actions in the intermediate environment can help establish an appropriate structure and conditions for the development of AI education in universities. The corrective actions in the intermediate environment include:

  • Establishing an appropriate management structure: Universities should establish an appropriate management structure for the development of AI education. This structure should include the following:

    • A management unit or center responsible for AI education.

    • A policy council for AI education.

    • An expert team for the development of AI education.

  • Development of human resources: Universities should undertake necessary actions to develop human resources specialized in AI. These actions may include the following:

    • Conducting training courses for university staff.

    • Attracting and hiring AI specialists.

    • Creating job opportunities for AI specialists at the university.

  • Utilizing new teaching and learning technologies: Universities should utilize new teaching and learning technologies for the development of AI education. These technologies may include ML, DL, virtual reality, augmented reality, and similar tools.

Internal environment

The internal environment includes factors that are within universities and under the direct control of managers and education and learning experts. These factors can influence teaching and learning in universities. Corrective actions in the internal environment can contribute directly to the development of AI education in universities. The corrective actions in the university’s internal environment include:

  • Designing and offering AI-based educational programs: Universities should design and offer educational programs based on AI. These programs should fulfill the needs of students and society.

  • Development of AI-based teaching and learning methods: Universities should develop teaching and learning methods based on AI. These methods should facilitate active and collaborative learning among students.

  • Evaluating the effectiveness of AI education: Universities should evaluate the effectiveness of AI education. These evaluations can contribute to improving the quality of AI education.

In summary, corrective actions in the external, intermediate, and internal environments can aid in the development of AI education in universities. Considering the above, employing efficient and precise strategies for evaluating the quality and effectiveness of AI educational programs is one of the primary requirements at various levels. Implementing this process in traditional ways can be time-consuming and complex; however, by utilizing AI techniques, this task can be accomplished more effectively. The next section presents a strategy based on AI techniques to achieve this goal.

Proposed strategy for evaluating the quality of AI training programs in higher education

In this section, a new strategy is proposed for quality evaluation of AI education programs in higher education. In this regard, the dataset used for designing this model is described first, followed by the presentation of the steps of the proposed strategy.

Data

In this research, a dataset containing information related to AI educational programs at higher education levels was utilized. This data was collected through in-person AI training classes in various technical and engineering faculties. During the data collection process, 188 questionnaires were distributed among respondents in the target population. After collection, the accuracy and completeness of the provided information were verified. All questionnaires were anonymous and contained no personal or identifying information. Although this study did not constitute human subjects research as defined by the Belmont Report, it was conducted in accordance with ethical principles for research involving human subjects. Data was collected through a voluntary, anonymous questionnaire that did not pose any risks beyond those encountered in daily life, and informed consent was obtained from all subjects and/or their legal guardian(s). No identifiable data was collected through the questionnaires; all dataset instances were anonymized and the attributes were encoded, so no participant can be identified from the data. None of the questionnaires contained invalid information; however, 8 questionnaires contained at least one unanswered question and were discarded due to the presence of missing values in the analysis process. Therefore, the total number of samples in the dataset used in this research is 180. All questionnaires were evaluated by three experts, and the quality of the educational program was scored as a numerical variable ranging from zero (worst) to 100 (best). Ultimately, the final score for each sample was obtained by averaging the three expert scores. The standard deviation of the scores determined by the experts is 2.66, which indicates strong inter-rater agreement and supports the reliability of the collected information.

The list of indicators collected through the questionnaires is presented in Table 2. The dataset encompasses three general categories, each of which could potentially be related to the quality of the educational program. Consequently, this research aims to evaluate AI education quality using (a subset of) the 14 indicators listed in Table 2.

Table 2 The set of indicators considered to evaluate the quality of AI education.

Proposed quality evaluation algorithm

The proposed algorithm for evaluating AI education quality in higher education utilizes a combination of optimization techniques and ML. In this approach, the optimization technique is initially used to identify the indicators associated with the quality of AI training. Subsequently, an optimally structured artificial neural network (ANN) is utilized for prediction. This algorithm can be broken down into the following steps:

  1. Data Pre-processing.

  2. Feature Selection based on the Capuchin Search Algorithm (CapSA).

  3. Quality Prediction based on ANN and CapSA.

The rest of this section is devoted to the description of each of the above steps.

Data preprocessing

The data preprocessing stage is the initial phase of the proposed model and is utilized to prepare the database for processing in subsequent stages. This stage comprises two steps: value conversion and normalization. First, all nominal features are converted into numerical values. Specifically, for each nominal feature, a unique list of all its nominal values is prepared. For ordinal features, the unique values are sorted by rank; for unordered nominal features, the list is sorted in ascending order of each value's frequency in that feature. Then, each nominal value is replaced by the natural number corresponding to its position in the sorted list. In this way, all features of the dataset are converted into numerical format. At the end of the preprocessing step, all features are mapped to the range [0, 1] based on the following relationship:

$$\:\overrightarrow{{N}_{i}}=\frac{\overrightarrow{x}-\:\text{m}\text{i}\text{n}\left(\overrightarrow{x}\right)}{\text{max}\left(\overrightarrow{x}\right)-\text{m}\text{i}\text{n}\left(\overrightarrow{x}\right)}$$

(1)

Where, \(\:\overrightarrow{x}\) represents the input feature vector and \(\:\overrightarrow{{N}_{i}}\) represents the corresponding normalization vector. Also, min and max are the minimum and maximum functions for the feature vector, respectively.
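As an illustration, the value conversion and normalization described above can be sketched in Python (the function names and the convention of starting positions at 1 are illustrative choices, not part of the original formulation):

```python
import numpy as np

def encode_nominal(values, ranked_order=None):
    """Map nominal values to natural numbers, per the conversion step above.

    Ordinal features are sorted by their given rank; unordered nominal
    features are sorted by ascending frequency of each value.
    """
    if ranked_order is not None:
        ordered = list(ranked_order)                         # rank-based order
    else:
        uniq, counts = np.unique(values, return_counts=True)
        ordered = [v for _, v in sorted(zip(counts, uniq))]  # frequency order
    mapping = {v: i + 1 for i, v in enumerate(ordered)}      # position in sorted list
    return np.array([mapping[v] for v in values], dtype=float)

def min_max_normalize(x):
    """Map a numeric feature vector to [0, 1] as in Eq. (1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```

For example, an ordinal feature with values "low" < "mid" < "high" is mapped to 1, 2, 3 before being normalized into [0, 1].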

Feature selection using CapSA

After normalizing the features of the database, feature selection and dimensionality reduction are carried out. This step aims to reduce the feature dimensions, thereby increasing processing speed and reducing the error rate in evaluating the quality of AI training. The CapSA algorithm is utilized to achieve this goal. CapSA is a proven and fast meta-heuristic optimization algorithm that is applicable to feature selection. It is capable of both global search and local fine-tuning, is less prone to getting trapped in local optima, and converges quickly to good solutions. Compared with other algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), CapSA is easier to implement, faster, and less likely to converge to local optima. By choosing appropriate features, CapSA increases the model's accuracy, reduces overfitting, and improves model interpretability. In the following, the structure of the solution vector and the objectives defined for the optimization algorithm are first explained, followed by a description of the feature selection steps using CapSA.

In the proposed method, the number of optimization variables corresponds to the number of features present in the database (Table 2), which is equal to 14. In other words, each solution vector of the optimization algorithm is of length 14. CapSA should be capable of determining the selection or non-selection of a feature via the response vector. In this way, each solution vector can be viewed as a binary string where each existing feature is assigned a position in the response vector of the optimization algorithm. Each position can have a value of 0 or 1. If a position has a value of 0, that feature is not selected in the current solution, and if it has a value of 1, the feature corresponding to the current position is considered as the selected feature.

Optimization objectives can be considered the most crucial part of an optimization algorithm. In the proposed method, the following two objectives are utilized to assess the quality or fitness of solution vectors in CapSA:

A) Maximizing the average correlation of the selected features with the target variable: The more a feature correlates with the target variable of the problem, the more significant that feature becomes. In other words, it becomes easier to predict the change in the target based on it. For this reason, maximizing the correlation of the selected features with the target variable is considered as the first objective in the optimization algorithm. This objective criterion is described in Eq. (2):

$$\:{F}_{1}=\frac{1}{\left|S\right|}\sum\:_{\forall\:i\in\:S}corr(i,T)$$

(2)

Where, S represents the set of features selected in the current response and \(\:\left|S\right|\) represents the number of these features. Also, T describes the target variable and \(\:corr(i,T)\) is the correlation evaluation function between the selected feature i and the target variable.

B) Minimizing the average correlation of selected features with each other: A feature is suitable for selection if it can provide new information compared to other selected features. Features that have highly correlated values exhibit similar patterns, and it is not appropriate to select them as descriptive features of the data. For this reason, minimizing the correlation of selected features is considered as the second objective of CapSA. This objective can be described in Eq. (3):

$$\:{F}_{2}=\frac{1}{{\left|S\right|}^{2}}\sum\:_{\forall\:i\in\:S}\sum\:_{\forall\:j\in\:S,\:(j\ne\:i)}corr(i,j)$$

(3)

Since the two aforementioned objectives work in opposite directions (the first is a maximization problem while the second is a minimization problem), they need to be harmonized in the optimization algorithm. Therefore, they are combined into the following fitness function, which CapSA minimizes:

$$\:fitness=\frac{{F}_{2}}{{F}_{1}+1}$$

(4)
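To make Eqs. (2)-(4) concrete, the fitness of a binary solution vector can be sketched as follows. Pearson correlation taken in absolute value is assumed for corr(·,·), since the text does not specify the correlation measure, and the helper name `fitness` is illustrative:

```python
import numpy as np

def fitness(mask, X, y):
    """Fitness of a binary feature mask per Eqs. (2)-(4): F2 / (F1 + 1).

    Assumption (not specified in the text): Pearson correlation in
    absolute value is used for corr(., .), so strong negative correlation
    also counts as relevance/redundancy.
    """
    idx = np.flatnonzero(np.asarray(mask))
    if idx.size == 0:                       # no feature selected: worst fitness
        return np.inf
    # F1: average |correlation| of selected features with the target (Eq. 2)
    f1 = np.mean([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in idx])
    # F2: average |correlation| among selected features, j != i (Eq. 3)
    if idx.size == 1:
        f2 = 0.0
    else:
        f2 = sum(abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                 for i in idx for j in idx if i != j) / (idx.size ** 2)
    return f2 / (f1 + 1.0)                  # Eq. (4), minimized by CapSA
```

Note that, following Eq. (3), the pairwise sum is divided by \(|S|^2\) rather than by the number of distinct pairs.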

Thus, the goal of the feature selection algorithm in the proposed approach is to identify features that can minimize the above relationship. The proposed algorithm, which aims to select the most relevant features to the quality of AI training using CapSA, is as follows:

Step 1) Determine the initial population randomly based on the boundaries set for each optimization variable.

Step 2) Determine the quality of each capuchin (solution) using Eq. (4).

Step 3) Set the initial velocity of each capuchin agent.

Step 4) Select half of the Capuchin population randomly as leaders, and designate the rest as follower Capuchins.

Step 5) If the number of iterations of the algorithm has reached the maximum value of G, proceed to step 13. If not, repeat the following steps:

Step 6) Calculate the CapSA lifetime parameter using Eq. (5):

$$\:\tau\:={\beta\:}_{0}{e}^{-{\beta\:}_{1}{\left(\frac{g}{G}\right)}^{{\beta\:}_{2}}}$$

(5)

Where, g represents the current iteration number, and the parameters \(\:{\beta\:}_{0}\), \(\:{\beta\:}_{1}\), and \(\:{\beta\:}_{2}\) are the coefficients of CapSA lifetime.

Step 7) Repeat the following steps for each Capuchin agent (both leader and follower) like i:

Step 8) If i is a Capuchin leader, update its velocity based on Eq. (6):

$$\:{v}_{j}^{i}=\rho\:{v}_{j}^{i}+\tau\:{a}_{1}\left({x}_{bes{t}_{j}}^{i}-{x}_{j}^{i}\right){r}_{1}+\tau\:{a}_{2}\left(F-{x}_{j}^{i}\right){r}_{2}$$

(6)

Where, j represents the dimensions of the problem, and \(\:{v}_{j}^{i}\) represents the velocity of capuchin i in dimension j. \(\:{x}_{j}^{i}\) indicates the position of capuchin i for the jth variable, and \(\:{x}_{bes{t}_{j}}^{i}\) describes the best position of capuchin i for the jth variable so far. Also, \(\:{r}_{1}\) and \(\:{r}_{2}\) are two random numbers in the interval [0, 1]. Finally, ρ is the parameter that weights the previous velocity.

Step 9) Update the new positions of the leader Capuchins based on their velocity and movement pattern.

Step 10) Update the new positions of the follower Capuchins based on their velocity and the position of the leader.

Step 11) Determine the quality of the population members using Eq. (4).

Step 12) If the position of the entire population is updated, proceed to step 5; else, go to step 7.

Step 13) Return the solution with the best quality value as the set of selected features.

After executing the above steps, a set X is selected as the significant features for the quality of AI training in higher education. This set is then used as the input for the third step of the proposed method. It should be noted that while implementing CapSA for feature selection, the population size and number of iterations were set to 50 and 100, respectively. Also, the CapSA lifetime parameters \(\:{\beta\:}_{0}\), \(\:{\beta\:}_{1}\), and \(\:{\beta\:}_{2}\) (in Eq. 5) were set to 2, 21, and 2, respectively. Additionally, the parameter ρ, which weights the previous velocity in Eq. 6, was set to 0.7.
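For reference, the lifetime parameter of Eq. (5) with these settings can be sketched as a small helper. The exponentially decaying form shown here follows the standard CapSA formulation, and `capsa_lifetime` is an illustrative name:

```python
import math

def capsa_lifetime(g, G, b0=2.0, b1=21.0, b2=2.0):
    """Exponentially decaying CapSA lifetime parameter (Eq. 5).

    Default coefficients are the values reported in this section:
    beta_0 = 2, beta_1 = 21, beta_2 = 2.
    """
    return b0 * math.exp(-b1 * (g / G) ** b2)
```

The parameter starts at \(\beta_0\) in the first iteration and decays toward zero, shifting the search from exploration to exploitation.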

Quality prediction based on ANN and CapSA

After identifying the set of indicators that impact the quality of AI education, the final phase of the presented approach predicts the target variable based on these indicators. The current research models the relationship between the selected features and the target (quality of education) using ANNs. To achieve an accurate prediction model, attention must be given to optimally configuring the multilayer perceptron (MLP) model. Using many neurons and layers in the MLP increases model complexity, while overly simple models can reduce prediction accuracy. Also, conventional training algorithms for adjusting the weight values of NNs cannot guarantee the highest prediction accuracy. To address these challenges, CapSA is used in the proposed method to optimize both the configuration and the training of the MLP model. In this hybrid CapSA-MLP model, CapSA adjusts the MLP architecture and weight vectors, enhancing the model's ability to learn the patterns in the dataset. In general, MLPs are appropriate for nonlinear mappings and can be used for both regression and classification. As a result, the proposed hybrid model built on CapSA and MLP can provide a more accurate and reliable model for predicting the quality of AI education.

In the proposed method, CapSA replaces the conventional training algorithms for MLPs. This optimization model not only adjusts the configuration of the MLP model's hidden layers but also strives to determine the optimal weight vector for the NN, by defining the training performance as the objective function. Figure 1 illustrates the structure of the NN that the proposed method uses to predict the quality of AI training.

Fig. 1 Structure of the NN employed for predicting the AI education quality.

According to Fig. 1, the proposed NN comprises an input layer, two hidden layers, and an output layer. The input layer is populated with the features selected in the previous phase. CapSA determines the number of neurons in the first and second hidden layers; consequently, the proposed MLP structure does not have a static architecture. The activation functions of the two hidden layers are set as logarithmic sigmoid and linear, respectively. Finally, the output layer consists of a single neuron, whose value indicates the predicted score for the input sample. Each neuron in this NN receives a number of weighted inputs, depicted as directed edges in Fig. 1. In addition, each neuron possesses a bias value, omitted from the figure for simplicity. Under these conditions, the output of each neuron, transferred to the neurons of the subsequent layer, is formulated as follows:

$$\:{y}_{i}=G\left(\sum\:_{n=1}^{{N}_{i}}{w}_{n}\times\:{x}_{n}+{b}_{i}\right)$$

(7)

Where, \(\:{x}_{n}\) and \(\:{w}_{n}\) denote the input value and weight of the nth neuron, respectively, and \(\:{b}_{i}\) represents the bias value of this neuron. Also, \(\:{N}_{i}\) indicates the number of inputs of the ith neuron and \(\:G(.)\) defines the activation function. As previously mentioned, CapSA is utilized to determine the number of neurons in the hidden layers and to fine-tune the weight vector of this NN. The optimization steps in this phase mirror the process outlined in the second step (feature selection). The distinction in this phase is that a different approach is employed to encode the solution vector and assess fitness. Consequently, the structure of the response vector and the criteria for evaluating suitability are elucidated in the following.
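A minimal sketch of the forward pass through the network of Fig. 1, applying Eq. (7) layer by layer, with a log-sigmoid first hidden layer and a linear second hidden layer as described; a linear output neuron is an assumption here, since the output activation is not stated explicitly:

```python
import numpy as np

def logsig(a):
    """Logarithmic sigmoid activation: G(a) = 1 / (1 + e^(-a))."""
    return 1.0 / (1.0 + np.exp(-a))

def mlp_forward(x, W1, b1, W2, b2, W3, b3):
    """Forward pass of the two-hidden-layer MLP in Fig. 1 (Eq. 7 per neuron).

    x: input feature vector; Wk, bk: weights and biases of layer k.
    """
    h1 = logsig(W1 @ x + b1)   # first hidden layer (log-sigmoid)
    h2 = W2 @ h1 + b2          # second hidden layer (linear)
    return W3 @ h2 + b3        # single output: predicted quality score
```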

The solution vector (capuchin) in the capuchin search algorithm, as utilized in the presented approach, dictates the topology of the MLP as well as its weight/bias vector. Consequently, each solution vector in CapSA is composed of two interconnected parts. The first part determines the size of the hidden layers of the NN, while the second part defines the weight/bias vector corresponding to the topology established in the first part. The capuchins in this step therefore have variable length. Since the number of possible NN topologies is unbounded, a range of 0 to 15 neurons is envisaged for each hidden layer. Each entry in the first part of the solution vector is thus a natural number in the range 0 to 15, and if a layer is assigned no neurons (0), that layer is eliminated. It is worth noting that the first part of each capuchin only specifies the sizes of the hidden layers (not the input or output layers).

The length of the second part of the solution vector is dictated by the topology established in the first part. For a NN with I input neurons, H1 neurons in the first hidden layer, H2 neurons in the second hidden layer, and P output neurons, the length of the second part of each solution vector in CapSA corresponds to:

$$\:L={H}_{1}\times\:\left(I+1\right)+{H}_{2}\times\:\left({H}_{1}+1\right)+P\times\:({H}_{2}+1)$$

(8)

Where, \(\:{H}_{1}\times\:\left(I+1\right)\) represents the number of weights between the input layer and the first hidden layer, plus the biases of the first hidden layer. The value of \(\:{H}_{2}\times\:\left({H}_{1}+1\right)\) denotes the number of weights between the first and second hidden layers, along with the biases of the second hidden layer. Finally, \(\:P\times\:({H}_{2}+1)\) gives the number of weights between the last two layers, as well as the bias of the output layer. Consequently, the length of the second part of each solution vector in the optimization algorithm equals L. In this vector, each weight and bias is represented as a real value within the interval [-1, +1]. In other words, each optimization variable in the second part of the solution vector is characterized as a real variable with search boundaries [-1, +1].
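The length computation of Eq. (8), together with one possible layer-by-layer layout for slicing the flat solution vector into weight matrices and biases, can be sketched as follows. The text fixes only the total length, so the ordering used here (weights then biases, layer by layer) is an assumption:

```python
import numpy as np

def second_part_length(I, H1, H2, P=1):
    """Length of the weight/bias segment of a CapSA solution (Eq. 8)."""
    return H1 * (I + 1) + H2 * (H1 + 1) + P * (H2 + 1)

def unpack_weights(vec, I, H1, H2, P=1):
    """Slice a flat solution vector into per-layer weight matrices and biases."""
    vec = np.asarray(vec, dtype=float)
    assert vec.size == second_part_length(I, H1, H2, P)
    o = 0
    W1 = vec[o:o + H1 * I].reshape(H1, I); o += H1 * I   # input -> hidden 1
    b1 = vec[o:o + H1]; o += H1
    W2 = vec[o:o + H2 * H1].reshape(H2, H1); o += H2 * H1  # hidden 1 -> hidden 2
    b2 = vec[o:o + H2]; o += H2
    W3 = vec[o:o + P * H2].reshape(P, H2); o += P * H2   # hidden 2 -> output
    b3 = vec[o:o + P]
    return W1, b1, W2, b2, W3, b3
```

For instance, with I = 6 selected features, H1 = 4, H2 = 3, and P = 1, Eq. (8) gives L = 4×7 + 3×5 + 1×4 = 47 optimization variables in the second part.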

The first population of CapSA is generated randomly. After the weights have been set from a solution vector, the NN produces outputs for the training instances. These outputs are then compared with the ground-truth values of the target, and based on this comparison, the NN's performance (training quality) is measured. The Mean Absolute Error (MAE) criterion is utilized to assess both the NN's training quality and the optimality of the solution. Consequently, the objective function of CapSA is formulated by Eq. (9):

$$\:MAE=\frac{1}{N}\sum\:_{i=1}^{N}|{T}_{i}-{Z}_{i}|$$

(9)

Where, N denotes the number of training instances and Ti indicates the actual target value of the ith training instance. Also, Zi corresponds to the output generated by the NN for the ith training sample. As previously mentioned, the optimization steps of the MLP model by CapSA in this phase mirror the process outlined in the second step (feature selection), so these details are not repeated. Upon determining the NN with the optimal topology and weight vector that minimizes Eq. (9), this NN is employed to predict the quality of training for new samples. It should be noted that in this phase, CapSA was implemented using the same parameter settings as those used for feature selection.
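The objective of Eq. (9) reduces to a one-line helper; a minimal sketch:

```python
import numpy as np

def mae(targets, outputs):
    """Mean Absolute Error used as the CapSA objective (Eq. 9)."""
    t = np.asarray(targets, dtype=float)
    z = np.asarray(outputs, dtype=float)
    return float(np.mean(np.abs(t - z)))
```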
