VALIDATION OF A DIGITAL COMPETENCE IN ARTIFICIAL INTELLIGENCE SCALE FOR NON-UNIVERSITY STUDENTS
BASED ON THE DIGCOMP MODEL
1University of Malaga (Spain)
2University of Sevilla (Spain)
Received June 2025
Accepted September 2025
Abstract
In the current educational landscape, artificial intelligence (AI) is profoundly transforming teaching and learning processes, underscoring the necessity for students to develop specific digital competences related to AI in order to address the challenges and opportunities of this technological era. This study examines the increasing relevance of AI in education, with a particular focus on the digital competence of non‑university students for its responsible and effective application, aligned with the European DigComp 2.2 framework. The primary aim was to validate the Digital Competence in Artificial Intelligence Questionnaire for Students (CompDigIA). To this end, an instrument was developed based on the five competence areas of DigComp, adapted to the context of AI, and administered to a sample of 482 students. The methodological approach involved both exploratory and confirmatory factor analyses using structural equation modeling, yielding satisfactory fit indices and robust internal consistency. The findings confirm the validity and reliability of the questionnaire for assessing digital competence in AI, while also identifying strengths and areas for improvement among students. The discussion highlights the study’s contribution to addressing a gap in the literature by providing a comprehensive and specialized assessment tool for the non-university educational stage, and situates the results within the context of prior research on digital competence and AI. The study concludes that the instrument offers a solid foundation for future research and educational practice aimed at fostering AI‑related digital competences, with recommendations for expanding its application and addressing ethical and pedagogical considerations associated with educational digitalization.
Keywords – Artificial intelligence, Digital competences, Non-university students, Questionnaire validation, DigComp 2.2.
To cite this article:
Rubio-Gragera, M., Palacios-Rodríguez, A.P., & Colomo-Magaña, E. (2025). Validation of a digital competence in artificial intelligence scale for non-university students based on the DigComp model. Journal of Technology and Science Education, 15(3), 730-745. https://doi.org/10.3926/jotse.3616
1. Introduction
In an increasingly digitalized society, artificial intelligence (AI) has swept across all spheres of human life, assuming considerable importance in the field of education as well (Cabero, Palacios, Loaiza & Rivas, 2024). This impact has created the need to regulate its use in educational contexts. At the international level, UNESCO has established policies governing the use of generative AI (Miao & Holmes, 2024), while in Europe, Regulation (EU) 2024/1689 has identified multiple challenges that must be addressed when integrating AI into teaching (European Union, 2024). Furthermore, there have been calls to establish ethical frameworks for its incorporation into education (Yan & Liu, 2024), making this one of the major obstacles to its implementation. It is therefore essential to safeguard issues such as privacy, equity, and the responsible use of AI (Dellepiane & Guidi, 2023), which demands critical reflection on how (means), why (reasons), and for what purposes (ends) AI is employed (Celik, 2023; Pratama, Sampelolo & Lura, 2023). Additional concerns also arise regarding the reliability of responses generated by AI (López-Regalado, Núñez-Rojas, López-Gil & Sánchez-Rodríguez, 2024), given the algorithms, biases, and automated outputs involved, which are not always fully trustworthy (Zeb, Ullah & Karim, 2024). Another issue is the potential dehumanization of learning (Selwyn, 2022), associated with the loss or reduction of key capacities such as creativity (Zawacki-Richter, Marín, Bond & Gouverneur, 2019), critical thinking (Ali, Murray, Momin, Dwivedi & Malik, 2024), and autonomous reasoning (Trisnawati, Putra & Balti, 2023), as a consequence of an overreliance on generative AI.
Nonetheless, there are numerous benefits associated with its use and potential. AI represents a tool capable of redefining both teaching and learning processes, as well as the role of educators and their possibilities in the classroom. Regarding educational processes, it drives a transformative change in several aspects. First, it transforms teaching models (Hinojo-Lucena, Aznar-Díaz, Cáceres-Reche & Romero‑Rodríguez, 2019), prioritizing learner-centered approaches over traditional methods. Second, it positions students as active participants in the learning process (Trujillo-Torres, 2024), emphasizing the accommodation of diverse learning styles. Third, it enables the personalization of instruction (González‑Calatayud, Prendes-Espinosa & Roig-Vila, 2021), focusing on methodologies and content tailored to the interests and characteristics of each learner. Finally, it fosters enhanced educational inclusion (Harishbhai-Tilala, Kumar-Chenchala, Choppadandi, Kaur, Naguri, Saoji et al., 2024; Romero‑Alonso, Araya-Carvajal & Reyes-Acevedo, 2025), supporting solutions that address student diversity, remove barriers, and foster genuine equality. For educators, AI serves as a complementary resource rather than a replacement (Gómez-Cárdenas, Fuentes-Penna & Castro-Rascón, 2024), shifting their role toward that of a guide and facilitator of active, flexible, and personalized learning processes (Yi, 2024). Additionally, AI provides teachers with enriching possibilities in assessment practices: automating the grading of objective tests such as quizzes (González-Calatayud et al., 2021); delivering personalized feedback tailored to each student’s needs and supportive of their learning (Celik, Dindar, Muukkonen & Järvelä, 2022); and monitoring student progress, enabling the identification of learning patterns and the adjustment of pedagogical strategies accordingly (Owan, Abang, Idika, Etta & Bassey, 2023).
All these changes and possibilities have sparked sustained scholarly interest, from various perspectives, in the implementation of artificial intelligence in education. Research in this field has explored diverse dimensions, including the influence of AI use on university students’ academic performance (García‑Martínez, Fernández-Batanero, Fernández-Cerero & León, 2023); its impact on language learning (Liang, Hwang, Chen & Darmawansah, 2021); the role of generative AI in supporting learning processes (Alenezi, Mohamed & Shaaban, 2023; Cooper, 2023); the potential for academic dishonesty arising from its misuse (Guillén, Sánchez, Colomo & Sánchez, 2025; Şahín, 2024); students’ attitudes toward its integration into educational settings (Pande, Jadhav & Mali, 2023); and the responsible use of AI for teaching among future education professionals (Gómez-García, Ruiz-Palmero, Boumadan-Hamed & Soto‑Varela, 2025).
Despite the generally high levels of acceptance of AI in education (Llorente-Cejudo, Barragán-Sánchez, Palacios-Rodríguez & Fernández-Scagliusi, 2025), there is still a clear need to strengthen training in the pedagogical use of AI, which remains an evident area for improvement among different educational stakeholders (Lucas, Bem-haja, Zhang, Llorente-Cejudo & Palacios-Rodríguez, 2025).
Alongside this, there are emerging thematic areas that require greater emphasis in AI research, such as digital competence, a key factor for the successful integration of AI (Al-Darayseh, 2023; Antonenko & Abramowitz, 2023). Although studies on digital competence have proliferated in recent years (Cabero, Gutiérrez, Palacios & Guillén, 2023; Colomo, Aguilar, Cívico & Colomo, 2023; Colomo, Cabero, Guillén & Palacios, 2025; Guillén, Colomo, Cívico & Linde, 2024; Palacios, Llorente, Lucas & Bem-Haja, 2025; Paz & Gisbert, 2024; Pinto, Pérez & Darder, 2023; Romero-Tena, Barragán, Gutiérrez & Palacios, 2024; Tomczyk, Mascia, Gierszewski & Walker, 2023; Villén, Ágreda & Rodríguez, 2024), there is a lack of research examining how AI has impacted these competencies. This gap is particularly evident when focusing on students’ digital competence for AI use (Rubio, Colomo & Palacios, 2025).
Initially, studies exclusively focused on students’ digital competence at the pre-university level reported low (Rodríguez, Betín, Caurcel & Gallardo, 2024; Turpo, Zea, Huamaní, Girón, Pérez & Aguaded, 2023) or moderate levels (López, Sánchez & García-Valcárcel, 2021). These findings confirm that learners’ digital competence was already rather limited before the arrival of AI, and that it must now additionally encompass skills for the effective and enriching use of this technology. In this regard, the study by Wu and Zhang (2025) shows that integrating AI into educational processes helped improve students’ overall digital literacy, while critical and ethical use remains an area requiring further development.
Among the efforts to integrate AI into digital competence frameworks, a key milestone is version 2.2 of the European Digital Competence Framework for Citizens (DigComp), which incorporates AI across its five areas and the competencies that define each of them (Vuorikari, Kluzer & Punie, 2022). Examining the studies that have begun to investigate students’ digital competence in relation to AI use, we can identify several interesting proposals.
The study by Levy-Nadav, Shamir-Inbal and Blau (2024), which examined secondary school teachers’ perceptions of their students’ AI-related digital competencies based on the DigComp 2.2 framework, concluded that students needed to improve the critical evaluation of information obtained through generative AI (Information and Data Literacy area), the writing of prompts and the use of generative AI (Digital Content Creation area), and the selection of appropriate AI tools (Problem Solving area).
The research by Zhang and Tian (2025), which examined how 88 prestigious universities worldwide approached the acquisition of digital competence with generative AI using the DigComp framework, found a strong emphasis on information and digital literacy, as well as on security. Estrada, Rodrigo, Ruiz and García (2025) analyzed AI use among 78 pre-service teachers, referencing the DigComp 2.2 competencies on critical data evaluation, copyright, and creative use of technology. The results indicated improved abilities in data evaluation and creativity, while training in the responsible and ethical use of AI remained necessary.
It is also noteworthy to mention Wang’s (2025) study with 43 secondary school students from a rural area in China, whose digital competence was measured using DigComp 2.0 before and after implementing a digital skills program based on AI. The results showed an improvement in digital competence, confirming that AI integration contributes to the development of these skills. Finally, the proposals by Głushkova and Ignatova (2023) are also worth highlighting, as they explored how to develop certain DigComp 2.2 competencies in secondary school students through AI, providing practical examples of implementation.
Although some studies have addressed certain digital competencies for AI use based on the DigComp 2.2 framework, no research has yet included all areas of this framework adapted to the phenomenon of artificial intelligence for secondary education students. Therefore, a deeper understanding of the skills and abilities students possess to use AI in their learning process depends on the availability of validated instruments that can explore the integration of these competencies. Without such instruments, it is not possible to obtain accurate insights into the impact of AI integration on students’ cognitive processes or academic performance in relation to the development of their digital competencies. This limitation also constrains decision-making regarding educational policies, due to the lack of supporting evidence.
In this regard, existing validated instruments explore both areas of interest, but independently and not always prioritizing students as the primary subjects. Concerning digital competence questionnaires, García‑Valcárcel, Casillas-Martín and Gómez-Pablos (2020) developed the INCODIES questionnaire to measure digital competencies across the five DigComp areas in secondary school students. Iglesias, Hernández, Martín and Herráez (2021) also based their work on the DigComp framework for secondary education students, although their focus was exclusively on the Communication area. Regarding AI, Ng, Wu, Leung, Chiu and Chu (2024) validated a questionnaire centered on AI literacy processes for secondary students. Including both primary and secondary students, Suh and Ahn (2022) validated a scale assessing attitudes toward AI use.
Therefore, no instrument was found in the scientific literature that combines both constructs and assesses secondary education students’ digital competence for using AI in their learning process; filling this empirical gap is the primary contribution of this study.
For the validation, the structural equation modeling (SEM) approach was used, a common validation method in the field of educational research for measuring complex constructs such as self-perception. This approach allows the correct verification of item design, the empirical support of the theoretical model underlying the questionnaire, and its validity and reliability for implementation in similar studies. This validation model has been successfully applied in various teacher-focused instruments, including general assessments (Cabero, Barroso, Gutiérrez & Palacios, 2020) and studies examining specific contexts, such as digital competence in research activities (Duarte, Palacios, Guzmán & Segura, 2024; Guillén, Tomczyk, Colomo & Mascia, 2024), YouTube as an educational resource (Guillén, Ruiz, Colomo & Cívico, 2023), and eco-responsible technology use (Barragán, Corujo, Palacios & Román, 2020). For all these reasons, the objective of this study was to validate a questionnaire designed to assess students’ Digital Competence in relation to AI, using the European DigComp framework as a reference, through structural equation modeling.
2. Methodology
To ensure the validity and reliability of the instrument designed to assess secondary education students’ digital competence in the use of artificial intelligence, a rigorous methodological process was followed, comprising three main stages. First, a reliability analysis was conducted to examine the internal consistency of the scale through the calculation of appropriate coefficients. Next, an exploratory factor analysis (EFA) was performed to identify the underlying structure of the items and verify the coherent grouping of dimensions proposed in relation to the DigComp model adapted to AI. Finally, a confirmatory factor analysis (CFA) using structural equation modeling was carried out to corroborate the validity of the theoretical model and assess the fit of the factorial structure derived from the EFA, thereby ensuring the robustness of the questionnaire for future application in similar educational contexts.
2.1. Objectives
The primary aim of this study was to design and validate a scale to assess secondary education students’ digital competence in the use of artificial intelligence, using the European DigComp framework as a reference, updated with AI-specific aspects. Specifically, the following objectives were set: (1) to analyze the reliability of the instrument through internal consistency indices; (2) to explore the factorial structure of the scale using an exploratory factor analysis (EFA) to identify and group dimensions consistent with the proposed theoretical model; and (3) to confirm the adequacy of the factorial model through a confirmatory factor analysis (CFA), verifying its structural validity and the suitability of fit indices, in order to ensure the relevance and robustness of the questionnaire for its application in future research on digital competence and artificial intelligence in secondary education contexts.
2.2. Participants
The sample for this study consisted of a total of 482 secondary education students. Regarding gender, 49 % identified as female, 48.5 % as male, and 2.5 % preferred not to specify. Age distribution showed that most students were between 12 and 17 years old, with notable groups at 14 years (21.6 %), 16 years (18.3 %), and 15 years (16.6 %); only 5.8 % were 18 years or older. Concerning educational level, 82.6 % were enrolled in Compulsory Secondary Education (ESO, in Spain), 16.6 % in Baccalaureate, and 0.8 % in Vocational Education and Training (VET). The majority attended public schools (99.6 %), mainly located in rural areas or small towns (87.6 %), while 12.4 % studied in urban schools. Although students in rural areas often face limitations in access to the technological devices and connectivity that AI requires, including difficulties with Wi-Fi access, the composition of the sample was determined by its availability (non-probabilistic sampling). Nevertheless, these municipalities, despite not being provincial capitals, each have populations exceeding 25,000 inhabitants, with infrastructure and resources comparable to those of urban areas. This aspect will, however, be acknowledged as a limitation regarding the potential generalization of the results.
Regarding parental education, a heterogeneous representation was observed: 37.8 % had parents or guardians with basic education, 23.7 % with university studies, 13.7 % with Baccalaureate, 10.8 % without formal education, 9.5 % with vocational/technical training, and 4.6 % with postgraduate studies (Master’s or PhD). Concerning technological resources, the majority of students have devices at home: 98.3 % have a mobile phone, 88.4 % a computer, 71.8 % a tablet, and 52.7 % virtual assistants such as Alexa, Google Assistant, or Siri; additionally, 66.8 % possess other technological devices.
With respect to daily technology use, 38.6 % reported using their mobile phone between 2 and 4 hours per day, and 32.4 % use it for more than 4 hours. Tablet, computer, and console use was lower, with most students using these devices between 0 and 2 hours daily. Moreover, 75.1 % indicated that there were no established limits on daily technology use.
Regarding AI use, 55.6 % reported that their teachers use it occasionally, while 58.5 % acknowledged using it sometimes to complete school tasks. Students also perceived that their peers use AI with similar frequency, mainly at medium or high levels. Finally, regarding the perceived usefulness of AI tools for improving English language skills, the highest positive evaluation was observed for writing (42.7 % somewhat useful, 28.2 % very useful), while listening and speaking were rated lower, with the majority perceiving little or no usefulness.
2.3. Instrument
For data collection, a questionnaire was designed and validated, named the Digital Competence of Students in Artificial Intelligence Questionnaire (CompDigIA), registered under Entry Number 04/2025/1748 in the Territorial Registry of Intellectual Property, administered by the Ministry of Culture of the Spanish Government. The questionnaire aims to assess students’ level of digital competence in relation to the use and application of artificial intelligence tools, following the structure of the European Union’s DigComp Framework for Citizens’ Digital Competence. The instrument is organized into five competence areas, each including several specific items that allow a detailed exploration of students’ skills, attitudes, and practices. Table 1 presents the correspondence between each area and its associated items.
Each item is formulated as a closed-ended question using a Likert-type response scale, allowing for the assessment of different levels of competence and perception. The questionnaire was reviewed by a panel of experts in educational technology, artificial intelligence, and digital competence assessment, thereby ensuring its content validity and its alignment with the indicators of the DigComp Framework.
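As an illustration of how responses to these items could be aggregated, the sketch below computes a mean score per DigComp area. It is a minimal example only: the 1-5 Likert coding and the column names (taken from the item codes in Table 1) are assumptions, since the questionnaire’s scoring procedure is not prescribed here.

```python
import pandas as pd

# Item codes per DigComp area, following Table 1 (column names assumed).
AREA_ITEMS = {
    "Information and Data Literacy": ["A1", "A2", "A3"],
    "Communication and Collaboration": ["B1", "B2", "B3", "B4", "B5", "B6"],
    "Digital Content Creation": ["C1", "C2", "C3", "C4"],
    "Safety": ["D1", "D2", "D3", "D4"],
    "Problem Solving": ["E1", "E2", "E3", "E4"],
}

def area_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Mean score per area for each student, assuming 1-5 Likert responses."""
    return pd.DataFrame(
        {area: responses[items].mean(axis=1) for area, items in AREA_ITEMS.items()}
    )
```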
2.4. Data Analysis Procedure
First, the internal reliability of the scale was assessed by calculating Cronbach’s Alpha and McDonald’s Omega for each dimension as well as for the overall questionnaire. These indicators provide a measure of the internal consistency of the items, that is, the degree to which they coherently assess the same latent construct. Alpha and Omega values close to 0.70 are generally considered acceptable for exploratory and descriptive studies in the social and educational sciences (Nunnally & Bernstein, 1994).
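As a minimal open-source sketch of this step (pandas plus the factor_analyzer package, rather than the authors’ original software), the two coefficients could be computed per dimension as follows; the omega calculation assumes a one-factor model within each dimension:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def mcdonald_omega(items: pd.DataFrame) -> float:
    """Omega total from a one-factor maximum-likelihood solution on standardized items."""
    z = (items - items.mean()) / items.std(ddof=1)
    fa = FactorAnalyzer(n_factors=1, rotation=None, method="ml")
    fa.fit(z)
    loadings = fa.loadings_.flatten()
    uniquenesses = fa.get_uniquenesses()
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())
```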
A. Information and Data Literacy
A.1 Are you able to search for information on the Internet using artificial intelligence tools?
A.2 Are you able to determine whether the information or content generated by artificial intelligence tools is reliable and accurate?
A.3 Are you able to organize, save, and retrieve the information or content you have generated with artificial intelligence?

B. Communication and Collaboration
B.1 Do you know how to use artificial intelligence tools to communicate and interact with other people?
B.2 Are you able to share information or content using artificial intelligence tools?
B.3 Can you use artificial intelligence to carry out activities that benefit society?
B.4 Do you find artificial intelligence useful for teamwork?
B.5 When using artificial intelligence tools, do you follow proper digital etiquette and communication rules? (For example: avoiding swear words, not making spelling mistakes, etc.)
B.6 Do you think artificial intelligence tools can help you maintain your online reputation and digital identity?

C. Digital Content Creation
C.1 Are you able to create new texts, images, or videos using artificial intelligence tools?
C.2 Are you able to modify or improve existing digital content with the help of artificial intelligence tools?
C.3 Do you know how to respect copyright and licensing rules for digital content when using artificial intelligence tools?
C.4 Are you able to solve programming problems, such as those in Robotics class, with the help of artificial intelligence tools?

D. Safety
D.1 Can artificial intelligence help you protect your devices and digital content from online risks and threats?
D.2 Can artificial intelligence help you protect your privacy and personal data that may be online?
D.3 Do you think artificial intelligence tools can help you protect and improve your physical and mental health when using technology?
D.4 Do you think using artificial intelligence tools can help reduce the environmental impact of technology? For example, do you think AI could help reduce energy consumption or make recycling more efficient?

E. Problem Solving
E.1 Can AI help you solve technical problems with your digital devices?
E.2 Can you use AI tools to personalize and adjust digital environments to your needs? For example, using AI to remind you of a task or to choose your favorite music.
E.3 Are you able to use AI tools to generate creative digital content, such as drawings, songs, poems, etc.?
E.4 Do you think there are more things you could learn about the use and usefulness of artificial intelligence tools?
Table 1. Dimensions and Items of the Digital Competence of Students in Artificial Intelligence Questionnaire (CompDigIA)
Next, an Exploratory Factor Analysis (EFA) was conducted using the principal components method with orthogonal Varimax rotation, implemented in SPSS version 29. The aim of this analysis was to identify the underlying structure of common factors across the set of items and to explore their grouping into coherent dimensions, in line with the theoretical structure based on the DigComp Framework. Varimax rotation was applied to obtain a clearer solution and facilitate the interpretation of the factors, maximizing the explained variance while minimizing cross-loadings between factors.
Prior to conducting the EFA, the suitability of the data for factor analysis was assessed. This involved evaluating the adequacy of the correlation matrix using Bartlett’s test of sphericity (Omnibus) and the Kaiser-Meyer-Olkin (KMO) measure. The significance of Bartlett’s test (<.001) and a KMO value above 0.70 (.892) indicated that it was appropriate to proceed with the factor analysis.
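A minimal open-source analogue of this sequence (adequacy checks followed by principal-components extraction with Varimax rotation), using the factor_analyzer package instead of SPSS, might look like this:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def run_efa(responses: pd.DataFrame, n_factors: int = 5) -> pd.DataFrame:
    """Adequacy checks plus a principal-components EFA with Varimax rotation."""
    # Bartlett's test should be significant and the overall KMO above ~0.70
    # before factoring (the study reports p < .001 and KMO = .892).
    chi_square, p_value = calculate_bartlett_sphericity(responses)
    _, kmo_overall = calculate_kmo(responses)
    print(f"Bartlett chi2 = {chi_square:.1f} (p = {p_value:.4f}); KMO = {kmo_overall:.3f}")

    # Principal-components extraction with orthogonal Varimax rotation,
    # mirroring the SPSS settings described above.
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="varimax")
    fa.fit(responses)
    return pd.DataFrame(fa.loadings_, index=responses.columns)
```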
Once the factorial structure had been explored, a Confirmatory Factor Analysis (CFA) was conducted using the Maximum Likelihood (ML) method in AMOS software version 29. The CFA was used to statistically test the hypothesized factorial model, derived from the EFA and grounded in the DigComp Framework. This procedure enabled the assessment of model fit to the empirical data, examining construct validity and the adequacy of the factor loadings for each item. Absolute and parsimonious fit indices (e.g., RMSEA) were used to evaluate model fit, considering values deemed acceptable according to the specialized literature (Lévy-Mangin, Varela-Mallou & Abad-González, 2006; Hu & Bentler, 1999).
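AMOS is proprietary; as an open-source analogue (not the authors’ exact setup), the hypothesized five-factor model can be specified and estimated by maximum likelihood with the semopy package. Factor names below are illustrative:

```python
import semopy

# Five-factor CFA specification in semopy's lavaan-like syntax, following
# the DigComp-based structure of Table 1 (factor names are illustrative).
MODEL_DESC = """
InfoData        =~ A1 + A2 + A3
Communication   =~ B1 + B2 + B3 + B4 + B5 + B6
ContentCreation =~ C1 + C2 + C3 + C4
Safety          =~ D1 + D2 + D3 + D4
ProblemSolving  =~ E1 + E2 + E3 + E4
"""

def run_cfa(responses):
    """Fit the CFA by maximum likelihood and return the model with its fit indices."""
    model = semopy.Model(MODEL_DESC)
    model.fit(responses, obj="MLW")   # Wishart maximum-likelihood objective
    stats = semopy.calc_stats(model)  # includes chi2, CFI, TLI, GFI, AGFI, RMSEA
    return model, stats
```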
3. Results
3.1. Instrument Reliability
First, the internal reliability of the Digital Competence of Students in Artificial Intelligence Questionnaire (CompDigIA) was analyzed using Cronbach’s Alpha and McDonald’s Omega coefficients. The results indicate a high level of internal consistency for the overall scale, with α = 0.91 and Ω = 0.90, reflecting excellent reliability according to the criteria established by Nunnally and Bernstein (1994). Regarding the dimensions comprising the questionnaire, the following reliability indices were obtained:
• Area 1: Information and Data Literacy → α = 0.85 / Ω = 0.72
• Area 2: Communication and Collaboration → α = 0.88 / Ω = 0.77
• Area 3: Digital Content Creation → α = 0.87 / Ω = 0.70
• Area 4: Safety → α = 0.83 / Ω = 0.79
• Area 5: Problem Solving → α = 0.86 / Ω = 0.73
All of these values exceed the recommended minimum threshold of 0.70, indicating adequate internal consistency within each dimension and supporting the coherence of the items grouped in each factor. These results suggest that the questionnaire demonstrates robust psychometric properties in terms of reliability, making it a suitable tool for assessing students’ digital competence in artificial intelligence.
3.2. Exploratory Factor Analysis
| Item | 1 | 2 | 3 | 4 | 5 |
|------|------|------|------|------|------|
| A1 | .743 | | | | |
| A2 | .659 | | | | |
| A3 | .721 | | | | |
| B1 | | .767 | | | |
| B2 | | .617 | | | |
| B3 | | .676 | | | |
| B4 | | .699 | | | |
| B5 | | .780 | | | |
| B6 | | .700 | | | |
| C1 | | | .701 | | |
| C2 | | | .723 | | |
| C3 | | | .699 | | |
| C4 | | | .725 | | |
| D1 | | | | .775 | |
| D2 | | | | .803 | |
| D3 | | | | .614 | |
| D4 | | | | .676 | |
| E1 | | | | | .732 |
| E2 | | | | | .632 |
| E3 | | | | | .675 |
| E4 | | | | | .689 |
Table 2. Rotated component matrix
Subsequently, an Exploratory Factor Analysis (EFA) was conducted to identify the underlying structure of the items and to verify the adequacy of the proposed model. The principal components method was used with Varimax rotation and Kaiser normalization. The EFA identified five principal factors with eigenvalues exceeding 1, which together explained 59.73 % of the total variance. This percentage is considered satisfactory in social and educational science studies (Hair, Black, Babin & Anderson, 2014). After Varimax rotation, the factor loadings of the items showed a clear structure consistent with the theoretical framework, with the items grouping consistently within each of the expected factors, as detailed in Table 2.
Factor loadings ranged from 0.576 to 0.803, reflecting significant and meaningful associations relevant for the interpretation of each dimension (Hair et al., 2014). Therefore, the EFA results support the multidimensional structure of the questionnaire and confirm the appropriateness of the items within each of the proposed theoretical factors.
3.3. Confirmatory Factor Analysis
A Confirmatory Factor Analysis (CFA) was conducted using the Maximum Likelihood method to evaluate the factorial structure of the Digital Competence of Students in Artificial Intelligence Questionnaire (CompDigIA). The structural diagram is presented below (Figure 1).
Figure 1. Structural Diagram
The factor loadings of the items on their corresponding factors were generally satisfactory and exceeded 0.50 for most items, indicating that the items are appropriately associated with their latent factors (Lévy‑Mangin et al., 2006). Correlations between factors showed positive, moderate-to-strong relationships, ranging from 0.476 to 0.896, reflecting adequate coherence and association among the questionnaire’s dimensions without excessive multicollinearity. Regarding the overall fit indices, the following results were obtained in comparison with the recommended thresholds reported by Lévy‑Mangin et al. (2006).
• Chi-square (CMIN) = 821.495 with 179 degrees of freedom (p < 0.001), and a CMIN/DF ratio of 4.589. Although the chi-square value is significant, this statistic is known to be sensitive to sample size; the CMIN/DF ratio, while somewhat high, falls within an acceptable range (below 5).
• Absolute fit indices: GFI = 0.862 and AGFI = 0.822, indicating a reasonable model fit.
• Incremental fit indices: CFI = 0.833, IFI = 0.835, and TLI = 0.804, values that indicate an acceptable model fit.
• RMSEA = 0.086 (90 % CI: 0.080–0.092), a value slightly above the conventional 0.08 cutoff but below 0.10, indicating an acceptable, if improvable, fit of the model to the data; this point estimate is consistent with the reported chi-square and sample size, as the sketch after this list verifies.
• Parsimony measures: PNFI = 0.680 and PCFI = 0.710, reflecting an adequate balance between model fit and simplicity.
• The Hoelter index at a significance level of 0.05 was 124, indicating that the sample size is adequate for this model.
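As a consistency check, the RMSEA point estimate can be recovered from the reported chi-square, degrees of freedom, and sample size via RMSEA = sqrt((χ²/df − 1)/(N − 1)); a minimal sketch:

```python
import math

# Values reported above; n is the study's sample size.
chi2, df, n = 821.495, 179, 482

cmin_df = chi2 / df                                       # 4.589
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))   # ~0.086
print(f"CMIN/DF = {cmin_df:.3f}; RMSEA = {rmsea:.3f}")
```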
Therefore, the proposed factorial model for the CompDigIA demonstrates a reasonable fit and a factorial structure consistent with the underlying theoretical framework.
4. Discussion
This study has developed and validated the Digital Competence of Students in Artificial Intelligence Questionnaire (CompDigIA), a pioneering instrument that specifically and cohesively integrates the digital competencies of non-university students aimed at the use of artificial intelligence within the learning process. In this regard, the research represents a significant advancement in the scientific landscape, where until now fragmented studies predominated, addressing either digital competence or AI independently, or employing instruments designed for other target groups, such as teachers (Cabero et al., 2020; Guillén, Tomczyk et al., 2024).
The reliability results, measured through Cronbach’s Alpha and McDonald’s Omega, showed consistent and adequate values both for the overall instrument and for each of its dimensions, confirming the internal coherence of the questionnaire. This aligns with the standards recommended for psychometric instruments in education (Nunnally & Bernstein, 1994). Furthermore, the exploratory and confirmatory factor analyses supported the theoretical structure based on the five areas of the European DigComp 2.2 framework, adapted to the AI context (Vuorikari et al., 2022), demonstrating that the proposed model adequately represents students’ competences for the use of AI.
The fit indices obtained from the confirmatory factor analysis—particularly the RMSEA and the acceptable values of CFI, GFI, and TLI—indicate that the theoretical model demonstrates a satisfactory fit to the empirical data, while leaving room for future improvement. This statistical robustness provides solid evidence supporting the validity of the CompDigIA and its practical utility in educational contexts.
Compared with previous studies, such as those by Levy-Nadav et al. (2024) and Estrada et al. (2025), which examined digital competences and AI use from more partial perspectives, our instrument addresses digital competence in AI comprehensively and specifically for non-university students, thereby filling a critical gap identified in the literature (Rubio et al., 2025). The findings confirm that students demonstrate varying levels of digital competence in AI, with certain areas requiring greater attention, particularly items with lower factor loadings, which may reflect emerging or less developed aspects of digital competence in AI.
This heterogeneity is consistent with previous research reporting low to moderate levels of digital competence in secondary education contexts (López et al., 2021). Therefore, the CompDigIA not only provides a reliable diagnostic tool but also serves as a starting point for educational interventions aimed at strengthening these competences, in line with the demands set forth by UNESCO (Miao & Holmes, 2024) and the European Regulation 2024/1689.
5. Conclusions
The presence of artificial intelligence in the educational sphere represents a revolution in teaching and learning practices (Matosas, Gómez & Boumadan, 2025; Yi, 2024). Alongside this, students need to strengthen their digital competences in order to use these technologies critically, ethically, and effectively (Al-Darayseh, 2023). In this context, it is essential to have valid and reliable instruments that can assess students’ levels of digital competence in AI (Rubio et al., 2025). The development and validation of such tools, as exemplified by the present study, not only support pedagogical practices but also have a direct impact on educational policy, providing evidence to inform decisions about which areas to prioritize in students’ digital training for AI use (Vuorikari et al., 2022). Accordingly, such training should address key aspects, including the critical evaluation of information, responsible digital content creation, the protection of security and privacy, and the ethical resolution of problems using artificial intelligence, thereby ensuring preparation that meets current educational challenges (Głushkova & Ignatova, 2023).
Despite its contributions, the study presents several limitations that should be considered when interpreting and applying the results. First, the sample was limited to a specific geographic and educational context, which may restrict the generalizability of the findings to other regions or educational levels. In particular, the high proportion of students from rural contexts, with their associated contextual characteristics, must be considered when making comparisons with similar research. It should be noted, however, that this situation was determined by the accessibility of the sample (non-probabilistic sampling). Nonetheless, for this study, the municipalities had populations exceeding 25,000 inhabitants and technological resources and infrastructure comparable to those in urban areas.
The cross-cultural or intercultural validity of the questionnaire still needs to be evaluated through future studies in different national and international contexts. Additionally, the instrument focuses on students’ self-perceptions of their digital competence in AI, which may introduce biases related to self-assessment, such as overestimation or underestimation of actual skills. Complementing these data with objective assessments or observed performance would be advisable for future research.
This study opens multiple avenues for further research on digital competence in AI and its impact on education. A priority line of inquiry is the cross-cultural validation and adaptation of the CompDigIA to ensure its applicability across diverse contexts, as well as its suitability for different educational levels, including primary, secondary, and vocational education. Additionally, the development of longitudinal studies is recommended to observe the evolution of digital competence in AI over time and in response to specific training programs, thereby identifying the most effective pedagogical strategies for its enhancement, as suggested by Wang (2025). Another important line of research involves integrating objective assessments and mixed-methods approaches that combine self-assessment with task analysis, observations, and actual performance to obtain a more comprehensive and reliable evaluation of students’ digital competence in AI. Furthermore, exploring the relationship between digital competence in AI and variables such as academic performance, motivation, creativity, or critical thinking will provide a deeper understanding of the educational impact of these skills. Finally, given the rapid technological evolution and ongoing ethical and legal debates regarding the responsible use of AI in education (Dellepiane & Guidi, 2023; Yan & Liu, 2024), it is essential that future research addresses ethical considerations, privacy, and equity in AI use, as well as the development of competencies related to critical thinking and reflective use of these tools, ensuring comprehensive training that prepares students for a digitized and ethically complex world.
In summary, this study makes a significant contribution to the field of digital education and artificial intelligence by providing a validated instrument that comprehensively measures students’ digital competence for using AI in their learning. This advancement facilitates the assessment, diagnosis, and planning of educational strategies that address the emerging needs of a society increasingly shaped by digitalization and the growing presence of artificial intelligence. The consolidation and broader application of the CompDigIA will not only advance research but also support evidence-based decision-making and the development of educational policies, contributing to a more equitable, ethical, and future-ready education in the twenty-first century.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
References
Al-Darayseh, A. (2023). Acceptance of artificial intelligence in teaching science: Science teachers’ perspective. Computers and Education: Artificial Intelligence, 4, 100132. https://doi.org/10.1016/j.caeai.2023.100132
Alenezi, M.A.K., Mohamed, A.M., & Shaaban, T.S. (2023). Revolutionizing EFL special education: how ChatGPT is transforming the way teachers approach language learning. Innoeduca. International Journal of Technology and Educational Innovation, 9(2), 5-23. https://doi.org/10.24310/innoeduca.2023.v9i2.16774
Ali, O., Murray, P.A., Momin, M., Dwivedi, Y.K., & Malik, T. (2024). The effects of artificial intelligence applications in educational settings: Challenges and strategies. Technological Forecasting and Social Change, 199, 123076. https://doi.org/10.1016/j.techfore.2023.123076
Antonenko, P., & Abramowitz, B. (2023). In-service teachers’ (mis)conceptions of artificial intelligence in K-12 science education. Journal of Research on Technology in Education, 55(1), 64-78. https://doi.org/10.1080/15391523.2022.2119450
Barragán, R., Corujo, M.C., Palacios, A., & Román, P. (2020). Teaching Digital Competence and Eco-Responsible Use of Technologies: Development and Validation of a Scale. Sustainability, 12(18), 7721. https://doi.org/10.3390/su12187721
Cabero, J., Barroso, J., Gutiérrez, J.J., & Palacios, A. (2020). Validación del cuestionario de competencia digital para futuros maestros mediante ecuaciones estructurales. Bordón. Revista de Pedagogía, 72(2), 45-63. https://doi.org/10.13042/Bordon.2020.73436
Cabero, J., Gutiérrez, J.J., Palacios, A., & Guillén, F.D. (2023). Digital Competence of university students with disabilities and factors that determine it. A descriptive, inferential and multivariate study. Education and Information Technologies, 28, 9417-9436. https://doi.org/10.1007/s10639-022-11297-w
Cabero, J., Palacios, A., Loaiza, M.I., & Rivas, M.R. (2024). Acceptance of Educational Artificial Intelligence by Teachers and Its Relationship with Some Variables and Pedagogical Beliefs. Education Sciences, 14(7), 740. https://doi.org/10.3390/educsci14070740
Celik, I. (2023). Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Computers in Human Behavior, 138, 107468. https://doi.org/10.1016/j.chb.2022.107468
Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The Promises and Challenges of Artificial Intelligence for Teachers: a Systematic Review of Research. TechTrends, 66, 616-630. https://doi.org/10.1007/s11528-022-00715-y
Colomo, E., Aguilar, Á.I., Cívico, A., & Colomo, A. (2023). Percepción de futuros docentes sobre su nivel de competencia digital. Revista Electrónica Interuniversitaria de Formación del Profesorado, 26(1), 27-39. https://doi.org/10.6018/reifop.542191
Colomo, E., Cabero, J., Guillén, F.D., & Palacios, A. (2025). Educators’ perspective on YouTube use: an analysis of their digital competences according to the territory and educational stage. Technology, Pedagogy and Education, 1, 1-20. https://doi.org/10.1080/1475939X.2025.2544711
Cooper, G. (2023). Examining Science Education in ChatGPT: An Exploratory Study of Generative Artificial Intelligence. Journal of Science Education and Technology, 32, 444-452. https://doi.org/10.1007/s10956-023-10039-y
Dellepiane, P., & Guidi, P. (2023). La inteligencia artificial y la educación: Retos y oportunidades desde una perspectiva ética. Question/Cuestión, 3(76), e859. https://doi.org/10.24215/16696581e859
Duarte, R.E., Palacios, A., Guzmán, Y.I., & Segura, L.R. (2024). Validation Using Structural Equations of the “Cursa-T” Scale to Measure Research and Digital Competencies in Undergraduate Students. Societies, 14(2), 22. https://doi.org/10.3390/soc14020022
Estrada, O., Rodrigo, P., Ruiz, J.L., & García, E. (2025). Digital Competence in Initial Teacher Training in the Use of Artificial Intelligence: Students’ Perceptions of Artificial Intelligence Use in Education. In Mateo, C., & Cortijo, A. (Eds.), Transformations in Digital Learning and Educational Technologies (135-168). IGI Global.
European Union (2024). Reglamento (UE) 2024/1689 del Parlamento Europeo y del Consejo de 13 de junio de 2024 sobre un enfoque armonizado de la inteligencia artificial (Ley de Inteligencia Artificial). Diario Oficial de la Unión Europea. Available at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
García-Martínez, I., Fernández-Batanero, J.M., Fernández-Cerero, J., & León, S.P. (2023). Analysing the Impact of Artificial Intelligence and Computational Sciences on Student Performance: Systematic Review and Meta-analysis. Journal of New Approaches in Educational Research, 12(1), 171-197. https://doi.org/10.7821/naer.2023.1.1240
García-Valcárcel, A., Casillas-Martín, S., & Gómez-Pablos, V.B. (2020). Validation of an Indicator Model (INCODIES) for Assessing Student Digital Competence in Basic Education. Journal of New Approaches in Educational Research, 9, 110-125. https://doi.org/10.7821/naer.2020.1.459
Głushkova, T., & Ignatova, N. (2023). Approaches to building key digital competences in Secondary School artificial learning intelligence. Education and Technologies Journal, 14(1), 23-32. https://doi.org/10.26883/2010.231.5059
Gómez-Cárdenas, R., Fuentes-Penna, A., & Castro-Rascón, A. (2024). El Uso Ético y Moral de la Inteligencia Artificial en Educación e Investigación. Ciencia Latina Revista Científica Multidisciplinar, 8(5), 3243-3261. https://doi.org/10.37811/cl_rcm.v8i5.13801
Gómez-García, M., Ruiz-Palmero, J., Boumadan-Hamed, M., & Soto-Varela, R. (2025). Perceptions of future teachers and pedagogues on responsible AI. A measurement instrument. RIED-Revista Iberoamericana de Educación a Distancia, 28(2), 105-130. https://doi.org/10.5944/ried.28.2.43288
González-Calatayud, V., Prendes-Espinosa, P., & Roig-Vila, R. (2021). Artificial Intelligence for Student Assessment: A Systematic Review. Applied Sciences, 11(12), 5467. https://doi.org/10.3390/app11125467
Guillén, F.D., Ruiz, J., Colomo, E., & Cívico, A. (2023). Construcción de un instrumento sobre las competencias digitales del docente para utilizar YouTube como recurso didáctico: análisis de fiabilidad y validez. Revista de Educación a Distancia (RED), 23(76). https://doi.org/10.6018/red.549501
Guillén, F.D., Sánchez, E., Colomo, E., & Sánchez, E. (2025). Incident factors in the use of ChatGPT and dishonest practices as a system of academic plagiarism: the creation of a PLS-SEM model. Research and Practice in Technology Enhanced Learning, 20, 028. https://doi.org/10.58459/rptel.2025.20028
Guillén, F.D., Colomo, E., Cívico, A., & Linde, T. (2024). Which is the Digital Competence of Each Member of Educational Community to Use the Computer? Which Predictors Have a Greater Influence? Technology, Knowledge and Learning, 29, 1-20. https://doi.org/10.1007/s10758-023-09646-w
Guillén, F.D., Tomczyk, Ł., Colomo, E., & Mascia, M.L. (2024). Digital competence of Higher Education teachers in research work: validation of an explanatory and confirmatory model. Journal of e-Learning and Knowledge Society, 20(3), 1-12. https://doi.org/10.20368/1971-8829/1135963
Hair, J.F., Black, W.C., Babin, B.J., & Anderson, R.E. (2014). Multivariate data analysis (7th ed.). Pearson Education.
Harishbhai-Tilala, M., Kumar-Chenchala, P., Choppadandi, A., Kaur, J., Naguri, S., Saoji, R. et al. (2024). Ethical Considerations in the Use of Artificial Intelligence and Machine Learning in Health Care: A Comprehensive Review. Cureus, 16(6), e62443. https://doi.org/10.7759/cureus.62443
Hinojo-Lucena, F.J., Aznar-Díaz, I., Cáceres-Reche, M.P., & Romero-Rodríguez, J.M. (2019). Artificial Intelligence in Higher Education: A Bibliometric Study on its Impact in the Scientific Literature. Education Sciences, 9(1), 51. https://doi.org/10.3390/educsci9010051
Hu, L.T., & Bentler, P.M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. https://doi.org/10.1080/10705519909540118
Iglesias, A., Hernández, A., Martín, Y., & Herráez, P. (2021). Design, Validation and Implementation of a Questionnaire to Assess Teenagers’ Digital Competence in the Area of Communication in Digital Environments. Sustainability, 13(12), 6733. https://doi.org/10.3390/SU13126733
Lévy-Mangin, J.P., Varela-Mallou, J., & Abad-González, J. (2006). Modelización con estructuras de covarianzas en ciencias sociales: temas esenciales, avanzados y aportaciones especiales. Netbiblo.
Levy-Nadav, L., Shamir-Inbal, T., & Blau, I. (2024). Integrating Generative Artificial Intelligence Tools to Develop Digital Competences in Secondary Schools. In Ferreira-Mello, R., Rummel, N., Jivet, I., Pishtari, G., & Ruipérez-Valiente, J.A. (Eds.), Technology Enhanced Learning for Inclusive and Equitable Quality Education. EC-TEL 2024. Lecture Notes in Computer Science (15160, 113-118). Springer. https://doi.org/10.1007/978-3-031-72312-4_14
Liang, J.C., Hwang, G.J., Chen, M.R.A., & Darmawansah, D. (2021). Roles and research foci of artificial intelligence in language education: an integrated bibliographic analysis and systematic review approach. Interactive Learning Environments, 31(7), 4270-4296. https://doi.org/10.1080/10494820.2021.1958348
Llorente-Cejudo, C., Barragán-Sánchez, R., Palacios-Rodríguez, A., & Fernández-Scagliusi, V. (2025). Teaching with Artificial Intelligence: degree of acceptance of educational AI in the Latin American university context. Educação e Pesquisa, 51, e290821. https://doi.org/10.1590/S1678-4634202551290821en
López, C., Sánchez, M.C., & García-Valcárcel, A. (2021). Desarrollo de la competencia digital en estudiantes de primaria y secundaria en tres dimensiones: fluidez, aprendizaje-conocimiento y ciudadanía digital. RISTI: Revista Ibérica de Sistemas e Tecnologias de Informação, 48, 501-517. https://doi.org/10.17013/risti.44.5-20
López-Regalado, O., Núñez-Rojas, N., López-Gil, O.R., & Sánchez-Rodríguez, J. (2024). El Análisis del uso de la inteligencia artificial en la educación universitaria: Una revisión sistemática. Pixel-Bit, Revista de Medios y Educación, 70, 97-122. https://doi.org/10.12795/pixelbit.106336
Lucas, M., Bem-haja, P., Zhang, Y., Llorente-Cejudo, C., & Palacios-Rodríguez, A. (2025). A comparative analysis of pre-service teachers’ readiness for AI integration. Computers and Education: Artificial Intelligence, 8, 100396. https://doi.org/10.1016/j.caeai.2025.100396
Matosas, L., Gómez, M., & Boumadan, M. (2025). Aplicaciones, beneficios, retos, y áreas de desarrollo en el uso de IA-Chatbots en el ámbito educativo: Una revisión sistemática de la literatura. Digital Education Review, 47, 44-61. https://doi.org/10.1344/der.2025.47.44-61
Miao, F., & Holmes, W. (2024). Guidance for generative AI in education and research. UNESCO.
Ng, D.T.K., Wu, W., Leung, J.K.L., Chiu, T.K.F., & Chu, S.K.W. (2024). Design and validation of the AI literacy questionnaire: The affective, behavioural, cognitive and ethical approach. British Journal of Educational Technology, 55(3), 1082-1104. https://doi.org/10.1111/bjet.13411
Nunnally, J.C., & Bernstein, I.H. (1994). Psychometric theory (3rd ed.). McGraw-Hill.
Owan, V.J., Abang, K.B., Idika, D.O., Etta, E.O., & Bassey, B.A. (2023). Exploring the potential of artificial intelligence tools in educational measurement and assessment. Eurasia Journal of Mathematics, Science and Technology Education, 19(8), em2307. https://doi.org/10.29333/ejmste/13428
Palacios, A., Llorente, M.C., Lucas, M., & Bem-Haja, P. (2025). Macroevaluación de la competencia digital docente. Estudio DigCompEdu en España y Portugal. RIED-Revista Iberoamericana de Educación a Distancia, 28(1), 177-196. https://doi.org/10.5944/ried.28.1.41379
Pande, K., Jadhav, V., & Mali, M. (2023). Artificial Intelligence: Exploring the Attitude of Secondary Students. Journal of E-Learning and Knowledge Society, 19(3), 43-48. https://doi.org/10.20368/1971-8829/1135865
Paz, L.E., & Gisbert, M. (2024). Competencia digital docente y uso de tecnologías digitales en la educación universitaria. Revista Complutense de Educación, 35(4), 809-821. https://doi.org/10.5209/rced.90033
Pinto, A.R., Pérez, A., & Darder, A. (2023). Training in Teaching Digital Competence: Functional Validation of the TEP Model. Innoeduca. International Journal of Technology and Educational Innovation, 9(1), 39-52. https://doi.org/10.24310/innoeduca.2023.v9i1.15191
Pratama, M., Sampelolo, R., & Lura, H. (2023). Revolutionizing education: Harnessing the power of artificial intelligence for personalized learning. Klasical: Journal of Education, Language Teaching and Science, 5(2), 350-357. https://doi.org/10.52208/klasikal.v5i2.877
Rodríguez, A., Betín, A.B., Caurcel, M.J., & Gallardo, C.P. (2024). Estudio de la competencia digital en alumnado de secundaria colombiano. Aula Abierta, 53(2), 119-128. https://doi.org/10.17811/rifie.20312
Romero-Alonso, R., Araya-Carvajal, K., & Reyes-Acevedo, N. (2025). Rol de la Inteligencia Artificial en la personalización de la educación a distancia: Una revisión sistemática. RIED-Revista Iberoamericana de Educación a Distancia, 28(1), 9-36. https://doi.org/10.5944/ried.28.1.41538
Romero-Tena, R., Barragán, R., Gutiérrez, J.J., & Palacios, A. (2024). Análisis de la competencia digital docente en Educación Infantil. Perfil e identificación de factores que influyen. Bordón. Revista de Pedagogía, 76(2), 45-63. https://doi.org/10.13042/Bordon.2024.100427
Rubio, M., Colomo, E., & Palacios, A. (2025). Inteligencia artificial y Educación Secundaria: un análisis de la producción científica. In Montenegro, M., Fernández, J., Miravete, M., & Fernández, V. (Eds.), Docencia en la era digital. Experiencias, retos e Innovación (282-294). Dykinson. https://doi.org/10.14679/4027
Şahín, C. (2024). Artificial intelligence technologies and ethics in educational processes: solution suggestions and results. Innoeduca. International Journal of Technology and Educational Innovation, 10(2), 201‑216. https://doi.org/10.24310/ijtei.102.2024.19806
Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57, 620-631. https://doi.org/10.1111/ejed.12532
Suh, W., & Ahn, S. (2022). Development and Validation of a Scale Measuring Student Attitudes Toward Artificial Intelligence. SAGE Open, 12(2). https://doi.org/10.1177/21582440221100463
Tomczyk, L., Mascia, M.L., Gierszewski, D., & Walker, C. (2023). Barriers to digital inclusion among older people: an intergenerational reflection on the need to develop digital competences for the group with the highest level of digital exclusion. Innoeduca. International Journal of Technology and Educational Innovation, 9(1), 5-26. https://doi.org/10.24310/innoeduca.2023.v9i1.16433
Trisnawati, W., Putra, R.E., & Balti, L. (2023). The Impact of Artificial Intelligent in Education toward 21st Century Skills: A Literature Review. PPSDP International Journal of Education, 2(2), 501-513. https://doi.org/10.59175/pijed.v2i2.152
Trujillo-Torres, J.M. (2024). Inteligencia Artificial y la promesa de una Educación Inclusiva. Revista Internacional de Investigación en Ciencias Sociales, 20(1), 1-4. https://doi.org/10.18004/riics.2024.junio.1
Turpo, O., Zea, M., Huamaní, F., Girón, M., Pérez, A., & Aguaded, I. (2023). Media and information literacy in secondary students: Diagnosis and assessment. Journal of Technology and Science Education, 13(2), 514-531. https://doi.org/10.3926/jotse.1746
Villén, R., Ágreda, M., & Rodríguez, J. (2024). Perfil Competencial del Profesorado Andaluz en Seguridad Digital: Evaluación de la Protección de Datos y Privacidad de acuerdo con el Marco Común de Competencia Digital Ciudadana (DigComp). Pixel-Bit. Revista de Medios y Educación, 70, 123-142. https://doi.org/10.12795/pixelbit.104153
Vuorikari, R., Kluzer, S., & Punie, Y. (2022). DigComp 2.2: The Digital Competence Framework for Citizens - With new examples of knowledge, skills and attitudes. Publications Office of the European Union. https://doi.org/10.2760/115376
Wang, H. (2025). Experience of AI-Based Digital Intervention in Professional Education in Rural China: Digital Competencies and Academic Self-Efficacy. European Journal of Education. Research, Development and Policy, 60(1), e70031. https://doi.org/10.1111/ejed.70031
Wu, D., & Zhang, J. (2025). Generative artificial intelligence in secondary education: Applications and effects on students’ innovation skills and digital literacy. PLOS One, 20(5), e0323349. https://doi.org/10.1371/journal.pone.0323349
Yan, Y., & Liu, H. (2024). Ethical framework for AI education based on large language models. Education and Information Technologies, 29(1), 1-23. https://doi.org/10.1007/s10639-024-13241-6
Yi, J. (2024). Design and development of personalized education information management system based on artificial intelligence. Applied Mathematics and Nonlinear Sciences, 9(1), 1-20. https://doi.org/10.2478/amns.2023.2.00633
Zawacki-Richter, O., Marín, V.I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16, 39. https://doi.org/10.1186/s41239-019-0171-0
Zeb, A., Ullah, R., & Karim, R. (2024). Exploring the role of ChatGPT in higher education: Opportunities, challenges and ethical considerations. International Journal of Information and Learning Technology, 41(1), 99-111. https://doi.org/10.1108/IJILT-04-2023-0046
Zhang, Y., & Tian, Z. (2025). Digital Competence in Student Learning with Generative Artificial Intelligence: Policy Implications from World-Class Universities. Journal of University Teaching and Learning Practice, 22(2). https://doi.org/10.53761/av7c8830
This work is licensed under a Creative Commons Attribution 4.0 International License