ADOPTION OF AI-BASED PROCTORING PLATFORMS:
A MULTI‑STAKEHOLDER PERSPECTIVE FROM THE EDUCATION SECTOR STAKEHOLDERS
Panjab University (India)
Received August 2025
Accepted February 2026
Abstract
The rapid digitization of education has reshaped teaching, learning, and assessment practices worldwide, leading to the increasing adoption of AI-enabled online proctoring systems to support academic integrity in virtual examinations. This study examines the behavioural intentions of key stakeholders (students, parents, and professors) towards the adoption of online proctoring technologies. Using a quantitative research design, data were collected from 1,392 respondents, comprising 464 participants from each stakeholder group. Partial Least Squares Structural Equation Modelling (PLS-SEM) was employed to test the proposed model and assess stakeholder-specific perceptions. The findings reveal that perceived ease of use is a consistent determinant of adoption across all groups. Trust and system reliability significantly influence acceptance among students and parents, whereas concerns related to privacy and fairness are salient only for students. In contrast, professors do not perceive AI-based proctoring systems as pedagogically beneficial, indicating resistance rooted in instructional value rather than technical considerations. The study underscores the need for transparent data practices, continuous monitoring of algorithmic fairness, and active incorporation of stakeholder feedback to ensure ethical, trustworthy, and context-appropriate deployment of AI-driven proctoring systems in education.
Keywords – AI-based proctoring systems, Educational institute, Behavioural intentions, Stakeholders’ perceptions, Online exams.
To cite this article:
Malhotra, M., & Chhabra, I. (2026). Adoption of AI-based proctoring platforms: A multi-stakeholder perspective from the education sector stakeholders. Journal of Technology and Science Education, 16(1), 353–373. https://doi.org/10.3926/jotse.3781
1. Introduction
The rapid digitization of education has profoundly transformed the global landscape of teaching, learning, and assessment (Bucăţa & Tilega, 2024). Technological innovations, such as Artificial Intelligence (AI), are reshaping education by enabling online assessment, personalizing learning experiences, and equipping students with future-ready skills. These advancements have introduced new opportunities and challenges for educational institutions. Following the outbreak of the COVID-19 pandemic, one of the most prominent applications of AI in this realm has been the development and implementation of AI-based proctoring platforms, which aim to preserve academic integrity in online assessments through algorithmic surveillance and behavioural monitoring (Nigam et al., 2021).
AI-based proctoring offers practical advantages, including enabling students to take exams from nearby exam centres, thereby reducing the need for extensive travel. AI-based proctoring systems incorporate features such as facial recognition, keystroke dynamics, screen activity monitoring, and webcam-based observation to detect anomalies and flag potentially dishonest behaviour during examinations (Al-Munawar et al., 2025).
The unique features of AI-based proctoring systems have led to their widespread adoption, especially during and after the COVID-19 pandemic, which necessitated remote learning and virtual evaluations at an unprecedented scale (Kulshrestha et al., 2023). Educational institutions across the world, facing the urgent need to maintain academic standards outside traditional classrooms, turned to AI solutions as a scalable and seemingly objective alternative to in-person invigilation. While these platforms promise increased efficiency, impartiality, and scalability, their implementation has not been without controversy. Despite their advantages, AI systems also carry significant drawbacks, including privacy violations, false flagging, surveillance overreach, algorithmic bias, and psychological strain on students (Cyphert, 2019; George, 2024). Questions around consent, data protection, fairness in algorithmic decision-making, and accessibility for digitally underserved populations remain at the forefront of public and academic discourse. As the educational sector embraces innovative and advanced technology, it is essential to understand what influences stakeholders to use or reject AI-based tools. Globally, the deployment of AI-based proctoring has triggered intense regulatory and ethical debates. In Europe, concerns around data minimization, consent, and biometric data processing have brought such systems under scrutiny within the framework of the General Data Protection Regulation (GDPR). Similarly, in the United States, the Family Educational Rights and Privacy Act (FERPA) has raised questions regarding student data ownership, surveillance limits, and institutional accountability. Across regions, critics have highlighted risks related to privacy infringement, algorithmic bias, unequal access for digitally marginalized students, and the psychological effects of continuous monitoring (Cyphert, 2019; George, 2024).
These concerns emphasize that AI-based proctoring is not merely a technological solution, but a socio-technical system embedded within broader legal, cultural, and ethical contexts.
Despite the growing body of literature on AI adoption in education, existing research has predominantly examined AI-based proctoring from a single-stakeholder perspective, most often focusing on students' perceptions or institutional efficiency outcomes. Limited access to diverse groups across institutional hierarchies, along with organizational sensitivities around surveillance, privacy, and governance, limits participation and openness, discouraging comprehensive cross-stakeholder investigation. Further, comparative multi-stakeholder investigations remain relatively scarce. This represents a critical research gap, as AI-based proctoring directly affects multiple stakeholder groups whose roles, expectations, and concerns differ substantially. A crucial aspect of this understanding lies in examining the behavioural intentions of the core stakeholders who directly engage with these systems, primarily students, professors, and parents. Each group interacts with AI-based proctoring tools through a distinct lens: students experience the system firsthand during examinations, professors often serve as facilitators and evaluators, and parents act as observers concerned with academic outcomes and discipline. These divergent roles contribute to varying expectations, concerns, and acceptance levels, underscoring the necessity for stakeholder-specific analyses in educational technology research. Addressing this gap, the present study adopts a multi-stakeholder analytical framework to examine behavioural intentions toward AI-based proctoring among students, professors, and parents. By comparatively analysing these groups, the study seeks to identify both shared and stakeholder-specific determinants of acceptance and resistance, thereby offering a more holistic understanding of AI adoption in education.
In particular, the research moves beyond generalized acceptance models by explicitly examining how perceptions of privacy, fairness, usefulness, and trust differ across stakeholder categories, enabling a more precise and meaningful comparative analysis.
The structure of this paper is organized as follows: the first section presents the introduction, followed by a comprehensive review of the relevant literature. The third section primarily outlines the research methodology and highlights the key findings of the study. Lastly, the paper concludes by explaining the main insights derived from the results, which provide implications and directions for future research and practice.
1.1. Research Questions
The present study explores the behavioural intentions of the key stakeholders, including parents, professors, and students, who directly engage with AI-based proctoring systems. The study adopts a comprehensive survey-based approach, which aims to address the following research questions.
RQ1: What factors influence the behavioural intention of college professors to adopt online proctoring systems that rely on AI-based technologies?
RQ2: What factors influence the behavioural intention of students to adopt online proctoring systems that rely on AI-based technologies?
RQ3: What factors shape parents' behavioural intention to endorse online proctoring systems that rely largely on AI-based technologies for their children's online examinations?
RQ4: How do the behavioural intentions of the various stakeholders to adopt AI-based proctoring compare?
1.2. Research Contributions
The present study makes several significant research contributions to the field of educational technology and AI adoption. First, the study offers a comprehensive stakeholder-specific analysis by examining the behavioural intentions of students, parents, and professors toward AI-based proctoring systems. This multi-perspective approach addresses a gap in existing literature, which often focuses on a single user group. Second, the study extends traditional technology adoption models by incorporating constructs such as trust, privacy, fairness, and pedagogical relevance, and validating the framework using PLS-SEM. These findings collectively provide both theoretical advancements and actionable implications for policymakers and educators.
2. Literature Review
The rapid shift to online learning and assessments during the COVID-19 pandemic has accelerated the adoption of online proctoring systems that rely largely on AI-based technologies in educational institutions (Tweissi et al., 2022). These AI-based platforms were developed specifically to uphold academic integrity during online assessments: they monitor test-takers using advanced technologies such as facial recognition, eye tracking, and behavioural analysis (Alessio et al., 2017; Bhardwaj et al., 2022). Automating the invigilation process aims to reduce human intervention, improve scalability, and prevent cheating (Hollister & Berenson, 2009). However, despite these technological advancements, the adoption and acceptance of AI-based proctoring systems vary among stakeholders (students, professors, and parents) owing to their differing perceptions. Therefore, the present study explores multiple stakeholders' perspectives on the behavioural intention to adopt AI-based proctoring systems in the education sector.
2.1. Professor’s Behavioural Intention to Adopt AI-Based Proctoring
The adoption of AI-based proctoring systems by educators is a complex decision influenced by technological, pedagogical, and institutional factors. Professors in higher education institutions often face challenges in determining whether online proctoring systems that rely on AI-based technologies are suitable for their needs, as the decision is shaped by various technical, educational, moral, and organizational considerations (Shioji et al., 2025). Online assessment has become increasingly common, particularly during and after the COVID-19 pandemic. The present study examines the reasons behind professors' intent to use AI-based proctoring tools. It follows the Technology Acceptance Model (TAM) (Davis, 1989), which views Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) as two key determinants of people's intention to use technology. In the context of AI proctoring, professors who believe that the system protects the integrity of online exams and helps reduce dishonest behaviour perceive it as useful (Venkatesh & Davis, 2000). If professors believe the system effectively prevents unfairness and minimizes misconduct, they tend to use it (Almaiah et al., 2020; Dwivedi et al., 2021). Moreover, professors who find the system user-friendly are usually more inclined to adopt AI technologies (Teo, 2011). Complex interfaces, a shortage of support services, or restricted training opportunities can, however, deter some educators. Concerns about ethics, privacy, and bias in AI also persist. Research by Floridi et al. (2018) and Binns et al. (2018) showed that educators are uncomfortable with facial recognition or behavioural analytics when the system cannot explain what makes behaviour suspicious. Professors fear that such systems compromise student dignity or disproportionately affect certain groups, leading to false positives.
Trust in the system is a key determinant of behavioural intention. According to Chatterjee et al. (2021), transparency, explainability, and accountability mechanisms are crucial for building trust among educators. When professors understand how the algorithm works and are involved in oversight or decision-making processes, they report higher levels of acceptance. Overall, the literature above highlights several parameters that shape the behavioural intention to adopt an AI-based proctoring system. Based on this literature, the hypotheses are presented in Table 1.
| Hypothesis Development: Professor's Behavioural Intention |
| H1: Perceived ease of use has a significant impact on behavioural intention to adopt online proctoring systems that rely on AI-based technologies. |
| H2: Perceived usefulness has a significant impact on behavioural intention to adopt online proctoring systems that rely on AI-based technologies. |
| H3: Privacy has a significant impact on behavioural intention to adopt an online proctoring system that relies on AI-based technologies. |
| H4: Trust has a significant impact on behavioural intention to adopt an online proctoring system that relies on AI-based technologies. |
| H5: Teaching and learning have a significant impact on behavioural intention to adopt an online proctoring system that relies on AI-based technologies. |
Table 1. Hypothesis Development based on Professor’s Behavioural Intention
2.2. Students’ Behavioural Intention to Adopt AI-Based Proctoring Platforms
The increasing adoption of AI-based proctoring platforms in online assessment has drawn significant attention to students' behavioural intention to adopt such systems.
Perceived ease of use is one of the major elements and plays a crucial role in technology adoption. If students find proctoring platforms user-friendly and easy to navigate, they are more likely to use them without reservation (Davis, 1989).
Fairness is another crucial factor that impacts student perception. Students are more likely to accept an assessment technology if they believe it ensures equitable treatment, unbiased monitoring, and consistent rule enforcement across diverse users (Kharbat & Abu-Daabes, 2021). Similarly, privacy concerns are an essential factor in AI-based proctoring, where constant surveillance, data recording, and facial recognition may be viewed as intrusive or excessive (Fawns & Schaepkens, 2023; Eaton & Turner, 2020). Students who perceive their personal data and digital rights as being respected are more likely to accept such technologies. Lastly, the trust and reliability of AI systems are essential for building student confidence in their operation. Students need assurance that the platform is technically sound and operates accurately (Buolamwini & Gebru, 2018; Bhardwaj et al., 2022). When the platform is perceived as trustworthy and dependable, students are more inclined to accept its use. Hence, these four dimensions are hypothesized in Table 2 to significantly impact students' behavioural intention to adopt AI-based proctoring platforms.
| Hypothesis Development: Students' Behavioural Intention |
| H1: Perceived ease of use has a significant impact on behavioural intention to adopt online proctoring systems that rely on AI-based technologies. |
| H2: Fairness has a significant impact on behavioural intention to adopt online proctoring systems that rely on AI-based technologies. |
| H3: Privacy has a significant impact on behavioural intention to adopt online proctoring systems that rely on AI-based technologies. |
| H4: Trust has a significant impact on behavioural intention to adopt online proctoring systems that rely on AI-based technologies. |
Table 2. Hypothesis Development based on Students' Behavioural Intention
2.3. Parents’ Behavioural Intention to Support AI-Based Proctoring for Children
The post-pandemic shift towards remote and hybrid learning has prompted educational institutions to adopt online proctoring systems that depend primarily on AI-based technologies. However, parental acceptance of remote assessment has been an underexplored yet increasingly important area, especially in online education. Parents' behavioural intention is shaped by their perception of system effectiveness, child safety, and ethical implications (Chin et al., 2022). Concerns about children's emotional well-being, privacy, and data security are prevalent, especially when children are exposed to facial recognition technologies (Van-Wynsberghe, 2013). Trust in the educational institution, communication transparency, and control over consent and settings significantly affect parental support (Morley et al., 2021). Parents who are better informed about the system's operation and benefits are more likely to endorse its use in online examinations. Furthermore, in the context of AI-based proctoring, effectiveness is closely linked to the platform's ability to monitor and prevent cheating accurately. Parents are likely to support technologies they believe enhance academic integrity. Moreover, monitoring capabilities reassure stakeholders of the platform's utility (Karim & Sugianto, 2023). Thus, the more effective and accurate the monitoring, the higher the intention to adopt. Based on this literature, the hypotheses are presented in Table 3.
| Hypothesis Development: Parents' Behavioural Intention |
| H1: Effectiveness and monitoring have a significant impact on behavioural intention to adopt an online proctoring system that relies on AI-based technologies. |
| H2: Fairness and bias have a significant impact on behavioural intention to adopt an online proctoring system that relies on AI-based technologies. |
| H3: Privacy has a significant impact on behavioural intention to adopt an online proctoring system that relies on AI-based technologies. |
| H4: Trust has a significant impact on behavioural intention to adopt an online proctoring system that relies on AI-based technologies. |
Table 3. Hypothesis Development based on Parents’ Behavioural Intention
3. Research Methodology
3.1. Respondents’ Characteristics
In the present study, the sample size was determined based on statistical power considerations and model complexity, which aligns with best practices for PLS-SEM. Unlike covariance-based approaches, PLS-SEM does not require prior specification of the population size, as it focuses on maximising explained variance and testing predictive relationships rather than estimating population parameters (Hair et al., 2017).
Accordingly, the minimum required sample size was calculated using G*Power and the 10-times rule suggested by Hair et al. (2017), which recommends that the sample size should be at least ten times the maximum number of structural paths directed at any latent construct in the model. Based on these criteria, a minimum of 464 respondents per stakeholder group (students, professors, and parents) was required to achieve adequate statistical power.
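To make the 10-times rule concrete, it can be expressed as a short calculation. The figures below are illustrative only: the rule yields only a lower bound based on the number of structural paths, and the stricter 464-per-group requirement reported here follows from the G*Power analysis rather than from this rule alone.

```python
def min_sample_10x(max_paths_to_any_construct: int) -> int:
    """Minimum sample per the 10-times rule (Hair et al., 2017):
    ten times the largest number of structural paths directed at
    any single latent construct in the model."""
    return 10 * max_paths_to_any_construct

# Illustrative: in the professors' model, five predictors point at
# Behavioural Intention, so the rule's floor would be 10 * 5 = 50.
floor = min_sample_10x(5)
print(floor)  # 50

# The study's 464 respondents per group comfortably exceed this floor.
print(464 >= floor)  # True
```

Because the 10-times rule is only a rough heuristic, power-analysis software such as G*Power, which accounts for effect size, alpha, and desired power, typically produces the binding requirement.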
Therefore, a total of 1,392 respondents were included in the final analysis. This sample size exceeds the recommended threshold for robust PLS-SEM estimation, ensuring reliable parameter estimates and sufficient power to detect meaningful effects among the proposed constructs. Given that the primary objective of the study is to examine the determinants of behavioural intention toward AI-based online proctoring systems, rather than to estimate population prevalence, the adopted sampling approach is methodologically appropriate and consistent with prior PLS-SEM research.
3.2. Data Collection and Time Frame
For the present study, the researchers recruited respondents via simple random sampling from the constituent colleges of Panjab University, Chandigarh, India. Data collection was conducted from December 2024 to April 2025. The process began with an introductory email that informed parents, professors, and students about the objective of the study, emphasized the voluntary nature of participation, and assured the confidentiality of responses.
A distinctive aspect of this study is the inclusion of a large parent sample comprising 464 participants. The strong parental participation indicates a high level of awareness of and familiarity with online examinations. These findings suggest that, in the post-COVID-19 context, parents have become increasingly familiar with digital assessment systems and AI-enabled evaluation processes.
3.3. Instrument
In the present study, the authors developed three distinct questionnaires to examine the factors that impact behavioural intention to adopt online proctoring systems that rely on AI-based technologies among three stakeholder groups: students, professors, and parents.
To study professors' intention to adopt AI-based proctoring, the instrument employs five independent variables (perceived ease of use, perceived usefulness, trust and reliability, privacy, and impact on teaching and learning) and one dependent variable (behavioural intention). Similarly, the instrument designed to examine students' intention to adopt an AI-based proctoring system includes four independent variables (ease of use, perceived fairness, privacy concerns, and trust), with behavioural intention as the dependent variable. Lastly, the instrument used to study parents' intention to adopt the AI-based proctoring system also includes four independent variables (effectiveness and monitoring, fairness and bias, privacy, and trust) and one dependent variable (behavioural intention). All instruments were presented in English with minor contextual adaptations to address the focus on behavioural intention to adopt AI-based proctoring systems while maintaining the integrity of the core constructs. The full questionnaires are provided in Annex 1.
3.4. Assessment of Scale Reliability and Validity Among Various Stakeholders (Professors, Students, and Parents)
Tables 4, 5 and 6 present the scale reliability, composite reliability, and AVE metrics for the three stakeholder groups (parents, students, and professors). Scale reliability measures the internal consistency of items in a construct, composite reliability measures the overall reliability of a latent construct, and Average Variance Extracted (AVE) indicates the amount of variance captured by a construct. In the Confirmatory Factor Analysis (CFA) conducted during the pilot test, all item loadings exceeded the recommended threshold of 0.70. As recommended by Hair et al. (2017), reliability and validity were assessed through Cronbach's alpha and composite reliability, with values generally exceeding 0.70. Furthermore, the AVE for each construct was above 0.50, indicating adequate convergent validity.
| Variables/Factors | Cronbach's Alpha | CR | AVE |
| Behavioural Intention | 0.695 | 0.835 | 0.633 |
| Perceived Ease of Use | 0.769 | 0.868 | 0.688 |
| Perceived Usefulness | 0.789 | 0.878 | 0.706 |
| Privacy | 0.938 | 0.97 | 0.941 |
| Teaching and Learning | 0.787 | 0.862 | 0.809 |
| Trust | 0.761 | 0.862 | 0.676 |
Table 4. Scale Reliability Analysis of Professor’s Respondents
| Variables | Cronbach's Alpha | CR | AVE |
| Behavioural Intention | 0.765 | 0.851 | 0.592 |
| Perceived Ease of Use | 0.969 | 0.98 | 0.942 |
| Fairness | 0.789 | 0.88 | 0.713 |
| Privacy | 0.769 | 0.844 | 0.643 |
| Trust | 0.744 | 0.855 | 0.644 |
Table 5. Scale Reliability Analysis of Students’ Respondents
| Variables | Cronbach's Alpha | CR | AVE |
| Behavioural Intention | 0.733 | 0.762 | 0.517 |
| Effectiveness and Monitoring | 0.722 | 0.851 | 0.663 |
| Fairness and Bias | 0.739 | 0.805 | 0.579 |
| Privacy | 0.959 | 0.973 | 0.924 |
| Trust | 0.668 | 0.816 | 0.597 |
Table 6. Scale Reliability Analysis of Parents Respondents
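For readers reproducing metrics like those in Tables 4-6, composite reliability (CR) and AVE can be computed directly from standardized indicator loadings using the standard Fornell-Larcker formulas. The loadings below are hypothetical examples, not the study's actual estimates.

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), where each error variance is 1 - loading^2 for
    standardized loadings."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for a three-item construct.
loadings = [0.78, 0.81, 0.84]
print(round(composite_reliability(loadings), 3))       # 0.851
print(round(average_variance_extracted(loadings), 3))  # 0.657
```

Both values clear the conventional cut-offs (CR > 0.70, AVE > 0.50) used in the text, which is the pattern the reported tables largely follow.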
3.5. Pilot Study and Instrument Validation
A pilot study was conducted before the final data collection to assess the reliability, clarity, and feasibility of the questionnaire. The pilot survey involved 32 respondents, representing both students and faculty members. Participants were asked to complete the questionnaire and provide feedback on item clarity, readability, and overall comprehension.
The reliability of the instrument was evaluated using Cronbach’s alpha, which demonstrated satisfactory internal consistency across all constructs, with alpha values exceeding the recommended threshold of 0.70. Item-wise analysis indicated that all items contributed positively to the reliability of their respective constructs, and no items were removed at this stage.
Based on the pilot study results, minor refinements were made to improve wording clarity and reduce potential ambiguity. The successful completion of the pilot study confirmed that the instrument was reliable, valid, and suitable for large-scale data collection.
3.6. Data Analysis Software
This study adopts a quantitative cross-sectional survey design. A sample of 464 respondents from each stakeholder group was included in the main analysis, ensuring adequate data for robust statistical testing. Partial Least Squares Structural Equation Modelling (PLS-SEM) was conducted using Smart-PLS 4.0 to test the hypotheses.
PLS-SEM was selected over Covariance-Based SEM (CB-SEM) for several methodological and practical reasons. First, the primary objective of this study is prediction and theory extension rather than strict theory confirmation, which aligns with the strengths of PLS-SEM. Second, PLS-SEM is well suited for models that include multiple latent constructs and complex structural relationships, as is the case in the present research framework. Third, PLS-SEM places fewer restrictions on data distribution assumptions and is robust to deviations from multivariate normality, making it appropriate for survey-based data collected from diverse respondent groups.
Additionally, PLS-SEM performs effectively with moderate to large sample sizes and is capable of handling reflective measurement models without requiring model fit indices that are often restrictive in CB-SEM. Given these advantages and its widespread application in behavioural and technology adoption research, PLS-SEM was considered the most appropriate analytical technique for the present study.
The Variance Inflation Factor (VIF) values were assessed to examine potential multicollinearity among indicators for the parents, teachers, and students groups. For the parents' sample, all VIF values range between 1.065 and 3.928, indicating low to moderate collinearity and confirming that the indicators are sufficiently independent. In the teachers' group, VIF values vary from 1.083 to 3.928, remaining within the threshold limit. Similarly, for the students' group, VIF values range from 1.118 to 3.949, with slightly higher values observed for the fairness and the trust and reliability indicators; however, these values remain within acceptable limits. Overall, the VIF results across all three stakeholder groups confirm the absence of problematic multicollinearity, thereby supporting the adequacy of the measurement model and justifying the retention of all indicators for subsequent structural model analysis. The VIF values are presented in Table 7.
| Parents | VIF | Teachers | VIF | Students | VIF |
| BINT1 | 1.167 | B.INT1 | 1.083 | B.INT1 | 1.316 |
| BINT2 | 1.099 | B.INT2 | 2.843 | B.INT2 | 2.79 |
| BINT3 | 1.162 | B.INT3 | 2.76 | B.INT3 | 2.596 |
| E&M1 | 2.828 | PEE1 | 1.134 | B.INT4 | 1.303 |
| E&M2 | 1.944 | PEE2 | 2.797 | Ease of use 1 | 1.498 |
| E&M3 | 1.065 | PEE3 | 3.873 | Ease of use 2 | 2.609 |
| F&B1 | 1.227 | PEU1 | 1.223 | Ease of use 3 | 2.114 |
| F&B2 | 1.26 | PEU2 | 3.504 | Fairness 1 | 1.223 |
| F&B3 | 1.258 | PEU3 | 3.519 | Fairness 2 | 3.504 |
| Privacy 1 | 2.766 | PRIVACY1 | 2.531 | Fairness 3 | 3.519 |
| Privacy 2 | 3.928 | PRIVACY2 | 1.531 | PRIVACY1 | 1.134 |
| Privacy 3 | 1.383 | T&L1 | 2.054 | PRIVACY2 | 1.797 |
| T&L1 | 1.258 | T&L2 | 3.928 | PRIVACY3 | 1.873 |
| T&L2 | 1.33 | T&L3 | 1.118 | T&R1 | 2.054 |
| T&L3 | 1.325 | Trust1 | 1.203 | T&R2 | 3.949 |
| | | Trust2 | 2.707 | T&R3 | 1.118 |
| | | Trust3 | 2.594 | | |
Table 7. Common Method Variance: VIF Values
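Indicator-level VIFs like those in Table 7 can be reproduced, in principle, by regressing each indicator on all remaining indicators and computing 1/(1 - R²). The following minimal numpy sketch uses synthetic data, not the study's responses; function and variable names are illustrative.

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """VIF for each column of X: 1 / (1 - R^2), where R^2 comes from
    an OLS regression (with intercept) of that column on the others."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(n), others])   # add intercept
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out[j] = 1 / (1 - r2)
    return out

rng = np.random.default_rng(7)
# Hypothetical indicator scores: two moderately correlated items
# sharing a common factor, plus one independent item.
base = rng.normal(size=300)
X = np.column_stack([
    base + rng.normal(size=300),
    base + rng.normal(size=300),
    rng.normal(size=300),
])
print(vif(X))  # each value well below the common cut-off of 5
```

Values near 1 indicate independent indicators; values approaching the conventional cut-off of 5 (or the stricter 3.3 sometimes used for common method bias checks) would signal problematic collinearity.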
4. Results
The demographic profile in this study is presented purely for descriptive purposes, with the intention of outlining the composition and characteristics of the sample. The primary objective of the study is to examine the determinants of behavioural intention toward AI-based proctoring systems across stakeholder groups, rather than to investigate differences in perceptions or adoption behaviour based on demographic attributes such as gender or age.
4.1. Demographic Profile for Students
Table 8 illustrates the demographic profile of the student respondents; the study has a well-defined sample distribution. Males formed the majority, with 267 male and 197 female respondents. In terms of age distribution, the largest group was 18-23 years old (274 respondents), while 152 respondents were aged 23-30 and 38 respondents were aged 30 and above. Regarding educational background, the majority of participants (312) were enrolled in undergraduate programmes, and 152 respondents were enrolled in postgraduate programmes. This demographic spread provides a diverse and balanced representation of the target population, supporting the generalizability and relevance of the study findings.
| Demography | Frequency |
| Male | 267 |
| Female | 197 |
| Age | |
| 18-23 | 274 |
| 23-30 | 152 |
| 30 and above | 38 |
| Course | |
| Graduation | 312 |
| Post-Graduation | 152 |
Table 8. Demographic Profile of Student Respondents
4.2. Demographic and Descriptive Metrics for Professors
Table 9 illustrates the demographic profile of the professor respondents; the study has a well-defined sample distribution. Males formed the majority, with 315 male and 149 female respondents. In terms of age distribution, the largest group was 35-45 years old (294 respondents), while 14 respondents were aged 45 and above. Regarding teaching experience, the majority of participants (330) had between 5 and 15 years of experience. This demographic spread provides a diverse and balanced representation of the target population, supporting the generalizability and relevance of the study findings.
| Demography | Frequency |
| Male | 315 |
| Female | 149 |
| Age | |
| 23-35 (Assistant Professors) | 156 |
| 35-45 (Assistant Professors and Associate Professors) | 294 |
| 45 and above (Professors and Associate Professors) | 14 |
| Years of Teaching Experience | |
| 5-15 years | 330 |
| 15 years and above | 134 |
Table 9. Demographic Profile of Professor’s Respondents
4.3. Demographic and Descriptive Metrics for Parents
Table 10 illustrates the demographic profile of the parent respondents. Males formed the majority, with 264 male and 200 female respondents. In terms of age distribution, the largest group is 35-45 years old (250 respondents), indicating that a significant proportion of respondents belong to the millennial cohort. Millennials have substantial exposure to digital technologies, including AI-enabled systems and online examination platforms. Furthermore, 108 respondents belong to the 45-and-above age group. The demographic spread provides a diverse and balanced representation of the target population, supporting the relevance and generalizability of the study findings.
Demography | Frequency
Male | 264
Female | 200
Age Group |
23-35 | 106
35-45 (Millennials) | 250
45 and above | 108
Table 10. Demographic Profile of Parent Respondents
4.4. Path Analysis Results for Parents
Table 11 illustrates the path analysis results, providing strong evidence of positive and statistically significant relationships between most of the independent variables (Trust and Reliability, Privacy, Fairness and Bias, and Effectiveness and Monitoring) and Behavioural Intention in the proposed conceptual model. From the parents' viewpoint on the behavioural intention to adopt an AI-based online proctoring system, the results reveal that Trust and Reliability and Effectiveness and Monitoring demonstrate strong, positive associations with Behavioural Intention, with p-values of 0.001, well below the p < 0.05 threshold.
Additionally, Fairness and Bias exhibit a weaker yet statistically significant effect on Behavioural Intention, with a t-statistic of 2.158 and p-value of 0.031, reinforcing its role as a contributing factor. On the other hand, the path from Privacy to Behavioural Intention shows a positive but statistically insignificant relationship (t = 1.758, p = 0.079), indicating that while Privacy may play a role, its influence is not strong enough to be statistically validated within the model.
Overall, the results demonstrate that the paths Trust → Behavioural Intention, Effectiveness and Monitoring → Behavioural Intention, and Fairness and Bias → Behavioural Intention are statistically significant, providing compelling evidence that these relationships are reliable and meaningful, whereas the Privacy → Behavioural Intention path needs further exploration due to its borderline significance. The corresponding path diagram illustrating this relationship is presented in Figure 1.
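The t-statistics reported in Tables 11-13 come from bootstrap resampling, the standard significance procedure in PLS-SEM: the model is re-estimated on many resampled datasets, and the path coefficient is divided by the standard deviation of the bootstrap estimates. A minimal, standard-library-only sketch of that idea on synthetic data (the variable names and the true coefficient of 0.5 are illustrative assumptions, not the study's data):

```python
import random
import statistics

def slope(xs, ys):
    """OLS slope of ys on xs (a simple stand-in for a PLS path coefficient)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(7)
n = 464                                    # sample size per stakeholder group
trust = [random.gauss(0, 1) for _ in range(n)]
b_int = [0.5 * t + random.gauss(0, 1) for t in trust]  # assumed true path = 0.5

beta = slope(trust, b_int)                 # estimated path coefficient

# Bootstrap: re-estimate the path on resampled (with replacement) data;
# the t-statistic is the estimate divided by the bootstrap standard deviation.
boot = []
for _ in range(500):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(slope([trust[i] for i in idx], [b_int[i] for i in idx]))

t_stat = beta / statistics.stdev(boot)
```

With a genuine effect of this size and n = 464, the bootstrap t-statistic comfortably exceeds the ~1.96 significance cutoff, mirroring the accepted paths in Table 11.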
Hypothesis Statement | Beta | T statistics | P values | Hypothesis Result
Effectiveness and Monitoring (E&M) has a significant impact on Behavioural Intention | 0.382 | 3.882 | 0.001 | Accepted
Fairness and Bias (F&B) has a significant impact on Behavioural Intention | 0.05 | 2.158 | 0.031 | Accepted
Privacy has a significant impact on Behavioural Intention | 0.163 | 1.758 | 0.079 | Not supported
Trust (T) has a significant impact on Behavioural Intention (B.INT) | 0.532 | 11.036 | 0.001 | Accepted
Table 11. Path Analysis of Parent Respondents
Figure 1. Structural Path Analysis of Parents Respondents
4.5. Path Analysis for Students
Table 12 illustrates the path analysis results for student respondents. The findings support the hypothesized relationships between the independent variables (ease of use, trust and reliability, privacy, and fairness) and the dependent variable (behavioural intention).
The relationship between perceived ease of use and behavioural intention is statistically significant (p = 0.002), indicating that a user-friendly design of the AI-based proctoring platform significantly influences students' intention to adopt it. Fairness emerges as a particularly dominant predictor, with a large path coefficient and a highly significant p-value (p < 0.01), suggesting that perceptions of fairness strongly shape behavioural intentions. The relationship between privacy and behavioural intention is also significant (p = 0.001), emphasizing the importance of data-privacy concerns in shaping students' willingness to engage with the system. Lastly, the path from trust and reliability to behavioural intention is supported (p = 0.001), indicating that users' trust in the system and its consistent performance are key determinants of their behavioural intentions. Overall, all proposed hypotheses are supported, demonstrating that perceived ease of use, fairness, privacy, and trust significantly influence students' intention to use the system. The corresponding path diagram illustrating these relationships is presented in Figure 2.
Hypothesis Statement | Beta | T statistics | P values | Hypothesis Result
Perceived ease of use has a significant impact on Behavioural Intention (B.INT) | 0.054 | 3.13 | 0.002 | Accepted
Fairness has a significant impact on Behavioural Intention | 0.868 | 46.297 | 0.002 | Accepted
Privacy has a significant impact on Behavioural Intention | 0.280 | 8.576 | 0.001 | Accepted
Trust and Reliability (T&R) has a significant impact on Behavioural Intention (B.INT) | 0.125 | 3.693 | 0.001 | Accepted
Table 12. Path Analysis of Student Respondents
Figure 2. Structural Path Analysis of Students Respondents
4.6. Path Analysis for Professors
Table 13 illustrates the path analysis results for the professor respondents. The PLS-SEM analysis indicates that, among the five hypothesized factors influencing Behavioural Intention (B.INT), only Perceived Ease of Use (PEE) shows a statistically significant positive relationship (p = 0.001). This finding indicates that when users perceive a system or platform as easy to navigate and operate, they are more inclined to form an intention to use it. Thus, the analysis supports the hypothesis that ease of use serves as a critical determinant of behavioural intention.
However, the remaining constructs (Perceived Usefulness (PEU), Privacy, Teaching and Learning (T&L), and Trust) do not demonstrate statistically significant relationships with behavioural intention. Overall, the analysis confirms that Perceived Ease of Use plays a critical role in shaping users' behavioural intentions, while other factors such as usefulness, privacy, trust, and perceived educational value do not significantly impact users' intention to engage with the system in this specific context. The corresponding path diagram illustrating these relationships is presented in Figure 3.
Hypothesis Statement | Beta | T statistics | P values | Hypothesis Result
PEE (Perceived Ease of Use) has a significant impact on B.INT (Behavioural Intention) | 0.280 | 3.201 | 0.001 | Accepted
PEU (Perceived Usefulness) has a significant impact on B.INT (Behavioural Intention) | 0.136 | 0.485 | 0.628 | Not supported
Privacy has a significant impact on B.INT (Behavioural Intention) | 0.132 | 1.24 | 0.215 | Not supported
T&L (Teaching and Learning) has a significant impact on B.INT (Behavioural Intention) | 0.036 | 0.934 | 0.351 | Not supported
Trust has a significant impact on B.INT (Behavioural Intention) | 0.360 | 1.479 | 0.139 | Not supported
Table 13. Path Analysis of Professor Respondents
Figure 3. Structural Path Analysis of Professors Respondents
4.7. Comparative Behavioural Intentions
Table 14 reports the results of one-sample t-tests conducted to examine whether behavioural intention to adopt an AI-based proctoring platform differed significantly from a test value of zero among parents, teachers, and students. Behavioural intention is significantly positive across all stakeholder groups. Students show the highest behavioural intention, t(463) = 138.05, p < .001, with a mean difference of 14.42 and a 95% confidence interval of 14.21 to 14.62. Students likely demonstrate higher behavioural intention than the other two stakeholder groups because, as primary users of digital assessment systems, they possess greater technological familiarity. Moreover, students' direct exposure to digital learning environments lowers resistance to AI-enabled monitoring and sharpens their focus on the functional benefits of AI-based proctoring, such as assessment continuity, procedural fairness, and ease of examination completion.
Teachers demonstrated the next-highest level of behavioural intention, t(463) = 162.68, p < .001, with a mean difference of 11.20. Their hands-on experience with digital assessment tools and online examination systems enhances their familiarity with AI-based proctoring and increases their readiness to adopt such platforms. Lastly, parents exhibited the lowest, though still statistically significant, behavioural intention, t(463) = 153.14, p < .001, with a mean difference of 10.80. Overall, while all stakeholder groups showed a strong and statistically significant intention to adopt AI-based proctoring platforms, the magnitude of the mean differences indicates systematic variation across stakeholders: students emerged as the most receptive group, followed by teachers, while parents expressed relatively lower, but still substantial, adoption intention.
One-Sample Test (Test Value = 0)
Group | t | df | One-Sided p | Two-Sided p | Mean Difference | 95% CI Lower | 95% CI Upper
B.INT Parents | 153.140 | 463 | 0.000 | 0.000 | 10.795 | 10.66 | 10.93
B.INT Teachers | 162.680 | 463 | 0.000 | 0.000 | 11.200 | 11.07 | 11.34
B.INT Students | 138.049 | 463 | 0.000 | 0.000 | 14.416 | 14.21 | 14.62
Table 14. Comparative Study of Behavioural Intention among Stakeholders
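The confidence intervals in Table 14 follow directly from the reported statistics: a one-sample t-statistic is the mean difference divided by its standard error, so SE = mean difference / t, and the 95% interval is the mean difference plus or minus the critical value times the SE. A quick check for the parents row, using the normal approximation for the critical value (reasonable at df = 463):

```python
from statistics import NormalDist

mean_diff, t_value = 10.795, 153.140     # parents row of Table 14
se = mean_diff / t_value                 # standard error implied by t = mean/SE
z = NormalDist().inv_cdf(0.975)          # ~1.96; close to the t-critical value at df = 463
lower, upper = mean_diff - z * se, mean_diff + z * se
print(round(lower, 2), round(upper, 2))  # 10.66 10.93, matching Table 14
```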
5. Conclusion and Discussion
The findings of this study can be meaningfully interpreted through the lens of the Technology Acceptance Model (TAM), which posits that Perceived Ease of Use (PEOU) and Perceived Usefulness (PU) are the primary determinants of users' behavioural intention to adopt new technologies. The results provide partial yet strong empirical support for TAM, while also extending the model by incorporating context-specific and ethical constructs relevant to AI-based proctoring systems.
Consistent with TAM, Perceived Ease of Use emerged as a strong and significant predictor of behavioural intention across all stakeholder groups—students, professors, and parents. This reinforces TAM’s core assertion that systems perceived as easy to use reduce cognitive and operational effort, thereby enhancing acceptance irrespective of user role. In the context of AI-based proctoring, intuitive interfaces and seamless system interaction appear to be critical for adoption, highlighting the continued relevance of PEOU in complex, AI-driven educational technologies.
In contrast, Perceived Usefulness, which TAM identifies as a central determinant of behavioural intention, did not significantly influence adoption among professors. This deviation from traditional TAM expectations suggests that in surveillance-oriented educational technologies, usefulness may be interpreted differently by educators. Professors appear to perceive AI-based proctoring primarily as an administrative or compliance tool rather than as a technology that enhances teaching effectiveness or learning outcomes. This finding indicates a contextual limitation of PU within TAM when applied to assessment-focused AI systems, thereby supporting calls for model adaptation in emerging technology domains.
The role of Trust and Reliability, although not part of the original TAM, aligns with later extensions of the model that incorporate trust as an external variable influencing behavioural intention. The significant effect of trust for students and parents suggests that in high-stakes environments involving continuous monitoring and automated decision-making, trust functions as a prerequisite for acceptance. Students and parents, who are more directly exposed to surveillance and outcome consequences, rely heavily on system reliability to form adoption intentions. The absence of a significant trust effect among professors further highlights the role-dependent nature of TAM constructs in AI-enabled contexts.
Similarly, Privacy concerns, which can be conceptualized as a perceived risk factor moderating TAM relationships, were significant only among students. This suggests that perceived privacy risk can weaken the influence of core TAM constructs, particularly for users who experience direct data capture and algorithmic scrutiny. Professors and parents, who are less exposed to continuous data collection, may underestimate privacy risks, thereby diminishing its perceived relevance within their adoption decision‑making process.
The significance of Fairness and Bias among students extends TAM by introducing ethical perceptions as critical antecedents of behavioural intention. Students’ sensitivity to algorithmic fairness reflects growing concerns around transparency, explainability, and non-discrimination in AI systems. These ethical dimensions, although external to classical TAM, increasingly shape technology acceptance in AI-mediated environments, suggesting the need for an ethically augmented TAM framework.
For parents, Monitoring and Effectiveness emerged as a significant predictor of behavioural intention, reflecting a utility-based evaluation aligned with TAM’s outcome-oriented logic. Parents assess the usefulness of AI-based proctoring not in pedagogical terms but in terms of its ability to ensure examination integrity and discipline. This redefinition of “usefulness” indicates that PU within TAM may manifest differently depending on stakeholder expectations and functional priorities.
Overall, the findings support TAM’s foundational premise that ease of use remains a universal driver of technology adoption while simultaneously demonstrating that context-specific, ethical, and trust-related variables significantly condition behavioural intention in AI-based proctoring systems. The study thus contributes to TAM literature by evidencing the need for a stakeholder-sensitive and ethically extended TAM to adequately explain adoption behaviour in AI-driven educational surveillance technologies.
6. Recommendations
The findings of the study inform recommendations to support the responsible and effective implementation of AI-based proctoring systems. These recommendations aim to make professors, students, and parents equally responsive, to ensure equitable access to the technology, and to support the use of digital assessments.
First, it is essential to prioritize user-centric design principles to ensure accessibility and ease of use for a wide range of users (Luo, 2024). AI-proctoring systems should incorporate intuitive interfaces, simplified and streamlined navigation, and clear, concise instructions to cater to both technologically adept users and those with limited digital proficiency (Somavarapu et al., 2024). Educational institutions should organize frequent orientation sessions, open question-and-answer forums, and accessible documentation to build user confidence and mitigate resistance.
Moreover, the AI-based proctoring platforms should implement robust data privacy and protection protocols. To ensure ethical implementation, it is necessary to address fairness and algorithmic bias. This includes regular audits of AI algorithms, involvement of independent reviewers, and the establishment of redressal systems for users who perceive injustice in monitoring outcomes. Addressing these concerns is vital to maintain the credibility and integrity of assessment processes. At the policy level, adoption of AI‑based proctoring systems should align with India’s Digital Personal Data Protection (DPDP) Act, 2023, ensuring lawful data processing, informed consent, purpose limitation, and adequate security safeguards. Institutions should establish internal AI governance frameworks consistent with national data protection regulations to enhance accountability, transparency, and stakeholder trust.
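As one concrete form such a fairness audit could take (a hypothetical demographic-parity check, illustrative only and not a method used in this study), an auditor might compare the rate at which exam sessions are flagged as suspicious across demographic groups:

```python
def flag_rate_gap(flags_by_group):
    """Demographic-parity audit: spread between the highest and lowest
    flag rates across groups (hypothetical metric, illustrative only)."""
    rates = {g: sum(f) / len(f) for g, f in flags_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# 1 = exam session flagged as suspicious, 0 = not flagged (toy data)
gap, rates = flag_rate_gap({
    "group_a": [1, 0, 0, 1, 0, 0, 0, 0, 0, 0],   # 20% flagged
    "group_b": [1, 1, 0, 1, 0, 0, 1, 0, 0, 0],   # 40% flagged
})
print(round(gap, 2))  # 0.2
```

A persistent gap of this kind would be exactly the sort of finding that should trigger independent review and the redressal mechanisms recommended above.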
To address monitoring effectiveness, it is important to increase parental support (Moran et al., 2004), for example by sharing evidence-based outcomes such as reduced academic dishonesty and enhanced exam integrity. Furthermore, educational policymakers must embed ethical guidelines and accountability measures into the regulatory framework governing AI applications in education. These should cover transparency, data governance, fairness, and user consent, thereby creating a foundation for responsible innovation. Institutions should also establish mechanisms for ongoing feedback and iterative system improvement. This participatory approach, involving all stakeholders in system refinement, can enhance the responsiveness and effectiveness of AI-based solutions. By implementing the above recommendations, educational stakeholders can support a more equitable, trustworthy, and pedagogically aligned integration of AI-proctoring technologies.
With students in mind, since privacy concerns significantly influenced their behavioural intention, institutions should implement transparent data-handling policies, provide clear consent mechanisms, and communicate how AI-based proctoring data are stored, processed, and deleted. For parents, as awareness and perceived usefulness are key factors, institutions should conduct orientation sessions and provide educational materials explaining system accuracy, fairness, and data protection safeguards. Finally, for faculty members, training programs should be introduced to enhance trust in system reliability and ethical usage practices.
Developers should incorporate clear explainability features into AI systems. For example, proctoring tools such as Proctorio and Respondus should provide instructors with accessible dashboards that clarify what data are collected, how behavioural flags are generated, the probability thresholds for detecting "suspicious" activity, and known limitations or bias risks. Providing interpretable AI outputs would reduce perceived ethical risk and increase trust among faculty.
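As a minimal sketch of what such an interpretable flag might contain (the field names and values are hypothetical illustrations, not the actual Proctorio or Respondus data model):

```python
from dataclasses import dataclass

@dataclass
class FlagExplanation:
    """Hypothetical interpretable record for one proctoring flag."""
    event: str          # behaviour the model detected, e.g. "gaze_away"
    confidence: float   # model probability assigned to the event
    threshold: float    # probability above which the event is reported
    data_used: tuple    # input streams behind the decision
    limitation: str     # known accuracy caveat shown to the instructor

    def is_flagged(self) -> bool:
        # A flag is raised only when confidence meets the disclosed threshold.
        return self.confidence >= self.threshold

flag = FlagExplanation("gaze_away", 0.81, 0.75, ("webcam",),
                       "reduced accuracy in low-light conditions")
print(flag.is_flagged())  # True
```

Surfacing the threshold and limitation fields alongside each flag is what turns a raw alert into the kind of explainable output the recommendation calls for.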
7. Implications of the Study
The findings point to several major policy implications. The variety of strong influencing factors underscores the importance of a strategy developed jointly by several stakeholders. Since AI-based proctoring is meant for students and educators, it is important to make the platforms user-friendly (Paul, 2024). If a school ensures clear information sharing and builds students' trust, the platform will be more readily accepted by students and their parents (Doton, 2024). Fairness should be reinforced through regular algorithmic audits (Goodman & Trehu, 2022). Moreover, highlighting the monitoring effectiveness of AI-based platforms to parents may increase their support (Entenberg et al., 2023). For professors, offering professional development opportunities enables them to recognize the educational potential of these platforms beyond their administrative function.
8. Limitations and Future Scope of the Study
This study provides valuable insights, but it also has certain limitations. First, data collection was limited to a specific demographic and geographic context, which may limit the generalizability of the findings to broader populations or international settings. Future studies may incorporate inferential analyses (e.g., chi-square tests or multi-group comparisons) to examine whether demographic variables such as age and gender significantly influence behavioural intention toward AI-based proctoring systems. The participants (professors, students, and parents) were drawn primarily from urban and semi-urban educational institutions with easy access to digital infrastructure, potentially excluding perspectives from rural or underserved regions where digital literacy and access to technology may differ significantly. Second, the study relied on self-reported data, which may be influenced by social desirability bias or subjective interpretations of AI-proctoring experiences. Additionally, while the study captured perceptions and behavioural intentions, it did not assess long-term adoption patterns or actual user behaviour over time.
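The chi-square tests suggested above can be run directly on frequencies of the kind already reported; for instance, a test of independence between stakeholder group and gender, using the student and professor counts from Tables 8 and 9 (the pairing of groups is illustrative only), sketched with the standard library:

```python
import math

# Observed gender counts: rows = stakeholder group, cols = (male, female)
observed = [[267, 197],   # students (Table 8)
            [315, 149]]   # professors (Table 9)

row_tot = [sum(r) for r in observed]
col_tot = [sum(c) for c in zip(*observed)]
n = sum(row_tot)

# Chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = sum((observed[i][j] - row_tot[i] * col_tot[j] / n) ** 2
           / (row_tot[i] * col_tot[j] / n)
           for i in range(2) for j in range(2))

# p-value for df = 1 via the complementary error function
p = math.erfc(math.sqrt(chi2 / 2))
print(round(chi2, 2), p < 0.05)  # 10.62 True
```

On these numbers the gender composition differs significantly between the two groups, illustrating how future demographic comparisons of this kind could be formalized.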
Acknowledgements
The authors would like to thank all anonymous reviewers for their valuable comments and suggestions, which greatly improved the quality of this paper. The authors express their gratitude to Panjab University for providing an intellectually enriching environment and supporting research endeavors. Appreciation is also extended to faculty from the university for their valuable support throughout the research process.
Ethics Approval Statement
Human subjects were involved in this study. (The study employs a quantitative approach and includes 1,392 participants, with 464 respondents representing each stakeholder group.) Informed consent was obtained from all subjects and/or their legal guardian(s) for both study participation and publication of identifying information/images in an online open-access publication.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Authors' contributions
All authors actively participated in completing the research study and throughout the publication process. The specific contribution roles are as follows:
Manit Malhotra and Indu Chhabra jointly conceived the research idea and developed the study methodology.
Manit Malhotra: Data collection, formal analysis, data curation, visualization, resource management, validation, and drafting the original manuscript.
Indu Chhabra: Supervision, project coordination, investigation, manuscript review and editing, and project administration.
Data availability
Data are available upon request.
Use of Artificial Intelligence
The authors employed the AI-based tool Grammarly to assist with language editing during the revision of this paper, including improvements to grammar, clarity, structure, and academic tone. The authors were solely responsible for all content and intellectual contributions. No generative AI technologies were used for autonomous content creation, data analysis, or interpretation. The accuracy and integrity of the final manuscript remain entirely the responsibility of the authors.
References
Alessio, H. M., Malay, N., Maurer, K., Bailer, A. J., & Rubin, B. (2017). Examining the effect of proctoring on online test scores. Online Learning, 21(1), 146–161. https://doi.org/10.24059/olj.v21i1.885
Almaiah, M. A., Al-Khasawneh, A., & Althunibat, A. (2020). Exploring the critical challenges and factors influencing the E-learning system usage during COVID-19 pandemic. Education and Information Technologies, 25, 5261–5280. https://doi.org/10.1007/s10639-020-10219-y
Al-Munawar, M. A. R., Azyan, N. I., Aurelia, S., Indriani, S., & Hadiapurwa, A. (2025). Professors’ views on optimizing Kurikulum Merdeka in SMK Kencana accounting department. Hipkin Journal of Educational Research, 2(1), 93–108. https://doi.org/10.64014/hipkin-jer.v2i1.45a
Belzak, W., Burstein, J., & von Davier, A. A. (2025). Evaluating Fairness in AI-Assisted Remote Proctoring. Proceedings of Machine Learning Research, 273(125), 132.
Bhardwaj, M., Kashyap, S., Aggarwal, D., & Bhawani, R. (2022). Perceptions and experience of medical students regarding E-learning during COVID-19 Lockdown-A Cross-sectional study. Journal of Clinical and Diagnostic Research, 16, IC01–IC06. https://doi.org/10.7860/JCDR/2022/54803.16051
Binns, R., Van-Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). ’It’s Reducing a Human Being to a Percentage’ Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 Chi conference on human factors in computing systems (pp. 1–14). https://doi.org/10.1145/3173574.3173951
Bucăţa, G., & Tileagă, C. (2024). Digital renaissance in education: unveiling the transformative potential of digitization in educational institutions. Land Forces Academy Review, 29(1), 20–37. https://doi.org/10.2478/raft-2024-0003
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77–91). PMLR.
Chai, F., Ma, J., Wang, Y., Zhu, J., & Han, T. (2024). Grading by AI makes me feel fairer? How different evaluators affect college students’ perception of fairness. Frontiers in Psychology, 15, 1221177.
Chatterjee, S., Rana, N. P., Tamilmani, K., & Sharma, A. (2021). The effect of AI-based CRM on organization performance and competitive advantage: An empirical analysis in the B2B context. Industrial Marketing Management, 97, 205–219. https://doi.org/10.1016/j.indmarman.2021.07.013
Chin, T., Shi, Y., Singh, S. K., Agbanyo, G. K., & Ferraris, A. (2022). Leveraging blockchain technology for green innovation in ecosystem-based business models: A dynamic capability of values appropriation. Technological Forecasting and Social Change, 183, 121908. https://doi.org/10.1016/j.techfore.2022.121908
Cyphert, A. B. (2019). Tinker-ing with machine learning: The legality and consequences of online surveillance of students. Nevada Law Journal, 20, 457.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
Doton, W. (2024). The role of data security in the world of education to increase parental trust. Educational Researcher Journal, 1(3), 41–48. https://doi.org/10.71288/educationalresearcherjournal.v1i3.19
Durnell, E., Okabe-Miyamoto, K., Howell, R. T., & Zizi, M. (2020). Online privacy breaches, offline consequences: Construction and validation of the concerns with the protection of informational privacy scale. International Journal of Human–Computer Interaction, 36(19), 1834–1848.
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Vigneswara-Ilavarasan, P., Janssen, M., Jones, P., Kumar-Kar, A., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., ... Williams, M. D. (2021). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
Eaton, S. E., & Turner, K. L. (2020). Exploring academic integrity and mental health during COVID-19: Rapid review. Journal of Contemporary Education Theory & Research (JCETR), 4(2), 35–41.
Entenberg, G. A., Mizrahi, S., Walker, H., Aghakhani, S., Mostovoy, K., Carre, N., Marshall, Z., Dosovitsky, G., Benfica, D., Rousseau, A., Lin, G., & Bunge, E. L. (2023). AI-based chatbot micro-intervention for parents: Meaningful engagement, learning, and efficacy. Frontiers in Psychiatry, 14, 1080770. https://doi.org/10.3389/fpsyt.2023.1080770
Fawns, T., & Schaepkens, S. P. (2023). A matter of trust: Online proctored exams and the integration of technologies of assessment in medical education. In Helping a Field See Itself (pp. 71–80). CRC Press. https://doi.org/10.1201/9781003263463-9
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Shafer, B., Valcke, P., & Vayena, E. (2018). AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51–90.
George, A. S. (2024). Technology Tension in Schools: Addressing the Complex Impacts of Digital Advances on Teaching, Learning, and Wellbeing. Partners Universal Multidisciplinary Research Journal, 1(3), 49–65.
Goodman, E. P., & Trehu, J. (2022). Algorithmic auditing: Chasing AI accountability. Santa Clara High Technology Law Journal, 39, 289. https://doi.org/10.2139/ssrn.4227350
Hair Jr, J. F., Matthews, L. M., Matthews, R. L., & Sarstedt, M. (2017). PLS-SEM or CB-SEM: updated guidelines on which method to use. International Journal of Multivariate Data Analysis, 1(2), 107–123. https://doi.org/10.1504/IJMDA.2017.087624
Hollister, K. K., & Berenson, M. L. (2009). Proctored versus unproctored online exams: Studying the impact of exam environment on student performance. Decision Sciences Journal of Innovative Education, 7(1), 271–294. https://doi.org/10.1111/j.1540-4609.2008.00220.x
Karim, A. R., & Sugianto, H. (2023). Measuring The Future Needs Of Islami Education Through The Role Of Artificial Intelligence. In Proceeding of International Conference on Education, Society and Humanity (Vol. 1(1), pp. 861–870).
Kharbat, F. F., & Abu Daabes, A. S. (2021). E-proctored exams during the COVID-19 pandemic: A close understanding. Education and Information Technologies, 26(6), 6589–6605. https://doi.org/10.1007/s10639-021-10458-7
Kulshrestha, A., Gupta, A., Singh, U., Sharma, A., Shukla, A., Gautam, R., Kumar, P., & Pandey, D. (2023). AI-based exam proctoring system. In 2023 International Conference on Disruptive Technologies (ICDT) (pp. 594–597). IEEE. https://doi.org/10.1109/ICDT57929.2023.10151160
Luo, Y. (2024). Enhancing educational interfaces: Integrating user-centric design principles for effective and inclusive learning environments. Applied and Computational Engineering, 64, 192–197. https://doi.org/10.54254/2755-2721/64/20241427
Malhotra, N. K., Kim, S. S., & Agarwal, J. (2004). Internet users’ information privacy concerns (IUIPC): The construct, the scale, and a causal model. Information Systems Research, 15(4), 336–355.
McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, 13(3), 334–359.
Moran, P., Ghate, D., Van-Der-Merwe, A., & Policy Research Bureau (2004). What works in parenting support?: A review of the international evidence. DfES Publications.
Morley, J., Elhalal, A., Garcia, F., Kinsey, L., Mökander, J., & Floridi, L. (2021). Ethics as a service: a pragmatic operationalisation of AI ethics. Minds and Machines, 31(2), 239–256. https://doi.org/10.1007/s11023-021-09563-w
Nigam, A., Pasricha, R., Singh, T., & Churi, P. (2021). A systematic review on AI-based proctoring systems: Past, present and future. Education and Information Technologies, 26(5), 6421–6445. https://doi.org/10.1007/s10639-021-10597-x
Paul, J. S., Farhath, O., & Selvan, M. P. (2024). AI based Proctoring System–A Review. In 2024 International Conference on Inventive Computation Technologies (ICICT) (pp. 1–5). IEEE. https://doi.org/10.1109/ICICT60155.2024.10544779
Seo, K., Tang, J., Roll, I., Fels, S., & Yoon, D. (2021). The impact of artificial intelligence on learner‑instructor interaction in online learning. International Journal of Educational Technology in Higher Education, 18(1), 54.
Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
Shioji, E., Meliksetyan, A., Simko, L., Watkins, R., Aviv, A. J., & Cohney, S. (2025). It’s been Lovely Watching you: Institutional Decision-Making on Online Proctoring Software. In 2025 IEEE Symposium on Security and Privacy (SP) (pp. 2790–2808). IEEE. https://doi.org/10.1109/SP61157.2025.00018
Somavarapu, J., Biswas, S. K., Purkayastha, B., Abhisheka, B., & Potluri, T. (2024). Advancements and Challenges in Fully Automated Online Proctoring Systems: A Comprehensive Survey of AI-Driven Solutions. In International Conference on Smart Computing and Communication (pp. 199–212). Springer Nature Singapore. https://doi.org/10.1007/978-981-97-1326-4_17
Teo, T. (2011). Factors influencing Professors’ intention to use technology: Model development and test. Computers & Education, 57(4), 2432–2440. https://doi.org/10.1016/j.compedu.2011.06.008
Tweissi, A., Al Etaiwi, W., & Al Eisawi, D. (2022). The accuracy of AI-based automatic proctoring in online exams. Electronic Journal of e-Learning, 20(4), 419–435. https://doi.org/10.34190/ejel.20.4.2600
van Wynsberghe, A. (2013). A method for integrating ethics into the design of robots. Industrial Robot: An International Journal, 40(5), 433–440. https://doi.org/10.1108/IR-12-2012-451
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Williamson, B., Eynon, R., & Potter, J. (2020). Pandemic politics, pedagogies and practices: Digital technologies and distance education during the coronavirus emergency. Learning, Media and Technology, 45(2), 107–114. https://doi.org/10.1080/17439884.2020.1761641
Annex 1: Scale Questionnaire with Sources
| Scale Items Descriptions (Professors’ Perceptions of the AI-based Proctoring System) | Source |
|---|---|
| Perceived Usefulness | Davis (1989) |
| Perceived Ease of Use | Davis (1989) |
| Trust | McKnight et al. (2002); Gefen et al. (2003) |
| Privacy Scale | Durnell et al. (2020) |
| Impact on Teaching & Learning | Seo et al. (2021); Williamson et al. (2020) |
| Behavioural Intention | Venkatesh et al. (2003) |

| Scale Items Descriptions (Students’ Perceptions of the AI-based Proctoring System) | Source |
|---|---|
| Perceived Fairness | Chai et al. (2024) |
| Privacy Concerns | Durnell et al. (2020) |
| Trust | Shin (2021) |
| Usability/Ease of Use | Davis (1989) |
| Behavioural Intention to Use the AI-Proctoring System | Venkatesh et al. (2003) |

| Scale Items Descriptions (Parents’ Perceptions of the AI-based Proctoring System) | Source |
|---|---|
| Privacy Concerns | Malhotra et al. (2004) |
| Effectiveness and Monitoring (Accuracy) | Tweissi et al. (2022) |
| Trust | McKnight et al. (2002); Gefen et al. (2003) |
| Fairness and Bias | Belzak et al. (2025) |
| Behavioural Intention | Venkatesh et al. (2003) |
This work is licensed under a Creative Commons Attribution 4.0 International License
Journal of Technology and Science Education, 2011-2026
Online ISSN: 2013-6374; Print ISSN: 2014-5349; DL: B-2000-2012
Publisher: OmniaScience