Comparing supervised learning models for classifying learned helplessness in a mathematics tutoring environment

John Paul Miranda1,2* , Rex Bringula2 , Nelson Rodelas2 , Carmelita Ragasa2 ,
Edie Boy Dela Cruz2 , Sheila Geronimo2 , Arlene Mae Valderama3 ,
Albert Vinluan4 , Alexis John Rubio2

1Pampanga State University (Philippines)

2University of the East (Philippines)

3Jose Rizal University (Philippines)

4Isabela State University (Philippines)

Received August 2025

Accepted November 2025

Abstract

Learned helplessness affects student motivation and performance, especially in mathematics, where repeated difficulty often leads students to withdraw from tasks. Most existing research has used surveys and teacher observations to study this phenomenon. While these methods provide insights, they do not capture behavioral data or support real-time identification. There is a need for approaches that use behavioral data from learning technologies and apply interpretable machine learning to detect signs of helplessness. This study used interaction logs and academic profiles from 113 Grade 8 students in five public schools in the Philippines. The students used a web-based mathematics tutoring application for 30 minutes under self-paced conditions. The study compared six supervised learning models to classify students as either high or low in learned helplessness based on teacher ratings. Results indicated that Explainable Boosting Machine (EBM), XGBoost, CatBoost, LightGBM, and Random Forest showed high performance, and EBM produced a slightly higher test balanced accuracy. EBM also provided the most interpretable results by identifying general weighted average, mathematics anxiety, time spent, and problem-solving success as the most influential predictors. These results show that learned helplessness can be classified using academic and behavioral indicators collected in digital learning environments. Interpretable models such as EBM can support early detection and guide adaptive instruction. Future studies should examine how models like EBM can help teachers respond to students’ needs across a wider range of subjects and settings.

 

Keywords – Adaptive systems, Educational data mining, Learned helplessness, Machine learning, Tutoring systems.

To cite this article:

Miranda, J.P., Bringula, R., Rodelas, N., Ragasa, C., Dela Cruz, E.B., Geronimo, S. et al. (2025). Comparing supervised learning models for classifying learned helplessness in a mathematics tutoring environment. Journal of Technology and Science Education, 15(3), 834-845. https://doi.org/10.3926/jotse.3765

1. Introduction

Learned helplessness is a well-established concept in educational psychology. It describes a state in which individuals believe they cannot control outcomes (Winterflood & Climie, 2020), which leads them to reduce effort or disengage from tasks (Ghasemi & Karimi, 2021). In classroom settings, this often appears when students stop trying after repeated failure. Mathematics is one of the most common subjects in which learned helplessness emerges because its concepts must be understood sequentially, with each step building on the previous one, and because students face pressure to solve problems correctly (Taş & Deniz, 2018). When students feel incapable of success, they may begin to avoid challenges or rely heavily on help, which further limits their learning and performance (Taş & Deniz, 2018; Yates, 2009).

This concern has become more urgent in the context of the Philippine educational system. In recent years, international assessments have highlighted persistent gaps in mathematical performance among Filipino students. The 2018 Programme for International Student Assessment (PISA) ranked the Philippines near the bottom in mathematics achievement among participating countries (Department of Education, 2019). Similarly, the 2019 Trends in International Mathematics and Science Study (TIMSS) showed that most basic education students in the Philippines did not reach even the intermediate proficiency benchmark (Mullis, Martin, Foy, Kelly & Fishbein, 2020). These results suggest that many students struggle not only with mathematical skills but also with motivational and emotional factors. These factors are closely related to the phenomenon of learned helplessness and deserve greater attention in both research and practice.

In response to this, digital learning environments now offer new opportunities to study student behavior (Ali, Ashaari, Noor & Zainudin, 2022). As students interact with tutoring systems, their actions generate data such as time spent on tasks, number of mistakes, hint usage, and problem-solving patterns. These behavioral traces provide valuable insights into student engagement and cognitive states (Park, Nam & Cha, 2012). Although previous studies have used such data to detect affective states, engagement, or academic performance, very few have examined learned helplessness specifically (Seyri & Rezaee, 2024; Wong, Ang, Yong & Yong, 2023; Ziegler, Bedenlier, Gläser-Zikuda, Kopp & Händel, 2021). At present, there is limited research that explores learned helplessness through human-computer interaction (HCI) data. Even fewer studies have evaluated and compared different supervised learning models for this classification task. This lack of research limits our understanding of how machine learning can be used to detect learned helplessness in real time and in ways that are interpretable and actionable for educators.

Beyond these gaps, teachers often face clear signals when students struggle with control beliefs during mathematics tasks. For instance, students who show low success on routine steps often hesitate during problem solving, and this hesitation signals a need for guided practice (Estey & Coady, 2017). Students who report high anxiety often avoid challenging items, and this avoidance signals a need for short confidence-building routines before problem solving (Govindan, Kaliaperumal, Arulmozhi & Priya, 2024; Rada & Lucietto, 2022). Students who spend long periods on simple steps often need clearer procedural cues, and teachers can respond with worked examples that reduce confusion. These connections show how learned helplessness theory aligns with measurable behavioral indicators (Yates, 2009), and they show how machine learning models can support interventions during regular instruction.

To address this gap, the present study used behavioral interaction logs and academic profiles to classify learned helplessness among Grade 8 students in a mathematics tutoring environment. Six well-known supervised learning models were trained and evaluated: Random Forest, XGBoost, CatBoost, LightGBM, Logistic Regression, and Explainable Boosting Machine (EBM). Random Forest was selected for its robustness and ability to handle non-linear relationships through ensemble decision trees (Chen & Jin, 2024; Hari-Krishna, Vallabhaneni, Sri-Krishna-Chaitanya, Kumar-Kaveti, Narasimha-Rao & Tirumanadham, 2023; Kumar, Kothiyal, Kumar, Hemantha & Maranan, 2024; Wong et al., 2023). XGBoost was chosen for its efficiency and strong predictive performance using gradient boosting with regularization (Duan, Dai & Tu, 2021; Li & Zhou, 2023). CatBoost was included due to its native support for categorical variables and reduced need for preprocessing (Sapkota & Kaur, 2025; Tirumanadham, Thaiyalnayaki & SriRam, 2024). LightGBM was selected for its fast training time and suitability for large-scale data (Wang, Chang & Liu, 2022; Wu, Chen, Zhang & Wang, 2024). Logistic Regression served as a baseline model for its simplicity and interpretability (Coussement, Phan, De-Caigny, Benoit & Raes, 2020; Pei & Xing, 2021). Lastly, EBM was chosen for its balance between accuracy and interpretability, as it exposes feature contributions directly through additive boosting (Dsilva, Schleiss & Stober, 2023; Ghimire, Abdulla, Joseph, Prasad, Murphy, Devi et al., 2024).

Teacher-provided ratings, based on the Yates (2009) instrument, were used to label students as showing high or low learned helplessness. Among all models tested, EBM achieved the highest balanced accuracy and provided clear insights into feature importance. The study aimed to (1) compare the performance of supervised learning models in classifying learned helplessness, (2) identify the most predictive features using an interpretable model, and (3) examine how academic and behavioral indicators contribute to learned helplessness detection in a digital learning environment.

2. Methodology

2.1. Participants, Software Utilized, and Data Source

The dataset consisted of 2,235 interaction records collected from 113 Grade 8 students in the Philippines (Figure 1). Data collection occurred during a 30-minute session with a web-based mathematics tutoring application for step-by-step solving of linear equations, which was specifically modified and developed for the study (Figure 2) (Miranda & Bringula, 2023; Miranda, Bringula, Simpao, Salenga, Grume, Nacianceno et al., 2025). The software can generate unlimited equations based on the selected schema type, such as addition/subtraction, multiplication/division, or mixed operations. It checks whether each step entered by the student is correct and provides step-based hints when needed. The software also allows the student to skip a step and still proceed with the solution. Students may also input the final answer directly. Additionally, the software enables users to skip or reset the entire problem. The dataset was adapted from an earlier study used to develop a random forest model (Miranda & Bringula, 2025). The data were collected from sessions conducted in five public secondary schools under self-paced conditions in which students could solve a maximum of 18 equations of varying difficulty. Prior to the sessions, a pretest was administered to assess students’ baseline ability to solve linear equations. Figure 3 shows the full workflow of the study. All records were anonymized and used solely for research purposes.

Ethical approval for this study was obtained from the University of the East Ethics Review Committee. Prior to data collection, written permissions were secured from provincial and city division officials of the Department of Education. Following institutional clearance, permission from school principals was also secured to conduct the study within their campuses. Teachers were consulted and briefed on the purpose and process of the research. Consent forms were distributed and signed by the parents or guardians of the participating students. Student assent was also obtained to ensure voluntary participation and understanding of the study.

 

Figure 1. Sample interaction logs

 

Figure 2. Main interface

 

Figure 3. Study workflow

2.2. Measures and Features

Two categories of features were used in the analysis: academic profile features and behavioral interaction features. Academic profile features included General Weighted Average (GWA), pretest score, average self-efficacy, and average anxiety. Behavioral interaction features included total equations attempted, total steps taken, number of mistakes, total hints used, total answer attempts, number of problems solved, number of problems skipped, average steps per equation, total click count, and total time spent (in seconds). The binary target variable, Label, classified students as either low (0) or high (1) in learned helplessness.

2.3. Self-Efficacy and Anxiety Measures

The study measured mathematics self-efficacy and anxiety using items from the Mathematics Self-Efficacy and Anxiety Questionnaire by May (2009). Students rated their confidence and anxiety on a five-point Likert scale. Higher scores indicated stronger confidence or higher anxiety. The study computed average scores for each subscale for every student. These averages served as academic profile features in the models.

2.4. Target Variable Definition

The classification of learned helplessness was based on teacher ratings using the 10-item Student Behaviour Scale developed by Yates (2009). This scale was adapted from the Student Behavior Checklist and psychometrically validated using Rasch modeling. Teachers rated student behaviors related to failure response, motivation, persistence, and effort using a five-point Likert scale. Scores from the helplessness subscale were used to classify students. Those exhibiting a higher degree of helplessness behaviors were labeled as high learned helplessness (Label = 1), while those showing fewer such behaviors were labeled as low learned helplessness (Label = 0).

2.5. Procedure

The dataset was randomly divided into training and testing sets using a stratified 70:30 split to preserve class proportions. The Synthetic Minority Over-Sampling Technique (SMOTE) was applied to the training set with a 70% sampling strategy to address class imbalance (Mohd, Abdul-Jalil, Noora, Ismail, Yahya & Mohamad, 2019). No additional feature scaling or outlier removal was applied, as the models used were robust to unscaled numerical inputs. Class weights were specified in compatible models to further address imbalance, with greater weight assigned to the high helplessness class (Label = 1).
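As a rough sketch of this setup (the toy data and variable names below are hypothetical, not the study’s dataset), the stratified 70:30 split and the class weighting can be arranged as follows; the SMOTE step itself would come from a separate library, e.g. imblearn’s `SMOTE(sampling_strategy=0.7)`, fitted on the training set only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_class_weight

# Toy stand-in for the study's data: 113 students' aggregated records
# with a minority "high helplessness" class (Label = 1). Values are
# synthetic and only illustrate the mechanics of the split.
rng = np.random.default_rng(7)
X = rng.normal(size=(113, 14))        # 14 academic + behavioral features
y = np.array([1] * 30 + [0] * 83)     # hypothetical class imbalance

# Stratified 70:30 split preserves the class proportions in both sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42
)

# "Balanced" class weights give the rarer class more influence during
# training, mirroring the weighting applied in compatible models.
weights = compute_class_weight(
    "balanced", classes=np.array([0, 1]), y=y_train
)
print(dict(zip([0, 1], weights)))
```

Because the split is stratified, the share of Label = 1 records in the training and testing sets stays close to the overall proportion, which is what allows balanced accuracy comparisons on the holdout set to remain meaningful.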

2.6. Model Selection and Configuration

Six popular supervised learning algorithms were selected to compare their effectiveness in classifying students by learned helplessness level. These included: (1) Random Forest, serving as the baseline model; (2) XGBoost, a gradient boosting framework optimized for structured data; (3) CatBoost, designed for datasets with categorical and ordinal features; (4) LightGBM, an efficient boosting algorithm for large-scale data; (5) EBM, a transparent and interpretable model based on additive boosting; and (6) Logistic Regression, included as a linear baseline. Among these six algorithms, EBM improved on earlier methods because it shows how each feature shapes the prediction, and this clarity is essential in tutoring systems that need interpretable outputs for effective support (Dsilva, 2023; Pei & Xing, 2021; Sahlaoui, Alaoui, Nayyar, Agoujil & Jaber, 2021). All models were implemented using Python 3.11 in a Jupyter Notebook environment with related libraries (scikit-learn, xgboost, lightgbm, catboost, interpret). Default or recommended hyperparameters were used to maintain consistency and comparability across models.

2.7. Evaluation Strategy

Each model was evaluated using 5-fold stratified cross-validation on the training data to assess generalization performance. Following cross-validation, the models were retrained on the full SMOTE-balanced training set and tested on the holdout set. Model performance was evaluated using test accuracy, balanced accuracy, and F1 score (Wardhani, Rochayani, Iriany, Sulistyono & Lestantyo, 2019). Balanced accuracy was prioritized to account for class imbalance in identifying students with high learned helplessness. All results were tabulated and used for comparative analysis.
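A minimal sketch of this evaluation loop (synthetic data; the numbers it prints are illustrative, not the study’s results) also shows why balanced accuracy was prioritized: on imbalanced labels, a trivial majority-class predictor still scores well on plain accuracy but earns exactly 0.5 balanced accuracy and 0.0 F1.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, f1_score
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Imbalanced toy data: roughly 25% positives, echoing a minority
# high-helplessness class.
X, y = make_classification(
    n_samples=400, n_features=14, weights=[0.75], random_state=0
)

# 5-fold stratified cross-validation scored on balanced accuracy.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
model = RandomForestClassifier(random_state=42)
scores = cross_val_score(model, X, y, cv=cv, scoring="balanced_accuracy")
print("CV balanced accuracy:", scores.mean().round(3))

# The majority-class baseline: high plain accuracy, but balanced
# accuracy averages per-class recall (1.0 and 0.0 here), giving 0.5.
majority = np.zeros_like(y)
print("balanced acc of majority guess:",
      balanced_accuracy_score(y, majority))   # 0.5
print("F1 of majority guess:", f1_score(y, majority))  # 0.0
```

This is why the comparison in Tables 2 and 3 leans on balanced accuracy rather than raw accuracy alone.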

3. Results and Discussion

3.1. Descriptive Statistics

Table 1 provides an overview of the descriptive statistics for the variables used in the analysis. The dataset includes both academic profile features and behavioral interaction logs derived from the use of a web‑based mathematics tutoring application. These features represent students’ self-reported dispositions, such as self-efficacy and anxiety, alongside system-tracked data, including the number of equations attempted, hints used, mistakes made, and time spent. The data also capture cumulative indicators of engagement, such as total clicks and answer attempts. These variables were used to train machine learning models for classifying students into high and low learned helplessness groups.

Feature                          Mean       SD
GWA                              90.6       3.65
Pretest score                    3.69       3.15
Average self-efficacy            3.38       0.56
Average anxiety                  2.96       0.64
Total equations attempted        25.38      12.34
Total mistakes                   9.88       6.52
Total hints                      7.33       4.53
Total answer attempts            39.7       19.35
Total problems solved            4.37       4.68
Total problems skipped           17.4       11.74
Total click frequency            580.67     238.83
Total time spent (in seconds)    1,524.04   3,067.66

Note. N = 2,235 interaction logs.

Table 1. Descriptive statistics and class distribution of learned helplessness label

3.2. Cross-Validation Results

Table 2 shows the results of the model evaluation using five-fold stratified cross-validation. The top‑performing models were XGBoost, EBM, LightGBM, and CatBoost. These models produced very similar accuracy scores, which suggests that they were all effective in classifying students based on learned helplessness. Random Forest also performed well, although its accuracy was slightly lower than the leading models. Logistic Regression had the lowest performance, which shows that it was not as effective in handling the features used in this study. The results highlight the strength of ensemble-based models and the value of using interpretable techniques like EBM in educational settings (Dsilva et al., 2023; Sahlaoui et al., 2021).

The strong performance of tree-based and boosting models aligns with recent findings that emphasize their effectiveness in handling complex learning behavior patterns (Borna, Saadat, Hojjati & Akbari, 2024; Lincke, Jansen, Milrad & Berge, 2021). These models are capable of capturing nonlinear relationships and interaction effects among features, which are common in educational data. In contrast, the lower performance of Logistic Regression supports earlier observations that linear models struggle with affective and behavioral learning constructs due to their limited capacity for modeling interactions (Hassan & Kaabouch, 2024; Kyriazos & Poga, 2024). These findings confirm that advanced ensemble methods are better suited for predicting latent psychological states from learning traces.

Model                 Mean Balanced Accuracy   SD
XGBoost               0.997                    0.003
EBM                   0.997                    0.002
LightGBM              0.997                    0.003
CatBoost              0.995                    0.003
Random Forest         0.917                    0.011
Logistic Regression   0.630                    0.039

Table 2. Cross-validation balanced accuracy across models

3.3. Test Set Validation

As shown in Table 3, EBM, XGBoost, CatBoost, and LightGBM achieved nearly identical performance in terms of balanced accuracy and F1 score. EBM ranked first with a 99% test balanced accuracy and a 99% F1 score, while XGBoost, CatBoost, and LightGBM followed closely with 99% balanced accuracy and identical F1 scores. All four models exceeded 99% in overall test accuracy. Random Forest achieved a 92% balanced accuracy and a 71% F1 score; while its performance was slightly lower than that of the boosting models, it remained relatively strong. Logistic Regression showed the weakest performance, with a 56% balanced accuracy and a 29% F1 score.

These results confirm that high-performing models maintain strong generalization to unseen data. The nearly indistinguishable performance among EBM, XGBoost, CatBoost, and LightGBM reinforces the finding that learned helplessness, when operationalized through behavioral and academic features, can be effectively classified by multiple robust algorithms (Zeineddine, Braendle & Farah, 2021). However, interpretability remains an important concern. As several studies emphasized, when decisions impact student support or intervention, stakeholders benefit from transparent models (Lundberg & Lee, 2017; Mariyono & Nur-Alif-Hd, 2025; Memarian & Doleck, 2023). In the context of Philippine schools, models like EBM offer advantages by balancing performance and explainability, which supports informed instructional decisions.

The high balanced accuracy scores raise the possibility of overfitting. The dataset came from a single 30‑minute session, and this short duration may limit the range of behaviors captured. Boosting models can learn detailed patterns in small structured datasets, and this ability can inflate performance. The study used cross-validation and a separate test set to reduce this risk.

Rank   Model                 Accuracy   Balanced Accuracy   F1 Score
1      EBM                   0.997      0.991               0.991
2      XGBoost               0.996      0.990               0.986
3      CatBoost              0.996      0.990               0.986
4      LightGBM              0.996      0.990               0.986
5      Random Forest         0.866      0.916               0.706
6      Logistic Regression   0.592      0.557               0.287

Table 3. Test set performance of supervised learning models

3.4. Top Predictive Features Identified by the EBM

Table 4 presents the top 10 predictive features identified by the EBM, which was selected for interpretation since it achieved the highest overall performance among all models. In EBM, GWA emerged as the strongest individual predictor of learned helplessness, followed by both individual and interaction terms involving average anxiety, total problems solved, and time spent. Specifically, interaction between AVG_Anxiety and TotalSolved was highly influential. Students who show high levels of anxiety and achieve low problem-solving success are more likely to develop learned helplessness (Andres, Baker, Hutt, Mills, Zhang, Rhodes et al., 2023; Gürefe & Bakalım, 2018). Other important contributors included the total number of steps and hints used, as well as composite interactions involving GWA with Pretest_Score and TotalMistakeUser. These results indicate that both academic background and behavioral engagement patterns jointly influence the model’s ability to detect high learned helplessness.

These results provide insight into how learned helplessness manifests in student data. The combination of cognitive (e.g., TotalSolved) and emotional (e.g., AVG_Anxiety) variables reflects the relationship between affective states and problem-solving behavior, consistent with Dweck and Leggett’s (1988) theory of helplessness orientation (Schroder, Fisher, Lin, Lo, Danovitch & Moser, 2017; Sideridis, 2003). Prior research has also shown that students with lower perceived competence are more likely to give up or rely on external help (Yates, 2009). EBM’s ability to highlight these patterns makes it particularly useful for identifying at-risk students. It can support systems that adapt based on students’ academic history, response patterns, and emotional indicators, all of which contribute to motivation and persistence in mathematical learning.
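The value of an interaction term such as AVG_Anxiety & TotalSolved can be illustrated with a small synthetic sketch (the data-generating rule, thresholds, and distributions below are hypothetical, not estimated from the study): when helplessness depends on the combination of high anxiety and low success, giving a linear classifier an explicit product feature tends to improve its fit, which is the kind of pairwise pattern EBM models automatically.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
anxiety = rng.normal(3.0, 0.6, n)         # stand-in for AVG_Anxiety
solved = rng.poisson(4, n).astype(float)  # stand-in for TotalSolved

# Hypothetical rule: helplessness emerges only when HIGH anxiety
# coincides with LOW problem-solving success.
label = ((anxiety > 3.0) & (solved < 4)).astype(int)

X_main = np.column_stack([anxiety, solved])
X_inter = np.column_stack([anxiety, solved, anxiety * solved])

lr = LogisticRegression(max_iter=2000)
acc_main = cross_val_score(lr, X_main, label, cv=5).mean()
acc_inter = cross_val_score(lr, X_inter, label, cv=5).mean()
# The explicit product feature typically lifts cross-validated
# accuracy on this AND-shaped target.
print(round(acc_main, 3), round(acc_inter, 3))
```

EBM goes further than this manual construction by learning per-feature and pairwise shape functions and reporting their importances directly, which is what populates Table 4.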

Teachers can use the model outputs to guide targeted actions during mathematics instruction, and this strengthens the practical value of the study. GWA trends can warn teachers about foundational gaps, and early review sessions can help students rebuild essential skills. High anxiety scores can prompt teachers to use brief confidence routines before problem-solving tasks, while low problem-solving success can direct teachers to provide guided examples that reduce confusion. Time-on-task patterns can help teachers adjust pacing or difficulty to keep students engaged and confident. The study offers a modest theoretical contribution because it builds on established models of learned helplessness, yet its methodological and practical strengths are clear. The integration of interpretable machine learning with behavioral and affective indicators shows how digital learning environments can support early detection of motivational difficulties. EBM improves earlier predictive approaches because it gives clear explanations for each feature, and this supports instructional decisions in ways that black-box models cannot. These points highlight the applied relevance of the study for learning analytics and classroom practice.

Rank   Feature                                             Importance
1      GWA                                                 2.995
2      AVG_Anxiety & TotalSolved                           1.902
3      AVG_Anxiety                                         1.811
4      TotalSolved                                         1.687
5      TotalTimeSpentInSecondsUser                         1.580
6      GWA & Pretest_Score                                 1.563
7      GWA & TotalMistakeUser                              1.506
8      TotalStepsUser                                      1.276
9      TotalHintsUser                                      1.271
10     TotalEquationsUser & TotalTimeSpentInSecondsUser    1.161

Table 4. Top 10 predictive features identified by the EBM

4. Conclusion and Recommendations

This study compared six supervised learning models in classifying learned helplessness among Grade 8 students based on their academic and interaction data during mathematics problem-solving. EBM, XGBoost, CatBoost, and LightGBM showed nearly identical levels of predictive accuracy. Random Forest also demonstrated high performance, with scores slightly lower than the top models. Logistic Regression showed substantially weaker results. The findings confirmed that learned helplessness can be identified using machine learning models that process academic standing, self-reported emotions, and behavioral patterns. Features such as general weighted average, anxiety, time spent, and problem-solving success were important for accurate classification.

Although all top models performed well, EBM had the added benefit of offering interpretable output. This made it easier to understand the relationship between the input features and the predictions. EBM may support teachers and designers of tutoring systems in identifying students who are struggling and providing appropriate support. The study was limited by its small sample size, short observation period, and focus on a single subject and grade level. Future research should include more students, longer usage periods, and additional classroom factors. Interpretable models like EBM can help improve learning environments by offering early insights into students who may need help.

Teachers can use the variables identified by EBM to organize early interventions during mathematics instruction. GWA can signal gaps in prerequisite knowledge, and teachers can respond with short review sessions or focused practice. Anxiety scores can guide teachers to include confidence-building routines or brief emotional check-ins before problem-solving tasks. Low problem-solving success can prompt teachers to give worked examples or guided steps during equation-solving. Time-on-task patterns can show when students need clearer instructions or reduced task complexity. These actions can help teachers support learners who show early signs of helplessness. Schools can also apply these models in broader settings because the algorithms operate with modest computational demands, and this capability supports integration into existing tutoring platforms and school dashboards.

Declaration of Conflicting Interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

This study was supported by the authors’ individual affiliations.

References

Ali, M.A., Ashaari, N.S., Noor, S.F.M., & Zainudin, S. (2022). Identifying Students’ Learning Patterns in Online Learning Environments: A Literature Review. International Journal of Emerging Technologies in Learning, 17(8), 189-205. https://doi.org/10.3991/ijet.v17i08.29811

Andres, J.M.A.L., Baker, R.S., Hutt, S.J., Mills, C., Zhang, J., Rhodes, S. et al. (2023). Anxiety, Achievement, and Self-Regulated Learning in CueThink. In Blikstein, P., Van-Aalst, J., Kizito, R., & Brennan, K. (Eds.), Proceedings of the 17th International Conference of the Learning Sciences - ICLS 2023 (258-265). International Society of the Learning Sciences.

Borna, M.R., Saadat, H., Hojjati, A.T., & Akbari, E. (2024). Analyzing click data with AI: implications for student performance prediction and learning assessment. Frontiers in Education, 9, 1421479. https://doi.org/10.3389/feduc.2024.1421479

Chen, Y., & Jin, K. (2024). Educational Performance Prediction with Random Forest and Innovative Optimizers: A Data Mining Approach. International Journal of Advanced Computer Science and Applications, 15(3), 69-78. https://doi.org/10.14569/IJACSA.2024.0150308

Coussement, K., Phan, M., De-Caigny, A., Benoit, D.F., & Raes, A. (2020). Predicting student dropout in subscription-based online learning environments: The beneficial impact of the logit leaf model. Decision Support Systems, 135, 113325. https://doi.org/10.1016/j.dss.2020.113325

Department of Education (2019). PISA 2018: National report of the Philippines. Available at: https://www.deped.gov.ph/wp-content/uploads/2019/12/PISA-2018-Philippine-National-Report.pdf

Dsilva, V., Schleiss, J., & Stober, S. (2023). Trustworthy Academic Risk Prediction with Explainable Boosting Machines. In Wang, N., Rebolledo-Mendez, G., Matsuda, N., Santos, O.C., & Dimitrova, V. (Eds.), Artificial Intelligence in Education (463-475). Springer Nature Switzerland.

Duan, D., Dai, C., & Tu, R. (2021). Research on the Prediction of Students’ Academic Performance Based on XGBoost. Proceedings - 2021 10th International Conference of Educational Innovation through Technology, EITT (316-319). Chongqing, China. https://doi.org/10.1109/EITT53287.2021.00068

Estey, A. & Coady, Y. (2017). Study Habits, Exam Performance, and Confidence: How Do Workflow Practices and Self-Efficacy Ratings Align? Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education (158-163). New York. https://doi.org/10.1145/3059009.3059056

Ghasemi, F., & Karimi, M.N. (2021). Learned helplessness in public middle schools: The effects of an intervention program based on motivational strategies. Middle School Journal, 52(4), 23-32.
https://doi.org/10.1080/00940771.2021.1948297

Ghimire, S., Abdulla, S., Joseph, L.P., Prasad, S., Murphy, A., Devi, A. et al. (2024). Explainable artificial intelligence-machine learning models to estimate overall scores in tertiary preparatory general science course. Computers and Education: Artificial Intelligence, 7, 100331. https://doi.org/10.1016/j.caeai.2024.100331

Govindan, S., Kaliaperumal, M., Arulmozhi, M., & Priya, P. (2024). Procrastination as a Marker of Anxiety Disorder Among College Students: An Institution-Based Cross-Sectional Study from Puducherry, India. Cureus, 16(5), e61033. https://doi.org/10.7759/cureus.61033

Gürefe, N., & Bakalım, O. (2018). Mathematics Anxiety, Perceived Mathematics Self-efficacy and Learned Helplessness in Mathematics in Faculty of Education Students. International Online Journal of Educational Sciences, 10(3), 154-166. https://doi.org/10.15345/iojes.2018.03.011

Hari-Krishna, R., Vallabhaneni, P., Sri-Krishna-Chaitanya, R., Kumar-Kaveti, K., Narasimha-Rao, M.V.A.L., & Tirumanadham, N.S.K.M.K. (2023). Data-Driven Early Warning System for Subject Performance: A SMOTE and Ensemble Approach (SMOTE-RFET). 2023 International Conference on Sustainable Communication Networks and Application, ICSCNA (998-1004), Theni, India. https://doi.org/10.1109/ICSCNA58489.2023.10370047

Hassan, M., & Kaabouch, N. (2024). Impact of Feature Selection Techniques on the Performance of Machine Learning Models for Depression Detection Using EEG Data. Applied Sciences, 14(22), 10532. https://doi.org/10.3390/app142210532

Kumar, D., Kothiyal, A., Kumar, R., Hemantha, C., & Maranan, R. (2024). Random Forest approach optimized by the Grid Search process for predicting the dropout students. 2024 International Conference on Innovations and Challenges in Emerging Technologies, ICICET (1-6). Nagpur, India. https://doi.org/10.1109/ICICET59348.2024.10616372

Kyriazos, T., & Poga, M. (2024). Application of Machine Learning Models in Social Sciences: Managing Nonlinear Relationships. Encyclopedia, 4(4), 1790-1805. https://doi.org/10.3390/encyclopedia4040118

Li, G., & Zhou, H. (2023). Modeling and Estimation Methods for Student Achievement Recognition Based on XGBoost Algorithm. 2023 International Conference on Evolutionary Algorithms and Soft Computing Techniques, EASCT (1-6). Bengaluru, India. https://doi.org/10.1109/EASCT59475.2023.10392502

Lincke, A., Jansen, M., Milrad, M., & Berge, E. (2021). The performance of some machine learning approaches and a rich context model in student answer prediction. Research and Practice in Technology Enhanced Learning, 16(1), 10. https://doi.org/10.1186/s41039-021-00159-7

Lundberg, S.M., & Lee, S.I. (2017). A unified approach to interpreting model predictions. Proceedings of the 31st International Conference on Neural Information Processing Systems (4768-4777). Red Hook, NY, USA.

Mariyono, D., & Nur-Alif-Hd, A. (2025). AI’s role in transforming learning environments: a review of collaborative approaches and innovations. Quality Education for All, 2(1), 265-288. https://doi.org/10.1108/QEA-08-2024-0071

May, D. (2009). Mathematics anxiety and self-efficacy questionnaire. University of Georgia. Available at: https://getd.libs.uga.edu/pdfs/may_diana_k_200908_phd.pdf

Memarian, B., & Doleck, T. (2023). Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education: A systematic review. Computers and Education: Artificial Intelligence, 5, 100152. https://doi.org/10.1016/j.caeai.2023.100152

Miranda, J.P.P., & Bringula, R.P. (2023). Towards the Development of a Model to Detect Learned Helplessness Among Grade 6 Mathematics Students. 2023 IEEE International Conference on Advanced Learning Technologies (ICALT) (373-375). Orem, UT, USA. https://doi.org/10.1109/ICALT58122.2023.00115

Miranda, J.P.P., & Bringula, R.P. (2025). Modeling Student Learned Helplessness in Mathematics Using Random Forest. 2025 13th International Conference on Information and Education Technology (138-142). Fukuyama, Japan. https://doi.org/10.1109/ICIET66371.2025.11046295

Miranda, J.P.P., Bringula, R.P., Simpao, L.S., Salenga, J.L., Grume, J.C., Nacianceno, M.C.B. et al. (2025). Towards the Development of Detection of Learned Helplessness in Mathematics: Design and Data Collection Challenges from a Developing Country Perspective. In Sottilare, R.A., & Schwarz, J. (Eds.), Adaptive Instructional Systems (87-96). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-92970-0_7

Mohd, F., Abdul-Jalil, M., Noora, N.M.M., Ismail, S., Yahya, W.F.F., & Mohamad, M. (2019). Improving Accuracy of Imbalanced Clinical Data Classification Using Synthetic Minority Over-Sampling Technique. In Alfaries, A., Mengash, H., Yasar, A., & Shakshuki, E. (Eds.), Advances in Data Science, Cyber Security and IT Applications (99-110). Springer International Publishing.

Mullis, I.V.S., Martin, M.O., Foy, P., Kelly, D.L., & Fishbein, B. (2020). TIMSS 2019 International Results in Mathematics and Science. TIMSS & PIRLS International Study Center. Available at: https://timss2019.org/reports/wp-content/themes/timssandpirls/download-center/TIMSS-2019-International-Results-in-Mathematics-and-Science.pdf

Park, S.Y., Nam, M.W., & Cha, S.B. (2012). University students’ behavioral intention to use mobile learning: Evaluating the technology acceptance model. British Journal of Educational Technology, 43(4), 592-605. https://doi.org/10.1111/j.1467-8535.2011.01229.x

Pei, B., & Xing, W. (2021). An Interpretable Pipeline for Identifying At-Risk Students. Journal of Educational Computing Research, 60(2), 380-405. https://doi.org/10.1177/07356331211038168

Rada, E., & Lucietto, A. (2022). Math Anxiety - A Literature Review on Confounding Factors. Journal of Research in Science Mathematics and Technology Education, 5(2), 117-129. https://doi.org/10.31756/jrsmte.12040

Sahlaoui, H., Alaoui, E.A.A., Nayyar, A., Agoujil, S., & Jaber, M.M. (2021). Predicting and Interpreting Student Performance Using Ensemble Models and Shapley Additive Explanations. IEEE Access, 9, 152688-152703. https://doi.org/10.1109/ACCESS.2021.3124270

Sapkota, A., & Kaur, S. (2025). Enhancing Student Success: Predictive Modeling of Graduation and Dropout Rates in University Management Using Machine Learning. Lecture Notes in Networks and Systems, 1128, 309-319. https://doi.org/10.1007/978-981-97-7371-8_24

Schroder, H.S., Fisher, M.E., Lin, Y., Lo, S.L., Danovitch, J.H., & Moser, J.S. (2017). Neural evidence for enhanced attention to mistakes among school-aged children with a growth mindset. Developmental Cognitive Neuroscience, 24, 42-50. https://doi.org/10.1016/j.dcn.2017.01.004

Seyri, H., & Rezaee, A.A. (2024). An autonomy-oriented response to EAP students’ learned helplessness in online classes. Current Psychology, 43(9), 7887-7898. https://doi.org/10.1007/s12144-023-04981-8

Sideridis, G.D. (2003). On the origins of helpless behavior of students with learning disabilities: avoidance motivation? International Journal of Educational Research, 39(4), 497-517. https://doi.org/10.1016/j.ijer.2004.06.011

Taş, S., & Deniz, S. (2018). Prediction concerning the learned helplessness about mathematics of the 8th grade students: Problem-solving skills and cognitive flexibility. Turkish Journal of Computer and Mathematics Education, 9(3), 618-635. https://doi.org/10.16949/turkbilmat.415087

Tirumanadham, N.S.K.M.K., Thaiyalnayaki, S., & SriRam, M. (2024). Evaluating Boosting Algorithms for Academic Performance Prediction in E-Learning Environments. Proceedings of the 2nd International Conference on Intelligent and Innovative Technologies in Computing, Electrical and Electronics, ICIITCEE (1-8). Bangalore, India. https://doi.org/10.1109/IITCEE59897.2024.10467968

Wang, C., Chang, L., & Liu, T. (2022). Predicting Student Performance in Online Learning Using a Highly Efficient Gradient Boosting Decision Tree. In Shi, Z., Zucker, J.D., & An, B. (Eds.), Intelligent Information Processing XI (508-521). Springer International Publishing.

Wardhani, N.W.S., Rochayani, M.Y., Iriany, A., Sulistyono, A.D., & Lestantyo, P. (2019). Cross-validation Metrics for Evaluating Classification Performance on Imbalanced Data. 2019 International Conference on Computer, Control, Informatics and Its Applications (IC3INA) (14-18). Tangerang, Indonesia. https://doi.org/10.1109/IC3INA48034.2019.8949568

Winterflood, H., & Climie, E.A. (2020). Learned helplessness. In The Wiley Encyclopedia of Personality and Individual Differences: Personality Processes and Individual Differences (269-274). Wiley. https://doi.org/10.1002/9781119547174.ch223

Wong, W.P., Ang, C.T., Yong, X.Y., & Yong, C.S.T. (2023). Langerian Mindfulness Reduces Learned Helplessness: An Online Experiment on Undergraduates in Malaysia. Asia-Pacific Social Science Review, 23(2), 29-40. Available at: https://www.dlsu.edu.ph/wp-content/uploads/pdf/research/journals/apssr/2023-june-vol23-2/ra3.pdf

Wu, J., Chen, X., Zhang, Y., & Wang, H. (2024). Artificial Intelligence Technology Empowers Practical Teaching of Higher Vocational Accounting Majors. Applied Mathematics and Nonlinear Sciences, 9(1). https://doi.org/10.2478/amns-2024-0272

Yates, S. (2009). Teacher Identification of Student Learned Helplessness in Mathematics. Mathematics Education Research Journal, 21(3), 86-106. Available at: https://files.eric.ed.gov/fulltext/EJ883874.pdf

Zeineddine, H., Braendle, U., & Farah, A. (2021). Enhancing prediction of student success: Automated machine learning approach. Computers & Electrical Engineering, 89, 106903. https://doi.org/10.1016/j.compeleceng.2020.106903

Ziegler, A., Bedenlier, S., Gläser-Zikuda, M., Kopp, B., & Händel, M. (2021). Helplessness among university students: An empirical study based on a modified framework of implicit personality theories. Education Sciences, 11(10), 630. https://doi.org/10.3390/educsci11100630





This work is licensed under a Creative Commons Attribution 4.0 International License

Journal of Technology and Science Education, 2011-2026

Online ISSN: 2013-6374; Print ISSN: 2014-5349; DL: B-2000-2012

Publisher: OmniaScience