The perspective of students and faculty members on the efficiency and usability of e-learning courses at ajman university: A case study

Mohd. Elmagzoub Eltahir, Sami Al-Qatawneh, Najah Al-Ramahi, Najeh Alsalhi

Ajman University (United Arab Emirates)

Received November 2018

Accepted May 2019

Abstract

While there is much in the literature on the usability of Course Management Systems (CMS) themselves, there is little that examines the content delivered through these systems. This study investigates the usability of the e‑learning courses at Ajman University from the perspective of students and faculty members. The e‑learning usability evaluation questionnaire developed by Zaharias (2009) was used as the main instrument for data collection, and a semi-structured interview form was used as a second instrument to collect qualitative data from the faculty members. The results revealed that the attitude of the majority of respondents toward the usability of e-learning courses at Ajman University was, in general, at the agree level, indicating a positive disposition toward using e-learning courses at the university.

Most of the participants in this study view the e-learning courses at AU as easy to use, easy to learn and as having a user-friendly interface. However, first-year students hesitate to express a firm opinion about the usability of e-learning courses at AU. The students’ perspectives varied by gender (with male students showing a more favorable opinion) and by college (with students in the College of Business Administration showing a more positive opinion). Interviews with faculty members from all colleges at AU revealed that most staff members are satisfied with the Moodle system. The study concludes that first-year students need more training in how to use Moodle and recommends that such training be provided.

 

Keywords – Course management system, Efficiency, E-Learning, Moodle, Usability.

To cite this article:

Eltahir, M.E., Al-Qatawneh, S., Al-Ramahi, N., & Alsalhi, N. (2019). The perspective of students and faculty members on the efficiency and usability of e-learning courses at Ajman University: A case study. Journal of Technology and Science Education, 9(3), 388-403. https://doi.org/10.3926/jotse.590

 

1. Introduction

The use of e-learning systems has become widespread in educational institutions around the world, and such systems are much preferred for their core benefits and wide range of advantages. They form part of a new trend in the education sector, where they are continuously used to improve the quality of education and to enhance the teaching and learning processes (Blecken & Marx, 2010).

E-learning systems provide a platform that uses technology to improve education and increase the likelihood of achieving a quality education (Babiker, 2014). When applied correctly, they can increase learners’ involvement in their own learning, support a learner-centered pedagogy and foster self-directed learners. E-learning can reduce teaching and learning time as well as minimize costs; achieve more effective learning, better lecturer productivity and more consistent learning; offer remote delivery; provide measurable learning outcomes; and support multi-cultural learning (Alturki, Aldraiweesh & Kinshuck, 2016).

The focal point of e-learning is to enable learning anywhere, at any time, by providing a wide range of resources and varied opportunities for active participation, content mastery and self-learning (Kiget, Wanyembi & Peters, 2014).

However, despite the widespread appeal of e-learning systems, and the wide range of advantages and benefits, there is still no agreement on how they can be effectively set up in the educational system and the extent to which they can be successful in reaching the required goals and objectives. A great deal of research has been conducted to evaluate their efficiency and usability, but researchers need to consider their pedagogical effectiveness, as well as their efficiency and usability (Granić & Ćukušić, 2011; Nyang’or, Villiers & Ssemugabi, 2013; Zaharias & Koutsabasis, 2012).

This is where the importance of the present research lies: its main aim is to investigate the efficiency and usability of e-learning courses at Ajman University and the extent to which they are used effectively to improve teaching and learning processes.

1.1. The Research Problem

In spite of the widespread use of Course Management Systems (CMS) in educational institutions that apply blended learning, very little has been done to critically examine their efficiency or usability for enhancing student learning experiences (Zaharias, 2009). The common problem with the integration of a CMS in blended learning is that it is used more as a content management system or cloud file storage than as a learning management system. Pedagogical effectiveness, efficiency and usability are frequently not considered in the development of e-learning courses, mostly because many instructors and faculty members in higher education institutions are not trained to do so or lack the required skills (Vrasidas, 2004). Uploading files and links does not by itself constitute learning; the learning process needs to be collaborative, interactive, reflective, flexible and progressive.

Instructors simply do not move beyond loading files and links in the LMS on to using the more interactive features, which readily create opportunities for students to self-direct and regulate their own learning. The aim of this study is to help the faculty members and instructional designers at our institution, as well as colleagues at other institutions, to ensure the usability of the e-learning courses they develop. In the second phase of this study, the evaluation findings will be used to improve the development and design of e‑learning courses by designing a model focused on the interrelationship between content, learning, instruction and learning management system (Moodle) features. According to Dick, Carey and Carey (2005: page 1), “Components such as the instructor, learners, materials, instructional activities, delivery system, and learning and performance environments interact with each other and work together to bring about the desired student learning outcomes”.

1.2. Research Questions

Q1: What are the students’ current views about the usability of the e-learning courses at Ajman University?

Q2: Does the attitude toward the usability of the e-learning courses at Ajman University vary according to the students’ gender, college and classification?

Q3: What are the faculty members’ current views about the usability of the e-learning courses at Ajman University?

2. Related Literature and Studies

2.1. The Concept of Usability

Usability is a reflection of human-computer interaction (HCI), since users adopt a particular technological product to accomplish their tasks quickly and effortlessly. In other words, usability factors make a system easy to learn and easy to use through a usable user interface.

From the 1960s on, a great number of theories of technology acceptance behavior have been developed from the perspectives of information systems, sociology and psychology. The foundational theory is the Theory of Reasoned Action (TRA), developed by Fishbein and Ajzen in 1975 (Davis, Bagozzi & Warshaw, 1989: page 983). Many theories have been built on TRA, such as TAM and TAM2. There are also theories rooted in sociology and psychology, such as the Innovation Diffusion Theory (IDT) proposed by Rogers (1962), Social Cognitive Theory (SCT) and the Motivational Model (MM).

2.2. Theory of Reasoned Action (TRA)

The Theory of Reasoned Action (TRA) is a model of persuasion and a behavioral theory. It originated in the field of social psychology and was developed by Fishbein and Ajzen (1975) (Davis et al., 1989).

The aim of this theory is to study behavioral intention; it posits that people’s behaviors and decisions can be predicted from their existing attitudes. It also attempts to understand the relationship between attitudes and human behavior. The main attributes of the TRA are: (1) attitude, understood as a general evaluation and a positive, negative or mixed reaction to something, and (2) subjective norm, which is people’s perception of the expectations of those around them. In Information Systems, this theory has been used to identify user behaviors and attitudes related to Internet usage.

2.3. Technology Acceptance Model (TAM)

The Technology Acceptance Model (TAM) is a theory of user acceptance and use of technology, based on the Theory of Reasoned Action (TRA). It is used to study behavioral intention to use an information system. The model centers on two beliefs that shape user acceptance: perceived ease of use and perceived usefulness. Davis et al. (1989: page 985) defined perceived usefulness as “the degree to which a person believes that using a particular system would enhance his or her job performance” and perceived ease of use as “the degree to which a person believes that using a particular system would be free from effort.”

2.4. TAM2

In 2000, TAM2 was developed as an extension of TAM (Venkatesh, Morris, Davis & Davis, 2003). It added further determinants of perceived ease of use and perceived usefulness in order to define them more precisely. The new attributes cover the social influence process and the cognitive instrumental process. The variables in the social influence process are subjective norm, voluntariness, image and experience, while the variables in the cognitive instrumental process are job relevance, output quality and result demonstrability. The goal of this model is to explain why users accept or reject an information system.

2.5. Social Cognitive Theory (SCT)

Social Cognitive Theory (SCT) is a well-established theory developed by Bandura (1986). It focuses on how people learn from their own experiences, from observing others and from their interactions with the social environment. It also studies how human behavior is affected by self‑efficacy, behavioral capability and the environment.

2.6. Motivational Model Theory (MM)

Motivation is the process that pushes an individual to complete desired tasks in order to achieve certain goals (Simon, 1976). Both internal and external factors can motivate someone to use technology. The Motivational Model (MM) is an important theory, proposed by Davis et al. (1989), that discusses users’ motivation to utilize and accept an information system.

2.7. Usability Evaluation Models

There are several usability models, such as that by McCall, Richards and Walters (1977), the Eason Model (1984), the Shackel and Richardson Model (1991), the Nielsen Model (Nielsen & Landauer, 1993), ISO 9241-11 (1998), ISO/IEC IS 9126-1 (2001) and the QUIM model (2006).

McCall et al. (1977) identified three main aspects of a software product: product revision (ability to change), product transition (adaptability to new environments) and product operations (basic operational characteristics). The usability factors of the McCall model under product operations are operability, training and communicativeness. The model published by Kenneth Eason (1984) concerns behavior and information technology. In this model, three dimensions are important to usability: the system, the user and the task. Each of them has independent variables, and when these independent variables interact with one another, the outcome changes and affects usability. In the Eason Model, usability is measured by considering users and their tasks. Figure 1 presents the sub-attributes for each dimension in this model:

 

Figure 1. The sub-attributes for each dimension in Eason’s model (1984)

Shackel and Richardson (1991: pages 21-37) gave an operational definition of usability, focused on system evaluation. Their model has four attributes: effectiveness, learnability, flexibility and attitude, and the importance of these attributes may vary from one system to another. Nielsen (2012), on the other hand, defines usability in terms of five attributes:

  • Learnability: How easy is it for users to learn to use the system the first time they encounter it?

  • Efficiency: Once users have learned the system, how quickly can they complete their tasks, without wasting time or effort?

  • Memorability: When users return after a period of not using the system, how easily can they remember how to use it?

  • Errors: How many errors do users make, how severe are they, and how easily can users recover from them?

  • Satisfaction: How pleasant is it to use the system?

The International Organization for Standardization (ISO) (1998) defines usability as: “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context” (ISO 1998, 3.1).

The ISO 9241 definition of usability comprises three attributes:

  • Effectiveness: the accuracy and completeness with which users achieve their tasks and goals.

  • Efficiency: the resources expended in relation to the accuracy and completeness with which users achieve their tasks and goals.

  • Satisfaction: the users’ comfort with, and positive attitudes toward, the system.

Seffah, Donyaee, Kline and Padda (2006) developed the Quality in Use Integrated Measurement (QUIM) model in 2006. This model builds on previous models, such as ISO 9241 and ISO 9126, and measures usability using both process and product quality metrics. The model contains ten factors. The first four are similar to those suggested by Shackel and Nielsen: efficiency, effectiveness, learnability and satisfaction. The remaining six are productivity, safety, trustfulness, accessibility, usefulness and universality (Aziz, Kamaludin & Sulaiman, 2013). Table 1 summarizes the common attributes found in some well-known usability models.

In addition, Table 2 shows the dimensions of usability that are considered in the previous models.

Models          Learnability   Effectiveness   Efficiency   Satisfaction
Eason Model     ✓              –               –            –
Shackel Model   ✓              ✓               –            ✓
Nielsen Model   ✓              –               ✓            ✓
ISO 9241-11     –              ✓               ✓            ✓
QUIM            ✓              ✓               ✓            ✓

Table 1. The common attributes found in some well-known usability models

 

Model           Dimensions of usability
Jim McCall      Product operations
Eason Model     System
Shackel Model   System
Nielsen Model   Users’ interaction with the system
ISO 9241-11     System; users; external factors; internal factors
QUIM            Product; process

Table 2. The dimensions of usability that are considered in the previous models

2.8. Previous Studies

Several studies have been conducted on the efficiency and usability of e-learning courses, focusing on evaluating e-learning courses and applications in educational institutions (Althobaiti & Mayhew, 2016; Alturki et al., 2016; Davids, Chikte & Halperin, 2014; Muries & Masele, 2017; Sobodic, Balaban & Kermek, 2018; Zaharias & Koutsabasis, 2012; Thuseethan, Achchuthan & Kuhanesan, 2014; Green, Inan & Denton, 2012; Jamaludin & Funn, 2007). The results of these studies confirmed the positive effects of applying an LMS in terms of the efficiency and usability of e-learning courses as seen by students and staff members, and recommended that e-learning course content shift from a programming perspective to a learning perspective. Meanwhile, other studies have shown certain deficiencies in the application and use of LMSs and their courses, such as the studies by Rosato, Dodds and Laughlin (2007), Alghamdi and Bayaga (2016) and Blecken and Marx (2010).

This body of work confirms the importance of evaluating the efficiency and usability of e-learning courses in Middle Eastern universities in the Arab region, with Ajman University taken here as a case study.

3. Methodology

3.1. Participants

The sample consisted of 377 students, of whom 46.7% were male and 53.3% were female. The intention was to diversify the target population; Table 3 shows the demographic information for the students.

Study Variables   Variable levels                                Frequency (f)   Percentage (%)
Gender            Female                                         201             53.3%
                  Male                                           176             46.7%
                  Total                                          377             100%
College           College of Business Administration             61              16%
                  College of Education and Basic Sciences        52              14%
                  College of Information Technology              24              6%
                  College of Mass Communication and Humanities   42              11%
                  College of Dentistry                           33              9%
                  College of Engineering                         93              25%
                  College of Law                                 32              8%
                  College of Pharmacy and Health Sciences        40              11%
                  Total                                          377             100%
Classification    Freshman/first year                            169             45%
                  Sophomore                                      99              26%
                  Junior                                         50              13%
                  Senior                                         48              13%
                  Graduate students                              11              3%
                  Total                                          377             100%

Table 3. Demographic information of the participants

3.2. The Instrument of the Study

Most e-learning instructional designers and researchers employ well-known, validated questionnaires; such questionnaires, or variations of them, have been used in several e-learning studies (Zaharias & Poylymenakou, 2009; Zaharias, 2009; Zaharias & Koutsabasis, 2012; Koohang, 2004; Qureshi & Irfan, 2009; Kiget et al., 2014). The e‑learning usability evaluation questionnaire developed by Zaharias (2009) was used as the main instrument in this study, with a five-point Likert scale (strongly agree = 5, agree = 4, neutral = 3, disagree = 2, strongly disagree = 1) employed to record student responses. This tool can be considered the most suitable instrument for measuring the usability of LMS courses, because it includes many factors related to LMS courses, taking into account cognitive and affective factors that may affect e-learning usability. In this regard, Sandoval (2015: page 148) states that “The E‑learning Usability Evaluation Questionnaire created by Zaharias (2009) is the closest instrument found to measure usability, including pedagogical elements and students’ motivation to learn.”

In this study, a questionnaire was administered to students at the colleges of Ajman University in the UAE, with participants obtained by random sampling from each of the 8 colleges. To obtain the data required to achieve the goal of the study, 400 questionnaires were distributed to students during the first month of the second semester of the 2017/2018 academic year, of which 377 were returned. The questionnaire consisted of two parts: the first elicited the students’ background information, while the second consisted of 45 items addressing eight criteria, on which participants were asked to rate the usability of the e-learning courses: (A) Content, (B) Learning and Support, (C) Visual Design, (D) Navigation, (E) Accessibility, (F) Interactivity, (G) Self-Assessment and Learnability and (H) Motivation to Learn. The researchers added together the percentages for strongly agree and agree, and this total is referred to as ‘positive responses.’

In order to combine the advantages of different empirical approaches used to evaluate e-learning usability, another instrument was used, namely, the e-learning usability evaluation interview form. This form was used to collect qualitative research data from the faculty members. As a qualitative data collection tool, the interview has the advantages of revealing participants’ perceptions, views and experiences, as well as their feedback, suggestions and recommendations.

3.3. Validity of the Data Collection Instrument

The validity of the questionnaire was confirmed through face validity, after obtaining permission from the Dean of Graduate Studies and Research, by checking the validity of the content. A group of arbitrators, experts in the fields of education technology and psychology, were asked to express their opinions on the appropriateness of the items for achieving the objectives of the study and on the adequacy of the tools in terms of the number of items, their comprehensiveness and the diversity of their content.

3.4. Reliability of the Data Collection Instrument

The researchers verified the reliability of the questionnaire by conducting a pilot study involving 20 students who did not take part in the main study, using the Cronbach’s alpha coefficient method: the Cronbach’s alpha coefficient was calculated for the entire questionnaire using SPSS, yielding a value of 0.93, which indicates a suitable level of internal consistency (see Table 4).

Criteria                               No. of Items   Cronbach’s Alpha
(A) Content                            6              0.924
(B) Learning and Support               7              0.920
(C) Visual Design                      4              0.922
(D) Navigation                         5              0.918
(E) Accessibility                      3              0.925
(F) Interactivity                      5              0.919
(G) Self-Assessment and Learnability   5              0.923
(H) Motivation to Learn                10             0.924
All questionnaire criteria             45             0.93

Table 4. Cronbach’s alpha coefficients of reliability for the questionnaire on eight criteria
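Cronbach’s alpha can also be computed directly from raw item responses. The following is a minimal sketch of the formula (not the SPSS procedure used in the study), run on small illustrative Likert data rather than the study’s actual responses:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of items, each a list of respondents' scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(per-respondent totals))
    """
    k = len(item_scores)
    sum_item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # total per respondent
    return k / (k - 1) * (1 - sum_item_vars / pvariance(totals))

# Illustrative (hypothetical) responses: 3 items x 5 respondents on a 1-5 scale
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 3, 4, 1],
]
print(round(cronbach_alpha(items), 3))  # → 0.926
```

Perfectly consistent items yield alpha = 1; values around 0.9, as in Table 4, indicate strong internal consistency.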

3.5. Data Analysis Measures

A five-point Likert scale was used in this study as follows: strongly agree (5), agree (4), neutral (3), disagree (2) and strongly disagree (1). Responses were then categorized into five equal ranges using the following equation: range of the category = (maximum value − minimum value) ÷ number of alternatives = (5 − 1) ÷ 5 = 0.80. The resulting score intervals are shown in Table 5 below.

Options (Description)   Scores   Score Intervals
Strongly agree          5        4.21–5.00
Agree                   4        3.41–4.20
Neutral                 3        2.61–3.40
Disagree                2        1.81–2.60
Strongly disagree       1        1.00–1.80

Table 5. The evaluation of scale data based on the scale options and score intervals
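The interval scheme in Table 5 amounts to a simple lookup. A minimal sketch (the function name is a hypothetical helper, not part of the study’s tooling) of mapping a mean score to its descriptive level:

```python
def likert_level(mean_score):
    """Map a 1-5 mean score to its level using the 0.80-wide intervals of Table 5."""
    if not 1.0 <= mean_score <= 5.0:
        raise ValueError("mean score must lie in the 1-5 Likert range")
    for upper, label in [(1.80, "Strongly disagree"), (2.60, "Disagree"),
                         (3.40, "Neutral"), (4.20, "Agree"), (5.00, "Strongly agree")]:
        if mean_score <= upper:
            return label

print(likert_level(3.50))  # overall mean in Table 6 → "Agree"
print(likert_level(3.11))  # lowest item mean (Q26) → "Neutral"
```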

3.6. Statistical Treatments

For data analysis, the researchers used the SPSS software package to conduct descriptive analyses (number, percentage, mean and standard deviation), in addition to the independent-samples t-test, one-way ANOVA and the Scheffe test.

4. Results & Discussion

4.1. Findings Related to RQ1 and Discussion

What are the students’ current views on the usability of the e-learning courses at Ajman University? To answer the first research question, mean scores and standard deviations for the students’ responses to each of the questionnaire items 1−45 were calculated, as shown in Table 6. The mean scores and standard deviations of the eight criteria were calculated as shown in Table 7 and Figure 2.

Criteria                           Item   Mean   S. Deviation   Description
Content                            Q1     3.58   1.14           Agree
                                   Q2     3.69   1.04           Agree
                                   Q3     3.52   1.09           Agree
                                   Q4     3.51   1.13           Agree
                                   Q5     3.62   1.00           Agree
                                   Q6     3.62   1.05           Agree
Learning and Support               Q7     3.34   1.16           Neutral
                                   Q8     3.47   1.08           Agree
                                   Q9     3.54   1.05           Agree
                                   Q10    3.45   1.13           Agree
                                   Q11    3.47   1.08           Agree
                                   Q12    3.41   1.08           Agree
                                   Q13    3.49   1.16           Agree
Visual Design                      Q14    3.42   1.15           Agree
                                   Q15    3.71   1.02           Agree
                                   Q16    3.66   0.99           Agree
                                   Q17    3.65   1.05           Agree
Navigation                         Q18    3.56   1.04           Agree
                                   Q19    3.49   1.02           Agree
                                   Q20    3.53   1.02           Agree
                                   Q21    3.50   1.11           Agree
                                   Q22    3.45   1.10           Agree
Accessibility                      Q23    3.43   1.19           Agree
                                   Q24    3.54   1.09           Agree
                                   Q25    3.54   1.09           Agree
Interactivity                      Q26    3.11   1.26           Neutral
                                   Q27    3.37   1.12           Agree
                                   Q28    3.36   1.11           Agree
                                   Q29    3.55   1.10           Agree
                                   Q30    3.46   1.12           Agree
Self-Assessment and Learnability   Q31    3.47   1.08           Agree
                                   Q32    3.59   1.01           Agree
                                   Q33    3.57   0.98           Agree
                                   Q34    3.42   1.04           Agree
                                   Q35    3.51   1.09           Agree
Motivation to Learn                Q36    3.32   1.19           Neutral
                                   Q37    3.42   1.10           Agree
                                   Q38    3.47   1.09           Agree
                                   Q39    3.47   1.04           Agree
                                   Q40    3.59   1.03           Agree
                                   Q41    3.43   1.11           Agree
                                   Q42    3.47   1.05           Agree
                                   Q43    3.51   1.07           Agree
                                   Q44    3.52   1.09           Agree
                                   Q45    3.54   1.12           Agree
Overall mean for all question items       3.50   1.09           Agree

Table 6. Descriptive statistics for the students’ responses to the items about the usability of the e-learning courses at Ajman University

As described above, the questionnaire comprised 8 criteria and 45 questions. The results reported in Table 6 indicate that the overall mean (3.50) and standard deviation (1.09) of all items related to the usability of e-learning courses at Ajman University were at the agree level from the perspective of students, which means that the results are positive for using them at the university. It is also evident from Table 6 that the students’ responses to Q2 (“Vocabulary, terminology and concepts used are clear and appropriate for the learners”) had the highest mean (3.69), at an agree level. The lowest mean (3.11) was obtained for Q26 (“The courses use games, simulations, role-playing activities, and case studies to gain the attention and maintain the motivation of learners”), indicating a neutral level; a neutral level was also indicated for Q36. All other questions scored at an agree level.

Criteria                           N     Mean   Std. Deviation
Content                            377   3.59   0.84
Learning and Support               377   3.45   0.84
Visual Design                      377   3.61   0.88
Navigation                         377   3.50   0.85
Accessibility                      377   3.50   0.97
Interactivity                      377   3.37   0.91
Self-Assessment and Learnability   377   3.51   0.84
Motivation to Learn                377   3.48   0.85
Total mean for all criteria              3.50   0.85

Table 7. Mean and standard deviations of the eight criteria regarding the usability of the e-learning courses

 

Figure 2. Mean of the eight criteria regarding the usability of the e-learning courses at Ajman University

In addition, the results displayed in Table 7 and Figure 2 indicate that the overall mean (3.50) and standard deviation (0.85) of the 8 criteria for the usability of e-learning courses were at the agree level from the perspective of students. It is also evident from Table 7 and Figure 2 that the students’ responses to the Visual Design criterion had the highest mean (3.61), at an agree level, while the lowest mean (3.37) was obtained for the Interactivity criterion.

4.2. Findings Related to RQ2 and Discussion

Does the attitude toward the usability of the e-learning courses at Ajman University vary according to the students’ gender, college or classification?

To answer the second research question, the researchers calculated mean scores and standard deviations, then carried out an independent-samples t-test and a one-way ANOVA test to assess the significance of the differences between the means, followed by Scheffe’s test for post-hoc comparisons to locate the source of any differences. The results are detailed in the following section.

4.2.1 Gender

The researchers used an independent-samples t-test to assess the significance of the differences in the mean usability ratings of the e-learning courses at Ajman University, from the perspective of the students, according to gender (see Table 8).

Gender   N     Mean     Std. Deviation   T. Value   Sig. (tailed)   Sig. level
Male     176   3.6221   0.68614          3.081      0.002*          Significant
Female   201   3.3967   0.72758

*Statistically significant at α ≤ 0.05

Table 8. Mean and SD of the students’ responses, according to gender

The results in Table 8 indicate that the computed value for t was 3.081, which is greater than the critical t value. This means there are significant differences at a significance level of 0.002, which is less than the required level of statistical significance (0.05), between the mean values of males and females, with males scoring higher than females.
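The t value in Table 8 can be reproduced from the reported summary statistics alone. A minimal sketch of the pooled-variance independent-samples t computation (the study itself ran the test in SPSS):

```python
from math import sqrt

# Summary statistics as reported in Table 8
m_male, s_male, n_male = 3.6221, 0.68614, 176
m_female, s_female, n_female = 3.3967, 0.72758, 201

# Pooled variance across the two groups, then the t statistic
sp2 = ((n_male - 1) * s_male**2 + (n_female - 1) * s_female**2) / (n_male + n_female - 2)
t = (m_male - m_female) / sqrt(sp2 * (1 / n_male + 1 / n_female))
print(round(t, 2))  # ≈ 3.08, in line with the reported t of 3.081
```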

4.2.2 College

Table 9 shows the results for the one-way ANOVA test to analyze the students’ responses according to college.

It is clear from Table 9 that there are statistically significant differences in the students’ perspectives according to the college variable, at a level of 0.00, which is less than the required level of statistical significance (0.05). To determine the source of these differences, a Scheffe test was conducted; the comparisons are reported in Table 10.

 

 

          Source           Sum of Squares   df    Mean Square   F        Sig. (tailed)   Sig. level
College   Between Groups   31.489           7     4.498         10.276   .000            Significant
          Within Groups    161.540          369   .438
          Total            193.029          376

*Statistically significant at α ≤ 0.05

Table 9. One-way ANOVA of students’ responses, according to college
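The F ratio in Table 9 is the between-groups mean square divided by the within-groups mean square. A minimal sketch of the one-way ANOVA F computation, run on small illustrative groups (not the study’s data):

```python
def one_way_anova_f(groups):
    """F = (SS_between / df_between) / (SS_within / df_within)."""
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between, df_within = len(groups) - 1, n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Three illustrative groups of ratings
print(round(one_way_anova_f([[1, 2, 3], [2, 3, 4], [7, 8, 9]]), 2))  # → 31.0
```

As a consistency check on Table 9 itself, the reported mean squares give F ≈ 4.498 / 0.438 ≈ 10.27, in line with the reported 10.276.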

(I) College                         (J) College                         Mean Difference (I-J)   Sig.
Business Administration             Education and Basic Science         .54035*                 .010
                                    Information Technology              .22559                  .959
                                    Mass Communication and Humanities   .19253                  .953
                                    Dentistry                           .10402                  .999
                                    Engineering                         .46465*                 .013
                                    Law                                 1.06317*                .000
                                    Pharmacy and Health Sciences        .17361                  .976
Education and Basic Science         Business Administration             -.54035*                .010
                                    Information Technology              -.31476                 .811
                                    Mass Communication and Humanities   -.34782                 .493
                                    Dentistry                           -.43633                 .272
                                    Engineering                         -.07570                 1.000
                                    Law                                 .52283                  .093
                                    Pharmacy and Health Sciences        -.36674                 .436
Information Technology              Business Administration             -.22559                 .959
                                    Education and Basic Science         .31476                  .811
                                    Mass Communication and Humanities   -.03306                 1.000
                                    Dentistry                           -.12157                 1.000
                                    Engineering                         .23906                  .927
                                    Law                                 .83759*                 .003
                                    Pharmacy and Health Sciences        -.05198                 1.000
Mass Communication and Humanities   Business Administration             -.19253                 .953
                                    Education and Basic Science         .34782                  .493
                                    Information Technology              .03306                  1.000
                                    Dentistry                           -.08851                 1.000
                                    Engineering                         .27212                  .673
                                    Law                                 .87065*                 .000
                                    Pharmacy and Health Sciences        -.01892                 1.000
Dentistry                           Business Administration             -.10402                 .999
                                    Education and Basic Science         .43633                  .272
                                    Information Technology              .12157                  1.000
                                    Mass Communication and Humanities   .08851                  1.000
                                    Engineering                         .36063                  .407
                                    Law                                 .95916*                 .000
                                    Pharmacy and Health Sciences        .06959                  1.000
Engineering                         Business Administration             -.46465*                .013
                                    Education and Basic Science         .07570                  1.000
                                    Information Technology              -.23906                 .927
                                    Mass Communication and Humanities   -.27212                 .673
                                    Dentistry                           -.36063                 .407
                                    Law                                 .59853*                 .008
                                    Pharmacy and Health Sciences        -.29104                 .610
Law                                 Business Administration             -1.06317*               .000
                                    Education and Basic Science         -.52283                 .093
                                    Information Technology              -.83759*                .003
                                    Mass Communication and Humanities   -.87065*                .000
                                    Dentistry                           -.95916*                .000
                                    Engineering                         -.59853*                .008
                                    Pharmacy and Health Sciences        -.88957*                .000
Pharmacy and Health Sciences        Business Administration             -.17361                 .976
                                    Education and Basic Science         .36674                  .436
                                    Information Technology              .05198                  1.000
                                    Mass Communication and Humanities   .01892                  1.000
                                    Dentistry                           -.06959                 1.000
                                    Engineering                         .29104                  .610
                                    Law                                 .88957*                 .000

*Statistically significant at α ≤ 0.05

Table 10. Scheffe test results identifying the source of the differences in the students’ responses, according to the college variable

It is clear from Table 10 that the results reported confirm that the source of the differences from the perspective of students according to the college variable was in favor of the Business Administration College students.

4.2.3 Classification

Table 11 shows the results of the one-way ANOVA test of the students’ responses, according to the variable of classification.

It is clear from Table 11 that there are statistically significant differences in the students’ perspectives according to the classification variable, at a level of 0.00, which is less than the required level of statistical significance (0.05). To determine the source of these differences, a Scheffe test was conducted; the comparisons are reported in Table 12.

 

 

| Experience | Sum of Squares | df | Mean Square | F | Sig. (tailed) | Sig. level |
|---|---|---|---|---|---|---|
| Between Groups | 10.769 | 4 | 2.692 | 5.495 | .000 | Significant |
| Within Groups | 182.260 | 372 | .490 | | | |
| Total | 193.029 | 376 | | | | |

*Statistically significant at α ≤ 0.05

Table 11. One-way ANOVA of students' responses, according to classification
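For readers who wish to replicate this kind of analysis, a one-way ANOVA such as the one reported in Table 11 can be run in a few lines. The sketch below uses small synthetic score lists, not the study's data, purely to illustrate the procedure:

```python
# Sketch: one-way ANOVA across five classification groups.
# The score lists are synthetic illustrations, not the study's data.
from scipy import stats

freshman  = [4.1, 3.9, 4.3, 4.0, 4.2]
sophomore = [3.5, 3.6, 3.4, 3.7, 3.3]
junior    = [3.8, 3.9, 3.7, 3.6, 4.0]
senior    = [3.9, 4.0, 3.8, 4.1, 3.7]
graduate  = [3.4, 3.6, 3.5, 3.3, 3.7]

# f_oneway returns the F statistic and its p-value
f_stat, p_value = stats.f_oneway(freshman, sophomore, junior, senior, graduate)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```

A p-value below 0.05, as in Table 11, indicates that at least one group mean differs, which is what motivates post-hoc pairwise comparisons such as the Scheffe test.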

| (I) Classification | (J) Classification | Mean Difference (I-J) | Sig. |
|---|---|---|---|
| Freshman/First year | Sophomore | .16120 | .011 |
| | Junior | .48410* | .999 |
| | Senior | .19209 | .739 |
| | Graduate Student | .53255 | .574 |
| Sophomore | Freshman/First year | -.32290* | .011 |
| | Junior | -.29201 | .218 |
| | Senior | -.48410* | .004 |
| | Graduate Student | .04845 | 1.000 |
| Junior | Freshman/First year | -.03089 | .999 |
| | Sophomore | .29201 | .218 |
| | Senior | -.19209 | .764 |
| | Graduate Student | .34046 | .711 |
| Senior | Freshman/First year | .32290* | .739 |
| | Sophomore | .03089 | .004 |
| | Junior | -.16120 | .764 |
| | Graduate Student | .37135 | .271 |
| Graduate Student | Freshman/First year | -.37135 | .574 |
| | Sophomore | -.04845 | 1.000 |
| | Junior | -.34046 | .711 |
| | Senior | -.53255 | .271 |

*Statistically significant at α ≤ 0.05

Table 12. The Scheffe test results to identify the source of the differences in the students' responses, according to the classification variable

The results reported in Table 12 confirm that the source of the differences from the perspective of students, according to the classification variable, was in favor of the freshman/first year students.
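The Scheffe statistic behind the pairwise comparisons in Tables 10 and 12 is straightforward to compute: the squared difference of two group means is scaled by the pooled within-group variance and referred to the overall F distribution. The sketch below, with synthetic numbers rather than the study's responses, illustrates one such comparison:

```python
# Sketch of a Scheffe pairwise comparison (synthetic numbers, not the study's data).
import statistics
from scipy import stats

def scheffe_p(groups, i, j):
    """Scheffe p-value for comparing groups i and j in a one-way design."""
    k = len(groups)                          # number of groups
    n_total = sum(len(g) for g in groups)
    # Pooled within-group mean square (the ANOVA's MS_within)
    ss_within = sum((x - statistics.fmean(g)) ** 2 for g in groups for x in g)
    ms_within = ss_within / (n_total - k)
    mi, mj = statistics.fmean(groups[i]), statistics.fmean(groups[j])
    ni, nj = len(groups[i]), len(groups[j])
    # Squared mean difference over its squared standard error, divided by
    # (k - 1) so it can be referred to the overall F(k-1, N-k) distribution.
    f_val = (mi - mj) ** 2 / (ms_within * (1 / ni + 1 / nj)) / (k - 1)
    return stats.f.sf(f_val, k - 1, n_total - k)

# Three illustrative groups: the first and second differ, the first and third do not.
groups = [[4.0, 4.1, 4.2], [3.0, 3.1, 2.9], [4.0, 4.0, 4.1]]
print(scheffe_p(groups, 0, 1))  # small p: significant pairwise difference
print(scheffe_p(groups, 0, 2))  # large p: no significant difference
```

Because the Scheffe criterion uses the overall F distribution, it controls the family-wise error rate across all possible contrasts, which is why it is a conservative choice for post-hoc comparisons of this kind.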

4.3. Findings Related to RQ3 and Discussion

What are the faculty members’ current views about the usability of the e-learning courses at Ajman University?

4.3.1. Findings from the Interviews

The interview form contained six questions about the usability of the Moodle for Ajman University users. The goal was to determine whether the Moodle and its components are user-friendly: easy to use, supportive, and motivating for the learners for whom the system is designed.

The data collected from semi-structured interviews with faculty members from all colleges at Ajman University revealed that most staff members are satisfied with the Moodle system and its strengths and features, such as uploading course materials, weekly lectures, assignments and quizzes. In addition, they commented on the continuous communication and interaction between instructors and students, instructor announcements and continuous feedback on student work. However, a number of faculty members mentioned shortcomings and weaknesses in the Moodle system and identified a few gaps that need to be addressed in order to improve its features for better usability. Their problems and suggestions are the following:

Most of these critics agreed that the system's storage capacity is insufficient and needs to be increased in order to achieve more flexible and effective usage; a larger capacity would allow them to upload videos and larger data files. In addition, they suggested the ability to record lectures, upload them as videos, and make them accessible to all students at any time.

One participant said: “the process should be easier and access should be more flexible. The quiz processes should be easier to use by the lecturer and students.” A few interviewees suggested that there is a need to improve tools and methods of interacting with students. As one interviewee said, “it needs more privacy in the process of interaction with students”.

A few participants recommended not linking students’ use of the Moodle to the instructor’s assessment; they think that when the assessment is imposed on the students, their answers will not be objective. Two interviewees recommended designing the Moodle as a mobile app for easier access: “I suggest introducing the Moodle as a smartphone application” and “If you can, make it as an application on smart phones. Access will be much easier and, in particular, for all students.”

5. Conclusions

The survey revealed the attitude of AU students towards e-learning courses at Ajman University. The most important finding that emerged from the survey was the generally positive opinion of the usability of AU e-learning courses.

An important finding of the survey was that the degree of usability of AU e-learning courses was positive from the students’ perspective, but varied by gender, in favor of the female students, and by college type, in favor of the Business Administration College. Moreover, it also varied according to classification, in favor of first year/freshman students. Furthermore, the interviews with faculty members from all colleges at Ajman University revealed that most staff members are satisfied with the Moodle system.

The first-year respondents express more conservative attitudes towards the usability of e-learning courses. This may be attributed to the level of knowledge that each group has about using Moodle and e-learning. Perhaps senior respondents take firm positions because they have better knowledge. The freshman respondents, on the other hand, lack experience with e-learning and may be less well-informed about e‑learning and CMS features, and therefore they hesitate to express a firm opinion.

The findings of this study are consistent with the studies of other scholars (Althobaiti & Mayhew, 2016; Alturki et al., 2016; Davids, Chikte & Halperin, 2014; Muries & Masele, 2017; Sobodic, Balaban & Kermek, 2018; Zaharias & Koutsabasis, 2012; Thuseethan et al., 2014; Green et al., 2012; Jamaludin & Funn, 2007). The results of these studies confirmed the positive application of the LMS system and the efficiency and usability of e-learning courses by students and staff members. However, these findings contradicted those obtained by other researchers (Rosato et al., 2007; Alghamdi & Bayaga, 2016; Blecken & Marx, 2010), which show some deficiencies in the application and use of LMS and courses.

6. Recommendations

There is a great need for more training for freshman students on how to use the Moodle, as well as advanced training sessions for instructors to improve their skills in using the Moodle more effectively and utilizing all its features. Adding more features would make the system more effective and easier for all students to interact with.

Declaration of Conflicting Interests

The authors declare that they have no competing interests.

Funding

This work was supported by the Deanship of Graduate Studies and a 2017-2018 research grant (ID Number: 2017-A-ED-05) at Ajman University.

References

Alghamdi, S.R., & Bayaga, A. (2016). Use and attitude towards Learning Management Systems (LMS) in Saudi Arabian universities. Eurasia Journal of Mathematics, Science & Technology Education, 12(9).

Althobaiti, M., & Mayhew, P. (2016). Assessing the Usability of Learning Management System: User Experience Study. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering (LNICST): E-Learning, E-Education, and Online Training (9-18). https://doi.org/10.1007/978-3-319-28883-3_2

Alturki, U., Aldraiweesh, A., & Kinshuck. (2016). Evaluating the Usability and Accessibility of LMS “Blackboard” at King Saud University. Contemporary Issues in Education Research (CIER), 9, 33-44. https://doi.org/10.19030/cier.v9i1.9548

Aziz, N.S., Kamaludin, A., & Sulaiman, N. (2013). Assessing web site usability measurement. IJRET: International Journal of Research in Engineering and Technology, 2(9), 386-392. https://doi.org/10.15623/ijret.2013.0209058

Babiker, M.E. (2014). Challenges and Future of E-Learning in the Arab World. In INTED2014 Proceedings (pp. 5156-5165). IATED.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice- Hall, Inc.

Blecken, A., & Marx, W. (2010). Usability Evaluation of a Learning Management System. Proceedings of the 43rd Hawaii International Conference on System Sciences (1-9) [online]. Available at: https://www.computer.org/csdl/proceedings/hicss/2010/3869/00/01-01-11.pdf (Accessed: June 2018). https://doi.org/10.1109/HICSS.2010.422

Davids, M.R., Chikte, U.M., & Halperin, M.L. (2014). Effect of improving the usability of an e-learning resource: a randomized trial. Advances in Physiology Education, 38(2), 155-160. https://doi.org/10.1152/advan.00119.2013

Davis, F., Bagozzi, R., &  Warshaw, R. (1989). User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. [pdf] Management Science, 35, 982-1003 [online]. Available at: https://pdfs.semanticscholar.org/ba06/44aa7569f33194090ade9f8f91fa51968b18.pdf (Accessed: March 2018). https://doi.org/10.1287/mnsc.35.8.982

Dick, W., Carey, L., & Carey, J. (2005). The systematic design of instruction. (6th ed.). Boston: Pearson/Allyn and Bacon.

Eason, K.D. (1984). Towards the experimental study of usability. Behaviour & Information Technology, 3(2), 133-143. https://doi.org/10.1080/01449298408901744

Fishbein, M., & Ajzen, I. (1975). Belief, attitude, and behavior: An introduction to theory and research. Reading, Mass.: Addison Wessley.

Granić, A., & Ćukušić, M. (2011). Usability testing and expert inspections complemented by educational evaluation: A case study for an e-Learning platform. Educational Technology & Society, 14(2), 107-123.

Green, L., Inan, F., & Denton, B. (2012). Examination of Factors Impacting Student Satisfaction with a New Learning Management System. Turkish Online Journal of Distance Education, 13(3), 189-197.

International Organization for Standardization (1998). ISO 9241-11:1998 Ergonomic requirements for office work with visual display terminals (VDTs) -- Part 11: Guidance on usability [online]. Available at: https://www.iso.org/obp/ui/#iso:std:iso:9241:-11:ed-1:v1:en (Accessed: March 2018).

ISO (1998). ISO 9241: Guidance on Usability Standards [online]. Available at: http://www.iso.org/iso/iso_catalogue/catalogueics/catalogue_detail_ics.htm?csnumber=16883&ICS1=13&ICS2=180&ICS3

ISO/IEC IS 9126-1 (2001). Software Engineering - Product Quality – Part 1: Quality Model. International Organization for Standardization. Geneva, Switzerland.

Jamaludin, R., & Funn, P.L. (2007). Users’ Current Views about Applied E-learning Courseware Usability: A Case Study at Universiti Sains Malaysia. Educational Technology Association International Conference. Johor Bahru, 1-2 November 2007. Certificate in Training Technology.

Kiget, N.K., Wanyembi, G., & Peters, A.I. (2014). Evaluating usability of e-learning systems in universities. International Journal of Advanced Computer Science and Applications, 5(8), 97-102. https://doi.org/10.14569/IJACSA.2014.050815

Koohang, A. (2004). A Study of Users’ Perceptions Toward E-Learning Courseware Usability. International Journal on E-Learning (e-journal), 3(2), 10-17. Available at: Norfolk, VA: Association for the Advancement of Computing in Education (AACE) website. https://www.learntechlib.org/p/12792 (Accessed: April 2018).

McCall, J.A., Richards, P.K., & Walters, G.F. (1977). Factors in Software Quality. Volume-III. Preliminary Handbook on Software Quality for an Acquisition Manager. Available at: http://www.dtic.mil/dtic/tr/fulltext/u2/a049055.pdf (Accessed: June 2018). https://doi.org/10.21236/ADA049055

Muries, B., & Masele, J. (2017) International Journal of Education and Development using Information and Communication Technology (IJEDICT), 13(1), 123-141.

Nielsen, J. (2012). Usability 101: Introduction to Usability [online]. Available at: http://www.nngroup.com/articles/usability-101-introduction-to-usability (Accessed: April 2018).

Nielsen, J., & Landauer, T.K. (1993, May). A mathematical model of the finding of usability problems. In Proceedings of the INTERACT'93 and CHI'93 conference on Human factors in computing systems (pp. 206-213). ACM.

Nyang’or, J.O., De Villiers, M.R., & Ssemugabi, S. (2013). A framework for usability evaluation of an offline e-learning tutorial and its application in usability testing. In Jan Herrington et al. (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications (1097-1105). Chesapeake, VA: AACE.

Qureshi, K.M., & Irfan, M. (2009). Usability evaluation of e-learning applications, a case study of It’s Learning from a student’s perspective. M.Sc. Blekinge Institute of Technology. Available at: https://www.diva-portal.org/smash/get/diva2:830757/FULLTEXT01.pdf (Accessed: April 2018).

Rogers, E.M. (1962). Diffusion of Innovations. New York, NY: Free Press.

Rosato, J., Dodds, C., & Laughlin, S. (2007). Usability of course management systems by students. Dept. Computer Information Systems/Computer Science, College of Scholastica, Duluth.

Sandoval, Z.V. (2015). The development of a usability instrument for e-learning in educational settings. Issues in Information Systems, 16(3), 148-155.

Seffah, A., Donyaee, M., Kline, R.B., & Padda, H.K. (2006). Usability measurement and metrics: A consolidated model. Software Quality Journal, 14(2), 159-178. https://doi.org/10.1007/s11219-006-7600-8

Shackel, B., & Richardson, S.J. (Eds.). (1991). Human factors for informatics usability. New York, NY: Cambridge University press.

Simon, H. (1976). Administrative Behavior (3rd ed.). New York, NY: The Free Press

Sobodic, A., Balaban, I., & Kermek, D. (2018). Usability Metrics for Gamified E-learning Course: A Multilevel Approach. International Journal of Emerging Technologies in Learning (iJET), 13(5), 41-55 [Online]. Available at: https://online-journals.org/index.php/i-jet/article/view/8425/4928 (Accessed: March 2018). https://doi.org/10.3991/ijet.v13i05.8425

Thuseethan, S., Achchuthan, S., & Kuhanesan, S. (2014). Usability Evaluation of Learning Management Systems in Sri Lankan Universities. ArXiv, 1412.0197.

Venkatesh, V., Morris, M.G, Davis, G.B., & Davis, F.D. (2003). User acceptance of information technology toward a unified view. MIS Quarterly, 27(3). https://doi.org/10.2307/30036540

Vrasidas, C. (2004). Issues of Pedagogy and Design in E-learning System. In Proceedings of the 2004 ACM symposium on Applied computing (911-915). Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.458.5979&rep=rep1&type=pdf (Accessed: April 2018).

Zaharias, P. (2009). Developing a usability evaluation method for e-learning applications: Beyond functional usability. Intl. Journal of Human–Computer Interaction, 25(1), 75-98. https://doi.org/10.1080/10447310802546716

Zaharias, P., & Koutsabasis, P. (2012). Heuristic evaluation of e-learning courses: A comparative analysis of two e-learning heuristic sets. Campus Wide Information Systems, 29(1), 45-60. https://doi.org/10.1108/10650741211192046






This work is licensed under a Creative Commons Attribution 4.0 International License

Journal of Technology and Science Education, 2011-2024

Online ISSN: 2013-6374; Print ISSN: 2014-5349; DL: B-2000-2012

Publisher: OmniaScience