PROPOSALS FOR IMPROVING ASSESSMENT SYSTEMS IN HIGHER EDUCATION: AN APPROACH FROM THE MODEL 'WORKING WITH PEOPLE'

Ignacio de los Ríos-Carmenado, Susana Sastre-Merino, Consuelo Fernández Jiménez, Mª Cristina Núñez del Río, Encarnación Reyes Pozo, Noemi García Arjona

Universidad Politécnica de Madrid (Spain)

Received November 2015

Accepted February 2016

Abstract

The European Higher Education Area (EHEA) challenges university teachers to adapt their assessment systems, directing them towards continuous assessment. The integration of competence-based learning as an educational benchmark has also led to a perspective more focused on the student, with complex learning situations closer to reality. However, assessing competences entails an increase in lecturers' workload and places continuous demands on students, owing to the diversity of tests required to assess each aspect of a competence. After a period in which those changes were introduced, the Technical University of Madrid (UPM) decided to analyse its assessment systems and the ways to improve them, at both bachelor's and master's degree level. The methodology used is based on the "Working with People" (WWP) model, which, for the first time at the UPM, creates a participatory process with students and lecturers aimed at learning their opinions and feelings about the assessment process and its potential for improvement. Eight WWP workshops were held, with 32 students and 39 university teachers in total. The results indicate that students' and lecturers' perceptions of the assessment systems have many points in common, and that an improvement strategy is needed that integrates actions from all three dimensions of the model, seeking a balance in joint work among lecturers, university administrators and students. This work has been performed within the framework of the cross-cutting educational innovation project "Analysis of the UPM Degree Programmes Evaluation Procedures and Proposal for Improvements" (EVALÚA), supported by the Educational Innovation Department.

Keywords – Innovative learning, Continuous assessment, Focus group, Technical studies, Students’ voice, Teachers’ voice.

----------

 

 

1. Introduction

Assessment is a crucial task in the teaching-learning process, as it provides an insight into the extent to which students are involved and what they actually learn. The implementation of the European Higher Education Area (EHEA), launched by the Bologna Process, brought a new structure of studies as well as guidelines and regulations aimed at ensuring the quality of higher education, particularly official university studies in Spain. The goal is "adopting an understandable and comparable flexible system of degree programmes, promoting job opportunities for students and a greater international competitiveness of the European higher education system" (Real Decreto 55/2005, p. 2842). This new scenario in higher education has resulted in significant and complex changes affecting lecturers' and students' own culture (Pérez Pueyo et al., 2008). Among those changes, the new configuration of the assessment systems stands out.

Indeed, evaluation within the EHEA framework should be based on continuous assessment (Bordas Alsina & Cabrera Rodríguez, 2001) and should consider the assessment of competences, both cross-curricular and specific (Olmos Migueláñez, 2008). A competence may be defined as the "ability to functionally use the knowledge and skills in different contexts. It implies understanding, reflection and discernment, taking into account simultaneously and interactively the social dimension of the actions to be taken" (Mateo, 2007, p. 520). Thus, the lecturer's role evolves from that of a transmitter of knowledge or content into a guide for students (Pérez Pueyo et al., 2008) or a "facilitator of opportunities for growth" (Cano García, 2008, p. 7). Similarly, students will no longer be asked merely to assimilate content; they will take a much more active role in their own education.

The student's acquisition of competences as a vital part of the teaching-learning process involves a change in the way of assessing, which has also been described as competence-based or formative assessment (López-Pastor, 2009). Likewise, the concept of competence, covering the acquisition of several capacities, necessarily involves combining different kinds of assessment and different tools. Written tests are controversial because, by themselves, they can hardly assess the student's acquisition of competences (Brown & Glasner, 2003); they must be supplemented by other specific assessment tools such as the portfolio (Cole, 2000) or the rubric (Reddy & Andrade, 2010), among others. It has been shown that competence-based assessment implies an improvement in the student's teaching-learning process (Cano García, 2008), as it seeks to "develop complex tasks in real or simulated situations from reality, like the case method, and project-based or problem-based learning" (Villardón Gallego, 2006, p. 64).

However, the implementation of the improvements intended with this new assessment system has given rise to an increase in lecturers' workload. The number and variety of assessment activities to be performed to determine the student's progression have greatly risen, as has the involvement of different actors in the assessment itself: not just the lecturer but also peers or the students themselves. These new requirements, in a context in which high pass rates are positively valued, create a certain confusion and dissatisfaction among some lecturers, who are forced to change their assessment systems. Students, in turn, are not only asked to assimilate the contents, but also to take on a more active role. They also feel the stress and the workload caused by the high number of tasks and activities they have to complete, which in many cases are concentrated on certain dates.

Although students are asked to adopt a more active role, they usually do not participate in the design of such assessment systems. Greater involvement, both in the learning process and in the planning of teaching and assessment methodologies, is considered a means to improve students' motivation and participation, a key element in education. The motivation with which students face academic activities, both inside and outside the classroom, is one of the most important factors in the level of learning they will achieve (Alonso-Tapia & Pardo, 2006). A motivated student is willing to start the task earlier, focuses more on it, is more persistent in finding solutions to the difficulties that arise and, in general, invests more time and effort than unmotivated students. Logically, all this offers better chances of success in the learning process, which will also be more consolidated. Conversely, lack of motivation is a serious problem in university education, as well as in general education at all levels (Rodríguez-Largacha et al., 2014).

There is no doubt that an assessment component has to be included in the approach to any action taken, because, in the end, what really matters to students in their daily work is, in most cases, getting a good mark rather than learning (Gibbs & Simpson, 2004). Students eventually become strategists for getting the best results, and this may also be used to motivate them to apply techniques that improve their chances of learning, something that was not initially their goal. For that reason, the authors of this study have devoted their effort to obtaining the students' point of view regarding assessment and teaching techniques, as well as the lecturers' opinion, in order to gather valuable information for providing effective feedback on their subjects and significantly improving them. The evidence leading us to conclude that this is necessary is common in the experience of the authors of this research: low self-directed learning, students' little interest in seeking further information, lack of curiosity to deepen the study beyond what has been taught in class, almost no participation in the topics discussed in class, studying only when there are tests or exams, and interest only in passing the subject instead of learning.

In the particular case presented in this paper, we have been working under the cross-cutting educational innovation project "Analysis of the UPM Degree Programmes Evaluation Procedures and Proposal for Improvements" (EVALÚA). This project is aimed at reflecting on the current state of the UPM assessment systems and drawing conclusions that may help improve them. For that reason, the participatory methodology Working with People (WWP) was used to describe students' and university teachers' views on assessment. The research was conducted in the university context, and the results are intended to be used to reconsider teaching and assessment methods in relation to the subjects taught.



2. Methodology

The methodology is based on the Working with People (WWP) model. This model aims to generate structured learning dynamics in teams, connecting expert and experiential knowledge in the context of joint projects. The main goal is to incorporate values from the people who are involved and engaged, and who improve through these tasks (Cazorla, De los Ríos & Salvo, 2013). The WWP technique has been applied in several projects and, more recently, by the Educational Innovation Group 'GIE-Project' in Educational Innovation Projects (EIPs) (De los Ríos-Carmenado, Cazorla, Díaz-Puente & Yagüe, 2010; De los Ríos-Carmenado & Rodríguez, 2015) with the agents (students and lecturers) involved in teaching planning, design and assessment. Following the WWP premise that projects are developed "with" and not "for" people, WWP workshops (WWP-w) were organized to analyse and evaluate participants' opinions regarding some key questions. Through this procedure, we seek information on participants' concerns, feelings and attitudes, so that the research team's preconceptions do not affect the study, as may occur with other techniques such as questionnaires or structured interviews (Gil-Flores, 2009). The groups were selected to be homogeneous in those key features that can significantly affect the perception of assessment, and heterogeneous with respect to the diversity of the UPM reality.

Homogeneity was primarily sought by considering the two clearly distinguished groups regarding assessment that exist in our population: lecturers and students. The main objective was to facilitate the creation of groups and to generate a constructive dialogue within them. To this end, lecturers and students were separated to avoid excessively heterogeneous groups. Building groups that are too dissimilar could give rise to conflicts, as a result of opinions that can differ very widely and of differences in authority and/or the power relationship between student and teacher. Besides, taking into account the current UPM context, some aspects included in Figure 1 were also considered as factors affecting the perception of assessment: academic achievement, experience at other universities, massive or non-massive courses, and participation in EIPs. These elements served to group all the actors in workshops, seeking to represent as much diversity as possible and thus ensuring a more comprehensive approach.



Figure 1. Features of the actors participating in the WWP-workshops (WWP-w)

 


This heterogeneity was aimed at encouraging discussion within the group, exchanging points of view among participants, and exploring different standpoints. To do this, various criteria were established so that participants would represent the variety of UPM lecturers and students, while ensuring that the criteria themselves were not relevant to the perception of assessment. On the one hand, the criteria for selecting the participants in the students' groups were the following: domain of knowledge (industrial, aerospace and naval; architecture and building construction; ICT; civil works and topography; agroforestry; and the Faculty of Sports Sciences, FSS); gender; mobility and experience at other universities; time commitment (part-time or full-time); academic year (from 1st to 4th); and performance level. On the other hand, the criteria for the lecturers' groups were as follows: domain of knowledge (the same as for the students); researcher profile (the possession or not of one or more successful six-year research assessment periods); category of educational staff (professor, senior lecturer, lecturer, assistant lecturer, and others); teaching experience (3-5 years, 5-10 years, 10-15 years, over 15 years); experience at other universities; year taught (from 1st to 4th); and gender. Basically, the number of students and university teachers initially selected within each criterion was proportional to their percentage in the whole UPM (Tables 1 and 2, respectively). In relation to academic year, it was decided to leave out 1st year students and replace them with upper-year students. Although the number of students is lower in upper-year groups, their opinion is highly valued because they have more experience and knowledge of assessment procedures. In the case of teachers, the number of lecturers in both the teaching experience and academic year categories was evenly distributed among the corresponding subcategories.
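Although the paper does not detail the exact allocation procedure, the proportional selection described above can be illustrated with a short sketch. The percentages are the students' domain-of-knowledge shares from Table 1; the largest-remainder rounding rule and the group size of 12 are assumptions for the example:

```python
# Illustrative sketch (not the authors' tool): allocate workshop seats to each
# category in proportion to its share of the UPM population, using a
# largest-remainder rule so the quotas sum exactly to the group size.
from typing import Dict

def allocate_quotas(shares_pct: Dict[str, float], group_size: int) -> Dict[str, int]:
    exact = {k: group_size * pct / 100.0 for k, pct in shares_pct.items()}
    quotas = {k: int(e) for k, e in exact.items()}   # floor of each exact quota
    leftover = group_size - sum(quotas.values())     # seats still unassigned
    # Give the remaining seats to the categories with the largest fractional parts.
    for k in sorted(exact, key=lambda k: exact[k] - quotas[k], reverse=True)[:leftover]:
        quotas[k] += 1
    return quotas

# Students' domain-of-knowledge shares (% in UPM, from Table 1).
shares = {"Industrial-Aerospace-Naval": 35, "Architecture-Building": 19, "ICT": 19,
          "Civil Works-Topography": 13, "Agroforestry": 10, "FSS": 3}
print(allocate_quotas(shares, group_size=12))
# -> {'Industrial-Aerospace-Naval': 4, 'Architecture-Building': 2, 'ICT': 2,
#     'Civil Works-Topography': 2, 'Agroforestry': 1, 'FSS': 1}
# These counts match the per-domain student numbers in Table 1.
```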



| Category | Criteria | % in UPM | No. of Students in workshops |
|---|---|---|---|
| Domain of knowledge | Industrial-Aerospace-Naval | 35 | 4 |
| | Architecture and Building Construction | 19 | 2 |
| | ICT | 19 | 2 |
| | Civil Works and Topography | 13 | 2 |
| | Agroforestry | 10 | 1 |
| | Faculty of Sports Sciences | 3 | 1 |
| Gender | Female | 33 | 4 |
| | Male | 67 | 8 |
| Mobility | Other Universities | 5 | 4 |
| | Only UPM | 95 | 8 |
| Time commitment | Part-time | 8 | 1 |
| | Full-time | 92 | 11 |
| Year | 2nd | | 4 |
| | 3rd | | 4 |
| | 4th | | 4 |

Table 1. Breakdown of students for WWP workshops by features. Source: Universidad Politécnica de Madrid, 2015



| Category | Criteria | % in UPM | No. of Lecturers in WWP-w |
|---|---|---|---|
| Domain of knowledge | Industrial-Aerospace-Naval | 33 | 4 |
| | Architecture and Building Construction | 21 | 2 |
| | ICT | 17 | 2 |
| | Civil Works and Topography | 15 | 2 |
| | Agroforestry | 12 | 1 |
| | Faculty of Sports Sciences | 2 | 1 |
| Researcher profile (six-year research assessment periods) | 0 | 62 | 7 |
| | 1 | 13 | 2 |
| | >1 | 25 | 3 |
| Category of educational staff | Professor | 14 | 2 |
| | Senior Lecturer | 56 | 7 |
| | Lecturer | 18 | 2 |
| | Assistant Lecturer | 9 | 1 |
| | Others | 3 | |
| Teaching experience | 3-5 years | | 3 |
| | 5-10 years | | 3 |
| | 10-15 years | | 3 |
| | 15-20 years | | 3 |
| Experience in other universities | Yes | 33 | 4 |
| | No | 67 | 8 |
| Year | 1st | | 3 |
| | 2nd | | 3 |
| | 3rd | | 3 |
| | 4th | | 3 |
| Gender | Female | 33 | 4 |
| | Male | 67 | 8 |

Table 2. Breakdown of lecturers for WWP-w by features. Source: Universidad Politécnica de Madrid, 2015



Based on these criteria, nine different groups were scheduled: four for students and five for university teachers (Figure 1). Two workshops were held for bachelor's degree students (one for medium-high academic achievement and one for low academic achievement), and two for master's students (one for those with an international background and one for those without). With respect to university teachers, four workshops for bachelor's degree teachers were held. Two of them included teachers with massive courses and/or with many groups where coordination is needed, one for teachers with active participation in educational innovation projects and the other for those without. The other two workshops were composed of teachers with non-massive courses and/or with few groups, so that coordination is not required, likewise divided into active and non-active participation in teaching innovation projects. Additionally, there was one workshop for master's degree teachers.

Nonetheless, due to difficulties with participants' schedules, eight workshops were eventually held, merging the last two lecturers' groups. This change did not affect the representativeness of the participants, and the opinions gathered regarding assessment can be considered representative of the UPM reality. The number of participants in each group was between six and twelve, chosen from different UPM Schools and Faculties so that they had not previously met each other or the moderator. In total, 32 students and 39 university teachers participated in the process.

Data were obtained through a mixed group technique, the WWP workshop, which takes elements from Focus Groups (Krueger & Casey, 2009) and from the Empowerment Evaluation methodology (Fetterman, 2001). Each session was moderated by a member of the GIE-Project Research Group and supported by the UPM Rectoral Team. To obtain results aligned with the research goals, the project expert panel agreed to address four open questions:

  • What are the assessment systems for?

  • What is the best assessment method for you?

  • What are the weaknesses of the assessment systems?

  • How might assessment systems be improved?

In all WWP workshops, the same working methodology was applied according to the following steps for each question:

  • Presentation of the question to all members of the group, to ensure it was understood;

  • each participant freely expressed his/her opinion; all contributions were gathered literally, summarized on a blackboard and finally approved by the participants to confirm that they reflected their contributions;

  • assessment of the ideas provided: each member of the group was given five votes, in the form of stickers, to be assigned to the most relevant ideas (see the tallying sketch after this list);

  • during this process, one of the members was in charge of noting down participants' qualitative comments in order to justify the quantitative assessment awarded to each idea; and

  • finally, the conclusions reached by the group were drawn up and shared for discussion.
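As an illustration of the voting and tallying step, the following sketch shows how each participant's five sticker votes accumulate into a ranking of ideas. The idea labels and ballots are hypothetical, and Python is used purely for illustration; the original tally was done by hand on the blackboard:

```python
from collections import Counter

# Hypothetical ballots: each inner list holds one participant's five sticker
# votes, assigned to the ideas gathered on the blackboard.
ballots = [
    ["continuous assessment", "feedback", "feedback", "PBL", "real cases"],
    ["feedback", "feedback", "exam overload", "exam overload", "PBL"],
    ["real cases", "continuous assessment", "feedback", "PBL", "PBL"],
]
assert all(len(b) == 5 for b in ballots)  # five votes per participant

# Count every vote and rank the ideas by total support.
tally = Counter(vote for ballot in ballots for vote in ballot)
for idea, votes in tally.most_common():
    print(f"{idea}: {votes} votes")
```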

The information was analysed using grounded theory (Hernández Sampieri, Fernández-Collado & Baptista Lucio, 2010). The annotations for each idea listed on the blackboard were entered into Microsoft Excel, together with the votes cast for each of them. Units of meaning were identified using a constant unit: each agreed sentence written on the blackboard summarizing the idea expressed by participants. Later, the first-level categorization was carried out independently by three researchers (to ensure the dependability criterion), and the results were compared and agreed. Categories were grouped into themes (Table 5) and finally into the three complementary dimensions of the WWP model (ethical-social, technical-instrumental and contextual-institutional) (Table 6). For the quantitative analysis of the data, we calculated the number of votes attributed to each category and its weight with respect to the total votes. Moreover, those data were classified into student and teacher categories, further divided into bachelor's and master's level (Tables 3 and 4).
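The quantitative aggregation described above can be sketched as follows; the authors performed the equivalent computation in Excel, and the vote records below are hypothetical. Each category's weight is its vote count as a percentage of all votes, broken down by group and level as in Tables 3 and 4:

```python
from collections import defaultdict

# Hypothetical vote records: (category, group, level, votes).
records = [
    ("Excessive workload / ECTS mismatch", "student", "bachelor", 13),
    ("Excessive workload / ECTS mismatch", "lecturer", "bachelor", 8),
    ("Misapplied continuous assessment", "student", "master", 17),
]
total_votes = sum(votes for *_, votes in records)

# (category, group, level) -> weight as % of all votes cast.
weights = defaultdict(float)
for category, group, level, votes in records:
    weights[(category, group, level)] += 100.0 * votes / total_votes

for key, pct in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(key, f"{pct:.1f}%")
```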



3. Results and discussion

For data analysis, the categories student/teacher and educational level (bachelor/master) were eventually established. The results regarding the answers to the four questions raised in those different groups are shown below.

What are the assessment systems for?

In relation to the usefulness of assessment, both teachers and students agree (67% of workshop participants) that it is primarily used to assess student learning. Besides, 25% of participants believe that assessment systems help them develop certain competences, among them organization and planning.

From these results, it seems clear that the assessment systems fulfil one of their objectives, which involves measuring or differentiating students' learning level. However, according to the literature, assessment must also have an educational function fully integrated into the teaching-learning process. It is in this sense that the current UPM assessment systems seem to be weaker, not only because of the low percentage attributed to their contribution to education, but also because that contribution focuses on the obvious: the students' need to plan their work because they have to fulfil an examination timetable.



What is the best assessment method for you?

Regarding the most appreciated assessment systems, in general, those combining different methods and assessment tools (18%) were highly valued, followed by competence-based assessment through project-based learning (17%) and a well-planned and personalised continuous assessment (16.5%). Students highlight assessment systems based on real cases, as well as competence-based assessment through practical team work (9.5%).

No essential differences were seen between the lecturers' and students' contributions in this case either. As a first outcome regarding this question, the greater dispersion when choosing a good assessment system may be stressed. In the opinion of both teachers and students, there is no optimal assessment system, only more appropriate procedures. Some features of those systems are worth highlighting: variety, continuity and practical application to real cases. This result supports the recommendations in the literature regarding assessment methods (Morales Vallejo, 2009).



What are the weaknesses of the assessment systems?

Globally, in relation to the identified weaknesses, the five most important issues raised by students and teachers were the following: excessive workload and lack of correspondence with the allocated ECTS (12.7%); limited consistency between what is explained and what is required (12.4%); misapplication of continuous assessment (9.2%); concentration of exams and too many of them (8.4%); and too many students per group, which hampers a personalized assessment (8.1%). Table 3 shows teachers' and students' opinions about the weaknesses of the current assessment systems, disaggregated by level of studies (bachelor's and master's degree).

University teachers also point to factors related to their pedagogical training, the lack of resources, and mismatches in content coordination, with contents that sometimes overlap or are compartmentalized. They also mention a wide diversity among students with regard to their starting level, motivation, commitment and maturity.

Likewise, students miss having initial assessments, demand more information about the assessment systems and ask for compliance with the Learning Guides. They also demand a wider use of ICT, as well as the strengthening of activities in English.



 

| Detected factors | Students Bach | Students Mast | Students Total | Lecturers Bach | Lecturers Mast | Lecturers Total | Total (%) |
|---|---|---|---|---|---|---|---|
| Excessive number of classes and misallocation of ECTS | 3.8 | 2.3 | 6.1 | 2.3 | 4.3 | 6.6 | 12.7 |
| Mismatch between what is explained and what is required | 3.5 | 4.0 | 7.5 | 4.9 | 0.0 | 4.9 | 12.4 |
| Poor continuous assessment. Non-personalised, poor and late feedback | 4.9 | 2.3 | 7.2 | 2.0 | 0.0 | 2.0 | 9.2 |
| Concentration of exams and too many of them | 4.6 | 1.2 | 5.8 | 2.6 | 0.0 | 2.6 | 8.4 |
| Too many students per group, hampering personalised assessment | 2.6 | 0.9 | 3.5 | 1.2 | 3.5 | 4.6 | 8.1 |
| Lack of teachers' pedagogical training | 0.0 | 1.2 | 1.2 | 6.4 | 0.0 | 6.4 | 7.5 |
| Lack of teachers' resources. Excessive workload | 0.0 | 0.0 | 0.0 | 4.3 | 1.2 | 5.5 | 5.5 |
| Mismatches between educational contents and assessment criteria | 0.3 | 2.0 | 2.3 | 2.3 | 0.6 | 2.9 | 5.2 |
| Students demotivated for learning. Immaturity | 0.0 | 0.0 | 0.0 | 3.2 | 1.4 | 4.6 | 4.6 |
| Poorly updated exams | 2.0 | 0.3 | 2.3 | 1.7 | 0.0 | 1.7 | 4.0 |
| Teachers' attitude. Demotivated teachers | 1.2 | 0.0 | 1.2 | 1.4 | 0.9 | 2.3 | 3.5 |
| Heterogeneous students' starting level | 0.0 | 0.0 | 0.0 | 0.6 | 2.3 | 2.9 | 2.9 |
| Poor competence assessment (teamwork) | 0.3 | 0.6 | 0.9 | 0.9 | 0.0 | 0.9 | 1.7 |
| Compartmentalisation | 0.0 | 0.0 | 0.0 | 0.0 | 1.4 | 1.4 | 1.4 |
| Overlapping educational contents | 0.3 | 0.0 | 0.3 | 1.2 | 0.0 | 1.2 | 1.4 |
| Badly thought-out efficiency rates. Demotivated teachers | 0.0 | 0.0 | 0.0 | 1.4 | 0.0 | 1.4 | 1.4 |
| Results do not help learning. Memory-oriented | 0.0 | 1.2 | 1.2 | 0.3 | 0.0 | 0.3 | 1.4 |
| Poor information about the assessment system | 0.0 | 1.2 | 1.2 | 0.0 | 0.0 | 0.0 | 1.2 |
| Complex regulations | 0.0 | 0.0 | 0.0 | 1.2 | 0.0 | 1.2 | 1.2 |
| Lack of initial assessment | 0.0 | 0.9 | 0.9 | 0.3 | 0.0 | 0.3 | 1.2 |
| Disparity between assessment tests and results | 0.3 | 0.0 | 0.3 | 0.9 | 0.0 | 0.9 | 1.2 |
| Lack of English exams and tasks | 0.0 | 0.9 | 0.9 | 0.0 | 0.0 | 0.0 | 0.9 |
| ICT poorly used | 0.9 | 0.0 | 0.9 | 0.0 | 0.0 | 0.0 | 0.9 |
| Excessive length of exams | 0.9 | 0.0 | 0.9 | 0.0 | 0.0 | 0.0 | 0.9 |
| Non-observance of the Learning Guide | 0.6 | 0.0 | 0.6 | 0.0 | 0.0 | 0.0 | 0.6 |
| Low starting level | 0.0 | 0.0 | 0.0 | 0.0 | 0.3 | 0.3 | 0.3 |
| Lack of assessment criteria | 0.0 | 0.3 | 0.3 | 0.0 | 0.0 | 0.0 | 0.3 |
| Some lecturers do not allow being assessed | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Exams where cheating is allowed | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| General Total | 26.0 | 19.1 | 45.1 | 39.0 | 15.9 | 54.9 | 100.0 |

*Bach: Bachelor's degree; Mast: Master's degree. Values are percentages of total votes.

Table 3. Perceived weaknesses of the current assessment systems, disaggregated by groups of teachers and students and by level of studies (bachelor's and master's degree)



How could assessment systems be improved?

As for improving the assessment systems, students mainly recommend that teachers take their views into account (10.4%), design methods which enhance continuous and personalized assessment (6.9%), and use competence-based assessment through project-based learning (PBL) techniques, including practical work presentations (6.1%). Although university teachers also agree with the relevance of the above proposals, most of them mentioned priority actions such as a reduction in the number of students per group and a balanced students/teacher ratio (11.4%), assessment methods better aligned with learning and course content (9.4%), and teaching incentives (7.4%). Table 4 summarizes the main outcomes in relation to those proposals for improvement, disaggregated by groups of teachers and students and by level of studies (bachelor's and master's degree).

The results are consistent with the role each group plays in the assessment. While students demand more attention and information regarding their learning progress through feedback, lecturers point to the need to reduce the number of students in order to be able to provide it. Although there are techniques that facilitate working with large groups, a high students/teacher ratio undeniably makes it difficult for teachers to provide such feedback. Given the importance of this practice for improving learning (Alonso-Tapia & Panadero, 2010; Nicol & Macfarlane-Dick, 2006), and given the big differences in the students/teacher ratio within the UPM, it would be appropriate to analyse in subsequent studies the influence of this parameter on the feedback received by students in order to address specific solutions.



 

 

| Proposals for improvement | Students Bach | Students Mast | Students Total | Lecturers Bach | Lecturers Mast | Lecturers Total | Total (%) |
|---|---|---|---|---|---|---|---|
| Reduction of students per group. Balancing students/teacher ratio | 3.0 | 0.0 | 3.0 | 8.4 | 3.0 | 11.4 | 14.5 |
| Feedback and continuous teaching improvement from the students' global assessment | 3.3 | 7.1 | 10.4 | 1.3 | 1.3 | 2.5 | 12.9 |
| Improving educational communication, participation and coordination in the design and implementation of the global assessment strategy | 3.6 | 1.5 | 5.1 | 5.3 | 1.0 | 6.3 | 11.4 |
| Designing assessment methods aligned with course learning and contents | 0.0 | 0.0 | 0.0 | 7.9 | 1.5 | 9.4 | 9.4 |
| Designing methods for improving continuous and personalized assessment | 2.8 | 4.1 | 6.9 | 1.8 | 2.0 | 3.8 | 10.7 |
| Designing incentives for good teachers based on results | 1.5 | 0.0 | 1.5 | 6.3 | 1.0 | 7.4 | 8.9 |
| Improving lecturers' training regarding assessment systems | 1.5 | 0.3 | 1.8 | 6.3 | 0.0 | 6.3 | 8.1 |
| Competence-based assessment. Increasing labs and academic works. Assessment from PBL with academic work presentations | 4.1 | 2.0 | 6.1 | 0.0 | 1.3 | 1.3 | 7.4 |
| Designing assessment methods with different activities, instruments and various sources of knowledge (peer assessment) | 0.0 | 0.8 | 0.8 | 0.0 | 4.3 | 4.3 | 5.1 |
| Complying with exam regulations | 2.8 | 0.0 | 2.8 | 0.0 | 0.0 | 0.0 | 2.8 |
| Suitability of correction methods and unification of assessment criteria | 2.0 | 0.0 | 2.0 | 0.5 | 0.0 | 0.5 | 2.5 |
| Designing assessment methods to foster creativity, reflection and competences | 1.5 | 0.0 | 1.5 | 0.0 | 0.8 | 0.8 | 2.3 |
| Do not take class attendance into account | 0.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 |
| Foster English language in assessment | 0.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 |
| Promoting the use of ICT in assessment | 0.8 | 0.0 | 0.8 | 0.8 | 0.0 | 0.8 | 1.5 |
| Promoting young lecturers' access | 0.0 | 0.3 | 0.3 | 0.0 | 0.0 | 0.0 | 0.3 |
| Feedback and continuous improvement from students' global assessment (particularly in the Master's degree) | 0.0 | 0.3 | 0.3 | 0.0 | 0.0 | 0.0 | 0.3 |
| General Total | 26.9 | 18.3 | 45.2 | 38.6 | 16.2 | 54.8 | 100.0 |

*Bach: Bachelor's degree; Mast: Master's degree. Values are percentages of total votes.

Table 4. Proposals for improving assessment systems, disaggregated by groups of teachers and students and by level of studies (bachelor's and master's degree)



Contributions regarding the weaknesses and the proposals for improvement for each of the subjects were grouped into three main areas, as shown in Table 5.

| Proposed grouping of areas | Weaknesses of the assessment systems (votes) | % | Proposals for improving assessment (votes) | % |
|---|---|---|---|---|
| Assessment and teaching planning | 151 | 43.6% | 136 | 34.5% |
| Development of assessment activities | 157 | 45.4% | 160 | 40.6% |
| Assessment activities results | 38 | 11.0% | 98 | 24.9% |
| General Total | 346 | 100.0% | 394 | 100.0% |

Table 5. Weaknesses and proposals for improving assessment per content area

Moreover, the contributions from the workshops were classified into the three dimensions of the WWP model applied to the field of assessment systems (Table 6).

| Dimension | Weaknesses of assessment systems | Proposals for improving assessment |
|---|---|---|
| Technical-Instrumental (Methodological) | 76.0% | 40.6% |
| Political-Contextual (Institutional) | 8.7% | 35.0% |
| Ethical-Social (Behavioural) | 15.3% | 24.4% |
| General Total | 100.0% | 100.0% |

Table 6. Weaknesses and proposals for improving assessment according to the WWP model dimensions

4. Conclusions

The results show that students' and teachers' perceptions of the assessment systems have many points in common, but also important differences. In both cases, the improvement proposals they suggest are of great interest.

There is a clear need to undertake improvements in the three dimensions identified in the WWP model. First, 76% of the weaknesses mentioned, and 40.6% of the most prominent proposals, concern the "technical" dimension of assessment. This dimension includes proposals pointing at a good design of assessment systems, instruments and methodologies able to meet the new EHEA requirements and objectives, in accordance with the strategies defined at the university. Worth highlighting among them are the design of assessment methods more aligned with learning, methodologies which improve continuous and personalized follow-up, and competence-based assessment tools and methodologies that incorporate different activities, tools and sources of knowledge (such as peer assessment), showing the need to strengthen teachers' psycho-pedagogical training.

Second, a low level of institutional weaknesses was revealed (8.7%). Nonetheless, this dimension has a wider impact, since 35% of the proposals include actions to achieve better coordination in teaching, particularly at bachelor's level. These are important activities aimed at making assessment systems ensure that the learning and competences developed at the university are adapted to society's trends and needs, and to the world of work. It was also suggested that it would be desirable to promote the recognition of teaching, currently less valued than research. Assessment is not continuous just because a document says so; university teachers need training to carry it out. From the institutional point of view, greater coherence in assessing lecturers' dedication would be advisable if teaching quality is to be improved.

Furthermore, regarding the ethical-social dimension, with a weight of 15.3% of the weaknesses, a number of proposals related to behaviour were also mentioned, as well as to the attitudes and values of the people (teachers and students) involved in the assessment process and related in the context of educational activities. These proposals (with a weight of 24.4%) are of great importance for improving key aspects such as communication, participation, academic coordination, commitment and motivation. Many of the major difficulties identified, like the challenge of competence-based assessment, demand a "new mentality" to overcome resistance to change and adapt to the EHEA. Such is the case of some university teachers with an established academic position, who are used to traditional assessment systems. Measures that may help this change of mentality include training actions and the exchange of experiences.

Finally, the whole university community recognizes the importance of creating joint processes, such as the workshops carried out for this work, because they are an excellent opportunity to reflect and advance continuous improvement, with Educational Innovation as an action strategy for all centres of the Universidad Politécnica de Madrid and for its relationships with society.

In short, this study describes the perceptions of those involved in the assessment of educational results at the UPM, which is the first step towards improving such procedures. It provides an excellent starting point for deepening and updating assessment techniques so as to achieve the highest quality in the academic process.



References

Alonso-Tapia, J., & Panadero, E. (2010). Effects of self-assessment scripts on self-regulation and learning. Infancia y Aprendizaje, 33(3), 385-397. http://dx.doi.org/10.1174/021037010792215145

Alonso-Tapia, J., & Pardo, A. (2006). Assessment of learning environment motivational quality from the point of view of secondary and high school learners. Learning and Instruction, 16(4), 295-309. http://dx.doi.org/10.1016/j.learninstruc.2006.07.002

Bordas Alsina, I., & Cabrera Rodríguez, F.A. (2001). Estrategias de evaluación de los aprendizajes basados en el proceso. Revista Española de Pedagogía, 218, 25-48.

Brown, S., & Glasner, A. (2003). Evaluar en la universidad. Problemas y nuevos enfoques. Madrid: Narcea.

Cano García, M.E. (2008). La evaluación por competencias en la educación superior. Profesorado. Revista de currículum y formación del profesorado, 12(3), 1-16.

Cazorla, A., De los Ríos, I., & Salvo, M. (2013). Working With People (WWP) in Rural Development Projects: A Proposal from Social Learning. Cuadernos de Desarrollo Rural, 10(70), 131-157.

Cole, D.J. (2000). Portfolios across the curriculum and beyond. Thousand Oaks, CA: Corwin Press.

De los Ríos-Carmenado, I., & Rodríguez, F. (2015). Promoting professional Project Management skills in Engineering Higher Education: Project-based learning (PBL) strategy. International Journal of Engineering Education, 31(1-B), 1-15.

De los Ríos-Carmenado, I., Cazorla, A., Díaz-Puente J.M., & Yagüe, J.L. (2010). Project-based learning in engineering higher education: two decades of teaching competences in real environments. Procedia – Social and Behavioral Sciences, 2(2), 1368-1378. http://dx.doi.org/10.1016/j.sbspro.2010.03.202

Fetterman, D.M. (2001). Foundations of Empowerment Evaluation. Thousand Oaks. California: Sage Publications.

Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1, 3-31.

Gil-Flores, J. (2009). La metodología de investigación mediante grupos de discusión. Enseñanza & Teaching, 10.

Hernández Sampieri, R., Fernández-Collado, C., & Baptista Lucio, P. (2010). Metodología de la investigación (5th ed.). McGraw Hill.

Krueger, R., & Casey, M. (2009). Focus groups. A practical guide for applied research (4th ed.). Thousand Oaks, CA: Sage.

López-Pastor, V.M. (2009). Evaluación formativa y compartida en Educación Superior. Propuestas, técnicas, instrumentos y experiencias. Madrid: Narcea.

Mateo, J. (2007). Interpretando la realidad, construyendo nuevas formas de conocimiento: El desarrollo competencial y su evaluación. Revista de Investigación Educativa (RIE), 25(2), 513-531.

Morales Vallejo, P. (2009). Ser profesor: una mirada al alumno (pp. 41-98). Guatemala: Universidad Rafael Landívar.

Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. http://dx.doi.org/10.1080/03075070600572090

Olmos Migueláñez, S. (2008). Evaluación Formativa y Sumativa de estudiantes universitarios: Aplicación de las Tecnologías a la Evaluación Educativa. Universidad de Salamanca.

Pérez Pueyo, Á., Tabernero Sánchez, B., López-Pastor, V.M., Ureña Ortiz, N., Ruiz Lara, E., Capllonch Bujosa et al. (2008). Evaluación formativa y compartida en la docencia universitaria y el Espacio Europeo de Educación Superior: Cuestiones clave para su puesta en práctica. Revista de Educación, 347, 435-451.

Real Decreto 55/2005, de 21 de enero, por el que se establece la estructura de las enseñanzas universitarias y se regulan los estudios universitarios oficiales de Grado.

Reddy, Y.M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448. http://dx.doi.org/10.1080/02602930902862859

Rodríguez-Largacha, M.J., García-Flores, F.M., Fernández-Sánchez, G., Fernández-Heredia, A., Millan, M.A., Martínez, J.M. et al. (2014). Improving Student Participation and Motivation in the Learning Process. Journal of Professional Issues in Engineering Education and Practice, 141(1).

Universidad Politécnica de Madrid (2015). Vicerrectorado de Estructura Organizativa y Calidad.

Villardón Gallego, L. (2006). Evaluación del aprendizaje para promover el desarrollo de las competencias. Educatio XXI, 24, 57-76.



