Feedback and feedforward: Focal points for improving academic performance

María José García-Sanpedro

EDO Research Group, Universitat Autònoma de Barcelona

Spain



Received July 2012
Accepted September 2012


Abstract

The effective integration of competencies into university programmes calls for a holistic, diversified assessment model and for developing the educational potential of students’ assessment results.

This work asks: how are students informed about the results of their learning? Specifically, it aims to understand students’ and professors’ perspectives on the use of learning results and the strategies promoted in practice to make better use of their educational potential.

The results described are derived from a case study of 12 degree courses adapted to the EHEA. Although feedback and feedforward are strategies for informing students about their learning results, the results of the study show that their use is not generalised and frequently amounts to no more than reporting the grades obtained. Students identify the difference between knowing the grade and obtaining feedback. The tutorial dimension is also valued positively when students are informed about the results of their assessment; however, the educational potential of these results remains largely untapped. The students say that the tutorials and the follow-up through continual assessment help to reduce failure. The faculty also identifies that reflection on the results obtained is closely linked to metacognitive reflection, although it is not generalised in practice. The students recognise the limitations and the workload involved in the professor monitoring them individually. The study concludes with the need to systematically incorporate feedback and feedforward into teaching practices, and it offers guidelines for orienting these strategies towards improving academic performance.



Keywords – Feedback; feedforward; academic achievement; learning outcomes.


----------


1 INTRODUCTION

The change from an assessment “of” learning, known as a traditional assessment, to an assessment “for” learning implies a conceptual change for the faculty as well as for students. On the one hand, there is a change in the focus, the objectives and the assessment task itself. On the other hand, a change is also implied in the way students are informed about learning assessment results and how they make decisions from these results.

It is therefore understood that a “good assessment” not only verifies whether the defined competencies and the associated learning have been achieved; it also generates new learning and is oriented towards improvement.

In the case of education by competencies, learning and assessment form a continuum, such that the same strategies developed to promote competencies may serve to assess whether or not they have been acquired. This is valid as long as the strategies are accompanied by the corresponding assessment guidelines and criteria and these are understood by the students beforehand.

Similarly, it must be clarified that a single methodology or combination of methodological strategies for assessing competencies may not be useful for all educational areas. In this sense, the diversified and contextualised use of assessment strategies is advised. Diversity in strategies enables different aspects of the competencies to be covered; in other words, it tests the different learning acquired and the skills and aptitudes developed. Contextualised application requires the transfer and application of knowledge in a context closer to the professional profile; in other words, it simulates reality. Together, both characteristics bring assessment closer to a more authentic and more realistic situation, consistent with a competency-based approach.

However, in order to develop good assessment in practice, how the results of the assessment will be communicated and used must be planned and defined beforehand. Planning and assessment require consistency; in other words, the assessment strategies must be coherent with the objectives or competencies being assessed. Fullerton (1995) proposes working smarter, not harder: strategically combining procedures, foci and agents with the aim of optimising assessment (gathering relevant evidence with less work). It is also advisable to consider certain characteristics that help students anticipate assessment demands, such as: promoting performance close to the professional profile, explaining the expected performance levels and results to students, explaining the assessment criteria, and explaining the relation of the task or activity to the general development of the assignment, subject or area. This explanation makes the assessment map visible; in other words, it brings together what the professor is aiming to assess with what the student understands will be assessed.

This alignment between what the professor assesses and what the student understands about the assessment is not what is often criticised as “teaching to the test”; rather, it is an adjustment of the assessment demand and the development of the potential area of learning and transfer. When students understand the assessment criteria and the competencies to be developed, they can anticipate the type of demands that will be made of them and the different possibilities that may be presented in the assessment situation. The basic idea of this transparency in the criteria is to democratise assessment and to foster co-responsibility in the learning process as well as in the results.

Therefore, planning the strategies, as well as the way the assessment results are communicated and used, promotes student initiative and responsibility in the learning process itself. In general terms, it may be said that there are two strategies for informing students about the results of their learning: one is more retrospective, while the other is prospective.

“Feedback” is understood as the assessor’s (professor, expert, practical work tutor, etc.) response to the result or process produced by the student. This response is essentially based on the description of the errors or failures made by the student in carrying out the task. It enables the process to be assessed retrospectively (in other words, what was done, not what should have been done) and information to be generated to modify the competency acquisition process. In addressing these results and the information generated, for the faculty as well as for students, the prevailing analysis criterion is that of not making the same mistakes again.

The methods and types of feedback are very varied (oral, written, more or less meaningful, etc.) and are governed by criteria deriving from university tradition, institutional culture, departmental regulations and the faculty’s approach to teaching. According to Gibbs and Simpson (2009), the feedback archetype has been the personalised, detailed and frequent model of the universities of Oxford and Cambridge, where students complete a weekly assignment, which is read with the tutor during an individual tutorial and critical comments about it are provided at the time. The origins of this teaching model involved giving feedback on these assignments, although this formative assessment was quite separate from summative assessment, which consisted of final examinations at the end of the three years of study. However, this model has since been affected by the increased number of students, limited resources and the reduced number of activities and assignments requested from students. As a consequence, the quality and quantity of the feedback offered by professors have suffered and the time taken to return results has increased. In practice, all this has led to feedback being identified with a summative function and, in extreme cases, with information about grading, which it is not.

On the other hand, “feedforward” is more related to assessment for learning (Ramsden, 1992). The concept of feedforward (which can literally be interpreted as providing feedback in advance) comes from cybernetics and is understood as a process able to improve control over a system. While feedback promotes the correction of errors once a deviation from the initial state is recorded, feedforward uses knowledge of the system to act on or remedy failures (Brosilow & Joseph, 2002, cited by Basso & Olivetti Belardinelli, 2006), thus enabling changes to be anticipated. Consequently, feedforward works by perfecting the process through successive comparisons between the actual product and the expected final product. In its educational applications, and drawing on contributions from cognitive psychology, feedforward is a process modelled by the student in relation to the proposals and objectives in the environment.
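To make the cybernetic distinction the text borrows more concrete, the following minimal sketch (in Python, added here purely for illustration and not part of the original study) contrasts the two mechanisms: a feedback controller reacts to a deviation that has already appeared at the output, while a feedforward controller uses knowledge of the system and of the incoming disturbance to compensate before any deviation is produced. The toy plant() function, the gain of 0.5 and the numerical values are illustrative assumptions, not taken from the cited literature.

    # Illustrative toy system: the output is simply the control action plus an
    # external disturbance. It is used only to contrast the two control ideas.
    def plant(control, disturbance):
        return control + disturbance

    target = 10.0        # the expected final product
    disturbance = 3.0    # external influence acting on the system

    # Feedback: the controller only sees the deviation after it has occurred
    # and corrects it retrospectively, step by step.
    control = 0.0
    output = plant(control, disturbance)
    for _ in range(10):
        error = target - output      # deviation recorded after the fact
        control += 0.5 * error       # correction driven by the observed error
        output = plant(control, disturbance)
    print(round(output, 3))          # approaches 10.0 only after several iterations

    # Feedforward: knowledge of the system (here, that the disturbance adds
    # directly to the output) is used to pre-compensate, so no error ever appears.
    control_ff = target - disturbance
    print(plant(control_ff, disturbance))  # 10.0 on the first attempt

Transposed to the educational reading developed below, feedback corresponds to commenting on errors already made in a completed task, whereas feedforward corresponds to giving students the model of the demand (criteria, exemplars, previous exams) so that difficulties can be anticipated before the assessment takes place.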

The focus promoted by feedforward for informing and making decisions about assessment results highlights their prospective nature, seeking and fostering in students the elements that enable them to advance towards acquiring the declared competencies. It also enables difficulties to be anticipated, both in the situation to be resolved (the assessment task) and in its transfer. This allows the student to be instructed in the aspects of the system (situation/problem) that need to be detected in order to successfully resolve the demand. In this sense, it is more strategic than feedback, which enables it to foster continual learning. This focus requires the student’s participation in and commitment to the task, and a more authentic type of assessment to be developed. More specifically, feedforward requires the assessment maps of the professor and the student to be harmonised; in other words, adjusted to the assessment demands and learning requirements. It has a more democratic and committed nature because it opens a shared, ethically grounded dialogue in which the participants create the roadmap towards acquiring the competencies (García-Sanpedro, 2010).

The objective of this work is to understand how students are informed about the results of their learning. Specifically, it aims to understand students’ and professors’ perspectives on the use of learning results and the strategies promoted in practice to make better use of their educational potential.


2 METHODOLOGY

A case study (Eisenhardt, 1989) was undertaken under the symbolic-interpretative paradigm (Habermas, 1982) on 12 Spanish degree courses whose teaching had been adapted to the EHEA. Given the extensive nature of the study, this work focuses on how the students’ assessment results are used in the cases that formed part of the study.

The degree courses that agreed to form part of the study are distributed as follows: Humanities, 2; Social Sciences, 5; Scientific Technologies, 2; and Health and Life Sciences, 3. Their distribution and origins are presented in Table 1.

University | Degree course
Universitat Autònoma de Barcelona | Degree in pedagogy
Universidad Complutense de Madrid | Degree in geography and land management
Universidad Complutense de Madrid | Degree in biology
Universidad de Alcalá de Henares | Degree in linguistics and English studies
Universidad de Alicante | Degree in social work
Universitat de Barcelona | Degree in librarianship
Universitat de Barcelona | Degree in management and public administration
Universidad de Deusto | Degree in pedagogy
Universidad de A Coruña | Degree in occupational therapy
Universitat Politècnica de Catalunya (Escola Superior Politècnica de Castelldefels) | Degree in telecommunication systems engineering
Universitat Politècnica de Catalunya | Degree in statistics
Universitat Pompeu Fabra | Degree in biology

Table 1. List of degree courses that collaborated on this study

The fieldwork took place from January 2009 to October 2010. The professors and students directly involved in the conception, design and development of training and assessment by competencies were considered key informants. A range of instruments was applied depending on what each case permitted. The following methods were generally used: group and individual in-depth interviews, discussion groups and direct observation. A record of field notes was maintained and integrated into the data analysis. Table 2 presents the distribution of informants for the entire study.



Case | Academic management directors | Professors | Students | Others | Other information source | Total informants per case
1 | 1 | 3 | 8 (course four) | - | x | 12
2 | 3 | 3 | - | 1 | x | 7
3 | 2 | 2 | - | - | x | 4
4 | 2 | 7 | 11 (course one) | - | x | 20
5 | 1 | 1 | - | - | x | 2
6 | 1 | 2 | - | - | x | 3
7 | 2 | Access not permitted | - | - | x | 2
8 | 1 | 3 | - | 2 | x | 6
9 | 2 | 6 | - | - | x | 8
10 | 4 | 6 | 8 (five from course one, three from course four) | 2 | x | 20
11 | 1 | Could not be specified | - | - | x | 1
12 | 1 | 2 | - | 1 | x | 4
Total number of informants | 23 | 35 | 27 | 6 | - | 91
Total percentage of informants | 25% | 38% | 30% | 7% | - | 100%

Table 2. Distribution of the informants who participated in the case study


The numbers assigned to the cases in Table 2 are random; the degree courses presented in Table 1 are ordered alphabetically by university of origin. The cases identified as 1, 4 and 10 are those that allowed group discussions with the students. Although the data were managed by case, the information was treated in a descriptive and non-comparative manner. In this sense, the focus of the analysis is to describe how the informants perceive the topic and what their experiences are.


3 RESULTS

The results presented refer to the participating students’ and professors’ experiences of how assessment results are communicated. They describe what professors and students think, how they perceive this information and what effect it has on them. The results also enable identification of some strategies that the faculty uses to foster reflection on the results of competency assessment.


3.1 The faculty’s and students’ experiences of the use of the assessment results


The students identify the difference between knowing the grade and obtaining feedback. However, they are not familiar with strategies for communicating results based on feedforward.


Some course four students from Case 10 reported that a general assessment is made on completing some assignments; in other words, the professor will “sit down for five minutes when the assignment is finished and say that things have gone well, that they will improve”. However, this type of assessment has a terminal nature, aimed at the general aspects of the assignments rather than at improving students individually.

Other students identified professors who “are concerned” about improving results and others who “are not”. It is therefore recognised that there are different ways of returning the assessment results, which they describe as “different types of feedback”. They recognise that reporting the grade obtained is not the same as providing feedback. One student admits that “I was only given a numerical grade and nothing else”; another said that “some professors only give you a number, the grade, but they don’t tell you anything else”. Most of the students value positively the reasoned explanation of achievements and errors and stress the importance “that the professor takes time to explain why you have failed”. However, the discussion groups held did not yield results indicating that the students experience ways of returning results close to the focus promoted by feedforward.


The pedagogical dimension is also valued positively when students are informed about the results of their assessment. However, the educational potential of this information remains largely untapped.


Course one as well as course four students in the cases were aware that the effort the faculty makes when giving feedback facilitates the pedagogical relationship. In their words, “we are not so distant from the professor, this is a great help”. This closeness promotes motivation and academic performance: “it doesn’t take so much effort to do things because you know that your efforts are being assessed. Whether you do things well or do them poorly, you know you will be assessed by your work”. It also helps in accepting criticism aimed at improvement: “the criticism is more constructive because they are always concerned about your shortcomings, they are concerned about this, they ask you, it is not so generalised”. In this sense, improvement becomes motivating, as does the climate of trust generated by the faculty when they show an interest in students’ success by valuing their academic efforts. However, there are no consistent results that demonstrate a use of this educational potential linked to improved academic performance.


The students indicate that the presence of the professor as a tutor of the learning process and the follow-up through continual assessment help to “reduce failure”.

In line with the previous result, the presence, continual monitoring and reflexive work of the faculty are perceived by students as factors intimately related to increased responsibility for their learning. The students express it as follows: “what helps is that they are constantly assessing”; “you also take more responsibility because when you have two or three works accumulating, they make you aware of it”; “it is not that they are dressing you down, but the follow up helps”; “there is nothing negative about continual monitoring”; “if they are always on top of you, I think this helps to reduce failure”.

From the faculty’s perspective, it is also observed that the follow-up affects motivation and increases students’ responsibility for their academic performance. For example, a professor from Case 3 says: “There are students who have been abandoned but I spoke to them. It was noted that they are content, they feel that we are concerned for them and that we are involved. They do not have the sense that ‘they have abandoned me’ but realise they have abandoned themselves”.

Individualised performance monitoring is valued positively by the faculty as well as by the students because “anonymity” is removed and “it is entirely more personalised”. As one professor from Case 5 says, “I spoke directly to the student because I know him/her very well”. The faculty also indicates that different channels are used for feedback and that it is rewarding to see individual and group progress, something acknowledged by professors and students alike.


The faculty identifies that reflection on the results obtained is closely linked to metacognitive reflection.


Although the concept of “feedforward” was not found amongst the informants, some professors have developed strategies for returning assessment results that promote a proactive focus. Thus, some professors identify a positive relation between continual reflection in class on the demands of assessment and the results obtained by students. Through this continual work, they find that the most favoured aspects are regularity in the work, effort and the ongoing development of metacognitive strategies. Knowing how to recognise shortcomings and identify the difficulties and problems that interfere with learning is an important aspect of fostering the acquisition of competencies and improved results. For example, a professor in Case 5 compares learning with a visit to the doctor: “If I go to the doctor and say that I do not feel well, he/she will not be able to help me, but if I say that I do not feel well in a specific part, he/she will be able to help me. I firmly insist on very specifically defining what it is they do not understand and that they try to reflect on this”.

On the other hand, some professors point out that metacognitive reflection applied to assessment implies that students know not only the strategy that resolves the assessment task, but also the strategies that must be used to resolve possible errors in learning. For example, a professor in Case 4 maintains that “students who reflect about what the task will be like learn it, and those who do not reflect do not learn”. Becoming aware of learning and assessment gives direction to and orients both processes; in the words of the same professor, “students see their shortcomings and see how everything is related”.


The students recognise the limitations and the workload involved in the professor monitoring them individually.


Students acknowledge and value the professors’ efforts in explaining the results obtained. They also acknowledge that more preparation time is required and, given the number of students per course, they doubt the sustainability of this way of working. For example, a course one student in Case 4 expresses this by saying “there will probably come a time when the professors cannot cope. It is ok here because it is a small university, but in Madrid this is impossible in engineering”.


The use of feedback and feedforward as improvement strategies.


Some professors maintain that students do not make sufficient use of the feedback given to them, especially in relation to errors; they say that “it doesn’t matter to them”. However, this perspective is not shared by the students.

The students say that the use of the results has a mainly “summative” nature and that “in the end it is only the examination that counts”. They also say that assignments and examinations are frequently not returned on time, which sometimes makes preparing for other tests difficult. For example, it was recorded that “only one professor uploaded the grades for the assignments to the virtual campus before the exam, the others did not”.

The students say that, even among the faculty who do effectively inform them about assessment criteria, the use of the results is merely informative rather than prospective; the results are more an element for the student’s own organisation.

On the other hand, students acknowledge that some strategies foster the use of results for improvement more than others. For example, lectures offer no possibility for feedback and do not simulate possible difficulties, so the opportunity for improvement before the exam is lost: “In a lecture attended before the assessment, you cannot identify the errors you may have”, says one course four student in Case 10. Whenever more participative strategies are used, there are more opportunities for feedback. One student in Case 1 explained that “for me, the work dynamic was one of the work methods used that made me identify my mistakes and look at my performance in the assignments we have”.


3.2 Strategies that foster reflection on the results of competency assessment

The contributions and experiences shared by the informant professors enabled identification of the strategies aimed at promoting reflection on, and improvement of, the results of competency assessment. Basically, four strategies were identified:

1. Correction without grading. A strategy used by several professors in different cases to foster reflection on the results is to correct without grading. This enables “learning and returning to the practices”. One professor from Case 5 explains: “I correct without giving grades, except in some cases where there is a grade and it is added to the final grade. (…) What is important is to learn and return to the work experience. If not, the students do not reflect on the result”. Another professor from Case 3 says: “The idea is that the students have feedback on what it is they have to do, with things that can be assessed”.

2. Simulation or modelling through exams applied in advance. According to the informants, modelling from examinations and tests from previous years fosters academic performance and orients students to understand what is expected of them. When facing an exam or test model, students not only anticipate the content but also the learning focus, which is a way of seeing the competencies in action. One professor explains “one thing that has provided positive results is that I give students the exam from the previous year, and this provides great orientation for understanding what is expected of them in the course”.

3. Document and disseminate the students’ experiences. The strategy of drafting a document that gathers successful experiences and improvement strategies developed by the students, as in Case 9, is a clear model of feedforward. It is a guide that offers the experiences and assessments of students who are further along in their studies and directs newer students towards “survival” strategies for the learning and assessment model. In the words of the informant coordinator: “we are preparing a document called something like ‘Everything you ever wanted to know about ABP and never asked’… It is inspired by something similar that the Harvard faculty of medicine did, which the students in the final years prepared for those in the first year; we have made our own”.

4. Report on the most frequent difficulties and errors. These documents are presented in different ways, but they basically inform students about the relationship between the assessment criteria of the subject and the results obtained in the different tests applied to the group of students. When they are presented and explained to the students, the indications for preparing, for example, the overall subject test are built upon and agreed with them. These indications are gathered in an agreed report based on the experiences and contributions of all the students and the team of professors.


3.3 Reflections and recommendations from these results

Assessing students’ learning is a complex and dynamic matter that places constant demands on reliability, coherence and consistency. This also applies to communicating the results of the assessment to students. Furthermore, the main challenge is to ensure that this communication enables new learning and leads to student development.

It is believed that in relation to how the students’ assessment results are communicated, the results of the case study enable reflection on three basic questions.

The first is that the students identify the difference between knowing the grade and obtaining feedback. Knowing the grade obtained in an exam is not feedback. Feedback is identified with the information about the errors committed and the successes achieved in a given test, communicated by the faculty to students. However, there is no evidence that the information and the exchange generated among the students themselves through discussion of their results is valued as feedback and consequently used to rectify possible errors. On the other hand, although the students acknowledge the value of information about successes and errors, it cannot be affirmed that providing it is a generalised practice. Nor are students familiar with strategies for communicating results based on feedforward. Consequently, the challenge seems to come down, on the one hand, to making reflection on the results obtained more participative, incorporating it into classes as an important part of the daily training process; on the other hand, it is necessary to extend and generalise the occasions for communicating results, also planning them as part of the daily work.

The second question is the value that both the students and the faculty place on communicating learning results and following them up. The relational nature of pedagogical mediation is valued and, in general terms, becomes a factor of increased motivation for learning. However, in order to really have an impact on improved academic performance, the value of the act of communicating learning results needs to go beyond personal satisfaction, promoting the idea of communicating for improvement and not only for personal monitoring. The purpose of communicating results is to promote student maturity and responsibility in the acquisition of learning and to develop improvement strategies or remedial actions that enable students to learn. The pedagogical challenge is to promote a vision of an autonomous student who takes the initiative and is co-responsible for his/her learning.

The third question is that the faculty recognises the important role of metacognitive strategies in improving learning results. Although professors who practise them were identified, the results did not allow the conclusion that the practice extends to the students in the case studies or that they are familiar with it. On the other hand, the strategies presented as examples reflect the orientation and focus promoted by feedforward. This means using information from the environment, gathered from results, prior experiences, student participation, etc., in order to orient students on how to address assessment demands, simulate scenarios and model situations that foster learning and transfer. In this respect, the challenge seems mainly to be the systematisation of this type of focus by planning specific actions such as those discussed in the results. This does not mean increasing the number of assessments or producing new reports that would make the daily work of the faculty unsustainable. It means making use of the results, the experiences and the daily difficulties in order to reflect on improvement. What is necessary and pressing is to incorporate reflection on the results and possible improvement strategies. Extending, highlighting and standardising this type of work seems to be the way forward.


4 CONCLUSIONS

This work focuses on the question: how are students’ assessment results used? The formative use and prospective nature of assessment results are key elements in the change from an “assessment of learning” to an “assessment for learning”.

Although feedback and feedforward offer useful possibilities and present different foci, the results show that both the faculty and the students acknowledge these differences and their possibilities, recognising in particular that strategies oriented towards feedforward strengthen aspects linked to students’ motivation, commitment to the task and academic performance. Moreover, through the continual monitoring involved in these types of assessment strategies, the faculty develops the tutorial function, fostering proximity and improved pedagogical relations. Four strategies arising from the cases are also presented that foster reflection on the results of competency assessment.

The results show the evident need to systematically incorporate feedback and feedforward into teaching practices, to make use of the assessment results and to orient them towards improvement. However, incorporating these strategies also requires acknowledging the difficulties that must be overcome in order to integrate them into practice: the increased number and diversity of students, fewer resources, and the difficulty of making a diagnostic assessment that provides an understanding of prior knowledge, study techniques and habits, and students’ conceptions about learning and the construction of their knowledge, among others.

In summary, incorporating this type of focus on assessment results in a sustainable manner essentially requires students’ participation in their learning process and their becoming aware of their role as protagonists. The task of the faculty is to embrace the challenge of harnessing the educational potential of the results and incorporating these strategies into practice without their becoming bureaucratic procedures with little meaning.


References

Basso, D., & Olivetti Belardinelli, M. (2006). The role of the feedforward paradigm in cognitive psychology. Cognitive Processing, 7(2), 73-88. http://dx.doi.org/10.1007/s10339-006-0034-1

Eisenhardt, K. (1989). Building Theories from Case Study Research. Academy of Management Review, 14(4), 532-550. http://dx.doi.org/10.2307/258557

Fullerton, H. (1995). Embedding alternative approaches in assessment. In E. Knight (Ed.), Assessment for learning in Higher Education. London: Kogan Page.

García-Sanpedro, M.J. (2010). Diseño y validación de un modelo de evaluación por competencias en la universidad. Doctoral thesis, Universidad Autónoma de Barcelona. Available online at: http://www.tesisenred.net/handle/10803/5065

Gibbs, G., & Simpson, C. (2009). Condiciones para una evaluación continuada favorecedora del aprendizaje. Colección: Cuadernos de docencia universitaria, Nº13. Barcelona: ICE-UB y Ediciones Octaedro.

Habermas, J. (1982). Conocimiento e interés. Madrid: Taurus.

Ramsden, P. (1992). Learning to teach in higher education. London: Routledge. http://dx.doi.org/10.4324/9780203413937


Citation: García-Sanpedro, M.J. (2012). Feedback and feedforward: Focal points for improving academic performance. Journal of Technology and Science Education (JOTSE), 2(2), 77-85. http://dx.doi.org/10.3926/jotse.49


On-line ISSN: 2013-6374 – Print ISSN: 2014-5349 – DL: B-2000-2012

 


Author biography


María José García-Sanpedro

María José García-Sanpedro is a lecturer at the Universidad Internacional de La Rioja and a member of the EDO research group at the Universitat Autònoma de Barcelona.

She was distinguished as an Honorary Collaborator by the School of Social Work at the Universidad Complutense de Madrid. She is a member of the inter-university research network for university teaching at the Universidad de Alicante. She has coordinated research projects and is currently working on several national and international projects. She has participated in the research training programme of the Government of Catalonia and the European Social Fund. Her research covers innovative higher education, student engagement, student-centred assessment, gifted students in higher education, and teaching and learning strategies.



Journal of Technology and Science Education, 2012 (www.jotse.org)


The article’s contents are provided under an Attribution-NonCommercial 3.0 Creative Commons licence. Readers are allowed to copy, distribute and communicate the article’s contents, provided the author’s name and the Journal of Technology and Science Education are credited. The contents must not be used for commercial purposes. To see the complete licence terms, please visit http://creativecommons.org/licenses/by-nc/3.0/es/


Published by OmniaScience (www.omniascience.com)



