The Pequod
Dr Alistair Brown | Associate lecturer in English Literature; researching video games and literature

Getting Feedback on Teaching Effectiveness

Tuesday, August 18, 2009

Last week, the UK's postgraduate support body, Vitae, published a survey of distance learning and part-time doctorates. The next day in the UK, universities and the higher education press were poring over the results from the annual National Student Survey, extrapolating a general rule about the state of universities from every dropped percentage point. Meanwhile, as an ongoing feature of this time of the academic year, many postgraduates will be busy filling in questionnaires for their graduate schools.

It sometimes seems as if higher education is fuelled by two things: money, and survey results. But whilst entire university administration departments are dedicated to designing questionnaires, or to dissecting their results, should individual academics or postgraduate teaching assistants seek feedback on their own teaching? In this blog post, I want to share my own experience (or initial lack of it) in designing surveys to direct my own teaching. But before thinking about the methods of questionnaire design, it is worth stressing why feedback is such a vital tool for teaching, even at the early stage of an academic career in which I find myself.

If UK academics want to apply to the Higher Education Academy (HEA), the national centre for teaching excellence, to receive accreditation - often, for early-career researchers, at the Associate level - all they need to do is to fill in a form describing their teaching activities in two "areas of activity" from a list of five. These include assessing students, planning a lesson, or providing a supportive environment in which students can learn.

Area 1, for instance, asks the applicant to:
Identify the ways in which you contribute to the design and planning of learning activities. These might include involvement in the design or redesign of curricula, courses and programmes of study and/or identifying and planning different kinds of interaction with learners in various contexts for single sessions or larger programmes.
Perhaps you have held a tutorial with a group of students, planned a seminar or even given a lecture. Surely all you need to do, then, is to state what you have taught - books you have used, for example - just as you would on a CV?

The catch comes with the subsequent sentence, a mantra that is repeated beneath all of the other areas of activity:
Please give reasons for your choice of learning content, activities and techniques and how they relate to developing the learners' understanding of the subject. Please explain how you know that your work is effective and how you try to improve it.
These repeated sentences represent the tip of the huge, foundational move that has taken place in universities over the last two decades. This is the shift towards "student-centred learning."

At first, this might seem like one of those clever-sounding but ultimately meaningless phrases so beloved of technocrats. Surely universities have always had students at their centre? But consider the case of the professor who stands in front of the blackboard in a stuffy lecture theatre, droning on for an hour about some scrawl behind him, to which he occasionally gestures with an absent hand. I am sure you have encountered this stereotype at some point in your education. In response to a lecturer like this, you might well have buried your head in your hands and your eyes in your textbooks, believing that the reason you cannot understand the material or find it so dull is not that the lecturer is bad, but rather that he is so brilliant and the material he is teaching so difficult that you will never understand it. You may even have fallen for the fallacy that the worse the teacher, the better the student you must become, because you are being forced to study independently in order to understand.

Student-centred learning attempts to send such lecturers up in a (metaphorical) puff of smoke, turning attention instead to students. Lecturers should not plan lessons in ways that they find easiest to teach; and although teaching is often most exciting when at the cutting-edge of research, they should not simply talk about their current career-defining, ultra-complex project because that interests them more than teaching the basics. Rather, lecturers should consider what hard knowledge and soft skills they want students to have acquired by the end of their degree, and think about how students might best go about learning them.

This is what lies behind the HEA's application for Associate Practitioners, with that repeated requirement that you "give reasons for your choice of learning content" and that you "explain how you know that your work is effective and how you try to improve it."

How, though, can one provide this sort of evidence? The clue lies in compulsory Area 5 of the HEA's application form:
This area is about how you use research, scholarly activity and/or professional activities to support learning. Please use this section to give examples of ways in which you draw upon discipline based and pedagogic research, scholarly activity and/or other professional activities in the support of teaching and learning.
No PhD researcher should ever uncritically accept the findings of the latest journal article. Instead, research requires one to evaluate an author's track record for reliability, or to look out for any cunning argumentative tricks or methodological errors that may compromise a paper's findings. Area 5 asks that researchers bring the same evaluative techniques to bear on teaching. How do we know our teaching is effective for students? How do we evaluate whether they are learning in ways that best suit them?

This is where the questionnaire or survey starts to become of value to individual academics, not just to institutions focused on the latest league tables. Whilst good teachers will always be able to pick up on the vibes of a classroom, and sense whether students are engaged with the material they are teaching, there is no substitute for hard evidence of teaching effectiveness, just as in research one would never reject a paper's findings purely on a hunch.

And so on to my own experience. When I started teaching tutorials in English Literature four years ago, I thought I was doing a great job. Not only did I give students preliminary reading to do, I also got them to download podcasts and audio books. Rather than sticking with dusty books, I asked them to do exercises with hypertext editions.

But whilst I may have enjoyed implementing these twenty-first-century practices, when I set about applying to the HEA in 2006, how could I know that these were actually what students wanted? How could I measure whether my teaching techniques were benefiting their learning? In large part, answering this meant designing a survey. And here, I as a literary academic found myself somewhat in the dark - and, by sharing my experiences here, I hope to enlighten others in the same position as I was three years ago.

The first thing I found is that in order to be useful, questions need to be specific. Asking students whether they enjoyed tutorials (or not) might (or might not) do something for my ego, but reveals little about how they are using tutorials for learning. Perhaps they enjoyed tutorials, for example, because I never scheduled them first thing in the morning. Perhaps they were with their friends in my group, and their memories of tutorials are conditioned more by the chatter before and after, than by what actually went on in the classroom.

Even when I later modified this question to ask whether my tutorials were useful in "developing your understanding of course material," I found this did not really help much. At my university, students receive just four one-hour tutorials per year, the rest of teaching taking the form of lectures. Therefore, in preparing for a one-off tutorial, a student will usually read their primary and secondary material carefully, thereby improving their understanding of the course material automatically. Although they may not have perceived it that way when they responded to my question, the exact form of the tutorial may have mattered less than the mere fact of its existence as a key punctuation point in the academic year. Additionally, whilst I try to allow students to do most of the talking and debate in tutorials, with me taking the role of chairperson, I know that other tutors prefer to take a more active role, disseminating ideas and information rather than merely facilitating a debate. Neither approach is necessarily better at ensuring that the preparation for the tutorial, and the contact time itself, become sound platforms for subsequent study. If a student acknowledged that my tutorials had helped their understanding, this did not necessarily mean my way of running tutorials was the best or only way of achieving the same result.

The first recommendation with surveys, then, is to try to avoid generalised questions, or to ask them as part of a suite of more detailed ones. Asking whether a student enjoyed or benefited from my tutorials told me little about my teaching. Neither did asking whether it helped their understanding, because a different approach might have been similarly beneficial. In the most recent version of the survey I issued last academic year, I still asked these questions, as they provide an opportunity for students to complain if they really have not enjoyed the tutorials, and for me to get a general sense of student perceptions of me (it is not necessarily the case that just because a student may offer lots of valid pointers for improvement, your teaching overall is poor). However, I also ask more detailed, follow-up questions. Did students feel they got more out of set tasks (mini-exercises I get them to do in pairs), or when discussing a text as a whole group? Did students use any handouts given in tutorials as a way of pursuing themes developed in the space of the hour's discussion?

I also now ask very specific questions looking ahead to teaching in subsequent years, whilst reflecting on the one just gone. For example, I ask whether students benefited from any online resources or podcasts, and if so whether they think the next generation of students would appreciate more of them. This year, I even asked whether my current students thought future ones would be happy for me to use Facebook in teaching. Their responses to these questions were clear enough to guide my teaching next year decisively. Broadly put: yes to more online resources and podcasts, but a firm no to Facebook, which would break the boundary between students' academic and social lives.

Another trick I have discovered with surveys is to introduce them at the start of the year rather than towards the end. This might seem counter-intuitive, and against the way surveys usually run at universities. How can students comment on a course when they have not yet completed it? However, I realised that presenting the questionnaire at the start of the year would have two benefits.

Firstly, when I hand out the questionnaires at our first meeting, it forms a sort of contract between student and teacher. By indicating that I value their pedagogic feedback and will try to act upon it, I imply that I will also value any literary comments they might make in tutorials. I may be the "teacher," but that does not make me God, and giving them the opportunity to criticise my teaching also illustrates the point that my ideas about a text should not be viewed as the "right" answers. This is critical in English literature, which is a discursive rather than fact-based subject.

Secondly, one of the well-recognised problems with questionnaires is that they tend to represent polarised opinions. Either students who have hated a course use them to sound off all their grumbles, even if they do not originate with you, or students who have loved the course praise the teacher, even if it is really the selection of books or material that they have liked. In previous years, I have experienced both sides of the equation in my survey results. What can be missing is the intermediate students who have liked some aspects but not others. It is hard to make concrete changes in response either to total demolition or praise, whereas students with balanced views can be more specific about what was good or bad about your teaching.

Therefore, issuing a questionnaire at the start of the year helps to encourage comments on individual aspects of teaching that can be enhanced as the year progresses, rather than coming to light only retrospectively, in a sweeping appraisal at the end. Last year, for example, after one tutorial a student commented that the questions about a novel which I asked in preparation for that tutorial did not really align with the essay questions asked in the exam at the end of the year. A fair point, and one I could rectify straight away in later teaching.

A final move I made this year, to hone the process I had started when applying to the HEA three years ago, was to place the questionnaires online. Though the paper versions never asked a student to put their names on the top, the online environment ensures greater anonymity for students, which is crucial if they are going to offer candid judgements. Additionally, I noticed with the paper versions that many respondents simply offered numerical answers (that is, ratings for specific questions from 1, good, to 5, bad) rather than giving more useful, written feedback. I guess this is because they felt that they had to answer every question on the written sheet, and so simply dashed through it. In contrast, the online questionnaires often received detailed comments about just one or two aspects - perhaps in response to a particular issue with one tutorial - rather than purely quantitative ratings across the full spread of questions.

I cannot stress enough how beneficial surveys have been for my teaching. Whether it is doing the specific, simple things (such as using more podcasts or keeping well away from Facebook), confirming my intuitions about how much a group is getting from my teaching, assuaging my ego with positive comments or keeping me on my toes with negative ones, questionnaires force me continually into that position desired by the HEA, where teaching is never statically delivered from the front of a lecture hall, but part of a reciprocal process which sees the student as the key figure in learning.

Given this, I do find it odd that although my own department have been supportive in allowing me to produce my own surveys, I know other postgraduates have had different experiences. Some departments fear that inviting negative criticism might lead students - who are, after all, fee-paying clients - to complain formally to their universities once enough of them realise they feel the same way about poor teaching at their institution. I am also aware that, over four years, my questionnaires have evolved tortuously out of my earlier naivete. I have eventually, I hope, found a good way to ask that all-important, daring question of a student: how effective is my teaching?

[This is a slightly modified cross-post from the Graduate Junction blog.]


Posted by Alistair at 8:54 am



The content of this website is Copyright © 2009 under a Creative Commons Licence. One term of this copyright policy is that plagiarism is theft. If using information from this website in your own work, please ensure that you use the correct citation.
