The Pequod
Dr Alistair Brown | Associate lecturer in English Literature; researching video games and literature


New Essay

Through exploring the psychopathology of Capgras syndrome, in which a patient mistakes a loved one for an imposter, The Echo Maker offers a sustained meditation on the ways in which we project our own problems onto other people. As a reflection on the mysteries of consciousness, the novel offers some interesting if not especially new insights into the fuzzy boundaries between scientific and literary interpretations of the mind.


When Publishers Own the (Dead) Author on Facebook

Tuesday, February 14, 2017

An interesting phenomenon I've just spotted on Facebook: major authors like Charlotte Bronte or Charles Dickens have their own verified pages - that is to say, pages confirmed by Facebook with the little blue tick as being "an authentic page for this public figure."


But who "owns" these pages? Follow the links from the About section, and you'll end up at Penguin-Random House's own website, where naturally you can buy the author's books. Evidently these pages are managed not by some altruistic-minded eager reader, but by the publishing conglomerate.

The content of these pages seems generally good: there is lots of community discussion and informative link sharing. It's not just a stream of posts inviting you to buy the latest Random House edition. 

Nevertheless, these publications do feature heavily - though since many of them are by imprints such as Vintage, which are ultimately owned by Random House, it would be easy to miss that the page owner is solely promoting its own works. It's also questionable that pages such as the Jane Austen one are badged as being "maintained by Jane Austen's U.S. & U.K. publisher Vintage Books" when, of course, Austen has many US and UK publishers, and indeed her works are available free via the likes of Project Gutenberg.

The way in which Facebook presents such pages as being the authentic location - "authentic" carrying the whiff of objectivity - raises ethical questions. Is it right that a publisher can colonise a long-dead author and piggyback on his or her identity as a sales route? If readers are landing on these pages as the top results on Facebook (which most would do, as these are the unique, verified accounts), are they missing news of interesting books released by competing publishers? How are the news feeds being steered so that what looks to be a fan site actually ties in with a wider publishing (and economic) agenda?

Of course, I've no objection to publishers using Facebook to promote their activities, nor to publishers hosting fan sites for authors. But to hide behind the persona of the author, curating his or her historical identity in the twenty-first century, when the ultimate aim is presumably to sell more texts, makes me uneasy. Is anyone with me on this?


Can we imagine a Statcheck for the arts and humanities?

Wednesday, February 08, 2017

Here's a wondering for a Wednesday. Can we imagine having software tools in the arts and humanities that do some of the dirty work of fact and data checking ahead of peer review?

The inspiration for this comes from the stir that has been created recently in the sciences - especially experimental psychology - by a tool called Statcheck. Experimental psychology often depends upon applying p-value assessments to data, to determine whether findings are statistically significant or simply the result of experimental bias or background noise. Statcheck, a program devised at Tilburg University, automatically scanned a massive set of 250,000 published papers, recalculated the p-values within them, and checked whether the researchers had made errors in their original calculations.
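
To make the mechanics concrete, here is a minimal Python sketch of the kind of recomputation Statcheck automates: pull a reported t-statistic, its degrees of freedom and its p-value out of a sentence, recompute the p-value, and flag any mismatch. This is purely illustrative - the pattern, the tolerance and the function name check_reported_p are my own assumptions, not a description of the real tool.

    import re
    from scipy import stats

    # Illustrative pattern for an APA-style t-test report, e.g. "t(28) = 2.30, p = .029".
    APA_T_TEST = re.compile(r"t\((?P<df>\d+)\)\s*=\s*(?P<t>-?\d+\.\d+),\s*p\s*=\s*(?P<p>0?\.\d+)")

    def check_reported_p(text, tolerance=0.001):
        """Yield (reported p, recomputed p, consistent?) for each t-test found in the text."""
        for match in APA_T_TEST.finditer(text):
            df = int(match.group("df"))
            t = float(match.group("t"))
            reported = float(match.group("p"))
            recomputed = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value from the t-distribution
            yield reported, recomputed, abs(reported - recomputed) <= tolerance

    sentence = "The effect was significant, t(28) = 2.30, p = .029."
    for reported, recomputed, consistent in check_reported_p(sentence):
        print(f"reported p = {reported}, recomputed p = {recomputed:.3f}, consistent: {consistent}")

Something this crude would only ever flag candidates for a human to look at again - but that is all a pre-peer-review filter would need to do.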

The finding was that around half of all published papers have at least one calculation error within them. That's not to say that half of all published papers were fundamentally wrong, such that their findings have to be thrown out of the window entirely. Nevertheless, it does highlight significant deficiencies in the peer review and editorial process, where such errors should be picked up. And while one miscalculation in a series may not be in itself significant, a number of miscalculations might spur suspicion as to the credibility of the findings more generally. Miscalculation also offers a glimpse into the mindset of the paper's author(s) and the processes that went into its production: have calculations been produced by one author alone, or by two authors independently to cross-check? Were calculations done on statistical software or by hand? And, most seriously, do miscalculations point to attempts to manipulate data to support a preconceived outcome?

In a time-pressured academic world, peer reviewers often take shortcuts. Among the many reasons peer review is flawed as a gate-keeping mechanism for excellence is that, even though reviews are technically blind, reviewers often look for an implicit sense of the unknown author's overall trustworthiness rather than scrutinising every single feature of the individual article in detail. Beyond exposing problems with the articles themselves, this is the revelation about peer review that may emerge from Statcheck. In the arts and humanities, peer review should ideally be based on an assessment of the clarity and reliability with which an author advances his or her claims, rather than on whether we agree with the claims themselves. To make an analogy with philosophical logic, we're looking for validity, not soundness. One of the basic functions of peer review is to get a feel for whether the author's argument is based on legitimate reasoning, even if its outcome is not one with which we concur. In assessing this, deficiencies in basic details may point to deeper structural or logical flaws in the author's thought processes.

The existence of Statcheck got me thinking about whether, in the arts and humanities, and English in particular, our published papers depend upon basic mechanisms similar to the p-value test and, if they do, whether the author's accuracy in using those mechanisms could be checked automatically as a prelude to peer review. Of course, even in the age of the digital humanities, the arts and humanities still tend to deal not in statistical data but in 'soft' rhetoric and argumentation. Still, are there any rough equivalents? And if so, could we envisage software capable of running papers through pre-publication tests (just as Statcheck now does) to get a general sense of the care authors have paid to the 'data' on which their argument depends, which might then cue peer reviewers or editors to pay closer attention to some of the deeper assumptions and the article's overall credibility?

Here are some very hypothetical, testing-the-waters assumptions about the sorts of quantifiable signals it might be useful to pick up programmatically (all of which we would like to think peer reviewers would notice anyway - but the lesson of Statcheck in experimental psychology suggests otherwise):
  • Quotation forms the bedrock of argumentation in the arts and humanities. As I constantly tell my students, if you have not quoted a primary or secondary text with absolute precision, how am I supposed to trust your arguments that depend upon that quotation? If someone is trying to persuade me about their reading of the sprung meter of a Gerard Manley Hopkins poem, but they have mistyped a key word in such a way that the meter is 'broken' in the quotation, this hardly looks good. A software tool that automatically checks the accuracy of quotations within papers and highlights errors would in many ways be an inversion of plagiarism-testing software: here we would be actively looking for a match between the quotation and the source. (I sketch what such a check might look like after this list.)
  • Similar to the above: the spelling of the titles of texts and of authors' names.
  • Referencing and citation are clearly important, and checking whether references - even or especially in a first draft - have been accurately compiled may highlight flaws in the author's record keeping.
  • Historical dates may provide another clue as to the author's own processes for writing and his or her strictness in self-verifying. In presenting a date in a paper, we may often be making a case for literary lineage, tradition, or the links between a text and its contexts. It matters that we get dates right. In not double-checking every date (for example, because an author thinks they know it off the top of their head), authors have missed a key step in the process. Erroneous dates may be a clue to problems in arguments that depend upon historical contingency.
  • If we're looking at novels in particular, there are key markers of place and character, and relationality within these, which need to be rendered precisely. To describe Isabella Linton as mother of Cathy Linton in Wuthering Heights or to write Thrushcross Grange when meaning the Heights might be easy mistakes. But these may also be symptomatic of an issue with the author's close (re)reading of the text. It should in principle be possible to apply computational stylistics to verify that an author really means who or what they refer to in the context of their writing.
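To test the waters a little further, here is a very rough Python sketch of the quotation check proposed in the first bullet above. It treats the task as the inverse of plagiarism detection: extract quoted spans from a paper, check whether each appears verbatim in a trusted source text, and, where it doesn't, show the closest passage so a reviewer can see what has changed. The regular expression, the sliding-window comparison and the function name check_quotations are assumptions made for the sake of the example, not a specification of a real tool.

    import re
    import difflib

    QUOTED = re.compile(r'"([^"]+)"')  # naive: assumes straight double quotation marks

    def check_quotations(paper_text, source_text):
        """For each quoted span in the paper, report whether it matches the source verbatim,
        and, if not, the closest passage in the source (to show what has changed)."""
        results = []
        for quotation in QUOTED.findall(paper_text):
            exact = quotation in source_text
            window = len(quotation)
            best_ratio, best_passage = 0.0, ""
            # Slide a window of the same length across the source, keeping the closest match.
            for start in range(max(1, len(source_text) - window + 1)):
                candidate = source_text[start:start + window]
                ratio = difflib.SequenceMatcher(None, quotation, candidate).ratio()
                if ratio > best_ratio:
                    best_ratio, best_passage = ratio, candidate
            results.append((quotation, exact, best_passage, best_ratio))
        return results

    # The opening line of Hopkins's 'The Windhover', with a deliberately mistyped quotation.
    source = "I caught this morning morning's minion, kingdom of daylight's dauphin"
    paper = 'Hopkins opens with "kingdom of daylights dauphin", a compound of royalty and light.'
    for quotation, exact, passage, ratio in check_quotations(paper, source):
        print(f"quotation: {quotation!r} -> verbatim in source: {exact}")
        if not exact:
            print(f"  closest source passage: {passage!r} (similarity {ratio:.2f})")

Real papers quote across multiple editions, with ellipses and silent modernisations, so a production version would be far harder to build than the sketch implies; but even a crude flag like this would tell a reviewer which quotations deserve a second look.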
I'm sure that there are more possibilities to add to this list - but I'm not sure that, even if we could devise programs to automatically parse papers for accuracy in areas like this (and it's a big if, for a host of technical reasons), it would ultimately be beneficial. Nevertheless, if peer review is a legacy mechanism for a pre-digital age, what harm in a little futuristic speculation now and again?

And, since I'm feeling cheeky, imagine if we could do a Statcheck on a whole mass of Arts and Humanities articles. Wouldn't it be deliciously gossipy to see just how many big name scholars make basic errors?


Why the OU is right not to enter the TEF

Thursday, February 02, 2017

Writing in the Times Higher, the vice-chancellor of the Open University, Peter Horrocks, has explained why the OU will not be entering the TEF in this initial cycle. His arguments are absolutely justified. Having seen some of the strategy documents floating around the institution prior to this decision, it's clear that the OU would have been attempting to bash the proverbial square peg into a rigid, round hole. Or perhaps the more accurate metaphor, given the OU's vast and amorphous student cohort, would be that of trying to nail jelly to the wall.

For the standard college-leaving, three-year undergraduate, success has a particular shape as the TEF construes it: completing the degree, and employment at the end of it. But the TEF simply doesn't account for the types of students the OU takes in, the journey they go on with us, and the many ways in which 'success' may occur in a typical six-year part-time degree.

Unfortunately, retention and degree completion are scores on which, on the face of it, the OU does quite badly. Only around 13% of those who start with us complete a degree. But while there are numerous ways in which the OU needs to improve its approaches to personalised teaching and support (there have been several recent pedagogically-destructive fiascoes that I won't go into here), this headline number does not mean that we fail our students on the whole.

Many of our part-time students don't complete their degrees for reasons over which we have very little or no control: disability and illness, family circumstances, a change in financial situation. Indeed, one common reason for not completing in my experience is a change in employment status. I've encountered many students who have begun studying part-time while working part-time in relatively low-paid jobs. Midway through, their OU modules have given them the confidence, transferable skills, and indicators of motivation and ability that lead employers to reward them with full-time work or promotions. They no longer have time to study and so drop out mid-degree. Perversely, the very outcome that the TEF wants to drive institutions to improve (students' employment prospects) is the thing that counts against the OU in a TEF measure of teaching excellence (retention).

Then there is the anecdotal evidence, which shows that success can come in shapes and forms that don't line up nicely with the columns of an Excel spreadsheet.

Consider the case of the student who came to one of my modules with a background of mental illness. This was a single parent, who had been in work but then stopped on health grounds. She studied the module. Failed. Studied it again. Passed. She left the institution at that point, because studying had served as a kind of therapy, and given her the confidence that she could dedicate herself to being the best possible parent to her kids by not going back to work, and that doing so was not a hallmark of her own inability.

Then there's the student who at school was told she was useless and would never succeed. She desperately wanted to go to university even so, but felt she was not good enough. She went into menial work, but then a few years later came to us. She studied one module at level 1, realised she was actually very good indeed, and left us to go to the brick university that she had always craved.

Or what about the student of a colleague, who was terminally ill? That student finished his module, and then shortly afterwards, and very sadly, passed away. Later, a friend of the student's told my colleague that he was convinced his friend had survived as long as he had because he wanted to complete his studies.

These are just three stories that immediately spring to my mind. If I dug through my back catalogue of students I've taught and farewell emails I've received, there would be many more. My colleagues could no doubt add many others still. They are touching, important cases - the ones that motivate us individually as educators, and that remind us that the OU is one of the most powerful social engineering tools the country possesses.

None of these things would 'count' towards the TEF; all these non-completions would count against the OU. But only the most statistically-minded, hard-nosed, market-driven minister could possibly think they are evidence of teaching failure. Unfortunately, in the absence of price indicators of quality in the rigged non-market of Higher Education, the TEF is designed to bundle an institution into a single rankable number that can be plucked from the shelves. But students and institutions are not numbers, and education is not always about employment or even getting a degree certificate. We need a TEF which allows for the uniqueness of each institution and its intake, and that counts students as humans, not beans.


How the Faiz Siddiqui case reveals the limitations of the TEF

Tuesday, December 06, 2016

The case of Faiz Siddiqui, who is suing Oxford University over his failure to achieve a First due to perceived poor teaching, is being met with a combination of incredulity and alarm in the popular and academic media. Many see this both as evidence that the rise of the student consumer is complete, and as a foretaste of things to come when the market model of education is further entrenched by the looming Teaching Excellence Framework. One other thing I think it offers, though, almost incidentally, is a sense of the limitations of the TEF as a way of improving teaching quality across the board.

What may seem remarkable to outside observers is that Siddiqui seems, objectively, like a success story. He attained a 2.1 from one of the world's top universities. He went on to train as a solicitor. On the key teaching metrics of the TEF - retention, degree classification, employability - he ticks the boxes. And yet these generic measures of teaching quality are not representative of his experience. As he argues in his claim for £1 million in lost earnings, the few marks between his 2.1 and a First mattered, defeating his dream of becoming a commercial barrister.

Whether Siddiqui has a valid case against his tutors and institution, and whether his teaching genuinely was poor, is a matter for the court to decide. However, it's also a provocation to reflect on teaching in general and the extent to which we support those who occupy a middle ground between absolute success (epitomised by the First-class Oxford candidate) and failure. I certainly do not buy into Jo Johnson's narrative that HE teaching is 'lamentable' (see Liz Morrish for an excellent critique of this, and other myths). However, if I am honest with myself, when I reflect on my practice with my own students from various institutions, with intakes at both the top and bottom of the student cohort, I tend to expend most energy on students at the extremes, who are picked up in TEF metrics: the obvious high fliers and those who are at risk of dropping out.

That brilliant student who emails me at two in the morning with some challenging question about Foucault's view of the literary author - he or she has got my ear.

That student who is scraping by or even failing - I will work hard with him or her and call on additional support to push, cajole, nudge or drag him or her over the line (this is something we do repeatedly, and on the whole very well, at the Open University).

It's students like Siddiqui who are most at risk of falling off my radar, given my humanly limited time and enthusiasm. The students who are all too easily missed are those who are, to borrow an economic metaphor, the just about managing.

While it may not always seem like it when labouring under the weight of groaning inboxes, a large proportion of the student body do get by without substantially 'bothering' their teachers with 'problems' or without overt appeals for motivational support. Sometimes students coast on their inherent intellect. Sometimes students fudge through with all-nighters. Most often they work genuinely hard and independently and pull themselves along through sheer abundance of effort. These are the students I think less about, the ones who never (for various reasons) explicitly reach out for help. Maybe this is because at the time we think of them as successful relative to the rest. It's only with hindsight, the sort that Siddiqui is bringing to bear in his court case 16 years after his graduation, that we might reflect on their failure relative to their own potential.

The just-about-managings are those we could do even more for if we had the resources to do so (something the TEF certainly won't correct). They are the mid-2.1 student who might, with a bit of pushing and proactive engagement, push on to a First. Or the student who is comfortably passing without ever quite excelling, even though there may be latent talent waiting to be unlocked. Despite the best of wills, these students are all too easily missed. Speaking to colleagues, I know I'm not alone in feeling this.

On its current basis, TEF will do nothing to incentivise universities to address this middle ground where really meaningful teaching can happen. VCs at the lower end will be very focused on boosting retention, progression and pass rates. At the OU for instance, with a very atypical student intake, our degree completion statistics look pretty poor - but they don't tell the whole story, as many students finish early because after a couple of years we've given them the confidence or skills to step back into mainstream education, or to boost their career. The OU and similar providers may need to offer more named exit points before degree level, to boost TEF metrics. However, this framework will also give more opportunities for the middling students to quit before their race is run, to let them go early with some success rather than pushing them as far as their talents will take them.

For Russell Group VCs, a different pressure applies. With so many students graduating with good (2.1 or higher) degrees, employability and differentiation in the marketplace come from additional extra-curricular opportunities. Sports, drama and volunteering are all vital parts of the student experience. But these are also opportunities for students who are not at risk of failure to divert into other things rather than pressing to excel academically. Despite their split attention spans, or indeed because of them, they are heralded as successful multitaskers ideally equipped for the work-hard, play-hard City of London.

The TEF is not a solution for the students who comprise the silent majority in the sector. Nor indeed is it meant to be. Its interest is in the bottom and top: forcing some institutions out of the market altogether, while bolstering the ability of elite universities to attract high-paying international students. It's an economic, not a pedagogic, tool. If we want to see genuine pedagogic gains, we need to think about what we do not for the marginal successes or failures, but for those who, having attained the baseline standard, could be pushed to make incremental gains. The TEF will make it harder, not easier, to concentrate our attention on these.


The content of this website is Copyright © 2009 using a Creative Commons Licence. One term of this copyright policy is that Plagiarism is theft. If using information from this website in your own work, please ensure that you use the correct citation.
