The Pequod
Dr Alistair Brown | Associate lecturer in English Literature; researching video games and literature

New Essay

Through exploring the psychopathology of Capgras syndrome, in which a patient mistakes a loved one for an imposter, The Echo Maker offers a sustained meditation on the ways in which we project our own problems onto other people. As a reflection on the mysteries of consciousness, the novel offers some interesting if not especially new insights into the fuzzy boundaries between scientific and literary interpretations of the mind.


When Publishers Own the (Dead) Author on Facebook

Tuesday, February 14, 2017

An interesting phenomenon I've just spotted on Facebook: major authors like Charlotte Brontë or Charles Dickens have their own verified pages - that is to say, pages confirmed by Facebook with the little blue tick as being "an authentic page for this public figure."


But who "owns" these pages? Follow the links from the About section, and you'll end up at Penguin Random House's own website, where naturally you can buy the author's books. Evidently these pages are managed not by some altruistic-minded eager reader, but by the publishing conglomerate.

The content of these pages seems generally good: there is lots of community discussion and informative link sharing. It's not just a stream of posts inviting you to buy the latest Random House edition. 

Nevertheless, these publications do feature heavily - though since many of these are by imprints such as Vintage, which are ultimately owned by Random House, it would be easy to miss that the page owner is solely promoting its own works. It's also questionable that pages such as the Jane Austen one are badged as being "maintained by Jane Austen's U.S. & U.K. publisher Vintage Books" when, of course, Austen has many US and UK publishers, and indeed her works are available free via the likes of Gutenberg.

The way in which Facebook presents such pages as being the authentic location - "authentic" carrying the whiff of objectivity - raises ethical questions. Is it right that a publisher can colonise the long-dead author, and piggyback on his or her identity as a sales route? If readers are landing on these pages as the top results on Facebook (which most would do, as these are the unique, verified accounts), are they missing news of interesting books released by competing publishers? How are the news feeds being steered so that what looks to be a fan site actually ties in with a wider publishing (and economic) agenda?

Of course, I've no objection to publishers using Facebook to promote their activities, nor to publishers hosting fan sites for authors. But to hide behind the persona of the author, curating his or her historical identity in the twenty-first century when the ultimate aim is presumably to sell more texts, makes me uneasy. Is anyone with me on this?


Can we imagine a Statcheck for the arts and humanities?

Wednesday, February 08, 2017

Here's a wondering for a Wednesday. Can we imagine having software tools in the arts and humanities that do some of the dirty work of fact and data checking ahead of peer review?

The inspiration for this comes from the stir that has been created recently in the sciences - especially experimental psychology - by a tool called Statcheck. Experimental psychology often depends upon applying p-value assessments to data, to determine whether findings are statistically significant or simply the result of experimental bias or background noise. Statcheck was a program devised at Tilburg University, which automatically scanned a massive set of 250,000 published papers, recalculated the p-values within them, and checked whether the researchers had made errors in their original calculations.

The finding was that around half of all published papers contained at least one calculation error. That's not to say that half of all published papers were fundamentally wrong, such that their findings have to be thrown out of the window entirely. Nevertheless, it does highlight significant deficiencies in the peer review and editorial process, where such errors should be picked up. And while one miscalculation in a series may not be significant in itself, a number of miscalculations might spur suspicion as to the credibility of the findings more generally. Miscalculation also offers a glimpse into the mindset of the paper's author(s) and the processes that went into its production: have calculations been produced by one author alone, or by two authors independently to cross-check? Were calculations done on statistical software or by hand? And, most seriously, do miscalculations point to attempts to manipulate data to support a preconceived outcome?
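To make the principle concrete, here is a minimal sketch of the kind of consistency check Statcheck performs. Everything here is illustrative: `check_reported_p` is a hypothetical function of my own, and it handles only simple z-tests reported in an APA-like style, whereas the real Statcheck parses t, F, chi-square and other statistics from published papers.

```python
import re
from statistics import NormalDist

def check_reported_p(text, tolerance=0.01):
    """Scan text for reports of the form 'z = X, p = Y', recompute the
    two-sided p-value from the z statistic, and flag mismatches.
    Returns a list of (z, reported_p, recomputed_p, consistent) tuples."""
    pattern = re.compile(r"z\s*=\s*(-?\d+\.\d+)\s*,\s*p\s*=\s*(\.\d+)")
    findings = []
    for match in pattern.finditer(text):
        z, reported_p = float(match.group(1)), float(match.group(2))
        # Two-sided p-value under the standard normal distribution.
        recomputed_p = 2 * (1 - NormalDist().cdf(abs(z)))
        consistent = abs(recomputed_p - reported_p) <= tolerance
        findings.append((z, reported_p, round(recomputed_p, 4), consistent))
    return findings
```

Run over a passage reporting "z = 2.50, p = .05", this would flag the result, since the recomputed two-sided p-value is roughly .012.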

In a time-pressured academic world, peer reviewers often take shortcuts. Among the many reasons peer review is flawed as a gate-keeping mechanism for excellence, we know that even though reviews are technically blind, reviewers often rely on an implicit feeling about the unknown author's overall trustworthiness rather than scrutinising every single feature of the individual article in detail. Beyond exposing problems with the articles themselves, this is a revelation about peer review that may emerge from Statcheck. In the arts and humanities, peer review should ideally be based on an assessment of the clarity and reliability with which an author advances his or her claims, rather than whether we agree with the claims themselves. To make an analogy with philosophical logic, we're looking for validity, not soundness. One of the basic functions of peer review is to get a feel for the author's argument as being based on legitimate reasoning, even if the outcome of that argument is not one with which we concur. In assessing this, deficiencies in basic details may point to deeper structural or logical flaws in the author's thought processes.

The existence of Statcheck got me thinking about whether, in the arts and humanities, and English in particular, our published papers depend upon similar basic mechanisms like the p-value test and, if they do, whether the author's accuracy in using those mechanisms could be checked automatically as a prelude to peer review. Of course, even in the age of the digital humanities, the arts and humanities still don't tend to deal in statistical data but rather in 'soft' rhetoric and argumentation. Still, are there any rough equivalents? And if so, could we envisage software capable of running papers through pre-publication tests (just as Statcheck now does) to get a general sense of the care authors have paid to the 'data' on which their argument depends, which might then cue peer reviewers or editors to pay closer attention to some of the deeper assumptions and the article's overall credibility?

Here are some very hypothetical, testing-the-waters assumptions about the sorts of quantifiable signals it might be useful to pick up programmatically (all of which we would like to think peer reviewers would notice anyway - but the lesson of Statcheck in experimental psychology suggests otherwise):
  • Quotation forms the bedrock of argumentation in the arts and humanities. As I constantly tell my students, if you have not quoted a primary or secondary text with absolute precision, how am I supposed to trust your arguments that depend upon that quotation? If someone is trying to persuade me about their reading of the sprung rhythm of a Gerard Manley Hopkins poem, but they have mistyped a key word in such a way that the rhythm is 'broken' in the quotation, this hardly looks good. A software tool that automatically checks the accuracy of quotations within papers, and highlights errors, would in many ways be an inversion of plagiarism-testing software: here we would be actively looking for a match between the quotation and the source.
  • Similar to the above: the spelling of titles of texts and authors' names.
  • Referencing and citation are clearly important, and checking whether references - even or especially in a first draft - have been accurately compiled may highlight flaws in the author's record keeping.
  • Historical dates may provide another clue as to the author's own processes for writing and his or her strictness in self-verifying. In presenting a date in a paper, we may often be making a case for literary lineage, tradition, or the links between a text and its contexts. It matters that we get dates right. In not double-checking every date (for example, because they think they know it off the top of their head), authors have missed a key step in the process. Erroneous dates may be a clue to problems in arguments that depend upon historical contingency.
  • If we're looking at novels in particular, there are key markers of place and character, and relationality within these, which need to be rendered precisely. To describe Isabella Linton as mother of Cathy Linton in Wuthering Heights or to write Thrushcross Grange when meaning the Heights might be easy mistakes. But these may also be symptomatic of an issue with the author's close (re)reading of the text. It should in principle be possible to apply computational stylistics to verify that an author really means who or what they refer to in the context of their writing.
I'm sure that there are more possibilities to add to this list - but I'm not sure that, even if we could devise programs to automatically parse papers for accuracy in areas like this (and it's a big if, for a host of technical reasons), it would be ultimately beneficial. Nevertheless, if peer review is a legacy mechanism for a pre-digital age, what harm in a little futuristic speculation now and again?
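Taking the first item on the list above as an example, a quotation checker could be sketched as fuzzy string matching against a trusted source text. This is a toy illustration only: `check_quotation` is a hypothetical function, and it assumes you already have the right edition of the source as plain text - locating editions, handling ellipses, and normalising historical spelling would be the genuinely hard problems.

```python
import difflib

def check_quotation(quotation, source_text, threshold=0.95):
    """Slide a quotation-sized window across the source text and score
    the best character-level match, so that a faithful quotation scores
    1.0 and a mistyped one falls below it."""
    n = len(quotation)
    best = 0.0
    for i in range(max(1, len(source_text) - n + 1)):
        window = source_text[i:i + n]
        best = max(best, difflib.SequenceMatcher(None, window, quotation).ratio())
    return {"accurate": best >= threshold, "similarity": round(best, 3)}
```

An exact quotation scores a similarity of 1.0; a misquoted word lowers the score, and anything under the threshold could be queued for the reviewer's attention.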

And, since I'm feeling cheeky, imagine if we could do a Statcheck on a whole mass of Arts and Humanities articles. Wouldn't it be deliciously gossipy to see just how many big name scholars make basic errors?


Why the OU is right not to enter the TEF

Thursday, February 02, 2017

Writing in the Times Higher, the vice-chancellor of the Open University, Peter Horrocks, has explained why the OU will not be entering the TEF in this initial cycle. His arguments are absolutely justified. Having seen some of the strategy documents floating around the institution prior to this decision, it's clear that the OU would have been attempting to bash the proverbial square peg into a rigid, round hole. Or perhaps the more accurate metaphor, given the OU's vast and amorphous student cohort, would be that of trying to nail jelly to the wall.

For the standard college-leaving, three-year undergraduate, success has a particular shape as far as TEF construes it: completing the degree and employment at the end of it. But the TEF simply doesn't account for the types of students the OU takes in, the journey they go on with us, and the many ways in which 'success' may occur in a typical six-year part-time degree.

Unfortunately, retention and degree completion are scores on which, on the face of it, the OU does quite badly. Only around 13% of those who start with us complete a degree. But while there are numerous ways in which the OU needs to improve its approaches to personalised teaching and support (there have been several recent pedagogically-destructive fiascoes that I won't go into here), this headline number does not mean that we fail our students on the whole.

Many of our part-time students don't complete their degrees for reasons over which we have little or no control: disability and illness, family circumstances, a change in financial situation. Indeed, one common reason for not completing, in my experience, is a change in employment status. I've encountered many students who have begun studying part-time while working part-time in relatively low-paid jobs. Midway through, their OU modules have given them the confidence, transferable skills, and indicators of motivation and ability that lead employers to reward them with full-time work or promotions. They no longer have time to study and so drop out mid-degree. Perversely, the very outcome that TEF wants to drive institutions to improve, students' employment prospects, is the thing that counts against the OU in a TEF measure of teaching excellence, retention.

Then there is the anecdotal evidence which shows that success can come in shapes and forms that don't line up nicely with the columns of an Excel spreadsheet.

Consider the case of the student who came to one of my modules with a background of mental illness. This was a single parent, who had been in work but then stopped on health grounds. She studied the module. Failed. Studied it again. Passed. She left the institution at that point, because studying had served as a kind of therapy, and given her the confidence that she could dedicate herself to being the best possible parent to her kids by not going back to work, and that doing so was not a hallmark of her own inability.

Then there's the student who at school was told they were useless and would never succeed. She desperately wanted to go to university even so, but felt she was not good enough. She went into menial work, but then a few years later came to us. She studied for one module at level 1, realised she was actually very good indeed, and left us to go to the brick university that she had always craved.

Or what about the student of a colleague, who was terminally ill? That student finished his module, and then shortly afterwards, and very sadly, passed away. Later, a friend of the student told my colleague that he was convinced his friend had survived as long as he had because he wanted to complete his studies.

These are just three stories that immediately spring to my mind. If I dug through my back catalogue of students I've taught and farewell emails I've received, there would be many more. My colleagues could no doubt add many others still. They are touching, important cases - those that motivate us individually as educators, and that remind us that the OU is one of the most powerful social engineering tools the country possesses.

None of these things would 'count' towards the TEF; all these non-completions would count against the OU. But only the most statistically-minded, hard-nosed, market-driven minister could possibly think these are evidence of teaching failure. Unfortunately, in the absence of price indicators of quality in the rigged non-market of Higher Education, TEF is designed to bundle an institution into a single rankable number that can be plucked from the shelves. But students and institutions are not numbers, and education is not always about employment or even getting a degree certificate. We need a TEF which allows for the uniqueness of each institution and its intake, and that counts students as humans, not beans.


The content of this website is Copyright © 2009 using a Creative Commons Licence. One term of this copyright policy is that Plagiarism is theft. If using information from this website in your own work, please ensure that you use the correct citation.
