Models of Literary Evaluation and Web 2.0. An Annotation Experiment with Goodreads Reviews

Abstract

In the context of Web 2.0, user-generated reviews are becoming increasingly prominent. The particular case of book reviews, often shared through digital social reading platforms such as Goodreads or Wattpad, is especially interesting, in that it offers scholars data on literary reception of unprecedented size and diversity. In this paper, we test whether the evaluative criteria employed in Goodreads reviews can be included in the framework of traditional literary criticism, by combining literary theory and computational methods. Our model, based on the work of von Heydebrand and Winko, is first tested through the practice of heuristic annotation. The generated dataset is then used to train a Transformer-based classifier. Finally, we compare the performance of the latter with that obtained by instructing a Large Language Model, namely GPT-4.