In a nutshell, the core of this work consisted in identifying, through ethnographic observations of experts at work, a range of strategies that experts deploy when reading and studying ancient documents; these observations were then tentatively related to results reported in the cognitive sciences.
Seeking to continue this work, I was recently able to secure funding from the OUP Fell Fund to design experiments testing the validity of the intuited correspondences between my ethnographic observations and findings from the cognitive sciences literature.
Here are the title and abstract of our project proposal:
Cognitive Underpinnings of Reading Handwritten Scripts: Investigating Variations for Applications in Digital Palaeography (CURHSIVA-DP)
This event is free of charge; all welcome!
[In order to help with numbers for catering, please email: segolene.tarte [at] oerc.ox.ac.uk to confirm attendance – Thank you]
The information below is also available for download (A5 recto-verso flyer; A3 poster; A4 – programme only, without the overview; A5 abstracts booklet – with the available abstracts of the talks).
************
Interpreting Textual Artefacts:
Cognitive Perspectives and Digital Support for Knowledge Creation
Dates: 11th & 12th December 2012 (early afternoon 11th to tea time 12th)
Place: University of Oxford – Lecture theatre at the Ioannou Centre for Classical and Byzantine Studies, 66 St Giles’, Oxford OX1 3LU
Overview:
The reading of textual artefacts in cuneiform studies (Assyriology), papyrology, epigraphy, palaeography, and mediaeval studies is at the core of the creation (discovery, invention) of new knowledge of past cultures and civilizations. The artefacts in themselves are devoid of meaning, and it is their interpretation and re-interpretation that contextualizes them and turns them into conveyors of knowledge.
This colloquium convenes scholars from a wide-ranging selection of fields in order to explore how knowledge is created through the act of interpretation of ancient documents. Each session puts into dialogue the work and methods, both digital and more traditional, of ancient documents scholars with findings from the cognitive sciences around the processes involved in the act of interpretation of ancient documents.
Through this event, we aim to gain a better-integrated view of the cognitive processes involved in the interpretation of ancient documents as well as some ways of supporting them and facilitating them digitally.
Programme:
11th December 2012

- 14:00 – 14:15: Welcome
- 14:15 – 15:45: Materiality – Chair: Dr Dirk Obbink (Classics, Oxford); 3×20min talks + 30min discussion
  - Prof. Chris Gosden (Archaeology, Oxford), “Beyond Art and Agency: The Sensory Impacts of Objects”
  - Dr Kathryn Piquette (TOPOI, FU Berlin), “The Impact of Reflectance Transformation Imaging …”
  - Prof. Glyn Humphreys (Neuroscience, Oxford), “The architecture of visual object recognition and attention”
- 15:45 – 16:15: Tea/coffee break
- 16:15 – 17:15: Kinaesthetic – Chair: Dr Ségolène Tarte (e-Research Centre, Oxford); 2×20min talks + 20min discussion
  - Dr Dominique Stutzmann (Institut de Recherche et d’Histoire des Textes, Paris), “Reading the movement: morphology, ductus, and reading skills for medieval Latin scripts”
  - Prof. Marieke Longcamp (Neurosciences …), “Contribution of Writing …”
- 17:15 – 18:30: Drinks reception

12th December 2012

- 10:00 – 11:00: Word – Chair: Dr Arietta Papaconstantinou (Classics, Reading); 2×20min talks + 20min discussion
  - Prof. Alan Bowman (Classics, Oxford), “The grammar of legibility in Latin cursive writing”
  - Prof. Laurent Cohen (CHU Pitié-Salpêtrière, Paris), “The reading brain, neuropsychology and …”
- 11:00 – 11:30: Tea/coffee break
- 11:30 – 12:30: Structural – Chair: Dr John Lowe (Linguistics, Oxford); 2×20min talks + 20min discussion
  - Dr Klaus Wagensonner, “Lexical texts in perspective. On the implementation of lexical information in bilingual narratives”
  - Dr Richard Tunney (Psychology, Nottingham), “Implicit learning of artificial and natural languages”
- 12:30 – 13:30: Lunch
- 13:30 – 15:00: Creativity – Chair: Prof. David de Roure (e-Research Centre, Oxford); 3×20min talks + 30min discussion
  - Dr Peter Stokes (Digital Humanities, KCL), “Communicating Palaeography Across and Beyond the Discipline(s)”
  - Dr Susana Avila Garcia (e-Research Centre, Oxford), “Exploring the use of interactive surface technologies to support image analysis researchers”
  - Prof. David Kirsh (Cognitive Sciences, UC San Diego), “Interactively interpreting: solo and distributively”
- 15:00 – 15:30: Tea/coffee break
- 15:30 – 16:30: Round table
This is a joint event between Oxford’s Centre for the Study of Ancient Documents and the e-Research Centre. It has been made possible thanks to funding from the Arts and Humanities Research Council, through my early-career research fellowship, of which it is the closing event.
- Digital Humanities Congress at Sheffield (6th-8th Sept 2012)
- Digital Research at Oxford (10th-12th Sept)
- Perspectives workshop on Computing and Palaeography at Schloss Dagstuhl, Germany (18th-21st Sept)
- Materiality of Texts conference at Durham (24th-26th Sept)
I have to say that I started that month exhausted and was really wondering how I’d ever make it through. It turns out it was amazingly energising, thanks to all the fruitful exchanges I had with people from a wide variety of backgrounds.
Here are tidbits of the interesting points that emerged throughout this month, dense in both content and networking:
- Cognition is on everyone’s lips. My approach to understanding the act of interpretation of ancient documents has become more and more involved with checking the cognitive sciences literature throughout the fellowship, and five major themes have emerged (see the preliminary announcement of the colloquium I’m convening in Dec). While discussing these themes at the Digital Humanities Congress, I realised that people are not only interested in how they do things but also intrigued as to what it involves cognitively. And all of them want to do it better; for that, they need to understand how they’re doing things now (well, that is what expertise is made of, after all: cognition and metacognition!). The most frequent feedback I had on my presentations (e.g. this one) was: “It made me think about how I do things”. That was very gratifying, and definitely encouraging as far as the landscape of future research, beyond the fellowship, is concerned.
- What I’m doing is not really akin to cognitive archaeology, although I can see how some confusion can arise. As I understand it, cognitive archaeology tries to unpack what was happening in the minds of people past; in contrast, what I’m doing is trying to unpack what happens in present people’s minds when they attempt to understand the past. I believe the distinction to be significant. Even if cognitive archaeology is an interesting approach, I’m not entirely sure how tractable the questions cognitive archaeologists ask about people past are. In contrast, I have direct access to the people creating/discovering/accessing knowledge about the past, and my focus is on them – what are they doing, how are they doing it, what might help them? Yes, there is some kind of connection between the two, but it is rather tenuous, and if made, it needs to be made much further downstream from the results emerging from my research; it should relate to the content of the knowledge scholars create, rather than directly to their cognitive processes.
- More multi-disciplinarity is the way to go, but more “in-betweeners” are needed. And this is not just another case of putting C.P. Snow’s Two Cultures theory forward. On the contrary: it’s a rebuttal of it. We’re in a Multitude of Cultures landscape, and that was glaringly obvious at Dagstuhl. Each discipline, each field, each sub-field has developed its own terminology, sometimes borrowing from the others and subverting the borrowed meanings, or just evolving the meanings in different (if sometimes divergent) ways. Take the word “feature”. In a palaeographical context, the word will, for example, serve to describe the remarkable characteristics of the end of a stroke in a given scribal school; in image and signal processing terms, it describes the behaviour of a curve or surface where changes occur in the (n-th order) derivatives (like a step or a ridge). There can thus be some overlap between palaeographical features and image processing features, but there are also contexts where they definitely do not designate the same object. And actually, even within palaeography, the word “feature” might signify something different depending on the kind of script being studied. Another striking example is the word “ontology”, used in AI and in philosophy. Here again there are overlaps, but most philosophers will cringe at the AI use of the word “ontology”. These simple examples show concretely how important it is to have “in-betweeners”: people who are acquainted with the various domain-specific terminologies, who will be able to identify when such terminological issues occur, and who can take on the role of translator so that scholars can understand each other across domains.
- Asking questions vs. giving answers. Under this heading, I’m referring to styles of scholarly approach in research. I had a real epiphany moment at Dagstuhl when I realised what was going on. At times, it felt like there were hiccoughs in the communication, but not hiccoughs that could be blamed on misaligned terminologies. I couldn’t quite put my finger on it; it felt like people were getting frustrated because they weren’t heard. I pondered, and it eventually dawned on me. The tradition in computer science is to find hard questions and then to answer them. The emphasis is often put on the answering: answers act as deliverables; that’s what computer scientists do, they answer hard questions. In the palaeographical world, as well as in papyrology, epigraphy, and Assyriology, with which I am familiar (I can’t really speak for other subjects), the tradition and the emphasis are on the questions. What is of paramount importance is to ask the right questions. They can be hard questions (and they often are), and of course answering them is important, but the research process always puts the accent on the next question. No wonder people were slowly getting frustrated: on the one hand you had the “answer-ers” and on the other the “question-ers”. A dialogue where each question is met by an answer and each answer by another question might just never end! I haven’t developed any tricks for handling this yet, but it’s really important to be aware of it – and again, that’s the kind of situation where having an “in-betweener” can come in really useful.
- Presentation styles unpacked. With my background in the so-called hard sciences, I was always taught never, ever, in a million years, to read my presentations, and to always, always have slides – even in a foreign language: for my very first academic presentation, at a German conference, when my German was shaky at best, I spent a lot of time practising not to read (reading really was a big no-no). My incursions into the Humanities have taught me differently. It is, again, a question of culture. In the eyes of their peers, speakers in textual scholarship who read their presentations are perceived as mastering their subject and as having a definite point to make, with some possibly complex arguments to put across; whereas in the image processing community (and in medical imaging and computer sciences in general), a speaker who reads is perceived by their peers as insecure about their subject. And towards the end of September, after lengthy and detailed discussions with palaeographers, papyrologists, and epigraphers, I discovered this: for textual scholars, presentations are built just like written articles – the slides are the illustrations, the text, which is (most of the time) read, is the main text of the article, and the handouts (if any) are the footnotes. To me, what is striking about this is that it absolutely makes sense, yet it doesn’t seem to take full advantage of the orality of the mode of communication. This cultural diversity usually shows at Digital Humanities conferences: some presentations are more read than others, usually depending on the traditional subject the presentation is rooted in, and some presentations are hybrids of reading and talking, showing signs of the mutual influences of the disciplines they bring together. (I haven’t broached the question of the language the talk is given in here; that would likely yield a whole discussion on yet another type of cultural difference!)
To be fair, each one of these points would probably be worth a full post, but with deadlines looming and a colloquium to prepare, I thought I’d try to evoke these topics in broad brushstrokes.
Have you encountered any occurrences such as those in points 3, 4, and 5? Have you developed tips and tricks to deal with them?
More details will be posted here soon, but in the meantime, here is an overview.
Interpreting Textual Artefacts: Cognitive Perspectives and Digital Support for Knowledge Creation
Colloquium
Dates: 11th & 12th December 2012 (early afternoon 11th to tea time 12th)
Place: University of Oxford (Lecture theatre at the Ioannou Centre for Classical and Byzantine Studies, 66 St Giles’ Oxford OX1 3LU)
Overview
The reading of textual artefacts in cuneiform studies (Assyriology), papyrology, epigraphy, palaeography, and mediaeval studies is at the core of the creation (discovery, invention) of new knowledge of past cultures and civilizations. The artefacts in themselves are devoid of meaning, and it is their interpretation and re-interpretation that contextualizes them and turns them into conveyors of knowledge.
This colloquium aims to convene scholars from a wide-ranging selection of fields in order to explore how knowledge is created through the act of interpretation of ancient documents. Each session will put into dialogue the work and methods, both digital and more traditional, of ancient documents scholars with findings from the cognitive sciences around the processes involved in the act of interpretation of ancient documents.
Through this event, we aim to gain a better-integrated view of the cognitive processes involved in the interpretation of ancient documents as well as some ways of supporting them and facilitating them digitally. As a tangible outcome of this colloquium, the presented papers will be gathered into an edited volume.
This 1.5-day colloquium will be organized around the following five sessions:
- Materiality and visual perception
- Kinaesthetic engagement in reading
- Word identification
- Structural knowledge and context
- Creativity and collaboration
Do mark the dates! All welcome!
What do expertise and expert performance have to do with this digital curation of knowledge creation project?
Well, if I can understand better what expertise is, where it comes from, and how it is constituted, I will have armed myself with a very valuable tool to identify how experts in the reading of ancient documents can be digitally supported in their task.
The Cambridge Handbook of Expertise and Expert Performance [1] is a very informative and rich book, which gathers findings on and around expertise as studied from various points of view: from Psychology, from Artificial Intelligence, from the Cognitive Sciences. Here are the highlights of my readings, presented, of course, within the framework of my own interests, that is, my aim to develop a piece of software that experts working with inscribed artefacts (including papyrologists and ancient Near East scholars) will find useful in conducting their research.
How to define Expertise?
Naturally, expertise is defined with respect to a specific domain; experts are specialists who excel in a given domain. Yet regardless of the domain, there are two stances from which to define expertise, and thereby experts. The first states that expertise is a talent; this is the absolute approach, where experts are identified as those who produce exceptional results. The other states that expertise is characterized by a high level of proficiency; this is the relative approach, where experts are those whose achievements and experience are greater than those of novices [2]. Within that scope, Hoffman [3] defined the following proficiency scale (analogous to the craftsmanship stages established in mediaeval times):
| 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Naivette | Novice | Initiate | Apprentice | Journeyman | Expert | Master |
More generally, anyone in the range from 1 to 4 on this scale is considered a novice (0 corresponds to a person completely ignorant of the studied domain), while 5 and 6 are experts. These two stances are, in my opinion, complementary: the second is pragmatic, and by no means excludes the presence of talent.
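To fix that convention in a reusable form, here is a trivial sketch; the naming and the helper are mine, nothing beyond the scale itself is prescribed by Hoffman:

```python
from enum import IntEnum

class Proficiency(IntEnum):
    """Hoffman's proficiency scale as an ordered type."""
    NAIVETTE = 0
    NOVICE = 1
    INITIATE = 2
    APPRENTICE = 3
    JOURNEYMAN = 4
    EXPERT = 5
    MASTER = 6

def is_expert(level: Proficiency) -> bool:
    # Levels 1-4 count as novices in the broad sense; 5 and 6 as experts.
    return level >= Proficiency.EXPERT

print(is_expert(Proficiency.JOURNEYMAN))  # False
print(is_expert(Proficiency.MASTER))      # True
```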
In terms of what expertise intrinsically is and requires, the understanding of expertise is rather fluid, not only evolving with time and new research results, but also dependent on the domain, on the context, and on the intention behind the efforts to define it.
How to study Expertise?
First, why would one want to study expertise? Given that expertise is a mixture of talent and ability, understanding what that talent and/or ability are made of is one way of informing how novices are taught to become experts. Another objective in the study of expertise is to build computational models that can either emulate or support experts.
The trickiness of studying expertise as a general concept resides in the fact that expertise cannot be dissociated from the domain in which it exists. In the next section, I will present in detail the general traits that scholars have nevertheless been able to identify in experts; in this section, I will restrict myself to presenting the strategies that have been developed to study and identify those traits. These methods can be organized along an axis ranging from unstructured to structured, where I use the term structure to refer to the presence (or not) of a predefined workflow designed by the investigator for the experts being studied, rather than as a qualitative appraisal of the method itself. At the far end of the “unstructured” range of methods, one finds ethnographic studies of expertise; at the far end of the “structured” range, one finds sets of specific tasks defined and completed in lab settings and viewed as characteristic/representative of the tasks experts undertake in their “natural settings”. Many of these methods can be combined, and most of them set their focus on what is widely called Cognitive Task Analysis (CTA) [4] – by contrast with Behavioural Task Analysis (BTA).
- Ethnographic study (BTA – CTA); based mostly on observation of experts in their “natural settings”; uses contrast frameworks (practice vs process – behaviour vs function – activity vs task – invisible vs overt – documentation vs literal account – knowledge application vs tension resolution) and Multiple Perspective Analysis (from the point of view of persons, objects, settings, tasks, communities, temporality, networks of all that precede) [5]
- Unstructured interview (CTA)
- Think Aloud Protocol (CTA); experts explain what they do and why while they do it
- Retrospective task analysis (CTA); experts recount one or several specific tasks that they have conducted in the past, and how/why
- Structured interview (CTA)
- Constrained-processing and limited-information tasks (CTA)
- Predetermined characteristic tasks (CTA)
Depending on how those methods are applied or combined, they can yield:
- critical decision maps (e.g. decision trees, coded transcripts using annotations such as: appraisal, cue, action, deliberation, contingency, meta-cognition – a hypothetical illustration follows this list)
- work domain analyses (e.g. abstraction-decomposition matrix)
- concept maps (e.g. relational diagrams between concepts and propositions)
- psychometric scores
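To give a concrete flavour of the first kind of outcome, here is what a fragment of a coded think-aloud transcript might look like when represented as data; the utterances, the coding, and the representation itself are entirely invented for illustration:

```python
from collections import Counter

# Hypothetical fragment of a coded think-aloud transcript: each expert
# utterance is tagged with one of the annotation codes listed above.
coded_transcript = [
    ("This downstroke looks too curved for a 't'...", "appraisal"),
    ("...but the ligature into the next letter is a strong hint.", "cue"),
    ("Let me check the plates of other tablets from the same site.", "action"),
    ("If it is an 'r', the word would have to be a personal name.", "deliberation"),
    ("Unless the scribe was correcting himself here.", "contingency"),
    ("I notice I always hesitate on this scribe's hand.", "meta-cognition"),
]

# Tallying the codes gives a crude profile of the reasoning style,
# which could then be compared across experts and novices.
print(Counter(code for _, code in coded_transcript))
```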
So, in some sense, the study of expertise is the search for a model of expertise suited to the intended application – a model that can guide whoever uses it in their endeavour, be it teaching or the building of expert systems. Further possible outcomes, based on the list above, are knowledge bases and domain ontologies (as understood by computer scientists).
Knowledge bases and domain ontologies are often part of expert systems. When it comes to building expert systems that imitate human thought, or at least that have some cognitive ability, the general view is to adopt a model of expertise made of two main components [6] (a toy sketch follows the list below):
- a knowledge base, where knowledge can be categorized as follows:
- factual knowledge – composed of textbook knowledge and “common” knowledge in the specific domain
- heuristic knowledge – made of more person-specific experience-based knowledge
- a knowledge representation and reasoning (KRR) framework (e.g. an ontology), which can be made of (possibly a selection amongst):
- a set of production rules – similar to First-Order Logic rules, but accommodating uncertainty
- a structured object or schema that defines a taxonomy of the domain
- a problem solving model, such as inference engines
- an analogical reasoning scheme, like those developed in Machine Learning
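As a purely illustrative sketch of how these two components might interact, here is a toy forward-chaining fragment with MYCIN-style certainty factors; the propagation rule, the palaeography-flavoured propositions, and all the numbers are my own assumptions, not anything prescribed by the Handbook:

```python
# Toy expert-system fragment: a factual knowledge base plus heuristic
# production rules carrying certainty factors. All facts, rules, and
# numbers are invented for illustration.

facts = {  # proposition -> certainty in [0, 1]
    "stroke_has_serif": 0.9,
    "ink_is_carbon_based": 0.6,
}

rules = [  # (premises, conclusion, rule certainty)
    (["stroke_has_serif"], "formal_bookhand", 0.7),
    (["formal_bookhand", "ink_is_carbon_based"], "early_dating_plausible", 0.5),
]

def forward_chain(facts, rules):
    """Fire rules repeatedly until no conclusion gains certainty."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion, cf_rule in rules:
            if all(p in facts for p in premises):
                # Conclusion certainty: weakest premise, scaled by the rule's certainty.
                cf = min(facts[p] for p in premises) * cf_rule
                if cf > facts.get(conclusion, 0.0):
                    facts[conclusion] = cf
                    changed = True
    return facts

print(forward_chain(dict(facts), rules))
# -> includes formal_bookhand: 0.63 and early_dating_plausible: 0.3
```

Even at this toy scale, the seams show: the “knowledge base” (the facts) and the “KRR framework” (the rules plus the propagation scheme) are already entangled – which foreshadows the difficulty of separating them that I mention just below.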
In keeping with the scientific tradition, this breaking up into smaller components aims to facilitate computational handling. I will comment in a future post on my take on KRR, on how the separation between knowledge bases and KRR might sometimes be difficult to make, and on the problems that might occur when it comes to encoding and codifying knowledge and reasoning.
Despite the variety of ways in which expert knowledge can be studied, it is essential to remember that:
“Expertise is not just about inference applied to facts and heuristics, but about being a social actor.” [5, p. 127]
Expertise is a situated activity. Thus, to understand expertise, and when undertaking knowledge elicitation in order to build expert systems, it is essential for knowledge engineers to keep in mind that they are undertaking “an epistemological enterprise” [6, p. 91] as much as an ethnographic study [5], in which cultural and social contexts participate in the building of expertise.
What characterizes Experts?
Following this exposition of the strategies developed to study expertise, here is now an attempt at summarizing the characteristics that experts exhibit, according to the Handbook [1]. I’m not claiming it’s an impartial view: I’ve tried to categorize the characteristics evoked and described in each of the chapters in such a way that they can be seen as abstract expressions of specific traits [2, 6-10].
- Experience. Experts have accumulated years of practice, which enable them to be efficient and to automate some tasks. This also entails that shortcuts have been devised and that some processes have been internalized and become implicit.
- Acquired knowledge. Experts know a lot in their domain. This can be assimilated to the cognitive ability known as crystallized intelligence (aka Gc in the psychometrics framework); it is a capacity to store data and facts relevant to their domain, related to Long-Term Working Memory (LTWM).
- Knowledge organization and retrieval. Experts structure their knowledge. They are able to identify salient features and organize their knowledge into meaningful cognitive units, facilitating the dialogical relationship between LTWM and information retrieval (also considered part of Gc). The very definition of what saliency is for a feature depends on the domain and on what the best “handle” on the data is – where I’m using the term “handle” to express both the reachability and the representativeness of the data based on a feature.
- Modelling. Experts spend a lot of time assessing the problem at hand qualitatively, turning it into an abstract, conceptual problem that can share properties with other, already encountered problems, thus enabling them to recall strategies to solve the current one.
- Reasoning. Reasoning involves juggling with the knowledge and models at hand, possibly at a symbolic level. It involves strategies such as data-driven reasoning and hypothesis-driven reasoning (which seems to be used more by novices than by experts). It calls upon inferences (induction, abduction, deduction), analogy, consistency checking, counter-factual reasoning, and the handling of constraints, uncertainty and ambiguity. This can be assimilated to what is called fluid intelligence (Gf) in the psychometrics framework.
- Meta-cognition. Experts constantly and accurately self-monitor; they keep track of what they’re doing, check for errors in their reasoning, and are more resistant to interruptions than novices.
- Opportunism and creativity. Experts are opportunistic with their sources of information; they adapt to the problem at hand. Regarding creativity and imagination, there are views according to which experts can be affected by functional fixedness and thus lack creativity. Yet other views claim that creative thinking advances knowledge and thus enhances expertise (and, vice versa, expertise and skill promote creativity!).
Attempts at being objective in this categorization are rather futile in my opinion, given that I am immersed in a specific culture and society and have my own specific intentions in drawing up this list – although, of course, one of my intentions was to summarize the findings exposed in the Handbook [1]. For those very same reasons of cultural bias, all of these traits are affected by the context in which experts accomplish their tasks (despite the claim that Gc is affected by cultural bias but not Gf – a claim that doesn’t entirely convince me, possibly because the evaluation of Gf cannot be unbiased?).
To conclude this post, I would like to narrow down the subject and refocus on my research endeavour, that is, on the processes at play in the interpretation of ancient documents. In particular, I was intrigued by Voss’s mention, as if in passing [9, p. 579], of three factors to evaluate the quality of an argument in History:
- acceptability of the evidence
- supportiveness of a certain claim
- consideration of opposing evidence
These three factors can be compared to the three characteristics that, according to Haack [11], enable one to evaluate the goodness of an argument in a more general epistemological framework:
- favourableness towards a claim: ranging from preclusive to conclusive (via supportive)
- independent security: ensuring that the full argument doesn’t entirely collapse if one piece of evidence is removed
- comprehensiveness: ensuring that all relevant evidence (supportive or otherwise) has been taken into account
And although these two characterizations cannot be mapped bijectively (i.e. with a one-to-one correspondence), they seem to cover the same kind of ground. The main reservation I have about these otherwise seductive theories of justification is that none of them seems to take into account that the justification is produced by an expert – a person, a person with a body, in a cultural context, and with their specific intentions and expectations.
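Since my own aim is to record such justifications digitally, here is a hypothetical sketch of what capturing an argument along Haack’s three dimensions might look like as a data structure; the field names, the example claim, and the evidence items are all invented, and this is not an established schema:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str
    supports_claim: bool  # supportive vs opposing (cf. Voss's third factor)

@dataclass
class InterpretationArgument:
    claim: str
    evidence: list = field(default_factory=list)
    # Haack's three characteristics, as appraised by the scholar:
    favourableness: str = "supportive"   # preclusive .. supportive .. conclusive
    independently_secure: bool = False   # survives removal of any one piece of evidence?
    comprehensive: bool = False          # all relevant evidence taken into account?

arg = InterpretationArgument(
    claim="The glyph on line 3 is an 'A'",
    evidence=[
        Evidence("Letter shape matches first-century cursive exemplars", True),
        Evidence("The resulting word is otherwise unattested", False),
    ],
    favourableness="supportive",
    independently_secure=True,
    comprehensive=False,  # opposing palaeographical parallels not yet checked
)
print(arg.claim, "-", arg.favourableness)
```

Notably, nothing in such a record captures the embodied, culturally situated expert who produced the appraisals – which is precisely my reservation above.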
Do you think that one or the other characterization could accommodate the injection of the cultural and/or personal bias of an expert? How do you think that might be performed? Would that even be possible?
References:
[1] K. A. Ericsson, N. Charness, P. J. Feltovich, and R. R. Hoffman, eds., The Cambridge Handbook of Expertise and Expert Performance. New York: Cambridge University Press, 2006. (ToC available here)
[2] M. T. H. Chi, “Two approaches to the study of experts’ characteristics,” in [1], ch. 2, pp. 21–30.
[3] R. R. Hoffman, “How can expertise be defined? Implications of research from cognitive psychology,” in Exploring Expertise (R. Williams, W. Faulkner, and J. Fleck, eds.), New York: Macmillan, 1998.
[4] R. R. Hoffman and G. Lintern, “Eliciting and representing the knowledge of experts,” in [1], ch. 12, pp. 203–222.
[5] W. J. Clancey, “Observation of work practices in natural settings,” in [1], ch. 8, pp. 127–145.
[6] B. G. Buchanan, R. Davis, and E. A. Feigenbaum, “Expert systems: A perspective from computer science,” in [1], ch. 6, pp. 87–103.
[7] E. Hunt, “Expertise, talent and social encouragement,” in [1], ch. 3, pp. 31–38.
[8] P. J. Feltovich, M. J. Prietula, and K. A. Ericsson, “Studies of expertise from psychological perspectives,” in [1], ch. 4, pp. 41–67.
[9] J. F. Voss and J. Wiley, “Expertise in history,” in [1], ch. 33, pp. 539–584.
[10] R. W. Weisberg, “Modes of expertise in creative thinking: Evidence from case studies,” in [1], ch. 42, pp. 761–787.
[11] S. Haack, Evidence and Inquiry: towards reconstruction in epistemology, ch. Foundherentism articulated, pp. 73–94. Oxford: Blackwell, 1993.
This is where it all starts. Today is officially the first day of my fellowship, and I need to get myself into gear and out of those starting blocks. I’ve done cross-disciplinary work before, but here I’m flying solo – or almost, as I thankfully have a mentor, and an advisory board of seven to help me steer this project in the right direction.
So. I have 9 months to build a tool that will enable me to digitally curate the knowledge-creation process that is the act of interpretation of ancient textual artefacts such as Roman writing tablets and cuneiform tablets. I’m not starting entirely from scratch, as this project is the natural continuation of the e-Science and Ancient Documents project; however, I’ll be taking a different approach, attacking the problem from many angles at the same time. That’s where logistics and organization come into play. I have piles of literature to go through, from books on Embodied Cognition to handbooks on Expertise and Expert Performance and on Science and Technology Studies, via articles and manuals on Argumentation Theory, Logics, and Epistemology. All very exciting indeed – if a little intimidating maybe (is that possibly why I’ve just capitalized the names of all those fields?). And I’ll have to go through them one by one. I want to.
Beyond the workplan that I established in the grant proposal, which dictates monthly milestones (e.g. the first month, January 2012, is mostly dedicated to bibliography), I will be using this blog to report and react to my readings. First in the pile, and the subject of my next post, will be an account of my musings through:
“The Cambridge Handbook of Expertise and Expert Performance” (2006), K. Anders Ericsson, Neil Charness, Paul J. Feltovich and Robert R. Hoffman (eds). Cambridge University Press.
I mentioned this early-career fellowship application in my last post, and, well, now I can present to you what I’ll be working on. It’ll be a 9-month project, starting in January, and it’ll be concerned with knowledge curation in the context of the development of interpretations of ancient textual artefacts. This blog was originally started with this very project in mind. I will remain at OeRC, and Prof. Alan Bowman will be my mentor.
Here are the official title and summary:
Digitally Curating Knowledge Creation: Understanding and Recording the Process of Interpreting Cultural and Historical Artefacts
Interpretation and re-interpretation of textual artefacts such as Roman wooden tablets (e.g. the Vindolanda tablets) and cuneiform clay tablets are a core activity for documentary scholars and historians. It is the act of interpretation of these documentary artefacts that gives them a meaning both as a text and as an object, one that can then be shared amongst academics and with the wider public, shedding light on a past history and culture. Interpretation is thus an act of knowledge creation.
Creating this knowledge is a complex and often arduous task that involves elaborate rationales, strongly influenced by the context in which they are developed. For example, interpreting a certain glyph on a Roman incised tablet thought to date from 29 AD as an ‘A’ is strongly influenced by current palaeographical knowledge of letter shapes from that period; or, reading a word on a tablet as ‘ox’ is influenced by the fact that the tablet was found in a region where cows were, and still are, renowned for their size, and by the fact that that region is known to have rebelled against a taxation system in ox hides around the date the tablet is thought to have been written, thus leading one scholar to interpret the tablet as a record of a sale of an ox; a century later, the tablet is reinterpreted as a debt acknowledgment, in which no mention of an ox is found (check out this slide show on the eSAD website for a snippet of how the story of the “Frisian ox” tablet was revisited). The variables that influenced these divergent readings of the same tablet are contextual; they comprise perception, expectations, and intentions, or have to do with a cultural perspective, all of which are mostly implicit. We aim in this project to facilitate the identification and exposition of such variables, so as to support interpretations and their potential revision.
At a time when museums and libraries are concentrating on digitizing and curating artefacts, it is crucial also to address the question of digitally recording the knowledge associated with the data. By digitally recording not only the knowledge but also the knowledge-creation process, we will allow, beyond data curation, curation of the knowledge that confers a meaning on an artefact. Such digitization also responds to a need to trace the provenance of knowledge.
To that effect, and building on previous work that analyzed how interpretations of ancient documents unravel, I will build a software component dedicated to supporting and recording the development of interpretations; in particular, we aim to make explicit such implicit variables as cultural perspective and intention, thereby facilitating processes such as the revision of an interpretation and the development of alternative interpretations. In order to enable digital curation of knowledge creation, we will deploy a number of methodologies and techniques spanning:
- ICT and Computer Science, to develop a web-based software component that will work with other software such as artefact digitization software,
- Argumentation Theory and Epistemology, to model the interpretation thought process,
- Cognitive Sciences, Sociology and Information Studies to study the expert practice of interpretation of documentary artefacts, and naturally
- documentary scholarship in Classics and Oriental Studies, to design the software component according to the scholarly practice.
The vastly inter- and multi-disciplinary reach of this project, beyond the software component that will directly result from it, will also serve:
- to engage disciplines that are more traditionally scientific with the type of questions and challenges that Humanities scholarship faces, and
- to establish a direct feedback loop between the design and use of ICT technology in a Humanities discipline and the impact of these technologies on the scholarly practice.
I’m over the moon that this project has been funded! Now to prepare and tackle the task! Can’t wait to get started!
Three months I’ve kept you on hold – albeit without the drab music or the compliments! It’s now high time to resume activities on this blog. It’s not that I’ve been doing nothing, though; on the contrary. In those three months, I’ve been mostly writing: writing papers, writing grant proposals, and writing replies to reviews of grant proposals. And exploring new avenues for future research.
It was that time of year again when, as a staff member of a non-teaching department at the University of Oxford, I had to start thinking again about how to maintain myself in employment – and preferably in research. Being of post-doctoral standing in a non-teaching department has both advantages and drawbacks. One advantage is that my daily tasks all revolve around research without ever being overruled by teaching and admin-related responsibilities; the main drawback, however, is that without teaching duties it is much, much harder to get a permanent contract at the university – and, as a matter of fact, the permanent staff in the department are either of professorial standing, or are lecturers jointly employed by the department and a teaching department, or are part of the research support team. So, with the end of my contract lurking in the not-so-far distance (December), the main focus was on creating my next job. I had some research ideas in store, all of which are connected in one way or another to the main focus of this research blog.
So first, I had to:
- wrap up the report for the e-Science and Ancient Documents project,
- write up a talk I had given at a conference in March for further publication (my Visualization in the Age of Computerization conference talk) – this paper also elaborates on my post ‘Şalmu’ and the nature of digitized artefacts; it’s now under review, and I should be hearing back soon
- write up another talk I gave in June (my Problems in the Artemidorus Papyrus Colloquium talk); this paper also presents new work I did on modelling the papyrus
Of course none of this paper writing was mandatory, but first of all it was fun, and it is also important in terms of building my track record for my grant proposals. Even if digital dissemination via blogs and social networking, along with presence at conferences and workshops, is important, until we devise new models and means of publication (by which I mean ones where digital tools can also be part of the output that is assessed by research councils), peer-reviewed publications prevail (with all the faults and lacunae that system might hold).
Next I had to tackle grant writing. Always a daunting task, especially when you haven’t written many of them before! I’d been waiting to hear back about a proposal I had submitted in January. I received reviews, and as I had a right of response, I used it. That proposal was the first one I had ever written and submitted. Two of the three reviews seemed rather enthusiastic; the third expressed some reservations, but they seemed to be rooted in a misunderstanding rather than in a huge flaw in my project proposal. So I had gone through the first stage of the funding process and had to wait for a final answer. In the meantime, I had to think of a plan B – so I started thinking of another project proposal, pondered over it for about three to four weeks, and decided that that specific scheme was probably too big a bite for what I could chew at the moment. So I’ve reverted to a less grand idea with a much more modest scale, which is what I’m working on at the moment. There’s another idea floating around too, and I’ll have to tackle that one at some stage soon. You’ll understand that I can’t go into the specifics, but all of these ideas revolve around the questions of knowledge creation and the meaning of textual artefacts, and ways to handle and support this process digitally. There must be at least 10 years of research floating in my head around this theme, and the challenge is “just” to get it funded…
All this writing is part of the process. Researching and writing grants in parallel is no easy task, but, boy, am I learning!
My next post will be more artefact- and meaning-related, I promise!
Maybe even a sneak peek at my work on the Artemidorus papyrus…
]]>“[…]the doubts, the hesitations, the numerous false starts and new beginnings; the guesses sometimes confirmed, sometimes rejected by the script; the continual recourse to books for information of every sort – lexical, grammatical, palaeographical, historical, legal; the interludes of exhaustion and depression[…]”
Unsurprisingly, I realized at the Digital Palaeography workshop that palaeographers seem to undergo the same type of process. In his keynote, Prof. Eef Overgaauw underlined the fact that uncertainty is ubiquitous: “No answer is final,” he said. “No result is conclusive. They are all provisional.”
Not only does this chime with my current reflections on uncertainty, it also made me acknowledge that the notions of public and private research likely extend beyond papyrology and palaeography, and that the shifting shape of uncertainty (the type of uncertainty that is not quantified!) is usually confined to the realm of private research. And, as a researcher, I do have my own way of dealing with my uncertainties: just days ago I noticed that I reached for my pencil to scribble insecure back-of-the-envelope calculations and tentative software design choices, whereas when I write a to-do list or take notes at a meeting or lecture, I decidedly use my ballpoint pen (yes, I do still use a paper log book – and that might well be significant too!)
So the questions that I’m pondering now run along this axis:
When attempting to digitize the process of interpretation that effects the transition from artefact to meaning, we’re dealing with private research and mutable uncertainty. Beyond the difficulties that this poses in terms of what to capture, does it mean that I’m aiming to infringe (for lack of a better term) on the privacy of research?
Similarly to the changing notions of public and private identities in digital social networks (e.g., see this UCL lunch hour lecture about Twitter and the notion of public and private), could it be that aiming to digitize the process of interpretation might displace the notions of public and private research and blur the frontier between them?
Does it mean that the private uncertainty will end up in the public realm? And if so, is that such a bad thing? Wouldn’t it just be showing integrity in research?
And after all, isn’t one of the aims of research to produce reproducible, understandable and verifiable output?
Wouldn’t the best way to do this be to keep track of knowledge creation as a process? It is not just data provenance (see this blog post by Prof. David de Roure for more about data provenance and reproducibility/repeatability) that needs to be ascertained, but the process that led to the results we produce.
Rather than chasing an elusive (and possibly utopian?) objectivity and attempting to resist interpretation at all costs, couldn’t we instead document our research processes and give colleagues an insight into how we’ve come to produce a result?
[1] H. C. Youtie. The papyrologist: artificer of fact. Greek, Roman and Byzantine Studies, 4(1):19–33, 1963.
It’s great to be back “in the field”! Well, it’s nothing exotic, really, as I went to Wolfson College, Oxford, to attend one of Dr Jacob Dahl‘s interpretation sessions, where the subject of study for this group of three students, Jacob, and myself was a set of Proto-Elamite texts (yes, this is a Wikipedia link; it has been approved by Jacob – for more info on cuneiform scripts, check here). When I say it’s not exotic, in fact, to me it is – well, not Wolfson, that is, but Proto-Elamite, which I had never encountered before. So far, the interpretation sessions I’ve attended had to do with Latin cursive texts such as the Vindolanda tablets, or with Greek epigraphy; two scripts that belong to the group of segmental alphabetic writing systems (as does the script I’m using to write this post). Proto-Elamite, in contrast, is thought to be derived from the proto-cuneiform script; it is classified as logophonetic and seems to be at times pictographic and at times syllabic. Additionally – and non-negligibly – Proto-Elamite is one of those few ancient scripts that have not yet been entirely deciphered.
So each of these interpretation sessions is a bit like a think-aloud forum where, under the guidance of Jacob, who has worked extensively on Proto-Elamite, all participants voice hypotheses of decipherment and interpretation. Mostly discussed in this session were signs: their meaning, their grouping, and their sequences.
I don’t intend to make a full report or transcription of the session here. What I have in mind is rather to share my (after-)thoughts on what seemed to be the salient differences and commonalities in method between the decipherment and interpretation of Latin or Greek scripts and the efforts to “crack” this yet-undeciphered logophonetic script – without the support of my notes or of the documents that were handed out.
[Image: photograph of a Proto-Elamite clay tablet]
The first striking thing to me is how different the images of those tablets look from the line drawings we were given (the photo above and the line drawing on the right are not of the same clay tablet, but compare the upper right-hand sign on the line drawing and on the photograph, and their visibility). Bear in mind that in a drawing (and in a photograph, too) there is already some kind of interpretation going on; and, as undeciphered as the script may be, it was still less understood when the line drawings were made than it is now. Jacob actually pointed out an example where a line drawing seemed to display some unique kind of imprint, whereas, when he inspected the actual corresponding tablet in the Louvre, he was able to recognize signs he had already encountered in other Proto-Elamite texts. That is definitely something I have observed in the Latin cursive interpretation sessions: prior knowledge prompts one to see as (interpretative vision) more than to see that (descriptive vision). One avenue I’ll have to explore in depth will be to evaluate to what extent the nature of the writing system influences these constant oscillations between the text-as-shape and the text-as-meaning.
The second thing that intrigued me was to see how, given the same A4 sheet of paper with a line drawing of a Proto-Elamite text like the one above, one participant worked with the sheet oriented portrait, two with it landscape, and the third alternately looked at the text in portrait and in landscape. Being completely new to such texts, I puzzled for a while over various questions: what was the direction of reading, what constituted a line, how were the signs counted in a line (or a column, depending on how the paper was oriented!). These are basic questions, but the simple fact that I was asking them while none of the other participants seemed bothered by them already showed how some of the knowledge is implicit and participates in the building of expertise and its adjoined tacit knowledge. Interestingly, when I asked about the reason for the disparity in how people looked at the text, I was given a very well-constructed and rational reply with two main arguments: one was historical, and stated that at some (undetermined) point in the history of cuneiform scripts the clay tablets were rotated by 90 degrees, along with the direction of writing; the other was a reinforcement of the notion that it’s a good idea to be able to abstract away from the orientation of the text, to avoid the temptation of attributing the text to a time period according to its orientation. That specific question of dating according to features of the text or its layout rang a bell: I’ve seen it before with Latin cursive, but under a palaeographical guise.
The last thing I’d like to mention here is the dialogical nature of the argumentation I witnessed. The collaborative aspects of the session were obvious, and everyone played along in this charade-like deciphering game they were presented with: for each new hypothesis emitted, there was a plethora of support and rebuttal, instinct and uncertainty, doubting, convincing and self-convincing, connecting and cross-referencing, contradicting and reinforcing. This is really exciting, as one of the main reasons why I’m attending these sessions is to understand how one becomes confident enough in one’s interpretation to publish it, discuss it, and convince others, until it is either naturally accepted as new knowledge or eventually supplanted by a “better” interpretation; alternatively, it can simply be rejected, or linger at the centre of heated expert debates.
I’ve enjoyed this little memory-based exercise on this session! Now to go and process it in full.
And I’m very much looking forward to the next one…
Last term (Hilary term 2011), at the traditional Oxford Slade Lectures, I had the chance to hear Prof. Z. Bahrani speak about the nature of representations in the Ancient Middle East. The title of her lecture series, “The Infinite Image: Art and Ontology in Antiquity” (many thanks to Prof. A. K. Bowman, who flagged them up!), resonated with some of the questions we were dealing with while working on the e-Science and Ancient Documents project. I was not able to attend the very first lecture, but I tremendously enjoyed the second one, as well as all the following ones. The second lecture, entitled “What is/was an image?”, struck very close to home and got me thinking about mimesis, and subsequently about the nature of digital images of artefacts, or more generally of digitized artefacts. The core of Prof. Bahrani’s argument in this lecture can be found in her book “The Graven Image” (University of Pennsylvania Press, Philadelphia, 2003).
The first slide of that “What is/was an image?” lecture showed a Jericho skull (~7000 BC, such as the one from the British Museum on the right) and set the tone for the rest of the lecture. With this artefact, it is legitimate to ask: is it a skull? Is it a mask? In some sense it is both: it is the plaster-covered skull of a deceased person. So then: is it an image of the deceased? It is certainly not our contemporary version of a portrayal of the deceased. Is it an auto-icon, then? After all, it incorporates a part of the body of the deceased (the skull). In this artefact, the deceased is both present and represented. It is more than a representation in a mimetic sense. Mimesis as a representational concept is our contemporary understanding of images; it is inherited from Classical Antiquity (Aristotle, 4th cent. BC), and should not be used as an ontology when studying images/representations from anterior cultures. Prof. Bahrani then went on to explain what she thought the ontology of images was in Ancient Mesopotamia (or rather, Assyria and Babylonia). The Akkadian word şalmu, which scholars have translated as image, representation, or portrait, seems to have had a meaning that goes beyond our concept of representation, by extending it to a representation that conveys a form of presence of the signified (note again the anachronism, and how, to explain a past ontology, we have to use our inherited Aristotelian ontology of images, in which the – posterior – signifier/signified distinction is deeply rooted).
Here are the two reasons why this lecture inspired me so much:
- Mimesis was a guiding principle in the way we performed image capture and processing of ancient texts in the e-Science and Ancient Documents project. But suddenly, in the light of this lecture, I started doubting that mimesis was the term I wanted to use. In fact, procedural mimesis was our guiding principle in the development of image capture and processing strategies: we were attempting to mimic the strategies of the papyrologists who decipher the texts – the processes they call upon – rather than emulate the experts themselves.
- Not entirely disconnected from the mimesis issue: I have been bothered for a while by the use of the term digital surrogate in the Digital Humanities literature when talking about digitized versions of artefacts, and I had only been able to propose the term avatar to replace it. Now, with the concept of şalmu, I can explain why digital surrogate dissatisfies me, and articulate why avatar, or better yet şalmu, seems more appropriate. The term digital surrogate conveys a sense of replacing the digitized artefact in its originally intended function, whereas the terms avatar and şalmu offer a sense of representation of the artefact with a specific form of presence; and as the act of digitization is always conducted with specific – if often implicit – intentions and expectations, a digitized version of an artefact can only ever be a specific form of presence of the original artefact. Reading further on the concept of şalmu in Prof. Bahrani’s “The Graven Image” (chap. 5), I realized that a digitized version of an artefact is even more like a şalmu than I originally thought. See for yourself; the three main characteristics of a şalmu are that:
- it is encoded
- it is embedded into the real
- it influences the real
Don’t you think that these characteristics describe perfectly a digitized version of an artefact?
Doesn’t the term şalmu convey better than digital surrogate the fact that digitized versions of artefacts are contingent on the intentions and expectations that the digitizer has from the artefact?
Or would avatar be better suited, seeing that it also expresses a form of presence and is already widely used in the digital world?
Want to share your thoughts? Please do leave a comment below!
- the study of the practices of experts as they undertake the task of deciphering and interpreting an artefact;
- the methods adopted to elicit, identify, support, and record the key moments of the discovery, creation, and invention of the meaning of the objects under study and of the historical knowledge that flows from them – methods drawing on approaches from the cognitive sciences as well as from the philosophy and sociology of science;
- the development of support software for this task, based on the results of case studies;
- the evaluation of the impact of the digital tool on traditional practices, and of its integration into researchers’ professional workflows.
The ultimate objective is to show that, beyond the curation of the (meta)data attached to an artefact, it is essential to also preserve the intermediate stages of the creation of the knowledge attached to that object, so as to facilitate, where appropriate, its re-interpretation. The contextualization of objects is crucial to their interpretation, and consequently to their meaning as it is presented in museums. The narrative dimension of the dissemination of historical and cultural knowledge will also be one of this research blog’s objects of study.