The theme of the Congress is particularly well suited to the moment we live in, focused as it is on the social and political issues at stake in the digital. This is something we need to address if we want to create meaningful research and teaching; otherwise Digital Humanities will be only a kind of technical navel-gazing and/or a new form of shining ivory tower, so to speak, or, even worse, a new kind of latinorum for a new kind of elite. And a latinorum that is much more invisible and (apparently, at least) friendly than the older one.
Browsing the program, I found many stimulating papers that I am eager to listen to, mostly those in which political and/or rhetorical and epistemological issues are involved. I will keep reading abstracts in the few days before the congress begins, and if I find interesting ideas I will probably write something about them. Stay tuned.
Among the workshops that are always part of Digital Humanities meetings, I am particularly interested in
Narrativa e divulgazione scientifica delle DH: l’esperienza dei QUARANTIP [DH narrative and science communication: the QUARANTIP experience], organised by KRINO and aimed at “fostering a critical debate about ways, strategies and approaches to disseminate Digital Humanities”.
as well as
Aldina (Archivi Letterari Digitali Nativi), a roundtable focused on the situation of born-digital literary archives, an issue I am particularly interested in.
The most interesting change, in my opinion, is that the papers will not be read in the traditional way but discussed in two steps: first a sort of panel interview in which a chair asks questions about the session papers (questions prepared in advance, together with each speaker), followed later by an open debate in a different session. In addition, material can be downloaded beforehand from the website, giving the audience a better understanding of the arguments. This should address, and hopefully mitigate, the Zoom fatigue that emerged so clearly during this long year spent in lockdown.
Personally, I would recommend that this become a habit for future in-person conferences as well, but I know that not everyone agrees with me on this.
Let’s see what the future of academic conferences will be.
While you wait, let me add that a few videos of the presentations are already available online, in Italian, on YouTube, if you are interested.
You can also watch a Survival Video explaining how to participate, which presents some of the new approaches devised for the occasion.
In the past few months, thinking about viruses has been an occupational hazard for everyone on the planet. In fact, this is something the real world has in common with the digital one, assuming that at present there is still a difference between the two.
During Covid-induced lockdowns, we were all hooked up to our internet connections to breathe and survive. At least those of us who were lucky enough not to be attached to a ventilator and who had the economic and technical means to be online 24/7 (the digital divide is not a myth; it is extremely real).
In the early days of Covid, the new key role of the digital was immediately emphasised, a sort of digital humanities with a vengeance (“oggi, in un momento in cui il materiale di ricerca deve essere reperito virtualmente, forse una seppur minima dose di scuse agli informatici umanisti spetterebbe pure” [“today, at a moment when research material has to be retrieved virtually, perhaps at least a minimal dose of apologies is owed to humanities computing scholars”], wrote Antonello Fabio Caterino on March 13). I am not sure I would endorse this request for an apology, but digital humanists were indeed the best qualified to help at a chaotic moment of emergency. And they did. A lot of people were in need of sound advice, and quickly, and things such as Caterino’s Vademecum digitale d’emergenza or Ripartiamo dal crowdsourcing: un vaccino anti COVID-19 per la ricerca umanistica italiana, or the Open Science and DH tools that come handy at the time of corona, were indeed helpful, and a good starting point for a discussion. But ten months have passed now, and we should soften the rhetoric of emergency a bit and start to take a more systemic and holistic approach to the future of research and teaching in the academy. As any chess player can tell you, it is normal to make mistakes and inaccuracies, even blunders, during a game, especially under pressure, but you always need to analyse your games carefully afterwards, no matter how good you are.
The experience of Covid brought to the fore two things that digital humanists should discuss more, and in public: the problem of infrastructure and the problem of education. Some discussions have already emerged, but there is a need to do more, in my opinion. And this need is quite urgent, I might add. The window of opportunity that Covid granted us is not going to last forever. Discussions and plans such as those on Next Generation EU or UNESCO’s Global Education Coalition, the most sustained attempt at a global outlook on education as far as I can see, do not happen frequently.
Huge interests are at play here, of course, regarding the whole institutional frame of research and teaching, the whole world of the university as we know it. The main issue is, as usual, data privacy and the role of GAFAM, the “five sisters” of Big Tech (back in the 1970s, the oil multinationals were the “seven sisters”… hardly an improvement), and the battlefield at the moment is the use of platforms for distance learning and video conferencing (such as Google Meet or Microsoft Teams): open or proprietary?
The issue is complex, and one should not take an ideological or biased stance. However, I do believe that a public digital infrastructure for schools and universities* (originally in Italian, and also available in French) is probably the best possible solution, if not at a national level, at least at the European one.
Every weekend (if I have time, which is not always the case) I read the news and essays on digital culture that I collected during the week, from various sources, trying to give them some perspective. The collection is idiosyncratic and highly personal, but not, I hope, without interest for the few readers of this blog.
[A slightly different Italian Version is published on Leggere, scrivere e far di conto]
Streaming video game sessions, live or recorded (on YouTube), has become more and more popular in recent years. The phenomenon was described in the Guardian and The Conversation in 2014, and has been the topic of a Master’s thesis at the University of Utrecht.
On the website In Media Res there was an interdisciplinary discussion on Video Game Spectatorship (8–12 January 2018), with a number of short essays:
- Sustained Viewership: (Re)Playability of Video Games (Ashley Jones, Georgia State University)
- Despite Host’s Infamous Toxicity, Independently-organized Tyler1 Championship Series Rivals Official League of Legends eSports Viewership (Eric A. James, Northwestern University)
- Gacha Gonna Get Ya: Watching Mobile Card Collecting (Caitlin Casiello, Yale University)
- Fan Community and Celebrity in Video Game Spectatorship (Lindsey Decker, Boston University)
- Twitch Plays Diversity: Building Community on Streaming Platforms (Laurel Rogers & Dan Lark, University of Southern California)
In Media Res is one of the many experiments shaped by the Institute for the Future of the Book, a small think-and-do tank “investigating the evolution of intellectual discourse as it shifts from printed pages to networked screens”. The idea behind it is to promote “collaborative, multi-modal forms of online scholarship”, thanks to a “more immediate critical engagement with media at a pace closer to how we experience mediated texts”.
The pieces, published every week, are short video clips or image slideshows taken from the web, repurposed and contextualised (framed) through a short impressionistic response by a curator (as in museums).
In other words, the distance in time between the viewing and the critical analysis is almost non-existent.
———————————————————————
Danah Boyd suggests we read Automating Inequality, a book of digital anthropology that shows how a deterministic and uncritical adoption of algorithmic decision-making tools in social services can make the situation worse.
On the same topic: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil.
At an academic level, one can read the reading list on Critical Algorithm Studies published on the Social Media Collective research blog.
On Newspapers:
- In the US, ProPublica has a section on Investigating Algorithmic Injustice.
- In France, the CNIL (Commission Nationale de l’Informatique et des Libertés) published a report on the ethical issues of algorithms and artificial intelligence, and a summary of a one-day seminar on the role of algorithms in the daily lives of citizens (Montpellier, 14 October 2017).
- Still in France, Le Monde interviewed the philosopher Antoinette Rouvroy (29 December 2017): “De mon point de vue, ce ne sont pas les algorithmes qui posent problème, mais plutôt notre propre paresse, notre renonciation à nous gouverner nous-mêmes” [“From my point of view, it is not the algorithms that are the problem, but rather our own laziness, our renunciation of governing ourselves”].
- Also in Le Monde: La peur de l’intelligence artificielle n’est pas (encore) d’actualité [“The fear of artificial intelligence is not (yet) a pressing issue”].
The topic of fake news, surely not new (as many panels at the recent MLA convention show), came back with a vengeance with the advent of social media. The Poynter Institute published a guide.
And let’s not forget the Around The World 2017 conference (KULE Institute), which addressed the role of Digital Media in a Post-Truth Era.
———————————————————————
I should also mention the publication of
- two new issues of DHQ (Digital Humanities Quarterly): the third, and a preview of the fourth, of 2017.
- the book Les humanités digitales. Historique et développements by Olivier Le Deuff (Université Bordeaux Montaigne), a history of the origins and development of DH, with a focus on epistemology. You can download the Introduction and the Table of Contents for free.
———————————————————————
Call for Papers
International Conference on Electronic Publishing 2018: Deadline Extended to 31 January (#ELPUB2018).
Mathematics and Modern Literature, Manchester, 2018 (deadline: 5 February 2018).
Multilingual Digital Authorship, Lancaster University, 8–9 March 2018 (deadline: 2 February 2018).
7th International Conference on Data Science, Technology and Applications, 26–28 July 2018, Porto, Portugal (deadline: 13 March 2018).
———————————————————————
Programming tutorials for humanists
Python programming for the humanities
Correspondence Analysis for Historical Research with R
———————————————————————
More
The first thing I want to say is that there are different kinds of multilingualism. We are in a place (Lausanne) where multilingualism comes out of a (more or less) peaceful contact between different languages that went on for many centuries, slowly affecting the very essence of European languages in many ways (not to speak of their common Latin roots, of course). Multilingualism in a colonial and postcolonial context was obviously a far less peaceful and much more abrupt form of cultural and linguistic contact. This is just to say that when people use the word multilingualism they are not necessarily talking about the same thing; it depends on the place and context they come from.
Having said that, the most troubling thing about what Grandjean says, especially considering that there has been a multilingualism and multiculturalism committee in place since 2006, is that
despite the great success of this call, which received a record number of proposals (589), not only did the non-anglophone scientific community make very little use of the opportunity it was offered to submit papers in a language other than English, but the papers, posters and round tables proposed in French, German, Spanish and Italian were very often rejected by the selection committee (see below).
It is not simply a language issue, and that is probably why the solution proposed by the committee (as far as the annual reports say) more or less failed. Grandjean’s final question makes this point quite clearly:
if the objective of such an international congress is indeed to bring together 600 researchers from all over the world, shouldn’t we fall back on a single language (English or not), accessible to all, rather than dangling before everyone the possibility of communicating comfortably in their own language?
The problem, then, is not using different languages: since the point of the conference is to reach a global audience (I should emphasise here that this is a utopian effort, as much as the dream of multiculturalism), you clearly need a common language of communication. Which is precisely how English should be seen and used: as a language of communication, not as THE language of scientific/academic discourse (there are valuable scientific/academic discourses in many languages, no doubt about that). The problem, then, is the way in which English is used, and I see two ways of addressing this that could be helpful.
One comes from a translation studies scholar, Lawrence Venuti, who distinguishes two competing strategies in translation, domestication and foreignization, attempting to address “the question of how much a translation assimilates a foreign text to the translating language and culture, and how much it rather signals the differences of that text”.
Venuti sees translation as an arena of conflict between ethnocentric, domesticating forces and foreignizing forces, and in my view one should be aware of these tensions and try to move as much as possible towards foreignization, if the aim is to receive communications from other cultures. The balance between the two is not determined entirely during the act of translation, but is the result of systemic structural forces, which are not laws (in the sense rejected by Latour yesterday) but rather social (and political) norms inscribed in the surrounding system.
Coming back to DH2014, I think that the role of English as simply a language of communication, and not THE language of academic and scientific discourse, should be emphasised more from the very beginning. For instance, I don’t see why the original call for papers has to be in English when the language of the country hosting and organising the conference is not. If English is simply the language of communication, the original could be written first in the language of the host nation and afterwards translated into other languages, including English. In this way the local culture, and not the language of communication, is the one that becomes predominant. Imagine a situation in which you read a call for papers first in Polish, with the possibility of reading it in several other languages (including English, of course). In this way, if you are French or Italian, you could read the call for papers without any trace of English, giving from the very beginning a sense of a multilingual community.
As for the papers themselves, the problem is to what extent the English used is foreignized and to what extent it is domesticated. I believe that if this issue is not addressed, having the call for papers translated into many different languages will not be effective. This sort of brings me to my second point, that is to say the question of audience, already discussed by Elika Ortega on one or two occasions (and recently in a post inspired by Grandjean).
To explain what I mean, I will quote the ending of a previous post I wrote some time ago, in which I was addressing a similar question:
DH people routinely ask themselves whether we should eventually drop the word digital, because what we are actually doing is the very same thing we have always done as humanists. I don’t disagree with this position. But, looking at the Italian label, informatica umanistica [humanities computing], I think that, given the interdisciplinary nature of informatica [computing] itself, we could also reverse the claim, dropping the adjective umanistica [humanities]. The question I am now asking myself (and anyone else who dares to answer, of course; comments are welcome) is:
Should we perhaps say that, as digital humanists, what we are actually doing (in fact, what we have been doing for a while already) is pure and simple informatica [computing, or computer science], with no humanistic or scientific label attached? Would this be too daring, too far-fetched?
Following this train of thought (actually, if you haven’t read that post, you should read it now; hopefully it will put this one in perspective), what I am proposing is to start from computer science as it is and see to what extent it can be considered humanistic. In practice, what I will do in the following weeks is read Brookshear’s Computer Science: An Overview (Pearson 2011) and comment on it, chapter by chapter, on this blog, highlighting in what sense the humanities are involved, putting together a list of topics and a reading list (using what I have read in these three years) with material that could complement what is said in the book, summarising, rephrasing and rewriting the topics from a humanistic point of view, etc.
As I said, it’s an experiment, but as far as I know (actually, I am quite sure about this) such an approach has never been tried before, so I am very curious to see where it could lead.
[UPDATE: I just want to add that this experiment is not necessarily aimed at saying something new about anything, it will mainly be a way to put order in my brain. Sometimes you need to do that if you want to move forward.]
During the launch event of Art/Works: Platform for the creative arts and industries, here in Cork, at Triskel, there was a performance by the Conflicted Theatre Company dramatizing the life of an aspiring… artist? journalist? writer? Not sure; the protagonist wasn’t sure either, and ended up working as some sort of answering/calling “machine” all his life, with bleak and depressing consequences, as you might expect. It was a moving look at the dark side of a “creative” career, precisely the kind of thing that the rest of the day (and the various events that took place during it) was aiming to avoid. First of all the panel that immediately followed, in which a group of professionals involved in the creative side of things gave advice about artistic and creative writing careers (especially on how to start one). One thing I found intriguing was that the idea of ‘failure’ surfaced several times, not necessarily in negative terms. It seems to me, in fact, that in this kind of industry failure is more than an option; it is almost a necessity if one is to become a writer or an artist. We could almost speak of a “right to failure” that should be established openly (even by the industry, perhaps). The good thing about failing in the arts and creative writing is that there are no real consequences (apart from the career of the person who failed, of course), compared to other professions where the consequences can be very serious.
During the discussion, the possibility also emerged that new media (that is to say, the staple food of a digital humanist’s diet) can be seen as something very promising that opens up unexpected possibilities in the creative field, also creating a space for experiments in which the risk of failure could be less damaging, career-wise.
Failure, of course, is part of the dark side of things, and this reminded me of a panel I attended at the recent MLA, The Dark Side of the Digital Humanities, which was in fact addressing a similar issue. But what if the dark side is actually the bright side? What if new media, more than a solution, are a problem? This was the main point of the first speaker at the MLA panel, Wendy Hui Kyong Chun, when she said that the dark side of DH is actually its bright side, that is to say “its alleged promise to save the humanities by making them and their graduates relevant, by giving their graduates technical skills that will allow them to thrive in a difficult and precarious job market”. In her view this bright side was actually a way of giving in to a “bureaucratic technocratic logic[…] an enframing that has made publishing a question of quantity rather than quality, so that we spew forth MPUs or minimum publishable units.” The issue here is of course the way the humanities job market works, and how DH is NOT solving the problem. Nothing is said about the intrinsic value and potential of DH work.
The risk, however, even in the case of technically well-shaped digital work, is to “shine a flashlight under a streetlamp”. What is important is in fact what is hidden from view because of the shining light. What should be endorsed instead is precisely the dark side, “the side of passion”. The humanities, for instance, should play a role in big data “because we can see what big data ignores”: the dark side of things, in fact. Hence the importance of things such as TransformDH, or anything that focuses on Diverse and Open Digital Humanities.
Another speaker, Patrick Jagoda, considered “gamification”, that is to say “the use of game mechanics in traditionally non-games activities”, as the dark side of DH. Thanks to gamification, “the structure and logic of games creeps into consumerism, crowdsourcing, and social media applications”, as in the case of the Chore Wars website, a “house chore management system” meant “to help you track how much housework people are doing – and to inspire everyone to do more” (McGonigal 2011: 120). According to him this trend will increase, especially in educational settings, where the use of badges is becoming more and more popular. In spite of the fact that “adopters of gamification across different fields, including education, regularly proclaim it to be an unparalleled organizational technique”, says Jagoda, “there has been some resistance to this concept and its widespread application”:
Curiously, much of the criticism has come from game designers. Gamification has been condemned, in these circles, for adopting only the least artistic aspects of contemporary digital games — namely, their repetitive grinding and achievement-oriented operant conditioning. […] My own visceral reaction to the phenomenon has often been one of deep skepticism. Game-based badges or experience points motivate people to perform repetitive tasks but not necessarily to engage closely with texts or to undertake projects at a more complex level.
This is not to say, however, that games are evil. In fact Jagoda himself designed games that
offer players interactive contexts for thinking through and experimenting with complex problems in a hands-on fashion. Digital games enable multiple learning styles and engage players at several levels simultaneously through text, graphics, animation, audio, algorithms, and haptic feedback. They spur decision-making, enable roleplaying, and do many other things that exceed the addictiveness of point accumulation and victory that characterizes gamification.
Still, we need to “navigate the darkness” of gamification, and in order to do so Jagoda, at the end of his talk, posed three questions:
- How should we think about games in the historical present when gamification is arguably not merely a local phenomenon (for instance, in business, marketing, or education) but increasingly the form that economic and social reality takes in our world? […]
- Do the benefits of gamified “badges” outweigh their potential to operate as a reductive form of behaviorism? Should we incorporate badges into our pedagogy? Can we imagine badges that move beyond the superficial level of short-term behavioral modification? […]
- How might we imagine what are called “serious games” or “countergames” as complicating gamification? […]
For those of you who are interested in the Dark Side, this is the first in a series of posts (at least, that is the plan) I intend to write on the subject.
This is the Prezi of the presentation I gave (in collaboration with James O’Sullivan) at the ninth ICT in Education Conference, hosted on the Thurles Campus of Limerick Institute of Technology on Saturday, 11 May 2013. I decided to use Prezi instead of PowerPoint because I thought it gave a better visual representation of the idea of ‘flipping’. I have also added a draft of the basic points of my talk.
Draft text [the numbers refer to the sequence of the Prezi’s steps]
[1] I work in literary studies, and I recently found myself puzzled by the idea of having to teach something like the Mondrian Mood, one of the many available examples of e-poetry [2]. It is a need that will become more and more pressing in the future, since forms of e-poetry will presumably become more widespread. A number of questions come to mind when looking at it: can I use a traditional literary approach? What other kinds of expertise do I need to interpret it? Is this changing the cognitive way I approach a “text”? Can I still call it a text? And so on and so forth…
I recently called the situation in which we now find ourselves “la maionese impazzita” [3] (you can follow the link to my blog, where I first used the expression). I don’t know if you have ever tried to make home-made mayonnaise [4], but, if you have, I am sure you know what this expression means; you will have experienced it at some point (at least as part of the learning process). And if you haven’t, you can still look at Wikipedia’s entry for mayonnaise: “Making mayonnaise is a process that requires watching; if the liquid starts to separate and look like pack-ice, or curd, it simply requires starting again with an egg yolk, whisking it, slowly adding the ‘curd’ while whisking, and the mixture will emulsify to become mayonnaise. If water is added to the yolk it can emulsify more oil, thus making more mayonnaise.” You need to focus to avoid destabilizing the mixture, the emulsion. In Italy we call this destabilized result “maionese impazzita” (literally, “mayonnaise gone mad”). [5]
Making mayonnaise, as you can see from the video [6], can have troubling effects on the way we look at things, that is to say on our cognitive abilities. In fact this is what is happening, and Marx summarized it in a very famous quote [7] (I am sure you have heard it before) that was used in a book to describe the modern(ist) condition.
You might, I guess, call this a problem of “information overload”, something that starts very early in life [8], as you can see, and never leaves us afterwards. But I [would] rather not. Simply because I do not believe that the problem is that we have too much information these days. The problem is not the increased amount (after all, even in a medium-sized library there is far more information than you could process in a lifetime); the problem is that it is organized in ways we are not able to control properly. It is the way in which information reaches us that is problematic and has to be discussed, not the quantity. And information overload is certainly not something “born digital”; it pre-dates the invention of digital computers. One of the first to speak about it was Diderot, in the eighteenth century [9]. Sorry, Wikipedia again.
I see this as more of a cognitive problem than a quantitative one. For me the reason is not so much the amount of information but the way in which it is organized and presented. And of course there is a huge amount of research on knowledge management I could quote here. But I am more interested in the effect of the digital, in the digital instruments that are involved. The discussion could be very long, but I will limit myself to this one quote by Jaron Lanier [10], which pretty much summarizes what I want to say.
The nature of the digital revolution is that the “language” is inscribed in the material support in a more cogent way than in a more traditional support such as the book. The 1s and 0s of the first programs were (and still are) physically inscribed in the switches of the first computers, big ones like ENIAC [11]. At that stage of computing, programming a machine meant physically changing the switches, the “physical” 1s and 0s, creating a different hardware configuration for each problem or equation you were trying to solve. There was no programming language like the ones we have now, which were conceived by “extracting” or “abstracting” them from the actual machine, which therefore became virtual.
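To make the contrast concrete, here is a small illustrative sketch (my own, not drawn from any of the texts cited): the same addition performed at the “switch” level, bit by bit with an explicit carry, and at the abstracted level where a high-level language hides the switches entirely. The function name and the list-of-bits representation are my own inventions for the sake of the example.

```python
# A toy contrast between the "switch-level" view of early machines
# and the abstracted view of a modern high-level language.
# (Illustrative only; representation and names are hypothetical.)

def add_bits(a_bits, b_bits):
    """Add two numbers represented as lists of 0/1 'switches',
    least significant bit first, carrying by hand.
    Note: zip() stops at the shorter list, so pad inputs to equal length."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        result.append(s % 2)   # the bit that stays in this position
        carry = s // 2         # the bit that moves to the next switch
    if carry:
        result.append(carry)
    return result

# 3 is [1, 1, 0] and 5 is [1, 0, 1] in LSB-first binary
low_level = add_bits([1, 1, 0], [1, 0, 1])   # → [0, 0, 0, 1], i.e. 8
high_level = 3 + 5                           # the language hides the switches
```

The point of the sketch is exactly the “extraction” described above: the second line does the same work as the first, but the machinery of carries and positions has been abstracted away into the language, making the machine virtual.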
What has this got to do with pedagogy and the flipped/blended classroom? A lot, I think. As Mark Pegrum [12] says, to survive in a digital working environment people need multiple literacies. And you need multiple literacies to interpret e-poetry as well.
He made a long list [13] of them in his article.
Flipping the classroom makes sense because it takes into account this change in our cognitive relationship with objects: it is not so much that we learn a language and then organize reality according to it; rather, we need to struggle to discover and extract the language(s) inscribed in the objects (not only physical but also digital objects) we are using. This is the process we need to teach in a digital age, and it can be done not with theoretical discussions but with practical engagement with the tools themselves.
The idea is to create digital pedagogical tools that foster awareness of the process, in the same way in which meta-fictional features in a novel focus on how fiction is created, on the mechanism of “world-creation”, if you will, rather than on the actual content. And to spend time in class using those tools, discovering and extracting their language(s), rather than lecturing about them. If we don’t do that, the risk is that we will never get beyond the patchy technological knowledge of the so-called “digital natives”. And that we will not be able to make a tasty home-made mayonnaise [14].
And then we introduced some specific tools.
There has been a lot of discussion around Digital Humanities lately, caused by (but also causing, or the other way around) what Ted Underwood called a PR crisis. This has affected the current job market in the humanities and literary studies, where “a lot of job postings are suddenly requesting interest or experience in DH”. Underwood has an interesting take on this, saying that it is perfectly legitimate for a literary scholar NOT to have a deep understanding of digital humanities and still do their job perfectly well. Only these days there is a need to demonstrate some sort of awareness of what is happening in the DH field, even if you are not in fact doing it, in the same way in which, during a job interview, you can claim that your line of research might be interesting from, say, a postcolonial point of view, even though following that kind of lead never crossed your mind before (and maybe never will). For some reason, though, says Underwood, DH prevents people from making such a claim (what he calls “intelligent, informed BS”). Quite often, when people are asked about DH
they feel they have to say “no, I really don’t do DH.” Which sounds bracingly straightforward. Except, in my opinion, bracingly straightforward is bad for everyone’s health. It locks deserving candidates out of jobs they might end up excelling in, and conversely, locks DH itself out of the mainstream of departmental conversation.
Which, of course, is very bad. The reason for this lies perhaps in the very nature of Digital Humanities, which, “— unlike some other theoretical movements — does have a strong practical dimension. And that tends to harden boundaries.” To this I would add that DH in itself is not as clear and straightforward a field as it might appear at first sight. In some ways we are all interested in and affected by the use of digital and technological tools in our lives by now, both academic and personal. But is this enough to build a new field of academic research (which is what we are trying to do, in the end)? I believe that there is a lot more at stake with DH, and I will try to make my point later on. (To be fully honest here, I think that the next person who says in front of me that using computers and digital tools is meant to make humanities research easier will not survive my reaction, and normally I am a very quiet person.) Also, as James O’Sullivan pointed out recently, there is a tendency to assume a defensive position when talking about DH, what he calls a “knee-jerk reactionary way”, caused by the fact that this is the moment in which Digital Humanities is claiming to be a discipline in its own right
Digital Humanities is not as new as people might think, it’s just that the boundaries are only now being defined. The use of technology in humanist pursuits is not even a phenomena of what we have so proudly declared the “digital age”. […] When a new camp is established, the other camps get uneasy – how will the newcomers encroach upon their space? This anxiety is greeted with two things – attack and defence. […] Digital Humanities is here, and it’s here to stay. As already noted, it’s been here a while.[…] Digital humanists don’t need to convince the doubters – we simply need to justify our existence. There will always be doubters, particularly in the humanities, where subjectivity holds dominion.[…] Perhaps the doubters don’t actually doubt Digital Humanities, but rather, perhaps their apprehension is directed towards everyone suddenly claiming to have roots in this latest camp?
This is very well said, and in many ways I do not disagree with it, but I feel that there is really no need to take a defensive approach here. The depiction of the other side (the attacking camp, so to speak) is overstated, in my opinion. When James says that “it is all about attack”, well, I am not sure. There have been positions taken against DH, the most (in)famous being the one by Stanley Fish in his New York Times blog (but Fish’s argument against Digital Humanities dates back to the 1970s, as Stephen Ramsay pointed out). Judging from the number of jobs requesting DH skills nowadays, however, it seems to me that this kind of attack is more superficial than real. Deep down DH is fully accepted and requested, and in some sense this might not be a good thing for the discipline, especially if the acceptance comes from academic environments that do not really understand what is involved in serious DH work. Readings from the camp of Informatica Umanistica (the Italian version of DH – a definition more similar to Humanities Computing, in fact, since it avoids the term digital) convinced me that the problem we are facing (which in a sense might justify the defensive mechanism as internal rather than external) is due to the situation from which we speak as digital humanists. In spite of all the attempts to define what digital humanities is, we are still lacking, in my view, a clear and comprehensive definition. In fact, we might say that we have almost as many visions of digital humanities as digital humanists. Each of us has their own (which is probably true of literary scholars as well, I suspect, but luckily that camp is already well established). As a newly formed DAH group, last year, in fact, we spent an awful amount of time discussing what DH is, and I suspect that this was helpful for the teaching group as well (this year, judging from the work of the new MA here in Cork, everything is much more focused).
We revisited, at least I did, a debate on the scope, meaning, and nature (and indeed the sheer existence) of digital humanities as a field that occurred during the last five (perhaps ten) years, mainly in the Anglo-American academy (Ireland is kind of catching up now, I believe). Apparently we have moved past this debate, if what Underwood said is true. According to him, by now it should be easy to get a sense of what digital humanities as a discipline is:
There are a lot of ways to develop that kind of familiarity, from reading Matt Gold’s Debates in Digital Humanities, to surfing blogs, to blogging for yourself, to Lisa Spiro’s list of starting places in DH, to following people on Twitter, to thinking about digital pedagogy with NITLE, to affiliation with groups like HASTAC or NINES or 18th Connect.
This is not to say that humanities scholars discovered computers only ten years ago. Far from it. Actually the connection, in my opinion, dates back to the very moment in which the modern (say digital) computer (as opposed to Babbage’s mechanical machine) was conceived and developed. I will say more about that in a moment. For now I will stick a bit longer with the debate on DH.
A few years ago, I remember from my scant and random readings on the subject at that time (a few years before coming to Cork) that a similar debate took place in the Italian academy – more or less in the same period as the Anglo-American one, perhaps with a delay of a year – when an attempt was made to insert Informatica Umanistica (IU) into the government’s list of official academic disciplines. At that time the attempt failed, and I believe that even now IU is not in the list. Obviously there was an attempt to define it properly, and the debate was rich and complex, with different schools of thought (the Scuola romana, led by Tito Orlandi from La Sapienza; a group from the University of Pisa – where computational linguistics was already established in the 1960s by Antonio Zampolli, President of the ALLC for many years; “quelli di Milano”; etc.). Some of this debate can be read (in Italian) at Griselda Online (see, for instance, Francesca Tomasi’s All’origine delle humanities computer sciences). Recently, in light of what I am doing here now, I resurrected the few Italian books and articles I had on the subject, and added a few more recent ones, trying to understand what was going on in Italy regarding Digital Humanities. I must say that I found the reading very interesting. I will try to summarise some of it here, hoping that this will contribute to a better understanding of what we DH people are doing. I will focus on two articles by Domenico Fiormonte who, in my view, engages with some important issues. (I must say, as a “disclaimer”, that my choice might not be entirely “scientific”, but is due to the fact that Domenico is one of the two umanisti informatici that I have met in real life – the second one being Giuseppe Gigliozzi, who was teaching at my university in Rome when I was (under)graduating there. Gigliozzi unfortunately died, before his time as they say, a decade ago. Surely it was a loss for the discipline and the community of IU.)
However, there is science involved here as well. In my opinion, Fiormonte’s approach to DH is stimulating and engaging, also challenging at times, pushing boundaries in an interesting but also rigorous and uncompromising way.
Summarising what he gained from the Roman group of digital humanists as a student, Fiormonte mentions three important principles that should be followed by all researchers in DH:
- everything that in humanities is intuitive (starting with the idea of text) in the digital world needs to be formalized.
- shifting from analog to digital implies and requires a redefinition of what a “cultural object” is.
- every coding act (every kind of digitization) has to be based on a hermeneutic process.
These are kind of very basic principles, but they can easily be forgotten or taken for granted. In fact, another thing that Fiormonte highlights is that, to a certain extent, the current success of the label Digital Humanities is a risky one, in the sense that “the most important gains of informatica are coincident with the main worries of informatica umanistica: a superficial way of building practical applications, scant attention to the digitalization process, the geopolitical and linguistic upper hand of one part of the scientific community, the risk of losing and manipulating cultural memories, etc.”
A way to avoid a superficial approach is to keep in mind the interdisciplinarity proposed by Tito Orlandi and the group of informatici umanisti at La Sapienza in Rome. Fiormonte understands interdisciplinarity not only in the sense that digital humanists are collaborating with software developers, adopting (and adapting) software that can help in their research and teaching, but also in the sense that they (we) are taking part in a transformation of language itself.
Tito Orlandi’s approach to interdisciplinarity, praised by Fiormonte, is based on a “convergence between natural science and cultural science that moves beyond the practical applications”:
The encounter between informatica and the humanities is correctly interpreted [by Tito Orlandi] as a revolution of languages: a paradigmatic shift that fosters the possibility of a (re)unification of scientific discourse. Today, when disciplines that were usually considered distant, such as biology or, more recently, neuroscience, enter more and more often into the humanities camp, the kind of interdisciplinarity practiced by Orlandi in forty years of experimental work seems to be a model that was able to conjure up rigor and reciprocal scientific understanding and respect without creating or fostering hegemonic discourses and positions.
Interdisciplinarity might in fact be traced way back to the very beginning of the discipline, and of digital computing itself. Fiormonte and Numerico argue that the theoretical framework on which the first digital computers were built, after World War II, was a response not only to technological, military and industrial needs, but also to a number of questions asked more often in the humanities camp than in the scientific one, such as the alternative between “two different ideas on the nature of language and how it works”:
On one side there is a coalition between the vision of language as a formal structure (sintatticismo) with the idea that communication is purely a transfer of information, and on the other side, a conjunction between the idea of language as social interaction with a notion of computer as medium.
These were the two main paths along which computing and computer languages evolved, according to Fiormonte and Numerico. The origin of it can be traced to Turing’s theory of the universal machine. Turing’s innovation was the idea that the machine could (and should) operate not with numbers but with symbols, opening up a new space, a space for linguistic revolution. In a sense, we can say that, since the very beginning, the digital computer has been the battlefield of a cultural war between the so-called two cultures. If in the first part of the history of digital computing the engineering and mathematical camps were winning, and doing research in humanities computing was, at best, an odd and visionary thing (as for the likes of Busa and a few other pioneers), today, with the cultural advent of personal computing, the internet and social networks, the balance is no longer so clear, and the humanities are gaining more and more space, even claiming a part in the engineering of a new computer architecture. In an article in which he tries to address how “computer languages represent, model, and (re)construct knowledge” (a central point, as I have said earlier), Fiormonte (again!) states that fostering “an interaction between the discussions about digital representation with semiotics is one of the few ways to break into the mechanisms and probe the mechanics of symbolic production of digital tools and technologies, exposing both potentialities and problematic issues.” And he hints towards a humanistic point of view on the future of computing when he says that
we should not forget that formalism provided the model for the birth and development of informatica, allowing machines to achieve the amazing results that are in front of us today. But I am convinced – and I am not the only one – that the current model of the computing machine is reaching a critical point. And even if the contribution of a humanist to the creation of a new model can be considered irrelevant by many, I still think that it is important, maybe even necessary, to have such a discussion.
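As an aside, the idea credited to Turing here – a machine that manipulates symbols rather than numbers – can be made concrete with a toy sketch (my own illustration, not drawn from Fiormonte and Numerico): a transition table rewrites arbitrary marks on a tape, and no arithmetic is involved anywhere.

```python
# A minimal Turing-style machine. The table maps (state, symbol) to
# (symbol to write, head move, next state). Everything is pure symbol
# rewriting: the machine never "computes" in the numerical sense.
def run(tape, table, state="start", blank="_"):
    tape = list(tape)
    pos = 0
    while state != "halt":
        sym = tape[pos] if pos < len(tape) else blank
        write, move, state = table[(state, sym)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# A three-rule program that swaps the marks 'a' and 'b' across the tape.
swap = {
    ("start", "a"): ("b", "R", "start"),
    ("start", "b"): ("a", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("abba", swap))  # -> baab
```

Nothing in the table knows whether ‘a’ and ‘b’ stand for letters, digits, or anything else: the machine is a rewriter of symbols, which is precisely what opens it up to language.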
DH people routinely ask themselves if we should eventually drop the word digital, because what we are actually doing is the very same thing that we always did as humanists. I don’t disagree with this position. But, looking at the Italian label, informatica umanistica, I think that, given the interdisciplinary nature of informatica itself, we could also reverse the claim, dropping the adjective umanistica.
The question I am now asking myself (and anyone else who dares to answer, of course – comments are welcome) is:
Should we perhaps say that, as digital humanists, what we are actually doing (in fact, what we have been doing for a while already) is pure and simple informatica, with no humanistic or scientific label attached? Would this be too daring, too far-fetched?
Once upon a time and a very good time it was there was a MOOC coming down through the wires and this MOOC that was coming down through the wires met a vicious digital native named baby linux…
I am sure one of the future generations will start the story in this way. But which generation it will be, and what story they will tell, is still not clear. Being in the middle of a transformation of the way knowledge is managed, shaped, remembered, and archived (as we undoubtedly are), the question of pedagogy and education is in my view a key issue. And it will be part of what I will be writing about in this blog.
Massive Open Online Courses (MOOCs) are the most recent and most hyped development in education, after the New York Times declared 2012 to be The Year of the MOOC. They are becoming a worldwide phenomenon at the crossroads between e-learning (as something that has “the potential to transform the educational transaction towards the ideal of a community of inquiry” – see Garrison 2011) and distance education, a movement that is at least a century old.
The first MOOC was developed by Stephen Downes and George Siemens in 2008, when they opened up their course on educational technologies and methods (the name of the course was Connectivism and Connective Knowledge, or CCK08) to external, non-paying, not-for-credit students. Although it was only considered “a pilot project in an emerging-technologies certificate program” (Parry 2010), it was extremely successful and ignited some debate thanks to Dave Cormier and others. Cormier was the one who came up with the acronym during a chat with Siemens.
As with every kind of new technology, MOOCs will be adopted by users in unpredictable ways, and we will need some time to understand how they will change the educational environment. At the moment I have to say that the impression is not very positive, with a few interesting exceptions. Unfortunately, what was the product of the open education movement became part of a corporate and imperialistic endeavor. In a recent post, the historian Martin Grandjean spoke about the imperialism of the MOOC. Grandjean wrote that
aucun MOOC ne remplit à ce jour la mission de diffuser un savoir, mais bien plutôt d’occuper un terrain stratégique dans un paysage universitaire globalisé en complète recomposition (ou decomposition)
Moocs, today, are not developed as a tool to distribute knowledge, but as an attempt to occupy a field in the globalized academic landscape that is being transformed, or decomposed. (my translation)
Grandjean is addressing xMOOCs (“professor-centric massive courses” where “teacher-to-student interaction is non-existent” – see what Degree of Freedom has to say on that) rather than cMOOCs (“massive online education built around connectivity“) such as the one by Siemens and Downes. The former are becoming dominant, with providers such as Coursera currently leading the field. The risks underlined by Grandjean concern the narrowing of the offerings to a few dominant academic schools (because of a mixture of money, power, prestige, and last and maybe, sadly, least, scientific research). Even if the best schools and the most prestigious scholars become the facilitators of the most successful massive courses, this will inevitably cause a shrinking of academic debate and educational opportunity. Not to mention the fact that they will bring us back to a kind of frontal, one-way teaching that makes the student a passive consumer rather than an active producer of knowledge.
The risks are very real and need to be taken into account. It is true that
nothing prevents students from going beyond these existing tools and creating their own intimate communities (by taking classes in a group, for example, or building their own small online communities of committed classmates) to fill these gaps
as Degree of Freedom says, but we should not forget that students may not have enough self-confidence to follow that path or enough knowledge and expertise to understand the risks, and will probably need some guidance to go beyond what is on offer.
How to use MOOCs will be a negotiation between a massive student population and the global(ised) education system. Students have their own reasons, and quite often these are not entirely related to acquiring credits. Cost-free MOOCs are becoming part of this huge information repository called the web, almost as an alternative to Wikipedia (which is also creating a Wikiversity, in fact) whenever there is a need to know something about a topic (part of the transformational process called digital humanities – or humanities computing – is led by the question of how we know what we think we know, and why, as “Obi Wan” McCarty reminded us a few years ago). This, at least, is what is implied in a short how-to video by Dave Cormier.
Where this transformation will lead is still not very clear, but what we apparently need is to hack the institutionalised MOOCs, transforming them into something more open and more respectful of the needs of students all over the world.
This, in fact, is what happened with the original mooc by Siemens and Downes who were “hacking the format of a class.”
What we are facing is the same old story, only with new technology.
For all these reasons, I decided to follow the debate on Moocs. More posts on the topic will appear in the near future.
- research practices and activities that adopted digital tools in a more systematic way
- epistemological models (McCarty was indeed mentioned at the meeting yesterday) characterising the so-called “e-sciences”
- ways of disseminating research using new communicational and networking tools (under the name OpenEdition they combined together blogs, online journals, events and academic books, creating a rich, multidimensional research environment that still did not show its entire potential, in my view)
- transforming academic policies for human and social sciences, emphasising a need for a renovation in research infrastructures
- opening up opportunities to discuss the relation between science and society
You can read (in French) about the meetings in 2011, 2012, 2013, as well as a series of Comptes Rendus (not sure of the French spelling here) on the blog Philologie à venir (pretty much focused on the seminar activities).
Yesterday’s meeting was obviously in French, so I will not attempt to write a full report (besides, I am sure there will be an “official” one soon enough). I just want to focus on a couple of very interesting suggestions I got from what I could understand (my French is rusty at best). The meeting was meant to introduce what is happening in Switzerland DH-wise, and the two speakers were both from Lausanne. The first one, Claire Clivaz, after listing the different Swiss universities involved in DH, talked about the materiality of the digital, introducing an intriguing distinction between hybridity, the term most frequently used when speaking of the “mixing together” of man and machine, and porosity. The term hybrid conjures up something monstrous (Frankenstein was a hybrid, so to speak), and even etymologically the monstrous is there. On the other hand, porosity is a more “natural” way of contamination, of mixing up elements. Geologically, “porosity is a measure of how much of a rock is open space. This space can be between grains or within cracks or cavities of the rock”. And precisely the metaphor of a rock was the one she used to make her point, using as a source Plato’s Ion, where Socrates speaks about the “stone of Heraclea“:
This stone not only attracts iron rings, but also imparts to them a similar power of attracting other rings; and sometimes you may see a number of pieces of iron and rings suspended from one another so as to form quite a long chain: and all of them derive their power of suspension from the original stone. In like manner the Muse first of all inspires men herself; and from these inspired persons a chain of other persons is suspended, who take the inspiration.
In the digital this porosity is not lost, but is reshaped differently, thanks to ontologies, the internet of things, all things that eventually lead to a metamorphosis of objects and to the possibility of post-humans. As I said before, I am not sure I fully understood all the nuances of what she was saying, but I find all this really fascinating. I will have to think it over, starting by re-reading Plato’s dialogue.
The second speaker, Frédéric Kaplan, talked about the big data of the past that you can find in the Venice State Archive: extremely rich, a real labyrinth that gives the physical impression, when you walk through the shelves, of a “Google memory of the Middle Ages”. The problem is: how to bring this into the digital? This is what they are trying to do with the Venice Time Machine. Obviously it is something that one cannot do alone; the only way is through collaboration using dedicated algorithms. Kaplan mentioned William Thomas III’s essay Computing and Historical Imagination, written in 2004, using it to make two very specific, and key, points:
- in the so-called spatial turn in the humanities, algorithms played a very important role
- today there is a strong need to educate a new generation of historians, with skills that were not available, nor, perhaps, needed in the past
He went on to describe a kind of Google Map of the past, asking if this was achievable, technologically and epistemologically. The first step would be the digitization of the existing data (i.e. the archives), which will create a suitable map for the more recent period, when memory and archives were organised in a way we can fully understand. For the period in which this kind of data is not there, what we need is a simulation. To be honest, I found this interesting, but I have some doubts about the epistemological issues involved. Perhaps, more than a simulation, we need to adapt the instrument to a different way of organising knowledge. It might be that this was exactly what he meant, and that something was lost in translation; I don’t know.
Another very good point he made was the idea of a convergence of education and research, that is becoming more important in a digital environment.
At the end of the day, it was indeed a stimulating afternoon.
From a letter sent by Domenico Fiormonte to Humanist yesterday:
On 31st January 2014, Emilio Del Giudice, a great physicist, friend and supporter, as well as an influential member of the international *New Humanities* research group passed away. In such cases one always speaks of “emptiness” but such a word cannot be uttered without recalling the quantum vacuum which Emilio told us about for the first time three years ago. I can now imagine him riding that quantum wave in that material dimension that he explored with scientific rigor as well as with merry images and light-hearted poetry (“science is nothing but a metaphor,” he loved to say).
This loss touches us on a profound level because without Emilio *New Humanities* would probably not have been born. We have to thank another physicist (but also a humanist), Paolo De Santis, for letting us discover his character and fascinating research on the memory of water, biological matter and electromagnetic fields, the notion of self and the concept of quantum resonance, etc. All these investigations deeply involved our perception of what means today to be human. “The three bullets secret“, a novel on the military secrets of tactic nuclear weapons, was the book that made him known to the general public in Italy, and also one of the few volumes that bears his signature because he was too versatile and generous to devote himself to monographs. Emilio opened us up to a world and renewed our hopes for a new knowledge. Thanks to him that “new cultural code” that
we all desperately were trying to build and protect from the ethical and cultural wreck of our institutions seemed miraculously at hand.
Not only was “Quantum Physics’ contribution to the idea of consciousness” based on his openness, availability and depth of vision, but in 2011, along with Marcello Buiatti, he was the protagonist of that amazing dialogue “Towards new humanities” that shook us from deadly government educational reform, prompting us to create something new while all drowned in the hermeneutics of the reform bill. If, even to a small extent, we were really able to do something new and didn’t succumb, this was thanks to Emilio, a scientist who “recognized” us as equal partners so that we, humanists sometimes too shy with our ideas, cheered and found courage. Today, someone much more important than us mirrored our work, others theorize things alike, while we put them into practice.
Our task, therefore, is not to lose heart and continue on that road. But we know that without Emilio the path seems all-uphill.
I sincerely hope they will not give up, and I look forward to more meetings with them in the future. And to quantum computing as well, of course.
All the readings here were made on November 28, 2013, but they could still be interesting.
In the morning, before starting the real work, I usually go social and read around in feeds, tweets, etc. I always wanted to write some sort of summary, just to keep a record of what I see. Today I decided to do it, perhaps inspired by this post on the preservation of personal memories.
There is a full guide on personal archiving at the Library of Congress, full of links and pieces of advice.
Two interesting posts deal with book history and the potential of multispectral imaging to help recover data from damaged manuscripts, a technique used by Alejandro Giacometti, a PhD student in Digital Humanities at King’s College London.
For the more technically inclined, the W3C, the consortium behind the Web, published its highlights for November 2013, a “survey of select recent work and upcoming priorities“ that can give you an idea of where the Web is going. Particularly interesting for digital humanists, and academics in general, is the discussion of “how the Open Web Platform is transforming digital publishing”, among other things (such as automotive, television, entertainment). Perhaps a full-blown post is lurking there somewhere, waiting to be written.
Speaking of which, Melissa Terras published what I might call an “educated rant” against the publishing industry’s unreasonable fees for open access publishing.
An upcoming, open workshop on Linking Spatial Data (London, sometime in March) could be of interest to spatial humanists.
The Zen part of me likes posts on how to unclutter my life (not that I succeed in doing it, but I will keep trying…)
Henry Jenkins published the third part of Participatory Poland. I always read his posts.
Not very sure what to make of this.
Two things are planned today in the outside world, as you can see from the above calendar:
Open Source Developers Conference (OSDC) 2013 in Auckland, NZ : #osdc2013
Sustainable history: ensuring today’s digital history survives (Institute of Historical Research, UCL)
I will try to follow tweets from both. Keep you posted (maybe).
Finally, you might be interested in some thoughts on The myth of virtual currency. Creative financing has done enough damage in the recent past; maybe we should avoid the same fate with virtual currency?
But I should of course end with something more positive, so I will add a guide to happiness, which seems to me the best possible conclusion.
It looks like it took too long. I will have to improve the process next time. Probably I should call it daily readings and add stuff during the whole day, whenever I read something interesting. It could become something like a digital humanist’s day, kept up for the whole year. We will see.
Communication Theories in a Multicultural World
Bern, Peter Lang, 2014
Book synopsis
This volume is an up-to-date account of communication theories from around the world. Authored by a group of eminent scholars, each chapter is a history and state-of-the-art description of the major issues in international communication theory. While the book draws on an understanding of communication theory as a product of its socio-political and cultural context, and the challenges posed by that context, it also highlights each author’s lifetime effort to critique the existing trends in communication theory and bring out the very best in each multicultural context.
Contents
Kaarle Nordenstreng: Preface: Toward a Better World
Robert A. White: Keeping the Public Sphere(s) Public
Brenda Dervin/ Peter Shields: Talking Communicatively About Mass Communication in Communication Theories: Beyond Multiplicity, Toward Communicating
Denis McQuail: Social Scientific Theory of Communication Encounters Normativity: A Personal Memoir
Janet Wasko: Understanding the Critical Political Economy of the Media
Peter Golding/Karen Williamson: Power, Inequality, and Citizenship: The Enduring Importance of the Political Economy of Communications
Roger Bromley: Cultural Studies: Dialogue, Continuity, and Change
Michael Real/David Black: A Mutually Radicalizing Relationship: Communication Theory and Cultural Studies in the United States
Jesús Martin-Barbero: Thinking Communication in Latin America
Joseph Oládèjo Fáníran: Toward a Theory of African Communication
Keval J. Kumar: Theorizing About Communication in India: Sadharanikaran, Rasa, and Other Traditions in Rhetoric and Aesthetics
Thomas Tufte: Voice, Citizenship, and Civic Action: Challenges to Participatory Communication
Stewart M. Hoover: Media, Culture, and the Imagination of Religion
Pradip N. Thomas: Theorizing Development, Communication, and Social Change
Cees J. Hamelink: Human Rights and Communication: Reflections on a Challenging Relationship
Ruth Teer-Tomaselli/Keyan G. Tomaselli: Struggle, Vatican II, and Development Communication Practice
Paul A. Soukup, SJ: Media Ecology
Theodore L. Glasser/Isabel Awad: Journalism, Multiculturalism, and the Struggle for Solidarity
Clifford G. Christians: Media Ethics in Transnational, Gender Inclusive, and Multicultural Terms
About the editor(s)
Clifford Christians is Research Professor of Communications and Professor of Journalism Emeritus at the University of Illinois at Urbana-Champaign.
Kaarle Nordenstreng is Professor Emeritus of Journalism and Mass Communication at the University of Tampere, Finland.
Christian Fuchs, Social Media: A Critical Introduction. London: Sage.
Paperback ISBN 9781446257319
Hardcover ISBN 9781446257302
Now more than ever, we need to understand social media – the good as well as the bad. We need critical knowledge that helps us to navigate the controversies and contradictions of this complex digital media landscape. Only then can we make informed judgements about what’s happening in our media world, and why.
Showing the reader how to ask the right kinds of questions about social media, Christian Fuchs takes us on a journey across social media, delving deep into case studies on Google, Facebook, Twitter, WikiLeaks and Wikipedia. The result lays bare the structures and power relations at the heart of our media landscape.
This book is the essential, critical guide for understanding social media and for all students of media studies and sociology. Readers will never look at social media the same way again.
Introduction
Sample chapter: Twitter and Democracy: A New Public Sphere?
CONTENTS:
1. What is a Critical Introduction to Social Media? 1
I. FOUNDATIONS 29
2. What is Social Media? 31
3. Social Media as Participatory Culture 52
4. Social Media and Communication Power 69
II. APPLICATIONS 95
5. The Power and Political Economy of Social Media 97
6. Google: Good or Evil Search Engine? 126
7. Facebook: A Surveillance Threat to Privacy? 153
8. Twitter and Democracy: A New Public Sphere? 176
9. WikiLeaks: Can We Make Power Transparent? 210
10. Wikipedia: A New Democratic Form of Collaborative Work and Production? 235
III. FUTURES 251
11. Conclusion: Social Media and its Alternatives – Towards a Truly Social Media 253
References 267
Index 289