The Telegraph ran an article about a sizable — and growing — number of Catholic pilgrims arriving in a small village in the Pyrenean foothills. With 94 residents, the town has no hotels or shops — a fact that has left some of the new arrivals a bit confused. The town does have a small statue of the Virgin Mary which some pilgrims have worshiped at. Most pilgrims have noted that the town seems curiously quiet for Catholicism’s third largest pilgrimage site.
The village is Lourde. Without an “s”. The pilgrims, of course, are looking for Lourdes. The statue some pilgrims have prostrated themselves in front of is not the famous Statue of Our Lady at the Grotto of Massabielle but a simple village statue of the Virgin. Lourde is 92 kilometers (57 miles) to the east of the larger and more famous city with the very similar name.
Given the similar names, pilgrims have apparently been showing up at Lourde for as long as the residents of the smaller village can remember. But villagers report a very large uptick in confused pilgrims in recent years. To blame, apparently, is the growing popularity of GPS navigation systems.
Pilgrims have typed “L-O-U-R-D-E” into their GPS navigation devices and forgotten the final “S”. Indeed, with clunky on-screen keyboards and automatic completion, it’s often easier to type the name of the tiny village than the name of the more likely destination. One letter off and only 92 kilometers away in the same country, it’s an easy mistake to make: the affordances of many GPS navigation systems make it slightly easier to ask to go to Lourde than to Lourdes. Apparently, twenty or so cars of pilgrims show up in Lourde each day, sometimes carrying as many people as live in the town itself!
The GPS navigation systems, of course, will happily route drivers to either place and do not know or care that Lourde is rarely the destination a driver navigating from across Europe wants. The GPS is designed to show drivers their next turn, so a driver won’t know they’re off course until they reach their destination. The systems assume that destinations were entered correctly. A human navigator asked for directions would never point a person to the smaller village. Indeed, they would probably not know it even exists.
A municipal councilor in Lourde suggested that “the GPS is not at fault. People are.” Of course, she’s correct: pilgrims typed in the name of their destination incorrectly. But the reason more people are making this particular mistake is that the technology people use to navigate in their cars has changed dramatically over the last decade in a way that makes the mistake more likely. A dwindling number of people pore over maps or ask a passer-by or a gas station attendant for directions. On the whole, navigation has become more effective and more convenient. But not without trade-offs and costs.
GPS technology frames our experience of navigation in ways that are profound, even as we usually take it for granted. Unlike a human, the GPS will never suggest a short detour that leads us to a favorite restaurant or a beautiful vista we’ll be driving by just before sunset. As in the case of Lourde, it will make mistakes no human would (the reverse is also true, of course). In this way, the twenty cars of confused pilgrims showing up in Lourde each day can remind us of the power that technologies have over some of the little tasks in our lives.
The picture, of course, is a bag of Tao brand jasmine rice for sale in Germany. The error is pretty obvious if you understand a little German: the phrase transparentes Sichtfeld literally means “transparent field of view.” In this case, the phrase is a note written by the graphic designer of the rice bag’s packaging that was never meant to be read by a consumer. It was supposed to tell someone involved in the bag’s manufacture that the pink background on which the text is written should remain unprinted (i.e., as transparent plastic) so that customers get a view directly onto the rice inside the bag.
The error, of course, is that the pink background and the text were never removed. This was possible, in part, because the pink background doesn’t look horribly out of place on the bag. A more important factor, however, is that the person printing the bag and bagging the rice almost certainly didn’t speak German.
In this sense, this bears a lot of similarity to some errors I’ve written up before — e.g., the Welsh autoresponder and the Translate server error restaurant. And as in those cases, there are takeaways here about all the things we take for granted when communicating using technology — things we often don’t notice until a language barrier lets an error like this one thrust hidden processes into view.
This error revealed a bit of the processes through which these bags of rice are produced and a little about the people and the division of labor that helped bring them to us. Ironically, the error is revealing precisely through the way that the bag fails to reveal its contents.
SSL helps provide security for users in at least two ways. First, it helps keep communication encoded in such a way that only you and the site you are communicating with can read it. The Internet is designed in a way that makes messages susceptible to eavesdropping; SSL helps prevent this. But sending coded messages only offers protection if you trust that the person you are communicating in code with really is who they say they are. For example, if I’m banking, I want to make sure the website I’m using really is my bank’s and not some phisher trying to get my account information. The fact that we’re talking in a secret code will protect me from eavesdroppers but won’t help me if I can’t trust the person I’m talking in code with.
To address this, web browsers come with a list of trusted organizations that verify or vouch for websites. When one of these trusted organizations vouches that a website really is who it says it is, it issues what is called a “certificate” that attests to this fact. A certificate for revealingerrors.com would help users verify that they really are viewing Revealing Errors, and not some intermediary, impostor, or stand-in. If someone were to redirect traffic meant for Revealing Errors to an intermediary, users connecting over SSL would get an error message warning them that the certificate offered is invalid and that something might be awry.
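If you want to poke at this machinery yourself, here is a minimal Python sketch of that verification step. The hostname is only a placeholder, and the code is an illustration of the idea rather than a description of how any particular browser implements it.

```python
import socket
import ssl

# Connect over TLS and look at the certificate the server actually presents.
# With Python's default settings, a certificate that does not vouch for the
# name we asked for causes verification to fail, which is the command-line
# analogue of the browser warning described in this post.
hostname = "www.example.com"  # placeholder; substitute any site you like
context = ssl.create_default_context()
try:
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("subject:", cert["subject"])  # who the certificate says the server is
            print("issuer:", cert["issuer"])    # the trusted organization vouching for it
except ssl.SSLCertVerificationError as err:
    print("certificate mismatch or verification failure:", err)
```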
That bit of background provides the first part of the explanation for this error message.
In this image, a user attempted to connect to the Whitehouse.gov website over SSL — visible from the https in the URL bar. Instead of a secure version of the White House website, however, the user saw an error explaining that the certificate attesting to the identity of the website was not from the United States White House, but rather from some other website called a248.e.akamai.net.
This is a revealing error, of course. The SSL system, normally represented by little more than a lock icon in the status bar of a browser, is thrust awkwardly into view. But this particularly revealing error has more to tell. Who is a248.e.akamai.net? Why is their certificate being offered to someone trying to connect to the White House website?
a248.e.akamai.net is the name of a server that belongs to a company called Akamai. Akamai, while unfamiliar to most Internet users, serves between 10 and 20 percent of all web traffic. The company operates a vast network of servers around the world and rents space on these servers to customers who want their websites to work faster. Rather than serving content from their own computers in centralized data centers, Akamai’s customers can distribute content from locations close to every user. When a user goes to, say, Whitehouse.gov, their computer is silently redirected to one of Akamai’s copies of the White House website. Often, the user will receive the web page much more quickly than if they had connected directly to the White House servers. And although Akamai’s network delivers more than 650 gigabits of data per second around the world, it is almost entirely invisible to the vast majority of its users. Nearly anyone reading this uses Akamai repeatedly throughout the day and never realizes it. Except when Akamai doesn’t work.
Akamai is an invisible Internet intermediary on a massive scale. But because SSL is designed to detect and highlight hidden intermediaries, Akamai has struggled to make SSL work with its service. Although Akamai offers a way for its customers to use the network with SSL, many customers do not take advantage of it. The result is that SSL remains one place where, through error messages like the one shown above, Akamai’s normally hidden network is thrust into view. An attempt to connect to a popular website over SSL will often reveal Akamai. The White House is hardly the only victim; Microsoft’s Bing search engine launched with an identical SSL error revealing Akamai’s behind-the-scenes role.
Akamai plays an important role as an intermediary for a large chunk of all activity online. Not unlike Google, Akamai has an enormous power to monitor users’ Internet usage and to control or even alter the messages that users send and receive. But while Google is repeatedly — if not often enough — held to the fire by privacy and civil liberties advocates, Akamai is mostly ignored.
We appreciate the power that Google has because they are visible — right there in our URL bar — every time we connect to Google Search, GMail, Google Calendar, or any of Google’s growing stable of services. On the other hand, Akamai’s very existence is hidden and their power is obscured. But Akamai’s role as an intermediary is no less important due to its invisibility. Errors provide one opportunity to highlight Akamai’s role and the power they retain.
Every reader of FAILblog can chuckle at the idea of an item being offered for $69.98 instead of an original $19.99 as part of a clearance sale. The idea that one can “Save $-49” is icing on the cake. Of course, most readers will immediately assume that no human was involved in the production of this sign; it’s hard to imagine that any human even read the sign before it went up on the shelf!
The sign was made by a computer program working from a database or a spreadsheet with a column for the name of the product, a column for the original price, and a column for the sale price. Subtracting the sale price from the original gives the “savings” and, with that data in hand, the sign is printed. The idea of negative savings is a mistake that only a computer will make and, with the error, the sign-producing computer program is revealed.
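A minimal sketch of that kind of sign-generating logic might look something like the following Python; the product name and prices are made up, and the point is only that nothing in the program checks whether the “savings” are actually positive.

```python
# Hypothetical sign generator: one row of the price database in, one sign out.
# Nothing here asks whether the sale price is actually lower than the original,
# so a mispriced row happily produces a sign offering negative savings.
def make_sign(product: str, original: float, sale: float) -> str:
    savings = original - sale
    return (f"CLEARANCE: {product}\n"
            f"Was ${original:.2f}  Now ${sale:.2f}\n"
            f"Save ${savings:.2f}!")

# A row like the one behind this sign: original $19.99, "sale" price $69.98.
print(make_sign("Storage bin", 19.99, 69.98))
```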
Errors like this, and FAILblog’s work in general, highlight one of the reasons that I think errors are such a great way to talk about technology. FAILblog is incredibly popular, with millions of people checking in to see the latest pictures and videos of screw-ups, mistakes, and failures. For whatever reason — sadism, schadenfreude, reflection on things that are surprisingly out of place, or the comfort of knowing that others have it worse — we all know that a good error can be hilarious and entertaining.
My own goal with Revealing Errors centers on a type of technology education. I want to reveal hidden technology as a way of giving people insight into the degree to which, and the ways in which, our lives are technologically mediated. In the process, I hope to lay the groundwork for talking about the power that this technology has.
But if people are going to want to read anything I write, it should also be entertaining. Errors are appropriate for a project like mine because they give a view into closed systems, hidden intermediaries, and technological black boxes. But they are also great for the project because they are intrinsically interesting!
The caption should have said the “Quorum of the Twelve Apostles,” which is the name of the governing body in question. An apostle, of course, is a messenger or ambassador, although the term is most often used to refer to Jesus’ twelve closest disciples. The term apostle is used in the LDS Church to refer to a special high rank of priest within the church. An apostate is something else entirely; the term refers to a person who is disloyal and unfaithful to a cause — particularly to a religion.
Shocked that the paper had labeled the highest priests in the church as disloyal and unfaithful, the university pulled thousands of copies of the paper (18,500 by one report) from newsstands around campus. New editions of the paper with a fixed caption were printed as replacements at what must have been an enormous cost to BYU and the Daily Universe.
The source of the error, says the university’s spokesperson, was in a spellchecker. Working under a tight deadline, the person spell-checking the captions ran across a misspelled version of “apostles” in the text. In a rush, they clicked the first term in the suggestion list which, unfortunately, happened to be a similarly spelled near-antonym of the word they wanted.
From a technical perspective, this error is a version of the Cupertino effect, although the impact was much more strongly felt than in most examples of Cupertino. Like Cupertino, BYU’s small disaster can teach us a whole lot about the power and effect of technological affordances. The spell-checking algorithm made it easier for the Daily Universe’s copy editor to write “apostate” than to write “apostle” and, as a result, they did exactly that. A system with different affordances would have had different effects.
The affordances in our technological systems are constantly pushing us toward certain choices and actions over others. In an important way, the things we produce and say and the ways we communicate are the product of these affordances. Through errors like BYU’s, we get a glimpse of these usually hidden affordances in everyday technologies.
The English half of the sign is printed correctly and says, “No entry for heavy goods vehicles. Residential site only.” Clearly enough, the point of the sign is to prohibit truck drivers from entering a residential neighborhood.
Since the sign was posted in Swansea, Wales, the bottom half of the sign is written in Welsh. The translation of the Welsh is, “I am not in the office at the moment. Send any work to be translated.”
It’s not too hard to piece together what happened. The bottom half of the sign was supposed to be a translation of the English. Unfortunately, the person ordering the sign didn’t speak Welsh. When he or she sent it off to be translated, they received a quick response from an email autoresponder explaining that the email’s intended recipient was temporarily away and that they would be back soon — in Welsh.
Unfortunately, the representative of the Swansea council thought that the autoresponse message — which is, coincidentally, about the right length — was the translation. And onto the sign it went. The autoresponse system was clearly, and widely, revealed by the blunder.
One thing we can learn from this mishap is simply to be wary of hidden intermediaries. Our communication systems are long and complex; every message passes through dozens of computers with a possibility of error, interception, surveillance, or manipulation at every step. Although the representative of the Swansea council thought they were getting a human translation, they, in fact, never talked to a human at all. Because the Swansea council didn’t expect a computerized autoresponse, they didn’t consider that the response was not sent by the recipient.
Another important lesson, one also present in the Chinese examples, is that software needs to give users responses in a language they understand if those responses are to be interpreted correctly. In the translation context, where users plan to use, but may not understand, their program’s output, this is often impossible. When a person has someone, or some system, translate into a language they do not speak, they open themselves up to these types of errors. A user who does not understand the output of a system is put completely at the whim of that system. The fact that we usually do understand our technology’s output provides a set of “sanity checks” that can keep this power in check. We are so susceptible to translation errors because, in translation, those checks are necessarily removed.
If, like most people, you have trouble parsing the agreement, that’s because it’s not the text of the license agreement that’s being shown but the “marked up” XHTML code. Of course, users are only supposed to see the processed output of the code and not the code itself. Something went wrong here and Mark was shown everything. The result is useless.
Conceptually, computer science can be boiled down to a process of abstraction. In an introductory undergraduate computer science course, students are first taught syntax or the mechanics of writing code that computers can understand. After that, they are taught abstraction. They’ll continue to be taught abstraction, in one way or another, until they graduate. In this sense, programming is just a process of taking complex tasks and then hiding — abstracting — that complexity behind a simplified set of interfaces. Then, programmers build increasingly complex tools on top of these interfaces and the whole cycle repeats. Through this process of abstracting abstractions, programmers build up systems of almost unfathomable complexity. The work of any individual programmer becomes like a tiny cog in a massive, intricate machine.
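To make that idea concrete, here is a toy Python sketch, with entirely hypothetical names, of the kind of layering involved in something like Mark’s installer. Each function hides the messier work of the layer below it; Mark’s error is what happens when the output of a lower layer escapes without passing through the layer that was supposed to hide it.

```python
# A toy stack of abstractions (all names hypothetical).
def render_markup(markup: str) -> str:
    # Pretend a full XHTML parser, layout engine, and font handling hide here.
    return markup.replace("<b>", "").replace("</b>", "")

def show_license(markup: str) -> None:
    # The installer only calls this; it never thinks about how rendering works.
    print(render_markup(markup))

show_license("Please read this <b>license agreement</b> carefully.")
```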
Mark’s error is interesting because it shows a ruptured black box — an acute failure of abstraction. Of course, many errors, like the dialog shown below, show us very little about the software we’re using.

With errors like Mark’s, however, users are quite literally presented with a view of parts of the system that programmer was trying to hide.
Here’s another photo I’ve been showing in my talks. It shows a crashed ATM displaying bits of the source code of the application running on the machine, a bit of unintentional “open sourcing.”
These examples are embarrassing for authors of the software that caused them but are reasonably harmless. Sometimes, however, the window we get into a broken black box can be shocking.
In talks, I’ve mentioned a configuration error on Facebook that resulted in the accidental publication of the Facebook source code. Apparently, people looking at the code found little pieces like these (the comments were written by Facebook’s authors):
$monitor = array( '42107457' => 1, '9359890' => 1);
// Put baddies (hotties?) in here

/* Monitoring these people's profile viewage.
   Stored in central db on profile_views.
   Helpful for law enforcement to monitor stalkers and stalkees. */
The first block describes a list of “baddies” and “hotties” represented by user ID numbers that Facebook’s authors have singled out for monitoring. The second stanza should be self-explanatory.
Facebook has since taken steps to avoid future errors like this. As a result, we’re much less likely to get further views into their code. We have every reason to believe that this code, or other code like it, still runs on Facebook. But as long as Facebook’s black box works better than it has in the past, we may never again know exactly what’s going on.
Like Facebook’s authors, many technologists don’t want us knowing what our technology is doing. Sometimes, as with Facebook, they have good reason: the technology we use is doing things that we would be shocked and unhappy to hear about. Errors like these provide a view into some of what we might be missing and reasons to be discomforted by the fact that technologists work so hard to keep us in the dark.
The arrow points to a paragraph that is definitely not in German. In fact, it’s Latin. Well, almost Latin.
The paragraph is a famous piece of Latin nonsense text that starts with, and is usually referred to as, lorem ipsum. Lorem ipsum has apparently been in existence (in one form or another), and in use by the printing and publishing industry, for centuries. Although it was originally derived from a text by Cicero, the Latin is meaningless.
The story behind lorem ipsum is rooted in the fact that when presented with text, people tend to read it. For this reason, and because sometimes text for a document doesn’t exist until late in the process, many text and layout designers do what’s called Greeking. In Greeking, a designer inserts fake or “dummy” text that looks like real text but, because it doesn’t make any sense, lets viewers focus on the layout without the distraction of “real” words. Lorem ipsum was the printing industry’s standard dummy text. It continues to be popular in the world of desktop and web publishing.
In fact, lorem ipsum is increasingly popular. The rise of computers and computer-based web and print publishing has made it much easier and more common for text layout and design to be prototyped and much more likely that a document’s designer is not the same person or firm that publishes the final version. While both design and publishing would have been done in print houses half a century ago, today’s norm is for web, graphic, print and layout designers to give their clients pages or layouts with dummy text — often the lorem ipsum text itself. Clients — the “real” text’s producers, that is — are expected to replace the dummy text with the real text before printing or uploading their document to the web.
We can imagine what happened in this example. The clothing shop hired a web design firm who turned over the “greeked” layout to the store owners and managers. The store managers replaced the greeked text with information about their products and services. Not being experts — or just because they were careless — they missed a few spots and some of the greeked text ended up published to the world by mistake.
A quick look around the web shows that this shop is in good company. Although lorem ipsum is often preferred because the spacing makes the text “look like” English from a distance, many other dummy texts are both used and abused. Here’s an example from an auto advertisement.

Desktop publishing has rapidly and radically changed roles — changes in structure and in the division of labor that are usually invisible — and as a result you can see accidentally published lorem ipsum text all over the web, and in all sorts of places in the printed world as well. We don’t often reflect on the changes in the human and technological systems behind web and desktop publishing. Errors like these give us an opportunity to do so.
This error was revealed and written up by Fred Beneson and first published on his blog.
After receiving criticism for the privacy-violating “feature” of Google Street View that enabled anyone to easily identify people who happened to be on the street as Google’s car drove by, the search giant started blurring faces.
What is interesting, and what Mako would consider a “Revealing Error,” is when the auto-blur algorithm cannot distinguish between a face in an advertisement and a regular human’s face. For the ad, the model has been compensated to have his likeness (and privacy) commercially exploited for the brand being advertised. On the other hand, there is a legal grey area as to whether Google can do the same for random people on the street, and rather than face more privacy criticism, Google chooses to blur their identities to avoid raising the issue of whether it has the right to do so, at least in America.
So who cares that the advertisement has been modified? The advertiser, probably. If a 2002 case is any indication, advertisers do not like it when their carefully placed and expensive Manhattan advertisements get digitally altered. While the advertisers lost a case against Sony for changing (and charging for) advertisements in the background of Spiderman scenes located in Times Square, it’s clear that they were expecting their ads to actually show up in whatever work happened to be created in that space. There are interesting copyright implications here, too, as the case demonstrates an implicit desire by big media for work like advertising to be reappropriated and recontextualized because it serves the point of getting a name “out there.”
To put my undergraduate philosophy degree to use, I believe these cases bring up deep ethical and ontological questions about the right to control and exhibit realities (Google Street View being one reality, Spiderman’s Times Square being another) as they relate to the real reality. Is it just the difference between a fictional and a non-fictional reality? I don’t think so, as no one uses Google Maps expecting to retrieve information that is fictional. Regardless, expect these kinds of issues to come up more and more frequently as Google increases its resolution and virtual worlds merge closer to real worlds.
Quaker Maid Meats Inc. on Tuesday said it would voluntarily recall 94,400 pounds of frozen ground beef panties that may be contaminated with E. coli.

Of course the article was talking about beef patties, not beef panties.
This error can be blamed, at least in part, on a spellchecker. I talked about spellcheckers before when I discussed the Cupertino effect, which happens when someone spells a word correctly but is prompted to change it to an incorrect word because the spellchecker does not contain the correct word in its dictionary. The Cupertino effect explains why the New Zealand Herald ran a story with Saddam Hussein’s name rendered as Saddam Hussies and Reuters ran a story referring to Pakistan’s Muttahida Quami Movement as the Muttonhead Quail Movement.
What’s going on in the beef panties example seems to be a little different and more subtle. Both “patties” and “panties” are correctly spelled words that are one letter apart. The typo that changes patties to panties is, unlike swapping Cupertino in for cooperation, an easy one for a human to make. Single letter typos in the middle of a word are easy to make and easy to overlook.
As nearly all word processing programs have come to include spellcheckers, writers have become accustomed to them. We look for the red squiggly lines underneath words indicating a typo and, if we don’t see them, we assume we’ve got things right. We do so because this is usually a correct assumption: spelling errors, or typos that result in them, are the most common type of error that writers make.
In a sense, though, the presence of spellcheckers has made one class of misspellings — those that result in correctly spelled but incorrect words — more likely than before. Because spellcheckers make most errors easier to catch, we spend less time proofreading and, in the process, make a smaller class of errors — in this case, swapped words — more likely than they used to be. The result is errors like “beef panties.”
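A minimal sketch, with a tiny made-up word list, shows why no spellchecker ever flags “beef panties”: the check only asks whether each word appears in the dictionary, not whether it is the word the writer meant.

```python
# A toy spellchecker: flag only the words missing from its dictionary.
DICTIONARY = {"frozen", "ground", "beef", "patties", "panties", "recall"}

def flag_misspellings(text: str) -> list[str]:
    return [word for word in text.lower().split() if word not in DICTIONARY]

print(flag_misspellings("frozen ground beef panties"))  # [] -- nothing to underline
```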
Although we’re not always aware of them, the affordances of technology change the way we work. We proofread differently when we have a spellchecker to aid us. In a way, the presence of a successful error-catching technology makes certain types of errors more likely.
One could make an analogy with the arguments made against some security systems. There’s a strong argument in the security community that creation of a bad security system can actually make people less safe. If one creates a new high-tech electronic passport validator, border agents might stop checking the pictures as closely or asking tough questions of the person in front of them. If the system is easy to game, it can end up making the border less safe.
Error-checking systems eliminate many errors. In doing so, they can create affordances that make others more likely! If the error-checking system is good enough, we might stop looking for errors as closely as we did before, and more of the errors it cannot catch will slip through.
There was an interesting response from a number of people who pointed out that the images appeared to have been manipulated. Eventually, the image ended up on the blog Photoshop Disasters (PsD), which released this marked-up image highlighting the fact that certain parts of the image seemed similar to each other. Identical, in fact: they had been cut and pasted.

The blog joked that the photos revealed a “shocking gap in that nation’s ability to use the clone tool.”
The clone tool — sometimes called the “rubber stamp tool” — is a feature available in a number of photo-manipulation programs including Adobe Photoshop, GIMP and Corel Photopaint. The tool lets users easily replace part of a picture with information from another part. The Wikipedia article on the tool offers a good visual example and this description:
The applications of the cloning tool are almost unlimited. The most common usage, in professional editing, is to remove blemishes and uneven skin tones. With a click of a button you can remove a pimple, mole, or a scar. It is also used to remove other unwanted elements, such as telephone wires, an unwanted bird in the sky, and a variety of other things.
Of course, the clone tool can also be used to add things in — like the clouds of dust and smoke at the bottom of the images of the Iranian test. Used well, the clone tool can be invisible and leave little or no discernible mark. This invisible manipulation can be harmless or, as in the case of the Iranian missiles, it can be used for deception.
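At its core, the tool is doing something very simple. Here is a minimal Python sketch of that core operation (it is not how Photoshop or GIMP actually implement the feature): copy a block of pixels from one part of an image over another.

```python
import numpy as np

# Build a random "image" and clone one block of pixels over another.
image = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)

def clone(img, src_y, src_x, dst_y, dst_x, height, width):
    img[dst_y:dst_y + height, dst_x:dst_x + width] = \
        img[src_y:src_y + height, src_x:src_x + width]

clone(image, src_y=10, src_x=10, dst_y=60, dst_x=60, height=20, width=20)

# The two regions are now bit-for-bit identical, which is exactly the kind of
# "too perfect" repetition that gives careless cloning away.
print(np.array_equal(image[10:30, 10:30], image[60:80, 60:80]))  # True
```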
The clone tool makes perfect copies. Too perfect. And these impossibly perfect reproductions can become revealing errors. By introducing unnaturally identical regions within an image, the clone tool introduces errors. In doing so, it can reveal both the person manipulating the image and their tools. Through their careless use of the tool, the Iranian government’s deception, and its methods, were revealed to the world.
But the Iranian government is hardly the only one caught manipulating images through careless use of the clone tool. Here’s an image, annotated by PsD again, of the 20th Century Fox Television logo with “evident clone tool abuse!”

And here’s an image from Brazilian Playboy where an editor using a clone tool has become a little overzealous in their removal of blemishes.

Now, we’re probably not shocked to find out that Playboy deceptively manipulates images of their models — although the resulting disregard for anatomy drives the extreme artificiality of their productions home in a rather stark way.
In aggregate, though, these images (a tiny sample of what I could find with a quick look) help speak to the extent of image manipulation in photographs that, by default, most of us tend to assume are unadulterated. Looking for the clone tool, and for other errors introduced by the process of image manipulation, we can get a hint of just how mediated the images through which we view the world are — and we have reason to be shocked.
Here’s a final example from Google Maps that shows the clear marks of the clone tool in a patch of trees — obviously cloned to the trained eye — on what is supposed to be an unadulterated satellite image of land in the Netherlands.

Apparently, the surrounding area is full of similar artifacts. Someone has edited out and papered over much of the area — by hand — with the clone tool because someone with power is trying to hide something visible on that satellite photograph. Perhaps they have a good reason for doing so. Military bases, for example, are often hidden in this way to avoid enemy or terrorist surveillance. But it’s only through the error revealed by sloppy use of the clone tool that we’re in any position to question the validity of these reasons or realize the images have been edited at all.
On September 9th, a glitch in the Google News crawler caused Google News to redisplay an old article from 2002 announcing that UAL — the company that owns and runs United Airlines — was filing for bankruptcy. The re-publication of this article as news started off a chain reaction that caused UAL’s stock price to plummet from more than $11 per share to nearly $3 in 13 minutes! After trading was halted and the company was allowed to make a statement, the stock mostly (but not completely) recovered by the end of the day. During that period, $1.14 billion of shareholder wealth evaporated.
Initially, officials suspected stock manipulation, but the fiasco has since been traced back to a set of automated systems and “honest” technical mistakes. There was no single error behind it but rather several broken systems working in concert.
The mess started when the Chicago Tribune, which had published an article about UAL’s bankruptcy back in 2002, started getting increased traffic to that old article for reasons that are not clear. As a result, the old article became listed as a “popular news story” on its website. Seeing the story on the popular stories list, a program running on computers at Google downloaded the article. For reasons Google has tried to explain, their program (or “crawler”) was not able to correctly identify the article as coming from 2002 and, instead, classified it as a new story and listed it on Google News accordingly. Elsewhere, the Tribune claimed that it had already notified Google of this issue. Google denies this.
What happened next is somewhat complicated but was carefully detailed by the Sun-Times. It seems that a market research firm called Income Securities Advisers, Inc. was monitoring Google News, saw the story (or, in all probability, just the headline “UAL files for bankruptcy”) and filed an alert which was then picked up by the financial news company Bloomberg. At any point, clicking on and reading the article would have made it clear that the story was old. Of course, enough people didn’t click and check before starting a sell-off that snowballed, evaporating UAL’s market capitalization before anyone realized what was actually going on. The president of the research firm, Richard Lehmann, said, “It says something about our capital markets that people make a buy-sell decision based on a headline that flashes across Bloomberg.”
Even more intriguing, there’s a Wall Street Journal report that claims the sell-off was actually kick-started by automated trading programs that troll news aggregators like Bloomberg and Google News. These programs look for key words and phrases and start selling a company’s shares when they sense “bad” news. Such programs exist and, almost certainly, would have been duped by this chain of events.
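A minimal sketch of such a program, with an invented keyword list and no real brokerage behind it, makes the weakness plain: it reacts to words alone and has no way to notice that a “new” headline is six years old.

```python
# A toy headline-reading trader: sell on scary keywords, otherwise do nothing.
BAD_NEWS = {"bankruptcy", "default", "recall", "fraud"}

def react_to_headline(headline: str, ticker: str) -> str:
    words = set(headline.lower().replace(",", " ").split())
    if words & BAD_NEWS:
        return f"SELL {ticker}"   # a real system would place an order here
    return f"HOLD {ticker}"

print(react_to_headline("UAL files for bankruptcy", "UAL"))  # SELL UAL
```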
While UAL has mostly recovered, the market and many outside of it learned quite a few valuable lessons about the technology to which they are trusting their investments and their companies. Investors understand that the computer programs they use to manage and coordinate their markets are hugely important; financial services companies spend billions of dollars building robust, error-resistant systems. Google News, on the other hand, quietly became part of this market infrastructure without Google, most investors, or companies realizing it — that’s why officials initially suspected intentional market manipulation and why Google and the Tribune were so surprised.
Several automated programs — including news-reading automated trading systems — have become very powerful market players. Most investors and the public never knew about them because they are designed to work just like humans do — only faster. When they work, they make money for the people running them because they can stay just ahead of the pack on expected market moves. These systems were revealed because they made mistakes that no human would make. In the process, they lost (if only temporarily) more than a billion dollars!
Our economy is mediated by, and in many ways rests in the hands of, technologies — many of which we won’t know about until they fail. If we’re wise, we’ll learn from errors and think hard about the way that we use technology and about the power, and threat, that invisible and unaccountable technologies might pose to our economy and beyond.
My favorite was this error from Google Calculator:

The error, which has been fixed, occurred when users searched for the phrase “eight days a week” — the name of a Beatles song, film, and sitcom.
Google Calculator is a feature of Google’s search engine that looks at search strings and, if it thinks you are asking a math question or a unit conversion, will give you the answer. You can, for example, search for 5000 times 23 or 10 furlongs per fortnight in kph or 30 miles per gallon in inverse square millimeters — Google Calculator will give you the right answers. While it would be obvious to any human that “eight days a week” is a figure of speech, Google thought it was a math problem! It happily converted 1 week to 7 days and then divided 8 by 7: roughly 1.14.
Clearly, the error reveals the absence of human judgment — but we knew that about Google’s search engine already. More intriguing is what this, combined with a series of other Google Calculator errors, might reveal about Google’s black-box software.
When Google launched its Calculator feature, it reminded me of GNU Units — a piece of free/open source software written by volunteers and distributed with an expectation that those who modify it will share with the community. After playing with Google Calculator for a little while, I tried a few “bugs” that had always bothered me in Units. In particular, I tried converting between Fahrenheit and Celsius. Units converts the size of a temperature change (a number of degrees) rather than an actual temperature reading. Because it does not take into account the fact that the two scales have different zero points, it often gives people an unexpected (and apparently incorrect) answer. Sure enough, Google Calculator had the same bug.
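The two readings of “convert Fahrenheit to Celsius” are easy to see in a few lines of Python; this is just the arithmetic, not GNU Units’ or Google’s actual code.

```python
def interval_f_to_c(delta_f):
    # Converts a *change* of delta_f degrees: what Units computes.
    return delta_f * 5 / 9

def temperature_f_to_c(temp_f):
    # Converts an actual temperature reading on the scale.
    return (temp_f - 32) * 5 / 9

print(interval_f_to_c(100))     # 55.55...  the "unexpected" answer
print(temperature_f_to_c(100))  # 37.77...  what most people expect
```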
Now it’s possible that Google implemented their system similarly and ran into similar bugs. But it’s also quite likely that Google just took GNU Units and, without telling anyone, plugged it into their system. Google might look bad for using Units without credit and without assisting the community, but how would anyone ever find out? Google’s Calculator software ran on Google’s private servers!
If Google had released a perfect calculator, nobody would have had any reason to suspect that Google might have borrowed from Units. One expects unit conversions from different pieces of software to be similar — even identical — when they are working. Identical bugs and idiosyncratic behaviors, however, are much less likely and much more suspicious.
Given the phrase “eight days a week”, Units says “1.1428571.”
The Daily WTF published this photograph which was sent in by Thomas, one of their readers. The photograph came attached to this summons which arrived in the mail and explained that Thomas had been caught traveling 72 kilometers per hour in a 60 KPH speed zone. The photograph above was attached as evidence of his crime. He was asked to pay a fine or show up in court to contest it.
Thomas should never have been fined or threatened. It’s obvious from the picture that his car is being towed. Somebody was going 72 KPH, but it was the tow-truck driver, not Thomas! Anybody who looked at the image could see this.
In fact, Thomas was the first person to see the image. The photograph was taken by a speed camera: a radar gun measured a vehicle moving in excess of the speed limit and triggered a camera, which took a photograph. A computer subsequently analyzed the image to read the license plate number and look up the driver in a vehicle registration database. The system then printed a fine notice and a summons and mailed them to the vehicle’s owner. The Daily WTF editor points out that proponents of these automated systems often guarantee human oversight in their implementation. This error reveals that the human oversight in the operation of this particular speed camera is either very limited or nonexistent.
Of course, Thomas will be able to avoid paying the fine — the evidence that exonerates him is literally printed on his court summons. But it will take work and time. The completely automated nature of this system, revealed by this error, has deep implications for the way that justice is carried out. The system is one where people are watched, accused, fined, and processed without any direct human oversight. That has some benefits — e.g., computers are unlikely to let people of a certain race, gender, or background off more easily than others.
But in addition to creating the possibility of new errors, the move from a human to a non-human process has important economic, political, and social consequences. Police departments can give more tickets with cameras — and generate more revenue — than they could ever do with officers in squad cars. But no camera will excuse a man speeding to the hospital with a wife in labor or a hurt child in the passenger seat. As work-to-rule or “rule-book slowdown” protests — types of labor action where workers cripple production by following the rules to the letter — show, many rules are only productive for society because they are selectively enforced. The complex calculus that goes into deciding when not to apply the rules, second nature to humans, is still impossibly out of reach for most computerized expert systems. This is an increasingly important fact we are reminded of by errors like the one described here.
Yesterday, I saw this article from Network World that described an error that is even more egregious and that was, apparently, predicted by the article’s author ahead of time.
In this case, Google listed a parody by McNamara as the top story about the recent lawsuit filed by the MBTA (the Boston mass transit authority) against security researchers at MIT. In the past, McNamara has pointed to other examples of Google News being duped by obvious spoofs. This long list of possible examples includes a story about Congress enlisting the help of YouTube to grill the Attorney General (it was listed as the top story on Google News) and this story (which I dug up) about Paris Hilton’s genitals being declared a wonder of the modern world!
McNamara has devoted an extraordinary amount of time to finding and discussing other shortcomings of Google News. For example, he’s talked about the fact that Google News has trouble filtering out highly local takes on stories of broader interest when presenting them to its general audience, about its sluggishness and inability to react to changing news circumstances, and about the sometimes hilarious and wildly inappropriate mismatches of images on the Google News website. Here’s one example I dug up. Imagine what it looked like before it was censored!

As McNamara points out repeatedly, all of these errors are only possible because Google News employs no human editors. Computers remain pretty horrible at sorting images for relevance to news stories and discerning over-the-top parody from the real thing — two tasks that most humans don’t have too much trouble with. The more generally inappropriate errors wouldn’t have made it past a human for multiple reasons!
As I mentioned in my original Revealing Errors article, the decision to use a human editor is an important one with profound effects on the way that users are exposed to news and, as a result, experience and understand one important part of the world around them. Google News’ frequent mistakes give us repeated opportunities to consider the way that our choice of technology — and of editors — frames this understanding.
In the not-so-recent past, a stadium like the Bird’s Nest would have been lit up using a large number of lights with gels to add color and texture. As the need for computer control grew, expensive, specialized, computer-controlled theatrical lighting equipment was introduced to help automate the use of these systems.
Of course, another way to maximize flexibility, coordination, and programmability at a low cost is to skip the lighting control systems altogether and just hook up a computer to a powerful general-purpose video projector. Then, if you want a green light projected, all you have to do is change the background of the screen being projected to green. If you want a blue-green gradient, it’s just as easy, and there are no gels to change. Apparently, that’s exactly what the Bird’s Nest’s designers did.
Unfortunately, with that added flexibility comes the opportunity for new errors. If the computer controlling your lighting is running Windows, for example, your lighting system will be susceptible to all of Windows’ usual modes of failure. Apparently, using a video projector for this type of lighting is an increasingly common trick. If it had worked correctly for the Olympic organizers, we might never have known!
I’m happy with the result: a couple thousand people showed up for the talk despite the fact that it was at 8:45 AM after the biggest “party night” of the conference!
For those that missed it for whatever reason, you can watch a video recording that O’Reilly made that I’ve embedded below.
A larger version of the Flash video, as well as a QuickTime version, is over on blip.tv, and I’ve created an Ogg Theora version for all my freedom-loving readers.
It’s pretty easy to imagine the chain of events that led to this revealing error. The sign describes a restaurant (the Chinese text, 餐厅, means “dining hall”). In the process of making the sign, the producers tried to translate the Chinese text into English with a machine translation system. The translation software did not work and produced the error message, “Translation Server Error.” Unfortunately, because the software’s user didn’t know English, they thought that the error message was the translation, and the error text went onto the sign.
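A small Python sketch, with a hypothetical translate() helper standing in for whatever system the sign-makers used, shows how easily this happens: when the service fails, the caller still gets a string back, and a user who cannot read English has no way to tell an error message from a translation.

```python
def translate(chinese_text: str) -> str:
    server_reachable = False                  # simulate the outage behind this sign
    if not server_reachable:
        return "Translation Server Error"     # the error comes back like ordinary output
    return "Dining hall"                      # what a working service might have said

sign_text = translate("餐厅")
print(sign_text)   # "Translation Server Error" -- and onto the sign it goes
```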
This class of error is extremely widespread. When users employ machine translations systems, it’s because they want to communicate to people with whom they do not have a language in common. What that means is that the users of these systems are often in no position to understand the output (or input, depending on which way the translation is going) of such systems and have to trust the translation technology and its designers to get things right.
Here’s another one of my favorite examples that shows a Chinese menu selling stir-fried Wikipedia.

It’s not entirely clear how this error came about, but it seems likely that someone did a search for the Chinese word for a type of edible fungus and its translation into English. The most relevant and accurate page very well might have been an article on the fungus on Wikipedia. Unfamiliar with Wikipedia, the user then confused the name of the article with the name of the website. There have been several distinct sightings of “wikipedia” on Chinese menus.
There are a few errors revealed in these examples. Of course, there are errors in the use of language and in the broken translation server itself. Machine translation tools are powerful intermediaries that determine (often with very little accountability) the content of one’s messages. The authors of the translation software might design their tool to avoid certain terminology and word choices over others or to silently censor certain messages. When the software is generating reasonable-sounding translations, the authors and readers of machine-translated texts are usually unaware of the ways in which messages are being changed. By revealing the presence of a translation system or process, this power is hinted at.
Of course, one might be able to recognize a machine translation system simply by the roughness and nature of a translation. In this particular case, the server itself came explicitly into view; it was mentioned by name! In that sense, the most serious failure was not that the translation server broke or that Wikipedia was used incorrectly, but rather that each system failed to communicate the basic fact that there was an error in the first place.
The error occurred on One News Now, a news website run by the conservative Christian American Family Association. The site provides Christian conservative news and commentary. One of the things they do, apparently, is offer a version of the standard Associated Press news feed. Rather than just republishing it, they run a computer program that cleans up the language so it more accurately reflects their values and choice of terminology.
The error is a pretty straightforward variant of the clbuttic effect — a runaway filter trying to clean up text by replacing offensive terms with theoretically more appropriate ones. Among other substitutions, AFA/ONN replaced the term “gay” with “homosexual.” In this case, they changed the name of champion sprinter and U.S. Olympic hopeful Tyson Gay to “Tyson Homosexual.” In fact, they did it quite a few times, as you can see in the screenshot below.

Now, from a technical perspective, the technology this error reveals is identical to the one behind the clbuttic mistake. What’s different, however, are the values that the error reveals.
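A minimal sketch of such a filter, with a substitution table that is only my guess at the rule involved, shows how the sprinter got renamed: a blanket find-and-replace matches “gay” wherever it appears, including inside a proper name.

```python
import re

# A toy "cleanup" filter applied to wire stories before republication.
SUBSTITUTIONS = {"gay": "homosexual"}

def clean_story(text: str) -> str:
    for term, preferred in SUBSTITUTIONS.items():
        text = re.sub(term, preferred, text, flags=re.IGNORECASE)
    return text

print(clean_story("Tyson Gay easily won his semifinal"))
# -> "Tyson homosexual easily won his semifinal"
```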
AFA doesn’t advertise the fact that it changes words in its AP stories — it just does it. Most of its readers probably never know the difference or realize that the messages they read, and the terminology used in them, are being intentionally manipulated. AFA prefers the term “homosexual,” which sounds clinical, to “gay,” which sounds much less serious. Their substitution, and the error it created, reflects a set of values that AFA and ONN hold about the terminology around homosexuality.
It’s possible that AFA/ONN readers already know about AFA’s values. This error provides an important reminder and shows, quite clearly, the importance that AFA gives to terminology. It reveals their values and some of the actions they are willing to take to protect them.

