GeoData Explorations: Google's Ever-Expanding Geo Investment
by Brady Forrest
Google has been investing lots of money in geodata acquisition. Some of the money is being spent externally: they've inked an exclusive satellite imagery deal with GeoEye (Radar post) and a data sharing deal with Tele Atlas (Radar post). And some is being spent internally on Mapmaker, Street View and the web. Over the past week Google has been sharing visualizations of their internally gathered geodata. Here's a round-up of them.
The image above was released on December 9th. It shows how much of the US is available via Street View. According to the post, Street View imagery increased 22-fold around the world in 2008.
The dark image above was released on December 11th. It highlights the parts of the world that users are mapping with Google's Mapmaker (Radar post). Mapmaker is now live in 164 countries. According to the map, it has gained the most traction in Africa and the Indian subcontinent. The Google Mapmaker team has released time-lapse videos of cities being built in Mapmaker on the Mapmaker YouTube channel. I've embedded one after the jump.
This final image shows all the points described by GeoRSS and KML around the world. It was shown at Where 2.0 2007 by Michael Jones (video). Unsurprisingly, this image and the Mapmaker image show opposite concentrations of data density.
In some more GeoData Explorations posts this week I will look at OSM vs Google and some surprising trends in KML.
The State of Transit Routing
by Jim Stogdill
My brother called me a week ago and during the course of our conversation mentioned that he made the trek to the Miami Auto Show. He was complaining that he really wanted to take Tri-Rail (the commuter rail that runs along Florida's South East coast) but it was just too hard to figure out the rest of the trip once he got off the train. "One web site for train schedules, another for buses, and another for a city map to tie it all together. It was just too much trouble to figure out, so I drove. I just want to go online and get directions just like I do for driving, but that tells me which train, which bus, etc."
Coincidentally, later in the day I downloaded the iPhone 2.2 upgrade with the new walking and public transit directions. So far, at least where I live, it's useless. The little bus icon just sits there grayed out, taunting me. I guess because SEPTA (our local transit authority for bus and regional rail) isn't giving data to Google?
My brother hadn't heard of Google Transit, but it turns out to have some coverage in Miami. Its coverage at this point seems to be transit-authority-centric, without great support for mixed-mode trips or trips that cross transit system boundaries. I am curious, though: is it being used? Let me know in the comments if you are using it to good effect.
Anyway, my brother's call on the same day as the iPhone update piqued my interest in the current state of the art for mixed-mode transit routing. After some mostly fruitless web searches I reached out to Andrew Turner. I knew he'd know what was going on. This is what he had to say:
Routing is definitely one of the emergent areas of technology in next-generation applications. So far, we've done a great job getting digital maps onto the web and mobile locative devices, and getting users comfortable with them. One problem for a while has been the lack of data. You can have a great algorithm or concept, but without data it's useless. Gathering this data has been prohibitively expensive - companies like NAVTEQ drive many of the roads they map for verification and additional data. Therefore, if you wanted to buy road data from one of the vendors, you had to have a large sum of money in the bank and know how you were going to monetize it. This stifled experimentation and the creation of niche applications.
Now that the data is becoming widely, and often freely, available, innovation is happening at an increased pace.
For one example, consider typical road navigation. The global OpenStreetMap project has always had topology (road connectivity), but the community is now adding attribute data to ways, such as number of lanes, stop lights, turn restrictions, speeds, and directionality. Anyone can download this data to use with a variety of tools such as pgRouting. As a result, people are rethinking standard routing mechanisms that assume travel from A to B via the fastest, or shortest, route. What if a user wants to take the "greenest" route, as determined by lowest total fuel consumption, or the most scenic route, based on community feedback?
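To make that concrete, here is a minimal sketch of what a custom-cost query might look like against OSM data loaded into PostGIS/pgRouting. The table and column names (ways, length_m, fuel_factor) and the node ids are illustrative assumptions rather than a particular schema, and pgRouting releases differ in function names:

```python
# A minimal sketch of a "greenest route" query over OSM data in pgRouting.
# Table/column names (ways, length_m, fuel_factor) and node ids are assumptions
# for illustration; pgr_dijkstra is the pgRouting shortest-path call.
import psycopg2

EDGES_SQL = """
    SELECT gid AS id,
           source,
           target,
           length_m * fuel_factor AS cost  -- minimize estimated fuel use, not distance
    FROM ways
"""

def greenest_route(conn, start_node, end_node):
    """Return (edge_id, cost) pairs for the lowest-fuel route between two nodes."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT edge, cost FROM pgr_dijkstra(%s, %s, %s, directed := true)",
            (EDGES_SQL, start_node, end_node),
        )
        return cur.fetchall()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=osm")   # assumes OSM data already imported
    for edge_id, cost in greenest_route(conn, 1001, 2002):
        print(edge_id, cost)
```

Swapping the cost expression for travel time, scenic ratings, or load limits is what turns one router into many.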
An area that has strongly utilized this idea is disaster response. Agencies and organizations deploy to areas with little on-the-ground data, or with data that is now obsolete due to the disaster they're responding to. Destroyed bridges, flooded roads, and new, temporary infrastructure are just some of the changes that are lost on typical navigation systems. The capability for responders to correct the data and instantly get new routes is therefore vital. And these routes may need to be based on attributes different from those used by typical engines - it's not about the fastest route, but about which roads will handle a 5-ton water truck.
This scheme was deployed in the recent hurricane response in Haiti in conjunction with the UNJLC, CartOng, OpenStreetMap and OpenRouteService.
Beyond simple automotive routing, we can now incorporate multi-modal transit. With 50% of the world's population now living in urban areas, the assumption that everyone is in a car is no longer valid. Instead, people use a mixture of cars, buses, subways, walking, and bicycling. This data is also being added to OpenStreetMap, as well as to other projects such as Bikely or EveryTrail. GraphServer is one routing engine that will incorporate these various modes and provide routes.
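Here is a toy illustration of the multi-modal idea, using a generic graph library rather than GraphServer's actual API; the nodes, modes, and travel times below are made up:

```python
# A toy multi-modal routing sketch: edges carry a mode plus a time cost, and one
# shortest-path query mixes walking, bus, and rail legs. Node names, times, and
# the use of networkx are illustrative assumptions, not GraphServer's API.
import networkx as nx

g = nx.DiGraph()
legs = [
    ("home", "bus_stop", "walk", 6),
    ("bus_stop", "rail_station", "bus", 12),
    ("rail_station", "downtown", "rail", 25),
    ("home", "downtown", "drive", 50),
]
for u, v, mode, minutes in legs:
    g.add_edge(u, v, mode=mode, minutes=minutes)

path = nx.shortest_path(g, "home", "downtown", weight="minutes")
modes = [g[u][v]["mode"] for u, v in zip(path, path[1:])]
print(path)   # ['home', 'bus_stop', 'rail_station', 'downtown']
print(modes)  # ['walk', 'bus', 'rail'] -- beats the 50-minute drive
```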
And we're interfacing with all these engines using a variety of devices: laptop, PND (Personal Navigation Device), GPS units, mobile phones, and waymarking signs. PointAbout recently won an award in the Apps For Democracy for their DC Location Aware Realtime Alerts mobile application that displays the route to the nearest arriving metro.
What's also interesting is the potential of these routing tools beyond specific individual routes. Taken in aggregate, the routing distances form a new topography of the space. Given a point in the city, how far can I travel in 20 minutes? In 40 minutes? For less than $1.75? This type of map is known as an isochrone. Tom Carden and MySociety developed London Travel Time Maps that let users highlight the spots in London reachable within a given range of house prices and travel times.
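Computing a rough isochrone is just a bounded shortest-path search. A sketch on the same kind of toy graph as above, with an illustrative budget and weights:

```python
# A minimal isochrone sketch: every node reachable from an origin within a
# 20-minute budget, via Dijkstra with a cutoff. Reuses the toy graph idea above;
# the weights and budget are illustrative.
import networkx as nx

def isochrone(graph, origin, minutes_budget):
    """Return {node: travel_time_in_minutes} for nodes within the time budget."""
    return nx.single_source_dijkstra_path_length(
        graph, origin, cutoff=minutes_budget, weight="minutes"
    )

# reachable = isochrone(g, "home", 20)
# Plotting the outer boundary of `reachable` on a map gives the isochrone contour.
```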
Despite these apparent benefits, there is a large hurdle. As with road data, there has been a lack of openly available transit data to power applications and services. Providers like NAVTEQ and open projects like OpenStreetMap are possible because public roads are observable and measurable by anyone. By contrast, the many, varied local transit agencies own and protect their routing data and are reluctant to share it. Google Transit has made great strides in working with transit authorities to expose their information in the Google Transit Feed Specification - at least to Google. The specification does not require that the data be publicly shared, and in many cases it isn't.
However, not even the allure of the widely admired Google Transit can induce every transit authority to share its prized data. The Director of Customer Service of the Washington Metropolitan Area Transit Authority (WMATA) plainly states that working with Google is "not in our best interest from a business perspective."
Hopefully, this situation will change, first through forceful FOIA requests and later through cooperation. One step in this direction has been TransitCamps. And Portland's TriMet is a shining example, with a Developer Resources page detailing its data feeds and APIs.
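When an agency does publish a feed, the GTFS format itself is approachable: a zip of plain CSV files. A small sketch of reading the required columns of stops.txt (the feed path is an assumption; the column names come from the published spec):

```python
# A small sketch of reading stops.txt from a published GTFS feed. The path is an
# assumption; stop_id, stop_name, stop_lat, and stop_lon are required fields in
# the spec.
import csv

def load_stops(path="gtfs/stops.txt"):
    with open(path, newline="", encoding="utf-8-sig") as f:
        return [
            {
                "id": row["stop_id"],
                "name": row["stop_name"],
                "lat": float(row["stop_lat"]),
                "lon": float(row["stop_lon"]),
            }
            for row in csv.DictReader(f)
        ]

stops = load_stops()
print(f"{len(stops)} stops loaded; first: {stops[0]['name']}")
```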
These experiments are just the beginning of what is being pushed in the space. Routing is one of those features that users may not realize they need until they have it, and then they'll find it indispensable. The ability for a person to customize their valuation of distance, to assist in making complex decisions and searches, is very powerful.
For more projects and tools, check out the OpenStreetMap routing page, Ideas in Transit, and the OGC's OpenLS standards.
tags: emerging tech, geo
My Netbook Took Me Back To Windows
by Brady Forrest
When I left Microsoft I switched to a MacBook Pro and didn't look back. I never thought that I would use a Windows machine regularly again. Then I got an Asus Eee PC 1000H (10.2-inch screen, 1.6 GHz Intel Atom N270 processor, upgraded to 2GB RAM; I judge it to be on the larger end of the netbook range). For three weeks it was my sole computer. It runs XP, and that is just fine for what I expect from a netbook.
How is the netbook different? It is a secondary machine that knows its place. It is not as powerful as my MacBook, nor is the workspace as big; I am definitely less efficient on it. I got it for its size and price. The 10-inch screen (1024x600 resolution) is fine for most work. The weight (3.2 lbs) is a relief for a traveller. And, ringing in at under $350 (with discounts and Live Cashback), it is an affordable luxury. In fact, the price makes it almost disposable. Not disposable in a throw-away fashion, but in the sense that if it gets stolen, lost or ruined while I am on the road, it will not be the end of the world or a costly item to replace. It's a machine that I can throw in my backpack when I go out for the day and not worry about too much.
During my three weeks of travel I used the machine primarily for browsing the web, answering email, managing photos and watching video. It has a tiny screen, so I sought software that left as much room as possible for the workspace. Chrome, for example, takes up very little screen space with toolbars. I switched from the clunky Zimbra Desktop client to Windows Live Mail (a really well-designed mail client if you can overlook the lack of smart folders and a couple of quirks).
My other major criterion for software was the ability to sync off the machine. Other than when managing media, I tried never to save directly to the filesystem, only to the web. The netbook will never be my main machine and I do not want to "forget" a file on it. I relied on Evernote to record my notes and save them to the cloud.
To make the machine more reminiscent of my Mac I installed Launchy. It's an extendable application launcher like Quicksilver. With Launchy I never use the Start Menu.
This is not to say that I didn't find the computer limiting. I was unable to install Valve's Portal (most likely due to the integrated graphics card) and video occasionally stuttered on the machine. I try to keep the number of open apps to a minimum to prevent the machine from slowing down.
Instead of XP I could run a Linux variant or Mac OS X. I do dual-boot with Ubuntu-eee, but it is not my primary OS. As you can see in the screenshot, it is very icon-heavy and does a good job of being user-friendly. However, the OS lacks the client software that I need (no Chrome or Evernote client). Soon there will be another Ubuntu designed specifically for netbooks. According to TechCrunch, Tariq Krim is developing Jolicloud, but without more information I am not certain how it differs from Ubuntu-eee - based on screenshots they look very similar.
I ultimately chose XP because it stays out of the way, it has the software I want and it lets me get the job done. I am not sure that it will keep me, though. Chrome will be coming out on Linux. Evernote (and other clients) could opt to develop across all platforms. New netbook-oriented OSes are going to be designed with a netbook's characteristics in mind.
(It's being reported that Dell will start penalizing users for selecting XP over Vista to the tune of an extra $150. It's interesting to note that Dell does not offer Vista as an option for the Dell Mini, its netbook offering.)
(Ubuntu-eee screenshot courtesy of ubuntu-eee.com)
Update: In the comments Corey Burger provided some interesting information on Ubuntu-eee: The icon-heavy launcher is built by Canonical and is called the netbook-remix-launcher or ubuntu-mobile-edition launcher, depending. Ubuntu-eee is basically just that plus a few tweaks. Coming with Ubuntu 9.04 will be official images/isos for all sorts of netbooks.
Register's Googlewashing Story Overblown
by Tim O'Reilly
I'm disappointed by the pile-on of people rising to Andrew Orlowski's classic bit of yellow journalism (or trolling, as it's more often referred to today), Google Cranks Up the Consensus Engine. If so many other people weren't taking it seriously, I'd just ignore it. (I just picked this story up via Jim Warren's alarmed forward to Dave Farber's IP list.)
Orlowski breathlessly "reports": "Google this week admitted that its staff will pick and choose what appears in its search results. It's a historic statement - and nobody has yet grasped its significance."
Orlowski has divined this fact based on the following "evidence," a report by Techcrunch's Michael Arrington on comments made by Marissa Mayer at the Le Web Conference in Paris:
Mayer also talked about Google’s use of user data created by actions on Wiki search to improve search results on Google in general. For now that data is not being used to change overall search results, she said. But in the future it’s likely Google will use the data to at least make obvious changes. An example is if “thousands of people” were to knock a search result off a search page, they’d be likely to make a change.
While I agree that, if true, Google's manipulation of search results would be a serious problem, I don't see any evidence in this comment of a change in Google's approach to search. I fail to see how tuning Google's algorithms based on the input of thousands of people about which search results they prefer is different from Google's original algorithms like PageRank, in which Google weights links from sites differently based on a calculated value that reflects -- guess what -- the opinions of the thousands of people linking to each of those sites in turn.
The idea that Google's algorithms are somehow magically neutral to human values misses their point entirely. What distinguished Google from its peers in 1998 was precisely that it exploited an additional layer of implicit human values as expressed by link behavior, rather than relying on purely mechanistic analysis of the text contained on pages.
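For readers who haven't looked at it since 1998, the core of the PageRank idea fits in a few lines: each page's score is handed out along its outbound links, so a link acts as a weighted vote. The four-page web and damping factor below are a toy illustration, not Google's production algorithm:

```python
# Toy PageRank power iteration: links act as votes whose weight depends on the
# voter's own score. The four-page web and the 0.85 damping factor are
# illustrative; real PageRank adds many refinements on top of this loop.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

print(pagerank(links))  # "c" scores highest: it collects the most link "votes"
```

Feeding wiki-style thumbs-up and thumbs-down signals into the ranking is just one more source of aggregated human judgment layered on the same foundation.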
Google is always tuning their algorithms to produce what they consider to be better results. What makes a better result? More people click on it.
There's a feedback loop here that has always guided Google. Google's algorithms have never been purely mechanistic. They are an attempt to capture the flow of human meaning that is expressed via choices like linking, clicking on search results, and perhaps in the future, gasp, whether people using the wikified version of the search engine de-value certain links.
This is not to say that Google's search quality team doesn't make human interventions from time to time. In fact, search for O'Reilly (my name) and you'll see one of them: an unusual split page, with the organic search results dominated by yours truly and my namesake Bill O'Reilly, and the second half of the page given over to Fortune 500 company O'Reilly Auto Parts.
Why? Because Google's algorithms pushed O'Reilly Auto Parts off the first page in favor of lots more Tim O'Reilly and Bill O'Reilly links, and Google judged, based on search behavior, that folks looking for O'Reilly Auto Parts were going away frustrated. Google uses direct human intervention when it believes that there is no easy way to accomplish the same goal by tuning the algorithms to solve the general case of providing the best results.
(I should note that my only inside knowledge of this subject comes from a few conversations with Peter Norvig, plus a few attempts to persuade Google to give more prominence to book search results, which failed due to the resistance of the search quality team to mucking with the algorithms in ways that they don't consider justified by the goal of providing greater search satisfaction.)
Even if Google were to become manipulative for their own benefit in the way Orlowski implies, I don't think we have to worry. They'd soon start losing share to someone who gives better results.
P.S. Speaking of the dark underbelly of editorial bias, consider this: Orlowski doesn't even bother to link to his source, the Techcrunch article. There's only one external link in his piece, and it's done in such a way as to minimize the search engine value of the link (i.e. with no key search terms in the anchor text.) Orlowski either doesn't understand how search engines work, or he understands them all too well, and is trying not to lead anyone away from his own site. A good lesson in how human judgment can be applied to search results: consider the source.
tags: google, googlewashing, register
Michael Pollan on Food, Energy, Climate, and Health
by Sara Winge
In his latest column, Nicholas Kristof encourages President-Elect Obama to heed Michael Pollan's call for a radically new food policy. Pollan makes a convincing case that our current food system is a "shadow problem." If we're serious about working on energy independence, climate change, and health care, we have to change how we're feeding ourselves.
During his interview with Pollan at the Web 2.0 Summit last month, John Battelle boiled it down to "eat sunshine." Pollan challenges the audience to make a difference in the food system. Watch the video and ask yourself: can tech innovators and entrepreneurs create technology to make the food system more transparent and carbon-neutral, and figure out how to make money creating solar food production systems?
O'Reilly AlphaTech Ventures Invests in Amee
by Tim O'Reilly
I'm pleased to announce that on Wednesday, O'Reilly AlphaTech Ventures, our VC affiliate, closed an investment in UK-based Amee, which bills itself as "the world's energy meter." Here's their description of what they do:
AMEE’s aim is to map, measure and track all the energy data on Earth. This includes aggregating every emission factor and methodology related to CO2 and Energy Assessments (individuals, businesses, buildings, products, supply chains, countries, etc.), and all the consumption data (fuel, water, waste, quantitative and qualitative factors). It is a web service (API) that combines measurement, calculation, profiling and transactional systems. Its algorithmic engine applies conversion factors from energy into CO2 emissions, and represents data from 150 countries. AMEE aids the development of businesses and other initiatives - by providing common benchmarks for measurement, tracking, conversion, collaboration and reporting.
If you've been following my talks in which I urge software developers and entrepreneurs to "work on stuff that matters," you know that I consider getting a handle on carbon accounting to be the first step in putting a stop to global warming. (If you're a warming skeptic, consider global warming a modern example of Pascal's wager: even if we're wrong, and global warming is not human-caused, the steps we'll take to address it are still worthwhile. We get off foreign oil, improve our energy security, build new industries, improve the environment.)
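Under the hood, the conversion factors AMEE describes boil down to multiplying consumption data by emission factors. A generic sketch of the arithmetic (the category names and factor values are placeholders, not AMEE's actual data or API):

```python
# A generic sketch of emission-factor accounting: CO2 = activity amount x factor.
# Category names and factor values are placeholders, not AMEE's data or API.
EMISSION_FACTORS_KG_CO2 = {
    "electricity_kwh": 0.5,   # kg CO2 per kWh (illustrative)
    "petrol_litre": 2.3,      # kg CO2 per litre (illustrative)
    "natural_gas_kwh": 0.2,   # kg CO2 per kWh (illustrative)
}

def footprint(consumption):
    """consumption: {category: amount}; returns total kg of CO2."""
    return sum(
        amount * EMISSION_FACTORS_KG_CO2[category]
        for category, amount in consumption.items()
    )

print(footprint({"electricity_kwh": 300, "petrol_litre": 40}))  # 150 + 92 = 242.0
```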
Even apart from the contribution to a critical world issue, Amee is interesting because it shows that the future of web services will involve a much broader range of data services than most people imagine. I've long argued that the subsystems of the emerging internet operating system are data subsystems. Some of those, like location and identity, are obvious, and thus hotly contested. Others, like carbon data, are sorely needed, and not yet built out. There's huge opportunity in finding and populating key databases, and then turning them into ubiquitous web services.
By the way, if you use dopplr, you've already seen Amee at work: it provides the data for dopplr's carbon calculator tab.
Union Square Ventures is also an investor in this round. Partner Albert Wenger gives his take on the investment on their blog.
tags: amee, carbon, energy, global warming, investments, oatv
Clever Emoticarolers App
by Dale Dougherty
Open the door and smiley-face carolers sing a song that you can customize and send to others. That's the emoticarolers concept, worked up by Jason Striegel, our Hackszine editor, who leads the development side of things for Colle+McVoy in Minneapolis. The team created this clever holiday "text-to-sing" promotion for Yahoo Messenger at emoticarolers.com. A custom Make carol is here. (Reminds me of the Smileys book by David Sanderson that I developed many years ago.)
I asked Jason how they built the app and here's what he said:
The front end interface is written in Flash/AS3. It talks to a PHP backend, which uses the Festival text-to-speech software and some other Unix audio tools to render each of the four voices. Those all get compiled back into a single mp3 and sent back to Flash, along with an XML file that tells the app how to animate the emoticons and custom lyrics. Aside from some of the animated bits, this could work as-is with an HTML/CSS/JS front end as well.
The process is pretty CPU intensive, so we had to use a number of load-balanced machines to handle requests. They output files on Amazon S3, all keyed by a unique id. If this becomes popular (fingers crossed), there's no database or anything that will bottleneck reads or writes, and it should just scale linearly as we add more boxes.
It's funny how the text-to-singing stuff ended up being only a small portion of the project.
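As a rough sketch of the render step Jason describes (not their actual code): run Festival's text2wave on the lyric text, encode the result to mp3, and key the output file by a content hash so no database lookup is needed. It assumes the festival and lame command-line tools are installed:

```python
# A rough sketch of the backend render step described above (not the actual
# Colle+McVoy code): Festival's text2wave renders speech, lame encodes it to
# mp3, and the output is keyed by a content hash so no database is needed.
# Assumes the festival and lame command-line tools are installed.
import hashlib
import subprocess
import tempfile
from pathlib import Path

def render_line(lyrics: str, out_dir: Path = Path("rendered")) -> Path:
    out_dir.mkdir(exist_ok=True)
    key = hashlib.sha1(lyrics.encode("utf-8")).hexdigest()  # unique id, reusable as an S3 key
    mp3_path = out_dir / f"{key}.mp3"
    if mp3_path.exists():          # already rendered: serve the cached file
        return mp3_path
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as txt:
        txt.write(lyrics)
    wav_path = out_dir / f"{key}.wav"
    subprocess.run(["text2wave", txt.name, "-o", str(wav_path)], check=True)
    subprocess.run(["lame", "--quiet", str(wav_path), str(mp3_path)], check=True)
    return mp3_path

print(render_line("We wish you a merry Christmas"))
```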
Make Holiday Carol
tags: carol, Christmas, festival, speech synthesis
Challenges for the New Genomics
by Matt Wood
New guest blogger Matt Wood heads up the Production Software team at the Wellcome Trust Sanger Institute, where he builds tools and processes to manage tens of terabytes of data per day in support of genomic research. Matt will be exploring the intersection of data, computer technology, and science on Radar.
The original Human Genome Project was completed in 2003, after a 13-year worldwide effort and a billion dollar budget. The quest to sequence all three billion letters of the human genome, which encodes a wide range of human characteristics including the risk of disease, has provided the foundation for modern biomedical research.
Through research built around the human genome, the scientific community aims to learn more about the interplay of genes, and the role of biologically active regions of the genome in maintaining health or causing disease. Since such active areas are often well conserved between species, and given the huge costs involved in sequencing a human genome, scientists have worked hard to sequence a wide range of organisms that span evolutionary history.
This has resulted in the publication of around 40 different species' genomes, ranging from C. elegans to the Chimpanzee, from the Opossum to the Orangutan. These genomic sequences have helped progress the state of the art of human genomic research, in part, by helping to identify biologically important genes.
Whilst there is great value in comparing genomes between species, the answers to key questions of an individual's genetic makeup can only be found by looking at individuals within the same species. Until recently, this has been prohibitively expensive. We needed a quantum leap in cost-effective, timely individual genome sequencing, a leap delivered by a new wave of technologies from companies such as Illumina, Roche and Applied Biosystems.
In the last 18 months, new horizons in genomic research have opened up, along with a number of new projects looking to make a big impact (the 1000 Genomes Project and International Cancer Genome Consortium to name but two). Despite the huge potential, these new technologies bring with them some tough challenges for modern biological research.
High throughput
For the first time, biology has become truly data driven. New short-read sequencing technologies offer orders of magnitude greater resolution when sequencing DNA, sufficient to detect the single-letter changes that could indicate an increased risk of disease. The cost of this enhanced resolution comes in the form of substantial data throughput requirements, with a single sequencing instrument generating terabytes of data a week - more than any previous biological protocol. The methods by which data of this scale can be efficiently moved, analyzed, and made available to scientific collaborators (not least the challenge of backing it up) are cause for intense activity and discussion in biomedical research institutes around the globe.
Very rapid change
Scientific research has always been a relatively dynamic realm to work in, but the novel requirements of these new technologies bring with them unprecedented levels of flux. Software tools built around these technologies are required to bend and flex with the same agility as the frequently updated and refined underlying laboratory protocols and analysis techniques. A new breed of development approaches, techniques and technologies is needed to help biological researchers add value to this data.
In a very short space of time the biological sciences have caught up with the data and analysis requirements of other large-scale domains, such as high energy physics and astronomy. It is an exciting and challenging time to work in areas with such large-scale requirements, and I look forward to discussing the role of distribution and architecture, and the networked future of science, here on Radar.
tags: genomics, informatics, science, software
The Twitter Gold Mine & Beating Google to the Semantic Web
by Nick Bilton
There have always been jabs at Twitter for not having a viable business model, and the chatter has increased in the current economic climate. In a recent interview Evan Williams, Twitter's CEO, said "We had planned to focus on revenue in 2010 but that's no longer the case, so we changed the plan quite a bit... We've moved revenue higher on our list of priorities...".
I believe Twitter, potentially, has an incredible business model.
In The New York Times R&D Labs, where I work, we've been talking a lot about 'smart content' in relation to advertising, search and news delivery. For the past 157 years (that's how old the newspaper is) we've essentially delivered 'dumb content' to people's doorsteps. You and I, irrespective of interests, location, etc., have received the same newspaper on our doorsteps every morning. We're beginning to explore ways to make content smarter: to understand what you've read, which device you've read it on and your micro-level interests, making the most important news find you, instead of you having to find it.
This also changes the advertising model, where ads become even smarter. Sure, ads are at about a 1st-grade reading level now; with AdSense and cookies, the ad networks have half an idea of what I'm interested in, but they aren't exactly smart about it. Just because a friend sends me an email about a baseball game doesn't mean I want to see ESPN ads in my Gmail.
So what does this have to do with a Twitter business model? Twitter potentially has the ability to deliver unbelievably smart advertising - advertising that I actually want to see - and the ability to deliver search results far superior to, and more accurate than, Google's, putting Twitter in the running to beat Google in the latent quest for the semantic web. With some really intelligent data mining and cross-pollination, they could give me ads that make sense not for something I looked at 3 weeks ago, or a link my wife clicked on when she borrowed my laptop, but ads that are extremely relevant to 'what I'm doing right now'.
A quick perusal of my tweets shows that I live in Brooklyn, NY, I work for The New York Times, I teach at NYU/ITP, I travel somewhere once a month for work, I love gardening, cappuccinos, my Vespa, UI/design and hardware hacking, I'm a political news junkie, I read Gizmodo & NYTimes.com, I was looking for a new car for a while but now have a MINI, and I'm also friends with these people. That's a treasure trove of data about me, and it's semantic on a granular level about only my interests.
If I send a tweet saying "I'm looking for a new car, does anyone have any recommendations?", I would be more than happy to see 'smart', user-generated advertising recommendations based on my past tweets: mine the data of other people living in Brooklyn who have tweeted about their cars and deliver a tweet/ad based on those results, leaving spammers lost in the noise. I'd also expect that when I send a tweet saying 'I got a new car and love it!' those car ads stop appearing and something else, relevant to only me, takes their place.
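A toy sketch of the kind of matching this implies: pull interest terms from a user's recent tweets and rank candidate ads by overlap. The tweets, keyword sets, and ads below are all made up for illustration:

```python
# A toy sketch of interest mining over recent tweets: extract candidate interest
# terms and rank ads by keyword overlap. Tweets, keyword sets, and ads are all
# made up for illustration.
from collections import Counter

recent_tweets = [
    "I'm looking for a new car, does anyone have any recommendations?",
    "Espresso machine finally dialed in. Best cappuccino in Brooklyn.",
    "Weekend plan: gardening and a Vespa ride.",
]

ads = {
    "MINI dealer, Brooklyn": {"car", "brooklyn", "mini"},
    "Local coffee roaster": {"cappuccino", "espresso", "coffee"},
    "Enterprise software suite": {"crm", "erp", "b2b"},
}

def interest_terms(tweets):
    words = Counter(w.strip(".,?!'\"").lower() for t in tweets for w in t.split())
    return {w for w in words if len(w) > 2}

def rank_ads(tweets, ads):
    terms = interest_terms(tweets)
    return sorted(ads, key=lambda ad: len(ads[ad] & terms), reverse=True)

print(rank_ads(recent_tweets, ads))  # car and coffee ads outrank the irrelevant one
```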
And it doesn't have to be advertising delivered on their site alone. One of the great successes of Twitter has been its APIs and the wonderful applications and sites that users have built with them. Why not build out an advertising or search API that delivers the latest micro-level tags or ad links based on users' interests? There's a plethora of opportunities in this data, and if it's done right it becomes enticing and engaging, not annoying, irrelevant and outdated.
tags: advertising, google, twitter, web 2.0
Catch 22: Too Big To Fail, Too Big To Succeed
by Joshua-Michéle Ross
Hat in hand, the U.S. auto industry lined up for its slice of government aid, and as of this posting it appears it will get the money it is asking for. These titans spent years hiding behind the “free market” shibboleth when convenient (the market wants gas-guzzling SUVs) and, when punished by that same market, we hear that they are victims of factors outside their control and that they are “too big to fail.” It has become a hackneyed expression precisely because it summarizes the situation so well: this is the privatization of profit and the socialization of loss.
The very concept of “Too Big To Fail” points to a deeper truth: the U.S. auto industry does not operate within the “free market” at all. Far from it. As their moniker suggests, the “Big Three” are an oligopoly with a long record of eschewing innovation (electric cars, hybrids, etc.), killing off alternatives like mass transit, and bullying public policy (lobbying against CAFE standards and environmental and tax policies [Hummer owners get a $34K tax credit!], threatening to relocate factories, etc.) - all in an effort to conform the not-so-“free market” to its lumbering non-strategy of pursuing short-term profit.
Now that their short-term thinking has met with long-term reality we are faced with bailing them out. Fair enough. There are millions of jobs connected to the automobile industry. But do we now trust these same institutions to deliver and execute the plan for a sustainable U.S. transportation industry?
If these are the flaws of the industry, consider its current leadership: the CEOs of these failing behemoths flew in on corporate jets, asked for $25 billion, brought literally not one shred of documentation on what they intended to do differently, and couldn’t explain how they arrived at the $25 billion figure in the first place. When asked if they would accept a $1-per-year salary (Iacocca-style) in exchange, responses from GM and Ford ranged from non-committal to sarcastic (“I don’t have a position on that today” - GM’s Rick Wagoner; “I think I am OK where I am today” - Ford’s Alan Mulally, who earns $22M per year).
Oligopolies like the Big Three thrive on standardization, scale and market manipulation - not innovation. It is precisely their structure, size and leadership DNA that I believe preclude them from any chance of successful innovation. So there is the Catch-22: they may be too big to fail, but they are too big, bloated and corrupt to succeed. If we, the taxpayers, are funding the bailout, what are the alternatives?
tags: automotive, economy
Facebook Growth Regions and Gender Split
by Ben Lorica
Since we began tracking Facebook demographics in late May, weekly growth has held steady, usually in the low single digits on a percentage basis. More importantly, it's fair to say that the company has successfully expanded overseas. With close to 128M users, the share of U.S. users is down to around 30% from 35% in late May:

Over the last three months, Facebook has added members across all regions, with the strongest growth coming from Europe, South America, and the Middle East/North Africa:

In Europe, growth has been especially impressive in Italy and Spain. I'm not sure when the Italian translation of Facebook launched, but soon after it did, Italians started signing up in droves. The (crowdsourced) Spanish translation was completed within a month and launched in early 2008. I've read reports that users in Spain have used the site to connect with long-lost relatives in Latin America. Venezuela, Argentina, and Uruguay were Facebook's fastest-growing countries in South America. In late May, some Radar readers were highlighting Facebook's growing popularity in Venezuela, Argentina, and Chile.
I don't have any particular insight into how Facebook is growing in the Middle East and North Africa, but the company has added lots of users in Tunisia, Morocco, and Turkey. (I encourage Radar readers from the region to share their thoughts in the comments.)
Having grown up in Southeast Asia, I've been detecting more interest in Facebook among friends in the region. But for now Facebook still lags Friendster and Multiply. In fact, Facebook has far fewer users in all of Asia than it has in Canada! Similarly, the U.K. has more than twice as many Facebook users as all of Asia. Facebook has to contend with homegrown social networks and slightly different online habits: Asian internet users spend more time on gaming and instant messaging. But even with its relatively small user base and amidst a competitive environment, Facebook is growing in Asia (it added 1.5M users from the region in the last 12 weeks).
Another interesting tidbit about Facebook's recent growth is that the fast-growing regions discussed above are adding teen (13-17) and college-age (18-25) users at a faster rate than North America.

With Facebook already holding a commanding share of college-age users in its home country, U.S. growth has been strongest among working-age users (26-59). I was expecting stronger growth in the teen market (13-17), but teens remain the slowest-growing group in the U.S.

The gender split has persisted: females now outnumber males, 51% to 44%. In late May the female-to-male split was 41% to 34%. The share of users who decline to state their gender dropped from 24% in late May to 5% in early December.

That females so outnumber males may surprise people. While the female/male distribution has persisted over time, there is quite a bit of variation across regions. The Middle East/North Africa and Africa are the only regions where male Facebook users outnumber females.

tags: facebook, hard numbers, social networking
Open Source Mobile Roundup
by Nat Torkington
Tim sent around a link to the VisionMobile report on open source technology in the mobile space, which I really enjoyed. It covers not just the software used at different layers in the stack but also licensing and governance models. Strongly recommended.
tags: mobile, open source