Zappos: If You Are Great at Something - Let It Go...
(Or Resell It)
by Joshua-Michéle Ross
I am fascinated by what I see as Zappos' ongoing evolution from a simple, online retailer to a leading online innovator. A few months back I wrote about Zappos pioneering what I called "Experience Syndication" with their Powered by Zappos (PBZ) service. In brief, PBZ syndicates the end-to-end value of shopping with Zappos - from the online store experience to shipping, to returns, to the call center - everything. Clarks Shoes, Stuart Weitzman and many other online sites are providing a customer experience entirely syndicated by Zappos.
Last night I saw CEO Tony Hsieh’s tweet about Zappos Insights - a paid membership site “that allows 'Fortune one million' companies to gain insights from the learnings of Zappos.com. The site will allow access to Zappos.com management and contacts and provide guidance and direct answers for user generated questions via video responses.”
If PBZ syndicates the customer experience, Zappos Insights is syndicating the internal business experience; providing a window into the leadership and culture that has made Zappos such a successful business. What is so radical about this is the notion that Zappos is willing to let go of the very thing that makes them so exceptional.
What other company would you like to see create a similar service?
tags: strategy, zappos
Wikipedia and Nature
by Nat Torkington
I love the RNA Biology journal's new guidelines for submissions, which state that you must submit a Wikipedia article on your research on RNA families before the journal will publish your scholarly article on it:
This track will primarily publish articles describing either: (1) substantial updates and reviews of existing RNA families or (2) novel RNA families based on computational and/or experimental results for which little evolutionary analysis has been published. These articles must be accompanied by STOCKHOLM formatted alignments, including a consensus secondary structure or structures and a corresponding Wikipedia article. Publication in the track will require a short manuscript, a high quality Stockholm alignment and at least one Wikipedia article, each centered around the RNA in question.
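If you haven't seen the format, here's a toy Stockholm alignment (invented sequences rather than a real RNA family) showing the pieces the guidelines ask for; the #=GC SS_cons line carries the consensus secondary structure:

    # STOCKHOLM 1.0
    seq1/1-20        GGCGCGUAGCGCAUAGCGCC
    seq2/1-20        GGCGCGUUGCGCAUAACGCC
    #=GC SS_cons     <<<<............>>>>
    //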
As my source for this points out, Nature (the publishing organisation behind the RNA Biology journal, and co-producer of Science Foo Camp with O'Reilly and Google) already synchronises a database with Wikipedia. Apparently there's a core of scientists who do most of the edits, but also a lot of other scientists who pop in sporadically to fix or add information.
Kudos to Nature for doing something imaginative to increase the commons. Journals wield a huge amount of power in the scientific world, and it's wonderful to see them using that power to incentivize good.
tags: nature, publishing, science, wikipedia
Hard Work and Practice in Programming
by Tim O'Reilly
At the Program For the Future event commemorating the 40th anniversary of Doug Engelbart's "mother of all demos" in 1968, I was privileged to hear an inspired rant by Alan Kay about the unwillingness of people to work hard to learn new skills. I'm quoting from memory, so the lines below are not exact, and there's no way I can convey the wonderful sense of outrage expressed in Alan's voice, but I hope you can imagine it:
If some entrepreneur introduced the bicycle today, no one would fund him. You have to actually learn how to use it! ...I saw a controller for Guitar Hero that costs a couple of hundred dollars. You can get a decent electric guitar for that price. But you'd have to actually learn something to play it!
There's a long arc in computing that teaches us how much we gain through advances in ease-of-use, with the iPhone being the latest breakthrough success. But it's important to remember how much we lose when we think that ease of use is everything. Many things worth doing are hard, requiring a great deal of practice before you achieve mastery.
Shortly thereafter, I was intrigued to see an interview entitled Bjarne Stroustrup on Educating Software Developers (via Slashdot) sounding the same theme:
High schools could teach students to work hard at something (just about anything), to search out information as needed, and learn to express their ideas in writing and orally. Project-based work is good for that. Exactly which programming language is used for software is less important, but the aim should not be to make tasks as simple as possible but to challenge students.
And of course, practice, specifically "10,000 hours of practice" during childhood, is one of the themes of Malcolm Gladwell's new book, Outliers.
The interview with Stroustrup provoked a great discussion on the O'Reilly editors' backchannel. It was so juicy that I wanted to share it with all of you.
tags: alan kay, bicycle, engelbart, practice, programming
GeoData Explorations: Open Street Map's Growth
by Brady Forrest
Open Street Map (OSM), the open data mapping project, has grown a lot over the past year. It now has almost 80,000 users and 800 million data points.
OSM's data is still freely available, but commercial services around it have sprung up. Cloudmade is a startup that recently moved from the UK to San Francisco to be closer to investors and to build up its US data. GeoFabrik is a German startup with similar plans (just focused on Germany). Flickr has been making use of OSM lately to supplement Yahoo's mapping data (specifically for Black Rock City, Beijing, Kabul and Baghdad).
The above image is Planet - A Year Of Edits On OpenStreetMap. It was generated on November 23rd, 2008 by Peter Ito. Most of the growth occurred in Europe (where the project originated) and the United States (where the founder has moved). The US community has really picked up the pace and has started replacing the US government's free TIGER data set. You can see images of the data edits in the US for October and November and an animation of the world's edits.
This cartogram shows the distribution of POIs (Points of Interest) in the OSM data set. The UK and Germany have a disproportionate amount of data compared to their land mass (but obviously not compared to their OSM users). This image was released on 11/7 on the Cloudmade blog.
If you're not familiar with cartograms go explore Worldmapper, it's an amazing site filled with them. Or make your own with the same software.
The above graph shows the number of registered (and presumably contributing) OSM users and the number of uploaded track points. The user growth is similar to Wikipedia's early years, but it's uncertain whether OSM will be able to match Wikipedia's amazing growth. Uploading GPS tracks or editing geodata is a higher barrier for users than editing an article: users have to register and use complicated tools (though Potlatch, the online editor, attempts to level the playing field).
If you want to see the people behind the map, all of their names are available in this short animation. You can see other OSM stats in their wiki.
This is the latest GeoData Explorations post; also see GeoData Explorations: Google's Ever-Expanding Geo Investment. If you have geodata to share (for a future post), let me know in the comments.
tags: geo, open data, open street map, where 2.0
GeoData Explorations: Google's Ever-Expanding Geo Investment
by Brady Forrest
Google has been investing lots of money in geodata acquisition. Some of the money is being spent externally: they've inked an exclusive satellite imagery deal with GeoEye (Radar post) and a data sharing deal with Tele Atlas (Radar post). And some is being spent internally on Mapmaker, Street View and the web. Over the past week Google has been sharing visualizations of their internally gathered geodata. Here's a round-up of them.
The image above was released on December 9th. It shows how much of the US is available via Street View. According to the post Street View imagery increased 22 fold around the world in 2008.
The dark image above was released on December 11th. It highlights the parts of the world that are being mapped on Google's Mapmaker by users (Radar post). Mapmaker is now live in 164 countries. According to the map it has gained the most traction in Africa and the Indian sub-continent. The Google Mapmaker team has released timelapse videos of Mapmaker building cities on the Mapmaker YouTube Channel. I've embedded one after the jump.
This final image shows all the points described by GeoRSS and KML all over the world. It was shown at Where 2.0 2007 by Michael Jones (video). Unsurprisingly, this image and the Mapmaker image show opposite data density concentrations.
In some more GeoData Explorations posts this week I will look at OSM vs Google and some surprising trends in KML.
The State of Transit Routing
by Jim Stogdill
My brother called me a week ago and during the course of our conversation mentioned that he made the trek to the Miami Auto Show. He was complaining that he really wanted to take Tri-Rail (the commuter rail that runs along Florida's South East coast) but it was just too hard to figure out the rest of the trip once he got off the train. "One web site for train schedules, another for buses, and another for a city map to tie it all together. It was just too much trouble to figure out, so I drove. I just want to go online and get directions just like I do for driving, but that tells me which train, which bus, etc."
Coincidentally, later in the day I downloaded the iPhone 2.2 upgrade with the new walking and public transit directions. So far, at least where I live, it's useless. The little bus icon just sits there grayed out, taunting me. I guess because SEPTA (our local transit authority for bus and regional rail) isn't giving data to Google?
My brother hadn't heard of Google Transit, but it turns out to have some coverage in Miami. Their coverage at this point seems to be transit-authority-centric, without great support for mixed-mode trips or trips that cross transit system boundaries. I am curious, though: is it being used? Let me know in the comments if you are using it to good effect.
Anyway, my brother's call on the same day as the iPhone update piqued my interest in the current state of the art for mixed-mode transit routing. After some mostly fruitless web searches, I reached out to Andrew Turner. I knew he'd know what was going on. This is what he had to say:
Routing is definitely one of the emerging areas of technology in next-generation applications. So far, we've done a great job getting digital maps onto the web and onto mobile, location-aware devices, and getting users comfortable with them.
One problem for a while has been the lack of data. You can have a great algorithm or concept, but without data it's useless. Gathering this data has been prohibitively expensive - companies like NAVTEQ drive many of the roads they map for verification and additional data. Therefore, if you wanted to buy road data from one of the vendors, you had to have a large sum of money in the bank and know how you were going to monetize it. This stifled experimentation and the creation of niche applications.
Now that the data is becoming widely, and often freely, available innovation is happening at an increased pace.
For one example, consider typical road navigation. The global OpenStreetMap project has always had topology (road connectivity), but the community is now adding attribute data to ways, such as number of lanes, stop lights, turn restrictions, speeds, and directionality. Anyone can download this data to use with a variety of tools such as pgRouting. As a result, people are rethinking standard routing mechanisms that assume travel from A to B via the fastest, or shortest, route. What if a user wants to take the "greenest" route as determined by lowest total fuel consumption, or the most scenic route based on community feedback?
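To make that concrete, here is a toy sketch in Python with networkx of routing over a hand-made graph with a custom "greenest" cost function; the graph, speeds, and fuel model are all invented, not taken from OSM or pgRouting:

    import networkx as nx

    # Tiny invented road graph: each edge carries a length (km) and a speed limit (km/h).
    G = nx.DiGraph()
    G.add_edge("A", "B", length=2.0, speed=50)
    G.add_edge("B", "D", length=3.0, speed=50)
    G.add_edge("A", "C", length=1.5, speed=100)
    G.add_edge("C", "D", length=4.5, speed=100)

    def fuel_cost(u, v, data):
        # Crude stand-in for a fuel model: consumption per km rises with speed.
        litres_per_km = 0.05 + 0.0005 * data["speed"]
        return data["length"] * litres_per_km

    fastest = nx.shortest_path(G, "A", "D", weight=lambda u, v, d: d["length"] / d["speed"])
    greenest = nx.shortest_path(G, "A", "D", weight=fuel_cost)
    print("fastest:", fastest)    # A -> C -> D: higher speeds, more fuel burned
    print("greenest:", greenest)  # A -> B -> D: slower, but less fuel burned

Swap the cost function for a scenic-ness score fed by community ratings and the same search returns the scenic route instead.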
One area that has made strong use of this idea is disaster response. Agencies and organizations deploy to areas with little on-the-ground data, or data that is now obsolete due to the disaster they're responding to. Destroyed bridges, flooded roads, and new, temporary infrastructure are just some of the realities that typical navigation systems miss. The capability for responders to correct the data and instantly get new routes is therefore vital. And these routes may need to be based on attributes different from typical engines - it's not about the fastest route, but about which roads will handle a 5-ton water truck.
This scheme was deployed in the recent hurricane response in Haiti in conjunction with the UNJLC, CartOng, OpenStreetMap and OpenRouteService.
Beyond simple automotive routing, we can now incorporate multi-modal transit. With 50% of the world's population now living in urban areas, the assumption that everyone is in a car is not valid. Instead, people use a mixture of cars, buses, subways, walking, and bicycling. This data is also being added to OpenStreetMap, as well as to other projects such as Bikely or EveryTrail. GraphServer is one routing engine that will incorporate these various modes and provide routes.
And we're interfacing with all these engines using a variety of devices: laptop, PND (Personal Navigation Device), GPS units, mobile phones, and waymarking signs. PointAbout recently won an award in the Apps For Democracy for their DC Location Aware Realtime Alerts mobile application that displays the route to the nearest arriving metro.
What's also interesting is the potential of these routing tools beyond specific individual routes. Taken in aggregate, the routing distances form a new topography of the space. Given a point in the city, how far can I travel in 20 minutes? In 40 minutes? For less than $1.75? This type of map is known as an isochrone. Tom Carden and MySociety developed London Travel Time Maps that let users highlight the spots in London that fall within a given range of house prices and travel times.
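A rough sketch of the isochrone idea, again in Python with an invented graph: run a single-source shortest-path search with a travel-time cutoff and keep everything reachable within the budget.

    import networkx as nx

    # Invented graph whose edge weights are travel times in minutes.
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("home", "station", 8), ("station", "downtown", 12),
        ("station", "park", 25), ("home", "shops", 15),
    ])

    # Everything reachable from "home" within a 20-minute budget.
    within_20 = nx.single_source_dijkstra_path_length(G, "home", cutoff=20)
    print(within_20)  # {'home': 0, 'station': 8, 'shops': 15, 'downtown': 20}

Plot those reachable nodes on a map and you have the contour of a 20-minute isochrone.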
Despite these apparent benefits, there is a large hurdle. Like road data, there has been a lack of openly available transit data to power applications and services. Providers like NAVTEQ and open projects like OpenStreetMap are possible because public roads are observable and measurable by anyone. By contrast, the many, varied local transit agencies own and protect their routing data and are reluctant to share it. Google Transit has made great strides in working with transit authorities to expose their information in the Google Transit Feed Specification - at least to Google. The specification does not require that the data be publicly shared, and in many cases it isn't.
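When an agency does publish a feed, the format itself is refreshingly plain: a GTFS feed is just a zip archive of CSV files (stops.txt, routes.txt, stop_times.txt and so on), so even minimal tooling can read it. A sketch, assuming a feed has been saved locally as feed.zip (the filename is made up):

    import csv
    import io
    import zipfile

    # stops.txt lists every stop with its id, name, and coordinates.
    with zipfile.ZipFile("feed.zip") as feed:
        with feed.open("stops.txt") as f:
            stops = list(csv.DictReader(io.TextIOWrapper(f, encoding="utf-8-sig")))

    for stop in stops[:5]:
        print(stop["stop_id"], stop["stop_name"], stop["stop_lat"], stop["stop_lon"])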
However, not even the allure of the widely admired Google Transit can induce some transit authorities to share their prized data. The Director of Customer Service of the Washington Metro Area Transit Authority (WMATA) plainly states that working with Google is "not in our best interest from a business perspective."
Hopefully, this situation will change, first through forceful FOIA requests, and later through cooperation. One step in this direction has been TransitCamps. And Portland's TriMet is a shining example, with a Developer Resources page detailing data feeds and APIs.
These experiments are just the beginning of what is possible in this space. Routing is one of those features that users may not realize they need until they have it, and then they'll find it indispensable. The ability for people to customize how they value distance, to assist in making complex decisions and searches, is very powerful.
For more projects and tools, check out the OpenStreetMap routing page, Ideas in transit and the OGC's OpenLS standards.
tags: emerging tech, geo
My Netbook Took Me Back To Windows
by Brady Forrest
When I left Microsoft I switched to a Macbook Pro and didn't look back. I never thought that I would use a Windows machine regularly again. Then I got an Asus Eee PC 1000h (10.2-inch screen, 1.6 GHz Intel Atom N270 processor, upgraded to 2GB RAM; I judge it to be on the larger end of the netbook spectrum). For three weeks it was my sole computer. It runs XP, and that is just fine for what I expect from a netbook.
How is the netbook different? It is a secondary machine that knows its place. It is not as powerful as my Macbook, nor is the workspace as big; I am definitely less efficient on it. I got it for its size and price. The 10-inch screen (1024x600 resolution) is fine for most work. The weight (3.2 lbs) is a relief for a traveller. And ringing in at under $350 (with discounts and Live Cashback), it is an affordable luxury. In fact the price makes it almost disposable. Not disposable in a throw-away sense, but in the sense that if it gets stolen, lost or ruined while I am on the road, it will not be the end of the world or a costly item to replace. It's a machine that I can throw in my backpack when I go out for the day and not worry about too much.
During my three weeks of travel I used the machine primarily for browsing the web, answering email, managing photos and watching video. The screen is tiny, so I sought software that left as much room as possible for the workspace. Chrome, for example, gives up very little screen space to toolbars. I switched from the clunky Zimbra Desktop client to Windows Live Mail (a really well-designed mail client if you can overlook the lack of smart folders and a couple of quirks).
My other major criterion for software was the ability to sync off the machine. Other than when managing media, I tried never to save directly to the file system, only to the web. The netbook will never be my main machine and I do not want to "forget" a file on it. I relied on Evernote to record my notes and save them to the cloud.
To make the machine more reminiscent of my Mac I installed Launchy. It's an extendable application launcher like Quicksilver. With Launchy I never use the Start Menu.
This is not to say that I didn't find the computer limiting. I was unable to install Valve's Portal (most likely due to the integrated graphics card) and video occasionally stuttered on the machine. I kept a minimum number of apps open to prevent the machine from slowing down.
Instead of XP I could run a Linux variant or Mac OS X. I do dual-boot with Ubuntu-eee, but it is not my primary OS. As you can see in the screenshot, it is very icon-heavy and does a good job of being user-friendly. However, the OS lacks the client software that I need (no Chrome or Evernote client). Soon there will be another Ubuntu designed specifically for netbooks. According to Techcrunch, Tariq Krim is developing Jolicloud, but without more information I am not certain how it differs from Ubuntu-eee - based on screenshots they look very similar.
I ultimately chose XP because it stays out of the way, it has the software I want and it lets me get the job done. I am not sure that it will keep me. Chrome will be coming out on Linux. Evernote (and other clients) could opt to develop across all platforms. New netbook-oriented OSs are going to be designed with a netbook's characteristics in mind.
(It's being reported that Dell will start penalizing users for selecting XP over Vista to the tune of an extra $150. It's interesting to note that Dell does not offer Vista as an option for the Dell Mini, its netbook offering.)
(Ubuntu-eee screenshot courtesy of ubuntu-eee.com)
Update: In the comments Corey Burger provided some interesting information on Ubuntu-eee: The icon-heavy launcher is built by Canonical and is called the netbook-remix-launcher or ubuntu-mobile-edition launcher, depending. Ubuntu-eee is basically just that plus a few tweaks. Coming with Ubuntu 9.04 will be official images/isos for all sorts of netbooks.
Register's Googlewashing Story Overblown
by Tim O'Reilly
I'm disappointed by the pile-on of people rising to Andrew Orlowski's classic bit of yellow journalism (or trolling, as it's more often referred to today), Google Cranks Up the Consensus Engine. If so many other people weren't taking it seriously, I'd just ignore it. (I just picked this story up via Jim Warren's alarmed forward to Dave Farber's IP list.)
Orlowski breathlessly "reports": "Google this week admitted that its staff will pick and choose what appears in its search results. It's a historic statement - and nobody has yet grasped its significance."
Orlowski has divined this fact based on the following "evidence," a report by Techcrunch's Michael Arrington on comments made by Marissa Mayer at the Le Web Conference in Paris:
Mayer also talked about Google’s use of user data created by actions on Wiki search to improve search results on Google in general. For now that data is not being used to change overall search results, she said. But in the future it’s likely Google will use the data to at least make obvious changes. An example is if “thousands of people” were to knock a search result off a search page, they’d be likely to make a change.
While I agree that, if true, Google's manipulation of search results would be a serious problem, I don't see any evidence in this comment of a change in Google's approach to search. I fail to see how tuning Google's algorithms based on the input of thousands of people about which search results they prefer is different from Google's initial algorithms like Pagerank, in which Google weights links from sites differently based on a calculated value that reflects -- guess what, the opinions of the thousands of people linking to each of those sites in turn.
The idea that Google's algorithms are somehow magically neutral to human values misses their point entirely. What distinguished Google from its peers in 1998 was precisely that it exploited an additional layer of implicit human values as expressed by link behavior, rather than relying on purely mechanistic analysis of the text contained on pages.
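To see how human judgment gets baked into the "mechanistic" ranking, here is a minimal sketch of the PageRank idea (mine, not Google's actual code): each page's score is assembled from the scores of the pages that chose to link to it.

    # Toy link graph: each page lists the pages it links to.
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}

    damping = 0.85
    rank = {page: 1.0 / len(links) for page in links}

    for _ in range(50):  # power iteration
        new = {page: (1 - damping) / len(links) for page in links}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += share
        rank = new

    print(rank)  # pages with more (and better-ranked) inbound links score higher

Every link in that toy graph is a human choice; the arithmetic just aggregates those choices.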
Google is always tuning their algorithms to produce what they consider to be better results. What makes a better result? More people click on it.
There's a feedback loop here that has always guided Google. Google's algorithms have never been purely mechanistic. They are an attempt to capture the flow of human meaning that is expressed via choices like linking, clicking on search results, and perhaps in the future, gasp, whether people using the wikified version of the search engine de-value certain links.
This is not to say that Google's search quality team doesn't make human interventions from time to time. In fact, search for O'Reilly (my name) and you'll see one of them. You'll see an unusual split page: the top half holds the organic search results, dominated by yours truly and my namesake Bill O'Reilly, while the second half of the page is given over to Fortune 500 company O'Reilly Auto Parts.
Why? Because Google's algorithms pushed O'Reilly Auto Parts off the first page in favor of lots more Tim O'Reilly and Bill O'Reilly links, and Google judged, based on search behavior, that folks looking for O'Reilly Auto Parts were going away frustrated. Google uses direct human intervention when it believes that there is no easy way to accomplish the same goal by tuning the algorithms to solve the general case of providing the best results.
(I should note that my only inside knowledge of this subject comes from a few conversations with Peter Norvig, plus a few attempts to persuade Google to give more prominence to book search results, which failed due to the resistance of the search quality team to mucking with the algorithms in ways that they don't consider justified by the goal of providing greater search satisfaction.)
Even if Google were to become manipulative for their own benefit in the way Orlowski implies, I don't think we have to worry. They'd soon start losing share to someone who gives better results.
P.S. Speaking of the dark underbelly of editorial bias, consider this: Orlowski doesn't even bother to link to his source, the Techcrunch article. There's only one external link in his piece, and it's done in such a way as to minimize the search engine value of the link (i.e. with no key search terms in the anchor text.) Orlowski either doesn't understand how search engines work, or he understands them all too well, and is trying not to lead anyone away from his own site. A good lesson in how human judgment can be applied to search results: consider the source.
tags: google, googlewashing, register
Michael Pollan on Food, Energy, Climate, and Health
by Sara Winge
In his latest column, Nicholas Kristof encourages President-Elect Obama to heed Michael Pollan's call for a radically new food policy. Pollan makes a convincing case that our current food system is a "shadow problem." If we're serious about working on energy independence, climate change, and health care, we have to change how we're feeding ourselves.
During his interview with Pollan at the Web 2.0 Summit last month, John Battelle boiled it down to "eat sunshine." Pollan challenged the audience to make a difference in the food system. Watch the video and ask yourself: can tech innovators and entrepreneurs create technology to make the food system more transparent and carbon-neutral, and figure out how to make money creating solar food production systems?
O'Reilly AlphaTech Ventures Invests in Amee
by Tim O'Reilly
I'm pleased to announce that on Wednesday, O'Reilly AlphaTech Ventures, our VC affiliate, closed an investment in UK-based Amee, which bills itself as "the world's energy meter." Here's their description of what they do:
AMEE's aim is to map, measure and track all the energy data on Earth. This includes aggregating every emission factor and methodology related to CO2 and Energy Assessments (individuals, businesses, buildings, products, supply chains, countries, etc.), and all the consumption data (fuel, water, waste, quantitative and qualitative factors). It is a web-service (API) that combines measurement, calculation, profiling and transactional systems. Its algorithmic engine applies conversion factors from energy into CO2 emissions, and represents data from 150 countries.
AMEE aids the development of businesses and other initiatives - by providing common benchmarks for measurement, tracking, conversion, collaboration and reporting.
If you've been following my talks in which I urge software developers and entrepreneurs to "work on stuff that matters," you know that I consider getting a handle on carbon accounting to be the first step in putting a stop to global warming. (If you're a warming skeptic, I consider global warming a modern example of Pascal's wager: if we're wrong, and global warming is not human caused, the steps we'll take to address it are still worthwhile. We get off foreign oil, improve our energy security, build new industries, and improve the environment.)
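To make the conversion-factor idea concrete, here's a toy calculation of the kind such a service performs; the factors below are invented for illustration and are not AMEE's data or API:

    # Invented emission factors (kg CO2 per unit of consumption). A real service
    # aggregates thousands of these, varying by country, fuel, and methodology.
    EMISSION_FACTORS = {
        "electricity_kwh": 0.5,
        "petrol_litre": 2.3,
    }

    def carbon_kg(activity, amount):
        """Convert a consumption figure into kilograms of CO2."""
        return EMISSION_FACTORS[activity] * amount

    monthly = carbon_kg("electricity_kwh", 300) + carbon_kg("petrol_litre", 40)
    print(f"{monthly:.1f} kg CO2 this month")  # 150.0 + 92.0 = 242.0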
Even apart from the contribution to a critical world issue, Amee is interesting because it shows that the future of web services will involve a much broader range of data services than most people imagine. I've long argued that the subsystems of the emerging internet operating system are data subsystems. Some of those, like location and identity, are obvious, and thus hotly contested. Others, like carbon data, are sorely needed, and not yet built out. There's huge opportunity in finding and populating key databases, and then turning them into ubiquitous web services.
By the way, if you use dopplr, you've already seen Amee at work: it provides the data for dopplr's carbon calculator tab.
Union Square Ventures is also an investor in this round. Partner Albert Wenger gives his take on the investment on their blog.
tags: amee, carbon, energy, global warming, investments, oatv
Clever Emoticarolers App
by Dale Dougherty
Open the door and smiley-face carolers sing a song that you can customize and send to others. That's the emoticarolers concept, worked up by Jason Striegel, our Hackszine editor, who leads the development side of things for Colle+McVoy in Minneapolis. The team created this clever holiday "text-to-sing" promotion for Yahoo Messenger at emoticarolers.com. A custom Make carol is here. (Reminds me of the Smileys book by David Sanderson that I developed many years ago.)
I asked Jason how they built the app and here's what he said:
The front end interface is written in Flash/AS3. It talks to a PHP backend, which uses the Festival text-to-speech software and some other Unix audio tools to render each of the four voices. Those all get compiled back into a single mp3 and sent back to Flash, along with an xml file that tells the app how to animate the emoticons and custom lyrics. Aside from some of the animated bits, this could work as-is with an HTML/CSS/JS front end as well.
The process is pretty cpu intensive, so we had to use a number of load balanced machines to handle requests. They output files on Amazon S3, all keyed by a unique id. If this becomes popular (fingers crossed), there's no database or anything that will bottleneck reads or writes, and it should just scale linearly as we add more boxes.
It's funny how the text-to-singing stuff ended up being only a small portion of the project.
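For the curious, here's roughly what the rendering step could look like if you rebuilt it yourself around Festival's text2wave and sox; this is a sketch under those assumptions, not the Colle+McVoy code, and the voice names and file paths are only examples:

    import subprocess

    LYRIC = "We wish you a merry Christmas"
    VOICES = ["voice_kal_diphone", "voice_rab_diphone"]  # whichever voices are installed

    wav_files = []
    for i, voice in enumerate(VOICES):
        wav = f"voice{i}.wav"
        # Festival's text2wave turns text on stdin into a WAV using the selected voice.
        subprocess.run(
            ["text2wave", "-o", wav, "-eval", f"({voice})"],
            input=LYRIC.encode(), check=True,
        )
        wav_files.append(wav)

    # Mix the rendered parts into one file with sox; a real pipeline would then
    # transcode to mp3 and hand the result back to the Flash front end.
    subprocess.run(["sox", "-m", *wav_files, "chorus.wav"], check=True)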
tags: carol, Christmas, festival, speech synthesis
Challenges for the New Genomics
by Matt Wood
New guest blogger Matt Wood heads up the Production Software team at the Wellcome Trust Sanger Institute, where he builds tools and processes to manage tens of terabytes of data per day in support of genomic research. Matt will be exploring the intersection of data, computer technology, and science on Radar.
The original Human Genome Project was completed in 2003, after a 13-year worldwide effort and a billion dollar budget. The quest to sequence all three billion letters of the human genome, which encodes a wide range of human characteristics including the risk of disease, has provided the foundation for modern biomedical research.
Through research built around the human genome, the scientific community aims to learn more about the interplay of genes, and the role of biologically active regions of the genome in maintaining health or causing disease. Since such active areas are often well conserved between species, and given the huge costs involved in sequencing a human genome, scientists have worked hard to sequence a wide range of organisms that span evolutionary history.
This has resulted in the publication of around 40 different species' genomes, ranging from C. elegans to the chimpanzee, from the opossum to the orangutan. These genomic sequences have helped progress the state of the art of human genomic research, in part by helping to identify biologically important genes.
Whilst there is great value in comparing genomes between species, the answers to key questions of an individual's genetic makeup can only be found by looking at individuals within the same species. Until recently, this has been prohibitively expensive. We needed a quantum leap in cost-effective, timely individual genome sequencing, a leap delivered by a new wave of technologies from companies such as Illumina, Roche and Applied Biosystems.
In the last 18 months, new horizons in genomic research have opened up, along with a number of new projects looking to make a big impact (the 1000 Genomes Project and International Cancer Genome Consortium to name but two). Despite the huge potential, these new technologies bring with them some tough challenges for modern biological research.
High throughput
For the first time, biology has become truly data driven. New short-read sequencing technologies offer orders of magnitude greater resolution when sequencing DNA, sufficient to detect the single-letter changes that could indicate an increased risk of disease. The cost of this enhanced resolution comes in the form of substantial data throughput requirements, with a single sequencing instrument generating terabytes of data a week--more than any biological protocol to date. The methods by which data of this scale can be efficiently moved, analyzed, and made available to scientific collaborators (not least the challenge of backing it up) are the cause of intense activity and discussion in biomedical research institutes around the globe.
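As a trivial illustration of the streaming style that these volumes force on you, here's a sketch that tallies base calls from a FASTQ file (the common short-read output format) without ever holding the file in memory; the filename is invented:

    from collections import Counter
    from itertools import islice

    base_counts = Counter()
    with open("lane1_reads.fastq") as reads:
        while True:
            record = list(islice(reads, 4))         # FASTQ records are four lines each
            if len(record) < 4:
                break
            base_counts.update(record[1].strip())   # line 2 holds the called bases

    print(base_counts)  # e.g. Counter({'A': ..., 'T': ..., 'G': ..., 'C': ...})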
Very rapid change
Scientific research has always been a relatively dynamic realm to work in, but the novel requirements of these new technologies bring with them unprecedented levels of flux. Software tools built around these technologies are required to bend and flex with the same agility as the frequently updated and refined underlying laboratory protocols and analysis techniques. A new breed of development approaches, techniques and technologies is needed to help biological researchers add value to this data.
In a very short space of time the biological sciences have caught up with the data and analysis requirements of other large-scale domains, such as high energy physics and astronomy. It is an exciting and challenging time to work in areas with such large-scale requirements, and I look forward to discussing the role of distribution, architecture and the networked future of science here on Radar.
tags: genomics, informatics, science, software