Operations
Velocity 2009 - Big Ideas (early registration deadline)
by Jesse Robbins | comments: 5
My favorite interview question to ask candidates is: "What happens when you type www.(amazon|google|yahoo).com in your browser and press return?"
While the actual process of serving and rendering a page takes seconds to complete, describing it in real detail can take an hour. A good answer spans every part of the Internet from the client browser & operating system, DNS, through the network, to load balancers, servers, services, storage, down to the operating system & hardware, and all the way back again to the browser. It requires an understanding of TCP/IP, HTTP, & SSL deep enough to describe how connections are managed, how load-balancers work, and how certificates are exchanged and validated... and that's just the first request!
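To make the shape of a good answer concrete, here is a minimal sketch of just the first few steps — DNS, TCP, TLS, and the first HTTP request — using nothing but the Python standard library. The hostname is a placeholder, and real browsers of course do far more (caching, keep-alive, parallel connections, rendering).

```python
import socket
import ssl

host = "www.example.com"   # stand-in for amazon/google/yahoo

# Step 1: DNS -- resolve the hostname to one or more IP addresses.
addresses = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
ip = addresses[0][4][0]
print("DNS resolved %s -> %s" % (host, ip))

# Step 2: TCP -- three-way handshake to port 443.
sock = socket.create_connection((ip, 443), timeout=10)

# Step 3: TLS -- certificate exchange and validation (SNI set via server_hostname).
context = ssl.create_default_context()
tls = context.wrap_socket(sock, server_hostname=host)
print("TLS established:", tls.version(), tls.cipher()[0])

# Step 4: HTTP -- the first request; the response triggers many more
# (CSS, JS, images), each repeating some or all of the steps above.
tls.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
print(tls.recv(200).decode("latin-1").splitlines()[0])   # status line
tls.close()
```

And that still stops well short of the full answer, which continues through load balancers, application servers, storage, and back out to the browser's rendering engine.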
Web Performance & Operations is an emerging discipline which requires incredible breadth, focusing less on specific technologies and more on how the entire system works together. While people often specialize on particular components, great engineers always think of that component in relation to the whole. The best engineers are able to fly to the 50,000 foot view and see the entire system in motion and then zoom in to microscopic levels and examine the tiny movements of an individual part.
John Allspaw recently described this interconnectedness on his blog:
With websites, the introduction of change (for example, a bad database query) can affect (in a bad way) the entire system, not just the component(s) that saw the change. Adding handfuls of milliseconds to a query that’s made often, and you’re now holding page requests up longer. The same thing applies to optimizations as well. Break that [bad] query into two small fast ones, and watch how usage can change all over the system pretty quickly. Databases respond a bit faster, pages get built quicker, which means users click on more links, etc. This second-order effect of optimization is probably pretty familiar to those of us running sites of decent scale.
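To put rough numbers on that "handfuls of milliseconds" effect, here is a back-of-the-envelope sketch; the request rate and per-page query count are invented for illustration, not Flickr's or anyone else's real figures.

```python
# Back-of-the-envelope: what a few extra milliseconds on a hot query costs.
# All numbers below are illustrative assumptions only.

requests_per_second = 2000        # page requests hitting the site
queries_per_request = 5           # times the hot query runs per page
added_latency_ms = 8              # "handful of milliseconds" added to the query

extra_ms_per_page = queries_per_request * added_latency_ms
extra_busy_seconds = requests_per_second * extra_ms_per_page / 1000.0

print("Each page is held up %d ms longer" % extra_ms_per_page)
print("The cluster absorbs %.0f extra busy-seconds every wall-clock second" % extra_busy_seconds)
# 2000 req/s * 40 ms = 80 server-seconds of added work per second --
# roughly 80 more concurrently busy workers just to stand still.
```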
Working with these systems requires an understanding not only of the way technology interacts, but the way that people do as well. The structure, operation, and development of a website mirrors the organization that creates it, which is why so many people in WebOps focus on understanding and improving management culture & process.
Organizing a conference like Velocity is a wonderful challenge because it requires the same sort of thinking. We focus on the big concepts that everyone needs to know and then go deep into the technologies that change our understanding of the system. We find ways to share the unique experience that can only be gained by operating at scale. We make it safe to share as much of the "Secret Sauce" as we can.
Please join us at Velocity this year; we have an amazing lineup of speakers & participants. Early registration ends on Monday, May 11th at 11:59 PM Pacific. (Radar readers can use "vel09cmb" for an additional 15% discount.)
tags: cloud, data, infrastructure, operations, scale, velocity, velocity09, velocityconf, web, web2.0
Velocity Preview - Keeping Twitter Tweeting
by James Turner | comments: 3
You may also download this file. Running time: 00:10:46
If there's a site that exemplifies explosive growth, it has to be Twitter. It seems like everywhere you look, someone is Tweeting, or talking about Tweeting, or Tweeting about Tweeting. Keeping the site responsive under that type of increase is no easy job, but it's one that John Adams has to deal with every day, working in Twitter Operations. He'll be talking about that work at O'Reilly's Velocity Conference, in a session entitled Fixing Twitter: Improving the Performance and Scalability of the World's Most Popular Micro-blogging Site, and he spent some time with us to talk about what is involved in keeping the site alive.
James Turner: Can you start by describing the platforms and technologies that make Twitter run today?
John Adams: Twitter currently runs on Ruby on Rails. And we also use a combination of Java and Scala, and a number of homegrown scripts that run the site. We also use a lot of open-source tools like Apache, MySQL, memcached.
JT: What type of hardware are you running on?
JA: It's all Linux, so a lot of x86 hardware. I can't tell you the brands or how many.
JT: Do you make any kind of attempt to stay homogeneous in that?
JA: Yes, we do. All of our hardware is very consistent. It makes deployment of new software very easy. And we also use a number of configuration management tools like Puppet to deliver software to those machines.
JT: As anyone can see, Twitter has had a pretty explosive growth, especially recently. Were you prepared for this kind of ramp up?
JA: I don't think so. I mean we're growing week over week in enormous numbers. And we spend a lot of time calculating the growth and scalability of the site to make sure that we can handle the upcoming load.
JT: I mean obviously there are events like Oprah decides she's going to Tweet that are going to be spikes. Do you try to get warning of that stuff?
JA: Yeah. And frequently we know of major events happening. Major events are very predictable like Macworld, even any massive amount of media interaction, we have some fair warning beforehand.
tags: interviews, operations, twitter, velocity, velocity09, velocityconf, web2.0, webops
AT&T Fiber cuts remind us: Location is a Basket too!
by Jesse Robbins | comments: 3
The fiber cuts affecting much of the San Francisco Bay Area this week are similar to the outages in the Middle East last year (radar post), although far more limited in scope and impact. What I said last year still holds true and is repeated below:
From an operations perspective these kinds of outages are nothing new, and underscore why having "many eggs in few baskets" is such a problem. I believe we will see similar incidents when we have the first multi-datacenter failures where multiple providers lose significant parts of their infrastructure in a single geographic area.
Remember: Don't put all your eggs in one basket... and Location is a basket too!
It's also worth mentioning the outages to multiple service providers hosted in a single colocation facility when the FBI seized all the equipment in the facility, the big outage at 365 Main from two years ago, and many others (see: Radar posts & comprehensive coverage at Data Center Knowledge).
To really understand the issue, I recommend Neal Stephenson's incredible (and lengthy) Wired article from 1996 entitled "Mother Earth Mother Board":
[...] It sometimes seems as though every force of nature, every flaw in the human character, and every biological organism on the planet is engaged in a competition to see which can sever the most cables. The Museum of Submarine Telegraphy in Porthcurno, England, has a display of wrecked cables bracketed to a slab of wood. Each is labeled with its cause of failure, some of which sound dramatic, some cryptic, some both: trawler maul, spewed core, intermittent disconnection, strained core, teredo worms, crab's nest, perished core, fish bite, even "spliced by Italians." The teredo worm is like a science fiction creature, a bivalve with a rasp-edged shell that it uses like a buzz saw to cut through wood - or through submarine cables. Cable companies learned the hard way, early on, that it likes to eat gutta-percha, and subsequent cables received a helical wrapping of copper tape to stop it.
[...] There is also the obvious threat of sabotage by a hostile government, but, surprisingly, this almost never happens. When cypherpunk Doug Barnes was researching his Caribbean project, he spent some time looking into this, because it was exactly the kind of threat he was worried about in the case of a data haven. Somewhat to his own surprise and relief, he concluded that it simply wasn't going to happen. "Cutting a submarine cable," Barnes says, "is like starting a nuclear war. It's easy to do, the results are devastating, and as soon as one country does it, all of the others will retaliate."
As the capacity of optical fibers climbs, so does the economic damage caused when the cable is severed. FLAG makes its money by selling capacity to long-distance carriers, who turn around and resell it to end users at rates that are increasingly determined by what the market will bear. If FLAG gets chopped, no calls get through. The carriers' phone calls get routed to FLAG's competitors (other cables or satellites), and FLAG loses the revenue represented by those calls until the cable is repaired. The amount of revenue it loses is a function of how many calls the cable is physically capable of carrying, how close to capacity the cable is running, and what prices the market will bear for calls on the broken cable segment. In other words, a break between Dubai and Bombay might cost FLAG more in revenue loss than a break between Korea and Japan if calls between Dubai and Bombay cost more.
The rule of thumb for calculating revenue loss works like this: for every penny per minute that the long distance market will bear on a particular route, the loss of revenue, should FLAG be severed on that route, is about $3,000 a minute. So if calls on that route are a dime a minute, the damage is $30,000 a minute, and if calls are a dollar a minute, the damage is almost a third of a million dollars for every minute the cable is down. Upcoming advances in fiber bandwidth may push this figure, for some cables, past the million-dollar-a-minute mark. [Link]
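Stephenson's rule of thumb is easy to sanity-check. A quick sketch, using the per-minute prices from the article as examples:

```python
# Rule of thumb from the article: every $0.01/minute the market bears on a
# route translates to roughly $3,000/minute of lost revenue if the cable is cut.

LOSS_PER_CENT = 3000  # dollars per minute, per cent of per-minute call price

def loss_per_minute(price_per_minute_dollars):
    cents = price_per_minute_dollars * 100
    return cents * LOSS_PER_CENT

for price in (0.10, 1.00):   # a dime a minute, a dollar a minute
    print("$%.2f/min calls -> ~$%s lost per minute of outage"
          % (price, format(loss_per_minute(price), ",.0f")))
# $0.10 -> ~$30,000/min; $1.00 -> ~$300,000/min, matching the article's figures.
```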
tags: at&t, cloud, failure, failure happens, fiber, infrastructure, operations, outages, velocity, velocity09, web infrastructure, web operations, web2.0, webops, worries
Karmic Koalas Love Eucalyptus
by Simon Wardley | comments: 7
Guest blogger Simon Wardley, a geneticist with a love of mathematics and a fascination for economics, is the Software Services Manager for Canonical, helping define future cloud computing strategies for Ubuntu. Simon is a passionate advocate and researcher in the fields of open source, commoditization, innovation, and cybernetics.
Mark Shuttleworth recently announced that the release of Ubuntu 9.10 will be code-named Karmic Koala. Whilst many of the developments around Ubuntu 9.10 are focused on the desktop, a significant effort is being made on the server release to bring Ubuntu into the cloud computing space. The cloud effort begins with 9.04 and the launch of a technology preview of Eucalyptus, an open-source system for creating Amazon EC2-like clouds, on Ubuntu.
I thought I'd discuss some of the reasoning behind Ubuntu's Cloud Computing strategy. Rather than just give a definition of cloud computing, I'll start with a closer look at its underlying causes.
The computing stack is comprised of many layers, from the applications we write, to the platforms we develop in and the infrastructure we build upon. Some activities at various layers of this stack have become so ubiquitous and well defined that they are now suitable for service provision through volume operations. This has led to the growth of the 'as a Service' industries, with providers like Amazon EC2 and Force.com.
Information Technology's shift from a product to a service-based economy brings with it both advantage and disruption. On the one hand, the shift offers numerous benefits including economies of scale (through volume operations), focus on core activities (outsourcing), acceleration in innovation (componentisation), and pay per use (utility charging). On the other hand, many concerns remain, some relating to the transitional nature of this shift (management, security and trust), while others pertain to the general outsourcing of any common activity (second sourcing options, competitive pricing pressures and lock-in). These concerns create significant adoption barriers for the cloud.
At Canonical, the company that sponsors and supports Ubuntu, we intend to provide our users with the ability to build their own clouds whilst promoting standards for the cloud computing space. We want to encourage the formation of competitive marketplaces for cloud services with users having choice, freedom, and portability between providers. In a nutshell, and with all due apologies to Isaac Asimov, our aim is to enable our users with 'Three Rules Happy' cloud computing. That is to say:
- Rule 1: I want to run the service on my own infrastructure.
- Rule 2: I want to easily migrate the service from my infrastructure to a cloud provider and vice versa with a few clicks of a button.
- Rule 3: I want to easily migrate the service from one cloud provider to another with a few clicks of a button.
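As a rough illustration of what rules 2 and 3 can mean in practice: because Eucalyptus exposes an EC2-compatible API, client code written against EC2 can often be pointed at a private cloud just by changing the endpoint. The sketch below uses the boto library; the endpoint, credentials, and image details are placeholders, and specifics vary by Eucalyptus version.

```python
# Sketch: the same EC2-style API call against a private Eucalyptus cloud
# or against Amazon EC2 -- only the endpoint and credentials change.
# Endpoint, port, and keys below are placeholders, not real values.
import boto
from boto.ec2.regioninfo import RegionInfo

def connect(endpoint, access_key, secret_key):
    region = RegionInfo(name="eucalyptus", endpoint=endpoint)
    return boto.connect_ec2(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region=region,
        port=8773,                     # common Eucalyptus API port
        path="/services/Eucalyptus",
        is_secure=False,
    )

# Rule 1: run the service on my own infrastructure...
private_cloud = connect("cloud.internal.example.com", "MY_KEY", "MY_SECRET")

# Rules 2 & 3: ...and move the same tooling elsewhere by swapping endpoints.
for reservation in private_cloud.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state)
```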
tags: cloud computing, open source, operations, ubuntu
Cloud Computing defined by Berkeley RAD Labs
by Artur Bergman | comments: 4
I am pleased to finally have found a paper that manages to bring together the different aspects of cloud computing in a coherent fashion, and suggests the requirements for it to develop further.
Written by the Berkeley RAD Lab (UC Berkeley Reliable Adaptive Distributed Systems Laboratory), the paper succinctly brings together Software as a Service with Utility Computing to come up with a workable definition of Cloud Computing, and is a recommended read.
The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds.
Exploring the difference between the raw service of Amazon EC2 and the higher-level, web-centered Google App Engine, the highlights are:
- Insight into the pay-as-you go aspect with no commits
- Analysis of cost with regards to peak and elasticity in face of unknown demand
- Cost of data transfers versus processing time
- Seamless migration of user to cloud processing
- Limits and problems with I/O on shared hardware
- Availability of Service
- Data Lock-In
- Data Confidentiality and Auditability
- Data Transfer Bottlenecks
- Performance Unpredictability
- Scalable Storage
- Bugs in Large-Scale Distributed Systems
- Scaling Quickly
- Reputation Fate Sharing
- Software Licensing
I find the analysis of transportation cost versus computing cost particularly interesting: when is it more efficient to use EC2 than your own individual processing? I predict that the speed of light and the availability of raw transfer capacity are going to become an even larger obstacle (both inside computers, between them on local LANs, and on WANs).
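A quick back-of-the-envelope comparison shows why transfer capacity matters so much; the data size, WAN throughput, and per-gigabyte price below are illustrative assumptions, not figures from the paper.

```python
# When does it pay to ship data to the cloud? Rough illustrative numbers only.
terabytes = 10
wan_mbps = 20                       # effective WAN throughput to the provider
transfer_price_per_gb = 0.10        # assumed inbound transfer price, $/GB

gigabytes = terabytes * 1000
gigabits_per_day = wan_mbps * 86400 / 1000.0
transfer_days = gigabytes * 8 / gigabits_per_day
transfer_cost = gigabytes * transfer_price_per_gb

print("Moving %d TB at %d Mbps takes ~%.1f days and ~$%.0f in transfer fees"
      % (terabytes, wan_mbps, transfer_days, transfer_cost))
# If the computation itself would finish locally in less time than the copy,
# the WAN link (and ultimately the speed of light) has already decided the question.
```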
The paper reinforces my belief in the cloud, but also my belief that we need open source cloud environments and a larger ecosystem of providers.
Read more on the Above the Clouds blog.
tags: cloud computing, operations, web2.0
Understanding Web Operations Culture - the Graph & Data Obsession
by Jesse Robbins | comments: 7
We’re quite addicted to data pr0n here at Flickr. We’ve got graphs for pretty much everything, and add graphs all of the time.
-John Allspaw, Operations Engineering Manager at Flickr & author of The Art of Capacity Planning
One of the most interesting parts of running a large website is watching the effects of unrelated events on user traffic in aggregate. Web traffic is something that companies typically keep very secret, and often the only time engineers can talk about it is late at night, at a bar, and very much off the record.
There are many good reasons for keeping this kind of information confidential, particularly for publicly traded companies with complicated disclosure requirements. There are also downsides, the biggest being that it is difficult for peers to learn from each other and compare notes.
John Allspaw recently created a WebOps Visualizations group on Flickr for sharing these kinds of graphs with the confidential information removed. Here’s an example of a traffic drop seen both by Flickr & by Last.FM that coincided with President Obama’s inauguration.
[Graph: Flickr traffic drop coinciding with the inauguration, with a similar drop on Last.FM shown alongside]
[Graph: Google saw a similar drop as well]
[Graph: was it because everybody went to Twitter?]
Besides being an interesting story, sharing these kinds of graphs helps people build better monitoring tools and processes. As just one example: How should the WebOps team respond to this dip in traffic? Is it an outage? The inauguration was a very well known event, so it’s easy to explain the drop in traffic… what happens when a similar drop in traffic occurs without an obvious explanation? Should the WebOps team be looking at CNN (or trends in Twitter) along with everything else?
How do you tell when that unexpected 10% drop in traffic is really just people with something more important to do than browse your site?
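One common heuristic — sketched here with invented thresholds and numbers, not anyone's production rules — is to compare the current rate against the same window a week earlier and to check a calendar of known events before paging anyone.

```python
# Sketch of an "is this dip real?" check: compare the current request rate to
# the same five-minute window one week ago. Thresholds and data are made up.

def classify_dip(current_rps, same_time_last_week_rps, known_events=()):
    """Return a rough triage label for a drop in traffic."""
    if same_time_last_week_rps == 0:
        return "no baseline"
    drop = 1.0 - (current_rps / same_time_last_week_rps)
    if drop < 0.10:
        return "normal variation"
    if known_events:
        return "probable external event: %s (watch, don't page)" % known_events[0]
    if drop > 0.30:
        return "page on-call: possible outage"
    return "investigate: unexplained %.0f%% drop" % (drop * 100)

print(classify_dip(5500, 6000))                                  # ~8% dip
print(classify_dip(4200, 6000, known_events=["inauguration"]))   # explained 30% dip
print(classify_dip(3500, 6000))                                  # unexplained ~42% drop
```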
(Note: Updated since original posting to add Google & Twitter graphs and annotations, and to switch the Last.FM graphic with an annotated one after I got permission.)
tags: big data, culture, enterprise 2.0, flickr, infovis, john allspaw, last.fm, metrics, monitoring, operations, velocity, velocity09, web2.0, webops
Data Center Power Efficiency
by Jesse Robbins | comments: 8
James Hamilton is one of the smartest and most accomplished engineers I know. He now leads Microsoft's Data Center Futures Team, and has been pushing the opportunities in data center efficiency and internet scale services both inside & outside Microsoft. His most recent post explores misconceptions about the Cost of Power in Large-Scale Data Centers:
I’m not sure how many times I’ve read or been told that power is the number one cost in a modern mega-data center, but it has been a frequent refrain. And, like many stories that get told and retold, there is an element of truth to it. Power is absolutely the fastest growing operational cost of a high-scale service. Except for server hardware costs, power and costs functionally related to power usually do dominate.
However, it turns out that power alone isn’t anywhere close to the most significant cost. Let’s look at this more deeply. If you amortize power distribution and cooling systems infrastructure over 15 years and amortize server costs over 3 years, you can get a fair comparative picture of how server costs compare to infrastructure (power distribution and cooling). But how do you compare the capital costs of servers, power distribution, and cooling infrastructure with that monthly bill for power?
The approach I took is to convert everything into a monthly charge. [...]
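Hamilton's "convert everything into a monthly charge" approach is straightforward to sketch. The amortization periods (15 years for power and cooling infrastructure, 3 years for servers) come from the excerpt above; the dollar amounts, interest rate, and power bill below are placeholders for illustration, not his numbers.

```python
# Convert capital and power costs to comparable monthly charges.
# Amortization periods follow the excerpt (15y infrastructure, 3y servers);
# the dollar amounts and power bill are illustrative placeholders.

def monthly_amortized(capital_cost, years, annual_interest=0.05):
    """Flat approximation: principal plus simple interest, spread monthly."""
    months = years * 12
    return capital_cost * (1 + annual_interest * years) / months

servers_capex = 40000000          # placeholder
power_cooling_capex = 60000000    # placeholder
monthly_power_bill = 1000000      # placeholder utility bill

servers_monthly = monthly_amortized(servers_capex, years=3)
infra_monthly = monthly_amortized(power_cooling_capex, years=15)

total = servers_monthly + infra_monthly + monthly_power_bill
for name, cost in [("servers", servers_monthly),
                   ("power+cooling infrastructure", infra_monthly),
                   ("monthly power bill", monthly_power_bill)]:
    print("%-30s $%12s/mo  (%4.1f%% of total)"
          % (name, format(cost, ",.0f"), 100 * cost / total))
# With these placeholder inputs, servers are the largest single line item,
# while power and power-related infrastructure dominate the remainder --
# the shape the excerpt describes.
```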
tags: cloud computing, energy, james hamilton, microsoft, operations, performance, platforms, utilities, utility computing, velocity, velocity09, web2.0
My Web Doesn't Like Your Enterprise, at Least While it's More Fun
by Jim Stogdill | comments: 20
The other day Jesse posted a call for participation for the next Velocity Web Operations Conference. My background is in the enterprise space, so, despite Velocity's web focus, I wondered if there might not be interest in a bit of enterprise participation. After all, enterprise data centers deal with the same "Fast, Scalable, Efficient, and Available" imperatives. I figured there might be some room for the two communities to learn from each other. So, I posted to the internal Radar author's list to see what everyone else thought.
Mostly silence. Until Artur replied with this quote from one of his friends employed at a large enterprise: "What took us a weekend to do, has taken 18 months here." That concise statement seems to sum up the view of the enterprise, and I'm not surprised. For nearly six years I've been swimming in the spirit-sapping molasses that is the Department of Defense IT Enterprise so I'm quite familiar with the sentiment. I often express it myself.
We've had some of this conversation before at Radar. In his post on Enterprise Rules, Nat used contrasting frames of reference to describe the web as your loving dear old API-provisioning Dad, while the enterprise is the belt-wielding standing-in-the-front-door-when-you-come-home-after-curfew step father.
While I agree that the enterprise is about control and the web is about emergence (I've made the same argument here at Radar), I don't think this negative characterization of the enterprise is all that useful. It seems to imply that the enterprise's orientation toward control springs fully formed from the minds of an army of petty controlling middle managers. I don't think that's the case.
I suspect it's more likely the result of large scale system dynamics, where the culture of control follows from other constraints. If multiverse advocates are right and there are infinite parallel universes, I bet most of them have IT enterprises just like ours; at least in those shards that have similar corporate IT boundary conditions. Once you have GAAP, Sarbox, domain-specific regulation like HIPAA, quarterly expectations from "The Street," decades of MIS legacy, and the talent acquisition realities that mature companies in mature industries face, the strange attractors in the system will pull most of those shards to roughly the same place. In other words, the IT enterprise is about control because large businesses in mature industries are about control. On the other hand, the web is about emergence because in this time, place, and with this technology discontinuity, emergence is the low energy state.
Also, as Artur acknowledged in a follow up email to the list, no matter what business you're in, it's always more fun to be delivering the product than to be tucked away in a cost center. On the web, bits are the product. In the enterprise bits are squirreled away in a supporting cost center that always needs to be ten percent smaller next year.
tags: operations, web2.0
Velocity 2009: Themes, ideas, and call for participation...
by Jesse Robbins | comments: 0
Last year's Velocity conference was an incredible success. We expected around 400 people and we ended up maxing out the facility with over 600. This year we're moving the conference to a bigger space and extending it to 3 days to accommodate workshops and longer sessions.
Velocity 2009 will be on June 22-24th, 2009 at the Fairmont Hotel in San Jose, CA.
This year's conference will be especially important. I've said many times that Web Performance and Operations is critical to the success of every company that depends on the web. In the current economic situation, it's becoming a matter of survival. The competitive advantage comes from the ability to do two things:
Our Velocity 2009 mantra is "Fast, Scalable, Efficient, Available", a slight change from last year. (We've replaced "Resilient" with "Efficient" to make the focus clear.)
I'm excited to announce that joining Steve Souders & me on this year's program committee are John Allspaw, Artur Bergman, Scott Ruthfield, Eric Schurman, and Mandi Walls. We've already started working on the program, and have just opened the Call for Participation.
tags: artur bergman, conferences, Eric Schurman, John Allspaw, mandi walls, operations, performance, scott ruthfield, steve souders, velocity, velocity09, web2.0, webops
DisasterTech: "Decisions for Heroes"
by Jesse Robbins | comments: 2
One of the most interesting DisasterTech projects I've been following is "Decisions for Heroes" led by developer and Irish Coast Guard volunteer Robin Blandford.
Decisions is like Basecamp for volunteer Search & Rescue teams. The focus is on providing "just enough" process to complement the real-world workflow of a rescue team, without unnecessary complexity. One of Robin's design goals is that:
User requirements are nil. Nobody likes reading manuals - if we have to write one, we've gotten too complicated.
This is the winning approach for building systems that "serve those that serve others", and is echoed by InSTEDD's design philosophy and the Sahana disaster management system.
Teams begin by entering their responses to incidents and training exercises. They then tag them with things like the weather conditions, the tools and skills required, and who from the team was deployed.
As a team's incident database grows, this information can be used to show heatmaps and provide powerful insight into the locations, weather conditions, and times of year that various incidents occur. Over time this kind of data could be analyzed in aggregate across multiple teams and regions, creating an incredibly powerful resource for Emergency Managers. This is very similar to what Wesabe does for consumers with financial transaction data today (disclosure: OATV investment).
Rescue team members enter training dates and levels. The system tracks certification expiration dates and prompts team members & leaders to plan classes and remain current. This is a huge issue for volunteers who have to balance professional-level training requirements against the demands of a regular career.
As more incidents are entered into the system, it compares the skills required for each of the rescues with the team training exercises. This allows teams to identify areas to focus, train, and develop new skills.
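A rough sketch of that comparison — the data model, tags, and records below are invented for illustration and are not Decisions' actual schema:

```python
# Sketch: compare skills used on real callouts against skills covered in
# training, to surface gaps. Records and tags are invented examples.
from collections import Counter

incidents = [
    {"type": "cliff rescue", "skills": {"rope work", "casualty care"}},
    {"type": "night search", "skills": {"navigation", "radio ops"}},
    {"type": "swift water", "skills": {"swift water", "rope work"}},
]
trainings = [
    {"date": "2009-03-14", "skills": {"rope work", "navigation"}},
    {"date": "2009-04-02", "skills": {"casualty care"}},
]

needed = Counter(skill for i in incidents for skill in i["skills"])
trained = set().union(*(t["skills"] for t in trainings))

gaps = {skill: count for skill, count in needed.items() if skill not in trained}
print("Skills used on callouts but not covered in recent training:", gaps)
# e.g. {'radio ops': 1, 'swift water': 1} -> candidates for the next exercise
```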
tags: disaster tech, disastertech, emergency management, firefighting, humanitarian aid, ict, innovation, operations, rescue, social networking, web 2.0, webops
Sprint blocking Cogent network traffic...
by Jesse Robbins | comments: 3
It appears that Sprint has stopped routing traffic from Cogent (called "depeering") as a result of some sort of legal dispute. Sprint customers cannot reach Cogent customers, and vice versa. The effect is similar to what would happen if Sprint were to block voice phone calls to AT&T customers.
Here's a graph that shows the outage, courtesy of Keynote:
Rich Miller at DataCenterKnowledge has a great summary of the issues behind the incident, which has happened with Cogent before. Rich says:
At the heart of it, peering disputes are really loud business negotiations, and angry customers can be used as leverage by either side. This one will end as they always do, with one side agreeing to pay up or manage their traffic differently.
I think this is particularly Radar-worthy because it provides an example of the complex issues around Net Neutrality. In this case customers are harmed and most (especially Sprint wireless customers) will have no immediate recourse.
tags: cloud computing, cogent, disruption, innovation, internet policy, network neutrality, operations, sprint, utilities, utility computing, webops
Amazon's new EC2 SLA
by Jesse Robbins | comments: 7
Amazon announced a new SLA for EC2, similar to the one for S3. This is a notable step for Amazon and cloud computing as a whole, as it establishes a new bar for utility computing services.
Amazon is committing to 99.95% availability for the EC2 service on a yearly basis, which corresponds to approximately four hours and twenty-three minutes of downtime per year. It's important to remember that an SLA is just a contract that provides a commitment to a certain level of performance and some form of compensation when a provider fails to meet it.
Here's the summary of the EC2 SLA (emphasis added):
Service Commitment: AWS will use commercially reasonable efforts to make Amazon EC2 available with an Annual Uptime Percentage (defined below) of at least 99.95% during the Service Year. In the event Amazon EC2 does not meet the Annual Uptime Percentage commitment, you will be eligible to receive a Service Credit as described below. [...] To receive a Service Credit, you must submit a request by sending an e-mail message to aws-sla-request @ amazon.com. To be eligible, the credit request must [...] include your server request logs that document the errors and corroborate your claimed outage (any confidential or sensitive information in these logs should be removed or replaced with asterisks).
- “Annual Uptime Percentage” is calculated by subtracting from 100% the percentage of 5 minute periods during the Service Year in which Amazon EC2 was in the state of “Region Unavailable.” If you have been using Amazon EC2 for less than 365 days, your Service Year is still the preceding 365 days but any days prior to your use of the service will be deemed to have had 100% Region Availability [...]
- “Unavailable” means that all of your running instances have no external connectivity during a five minute period and you are unable to launch replacement instances. [...]
This new SLA does not appear to address the reliability of server instances individually or in aggregate. For example, if half of a customer's EC2 instances lose their connections or die every 6 minutes, EC2 would still be considered "available" even if it is essentially unusable.
If the entire EC2 service is down for a cumulative four hours and twenty-three minutes, customers must furnish proof of the outage to Amazon to be eligible for the 10% credit. This seems like an onerous process for very little compensation, and isn't in line with Amazon's famous "Relentless Customer Obsession". Amazon takes monitoring very seriously and should take the lead by tracking, reporting, and proactively compensating customers when it lets them down.
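The arithmetic behind the SLA is easy to check against its own definitions; here is a small sketch (the sample outage count is hypothetical):

```python
# Downtime budget implied by a 99.95% annual uptime commitment, measured in
# the SLA's five-minute "Region Unavailable" periods.

minutes_per_year = 365 * 24 * 60
periods_per_year = minutes_per_year / 5            # 105,120 five-minute buckets

allowed_unavailable_fraction = 1 - 0.9995
allowed_periods = periods_per_year * allowed_unavailable_fraction
allowed_minutes = allowed_periods * 5

print("Budget: %.0f five-minute periods, about %.0f minutes of downtime per year"
      % (allowed_periods, allowed_minutes))    # ~53 periods, ~263 minutes (4h23m)

# Hypothetical year with 60 bad five-minute periods (5 hours region-wide down):
uptime_pct = 100 - 100.0 * 60 / periods_per_year
print("60 bad periods -> Annual Uptime Percentage = %.4f%%" % uptime_pct)
# 99.9429% < 99.95%, so the 10% service credit would apply -- if you can prove it.
```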
tags: amazon, availability, cloud computing, ec2, operations, s3, sla, webops
Recent Posts
- Kaminsky DNS Patch Visualization | by Jesse Robbins on August 7, 2008
- The new internet traffic spikes | by Jesse Robbins on June 28, 2008
- Video of Rich Wolski's EUCALYPTUS talk at Velocity | by Jesse Robbins on June 24, 2008
- Hyperic CloudStatus service dashboard launches at Velocity! | by Jesse Robbins on June 23, 2008
- Service Monitoring Dashboards are mandatory for production services! | by Jesse Robbins on June 17, 2008
- Two new open source projects at Velocity | by Jesse Robbins on June 17, 2008
- Understanding Web Operations Culture (Part 1) | by Jesse Robbins on June 14, 2008
- CloudCamp gathering after Velocity | by Jesse Robbins on June 13, 2008
- Bill Coleman to keynote Velocity | by Jesse Robbins on June 11, 2008
- TLS Report grades and reports on site security | by Jesse Robbins on June 9, 2008
- DisasterTech from Where2.0 | by Jesse Robbins on May 30, 2008
- Ignite! @ Velocity: Submit your talks & "war stories"... | by Jesse Robbins on May 29, 2008