Google's PowerMeter: It's Cool, but Don't Bogart My Meter Data
by Jim Stogdill | comments: 2
Last week I read this piece in the New York Times about Google's PowerMeter, their entry into the smart meter game. The story was picked up in quite a few places, but neither the NYT piece nor related articles from other outlets expanded much on Google's underlying press release. Google's FAQ isn't very satisfying either; it has no depth, so I didn't really know what to make of it. When I finished reading it I was left with an inchoate, unsettled feeling, and then I forgot about it. But on Friday evening I had a random conversation about it with a colleague who works in the meter data management (MDM) space. By the time we were through talking about what Google might be doing, I had arrived at a position of love / hate. I'll explain the love first.
In terms of the attention this brings to energy consumption at the household level, I really love what Google is doing with this initiative. As they put it:
"But smart meters need to be coupled with a strategy to provide customers with easy access to near real-time data on their energy usage. We're working on a prototype product that would give people this information in an iGoogle gadget."
I agree completely. It's not exactly the same thing, but I've been amazed by how much my behavior behind the wheel changed once I started leaving average mpg permanently visible on my car's dashboard display. In short order I went from speed racer wannabe to one of those guys that gets harassed by co-workers for driving too slow. "Hey, can you hypermile on the way back from lunch? I'm starving."
While I am not sure that a gadget on the web will have the same right-there-in-front-of-my-eyes impact that my car's LCD display has, I'm convinced that Google has hit on something important. After all, today most of us have no idea how many kilowatt-hours we use, what we use them for, or how much we're paying per kilowatt-hour. We use power in our homes the way I used to drive my car.
Unfortunately, Google's FAQ doesn't really answer any questions about how the service works. But from statements like "Google is counting on others to build devices to feed data into PowerMeter technology" we can deduce that Google is proposing to correlate the total power reported by your smart meter with the data collected from individual loads inside the home. This is really cool, because not only does it put the information in front of you (in an easily accessible gadget), it proposes to tell you what it is in your house that is using that power, and when.
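Google hasn't published any spec for how that correlation would work, but the basic arithmetic is simple enough to sketch. Here's a toy version in Python; the interval format, device names, and numbers are all invented for illustration:

```python
# Hypothetical example: attribute whole-home interval readings (kWh)
# to known device loads, leaving an "other" bucket per interval.
# The data formats are invented; real meter and device feeds will differ.

meter_kwh = [1.20, 0.95, 2.40, 1.10]  # one reading per 15-minute interval

device_kwh = {
    "fridge": [0.10, 0.10, 0.10, 0.10],
    "dryer":  [0.00, 0.00, 1.80, 0.00],
    "hvac":   [0.80, 0.60, 0.30, 0.70],
}

def attribute_usage(meter, devices):
    """For each interval, subtract known device loads from the meter
    total; whatever is left over is unexplained baseline load."""
    breakdown = []
    for i, total in enumerate(meter):
        known = {name: series[i] for name, series in devices.items()}
        other = round(total - sum(known.values()), 2)
        breakdown.append({**known, "other": other})
    return breakdown

for interval, row in enumerate(attribute_usage(meter_kwh, device_kwh)):
    print(interval, row)
```

In interval 2 the dryer accounts for most of the 2.40 kWh spike, which is exactly the "what in my house is using that power, and when" question the gadget would answer.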
Google can do this because many national and state governments have begun to mandate smart meter programs. Most of us will probably have one on the side of our house pretty soon (especially if the stimulus bill speeds things up). Smart meters improve on their predecessors by automating meter reading, reporting consumption in intervals (typically 15 minutes), and sending "last gasp" failure notifications in the event of power outages.
But, just like their dumb ancestors, they will be owned by the utility. This means that the data generated will ultimately be under control of the utility and hosted in their systems. The meter will talk to a utility data collector and from there its data will enter the utility's MDM system. The MDM will do a bunch of stuff with the data. However, from the point of view of you, the consumer, it will primarily send it to the billing system which will now be able to account for time of day pricing. Also, it will send those last gasp signals to the outage management system so that outage reporting will be automatic. This will make analysis and response faster and more accurate. Google appears to be leveraging their position and market power to make deals with the utilities to access that data on our behalf.
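To see why interval data matters to the billing system, here's a toy time-of-use calculation. The rate schedule, peak window, and readings are all assumptions for illustration; real tariffs vary by utility:

```python
# Hypothetical time-of-use billing sketch. Rates and peak hours
# are invented; real utility tariffs will differ.

RATES = {"peak": 0.22, "off_peak": 0.08}  # $ per kWh (assumed)
PEAK_HOURS = range(14, 20)                 # 2pm-8pm counts as peak

def bill(readings):
    """readings: list of (hour_of_day, kwh) interval readings."""
    total = 0.0
    for hour, kwh in readings:
        rate = RATES["peak"] if hour in PEAK_HOURS else RATES["off_peak"]
        total += kwh * rate
    return round(total, 2)

day = [(2, 0.5), (9, 1.0), (15, 2.0), (18, 1.5)]
print(bill(day))
```

Shifting the same kilowatt-hours out of the peak window cuts the bill, which is precisely the behavior the utility wants to encourage.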
The biggest reason for smart meter initiatives is demand management. The utilities have to carry expensive excess capacity so that they can meet peak loads. If they can use interval metering coupled with better pricing and feedback systems, they may be able to change our usage patterns and smooth that load which will reduce the necessary peak capacity overhang. Also, as alternative energy sources with less predictable availability like wind power come on line the utilities will need more "load shaping" options. Ultimately they might be able to reach directly out to your smart appliances and turn them off remotely if they need to.
The laws that are mandating smart metering are focused on this demand-side management. Practically speaking, most utilities will close the consumer feedback loop by offering a simple portal on the utility's web site that will let you monitor your usage in the context of your bill. However, this isn't the part of the system the utilities are excited about. The hardware and the meters are the sexy part. The contracts to build the consumer portals will probably go to low-cost bidders who will build them to the bare-minimum requirements. In some cases they may include provisions for customers to download historical data into a spreadsheet. A few enterprising customers will probably take advantage of this feature, but it is the hard way to do the kinds of correlations Google has in mind.
What should be apparent by now is that the government is mandating a good idea, but they are mandating it from a utility-centric rather than a customer-centric point of view. There is naturally some overlap between utility and customer interests, but they are not identical. The utility is concerned about managing capital costs. They look at the interval data and the customer portal as a way to influence your time-of-use behaviors. They really don't care how much power you use; they just don't want your demand to be lumpy. We, on the other hand, just want our bills to be low.
So, Google's initiative offers to take your data from the utility, combine it with data coming from devices in your home, and visualize it much more you-centrically. Their offering will do a better job than the utility's portal of illuminating structural efficiency problems in the home, as well as usage-pattern problems once utilities start implementing variable pricing. In short, while the utility is attempting to influence your "when I use it" decision making, Google is offering to help you make better "what I plug in" decisions along with the stuff the utility cares about.
So, what's not to like?
Google needs two distinct sources of data to make this initiative work. They need access to your data via the utility that owns your smart meter, plus they need data from the equipment manufacturers that are going to make your appliances smart or provide your home automation gadgets. It doesn't bother me at all that they get this data, as long as the utility makes it available to anyone else who might be able to innovate with it too, including me. You never know, I might want to use it for a homemade gadget that delivers an electric shock from my thermostat any time my last eight averaged readings are above some arbitrary threshold, you know, just to make me think twice before turning it up.
The little bit of info that Google provides on this initiative is at their .org domain, but there is virtually no information about how to participate in data standards making, API specification, device development, or that kind of thing. If you want to participate, you pick whether you are a utility, device manufacturer, or government, fill out a form, and wait for Google to get back to you. Imagine, the government fills out a form to participate in Google's initiative. Google has out-governmented the government.
As I described already, governments are insisting on demand-side management, but there don't appear to be any requirements to provide generic APIs for meter readings or meter events. It's enterprise thinking rather than web platform thinking, and we run the risk of your data being treated like utility "content." "In other news today, HBO struck an exclusive deal with XYZ Electric for all of their meter content, meanwhile Cinemax surprised industry watchers by locking up ABC Electric. As was reported last night, all of the remaining utilities signed with Google last week."
I'm guessing that Google is probably following the same pattern that they are using in the transit space and making (exclusive?) deals with the utilities to consume your data. You'll have to log into the utility portal to approve their access (or check a box on your bill). But Google, or other big players that can afford to buy in, will probably be the only choice(s) you have. There is no evidence on Google.org that they are trying to create an ecosystem or generalized approach that would let you, the owner of the data, share it with other value-added service providers. If the utilities implement this under government mandate it will suck. If they install smart meters with stimulus package money and still don't provide ecosystem APIs it will be worse than suck.
Any thoughts on how this plays out on the smart appliance / home automation side? Are there healthy open standards developing or is there danger of large scale exclusivity on that side of the equation too?
Google will be more innovative with this data than the electric utilities, I have no doubt about that. But I can easily imagine other companies doing interesting, innovative things with my meter data as well, especially as Google achieves utility scale themselves. If my electric utility is going to create a mechanism to share my data with companies like Google, I want them to make a generalized set of APIs that will let me share it with anyone.
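To make the point concrete, a consumer-controlled meter-data API might behave something like the sketch below. Every endpoint, token, and field name here is invented; nothing like this has been published by Google or the utilities. The essential idea is just that the meter owner, not the utility, decides who gets read access:

```python
# Hypothetical consumer-controlled meter-data API. All names and
# data shapes are invented for illustration.

import json

def get_readings(meter_id, token, store):
    """Return interval readings only if the meter's owner has granted
    this access token read access to this meter."""
    grant = store["grants"].get(token)
    if not grant or meter_id not in grant["meters"]:
        return {"error": "not authorized by meter owner"}
    return {
        "meter_id": meter_id,
        "interval_minutes": 15,
        "readings_kwh": store["readings"][meter_id],
    }

store = {
    "grants": {"tok-abc": {"meters": ["meter-1"]}},  # owner-approved token
    "readings": {"meter-1": [1.2, 0.9, 2.4]},
}

print(json.dumps(get_readings("meter-1", "tok-abc", store)))  # authorized
print(json.dumps(get_readings("meter-1", "tok-xyz", store)))  # denied
```

Under a scheme like this, Google's gadget, a startup's service, and my home-brew thermostat-shocker would all be just tokens the consumer can grant or revoke.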
A quick note to policy makers in states that haven't yet finalized their programs. When you think about what to mandate, consider a more consumer-centric model (if it's easier, think of it as a voter-centric model). You should be shooting for a highly innovative and generative space where contributions and innovations can come from large and small firms alike, and where no one is structurally locked out from participation. Don't lock us into a techno-oligarchy where two or three giant firms own our data and the possibility of innovation. If you insist on widely implemented, consumer-controlled APIs and a less enterprise-centric model, you will not only encourage broader innovation at the consumer end, but you can use it to enhance competition on the generation side too.
Well, Google isn't really saying what they are doing, so maybe I got it wrong. Maybe they are about to get all "spectrum should be free" and roll out all kinds of draft API specifications for comment. If you think I got it wrong, don't hesitate to let me know in the comments.
tags: energy, google, utilities
Radar Interview with Clay Shirky
by Joshua-Michéle Ross | comments: 2
Clay Shirky is one of the most incisive thinkers on technology and its effects on business and society. I had the pleasure to sit down with him after his keynote at the FASTForward '09 conference last week in Las Vegas.
In this interview Clay talks about:
- The effects of low cost coordination and group action.
- Where to find the next layer of value when many professions are being disrupted by the Internet.
- The necessary role of low cost experimentation in finding new business models.
A big thanks to the FASTForward Blog team for hosting me there.
tags: clay shirky, future at work, innovation, journalism, publishing, social media
Four short links: 16 Feb 2009
by Nat Torkington | comments: 2
A lot of Python and databases today, with some hardware and Twitter pranking/security worries to taste:
- Free Telephony Project, Open Telephony Hardware -- professionally-designed mass-manufactured hardware for telephony projects. E.g., IP04 runs Asterisk and has four phone jacks and removable Flash storage. Software, schematics, and PCB files released under GPL v2 or later.
- Don't Click Prank Explained -- inside the Javascript prank going around Twitter. Transparent overlays would appear to be dangerous.
- Tokyo Cabinet: A Modern Implementation of DBM -- ok, so there's definitely something going on with these alternative databases. Here's the 1979 BTree library reinvented for the modern age, then extended with PyTyrant, a database server for Tokyo Cabinet that offers HTTP REST, memcached, and a simple binary protocol. Cabinet is staggeringly fast, as this article makes clear. And if that wasn't enough wow for one day, Tokyo Dystopia is the full-text search engine. The Tyrant tutorial shows you how to get the server up and running. And what would technology be without a Slideshare presentation? (via Stinky)
- Whoosh -- a pure Python fulltext search library.
tags: big data, hardware, javascript, opensource, python, search, security, voip
New Zealand Goes Black
by Nat Torkington | comments: 2
The previous government in New Zealand enacted, over the objections of technologists, an amendment to the Copyright Act that requires ISPs to have a policy to disconnect users after repeated accusations of infringement. While it's possible to have a policy that requires proof rather than accusation, APRA (the RIAA of New Zealand) strongly opposes any such attempts at reasonable interpretation of Section 92. The minor parties in the coalition government oppose the "three accusations and you're offline" section and want it repealed. This is the last week before the law is due to come into effect, and the Creative Freedom Foundation, a group formed to represent artists and citizens who oppose the section, has a week of protest planned to convince the ruling National Party to repeal S92.
The first day's action was blacking out Twitter and Facebook avatars. I did it, as did Channel 3 Business News, a Creative Director at Saatchi and Saatchi, oh and Stephen Fry. Kudos to Juha Saarinen who first put out the call. This is building up to a full Internet blackout day on February 23rd. I'm delighted to say that the idea was formed at Kiwi Foo Camp, and the folks who were at Kiwi Foo have been running wild with it--building banners, releasing templates, spreading the word.
tags: democracy, twitter, web
Change Happens
by Tim O'Reilly | comments: 27
Last night, I watched a 1951 British movie, The Man in the White Suit. The plot hinged on everyone's realization that a new fabric invented by a young chemist (played by Alec Guinness) would put the British textile industry out of business. The fabric never wears out and resists dirt. Both labor and mill owners unite to suppress the discovery.
We know now, of course, that the great British woolen, cotton and silk mills did go the way of the buggy whip, prey not to new synthetic fabrics but to low cost overseas competition. At the time, it was unthinkable that the British mills would become all but extinct. When my great grandparents worked at Lister's Mill in Bradford, it employed more than 10,000 people. My mother, who grew up "back of the mill," recalls how the streets were so packed with people at closing time that there was no room for vehicles. By the time I remember visiting with my grandparents on Silk Street in the late 1960s, the mill was still active, but a shadow of its former self. Thirty years later, this monument of a once great industry was turned into shops and luxury apartments.
I think too of how my grandmother, with the prejudices of her time, was alarmed at how the "pakis" were taking over Bradford. How pleasing it was, then, to hear from my friend Imran Ali recently about the evolution of Bradford, a rebirth in which his family from Pakistan made the city their home:
"My Grandfather came to the UK in the 50s, settling in Bradford before bringing his brothers and sons here to work and study... he was in the Indian/British Army during WW2, before finding work in various textile mills across the Bradford area."
Meanwhile, Lister Park, a lovely park that I remember visiting with my grandmother, is now called The Mughal Gardens. Imran adds: "There's a Pakistani cafe near there that serves kebabs made with a sauce that's extracted from the Earth's molten core - so spicy, you can briefly see through time itself ;)"
"Coming to Bradford moved us up from that background to our family's first university graduates and now professional careers."
So it's a place I'm so fond of I can't bring myself to leave. As much as I love coming to the Bay Area, Mt. San Bruno, Sonoma County, Burlingame and my other favorite spots aren't the same as driving over Ilkley Moor on a snowy winter's day and seeing the Dales unfurl before me.
You might be interested to know a few interesting facts about the city since you may have last visited...
- The University of Bradford was one of the first two to teach computer science in the UK (Manchester being the other) - though it's disputed who was first!
- The university's school of computing gave the early UK web industry a great talent pool, including some of the founding team for Freeserve, the UK's largest ISP during the first boom, and also its biggest exit.
- Based on that we started a non-profit collective of new media companies called bmedi@ in 2001...
- The National Media Museum is located in Bradford and they just added their photo collections to Flickr.
- Grant Morrison wrote a graphic novel, Vimanarama, set in Bradford.
- The city's currently in the midst of a depression that goes back to the 2001 riots, but civic leaders have tabled an ambitious $3.2bn regeneration plan for the city's built environment.
There's actually a lot of interesting tech stuff going on regionally - a bunch of us have kickstarted grassrootsy-stuff like BarCamps, geek dinners and are starting to help a local university model itself on the Media Labs and ITPs of the world.
I won't say that this entry has that much spice, but I hope you can take a moment with me to see through time to allow wonder and delight to replace fear of change.
We're in the midst of enormous upheaval right now, between the Scylla and Charybdis of economic meltdown and climate change, with the promise of the Singularity visible in the distance like Apollo or Athena might have appeared to Odysseus' frightened sailors.
This is not new. History is full of optimism and despair, discovery and upheaval, with distant hope inspiring us to the great efforts that alone can save us. And despite all our attempts to prognosticate, it has a way of surprising us. The makers of The Man in the White Suit were fascinated and frightened by the possibilities of industrial chemistry: it had all the magic that today we associate with great advances in computing or synthetic biology. And inventions of new materials did in fact change the world, though not in ways that the film's creators lampooned.
Coming to terms with change is a basic life skill. If you don't have it, it's time to put it on your self-improvement to-do list. I'm reminded of something I wrote nearly 30 years ago in my first book, just out of college, a study of the work of science-fiction writer Frank Herbert:
One of [Herbert's] central ideas is that human consciousness exists on--and by virtue of--a dangerous edge of crisis, and that the most essential human strength is the ability to dance on that edge. The more man confronts the dangers of the unknown, the more conscious he becomes. All of Herbert's books portray and test the human ability to consciously adapt....It is a general principle of ecology that an ecosystem is stable not because it is secure and protected, but because it contains such diversity that some of its many types of organisms are bound to survive despite drastic changes in the environment or other adverse conditions. Herbert adds, however, that the effort of civilization to create and maintain security for its individual members, "necessarily creates the conditions of crisis because it fails to deal with change."
In short, get with the program! The future isn't going to be like the past. What's more, it isn't going to be like any future we imagine. How wonderful that is, if only we are prepared to accept it.
Stimuluswatch.org; The Falling Cost and Accelerated Speed of Group Action
by Joshua-Michéle Ross | comments: 8
Stimuluswatch.org is a great example of how easy it is today for people to, as Clay Shirky says, "organize without organizations." Stimuluswatch.org began after Jerry Brito attended a mayors' conference and posted this request:
"Let's help President-Elect Obama do what he is promising. Let's help him "prioritize" the projects so that we "get the most bang for the buck" and identify those that are old school "pork coming out of Congress." We can do this through good clean fun crowdsourcing. Who can help me take the database on the Conference of Mayors site and turn each project into a wiki page or other mechanism where local citizens can comment on whether the project is actually needed or whether it's a boondoggle? How can we create an app that will let citizens separate the wheat from the pork and then sort for Congress and the new administration the projects in descending order of relevancy?"
Several developers read the post and got to work. Stimuluswatch went live on February 2nd with all the features Brito had requested. Last Friday alone there were 20,000 unique hits to the site. Total time to complete: seven weeks, including holidays. Total cost: about $40 in monthly hosting fees.
I caught up with two of the developers behind the effort, Peter Snyder (via phone) and Kevin Dwyer (via email). The story they told me exemplifies how the web enables some remarkably fast group action. Here is how Kevin tells it - and pay attention to how many references there are to some form of open source, web service, or plug-and-play functionality that the team used to get this done.
"After reading Jerry's original blog post about the US Conference of Mayors report, I quickly wrote some Python code to grab (screen scrape) all of the projects from their web site and put them into a SQLite database. The lxml module was awesome for this. Brian Mount took it and remastered the database into a MySQL database. Peter Snyder then popped up and offered to build the web site using a PHP-based system called CodeIgniter. It lives up to its name (and Pete is awesome) because he had a fairly complex site up in no time. Now that we had a great base for the site, Jerry wrote copy and worked up some CSS/HTML which gives the site a great look and feel. Jerry also helped us integrate Disqus and Tumblr, which definitely helped reduce the number of wheels we had to reinvent. I experimented with several wiki backends and settled on MediaWiki. Using a Perl module, I created wiki stubs for each of the projects to give users a bit of a framework for recording any facts they researched about each project, as well as listing points in favor and against. The whole thing now runs on an Amazon EC2 image."
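The scrape-and-store step Kevin describes can be sketched in a few lines. The real project used lxml against the Conference of Mayors site; this toy version uses only the Python standard library and a made-up HTML snippet, so the markup and class names are assumptions, not the real site's:

```python
# Toy version of "screen scrape the projects, put them in SQLite".
# The HTML snippet and its class names are invented for illustration.

import sqlite3
from html.parser import HTMLParser

class ProjectParser(HTMLParser):
    """Collect the text of every <td class="project"> cell."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.projects = []
    def handle_starttag(self, tag, attrs):
        if tag == "td" and ("class", "project") in attrs:
            self.in_cell = True
    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.projects.append(data.strip())

html = """<table>
<tr><td class="project">Doorbell upgrades</td></tr>
<tr><td class="project">Bridge repair</td></tr>
</table>"""

parser = ProjectParser()
parser.feed(html)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE projects (name TEXT)")
db.executemany("INSERT INTO projects VALUES (?)",
               [(p,) for p in parser.projects])
print(db.execute("SELECT COUNT(*) FROM projects").fetchone()[0])
```

Once the projects are in a database, everything downstream (the MySQL remastering, the CodeIgniter site, the wiki stubs) is just plumbing around this table.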
Peter also pointed out that in the short time since launch, users themselves have helped cleanse errors in the data pulled from the mayors' database and have already begun filling out details on these local projects, including showering great disdain on the "doorbells" project.
None of these people knew each other previously. They were brought together by a blog post into a common effort. They used open source tools for rapid development. They plugged in off-the-shelf social technologies (Disqus, Tumblr, and MediaWiki) to create a forum to discuss these local projects. They achieved this in seven weeks. In fact, according to Peter, "the real effort here was more like two weeks."
It will be interesting to see how stimuluswatch.org performs as a place to allow transparency and citizen involvement in civic projects. As we the public wait for www.recovery.gov to launch, perhaps we should just be asking them to give us the data. We can do the rest.
tags: collaboration, crowdsourcing, government
Four short links: 13 Feb 2009
by Nat Torkington | comments: 1
One work-related and three fun geeky links to set you up for the weekend:
- Continuous Deployment and Continuous Learning -- I've been reading about the processes and structures that different organizations use to develop software, and this was interesting. "Our eventual conclusion was that there was no reason to have code that had passed the integration step but was not yet deployed."
- Pixel Art with Book Jackets -- the perfect thing to do with a shelf of O'Reilly books ....
- WhatTheFont -- take a photo of some text with your iPhone and this app will identify the font.
- La Princesse in Liverpool -- an amazing piece of civic theatre. I am in awe of Liverpool for greenlighting it, and of La Machine, the French creators of La Princesse.
tags: book related, design, hardware, management, programming
ETech Preview: Inside Factory China, An Interview with Andrew Huang
by James Turner | comments: 18
You may also download this file. Running time: 00:21:57
China has become the production workhorse of the consumer electronics industry. Almost anything you pick up at a Best Buy first breathed life across the Pacific Ocean. But what is it like to shepherd a product through the design and production process? Andrew "bunnie" Huang has done just that with the Chumby, a new internet appliance. He'll be speaking about the experience at the O'Reilly Emerging Technology Conference. In an exclusive interview with Radar, he talks about the logistical and moral issues involved with manufacturing in China, as well as his take on the consumer's right to hack the hardware they purchase.
JAMES TURNER: Andrew "bunnie" Huang is the Vice President of Hardware Engineering and Founder of Chumby Industries. He's pretty much the consummate hardware geek who has used his doctorate from MIT in electrical engineering to do everything from designing opto-electronics to hacking the Xbox. The Chumby, an internet appliance that delivers a cornucopia of information, is his latest endeavor. And he'll be talking about the process of getting it manufactured in China at O'Reilly's Emerging Technology Conference in March. Thank you for taking the time to talk to us.
ANDREW HUANG: No problem.
JT: So I have to start by asking, were you one of those kids who took everything apart in your house?
AH: Oh, yeah. Yeah. My parents had a problem with that. There was lots of stuff taken apart. Not everything got back together again. Most things did. But there's definitely a few things that got hidden underneath the couch for a few days hoping my parents wouldn't notice, while I tried to find the last few screws and whatnot. They eventually figured out that the best way to try and contain me was to just give me other things to play with. So I got a computer and they got one of those 201 kits from Radio Shack for me to play with, so I would stop taking apart all of their alarm clocks and stuff.
JT: You know, you can't get those kits at Radio Shack anymore. It's very disappointing.
AH: I know. That is really sad. I mean those were really good kits. I mean I really learned a lot from the one that I had, and a couple other ones that were donated to me through friends or my friends' parents also were really engaging.
JT: So you used to spend a lot of your time deconstructing the security infrastructure that manufacturers put in place. What in particular drives you in that direction?
AH: The deconstruction of security infrastructure?
JT: Yeah.
AH: I mean a lot of it is just -- it's more like if you just put a Rubik's Cube in front of me, I'll play with it. It's kind of the same thing. A lot of it comes from the fact that I've actually been taking apart consumer electronic devices for decades now. And I always look at the construction and how it's built to learn something from it, because that's basically what I read to figure out the latest techniques for construction and costing and part selection.
And when I start seeing someone mentioning security features that have some relevance to the hardware level, I start poking at it some more just because it's really interesting and you can learn something from it.
tags: china, emerging telephony, manufacturing
Cloud Computing defined by Berkeley RAD Labs
by Artur Bergman | comments: 4
I am pleased to finally have found a paper that manages to bring together the different aspects of cloud computing in a coherent fashion, and suggests the requirements for it to develop further.
Written by the Berkeley RAD Lab (UC Berkeley Reliable Adaptive Distributed Systems Laboratory), the paper succinctly brings together Software as a Service with Utility Computing to come up with a workable definition of Cloud Computing, and is a recommended read.
The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds.
Exploring the spectrum from the raw service of Amazon EC2 to the high-level, web-centered Google App Engine, the highlights are:
- Insight into the pay-as-you go aspect with no commits
- Analysis of cost with regards to peak and elasticity in face of unknown demand
- Cost of data transfers versus processing time
- Seamless migration of user to cloud processing
- Limits and problems with I/O on shared hardware
The paper also enumerates ten obstacles (and corresponding opportunities) for the growth of cloud computing:
- Availability of Service
- Data Lock-In
- Data Confidentiality and Auditability
- Data Transfer Bottlenecks
- Performance Unpredictability
- Scalable Storage
- Bugs in Large-Scale Distributed Systems
- Scaling Quickly
- Reputation Fate Sharing
- Software Licensing
I find the analysis of transportation cost versus computing cost particularly interesting: when is it more efficient to use EC2 than your own individual processing? I predict the speed of light and the availability of raw transfer capacity are going to become an even larger obstacle (both inside computers, between them on local LANs, and on WANs).
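The transfer-versus-compute trade-off reduces to a back-of-the-envelope calculation like the one below. All prices and throughput numbers here are assumptions for illustration, not actual AWS rates; the point is that for big datasets the WAN transfer can dominate both the cost and the wall-clock time:

```python
# Back-of-the-envelope cloud cost/time estimate. Every constant here
# is an assumption for illustration, not a real price sheet.

TRANSFER_PRICE = 0.10   # $ per GB shipped into the cloud (assumed)
CLOUD_CPU_PRICE = 0.10  # $ per instance-hour (assumed)
WAN_GBPS = 0.02         # effective WAN throughput, GB per second (assumed)

def cloud_cost_and_time(data_gb, compute_hours):
    """Total dollars and total hours (transfer + compute) for a job
    that ships data_gb into the cloud and then computes on it."""
    transfer_secs = data_gb / WAN_GBPS
    cost = data_gb * TRANSFER_PRICE + compute_hours * CLOUD_CPU_PRICE
    total_hours = transfer_secs / 3600 + compute_hours
    return round(cost, 2), round(total_hours, 2)

# Shipping 100 GB for 10 hours of computation: the transfer alone
# adds well over an hour before any work starts.
print(cloud_cost_and_time(100, 10))
```

Plug in your own link speed and local hardware costs and the break-even point moves around quickly, which is exactly why the paper treats data transfer bottlenecks as a first-class obstacle.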
The paper reinforces my belief in the cloud, but also my conviction that we need open source cloud environments and a larger ecosystem of providers.
Read more on the Above the Clouds blog.
tags: cloud computing, operations, web2.0
Four short links: 12 Feb 2009
by Nat Torkington | comments: 2
Two links on visualization and two on life:
- Myth of the Concentration Oasis -- Vaughan of Mind Hacks takes on the trendy notion that the Internet is turning us into brainless dullards who are unable to focus on any subject for longer than a 15s TV commercial. "The trouble is, it's plainly rubbish, and you just have to spend time with some low tech communities to see this is the case."
- UUorld -- gorgeous map-based visual analytics environment for Mac OS X. Lovely to see something step beyond the "throw it onto a Google Map", which has become commonplace.
- Why I Can't Afford Cheap -- great story about an octogenarian talking about her prized possessions. (via Titine's delicious stream)
- Visualization Trends for the Noosphere (Jon Udell) -- thoughtful commentary on what's needed to make data visualization as simple as email. Viz is an incredibly powerful tool for translating data into understanding, but it's currently too damn hard to "mix the paint" (Udell's great term for the data hacking to convert, fix, etc. the data before they can be used). (via Titine's delicious stream)
tags: brain, data, map, visualization
Parallel Computing on Late-Night TV
by Ben Lorica | comments: 3
Jen-Hsun Huang, CEO of Nvidia, appeared on Charlie Rose last week and touched on a wide range of subjects, including his early years in a boarding school in Kentucky, the founding of Nvidia, CPUs, and GPUs. Amazingly, Charlie spent the last 10+ minutes of the show on the CUDA architecture (starting around minute 29:54 of the broadcast).
GPUs excel at mathematical computations, but until a few years ago there wasn't an easy way to access the compute engine behind these manycore processors. With CUDA, a C programmer uses a few simple extensions to access abstractions (thread groups, shared memories, synchronization) that can be used for fine-grained parallel programming. Nvidia's goal is to make every language now available on the CPU also available on the GPU. The next wave of languages they are targeting includes FORTRAN, Java, and C++.
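To make those "few simple extensions" concrete, here is a minimal sketch of the classic vector-add kernel in CUDA C. It is an illustration I've added, not code from Nvidia or the interview, and it uses modern unified-memory calls (`cudaMallocManaged`) for brevity; a 2009-era version would use explicit `cudaMalloc`/`cudaMemcpy` instead.

```cuda
#include <cstdio>

// The __global__ qualifier and the built-in blockIdx/blockDim/threadIdx
// variables are the C extensions CUDA adds: each thread computes its own
// global index and handles one array element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the sketch short; the runtime migrates the
    // arrays between host and device as needed.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch a grid of thread blocks: <<<numBlocks, threadsPerBlock>>>.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU before reading results

    printf("c[0] = %.1f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The launch configuration `<<<(n + 255) / 256, 256>>>` rounds up so every element gets a thread, which is why the kernel needs the `if (i < n)` bounds check.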
In the interview, Jen-Hsun acknowledged that feedback from a few users encouraged them to start working on CUDA. To their credit, they acted quickly, and if you visit the CUDA web site they highlight interesting applications, mostly in the fields of scientific computation, energy exploration, and mathematical modeling. Other heavy users are hedge funds and other computational finance outfits.
Coincidentally, we talked to Nvidia late last year as part of our upcoming report on big data. For big data problems, they cited users who accelerated database computations such as sorts and relational joins, and bioinformatics researchers who used CUDA for their pattern matching algorithms. Their users also report that the combination of CPU/GPU in servers leads to smaller clusters and a substantial reduction in energy costs.
For now, the CUDA architecture is the province of C programmers and my fellow number crunchers. But Nvidia is allocating resources to make their tools even easier to use, and once that happens, surprising applications will emerge. Given that Apple and Intel have signaled that they too think GPUs are interesting, I'm fairly confident that simpler programming tools will emerge soon.
tags: big data
| comments: 3
ETech Preview: Why LCD is the Cool New Technology All Over Again
by James Turner | comments: 5
You may also download this file. Running time: 00:43:53
In an early test of the OLPC XO in Nigeria, the student users dropped every laptop several times a day. Despite the laptops' rugged construction, they occasionally needed fixing, and a group of six-year-old girls opened up a "hospital" to reseat cables and do other simple repairs. Mary Lou Jepsen, the One Laptop Per Child project's CTO, had this response: "I put extra screws underneath the battery cover so that if they lost one, they could have an extra one. And kids trade them almost like marbles, when they want to try to get something fixed in their laptop."
Mary Lou led the development of the OLPC's breakthrough low-power transflective display, which combines a traditional backlit color display with a black and white display that can be used outdoors. She left OLPC to form Pixel Qi and bring the revolutionary engineering used in the XO to the broader consumer market. In this interview, she discusses lessons learned from OLPC and shares her vision of "cool screens that can ship in high volume, really quickly, at price points that are equivalent to what you pay for standard liquid crystal displays."
At ETech, Mary Lou's keynote presentation delves further into Low-Cost, Low-Power Computing.
JAMES TURNER: I'm speaking today with Mary Lou Jepsen, Founder and CEO of Pixel Qi. Dr. Jepsen previously served as chief technology officer for the One Laptop per Child program where she was an instrumental player in the development of the OLPC's revolutionary hybrid screen. She also previously served as CTO of Intel's display division. Dr. Jepsen was also named by Time Magazine recently as one of the 100 most influential people in the world for 2008. She'll be speaking at the O'Reilly Emerging Technologies Conference in March, and we're pleased she's taken the time to talk to us. Good evening.
MARY LOU JEPSEN: Hi. Nice to speak with you tonight.
JT: So in some ways, you're kind of uniquely qualified to comment on the current travails of the OLPC since you've been in highly influential positions both in the OLPC effort itself and at Intel, who some believe tried to sabotage the OLPC. Do you think that the OLPC would've had wider acceptance if the Intel Classmate wasn't competing against it?
MLJ: It is interesting. I think the OLPC, and I haven't seen the latest numbers, sold a lot more than the Classmate. I think head-to-head there's no comparison which is the better machine, and I'm not saying that just because I'm the architect. But what's really happened has been extraordinary. I think OLPC's impact in sort of spearheading the movement to Netbooks is fairly undisputed, although OLPC is not the best selling Netbook; 17 million Netbooks shipped in 2008 and that's through companies like Acer, Asus, MSI, HP, Dell. And that impact on the world is starting to be felt.
JT: What were the factors that led you to leave the OLPC program and start Pixel Qi?
MLJ: You know, I started OLPC with Nicholas in his office in the beginning, in January of 2005. And at that point, right after that Bill Gates, Steve Jobs, Michael Dell, all said it was impossible. So it became my job to sort of take that, create an architecture, invent a few things, convince the manufacturers to work with me to develop it, get a team together, and take it into high-volume mass production. And then it got to the point where my days were spent getting safety certifications for various countries.
And I just realized, it's time for me to continue doing this; this is the best job I've ever done, but to keep going, why not make these components that are inside of the XO and let everybody buy them rather than just exclusively making and designing them for the OLPC laptop. If you make more of something, you can sell it for less. So rather than just serving the bottom of the pyramid, why not take the fantastic technology that we developed at OLPC and serve the whole pyramid? Everybody wants their batteries to last a lot longer. Everybody wants screens that are e-paper-like and high resolution and sunlight readable. So why not make these for the whole world?
tags: displays, emerging tech, etech, green tech, lcd, olpc, pixel qi
| comments: 5
Recent Posts
- Four short links: 11.5 Feb 2009 | by Nat Torkington on February 11, 2009
- Ask... no, wait... TELL Tim | by Brett McLaughlin on February 10, 2009
- Come to ETech; Experiment with Physical Computing and RFIDs | by Brady Forrest on February 10, 2009
- O'Reilly Labs: RDF For All of Our Books, Plus Bookworm Ebook Reader | by Andrew Savikas on February 10, 2009
- Four short links: 11 Feb 2009 | by Nat Torkington on February 10, 2009
- The Kindle and the End of the End of History | by Jim Stogdill on February 9, 2009
- The Kindle Hardware Tax | by Marc Hedlund on February 9, 2009
- ETech Preview: Living the Technomadic Life | by James Turner on February 9, 2009
- Four short links: 10 Feb 2009 | by Nat Torkington on February 9, 2009
- For-Profit, Non-Profit, and Scary Humor | by Michael Jon Jensen on February 7, 2009
- Security and Data Risk in the Age of Social Networks | by Joshua-Michéle Ross on February 6, 2009
- Four short links: 6 Feb 2009 | by Nat Torkington on February 6, 2009
O'Reilly Home | Privacy Policy ©2005-2009, O'Reilly Media, Inc. | (707) 827-7000 / (800) 998-9938
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.