Emerging Tech
Emerging Technology is our catch-all phrase for the innovations and trends we're watching (and the name of our popular conference). We seek out and amplify the "faint signals" from the alpha geeks who are creating the future, and report on them here. We're currently intrigued by data visualization and search, hardware hacking, alternative energy, personalized medicine, immersive online environments, nanotechnology, body modification, and other mindbending new technology that will spark new industries.
Tweenbots: Cute Beats Smart
by Brady Forrest | comments: 15
If you wanted to build a robot that could go from one end of Washington Square Park to the other without your help, how would you do it? How expensive in time and money would it be? Would you build or buy a navigation system? Construct a sensing system to detect obstacles? Or would you decide to take a different tack and use cute as your primary tool?
ITP student Kacie Kinzer created a 10-inch smiling robot called a Tweenbot that can only go straight. For each journey, Kacie would give the robot a destination and clearly label it. Given the obstacles in its way and its lack of navigation or steering systems, the expectation was that the robot would not make it. However, the robot's avoidance of the uncanny valley and its clearly written goal helped it out. Humans would redirect the Tweenbot so that it successfully reached its destination. Below is a map of one Tweenbot journey:
Mission 1: Get from the Northwest to the Southwest Corner of Washington Square Park / time: 42 minutes / number of people who intervened: 29
As Kacie describes on the site:
Over the course of the following months, throughout numerous missions, the Tweenbots were successful in rolling from their start point to their far-away destination assisted only by strangers. Every time the robot got caught under a park bench, ground futilely against a curb, or became trapped in a pothole, some passerby would always rescue it and send it toward its goal. Never once was a Tweenbot lost or damaged. Often, people would ignore the instructions to aim the Tweenbot in the “right” direction, if that direction meant sending the robot into a perilous situation. One man turned the robot back in the direction from which it had just come, saying out loud to the Tweenbot, "You can’t go that way, it’s toward the road.”
So why do people help out the Tweenbot? Personally, I would not be able to resist assisting the anthropomorphized little robot. The smile signals its innocent intentions and the Tweenbot's label makes it clear how to help. It's something for designers and technologists to remember: sometimes cute and clever can get the job done much cheaper and in less time than smart and expensive.
There are more Tweenbots coming, so if you happen to see any friendly robots around your town, lend a hand. Here are some of the prototypes that are currently in development.
via Hacker News
tags: emerging tech, etech, geo
Startup Marketing Isn't Rocket Science, So Don't Hire the Ph.D. Too Soon
by Darren Barefoot | comments: 7
Guest blogger Darren Barefoot is a writer, marketer, technologist, and co-founder of Capulet Communications, a web marketing firm that specializes in high-tech and sustainability clients. He is the co-author of a forthcoming book about social media marketing for No Starch Press entitled "Friends with Benefits." Darren's personal blog is DarrenBarefoot.com.
A couple of weeks ago, my partner and I met with a potential client, the newly-hired Vice President of Marketing at a technology startup. She was a heavyweight, having spent the last decade in charge of marketing at a major Canadian corporation. She'd been craving a change of pace, and so accepted an offer to oversee the marketing at this eight-person startup. She'd only been with the company for a couple of weeks, so I asked about her immediate plans. She replied, "I'm going to start by putting some policies and procedures in place--we could really use more structure."
Her response highlighted one of the most common mistakes we encounter when working with early-stage startups: the founders hire too much marketing talent too early.
Why does this happen? I'm not sure, but I wonder if it's because many founders have a technical background. As such, they're unfamiliar with, and sometimes a little intimidated by, the challenges of promoting their startup. To assuage their concerns, they bring in a senior marketer with plenty of credentials.
In theory, this looks like a rational decision. After all, the more experienced the executive, the better. Practically speaking, things aren't quite that simple.
These types are usually great strategists. That's what got them high on the ladder at their previous corporate or agency job. They're accustomed to devising a strategic plan and overseeing a team that implements that plan. That's fine, because a startup obviously needs strategy.
By their nature, though, they're thinkers, not doers. They're great at wrestling with thorny messaging problems or rejigging the corporate branding, but how often have they run a CPC advertising campaign, or how good are they at grokking Google Analytics?
The work they're accustomed to delegating or outsourcing is now work that they must do themselves. They have some 'doing' skills, but they're often atrophied or antiquated. Your average startup doesn't have a graphic designer or a copywriter or an SEO expert on staff. The VP of marketing at a bootstrapped startup must be all of these people, and we find that the average corporate escapee can't fill those roles.
What's the answer? A startup still needs strategic thinking. It just doesn't need it 40 hours a week with medical benefits and a corner office. Instead, founders should contract a senior marketing consultant with expertise in their field for a fixed number of hours a month. Have them weigh in on the hard decisions, draft launch plans and so forth.
Then, saving on the cost of an executive's salary, contract out the tactical marketing or employ a couple of junior staffers to do the legwork. Marketing isn't rocket science, after all, so hold off on hiring that marketing Ph.D. until you're certain that you need them.
tags: emerging tech, web 2.0
ETech Preview: On The Front Lines of the Next Pandemic
by James Turner | comments: 0
You may also download this file. Running time: 00:23:20
Subscribe to this podcast series via iTunes. Or, visit the O'Reilly Media area at iTunes to find other podcasts from O'Reilly.
With all of the stress and anxiety that humanity deals with on a daily basis--confronting the dangers of global warming, the perils of a financial system in meltdown, and the ever-present threat of terrorism--it may be easy to forget that there's yet another danger lurking out there, ready to destroy mankind: the threat of a global pandemic. But although you and I may have driven thoughts of Ebola and the like from our minds, Dr. Nathan Wolfe worries about them every day. Dr. Wolfe founded and directs the Global Viral Forecasting Initiative, which monitors the transfer of new diseases from animals to humans.
He received his Bachelor's degree at Stanford in 1993 and his Doctorate in Immunology and Infectious Diseases from Harvard in 1998. Dr. Wolfe was awarded the National Institutes of Health International Research Scientist Development Award in 1999 and a prestigious NIH Director's Pioneer Award in 2005. He'll be speaking at the O'Reilly Emerging Technology Conference in March. His session is entitled "Viral Forecasting." Thank you for joining us.
Dr. Nathan Wolfe: My pleasure.
James Turner: So why don't we start by talking about the Global Viral Forecasting Initiative. How is it different from the work that the CDC and the WHO and similar organizations do monitoring disease spread?
NW: Well, what we do is we actually focus on the interface between humans and animal populations. When we looked back and investigated the ways in which disease got started, the ways that pandemics really originated, what we found is that really the vast majority of these things are animal diseases. So rather than monitoring for illness, at which point it could potentially be too late, we've taken it one step backward. We actually focus on people who have high levels of contact with animals. And we set up large groups of these individuals and monitor the diseases that they have, as well as the diseases in the animal population. So the idea is to be able to catch these things a little bit earlier.
JT: The last disease that really made a big splash with the media was Ebola, earlier in this decade. But we really haven't heard much recently. Have things calmed down as far as new and novel diseases? Or are we just hearing less about outbreaks these days?
NW: Well, I mean I think we've had really substantive important pandemics. If you take a look at SARS, for example. SARS really only infected about 1,200 individuals, but its impact was tremendous. It was billions of dollars of economic impact all throughout the world. Even in a place like Singapore, where you had a small number of cases, you had an incredibly substantive financial impact. And then, of course, right now we have H5N1 which is -- they call it the bird flu. Actually, most influenzas are bird influenzas, so it's a little bit of a misnomer. But H5N1 is a virus which is spreading around the world in birds. And if it does make a transition into humans, which some bird flu will over the next 20 to 30 years, it could be incredibly devastating. So I think that these are kind of constant and present dangers. They're things that are increasing over time simply because of the way that we're connected as a human population.
tags: emerging tech, interviews, pandemics, viruses
ETech Preview: Creating Biological Legos
by James Turner | comments: 21
You may also download this file. Running time: 00:15:25
Subscribe to this podcast series via iTunes. Or, visit the O'Reilly Media area at iTunes to find other podcasts from O'Reilly.
If you've gotten tired of hacking firewalls or cloud computing, maybe it's time to try your hand with DNA. That's what Reshma Shetty is doing with her Doctorate in Biological Engineering from MIT. Apart from her crowning achievement of getting bacteria to smell like mint and bananas, she's also active in the developing field of synthetic biology and has recently helped found a company called Ginkgo BioWorks, which is developing enabling technologies to allow for rapid prototyping of biological systems. She will be giving a talk entitled "Real Hackers Program DNA" at O'Reilly's Emerging Technology Conference, March 9-12, in San Jose, California. And she's joining us here today. Thank you for taking the time.
RESHMA SHETTY: No problem. Happy to be here.
JAMES TURNER: So first of all, how do you make bacteria smell nice, and why? I get an image of a commercial, "Mary may have necrotizing fasciitis, but at least her hospital room smells minty fresh."
RS: Well, the original inspiration for the project was the fact that for anybody who works in a lab, who works with E. coli, when you grow cultures of the stuff, it just smells really bad. It smells really stinky, basically. And so our thought was, "Hey, why don't we reengineer the smell of E. coli? It'll make the lab smell minty fresh, and it's also a fun project that gets people, who maybe aren't normally excited about biology, interested in it because it's a very tangible thing. I can smell the change I made to this bacteria."
JT: So what was the actual process involved?
RS: So the process was, you basically take a gene, we took a gene from the petunia plant, which normally provides an odor to the flower, and you place that gene into the E. coli cell. And by supplying the cell with an appropriate precursor, you make this minty smell as a result. So it's fairly straightforward.
JT: Your degree, biological engineering, is a new one to me. How is it different from biochemistry or microbiology or genomics or any of the other traditional biotech degrees?
RS: Well, biology and biochemistry, and so on, are concerned with studying the natural world. So I'm going to go out and figure out how the natural world works. Biological engineering, instead, is really all about saying, "Hey, we have this natural world around us. Biology is, in some sense, a new technology through which we can build new engineered biological systems." Right? So the idea is, what's the difference between physics and electrical engineering? Electrical engineers want to go build. So in biological engineering, we're interested in going and building stuff, too. But using biology, rather than physics, as the underlying science of it.
tags: biology, emerging tech, interviews, itunes, synthetic biology
ETech Preview: Why LCD is the Cool New Technology All Over Again
by James Turner | comments: 7
You may also download this file. Running time: 00:43:53
Subscribe to this podcast series via iTunes. Or, visit the O'Reilly Media area at iTunes to find other podcasts from O'Reilly.
In an early test of the OLPC XO in Nigeria, the student users dropped every laptop several times a day. Despite the laptops' rugged construction, they occasionally needed fixing, and a group of six-year-old girls opened up a "hospital" to reseat cables and do other simple repairs. Mary Lou Jepsen, the One Laptop Per Child project's CTO, had this response: "I put extra screws underneath the battery cover so that if they lost one, they could have an extra one. And kids trade them almost like marbles, when they want to try to get something fixed in their laptop."
Mary Lou led the development of the OLPC's breakthrough low-power transflective display, combining a traditional backlit color display with a black-and-white display that could be used outdoors. She left OLPC to form Pixel Qi and bring the revolutionary engineering used in the XO to the broader consumer market. In this interview, she discusses lessons learned from OLPC and shares her vision of "cool screens that can ship in high volume, really quickly, at price points that are equivalent to what you pay for standard liquid crystal displays."
At ETech, Mary Lou's keynote presentation delves further into Low-Cost, Low-Power Computing.
JAMES TURNER: I'm speaking today with Mary Lou Jepsen, Founder and CEO of Pixel Qi. Dr. Jepsen previously served as chief technology officer for the One Laptop per Child program where she was an instrumental player in the development of the OLPC's revolutionary hybrid screen. She also previously served as CTO of Intel's display division. Dr. Jepsen was also named by Time Magazine recently as one of the 100 most influential people in the world for 2008. She'll be speaking at the O'Reilly Emerging Technologies Conference in March, and we're pleased she's taken the time to talk to us. Good evening.
MARY LOU JEPSEN: Hi. Nice to speak with you tonight.
JT: So in some ways, you're kind of uniquely qualified to comment on the current travails of the OLPC since you've been in highly influential positions both in the OLPC effort itself and at Intel, who some believe tried to sabotage the OLPC. Do you think that the OLPC would've had wider acceptance if the Intel Classmate wasn't competing against it?
MLJ: It is interesting. I think the OLPC, and I haven't seen the latest numbers, sold a lot more than the Classmate. I think head-to-head there's no comparison which is the better machine, and I'm not saying that just because I'm the architect. But what's really happened has been extraordinary. I think OLPC's impact in sort of spearheading the movement to Netbooks is fairly undisputed, although OLPC is not the best selling Netbook; 17 million Netbooks shipped in 2008 and that's through companies like Acer, Asus, MSI, HP, Dell. And that impact on the world is starting to be felt.
JT: What were the factors that led you to leave the OLPC program and start Pixel Qi?
MLJ: You know, I started OLPC with Nicholas in his office in the beginning, in January of 2005. And at that point, right after that Bill Gates, Steve Jobs, Michael Dell, all said it was impossible. So it became my job to sort of take that, create an architecture, invent a few things, convince the manufacturers to work with me to develop it, get a team together, and take it into high-volume mass production. And then it got to the point where my days were spent getting safety certifications for various countries.
And I just realized, it's time for me to continue doing this; this is the best job I've ever done, but to keep going, why not make these components that are inside of the XO and let everybody buy them rather than just exclusively making and designing them for the OLPC laptop. If you make more of something, you can sell it for less. So rather than just serving the bottom of the pyramid, why not take the fantastic technology that we developed at OLPC and serve the whole pyramid? Everybody wants their batteries to last a lot longer. Everybody wants screens that are e-paper-like and high resolution and sunlight readable. So why not make these for the whole world?
tags: displays, emerging tech, etech, green tech, interviews, lcd, olpc, pixel qi
The Kindle and the End of the End of History
by Jim Stogdill | comments: 23
This morning I was absentmindedly checking out the New York Times' bits blog coverage of the Kindle 2 launch and saw this:
“Our vision is every book, ever printed, in any language, all available in less than 60 seconds.”
It wasn't the main story for sure. It was buried in the piece like an afterthought, but it was the big news to me. It certainly falls into the category of big hairy audacious goal, and I think it's a lot more interesting than the device Bezos was there to launch (which still can't flatten a colorful maple leaf). I mean, he didn't say "every book in our inventory" or "every book in the catalogues of the major publishers that we work with." Or even, "every book that has already been digitized." He said "every book ever printed."
When I'm working I tend to write random notes to myself on 3x5 cards. Sometimes they get transcribed into Evernote, but all too often they just end up in piles. I read that quote and immediately started digging into the closest pile looking for a card I had just scribbled about an hour earlier.
I had been doing some research this morning and was reading a book published in 1915. It's long out of print, and may have only had one printing, but I know from contemporary news clippings found tucked in its pages that the author had been well known and somewhat controversial back in his day. Yet, Google had barely a hint that he ever existed. I fared even worse looking for other people referenced in the text. Frustrated, I grabbed a 3x5 card and scribbled:
"Google and the end of history... History is no longer a continuum. The pre-digital past doesn't exist, at least not unless I walk away from this computer, get all old school, and find an actual library."
My house is filled with books; it's ridiculous, really. They are piled up everywhere. I buy a lot of old used books because I like to see how people lived and how they thought in other eras, and I guess I figure someday I'll find time to read them all. For me, it's often less about the facts they contain and more about peeking into alternative world views. Which is how I originally came upon the book I mentioned a moment ago.
The problem is that old books reference people and other stuff that a contemporary reader would have known immediately, but that are a mystery to me today - a mystery that needs solving if I want to understand what the author is trying to say, and to get that sense of how they saw the world. If you want to see what I mean, try reading Winston Churchill's Second World War series.
Churchill speaks conversationally about people, events, and publications that a London resident in 1950 would have been familiar with. However, without a ready reference to all that minutiae you'll have no idea what he's talking about. Unfortunately, a lot of the stuff he references is really obscure today, and today's search engines are hit-and-miss with it - they only know what a modern Wikipedia editor or some other recent writer thinks is relevant today. Google is brilliant for things that have been invented or written about in the digital age, or that made enough of a splash in their day to still get digital now, but the rest of it just doesn't exist. It's B.G. (before Google) or P.D. (pre-digital) or something like that.
To cut to the chase, if you read old books you get a sense for how thin the searchable veneer of the web is on our world. The web's view of our world is temporally compressed, biased toward the recent, and even when it does look back through time to events memorable enough to have been digitally remembered, it sees them through our digital-age lens. They are being digitally remembered with our world view overlaid on top.
I posted some of these thoughts to the Radar backchannel list and Nat responded with his usual insight. He pointed out that cultural artifacts have always been divided into popular culture (on the tips of our tongues), cached culture (readily available in an encyclopedia or at the local library) and archived culture (gotta put on your researcher hat and dig, but you can find it in a research library somewhere). The implication is that it's no worse now because of the web.
I like that trichotomy, and of course Nat's right. It's not like the web is burying the archive any deeper. It's right there in the research library where it has always been. Besides, history never really operates as a continuum anyway. It's always been lumpy for a bunch of reasons. But as habit and convenience make us more and more reliant on the web, the off-the-web archive doesn't just seem hard to find, it becomes effectively invisible. In the A.G. era, the deep archive is looking more and more like those charts used by early explorers, with whole blank regions labeled "there be dragons".
So, back to Bezos's big goal... I'd love it to come true, because a comprehensive archive that is accessible in 60 seconds is an archive that is still part of history.
tags: big hairy audacious goals, emerging tech, publishing
A Climate of Polarization
by Gavin Starks | comments: 10
Guest blogger Gavin Starks is founder and CEO of AMEE, a neutral aggregation platform designed to measure and track all the energy data in the world. Gavin has a background in astrophysics and over 15 years of Internet development experience.
We're all aware of the emotive language used to polarize the climate change debate.
There are, however, deeper patterns which are repeated across science as it interfaces with politics and media. These patterns have always bothered me, but they've never been as "important" as now.
We are entering a new era of seismic change in policy, business, society, technology, finance and our environment, on a scale and at a speed substantially greater than previous revolutions. The sheer complexity of these interweaving systems is staggering.
Much of this change is being driven by "climate science", and in the communications maelstrom there is a real risk that we further alienate "science" across the board.
We need more scientists with good media training (and presenting capability) to change the way that all sciences are represented and perceived. We need more journalists with deeper science training - and the time and space to actually communicate across all media. We need to present uncertainty clearly, confidently and in a way that doesn't impede our decision-making.
On the climate issue, there are some impossible levers to contend with:
- Introducing any doubt into the climate debate stops any action that might combat our human impact.
- Introducing "certainty" undermines our scientific method and its philosophy.
When represented in political, public and media spaces, these two levers undermine every scientific debate and lead to bad decisions.
Pascal's Wager is often invoked, and this is entirely reasonable in this case.
It is reasonable because of what's at stake: the risk of mass extinction events. If there is a probability that anthropogenic climate change will cause the predicted massive interventions in our ecosystem, then we have to act.
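Put in back-of-the-envelope terms (a rough sketch, with symbols of my own choosing): if p is the probability of catastrophic climate outcomes, L the cost of those outcomes, and C the cost of acting now, the wager says act whenever

\[
p \cdot L > C ,
\]

and because L here is measured on the scale of mass extinction, even a modest and highly uncertain p keeps that inequality satisfied.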
The nature of our actions must be commensurate with both the cause and the effect. The causes are many: population, production, consumption - as are the effects: war, poverty, scarcity, etc.
Our interventions will use all our means to address both cause and effect, and those actions will run deep.
Equally, we must allow science to do what it's designed to do: measure, model, analyse and predict.
From a scientific perspective we must allow more room for theories to evolve, otherwise we'll only prove what we're looking for.
However, if we ignore the potential need to act, the consequences are not something anyone will want to see.
It's not something we can fix later (for me, "geo-engineering" is not a fix, it's a pre-infected band-aid).
Given the massive complexity of the issues, and that - really - anthropogenic climate change is only one of many "peak consumption" issues that we face, there is no way we can accurately communicate all the arguments that would lead to mass understanding.
However, the complexity issues are no different from those we face in politics. They are not solvable, but they are addressable.
We can communicate the potential outcomes, and the decisions that individuals need to make in order to impact the causes.
Ultimately it's your personal choice.
My choice is based on my personal exposure to the science, business, data, policy, media, and broader issues around sustainability. That choice is to do my best to catalyse change as fast as I possibly can.
We all need to actively engage in improving communication, so that everyone - potentially everyone on Earth - can make informed choices about the future of the planet we inhabit.
--
Recommended reading:
https://www.realclimate.org/ is a great resource.
Today, the UK Government launched a campaign "to create a more science literate society, highlighting the science and technology based industries of the future."
tags: climate change, communication, emerging tech, media
What Will Change Everything?
by Brady Forrest | comments: 16
Regular Radar contributor Linda Stone sent this in to be posted today.
What game-changing scientific ideas and developments do you expect to live to see? The Internet, television, antibiotics, automobiles, electricity, nuclear power, space travel, and cloning - these inventions were born out of dreams, persistence, and imagination. What game-changing ideas can we expect to see in OUR lifetimes?
tags: edge, emerging tech, question
The State of Transit Routing
by Jim Stogdill | comments: 12
My brother called me a week ago and during the course of our conversation mentioned that he had made the trek to the Miami Auto Show. He was complaining that he really wanted to take Tri-Rail (the commuter rail that runs along Florida's southeast coast) but it was just too hard to figure out the rest of the trip once he got off the train. "One web site for train schedules, another for buses, and another for a city map to tie it all together. It was just too much trouble to figure out, so I drove. I just want to go online and get directions just like I do for driving, but that tells me which train, which bus, etc."
Coincidentally, later in the day I downloaded the iPhone 2.2 upgrade with the new walking and public transit directions. So far, at least where I live, it's useless. The little bus icon just sits there grayed out, taunting me. I guess because SEPTA (our local transit authority for bus and regional rail) isn't giving data to Google?
My brother hadn't heard of Google Transit, but it turns out to have some coverage in Miami. Its coverage at this point seems to be transit-authority-centric, and it doesn't seem to have great support for mixed-mode trips or routes that cross transit system boundaries. I am curious, though: is it being used? Let me know in the comments if you are using it to good effect.
Anyway, my brother's call on the same day as the iPhone update piqued my interest in the current state of the art for mixed-mode transit routing. After some mostly fruitless web searches, I reached out to Andrew Turner. I knew he'd know what was going on. This is what he had to say:
Routing is definitely one of the emergent areas of technology in the next generation of applications. So far, we've done a great job getting digital maps on the web, mobile locative devices, and comfortable users. One problem for a while has been the lack of data. You can have a great algorithm or concept, but without data it's useless. Gathering this data has been prohibitively expensive - companies like NAVTEQ drive many of the roads they map for verification and additional data. Therefore, if you wanted to buy road data from one of the vendors, you had to have a large sum of money in the bank and know how you were going to monetize it. This stifled experimentation and the creation of niche applications.
Now that the data is becoming widely, and often freely, available, innovation is happening at an increased pace.
For one example, consider typical road navigation. The global OpenStreetMap project has always had topology (road connectivity), but the community is now adding attribute data to ways, such as number of lanes, stop lights, turn restrictions, speeds, and directionality. Anyone can download this data to use with a variety of tools such as pgRouting. As a result, people are rethinking standard routing mechanisms that assume travel from A to B via the fastest, or shortest, route. What if a user wants to take the "greenest" route as determined by lowest total fuel consumption, or the most scenic route based on community feedback?
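To make that concrete, here is a minimal sketch in Python of a pluggable cost function. It is not pgRouting or real OSM tooling; the toy graph, attribute names, and numbers are all invented for illustration. The point is that the search algorithm stays the same while the definition of "cost" changes:

```python
import heapq

# Toy road graph. Each edge carries attributes of the kind you might pull
# from OpenStreetMap tags; the attribute names and numbers are invented.
GRAPH = {
    "A": [("B", {"km": 2.0, "kph": 50, "liters_per_km": 0.09}),
          ("C", {"km": 6.0, "kph": 100, "liters_per_km": 0.08})],
    "B": [("D", {"km": 4.0, "kph": 50, "liters_per_km": 0.09})],
    "C": [("D", {"km": 1.0, "kph": 30, "liters_per_km": 0.11})],
    "D": [],
}

def fastest(attrs):
    return attrs["km"] / attrs["kph"]            # hours spent on the edge

def greenest(attrs):
    return attrs["km"] * attrs["liters_per_km"]  # liters of fuel burned

def route(graph, start, goal, cost_fn):
    """Plain Dijkstra; only the notion of edge 'cost' changes per use case."""
    best = {start: 0.0}
    queue = [(0.0, start, [start])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        for nxt, attrs in graph[node]:
            new_cost = cost + cost_fn(attrs)
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt, path + [nxt]))
    return None

print(route(GRAPH, "A", "D", fastest))   # prefers the fast highway: A -> C -> D
print(route(GRAPH, "A", "D", greenest))  # prefers the low-fuel route: A -> B -> D
```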
An area that has strongly utilized this idea has been disaster response. Agencies and organizations deploy to areas with little on-the-ground data, or with data that is now obsolete due to the disaster they're responding to. Destroyed bridges, flooded roads, and new or temporary infrastructure are just some of the things that typical navigation systems miss. However, giving responders the capability to correct the data and instantly get new routes is vital. And these routes may need to be based on attributes different from those used by typical engines - it's not about the fastest route, but about which roads will handle a 5-ton water truck.
This scheme was deployed in the recent hurricane response in Haiti in conjunction with the UNJLC, CartOng, OpenStreetMap and OpenRouteService.
Beyond simple automotive routing, we can now incorporate multi-modal transit. With 50% of the world's population now living in urban areas, the assumption that everyone is in a car is not valid. Instead, people will be using a mixture of cars, buses, subways, walking, and bicycling. This data is also being added to OpenStreetMap, as well as to other projects such as Bikely or EveryTrail. GraphServer is one routing engine that will incorporate these various modes and provide routes.
And we're interfacing with all these engines using a variety of devices: laptop, PND (Personal Navigation Device), GPS units, mobile phones, and waymarking signs. PointAbout recently won an award in the Apps For Democracy for their DC Location Aware Realtime Alerts mobile application that displays the route to the nearest arriving metro.
What's also interesting is the potential of these routing tools beyond specific individual routes. Taken in aggregate, the routing distances form a new topography of the space. Given a point in the city, how far can I travel in 20 minutes? In 40 minutes? For less than $1.75? This type of map is known as an isochrone. Tom Carden and MySociety developed London Travel Time Maps that allow users to highlight the spots in London within a given range of house prices and travel times.
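As a sketch of that idea (same caveat: a toy network with made-up numbers, not a real transit feed), an isochrone is just a shortest-path search that stops expanding once the travel budget is spent:

```python
import heapq

# Travel times in minutes between stops of a toy network; the stops and
# numbers are made up purely to show the idea.
EDGES = {
    "home":     [("station", 8), ("shops", 12)],
    "station":  [("downtown", 15), ("park", 6)],
    "shops":    [("park", 5)],
    "park":     [("downtown", 10)],
    "downtown": [],
}

def isochrone(edges, origin, budget_minutes):
    """Return every node reachable from origin within the time budget,
    with the earliest arrival time for each."""
    best = {origin: 0}
    queue = [(0, origin)]
    while queue:
        t, node = heapq.heappop(queue)
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, dt in edges.get(node, []):
            nt = t + dt
            if nt <= budget_minutes and nt < best.get(nxt, float("inf")):
                best[nxt] = nt
                heapq.heappush(queue, (nt, nxt))
    return best

# Everything reachable from "home" in 20 minutes:
print(isochrone(EDGES, "home", 20))
# -> {'home': 0, 'station': 8, 'shops': 12, 'park': 14}
```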
Despite these apparent benefits, there is a large hurdle. Like road data, there has been a lack of openly available transit data to power applications and services. Providers like NAVTEQ and open projects like OpenStreetMap are possible because the public roads are observable and measurable by anyone. By contrast, the many, varied local transit agencies own and protect their routing data and are reluctant to share it. Google Transit has made great strides in working with transit authorities to expose their information in the Google Transit Feed Specification - at least to Google. This does not mean the data has to be publicly shared, however, and in many cases it has not been.
However, not even the allure of widely admired Google Transit can induce transit authorities to share their prized data. The Director of Customer Service of the Washington Metro Area Transit Authority (WMATA) plainly states that working with Google is "not in our best interest from a business perspective."
Hopefully, this situation will change, first through forceful FOIA requests, but later through cooperation. One step in this direction has been TransitCamps. And Portland's TriMet is a shining example, with a Developer Resources page detailing data feeds and APIs.
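Once an agency does publish a feed, working with it takes very little ceremony: a Google Transit Feed Specification feed is a zip of plain CSV tables such as stops.txt, routes.txt, trips.txt, and stop_times.txt. Here is a hypothetical helper, with a placeholder path and coordinates, that scans stops.txt for the stop nearest a point:

```python
import csv
import math

def nearest_stop(stops_txt_path, lat, lon):
    """Scan a GTFS stops.txt (columns include stop_id, stop_name,
    stop_lat, stop_lon) and return the stop closest to (lat, lon).
    Distance is a crude flat-earth approximation, fine for a sketch."""
    best = None
    with open(stops_txt_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            d = math.hypot(float(row["stop_lat"]) - lat,
                           float(row["stop_lon"]) - lon)
            if best is None or d < best[0]:
                best = (d, row["stop_id"], row["stop_name"])
    return best

# Hypothetical usage against a feed downloaded from a transit agency:
# print(nearest_stop("feed/stops.txt", 45.5231, -122.6765))
```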
These experiments are just the beginning of what is being pushed in this space. Routing is one of those features that users may not realize they need until they have it, and then they'll find it indispensable. The ability for a person to customize their valuation of distance to assist in making complex decisions and searches is very powerful.
For more projects and tools, check out the OpenStreetMap routing page, Ideas in transit and the OGC's OpenLS standards.
tags: emerging tech, geo
DIY Appliances on the Web?
by Jim Stogdill | comments: 9
Or, My Enterprise is Appliancized, Why Isn't Your Web?
I wrote a couple of posts a while back that covered task-optimized hardware. This one was about a system that combined Field Programmable Gate Arrays (FPGAs) with a commodity CPU platform to provide the sheer number-crunching performance needed to break GSM encryption. This one looked at using task-appropriate, efficient processors to reduce power consumption in a weather-predicting supercomputer. In these two posts I sort of accidentally highlighted two of the three key selling points of task-specific appliances, sheer performance and energy efficiency (the third is security). The posts also heightened my awareness of the possibilities for specialized hardware, and some of my more recent explorations, focused on the appliance market in particular, got me wondering if there might be a growing trend toward specialized appliances.
Of course, specialized devices have been working their way into the enterprise ever since the first router left its commodity Unix host for the task-specific richness of specialized hardware. Load balancers followed soon after, and then devices from companies like Layer 7 and DataPower (now IBM) took the next logical step and pushed the appliance up the stack to XML processing. These appliances aren't just conveniently packaging intellectual property inside commodity 1U blister packs; they are specialized devices that process XML on purpose-built Application Specific Integrated Circuits (ASICs), accelerate encryption/decryption in hardware, and encapsulate most of an ESB inside a single tamper-proof box whose entire OS is in firmware. They are fast, use a lot less power than an equivalent set of commodity boxes, and are secure.
Specialization is also showing up in the realm of commodity database management systems. At last year's Money:Tech, Michael Stonebraker described a column-oriented database designed to speed access to pricing history for back testing and other financial applications. In this case the database is still implemented on commodity hardware. However, I think it's interesting in the context of this conversation on specialized computing because it speaks to the inadequacy of commodity solutions for highly specific requirements.
A device from Netezza is also targeted at the shortcomings of the commodity DBMS. In this case the focus is on data warehousing, but it takes the concept further with an aggressive hardware design that is delivered as an appliance. It has PostgreSQL at its core, but it takes the rather radical step of coupling FPGAs directly to the storage devices. The result, for at least a certain class of query, is a multiple-order-of-magnitude boost in performance. I think this device is noteworthy because it puts the appliance way up the stack and is perhaps a harbinger of further penetration of the appliance into application-layer territory.
While appliances are expanding their footprint in the enterprise, it seems like the exact opposite might be happening on the web. Maybe the idea of a closed appliance is anathema to the open source zeitgeist of the web, but in any case, the LAMP stack is still king. Even traditional appliance-like tasks such as load balancing seem to be trending toward open source software on commodity hardware (e.g., Perlbal).
I can't help but wonder though, at the sheer scale that some web properties operate (and at the scale of the energy cost required to power them), can the performance and cost efficiency of specialized hardware appliances be ignored? Might there be a way to get the benefits of the appliance that is in keeping with the open source ethos of the web?
If you've ever uploaded a video to YouTube and waited for it to be processed, you have an idea of how processor-hungry video processing is on commodity hardware. I don't know what Google's hardware and energy costs are for that task, but they must be significant. The same goes for Flickr's image processing server farm and, I would guess, for Google's voice processing now that its new speech services have launched. If the combined hardware and electricity costs are high enough, maybe this is a good place to introduce specialized appliances to the web.
But how to do that in a way that is consistent with the prevailing open source ethos and that still lets a firm continue to innovate? I think an answer might be sort of DIY writ large: a confluence of open source and open hardware that works like an undocumented joint venture based on the right to fork. Think Yahoo and the Hadoop community, or JP Morgan and friends with AMQP, but with hardware, and you get the idea. Such a community could collaborate on the design of the ASICs and the appliance(s) that hosted them and even coordinate production runs in order to manage unit costs. Perhaps more importantly, specifying the components openly would enable cost sharing across these companies while still supporting flexibility in how they were deployed and, ultimately, generativity and innovation for future uses.
There are probably a bunch of reasons why this is just silly speculation, but Google's efforts with power supply efficiency might be seen as at least a bit of precedent for web firms dabbling in hardware and hardware specifications. In fact, Google's entire stack, from its unique approach to commodity hardware to software infrastructure like GFS, might be thought of as a specialized appliance that suits the specific needs of search. It's just a really, really big one that "ships" in a hundred-thousand-square-foot data center form factor.
tags: emerging tech, energy, open hardware, open source
Apps for Democracy
by Jim Stogdill | comments: 3
Vivek Kundra, the District of Columbia's CTO, isn't just talking about transparent government and participative democracy; he's working hard to make DC's massive data stores transparent and to invite your participation.
I first heard about Vivek's push for transparency when he spoke at an Intelligence Community Conference in September (I just happened to be speaking on a panel thanks to a twitter-induced serendipitous introduction to one of the conference organizers - @immunity). He was there in a sort of "one government entity to another" role, to demonstrate that data could be shared and that it is valuable to do so.
I was impressed with the risks he was taking to push hard for the "democratization of data" and for what he was calling The Digital Public Square. What came through really clearly was that he didn't just view this as a technology exercise, but as a way for citizens to participate in real ways and to increase government accountability. It was an engaging and refreshing talk.
It's not exactly news at this point that he came up with $20,000 to offer prizes for the best applications to be built on top of the district's data. After all, the submissions have been long closed and the winners have already been announced. However, I thought it might be worth pointing out that you still have until tomorrow to vote for the two people's choice award winners.
I thought it was kind of fun to just poke around in the list of submissions and see what people came up with. As you can imagine, many of them are geo-spatial displays of district data stores, but there are some other ideas in there as well. Take a look, see what you think, and get your people's choice vote in.
And just because the contest is over doesn't mean it's too late to build something. Take a look at the catalog of data and see what comes to mind. This is just the beginning (Mayor Nutter of Philly, I'm looking at you...).
tags: emerging tech
My Apple Holiday Wish
by Jim Stogdill | comments: 20
I've been searching for a personal backup solution that doesn't suck for, well, pretty much since I got my first computer in the '80s, and I'm still looking.
A few years ago I was cleaning out old crap and ran across boxes and boxes of 800KB floppies labeled "1988 backup disk x." The trash/recycling picker-uppers got those, along with a pile of Zip disks, various CDs, DVDs, a USB drive or two, and a couple of bare SATA drives that I was too cheap to buy housings for. Oh, and there was even a pile of tapes in some long-forgotten format in there.
After a few years of manually copying stuff to multiple USB drives, last year I was completely seduced by the "it's like RAID but you don't need identical drives" beauty of the Drobo. Three failures later (including one with smoke), a nasty virtual tinnitus that comes and goes as its disks transition through a perfect cabinet-resonating frequency, incompatibility problems with Time Machine and Airport Extreme, and access speeds that are too slow to serve Final Cut, and screw it. Now it mostly just sits there powered down making a Drobo-shaped dust-free spot on my desk. It's too buzzy to listen to but too expensive to Freecycle.
Next up, Time Capsule. Still (even more) useless for Final Cut and that sort of thing, but it's doing an ok job with backups - at least of the straight Time Machine variety. There are still a few issues though...
First off, I don't really trust that single spinning platter. It will die some day. Plus, it's in my house about ten feet from where my laptop is usually parked so my eggs are all in a single fire / theft / flood basket.
Apple's Mobile Me and the Backup program that comes with it theoretically provide a solution to this issue, but unfortunately it sucks. It's slow, much slower than a local Time Capsule backup, because it relies on an Internet connection. Also, it effectively requires my machine to be running all the time so that it can conduct its backups in the middle of the night when I won't be competing for bandwidth or CPU cycles.
Even worse, it fails all the time. I don't know why, but it's finicky. A brief connectivity hiccup (or whatever) and I wake up the next day to find that my multi-hour backup died. Finally, it's too small to be useful for more than a few key critical files. I have a few hundred gigabytes of data I'd like to secure, and my Mobile Me account is limited to twenty.
So Apple, I don't usually resort to begging, but here's your chance to fix backup for me once and for all. Just update the firmware in my Time Capsule so that my fast Wi-Fi-based local backups can be incrementally streamed to either an expanded Mobile Me account or to a separate S3 account (or whatever) whenever it's sitting at home with my network connection to itself.
I can't leave my laptop connected for the days it would take to stream all those hundreds of gigs, but Time Capsule is just sitting there with my Internet connection doing nothing while I'm at work anyway, so give it something to do. This way I'll have the best of both worlds: fast, reasonably secure backups to my local Wi-Fi-connected Time Capsule when I'm home, and don't-need-to-think-about-it remote storage that can take its time when I'm not. At the risk of way overreaching, it could even work in both directions, so that if I'm on the road for an extended period, Time Machine could back up critical changes directly to Mobile Me, which could then in turn incrementally stream them back to my Time Capsule.
Ok, that's it. A simple idea I think. Can I have it by Christmas?
By the way, if the thought of all those gigabytes in your Mobile Me data centers makes you blanch (and the idea of using S3 is anathema to Apple's do-it-all culture), how about a Time Capsule-based distributed hash overlay network? If every Time Capsule shipped with the option of turning on a separate partition representing about 1/3 of the disk, you could put a PlanetLab-like distributed file system in there. My files would be split into chunks, encrypted, and distributed around to other people's Time Capsules while some of their stuff was on mine. Sort of an inverted BitTorrent for backups, no data center required.
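To sketch what that chunk-and-scatter idea might look like (nothing Apple has shipped; the helper names and chunk size here are my own invention): split the file, encrypt each chunk so the hosts can't read it, and index chunks by a hash of the ciphertext so any peer can store and serve them blindly.

```python
import hashlib
from cryptography.fernet import Fernet  # third-party: pip install cryptography

CHUNK_SIZE = 1 << 20  # 1 MB chunks; an arbitrary choice for the sketch

def chunk_and_encrypt(path, key):
    """Split a file into chunks, encrypt each one, and index the ciphertext
    by its SHA-256 so peers can store chunks without being able to read them."""
    f = Fernet(key)
    manifest = []  # ordered chunk IDs needed to rebuild the file (kept by the owner)
    store = {}     # chunk_id -> ciphertext; stands in for other people's Time Capsules
    with open(path, "rb") as src:
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            ciphertext = f.encrypt(chunk)
            chunk_id = hashlib.sha256(ciphertext).hexdigest()
            store[chunk_id] = ciphertext
            manifest.append(chunk_id)
    return manifest, store

def rebuild(manifest, store, key, out_path):
    """Fetch chunks by ID in order and decrypt them to restore the file."""
    f = Fernet(key)
    with open(out_path, "wb") as dst:
        for chunk_id in manifest:
            dst.write(f.decrypt(store[chunk_id]))

# key = Fernet.generate_key()   # only the owner keeps this
# manifest, store = chunk_and_encrypt("backup.tar", key)
# rebuild(manifest, store, key, "restored.tar")
```

The manifest and the key stay with the owner; everything that leaves the house is opaque ciphertext addressed by its hash.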
That would be cool but I know you won't do it. And, from the category of "things you are even less likely to do," if you opened up the Time Capsule firmware to third parties someone else probably would.
tags: emerging tech
Recent Posts
- PICNIC Network 2008 | by Brady Forrest on September 10, 2008
- Disaster Technology for Myanmar/Burma aid workers | by Jesse Robbins on May 8, 2008
- roBlocks: Simple Blocks To Make Robots | by Brady Forrest on April 3, 2008
- Baseball Simulations | by Brady Forrest on March 31, 2008
- How Technology Almost Lost the War, but Should Do Better | by Jim Stogdill on March 21, 2008
- Radar Roundup: Brains | by Nat Torkington on March 17, 2008
- From ETech to Where 2.0: Disaster Tech and Activist Mapping | by Brady Forrest on March 12, 2008
- @ETech: Matt Webb's Tour of a Fictional Solar System | by Brady Forrest on March 8, 2008
- Why I Love Hackers | by Tim O'Reilly on March 7, 2008
- Neuroscience and Epistemology at ETech | by Tim O'Reilly on March 7, 2008
- ETech 2008 Coverage Roundup | by Tim O'Reilly on March 6, 2008
- @ETech: Wednesday Morning Keynotes | by Jimmy Guterman on March 5, 2008