Google to Shut Down 3 APIs
Andres Ferrate, January 16th, 2009
This week Google announced that it will discontinue development on some of its services and, in some instances, discontinue the services altogether. According to the Google Code Blog, Jaiku, Dodgeball, and Mashup Editor will be affected by this decision; in addition, Google Video will no longer support uploads and Google Notebook will cease to accept new signups. The blogosphere has been abuzz with the news, and reactions to the decision are mixed, although the general consensus seems to be that Google trimmed its efforts on services that had stalled, were redundant, or had failed to capture sufficient market share to make them worthwhile web properties. ReadWriteWeb, TechCrunch, Search Engine Land, Mashable, and CNET all have additional information on Google’s news.
Below is a rundown of the news as reported by the respective Google blog for each service.
On Jaiku:
As we mentioned last April, we are in the process of porting Jaiku over to Google App Engine. After the migration is complete, we will release the new open source Jaiku Engine project on Google Code under the Apache License. While Google will no longer actively develop the Jaiku codebase, the service itself will live on thanks to a dedicated and passionate volunteer team of Googlers.
Besides our Jaiku API profile we also have examples of 10 Jaiku mashups that developers have built.
On Dodgeball (which did not offer an open API):
Some of you may also be familiar with Dodgeball.com, a mobile social networking service that lets you share your location with friends via text message. We have decided to discontinue Dodgeball.com in the next couple of months, after which this service will no longer be available. We will communicate the exact time-frame shortly.
On Mashup Editor:
As we announced today on the Google Code Blog, we will be shutting down the Mashup Editor in six months. While it is always hard to say goodbye to a product, when we launched the Mashup Editor as a private beta last year, we did so to better understand the needs of you, our developers. And you spoke, and much of what we learned together is now a big part of App Engine, the new infrastructure for hosted developer applications. We look forward to working with you in the migration to App Engine, and can’t wait to see what you build.
On Google Video (one of the few contemporary video services not to offer an API, although of course Google’s YouTube API is one of the most popular APIs, with over 340 mashups):
In a few months, we will discontinue support for uploads to Google Video. Don’t worry, we’re not removing any content hosted on Google Video — this just means you will no longer be able to upload new content to the service. We’ve always maintained that Google Video’s strength is in the search technology that makes it possible for people to search videos from across the web, regardless of where they may be hosted. And this move will enable us to focus on developing these technologies further to the benefit of searchers worldwide.
On Google Notebook:
Starting next week, we plan to stop active development on Google Notebook. This means we’ll no longer be adding features or offer Notebook for new users. But don’t fret, we’ll continue to maintain service for those of you who’ve already signed up. As part of this plan, however, we will no longer support the Notebook Extension, but as always users who have already signed up will continue to have access to their data via the web interface at https://www.google.com/notebook.
As you can see on our Google Notebook API profile, it was not widely used and has just 2 mashups listed.
We have seen this happen a number of times: when a service shuts down, its API usually goes away with it. Look no further than last month, when Pownce shut down their service and API.
Amazon Launches Requester Pays Model for S3
Kevin Farnham, January 15th, 2009
The Amazon Web Services team has announced a new “Requester Pays” pricing model for their Simple Storage Service (S3). Amazon’s S3 service, which provides scalable access to Amazon’s online data storage infrastructure, enables customers to rapidly scale their platforms using a cost-effective pay-as-you-go model. Now, the new S3 Requester Pays model gives S3 customers the built-in ability to charge their users for specific data transfers. Furthermore, Amazon takes care of the billing for the requested data, eliminating excess accounting overhead for the data providers. From their announcement:
By simply marking a bucket as Requester Pays, data owners can provide access to large data sets without incurring charges for data transfer or requests… Requesters use signed and specially flagged requests to identify themselves to AWS, paying for S3 GET requests and data transfer at the usual rates.
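To make the mechanics concrete, here is a minimal sketch of both halves of the model against the S3 REST API, using the Signature Version 2 scheme S3 requests use. The bucket name, object key, and credentials are placeholders, and error handling is omitted:

```python
import base64, hashlib, hmac, http.client
from email.utils import formatdate

ACCESS_KEY, SECRET_KEY = "AKIAEXAMPLE", "secret"  # placeholder credentials
BUCKET = "example-data-bucket"                    # placeholder bucket name

def sign(string_to_sign):
    # AWS Signature Version 2: base64(HMAC-SHA1(secret, StringToSign))
    mac = hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha1)
    return base64.b64encode(mac.digest()).decode()

conn = http.client.HTTPSConnection(f"{BUCKET}.s3.amazonaws.com")

# 1. The data owner flags the bucket by PUTting the requestPayment subresource.
body = ('<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
        "<Payer>Requester</Payer></RequestPaymentConfiguration>")
date = formatdate(usegmt=True)
to_sign = f"PUT\n\n\n{date}\n/{BUCKET}/?requestPayment"
conn.request("PUT", "/?requestPayment", body=body,
             headers={"Date": date,
                      "Authorization": f"AWS {ACCESS_KEY}:{sign(to_sign)}"})
resp = conn.getresponse()
resp.read()  # drain the body so the connection can be reused
print("requestPayment set:", resp.status)

# 2. A requester fetches an object, accepting the charges via the
#    x-amz-request-payer header; the signature identifies whose account pays.
date = formatdate(usegmt=True)
to_sign = f"GET\n\n\n{date}\nx-amz-request-payer:requester\n/{BUCKET}/datasets/kennels.csv"
conn.request("GET", "/datasets/kennels.csv",
             headers={"Date": date, "x-amz-request-payer": "requester",
                      "Authorization": f"AWS {ACCESS_KEY}:{sign(to_sign)}"})
print("object fetched:", conn.getresponse().status)
```

The flagged, signed GET is the key detail: because the requester signs with their own credentials, Amazon knows which account to bill for the transfer.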
The S3 Requester Pays model can also be configured to work in conjunction with Amazon DevPay, which enables Amazon S3 and Amazon EC2 customers to sell subscription access to their services.
The S3 Requester Pays model opens up a variety of possibilities, especially when paired with DevPay:
Content owners charge a markup for access to the data. The price can include a monthly fee, a markup on the data transfer costs, and a markup on the cost of each GET. The newest version of the DevPay Developer Guide has all of the information needed to set this up, including some helpful diagrams. Organizations with large amounts of valuable data can now use DevPay to expose and monetize the data, with payment by the month or by access (or some combination). For example, I could create a database of all dog kennels in the United States, and make it available for $20 per month, with no charge for access. My AWS account would not be charged for the data transfer and request charges, only for the data storage.
This is a very promising new capability for developers. As Amazon’s Jeff Barr notes: “business model innovation is as important as technical innovation. This new feature gives you the ability to create the new, innovative, and very efficient business models”.
ffwd Releases Video Discovery and Sharing API
Kevin Farnham, January 14th, 2009
ffwd is an online community that lets members discover new videos selected based on their interests and favorite shows, and share videos with friends. The “ffwd” button lets you fast-forward to the next video in the sequence if you’ve seen enough of the current video — kind of like channel surfing for the Net. ffwd has now released an API to let developers build ffwd applications on any web-enabled video device. Specifically, ffwd’s new API will enable developers to:
- make requests for a variety of data from the ffwd site
- link ffwd profiles into other applications and mashups
- get live updates on what people are watching on ffwd
At the API launch, ffwd announced that Boxee, the “social media center” that runs on Mac OS X or Linux and interfaces with HDTVs, plans to use the ffwd API in forthcoming product releases (more at our boxee mashup profile).
To get started with the ffwd API, developers need a ffwd account and an API key. The API itself is REST-based, with data returned in XML. The API documentation includes a usage overview, a methods list, and a parameters list. See our ffwd API overview for more details.
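As a rough illustration of what a call might look like, here is a sketch in Python. The endpoint, path, and element names are hypothetical; only the REST-plus-XML shape and the API-key requirement come from ffwd’s announcement:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical endpoint and element names -- consult ffwd's API docs
# for the real method list.
API_KEY = "your-ffwd-api-key"
url = f"https://api.ffwd.com/v1/users/alice/now-watching?api_key={API_KEY}"

with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

# Print each video the user is currently watching.
for video in tree.iter("video"):
    print(video.findtext("title"), video.findtext("url"))
```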
ffwd joins a very crowded segment of the API market: we now have 57 video APIs in our directory.
How To Make Your Web Service More Developer Friendly
John Musser, January 13th, 2009
Not all API providers know how to make developers happy. In fact, although there are now over 1,100 web service APIs available, many of those API providers fail to really understand the needs and motivations of their (potential) developer community. For evidence of how developers react to both well-run and badly-run API programs, look no further than a very insightful blog post from mashup developer Alexander Lucas on Making Your Webservice More Developer Friendly (Alex is the creator of Migratr, a useful desktop mashup that uses APIs from 11 different web services to let you migrate photos between online photo services).
In his detailed post he gives what’s clearly real-world, from-the-trenches feedback (and wit) from an experienced mashup developer on what works and what doesn’t:
I’ve been working on Migratr for around a year and a half now, and in that time have added support for 11 different webservices. Sometimes I’ve grabbed third party libraries designed for interacting with those API’s, other times I coded up the service-interaction layer myself, and I’ve gone through SOAP, Rest (via URL munging or XML via post), JSON and in one case, even webscraping. It’s been an immensely educational and rewarding experience, with degrees of difficulty varying from totally easy (23HQ, by copying the flickr API verbatim and changing only URL endpoints, took about an hour including testing) to ridiculously difficult (AOL Pictures might have been more popular if their API was more than lipservice).
I can only speak to Photo-related web services, as that would be the area where I have the most experience. But I think most web services “get it” with regards to an API- By publishing an API, and enabling and encouraging developers to interact with your webservice, you’ve effectively given yourself a dev team larger than you could ever hope to afford. Users passionate about your services, with ideas on how to extend and improve it, and the know-how to implement those great ideas. More applications related to your website means more ways for users to interact with it, which means more chance of a “killer feature” written by a user of your service that ends up driving thousands of new users to you, any one of which can be a developer that continues the cycle. It’s an upward spiral.
But it takes more than just publishing an API. You have to make your developers WANT to write stuff for your service. Make it easy and enjoyable for them, and remove as many roadblocks and speedbumps as you possibly can so that they can complete their brilliant idea before throwing up their hands in frustration, or slowly, quietly losing motivation amidst a sea of vicious bugs, counter-intuitive behavior and documentation that either looks like it was written by Hemingway or run through babelfish.
He then goes on to provide an on-the-money “checklist for being developer-friendly”:
- Let developers know your API exists. That is, get the word out, starting with a listing here on ProgrammableWeb:
Publish a listing for your API on Programmable Web, effectively giving the internet a cheat-sheet for your API- Protocols (soap, json, etc), documentation links, fee information (free non-commercial, pay commercial?), 1st and 3rd party libraries. You’ll get legions of volunteer devs through this site, who might not have even known your service existed.
Other basics matter too: an “API” or “Developers” link in the footer of your site’s navigation is “like gold.” “It should link to a page providing the basic information: API Sign-up process, documentation, etc. I don’t want to go trolling through support documentation or chat live with a customer service rep. Just let me hit the ground running.” [Emphasis added, since this is such a key success factor].
- Have a developer microsite. Don’t let your developer section be just a couple of pages and a reference sheet. Include docs, forums, a blog, source code samples, listing of third party apps, etc. Have your staff actively monitor and respond to questions and feedback from the community. And personalize it for the registered developer with a “My Profile section showing me my own API keys, stored app descriptions and (unusual, but I consider this a bonus) usage statistics.”
- Don’t half-ass your API. Not pulling any punches, Alex argues that there’s a vast spectrum of API quality and support. On the one hand he felt “AOL Pictures API was little more than “Look at us, we have an API!” lipservice.” But “On the complete opposite end of the spectrum from AOL, Flickr has an AMAZINGLY comprehensive API. You can fine-tune not only what data, but how much of it you want returned. You could practically create a flickr clone using the flickr API as a backend.”
- Provide options. This means don’t assume that all developers are the same in terms of what protocols and data formats they’d like to see you provide. Give them choices. Alex proposes: “At least two of SOAP, JSON, or straight-up REST.” Here on ProgrammableWeb we see the savviest API providers offering multiple protocols and/or formats. Why?
Different languages have a way of making different protocols easier or harder. Giving the option of parsing XML vs JSON, or providing a WSDL file and letting a code generator handle all that for you (a serious perk of client development in .NET), makes your API more language-agnostic and universally accessible. To summarize, the API should be stable, complete, flexible, and not counter-intuitive.
This is something we’ve seen here at ProgrammableWeb as an API success factor and something we’ve addressed at conferences before.
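As a tiny illustration of Alex’s point (the payload here is invented for the example): the same record parsed from JSON and from XML in Python. In other stacks the balance tips the other way, which is exactly why offering both formats widens your audience:

```python
import json
import xml.etree.ElementTree as ET

# The same hypothetical payload offered in two formats; different stacks
# make one far more pleasant to consume than the other.
as_json = '{"photos": [{"id": "42", "title": "sunset"}]}'
as_xml = '<photos><photo id="42" title="sunset"/></photos>'

for p in json.loads(as_json)["photos"]:
    print(p["id"], p["title"])

for p in ET.fromstring(as_xml).iter("photo"):
    print(p.get("id"), p.get("title"))
```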
- Have high quality API documentation. Often API docs feel like an afterthought. Terse at best, inaccurate and out of date at worst. As Alex describes:
Again, Flickr comes to mind as a shining example. Every method you call via their API has documented behavior on whether it needs the user to be authenticated, parameters, example of the response sent from the server, and possible error codes. There should be a TOC of methods split up by category (for example: photos, albums, user groups, friend list, authentication) so I can quickly and easily hone in on exactly what I’m looking for. Provide links to every third-party library that someone has written for accessing your service in a particular language.
Provide tutorials and walkthroughs, and if a third-party dev writes one too, link to that. Throw in some sample code in a few popular languages (Java, c#, python, ruby, perl) so people know what the code should look like. You don’t have to do this for every method. I’d recommend doing this for the authentication step, as developers will be able to test this without getting anything else right first. The net idea is that I should be able to get rocking with your API as soon as possible in the language of my choice, provided it’s a relatively common language (I don’t expect there to be a “Flickr Haskell Library”, for instance. But I do expect a Java one). Good documentation means your dev forums will be manageable.
The key to Alex’s point here is that good documentation is an investment that can reap many rewards, including some not immediately obvious financial rewards, like lower ongoing support costs.
And besides standard, static documentation, other options to consider are rich media such as videos, screencasts and podcasts. There are also interactive tools, like the 9+ tools we’ve covered in the past.
- Offer special “developer accounts”: Here’s an interesting idea that Alex proposes that we’ve not seen much of but has a lot going for it: that is, an account on the service itself with capabilities that are meaningful to the needs of developers. As Alex describes:
I’m a little biased on this one, given that I interact with so many different webservices. But of all the services I use, most have “pro” accounts that I need to be able to test against. Some sites (major props to SmugMug and Zenfolio) comp pro accounts to developers who have written software that interacts with their services, as a matter of practice. Others will give the developer a one-year coupon upon request, while others don’t give us anything at all. Software is going to need testing, and if I run into a free-account limit for uploading data to your webservice while testing my software, I’m either going to have to cough up money for the right to add value to your service, go through the lengthy process of clogging your database with test accounts, or wait a month until I can do more testing. These are all roadblocks. I’m trying to write something your users will appreciate. Please don’t leave me hanging like this!
One thing that I would *like* to see, that I haven’t yet, is a special developer account- Not just a comped pro account, but a user account specific to developers. This account could have behavior specific to testing my software, like
- API-Accessible switch between account types, like “free”,”standard”,”pro”,”super-mega-expensive-awesome-pro-account-of-extreme-justice”, in terms of toggling things like upload/download limitation that vary from account to account, to reproduce errors my users are seeing and test for different errors that might come up
- None of the things I do via the API show up in “Latest activity” bar on front of site. If I’m running tests, it would be good etiquette for me to be able to avoid what could be considered spamming
- 24-hour deletes or auto-rollbacks: If it’s a test account, It’d be nice for me to be able to keep the account empty when I’m done testing, without writing housekeeping code on my end that just deletes everything. This means I’m trying to take up less space on your servers. I’m not saying this should be automatic, just we’d both be happier if I had the option.
Sites that don’t hand out free pro accounts to developers could offer these up instead, and just cripple the accounts in ways that would only matter to an active user: the 24 hour deletes could be mandatory, all data posted on a social site could be kept private and not visible to the community, etc. The main point here is that while I totally dig on the incentives provided by the services whose API’s I interact with, it’s more important that I be able to test my software against theirs, fully and comprehensively.
- Provide more than technical support: This is not one that Alex covered, but it is worth adding to this list: help your developers help you by giving them business, marketing and other non-technical support. Some of the best API programs (but not enough) will help developers succeed once the code’s written, through programs like joint marketing, promotion through Solutions Galleries, certification, etc.
- Look to build community and passion: Another addition to Alex’s list is generating developer satisfaction by creating a strong community. This starts with the forums and online support, but smart API providers are taking this further by creating opportunities to meet their developers in-person (and have the developers meet each other face-to-face). This ranges from smaller events like Facebook Developer Garages to annual events like the eBay Developers Conference, Google I/O and Dreamforce. For a precedent, look at the ways in which the folks in Redmond built a substantial developer community on earlier platforms.
A checklist like the one Alex has given should be part of every developer program strategic plan.
And in the end, Alex concludes by pointing out to API providers why they should listen, and why it’s in their own best interest to do so:
I know it sounds pretentious and self-important when I say that as a third-party developer, I’m adding value to your service just by coding something against your API, and thus you should make things as easy for me as possible. Like I’m the kind of guy who always types “M$” when I mean Microsoft, or leaves obscure, nerdy references in parentheses followed by “anyone?” (Slashdot/Reddit commenters, anyone?). But I swear I’m not being that guy. I’m basing the idea that I’m adding value (or, in the case that I suck, the idea that someone else is adding value) off of the services I’ve seen flourish, and the ones I’ve seen wither and die.
Yahoo Photos and AOL Pictures: Half-assed API (Yahoo’s was even web-only, so you couldn’t write a desktop application for it). Both deadpooled.
Imagestation and Epson Photos: No API. Both Deadpooled.
Flickr, SmugMug, Zenfolio, Phanfare, PicassaWeb: Comprehensive, well-documented API’s with active developer communities, responsive staff and a wealth of resources for helping you get stuff done. Thriving.
In summation, in the words of Jerry Maguire, “Help me help you.”
Good advice.
Yahoo Boosts Its Open Strategy Reading with Y!OS Docs
Andres Ferrate, January 12th, 2009
Yahoo has been steadily expanding its Yahoo Open Strategy (Y!OS), a set of complementary platforms that allow developers to rapidly access Yahoo network data and develop applications. At the core of Y!OS are two platforms: the Yahoo Application Platform (YAP) and the Yahoo Social Platform (YSP), both of which can be accessed via the Yahoo Query Language (YQL). As regular readers will note, we have covered Y!OS in previous posts here on ProgrammableWeb.
Now Yahoo has released various types of documentation to boost information available for developers who are working with Y!OS platforms. According to the Yahoo Developer Network Blog:
We’ve just released a new batch of documentation starting with a Y!OS docs landing page, to make it easier to find the information you need. We’ve split the page into 2 easy sections. The first section includes the stuff to quickly get started: overviews of the terminology, tutorials, examples, and other quick-start guides. Don’t worry though, the second section contains an index of all technology references you’ll want to use day to day.
Included in the documentation are several tutorials, code examples, and a brand-new FAQ section. The tutorials are broken out into specific chapters that include the time required to complete each one. There are currently five code examples available, including a friend selector and an example of linking to a Yahoo! profile.
There is also a set of “getting started” documents based on the tutorials that include:
- Yahoo! OAuth Quick Start Guide
- Getting Started with Yahoo Social Applications
- Getting Started with YQL
- Using YQL Statements
- Introduction to YML
- Making YQL Calls With PHP
- Building Yahoo Social Applications with Flash
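As a taste of what those YQL guides cover, here is a minimal sketch that runs a YQL statement against Yahoo’s public endpoint. The flickr.photos.search table is one of the stock public data tables; the social tables require OAuth, which the quick-start guides above walk through:

```python
import urllib.parse, urllib.request
import xml.etree.ElementTree as ET

# Run a YQL statement against the public endpoint; format=xml wraps the
# table's rows in a <query><results>...</results></query> document.
query = 'select * from flickr.photos.search where text="penguin" limit 5'
url = ("http://query.yahooapis.com/v1/public/yql?"
       + urllib.parse.urlencode({"q": query, "format": "xml"}))

with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

for photo in tree.iter("photo"):
    print(photo.get("id"), photo.get("title"))
```

The same statement can be pasted into the YQL console on the Yahoo Developer Network to inspect the raw response interactively.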
Documentation is often one of the most overlooked elements of APIs and developer platforms, and it’s good to see Yahoo expanding its overall knowledge base for its Y!OS platforms. You can find out more about all of Yahoo’s APIs in our API directory, and be sure to check out our ever-expanding list of mashups that leverage Yahoo’s APIs.
US Congress Gets an API
Kevin Farnham, January 9th, 2009
The New York Times has just announced its new Congress API, which gives developers access to four sets of data about US Congressional representatives and their votes: “a list of members for a given Congress and chamber, details of a specific roll-call vote, biographical and role information about a specific member of Congress, and a member’s most recent positions on roll-call votes” (see our Congress API profile for details).
The data the API provides “comes directly from the U.S. House and Senate Web sites, and is updated throughout the day while Congress is in session.” And for individual members, the Times’ responses “include a numeric ID assigned by GovTrack, a free and open-source service that monitors legislative activity.”
The Times team also describes a bit about the coding going on behind the scenes:
We use Hpricot, an HTML parser for Ruby written in C, to parse both the XML produced by the House and the HTML displayed on Senate.gov, and we use the ar-extensions plugin for Ruby on Rails, which extends ActiveRecord to speed the bulk loading of records. We also parse some information from THOMAS, the Library of Congress Web site.
It’s a RESTful API with data returned in XML. You’ll need an API key to use it and up to 5000 daily requests are available for free. This is another innovative API from the Times, who now offer a variety of useful APIs. See our New York Times API Directory for a listing.
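A request for the member list might look roughly like the sketch below. The URI pattern follows the Times’ documented style, but the exact version segment, paths, and response element names here are illustrative, so check the Congress API docs before relying on them:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Illustrative URI and element names -- confirm the version segment and
# response fields against the Times' Congress API documentation.
API_KEY = "your-nytimes-api-key"
url = ("http://api.nytimes.com/svc/politics/v3/us/legislative"
       f"/congress/111/senate/members.xml?api-key={API_KEY}")

with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

for member in tree.iter("member"):
    print(member.findtext("first_name"), member.findtext("last_name"),
          member.findtext("party"))
```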
As Josh Catone notes, while the New York Times as a “dead-tree” newspaper might be “running on fumes,” they’ve:
been doing a lot of cool stuff with all the data they and others have collected over the years. … [the Congress API] has a database of House votes back to 1991 and Senate votes dating to 1989, as well as membership information going back to 1983 and 1947, respectively.
More from Stephanie Condon at CNET and Marc Canter .
This API is another notable step in making government data accessible to developers. Along with efforts by the Sunlight Foundation, Vivek Kundra in Washington D.C., and the UK government, more of these opportunities are opening up. Expect to see more government mashups coming soon.
And see our Gov Mashup and API Dashboard for the latest on government-related open platform news.
Apple Brings Geotagging to iPhoto Via Google Maps
Andres Ferrate, January 8th, 2009
In a clever move, Apple has leveraged the power of the Google Maps API (our Google Maps API Profile) to provide geotagging capabilities in iPhoto 2009, the latest version of its popular photo management software. Announced by Phil Schiller at Macworld 2009, iPhoto 2009 is packed with several new features, including ‘Places’, which gives users the ability to easily assign geographic coordinates to their photos.
Keir Clarke over at Google Maps Mania has a detailed post that provides general background and a review of the geotagging/mapping features:
iPhoto imports location information from your digital camera and can then show you this location on Google Maps. If your camera doesn’t have GPS capabilities you can simply add locations manually with iPhoto.
iPhoto is also able to tag your photos by location and by nearby points of interest. For example if your location data shows that a photograph was taken near the Eiffel Tower iPhoto will automatically tag your picture ‘Eiffel Tower’.
The video below from Macworld includes Schiller’s introduction of the Places feature (along with two other new features: Faces and Events).
Existing online photo services such as Flickr, Panoramio, and Picasa currently support geotagging as well, but for many users the ability to geotag and visualize photos directly from iPhoto is certain to be an appealing option. We feel this is a great example of a new desktop app integrating a Web 2.0 API to produce a hybrid mashup, and we look forward to seeing the release of additional desktop applications that leverage Web 2.0 APIs.
TechCrunch, CNN, and CNET have additional coverage on iPhoto’s new geotagging functionality. [Hat Tip: Google Maps Mania]
Enterprise Mashups: New Book Highlights the Patterns
John Musser, January 7th, 2009
Although mashups started out in the consumer space, their success makes a migration into corporate IT environments inevitable. Firms exploring this new software development model may struggle at first to understand the importance of mashups from a corporate perspective. In the upcoming book, Mashup Patterns, author Michael Ogrinz provides a collection of use-case driven patterns intended to explain the value of enterprise mashups to both technical and non-technical readers. We recently interviewed Michael about the patterns and what he hoped to achieve with his book.
Q: Can you give us a bit of background on the origin of “Mashup Patterns”?
During my day job as an architect at a major financial services firm, I meet with a lot of vendors. About two years ago, the “mashup companies” came knocking at our door. At first, I dismissed them. They seemed too much like the brittle screen-scraping solutions of the past, or required us to Web Service-enable everything.
I started spending more and more time thinking about the underlying technology and how it could impact the problems professional IT teams face on a daily basis. The list of potential uses suddenly exploded. But I still couldn’t get the traction I needed around the office, which I partly blamed on the confusion created by conflicting vendor marketing strategies. What the industry needed was an objective set of ideas for how mashups could add value, regardless of the particular tool. I realized that a lot of my thoughts could be distilled into generic patterns. Not quite the academic, Gang-of-Four kind, but something a little more rooted in practical use cases. And the material had to be approachable by non-developers – ‘power-users’ that could create their own solutions with the many new mashup tools popping up.
Q: How have you organized the patterns in this book?
The core of the book is organized into a set of 34 patterns grouped into five main categories. This structure is intended to organize the patterns according to the circumstances where they add the most value.
Harvest: Mine one or more resources for unique data
- Alerter: Mashups do not necessarily present data directly to a user. Intelligent Agents can be configured to automatically monitor various conditions and trigger alerts
- API Enabler: Create a custom API for static resources (e.g., web pages) so that they can be utilized as a dynamic data source
- Competitive Analysis: Extract pricing and product information or advertising trends from competing firms to compare against your own offerings
- Infinite Monkeys: Automate a repetitive task to a scale unachievable by normal human agents
- Leading Indicator: Use a mashup to regularly monitor information that may indirectly serve as a leading indicator
- Reality Mining: Incorporate environmental and behavioral data to better understand human interaction
- Reputation Management: Use mashups along with Sentiment Analysis techniques to scan for words that connote emotion and then rank how a document “feels”
- Time Series: Use a mashup to extract and store information at regular intervals in hopes of observing trends in the data
Enhance: Extend the capabilities of existing resources
- Accessibility: Construct an alternative application interface with no impact on the original code base
- Feed Factory: Create an RSS/Atom Feed for a site that doesn’t expose a feed, and create new feeds by remixing existing ones
- Field Medic: Provide a temporary patch to a system when you are unable to correct the problem directly
- Folksonomy Enabler: Add community-driven tagging or rating features to existing applications
- Smart Suggestions: Enhance productivity by using mashups to suggest material relevant to users’ tasks
- Super Search: Apply business-specific knowledge to enhance user search activity so that results are obtained from multiple sites relevant to the problem domain. See illustration below.
- Translation: Pass content through a service to add clarifications or convert it to a different language
- Usability Enhancer: Construct a mashup “wrapper” (or façade) which exposes only the functionality necessary to use the system.
- Workflow: Add workflow capabilities to a system or chain of systems
Assemble: Remix existing data and interfaces to serve new purposes
- Communication and Collaboration: Combine internal communication products to solve problems related to Interruption Overload
- Content Aggregation: Multiple resources are combined to remove inefficiencies caused by frequent task-switching between applications
- Content Integration: Extend a system that accepts an incoming feed by mashing together multiple sources into a new feed that conforms to the original standard
- Distributed Drill-Down: Provide Master/Detail functionality across multiple systems
- Emergency Response: Create an ad hoc solution in situations where response time is crucial
- Filter: Remove unnecessary or unneeded data from a system or data feed
- Location Mapping: Geocode data for location mapping or address verification
- Splinter: Separate a unified data source into smaller, specialized streams of focused information
Manage: Leverage the investment in existing assets more effectively
- Content Migration: Migrate information from one or more applications to a new environment
- Dashboard: Acquire and display summary status information from multiple systems on a single page
- Portal Enabler: Move existing content onto enterprise Portals without requiring custom coding.
- Quick Proof of Concept: Use mashups to validate a business or product idea that will entail a significant investment
- Single Sign-On: Allow a user to supply credentials one time for authentication across multiple internal and external systems
- Widget Enabler: Repackage existing systems for viral distribution via popular Widget platforms
Testing: Verify the performance and reliability of applications
- Audit: Use mashups to create an aspect-oriented view of application usage
- Load Testing: Multiple mashups run simultaneously can simulate the activity of hundreds of users and assist in load and stress-testing
- Regression Testing: By employing a predefined collection of data, ensure that input/output results across versions are as expected
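To make one of these patterns concrete, here is our own minimal sketch (not from the book) of the Feed Factory pattern from the Enhance group: scrape headlines from a page that lacks a feed and emit RSS. The target URL and the headline markup are hypothetical stand-ins:

```python
import urllib.request
from email.utils import formatdate
from html.parser import HTMLParser

# Hypothetical target: a feed-less page whose headlines sit in
# <h2 class="headline"> elements.
class HeadlineParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_headline = False
        self.headlines = []
    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "headline") in attrs:
            self.in_headline = True
    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_headline = False
    def handle_data(self, data):
        if self.in_headline and data.strip():
            self.headlines.append(data.strip())

page = urllib.request.urlopen("http://example.com/news").read().decode("utf-8", "replace")
parser = HeadlineParser()
parser.feed(page)

# Emit a minimal RSS 2.0 document built from the scraped headlines.
items = "".join(f"<item><title>{h}</title></item>" for h in parser.headlines)
print(f'<rss version="2.0"><channel><title>Scraped headlines</title>'
      f"<pubDate>{formatdate(usegmt=True)}</pubDate>{items}</channel></rss>")
```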
Q: How are these patterns presented?
Each pattern is presented as a problem/solution pairing, along with a Fragility rating. Since mashups can be more brittle than traditional coding solutions, I think it’s important to understand where they might break so organizations approach them with reasonable expectations. The patterns are accompanied by several examples intended to show their value under a variety of circumstances. On an academic note, there are probably only a few mashup ‘patterns’ in the more traditional sense of the word: Data Acquisition, Data Entry, Transformation, etc. But that wouldn’t really help the layperson see the value of mashups. So instead, I define these operations as ‘Core Abilities’, and each of the patterns above leverages one or more of them. This helps at tool-picking time, too. You identify the patterns you’ll use, and then choose the product that supports the core abilities required to implement them.
Q: Does the book include real-world case studies of some of these?
Yes, I was able to obtain a collection of case studies from a number of firms including The Associated Press, Audi, the Defense Intelligence Agency, SimplyHired, and Thomson Reuters. There were other companies that were unable to attach their names to the text due to legal restrictions, but their experiences should nevertheless show that mashups are already an important component of many organizations’ IT departments.
As part of our goal to provide expanded coverage of enterprise mashups, we’ve invited Michael to join us in sharing his thoughts and experiences in future posts. We’ll review case studies from major corporations, look at the legal and governance challenges enterprise mashups face, and begin comparing the commercial products, evaluating them by cost, capability, and usability.
Bitsmash: Get BitTorrent Stats via Code
John Musser, January 6th, 2009
Although over 1.1 petabytes of data are shared via the peer-to-peer services of BitTorrent, there’s now a new web service that lets you get statistics on it all: it’s called Bitsmash. This relatively simple service allows you to search and sort data and usage metrics about all that p2p data, and that data is also available via an API (our Bitsmash profile). For any given torrent (any file being distributed on the network), it will give you summary data and show you graphs of activity over time. The screenshot below shows a Bitsmash entry for Knight Rider.
Their RESTful API provides programmatic access to the underlying data as XML. It’s fairly straightforward to use and does not require a developer key for access. In addition, the underlying data is licensed using an open Creative Commons license.
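The announcement doesn’t spell out the endpoints, so the path and element names in this sketch are hypothetical; only the no-key, REST-plus-XML shape comes from the post:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical /api/search path and element names -- check Bitsmash's
# docs for the real resource layout.
url = "http://bitsmash.com/api/search?q=knight+rider"

with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

for torrent in tree.iter("torrent"):
    print(torrent.findtext("name"), torrent.findtext("seeds"),
          torrent.findtext("leeches"))
```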
Bitsmash is also a mashup in itself, with a detailed Google Map for each entry that shows the location of seed, host and leech nodes (see the Wikipedia link below for more on these terms).
For those not familiar with BitTorrent, the Wikipedia entry gives a good summary:
BitTorrent is a peer-to-peer file sharing protocol used to distribute large amounts of data. The initial distributor of the complete file or collection acts as the first seed. Each peer who downloads the data also uploads them to other peers. Relative to standard internet hosting, this provides a significant reduction in the original distributor’s hardware and bandwidth resource costs. It also provides redundancy against system problems and reduces dependence on the original distributor.
Programmer Bram Cohen designed the protocol in April 2001 and released a first implementation on July 2, 2001. It is now maintained by Cohen’s company BitTorrent, Inc. Usage of the protocol accounts for significant Internet traffic, though the precise amount has proven difficult to measure. There are numerous BitTorrent clients available for a variety of computing platforms. According to isoHunt the total amount of content is currently more than 1.1 petabytes.
The folks over at TorrentFreak have raised some questions about the accuracy of the data, but they, as well as developer Smash, note that it’s an early beta and the quality should improve over time.
They also note that in some ways this service becomes a meta-search for content, both legal and other:
Interestingly, BitSmash has decided to include a link to the .torrent files on their detail pages, which basically makes it a meta-search engine as well. The anti-piracy lobby might not be too happy about that. A few days ago we reported on the Swedish news site Nyheter24, that was criticized for linking to torrents on The Pirate Bay.
We have seen other p2p and BitTorrent-related APIs before, including the SeedPeer API. Given how popular storage APIs have become, we may start to see p2p and file-sharing APIs proliferate as well.
DeepEarth: Microsoft’s Open Source Mapping Control
Andres Ferrate, January 5th, 2009
DeepEarth, a new map control that integrates Microsoft’s Virtual Earth mapping service (our Virtual Earth API Profile) with the Silverlight 2.0 framework, is now available as an open source project on CodePlex.
As described on the project page on CodePlex:
DeepEarth is a mapping control powered by the combination of Microsoft’s Silverlight 2.0 platform and the DeepZoom (MultiScaleImage) control. At its core, it builds on these innovative technologies to provide an architecture for bringing layers for services, data providers, and your own custom mapping elements together into an impressive user experience. Also featured are in-depth examples of how you can leverage Virtual Earth Web Services to take advantage of advanced GIS service functionality. This is what you need to get an interactive, native Silverlight 2.0 map into your application today.
This is an impressive release and the integrated control provides various types of features, including:
- Fully implemented map control with property and event model
- Fully templated set of map navigation controls
- Layers for inclusion of Points, LineStrings and Polygons
- Conversion library for geography to screen coordinate systems
- Geocoding (find an address)
- Reverse Geocoding (getting an address from a point on the map)
- Routing (Directions)
- Marquee zoom selection
- Map rotation
Note that the control can be previewed at the Soul Solutions DeepEarth demo web site (you need to have Silverlight 2.0 installed) and you can also download the latest release of the open source project (source and/or binaries).
Also, be sure to check out a page on CodePlex on how to set up the solution as well as additional coverage on DeepEarth on Chris Pendleton’s Virtual Earth, An Evangelist’s Blog.