In celebration of XML 1.0’s tenth anniversary, I signed back on to XML-DEV to suggest that it’s time to do to XML - just the core of it, please - what XML did to SGML around SGML’s tenth anniversary.
“I think [forgiveness] may be the greatest virtue on earth, and certainly the most needed. There is so much of meanness and abuse, of intolerance and hatred. There is so great a need for repentance and forgiveness. It is the great principle emphasized in all of scripture, both ancient and modern.
Somehow forgiveness, with love and tolerance, accomplishes miracles that can happen in no other way.”
–Gordon B. Hinckley, “Forgiveness,” Ensign, Nov. 2005, 81
It is time for us to forgive. Please consider just this very act.
Update: Thorsten has followed up with some interesting comments regarding the current state of the virtualized industry, the problems he’s seeing a lot of companies facing, and various obstacles they’re running into along the way. Interesting stuff!
A word of advice, easy to give, hard to follow: design your system so you can relaunch any critical instance!
Amazon has thousands of instances available, just waiting for you to hit the launch button. If a current instance smells bad and your own troubleshooting doesn’t resolve it, launch a new one and bring your service up on it. Actually, if it’s critical, you should have two running so you’d be left with one while you replace the failing one.
All this should be motherhood and apple pie on EC2 or any other hosting facility, or also in your own datacenter for that matter. Systems fail.
Thorsten von Eicken, Posted: Feb 2, 2008 10:53 AM PST
BTW: Thorsten is one of the smartest individuals I have ever had the fortune of coming to know. *GREAT* guy, and if you need help with Amazon Web Services-related consulting, in particular EC2, I would *HIGHLY* recommend getting in contact with his company, RightScale. Just the right combination of open source, open minds, and openly giving more than he/they receive in return, so I believe it’s certainly both fair and in line with the ideals of O’Reilly, and therefore this blog, to provide promotion.
Merger-mania is in full swing of late, which is rather astonishing given the current credit market problems. Oracle, after months of trying, finally managed to snare business services provider BEA, and Sun’s purchase of MySQL was both hailed as a master-stroke and derided as a last-gasp hope by a fading giant to bolster its database claims. Today the announcement hit the fan that Microsoft had made a $45 billion bid for Yahoo, something that’s made Wall Street happy but that has people in Silicon Valley scratching their heads.
I am not a CEO, nor am I a big investor … and I’m generally not a big fan of mergers and acquisitions (M&As), because, especially when the mergers involve two reasonably large, well-established companies, the results in terms of performance seldom justify the costs involved. This is especially true of tech sector companies, where so many of the assets of the companies are not tied up in physical capital but rather in abstract ideas carried around in smart people’s heads.
The overtures that Microsoft has been making to Yahoo have been obvious for some time. The integration of Yahoo and Microsoft’s IM formats, for instance, hinted that the two were playing footsies under the table, and certainly the announcements that Yahoo would be utilizing Microsoft technology (and Microsoft’s subsequent PR blitz to that effect) at least indicated that the relationship was serious. Thus, the signs have been around for a while, and both the tech and Wall Street press have generally been playing the role of matchmakers. Yet there are more than a few signs that this particular marriage, if consummated, may end up in divorce court nonetheless.
Karen Deane has an article in The Australian, “Battle on Microsoft standard push”, which includes some quotes from a background interview she asked me for, to give her the gossip ahead of the big MS journalist fly-in last week.
There has been a fair amount of chatter lately about defending the value of SOA projects or justifying such projects to the “C-level”. Many of these discussions point at the business value of doing more with less: achieving IT cost reduction by reducing redundant systems and reusing services in a SOA. Streamlining processes in order to run the business more efficiently is another popular argument.
Actually, there are plenty of reasons why F# ((F == Functional) == true) *ROCKS*. Here are a few from the previously linked F# site on Microsoft Research:
Combining the efficiency, scripting, strong typing and productivity of ML with the stability, libraries, cross-language working and tools of .NET.
F# is a programming language that provides the much sought-after combination of type safety, performance and scripting, with all the advantages of running on a high-quality, well-supported modern runtime system. F# gives you a combination of
* interactive scripting like Python,
* the foundations for an interactive data visualization environment like MATLAB,
* the strong type inference and safety of ML,
* a cross-compiling compatible core shared with the popular OCaml language,
* a performance profile like that of C#,
* easy access to the entire range of powerful .NET libraries and database tools,
* a foundational simplicity with similar roots to Scheme,
* the option of a top-rate Visual Studio integration,
* the experience of a first-class team of language researchers with a track record of delivering high-quality implementations,
* the speed of native code execution on the concurrent, portable, and distributed .NET Framework.
The only language to provide a combination like this is F# (pronounced FSharp) - a scripted/functional/imperative/object-oriented programming language that is a fantastic basis for many practical scientific, engineering and web-based programming tasks.
F# is a pragmatically-oriented variant of ML that shares a core language with OCaml. F# programs run on top of the .NET Framework. Unlike other scripting languages it executes at or near the speed of C# and C++, making use of the performance that comes through strong typing. Unlike many statically-typed languages it also supports many dynamic language techniques, such as property discovery and reflection where needed. F# includes extensions for working across languages and for object-oriented programming, and it works seamlessly with other .NET programming languages and tools.
For those of you unaware, F# is now a first-class MSFT language; in other words, this is no longer a “Hey, here’s an idea. Let’s research it.”-type project, but instead a true-blue MSFT product backed by mean-green MSFT money, led by some of the very best and brightest minds @ MSFT.
If you were to ask me “What’s the future language foundation of the .NET platform?” I would first state “More than likely, XSLT 2.0++.” And then, when you stopped laughing and slapped me upside my head to wake me from my dream, I’d say, “What the F#!? was that for?” and you’d say “F#??,” followed by “Isn’t that for programming the way God intended people to program on the .NET platform?”, and then I’d say “Okay, you got me on that,” at which point we’d move on…
So here’s the thing: While there are *TONS* and *TONS* of reasons why F# *ROCKS* (did I mention that F# is distributed as both an MSI and a ZIP, the latter designed to make it easy for folks using Mono to take full advantage of what F# has to offer?), the biggest reason it *ROCKS* is this,
It’s been a while since I’ve written a “non-directed” blog for xml.com, so while I will be covering a few XML topics here, if you’re not interested in economic systems theory you might as well skip this.
As I write this, it’s about eight hours before the financial markets open in New York. The markets were closed today, Monday the 21st of January, for Martin Luther King, Jr. Day, which may prove to be a bad thing in the morning. Today the average market loss globally was about 7% - here in Canada, the drop translated to a 605 point loss, or about 4.75%, on the TSX. The India Sensex fell 11% in two minutes before trading was halted. I can pull out other figures, but they say much the same thing.
It’s hard to say what will happen in the morning - I’m not even going to try, though I have my suspicions. Enough fire control may have been put into place to keep the US markets from getting too badly singed (though I have NO doubt that few people at any brokerage firm in the country were allowed to stay at home today), but what you’re seeing here is something that we’ve not seen in a long time … the start of a worldwide stock crash.
I’m happy to report that Ruby on Rails not only offers a comfortable way to develop web applications, but that a little-noticed feature makes some formerly theoretical open approaches to XML much more immediately practical.
… and then what will /. have to write about? *NOTHING*, I tell you! *NOTHING*!!!
Wait, that would be a good thing, huh?! *SWEET*! Please carry on…
I wanted to call everyone’s attention to a few interesting developments in Ecma’s proposed disposition document related to the Office binary formats. There were a few comments from national bodies that asked about the documentation of the Office binary formats and the availability of those documents. We had already been talking about these issues in TC45 where there were a number of existing experts in the binary formats (including Apple, Novell, and Microsoft). Based on the feedback from the national bodies, Microsoft decided last week to take some additional steps in this area.
Brian goes on to describe the fact that MSFT will be making it even easier to gain access to the documentation for the legacy Office binary formats, promises not to sue you if — you know — you actually read the documentation and then apply this knowledge to implementing support for these formats, and then takes it one step further by not only providing a mapping from the binary formats to Office Open XML, but also promising to start a Binary Format-to-ISO/IEC JTC 1 DIS 29500 Translator Project on SourceForge.net in collaboration with ISVs.
Hey Micah (Dubinko),
Why don’t they just ISO standardize their binary formats? That’s “backwards compatibility” for ya. -m
Micah Dubinko | January 24, 2007 10:13 AM
Okay, so maybe this isn’t standardization, but as it relates to legacy file formats, isn’t this good enough? And either way, nice work, MSFT!
It just makes them more burnt…
Just because a standards body can’t recognize done-ness and declare victory, that doesn’t mean the fruits of that body have any bearing on reality.
I don’t agree with everything Sean McGrath writes in his latest post, as I think there are a lot of really smart people who have developed some really smart ways to handle the variable-width nature of XML w/o turning to malloc() every time the length of an element or attribute name reaches past any given preset constraint. That said, I can’t help but agree with,
Memory-based caches of “cooked” data structures are your friend.
Absolutely!
For you .NET developers, here’s a pre-written recipe that handles all of the dirty work of determining whether to create a new XmlReader or return the in-memory cached version, based on the generated ETag for the source file (see the Extended Overview below for a deeper understanding of how this works). To use this recipe you need do nothing more than create a new XmlServiceOperationManager when your application starts up, like so,
XmlServiceOperationManager myXmlServiceOperationManager = new XmlServiceOperationManager(new Dictionary<int, XmlReader>());
and then use the GetXmlReader method of the XmlServiceOperationManager, passing in the Uri (an actual System.Uri object, not the string value of the URI, though I guess it would be easy enough to create an overload that takes the string value of the URI. Another task for another day. ;-)) of the desired XML file to get an XmlReader in return, like so,
XmlReader reader = myXmlServiceOperationManager.GetXmlReader(requestUri);
That’s it! Now you can use your “new” XmlReader however you might need, and the next time that file is requested for processing, if it hasn’t changed, you save all of the time it would normally take to read the source file and convert it into an XmlReader, which is fairly significant.
Source code and extended explanation inline below. Enjoy!
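If you just want the shape of the idea before digging into the full listing, here’s a rough miniature of what such a manager might look like. To be clear, this is my sketch rather than the actual code below: deriving the ETag from the file’s last-write time is an assumption, and a production version has to deal with XmlReader being forward-only.

using System;
using System.Collections.Generic;
using System.IO;
using System.Xml;

// Rough sketch only -- not the full listing. Readers are cached keyed by an
// ETag derived (here, as an assumption) from the file's last-write timestamp.
public class XmlServiceOperationManager
{
    private readonly Dictionary<int, XmlReader> _cache;

    public XmlServiceOperationManager(Dictionary<int, XmlReader> cache)
    {
        _cache = cache;
    }

    public XmlReader GetXmlReader(Uri requestUri)
    {
        string path = requestUri.LocalPath;
        // Illustrative ETag: path plus last-write time, so a changed file
        // yields a new key and forces a fresh read.
        int etag = (path + File.GetLastWriteTimeUtc(path).Ticks).GetHashCode();

        XmlReader reader;
        if (_cache.TryGetValue(etag, out reader))
            return reader; // file unchanged since the last request: reuse

        // New or changed file: create a fresh reader and cache it. (Caveat:
        // XmlReader is forward-only, so the real recipe would need to cache
        // re-readable data and hand out a fresh reader over it each time.)
        reader = XmlReader.Create(path);
        _cache[etag] = reader;
        return reader;
    }
}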
Oh, and stay tuned for the next installment of this recipe where we learn how adding,
1 Part memcached
1 Part ETags
and
1 Part GZip encoding
… can turn your lame-a$$, performance-sucking web application into a lean, mean, kick-a$$-performing machine. For a precursor, see Joe Gregorio’s AtomPub presentation slides from this past OSCON. I assure you, it’s worth every second you spend studying this gem of a resource.
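In the meantime, here’s a rough taste of just the ETags + GZip portion, in ASP.NET terms. This is purely my sketch, not the forthcoming recipe: the memcached part is left for the real installment, and the handler path and file name are made up for illustration,

using System.IO;
using System.IO.Compression;
using System.Web;

// Illustrative handler: honour If-None-Match, otherwise send the file
// gzip-compressed when the client advertises support for it.
public class CachedXmlHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string path = context.Server.MapPath("~/App_Data/feed.xml"); // made-up path
        string etag = "\"" + File.GetLastWriteTimeUtc(path).Ticks.ToString("x") + "\"";

        if (context.Request.Headers["If-None-Match"] == etag)
        {
            context.Response.StatusCode = 304; // Not Modified: no body sent at all
            return;
        }

        context.Response.AppendHeader("ETag", etag);
        string acceptEncoding = context.Request.Headers["Accept-Encoding"] ?? "";
        if (acceptEncoding.Contains("gzip"))
        {
            context.Response.AppendHeader("Content-Encoding", "gzip");
            context.Response.Filter =
                new GZipStream(context.Response.Filter, CompressionMode.Compress);
        }

        context.Response.ContentType = "application/xml";
        byte[] data = File.ReadAllBytes(path);
        context.Response.OutputStream.Write(data, 0, data.Length);
    }
}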
I just saw a link in the del.icio.us hot list that caught my attention, called Data Portability, which bills itself as the open-standards stack for the ubiquitous sharing and remixing of data.
Nice, but RSS and not Atom Syndication? OPML and not XOXO (The XHTML Outlines Microformat), JSON or YAML? And no Atom Publishing Protocol? “THE” stack? What are the criteria here?
This all seems so arbitrary in what counts as “open” that I am suspicious of a hidden agenda at play here.
I’m all for open standards, data portability and anyone who wants to advocate it, but advocacy of “open” should be just that — open and not so closed, arbitrary or apparently biased. (If it’s not biased then someone didn’t do their homework.) We have enough agenda disguised in open advocacy clothes already.
- Grid computing will grip the attention of enterprise IT leaders, although given the various concepts of hardware grids, compute grids, and data grids, and different approaches taken by vendors, the definition of grid will be as fuzzy as ESB. This is likely to happen at the end of 2008.
- At least one application in the area of what Gartner calls “eXtreme Transaction Processing” (XTP) will become the poster child for grid computing. (see Gartner Research ID # G00151768 - Massimo Pezzini). This “killer app” for grid computing will most likely be in the financial services industry or the travel industry. Scalable, fault tolerant, grid enabled middle tier caching will be a key component of such applications.
- Event-Driven Architectures (EDA) will finally become a well-understood ingredient for achieving real-time insight into business processes, business metrics, and business exceptions. New offerings from platform vendors and startups will begin to feverishly compete in this area.
Once again into the breach, dear friends. I’ve made it a habit over the last several years to put together a list of both my own forecasts for the upcoming year in the realm of XML technologies and technology in general and, with some misgivings, to see how far off the mark I was from the LAST time that I made such a list. The exercise is useful, and should be a part of any analyst’s toolkit, because it forces you to look both at the trends and at potential disruptors to trends (which I’ve come to realize are also trends, albeit considerably more difficult to spot).
So, without further ado, I bring up the list from last year …
As the anti-OOXML crowd’s technical and editorial objections evaporate, and consequently as the reasonable people increasingly see that ISO is delivering a good result for them and jump ship, the rabid anti-OOXML misinformation campaign is ramping up. The basic strategy is to say that things are so bad that no improvement is possible, and indeed that any improvement is complicity.
But it is quite possible for the different sides to engage civilly and constructively.
On Friday last week the UNSW CyberSpace Law and Technology Centre organized a really good day-long seminar in Sydney on the technical and legal feasibility of implementing Office Open XML, to try to get people talking.
The morning was a technical meeting: I was honoured to be invited to speak first, with 30 minutes on ISO and SC34 standards, and I will be putting up my slides later. Other invited speakers included Matthew Cruikshank (who was very active in New Zealand’s vote) and Colin Jackson (the NZ government angle). Also speaking, in little 10-minute slots, were a representative of IBM (same old material) and Lars Rasmussen from (notorious VML-users) Google Maps (he has a meeting report here, but I don’t know why he has an “issue” with me; in any case, like Matthew, I think he is a goody). MS had a 3-man contingent, and people were obviously trying to be on their best behaviour: Oliver Bell from Singapore was there and had blogged. Not many fireworks; I was tired and grumpy from travel, so probably just as well. I was pleased to see Standards Australia’s Alistair Teggart there too; he made some good clarifications. Prof David Vaile was very interested in what people had to say, and frequently asked for clarifications or expansions. Gnome Foundation’s Jeff Waugh was lively (in fact, he called the technical issues boring… clearly a big-picture man), matched only by UNSW’s Pia Waugh. (Clarification: Jeff’s point was, I think, that other issues were more important than the technical/editorial issues of the Australian ballot comments.) (I am sure I have missed some who spoke!)
I’d stereotype the various opinions as people who didn’t see why there should be two standards, people who didn’t see the value of even one standard, people who saw the value in their own standards, people who didn’t see the value of their competition’s standards, and people who thought there should be many more standards (err, probably only me). (Updated: I originally had some names against these stereotyped positions, but as they are probably not even fair stereotypes I’ve removed them; they don’t help the gag. If you think I have misrepresented you in any of my blogs, please write and I will certainly try to fix things.)
The afternoon was the legal session, and very interesting. Unfortunately it only looked at the OSP and didn’t cover any standards law (its relation to the law of fraud, anti-trust, etc.), probably because in many countries there has been no relevant case law, and in each national jurisdiction the situation will be different. The US law is quite advanced (or, at least, explicit) here: Australia really needs some legislation to clarify the duties and rights of stakeholders in voluntary national and international standards processes. I have previously posted some material on this blog (Standards and IP) for people who are interested.
First up in the afternoon was legal background material by Ron Yu, a very likeable guy. He has made a report that is available from the CyberSpace Law Centre’s website. It was mainly a discussion paper raising various issues that people had made, rather than a definitive position on anything. Then MS’ Steve Mutkoski gave a talk on the OSP, mainly focusing on the similarities and differences between it and the Adobe, Sun and IBM equivalents. He was one of the legal team who drew up the OSP and pushed for less legalese in it (using “promise” rather than “covenant”, for example). His main thrust was that the differences between the Sun, IBM and MS licenses were only cosmetic. Steve made his points well.
As it turned out, from discussions it emerged that there was really only one bone of contention, which was the meaning of “required” in the OSP. Now this is something that I have blogged about before: see the lengthy comment (search for “Matthew:”) and also here (search for “Kurt #2”).
MS’ legal department have been absolutely hopeless in helping people figure this out, and if OOXML fails at the final vote, they and Steve Ballmer will deserve the lion’s share of the blame. Many in the open source community react strongly to the memory of MS’s FUDding on Linux patents; rather than (as I tend to do) saying “oh good, at least in the standards process they are opening up their IPR” (i.e. due to things like standardization and the OSP), the MS FUD has raised suspicion to the level where people say “since they clearly want to enforce their IPR, they cannot be genuine about the OSP” (i.e. there is some trick there). That Steve Mutkoski was so unprepared to answer questions about what “required” meant shows, I think, that this issue (which has been repeatedly and constantly raised over the last year) has completely flown over the heads of the MS legal department. This part of the QA session was a pretty disappointing performance.
The issue here is that the OSP only covers “required portions” of the spec: MS (and IBM, etc., in their equivalents) promise not to enforce their patents on those portions unless you sue them. When I looked at the OSP, I went ahead and looked at all the other licenses to see what “required portion” meant, since it was clearly some kind of legal term. IBM’s license is better, because it spells it out; MS thinks it is unnecessary to spell it out since lawyers would know, and they wanted to keep the OSP to one page. But they got it wrong: people think that normal language is being used.
W3C (and OASIS) dealt with this very problem. I think the OSP should be redrafted to follow their wording (and that of the IETF), and use “normative” rather than “required”. That aligns the promise with the language of the standards and clears up some potential for confusion.
Laypeople look at “required portions” and decide that this must be in opposition to “optional portions”. Here is the MS wording from OSP:
To clarify, “Microsoft Necessary Claims” are those claims of Microsoft-owned or Microsoft-controlled patents that are necessary to implement only the required portions of the Covered Specification that are described in detail and not merely referenced in such Specification.
Predictably, IBM’s rep was trying to promote this confusion. Quite a lot of chutzpah, considering that IBM in fact uses the same legal terminology in its covenant:
“Necessary Claims” are those patent claims that can not be avoided by any commercially reasonable, compliant implementation of the Required Portions of a Covered Specification. “Required Portions” are those portions of a specification that must be implemented to comply with such specification. If the specification prescribes discretionary extensions, Required Portions include those portions of the discretionary extensions that must be implemented to comply with such discretionary extensions.
IBM’s wording is much clearer and better, and “required portion” is indeed a common term in these licenses. However, what if Microsoft turned around and said “We didn’t define it as a required term, and now we want to charge licenses for patents”? Let’s put aside the common legal usage of “required portion” in licenses. Let’s also put aside the small likelihood that there could be non-junk patents in the area of document processing formats (considering the maturity of Unix Publisher Workbench, TeX and so on from the 1960s to the 1980s, not to mention ISO SGML (IS 8879:1986) and its applications since 1986 and before). And let’s put aside fraud issues, given the consistent public representations by dozens of top-level managers from Microsoft.
What happens with an ambiguous license? During the session I asked if anyone knew of any case law where “required” was discussed, having a nagging feeling in my memory. I have looked it up again and it comes up in the case Intel v. Via Technologies, 319 F.3d 1357 (Fed. Cir. 2003), which has a discussion here. In that case, the judgement was that “required” must be given the widest interpretation (to include “optional”):
Although we agree with Intel that its reading of the plain meaning of “required by” is a reasonable one, we disagree that its reading is the only reasonable one. First, the words “required by” without any clarification could mean either non-optional protocols of AGP 2.0 or electrical interfaces or protocols that are required to perform any specification “described” in AGP 2.0, including non-optional protocols for an optional specification. For example, books “required by” a school could mean books needed for (1) “required” (non-optional) classes; or (2) any class taken, including optional classes.
…
The word “optional” does not occur anywhere in the license agreement.
…
Thus, we conclude that VIA’s and Intel’s interpretations are both reasonable readings of the license agreement. The district court erred in holding that VIA’s reading of the agreement is the only reasonable one. Nevertheless, it was harmless error because, as there is ambiguity in the agreement, the district court properly granted summary judgment of noninfringement relying on contra proferentem.
…
When a contract is ambiguous, the principle of contra proferentem, under Delaware law, requires that the agreement be construed against the drafter who is solely responsible for its terms.
…
Contra proferentem has been held determinative in resolving ambiguity in a contract that, like the agreement here, is drafted by one party and offered on a “take it or leave it” basis without meaningful negotiations.
It would be interesting to know which other jurisdictions also allow contra proferentem: according to Wikipedia, these include Europe, California and international arbitration. Here in Australia, there are multiple cases that endorse the principle in various circumstances.
If I may throw a spanner in the works, the thing that I see missing from all these licenses is that they only seem to cover the use of patents where that use is unavoidable in implementing the spec. In other words, if there are multiple ways of implementing something, you have to use the way that is not covered by the patent. I think this is unacceptable, and something that MS, IBM and the others should fix. Don’t compete at this level, boys and girls, it is counter-productive: open up.
So it was a really enjoyable day, and I enjoyed catching up with Greg Stone, Matthew Cruikshank and the others during the breaks. I think Pia, David and the CyberLaw Policy Centre organizers did a really good job.
But outside in the world, confusion is still rampant. OASIS lawyer Andy Updegrove has never been known to say a positive thing about Microsoft nor a negative thing about IBM, and most of the time he is happy to be one link down the S-bend from Rob Weir’s mischief; however, he is really valuable when commenting on law. But it is interesting to see the level of misinformation among some of Updegrove’s readers.
A case in point. While I don’t see this issue as primarily IBM versus MS (that is one aspect, but there are also open source people and industry people and governments occupying all positions: pro, neutral, con, don’t care, be fair, make it right, etc.), nevertheless I think IBM’s strategy has long been that since they cannot prevent DIS 29500 being fixed and adopted, they need to shoot the messenger and blacken ISO’s name. In fact, IBM’s Bob Sutor is quite open about who IBM is really interested in, no shame in that: when asked about Open XML and ODF and ISO, he replies:
I think we have collectively educated and permanently changed the policies of procurement people in many organizations around the world.
Recently, IBM marketing guy Rob Weir has not had much to blog on, since according to JTC1 rules (which JTC1 is trying to get strictly enforced following its meeting in Australia last month) ballot resolution discussions are private. This rule is intended to stop hysteria and allow the participants full range to describe options without outsiders citing proposals and options as done deals (i.e. exactly what Rob has been doing, such as his complaint that the early issues dealt with were the trivial ones, to try to prop up the crumbling argument that there would be no changes to DIS 29500 in response to the National Body comments). People who are interested in participating have had a year to join their national standards bodies’ committees and come to grips with all the issues and procedures. So Rob came up with a great spin: Microsoft is bad because, PERFIDY! they are following the rules…
Anyway, Andy’s blog on this was a real classic according to the formula. He picks up on Weir’s message of the day, and links to both Weir and MS’ Brian Jones, which is some balance. Then, in further imitation of fairness, he quotes “Pamela Jones” (Groklaw), but her article is also just a riff on Weir’s tune. Pamela manages to find some minor wording issue based on some material on the SC34 website (the FAQ is really clear on the issue, I thought). Pamela: because most people in ISO committees do not have English as their first language, it is not a good idea to try to find the worst meaning in phrasing; material on a general webpage is just general material, and you are wasting people’s time by trying to read too much into it. Then, based on the spurious idea of the vote being taken at the meeting, off she goes with imagined opportunities for conspiracies and so on. Again, it is the basic strategy: FUD. Take the most lurid interpretation possible, try to discredit the process.
In politics, this kind of spin is called “inoculation”. What you do is try to get ahead of your opponent by coming in early with responses to them. For example, “Six lies the Democrats will tell you”, or whatever. The intent is that the public will then hear any statements through the framework established by you. (Indeed, IBM’s Bob Sutor even talked of having a competition for this.)
Andy’s blog has a similar comical moment: after giving some of the smaller details of an ECMA press release, where ECMA rolls over on some of the most contentious issues which the Echo Chamber had previously told us would never be changed, Updegrove says:
Despite the meagerness of the sampling of recommendations described in the press release, it is possible to get an idea of the degree to which Ecma and Microsoft are willing to go in order to secure a final, favorable vote.
What, Andy: no “How great, we got what we asked for!”? Instead, a complaint about the press release being meagre (oh no, not enough material for a convincing spin), and then the corker that this shows “the lengths” they will go to! PERFIDY! They are refusing to act unreasonably! PERFIDY! They are giving in to user demands! So despite this utterly clear evidence that the process looks like it is working (ECMA proposes a standard, national bodies consider it and make comments, ECMA and the national bodies work out constructive solutions to them, the way that every other fast-tracked ISO standard proceeds), it is supposed to be bad news. It is hard for me to see it as anything but kind of pathetic.
Anyway, the thing that grabbed my attention about this blog item was that there had been four small discussion threads. And all of them were based on wrong information.
A world of confusion. The emperor’s new clothes are tremendously well-ventilated.
The XML 2007 Conference has come and gone, with, as usual, a number of thought-provoking talks and controversies. During the evening of the first day, there was a special XForms Evening, with a number of the industry gurus in that space providing very good examples of why XForms is a compelling technology and here to stay.
In the final keynote session, though, Elliotte Rusty Harold sounded a somewhat more alarming note, indicating that while XForms does have a huge potential, there are no killer apps out there for it, and without significant support from the various players in this space it will be dead within the year.
I’ve known Chime and Uche for a while now. We all have. In fact, if it wasn’t for the Ogbuji family I doubt XML would, or even could, have been much more than a passing fad. Fortunately, that’s not how things played out.
Every generation has its revolutions, and each revolution has its revolutionaries. Our generation has Chime and Uche, and to be quite honest, if that’s all our generation were to ever have, it would be more than enough. Our generation is one of the lucky ones. Sometimes that’s just the way things work. And that’s certainly how it worked out for us this go round.
Two recent entries, one in the form of a blog entry from Dare Obasanjo, the other in the form of a post to the FeedSync list from Steven Lees, both appeared in the last 24 hours:
ADO.NET Data Services (Astoria) Transforms SQL Server into an Atom Store
This is sick. With Astoria I can expose my relational database, or even just a local XML file, using a RESTful interface that utilizes the Atom Publishing Protocol or JSON. I am somewhat amused that one of the options is placing a RESTful interface over a SOAP Web Service. My, how times have changed…
It is pretty cool that Microsoft is the first major database vendor to bring the dream of the Atom store to fruition. I also like that one of the side effects of this is that there is now an AtomPub client library for .NET Framework.
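To give a concrete picture of the idiom Dare is describing, here’s roughly what consuming one of these Atom-speaking endpoints looks like from .NET. Note this is plain HTTP plus XmlReader, not the Astoria client library itself, and the service URI below is made up,

using System;
using System.Net;
using System.Xml;

class AtomFeedPeek
{
    static void Main()
    {
        // Hypothetical Astoria-style resource URI; any AtomPub collection works.
        Uri uri = new Uri("http://localhost/MyData.svc/Customers");

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Accept = "application/atom+xml"; // ask for the Atom representation

        using (WebResponse response = request.GetResponse())
        using (XmlReader reader = XmlReader.Create(response.GetResponseStream()))
        {
            // Count the Atom entries in the returned feed.
            int entries = 0;
            while (reader.Read())
                if (reader.NodeType == XmlNodeType.Element &&
                    reader.LocalName == "entry")
                    entries++;
            Console.WriteLine("{0} entries in the feed", entries);
        }
    }
}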
Of course, I’m sure there will be many who will contend that GData, and therefore Google, were the first to bring the “dream” of an Atom store to fruition. My bad: Dare stated “first major database vendor”, which as far as I know is a true and fair statement. That said, I’m leaving in my props to Joe Gregorio cuz’ he deserves both the credit and attention, regardless of the fact that he isn’t a major DB vendor either. And to be completely honest, Joe Gregorio not only brought forward the original dream of the Atom store, but was the originating dreamer that brought AtomPub into existence, quietly building both the client and server pieces of this dream while at the same time acting as the (lead?) editor on a two-man *ROCKSTAR* team, backed by some of the brightest minds in the industry to ensure that the final result was what it needed to be. But let’s try and set aside differences in perspective for now and take a look at what Steven Lees has to say,
… or Girl. Pick whichever most closely resembles your gender and then apply this selection to the following…
Mike Linksvayer? The major political issue of today?
Music distribution companies are only one of the forces for control and censorship. The long term issue is bigger than whether private ownership of 21st-century printing presses should be permitted. The issue is whether individuals of the later 21st-century will have self-ownership.
The trouble with standards is that there are not enough of them. There is a strong public interest in having the interface technologies of market dominators (which would include near monopolists and long-term super-profit-takers) out in the open, unencumbered, zero royalty, non-discriminatory licensed, and with the documentation QAed by an independent group which may include experts and rivals and stakeholders.
And there is indeed a great system set up for this:
First, I should clear up a misunderstanding that many people fall into:
Second, for ISO standards,
Next, I would like to mention that ISO has a very wide range of publication types, from International Standard to Technical Report to Publicly Available Specification, and it certainly may be the case that some technologies (such as obsolescent or fast-changing technologies) would be better made available using a lesser type.
My key point is that once a company’s success in an area brings it to the point of market domination (or long-term super-profit-taker) then anti-trust regulators need to ensure that their interface technologies are open enough for the usual level-playing field concerns to be addressed. It needs to be just a cost of doing business, once you reach a certain point.
Obviously this applies very directly to Microsoft. But IBM is also a market dominator (indeed, monopolist) in the mainframe game. And Google looks similar for search; Apple with the iPod; Adobe with PDF; and so on. There may be non-American companies that it applies to as well, I suppose. (And there are technologies that achieve market dominance outside a dominant vendor: PKWARE’s ZIP for example. These need to be standardized as well.)
The current standardization effort for DIS 29500 (Office Open XML) provides a good backdrop for this. For almost two decades, independent software developers have been calling on Microsoft to open up their interface and document formats: the SAMBA developers, for example. In 2004, the European Union recommended to Microsoft that it should put its document formats forward (in an XML version) for international standardization. As Microsoft has done this, first through ECMA then at ISO, it has prompted a vicious campaign against the effort, led by business rival IBM but also by stakeholders allied to some open source software substitutes for Microsoft Office. (In particular, stakeholders allied to the Sun-led Open Office application and the ODF format; note however that other Open Source stakeholders, notably those allied to Novell, have welcomed the standardization effort.)
Some of the objections to DIS 29500 are in fact objections to the idea of a Microsoft-derived technology becoming a standard. In fact, several standards have come that way. The recent ISO Open Font standard, for example, is based on Microsoft and Apple’s OpenType fonts (TrueType, etc.).
Other objections to DIS 29500 come from the other direction: DIS 29500 is flawed not because of what is in it, but because of what is outside its scope: media formats, macro languages, printer driver configurations, and so on. The most extreme version of this argument is to find fault with DIS 29500 because it does not describe the (50 or so different) earlier binary formats for Office and its component products. (When this complaint is made in the same breath as saying that the 6000-page draft is too large, it does smack a little of insincerity.)
I usually respond by pointing out that a standard needs to be scoped, that the standard is a work in progress, and that it is impossible for ISO committees and National Bodies to find enough volunteers to do this work (both because experts are scarce and valuable, and because idealism is not stimulated by contact with market dominators).
But I should have been articulating this: yes, there should be documentation for the binary formats, and indeed for any format, API or protocol used for interfacing computer systems which gain market dominance. There does need to be more rather than less, and the costs (which are ultimately the costs of decent QA and education) need to be met once the spurious controversy that has unsuccessfully attempted to derail DIS 29500’s progress is over. (Of course, the lack of a tangentially related standard is no reason to reject a standard; we need to start from somewhere.)
Now many businesses are naturally coming around to opening up their technologies in a similar way: look at Sun, for example, with its Open Solaris, Open SPARC, and its steps towards opening up Java. Public policy makers need to foster a procurement and regulatory environment where the winds of change for openness also blow refreshingly on market dominators.
The European Union was completely correct in asking for MS to adopt standard notations (XML) for the Office documents, and to standardize the schemas (through DIS29500), just as they were right to ask OASIS to standardize ODF (IS 26300). However, I hope this is the start of a larger movement for more: all document formats, all APIs, all protocols which have significant market domination need to be made available through one of the ISO standardization processes. All these interfaces are objects of legitimate interest for public policy for reasons of information ownership, level-playing field access, anti-trust, and even just from a procurement angle to ensure that systems are adequately documented and have had adequate attention paid to internationalization, harmonization, accessibility, and conformance testing: basic QA. And where the market dominating technology belongs to a market dominated by a single player (or cartel), then that player needs to bear the cost of the standardization effort, as a normal cost of business.
Now, my proposal here is not a total package: issues of conformance with external standards still need to be in place. After the statement “Here’s what we do” comes the natural question “Is it good enough?”, and that belongs in a separate blog.
You see, if you draw the right graph, maybe you’ll see the gaping hole in it, the Next Big Thing.
I don’t know, Tim, but am I the only one that sees blips on a radar screen? ;-)
Who wouldn’t want to expand the human communication spectrum?
Absolutely!
Why aren’t more people thinking about this stuff?
They (er, we) are. But instead of thinking and talking (Update: That process has been going on for quite some time now), we’re building and delivering.
BTW… Have I ever mentioned just how much I enjoy releasing projects on January 1st of each year? ;-)
(Something tells me 2008 is going to be a big year.)
[2:15pm] elarson: xmlhacker: you should patent the use of capital letters in association with the ‘:D’ emoticon during communication in order to protect your communications intellectual property ;-)
The above quote comes from Eric Larson in a recent IRC conversation, after his recent post on “Patent Reform” sparked a conversation related to the 1-Click ordeal from 7 years ago (I was working on the Microsoft Passport team at the time, so those aware of that overall situation will understand what I mean when I say I had a first-hand look at just how much fun dealing with the ugliness of the patent system truly is). Those of you who have ever exchanged an email with me, or been a recipient of an email from one of the mailing lists I have posted to, might recognize what Eric is referring to. For those who have not: at the bottom of all of my email communications you will find the following at the beginning of my signature,
/M:D
… which is intended to represent my first and middle initial, my first initial being implicitly bound to xmlns:M="urn:publicid:Peterson:Patronymic+Surname:EN:1.2" (<- If you don’t get it, don’t bother asking. It’s not all that clever *OR* funny ;-)) using an XML-ish namespace extension-based syntax.
Of course to those of you who have emoticons turned on in your email reader the above will translate to,
There is something about XML that makes people go crazy, in particular people trying to make standards: it’s that ol’ tag fever agin, Maude. I think I know what that thing is: the emphasis on “standards = good”, combined with the desire for complete schemas and the idea that organizing schemas by namespace is the way to shoehorn requirements (rather than being a way of expressing results).
The result: vocabularies where unnecessary order and structuring constraints are given. You can tell when a standard schema is over-specified, because people using it will just snip out the low-level elements they need and plonk these in their own home-made container elements.
I have noticed this in a few schemas I have been working with recently: in fact, the trend I notice is that people start off with their own home-made schema, then “adopt” the standard by finding any elements that have close semantics to their home-made elements, and changing the name of the home-made element to the standard name. SVG in ODF looks like an example of this, and there is another standard I have been working with recently that has the same issue: when you adopt arbitrary portions of a cohesive standard, are you really using or abusing that standard?
I suppose there is a case to be made that transitional schemas should be treated seriously.
One software engineering idea that has stuck with me over the years (which I wrote about in The XML & SGML Cookbook) is the twinning of cohesion and coupling. Basically, when some information is highly coherent (think of Eve Maler’s Information Units), i.e. it belongs together semantically and would not make much sense in isolation, it deserves an official container.
Conversely, you should try to reduce coupling of information that is not cohesive.
A rule of thumb for many situations is that industry standards groups (and, indeed, in-house schema developers) may be well advised to standardize data elements eagerly but container elements suspiciously: standardize the jellybeans, not the jars. The next bloke may like your jellybeans but have his own jars.
Various approaches to doing this come to mind: think in terms of creating a vocabulary rather than a language; split your industry standard in two, with the tightly-coupled elements in one normative section and the loosely-coupled elements in another, non-normative section, perhaps even with different namespaces; use open content models and order-independence for loosely-coupled elements.
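For instance, here is a small hand-waving sketch, with invented element names, of what that open, order-independent style can look like in a schema: a repeating choice standardizes the jellybeans while leaving the jar open to the next bloke’s elements,

<!-- Hypothetical fragment: person and date are the standardized jellybeans;
     the wildcard keeps the jar open to elements from other namespaces,
     in any order and any number. -->
<xs:element name="record" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType>
    <xs:choice minOccurs="0" maxOccurs="unbounded">
      <xs:element name="person" type="xs:string"/>
      <xs:element name="date" type="xs:date"/>
      <xs:any namespace="##other" processContents="lax"/>
    </xs:choice>
  </xs:complexType>
</xs:element>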
Another upside of this approach is that it reduces the number of trivial issues for committee members to get excited about.
Campaign Widget | Creative Commons
Help support CC by putting this widget on your website, blog, or social network!
You don’t have to follow The Oil Drum to know that energy prices just keep climbing. Even if supply holds up, huge demand will make prices a problem for a long time to come. Can the Internet help reduce that demand?
Watch to the very end to understand the title.
TED | Talks | Larry Lessig: How creativity is being strangled by the law (video)
Larry Lessig gets TEDsters to their feet, whooping and whistling, following this elegant presentation of “three stories and an argument.” The Net’s most adored lawyer brings together John Philip Sousa, celestial copyrights, and the “ASCAP cartel” to build a case for creative freedom. He pins down the key shortcomings of our dusty, pre-digital intellectual property laws, and reveals how bad laws beget bad code. Then, in an homage to cutting-edge artistry, he throws in some of the most hilarious remixes you’ve ever seen.
Ok. So perhaps this is not a conspiracy, because it’s out in the open, but eBay’s role in keeping feedback ratings artificially high is something worth discussing.
My argument is not about retaliatory feedback, but let’s discuss that briefly. Anyone who has used eBay much knows that feedback retaliation happens. You get treated badly, you leave feedback that says so, and the recipient leaves you bad feedback, sometimes even lying. This is a disincentive for leaving anything negative in the first place. eBay could take steps to make the system more fair, but they don’t. In fact, they have an incentive to leave the system exactly like it is. Retaliation discourages people from leaving bad feedback, and less bad feedback makes the entire marketplace look more trustworthy.
But that could be a bit of me creating a conspiracy, and perhaps eBay has better intentions than the previous paragraph gives them credit for. I considered this a possibility until one of my most recent forays into the depths of their system.
I was sold a counterfeit item on eBay. I paid about $100 for it and when it arrived it was obviously fake. After the seller did not respond to my emails, I filed a claim with PayPal (which is owned by eBay, for those of you who are not familiar). They offer “buyer protection” designed to make sure you don’t get ripped off. PayPal sent some messages back and forth and after about a month told me I would need to get the item appraised and send them evidence that it was counterfeit. This can cost hundreds of dollars, and discourages a cheated buyer from proceeding with the process, but let’s allow that it is necessary. On principle, I continued and found someone to certify that I had received a fake. After more than two months of fighting, PayPal finally resolved the dispute on my behalf and sent me a refund. That is all well and good. However, what really got me was the email they sent notifying me of all this:
PayPal has received the item in dispute. A refund will be issued to your
PayPal account within 5 business days.
PayPal regrets any inconvenience you may have experienced.
This claim has been resolved amicably. Please consider this when leaving feedback for this seller.
Thank you for your cooperation.
Sincerely,
Protection Services Department
“Amicably”? What was amicable about this claim? I spent a lot of money and effort trying to get a refund that the seller refused me for months. Why would PayPal tell me to consider this claim amicable when I leave feedback? Well, they have the same incentive as before. A marketplace with no negative feedback looks safer. But none of us want to participate in a system where a seller who regularly sends out counterfeit items is ranked highly, simply because eventually buyers can get their money back by the actions of a third party.
I do not hold out any hope that eBay, which is thriving, will correct its ways. I think this could eventually lead to a third-party system (and several have popped up) for creating real and honest feedback about buyers and sellers. I would certainly use such a service - I might even pay for it - because I want to truly know how much to trust the people I interact with. I don’t want to be falsely reassured that everything will be ok, even though that seems to be the tactic eBay is betting on for continued success.
To be continued? That’s what I intend to find out.
More when it seems appropriate to report more.
Embedding XML islands inside HTML documents is an old idea, and lately the debate about how to standardise this in HTML5 has been heating up again. As someone working on an HTML to PDF converter with strong XML support, I have a keen interest in the outcome of this debate. It would be very helpful if HTML and XML could be mixed and matched as necessary. So let me throw my five cents into the ring. (It would be two cents, but in this decimal age that would round down to zero).
Living in a place that used to think of itself as the bright future of America, it’s strange to me how people think that particular places will be the bright future.
Efforts have been underway recently to develop a schema language for JSON, analogous to the XML Schema Definition Language (XSD) or RelaxNG languages in the XML arena. Similarly, a JSON transformation language is being proposed and bandied about in various AJAX circles as web2 developers attempt to take the best of what XML has to offer and recast it from the angle-bracket modality to the braced modality.
These efforts are intriguing, and for the most part people within the XML community are now affecting the same rather confused expression that I remember seeing on the faces of the SGML generation as they watched the young turks of the XML movement push their view of the world out to the world - “Didn’t we already DO that?”
Yes, I know that looking 43 years ahead is ridiculous for technology. But might it make sense for a place?
So my paper was accepted at XML 2007. I look forward to seeing some of you folks there. The schedule looks interesting not just because I see topics that I enjoy, and some about which I want to learn, but also because I see a lot of stuff that makes me think: “Oh, it’ll be fun to debate that one”. Anyway for my part I’ll be presenting “XML Data modeling for Web publishing workflow”, which is a pedestrian, but accurate title. I’ve been proud of the architectural expertise I’ve provided to help manage content workflow for Sun’s main sites over the past few years, but I’ve never been given permission to discuss it much. It will be fun to talk about some of that with Kristen Harris, the director with whom I’ve closely worked in that time.
Chances are zero for XSLT2 in Orcas. Orcas is spec frozen now, and we can’t do a lot of work hoping that W3C will finalize the spec before we finalize the code. Also, Orcas is a tools release, so we can’t take anything other than critical bug fixes in the .NET 2.0 “red bits”. We are actively working on XSLT2 however, and there will be CTP releases starting in roughly the Orcas timeframe. It’s not clear how it will ultimately be released.
January 9th, 2007
Microsoft XML Team’s WebLog : XSLT 2.0
Our users have made it very clear that they want an XSLT 2.0 implementation once the Recommendation is complete. A team of XSLT experts is now in place to do this, the same people who have been working on the XSLT enhancements that will be shipped in the forthcoming “Orcas” release of Visual Studio / .NET 3.5. Orcas development work is winding down in advance of Beta releases over the next several months, so there is no possibility of shipping XSLT 2.0 in Orcas. The XSLT team will, however, be putting out Community Technology Previews (CTP) with the XSLT 2 functionality and appropriate tooling as the implementation matures. The eventual release date and ship vehicles (e.g. a future version of .NET or a standalone release over the Web) have not been determined, and depend on technical progress, customer demand, and other currently unknowable factors.
So Orcas Beta 2 was released a few months back. Depending on your interpretation of various announcements, VS.NET 2008, which is quite stable already in Beta 2, will release somewhere between November and February. So that should mean we can expect an XSLT 2.0 CTP somewhere between now and March?
I must admit that I am getting quite excited by the anticipation of the first CTP release. Of course I can’t imagine MSFT would do something as silly as not deliver on something that the community fought as hard as we did to gain in the first place,
Our users have made it very clear that they want an XSLT 2.0 implementation once the Recommendation is complete.
… would they?
Not the MSFT XmlTeam I know and love**, that’s for sure!
A team of XSLT experts is now in place to do this, the same people who have been working on the XSLT enhancements that will be shipped in the forthcoming “Orcas” release of Visual Studio / .NET 3.5.
See!!! :D
** DISCLAIMER: The MSFT XmlTeam you know and [love|hate] might differ from the MSFT XmlTeam I know and, at present time, [HEART]. ;-)
Hey MSFT XmlTeam: How about a time frame update? Oh, and while we’re at it, will the XSLT 2.0 CTP source be a part of the forthcoming MS.NET Open Reference release? That would be *FANTASTIC*! :D
/me is waiting eagerly to hear when we all get to play with the first MS.NET XSLT 2.0 CTP release. :D
The bumpy ride to ISO standardisation of Microsoft Office Open XML is receiving a lot of attention here on XML.com, and drawing out a lot of strong opinions on both sides of the issue. Frankly, the intense coverage given to every minor detail in the specification bores me to tears, even though I see the need for it, and I think that there is a larger story behind these events that is not receiving enough attention.
One of the most startling aspects of the last year, to me, really shows the disruptive potential of standards: bitter enemies are hard at work making systems that also benefit their enemies in pursuit of a higher goal. A world turned upside down!
Examples include:
All this competition and bile channeled productively! No wonder people are freaked out. :-)
But the paradoxes don’t just mean that enemies act like friends, it seems. Friends can also get accused of being enemies. There is a very interesting post, ODF vs OOX: Asking the wrong questions (hat tip to Doug), on the blog Spreadsheet Proctologist, which I like very much because it brings out that ease of implementation is just as much (and perhaps mostly?) a question of what your starting base is (i.e. your native data structures and functions) as it is a question of what information and forms the external format provides.
But the readers’ comments include statements like “Your self-annihilating devotion to Microsoft is too evident.” and “Just by touching MS OOXML, you are playing their pawn in the only purpose for this exercise: to kill ODF adaption and therefore the threat of Open Office and others as a replacement for Microsoft Office products.” Is to laugh! Now GNU developers are pawns and devotees of Microsoft! That GNU software, ooooooh, just another Bill Gates plot!
As a side note, but related to the theme of finding strategies to make the acts of people’s enemies as productive as the acts of their friends, I think that Stephane Rodriguez’s comments (to that blog and, just as circumspectly, elsewhere) on the calculation chain should be paid more attention. (Sometime I will look up whether it made it into any national body comments for the BRM; I hope so.) The calc chain needs to be reviewed with the question “Is the base case a little too complicated still?” It is a mild and productive question: I suspect programmers would be happy if some more leeway were provided. Now whether the issue is an Office one or a DIS 29500 one, I don’t know; but the issue should not be dismissed just because it was deposited by an ostensibly rabid whirlwind! Quite the reverse.
I’ve just glanced over the 3549 or so comments put in by various national bodies for the recent ballot on DIS 29500. I’ve made a table listing the countries that commented, together with their votes and whether I think most of their issues could be resolved during the upcoming Ballot Resolution Meeting next year.
The bottom line: there are a few touchstone issues that may be tricky, but it is difficult to see from the comments why DIS 29500 could not be successfully fixed and approved as an ISO standard. The particular touchstone issues I see are that spreadsheet dates need to be able to go before 1900, that DEVMODE issues need to be worked through more, that the retirement of VML needs to be handled now, and that there needs to be a better story for MathML.
Apart from these, there is a sea of details that are eminently fixable: typos, clarifications, fixing schemas against closed lists, the use of more standard notations for fields, encryption, conformance language, refactoring the spec: editorial and syntactic rather than data model or wholesale semantic changes. At the other extreme, there are various non-starters which I expect have little hope, since they run counter to the rationale for the spec: adopting SVG, or adding various frustrating little things in the name of compatibility with ODF. (Some NBs even call for ODF's blink element, even though blink has been removed from HTML since it can cause epileptic fits!)
You can find a full list of national votes from the SC34 website. I was pleased to see that all the issues I raised ended up in Standards Australia’s comments (it abstained on the vote, but its comments still go in the mix.)
The thing that interested me in this table was whether I thought each National Body's comments could be resolved well enough to change their No vote to a Yes vote. I am assuming there is no point to a standard that Ecma and Microsoft could not buy into. One of the most interesting documents in the collection of comments from different bodies is Ecma's own contribution: basically they accept almost all of Japan's technical issues (which have a lot of overlap with other NBs' comments), which augurs well for many of the other changes.
So I provide a rating as to whether I expect that a National Body's vote will definitely stay no, probably stay no, or probably change to yes as a result of a successful BRM. Caveat: the NB comments do provide a much clearer indication of each National Body's thinking than the raw Yes/No/Abstain votes (which are utterly useless in predicting a final outcome); however, I would be a little more confident in my ratings if SC34 or ISO had released information about which NBs had ticked the normal box saying they might change their mind if the issues were resolved. I guess you would rate me as an optimist in general about the process, and I am not saying that all these NBs will necessarily vote yes ultimately; but there is quite a bit of commonality to the comments.
I also have a column marked "Indie," which has an X if it seems the NB undertook independent review of the specification. And one marked "Parrot," where the NB is reproducing (perhaps with some localization or sorting or selection) someone else's material, turning the standards review process into a form-letter campaign. I have mixed feelings about parrot items: on the one hand, an NB is free to consider whatever issues it likes, and some NBs have procedures that may favour the garrulous; on the other hand, it represents a hijacking of valuable review time to obsess over the same issues rather than bring fresh eyes.
The reviews that seem to me the best are those where an NB focuses on its areas of expertise or national interest: Japan is very interested in schemas, Israel is very interested in right-to-left text, Ireland is very interested in correct references, Australia is very interested in clarity, Canada is very interested in assistive technology, Tunisia is interested in the application to mobile devices, Ghana (with a large Arabic influence) is interested in IRIs, and so on. The comments that seem least useful are the parrot comments and the ones with vague recommendations. (I expect that these are the first comments that many of the NB committees or staff have ever sent in, so it is a good training exercise nonetheless.)
And there are some nice touches in there, where perhaps some cultural values slip through: Switzerland's comments are a list of problems they have actually rejected, with the details why, and Jordan and Turkey both have dignified documents that explain their positive reasons. Some of the parroted comments are unnecessarily ranty, but only a few were mad: the US comments in one place want to remove OPC because it is not present in the "pre-existing binary format," but then they want to get rid of the compatibility elements because they are a "museum." They don't need to worry about consistency because they are voting yes anyway: some of the comments are like that, there only to allow the cake to be both had and eaten. I expect that several NBs are not really attached to some of their comments.
The second-last column is "Off-topic," which is where the NB's comments include material that the BRM cannot discuss. These are typically issues concerning IPR. MS needs to spend a bit more effort on this: Switzerland's comment is really interesting on this point.
The final column marked “radical” is where a National Body’s comments include something that I think will be a challenge for MS or Ecma or ISO to support. I don’t include things like changing minor notations or providing better text explanations for things: I think the Ecma comments show a willingness to have those. However, where some change involves a wholesale alteration of the technology or its implementation, I would be surprised if it were acceptable. This is because for every nation that is voting “No” because they really prefer ODF, etc, there are two who are voting in favour because OOXML is what it is.
| Country | Vote | Really No? | Probably No? | Probably Yes? | Indie | Parrot | Off-topic material | Radical |
|---|---|---|---|---|---|---|---|---|
| Australia | Abstain | - | - | - | X | | X | |
| Austria | Yes | | | (X) | | | X | |
| Brazil | No | | | X | | X | | |
| Bulgaria | Yes | | | (X) | | | X | |
| Canada | No | | | X | X | X | | Use DrawingML rather than VML |
| Chile | Abstain | - | - | - | | X | | Field formatting. Use MathML, Use SMIL, Use SVG, Use ODF |
| China | No | X | | | | | | Review time |
| (Document 13) | ? | | | X? | | | | Remove VML |
| Colombia | Yes | | | (X) | | | | OPC to separate standard |
| Czech Republic | No | | | X | X | | X | |
| Denmark | No | | | X | X | | X | |
| Finland | Abstain | - | - | - | | | | Dates before 1900. Remove VML. Use MathML |
| France | No | | | X | X | X | X | Dates prior to 1900, remove math pending MathML 3 |
| Germany | Yes | | | (X) | X | X | | Dates prior to 1900 |
| Ghana | Yes | | | (X) | X | X | | Dates prior to 1900. (Replace VML with DrawingML, adopt MathML) |
| Great Britain | No | | X | | X | X | | Add ODF-isms. (Replace VML with DrawingML, adopt MathML) |
| Greece | Yes | | | (X) | | | X | Dates prior to 1900. (Replace VML with DrawingML, adopt MathML) |
| India | No | | | X | | X | X | Use MathML, pre-1900 dates |
| Iran | No | | | X | | X | X | Dates before 1900. Add ODF-isms |
| Ireland | No | | | X | X | | | Dates before 1900 |
| Israel | Abstain | - | - | - | X | | | |
| Italy | Abstain | - | - | - | | | | Reference implementation, test suite |
| Japan | No | | | X | X | | | Publish OPC as separate standard |
| Kenya | Yes | | | (X) | | X | X | Dates before 1900. Remove DrawingML |
| Korea | No | | X | | | X | | Needs interoperability with ODF. Remove VML and DrawingML |
| Malta | Yes | | | X | | | | |
| Mexico | Abstain | - | - | - | | | | Dates before 1900 |
| New Zealand | No | | X | | | X | X | Rename elements, vague |
| Norway | No | | | X | | | | Split out DrawingML. Split out OPC |
| Peru | Abstain | - | - | - | | | | Dates before 1900 |
| Philippines | No | | | X | | | | Dates before 1900 |
| Poland | Yes | | | (X) | | | | |
| Portugal | Yes | | | (X) | X | X | X | |
| Singapore | Yes | | | (X) | | | | |
| South Africa | No | X | | | | | X | Rewrite based on ODF. Make OPC a separate standard. Remove DrawingML |
| Switzerland | Yes | | | (X) | | | | Dates before 1900 |
| Thailand | No | X | | | | | | Time for review |
| Tunisia | Yes | | | (X) | X | | | |
| Turkey | Yes | | | (X) | | | | |
| US | Yes | | | (X) | X | | | Remove VML, DrawingML, OPC, compatibility, dates before 1900 |
| Uruguay | Yes | | | (X) | | X | | (Replace VML with DrawingML, adopt MathML) |
| Venezuela | Yes | | | (X) | | X | | |
Update: To ensure proper context is propagated, as per Douglas Crockford’s follow-up comment below,
The context of my statement was Ajax data transfer. In that specific context, XML is in fact being replaced with JSON. I didn’t say anything about doing dishes.
Which, I believe, is an absolutely fair statement to make. In fact, if you were to run a couple of quick queries on the two pages this article spans you could determine quite easily that yes, in fact, this article had a heavy tendency to use the word ‘AJAX’,
Page1//html:p[contains(.,'AJAX')] = https://personplacething.info/service/proxy/return-xml-from-html/?uri=https://www.infoworld.com/article/07/09/07/crockford-ajax_1.html//html:html/html:body//html:p[contains(.,'AJAX')]
Page2//html:p[contains(.,'AJAX')] = https://personplacething.info/service/proxy/return-xml-from-html/?uri=https://www.infoworld.com/article/07/09/07/crockford-ajax_2.html//html:html/html:body//html:p[contains(.,'AJAX')]
What about dishes?
Page1//html:p[contains(.,'dishes')] = https://personplacething.info/service/proxy/return-xml-from-html/?uri=https://www.infoworld.com/article/07/09/07/crockford-ajax_1.html//html:html/html:body//html:p[contains(.,'dishes')]
Page2//html:p[contains(.,'dishes')] = https://personplacething.info/service/proxy/return-xml-from-html/?uri=https://www.infoworld.com/article/07/09/07/crockford-ajax_2.html//html:html/html:body//html:p[contains(.,'dishes')]
… Well, once again this statement can easily be determined to evaluate to true. Thanks for setting things straight, Douglas!
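As an aside, here's a minimal local equivalent of the queries above, sketched in Python with lxml rather than going through the proxy service (assuming the InfoWorld URL still resolves; note that lxml's tag-soup HTML parser is namespace-free, so a plain //p stands in for //html:p):

```python
# Minimal sketch: run the same contains() tests locally with lxml,
# instead of routing through the return-xml-from-html proxy service.
import urllib.request
from lxml import html

url = "https://www.infoworld.com/article/07/09/07/crockford-ajax_1.html"
with urllib.request.urlopen(url) as resp:
    doc = html.parse(resp)  # tolerant, namespace-free HTML parsing

# The same tests as the Page1 queries above.
print(len(doc.xpath("//p[contains(., 'AJAX')]")), "paragraphs mention 'AJAX'")
print(len(doc.xpath("//p[contains(., 'dishes')]")), "paragraphs mention 'dishes'")
```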
On a related note, isn’t it cool how you can combine something as ubiquitous as a URI path segment with something as ubiquitous as HTML turning the entire *live web* into an XPath query-able dynamic database as a result?
Anyone ever tried to do that same mapping with JSON?
Just wondering… ;-)
Enjoy your XML and JSON enhanced WebDevWeekends, everyone! :D
Update: I should quickly point out that my comment below regarding the comparison to Dave Winer had nothing to do with Crockford's contributions to the development community — Douglas Crockford is a *HELLAVU* hacker and his contributions are both obvious and beautiful (as in Beautiful Code). My statement was directed at his statement "Fortunately, XML has been replaced by JSON," which, if Dave Winer had invented JSON, is exactly the kind of statement you would expect him to make. Suggesting that XML has been replaced by JSON makes about as much sense as suggesting that JSON has replaced data. While JSON is great as a data serialization format, it doesn't cook your dinner and clean your dishes. It's a data container. A *NICE* data container, but a container nonetheless.
Anyway, just wanted to quickly clarify what I meant in my comparison below. Douglas Crockford doesn’t claim to have invented things he didn’t and then insist he be given credit regardless of the fact. And he most certainly knows how to write code like very few people on this planet are capable of. My apologies for making it seem I was suggesting otherwise.
[Original Post]
Web, AJAX slammed for deficiencies | InfoWorld | News | September 07, 2007 | By Paul Krill
XML is complicated and inefficient, he said. “Fortunately, XML has been replaced by JSON,” Crockford said. “This gives me some confidence that we can fix the standards in the Web. This is our first success at that.”
Yikes! And here I was thinking there would never again be another Dave Winer. ;-)
I’m mostly happy about how Living in Dryden has turned out, and every now and then I try to encourage other folks to do something similar. Judging by the growing list of Dryden weblogs, I’d say Dryden is doing pretty well in building a community of people writing on a regular basis about things that matter to them.
A couple of months ago I did an interview with fellow technologist Jon Udell. He wrote up some of the interview (though I don’t think those are really ‘laws’), and the full interview is now available.
I’m also hoping to lead a panel on Creating Local Life on the Global Web at the SXSW Interactive conference next March. Talking about Dryden, New York in the middle of Texas may seem a bit strange, especially at a web development conference attached to film and music shows, but there’s a lot to do there.
When the Internet and the Web first appeared, they seemed like great ways to reach large numbers of people who weren't already connected to each other. People who lived in California could talk to people in Germany, Bangladesh, South Africa, and New Hampshire, about common interests they couldn't have easily shared before. In the past few years, though, it seems that we're learning about how these technologies can help us communicate on a much smaller scale, helping us look beyond the walls and property lines of our homes to connect with our neighbors.
There’s a lot to discuss here, and it’s not just about places like Dryden. Some ‘local’ communities are local once a year, at a conference or event, while many have members coming and going. Online communications let people who have left the community stay in touch with what’s happening, and even build new connections while they’re away.
If you’re interested in seeing this at SXSW, visit the panel-picker and vote for (or against) the proposal I’ve put in. (The Panel Picker is open until September 21st.)
As seen earlier today on XSL-List (this relates directly to yesterday's post: Opera 9.5 and XSLT document() Function),
Note that it's entirely conformant (and sometimes even useful) for an implementation of document() to reject every URI you throw at it as "not recognized" or "inaccessible". It's not conformant, however, to fail to provide the function. Interoperability and marketability, of course, demand rather more than mere conformance.

Michael Kay
https://www.saxonica.com/
In the first few days of September, just as kids were beginning to head back to school, something rather remarkable happened: Microsoft lost its hegemony. In a vote by the various members of the ISO standards committee, Microsoft managed to come up with only 53% of the votes it needed to fast track the Office Open XML format for consideration as an ISO standard.
Now, Microsoft and its partisans will most likely spin this as a temporary defeat, pointing to the number of "No, with comments" votes as an indication that it may be possible, with some extensive work, to make OOXML workable in the near-term future once some rough edges are smoothed out. At the end of the day, the argument goes, Microsoft will have won: those countries that were too narrow-mindedly focused on insuring that the emerging standard was actually workable will change their minds once the spec is cleaned up (and perhaps once a few more people are added to the appropriate committees, with positions financed by Microsoft money), and surely the OOXML standard will in fact become the de facto one very soon now.
We will (re)create it.
Please see this movie and then begin to think about what your creation(s) will be.
Thanks!
As seen at the top of a Yahoo!/B-Side co-branded shopping cart check-out screen,
NOTE: Our shopping cart requires you to enter your shipping address when you place an order. Yes, you have to do this even if you are only purchasing a download. Yes, that seems a bit, well, stupid. Hopefully, you can forgive us this minor inconvenience. Just enter your billing address for your shipping address, and we can all put this behind us and move on. - The management.
You've probably seen it: IBM's Rob Weir's 2006 diagram comparing the number of pages of various standards against the time they spent in committee. It makes its appearance, unchallenged, regularly: indeed IBM (business rival of Microsoft)'s Bob Sutor gave the diagram a prominent place on his blog this week, in what presumably, at this late stage, contains the essence of IBM's argument against DIS 29500 and Office Open XML.
At the Standards Australia meeting, the diagram was brought out again, and I protested that it was misleading, but seeing Bob’s blog makes me want to explain my criticism more. Here is the scary diagram:
The issue of page count and book size is prone to publicity stunts. If you look at this web page, for example, you can see two different printouts of the Open XML spec. The first manages to fit in boxes under a man's arms (and we don't know how full the boxes are), while the second manages to be taller than a man! What can account for this doubling of size? Perhaps it is the magic of single-sided printing and thick paper :-) (In the 1990s I was discussing a book with a publisher who said "it has to be 1.5″ thick, but if you don't have enough material we will use thicker stock"!) Say we have 6500 pages and we print them at the maximum common paper weight of 105-weight bond ledger: that gives us almost 3 metres of printout (10′)! But if we print at the minimum common weight of 16 weight, that gives us a tad over 50 cm (20″). On average paper weights, this should give about 64 cm (25″).
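(For the curious, here is the back-of-envelope arithmetic as a quick Python sketch; the caliper values are my own rough assumptions for thin, average, and thick single-sided stock, not measured figures.)

```python
# Stack height = pages x sheet thickness; calipers below are rough guesses.
PAGES = 6500  # printed single-sided, one sheet per page

caliper_mm = {
    "16 lb bond (thin)": 0.077,
    "average stock": 0.098,
    "105 lb ledger (thick)": 0.46,
}

for stock, mm in caliper_mm.items():
    print(f"{stock}: {PAGES * mm / 10:.0f} cm")  # ~50 cm, ~64 cm, ~300 cm
```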
But back to the main story. I'll deal with the issues I have with the diagram in reverse order of their seriousness.
If you are using page size to compare documents, you really should make sure the documents are typeset the same way. I moved the Open XML spec down from its extravagant 11pt body font and large heading spacing to follow the ISO standard 10pt.
Voilà: I estimate that about 1,000 pages can be saved this way. (Added: I estimate this because I tried it. I saved 800 pages on Part 4 alone just by moving to 10pt and more typical ISO clause spacing. Technically, this is because there is so much display content, and so many two-line paragraphs get pushed to the next page, cascading, with many paragraphs taking one line fewer.)
The diagram uses page size as a unit of preparation and review. However, not all pages are equal. A page that contains normative text requires much more review than a page of informative text. A page that contains auto-generated text requires almost no review at all: you sample enough instances to have confidence in the autogeneration and then skip the rest.
Now this is especially relevant for DIS 29500, because it contains enormous amounts of non-normative/tutorial text and of autogenerated boilerplate. ODF editor Patrick Durusau this week tried an experiment where he removed this fluff, and he reduced the WordprocessingML specification from about 1880 pages to about 600 pages (and he thought it could go a few hundred pages more!) Most standards avoid tutorial and non-normative material because it increases the tedium of the review process and confuses readers. A good tutorial is usually a bad standard, and vice versa. DIS 29500 is a really extreme example of this.
So let's say that only a quarter of the text is normative and non-autogenerated (based on Patrick's results, and considering the impact of the normative Part 3 and so on), and that the non-normative and autogenerated text takes about 1/3 of the review effort. That means that, effectively, for review purposes the document requires only half the effort its raw page count suggests.
So divide the effective page size in half. (The legend “Number of pages” becomes “Review effort expressed in terms of equivalent number of normative pages”)
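(A quick sanity check of that halving, using the two estimates just given:)

```python
# 25% normative text at full effort, 75% fluff at one-third effort.
normative, fluff = 0.25, 0.75
effort = normative * 1.0 + fluff * (1 / 3)
print(effort)  # 0.5 -> review effort is half the raw page count
```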
Now let's look at the other axis. Weir's numbers here seem to be based on the time spent in committee before coming up for a vote. That might have been interesting a year ago, but it is positively misleading a year later. Why is it still being bandied about like this?
In the case of ISO fast track standards, there is the whole review process by ISO that is omitted: the informal discussions with SC34 before submission, the 1 month administrative review period, the 1 month contradictions response period, the 5 month technical review period just coming to an end, and the ongoing review where each national body looks at each other’s comments over the next five and a half months before the Ballot Resolution Meeting in Geneva, which I expect to happen. That is a full year.
So add an extra 370 days there.
The work that a committee does in compiling a standard for a pre-existing technology is very different from the work it does in creating or augmenting a technology. When the proprietary Torx screws became an ISO standard, one can imagine that the committee had little to do. By contrast, the committee that produced the ISO PDF/X standard had a bit more to do, but still nowhere near what they would have faced if they were developing a standard from scratch.
The work is review and discussion of policy, relieved of the what-ifs and the who-needs-this? As a completely conservative estimate, let's say that development of new material takes half the time, and review takes the other half.
Since we are measuring this in pages, let's be conservative and say that this relieves the committee process of 25% of its workload, and express that in effective pages.
Since we are looking at the workload of a committee, what about the case where a committee doesn't have to author much, but is presented with a selection of workable drafts from the pre-existing documentation of a product? That is obviously a lot less work than writing from scratch, especially for the editor.
So let's say that this makes a committee 25% more effective, and express it in effective pages as before.
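Chained together, the adjustments argued for so far look like this (a rough sketch; every figure is one of the estimates above, not a measurement, and the starting point is only roughly where the diagram puts Open XML):

```python
pages, days = 6500, 365  # approximate position of the Open XML point

pages -= 1000   # retypeset at the ISO-standard 10pt
pages *= 0.5    # only normative, non-autogenerated text needs full review
pages *= 0.75   # documenting a pre-existing technology, not developing one
pages *= 0.75   # editor started from pre-existing product documentation
days  += 370    # add the ISO fast-track review year

print(f"~{pages:.0f} effective pages over {days} days")  # ~1547 over 735
```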
Now, of course, to compare apples with apples, we would have to do the same procedure to the other standards, and they would move in the same kind of direction to a greater or lesser extent. But none of their shifts would be anywhere near as much as Open XML’s because it has the quintuple whammy of typesetting, fluff, the BRM, the lack of need of development, and pre-existing editorial material.
Furthermore, these other standards are not standing still. ODF has moved to ODF 1.1, with 1.2 in the works.
I have two other additional reasons why I think the diagram (or, at least, the way it is used) is misleading.
The first reason is related to the last segments above. It is really not fair to compare a markup language for an old technology with a markup language for a new technology merely on the basis of the committee time. Microsoft moved into documenting text formats for its standards when it purchased RTF from DEC around 1990. A lot of the documentation in Open XML is adapted directly from the RTF and DOC documentation. Its basic strengths and weaknesses are well-known and long documented.
There have been perhaps fifty different versions of the .DOC format, on six different operating systems, over the last twenty or more years. To ignore this history and just use committee time as the metric seems to me to miss something important. A new standard does not come with all this prior work (and baggage).
I am not sure how to diagram this. Perhaps a line indicating the time the technology and documentation were in development before the start of the committee process? Let's date that from the advent of RTF rather than from the first .DOC format.
VML is a particular issue here: it was introduced in IE 5.5 and presented to the W3C committee. To ignore that early development and attempted standardization work seems to miss something important, which again is why I think we have to be careful not to be misled by the diagram.
Finally, my other problem with the diagram is that people use it to say "this is so big it cannot be reviewed". However, Open XML is made from five or more completely distinct sublanguages: OPC, WordprocessingML, SpreadsheetML, PresentationML, DrawingML, VML, and then the extensions mechanism of Part 5. One person is not expected to review a whole standard; it is done in co-operation with a committee. India is a good example here: they had separate task forces working on each of the three major application schemas.
So while the size of the draft in total is large, it can be decomposed into smaller sections and reviewed. There have been over 2200 people involved in national standards bodies reviews, I am told: that is a lot. If I was being as free with numbers as some people are, I would say that this represents about three pages per person! But of course, that would be just as flawed logic as accepting Rob’s diagram at face value.
So let's divide up the specification into its parts, and see where they fit on the chart. I'll take into account the extra time for review, but just use the current raw page count for OPC (Part 2) and the individual languages of Parts 4 and 5. We get a diagram showing the size of each distinct (and therefore separately reviewable) sublanguage in page size of the current draft.
(If you select "View Image" or the equivalent in your browser, you will be able to see this a bit more clearly: the O'Reilly formatting system may get in the way here.)
And finally, let's have a look at what happens when we look at these separate languages but get rid of the fluff, as I suggested in the submission I sent to my national body for their consideration on the Australian vote. For WordprocessingML we will use the number that Patrick Durusau found when he stripped out the fluff: about 800. For the other four largest, we will conservatively say that half is fluff. (Actually, in my submission I want to move some lists of examples, such as border art, to another part, but border art is hardly taxing on the reader.)
So this is a diagram of the estimated count of normative pages in the component language standards of Open XML, against the time spent in Ecma and ISO development and review (and assuming a Ballot Resolution Meeting).
Note that this diagram does not include the "effective size" considerations above, so the position of the new items can be compared directly with the other pieces of data on the page, apples to apples. To the extent that the other issues raised above apply to each language, their star would move left (and up); however, for a good comparison the other standards mentioned would also have to have their positions adjusted by the same factors. Because the other technologies consist largely of normative material, their adjustment would not be as great; they might also need to have ISO process time added, too. I don't know whether Rob's numbers include that or not (the effect would be to add six to twelve months in an upward direction to some of the blue points).
So that is seven reasons why I think the diagram is misleading. Or, at least, why the diagram itself does not give data that is particularly useful for anything other than mindless sloganeering.
What I don't understand is why people are not on to these kinds of tricks. Big standard, ooh scary. Have people never heard of Adam Smith and the division of labour? Have people never changed font size and ended up with a different-sized document as a result? Do people think that all text is equally taxing to review? Do people think that adapting a standard from pre-existing text is not easier than writing (and indeed developing) the standard from scratch? I suspect that many people see that the OOXML point lies so far to the right on the original graph, and because pages are easily countable, no alarm bells ring.
So let me ring your bell, if I may: what the original diagram tells us is that the standard has a lot of text, and that one stage of its life in a committee took about a year in 2006. Both those things are such a partial piece of the picture (where is 2007?) that, while they have some sensational value, the diagram can be misleading.
Fedora Commons today announced the award of a four year, $4.9M grant from the Gordon and Betty Moore Foundation to develop the organizational and technical frameworks necessary to effect revolutionary change in how scientists, scholars, museums, libraries, and educators collaborate to produce, share, and preserve their digital intellectual creations. Fedora Commons is a new non-profit organization that will continue the mission of the Fedora Project, the successful open-source software collaboration between Cornell University and the University of Virginia. The Fedora Project evolved from the Flexible Extensible Digital Object Repository Architecture (Fedora) developed by researchers at Cornell Computing and Information Science.
Nice! Congratulations, Fedora Commons!
The press release continues,
Update: *EXCELLENT* follow-up post from Wladimir in which he closes with the following,
I guess I need to thank Danny for so many great articles in such a short time. On the other hand, maybe instead I should remind him that denial-of-service attacks are illegal, even in the USA.
I'll let you come to your own conclusions as to what that last sentence is referring to, though I will point out that no matter who you are or what you believe justifies your actions, while blocking ads is not a crime, DOS attacks and other forms of Internet harassment and vandalism most certainly are.
If you are guilty of any such crimes, please don't turn yourself in to the authorities (our prisons are filled with too many people who shouldn't be there in the first place), but please stop, think, and then find ways to get over whatever it is you are hung up on in a peaceful manner.
Thanks! Our Internet will be a better place if you are willing to consider the above request.
Update: Wladimir Palant, the *WONDERFUL* developer behind the *WONDERFUL* tool AdBlock Plus recently left the following comment that I thought the rest of you would find interesting,
Thank you for this article, it is real fun to read it. Btw, the numbers you were asking about - I don’t have exact numbers either but it seems that no more than 2% of Firefox users have Adblock Plus installed. Which makes this campaign as ridiculous as ever.
Of course one can only assume that after all of this attention the number of AdBlock Plus users has increased, but not so much as to drastically change the above percentage to the point where any of the legitimate sites on the net which use ad revenue as their primary support are going to be noticeably affected. In fact, if you think about it, it's quite possible that the bandwidth savings from those who have no interest in the ads being displayed will, ever so slightly, *more* than offset any potential loss in ad revenue.
In fact, if you *really* think about it, if all of the people who had no desire nor willingness to click on the ads presented on your site were to install AdBlock Plus, there's an ever-so-slighter (is slighter a word? Probably not, but today let's make it an honorary word just for fun ;-) possibility that the net result will be an increase in your cash flow instead of a decrease.
Okay, maybe that's a bit of a stretch, but if nothing else it's definitely something to consider. Of course, if it turns out this theory actually holds any water, you'll have none other than Wladimir Palant to thank for your decreased cost structure and therefore your increase in monthly revenue. And according to the following forum entry from about this time last year (which was in response to a question regarding Wladimir's preferred charity), here's how you can thank him for your newfound cash cow ;-)
I don’t favor any organization, feel free to choose the one you like
Edit: On the other hand… I do favor one organization: https://www.mozilla.org/foundation/donate.html
Seems reasonable to me. :D
Thanks, Wladimir!
Update: NOTE: For those of you who first read this update at the top of my last post, here it is again but this time at the top of the correct post! ;-)
—
I *LOVE* this comment from an article linked to from Yours Truly (a handle, not a self reference ;-),
Upon clicking the link to https://whyfirefoxisblocked.com/ I was met with a blank page. Interesting, I thought to myself. Let’s check this out in more detail… I bet they want me to wipe the dust off my Internet Explorer and access their site that way. Admit defeat? Go back to using Internet Explorer? Hardly. I simply opened a new tab in Firefox and went to Google. In the Google search field I entered the search term: site:whyfirefoxisblocked.com and then loaded the conveniently offered “cached” version of the page in question. It loaded smoothly in my AdBlockPlus-enabled copy of Firefox.
Absolutely *CLASSIC*! :D Thanks for the laugh, Yours Truly! Of course the real test would be to do the same for the site that you would have been redirected from, but two things,
1) Why waste any more of your valuable time?
2) The spirit of your hack is most certainly in place, which leads to one very important observation,
As mentioned already: Don't Fight the Internet! There's fame (the good kind) and fortune and good times for all who find ways to embrace the way the web *truly* works, not the way you think it should work. And if anything, this is the point of the entire post.
Update: Based on the evidence that has been mounting up in my inbox and in the comments, I've done a quick research project and have come to the same obvious conclusion as everyone else: the content that follows (which now has a strike-through) is more than likely a completely bogus attempt at justification. My apologies to each of you who were simply following Digg, Slashdot, Reddit, and other links, for proliferating the garbage that is being fed from this guy.
Oh, and Danny, (AKA Jack Lewis),
You know what, nevermind. Why even waste any more of my time.
No wait, I’m sorry, I do have something else to say: You are not a victim of terrorism. You’re a victim of yourself.
Best of luck to you.
Oh, and one other thing: If you are bothered by the ads on this or any other site and would rather read this or any other *FREE* content without being bothered by ads you find annoying: I’ve heard that Ad Block Plus is pretty good. Of course you’ll need Firefox if you don’t already have it, but if you’re interested in my opinion, Firefox is as good as a browser gets.
Enjoy your ad free Firefox browsing days, everyone! The content here on O’ReillyNet is free to read however you might choose in whatever browser you might choose. If you choose to reprint it (beyond that which can be considered fair use) please do so under the terms of the Creative Commons by-nc-sa. Otherwise, do what you want. That’s your right.
And as always, thanks for reading! :D
Update: via a comment from Danny Carlton,
It’s my site, and if i want to control how people view it, I’m not letting a bunch of terrorists force me into changing that–and when you attempt to change someone’s behavior by threat of harm, you are a terrorist. The vile, obscene emails and phone calls, they attempts to shut down my server with DOS attacks and bandwidth eating programs, are all acts of terrorism, and it’s really interesting how many people who seem to get offended at being called “thieves” have no problems acting like terrorists.
Folks, I don’t care who you are or what it is you think you’re accomplishing, as far as I’m concerned anyone who involves themselves in this type of activity is absolutely as Danny specifies,
A criminal.
That's absolutely shameful behavior. You might not be a criminal for blocking ads placed in the content you read, but you're certainly a criminal if you take part in any of the crimes mentioned above.
Whoever is involved with the above: STOP!
It’s not funny. It’s not cool. And it certainly isn’t justified. It’s stupid. It’s illegal. And it needs to stop.
[Original Post]
Don’t fight the Internet! I promise, you’ll lose.
The Mozilla Foundation and its Commercial arm, the Mozilla Corporation, has allowed and endorsed Ad Block Plus, a plug-in that blocks advertisement on web sites and also prevents site owners from blocking people using it. Software that blocks all advertisement is an infringement of the rights of web site owners and developers. Numerous web sites exist in order to provide quality content in exchange for displaying ads. Accessing the content while blocking the ads, therefore would be no less than stealing. Millions of hard working people are being robbed of their time and effort by this type of software. Many site owners therefore install scripts that prevent people using ad blocking software from accessing their site. That is their right as the site owner to insist that the use of their resources accompanies the presence of the ads.
Here’s the thing: If people are going out of their way to block ads via Ad Block Plus do you honestly believe they represent a significant percentage of the +/-2.5% of the people who actually ever click on web ads in the first place? Wait, hold up, I think you answer your own question in the next paragraph down, but first let me take a quick moment to point something out,
Don’t you just love Jeffrey Zeldman? I know I do for the simple fact that he has no problem saying it like it is and in many cases he’s right on the money,
Jeffrey Zeldman Presents : What crisis?
The glacial pace of the W3C has given browser makers time to understand and more correctly implement existing standards. It has also given designers and developers time to understand, fall in love with, and add new abilities to existing standards.
So the glacial pace can’t be the crisis. Maybe the problem is lack of leadership. One worries about the declining relevance of The Web Standards Project. (Note the capital “T” in “The”–people who believe in standards should also believe in and follow style guides.) One has worried about the declining relevance of The Web Standards Project since 2002.
Nicely stated! Of course, just a paragraph or two above Jeffrey asks the question,
As per a comment I made to a post from Eric Larson to the internal Vibe* mailing list regarding the usage of Mercurial instead of Subversion for our RCS,
Of course maybe someone will come along and create a BitTorrent-based Darcs or Mercurial plug-in. Now *THAT* would be cool! :D
My point related to the fact that with a decentralized RCS (which in most cases creates an exact copy of the repository with each checkout), as the size of the repository increases, so does the cost of hosting that repository: each new checkout pulls the whole thing down again. But if a BitTorrent plugin were to suddenly surface?
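To make the cost point concrete, here's a toy model (every number in it is invented purely for illustration): a central host serves the full repository to every fresh clone, while a hypothetical BitTorrent-style swarm would let existing clones seed most of that traffic.

```python
# Toy model: central hosting cost vs. a hypothetical BitTorrent-style swarm.
repo_gb, clones = 2.0, 500   # invented repository size and clone count
seed_fraction = 0.05         # assume the host serves only ~5% of swarm traffic

central = repo_gb * clones                  # host serves every full clone
swarmed = repo_gb * clones * seed_fraction  # peers seed the rest

print(f"central host: {central:.0f} GB served")
print(f"seeding host: ~{swarmed:.0f} GB served")
```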
Like I said, “Now *THAT* would be cool! :D”
Anybody care to become the *WORLD'S BIGGEST ROCKSTAR CODER*? This would certainly be one way of becoming just that. :D
Dare Obasanjo aka Carnage4Life - Google Working on Social Network Aggregator
What I find more interesting is being able to bridge these communities instead of worrying about the 1% of users who hop from community to community like crack addled humming birds skipping from flower to flower.
Over the last month I have been collecting examples, for fun, from around the web, where scuttlebutt on the websites of well-known commentators has claimed procedural or other irregularities at standards bodies or among participants. I started this off on the luridly titled "Bribery Watch" page, but it is really more of an "Innuendo Watch."
Here is a little map (drawn dynamically) with the countries mentioned in red.
Some of the claims have a French farce aspect. For example a mistranslation of “seat” and “chair” caused a great flurry.
However, one persistent theme is the idea that the industry people who actually want a standard should not participate in the standards process. Sometimes there seems to be some idea of neutrality floated, sometimes some idea that people who come late have less legitimate opinions than people who come early, other times that the process is flawed unless people are allowed in late. But the basic idea is that if you agree with MS on anything, or have had any business connection with them, they own you, perhaps even bribed you, and your every opinion is inappropriate. There is never an acknowledgment that standards are community self-help efforts participated in, for the most part, by the parties who want to use the standard, and that the standards process is not a tool for cartelization.
I’ll use this entry as an anchor for my observations on the final day of Extreme Markup Languages. I’ll update it with a note each time a new talk begins, but I’ll add my comments on the talk in the comments section. There is a numbering scheme for the talks, to correlate to comments.
If you happen to be reading this in an aggregator, much of the meat is in the comments, so you might want to click through.
D4.1. “Lessons from monitoring the hedge funds: Markup identifies and delineates. Does it give your position to the enemy?”, Walter Perry
D4.2. “Declarative specification of XML document fixup”, Henry S. Thompson
D4.3. “Topic maps, RDF, and mushroom lasagne”, C.M. Sperberg-McQueen
Update: It just keeps getting better. Or is it worse? Guess that depends on your perspective. And with that, from a Wired News article from two days ago,
Crew Member: Previous AT&T Show Had “No Politics” Policy
By Eliot Van Buskirk | August 13, 2007 | 10:26:44 AM | Categories: AT&T

A crew member who worked on a show webcast by AT&T confirmed that there was a policy in place to remove artists' political comments from shows before they were webcast.
“I can definitively say that at a previous event where AT&T was covering the show, the instructions were to shut it down if there was any swearing or if anybody starts getting political. Granted, they didn’t say to shut down any Anti-Bush comments or anything specific to any point of view or party, but ‘getting political’ was mentioned.”
The crew member went on to say that the order to mute political speech was issued by Davie Brown Entertainment, which had been hired by AT&T to produce the recordings.
Sure, the policy — which AT&T initially denied was in place — applies to all political speech, not just criticism of Bush. But most bands, when they get political, tend to lean pretty hard to the left (especially when they’re on the stage of Lollapalooza, which is trying to hang onto a rebellious, “alternative” reputation).
Randall L. Stephenson, the CEO of AT&T, is also the Vice-Chairman of the President’s National Security Telecommunications Advisory Committee, and has motivation to shield Bush from criticism. And as some readers of this blog have pointed out, AT&T is free to do whatever it wants to the audio on its webcasts.
But one has to wonder whether the same political filtering policy applied to AT&T's webcasts could eventually affect the company's portion of the internet backbone, in the absence of the net neutrality legislation it actively opposes.
PLEASE NOTE: I believe it’s important I point out the fact that I personally am not Anti-Bush. In fact, I voted for him in both 2000 and 2004. Did I make a mistake in doing so? Well, that’s neither here nor there as there’s nothing I can do to change the past, only learn from it. Even still, as per a post I made a year ago last February,
Australian national body Standards Australia had an industry forum today on Open XML. The agenda and invitation for this is up at Tom Worthington’s website.
Here is the most interesting part of the invitation:
This forum is being conducted by Standards Australia as a courtesy to stakeholders. It is an extraordinary meeting that we are not required to hold, but do so to provide an open process. We appreciate your attendance and expect that you appreciate our effort in making this opportunity available to you.
Standards Australia values its vote as a participating member of all international committees, and does not exercise it injudiciously. We provide considered Australian viewpoints that are beneficial to Australian stakeholders, including industry, government, academia and the general community, through the facilitation of trade and the inclusion of clear Australian requirements in international standards.
The JTC1 process has established that the ECMA-376 document is not contradictory to existing standards and ECMA has responded to a number of technical considerations raised in the initial consultation period. This forum is not to debate the merits of the JTC1 decision making process or the validity of the ECMA response.
While technical comments are welcomed, it would be entirely counter productive to use this forum to reiterate technical comments that have already been raised and are likely to be debated in every JTC1 member body in some form.
We are looking for creative, positive contributions that emphasise our commitment to representing truly Australian views to the international community.
More on that later.
The meeting had 30 to 35 attendees (I didn’t count, oops), based on the membership of existing technical committees and people who had sent in comments to the process so far. It was not a voting meeting at all, just a meeting to help consensus and to give more information to higher committee members. (However, participants can submit comments by Aug 21 for consideration by the Australian CITeC Standards Sector Board.)
The meeting was a three hour affair, with the first half invited speakers and the second half question and answer and commentary.
The first half started with an introduction by Standards Australia's Alistair Tegart, who provided good strong chairmanship that left most people frustrated they had not had a chance to say more, but which gave everyone a chance to make their most important comments in the allotted time. The interesting thing was that discussion of technical minutiae was strongly discouraged (wrong meeting for that), which was a nice break for me. Discussion was civil, and everyone was friendly in the coffee break and frank in the meeting.
I had been invited to speak on the subject of General overview of the standards process because of my involvement as Australian delegate to (what is now) SC34 in the 1990s and my continuing involvement with standards. A nice comment afterwards (by a law professor!) was that mine was the only talk with new content. I tried to present an SC34-based perspective on standards: what SC34 standards are, how the preference for enabling standards rather than applications has been overtaken by the fast-track process, the basic standards posture for Australia in SC34 in the mid-90s (need for simplicity to suit our small development teams, I didn’t mention support for regional neighbours though it was important) and how each different country has different requirements. (For example, some countries have a requirement that they do not want to be blocked out from international contracts because of the lack of standards.)
Then a quick mention of some of the issues that I prototype in this blog: that ISO standards for documents are voluntary, that standards form a library of choices, that the mere existence of alternative standards does not prevent any group from choosing one over the other, that standards such as PDF and Torx are not open in the sense of allowing arbitrary change but nevertheless valuable, and so on. I emphasized again that the ISO process is a win/win system in which attempts by one group to stymie another’s needs does not fit.
That took about 15 minutes. Then there were speakers on the case against the adoption of Open XML (the scheduled IBM speaker was hospitalized, so we were treated to an emergency podcast from Rob Weir, which was basically the same content as his Technical Case against OOXML) and on the case for the adoption: a quick tag team with an MS representative, then the local CompTIA representative, then a CEO. The CEO, Richard White from CargoWise EDI, was particularly forceful on how it would help his business.
Then after coffee we had over an hour of moderated discussion. By and large it went as expected: people from local industry welcomed it as solving a real problem, people from business rivals of MS didn’t like it, people who identified themselves with Open Source didn’t like it, people from academia or standards bodies seemed to think that having it as a standard would somehow force them use it (I didn’t get this.)
I had to gag myself a few times. The local Google Maps operation was represented, but I was quite surprised to hear Lars Rasmussen say how difficult it would be to implement Open XML…surprised because he had told me last year how Google Maps used VML to deliver to IE and how the format was simply not a problem. (Here is the beginning of what is sent for a Google map, for confirmation; note the namespace declaration and the stylesheet reference.)
```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "https://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="https://www.w3.org/1999/xhtml" xmlns:v="urn:schemas-microsoft-com:vml">
<head>
  <meta http-equiv="content-type" content="text/html; charset=UTF-8"/>
  <noscript>
    <meta http-equiv="refresh" content="0; URL=https://maps.google.com.au/?output=html"/>
  </noscript>
  <title>Google Maps</title>
  <link rel="stylesheet" type="text/css" href="https://www.google.com/intl/en_au/mapfiles/86/maps2.css" />
  <style type="text/css">
    body {margin-top: 3px; margin-bottom: 0; margin-left: 8px;}
    #vp {position: absolute; top: -10px; left: -10px; width: 1px; height: 1px; visibility: hidden;}
    #homestate {display: none;}
    v:* {behavior: url(#default#VML);}
    ...
```
It seemed strange to be saying that a technology was too big to implement when you were in fact using that technology successfully. Maybe the Google speaker didn't realize that VML is a part, though an obsolescent one, of DIS 29500. I think what happens is that "implement" gets stretched to mean "implement all the parts of a specification": so "It is too big to implement" means "It is too big to implement it all" in Googlespeak. But Australians aren't going to implement a full new office suite. That is too big, even if you just used ODF; and Open Source people will more naturally join the existing Open Source and Free projects rather than set up new ones, it seems to me. For Australian requirements, "full implementation from scratch" is an imaginary and spurious requirement.
What has maybe slipped Lars' mind is that most integrators will use DIS 29500 in the same way that Google Maps would be using it: just cherry-picking the parts that are needed (in their case, the subset of VML). And, in particular, when you are using it as an end format, you only need to "implement" (i.e. generate) the elements that correspond to your input. Not the whole thing.
When I was talking to the local Google people last year, they told me that Google doesn’t actually have any fulltime people allocated to standards work in general. I gathered that was a little pedestrian for them, because they made their money by innovating not by following the pack: sounds like a recipe for QA disaster to me. I don’t know whether their foray into web-based applications will make them a little more savvy with standards.
Another Google guy (who turned out to be a ring-in: Georg Greve, initiator and president of the Free Software Foundation Europe, who gfim says was flown in from Switzerland by Google especially for the occasion) stood up and recommended we track the Indian standards body's concerns about binary mappings. Again I had to gag myself (actually, Alistair did it for me), because I believe I was actually present at the meeting in Delhi where that issue was raised: by the Indian representatives of Sun and IBM. I am afraid I couldn't help thinking this was the classic Colbert "Echo Chamber" effect, similar to Wikipedia astroturfing: one member of a collective puts something up in one forum, then other members of the same collective bring up the first as independent evidence. (In this case with the added twist that Sun and IBM were not mentioned: a lay listener could easily have had the impression that this was some kind of position adopted by the BIS, whereas, as far as I know, it has not been. I hope Georg will be a little more careful with attributions in the future, because people can so easily get the wrong impression.)
An interesting comment from the local IT29? committee head, Jamie (surname illegible, sorry), was that educational users need to guarantee interoperability and cannot force students to purchase particular programs. I didn't quite see the logic of how this meant that DIS 29500 should not be adopted at ISO. Marcus Carr from Allette Systems (for whom I consult and teach standards seminars), who also sits on a local IT committee, responded that students would be better served by PDF if guaranteed interoperability were the issue.
A few comments later, I got a chance to mention that none of the XML formats today provides guaranteed interoperability in the sense of visual fidelity, for reasons readers of this blog will be familiar with: every application supports different feature sets, and has different fonts, hyphenation dictionaries, kerning tables, line-break algorithms, and so on. Plus the formats are extensible, so they can contain all sorts of strange media types. Standard documents may perhaps be a necessary condition, but they certainly are not a sufficient one. What is needed are profiles, which restrict the features, require certain application behaviours, require certain fonts, and call for uncramped page designs that reduce the chance of page overflow on different systems.
Several speakers raised the issue of IP rights and worries about the MS Covenant Not to Sue and the Open Specification Promise. MS gave the usual response: we have run it past lots of external lawyers who say it is fine, and since the OSP is so similar to Sun's equivalent, why don't you have the same concerns about ODF? I think Standards Australia has been playing a little coy here, because they are trying to be scrupulous not to be seen to take sides, I guess. I had asked them in email to take a clear position on standards and IP from the legal perspective, and they ended up saying, in effect, that for Standards Australia it is JTC1's responsibility and competency to evaluate the IP issues of drafts submitted for fast-tracking, and not a technical issue for voting.
I think a more complete answer would be better. People who are interested in this area should first read the excellent webpage by OASIS lawyer and anti-Open XML conduit Andy Updegrove, especially on the Allied Tubemakers and Dell cases. Standards don't exist in a vacuum: MS's participation in the standards process, and its very strong and constant public statements, would be considered by any court.
Another aspect of the IP discussion is that typesetting and desktop applications are not a new thing. With a 20-year limit, patents from before 1987 have expired, and 1987 is well after the invention of all of the basic ideas in office suite software. (Last week in Thailand, MS' Oliver Bell was asked a question on this issue, and IIRC he said that actually MS only uses its patent portfolio defensively and has never sued on IP. Does anyone have a list on this?)
Jamie ? pointed out that participation in a standards body does not nullify the IP; however, the issue is the scant chance that submarine patents are enforceable. So add up the two covenants, the external legal opinions, the vetting by Ecma and ISO/IEC JTC1, the age of any likely patents, the courts' recently stronger awareness of junk patents, the difficulty of enforcing submarine patents, the multiple statements made by MS executives and staff at the highest level, and the basic fact that a document standard is concerned with schemas and general description and not with methods or algorithms, and I don't know how much more it would be possible to do to satisfy someone.
One comment mentioned the idea that the MS covenant not to sue, etc., does not cover external technologies. Of course not, but that is no different for ODF and HTML.
Other speakers brought up a few of the usual suspects. autoSpaceLikeWord95 made its scheduled appearance, along with statements that made it clear the speaker had never read the spec and was parroting. There is an easy way to tell a parrot in this area: they will say something like "The standard is full of compatibility elements like autoSpaceLikeWord95 which are undocumented and prevent implementation." In fact, there are 64 compatibility elements, and IIRC all but two are adequately documented with explanations of their general functionality. autoSpaceLikeWord95 is optional and is clearly marked as deprecated: it seems to be a warning flag that a document was originally created by Word95 with this bug and it has never been corrected. The bug is related to the treatment of fullwidth characters used in East Asian typesetting (zenkaku): certainly for Australian users it is utterly extraneous to our national requirements.
The issue of the definition of various functions in SpreadsheetML came up too, from a localization perspective. (Again, irrelevant to Australia.) If the moderator hadn’t been so tough, I would have liked to ask whether the speaker wanted to remove or rename the existing functions (and break the spreadsheets of everyone who used them) or merely to add better-localized functions (which belongs in a maintenance phase).
On the issue of maintenance, I did get another opportunity to spout. I said that it is too early to tell what effective systems for maintenance will emerge. When OASIS and Ecma submit their standards, they also submit information about how maintenance will occur: there would be collaboration with JTC1 SC34, for example. I said I thought this was only practicable for fast corrigenda (which don’t add functionality, just fix the text and clear up mistakes) and that the approach that OASIS seems to be taking, which would involve resubmitting ODF 1.1 and ODF 1.2 etc. for fast-tracking each time, was probably the more realistic thing to expect. However, I noted that DIS 29500 has a quite strong extension regime, indeed a whole part (Part 5), and starts from a much more complete position than ODF: so one would expect complete updates to be rare events, perhaps aligned to the three-year product cycle.
The main Google guy at some stage made a good point about overlapping standards, along the lines that having multiple standards for programming languages was OK because the differences could be justified, but he had not heard arguments why Open XML was so different from ODF that it could be justified.
A few others also had comments that could be fairly reduced to “We don’t need it, therefore we do not support it becoming an ISO standard, therefore we oppose it becoming an ISO standard”, which is a non sequitur.
A very interesting point was made by the National Archives representative. They don’t have the resources to cope with both Open XML and ODF, he said, so they would adopt ODF for their future format and didn’t support Open XML becoming a standard. Again, I don’t see how the standardization of Open XML forces their policy in any way. Standards Australia is not even a government agency, and has no legal clout over the National Archives: moving to ODF where possible seems a reasonable choice (well, ODF 1.3).
Marcus Carr objected to this. He spoke from the perspective of document processing since the early 90s and the difficulties in practice of dealing with Word documents (with the various hijinks: converting .DOC to the Rainbow DTD, converting .DOC to RTF then processing that, etc.), and he raised the key processing issue that I think almost all the commentators on Open XML miss: the need for a full-fidelity baseline schema to allow the most flexibility in downstream processing.
Now this is a pipeline approach that has proven itself to work over the 15 years we have been using it. Elephantine readers may remember a blog entry of mine from a year ago:
A typical strategy when converting from XML into some structured text format is to have three transformations:
* first, convert the XML into ideal XML: resolve links as needed, remove extraneous elements and attributes, convert cases, generate headings and other things that need to be generated
* second, convert that ideal XML into an XML-ized version of the output format
* third, convert the output XML into the text format, delimiting and indenting as needed
If the input data is non-XML, then we have an additional stage where we first convert the data into a “baseline” XML format that maintains all the information from the data source (it could be a database, another format, a binary, no matter). You never know what information you will need, and you don’t want to trust someone else’s abstraction: work with as unmediated a form of the data as possible.
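To make that pipeline concrete, here is a minimal sketch in Python using lxml; the stylesheet filenames are hypothetical stand-ins for the three transformations just described, not anything from a real project.

from lxml import etree

def convert(input_path):
    doc = etree.parse(input_path)

    # Stage 1: input XML to "ideal" XML (resolve links, strip
    # extraneous elements and attributes, generate headings).
    to_ideal = etree.XSLT(etree.parse("to-ideal.xsl"))
    ideal = to_ideal(doc)

    # Stage 2: ideal XML to an XML-ized version of the output format.
    to_output = etree.XSLT(etree.parse("to-output-xml.xsl"))
    output_xml = to_output(ideal)

    # Stage 3: output XML to the final text format, delimiting and
    # indenting as needed.
    to_text = etree.XSLT(etree.parse("to-text.xsl"))
    return str(to_text(output_xml))

print(convert("source.xml"))

The point of keeping the stages separate is that each transformation stays small and testable, and the baseline stage can be swapped in ahead of stage 1 without disturbing the rest.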
So Marcus’ comment was that we (the system integration and document processing community) need Open XML as an ISO standard, because it alone provides an adequate baseline format for subsequent transformations. So the Australian National Archives could well decide to archive data using ODF, but they may well decide to implement their conversion to ODF by going through Open XML. So one standard would be useful for one purpose (saving future archives), the other standard for a different purpose (opening existing files in the archive). If the Australian National Archives is moving to ODF 1.0 fast, I hope they don’t throw away the original binaries…
So all in all, I think the day was a worthwhile exercise, and a good opportunity to help us all escape groupthink.
I suspect, from the tone of the invitation and comments made at the meeting and elsewhere, that when Standards Australia looks at the comments that people send in (deadline August 21) they will be completely uninterested in comments that question JTC1 decisions and in comments on issues that have no local relevance. I gather they may not be much impressed by arguments that can be refuted by precedent: for example, that there should be no overlapping standards. However, we shall see, and I don’t know anything about the CITeC Standards Sector Board.
The most enlightened part of their approach, and I think this is pretty novel, is that Standards Australia seem very aware that a multi-national campaign to discredit a standard (on the one hand) and promote it (on the other) changes what an individual standards body must do when vetting it. In the case of a normal standard review, you raise as many (sensible) flaws as you can, because you don’t know whether the issue will be addressed by anyone else. In the case of a global campaign, it is clear that almost every National Body has been mail-bombed with the same spiel, and therefore those are issues that we can actually ignore unless they have a clear national significance, because we know that other national bodies will be examining them. I think that is what may be behind the last line of the invitation:
“We are looking for creative, positive contributions that emphasise our commitment to representing truly Australian views to the international community”: they want to husband their resources for what is important to local industry and local requirements. They don’t want to succumb to a Denial of Service attack whereby, concentrating on sorting out edge cases and typos, they miss the big picture of national interest.
I’m preparing my comments to the CITeC Standards Sector Board at the moment, and I will put them online here too, if anyone is interested.
I’ll use this entry as an anchor for my observations on the third day of Extreme Markup Languages. I’ll update it with a note each time a new talk begins, but I’ll add my comments on the talk in the comments section. There is a numbering scheme for the talks, to correlate to comments.
If you happen to be reading this in an aggregator, much of the meat is in the comments, so you might want to click through.
D3.1. “Principles, patterns, and procedures of XML schema design: Reporting from the XBlog project”, Anne Brüggemann-Klein, Thomas Schöpf, Karlheinz Toni
D3.2. “Enhancing AIML Bots using semantic web technologies”, Eric Freese
D3.3. “Converting into pattern-based schemas: A formal approach”, Antonina Dattolo, Angelo Di Iorio, Silvia Duca, Antonio Angelo Feliziani, Fabio Vitali
D3.6. “Relational database preservation through XML modeling”, José Carlos Ramalho, Miguel Ferreira, Luís Francisco da Cunha Cardoso de Faria, Rui Castro
D3.7. “Mind the Gap: Seeking holes in the markup-related standards suite”, Chris Lilley, James David Mason, Mary McRae
One of the oddest comments coming up on DIS 29500 is that plain old XML is not human readable. I would love to hear an explanation of this. A string of characters saved in a text file with a .xsd extension is not human readable, but exactly the same string, when cut and pasted into a word processor, is human-readable?
(To forestall talking in circles, this is not about whether XSD is baroque, nor whether a human who can read XML can then necessarily understand the intended semantics of the markup.)
I’ll use this entry as an anchor for my observations on the second day of Extreme Markup Languages. I’ll update it with a note each time a new talk begins, but I’ll add my comments on the talk in the comments section. There is a numbering scheme for the talks, to correlate to comments.
If you happen to be reading this in an aggregator, much of the meat is in the comments, so you might want to click through.
D2.1. “Retiring your metadata shoehorn (For OpenOffice documents)”, Patrick Durusau
D2.2. “Localization of schema languages”, Felix Sasaki
D2.5. “Semantic resolvers for semantic web glasses”, Nikita Ogievetsky
I’ll use this entry as an anchor for my observations on the first day of Extreme Markup Languages (See also: Looking forward to Extreme Markup Languages). I’ll update it with a note each time a new talk begins, but I’ll add my comments on the talk in the comments section. I also added a numbering scheme for the talks, to correlate to comments.
If you happen to be reading this in an aggregator, much of the meat is in the comments, so you might want to click through.
D1.1. B. Tommie Usdin, one of the organizers, opens the conference with “Riding the wave, riding for a fall, or just along for the ride?”
D1.2. “Easy RDF for real-life system modeling”, Thomas B. Passin
D1.3. “Writing an XSLT optimizer in XSLT”, Michael Kay
D1.4. “From Word to XML to mobile devices”, David Lee
D1.5. “MYCAREVENT: OWL and the automotive repair information supply chain”, Martin Bryan & Jay Cousins (Martin presented alone)
D1.6 “Advanced approaches to XML document validation”, Petr Nalevka, Jirka Kosek (Petr presented alone)
I’m still getting my Weblogger profile here updated, but this year I transitioned from one company I co-founded to another. Zepheira provides data architecture solutions, with a focus on semantic technology. I was early on the Semantic Web bandwagon, and I almost fell off at one point because I felt the useful, modest ideas at the core had been overrun by an academic brand of technological megalomania. This year I felt the timing was right to not only renew my interest in the technology, but to stake my livelihood on it. Part of it was timing: I was starting to see the more useful underpinnings of semantic technology take hold in corporations. Part of it was people: I found a group of professionals who I believed were capable of building practical semantic technology solutions, and, more importantly, selling them.
One of those people, Eric Miller, former chair of the W3C Semantic Web Activity, is especially well known for describing the benefits of semantic technology in terms executives can appreciate, and he’s featured in a new InternetWeek article “The Semantic Web Goes to Work”. The article says:
“You[’d] better figure out what the Semantic Web is and soon, because its concepts have graduated from academia and are starting to contribute to your competitor’s bottom line.”
I’m hearing a lot of that sort of thing lately. The pundits, having written off semantic technology as so much pipe-dreaming for so long, have switched into a level of hype overdrive. The reality is that, as Eric puts it in the article, a consistent, universal system of identifiers and a layer of technologies for mapping these identifiers is the sweet spot of semantic technology. Semantic Web technology is the specialization that builds these identifiers on Web technology, and in particular URIs. This opens up the benefits of REST architecture, and for me the third pillar is the universal writing system provided by XML. These are all, individually, modest technologies. Hardly nanotech, quantum mechanics or genetic engineering. But take these three, combine them with a skilled data architect, and I do believe very special things are possible. There’s a large crowd of folks who still make the free association from “semantic” to “metacrap”, but that presents nothing but a ripe opportunity for others who know how to keep it simple, and thus get real work done.
The article also mentions Eric’s keynote at the Semantic Web Strategies conference, which is chaired by fellow XML-meets-semantic-tech pragmatist Bob DuCharme this October. I’ll also be on a keynote panel, and I’ll be co-presenting with Kristen Harris, a long-time collaborator at Sun, about how we improved content architecture for Sun’s main Web sites using semantic technology and REST.
This conference, organized by IT industry watchers Jupitermedia, is just another indication of how seriously folks are starting to take this stuff. I almost cringe that the stampede could end up ruining the crop, but that’s a test every worthwhile technology must endure at some point.
I’ll be off to Extreme Markup Languages 2007 on Monday. It will be my first time, and I’m excited because it’s always been one of those conferences I’ve wanted to attend, but August is usually not a good time for such things on my calendar. I’ve always heard that it’s a brilliant conference, and my French friends always tell me Montreal is a very fun city (which doesn’t stop them from poking fun at the French-Canadian accent).
Some of the talks I’m especially looking forward to are:
* Writing an XSLT optimizer in XSLT
* Advanced approaches to XML document validation
* Retiring your metadata shoehorn (for OpenOffice documents)
* Localization of schema languages
There are many other juicy-looking talks, but the above really stood out for me at first glance.
I hope to meet up with many old colleagues, and make some new acquaintances at the conference, and I’ll be reporting often from this Weblog. I might even try a bit of live-blogging.
Update: As I pointed out in a follow-up comment to Woof, one of the things I absolutely love about blogging is that it encourages interaction and communication on important subject matter that would otherwise not take place if the medium did not exist. Oftentimes I find myself having to reevaluate my position on any given subject because someone has forced me to do just that via a blog post they’ve written or via a follow-up comment to one of my own blog posts. Such is the case I am currently faced with,
But I groan over this particular post because you generally attack rules of clear writing, which is all S&W (and others like them) are trying to promote, with a class of “hey, man, WHO’S TO TELL ME” middle-fingered hubris.
Of course, as I pointed out in another follow-up,
Put this way your position becomes quite a bit more clear. And I can’t help but agree with your point.
… which is absolutely the situation I am currently left pondering. That doesn’t mean I believe the content of this post is no longer relevant; instead, there is certainly more to this than meets the eye, a fact that is forcing me to reevaluate my overall position.
Of course, life could be worse. I could go around thinking that my viewpoints were always and without a doubt the correct viewpoints and that everyone else who disagreed was, in fact, wrong. If there is one thing I have learned in life it’s that you don’t *ever* want to be “that guy.”
Thanks for helping me realize the flaws in my argument, Woof! Still thinking through this a bit, but once I have I’ll provide a follow-up comment with the results of my reevaluation.
Update: via Piers Hollott we have ourselves a new quote-of-the-day, week, month, and possibly even year,
To their credit, messrs Strunk and White had no way of knowing that semicolons, hyphens and parentheses could also function as winking faces.
*YES*! :D Thanks for the laugh, Piers! :D
[Original Post]
Coding Horror: Google’s Number One UI Mistake
Strunk and White urged us to Omit Needless Words:
Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell.
If you were to ask me who I believe to be the greatest writer of our era, that answer would be immediate and definitive,
Of course I doubt Strunk and White would agree. “Too wordy. Too much personal expression. Too much social and political undercurrent. Too much. Too much. TOO MUCH!” would of course be five too many too’s for Strunk and White’s liking.
A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts.
Of course if this were truly the case, there would be no need for erasers. In fact, there would be no need for pencils. Everything would simply be written in permanent ink.
In the same sense, there would be no need for code refactoring tools and there would only be one programming language. I mean, why mess around with Ruby or Python when you can write everything in assembly, or better yet, machine!
Hell, for that matter there would probably only be one spoken language if Strunk and White had things their way. Why waste our time describing things differently than someone else when it’s much more efficient to describe them exactly the same way as everyone else? With such efficiency we could spend less of our time communicating and more of our time…
– doing —
… Hmmm, I don’t know… Staring at the wall? Watching the paint dry? Or would the paint already be dry in such a world? For that matter, why even paint the wall! What’s wrong with the wall the way it was in the first place?! In fact, why do we even have walls! It’s just wasted space!
Of course, in an efficient world such as this that has no need for walls and with such efficiency in our written and spoken languages I’m not exactly sure what we do with our time. But we’d have plenty of time to do whatever it is we wouldn’t be doing, that’s for certain! ;-)
Hey Strunk and White**, here are some elements for your style: as hard as it would obviously be to actually submit to, go to your local library or favorite offline or online retailer and pick up a Tom Robbins novel. I’d personally recommend Half Asleep in Frog Pajamas or Fierce Invalids Home from Hot Climates. Then again, Jitterbug Perfume and Skinny Legs and All are both excellent works of literary art, as are each and every one of his other titles.
Oh, and while you’re at it, take a week off and go visit the Louvre. It’s a beautiful, wonderful, and thought-provoking place filled with LOTS and LOTS of lines. No, not those kinds of lines. These kinds,
–
** Yes, I’m aware of the fact that William Strunk Jr. and E.B. White are both dead.
NOTE: I should also point out that I have always found,
This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell.
… to be a pathetic attempt at saving face. Whose definition determines which words are necessary to *tell* a story, and which words are not? Theirs? Yours? Mine? Which, of course, in and of itself is the entire point,
One man’s trash is another woman’s treasure. This same rule can be applied to *ANYTHING* and *EVERYTHING*. If you don’t like it, don’t read it. But don’t tell others how to define what is trash and what is treasure…
*They* can make that determination on *their* own.
I’m sitting in a quiet coffee shop on the mist-shrouded Oregon coast, taking a much needed break from family in the wee morning hours to put down some thoughts on the recent O’Reilly Open Source Conference in Portland. I’ll be heading back to Victoria over the next couple of days, nursing my poor, ailing Saturn back to the island and no doubt stopping in Seattle to indulge my daughter’s mania for all things Japanese anime related. She tells me that she’s a dedicated Otaku, regaling me with the plot-lines from half a dozen Japanese comics, many of which she’s now reading (more or less) in the original Japanese (“They always get the translations wrong, Dad!” she says with the conviction that only a fourteen-year-old can have).
The conference itself was immensely enjoyable, and very eye-opening. I did get a chance to meet with Simon St. Laurent (an old friend whom I haven’t seen in nearly a decade) and hung out with M. David Peterson, Kevin Farnham and James Turner, all of O’Reilly; spent some time talking business with Jason Gilmore and Terry O’Donnell, Managing Editors of Apress and DevX respectively; and sat in on some very good presentations (and hopefully gave a good one, though it’s always hard to tell when you’re on the stage side of the presentation).
The issue of handling legacy binary formats is one that impacts much more than old Word documents, especially for governments who have long-term archiving requirements.
I think governments should simply legislate “After 20 years, the documentation for all formats used in government data should be made public for access on government archival websites and is deemed unencumbered by IP considerations for the purposes of information retrieval of government data” as a matter of public policy. Hand them over or get a fine for obstruction or bad record keeping!
Of course, the regulations would need to say more than that to cope with industry churn and the ravages of time. For example, what if the vendor or product has been onsold and no-one knows where the documentation is now? What if the local sales body is no longer the sales body for that product, or the development organization is defunct? But that need not stop the general case.
Of course, for contemporary and future data, standard open formats are the thing.
I’ve been making some presentations this week on XML Governance. The aspect of governance of particular interest here is the promotion of evidence-based management, with governance involving higher-level management asking lower-level management “What objective evidence do you have that you are taking care of issue X?” The trouble is that it is very difficult to come up with a good list of Xs.
So the approach I am suggesting is that as well as the top-down approach, there also can be a bottom-up approach where you invert the question, so that we ask “Given that we have these technical artifacts (e.g. XML), what information can we extract from them, and what issues can it be used as evidence for?” In this way, we come up with a list of the issues for which there can be objective evidence, and management can cherry-pick the issues which are useful.
One case history I gave was of a markup operation that installed a context-based full-text indexing system. They started using it in an entirely different way to the way we had expected: they went through the list of words that had been marked up as keywords and then looked for every instance of each word where it wasn’t marked up as a keyword. This gave them much better consistency.
But taking my bottom-up suggestion, it also can be used as a governance input: the technologists first report “It is possible to get evidence (a measure or metric) of the words that have not been marked up correctly”, which then allows the managers to trace to or add the business requirement “All keywords should be marked up”, which in turn leads us to the governance requirement “How do you prove that this business requirement that all keywords should be marked up is being met?”
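A rough sketch of how that evidence might be gathered, in Python with lxml; the element name “keyword” and the file name are assumptions for illustration, not the actual indexing system described above.

import re
from lxml import etree

def unmarked_keyword_hits(path):
    doc = etree.parse(path)
    # Every distinct term that has been marked up as a keyword somewhere.
    terms = {kw.text.strip() for kw in doc.iter("keyword") if kw.text}
    report = {}
    for elem in doc.iter():
        if not isinstance(elem.tag, str):
            continue  # skip comments and processing instructions
        chunks = [elem.tail]  # tail text is always outside the element
        if elem.tag != "keyword":
            chunks.append(elem.text)  # keyword content is already marked up
        for chunk in chunks:
            if not chunk:
                continue
            for term in terms:
                hits = len(re.findall(r"\b%s\b" % re.escape(term), chunk))
                if hits:
                    report[term] = report.get(term, 0) + hits
    return report

# Each non-zero entry is a measurable piece of evidence against the
# business requirement "all keywords should be marked up".
print(unmarked_keyword_hits("manual.xml"))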
Things like the Extensibility Manifesto may help formulate useful issues for governance, but it is top-down. And the trouble with top-down is that sometimes tracing from issue to evidence peters out or stalls on the way down: a fine-sounding abstract requirement turns out to be unmeasurable. Now this is, of course, the basis of many of my company’s tools and my work on validation and metrics: concentrating on the possibilities that XML allows for evidence gathering, and then trying to progress this upstream to management questions and the governance issues.
Roger Costello has been making a series of “best practice” papers on Schematron over at XML-DEV recently. While these are very important to think about and to gather intelligence about, in a sense best practices represent a kind of middle-out or context-free approach, which I think can be criticized because abstract statements of principle move away from the worlds of evidence (at the low level) and governance (at the high level). For example, at the moment there is a discussion on whether it is better to embed Schematron schemas in XSD schemas or to have separate documents: a good question. (I have posted to say “well, what about XSD types inside Schematron rules too?”)
But my main comment was that whether a constraint is bundled with the grammar or kept independently should perhaps follow organizational lines: database people can look after static kinds of storage requirements, and analyst people can look after the business-rules checking. It may be that dividing constraints between schema documents should be based on who is looking after them. Now this would correspond to a management requirement “A separation of concerns should be implemented to reduce the intra-organization dependencies on data and applications.” And the relevant governance question would be “How do you prove that you have a separation of concerns in your data and schemas?” And the evidence would be to trace from each constraint in a schema to the driver for that requirement, showing that a particular schema only has traces to requirements set by a single organizational entity.
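As a minimal sketch of that separation, the grammar and the business rules can live in separate documents with separate owners and still be checked side by side; the file names here are hypothetical.

from lxml import etree
from lxml.isoschematron import Schematron

doc = etree.parse("order.xml")

# Hypothetical order.xsd: owned by the database people (storage/static
# constraints). Hypothetical order-rules.sch: owned by the analysts.
grammar = etree.XMLSchema(etree.parse("order.xsd"))
rules = Schematron(etree.parse("order-rules.sch"))

# Each organizational owner can be reported on independently, which is
# exactly the evidence the governance question above asks for.
print("grammar constraints met:", grammar.validate(doc))
print("business rules met:", rules.validate(doc))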
So in summary, the bottom-up approach starts with a technical artifact (e.g. XML), then finds out what it can evidence (“what can I measure?”), then extracts potential management requirements which could be analyzed using that evidence, which then suggests possible governance questions. The bottom-up approach never descends into airy-fairy handwaving or impossible-to-implement abstractions: it seems a practical approach. The result will be partial: if you start with XML as the artifact you will get “XML Governance” issues raised. And the issue of “What can I measure?” leads directly into the worlds of complex validation (e.g. Schematron) and the need to develop good metrics in general.
In the spirit of truth and reconciliation, and to calm the situation down, I thought I’d keep a running list of new accusations on the Web of bribery and corrupt procedures in the document standards world, for the instruction of readers. Of course, follow the links for the context behind the money quotes.
I’ll add notable or novel examples I find for the next week, hopefully none! Sorry, no comments on this one; the quotes and links can speak for themselves. And I want to plainly emphasize that there is a big difference between saying that a process is bad (see this interesting Karl Best blog and the interesting post by Marbux referenced below) and saying that some person or group is corrupt.
Here is a little map with the countries mentioned in red.
(Added) From NOOOXL.ORG
After the swiss cheese, you can taste the smell of the bitter cacao from Cote d’Ivoire. The Chairman of the Technical Committee in Cote d’Ivoire is Roger Kouadio, from the company Inova Formations. I let you guess from which vendor he is a business partner.
which leads to
Microsoft paying National Bodies to suddenly become P?
(Added) From “Anonymous Coward” on Slashdot
Just yesterday I was sitting in the relevant meeting of SNV/UK14 (https://www.snv.ch/), that decides how Switzerland will vote. The chairman (Hans-Rudolf Thomann) explained the following rules:
- we are here to create standards, not to reject them
- if we reach consensus (>=75%) to vote for Microsoft, we will vote for Microsoft
- if we only reach a majority (>=50%) to vote for Microsoft, we will vote for Microsoft
- if we reach a majority to vote against Microsoft, we will vote for Microsoft
- if we reach consensus to vote against Microsoft, we will abstain
The present spin doctors of Microsoft and ECMA managed to convince Mr. Thomann to reject every serious technical and general concern we had regarding OOMXL by pointing to compatibility reasons. At the end we had a majority _against_ Microsoft but which (giving the unfair rules) results in a Swiss vote _for_ Microsoft.”
(Added) From NoOOXML website from FFII
Brazil also in the way to be hijacked by Microsoft
…
ISO and ABNT are being hijacked by Microsoft and its partners.
(Added) From NoOOXML website from FFII
A generous contributor gave us the list of members of the Technical Committee in Colombia. The vendor is also invading Colombia.
(Added) From NoOOXML website from FFII
Romania votes “yes” for OOXML
Our correspondent writes, “…our politicians here are corrupt as hell, the general view around is that they are “yes-men” when it comes to lining their pockets or lack of knowledge of the technology…”
(Added) From NoOOXML website from FFII
Microsoft 0wns Azerbaijan
Our correspondent from the Caucasus reports on his efforts to shed some light on the OOXML vote in Azerbaijan, a P-Member. Seems Microsoft ‘arranged’ the vote already
(Added) From NoOOXML website from FFII
Rumors of Microsoft blackmail in New Zealand?
Some rumors are saying that Microsoft’s representative has blackmailed the Government and the Standardisation body in New Zealand with something like “If you don’t say Yes to OOXML, we will revise our license terms with the Government for all Microsoft products you use. There will be sanctions”. We are looking for some people from New Zealand to investigate this.
(Added) From NoOOXML website from FFII
Microsoft recruiting puppets in Australia
Microsoft is looking for puppets to represent themselves in the Australian Standardization Body: “We need more Aussie companies supporting Office Open XML”.
(Added) From Roy Schestowitz
Has OOXML ‘Funny Business’ Already Reached Australia and India?
(Added) From Roy Schestowitz
We have seen it in Italy, we have seen it in Portugal, we have seen it in Spain, we have seen it in Denmark, we have seen it in Colombia, and we have seen it in the United States. Microsoft has no shame about abusing any system that is open to abuse or where abuse will go unnoticed. Microsoft even admitted this. The latest report comes from Switzerland (although its source and credibility cannot yet be verified).
…
Is China next?
(Added) From Roy Schestowitz at Boycott Novell
The Microsoft OOXML Corruption Train hits Denmark
From Charles-H Schutz’s blog The Price is Not Right
“Hungary voted with a yes. The price was right for whoever was in charge”…
From Sam Hiser’s blog Sacrebleu
“The ISO committee in France voting on OOXML is bent, too!”
(Added) From “Stephane Rodriguez” comment on Sam Hiser’s blog above
“The guy from CleverAge has posted the list of 14 voters in AFNOR. 5 of them are Microsoft bitches. 3 of them I am not sure
then
WYGWAM : Microsoft bitch. One of the employees has been bribed by Doug Mahugh for months (in what is now a classic…)
Paoli Pires’ blog OOXML vs ODF - The war is still ongoing in Portugal on the conduct of the standards committee
“Apparently this bunch-of-impartial-tech-companies has rejected a proposal from Sun Microsystems (Portugal) to become an active part, arguing that “there are no empty chairs in the room used for the meetings”.
UPDATE: More material at Groklaw, including some notes by an open source representative, Rui Seabra.
UPDATE #2: “The person that answered “no room left” is from IIMF (IT Institute of the Ministry of Finance), not from Microsoft.” says translator
UPDATE #3: From MS attendee Stephen McGibbon’s Much Ado about Nothing is some responses to other parts of the Groklaw story)
“My recollection was that Rui himself was amusingly irreverent and was chastised at one point by one of the other members for lack of respect for others on the committee.”
UPDATE #4: From IBM’s Bob Sutor
“In spite of various communications, we are still locked out and will not be allowed to participate.”
(Watch this space: I note that Sutor does not state the reason given by the Portugal NB committee to exclude IBM, but it seems to be that they have adopted a jury-style system with a fixed number of committee members.)
(Added) From Zaine Ridling in a comment to the same post
“Since the US tech press isn’t writing about this (nor about the weird situation in Chile), then I guess it will be filed under “Who cares?””
UPDATE #5a: From several “Rui Seabra” comments on Ed Brill’s blog (which I would summarize as “The problem is that IBM and Sun were too late for seats on the committee, not chairs in the room”)
“Ed: «According to Groklaw, he was Microsoft’s representative at a standards committee meeting in Portugal…where the committee chair, a Microsoft employee, »
This is false, of course. It was the National Body sectorial organization who refused their presence.”
UPDATE #5b: From several “Rui Seabra” comments on Ed Brill’s blog
0. There was not a due and fair process of invitation. When the real rules were known, people had less than 48h to propose members to the TC.
1. The National Body sectorial organization (NBso) refused the entry to IBM and SUN *before* the meeting had occurred.
2. The NBso claimed that a) there was a maximum of 20, and b) representativity was achieved.
3. There was an average of 24 people seated in the room. There were at maximum 25 people in the room. The room could handle almost 30 people seated. The NBso had an auditorium and chose not to use it. Microsoft occupied 3 seats. NBso occupied 2 seats. ASSOFT (BSA alike) occupied 2 seats. Microsoft business partners occupied a few seats more (at least 5, maybe more).
4. Some people more appeared to the TC meeting who thought they had been accepted (invitation process was broken). They got inside because some TC members (noticeably not Microsoft nor its business partners) invited them to stay as experts.”
UPDATE #5c: From several “Rui Seabra” comments on Ed Brill’s blog
“Nathan: «@12 - RS claims that the issue wasn’t that Sun and IBM were denied space on the committee. It’s that they WEREN’T ALLOWED ENTRANCE INTO THE ROOM. “Applied after the deadline” is not the same as “showed up at a particular time at the meeting place.”?»
Your understanding is wrong, there, Nathan. They were denied space on the committee.
(Added) From the blog of Roy Schestowitz (who says he is a maintainer of Groklaw’s News Picks): Voters on OOXML Up for ‘Hire’ in Italy
“It’s not just Britain and it isn’t just Portugal, either. Watch the following observation which comes from Italy. Voting on OOXML seems like a rather iffy business, not just in the United States.
From Groklaw
It seems there may have been more games played by Microsoft in the OOXML saga, or at the very least some confusion spread, and this time our story comes from Spain…
From Groklaw on the standards process in South Africa (the “he” and “him” below refer to MS’ Jason Matusow):
Normally, bodies do follow the recommendations of the technical committee. And why wouldn’t they? How else do you decide if an offering should be a standard? Stacking the decks to get the votes you want despite the technical concerns? I’m not sure I’ve understood him precisely, so perhaps he’ll clarify.
From “Marbux” in a comment on Rob Weir’s blog
It is the ANSI/INCITS process and procedures that are corrupt, not its TC members.
…
Allowing people with vested financial interests to cast ballots on federal government decisions also violates federal conflict of interest laws.
From “Anonymous” on the same comments list
How much do executive boards cost these days? Microsoft seems to have nearly unlimited pockets when it comes to bribing officials and paying off executives, so the OOXML standard is a merely a technicality.
(Added) From “anonymous” on Groklaw Newspicks comments responding to the following
“A source close to the voting process speculated that Microsoft might still attempt to cripple the process bureaucratically before the vote is taken internationally in September.” James Archibald
Yes, they have done that in Malaysia already when it was clear that OOXML would not be accepted. They had the TC4 committee suspended.
From “mcrbids” on Slashdot
But just look at that graph! The lengths that Microsoft will go to in order to prevent people from being free of the vendor lock-in… Cash is king, and Microsoft has more available cash than many countries’s GNP. How far can they corrupt the process? Probably far enough, with enough time and money, and the only holdback is the time.
(Added) From John Scholes’ blog Current OpenXML ballot is unlawful
So now that we have dealt with how to vote, we should perhaps turn a more important issue: the ballot is unlawful. … Why? Because the JTC1 Secretariat and ITTF (Information Technology Task Force) failed to follow the JTC1 Directives and ordinary principles of international law in dealing with the “contradictions” that were raised.
From “Stephane Rodriguez” on Doug Mahugh’s blog, first about hAl:
If you are not being paid by Microsoft for the FUD you are throwing, then in addition to being full of shit, you are really a lame bastard.
then
I am actually incorrect in stating that hAL is one of those poor schmucks going around the internet for gratuitous comments to make. The other one is Rick Jetliffe. After a quick scan of related blogs today, I have noticed he’s in just about every comment area, with the exact same argument : OOXML cures cancer.
Whatever bribery buys…
UPDATE: I also get a rather milder mention this week from Roy Schestowitz
I’m sure you know this, but just to be 100% certain, Rick Jelliffe is consulting for Microsoft, he was paid by Microsoft to edit Wikipedia, and articles with/about him are anti-open formats.
Feel the love!
memcache.c:45:2: warning: #warning “Working around busted-ass Linux header include problems: use FreeBSD instead”
As seen in the libmemcache 1.4.0.rc2 ./configure script. :D
I’d long ago told my Microsoft contact that I thought ultimately ANSI will abstain on the Open XML vote at ISO, due to an inability to achieve consensus, so I was quite surprised that they only missed out by maybe one vote in the end in the V1 technical committee. From emails from my friends on both sides of the table, it seems discussions at the V1 committee meeting became, if not acrimonious, then whatever the step beyond “robust” is. It will be interesting to see what INCITS/ANSI does to proceed, but it is delightful to see that the opponents of Open XML have started protesting that they really wanted Open XML all along!
(NOTE: I am withdrawing the rest of the blog, for now, while I think about whether it just perpetuates tit-for-tat sniping. Consequently, some of the comments no longer have their context. Apologies. The missing parts express surprise at Sun’s recent statement of support for Open XML, bring up what anti-trust means in the context of standards, point out the impracticality of adding thousands of pages of binary mapping documentation to the Open XML standard, look at the bad logic used to justify it, bring up the notion of a “poison pill”, which is a trick where an impossible-to-fulfill clause is added knowing it will cause refusal, and also think about voting procedures when there are multiple choices.)
This is a simple list of 10 general corrections to Open XML. There have been comments recently that pro-Open XML people are not contributing any fixes, so here are my big-ticket items. They flow out of the principles that I mentioned in my blog before, out of other discussions, and out of my distaste for non-verifiable specifications.
(If Australia decided to become a P-country and vote on ISO Open XML, these are the general corrections that I would submit to Standards Australia, apart from specific typos and unclear sentences. Whether they formed part of a “Yes with comments” or a “No with comments” wouldn’t bother me, since they are all fixable.)
It is here as PDF and some blog feeds will have it below in the extended entry. Download PDF file
Why would Open Source developers want to support Open XML becoming an ISO standard? Isn’t it from Microsoft, the great Satan? Isn’t it some kind of trick or trap to stop the nice Open Standard ODF? Are we going to let this chance to overthrow the monopoly escape?
Now there have long been two different camps in the Open Source movement: those who think that it is important to have independent APIs and those who think that it is important to have Open Source clones of the most important proprietary APIs. This latter group is of course associated with Novell, and the Mono effort is a good example: on their history, I don’t think they have much problem with Open XML going through ISO (Gnumeric’s Miguel de Icaza fits into this latter camp). So this blog is more addressed at the first camp.
First, I would like to set the scene. I think the reasons for supporting Open XML at ISO become a lot clearer if we take a fairly hard-headed view of what is possible. Which is perhaps a nicer way of saying that I think some of the anti-Open XML case has been built on naively faulty assumptions about the miraculous power of ODF to disrupt Microsoft.
Putting it all together, it means that there was never a chance that Microsoft Office would or could adopt ISO ODF 1.0 as its native and default format. So the real choice that faces us is whether we want Office to generate files in a format that MS controls with very few external checks (with all respect to Ecma) or to generate files in a format that MS instigated but which has the extra checks and balances that come from being an ISO standard. ISO standardization is not an Aladdin’s cave of democratic rights, and it is not a Pandora’s box for Microsoft, but it is way better than nothing. Because that is what the anti-Open XML people would achieve: no controls on Microsoft. Under the guise of supporting ODF.
If you, like me, are in the position where you don’t use MS products in your normal work lives, then you may not feel any urgency to support Open XML, but I think we at least should not oppose it. It is a good step forward.
We often read that Microsoft is doing this as some kind of sinister ODF spoiler. However, Open XML is a path they have largely been forced to take (though obviously they will try to make the path as beneficial to them as possible) in order to fend off continuing anti-trust problems in Europe. Microsoft went down this path after a very strong hint from the European Union: in the same report that recommended that ODF be submitted to ISO, the EU’s ‘Telematics between Administrations Committee’ recommended that Microsoft should consider the merits of submitting its XML formats to an international standards body of its choice, as well as improving the documentation, sorting out the IPR issues, and going all the way with XML.
I certainly support governments mandating that public documents should use standard formats: HTML and PDF being the primary two, and ODF after that, but also Open XML as a second source. However, having an ISO Open XML does not prevent any government from preferring ISO ODF. ISO standards are what are called “voluntary”, which means that they are not like laws where you have to adopt them. In my view, the drivers for ODF will continue unabated even after/if Open XML becomes a standard.
So, in my jaded view, ODF will not make Office go away, ISO ODF will not make Ecma Open XML go away, and ISO Open XML will not make ISO ODF go away. So I see no downside in Open XML becoming an ISO standard: it ropes Microsoft into a more open development process, it forces them to document their formats to a degree they have not been accustomed to (indeed, the most satisfactory aspect of the process at ISO has been the amount of attention and review that Open XML has been given), and it gives us in the standards movement the thing that we have been calling for for decades (see my blog last week that compared what Slashdotters were calling for in 2004 with the path that MS has taken).
In the 80s, there was a hilarious wrestler called George the Animal Steele. He was incredibly hairy, especially on his back, which was supposed to be emblematic of his sub-human state. His great flaw as a wrestler was that just as he was winning he would be distracted by the turnbuckle, often trying to eat its stuffing while his opponent recovered. I guess this is how I feel about the attempts to stymie Open XML at ISO: just as we have victory in our hands, with MS prepared to go XML and standards, along comes this distraction, ODF, which is great in its place but dumb, unworkable and counter-productive as a Microsoft buster.
Let’s imagine that we are transitioning into a “Document Engineering” style of architecture, so that we can model our entire business using old but not-as-outmoded-as-we-first-thought Data Flow Diagrams. At each data flow we need to ask the Exception Question: does an exceptional document need human intervention, or can it be dealt with automatically? Indeed, the expected answers to this question are probably what distinguishes the document community from the database community: the docheads would expect exceptions to be dealt with by humans who can monitor, fix and reset the production flow at all stages; the dataheads would expect exceptions to be dealt with by automated processes, since human involvement is at the input/output periphery of systems.
Obviously, the most “exceptional” kind of document is the invalid-against-a-schema document. However, Schematron allows a much milder (or tougher, depending on how one looks at it) bar: the presence or absence of any arbitrary pattern in the document can allow it to be marked as exceptional. (Schematron not only defines valid/invalid, but it also allows complex dynamic diagnostic messages, and it allows various flags to be set by assertions that fail.)
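For instance, here is a minimal sketch of an exception rule of this kind, run through the ISO Schematron implementation in Python’s lxml; the invoice vocabulary and the 10000 threshold are made-up examples, not anything from a real schema.

from lxml import etree
from lxml.isoschematron import Schematron

# A hypothetical rule: any invoice with a total of 10000 or more is
# "exceptional" and should be routed to a human for sign-off.
SCH = b"""<schema xmlns="https://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="invoice">
      <assert test="number(total) &lt; 10000">Invoices of 10000 or more need human sign-off.</assert>
    </rule>
  </pattern>
</schema>"""

doc = etree.fromstring(b"<invoice><total>25000</total></invoice>")
schematron = Schematron(etree.fromstring(SCH))

if not schematron.validate(doc):
    # The failed assertion carries a domain-level message, not a generic
    # grammar error, so it is usable by the human who receives it.
    print(schematron.error_log)

Note that the document is perfectly well-formed and could easily be valid against a grammar; the Schematron pattern is what marks it as exceptional.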
So the Exception Question then becomes a criterion for evaluating schema or constraint languages: when exceptional documents are to be sent to humans for intervention, does schema language A provide clear enough information to be usable by those humans? Similarly, when exceptional documents are to be sent to software (services) for intervention, does schema language A provide clear enough information to be usable by that software? Looked at in those terms, grammar-based systems do not shine. Grammar-based systems excel in all-or-nothing Great-Wall-of-China exclusion uses, but then leave the users (systems and humans) at the mercy of the validator-developer for the kinds of feedback and information possible, a developer who of course has absolutely no idea of the problem domain of the schema. XSD is perhaps a little more organized in this regard compared to the other schema languages, because it defines a specific list of outcomes that can be found in the notional Post Schema Validation Infoset after validation.
But the trouble is that, whether for humans or systems, the more that problems are diagnosed in generic terms (i.e. in terms of the markup) rather than in domain terms (i.e. in terms of the intended patterns, or dare I say semantics), the less chance that the diagnostic can serve any practical purpose for downstream systems. Notoriously this is true for systems which “hide the markup” from the user: the grammatical errors are unavailable and incomprehensible to the users. Grammars have shown themselves over the last 20 years to make programmers more productive but to stupefy end-users: the traceability issue I raised this week on XML-DEV in response to one of Roger Costello’s excellent fishing expeditions is another head of the same Hydra.
There is a running gag in The Simpsons where, in a flashback, Homer or someone will boldly predict something wildly wrong: 8-track tapes will never die, that kind of thing. I was reminded of this type of gag today when reading a Slashdot thread from 2004 entitled “Is the new Microsoft Office really open?” It relates to Office 11, but it is interesting to note what rabidly anti-Microsoft people were demanding at that time, and their expectations of getting it.
Some highlights:
If the XML files office produce are not made the default save types or if the XML merely encapsulates large portions of binary code, it will not matter one lick that office can save these xml documents because the majority of people will be stuck on the default, unreadable formats.
And so on… ratbags and survivalists: a bitter cup, but with a few drops of sweetness from such as Liam Quinn plonked in. So let’s suppose the Masters of the Four-Paned Beast capitulated and gave these Knights of the Slashed Dot pretty much everything they were calling for: we would get a requirements list along the lines of:
however
It strikes me that this is almost exactly a description of the direction that MS then took with Open XML. The Slashdotters don’t know they won (or at least that in 2007 they would get what they were asking for in 2004). …Perhaps in 2010 they may get what they are demanding in 2007! “Doh!”
Here’s a quick tip for the interested. When someone says “Standard A violates standard B” ask “Which clause?” No clause, no violation. (A cynic might think that if no clause from standard B is mentioned, it may suggest that standard B has not even been sighted.)
“XSLT Stylesheet (possibly) contains a recursion” ?
Well yeah… It’s XSLT! ;-)
NOTE: What I quickly realized was that I had accidentally invoked an infinite recursive loop, which was easy enough to fix. Of course, something just a little more meaningful than “XSLT Stylesheet (possibly) contains a recursion” might be in order, but GranParadiso is still in alpha, so I guess I should cut the MozDev folks some slack…
For now… ;-)
The Open XML standardization process is in its most interesting phase at the moment: the various National Bodies are deliberating, the extreme anti-Open XML people have largely abandoned the sillier of their claims and tempered the rest, the moderate anti-Open XML people are bringing up a lot of good issues (many of which seem reasonable and fixable to me), the Ecma side is having to seriously consider what kinds of changes they can live with, and national bodies have to look at whether particular issues identified are at the showstopper level of seriousness. (Not all flaws in a spec are showstoppers: some are better left to maintenance, of course. And few “showstoppers” are actually reasons to vote no in any case, in the ISO context.)
From my perspective, the lion’s share of problems will be solvable simply by appropriate clarification of the text (i.e. the normal ‘wordsmithing’ that goes on in a standard prior to its final acceptance). I presume that the intent of fast-tracking precludes changes that invalidate existing documents or require changes to semantics: what good would ISO PDF be, for example, if it did not reflect the pre-existing reality of PDF? But I do see scope for syntax additions, which MS would have to support as part of some service pack. It will be a test of their seriousness.
I am due to speak to Bureau of Indian Standards today, largely on the same topics as in my recent blog on developing Principles for evaluating standards. (This was the same blog that various anti-Open XML ranters portrayed as being pretty far-out; in fact it is not even cutting edge…) Microsoft has flown me up here to talk to the local standards committee, on invitation from the Federation of Indian Chambers of Commerce; there is quite a feeling here that India does not need any artificial barriers to trade and that document standards help industry.
This particular essay has been sitting as a draft on my laptop for some time. I’ve been intrigued lately by the bigger issues facing the IT market, and especially by the people who work in it. The news of late seems ominously similar in tone no matter where you go: even in the face of softening employment elsewhere, companies can’t hire programmers fast enough. I don’t think this is peculiar to this particular moment, but may in fact foreshadow a significant shift in the relationship of the IT workforce to the companies that employ it. My apologies for the length, but there’s a lot to cover …
An interesting paradox is occurring right now in the IT market, one that I predicted back in 2002, when jobs for programmers were as scarce as new IPOs were back then. In a nutshell, even as the general economy is beginning to suffer from significant doldrums, the demand for skilled IT professionals has seldom been higher … and many companies are having to shelve new projects because they can’t find the programmers to build their applications … globally. From the United States and Canada to England, Germany, India, China, Japan, Australia … there just don’t seem to be enough skilled programmers to fill more than about fifty percent of all IT-related jobs.
Update: via a follow-up comment from W^L+,
Never forget the vendor-neutral part, because without it the switching cost makes choosing a vendor a high-stakes decision and subjects consumers to all sorts of abusive treatment from their suppliers (the vendors and those who distribute the products / services of the vendors).
Nicely stated!
[Original Post]
I just left the following as a comment to John Lilly’s recent post entitled “A Picture’s Worth 100M Users???” I’m reposting it here (with a few grammatical fixes and inserted extensions; I’ve linked to the original for comparison) because I believe the point is an important distinction to understand, and that is this,
It’s *NOT* about the number of players in any given marketplace! It’s about maintaining a competitive atmosphere, resulting in better products.
Of course one could argue it’s also about the number of choices you have, and to a point I agree. That said, the next time you travel overseas and find yourself fumbling around in your electrical plug adapter kit, ask yourself the following question,
“Do I really wish I had just a few more adapters to choose from?”
Of course, maybe the one adapter that you need is the one you lost on your last trip, or the one you determined ahead of time not to be of any use and so chose not to bring along. In that case, the answer would probably be “Yes!” But in most cases what is likely to be stated is “Why can’t we all just agree on one standard and be happy with it?” Of course, that brings us back full circle to the importance of competition, as without competition the incentive to build a better product is diminished, which collectively comes together to form the following,
It’s not *just* about competition, nor is it *just* about choice. It’s about balance.
My recent comment follows,
John’s Blog » Blog Archive » A Picture’s Worth 100M Users???
Update: Question,
Think about the way you design. Do you do the controls first, or do you work out the data, the relationships, the rules, then build controls to measure and set those?
Len Bullard provides his answer below.
Update: Two nice additions to provide some food for your lunchtime (or dinner if you’re in the UK, *VERY* early bfast if in Tokyo ;-) mental fodder,
First from Len Bullard,
Controls are emergent. People don’t get that. As a result, many systems are preconceived notions based on superstitious relationships resulting in wasted motions instead of frugal applications of measured and just-in-time force to encapsulated data.
Second from Piers Hollott,
It is interesting, people seem eager to adopt the principle of “survival of the fittest” (bestest?) while ignoring that in any evolutionary system, “fittest” and “survival” are both very much rooted in context and subject to change.
[Original Post]
Disagreement, ambiguity, variability, lie, etc are not bugs, but a feature of the system. Each system which becomes too rigid looses its flexibility and often die. The social agreement makes it happen in a *context* keeping the possibility of an error, a mistake and by it, giving the possibility to fork, evolve, etc.
It’s a question of balance.
1+1 = 2 most of the time, but not necessary in poetry.
Karl Dubost from a recent post to the W3C SemWeb Mailing List
Here are the press releases I’d like to be reading…
ODF champion Sam Hiser today withdrew many of his objections concerning Open XML. “Imagine our surprise when we looked in the dictionary and discovered that “optional” does not mean the same thing as “required”. And yet this is the basis of most of my comments. Boy was my face red!”
Redmond, Thursday. Microsoft XML Supremo Jean Paoli today announced that future versions of Office 2007 will come with support for all ISO standards out-of-the-box. “We are in love with standards and the whole fun process, we just can’t get enough.” Paoli wrote on his blog this week. “But I discovered that some newer ISO standards required optional plugins in Office. Boy was my face red!”
IBM XML Supremo Bob Sutor today announced that IBM would be open-sourcing Lotus and IBM’s Workplace word processor. “After going around the country proclaiming the links between Open Source and ODF, imagine my embarrassment to discover that our products are closed-source. Boy was my face red!”
Groklaw’s “Pamela Jones” announced today that the Groklaw site would remove or annotate statements that it knew to be misleading. “People respect the site because it tries to cut through FUD. But we found we had become a breeding ground for it. Boy was my face red! For example, the claim on Grokdoc that XML processing software cannot process bitmasks is clearly bizarre, crazy and Rick Jelliffe, who I am a big fan of, has even contributed counter-examples in the discussion pages. Yet well-known people still repeat the claims. So we will remove or add annotations or warnings to material that is wrong: most of our readers don’t have the time to trace through the various claims, so it is quite important.”
Mr X of Y today revealed that he or she is in fact “anonymous”, the prolific commenter on websites. “One day, over a cup of T, while reading the Story of O, I realized that I was not being a gentleman or woman and in danger of being the slightest bit hypocritical in the eyes of my fellow paid shills. Boy was my F R! Also I am curtailing my sock puppetry on Wikipedia and passing off a corporate blog as my private one.” Mr X would only let his name and company be represented by initials for this interview.
New York, Friday. The United Nations today elected Rick Jelliffe king of the world and promised to follow his radical program for change, based on the revolutionary principle “Use the right tool for the job” under which both knives and forks may be used together, even though most experts believe that knives contradict forks. In his first speech, King Rick said “Go away. Don’t you know what time it is here? King eh? Boy are your faces red!”
Back in the day, XML was easy.
Today? Hmmm… Not-so-much,
only this, and nothing more: WADL: its really about the XML APIs
XML is one of those things that looks really easy, but is actually full of nasty surprises that don’t show up until either the week before you ship (or worse, a few weeks after). Things like character encoding issues, XML Namespaces, XSD Wildcards. It is really hard for your average developer (who makes no pretenses at XML guru-hood) to write good XML serialization/hydration code. Everything is stacked against him: XML APIs, XML-Lang itself, XSD.
That is why so many developers (especially in the Java world) just use XML Binding layers.
Solution? Give them a good XML API. Not one designed for XML Gurus, who understand every nuance of the spec. Give them an API that makes using XML easy, and relatively efficient. This ain’t easy, or it would already be there. XLinq is C#’s answer to exactly this issue. Java needs something similar. An XML API that isn’t designed primarily for XML as text markup, but an XML API that is designed for data serialization.
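To make the contrast concrete, here is a rough sketch of what a data-oriented XML API feels like. This uses Python’s ElementTree as a stand-in for the kind of XLinq-style API the post calls for (the vocabulary is made up for illustration):

import xml.etree.ElementTree as ET

# Build a small document as data, not as text markup; the library
# worries about escaping and well-formedness for us.
order = ET.Element("order", id="1042")
ET.SubElement(order, "customer").text = "Acme Corp."
item = ET.SubElement(order, "item", sku="XJ-7")
ET.SubElement(item, "qty").text = "3"

print(ET.tostring(order, encoding="unicode"))

# Query it back without writing a SAX handler or walking a DOM.
for qty in order.iter("qty"):
    print(int(qty.text))

Nothing here requires knowing what a processing instruction or an XSD wildcard is, which is exactly the point.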
darcusblog » Blog Archive » OOo: Quality Through Obsolescence?
Michael notes a lot of progress on the OOo organizational front of late, such as the move to more frequent releases. But clearly the deeper organizational dysfunctions are really, seriously, weighing on OOo’s capacity to innovate. I really hope they don’t slow down implementation of the new metadata support in ODF 1.2. It really has the potential to be a killer innovation opportunity for OOo, but not if it gets delayed for five years by business as usual.
I’m cautiously optimistic, though.
PLEASE: Before anyone and/or everyone gets bent out of shape in regards to the reported news that Apple is placing unencrypted watermarks in the form of your name and email address inside of the DRM-free tracks, please remember how big of a battle we all just won…
Apple criticized for embedding names, e-mails in songs | Tech News on ZDNet
“Watermarking does not treat the consumer like a criminal,” Goodman said. “DRM is also restrictive, telling you how many times you can play a song or which device it can be played on. Watermarking works on the assumption that a consumer is innocent but provides the industry an opportunity to catch someone that breaks the law.”
… and then let it be. EMI is only the first of the majors to go DRM free. Let’s not scare the others away by taking things to the extreme.
Thanks in advance for your considerations!
Is this the dullest, most trainspotterish blog I’ve ever written? No mean feat: you be the judge, if you can make it to the end! I’ve just come back from a few days in Kuala Lumpur. Lots of new words made many newspaper articles impenetrable: I knew the title “Tun” but I had never seen the title “Datuk”, let alone all the “Yangs.” But I was impressed that the hotel had universal power sockets installed.
Foreign power plugs are always surprising. Here in Australia, we use 230V at 50Hz with a three-pin plug in a wilted-T configuration, with flat blades: it is used throughout the South Pacific, and supposedly the Okinawan, Argentinean and Chinese sockets are compatible too. What is surprising is to visit foreign countries where there are multiple kinds of plugs, and multiple kinds of power: the countries which initially went for 100V-120V have now moved to also provide 220V-230V power for equipment that draws more current, and these usually have different plugs. In the US, for example, I cannot help thinking “Why on earth don’t they get rid of 110V?” since it is technically poor, complicates things, and has obsolete plug forms (the two-pin versus the three-pin). “Why don’t they just all adopt what the rest of the world does, at the electrical level at least? They could take the opportunity to go metric too!”
Wikipedia has a good guide to Domestic AC power plugs and sockets. There are about 12 common kinds of national or regional AC connectors (though many of these have variants that don’t prevent plugging in, and many nations have an extra handful of less common plugs, such as the US NEMA plugs), and IEC defines 13 kinds of connectors for electrical equipment. That makes scores of different plugs in use, with scores more now obsolete or obsolescent.
There are several reasons for the multiplicity of plugs:

* The technical reason is to prevent you from plugging in equipment and motors designed for the wrong voltage and frequency, and especially to keep AC and DC apart.
* The historical reason is empire and international uncooperativeness.
* The economic reason is that it would be too expensive to replace them all at once, or to put twin connectors on equipment.
* The design/manufacturing reason is that the differences can be isolated by having detachable plugs, so there is no cost impact on equipment made with IEC sockets or DC plugs, because the major component is the same.
* The parochial reason is that people get used to things being a certain way, and plug differences acted as a barrier to trade, felt to encourage local manufacture or to promote empire-first policies.

There are some deeper technical issues too: in Japan I was told they adopted 100V for normal domestic outlets because of earthquakes; if a pole or building falls over, they prefer that stray wires be as low-voltage as possible. On the other hand, I have been told that higher voltage is better for power transmission over large distances, so countries like Australia adopted 240V (now 230V). I have been told that most modern US homes are wired for 220V now, with only a single side used for the 110V domestic circuits.
The nice thing about the hotel in Malaysia was that they had installed universal plugs: the Malaysians use the big clunky British style 220-250V power (also used in Britain, Brunei, Uganda, Liberia, etc) but the hotel sockets could also fit my smaller but still clunky antipodean style plugs and many others.
What was interesting to me was that, though there are multiple standards for the DC plugs going into appliances (into laptops, for example) and multiple national standards for AC power, with plugs, frequencies and voltages varying (sometimes within a country!), a perfectly workable solution has emerged.
So in the power plug world, the emerging answer to multiple incompatible standards is not based on standardizing on a single exact connection type or electrical type but quite the opposite: the underlying systems are increasingly starting to look more like each other (230V, 50Hz), the connections to consumables have been standardized (IEC sockets), transformers support electrical plurality (100V-250V, 50 or 60Hz), and sockets are being deployed which accept as many different plugs as can be accommodated as low-hanging fruit.
Ah, I hear you cry, aren’t you just advocating N-N, which is inefficient? Well, not really: the notebook approach is layered, so that the physical connection becomes an N-1 issue (a universal socket supports multiple plugs going to an IEC connector) and then the electrical connection becomes a second N-1 issue (the transformer handling multiple voltages). The remaining question is how to gradually reduce the N.
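A back-of-envelope calculation shows why the layering wins (the 12 is just the rough count of common plug families mentioned above):

n = 12                    # roughly the number of common plug families
pairwise = n * (n - 1)    # a dedicated adapter for every ordered pair
layered = n               # one lead per family into a common connector
print(pairwise, layered)  # -> 132 12

Every plug family added to the pairwise world multiplies the adapter count; in the layered world it adds exactly one part.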
When there are multiple existing technologies deployed, and where a unified standard technology would provide no significant difference in capability that would drive consumers to demand the unified standard with their wallets, the way to achieve a unified standard is by providing a bridge or on-ramp: a technology that neutralizes differences so that users can continue to use their existing systems. That is what happened with XML and character sets: no one was using Unicode for data interchange; XML provided a bridging mechanism whereby users could switch from their locale-based encodings to Unicode gradually and incrementally, and now UTF-8 is widely and fairly painlessly used.
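A minimal sketch of that bridging in practice (the file names are hypothetical): an XML parser honors whatever encoding a document declares, so a legacy document can be re-serialized as UTF-8 without touching its content. In Python, for instance:

import xml.etree.ElementTree as ET

# The parser reads the encoding from the document's own declaration,
# e.g. <?xml version="1.0" encoding="ISO-8859-1"?> ...
tree = ET.parse("legacy.xml")

# ... and the same infoset can be written back out as UTF-8.
tree.write("unicode.xml", encoding="UTF-8", xml_declaration=True)

Each document carries its own on-ramp, which is what made the migration incremental rather than a flag day.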
Applying this back to power plugs, the solution is not to unify the plug standards, but to diversify the socket specs so that each nation requires a socket that accepts both the national standard and some intended future international standard. If that turns out to be impossible, because countries still want different plugs and sockets for 110V and 220V, then two standards are better than none. If it turns out that the countries with round pins and the countries with blade pins cannot agree, or the design/cost problem is too hard, again, two standards are better than none.
Update: via a recent email from David Carlisle (used with permission),
I recognized that name when it came up in the feed titles, Andy Kimball
used to post regularly to xsl-list and was largely responsible for
giving the impression that not everyone at MS was fully signed up to the
evil empire, and that there were in fact real people there and that
msxsl would eventually turn out to be a good thing…
David continues with a couple of links [1,2], the second of which links to the following,
Hi all,
I’m Andy Kimball, the Microsoft XSL developer. After today’s “nested
template abomination” discussion, I had a couple of comments. First,
Microsoft is committed to delivering a conformant XSLT processor. ….
[1] https://www.biglist.com/cgi-bin/wilma/wilma_glimpse/xsl-list?query=Andy+Kimball
[2] https://www.biglist.com/lists/xsl-list/archives/200003/msg00614.html
I have to stand by David’s comments. Andy is definitely one of the good guys. And if you have ever used any of MSFT’s XSLT processors (in particular MSXML and .NET 2.0 System.Xml.XslCompiledTransform) you know just as well as the rest of us…
Andy knows how to write *BLAZING FAST* code! Smart kid, that Andy Kimball ;-) :D
Thanks for the info, David!
[Original Post]
So the craziest email arrived in my inbox yesterday evening. It begins,
I’m not 100% sure that you are my old friend from the 90’s, given that you seem to go by M. David Peterson now, but I thought I’d email you and see. If you are Mark Peterson, who worked at Microsoft in the 90’s as an “Independent Contractor”, and knew a couple of guys named Andy Kimball and Brandon Hall, then let me know.
I’ve already responded to Andy to let him know that yes, in fact, it is I that is he (M. David Peterson == Mark David Peterson for those unaware.) The reason for writing this post, however, is to point out something I didn’t know until just now. Andy continues,
It would be quite a coincidence if indeed you were that Mark Peterson, as you seem to be very gung-ho on Xml and Xslt, which is my specialty.
He continues to describe his involvement with XML and XSLT at Microsoft.
- Member of the Core XML team for MSFT for almost 10 years.
- One of the developers of the MSXML processor.
- Ditto on the XSLT Processor in .NET 2.0
- Currently a member of the Linq To Xml team
ABSOLUTE CRAZINESS! Some background,
Step #1) If you haven’t already, repent of your sins and go and pick up Tool: 10,000 Days from your favorite local or online retailer.
Step #2) Pick up a pair of Sennheiser PMX100 headphones**. (local retail directory)
Step #3) Change the equalizer setting on your Zune to “acoustic”. If you don’t already own a Zune… REPENT!!!
Step #4) Set the volume level to a comfortable factor of “loud” (whatever that might mean to you.)
Step #5) Indulge in an orgy of sound like you’ve never indulged before while you hack, hack, hack the night (or day, if you write code during the day and still believe you qualify as a hacker ;-)) away.
Youuuu’rrre Welcome. ;-)
–
** Just trust me on this one.
Obviously, nobody’s ever tried an Unconference at the scale of Java One, and there’ll be compromises and stumbles and the requirement for real interaction-design creativity. But there is low-hanging fruit: a lot of really bright people facing really hard interesting problems who’ve been spending too long, at this kind of event, sitting in the dark listening. I want to hear their stories.
I think by now most people are pretty phlegmatic about accepting various assertions about Open XML and ODF at face value. The sky is not falling. The boy is crying wolf. A sieve is full of holes no matter how loudly someone shouts that it is a bucket.
When, for example, one side says “Open XML normatively refers to MS’ proprietary WMF” and the other side says “Err, where? Not in the Normative References sections” and the first side says “Err, then there is an *implied* normative reference because a mention is made of it elsewhere as a possible kind of graphic that may come in from the clipboard” and the other side says “The ISO usage of ‘normative’ revolves around indispensability: isn’t ‘possible’ the opposite of ‘indispensable’?…” disinterested observers may think Surely there is a more constructive approach? These silly examples are distractions from serious concerns.
So here is what I suggest, for national bodies reviewing Open XML: adopt a set of general principles and apply them (to Open XML, ODF, and whatever). When someone raises a specific issue, verify that the issue indeed is as claimed, find the general principle, and base your responses on that, with the particular flaw as an exemplar. The tactic adopted by some activists is to read the draft text, think of the worst possible interpretation and ramification, then insist it is the case: the “normative reference” example is a good case of this. The trouble with this approach is that it won’t work; impartial reviewers will note that there is some kind of concern but that the actual issue raised is not a problem. The result will be frustration and a lack of a “meeting of the minds”. Indeed, the legitimate issues that underlie some of the anti-Open XML comments risk going unaddressed.
What kind of principles would there be? Here are a few off the top of my head:
Applying this to Open XML, for example, it would mean that where DrawingML uses EMU coordinates, it also should allow inches, cm and points. And where SpreadsheetML allows numbers for date indexes, it also should allow ISO 8601 dates. Do you see the difference between saying “Open XML should be banned because it uses EMU” and “Open XML should be improved to allow more than EMU”? The most important thing is that this is a superficial change to the exchange language, not to the underlying model: it doesn’t force MS to adopt a different model or require them to generate standard units. (That is a different issue: the issue of profiles or application conformance.)
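For the curious, the unit plurality asked for here is cheap to support, because EMU conversions are exact integers: 914,400 EMU to the inch, hence 360,000 to the centimetre and 12,700 to the point. A sketch of a normalizing reader (the function name is invented for illustration):

EMU_PER_UNIT = {
    "in": 914400,   # EMU are defined as 1/914400 of an inch
    "cm": 360000,   # 914400 / 2.54
    "pt": 12700,    # 914400 / 72
    "emu": 1,
}

def to_emu(value, unit):
    """Normalize a measurement in any accepted unit to EMU internally."""
    return value * EMU_PER_UNIT[unit]

print(to_emu(1, "in"))    # -> 914400
print(to_emu(2.5, "cm"))  # -> 900000.0

An application can keep its internal EMU model untouched and still accept standard units at the markup level.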
Applying this to Open XML, we see that the string approach taken by SpreadsheetML conforms: you can have text directly or index into a shared string table. Adopting this principle lets a National Body vet the issues: if someone says “This doesn’t look like HTML! Therefore it is bad!” the NB can say “We adopt the principle that optimized references can be allowed as long as literal content is allowed too”.
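A toy model of that principle (the cell representation here is invented for illustration, not actual SpreadsheetML parsing): consumers always accept the literal form, and may accept the optimized, indexed form as well.

shared_strings = ["Region", "Quarter", "Notes"]   # the shared string table

def cell_text(cell):
    """Return a cell's text whether it is stored inline or by reference."""
    if cell.get("type") == "shared":               # optimized: an index
        return shared_strings[int(cell["value"])]
    return cell["value"]                           # literal: text in place

print(cell_text({"type": "shared", "value": "1"}))       # -> Quarter
print(cell_text({"type": "inline", "value": "ad hoc"}))  # -> ad hoc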
SGML and XML DTDs have a mechanism called entities that allows indirect references. This is really important for maintenance of large documents, because it disconnects references from names: you can update a graphics file by changing a single declaration. Applying this to Open XML, OPC meets the criterion. OASIS catalogs would also probably fit the bill.
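A quick illustration of the entity mechanism itself (the graphic path is sample data; this relies on the parser expanding internal-subset entities, which Python's stock expat parser does by default):

import xml.etree.ElementTree as ET

# One declaration, many references: swap the graphic in a single place.
doc = """<!DOCTYPE doc [
  <!ENTITY logo "images/logo-v2.png">
]>
<doc><img src="&logo;"/><img src="&logo;"/></doc>"""

root = ET.fromstring(doc)
print([img.get("src") for img in root.iter("img")])
# -> ['images/logo-v2.png', 'images/logo-v2.png']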
Following from principles 1 and 2, an indirect reference mechanism should allow the standard notation (IRIs) but may also allow a local or optimized form. Applying this to Open XML, this principle would mean double-checking that IRIs are allowed in OPC (I will check this sometime); I don’t think that OPC uses a local, optimized or legacy form (I will check this sometime.)
Applying this to Open XML, the sections on VML would be marked “informative”.
I am not sure of the ramifications for Open XML: I need to check part 5 of the standard, which deals with extensions and future-proofing. Certainly the use of MIME types in OPC follows this principle, but it goes further than that: could DrawingML be augmented or replaced by SVG, for example? (I will check this sometime.)
I think Open XML is OK in this regard: it allows Word macros, Java, and other scripts, but these are not required and IIRC partitioned.
In other words, no standard should be denied merely on the grounds of “My requirements are more important than yours”. In the case of Open XML, it means that “don’t ignore the elephant in the room” arguments —that the needs for level-playing field basic document exchange by governments and suite vendors (ODF’s supposed sweet spot) trump the needs of integrators, archivists, and so on for Office’s format to be standardized— would be rejected. (Not rejected from all consideration of course, but relegated to their proper place, which is for legislators, regulators, CIO policy makers, and profile makers, not ISO.)
When a standard follows the kinds of principles above, it allows full fidelity (the main principle behind the design of Open XML) to meet round-tripping/API-replacement/archiving requirements, and it sets the stage for interoperability between different systems: this is where, in addition to the broad requirements of the standard, specific limitations are imposed so that all the different kinds of local, legacy, optimized, common-but-non-standard, and platform-dependent notations, media types, scripts and so on are avoided. ODF has just as much need for these kinds of profiles as Open XML does, as far as document interchange goes, by the way.
It is a kind of paradox: an “open” data format must be extensible, but the more that extensions are used, the more the document is usable only by a closed range of applications; a document format that is “open” in the sense of having a fixed definition that allows guaranteed document interchange actually must be a “closed” (non-extensible) format! The solution? The long-standing policy of SC34 is to standardise “enabling technologies” and to leave profiles to user groups and industry consortia: XML itself is an example of this. ISO SGML allows many different delimiters; the industry consortium W3C picked a particular set of delimiters and features, added some internationalization features, and re-branded their profile “XML”, which gives simpler interoperability.
In the absence of these kinds of principles, what we have is a line of argument that reduces to “Microsoft is bad, therefore anything they do or make is bad”, even when Microsoft is forced to backflip and start doing the opposite of what they previously did: in this case, abandoning closed, binary formats. Ten years ago, Bill Gates was saying they would be crazy to open up their file formats; now they are doing it. If users and, most importantly, system integrators keep on encouraging them to further open up and adopt a more modular architecture, it bodes well for where we will be in ten years’ time. The future is mix and match.
Last week was a big one: if we are lucky it may have been the week in which the US patent system imploded under the weight of its own recent ludicrousness, hopefully to be replaced by a saner one. I am no fan of patents: they are government-granted monopolies and distortions of the market; I can understand their usefulness for encouraging research and development in important but non-lucrative areas (drugs and technologies suited to third-world problems, for example), but it seems to me that without patents companies would still innovate: they would just do so differently.
Speaking at a conference this week, I mentioned that the innovation of the WWW, which surely has dominated the shape of modern IT and production more than any other technology of the last fifteen years, is notable for its use of standards (IETF, W3C, ISO, etc.) and for the irrelevance of patents (except as a distraction and brake). Now I know that the physical side (the big pipes of CISCO and so on) does have a lot of IPR involved, but I suspect that the owners of the IP would be making their products and profits regardless: removing IPR would lessen the chance of super-profits and reduce the gambling short-term view of VCs, but not prevent market demand.
A later speaker at the same conference pushed the “you won’t get anywhere unless you aggressively respect IPR” line: non-Western countries get a lot of this. Because of the manifest failure of the US patent system, I have not been able to subscribe to this: an unfair, eager-to-please, monopoly-encouraging IPR system directly reduces the ability of R&D-poor countries to compete, and distorts production away from consumer products and towards products for rich people. But at the same time as we were speaking, on April 30, a bomb was waiting to go off in the US that may force me to change my mind, or at least to temper my opposition.
If we are seeing the start of the reform of the US patent system, so that the bar to patents becomes extremely high, there may be a sliver of merit in very temporary monopoly grants as long as they encourage bringing products to market: granting patents that then allow the holder to sit on the patent and protect their current technologies is a terrible abuse, and, indeed, should be an offense: if a company does not want to take up its patent rights, they should lapse. (The Japanese had a similar thought in their old patent system, which IIRC had five-year grants to force products to market: this perhaps shaped the form of Japanese technology and innovation towards the practical and quickly realizable, not a bad thing in my book.)
The bomb that has caused this implosion is of course the pair of recent US Supreme Court decisions: KSR Int’l Co. v. Teleflex Inc., which raised the bar for “obviousness” so that it means something more like… err… ‘obviousness’ and re-urged caution in granting patents merely “on the combination of elements found in the prior art”, and Microsoft Corp. v. AT&T, which had to do with extraterritoriality and the issue of whether non-physical software was a component of a patentable device. Groklaw has the judgments here.
I’m starting to believe that Silverlight may change the world. It really raises the level of what is possible.
Peter Fisk on Silverlight (via a private email thread dated May 4th, 2007 (used with permission))
Many said Jack didn’t get the digital age. I think what he didn’t get was binary thinking. As a man who stood next to Jackie Kennedy as Johnson was sworn into office, and who lived the following forty some years in public life, he understood the importance in subtle differences. Let that analog understanding survive.
Lawrence Lessig on the passing of Jack Valenti.
Microsoft XML Team’s WebLog : Live from MIX07: Silverlight and XML!
XML Features in Silverlight
In the Silverlight 1.1 Alpha release, we have enabled streamed XML reading and writing through the XmlReader and XmlWriter, respectively.
That’s it, you say? For the MIX Alpha release, yes. Over the 1.1 alpha release cycle, we have focused on providing a great XML foundation within Silverlight through the reader and writer in order to enable the delivery of additional pieces of the XML stack within the context of Silverlight in the future.
XML, Silverlight, and the Future
Going forward, we are planning to support LINQ to XML within Silverlight to enable a great story for query, caching, manipulation, aggregation, and data binding using XML.
Additionally, we’d love to get feedback on what types of activities are relevant for you, given this great new programming model of .NET within the browser. In particular, how do you feel about the following features in the browser?
· XSD Schemas
· XPath
· XSLT
· DOM
Well, the dinner bell is ringing here at MIX07, so that’s all for now. Though, as we’re now allowed to talk about Silverlight publicly, I am very excited to discuss XML and Silverlight, what types of applications are interesting for you in this space, as well as the types of XML features that are relevant for you in the context of the browser.
Great! Here’s my wish list,
· XSD Schemas
· Schematron
· RNG
· XPath 2.0
· XSLT 2.0
· DOM
· E4X
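And since the team leads with streamed reading and writing, here is a feel for what that reader-level foundation buys you over a full in-memory tree, sketched as a rough CPython analogue (the feed file and element names are hypothetical):

import xml.etree.ElementTree as ET

# Handle elements as they stream past, then discard them, so a large
# feed never has to fit in memory all at once.
for event, elem in ET.iterparse("feed.xml", events=("end",)):
    if elem.tag == "entry":
        print(elem.findtext("title"))
        elem.clear()   # free the subtree we have already processed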
As governments and organizations increasingly adopt document standards, such as ODF, for data interchange and to allow non-Microsoft products a better level playing field, designers of documents and forms will need to alter their approaches so that users can print or display the forms on their particular systems without concern. Here are some rough tips, and I’d welcome more.
Adobe announced a few days ago that they would be open sourcing the FLEX API and framework. I’ve found it amusing and instructive over the years to realize just how open source licenses are increasingly being used by companies as a business weapon, as a means of gaining (or keeping) control over a market at the cost of losing software license fees.
Certainly, this is the case here. Adobe and Microsoft have long been engaged in a quiet cold war that has, at its base, control of the way that information is presented - how documents are laid out and fonts are displayed, how vector graphics work in two, three and four dimensions (assuming time as the fourth), how we build user interfaces for everything from game programming to advertisements to forms. Adobe, Microsoft and the W3C have each established differing approaches to this problem of presentation, the first two by creating proprietary standards and technologies on top of them, the last by creating open standards and encouraging the use of these standards by others to build the technologies.
// @author RussMiles.com - Home - Why Rails is not yet ready for SOA…
I am most definitely a Rails advocate. Not to the point of religious fanaticism, as you sometimes see around the Ruby on Rails camp, but definitely a massive fan. Rails, to me, is honestly a technology that makes it very easy to create great web applications and, funnily enough, services[3].
So why do I say that Rails is not ready yet to be great for SOA? Well, the key in my title is the word ‘yet’.
More goodness at the above linked title…
Thanks for the review, Russ!
This has been one of the rougher weeks I’ve faced technologically speaking. A technical glitch forced the server that housed most of the Understandingxml.com content for the last year to reformat itself, wiping out much of what’s been there for the last several months. Fortunately, much of this content was also echoed on this site, so not a huge amount has been lost, but it will take a while before all of my essays have been migrated back. For those of you who’ve wondered where Understandingxml.com has gone, well - it will be back up in time. I’ll be using XForms.org as my primary blogging point for a while, until we can get things sorted out with UXML.
It happens periodically that I get hit with what I’ve come to term a “chaos storm”, where technology in general seems to experience extreme entropy around me. When I was first learning computers in high school, I’d actually held off even though programming appealed to me because I had absolutely no aptitude whatsoever with electronics, and even several years later when I was working on a degree in physics I chose fairly esoteric theoretical areas because the experimental physics professors quietly discouraged me from getting anywhere close to a laboratory.
[IronPython] pybench results for CPython and IronPython
For me, the difference between python’s dynamicity and Boo is simply
that python allows for a more exploratory way of development.
One of the things that I have come to absolutely *LOVE* about IronPython is the interactive capability the ‘intellisensed’ console facilitates. Programming in “real time” (AKA the Read-Evaluate-Print Loop, or REPL) as opposed to statically compiling an application and then running the result has got to be the most powerful programming pattern we dev folk have in our development tool bags. Until just now, however, I hadn’t thought of it in terms of “Exploratory Programming”, but Luis has nailed it right on the head, as that’s exactly what programming in Python and/or any other dynamic language-based development environment is all about.
Nice!
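For anyone who hasn’t tried it, here is roughly what that exploratory loop looks like; this transcript is from a plain CPython console rather than the intellisensed IronPython one described above:

>>> import xml.etree.ElementTree as ET
>>> doc = ET.fromstring("<feed><entry>one</entry><entry>two</entry></feed>")
>>> [e.text for e in doc]        # poke at the data...
['one', 'two']
>>> doc.find("entry").text       # ...refine the expression...
'one'
>>> # ...and only then paste the winning line into the real program.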
I just caught up with the interesting news item of a fortnight ago that Malaysian standards body Sirim Bhd has “suspended the process for approving the Open Document Format (ODF)… as a Malaysian standard.”
What was particularly interesting was the reason given: “Ariffin said some TC/G/4 members had taken to belittling other members who did not share their … views, both during committee meetings and in personal blogs. These … members were also attempting to short-circuit the normal consensus process for adopting a document standard, he said.” “There has been unprofessional conduct and a lack of ethical standards among some members of the technical committee.”
Now I don’t know anything more than the article and various blogs around the place claim. But if this represents a trend by standards bodies to get tough on personal attacks and so try to bring back a more professional and civil level of discourse, then that is great.
A cooling-off period may seem a strange approach if we have the idea of standards committees as being like courts of law that judge technologies. But in fact they are like formalized conversations. Courtrooms are the world of accusation and defense; standards procedures are the world of dialog: questions, answers, suggestions, tentative questions, and so on. Committee procedures stop hectoring and bullying, and make sure that members’ voices are heard: this is frequently boring, but nevertheless a Good Thing. The aim of a conversation is the meeting of minds: I have mentioned before that the ISO process is one of consensus, of trying to find win-win positions, and I think the same is generally true of national bodies.** (Of course, not every conversation is constructive or free from domination, so I don’t want to take the analogy too far!) IIRC, ISO suspended the 802.3 committee for a similar timeout because it did not seem to be constructively engaging with China; it is unusual, and salutary but pragmatic.
Sirim’s boss Dr Ariffin makes some very interesting side points too: I hadn’t heard* his attributed view, for example, that ‘a mandatory standard would constitute an illicit non-tariff barrier against software products using other document formats.’
I think his reported position that ‘a standard can only be mandatory when public health or safety is at stake” has a nuance: it relies on the distinction between voluntary standards (where users decide to adopt) and mandatory standards (where the state forbids anything else, say due to treaty obligations, and may police). Governments, as organizations, can still restrict themselves to a voluntary standard as part of IT strategy or other policy, all other things being equal, however: that is a different issue.
I will be in Malaysia (and Philippines and Thailand) in mid-May, presenting some seminars on Open XML and the standards process, so maybe I’ll get some better information then.
Give me a child until he is seven and I will give you the man. I wonder whether my enthusiasm for plurality springs from my childhood indoctrination by the book Fattypuffs and Thinifers. Not so much Maximalist versus Minimalist, but Monolithist versus Layerist, perhaps.
My dear old Dad had a patient, an old lady who used to work in the railway kitchens and who, a little intoxicated by her medication, presented him every week with an enormous multilayered chocolate cake, about three times as high as round and both splendid and terrifying. I think from that I learned that even a highly layered thing can be too big and fat. Perhaps people who make unnecessarily large technologies never had the benefit of the cooking of eccentrics: perhaps we should be feeding our children more, or, at least, more miraculously architected, cakes.
Since 1999 I have been putting out a diagram, “A Family Tree of Schema Languages for XML”. Here is version 7: I have redrawn it so that it now runs onto an A3 sheet.
The extra size has allowed me to make the lines less confusing and to add
* the complete set of ISO DSDL schema languages
* the ISO Topic Map set of constraint languages
* the RDF family of schema languages
* the newer and proposed schema-related languages
* some misc older languages, or languages that didn’t fit
* a new section for other languages
* more of the schema languages that I invented during my time at Academia Sinica, in fine print
Here is a higher resolution version.
A PDF is available for download here.
So in preparation for this, which is also in preparation for the launch of something even bigger, I’ve been pretty much heads-down in both code and some other things which I can’t really speak about at the moment, so I am coming to this news flash just a tad bit late. But regardless, this is a little too big to just ignore.
I’ve obviously taken a somewhat playful attitude with the fact that I am under NDA with Bungee, but nonetheless, I can’t really speak all that much in regards to my own knowledge of what they have going on behind the curtain. If you will be at the Web 2.0 Expo, you’ll find out soon enough, and if not, well, with the stir they are bound to create, again: you’ll find out soon enough. ;-) That said,
only this, and nothing more: Lack of elegance is viral
Many years ago, I made my living writing Perl CGI scripts, but the state of the art has made some serious advances since then. PHP is not one of them.
I was out for a walk earlier this afternoon and came across this incredible performance by what I believe is the Bellevue Christian Concert Choir (I spoke with some of the students afterwards (pictured below) and they mentioned they were from Bellevue Christian School.)
XML is everywhere, and XML technologies are proliferating across all spheres of IT. Naturally, some XML technologies are more successful than others. However, which XML technologies are beautiful?
A touch more Engelbart and Kay (and plenty of Berners-Lee) please; hold the Gates and Jobs.
*FANTASTIC* review from Danny Ayers on the OLPC. You should take the time to read it.
So Danny,
Adriaan de Jonge’s article Xforms vs. Ruby on Rails has created quite a stir in both the XML and Ruby communities, and for good reason. He asked a fairly important question: is XForms an also-ran technology that Ruby has managed to supplant?
I’ve been thinking about the article for some time myself. Certainly it’s a difficult question to answer, because to a certain extent I tend to agree with him … on some aspects. XForms has a lot of potential that it hasn’t yet lived up to, does require more than a little advanced computing skill (or at least the right mindset), and suffers the fate of many of the W3C standards, which is to be smashed up against the rocks of browser vendor indifference.
And yet … I’m not quite ready to throw in the towel. Now, don’t get me wrong. I’ve worked with Ruby, and overall I think it is a delightful language for many of the same reasons that Adriaan does - it is more declarative than imperative, it places unit test planning well ahead of coding, it fits in very nicely with the web development paradigm, and it works reasonably well in mapping basic data sources into objects. Its contributions to the development of JavaScript have been crucial to the success of AJAX, and overall I think that it would not be a bad skill for any web developer to have on his or her resume.
Update: As per my follow-up to orcmid,
Oh, I think the marketplace is a *GREAT* idea, and in fact is *WELL* overdue. They should have been doing this all along! I was just laughing at Oleg’s follow-up phrasing of one (of many!) ways you could utilize OSS to your financial benefit. In fact, I almost titled the post “On SourceForge and Open Source Obfuscation”, (and probably should have now that I think of it), but chose not to for some odd reason.
I was in a hurry, and didn’t extend things as I normally would, so my apologies to those of you left with the impression that I thought the SourceForge Marketplace was in any way a bad thing.
[Original Post]
Signs on the Sand: SourceForge Marketplace
Sounds interesting. Another way to get rich - create great open source product, make your code unreadable, provide no documentation and then sell support :)
Dear XML Mind,
I was disheartened to learn that you’ve chosen to make the free version of your product even less useful to end users. This appears to be an acceleration of a recent (and alarming) trend in your release cycle of removing features for non-paying customers.
While I can certainly understand the desire to convert more customers to paying ones, I suspect that what instead will happen is that people will abandon the product entirely, rather than risk running afoul of the license.
Here at O’Reilly, your XML Editor has helped to bring an XML workflow to many more mainstream users (in authoring and production of manuscripts) than any other tool available. Among the biggest selling points for authors has been their ability to download and use a fully functional version of the product for free (making it a viable alternative to OpenOffice, to our advantage). It’s a very low-risk way to test the waters.
As we develop and expand our XML workflow, we’ve purchased a half dozen licenses for the professional version of XML Editor, and have had several discussions about purchasing an Enterprise-level license within the next 6-18 months (partially pending the addition of a revision-tracking feature). However, this latest announcement means that we’ll need to seriously reconsider that — most of our authors are not on site, and are not employed by us, so would not be covered by an Enterprise license. And it is not economical for us to purchase individual licenses for those authors (200+ per year).
Again, I can certainly appreciate your desire to convert more free users into paid ones, and I of course don’t know the specifics of your business model or situation. But I *strongly* encourage you to reconsider your decision on the Personal Edition license. While it may well be the case that some of your users who are using the Standard Edition are getting something “for free” that they would otherwise pay for, I suspect that the vast majority will not convert to paid customers, and the overall user base for XML Editor (and hence potential market) will shrink. As with the case of music and software piracy, you may soon find that such restrictive measures have the opposite effect of what you’re intending (see Piracy is Progressive Taxation).
Regards,
Andrew Savikas
Director, Digital Content and Publishing Services
O’Reilly Media, Inc.
A couple of weeks ago, the W3C made an announcement that caught a great number of people by surprise. After nearly a decade of inactivity, the HTML working group was being restarted, in order to handle the fairly significant amount of development that has occurred on top of the HTML standard since HTML 4.01 became the last formal HTML standard prior to the introduction of XHTML.
I have to admit to some qualms about seeing this. I’m not doubting that it’s needed - XHTML’s adoption has been comparatively slow because of the legacy base of HTML out there, the introduction of AJAX has shifted the balance of power to imperative scripting, and the realization is increasingly being made that the namespace issues dividing HTML and XHTML are beginning to tear the standards apart.
The question, however, comes back to the role that XML ends up playing in all of this. HTML has its own DOM, and can in fact be treated as quasi-XML-like, but it most demonstrably isn’t well-formed XML in most cases. For those of us who have been pushing XHTML adoption in industry, this is going to be seen as a fairly major step backwards, as it has the potential to make browser developers decide that perhaps incorporating XHTML support isn’t that big of an issue, and can be pushed off for a release or ten.
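For concreteness, here is the well-formedness gap in two lines, using Python’s stdlib parser (any strict XML parser behaves the same way):

import xml.etree.ElementTree as ET

# Typical HTML is not well-formed XML: <br> is opened but never closed.
try:
    ET.fromstring("<p>one<br>two</p>")
except ET.ParseError as e:
    print("not XML:", e)                         # mismatched tag

# The XHTML spelling of the same content parses fine.
print(ET.fromstring("<p>one<br/>two</p>").tag)   # -> p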
First, my apologies for writing a personal introduction, then being silent for the next three weeks. That’s not what I had been planning. The problem, as you might guess, has been too much work, too many jobs, all competing for the same time. And, since I’m an entrepreneur, my work tasks feel no shame about demanding my time late into evenings and on weekends. They tell me I chose this career path, so now I must bear it!
Meanwhile, during those weeks I have thought about posting here again. So, having a bit of unexpected “free” time, and not being the kind of person who finds the “couch potato” kind of relaxation particularly appealing, I just sat down and started writing this. But not until some reading of other blogs stimulated these thoughts.
Technology, Blogs, and Traditional Journalism
Though I have several web sites of my own, the AOL Developer Community is pretty much my home site these days. So, I was visiting the consolidated blog page there, catching up on posts I hadn’t yet read.
As I browsed back into the posts, I came upon “The Power of an Apple”, posted by M. David Peterson (of XML.com fame, of course). I’d noticed the post several days ago, but I hadn’t read it, being preoccupied with project deadlines. I was expecting the post to be about Apple computers, or about some technical innovation wherein an apple is employed to illustrate some technology principle. To my surprise, the apples in this post are simply eaten, and the post describes the effect on the writer of such ingestive activity, along with the negative consequences a deficit of apples seems to induce.
Now, this post is certainly unusual for a technology blog. Some people might even argue that it doesn’t belong on a technology site. But as I read the post, my long-running interior discussion with myself about the relationship and differences between traditional print journalism and blogs was reignited once again. I find blogs and blogging fascinating in many ways. While “serious” blogging is similar to old-fashioned newspaper reporting, it’s also different in important ways. I think it has to do in part with the cost of posting information on a blog, versus the cost of printing on paper. But, my point is that in a blog — not an individual post, but a writer’s blog in its entirety — there is room for a fuller view of the person behind the posts.
Of course, many blogs are exclusively expressions of an individual’s personality (or the image they’d like to convey). I’m really talking here more about “serious” blogs, for example, technology blogs. In the past, in printed newspapers, or magazines, the editorial staff could not permit publication of a piece that would be of interest to only a small number of readers. With the Internet, with blogs, with Technorati and Google and other blog aggregators, the cost of publishing is lower and yet it is still possible for people to find the post.
Of Apples, Alarm Clocks, and Mother Nature
Anyway, finding myself unexpectedly at home today, while I was on hold calling USAirways to try to get a refund for our tickets for our flight (which was cancelled by New England’s very late “Nor’easter” snow/sleet storm), I found it very interesting to read:
In the same way the sun rises and brightens each of our days gradually, I believe that our bodies have been trained for millions of years worth of evolution by this same process to gradually come back into the world of wakened conscience.
and:
The smiles back! And thanks must be given, yet again, to Mother Nature, a woman who obviously understands how to take care of herself: No alarm clocks, no additives, and no preservatives.
Reading one of my peers saying such things was enough to give me pause. Hmm.. So, yeah: even though so many of us spend enormous amounts of time thinking about and working on understanding new technologies and their implications; even as we find ourselves facing an almost overwhelming blizzard of information that we know is interesting and we know would be useful to study and understand; even as we see hundreds or thousands of potential good reads fly by each day, their titles barely skimmed, because we know we don’t have time for the pursuit… it is important to somehow try to keep things in perspective.
Seeing this added something to my interior conversation about blogs versus traditional publishing.
The Advantages of Blogging
I’m no psychologist (in fact, I tend to doubt that their formulations are of much relevance in most cases), but I think blogs can play an important role for both writer and reader that was not possible in the case of old-fashioned print journalism. You’re reading along, it’s part of your job to study this stuff, and you suddenly come across a little island oasis, such as “The Power of an Apple.” And it’s refreshing to the reader. Hopefully it was refreshing for the writer as well.
Although some people may complain (“What does this post have to do with this site?”), the Web is such that a mouse click takes you elsewhere if you’d prefer to read something else. But I’d actually argue that a post that reveals something of how the author lives, where the energy seen in the technology posts comes from, and how life is ordered to support and elicit the creativity displayed in the more standard technology-specific posts, is relevant. It’s useful (and also entertaining, to me) to know that three apples a day and a lack of alarm clocks are what is behind posts such as
and a very interesting project involving
Quite a combination of technologies! And it works on all the major platforms: Windows, Linux, Mac, etc.
Conclusion
Blogging as a venue for technology writing can bring the technology itself more to life, I think, because in knowing something about the writer, who is the “narrator” of the technology posts, you have additional insight into the point of view from which the technology is being surveyed, described, analyzed. I consider that to be valuable insight, which could not normally be published in traditional printed media.
For example, now that I know M. David Peterson doesn’t use alarm clocks and does eat apples, I feel like I understand his other posts and his project just a little bit better. Not that I can throw my alarm clock out the window (though I’m working diligently to try to get to that day). But, we do have some nice Fuji apples waiting out there in the crisper. Hmm…
Gregor J. Rothfuss (March 16, 2007 11:18 AM)
After clicking the above link left as a comment to Bill de hÓra’s recent “Future proofing” post, it took me a second, but then I realized why this most definitely deserved credit as Photo of the Day.
Google: Deliver this, and your Goose is forever Golden; MSFT: Deliver this and you’ll cook Google’s Golden Goose for this year’s Thanksgiving Dinner.
’nuff said. ;-)
Call it Len’s Proposition … I’ve been taken to task recently by Len Bullard for my unflagging support and belief in open standards in general and XML in particular. I respect the many voices here on XML.com, especially Len’s, so when he starts saying the sky is falling I generally at least look up, but it occurs to me that this offers an opportunity for many of those same commentators to express their positions about the big issues about XML. With AJAX and JSON in one corner, .NET and Linq off in another, Java sitting impatiently in a third, not to mention a host of languages such as Scheme, Lisp, Haskell, etc., just waiting to get their boxing gloves on, XML’s position may be far from secure. So I think the ultimate question I’d ask here is simple:
Is XML doomed? Is it fatally flawed, too weak, not weak enough, too abstract, too specific? Is the core philosophy that it enables, the principle of open standards, a far-left communist plot or the salvation of the computing world as we know it? Have we gone down the wrong path, and only determined action now will right that wrong?
I’ll weigh in with my own opinions in a bit, but I open the floor to one and all … was XML a mistake? If so, why, if not, why not? (You may use a #2 pencil for your answer).
Update: Miguel has provided a *FANTASTIC* follow-up to a question posed by Robert,
Well let’s hope Miguel changes from Mono to Java now that Java is going GPL.
I personally questioned why on earth he would want to do that when Mono provides support for not only the .NET languages, but for Java itself via IKVM.NET, but Miguel has taken it several steps further by not only providing an extended understanding of why he feels Mono/.NET is the better platform, but most importantly (from a Linux-Geek perspective, which Miguel quite obviously is), why it’s the better choice in regards to the promotion of Linux as a platform. He then concludes with,
trollbait: I think that if anything, now that we got a free java in the pipeline, the free java community can focus on improving Mono :-)
Beyond stating that I most definitely agree with Miguel, I’ll let the rest of y’all take it from here, providing the following question to build and extend from,
What’s the point of having a platform in the first place, if you don’t utilize that platform to its fullest potential? For example, Windows has its strengths, and its weaknesses. The same is true about Mac OSX. The same is true about Linux. Why approach things from the standpoint of vanilla, when there’s chocolate, strawberry, vanilla almond fudge, and *OH SO MUCH MORE* than just plain-old vanilla?
That’s not to suggest that vanilla is a bad flavor in the same sense that chocolate, strawberry, or vanilla almond fudge are bad flavors. It’s really up to you which you prefer, right?
So then: What’s your favorite flavor? and why?
Thanks for the follow-up, Miguel!
Update: Manu Stapf followed up this same post recently with the following,
If you read the original article till the end (https://www.apostate.com/programming/bm-freesoftware.html) you will realize that Bertrand Meyer was actually proposing a new way (thus the title of the article) to embrace/recognize/encourage/promote a vision of open source software.
I’ve appended to the end of this post Bertrand Meyer’s “A COURSE OF ACTION”. I am still trying to consume it all internally, but I have to admit that at first glance it most definitely seems to hold significant value as a source of inspiration in regards to how to approach the world of software, both open and closed, commercial and non-commercial alike, moving forward.
Thoughts from the community at large?
[Original Post]
Eiffel now Open Source on 13 Mar 2007 - tirania.org blog comments. | Google Groups
Not everyone understood open source the day it was launched.
Look at Sun and Java, they have a history of decades of resisting the
open sourcing of Java, it took a long time, but they eventually
changed their mind. Like someone said “wise people change their minds” ;-)
Miguel.
You are my new hero!
Don Dodge on The Next Big Thing: Microsoft lawyer rips Google on copyrights - Why?
Oh boy, here we go. Microsoft attacks Google on copyright regarding their book scanning project, and then takes a swipe at YouTube as well. Really dumb move! What are these Microsoft lawyers thinking? Even if they are right, which is debatable, what reaction do they expect from the public at large? This strikes me as pandering to the Association of American Publishers where the Microsoft lawyer is speaking today. Here is a transcript of his speech. The speech is actually pretty good until he drives over the cliff and starts slamming Google.
The AAP filed suit against Google for copyright infringement 16 months ago, and it is still in the courts. What is to be gained by making these inflammatory comments? Be quiet and let the courts sort this out.
NOTE: For those unaware, Don Dodge is a lawyer. Apparently I was mistaken, as I followed up in a comment below.
All of this time, and I thought you were a lawyer? Yikes! I think it must have to do with your “say it like it is” approach to the business world (which I *GREATLY* admire, btw…), and your tendency to tackle the tough, legal related issues that most ’softies tend to steer clear from.
ADDITIONAL NOTE: For those unaware, Don Dodge works — as a *lawyer* inspiration to us all — for *Microsoft*!
Dear Microsoft,
I’m sitting on the Queen of Saanich, one of the older BC ferries, on its way to Vancouver from Victoria. A gentle fog envelops the surrounding water, turning the small islands that we pass into abstract hills fading into the distance, and the water itself into pale white sheets broken only slightly by the turbulence of our passing. Earlier, we had a spitting of snow, a last lingering reminder that winter gives up its presence only very reluctantly, even into these first days of March.
I usually enjoy these trips on the ferry, for all that they add considerably to the complexity of returning to the mainland. They are times for reflection, for helping to put things into perspective. Reflection, and the time for it, is becoming a rare commodity in this world. We move at “Internet Speeds”, twenty-four hours a day, caught up in the big now, yet in all of our trumpeting of technological advance we often lose sight of the fact that the truly profound discoveries and realizations of humanity did not come from the middle manager, from the hyperactive programmer or the driven politician. Instead, they came from people who sacrificed some or all of their worldly drive for goods and status in favor of spending time in reflection, for taking the time to truly think.
Where is XML Going? - O’Reilly XML Blog
“I do not think that JSON is going to “replace” XML; what I do see though is perhaps the dawning realization that the XML Infoset does not in fact have to be represented in angle-bracket notation”. I very strongly agree with that. ‘XML’ will come to mean the Infoset (or the XQuery data model, or tree views and XPath-like axes over object graphs) more than the bits on the wire format. That liberates XML tools to support JSON, various binary XML formats, HTML tag soup, etc. without insisting that everyone play by the XML syntax rules.
Michael Champion | February 28, 2007 07:59 PM
So I’ll just come out and say it: XML doesn’t matter!
Okay, yes it does. But not in the same way people seem to think it does. In this regard, I agree 100% with what both Kurt and Mike (Mike’s comment stems from Kurt’s recent above-linked post) have to say on the matter.
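To put a small example behind that agreement: the same tree of information can ride over angle brackets or over JSON, and the tooling shouldn’t much care. A sketch, with made-up sample data:

import json
import xml.etree.ElementTree as ET

# One tree, two wire formats.
book = ET.Element("book", title="Example Title")
ET.SubElement(book, "author").text = "A. Writer"

as_xml = ET.tostring(book, encoding="unicode")
as_json = json.dumps({"book": {"title": book.get("title"),
                               "author": book.findtext("author")}})

print(as_xml)    # <book title="Example Title"><author>A. Writer</author></book>
print(as_json)   # {"book": {"title": "Example Title", "author": "A. Writer"}}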
In a follow-up comment a while back, I posed the following question,
Update: So, Brianary now gets the award for not only making me laugh by finding creative ways to mock my mockery,
Girl Zune (to Zune): What are you doing?
Zune: I’m squirting a song to you!
Girl Zune: That’s gross. I don’t see it.
Zune: Oh, I guess the label that “owns” this song didn’t allow squirting.
Girl Zune: Stop saying “squirt”.
(note: you have to admit, that’s pretty funny ;-))
but also for providing one of the more honest comments to the original post: One that highlights yet another true competitor: The iRiver.
I’m not interested in DRM crap that prevents me from using a UMS driver to sync my player with RoboCopy. My 1Gb, 40-hours-on-a-AA-battery iRiver iFP-799 works just fine, thanks.
Folks, this isn’t about FanBoy’ism > This *IS* about finding ways to highlight the fact that there is more to life than just the iPod, and as such, *encourage competition*.
Do I love my Zune? Yep! But while I have never owned an iRiver (maybe I should change that?), to be honest, I don’t recall *ever* hearing *anything bad* about them. And to be even more honest: I’ve only heard *REALLY GOOD* things about iRiver. < Maybe you should check them out?
Sounds like a pretty safe bet to me.
Thanks for the laugh and the honesty, Brianary! :D
[Original Post]
While many of the follow-up comments to the iPod/Zune > Mac/PC parody disagreed with the premise behind the parody, you can’t help but laugh and provide credit for those who actually took the time to come up with something more creative than the typical “I think you’re the best writer in the whole world! You should be writing TV commercials!” M. David FanBoy/Girl club member comments which tended to be the norm. ;-)
As such, here’s some of what seem to be the better, more creative efforts to provide opinions of differing persuasion,
I did my prognostication schtick at the beginning of the year, but I thought given the resuscitation aspect of this particular blog, that I would go back and look at things in a little more depth, as I’m seeing a number of interesting, and in some cases disturbing trends that are unfolding right now, things that will likely have a fairly significant impact upon the XML development community.
Update: With all of these updates, if you would like to view the original post without the need to find it first, please do. That said, please keep in mind the fact that 1) It’s a parody. 2) A parody is intentionally written to mock. 3) There are some decent follow-ups that you may want to take into consideration beyond this (for example, what follows). I believe it would be worth your time to come back to them if/when time allows. 4) Thanks for keeping these three things in mind. :)
So two more things,
1) I’ve followed-up this post with a listing of what I feel to be the better, more creative attempts to mock my mockery. In short, if you are going to spend the time to write a comment that disagrees with the premise behind something I might post, at least do so creatively, or with solid factual reasons to back up your feelings (or both.)
2) To the second point,
Hey M Dave. Enjoyed the post. Not a Zune fan, but appreciate the interface as it looks good. I haven’t spent a lot of time with it nor will I buy one, but at least I’ve got good reasons beyond the nonsense I’ve read in some of the replies above. People, good tech is good tech, and the fact that some device isn’t more popular OR isn’t the one you chose doesn’t make it crap. My personal reasons for not buying one are:
1. Doesn’t currently support podcasts. I’m a podcaster so this is a feature I want. Chances are this will be fixed in either the firmware or with the next gen.
2. Microsoft’s payment to Universal music. While this shouldn’t be a showstopper as it doesn’t come out of my pocket directly (the price didn’t go up as a result of this), it really ticks me off and is a bad precedent.
3. Doesn’t work with a Mac. And yes I know that the iPod also didn’t work with Windows PCs when it first came out. If I was a Windows user, I wouldn’t have bought an iPod back then either.
Thanks for the update on Erica Sadun. I reviewed her iMovie books on Amazon and at MyMac.com and gave them thumbs up because of how great they were. Here’s hoping that she’ll do a new one for iLife 07 (whenever the hell Apple decides to release it)
The above comes from a recent follow-up from Guy. These all seem like legitimate concerns to me. Maybe they do to you as well? Not sure, but having this information brought to the top of the post seemed like a good thing to do to ensure you had an ever more solid base of information to make any future purchasing decisions on.
Thanks for the follow-up, Guy! Oh, and regarding Erica: *COMPLETELY* agree! Erica is a one-in-a-billion talent. We geeks are lucky to have her as one of our own. :D
Håkon Wium Lie’s recent CNET column is entirely dodgy in its details, but solid in its ultimate premise (can a premise be ultimate?): in all the talk about ODF and OOXML, it is important not to lose track of HTML’s potential and actual suitability for much document interchange.
I’ve endorsed this position many times, but it is worth stating it again. For simple word processing style documents, if you need interoperability (and you want to get it by restricting the kinds of structures in the document so that the documents can be read by many different applications and be easily repurposed), then HTML is the format to consider first: validated, standards compliant XHTML in particular. Think of it in terms of a continuum, with HTML at one end (simple WP documents) and PDF at the other (full page fidelity but read-only): HTML, ODF, OOXML, PDF. And certainly not to forget the ultimate premise(!) of markup: to rigorously label the important information in your documents according to its rhetorical and semantic structures, which sometimes simply requires custom schemas and microformats, extending or augmenting or even replacing the standard formats.
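To be concrete about what “validated, standards compliant XHTML” means at the simple end of that continuum, here’s the sort of minimal skeleton I have in mind (the content is, of course, just filler):

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "https://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="https://www.w3.org/1999/xhtml" xml:lang="en">
      <head><title>Interchangeable document</title></head>
      <body>
        <h1>Interchangeable document</h1>
        <p>Simple, well-formed structures that any consumer can parse and repurpose.</p>
      </body>
    </html>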
On that last point about custom schemas, Lie has a great line: speaking of ODF and OOXML … I’m no fan of either specification. Both are basically memory dumps with angle brackets around them. Lie thinks this is a bad thing; I think it is a necessary thing: sometimes you want to save only what can be read everywhere (the case with HTML save), but usually you want to save everything that is in your document, so that when you next open it, it is exactly the same publication you saved.
When looking at any writer on standards at the moment, it is good to establish point of view. Lie is employed by a competitor of Microsoft, Opera Software, whose business is based on standards from W3C, not ISO. It is not shocking that his response to the issue of Office formats at ISO revolves around promoting that applications should follow W3C standards. It is not as much a non sequitur as it seems. Lie is the inventor of CSS: it is hardly surprising he does not want it sidelined, especially if organizations or governments adopt ISO formats ahead of W3C ones.
Perhaps it is time for W3C to take ISO seriously, befriend it, snuggle up to it, and put the core Web standards through some kind of fast-track procedure as well? Deal yourselves a better hand! The world of fast-track that ISO JTC1 has, in its wisdom, launched us into is intended to allow specifications from consortia that corporations can participate in and dominate (W3C, OASIS, Ecma, etc.) to get promoted up to ISO standard level, if national bodies vote to accept them. This is because ISO sees its core activity as enabling agreement, not sniping at rival brands.
ISO has taken a more constructive approach than denigrating the boutique standards consortia: it does not denigrate Ecma merely because it is designed to help companies make their technologies public with copyright-free specs fast; nor W3C because it may be extra-accommodating to the larger fee-payers (such as the ‘phone calls’ that Lie mentions); nor OASIS because its procedures made it susceptible to ‘branch stacking’ to favour one group’s technology. Because, at the end of the day, ISO votes are very hard for commercial organizations to manipulate. There are simply too many countries. Where manipulation is possible, of course, is when there is the appearance of a grassroots movement in many countries; however, at least some national standards bodies (and their committee members) take a dim view of lobbying on non-technical issues, and certainly of single-issue committee people with no real interest in promoting standards.
Update: The first part of “Week 1 : The Zune Experience” (with more to follow later today) is now available @ https://dev.aol.com/blog/mdavidpeterson/2007/02/26/week-1-the-zune-experience
[Original Post]
So as I blogged about last Thursday, I received the Zune I was awarded for being one of the first 10 folks to create and publish a VHD-based instance of their rPath Linux-based project. In the 10 days since, I’ve realized a couple of things,
1) “WOW! You think maybe you could turn up the quality rating the next time you post a picture of yourself so you don’t look like a 14 year old going through puberty?” Or is it just the angle I’m looking at it from again, this time from a different monitor?
Well, regardless, my apologies if I scared you, your children, loved ones, or possibly any of your pets due to concerns over catching “Whatever the hell that is on his face! Beth, get some rubbing alcohol! John, *DON’T* touch the screen until we disinfect it!”
Yikes!
So, on to the next item on the list,
2) Zune ROCKS!!!
As mentioned at the bottom of that same linked post,
The debate between ODF and OOXML adopters even on these pages has definitely managed to clearly push people into one or the other camp, to the extent that I find it amusing to see the degree to which both sides have rallied around their respective flags. I’ll freely admit that I am very much in favor of seeing ODF’s acceptance as an ISO standard - it has, in most cases, respected the principles that I myself believe about standards, to wit:
Programming, at its core, is the willful manipulation of metaphor. This may sound, perhaps, like a lesson more appropriate for an English literature class than a column on the nature of coding, but that statement nonetheless not only describes the sum of software evolution for the last fifty years but likely also describes the arc of computing over the next fifty. Metaphors are lazy tricksters, convenient mnemonics that become new realities as people forget the reason for the mnemonics in the first place. The shifting of magnetic fields within transistors becomes tokenized as numeric codes, which in turn receive the first level of nomenclature as short instructions of assembler code.
Yet assembler by itself does not occur in isolation — patterns emerge, and those patterns can be named, and codified in turn, providing the second level of abstraction. We build parsers to convert these abstractions into the appropriate characters, and the parsers then define languages such as C … but only when the systems become fast enough and efficient enough that the compilation process makes sense. Lines of code form patterns which get resolved as functions, and functional programming in turn creates libraries of code that pave the way towards the first level of object-orientedness. Languages such as C++ emerge, and Java, and in time others as well. Yet even here the levels of abstraction begin to fail when the complexity of the frameworks becomes too large, too all-encompassing for any one person to ever completely articulate.
I’m a democrat (not in the US capital D sense): I think regulations should be made transparently and fairly by wise heads with elected oversight, administrative checks and balances, and judicial appeal. So I am horrified by the idea that standards bodies should see themselves as lawmakers. Standards bodies make standards that fit technical, editorial and administrative criteria, under the particular articles of association of their group, different in each nation and body. When some part of some community would benefit from an agreed, formalized, vetted and published agreement about some technology, it becomes a standard.
To shift the job of standards-makers to being regulators seems terribly anti-democratic to me: standards bodies simply are not constituted in any way to be democratic. Standards bodies are technocratic, and rightly so, and making them regulatory agencies too could only replace the technocracy (of harmless drudges) with a plutocracy.
It is the job of regulators to decide which standards to adopt or not, and why and where and when and for which uses. They have (notionally) the mechanisms for accountability (at least in democratic nations) and for preventing petty tyranny, whether of a minority against the majority or of the majority against a minority.
Patrick Durusau, editor of ODF, asked me to restate my thoughts on what “contradiction” should mean at ISO. I had mentioned my views in an SC34 meeting last year. This topic is, of course, of interest right at the moment, because the Ecma proposal for OOXML is at the stage in its acceptance process where the process says it should be checked to make sure it doesn’t contradict other standards.
I take a fairly strict view of “contradiction”. Anything else works against fairness of process.
A contradiction is where
A contradiction may have negative effects, such as user confusion, but it is not the negative effects that make something a contradiction; a highly technical standard will confuse anyone. It is the direct contradictions in the text of the standards that are involved.
In a recent blog entry, “The Hardware of Tomorrow Versus the Platform of Tomorrow”, Joe Walker raised some important Ajax issues. He talks about the increasing multiprocessing capabilities of today’s hardware, and web browsers’ inability to take full advantage of it.
He mixes two separate issues, talking about the lack of threading/concurrency support in the Javascript language, and also the lack of multi-threading within web browsers.
It’s this second issue that I want to explore here, particularly this comment:
“The problem is that web-browsers are a step backwards as far as multi-threading goes. In Javascript there is no such thing as a new thread, and worse than that, the entire platform (i.e. a browser) runs a single JavaScript thread. If a script in one window goes into a tight loop, or runs some synchronous Ajax then the browser HTML display freezes.”
Hmmm. That last sentence started gnawing at me. I’ll leave the “tight loop” problem for another day, but is it true that a synchronous XMLHttpRequest call will “freeze” the browser?
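For reference, here’s the shape of the call in question, as a minimal sketch; the URL is a placeholder, and older IE would need new ActiveXObject("Microsoft.XMLHTTP") instead:

    var xhr = new XMLHttpRequest();
    // the third argument is "async": passing false makes the request synchronous
    xhr.open("GET", "/some/slow/resource", false);
    xhr.send(null);       // script execution blocks right here until the response arrives
    alert(xhr.status);    // runs only once the response is back

The script itself certainly blocks at send(); the question is whether the rest of the browser has to freeze along with it.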
If, like me, you’re a subscriber to Darwin, Dawkins et al, then you probably assume that there’s only one purpose for your existence: to reproduce.
With that in mind, when you choose a programming language for a task, you shouldn’t be concerned about performance, security, efficiency, documentation or technical support, but rather the ultimate question of: “which language will give me the best chance of procreating?”
May not be much of a title, but at least it speaks the truth.
But it is interesting that at least on the face of what the Wikipedia guidelines say, the entire premise of all the newspaper articles is actually wrong. We are not talking about a conflict of interest that is banned, we are talking about, at the most, the potential of an appearance of a conflict of interest for which there are non-absolute guidelines.
I wonder if we will see any newspapers or press printing retractions apologizing to me. It is what I would like. They have published my name far and wide in connection with shady allegations. A headline like “Microsoft sounds out guy to improve entries as allowed by Wikipedia rules and says it is no secret and the guy discusses it in a blog but still hasn’t done any edits and hasn’t talked money yet” is not much of a headline, is it? If you look at the AP article at CNN, it sets up Jimmy Wales’ comment that “paying for copy is a no-no.” But they are not paying for copy, they are paying for me to improve technical material on a prospective ISO standard (that they and others would be using.)
DISCLAIMER: None of what I write on this blog or elsewhere can or should in any way be seen as the opinion(s) of O’Reilly Media and/or any of their affiliates. This is stated elsewhere, though it doesn’t hurt to make sure this point is understood.
DISCLAIMER: The opinions expressed here are my own. My employer is me; I’m an independent contractor; I am not paid to write blog entries for O’Reilly, though have signed contracts in the past and plan to sign more contracts in the future to write other material such as books and articles; I write entries here on my O’ReillyNet blog because I,
The news is something I’d rather despaired of ever hearing - XSLT 2.0 is now a Recommendation - big, capital R, with XPath right there beside and XQuery making its formal debut. It’s been a long time coming, but I suspect in many ways that the timing couldn’t be more perfect.
XSLT 2 started out as XSLT 1.1, an effort to try to solve some of the nastier, thornier issues of XSLT 1.0. Getting rid of the damned ext:node-set function, a non-standard hack that became practically de rigueur in XSLT processors, because there are many times when you want XML fragments to just act like XML. The introduction of an <xsl:function> tag so that you could call templates from within XPath. Multiple output serialization, support for tokenization and regular expressions. Take a look at EXSLT (https://www.exslt.org) sometime, and you can see what many of us playing with XSLT 1.0 back in 2002-3 were seriously hoping XSLT 1.1 would become.
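To see why <xsl:function> plus tokenization was worth the wait, here’s a minimal XSLT 2.0 sketch; the function name and its namespace are my own invention:

    <xsl:stylesheet version="2.0"
        xmlns:xsl="https://www.w3.org/1999/XSL/Transform"
        xmlns:xs="https://www.w3.org/2001/XMLSchema"
        xmlns:my="https://example.com/fn">

      <!-- a user-written function, callable directly from any XPath expression -->
      <xsl:function name="my:initials" as="xs:string">
        <xsl:param name="name" as="xs:string"/>
        <!-- tokenize() comes with 2.0's new regular-expression support -->
        <xsl:sequence select="string-join(
            (for $w in tokenize($name, '\s+') return substring($w, 1, 1)), '')"/>
      </xsl:function>

      <xsl:template match="/">
        <xsl:value-of select="my:initials('world wide web')"/> <!-- outputs: www -->
      </xsl:template>
    </xsl:stylesheet>

No ext:node-set gymnastics, no calling templates by elaborate indirection: just a function you can use anywhere XPath is allowed.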
My blog An interesting offer - get paid to edit Wikipedia made it to the most downloaded page on oreillynet, was Slashdotted, and even made it to CNN.
Here’s the way the script would have gone in a sane world:
Wikipedia: Microsoft eats babies!
Microsoft: No we don’t
Wikipedia person: Tell it to the hand!
MS: Are you interested in correcting this?
Rick: Sure, lets make sure everything is upfront. I know, I’ll publish a blog called “An interesting offer - get paid to edit Wikipedia” to start the conversation.
Microsoft: No problems.
Wikipedia person: oh, we have conflict of interest rules
Rick: Oh, OK. Whats the best way to proceed then?
Wikipedia person: A or B or C. But we have a problem in our procedures here, so maybe D. In this case, E is OK too.
Here’s the way it went instead
Wikipedia: Microsoft eats babies!
Microsoft: No we don’t
Wikiepedia person: Tell it to the hand!
MS: Are you interested in correcting this?
Rick: Sure, lets make sure everything is upfront. I know, I’ll publish a blog called “An interesting offer - get paid to edit Wikipedia” to start the conversation.
Microsoft: No problems.
Journalist 1: BLOGGER REVEALS BRIBE ATTEMPT!
Journalist 2: SCANDAL! ATTEMPT TO SUBVERT WIKIPEDIA DISCOVERED!
Journalist 3: BABY EATERS STRIKE AGAIN!
Journalist 4: WHO IS WORSE: THOSE THAT EAT BABIES OR THOSE THAT SELL THEM HERBS?
Journalist 5: WIKIPEDIA BOSS IN COMA!
Journalist 6: CONSPIRACY BLOGGER TO JOIN CELEBRITY SURVIVOR ISLAND
Wikipedia reader: Oh, I didn’t know that Microsoft eats babies.
Unscrupulous competitor: Excellent, Excellent. Release the flying monkeys and pass another baby (urp)
From the Official Blog of the Open Document Foundation. [Speaking, I suppose, to Microsoft:]
I have a counter offer that ISO/IEC might consider: Give us the keys to those legacy binaries and the documentation for the new MSXML InfoSet binaries that first appeared in Microsoft Office Excel 2007, and we’ll give you international standardization for EOOXML. A fair trade, I think, because it will break the monopolist’s grip, level the competitive playing field, and restore competition wherever desktop, server and device systems need to interconnect and exchange information.
Wait a minute. I thought OOXML had so many technical flaws it shouldn’t become a standard! Now you are saying that these alleged flaws can be brushed aside by some other horse trade? Err, doesn’t that mean they are not, in fact, showstoppers to you at all? If Microsoft gives you something else, magically they will become features not bugs? Breathtaking cynicism.
My previous weblog An interesting offer has generated more vitriol than the last time I suggested that torture wasn’t entirely wonderful. One writer even speaks of 30 pieces of silver. I suppose this casts ODF as Jesus, ISO as Pontius Pilate, me as Judas, and MS as Satan in the guise of an angel of light called Ecma. Are OASIS the apostles?
Now, ODF does have a kind of biblical connection, I guess: its universally respected editor Patrick Durusau used to work for the Society of Biblical Literature. But I don’t think the virtue goes out of Patrick into ODF (to compare it to the woman with the issue of blood, one of my favourite passages) to the extent of making 30 pieces of silver an analogy that is anything other than offensive. But that is probably the point.
Another writer described OOXML (I think slightly tongue in cheek) as “horribly evil.” I saw that movie Blood Diamond last night: excellent. Child soldiers being hooked on heroin and being forced to kill is horribly evil. I don’t believe file formats reach that standard. That movie suggests that before buying diamonds, we insist on getting proof that they are not from conflict zones, where they fund real wars.
My first computer was a Mac Plus. Loved it. My second computer was an AT&T Unix PC running System V. Loved it long time. My third computer was a Sparc running Solaris or SunOS. Loved it. At work I run Linux, Open Office, Firefox, Eclipse, etc. No drama. For the last six years I have been running a little company making Java programs. Love Java. I do a little open source development, in particular with the Schematron program (quite like it!), but I have also contributed some code to the Flamingo/Substance project over at JavaDesktop, which provides novel looks and feels and more modern GUI components.
The only time I use Microsoft products is on my laptop at home (a present from my dear old Dad), where I need them to run the SynthEdit program for making virtual synthesizers. Oh, I occasionally also use a ten year old Microsoft C++ compiler, to make some DSP filter code: I have released about 80 filters open source this way. I’m not a Microsoft hater at all, it’s just that I’ve swum in a different stream. Readers of this blog will know that I have differing views on standards to some Microsoft people at least.
As a regular participant in ISO standards work, on and off for more than a decade at my own expense, it has always frustrated me that the big companies would not come to the table and make use of ISO’s facilities. So I am a big fan of the Mass. government’s push that governments should use standard formats only. I know some of the ODF people; I had some nice emails with the ODF editor over Christmas, for example, and Jon Bosak asked me to join the original ODF initiative at OASIS (I couldn’t due to time, unfortunately.)
So I was a little surprised to receive email a couple of days ago from Microsoft saying they wanted to contract someone independent but friendly (me) for a couple of days to provide more balance on Wikipedia concerning ODF/OOXML. I am hardly the poster boy of Microsoft partisanship! Apparently they are frustrated at the amount of spin from some ODF stakeholders on Wikipedia and blogs.
I think I’ll accept it: FUD enrages me, and MS certainly are not hiring me to add any pro-MS FUD, just to correct any errors I see. If anyone sees any examples of incorrect statements on Wikipedia or other similar forums in the next few weeks, please let me know: whether anti-OOXML or anti-ODF. In fact, I had already added some material to Wikipedia several months ago, so this is not something new; I’ll spend a couple of days mythbusting and adding more information.
Update: Some keen words-of-wisdom from “critic”,
It’s still an immutable law of economics that high margins can only be supported by monopolies. As the monopolies erode (as they are now doing because of lowered economic barriers to entry in everything from recording and post-production to distribution), consumers have more choice, and they don’t choose restrictions without benefits.
Nicely stated!
[Original Post]
Close to the end of the third episode of The Matrix, Agent Smith, after “defeating” Neo, suddenly realizes that “this isn’t how it ends,” only to then realize that what he has actually done is defeat himself.
Okay, so there’s obviously more to the story than this, but if there are lessons to be learned by this exchange, at least one of them is this,
You don’t *HAVE* to defeat evil. Evil will defeat itself.
That’s not to say that we shouldn’t continue to fight all that which is evil. Without a fight, “Evil” transforms itself into the status quo, or, in other words, without a fight, what was once considered evil becomes so ingrained into society that it is no longer seen as being evil, and instead, normal.
Dare Obasanjo aka Carnage4Life - Article in NY Times on Why DRM is Evil
The much-rumored Apple iPhone was finally unveiled by Steve Jobs at MacWorld yesterday. It impressed a lot of people, and Wall Street too. But slowly the buzz is fading out, and people (including me) are talking about its disadvantages. Since the announcement, I have been talking to my friend Kiran Mudiam about the pros and cons of the iPhone. Here are the excerpts from our conversation:
First let’s talk about the advantages:
* Seamless integration of a PDA, phone and Internet device.
* Gives the user seamless integration for all the content they have already acquired from the iTunes store.
* They own all the pieces of the puzzle now. Server, PC/Mac, handheld device and DRM
* 720p HD Quality TV in your palm.
* Perhaps it will bring the much-hated “touch screen phone” back to the US market.
* Perhaps it will provide good and healthy competition to the big 4 phone manufacturers.
Here are the disadvantages that I can see in the iPhone:
It’s too expensive
I don’t think $599 with a 2-year contract is a cheap price. I guess the carrier is bearing at least $200 of the cost of the iPhone, so if you want to get the phone without the carrier (initially it may not be possible) you will end up paying around $800, which is as expensive as a laptop. Of course this is going to be the initial price, and it will come down once it reaches the masses (if it can). I don’t think it’s priced for the mass market. Though it provides iPod+Phone+Internet communicator, most people are not going to use all the features that the iPhone has, and hence won’t pay $599 for the features that they actually use.
For enterprise people, but missing enterprise functionality
Looking at the price and the Internet communicator features, they are targeting only enterprise people. But it doesn’t provide connectivity to enterprise data. The majority of enterprises use either Microsoft Exchange or Lotus Notes, and the iPhone cannot be used with these. It provides Yahoo IMAP push, which is great. But I don’t think enterprise people are going to store their data on Yahoo servers. So for them Yahoo IMAP push is useless.
Full-fledged Web browser support? Not really
Based on the presentation, it doesn’t seem to be a regular browser. If it were like a regular desktop browser, it should support all the features, like audio, video, etc. Maybe it does. But Jobs only demonstrated the text/image based New York Times site. If it is a regular browser, then Google Maps should work in it. So why did he demonstrate a separate Google Maps application? Some may argue that it’s a separate app optimized for the iPhone, so that people can call the phone numbers from Google local results directly. Maybe they are right! Let’s assume that it supports all the features of desktop browsers: do you think we are really going to use Web 2.0 apps on the iPhone through the Cingular EDGE network (since it’s not 3G compliant), which provides data speeds ranging from 80 kbps to 110 kbps? Nope! So the only option in this case is using it via WiFi.
Exclusive for Cingular, what about the others?
They announced an exclusive partnership with Cingular until 2008. That means nobody else can carry the iPhone until 2008 in the US. How many of Cingular’s 58 million users will buy this phone, and what about the users like me who like the iPhone but can’t switch carriers because of contracts and other things?
Other disadvantages
People (including O’Reilly Mac DevCenter bloggers) are already talking about these; some of them are:
* No 3G; it works on Cingular’s EDGE network, not on Cingular’s 3G HSDPA network.
* No OTA.
* Cannot install third party software.
* No expandable memory. I think 8GB is enough memory, but for iPod video lovers, it’s a disadvantage.
* No removable battery. If the battery breaks, you send the phone to Apple instead of them sending you the battery. So, no problem there, right? It’s definitely a disadvantage for people who use their phone a lot and rely on an extra battery. But how many people use extra batteries for their mobile phones? A very small number. So I don’t think it’s a big deal.
What do you think? I know most of you disagree with me. If you do, please prove me wrong. I am more than happy to hear that I am wrong, because I love this phone.
Update: Ric (who links to JSON.com, which seems to be a new blog about JSON-related items… SUBSCRIBED!) provides a nice summary in a comment below,
XML is
1) DOCUMENT centric
2) Well known with lots of tools
3) FAT (SOAP is)

JSON is
1) SERIALIZATION of a structure
2) Less known and not so many tools
3) Thin (No client side libraries needed)

JSON is also a form of remote scripting (no XmlHttp request)

Xml is much more mature: quite a LOT of thought on namespaces, unicode, schemas, external file inclusions, and binary attachments (I am working on Base64 encoding in JSON, so that should not be an issue.)
Sounds about right to me. Thanks, Ric!
[Original Post]
… and then it hit me…
The Arguments Are Over: “There used to be an argument about whether platform-neutral, language-neutral data formats were important, or whether distributed objects were the right answer. That’s over: HTML, XML, JSON.

There used to be people who argued that network interchange formats shouldn’t be text-based, but use binary where possible, for efficiency. That’s over: HTML, XML, JSON.”
When SOAP is overkill, is JSON the lightweight answer? Seems to me it’s at very least a possibility, and if I were an Anti-SOAP kind of guy (which I’m not, by-the-way… SOAP has its place. If you need it, use it; if you don’t, don’t.) the “war” I would be waging would not be JSON vs. XML, but instead JSON vs. SOAP.
Think of it this way,
JSON: JavaScript Object Notation
SOAP: Simple Object Access Protocol
Now before I get myself in any trouble, I’m not advocating that a war be waged against SOAP (please see note from above regarding my feelings on this topic, as well as this three part article on my personal blog for more info.) What I am suggesting, however, is that if you are going to wage a war of any type, shouldn’t you at very least be comparing apples to apples?
Just food for thought…
You *ROCK*!!!
Details to follow later today, but when you add our offline campaign to the online campaign (and assuming we solidify some pledges made in the final week), we will have bested our goal of $300,000 by some $200,000 — raising over $500,000 in total. Stay tuned for some interesting surprises (and feel free to give some more in the meantime.)
Announced today, Microsoft has apparently applied for a patent on “web feeds”. This is one of those announcements that makes people roll their eyes in puzzlement and disgust - in this case, I think, with perhaps justified anger. The issue in contention is basically the Microsoft CDF format, which appeared very briefly in Internet Explorer, largely in reaction to similar formats appearing in beta versions of Netscape. Both companies were attempting at that time to exploit the perception that the Internet would end up becoming the next TV (true, ironically, though by a very circuitous route that took a decade or more to make happen).
Most of the analysis I’ve seen on this would tend to push the idea that Microsoft is doing it to flush out patent trolls in preparation for the consumer rollout of Longhorn/Vista. It’s unlikely that they will succeed in actually gaining the patent - the rather late date of the filing, and the fair amount of prior art that existed even at that time, both mitigate against it - but I think there’s another facet to the attempt that’s bothered me about XML-oriented patents for quite some time.
What, exactly, constitutes a desktop, and for that matter, what exactly constitutes an operating system? I beg a bit of an indulgence here in order to talk about a couple of cool apps and a couple of very disturbing developments, then would like to come back and reconsider these questions in light of this.
I am writing this particular post in a little Google applet called Google Notebook which is nothing exceptional (basically just a rich-text-edit control with a small piece of back end storage) that’s nonetheless quite exceptional in what it implies.
re: Patent Applications in the RSS space
>…We have always fully acknowledged the innovators and supporters of RSS, like Dave Winer, Nick Bradbury and many others…
I think you misspelled ‘innovators’ - let me help. Did you mean ‘inventors’?
You can get a spellchecker for Firefox here - https://spellbound.sourceforge.net/ as that’s a nasty habit to get into, and would be worthwhile correcting real soon…
Sunday, December 24, 2006 6:33 AM by TH
Huh. Interesting comment. COMPLETELY FALSE! But interesting, none-the-less.
Actually, that’s not true… “You can get a spellchecker for Firefox here - https://spellbound.sourceforge.net/ ” is true. The rest is false.
How so?
From: Aristotle Pagaltzis (Dec 21 2006, at 18:52)
Anders:
It’s a stretch to call the man who designed both RSS 2.0 and OPML an “XML partisan.”
I have no desire to get myself in any more trouble than I tend to get myself into, so I will leave my comments at a combination of the title and: Yeah, what Aristotle said. ;)
… Uhh, since when were they in competition with one another?
As a 10 year XML veteran, and informal minister of propaganda for the “XML Team”, aren’t I supposed to leap to XML’s defense? I just can’t summon the energy.
And I can’t blame you, Mike! Why waste the energy defending XML when there’s nothing to defend!?
Quick show of hands from those who subscribe to a JSON Web Feed?
Okay, how about an XML Web Feed?
Okay, now let’s try this: how many of you find that, if you don’t want and/or need any of the extra stuff that working with the XML source provides, JSON is a really nice way to serialize XML data (e.g. a Web Feed), or any other source of data/data format for that matter, such that you can be more productive with less code inside of your web apps?
So tell me again… Where’s the comparison? Where’s the battle?
If anywhere, it must be in your mind (speaking to those who believe it’s one over the other, but “not both!”), cuz’ it does not exist! GET OVER IT!
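To put a little code behind the “less code” claim, here’s a rough sketch of pulling the first entry title out of the same feed served both ways; the feed shape and the variable names are mine, and assume jsonText and xmlText already hold the two responses:

    // JSON flavor: the feed arrives ready to use
    // (eval was the era-typical approach; a proper JSON parser is safer)
    var feed = eval('(' + jsonText + ')');
    var title = feed.items[0].title;

    // XML flavor: parse, then walk the DOM for the same value
    // (DOMParser is Mozilla/Safari; IE wants new ActiveXObject("Microsoft.XMLDOM"))
    var doc = new DOMParser().parseFromString(xmlText, "application/xml");
    var titleXml = doc.getElementsByTagName("item")[0]
                      .getElementsByTagName("title")[0].firstChild.nodeValue;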
From my own viewpoint, and in my own opinion** here is the list of what I believe to be the 10 most influential people or groups of people in the Information Technology sector in 2006. Please feel free to add/subtract/multiply/divide to/from this list in the comments section (or better yet, blog your list and provide a link in the comments section***).
… don’t get (re)elected.
The Liberty Papers » Blog Archive » John McCain Wants To Regulate Blogs
What would happen if this law is passed and upheld in Court, of course, is easy to predict. Unfettered public discussion forums would, largely, become a thing of the past as most web site operators will not want to invest either the time or the resources into policing every conversation that takes place. Debate and discussion will be limited. All of which argues quite strongly that these regulations would violate the First Amendment.
Dear Current and/or Future Politicians,
Want to get (re)elected?
Mark’s blog item is worth a read: he is working towards an ideal of time-independent validation. I take his essential point to be that different consumers have different constraints as showstoppers, and that it is inefficient, frustrating and wasteful for your input to barf on constraints that don’t affect you in particular. For example, if you are just storing a numeric field in a database now and then writing it out later, you don’t need to care whether it is in a particular range, just that it is a number at all.
I think there are four kinds of validation strategies, with a natural underwear analogy
* schema used for validating incoming data reflects the public interface (tighty whiteys)
* schema used for validating incoming data only reflects the capabilities of the consuming system (speedos)
* schema used for validating incoming data is a looser family schema that gives some kind of version independence (boxers)
* schema used for validating incoming data only validates traceable business requirements (G-string)
Err, OK, let’s forget the analogy…
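Analogy aside, Mark’s numeric-field example is easy to picture in schema terms. A sketch in XSD, with element names of my own invention and the usual xs prefix binding assumed; the consuming system’s schema needs only the first declaration, while the public-interface schema might insist on the second (they would live in separate schemas, of course):

    <!-- loose: all the store-and-forward consumer cares about is "it's a number" -->
    <xs:element name="quantity" type="xs:decimal"/>

    <!-- tight: the public interface also pins down the permitted range -->
    <xs:element name="quantity">
      <xs:simpleType>
        <xs:restriction base="xs:decimal">
          <xs:minInclusive value="0"/>
          <xs:maxInclusive value="100"/>
        </xs:restriction>
      </xs:simpleType>
    </xs:element>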
Rusty has a quote from Wil Shipley today: “Microsoft has nothing to gain by making life better for small programmers… they make all their money selling Windows and Office.”
The subject of the quote is Windows APIs (and is a pretty broad claim for someone who admits to not really having looked at .NET or C#…hmmm) but without taking sides on any technical merits I suspect people will apply the same trope to (MS/ECMA) Office Open XML file formats. And I think they would be dead wrong there: MS needs to enable their army of system integrators to sell MS back-end systems and Windows-based solutions in the new XML-ified, document-exchanging world. So MS is positioning Office as a platform that can both compete with web-based applications and integrate with them.
Think about it: Can MS compete with Windows versus Linux as a platform? Not really: you cannot get any cheaper than free… And can MS compete with Java versus .NET as a platform? Compete, maybe, but not win: they are a two-man conga line… But what competes with Office as a platform? Open Office? Err, perfectly good as far as it goes (I use it!) but certainly it doesn’t actually go very far (I am going to upgrade to the new Open Office 2.1 today… maybe it will solve the current problems I have with unusable arrowed lines and jaggy PDF export.)
The thing about XML is that it reduces the role of the API to being glue for connecting declarative pieces: queries, XML documents, XML transformations, XML configuration, XML GUI. Worrying about crappy APIs is so 90s. (I don’t really mean that, of course. OK, I do mean it a bit.)
If you search Google for…
…then, as a reader of O’Reilly weblogs, you’re probably going to see the kinds of results that you expected to find. But as the majority of people in the world aren’t web developers, are these results what most people would expect? If I asked someone in the street to tell me the most relevant thing they could about ‘ruby’ or ‘python’, it almost certainly wouldn’t be to do with the programming language.
From a quick glance at job vacancies on the Guardian website (one of the major newspapers in the UK), only about 1.8% of jobs have ‘development’ or ‘internet’ in them (limiting the search to just IT and Telecoms). So, in theory, the Google results would only be relevant for about 2% of the general population.
Which isn’t exactly the whole story, of course. Most people searching for ‘python’ on the web today are almost certainly searching for the programming language (i.e. the web population is different to the general ‘offline’ population). So is Google correct? Or is this a chicken and the egg situation, and - like the Nintendo Wii - the web will primarily be targeted at the hardcore until someone dares to design a system that targets the mainstream ‘non user’ over the tried and tested current user base? What other side effects could be caused by ‘web developers’ (or techies) being the people who put the most content on the web, and hence disproportionately telling Google what is ‘important’?
Some people are startled to find that, amidst all the talk here of OASIS (now ISO) ODF versus ECMA Office Open XML, China has developed its own independent office document format, Uniform Office Format (UOF). I am not startled, but delighted, and here’s why.
Update: Firstly, we’ve got ourselves a QOTD like none other!
len:
YouTube? Gimme a break. It is the ultimate expression of ADD for the unsophisticated web surfer.
YES!
Secondly, WHOA!!! << That's one helluva close-up! If you feel a sudden sense of fear overcome your entire being... You're not the only one! YIKES!!! ;)
Thirdly,
But then the last year or so has been quite a surprise for everyone, and one of the key messages is the groundswell of support for open standards and consistent browsers. In today’s climate it would take an incredible marketing effort to ’sell’ XAML as the ‘new language’ of the web, and you do wonder what exactly it would gain them anyway.
That is a VERY good point! And now that I think about it, I believe Mark is right on the money >> Regardless of whether or not XAML is the “superior” technology, attempting to sell it as a replacement for XHTML, as opposed to an enhancement that provides a tool for creating cross-browser/platform web-based applications, is a bad idea. In other words, for the average web presence, XAML is EXTREME overkill, and the average web presence isn’t going to suddenly go away, replaced instead by weblications with super human powers. Text is text, and in a majority of cases, the simpler the presentation, the better.
Or to put it another way: If what I want to read requires that I first load an additional application that doesn’t already reside on my machine each time I visit the site >> Forget it… It’s not going to happen. No matter how slick the interface, each and every millisecond required to load a page means fewer and fewer people are going to stick around long enough for that same content to load. And in a world where >> CONTENT << is *KING*, or in other words, where content is what generates the revenue that keeps the web churning, regardless of how “cool” the eye candy is, if the result means lost revenue, then, once again…
Forget it… It’s *NOT* going to happen.
Of course, maybe MSFT already realizes this, and has no plans for making an attempt to replace what works with what they believe works better. So, again, as Mark points out in the same linked comment,
Of course, it might make it more attractive to support, too. Standards are a double-edged sword for the big guys like Microsoft, since they would all ideally like us to only use their proprietary formats, but they generally realise that this isn’t the way of the new internet. It’s interesting that we haven’t heard much about XAML in the recent period, which either means that MS are taking advantage of the ‘current obsession with Ajax’ to quietly get XAML ready, or they realise that the ‘current obsession with Ajax’ means that people actually like standards.
Again, VERY well said… Guess time will tell.
[Original Post]
DISCLAIMER: The title might be one of my most pathetic attempts at generating traffic to this blog I have *EVER* made. To my knowledge, there is *NO DIRTY LITTLE SECRET* that Microsoft wishes *NOBODY* knew. And if there is? Well, this ain’t that secret!
With that disclaimer firmly in place…
Windows Presentation Foundation - Wikipedia, the free encyclopedia
Windows Presentation Foundation/Everywhere is a cross platform extension to WPF to provide a subset of WPF features, such as hardware accelerated video, vector graphics, and animations to platforms other than Windows Vista. Specifically, WPF/E will be provided as a plug-in for Windows XP, Windows 2000, Mozilla Firefox, Apple Safari, Linux, and mobile devices.
These extensions will allow the browsers and other applications to use WPF/E graphical capabilities. The browser extensions will be in the line of Macromedia Flash, a highly popular graphic plug-in available for most browsers. Internet Explorer will have native support for WPF in Windows Vista, and will support WPF/E in older versions.
WPF/E will work in concert with XAML and will be scriptable with JavaScript; it will also contain a version of the Common Language Runtime so it can execute VB.Net and C# code.
WW:*
“Blah, blah, blah, yadda, yadda, yawn — We’ve heard it all before, Peterson, now quit your yappin’ and take your meds already!”
M.:
Bite me, WW:*, then take a look at this!
XML has changed the Web dramatically over the last decade, though not at all as originally planned. XQuery, though, is gathering steam to drive a new round of potentially invigorating changes, even as Ajax heads down the JSON path.
The family analogy is an apt one: probably about 80% of my XPathing is pulling information from the current node’s children, and a comparable amount of time in my personal life is spent telling my son to get back into bed.
*GREAT* analogy from piers!
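For readers newer to XPath, the kinship in the analogy is literal; the axes really are named for family relations. A quick sketch, with element names of my own choosing:

    child::section              (the default axis; plain "section" means the same thing)
    parent::chapter             (abbreviated as "..")
    ancestor::book              (any enclosing element, all the way up)
    descendant::footnote        (children, grandchildren, and so on down)
    following-sibling::section  (the later sections at the same level)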
So give Goldfarb credit for his presentations of uhh.. 1991 with the input/transform/output diagrams on them…
Done. :)
[Original Post]
… and ends with…
How would you easily explain the concept of the Semantic Web to someone with basic technical knowledge? Re-factoring something I said in a blog a while ago, I’ve now edited it down to this:
So, it’s a bit basic, but I think that gets the premise across. I’d be interested to hear any suggestions you have for other easy explanations. Similarly, the Semantic Web really needs a re-brand, doesn’t it? Maybe if it sounded cooler or less techy, people wouldn’t be so scared of it. What would you call it?
iWeb? Web 3000? Ultra-web? Ultra-Intra-Web? The Dark Web? (After all, it’s a bit like dark matter, and is intended to be the invisible glue that makes up the majority of the mass of the web). Maybe we should bring the word ‘cyber’ back into fashion, I kinda miss it… Cyberdata! No, on second thoughts… Anyone have any bright ideas!?
I’m spending the five minutes of “extra” time I have this morning (waiting for a response email before I can continue is what’s providing me this snapshot moment :) catching up on as many of my favorite blogs as I can fit in, and stumbled upon this gem from Todd Ditchendorf,
Todd Ditchendorf’s Blog » Blog Archive » XML Power
Q. I’ve tried reading the (XML | SGML | XSL | XPATH | DSSSL | …) specification, but it doesn’t make any sense! There’s too much jargon!

A. Specification authors deliberately obfuscate the text of ISO and W3C standards to ensure that normal people (e.g., Perl programmers) can’t use the technology without assistance from the so-called “experts” who designed the specs.

Fortunately, there is a handy translation table you can use:

    ISO/W3C terminology                Common name
    -----------------------------------------------
    attribute                          tag
    attribute value                    tag
    attribute value literal            tag
    attribute value specification      tag
    character reference                tag
    comment                            tag
    comment declaration                tag
    declaration                        tag
    document type declaration          tag
    document type definition           tag
    element                            tag
    element type                       tag
    element type name                  tag
    entity                             tag
    entity reference                   tag
    general entity                     tag
    generic identifier                 tag
    literal                            tag
    numeric character reference        tag
    parameter entity                   tag
    parameter literal                  tag
    processing instruction             tag
    tag                                command
    -----------------------------------------------

With the help of this table, even Visual Basic programmers should have no trouble deciphering ISO prose.
ABSOLUTELY CLASSIC!!!! :D
Thanks for the laugh, Todd!
Update:
But musicians? When the cops finally came, they protected the labels.
Sorry lads, but you and Tim Bray and the rest of the ‘information wants to be free as long as it isn’t our software or graphics’ can go…
(more below) NOTE: Don’t worry, if taken out of context, the last sentence from above probably seems to be something that it’s not… LOTS of good stuff! I promise, it’s worth the click ;)
[Original Post]
via Sylvain,
As mentioned in my last post,
DRM is one thing. DRM affects us in the here and now, so it’s an issue worth arguing.
DRM? When it keeps us humans from being able to listen, watch, read, and in other forms interact with creative content by locking us into a particular media player/device and locks us out of being able to share that content with others (e.g. “Hey Sam, check out this track from the FooBar Fighters!”), this –
THIS we SHOULD FEAR and FIGHT AGAINST!
Thanks for the link, Sylvain!
The Digital Ice Age - Popular Mechanics
The documents of our time are being recorded as bits and bytes with no guarantee of future readability. As technologies change, we may find our files frozen in forgotten formats. Will an entire era of human history be lost?
That lead-in is about as close to the actual content of the linked story as we are to the last “Ice Age” this planet encountered. Subtle changes in processing software are one thing. But the idea of a hard drive crashing, an online email company going out of business, or the magnetic disk our data is stored on losing its “memory” is quite another. It has nothing to do with the fact that they are stored in “bits and bytes” and everything to do with the storage medium they are stored on, as well as how many places they are stored.
Today we have hackers crackin’ some of the most tightly guarded cryptographic file formats — formats that were specifically designed to keep people from viewing their contents unless they have “permission” to do so — in a matter of hours, days, weeks, sometimes months, and in rare cases, a few years after these formats were first introduced. So the notion of no guarantee of future readability is a flat-out fabrication of the imagination.
DRM is one thing. DRM affects us in the here and now, so it’s an issue worth arguing. But in regards to “we may find our files frozen in forgotten formats”?
If they’re forgotten, it will have been for a reason — for example: NO ONE USED THEM.
Will an entire era of human history be lost?
It’s always possible. But it wouldn’t be because of forgotten formats. Natural disaster, the medium on which data is recorded, or simple human error are all plausible scenarios. But attempting to scare people into believing that in the future people may not be able to read our 4th grade “What I Did Last Summer” report, or even our doctoral thesis on the lifespan of the average floppy disk, because “The documents of our time are being recorded as bits and bytes” is nothing less than FUD (fear, uncertainty, doubt), pure and simple.
The chance of a natural disaster, the storage medium our data is stored on giving up the ghost, or a fly-by-night online email company? These we SHOULD fear.
“The documents of our time are being recorded as bits and bytes with no guarantee of future readability.”
This we SHOULD NOT fear!
Thanks for reading.
Designer Widget Properties » Microsoft .Net and Smalltalk
The Designer widget is now updated with z-ordering working and a new panel for changing several common properties.
[Image: Designer running in IE7 with the property panel opened]

The next updates will include exporting and importing of Xaml to permit Vst sessions to share interfaces through Jabber instant messaging.
SWEET!
NOTE: Okay, so technically speaking, anything that can be wrapped inside of an XML envelope can be passed as a message via Jabber/XMPP, so I guess this kind of *IS* your father’s Jabber/XMPP — But did your father ever do this?
Combine XAML, WCF and Jabber/XMPP and what you have is a *WHOLE NEW WORLD* of XML goodness streaming your direction.
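As a toy sketch of what that might look like on the wire (the payload element and its namespace are my own invention, not Peter’s actual protocol), here’s a XAML fragment riding inside an ordinary Jabber message stanza:

    <message to="designer@example.com" type="chat">
      <body>Sharing an interface...</body>
      <!-- custom payload in its own namespace, per the usual XMPP extension pattern -->
      <ui xmlns="urn:example:vst-ui">
        <Canvas xmlns="https://schemas.microsoft.com/winfx/2006/xaml/presentation">
          <Button Width="80" Content="Play"/>
        </Canvas>
      </ui>
    </message>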
Of course, XML as an *OVER THE WIRE CROSS-PLATFORM DATA FORMAT* is nothing new. In fact, it’s what XML is all about. Add VST and WPF/e to the mix and –
Well, I’ll let you decide for yourself what you believe the next generation of Web-based applications will be built with, but if you want my opinion: Please see the above mentioned acronyms.
Thanks for the sneak-peek into the future here and now, Peter!
Waaaay up there on the list of downright impressive geeky books sits Structure and Interpretation of Computer Programs, known also as SICP.
Part of its gravitas (apart from being technically excellent, that is) is its association with MIT and all those tenacious electrical engineering students. Think you are a hot shot between the ears? Then learn computer programming by doing *Scheme* as your first language. Pascal, Basic, Java? That’s all just wimpy stuff. Real programmers use Scheme. All programming languages are toys by comparison :-)
AMEN TO THAT!
Sean continues,
Okay so I’m over-stating it a bit. Still, it sure is interesting to see Python in the new curriculum over there.
“Can I get an AMEN?!”
AMEN!!! w00t!
Update (May 30th, 2007): Thanks for all of your *enlightening* comments, everyone! But I think the time to kill this thread is long overdue…
[Original Post]
via Sylvain (thanks Sylvain!)
Mi2N - Music Industry News Network
With no advance notice or explanation, Baton Rouge indie rockers Bones have lost their long-established Myspace URL (www.myspace.com/bones) to the FOX Television show of the same name. Bones, the band, has used www.myspace.com/bones for nearly 2 years, racking up close to 20,000 profile views, over 21,000 song plays and over 2100 Myspace friends.
Apparently when they decided on the name MySpace, they really meant it!
NOTE: Apparently the band got it back, but as per my follow-up comment to Sylvain,
regardless of the fact they gave it back, it goes to show the mentality of the folks running the show at myspace…
NOTE-TO-WWW: Go get *YOUR OWN SPACE* (meaning, your own URI) and then link to it from whatever happens to be the latest “rage with the kids.” If this proves nothing else, it proves that nothing belongs to you unless *YOU* maintain control of what it is used for. While technically speaking you only “own” your domain name if you continue to pay the yearly fees, @ $8.95 or less this shouldn’t be a problem for *ANYBODY* to maintain. Unless, of course, your band *REALLY* sucks. In that case maybe you should worry more about practicing than maintaining a web presence, but for $8.95 a year you would have to *REALLY*, *REALLY* suck, so while practice is still important, maintaining your own URI is *MORE* important.
The recent announcement that Sun would be GPL’ing Java has caused an immediate (and well deserved) reaction among the technical blogosphere. It’s another one of those titanic shifts that occur periodically as large software companies jockey for position, one that causes all of the pebbles on the Go board to suddenly shift into a different alignment. I’m sure that these pages will be analyzing this move for months, but I thought I’d get my two cents in now while the topic’s still germane.
Java and XML have long had a significant, albeit somewhat uneasy, relationship. Most of the early XML tools were written in Java, and even today, many of the major ones, from Saxon to a number of the Apache technologies, appear first in Java form before showing up in other languages. Eclipse, the open source editor that is rapidly becoming one of the major development platforms for XML work, is of course fundamentally a Java application, and it is fair to say that Java likely has nearly as many (if not more) developers in its fold as C++.
That being said, it took XML to accomplish what Java couldn’t. Most communication that goes over TCP/IP nowadays is done via neither Corba nor RMI - instead, it’s marshalled XML as either SOAP or REST-like messages (with a growing proportion of JSON added into the mix). AJAX has largely replaced Java applets as the mechanism for providing functionality on the web, and on the server Java’s role is increasingly a supporting one, hosting either XML based services or XML oriented server frameworks.
I have to admit … I knew if I ventured into Microsoft land that I’d likely end up with these responses (and yes Dave, I get the message).
I haven’t done a formal analysis of the release version of IE, nor was the attempt in my last post an attempt to do so. The article I wrote WAS intended to point out a few of my major peeves with the way that Microsoft went about upgrading the browser:
I’m frankly a little disappointed in the feedback on the article, however. Linux and OSS partisans have a reputation for occasionally getting hostile and anal about their particular OS, but it’s a little disturbing to see the same kind of attacks coming from the highly literate readers (I hope) of this site from the Windows side.
I am frustrated with IE because it could have and should have been so much better a browser. It is still the one that most people use, largely because it’s what gets installed by default in any new Windows installation, and many (especially non-technical) people have neither the time nor the understanding to seek out something else. However, this didn’t prompt my last rant … rather, I am finding fault with the user experience of the installation, which is THE front door that most people will end up seeing before using the browser in the first place.
The IE7 auto-upgrade gave Microsoft a chance to win back a lot of the people it has lost in the last few years - wow them with something spectacular, send the user to a page that would put the browser through its paces and show off the positive points, get past the “We are Microsoft and we know better than you do what you want.” attitude and make people wonder if maybe they were wrong about Firefox after all.
They blew it.
Instead, the user experience that was sent out was cold - “We don’t trust that you’re running a valid copy of Windows, so before we go anywhere, we’re going to frisk your operating system!”, “We’ll let you know how soon we can give you back your computer as soon as we’re actually done.”, “Oops, there’s a glitch here that we didn’t anticipate, and on 17.2% of all computers this program will crash.” Programmers may not believe that User Experience matters - I’m USED to Linux programs crashing all the time - but for most non-programmers, the message that comes out of this user experience is simple: “Microsoft is a cold, large, indifferent monolithic corporation that builds shoddy products”. Is that the reality? No, from my own experiences, generally it isn’t. But it IS the perception, and experiences like this only serve to bolster that perception, especially among the non-technical.
I’m hoping to do a formal analysis of Internet Explorer 7 in the near future, and no, I’m not going to be partisan about it - there are a number of good features and upgrades that IE7 has to offer, and I think it is important to highlight these positive aspects, especially given the expectation that IE7 use will spike this week despite the less than sterling installation process.
Kurt Cagle is an author, CTO and software industry analyst for Metaphorical Web, located in Victoria, British Columbia.
Proof that there is somebody from above (with some pull) that still cares about the REST (< sorry ;) of us down here below.
What we are doing
We want to restore the World Wide Web to its rightful place as a respected architecture for distributed programming. We want to shift the focus of “web service” programming from a method-based Service-Oriented Architecture that just happens to use HTTP as a transfer protocol, to a URI-based Resource-Oriented Architecture that uses the technologies of the web to their fullest.
Our project has technical aspects but it’s mainly a job of evangelizing: spreading the good news. Currently the REST philosophy is typecast as sloppy or unserious. This despite the fact that:
Most of the web services the public actually uses are URI-based.
Most Ajax applications are nothing but browser clients for URI-based web services.
Most of the world’s biggest web applications are technically indistinguishable from URI-based web services.

If REST doesn’t work or doesn’t “scale”, then neither does the World Wide Web.
REST is typecast because its practices are folklore. It’s got no canonical documentation beyond a doctoral thesis which, like most holy texts, says little about how to apply its teachings to everyday life. Its technologies are so old and heavily-used they seem undocumented and unsupported when their true power is revealed. It’s like finding out you can pick a lock with a paperclip.
Because it occupies this odd middle ground–familiar yet suddenly cast in a new light–a lot of people have gotten the impression that REST just means “whatever you want to do, so long as you don’t use SOAP”. That it’s a sloppy no-methodology used to justify bad design, malformed XML, and, in particularly troublesome cases, Extreme Programming.
To counter this, REST advocates have come up with a new term, “HTTP POX”, to describe URI-based web services that aren’t RESTful. But that just brings back the arguments about what REST is and isn’t. Is it like pornography, where you only know REST when you see it? Or is it like communism, where if a service fails it must not have really been REST? Can a service be somewhat RESTful, or is that like being somewhat pregnant? How many resources can dance on the head of a pin?
We’re writing a book to codify the folklore, define what’s been left undefined, and try to move past the theological arguments. We’re doing programming to improve tool support and introduce new kinds of tools. We’re doing marketing and memetic engineering to make REST a more fit competitor in the marketplace of ideas. Some may find our methods heretical; others may see no method at all. Personally, we see six: GET, HEAD, PUT, POST, DELETE, and sometimes OPTIONS.
From the same linked page,
If all goes well, REST Web Services will be published by O’Reilly in May 2007. We want this to be the definitive work on the real-world use of REST. If you’re a REST fanatic, we need your input now: your best practices, rules of thumb, and folklore; your review of what we write. If you’re just interested, we need your questions and concerns. Please send email to Leonard (see_same_link@this.post) to get in on this project.
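While you’re composing that email, here is roughly what the uniform interface the project champions looks like on the wire - a minimal sketch in Python against a purely hypothetical bookmarks service (the host, the paths, and the payload are all made up):

    import http.client

    conn = http.client.HTTPConnection("example.org")

    # PUT: create (or replace) the resource at a URI the client chooses.
    conn.request("PUT", "/bookmarks/oreilly",
                 body='<bookmark href="https://www.oreilly.com/"/>',
                 headers={"Content-Type": "application/xml"})
    resp = conn.getresponse(); resp.read()
    print(resp.status)            # expect 201 Created (200 OK on replace)

    # GET: fetch a representation of that same resource.
    conn.request("GET", "/bookmarks/oreilly")
    print(conn.getresponse().read())

    # DELETE: remove it. No method names tunnelled through POST anywhere.
    conn.request("DELETE", "/bookmarks/oreilly")
    print(conn.getresponse().status)

Note the shape of the thing: the nouns live in the URIs, and the verbs are the same handful you already know.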
—
Update: What are you waiting for? GO!!!!!
What do you call a program that gets loaded in surreptitiously and without your approval, has the potential to lock down your computer so you can’t get access to it, takes up significant system resources, and promptly crashes upon running? Normally, I’d call it a virus, except for the last part … viruses are usually stable (and well written) once they start. On the other hand, it’s a perfect description of Internet Explorer 7.0.
I am a programmer dealing with client-side development, which means that, like it or not, I spend a great deal of time in Windows, because that’s where my users are. Given the nature of Windows, I am also forced to keep Microsoft’s Auto-Update feature active, because without it I can’t receive the dozens of weekly patches necessary to keep the system stable in the face of bad programming decisions made by Microsoft over the years. However, I was more than a little bit peeved to discover that Microsoft seemed to consider Internet Explorer 7 a “necessary patch”, rather than giving me the choice of whether to install it.
On November 1, the world over, people will boot up their Windows system and discover that mysteriously IE6 has gone the way of the dodo and IE7 is now the designated heir apparent. Of course, this assumes that in the process of booting up their system they don’t run afoul of the Validation feature, which presumably checks with the mother ship that the Windows that people are running is in fact legitimate. I don’t know the fate of those whose copies fail that check, though I can see significant swathes of businesses throughout the world suddenly in the dark because the copy of Windows they THOUGHT they were legitimately buying proved to be bootlegged.
Of course, if that wasn’t bad enough, I then get to sit through the install process itself, which features the increasingly common we’re-doing-something-in-the-background “cylon” bar that only tells me that Internet Explorer Core Components are being installed. At no point can you say “No, don’t install this, I’m still developing my app for backwards compatibility with Internet Explorer 6!”. At no point can you say “Hey, I really don’t LIKE Internet Explorer taking up resources on my system and I spent the requisite six hours deep in the bowels of the registry trying to extricate the LAST version of IE, so don’t install this!” At no point can you say “Wait, we haven’t properly tested this in our enterprise setup to ensure that the applications we have spent YEARS developing will actually work in your stupid browser!”
Nope, you WILL install Internet Explorer 7, we won’t tell you what’s going on in the background even as we do, and we won’t even bother to show you a simple progress bar that indicates how far to perdition you’ve actually gone. We are Microsoft and you aren’t. So there.
Perhaps I wouldn’t have reached the point of ranting about this issue, save for the simple fact that after this whole process had completed, and I, begrudgingly, double clicked on the Internet Explorer “e” icon, the browser opened up, showed the default installation page, then promptly crashed.
I am writing this in Firefox 1.5. I’ll be upgrading to Firefox 2.0 in a week or so when the XForms extension is completed for it and I can finish my development there. FF1.5 does occasionally crash, usually at 3am after I’ve been extensively programming and have left all kinds of interesting things hanging in the environment. As a developer, you expect crashes - if you don’t get them you’re not pushing the envelope enough - but you generally expect that such crashes are due directly to something you did. I like Firefox. I like Opera 9, which to me is a fine-jeweled watch that’s a wonder to work with. I’m even beginning to like Konqueror when I can escape outside of Windows land and play in my Linux sandbox.
I don’t know about IE7 - I’m afraid to start it up again for fear that it will corrupt my system.
Kurt Cagle is an author, software developer and technology analyst, and is the Chief Consultant for Metaphorical Web, in Victoria, British Columbia.
When you run the same process over a few years, its particular shortcomings emerge and can come to dominate. For example, Joel Spolsky claimed that Microsoft had an economic criterion for fixing bugs: they only fix a bug if it costs them more (e.g. in sales) to have it than to fix it. For a monopoly in a growing market there is no loss of sales from bugs, and for a near-monopoly with free alternatives, to some extent the funds you lose from an application sale may be spent on another purchase from you anyway: we spend less on Office but that gives us more to spend on Vista, for example.
I don’t know if Joel was correct in his assessment, or whether Microsoft have a different strategy now. But clearly the mid-term impact of such a strategy would be a buggy code base, with entrenched workarounds, combinatorial explosions of symptoms that prevent diagnosis, and an inadequate foundation to prevent major errors. Not to mention a sudden exposure to loss of market share when the market gets saturated and stops growing: when a sucker isn’t born every minute.
Sun’s Java effort is similarly suffering recently: they have a nice-looking error process based on people voting for errors as critical. Now whether Sun actually use this list to determine which bugs they fix first, or whether they use the vote to justify ignoring bugs that they are not interested in, the result is probably the same: a system with lots of known bugs.
There are lots of other single-strategy methodologies: risk-based analysis, ISO 9126 software quality analysis, weighting bugs against their depth in the call stack so that library bugs are fixed at high priority, metrics, test-driven programming, and so on. I don’t know why we should have any confidence that any of them won’t, over time, systematically fail to address some kinds of errors. Which will bite us.
So is a better approach to just fix bugs randomly? Pick a bug from a hat? Well, maybe….
Perhaps we should say each maintenance methodology applied singly over time will result in an accumulation of unaddressed errors in some aspect.
Part of the problem is human: people have interests and pressures and viewpoints. So democracies solve this by what Lee Teng-Hui (the Taiwanese president who secretly funded the opposition parties) called “the regular alternation of power”: term limits, shifting jobs, even sabbaticals.
Part of the problem, as I see it, is with simple prioritization of bugs. Sometimes it is better to see each module as a whole, allocate quality requirements for that module, and then handle each bug according to its module priority. For example, Sun could say “we don’t treat text.html as a priority module but we do treat 3D rendering as a priority”. Apply this to voting, and then two votes for an HTML bug would be required to equal one vote for a 3D bug.
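A minimal sketch, in Python, of what that weighting might look like (the module names, weights, and vote counts here are all hypothetical):

    # Module-weighted bug voting: raw votes are discounted by the priority
    # assigned to the module the bug lives in, so two HTML votes are needed
    # to equal one 3D-rendering vote.
    MODULE_WEIGHT = {"text.html": 0.5, "3d-rendering": 1.0}

    def weighted_score(bug):
        return bug["votes"] * MODULE_WEIGHT.get(bug["module"], 0.75)

    bugs = [
        {"id": 101, "module": "text.html",    "votes": 40},
        {"id": 202, "module": "3d-rendering", "votes": 25},
    ]
    for bug in sorted(bugs, key=weighted_score, reverse=True):
        print(bug["id"], weighted_score(bug))
    # The 3D bug (25.0) now outranks the HTML bug (20.0) despite fewer votes.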
That is a more complex strategy, to be sure, but it is still a single strategy.
A better way of doing things may be to divide the debugging/maintenance/natural enhancement effort into independent efforts. For example, have the mainstream process use immediate rational economic effect, risk, or deadline criteria. But also have a background effort that alternates between different strategies: systematic audits for internationalization, performance, standards-compliance, transparency, integrity, resource utilization, and other quality concerns. And also have a background effort that uses weighted voting and different criteria, and that accepts minor Requests For Enhancement as well as bugs.
And even, for one in a hundred bug fixes, do pick a bug out of the hat, on the grounds that you don’t have 100% confidence that even the multi-criteria maintenance will prevent the emergence of a nasty clump of errors in some aspect. Shake it up.
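Sketched out, the mixture might look something like this (every name and ratio below is hypothetical - the point is the blend of strategies, not the numbers):

    import itertools, random

    # Background audits rotate through quality concerns over time.
    audits = itertools.cycle(["i18n", "performance", "standards", "integrity"])

    def next_bug_to_fix(open_bugs, fix_count):
        # One fix in a hundred comes straight out of the hat.
        if fix_count % 100 == 99:
            return random.choice(open_bugs)
        # One in ten goes to whatever concern the audit cycle is visiting.
        if fix_count % 10 == 9:
            concern = next(audits)
            flagged = [b for b in open_bugs if concern in b["tags"]]
            if flagged:
                return flagged[0]
        # The mainstream process: plain economic priority.
        return max(open_bugs, key=lambda b: b["economic_cost"])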
Nobody learns if nothing is brok3n: Dropping the addiction that is irc
Eventually I switched to linux and started delving into linux channels. The most important thing I learned from a linux channel is that if someone doesn’t know what they are talking about, they’ll act like they do anyhow, and when asked for information they’ll tell you to go find it. It seems to serve some people well.
Mokka mit Schlag » Chameleon schemas considered harmful
This is not how namespaces are designed to work, and it’s going to cause massive problems for anyone writing any sort of software to process XForms, whether it’s DOM, SAX, XSLT, XPath, or almost anything else. XForms elements should be able to be recognized by their namespace alone. You should not have to care about the host language in which they’re embedded. If we’re going to start changing the namespace for every host language that comes along, we might as well not have namespaces in the first place.
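To make the quoted concern concrete, here is a minimal sketch using Python’s standard ElementTree - the second namespace URI is invented, standing in for a per-host-language “chameleon” namespace:

    import xml.etree.ElementTree as ET

    XFORMS = "https://www.w3.org/2002/xforms"
    CHAMELEON = "urn:example:xforms-inside-some-host-language"  # made up

    def handle(elem):
        # Namespace-keyed dispatch: the whole point of having namespaces.
        if elem.tag == "{%s}input" % XFORMS:
            return "recognized as XForms"
        return "unknown vocabulary"

    doc1 = ET.fromstring('<input xmlns="%s"/>' % XFORMS)
    doc2 = ET.fromstring('<input xmlns="%s"/>' % CHAMELEON)
    print(handle(doc1))  # recognized as XForms
    print(handle(doc2))  # unknown vocabulary - same element, new URI, broken tools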
If you believe there is a single absolute requirement in the land of XML more important than the proper usage of XML namespaces: you obviously don’t understand XML.
That said, there’s got to be at least some sort of reasonable explanation to the mentioned madness quoted above, doesn’t there?
Somebody care to clue me in, cuz’ I have to admit, at first sight it very much seems like a few loose screws have rattled off the W3C XHTML2/XForms working groups’ wagon wheel, casting some serious doubt upon the ability of these same groups to produce a quality specification that even closely resembles any of their 1.0 counterparts.
Anybody?
{ End Bracket }: Is Programming an Art? — MSDN Magazine, October 2006
“The reason we say programming remains an art, not a science or an engineering discipline, is because we haven’t as yet been able to break it down into component steps and mechanize it. Once we succeed in that, a new scale of possibility emerges: programs to write programs from people-oriented design languages (PODL), programs to prove program correctness, and to analyze and support semantic query. Until then, however, programming remains an art, or so conventional wisdom has it.”
Wait, what?
“The reason we say programming remains an art, not a science or an engineering discipline, is because we haven’t as yet been able to break it down into component steps and mechanize it.”
With all due respect, Mr. Lippman (and I most certainly do have a lot of respect for you and your work), who’s “we”? And since when does breaking something down into component steps and mechanizing it turn this same something from art into science? In this regard it almost seems as if you are suggesting that science is defined as the final state instead of as a series of experimental steps that help bring about this same mentioned state.
Then again, using the term “final state” isn’t really something that can be applied to this analogy either, as when was the last time anything anybody has ever created through the process of science, or maybe better said, the process of trial and error, stayed in this same “final” form indefinitely? Of course, through the *art* (or is it a science?) of preservation, there are those who can do a pretty good job of keeping the original state of something as close to its original state as possible. But even then it’s nowhere near perfect, and in many ways the very act of preservation goes against the very essence of what life is all about,
Change. Transition. Transformation. Progression.
Just to be certain we are both on the same page, let’s take a look at the definition of “science”,
Link Attribution Lineage : Dare Obasanjo
—
Jeffrey Zeldman Presents : Web 2.0 Thinking Game
If we’re stuck with this meaningless Web 2.0 label, let’s at least have some fun with it. Here’s my new game. I’ll start, you finish:
Web 1.0: Joshua Davis on the cover of Art News.
Web 2.0: 37signals on the cover of Forbes.

Web 1.0: Users create the content (Slashdot).
Web 2.0: Users create the content (Flickr).

Web 1.0: Crap sites on Geocities.
Web 2.0: Crap sites on MySpace.

Web 1.0: Writing.
Web 2.0: Rating.
…Now you try it!
Web 1.0: Dynamic Data, Static Languages
Web 2.0: Dynamic Languages, Static Data
NOTE: Probably should have mocked up the title in Python, huh?! ;)
Four little letters…
LLUP
—
As a social phenomenon, the end of email has been widely reported. The next generation doesn’t use it. As a technical phenomenon, spam is a persistent threat. Spam’s been a lot worse in the last couple of weeks (no doubt the reason I started thinking about these things); apparently the spammers have concocted a strategy that circumvents Bayesian filtering (it’s only temporary, I’m sure, but the next victory in spam filtering is only temporary too).
I’ve noticed the same phenomenon. It’s getting really, *REALLY*, bad!
What’s next? IM, Wikis, web forums instead of email? Bleh!
Agreed!
Maybe I’m just too old to learn new tricks, but I want correspondence pushed to me (or I want the appearance of push, anyway) and I want to read and edit it locally, in the application of my choosing, not in some browser form
Agreed. Too much effort. The solution must be seamless, and it must work with the tools we already use for email-esque communication. In fact, the solution has to be developed in such a way that those with an established position in the email client/server market(s) can quickly, easily, and (this is really the key, in my own opinion) seamlessly integrate it with those tools. Ideally, the “switch” from the existing technologies (e.g. SMTP, POP, IMAP, and proprietary protocols such as those used in Exchange for advanced workgroup/corporate communication and collaboration) may not require a switch at all (i.e. a driver that allows each of these technologies to easily interop with any of the new required protocols), and where it does, it should be as transparent as possible to the customer, employee, etc… who will be using it.
It occurs to me that with a little work, Atom might function as a replacement for POP/IMAP and the Atom publishing protocol might replace SMTP. I can see a glimmer of how I might move forward while mostly preserving a couple of decades of work habits. As usual, the social problems are larger than the technical ones.
Yep, completely agree! Through the work I have been doing with LLUP, I have come to my own conclusions that there are a few additional off-the-shelf pieces necessary to complete the puzzle, but without a doubt, Atom and APP are the key behind all of this.
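For the sake of illustration, here is roughly what that idea reduces to - a minimal sketch in Python with entirely hypothetical endpoints (the real thing would need authentication, caching, and all the rest):

    import urllib.request

    INBOX_FEED = "https://mail.example.org/inbox.atom"  # stands in for POP/IMAP
    OUTBOX = "https://mail.example.org/outbox"          # an APP collection, standing in for SMTP

    def check_mail():
        # Reading mail becomes polling an Atom feed of messages.
        return urllib.request.urlopen(INBOX_FEED).read()

    def send_mail(entry_xml):
        # Sending becomes POSTing an Atom entry to an APP collection;
        # the server answers 201 Created with the new entry's URI.
        req = urllib.request.Request(
            OUTBOX, data=entry_xml.encode("utf-8"),
            headers={"Content-Type": "application/atom+xml"})
        return urllib.request.urlopen(req)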
In fact, this was a point I brought out to Eve (Maler) a while back when Russ and I first spoke with her about LLUP. There have been a few people along the way who have insisted that “you guys are taking too long to finish this up” or “if this really was so simple, why not just finish it out and be done with it” to which the answer, as mentioned to Eve, is pretty straightforward,
IEBlog : SSL, TLS and a Little ActiveX: How IE7 Strikes a Balance Between Security and Compatibility
Obsolete controls disabled through ActiveX opt-in
An important part of the ActiveX opt-in feature is doing good housekeeping of the ActiveX controls that come with Windows. Many sites will benefit from IE7’s new native XMLHTTP control, and sites can continue to use the MSXML 6.0 and 3.0 controls. The MSXML 5.0 control will not be enabled by default. The WMP 6.4 player is also disabled because it’s been replaced by the WMP 7 generation controls. As we can infer from HD Moore’s month of browser bugs, using the newer controls and leaving older controls disabled helps reduce the chances of a user being exposed to a security or stability issue in an older control.
Since this should be a straightforward change for most sites, we’re asking for your help in moving your pages towards the native object XMLHTTP, the latest version of MSXML or the newer WMP control. In the best case scenario, the change might be to simply swap in the native object for XMLHTTP or the newer CLSID for the current WMP control.
There was a time that I had every desire and intention to stay closely attached with the development of IE7 and the RSS Web Feed engine via forums, blogs, and in some cases, email communication.
Why did that change?
ADD. My desire to overcome my ADD tendencies and actually place my primary focus on one of the bazillion and a half projects I have rolling around in my head at any given second of any given day, week, month, year, etc.
In other words,
I have been intently following a blog thread that was initially started by the “Other” David Chappell. David’s initial posting
SOA and the Reality of Reuse sparked a small flurry of responses, and responses to responses, which also include Joe McKendrick’s Pouring Cold Water on the Service Reuse. I tend to agree with most of the points made in David’s initial posting, which include -
- Reusable services can sometimes be hard to do
- Effective SOA reuse requires strong corporate-wide organizational support, with top-management support and buy-in.
- SOA creates agility by allowing for flexible business processes which, if configuration-driven, may be readily changed to suit the changing needs of the business. David argues that this can be a form of reuse (a minimal sketch of the idea follows this list).
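Here is that third point sketched in Python - the steps, the registry, and the “configuration” are all hypothetical, but the shape is the point: the flow lives in data, so the same services get reused across reshuffled processes.

    # Three reusable services (stubs for illustration).
    def check_credit(order):  order["credit_ok"] = True;  return order
    def reserve_stock(order): order["reserved"] = True;   return order
    def ship(order):          order["shipped"] = True;    return order

    SERVICES = {"credit": check_credit, "stock": reserve_stock, "ship": ship}

    # The business process itself is configuration -- imagine it loaded
    # from XML or a database, and reordered without touching the services.
    PROCESS = ["credit", "stock", "ship"]

    def run(order):
        for step in PROCESS:
            order = SERVICES[step](order)
        return order

    print(run({"id": 42}))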
However, the overall tone of the threads seems to cast very strong doubt on the reality and practicality of service reuse at all. Some of the responses flat out say that service reuse is not possible.
So I went and compiled some information gathered from our customers who are building SOAs.
Got my invitation a few hours ago for MSN (or is it Windows Live? Seems like it’s branded MSN) SoapBox. I can’t link to the video on SoapBox and expect that everyone will be able to view it until such time as they open it up to anyone with a Passport/Hotmail/MSN/Windows Live account (I think that technically speaking these are all one and the same, though a Windows Live account doesn’t necessarily mean you have an equivalent Hotmail account - that one should be obvious). However, similar to YouTube, I can embed the videos in a web page.
As such, here is David Spade expressing his feelings in general towards the iPod,
I am sorry to be bringing this up in a technical forum, but I felt it’s too important to let go by the wayside.
I’m going to deviate from my normal XML discourse to point out that the US has officially become a police state. On September 27, 2006, the Military Commissions Act of 2006 was passed by Congress, and is being sent to the President who is only too eager to sign it. This particularly poorly thought out act includes the following provision:
“No court, justice, or judge shall have jurisdiction to hear or consider an application for a writ of habeas corpus filed by or on behalf of an alien detained by the United States who has been determined by the United States to have been properly detained as an enemy combatant or is awaiting such determination.”
The definition of alien, elsewhere within the same act, is written so broadly that it can mean not only foreign nationals but also US citizens, a choice of wording that was quite deliberately made by the framers of the bill.
Habeas Corpus - To have the body - is a very basic principle, but is, more importantly, one of the foundational principles put into the Constitution. It means that if you are detained by a federal, state or local police agent, that agent must announce that you have been taken captive within a limited period (usually three days), must announce why you have been taken captive, and if no crime is charged against you within that period you must be released.
The principle exists for a very simple reason - without it, people can be arrested and made to disappear. Without it, you can be legally detained for no reason other than the fact that someone felt it would be better if you were not free, and anyone arrested can be held indefinitely with neither trial nor access to a lawyer. It was the bedrock principle upon which nearly all law and order within the United States was based. It no longer exists.
Fully legally (assuming that the Supreme Court does not overturn it), this essentially means that your Congress (if you are an American) has backdoored the United States into Martial Law. The president, or any agent of his choosing, can imprison you or your friends or neighbors simply because you represent a threat to them. That is what Martial Law is. The United States has been without the protection of Habeas Corpus only once before in its history, and that was during the Civil War. Lincoln declared Habeas Corpus invalidated for the duration of the war because he saw no choice, and one of the first acts of Congress after the war concluded was to restore it.
Remember that date: September 27, 2006. It is the day that the United States ceased being a democracy and became, for all intents and purposes, a dictatorship.
The following is a transcript (or at least a prescript) of the talk that I gave at the AJAXWorld conference on October 4, 2006, looking at emerging technology in that space and focusing (not surprisingly for me) on the XML side of things. I also have a PowerPoint of the presentation if you want to see how cheesy I can get with my own productions. If you are interested in seeing a video of the presentation itself, check back with AJAXWorldExpo.com - they should have that up shortly (I’ll post a more precise link when they have it loaded).
So, without further ado, AJAX on the Enterprise:
NOTE: The title? I’ll let you figure it out for yourself, though I will suggest that if you don’t already own each and every Massive Attack and Tricky album, then you really should consider changing that just as fast as you possibly can.
For those that already know what I’m talking about, suffice it say…
‘nuf said ;)
—
So Jason Kolb, both a good friend and a professional colleague for whom I have a tremendous amount of respect, has been on a blog entry tear like none other over the past six months. If you haven’t already subscribed to his web feed, you really need to change that or get left behind by those who have, and as such, already know EXACTLY what I am talking about.
So, with that… Do yourself a favor and snap out of that Karmacoma funk you’ve been in and Future Proof yourself courtesy of Jason’s latest entry,
[NOTE: There’s a reason for choosing to attach such a lengthy title, so please forgive me for blowing up your feed reader of choice if that feed reader of choice doesn’t handle titles that better resemble short stories all that well. ;)]
So instead of attempting to describe the experience I just had, with permission from Sylvain, I am simply going to copy/paste the conversation that just took place, one that has cast into a whole new light what the Lesser General Public License is all about, and why I think that it flat out ROCKS! (thanks for helping to bring this into a greater understanding for me, Sylvain!)
Firstly, an understanding of what Kamaelia is all about,
A framework providing the nuts and bolts for building components. A library of components built using that framework. Components are implemented at the lowest level as Python generators, and communicate by message passing. Components are composed into systems in a manner similar to Unix pipelines, but with some twists that are relevant to modern computer systems rather than just file-like systems.
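(What follows is not Kamaelia’s actual API - just a toy sketch, in plain Python, of the generator-pipeline idea that description is getting at:)

    # Components as generators, composed like a Unix pipeline.
    def produce():
        for word in ["hello", "pipeline", "world"]:
            yield word

    def upper(inbox):
        for msg in inbox:          # receive a message, transform it, pass it on
            yield msg.upper()

    def consume(inbox):
        for msg in inbox:
            print(msg)

    # The composition reads like "produce | upper | consume".
    consume(upper(produce()))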
Secondly, the conversation that led to my recent epiphany in regards to the greatness (or at least potential greatness) of the LGPL.
[NOTE: To fully grasp the meaning of the title, please see me make a complete fool of myself (yet again ;)]
So whats got me going all BiPolar in Love with Kim Sarah?
Behold,
I’m sitting here in my hotel room the night before the AJAXWorld conference kicks off in Santa Clara, California. It’s an event that I’ve been looking forward to for some time, both to get a sense of what is happening in the development community with this technology and to help (hopefully) to shape those technologies to a certain extent myself.
Getting down here has been … challenging. My ancient Saturn, battered and beaten and held together by duct tape (quite literally), has been coughing and wheezing on its way down, including a rather embarrassing episode where the engine overheated and started spewing steam and smoke just as we pulled up to the border crossing station at Blaine, Washington. I want to thank both the American and Canadian border agents for helping me push the poor thing off to the side of the road and for expediting our trip through as a consequence. This is the car’s last hurrah, alas, and I suspect that I’ll be driving a new one by next week.
I did meet the instigators of this particular grand fete, including Jeremy Geelan, Sr. Vice President of SYS-CON, and Dion Hinchcliffe, Editor in Chief of AJAXWorld Magazine, both true gentlemen, who along with editor Kate Allen were taking a short breather after having pulled off the AJAX Boot Camp for the day. I am extraordinarily impressed with all of their efforts thus far — it’s been fascinating watching this process come together in a truly professional manner, and I’ve seen a lot of conferences over the years.
I will be blogging the next two days (Tuesday and Wednesday) of the AJAX conference, though it’s unlikely I’ll be able to manage the blow by blow account that I did for XML 2005. I will see if I can at least provide a comprehensive view of the sessions that I will be attending, however. I am also speaking on the subject of AJAX on the Enterprise Wednesday morning, and plan to put the session notes for that up on this blog as well.
This should be a major and influential conference, and I am looking forward to being a part of it. AJAX has come an incredible way in a very short time, and the ripples of the technology are only just beginning to be felt. More tomorrow …
Spam on O’Reilly Weblogs - O’Reilly Linux DevCenter Blog
Yo bloggers- you can delete the spam comments to your blogs. Just go to the same place you create your blog and visit the ‘Comments’ section. You have mighty delete powers over your own blog comments. Please don’t give spammers free hosting.
Just to give you all an idea of just how bad a problem this is, the following five screen shots, taken at a screen resolution of 1280 x 1024 (trimmed to accommodate toolbars), represent each and every “comment” received in my inbox from 17:35 to 21:50 MDT this evening…
Enjoy!
Update: piers has added some fantastic commentary that I believe adds several key points of significance to this overall discussion,
Key point:
>> There may be something to be gained from the recuperation of the other-space of “non-economy”,
Key point:
>> however, it seems the free (as-in-speech) economy is already inherent in the software development triangle of resources, time and money…
Key point:
>> (or, even better, if you can turn your clients into a resource you can draw on for innovation, beta-testing, or information and editing, like wikipedia), the money side of the triangle approaches zero.
I agree 100%! From the software development standpoint, I believe these are some of the most fundamental areas in which we need to place focus such that we can bring Corporate America to our aid instead of to our detriment in regards to our fight for a “free-as-in-speech” culture.
While I recognize that the Free Software Foundation has *ALWAYS* been about free-as-in-speech software, unfortunately there is a free-as-in-beer side effect that in many ways has pigeon-holed their efforts into a “free-as-in-everything” type-cast. The problem with this, of course, is that you can’t exactly build a business model and an underlying business economy on top of a donation-based revenue stream.
Actually, that’s not true…
(and while you’re at it, Gmail too? Thanks!)
Beta status meaning that they are open to suggestions.
Huh… Well that certainly clears things up.
That said, I do wonder what would happen if they were ever to remove the “beta” tag, though. Would this then mean they are now “closed to suggestions”?
Google Customer Service Rep Speech-Synthesized Bot: “Oh, hey, good suggestion, but do you see a ‘Beta’ tag anywhere on the site there, idea boy?!” User.Next()!
Shouldn’t it be “2 years (and still in beta)” instead of “2GB (and counting)”?
For more than 2 years I have been using GMail, but I have never been impressed by its reliability. It always gives problems. Sometimes clicking on a mail loads the message forever until you click it again. It frustrates me particularly when I see an important message in my Inbox but can’t open it. There are situations where I have felt, “what if GMail can never open this important message?” Sometimes it keeps sending a mail forever until you click the send button again. For the last 30 minutes (09/28/06, around 10:30 PM MST) I have been trying to send an email that I composed, but I couldn’t. After 2 minutes of hanging with the word “sending” in the top right corner, it displayed the JavaScript alert message “Oops, the system was unable to perform…..”. I can’t even save it as a draft. How come a product which is 2 years old is still this buggy? For God’s sake, it’s still in beta (even if the product name reads “GMail”, not “GMail beta” :-). How come a company like Google can keep a product in beta for this long? Oh boy!
How many complaints have there been about GMail issues? Will Google ever focus on fixing the GMail issues instead of adding new features? I doubt it; it only adds features to make the headline news or to impress Wall Street. Instead of fixing the issues, Google concentrated on merging GTalk with GMail (which most of the time won’t work behind a proxy, and even when it works it keeps disconnecting and reconnecting….), increasing the mailbox size, RSS feeds in GMail, etc. Do we need all these features without the core mail functioning properly?
MSN launched Live Mail (an Ajax version of Hotmail) recently, and it’s no longer in beta. But the 2-year-old GMail still stays in beta. Can’t Google fix the problems and take GMail to the next level? Surprisingly, it’s still not open to the public; it’s still by invitation only. Can’t they handle the load on their servers if it is open to the public? If they can’t handle the load, I wonder how (the hell) they handle the 2GB (and counting) mailboxes?
Another good (read: worst) feature of GMail is POP3 access, which will be out of sync with the web version. If you read a mail on the web, it’s not marked read in POP3, and vice versa. All your emails, including sent mail, show up in the inbox, so you can’t tell the actual inbox mails from sent mails. Anyway, this is a design issue, not a reliability issue.
I am damn sure that I will see the GMail beta version forever!!!! Are you having issues with GMail? Share your horror experiences here in the comments.
(more detail as to why below)
The crime of the so-called ‘wisdom of crowds’ is that when the crowd gets it wrong, they can keep it wrong for a long time.
[Original Post]
The real challenge here will be Richard Stallman’s. His work helped launch important movements of freedom — free software, most directly; free culture, through inspiration, and examples such as Wikipedia. It also helped launch a movement he’s not happy about, the Open Source Software Movement. Much of the latter builds on the former. And these movements have been joined by many who share his values, some more, some less. (Again, see Torvalds). These movements have built much more than he, or any one person, could ever have done. So his challenge is whether he evolves these licenses in ways that fit his own views alone, recognizing those views deviate from many important parts of the movement he started. Or whether he evolves these licenses to support the communities they have enabled. This is not a choice of principle vs compromise. It is a choice about what principle should govern the guardians of these licenses.
Can enough gratitude be given to Richard Stallman for what his ideas, ideals, and subsequent work have accomplished? Nope! The idea that anyone should be able to tinker with the source code of a software application, improve upon it, make it better, and so forth has been the foundation of many great and wonderful “freedom movements” since the Free Software Foundation was first founded.
The problem, of course, is that true and pure freedom does nothing to control what is derived from one idea to the next, even when the original idea was never intended to become the foundation of something we would not have wanted it to become.
These movements have built much more than he, or any one person, could ever have done. So his challenge is whether he evolves these licenses in ways that fit his own views alone, recognizing those views deviate from many important parts of the movement he started. Or whether he evolves these licenses to support the communities they have enabled. This is not a choice of principle vs compromise. It is a choice about what principle should govern the guardians of these licenses.
Yep.
I was first introduced to Trac by Bruce D’Arcus in December of 2004. At that point in history one could easily claim that setting up Trac was very much NOT Pythonic**.
On the other hand, using Trac for the first time was my initial introduction to the “Pythonic method” (if there is such a thing), in this case, the Pythonic approach to managing software development.
Firstly, what is Trac?
Trac is an enhanced wiki and issue tracking system for software development projects. Trac uses a minimalistic approach to web-based software project management. Our mission: to help developers write great software while staying out of the way. Trac should impose as little as possible on a team’s established development process and policies.
Okay, so while the focus is primarily on software development, in my own experience, Trac can easily be used for managing MUCH more than just software development. In fact, I would definitely go as far as suggesting that in many ways Trac implements (again, if there is such a thing) the Pythonic method to workflow management, and as such, can be applied to any type of project, software and non-software related alike.
Just to clear things up, what does it mean to be Pythonic?
… you don’t invent the next new thing; you breed it.
Update: I queried len in regards to the correlation between hackers and musicians.
His recent response,
Music and programming: two media originating from one set of mental skills, the exception being that a computer can understand the output of code but can only replicate the output of music.
Music is God’s voice in the human heart. You can emulate that process with a computer, but it’s just processed signal. The gap is small but of enormous importance. A friend of mine told me once that code is the real post-modernist poetry. Its value as tender is assigned by the people making the transaction. Just as poetry has little sale value in today’s culture, code is becoming equally devalued in some currencies. That doesn’t make it without value in other currencies. The choice is one of currency.
Giving my songs to the web for free downloading was the only legal tender I had for all of the great code I was being given. It seems fair. Music is a language of human emotion. If you can work out where the heart of a computer is, you may discover a correlative value but the value of music is in the emotions produced in composing it and in hearing it.
Beyond *WOW* I think I’m going to let this one speak for itself.
Thanks len!
When all software is free, what is the value of software? Is it worthless, or priceless? In hindsight, it is pretty obvious that people like Kevin Kelly and others of that ilk were very wrong about the trends, where ultimately everything would become free and a grand new society of plenitude would emerge. Gas, while dropping in the last few weeks, is still trending upward, food is becoming dearer by the day, the cost of health care services is rising dramatically, and indeed the costs of most raw commodities - trees, metals, oil, and so forth - have been aggressively pushing upwards, and will likely continue to do so, though not without a few corrections here and there.
No, ironically, it is only those things that are fundamentally intangible that are diminishing in cost - software, movies, music, the value of the US dollar, that sort of thing. Finished goods are reaching a critical breaking point - they aren’t selling anywhere near in proportion to the amount being produced, but at the same time the cost to produce them continues to rise along with raw material costs. Labor costs still factor in - in some areas there is something of a labor crunch going on, yet the crunch is nowhere near as grave as it was in the late 1990s, and at least anecdotal evidence seems to indicate that it is easing except in limited spot markets.
Quick Update: I keep reading TONS and TONS of blog posts about why this deal is both good and bad for YouTube. I think I should clarify why I believe this deal is a good thing overall.
This has nothing to do with YouTube. Whether YouTube survives the HypeBubble or not is up to YouTube and its ability to prove that profit can be derived from all 100million (and growing!) of those daily video views.
What this does have to do with is the fact that a major presence in the music industry has made a significant move in what I believe to be the right direction: Treat your customers like customers instead of criminals. Nothing more, nothing less.
Yes, this is only one company, and it’s only one division of one company for that matter. But it’s this simple change in thinking that has already shown signs of other industry players opening things up a bit more. It’s not going to happen overnight, though as more and more companies realize that other companies are generating a revenue stream from a previously untapped source, it could happen fairly quickly. Furthermore, their product is now being advertised to 100’s of millions of daily visitors. And for those that do it right, there will be simple ways to let someone who sees a clip, likes the song in the background, and would like to purchase that track for themselves do just that.
Oh, you wanna know who’s probably going to be the first folks to do it right?
Enter WebTV, err, I mean MSNTV. (There’s just something about technology, Microsoft, and the number seven - as in seven years *too* early. CDF? 7 years too early. DHTML, err, I mean Ajax? 7 years too early. Internal browser storage persistence? 7? Yup!)
In short: These are not new ideas. Just a new way to apply the old ones.
[original post]
via a link from Sylvain (thanks Sylvain!)
Slashdot | Warner Opens Video Library To YouTube
“From the article, ‘Warner Music has agreed to make its library of music videos available to YouTube, marking the first time that an established record company has agreed to make its content library available to the user-generated media company. Under the agreement, YouTube users will have full access to videos from Warner artists. They will also be permitted to incorporate material from those videos into their own clips, which are then uploaded to YouTube. Warner and YouTube will share advertising revenue sold in connection with the video content.’ This is in contrast to how Universal is handling the situation.”
So, to my letter,
Dear Jack Valenti,
Step One: Don’t Sue Your Customers
Step Two: Find Innovative Ways To Turn Your Customers Into Marketing Machines
Step Three: Save Money By Not Having To Pay Lawyers To,
Step Four: … (Not) Sue Your Customers
Step Five: Pay The Artists You “Represent” More Money Because Of All The Money You Both Save and Gain as a Result.
Yours sincerely,
Your former customer(s?)
Anyway > Blog > Archive Autumn
“The weather has changed here at 49° latitude north, the days still warm and sunny, but the nights are cool. The memories of nights when we slept with the fan running and the windows open are receding fast, soon to be filed with memories of previous summers…”
And thus begins a wonderful snapshot of life by Lauren Wood, a reminder to us all that life isn’t about 0’s and 1’s, err, I mean <binary-data><binary>00000000</binary><binary>11111111</binary></binary-data>, and is instead about everything that this same mentioned binary data allows us to represent and communicate with others.
Thanks for the reminder, Lauren! It’s easy to forget sometimes, and a reminder such as this helps reset the brain, placing back in focus the things that matter most.
NOTE-TO-WWW: What does this have to do with XML? EVERYTHING! and nothing, all at the same time.
Enjoy your Monday :)
I’ve been laughing so hard from what follows that it’s taken me five minutes just to get to the point where I could type again…
Brutal honesty with a smile, :)
From: Sylvain Hellegouarch
To: “M. David Peterson”
Date: Sep 14, 2006 6:51 AM

> G:\PyPod.Net\trunk\SaxonConsoleTest\bin\Debug>SaxonConsoleTest.exe
> Setting configuration
> <hello-world xmlns:clitype="https://saxon.sf.net/clitype"><output>
> 1.4142135623730
> 951</output></hello-world>
>
> :D

I know we’re geeks but what did you just say?
Hmmm… I guess when I sent the copied portion of the above message as part of an unrelated thread, it would make sense that Sylvain would have no idea that I was saying “SWEET! Extension functions are working! :D”
I’ll remember to add “SWEET! Extension functions are working! :D” to the top of the message next time ;)
Thanks for the laugh, Sylvain! :D
DISCLAIMER: The title is not intended to suggest that AJAX is a bad, horrible, and evil thing that requires an Anti-AJAX activist effort. However, it is to suggest that usability should *ALWAYS* be the primary focus of any web-based and/or desktop-based application. While not directly related to asynchronous communications, JavaScript, or XML, given that the feature of an AJAX-enabled page that is most often noticed, and therefore implemented, is that of an active/re-active interface, there has developed a strong connection between AJAX-based web pages and poor usability practice. Hence the title.
—
Fly-out menus that are activated via a mouseover event are what I would term an Anti-Usability feature as in *MOST* cases they are designed to fly-out over the top of the text of a given page, covering up whatever it is you happen to be reading. This is fine when the action requires a click (or the Enter key if you are tabbing your way through links), but when all that is required is a mouseover, more often than not the action is activated at a moment when you have no desire for it to be activated.
If a page has been designed using usability as the primary focus, things such as menus, in-line ads, and other often “active” elements contained within any given page should never fly-out unless an action that can be deemed as “user instantiated” (such as a click) has taken place.
The reason?
This has been a bad month for the Scalable Vector Graphics movement. Because of a developing family medical crisis, I’ve had to step down from chairing the SVG Open 2006 conference, a decision which, combined with other issues that the conference has had, led to the mutual decision by the SVG Open board to cancel the conference this year and hold it in 2007. This decision was hard to make, given the importance that this conference has for many people and not a few companies, but it is also a reflection of larger factors that have seriously buffeted the standard over the course of the last year.
Adobe this week made an announcement that was, while not unexpected, yet another blow - they are choosing to stop supporting the Adobe SVG Viewer in any fashion, to make it unavailable for download by the end of the year, and to effectively dismantle the last vestiges of SVG - outside of the fairly secondary roles that the standard plays in Adobe products - in favor of their own Flex language, acquired from Macromedia during the merger last year.
The future home of the n[ui]x Development Libraries. Here’s the whole scoop:
On August 2nd, 2006, my house was robbed. Stolen were my iBook G4, an iPod, and 2 thumbdrives. The iBook was my only Macintosh computer and my development box. The iPod and one of the thumbdrives I was using to back up on a daily basis. So I’m stuck. I have no Mac to continue my work, and the only burned backup I have was 3 months old at that point (hard to do, as it was 4 DVDs). As soon as I can get my hands on another Macintosh and an iPod I will continue development. Until then, Molten Visuals will continue to host the current downloads until February 2007. At that point, assuming I have the funds, this site will host the files. This site will also become the active host as soon as development continues.
Anyone throwing away a working G4 or newer Macintosh, willing to part with it, and wanting to support this project, please contact me. Contact details can be found at Xargos.
Due to problems at Molten Visuals, the blog formerly there will now be hosted at TechnicalStressings.com.
Glenn Martin
n[ui]x Developer.
Firstly, to whomever it was that robbed Glenn’s house,
A while back I made the statement,
Of course Tim Bray is doing his best to ensure that this doesn’t happen, and if anyone is capable of accomplishing the task of (ironically, given the roots of the term “Java”) waking up the Java insiders to the fact that there are a TON of people who couldn’t care less about Java as a language, but have interest in Java as a language-neutral platform, it would be Tim. But in all honesty, it may very well be too late. Then again, we’re talking about Tim Bray, so maybe not. Time will tell.
I guess time has told,
Charles Nutter and Thomas Enebo, better known as “The JRuby Guys”, are joining Sun this month. Yes, I helped make this happen, and for once, you’re going to be getting the Sun PR party line, because I wrote most of it. [Update: It strikes me that thank-yous are in order. Given that Sun is in the process of (appropriately) slashing some of its engineering groups, new hires are a little tricky. A lot of people helped, but I think the biggest contributions were from our Software CTO Bob Brewin and especially Rich Green, who approved the reqs about fifteen seconds after Bob and I raised the issue.]
First XML, then Atom, now this = both the greatest decision *AND* investment Sun Microsystems made in 2004 (or was it 2003? Let’s just say both, and be done with it ;)
Tim, I’ve never been shy to speak my mind on things, and I’m not going to stop now: yet again, your talent and capability to get the job done, and done right, absolutely astonishes me. Your humble approach to commanding respect (earning it with your actions instead of demanding it with your words) is something I think each and every one of us - including, and even more so, *especially* me - both can and should use as an example to follow in our own lives.
Thanks!
To the rest of the world: Buckle up folks… I’d say we are about to witness something truly spectacular in regards to the competition in the VM space and therefore a better overall world of computing as a result.
SWEET! :D
DISCLAIMER+SPECIAL NOTE: If your initial reaction to the titled attempt at a Scheme expression was: “That doesn’t even make any sense!” my response to you would be,
—
[Post]
Dare Obasanjo recently had this to say about multithreaded programming,
Dare Obasanjo aka Carnage4Life - (thread.Priority = ThreadPriority.BelowNormal) is Bad?
This is yet another example of why multithreaded programming is the spawn of Satan.
Eric Fleischman then followed-up with,
FWIW, there is something worse than multithreaded programming….distributed programming. Concurrency on a node is hard, concurrency across N nodes is even harder. :)
To be fair, I don’t completely disagree with either of their points. That said, I also don’t completely agree, either.
Here’s why,
Received an email today from Amazon Associates regarding their new aStore Beta.
About 30 minutes from start to finish, and the new eXplorations “Store” is open for business :D
eXplorations : Featured Music, Books, and Other Items of Interest
—
NOTE: It needs some work, yes, but not bad for 30 minutes!
—
While obviously not the only reason one would want to open up an aStore, providing a simple, easy-to-access, and easy-to-purchase-from storefront - one in which we can highlight all of the musicians who provide the intro and exit music for our shows, as well as any particular products we might speak about - is yet one more fine example of how Amazon has been and is paving the way into the next generation of community-based eCommerce.
SWEET! Thanks (again) Amazon!
I think it’s time for another show :) Yo, Kurt! ;) :D
Amazon Web Services Developer Connection : how fast do IP addresses get …
In follow-up to another post in the EC2 forums, Brad Clements jokingly asks,
> Firstly, regarding billing, you won’t be billed at all from the time the host machine
> crashes (or indeed as soon as the host machine is network-isolated).

So… I have a long-running compute task that doesn’t need any I/O while it’s crunching.
Can I do an “ifconfig eth0 down” and you’ll stop billing me?
;-)
In response, proving that you can have fun and do business all at the same time, RolandPJ@AWS responds with,
Why don’t you launch a large set of instances and try it out ;)
NOTE: To those of you looking for a shining example of community involvement done right, look no further.
To the Folks@AmazonAWS: Can I just state that you’ve not only proven time and time again that you’re a pleasure to do business with, but your sense of humor showcases one very important thing: you’re human, just like the rest of us.
Without a doubt, a shining beacon for the rest of the tech-world to look to for guidance into the emerging generation of Software as a (Web) Service.
Thanks, Amazon!
Dare Obasanjo aka Carnage4Life - Kiko sold
I’m confused as to how anyone can define this as good. After you take out however much the investors get back after investing $50,000 there really isn’t much left for the three employees to split especially when you remember that one of the things you do as the founder of a startup is not pay yourself that much. At best I can see this coming out as a wash (i.e. the money made from the sale of Kiko is about the same as if the founders had spent the time getting paid working for Google or Yahoo! as full time employees) but I could be wrong. I’d be surprised if it was otherwise.
Like Dare, I’m confused, but for one additional reason…
$50,000??? Google Calendar didn’t kill Kiko. Underfunding killed Kiko!
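Back-of-the-envelope, for the curious - the sale price below is the widely reported eBay closing figure, and everything else (the timeline, the market salary, the assumption that the investors take back exactly their $50,000) is a plain guess to be swapped out at will:

    sale_price    = 258_100   # reported eBay closing price (assumption)
    investment    = 50_000    # what Dare cites the investors as having put in
    founders      = 3         # "three employees", per the quote
    months_spent  = 18        # hypothetical
    market_salary = 8_000     # hypothetical monthly pay at Google/Yahoo!

    payout_each = (sale_price - investment) / founders
    forgone_pay = months_spent * market_salary
    print(round(payout_each), forgone_pay)   # ~69367 vs 144000: a wash at best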
There has been a predictable backlash against “Web 2.0″ as a meaningful movement (above and beyond a set of technologies). In response, I present here a short case in favour of Web 2.0 — what (I think) it means, what (I think) it’s made of, and the very real difference it can make when fully embraced.
I’m NOT leaving the basement before all sites follow W3C standards.
You mean like this standard?
Opera has near-complete support of XSLT 1.0 and XPath 1.0
“near-complete support”?
Yo Glenn… How’s basement life these days?
InformationWeek reviewed online Ajax applications in 6 different categories: Calendar, Email, Info Manager, Spreadsheets, Webtops, and Word Processors. In their assessment, Google is the emergent conqueror in 4 out of 6 categories; Google lost the race in the Webtops category and is behind Zoho Writer in the Word Processor category. Some of these outcomes don’t make sense to me, given the way I look at these applications. In my view, Google is the winner in only two categories (Mail and Info Manager), and Zoho is likewise the winner in two (Spreadsheet and Writer). Anyway, take a look at the results plus my pick:
Feature | Winner | Runner-up | Also Available | Also Available | My Pick
Calendar | Google Calendar | 30 Boxes | CalendarHub | Kiko | 30 Boxes
Email | GMail | Yahoo Mail | AOL Mail | Windows Live Mail | GMail
Info Manager | Google Notebook | Backpack | Voo2do | TimeTracker | Google Notebook
Spreadsheet | Google Spreadsheets | Zoho Sheet | Num Sum | iRows | Zoho Sheet
Webtop | Pageflakes and YouOS | Goowy | Protopage | Windows Live | Pageflakes
Word Processor | Zoho Writer | Writely | ajaxWrite | Writeboard | Zoho Writer leads in Ajax versions, but ThinkFree (Java applet version) is the best