
March 2003 Archives

Jason Deraleau


Introducing Centrino

Intel recently backed up Apple’s claim that 2003 is the year of the portable. Intel’s new Centrino lineup introduces the Pentium M processor, the first Intel chip designed from the ground up for mobile systems. Traditionally, Intel’s laptop chips have been “diet” versions of its desktop chips. While features like SpeedStep help increase battery life, Intel-based (and AMD-based, for that matter) laptops just haven’t been able to keep up with PowerPC-based laptops.

PC Magazine’s April 8th issue has a roundup of the various new Centrino laptops coming to market. To be considered a Centrino system, a portable must pack the new Pentium M as well as a new Intel chipset that stresses wireless technology. Laptops are already being released by Acer, Dell, Gateway, and several other major players in the market. The price point on the new systems is quite a bit higher than that of many other portables on the market at this point (and even some Tablet PC systems).

The Pentium M is actually an interesting venture. Prior to the Pentium M, the only other major processor designed specifically for portables was Transmeta’s Crusoe. If you’re not familiar with the Crusoe chip, it’s a hybrid processor that is half hardware and half software. The back end (the part that actually handles the calculations and processing) of the processor is the hardware portion, while the front end (the portion that handles scheduling) is based in software.

I have not yet seen a comparison that pits Crusoe-based laptops directly against Pentium M laptops. However, judging by the benchmarks I’ve seen for each individually, I’d guess that the Pentium M will outperform a Crusoe, but won’t get as much battery life as a Crusoe system.

Why do we care?

The big concern for Mac users is how the battery life will compare to Apple’s portables. It’s long been considered an unwritten rule that if you want good battery life, you should choose an Apple portable. The PowerPC processor uses far less battery power than the traditional x86 processor. The processor is definitely not the only factor in determining battery life, but most other components of Apple and x86 laptops are quite similar in design and power consumption.

Intel claims battery life of better than five hours. In contrast, Apple’s new 12″ PowerBook claims a battery life of up to five hours. I don’t know about anyone else, but I’ve never actually gotten results even close to that. I have a 15″ TiBook 800MHz and tend to get just about three hours of battery life with normal use. I’d be interested to hear how long Pentium M users are able to go without a recharge in normal use. If we go by what the manufacturers suggest at this point, it appears that the new Pentium M-based machines are getting better battery life.

The Thick Plottens

Performance-wise, a Pentium M machine at 1.4 GHz will best a Pentium 4-M at 2.4 GHz. Impressive, especially considering the lower power consumption. It almost makes you wonder why they don’t strap these puppies into desktops so we can cut down on the number of power plants out there. My initial thought when I heard the name of the new processor was that it convolutes the Pentium branding. It’s easy to see how people will confuse a Pentium M with a Pentium 4-M: two very different processors, but with names differentiated by a single character.

In addition to this, bringing in the Centrino name blurs things even further. A laptop with a Centrino label has a Pentium M processor, but apparently not all manufacturers are including the Intel Centrino chipset, which is an essential part of the platform. These half-Centrinos are shipping with the exact same label as the true Centrinos. Buyer beware!

Some additional thoughts on Intel

John Dvorak, a columnist for PC Magazine, recently made some conjecture about an Apple switch to Intel-based processors. He points to various interactions between Apple and Intel over the past year or so. I’d like to respond to this a bit, since I feel quite differently on the matter.

Apple was one of the original companies to help develop and nurture the PowerPC platform. They have spent years and millions of dollars creating a market for the chip through their systems. They have given countless presentations on the “Megahertz Myth” to explain how clockspeed isn’t a true indicator of performance, helping to alleviate buyer concerns. Apple has a strong interest in the PowerPC processor, and it is as much a part of today’s Macintosh experience as the one-button mouse.

A few months ago, many of the Apple rumor sites started running stories about how Apple was going to switch to using AMD processors. The rumors pointed to close ties between Apple and AMD, various common public appearances, etc. The same kind of ties that John Dvorak pointed out in his Apple Switch article. Apple keeps ties with many manufacturers on a variety of projects (e.g. Apple and AMD are both part of the HyperTransport consortium). This is normal behavior, not a sign of such a major change.

With the PowerPC 970 just around the corner, I don’t see Apple switching to an x86 architecture. While it is true that many of the other components inside a Macintosh are quite similar to an x86 machine, Apple’s stance is more political than technical. Apple has long kept a tight hold over the Macintosh architecture, which these days essentially encompasses the processor and chipset. Almost every other component in a Macintosh system is compliant with the same standards x86 manufacturers use for their systems.

The reason Apple keeps such a tight hold over the core of the Macintosh is to maintain the Macintosh’s legendary stability. By having such specific basic hardware to work with, Mac OS X runs rock solid. I’ve often thought that the reason the Windows platform is so unstable is the fact that Microsoft has to write code for thousands of different processor and chipset combinations (well that and a bunch of legacy hardware and software, but I digress). Mac OS X runs on far fewer variations in hardware, helping not only with stability, but with performance (okay, in theory).

The biggest hurdle in moving your customer base to a completely different processor ISA is end-user applications. While it is quite true that Mac OS X could easily be ported to x86 (if it hasn’t been already), thousands of other Macintosh applications would have to be recompiled or rewritten to run on the different processor. Let’s be realistic: not even all of the popular applications have been moved to Mac OS X yet.

Some of you might be thinking, “well, why not just port the Windows stuff over?” The issue there becomes one of libraries. There is nothing like Cocoa or Carbon on Windows, so Windows applications would have to be completely rewritten to run on an x86 version of Mac OS X as well. A move to x86 would require a lot of effort from both users and developers, effort that I just don’t think Apple is willing to risk forcing.

There will be a move though

Moving to the 64-bit PowerPC 970 from IBM would basically require only a recompile of Mac OS X and some small changes. The PPC 970 can handle 32-bit code (current Mac OS X) as well as 64-bit code, as long as the operating system knows how to handle both. End-user applications would require the same recompiles and small changes, but the immediate gain of having 64-bit processing in your application is minimal at best. Applications probably wouldn’t move to 64-bit except in a “why not?” kind of approach.

The biggest advantage of moving to the PPC 970 comes not from the fact that it’s 64-bit, but from its higher clockspeeds. The latest reports from IBM talk about the processor shipping with a clockspeed topping out in the 2 GHz range, an obvious jump from the current 1.4 GHz G4. In addition to higher clockspeeds, the PPC 970 has a lot of interesting design features to help it get more work done per clock cycle than our friend the G4e. For some great information on this, I recommend John Stokes’s article on IBM’s PowerPC 970.

So, I do see Apple making a processor change, most likely even this year. I predict, however, that it won’t be to an AMD or Intel processor, but to the more logical PowerPC 970. While it won’t be the fastest processor on the market, it will definitely be a breath of fresh air for the platform. Apple is currently preparing its product line for a new processor. The G3 remains in only a single system (the iBook). Expect the PPC 970 (I’m not sure if it will be branded the G5) in PowerMacs first, then the high-end PowerBooks. The G4 will be in the “i” line of products for some time to come.

Do you think Apple will switch to x86? Does the Centrino threaten Apple’s portables?

Scot Hacker


Just how disconnected are mid-career adults from kids who grew up with the Internet in their back pocket? The UC Berkeley Grad School of Journalism is hosting a full-day conference to explore issues related to the new digital childhood. As a public service, the entire event will be webcast live. Keynote Friday night, panels all day Saturday. Archives will go online sometime next week.

Chris Adamson


A little think-about-it for you:

What typically takes longer in Java?

  1. Sorting 1,000 Long objects
  2. Reading one byte from Yahoo

You might figure that the sort is going to be worse, seeing as how it’ll have to loop (or recur) through all those objects, and there are so many, and Java’s an interpreted language after all, so that means it’s sure to be slow. And reading a byte from a URL’s input stream that’s like, what, two lines maybe? Surely it’ll be fast.

Well, here’s the code:

import java.util.Random;
import java.net.*;
import java.io.*;
import java.util.Arrays;
public class WhosSlower {
    public static URL theURL;
    static {
        try {
            theURL = new URL ("https://www.yahoo.com/");
        } catch (MalformedURLException murle) {
            murle.printStackTrace();
        }
    }
    public static void main (String[] arrrImAPirate) {
        for (int i=1; i<=5; i++) {
            System.out.println ("--- Trial #" + i +
                                " ---");
            System.out.println ("Sort 1000 Longs: " +
                                sort1000Longs() + " ms");
            System.out.println (
                                "Read 1 byte from URL: " +
                                getFirstByteFromURL() +
                                " ms");
        }
    }
    /** sorts 1000 longs, returns the time it took
        (doesn't count time to set up the array
        in the first place)
     */
    public static long sort1000Longs () {
        Random rand = new Random();
        Long[] longs = new Long[1000];
        // populate array
        for (int i=0; i < longs.length; i++) {
            longs[i] = new Long (rand.nextLong());
        }
        // start clock, do the sort
        long inTime = System.currentTimeMillis();
        Arrays.sort (longs);
        return System.currentTimeMillis() - inTime;
    }
    /** opens connection to theUrl, reads one byte,
        returns the time it took
     */
    public static long getFirstByteFromURL () {
        long inTime = System.currentTimeMillis();
        InputStream in = null;
        try {
            in = theURL.openStream();
            in.read();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        } finally {
            long outTime = System.currentTimeMillis();
            // close up stream (not counting this
            // in the race)
            try {
                if (in != null) {  // openStream() may have failed
                    in.close();
                }
            } catch (IOException ioe2) {
                ioe2.printStackTrace();
            }
            return outTime - inTime;
        }
    }
}

And the results on a 300 MHz iBook running Mac OS X 10.2.4:

--- Trial #1 ---
Sort 1000 Longs: 30 ms
Read 1 byte from URL: 355 ms
--- Trial #2 ---
Sort 1000 Longs: 9 ms
Read 1 byte from URL: 123 ms
--- Trial #3 ---
Sort 1000 Longs: 4 ms
Read 1 byte from URL: 123 ms
--- Trial #4 ---
Sort 1000 Longs: 4 ms
Read 1 byte from URL: 129 ms
--- Trial #5 ---
Sort 1000 Longs: 4 ms
Read 1 byte from URL: 200 ms

And on a Windows box of unknown speed running Windows 2000:

--- Trial #1 ---
Sort 1000 Longs: 10 ms
Read 1 byte from URL: 361 ms
--- Trial #2 ---
Sort 1000 Longs: 0 ms
Read 1 byte from URL: 200 ms
--- Trial #3 ---
Sort 1000 Longs: 0 ms
Read 1 byte from URL: 150 ms
--- Trial #4 ---
Sort 1000 Longs: 0 ms
Read 1 byte from URL: 170 ms
--- Trial #5 ---
Sort 1000 Longs: 0 ms
Read 1 byte from URL: 171 ms

Setting aside the issue of Windows’ low resolution for System.currentTimeMillis(), it’s consistent that the sort is two orders of magnitude faster than the web read.

My point? Just that I think doing stuff in memory with Java is faster than most developers generally think, and doing stuff on the network is slower than most developers think. I’ve met people who insist on returning Vectors or ArrayLists from methods because of “performance concerns” with converting those to fixed-length, strongly-typed arrays, yet will happily throw an RMI call into a for loop and iterate over it 20 times.
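
To make the contrast concrete, here’s a hypothetical sketch. The QuoteService interface and its method names are invented for illustration (they’re not from any real API), but the Remote/RemoteException pattern is standard RMI:

import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical remote interface, invented for illustration
public interface QuoteService extends Remote {
    // anti-pattern fodder: one network round-trip per symbol
    String getQuote (String symbol) throws RemoteException;
    // friendlier: one round-trip for the whole batch
    String[] getQuotes (String[] symbols) throws RemoteException;
}

// On the client, the loop pays the full round-trip cost every time:
//     for (int i = 0; i < symbols.length; i++) {
//         quotes[i] = service.getQuote (symbols[i]);  // ~20 round-trips
//     }
// while the batched call pays it once:
//     String[] quotes = service.getQuotes (symbols);  // 1 round-trip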

The important but invisible difference between local and remote method calls sort of touches on an idea brought up in W. Keith Edwards’ Core Jini, the idea that making the network transparent to the developer is, in fact, a bad idea. Of CORBA, RPC, and the like, he writes:

The hardest parts of building reliable distributed systems have to do with precisely those things about the network that cannot be ignored by the programmer - the fact that the time required to access a remote resource may be orders of magnitude longer than accessing the same resource locally; the fact that networks fail in ways stand-alone systems do not; and the fact that networked systems are susceptible to partial failures of computations that can leave the system in an inconsistent state.

(first edition, p. 41)

So not only are our network calls slow, they’re hazardous. There’s no way to know that when our client calls

    happyServer.doStuff()

that the implementation of doStuff() on happyServer isn’t something like:

    while (true) {}

In other words, we really don’t have the right to expect that an RMI call will ever return.

And to make life more fun, let’s assume that we’re writing a GUI, and that this call is made from the AWT event dispatch thread, say, in response to a button click. As long as we block, possibly forever, we won’t service the GUI and in fact, won’t even get repainted if another window is dragged over ours.

Well…. that would suck, wouldn’t it?

And considering how many Java GUI’s are written for enterprise applications - typically distributed applications that use JDBC, RMI, CORBA, JMS, Jini, etc. - this seems like a problem that’s going to come up a lot.

The solution, it seems, is to use threads and let them run as long as they need (possibly forever), updating the GUI when the threaded network call finishes. The network call can be isolated in its own thread, and when it’s done, it can use a Swing “worker” (a Runnable passed to invokeLater or invokeAndWait in the SwingUtilities class) to update the GUI in a Swing-friendly way.
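
Here’s a minimal sketch of that pattern. The class and method names (FetchWorker, doSlowNetworkCall) are invented for illustration; the only real APIs used are Thread and SwingUtilities.invokeLater:

import javax.swing.JLabel;
import javax.swing.SwingUtilities;

// Runs the slow call on its own thread, then hands the GUI
// update back to the event dispatch thread via invokeLater
public class FetchWorker implements Runnable {
    private final JLabel statusLabel;
    public FetchWorker (JLabel statusLabel) {
        this.statusLabel = statusLabel;
    }
    public void run () {
        final String result = doSlowNetworkCall();  // may block a long while
        SwingUtilities.invokeLater (new Runnable() {
            public void run () {
                statusLabel.setText (result);  // safe: runs on the EDT
            }
        });
    }
    private String doSlowNetworkCall () {
        return "done";  // stand-in for the RMI/JDBC/HTTP call
    }
}

// launched from the button handler with:
//     new Thread (new FetchWorker (label)).start();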

Well, that’s well and good I suppose, but what does your app do in the meantime? If a user clicks a button and you immediately return (because you launched the thread), what do you do with the GUI in the meantime? Worse, what if the user clicks the button again - are you going to launch a second thread?

Now the problem is that the GUI has to know about the thread and what it’s doing.

I discovered this problem when I was doing a project last year, one in which I decided that everything was going to have a very rigid model-delegate design: I’d pass around model objects and ask a factory for an appropriate “view”, i.e., some kind of custom JPanel I’d written to render that model object. I figured I needed to handle models in two states:

  1. the model is null, so we clear and disable all the widgets in the panel
  2. the model is non-null, and has all the data, so we’re ready to go.

This was nice and all, but sometimes I had models that would take 30 seconds to populate from a database call, during which I’d have to just spin the wristwatch, hourglass, or rainbow stress-pizza. I was trapped by my panels’ need for a fully populated model, and the trap was a false dichotomy: there was a third state I hadn’t dealt with. The “and” in the second item is a giveaway: I needed my panels to handle a new case where:

  • the model is non-null, but it doesn’t have all the data yet, so we’re not ready to go.

If we’re willing to tolerate this state, then we can have a happy GUI again. I extended the tool interface with a thread-aware subinterface that added a getStatus() String the GUI could use to provide feedback, and a simple listener scheme that would fire off an event when the threaded operation was done (i.e., when the model had its data and was usable). Some implementations of this subinterface also got a getPercentDone() method, which allowed me to provide a progress bar. At any rate, the panel’s setModel() got trickier, but could now handle the various states (sketched in code after this list):

  1. if model object is null, clear and disable widgets
  2. if the model is non-null and not a running thread, call populateFromModel() to enable and fill in the widgets
  3. if the model is non-null and is a running thread, disable the widgets, set up a Swing Timer (which runs every 500 ms, calls getStatus() on the model, and resets a temporary label in the panel with the returned status), and also set up a listener (which, when called back with the thread-done message, calls populateFromModel() and stops the Timer)
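
Here’s a condensed sketch of what such a setModel() might look like. The ThreadedModel and CompletionListener interfaces, and the panel’s helper methods, are my own inventions for illustration, not the actual project code:

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.Timer;

// Hypothetical interfaces, invented for this sketch
interface CompletionListener {
    void modelReady();                    // fired when the data has arrived
}
interface ThreadedModel {
    boolean isRunning();                  // still loading?
    String getStatus();                   // progress text for the GUI
    void addCompletionListener (CompletionListener l);
}

// A hypothetical panel, condensed to just the model-handling logic
public abstract class ModelPanel extends JPanel {
    protected JLabel statusLabel = new JLabel();
    protected abstract void clearAndDisableWidgets();
    protected abstract void disableWidgets();
    protected abstract void populateFromModel (ThreadedModel model);

    public void setModel (final ThreadedModel model) {
        if (model == null) {                      // state 1: no model
            clearAndDisableWidgets();
        } else if (! model.isRunning()) {         // state 2: data ready
            populateFromModel (model);
        } else {                                  // state 3: still loading
            disableWidgets();
            final Timer timer = new Timer (500, new ActionListener() {
                public void actionPerformed (ActionEvent e) {
                    statusLabel.setText (model.getStatus());
                }
            });
            model.addCompletionListener (new CompletionListener() {
                public void modelReady () {
                    // (a real version would hop to the EDT here,
                    // via invokeLater, as in the earlier sketch)
                    timer.stop();
                    populateFromModel (model);
                }
            });
            timer.start();
        }
    }
}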

You could also do this without the listeners, by requiring that the model objects be Threads and then having the panel check whether the thread isAlive(), using the Timer to poll it: either updating the status label if the thread’s alive, or populating the widgets and stopping the Timer if it’s not. Just a question of design.

This approach gives my app more responsiveness in the places I’ve implemented it, since something’s always happening - for example, a count of loaded database records updates every 1/2 sec until they’re all available, at which point I can put them in a JList or something.

The two cases it doesn’t help much with are long RMI calls, and the theoretical case where the call never returns. A single RMI call that takes 30 seconds for the server to process doesn’t give me a status update halfway through; it’s just a single line that my thread blocks on. True, I’m not blocking the GUI anymore, but all I can do for the user is put up a “Waiting” type message or an animated GIF. As for the call that never returns, my user could go elsewhere in the GUI, but we’re now leaking threads, memory, and network resources, so a day of reckoning may eventually come, perhaps in the form of an OutOfMemoryError. If my user exits the panel, I can’t really assassinate the thread with Thread.stop(), because it’s deprecated and deadlock-prone (who knows what state it would leave the RMI call in?).

So, the problem is partially solved: the user experience is vastly better than when a 10-30 second database call would block the GUI, but there are still cases I’d like to be able to handle and can’t.

How do we write more responsive networked GUIs? Does Java give us the right tools? Should it force us to use better practices? Let’s have some ideas…

Derrick Story


Once there was order in the world of Mac conferences: Macworld SF in January, WWDC in May, MacHack in mid June, Macworld NYC in July, and Mac OS X Conference in October. Depending on your computing preferences, you’d pick a couple over the course of the year and book your flights.

Then the rumbling began. First there was disharmony between Apple and IDG over how to handle the East Coast show in the summer. Then we learned that Panther wasn’t going to be ready in time to distribute at WWDC in May. Suddenly, this year’s Mac conferences began to tumble upon each other.

So here’s the latest if you’re following the action:

  • WWDC has moved from San Jose to San Francisco and is scheduled for June 23-27.
  • MacHack remains in Detroit on June 19-21.
  • Macworld NYC is now called CREATE and is still in New York City on July 14-18.
  • O’Reilly’s Mac OS X Conference is happening in Santa Clara, CA on Oct. 27-30.

Indeed, Apple has begun a chain reaction for the remainder of this year’s conference schedule. Maybe it’s time for a change. And just like with everything else, we have to be more flexible than ever when making plans.

Bottom line is… I’m not booking my flights six months ahead of time anymore. No way! The world could be a different place before I ever reach the boarding gate. Thirty days in advance is just fine with me.

Jason McIntosh


Related link: https://www.fools-errand.com

Cliff Johnson is making a sequel to his celebrated 1988 puzzle game The Fool’s Errand, and has also made that and other classic titles free for download from his website.

The Fool and His Money picks up where the first game left off, rejoining the unflappably sanguine Fool for another puzzle-filled treasure hunt through Johnson’s Tarot-inspired, silhouette-rendered landscape. The new game will ship on Oct. 31, for both Mac and Windows. Johnson is accepting pre-orders on his website. (Those who do pre-order get their names embedded in the game itself — look for mine among them, certainly.)

He has also made the puzzle-stories that made him famous, The Fool’s Errand and 3 in Three, free for download, again for both Mac and Windows. Stunningly, both ancient games work fine under Mac OS X (under its Classic emulator). The earlier one lacks sound, but that’s still rather impressive forward compatibility for a game designed on a 512K Macintosh.

I will also randomly note that if you like those games (and have a Mac), you may also like Andrew “Zarf” Plotkin’s System’s Twilight, a freeware puzzle/story of the same style as Johnson’s games, but with completely original content and its own play style. (At least one of my puzzle-crafting friends insists it’s actually the superior game, since its puzzles tend towards the purely logical, with less reliance on mouse dexterity, knowledge of American English idioms, and other tricks that Johnson’s games sometimes like to pull.)

Does it count as paid work training if I write Perl programs to solve 3 in Three puzzles?

Jason Deraleau


Related link: https://www.apple.com/pr/library/2003/mar/19gore.html

Despite the demand for a recount, former Vice President Al Gore has joined Apple’s Board of Directors. This move honestly surprised me. I fear that Mr. Gore feels that Mr. Bush has already clinched a re-election through his war efforts. Of course we could see Mr. Gore return to politics for the following election, at which time he will certainly have my vote. I am reminded of the Steve Jobs for President campaign. Maybe Al’s political experience will rub off on Steve and they’ll end up as running mates some election. I’m not holding my breath ;)

In other news, today we saw the announcement of some changes to this year’s WWDC. Looks like Apple has rescheduled it to June and is planning to reveal Panther, the next major release of Mac OS X. If history repeats itself with this iteration, Panther should present some major feature and technology upgrades as compared to its predecessor. Mac OS X 10.1 was a drastic departure from 10.0. The Jaguar release brought major improvements in speed as well as several great new features (e.g. Rendezvous, iChat, etc.).

Hopefully Panther will bring as many great advances. I’m keeping my fingers crossed for multi-protocol, webcam, and voice support in iChat. I imagine we’ll see Safari replace Internet Explorer as the shipping browser as well. And I’m hoping for equally drastic speed and stability improvements.

Thoughts on Mr. Gore? Any new features you’d like to see with Panther?

Daniel H. Steinberg


Related link: https://developer.apple.com/wwdc/

Apple has just rescheduled their May WWDC (Worldwide Developers Conference) gathering. The new date is June 23-27 and the place is San Francisco’s Moscone Center.

Jason McIntosh


Related link: https://dear_raed.blogspot.com/

Where is Raed? is an English-language weblog maintained by a resident of Baghdad. It goes without saying that this offers a rather unique perspective into certain current events.

Chris Adamson


Some developers who installed Apple’s new Java 1.4.1 for Mac OS X (and didn’t read the release notes or the mailing lists) got a nasty surprise when they tried to run apps they’d written with QuickTime for Java:

cadamson% java -classpath PlayMovie.zip PlayMovie
Exception in thread "main" java.lang.NoClassDefFoundError:
com/apple/mrj/macos/carbon/CarbonLock
  at quicktime.jdirect.QTNative._getLock(QTNative.java:111)
  at quicktime.jdirect.QTNative.<clinit>(QTNative.java:105)
  at quicktime.QTSession.<clinit>(QTSession.java:114)
  at PlayMovie.main(PlayMovie.java:24)

What’s happening is that Java 1.4.1, now the default when you type java on the command line, does not support certain Java-to-Carbon technologies that QuickTime for Java depends on. This problem was hinted at in a previous QTJ article, but I had to address the issue indirectly, since Apple’s 1.4.1 was still under NDA.

So why so much breakage? The story, as well covered by MacDevCenter’s Daniel Steinberg in Apple Releases Java 1.4.1 for Mac OS X (https://www.macdevcenter.com/pub/a/mac/2003/03/10/osx_java.html), is that Apple has made a huge under-the-hood change, swapping out an AWT/Swing implementation based on the Carbon API for one built on Cocoa.

The change is quite significant, as the move to the object-oriented, multi-threaded Cocoa environment appears to have significantly simplified Apple’s Java codebase. However, it now makes calls from Java into Carbon code much more difficult. As explained in a java-dev message from Apple’s Java Project Manager (https://lists.apple.com/archives/java-dev/2002/Nov/21/accessingcarbonfromjava1.txt), balancing the pre-emptive threading of Java against the co-operative threading model of Carbon is a tricky proposition.

As a result, in many cases, they’ve decided to replace or simply abandon the Carbon-bound Java extras. The com.apple.mrj packages used to access Mac-specific features are now deprecated, replaced in part by com.apple.eawt and com.apple.eio. In fact, the term "MRJ", a holdover from the old Mac OS’ "Macintosh Runtime for Java" era, seems to be going away. Also gone is the "JDirect" API for simplifying Java-to-native calls.

Notice that both terms, "MRJ" and "JDirect", are implicated in the stack trace above. The QTJ code tries to get the Carbon lock to avoid hosing the Human Interface Toolbox, but the com.apple.mrj code to do so isn’t available in 1.4.1.

However, the situation is not as bad as it might sound. Java 1.3.1 remains in place after a 1.4.1 upgrade, still runs QTJ apps, and will reportedly be included in the next major release of Mac OS X. In some cases, 1.3.1 will even be the default JVM, for example, if your app is distributed as a .app bundle; the schemes for every scenario are spelled out in the release notes (https://developer.apple.com/techpubs/macosx/ReleaseNotes/java141/multiplevms/index.html). You could force use of 1.3.1 by using Java Web Start (https://java.sun.com/products/javawebstart/), or by using a shell script that explicitly calls /System/Library/Frameworks/JavaVM.framework/Versions/1.3.1/Commands/java, although the latter option is fraught with peril if a Java 1.3.2 is ever released.

With Java 1.3.1 sticking around on Mac OS X for the foreseeable future, and with ways to call it explicitly, there doesn’t seem to be any need for QTJ developers to panic just yet.

Still, the future of QuickTime for Java does seem to be up in the air. It has not been explicitly canceled or deprecated, as other APIs have been, but it clearly and unapologetically breaks in 1.4.1, which makes a lot of QTJ fans nervous.

It’s strange, because until now QTJ had seemed like a first-class citizen in the QuickTime world, prominently featured in developer documentation; the main QuickTime API documentation (https://developer.apple.com/techpubs/quicktime/qtdevdocs/RM/frameset.htm) even cross-references all the Java equivalents (https://developer.apple.com/techpubs/quicktime/qtdevdocs/APIREF/INDEX/javaalphaindex.htm) to the native API’s calls. Yet on the QuickTime for Java page (https://developer.apple.com/quicktime/qtjava), Apple says it is "interested in hearing from QuickTime developers and understanding what aspects of the technology they are using", as if a business case needs to be made for it to commit to the technology (or some subset of it) going forward.

From what I’ve seen, QTJ interest is greater than it has ever been, a beneficiary to some degree of Sun’s neglect of the Java Media Framework (https://java.sun.com/products/java-media/jmf/index.html), which has not seen a major version since 1999 as Sun’s media team has focused on the Mobile Media API (https://java.sun.com/products/mmapi/) instead. Developers who want to do media apps in Java are choosing between several imperfect options:

  1. Stick with JMF despite its limited collection of supported media formats and codecs. This is still the most practical all-Java option.
  2. Use QuickTime for Java and get great support of real-world media types, and arguably the best API for editing and creating media, with the gotcha that it only works on Windows and Mac (and now only on 1.3.1 on Mac).
  3. Do everything yourself in Java, like JavaZoom did with their all-Java MP3 decoder. Not for the faint of heart.
  4. Use JNI to tie into QuickTime, Windows Media, the Helix software open-sourced by RealNetworks (https://www.helixcommunity.org), or other native media APIs. Also difficult, and obviously limited to a handful of platforms.
  5. Dump Java and use the aforementioned media APIs in their native format.

My fear is that if QTJ is not supported in the future, option 5 will become the default, i.e., it will pretty much close the curtain on developing media applications in Java, because the other options are too much work for too little benefit. And that will make developers like me wonder if we like QuickTime so much that we’re ready to give in to the crazy square-braces and semi-automatic garbage collection of Objective-C. Time… or is that QuickTime… will tell.

Does Apple need to get QTJ happy in Java 1.4.1? Is there a better alternative? What do you think?


Related link: https://www.techtv.com/screensavers/showtell/story/0,24330,3419005,00.html

Cory Doctorow and I visited Leo Laporte on TechTV’s The Screen Savers. We chatted about the upcoming O’Reilly Emerging Technology Conference and some of the tech we see as emerging. Among the topics discussed were: Mesh Networking, Hardware Hacking, Rich Internet Applications, and Social Software.

TechTV has made the segment available online for your streaming enjoyment.


Related link: https://www.fastcompany.com/magazine/69/google.html

Fast Company has a wonderfully in-depth article on the success of Google. Bottom line:

Sidebar: How does Google keep innovating?
One big factor is the company’s willingness to fail. Google engineers are free to experiment with new features and new services and free to do so in public. The company frequently posts early versions of new features on the site and waits for its users to react.

These are just the themes we asked Google’s Craig Silverstein to cover in his upcoming O’Reilly Emerging Technology Conference keynote, “Google, Innovation, and the Web”.

Brian Jepson


Related link: https://radio.weblogs.com/0108971/2003/03/16.html#a135

Clemens Vasters writes: “C#? C++? VB? Java? English! Whenever I speak at conferences across Europe (right now that’s about twice a week), every other attendee’s first comment when talking to me after any speech will not be a technical one. Instead they say ‘your English is fantastic’. People seem indeed surprised s’at mei englesh iss not vot s’ey vutt expeckt…”

Derrick Story


Back in January I was able to steal a few moments of Jon Rubinstein’s time (Apple Senior Vice President of Hardware Engineering and a really nice guy). Among other things, I was curious to hear his opinions about USB 2.0 and its possible inclusion on future Macs.

Of course Jon wasn’t able to say much to someone wearing a media badge, but it seemed to me by his remarks and expressions that bringing USB 2.0 to the Mac wasn’t exactly a top priority on his task list.

What caused this topic to bubble up tonight? Well, I just stumbled across the greatest bargain in film scanners ever: the Minolta DiMAGE Scan Dual III, which packs a powerful scanning punch (2,800 dpi optical resolution, multi-sampling, and a 4.8 dynamic range) and sells for less than $300. If you’ve shopped for film scanners, you can understand my enthusiasm here. Those specs usually cost you three times as much.

The Scan Dual III is both Mac and Windows compatible, featuring a USB 2.0 interface that also works with the older 1.1 standard. According to Minolta’s features overview, the speed difference is notable: 30-second scans for USB 2.0 versus 48 seconds for USB 1.1. That difference adds up as you work through a stack of slides and a six-pack of Coke.

So I went over to PC Connection and searched for a USB 2.0 PCMCIA card for my PowerBook. Looks like IOGEAR has engineered the Dual Port Hi-Speed USB 2.0 card, which sells for $64 US. According to the catalog description, it auto-configures and requires no additional drivers. I’d love to get my hands on this card and the Scan Dual III to see how they work with Mac OS X.

But here’s my point: USB 2.0 is somewhat threatening to the FireWire golden goose, and Apple isn’t about to prop up this standard the way it did for 1.1. At the moment, there aren’t a ton of USB 2.0 devices, but they seem to be emerging faster than I had anticipated.

If you own a desktop Mac, you can always add a 2.0 PCI adapter and be on your way. But if you’re in the market for a laptop, keep in mind the value of a PCMCIA slot. iBooks and 12″ PowerBooks don’t have ‘em.

If Apple is going to sit tight with USB 1.1, you might want to leave your options open and get a laptop with a PCMCIA slot.

Have you used IOGEAR’s Dual Port Hi-Speed USB 2.0 card? If so, what’s your report?

Scot Hacker


Until recently, I had a large block of static IP addresses for my home network, which made server setups easy. But I also had fairly low upstream DSL speeds. In order to get faster upstream, I switched providers to Speakeasy. So far I’m very impressed with their service — no limits on reasonable connection sharing within households, and no limits on what kinds of servers you can run. The 768kbps upstream I purchased is going to be perfect for moderate domain hosting from home. I was able to buy an extra static IP from Speakeasy online for a few bucks and have it become immediately available — very cool. But during setup of the new network I hit a snag.

I have five machines on the home network, one of which is going to be a public web/mail server. The server needed to be on the public/static IP, while the other four machines needed to be on 192.168.x DHCP addresses. I couldn’t figure out how to configure the LinkSys BEFSR41 to enable both the Class C and the public networks simultaneously.

The answer was not in the user manual, nor was it on the LinkSys web site. But a friend had been through the same situation and had the solution down cold. The trick is to place a hub inline before the router. So rather than running from the DSL modem to the router and from there to the server and the workstations, run from the modem to a standard hub’s “Crossover” or “Uplink” port. The server can then be connected to one of the hub’s other ports. Another free port on the hub can be run into the LinkSys router’s WAN port. The workstations are connected to the router’s ethernet/hub ports (note that using the Uplink port on the LinkSys will disable port #1, so you’ll need to leave it empty).
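
Here’s a rough sketch of the wiring just described (port labels vary by hub and router model):

    DSL modem
        |
    standard hub  (modem plugged into the hub's "Uplink"/"Crossover" port)
        |-- server  (public static IP, gateway = the DSL remote gateway)
        |-- LinkSys BEFSR41, via its WAN port
                |-- LAN ports: workstations (192.168.1.x via DHCP)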

As for machine configuration, the workstations can now be set up to use the router as their DHCP server, presumably at 192.168.1.1. Meanwhile, the server gets configured to use its static IP. Rather than looking to the LinkSys router, it uses the DSL remote gateway address, just as the router does.

Of course, the server does not benefit from the firewall features of the router, and so becomes responsible for its own security. Another firewall of some sort needs to be deployed. But it is absolutely possible to run a mix of static/external and dynamic/internal IPs on a home network with a DSL gateway.

James Duncan Davidson


Recently, I’ve received a spate of inquiries about a particular problem that readers of Learning Cocoa are having when building projects that contain a space in their name. Unfortunately, there are several such projects in Learning Cocoa, including some of the first applications in the book, like Hello World and Currency Converter. You’ll know you’ve run into the problem when you see an error like the following:


missing file or directory Hello
missing file or directory World_Prefix.h

Why is this happening? Well, it’s because the December 2002 Tools release introduced support for precompiled headers. This feature makes life better by shortening compile times after the first build, and it is a welcome addition to Project Builder. However, there’s a bug: the project creation templates don’t put quote marks around the file name when setting up the precompiled header settings. There are two possible solutions, both easy:

  • Name your projects without spaces in them
  • Edit the target settings. To do this, select the current target, go into Settings -> Simple View -> GCC Compiler Settings, and then put quote marks around the file name in the Prefix Header box, as illustrated below.
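
For example, with the book’s Hello World project, the fix in the Prefix Header box looks roughly like this (the value is inferred from the error message above):

    Before:  Hello World_Prefix.h      (parsed as two files, "Hello" and "World_Prefix.h")
    After:   "Hello World_Prefix.h"    (quoted, parsed as a single file name)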

I’ve pestered my contacts at Apple and they are indeed aware of the problem. Hopefully, once we see a new version of the developer tools, the answer will be simply "Go to the ADC site and upgrade your tools!" Until then, use either of the above options and you’ll be good to go.

Daniel H. Steinberg


Related link: https://www.macdevcenter.com/pub/a/mac/2003/03/10/osx_java.html

Jaguar users, head to Software Update to download J2SE 1.4.1. Apple has moved its Java implementation from Carbon to Cocoa and added many other improvements in this major update.

Derrick Story


Since the news broke that Microsoft has bought Connectix’s Virtual PC, much of the online discussion has speculated about what Microsoft is going to do with this popular Mac product. I’m not sure myself. But technically speaking, I’m very interested in the latest release of VPC. I have version 6.0.1, running Windows XP, on a 1GHz 15″ TiBook (in full-screen mode, even). And I have to tell you, for the first time I’m impressed with VPC’s performance on Mac OS X.

In the past I did recommend version 5 because it was valuable for Web testing and other light PC tasks on a Mac. But only for short periods of time. It’s difficult to turn back our internal “computer performance” odometer to the old days when waiting for a page to slowly appear was acceptable. Version 5 of VPC was just too painful for more than 20 minutes at a time. But, then, that was all I usually needed to view newly designed Web pages or to make sure that all the components on a CD worked properly.

Then I installed version 6 on a 1GHz TiBook. Even running Windows XP Home Edition, I noticed a tremendous speed difference between versions 5 and 6. Plus, now I could run Windows full screen on the 15″ PowerBook instead of being limited to 800 x 600 resolution. (And you can even put Windows apps on your Mac OS X Dock.)

It’s funny what we’ll do given the opportunity. I’m writing this weblog live in IE 6.0, running on XP Home, via Virtual PC 6.0, on Mac OS X 10.2.4, on a TiBook. See, if you stick to working with text, you can do anything!

To give VPC 6 the ultimate test, I loaded Ulead’s VideoStudio 7 on to XP and played around with some clips. Believe it or not, VPC could actually play .avi clips smoothly in VideoStudio. The .mpgs and .movs did experience some stuttering, but the application never crashed and performed well enough for me to use it for research as part of a project I’m working on.

Side note here: Ulead’s VideoStudio 7 seems to be a darn good application if you want to edit DV on Windows. It costs less than $100, and is far better than Windows Movie Maker.

I’ve only had one gotcha so far. I enabled “Security Lockout” in VPC’s Security Preferences panel. I entered my normal password twice as required, and checked the lockout “PC Settings” box. Then a while later, when I went to change some settings, VPC balked at my password.

In the help menu, VPC lists this stern warning: “Warning: Passwords are stored using triple-DES encryption. It is not possible for Connectix to retrieve passwords, even in an emergency. Please note your password and use the ‘hint’ feature. Passwords are case-sensitive and are always required to access preferences.”

Well that’s nice. But you know what? My hint text is scrambled, leading me to believe that so is my password. So, I would recommend avoiding this option until I have more information on what happened. (And no, I didn’t forget my password, and the caps lock is not turned on.)

Other than the fact that I can no longer change my VPC preferences (fortunately I set the RAM allocation to 512MB!), I’m truly impressed with the usability of version 6. If you’re a VPC owner with a fast computer, it’s probably worth the $99 upgrade fee.

However, this will most likely be the last weblog I write in IE 6 on Win XP, on VPC, on OS X. Safari with its built-in spell checker and speedy performance (not to mention much better font rendering) is just too much fun.

Jason Deraleau


I love reading all of the rumor sites out there. Spymac, Think Secret, the list goes on. There are so many different “news” sites out there, some more reputable than others. My Mac news mostly comes from a MacSlash feed and a MacCentral feed viewed through Kontent (a Konfabulator widget). These two sites tend to have accurate news. Not so much rumor as actual fact.

This week I saw a news item come through describing a music service through Apple. Links are here and here. Both articles link to the LA Times, which is where the news originally broke. It seems that Apple is working out a deal with the various record labels to provide music to iTunes users.

While other providers out there have similar services, the Mac platform remains (as always) neglected. The rumor is that Apple is filling the void by extending iTunes with consumer-friendly DRM. This is important for two reasons: 1) Apple has resigned itself to including DRM in its software under industry pressure. 2) Apple is going to focus its DRM implementation on the consumer. Both of these are actually good things. If your peers are going to force you to ride the DRM bandwagon, at least make sure you pick a good seat.

This is where Apple’s creativity in improving technology comes into play. The iPod is still the nicest MP3 player on the market; it was revolutionary when it came out. Apple can take a product and spin it in such a way as to woo the masses. Can they do it again with a music service?

Of course the two biggest concerns are price and selection. If there isn’t any music that anyone wants, it will fail. If the pricing isn’t fair, people will just continue to steal. Hopefully, if this all turns out to be true, Apple will implement it in such a way that the downloads are easy and inexpensive. I’d like to see something along the lines of $10 a month (with maybe a $5 a month discount for .Mac users) for unlimited downloads to your Mac.

Once on your Mac, unlimited plays, just like any other track in your library. To move the tracks to your iPod, no charge. To burn them onto CD, a one-time charge of $1 per track. This last part I feel is fair. If I went to the store and bought a 15-track CD, I wouldn’t be surprised to pay $12-15. The advantage here is that I don’t have to take time out to drive, and I don’t get all of those flaky filler tracks that are on so many albums these days.

Now, what would be really nice is if we saw the Rendezvous library sharing feature added to iTunes at the same time. This would allow you to download your secured music to one Mac and share it with others on your network. I’ve been waiting for this feature since Macworld last July. With any luck both of these features (the library sharing and the online music service) will make it into the next iTunes update. It’s always nice to see Apple leading the pack.

Would you pay to download music? What’s a reasonable price?

Derrick Story


Weblogs are a growing part of O’Reilly Network content. We have more than 80 bloggers who take turns publishing interesting items every day. Currently, our weblogs account for more than one-tenth of overall page traffic throughout the Network, and that includes our domain sites such as ONJava, ONLamp, and Mac DevCenter, plus forums, search results, and other tallied pages.

Clearly, the time had come to create a new home for this flowing stream of topical information, and that’s exactly what we did. Now, at weblogs.oreilly.com, you can quickly peruse top ‘blogs from the last month (who’s hot and who’s… well, you know), descriptions of our active contributors, and a topics list that helps you quickly pinpoint content in the areas you find most interesting.

We’ll continue to list the latest weblogs on the O’Reilly Network home page, as well as throughout the site, but now when you click on the Weblogs heading, or the More weblogs link, you’ll be greeted with our new home for bloggers.

Brian Jepson


Related link: https://radio.weblogs.com/0001011/2003/03/01.html#a2380

In which Robert Scoble plays with a laptop that runs Linux and says “I would guess that Linux will have a hundred thousand desktops by the end of the year and possibly more than a million by the end of 2005.”