Using the background-color property and assigning an RGBA value to it, we can define the transparency of the div's background color. The transparency of any text or elements inside the div is unchanged. In contrast, using the opacity property, the paragraph above would inherit the 50% transparency defined on the div.

Unfortunately, as is often the case, browser support for RGBA is limited. Both Safari and Firefox 3 offer support for the RGBA color value system, but so far Opera and IE do not. The good news, though, is that we can use the RGBA value without worrying about it breaking our design by also defining a fallback color.

```css
div {
  background-color: rgb(0,0,0);     /* fallback for browsers without RGBA support */
  background-color: rgba(0,0,0,.5); /* 50% transparent black */
  color: #fff;
}
```

In most browsers that do not recognize RGBA values, that declaration is simply ignored, as it should be. In IE though (I know, surprise, surprise), it appears that RGBA values cause IE to not display the background at all. A way around this is to use conditional comments to reset the background to a solid color for IE. So we can define a solid color for browsers that do not accept RGBA values and leave the transparency for those that support it…a prime example of progressive enhancement.

I have set up a working comparison of RGBA versus using the opacity property for you to view in each browser. Remember, to see the effects of RGBA, you will have to view the page in Safari or Firefox 3.

An Objective Look at Javascript 2.0: Strong Typing
https://timkadlec.com/2008/04/an-objective-look-at-javascript-2-0-strong-typing/ (Mon, 28 Apr 2008)

In our first look at the new features of Javascript 2.0, we will focus on the new typing system. We are just going to highlight some of the major changes and potential uses. For more detail, see the ECMAScript 4.0 Language Overview.

Traditionally, Javascript is a loosely-typed language, meaning that variables are declared without a type. For example:

```javascript
var a = 42;          // Number declaration
var b = "forty-two"; // String declaration
```

Since Javascript is loosely typed, we can get away with simple ‘var’ declarations…the language will determine which data type should be used. In contrast, Javascript 2.0 will be strongly typed, meaning that type declarations will be enforced. The syntax for applying a given type will be a colon (:) followed by the type expression. Type annotation can be added to properties, function parameters, functions (and by doing so declaring the return value type), variables, or object or array initializers. For example:

```javascript
var a:int = 42;             // variable a has a type of int
var b:String = "forty-two"; // variable b has a type of String
function (a:int, b:string)  // accepts two parameters, one of type int, one of type string
function(...):int           // the function returns a value of type int
```

NOTE: There has been some confusion about enforcing type declarations, so I thought I'd try to clear it up. Enforcing type declarations simply means that if you define a type, it will be enforced. You can choose not to define a type, in which case the variable or property defaults to a type of 'Object', which is the root of the type hierarchy.

Type Coercion

Being a strongly typed system, Javascript 2.0 will be much less permissive with type coercion. Currently, the following checks both return true:

```javascript
"42" == 42
42 == "42"
```

In both cases, the language performs type coercion…Javascript automatically makes them the same type before performing the check. In Javascript 2.0, both of those statements will resolve to a ‘false’ value instead. We can still perform comparisons like those above; we just need to explicitly convert the data type using type casting. To perform the checks above and have them both resolve to ‘true’, you would have to do the following:

```javascript
int("42") == 42
string(42) == "42"
```
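Until Javascript 2.0 arrives, current Javascript already offers a taste of this strictness: the === operator compares without any type coercion. A quick sketch you can run today (Number() stands in for the future int() cast):

```javascript
// Loose equality coerces before comparing; strict equality does not.
console.log("42" == 42);          // true  (string coerced to number)
console.log("42" === 42);         // false (types differ)

// Explicit conversion, analogous to int("42") == 42 above:
console.log(Number("42") === 42); // true
```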

While adding a strongly typed system does make the language a bit more rigid, there are some benefits to this change, particularly for applications or libraries that may be worked with elsewhere. For example, for a given method, we can specify which kinds of objects it can be a method of by using the special 'this' annotation. I'm sure many of you just re-read that sentence and are scratching your heads trying to figure out what the heck that meant. An example may help:

```javascript
function testing(this:myObject, a:int, b:string):boolean
```

The method above accepts two arguments, an int and a string. The first part of the parameter list (this:myObject) uses the this: annotation to state that the function can only be a method of objects that have the type 'myObject'. This way, if someone else is using code we have created, we can restrict which objects they can use that method on, preventing its misuse and potential confusion.

Union Types

We can also use union types to add a bit of flexibility. Union types are collections of types that can be applied to a given property. There are four predefined union types in Javascript 2.0:

```javascript
type AnyString = (string, String)
type AnyBoolean = (boolean, Boolean)
type AnyNumber = (byte, int, uint, decimal, double, Number)
type FloatNumber = (double, decimal)
```

In addition, we can set up our own union types based on what we need for a particular property:

```javascript
type MySpecialProperty = (byte, int, boolean, string)
```

One final thing I would like to mention is that, in contrast to Java and C++, Javascript 2.0 is a dynamically typed system, not a statically typed one. In a statically typed system, the compiler verifies that type errors cannot occur at run-time. Static typing would catch a lot of potential programming errors, but it would also severely alter the way Javascript can be used and make the language that much more rigid. Because JS 2.0 is dynamically typed, only the run-time value of a variable matters.
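You can see that run-time emphasis in today's Javascript, where a variable's "type" is simply the type of whatever value it currently holds:

```javascript
var a = 42;
console.log(typeof a); // "number"

// Reassigning to a different type is fine today; under Javascript 2.0,
// with a declared type on the variable, this line would be rejected.
a = "forty-two";
console.log(typeof a); // "string"
```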

Phantom CSS
https://timkadlec.com/2008/04/phantom-css/ (Thu, 24 Apr 2008)

At the heart of CSS, of course, are its selectors. They are, after all, what allow us to apply styles to a given element in our (X)HTML. Sometimes, though, there is a desire to apply a style based on an element's state. That is where pseudo-classes come into play. You've probably all used them at some point…but there may be more there than you realize. Their value makes it worth taking a closer look.

Static Pseudo-Classes

Pseudo-classes allow us to apply an invisible, or "phantom", class to an element in order to style it. For example, let's look at the element most often styled using pseudo-classes: the anchor tag (<a>). Some anchor tags point to locations a user has already viewed, and some point to locations the user has not yet visited. Looking at the document structure, we can't tell this. Whether the link has been viewed or not, it looks the same in (X)HTML. However, behind the scenes, a "phantom" class is applied to the link to differentiate between the two. We can access this "phantom" class with pseudo-class selectors, like :link and :visited. (Pseudo-classes are always prefixed by a colon.)

The :link pseudo-class selector refers to any anchor tag that is a link…that is, any anchor tag that has an href attribute. The :visited pseudo-class selector does exactly what it sounds like…it refers to any link that has been visited. Using these pseudo-classes allows us to apply different effects to links on the page according to their visited state.

```css
a {color: blue;}
a:link {color: red;}
a:visited {color: orange;}
```

The above styles, for example, will make any anchor tag that does not have an href attribute blue (line 1). Any link that has an href attribute but has not been visited will be red (line 2). Finally, a visited link (line 3) will be orange.

Another static pseudo-class is :first-child. (The :first-child pseudo-class is not supported by IE6.) The :first-child selector is used to select elements that are first children of other elements. This is easily misunderstood: a lot of times, people will try to use it to select the first child of an element. For example:

```html
<div>
  <p>Here is some text</p>
</div>
```

Say we want to apply a style to the paragraph element. It is not uncommon to see people try to do this using the following style:

```css
div:first-child {font-weight: bold;}
```

However, this is not how the pseudo-class works. If we think back to the concept of pseudo-classes essentially being "phantom" classes, then what we just did was apply a phantom class to the div, like so:

```html
<div class="first-child">
  <p>Here is some text</p>
</div>
```

Obviously, that is not what we want. The :first-child selector doesn't grab the first child of an element; it grabs any element of the specified type that is itself a first child. The correct way to style that paragraph would be with the following line:

```css
p:first-child {font-weight: bold;}
```

That's probably as clear as mud, so it may help to take another look at the "phantom" class:

```html
<div>
  <p class="first-child">Here is some text</p>
</div>
```

Watch Your Language

Corny headings aside, we can select elements based on the language using the :lang( ) pseudo-class. For example, we can italicize anything in French using the following style:

```css
*:lang(fr) {font-style: italic;}
```

Where does the language get defined? According to the CSS 2.1 specification, the language can be defined in one of many ways:

In HTML, the language is determined by a combination of the lang attribute, the META element, and possibly by information from the protocol (such as HTTP headers). XML uses an attribute called xml:lang, and there may be other document language-specific methods for determining the language.

Dynamic Pseudo-Classes

So far, what we have discussed are static pseudo-classes. That is, once the document is loaded, these pseudo-classes don’t change until the page is reloaded. The CSS 2.1 specification also defines three dynamic pseudo-classes. These pseudo-classes can change a document’s appearance based on user behavior. They are:

  • :focus - any element that has input focus

  • :hover - any element that the mouse pointer is placed over

  • :active - any element that is activated by user input (ex: a link while being clicked)

Usually, these pseudo-classes are applied only to links. However, they can be used on other elements as well. For example, you could use the following style to apply a yellow background to any input field in a form when it has the focus.

```css
input:focus {background: yellow;}
```

The main reason this is not done a lot is lack of support. IE6 does not allow any dynamic pseudo-classes to be applied to anything besides links. IE7 allows the :hover pseudo-class to be applied to all elements, but doesn't let the :focus pseudo-class be applied to form elements.

Complex Pseudo-Classes

CSS offers us the ability to apply multiple pseudo-classes so long as they aren’t mutually exclusive. For example, we can chain a :first-child and :hover pseudo-class, but not a :link and :visited.

```css
p:first-child:hover {font-weight: bold;} /* works */
a:link:visited {font-weight: bold;} /* :link and :visited are mutually exclusive */
```

Again, there is a compliance issue here with IE6. The IE6 browser will only recognize the final pseudo-class mentioned. So in the case of our first style above, IE6 will ignore the :first-child pseudo-class selector and just apply the style to the :hover pseudo-class.

Looking Forward to CSS3

In addition to the pseudo-classes laid down in CSS 2.1, CSS 3 provides sixteen new pseudo-classes to allow for even more detailed styling capabilities. The new pseudo-classes are:

- :nth-child(N)
- :nth-last-child(N)
- :nth-of-type(N)
- :nth-last-of-type(N)
- :last-child
- :first-of-type
- :last-of-type
- :only-child
- :only-of-type
- :root
- :empty
- :target
- :enabled
- :disabled
- :checked
- :not(S)

For more information about the new pseudo-class selectors laid down in CSS3, take a look at the CSS3 selectors working draft, or the excellent write-up by Roger Johansson. Currently, very few have decent cross-browser support, but as Johansson says, they can still be used for progressive enhancement…and in such a quickly changing field, when we can stay ahead of the curve, we should take advantage of it.

An Objective Look at Javascript 2.0: Looking Back
https://timkadlec.com/2008/04/an-objective-look-at-javascript-2-0-looking-back/ (Tue, 22 Apr 2008)

There has been no shortage of debate over Javascript 2.0, based on ECMAScript 4.0. Some people are extremely excited about some of the new features being discussed, and some feel that Javascript 2.0 is shaping up to look a bit too much like Java or even C++ for their tastes.

Whether you like the new features being proposed, think they’re silly and unnecessary, or have no idea what the heck I am talking about, I think it’s important to have a firm grasp on some of the changes being proposed. Doing so will help you to better understand both sides of the debate, and also help to prepare you for when Javascript 2.0 becomes available for use.

There are far too many changes and fixes to discuss in one post, so this will be an ongoing series of posts. I'll be taking a look at what the new language provides us and why. Hopefully, by taking a closer look at all the changes, we can get a better feel for how they affect both web developers and Javascript in general. First though, we should take a quick look at how Javascript got to this point, and the reasoning behind the changes being suggested in Javascript 2.0.

Once Upon a Time…

Javascript has been around since 1995, when it debuted in Netscape Navigator 2.0. The original intent was for Javascript to provide a more accessible way for web designers and non-Java programmers to utilize Java applets. In reality though, Javascript was used far more often to provide levels of interactivity on a page…allowing for the manipulation of images and document contents.

Microsoft then implemented Javascript in IE 3.0, but their implementation varied from Netscape's, and it became apparent that some sort of standardization was necessary. So the European Computer Manufacturers Association (ECMA) standards organization formed the TC39 committee to do just that.

In 1997, the first ECMA standard, ECMA-262, was adopted. The second version came along a bit later and consisted primarily of fixes. In December of 1999, when the third version rolled out, the changes were more drastic. New features like regular expressions, closures, arrays and object literals, exceptions and do-while statements were introduced, greatly adding value to the language. This revision, ECMAScript Edition 3, is fully implemented by Javascript 1.5, which is the most recently released version of Javascript.

Like ECMAScript 3, the proposed ECMAScript 4 specification will bring a very noticeable change in the language. As it stands now, Javascript 2.0 will feature, among other changes, things like scoping, typing, and class support.

Let the Debate Begin

While some of the changes are bug fixes, the justification for the major revisions appears to be largely based on providing better support for developing larger-scale applications. With the growing popularity of AJAX, and the rise of RIAs, Javascript is now being used for much larger-scale apps than it was ever intended for. The proposed changes to ECMAScript 4 are intended to help make development of those kinds of apps easier by making the language more disciplined and therefore making it easier for multiple developers to work on the same application.

This is where the debate starts….how much do we need these revisions? Technically, we can implement a lot of the same kinds of structures using the language as it stands currently. The proposed changes are aimed at making that easier, but there are some people who worry about the effect this may have on what is currently a very expressive and lightweight language.

Which group is correct? Are the changes going to make our lives as Javascript developers easier, or force us to lose a lot of what makes Javascript such an attractive scripting language to use today? I think the only way to really judge how the changes will affect us is to take a closer look at the changes themselves and see both the good and the bad.

*[AJAX]: Asynchronous Javascript and XML
*[RIAs]: Rich Internet Applications

Spring Cleaning
https://timkadlec.com/2008/04/spring-cleaning/ (Wed, 16 Apr 2008)

Overall, I've been quite happy with the feedback I've gotten about the site so far. It's still quite young, however, and there are a few changes that I thought needed to be made to help it continue to grow, and hopefully make it easier for readers to find and use the content here. For anyone interested, I thought I would highlight the changes.

First off, I’ve increased the focus on past posts. I decided to add a listing of the latest posts to each page, as well as a listing of the most popular posts on the site in terms of views. The idea here is to hopefully make it easier for you to find earlier posts that you may have missed that may still be worth a look.

I also decided to add full RSS feeds. I have heard a lot about the debate between partial and full feeds and wasn’t sure at first how to proceed with them. Up to now, I had just been offering partial feeds. I am still keeping those, for any of you who do prefer them but I am also offering an RSS feed with the full posts in them now for those of you who prefer to have the entire article in front of you.

One new area in the footer is a listing of the galleries that were gracious enough to link to my site. I was overwhelmed by the positive response to the design of the site, and those galleries are how quite a few of you first came across the site. I thought I would finally get around to returning the favor by supplying some links back to them.

Finally, I decided to embed my Twitter status into the site. I fought the Twitter urge until March, but once I finally gave in, I became fairly addicted. It makes it easy to keep connected with people who you may not converse with a whole heck of a lot otherwise, and if you are following the right people, it can be quite the news source…lots of tech news seems to hit Twitter before anywhere else. To sum it up, you could do a lot worse for a networking tool.

Now, with all the new additions, something had to go to clear up some room. So, I decided to pull the “Things I Learned Online” section. Actually, I’ve wanted to do something different with that anyway, and this made for a good excuse. By only showing a couple links at a time in the side, I felt that some of the articles and tools I come across probably weren’t getting the attention they deserve given their quality. I wanted a way to highlight a few more at a time, and to do it in a way that brings a bit more attention to them.

So after much hemming and hawing I decided to occasionally post a small group of links to resources, articles and tools that I have found that I think may interest you. Don’t worry…I’m not going to turn into a site that posts nothing but lists of other sites, nor am I going to start doing a whole bunch of Digg-made top 10 posts. The bulk of the posts here are still going to be the same kind of content you’ve been seeing with focuses on technical, design, and theoretical articles…I’ll just occasionally throw some focus out to other articles that I think are worth a read.

Having said all that (that was a bit more long-winded than intended), I’m always interested in improving the quality of what my site has to offer. So, if anyone has any strong opinions about any new changes (good or bad), or has some other ideas they’d like to see implemented (content or just enhancements), let me know.

It's Good to Be Wrong
https://timkadlec.com/2008/04/its-good-to-be-wrong/ (Sun, 13 Apr 2008)

Being wrong is a good thing. I know…I know…we've been told our entire lives that it's better to be right than wrong. I think, though, that in the design/development industry, it's good to be wrong sometimes.

Always being right means we’re not challenging ourselves enough. It means that either we’ve become comfortable and content with where we are at with our skills, or that there is no one challenging us to improve those skills. In either case, we’re not progressing.

If we’re wrong, it means we’re pushing ourselves to explore our limits, to continue to expand our skill set. Being wrong opens the door for constructive criticism, which in turn leads to opportunities to learn. People who are willing to tell us when we’re wrong are the kind of people we should be surrounding ourselves with…they’re the kind of people who challenge us to become better designers and developers.

One quote that I believe sums it up pretty well is by Bill Buxton, a Principal Researcher at Microsoft. In his book "Sketching User Experiences", Bill has the following to say:

People on a design team must be as happy to be wrong as right. If their ideas hold up under strong (but fair) criticism, then great, they can proceed with confidence. If their ideas are rejected with good rationale, then they have learned something. A healthy team is made up of people who have the attitude that it is better to learn something new than to be right.

While Bill’s quote is aimed at designers, I think the rule applies to both designers and developers. Making mistakes, getting constructive criticism, and learning from that criticism is a healthy thing. It allows us opportunities to expand our skills and grow in our field. Only through this kind of healthy criticism can our skills, and ultimately the products we produce, become finely tuned.

Book Review: Pro JavaScript Design Patterns
https://timkadlec.com/2008/04/book-review-pro-javascript-design-patterns/ (Tue, 08 Apr 2008)

NOTE: This is the first book review to be featured here. The idea is that I will frequently review web-related books to hopefully help give you an idea of whether or not a book is right for you. The books reviewed will all be somehow related to web development or design, so you will never hear me tell you how much I enjoyed Stephen King's Dark Tower series or Napoleon's Pyramids by William Dietrich…except for right now, of course.

Who Wrote It?

Pro JavaScript Design Patterns is written by Ross Harmes and Dustin Diaz. Ross is a front-end engineer at Yahoo! and blogs (albeit not for a while) about random tech topics at techfoolery.com. Dustin works for Google as a user interface engineer. You can find Dustin's musings about web development topics at dustindiaz.com. This is the first book by either author.

What’s covered?

Pro Javascript Design Patterns is about…well, applying design patterns in Javascript of course. Design patterns are reusable solutions to specific, common problems that occur in development. Design patterns are more popular in software engineering, but as web applications become larger and more robust, design patterns are starting to become a bit more well known in the web development world.

Dustin and Ross do a great job of explaining different design patterns and showing how to apply them in the world of Javascript. The book starts off by walking you through some object-oriented principles as they relate to Javascript. There are sections on advanced topics such as interfaces, encapsulation, inheritance, and chaining. The second part of the book dives right into design patterns. For each pattern, you get to see how to implement it in Javascript, when to implement it, and the benefits you will see. Design patterns can also create difficulties if used inappropriately, so Ross and Dustin take a look at the disadvantages of each pattern so that you can accurately determine whether or not to use it in your applications.

Should I Read It?

The book definitely holds value for anyone working with Javascript and front-end development. The ideas laid out in the book can help anyone working with the language to create higher-quality, more efficient code. Developers who work with large-scale Javascript applications will benefit particularly, as that is what design patterns seem best suited for.

Make no mistake, the book’s title starts with the word ‘Pro’ for a reason…this is not a book intended for beginners. It is a very concisely written book that doesn’t take a lot of time setting the tone…the authors dive right into advanced concepts and code. If you are just getting rolling with Javascript or you don’t have a good grasp of object-oriented programming in Javascript, then you should probably pick up another book and come back to this later. On the other hand, if you are familiar with object-oriented programming in another language, you may find the book still manageable. That’s part of the beauty of design patterns…the theory works regardless of the language…it’s the syntax and implementation that can differ.

Final Verdict

All in all, I really enjoyed the book. It can take awhile to work your way through it (this is not a bed-stand book), but it is definitely worth it as the concepts addressed are invaluable to creating quality code. For anyone doubting the power of Javascript, this book is a real eye-opener. You will find that Javascript’s flexibility offers a lot of possibilities and by using it, along with industry-recognized design patterns, you can develop scripts that are both easy to communicate and easy to maintain.

Great…Where Do I Get A Copy?

More Manageable, Efficient Code Through 5S
https://timkadlec.com/2008/04/more-manageable-efficient-code-through-5s/ (Wed, 02 Apr 2008)

Sometimes code turns ugly. We add quick fixes or enhancements, and our code starts to become a big tangle of functions that aren't laid out in any sort of organized fashion. Over time, our code becomes bloated and difficult to maintain, and what should be simple little fixes can quickly turn into long walks through messy syntax. One way of combating this is by implementing the 5S System.

The 5S System is actually a Japanese improvement process originally developed for the manufacturing industry. Each of the five words, when translated to English, begins with 'S', hence the name. Like many good philosophies, however, the 5S System can apply to a variety of topics. For example, it has been applied by the Hewlett-Packard Support Center in a business context, resulting in improvements like reduced training time for employees and reduced call times for customers. By applying the system to coding, we can make our code more efficient and much easier to maintain.

Seiri (Sort)

The first 'S' is Seiri, which roughly translates to 'Sort'. Applied to the manufacturing industry, the goal of sorting was to eliminate unnecessary clutter in the workspace. The idea here is that in a workspace, you need to sort out what is needed and what is not. If you eliminate all of the items that are not necessary, you immediately have a workspace that is cleaner and thereby more efficient.

Applied to coding, this can mean going through our code and determining whether we have any lines that are really just taking up space. This can be things like error checking that has already been done at a previous step, or, if working in the DOM, retrieving the same element in more than one function instead of simply passing a reference to the element. This definitely applies to CSS as well. There are very few stylesheets in use that don't have a line or two that are really just unnecessary, because they either accomplish nothing different from the user agent's default behavior or are being overridden elsewhere.
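As a rough illustration of the pass-a-reference point (the object below is just a stand-in for a DOM element, so the sketch runs outside the browser):

```javascript
// A plain object standing in for a DOM element (hypothetical structure),
// so we don't need document.getElementById here.
var panel = { style: { display: "none" } };

// Instead of each function looking the element up again,
// fetch it once and pass the reference along.
function show(el) { el.style.display = "block"; }
function hide(el) { el.style.display = "none"; }

show(panel);
console.log(panel.style.display); // "block"
```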

Seiton (Straighten)

The next ’S’ means to straighten or ‘sort in order.’ This step involves arranging resources in the most efficient way possible so that those resources are easy to access.

For coders, this means going through and making sure that functions and code snippets that are related are grouped together in some way. This can be done in a variety of ways. If you are working with server-side scripting, consider placing related code together in an include. In CSS, use either comments or imported stylesheets to separate style declarations based on either the section of the page they refer to or the design function that they carry out (typographic styles in one place, layout styles in another). In object-oriented programming, organize your code into logical classes and subclasses to show relationships.
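One common Javascript idiom for this kind of grouping is collecting related helpers under a single object, so related code lives in one place instead of as scattered free-standing functions (the names below are made up for illustration):

```javascript
// Related string helpers grouped under one "namespace" object.
var text = {
  trim: function (s) { return s.replace(/^\s+|\s+$/g, ""); },
  capitalize: function (s) { return s.charAt(0).toUpperCase() + s.slice(1); }
};

console.log(text.capitalize(text.trim("  hello  "))); // "Hello"
```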

Seiso (Shine)

The third step laid down in the 5S system is the Shine phase. This involves getting and keeping the workplace clean. This is an on-going phase that should be done frequently to polish up anything that is starting to lose its luster.

As we go back and work on code, we can often get lazy and just throw things wherever, using messy coding techniques because they're quick and dirty. The long-term result, though, is unorganized code that is difficult to maintain. This phase requires a bit of discipline: we have to be willing to keep an eye out for portions of our code that are becoming unwieldy and take the time to clean them up, so that six months down the road we aren't pulling out our hair trying to remember what the heck we were thinking.

Seiketsu (Standardize)

The Standardize phase involves setting down standards to ensure consistency. We can apply this to our coding and make it much easier both for us in the future, and for new employees who may have to try and work with some of the code we have developed.

Standardization in code can come in a variety of forms. We've seen some standardization in coding for-loops, for example. In a for-loop, it is very typical to use the variable i as the counter throughout the loop. Coders of all levels of expertise recognize the variable i in those situations quickly and easily, because it is used so frequently.
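A minimal sketch of that convention:

```javascript
// The conventional i counter; any Javascript reader parses this at a glance.
var scores = [3, 5, 7];
var total = 0;
for (var i = 0; i < scores.length; i++) {
  total += scores[i];
}
console.log(total); // 15
```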

You can also standardize the way you format your code. Some people prefer to indent code inside of loops or functions for readability, others don’t. Whatever the case may be, be consistent with it. Having a consistent coding style makes it a lot easier to come back to that code later and be able to quickly locate where that new feature needs to be dropped in.

Shitsuke (Sustain)

Finally, our last step is to sustain the work we set down in each of the previous phases. This is perhaps the most difficult phase of all because it is never ending. There is a definite level of commitment to the process that has to be displayed here in order for us to continue using and utilizing this process when we code. We can’t be satisfied with doing this once or twice and then letting it go. If we work to continually implement this process, we help ourselves to create more manageable and efficient code from the start of the development process to the conclusion.

Hats Off To Opera
https://timkadlec.com/2008/03/hats-off-to-opera/ (Wed, 26 Mar 2008)

Well, that didn't take long. It was announced today that Opera's developers have achieved the first 100/100 score on the new Acid 3 test. There is apparently a small rendering glitch they still need to take care of, but this is really incredible progress considering the test was only formally announced on March 3rd.

The Acid test, for those unaware, is a test page set up by the Web Standards Project (WASP) to allow browsers to test for compliance with various standards. The test runs 100 little mini-tests, and to score 100, you need to obviously pass all 100 of the tests. The first Acid test was set up in 1998 and checked for some basic CSS 1.0 compliance. Acid 2 came around in April of 2005 and tested for support for things like HTML, CSS2.x and transparent PNG support. The new Acid 3 test checks for support for CSS3 selectors and properties, DOM2 features and Scalable Vector Graphics (SVG) among other things.

It should come as no surprise that Opera was one of the first to successfully pass the test. After all, they were the second browser to pass the Acid 2 test (Safari was first). What’s so impressive is how little time it took. It took Safari about 6 months to pass the Acid 2 test, but it took Opera just under a month to pass the Acid 3 test.

Not that we can get too awfully excited about this. The two major players here (IE and Firefox) both have a ways to go. The last I saw, Firefox 3 was up to a 71/100 score and IE 8 was at a frighteningly low 18/100. Let’s just hope that IE can close the gap quicker than the 3 years or so it took them to reach Acid 2 compliance! It’s looking like Safari, whose WebKit nightly builds are up to 98/100, will be the next to hit a perfect score.

In spite of the needed improvements in Firefox and IE, this is great news and I think that congratulations need to go out to Opera’s team of developers. They’ve done a great job of being proactive with their standards support and it shows. I also think that WASP deserves a pat on the back for all of this…they are obviously doing a good job of pushing standards compliance in browsers and giving vendors a goal to shoot for. We are starting to see some great improvements in standards compliance across the web and I, for one, am greatly looking forward to playing around with all the new toys!

]]>
Getting Started With ARIA https://timkadlec.com/2008/03/getting-started-with-aria/ Thu, 20 Mar 2008 19:00:00 +0000 https://timkadlec.com/2008/03/getting-started-with-aria/ Finding purely static websites today is becoming harder and harder. The line between website and web application blurs more and more as clients want more interactivity and real-time interaction on their site. This rich experience raises accessibility concerns though.

To create a lot of these dynamic interfaces, we often have to use (X)HTML elements outside of their semantic meaning. For example, navigation is marked up using list-items. That is all fine and well for a sighted visitor…we can see that the list is meant to be navigation. However, to a non-sighted user who is relying on a screen reader to determine the usage of elements on a site, it is difficult at best to determine that the list is used as a navigation structure.

That is where Accessible Rich Internet Applications (ARIA) come into play. ARIA offers attributes that we can use to add semantic meaning to elements. One of those is the role attribute.

Add Some Information

Roles provide information on what an object in the page is and help to make markup semantic, usable, and accessible. Using our previous example of a list used for navigation, by providing the role attribute, we can help the user agent to understand that the list is being used for navigation.

`

<ul role="navigation">
    <li><a href="#">Home</a></li>
    <li><a href="#">About</a></li>
</ul>

`

Likewise, we can tell the user agent if a paragraph is being used as a button:

`

<p role="button">Submit</p>

`

There are many different WAI roles to utilize. Nine of them were imported from the XHTML Role Attribute Module:

  1. banner - typically an advertisement at the top of a page

  2. contentinfo - information about the content in a page, for example, footnotes or copyright information

  3. definition - the definition of a term

  4. main - the main content of a page

  5. navigation - a set of links suitable for using to navigate a site

  6. note - adds support to main content

  7. search - search section of a site, typically a form.

  8. secondary - a unique section of the site

  9. seealso - contains content relevant to main content

The ARIA 1.0 specification also includes support for many more roles set down in the ARIA Role taxonomy. These include roles like button, checkbox, textbox and tree. There are many available there, so I am not going to try and show them all here. For that, you can take a look at the ARIA working draft.

Now For Some Meaning

In addition to the information provided by the role attribute, we can further add meaning about the state and relationship of elements with states and properties. Unlike roles, which are static, states and properties may change. For example, one state that is available is checked, which as you may guess is used with an element that has a role of checkbox. When a checkbox is unchecked the checked state is false. When the checkbox is checked, the checked state should change to true.

Using states and properties is rather easy to do:

`

<li role="checkbox" aria-checked="true">Include attachments</li>

`

Add Some Style

    In browsers that support attribute selectors in CSS, we can even use our new roles, states and properties to provide different visual effects that reflect an element’s meaning. For example, we can target all items on a page that have an aria-required state with this:

    `

    *[aria-required="true"] { background: yellow;}

    `

    In addition, changes in some states can be reflected visually with generated content. Consider a list-item that is tagged with an aria-checked state. Using the :before pseudo-element, we can provide a different image with each state change. (Note: this example is used in the W3C Working Draft)

    `

    1. *[aria-checked=true]:before {content: url('checked.gif')}

    2. *[aria-checked=false]:before {content: url('unchecked.gif')}

    `

    There is a lot of value in using ARIA. It helps to give meaning to the usage of an element on a page, greatly increasing the accessibility of a site. It’s very easy to use, and doesn’t break in browsers that don’t support it. If you want to learn more about ARIA and how to start implementing it, I highly recommend checking out the W3C’s overview on the topic.

    ]]> Respecting What You Don't Understand https://timkadlec.com/2008/03/respecting-what-you-dont-understand/ Mon, 17 Mar 2008 19:00:00 +0000 https://timkadlec.com/2008/03/respecting-what-you-dont-understand/ While at SXSW, I had the privilege of attending a panel called Respect! During the panel, Jason Santa Maria made a comment that really struck me. He said that it’s “difficult to respect what I don’t understand”.

    How very true. Respecting what we don’t understand is, if not impossible, extremely hard to do. Without some sort of knowledge of the process and steps involved in arriving at a solution, how can we really respect the work required to produce it? I think this comes into play when working with both clients and co-workers.

    As far as clients go, the solution involves making sure good communication takes place between you and the client. I think involving the client early and often helps to build respect for, and knowledge of, what you do. If we meet with the client about a project, then hand them a design some time later, they are not going to have any idea of the process involved. To them, it’s like delayed magic…they ask us to come up with a design, and voilà, we come up with one.

    However, if we go through a more involving process, they start to get a taste of all that goes into designing/developing the final product. We can start to show them our research, information architecture, wireframes and prototypes, all before actually showing them some sort of design. By walking through the project with them, a few things happen. First, they feel more involved. This can be great for clients…it’s always difficult to just blindly trust someone else with such a crucial part of your company’s marketing.

    Secondly, by allowing the client to see a lot of these steps, they begin to gain a greater respect for what is involved. Let’s face it, a lot of people simply don’t realize how much goes into developing their site or application. The web is open to anyone, and it makes people feel like anyone can just jump in and throw together a website. That’s why you run into clients whose site was developed by their mother’s brother’s lawnmower’s son’s cousin! By letting them see a bit more of our process, we help them gain a bit more respect for what actually goes on in the professional development of a site or application.

    Clearly, this can be taken too far. You don’t want to involve the client too much. If you do, you may end up confusing the client, which leads to frustration. It’s important to remember that while you want to get them involved, this is not their expertise, and anything you show them should be a very general perspective, and should be explained in non-technical terms.

    I also said that respecting what we don’t understand comes into play with co-workers. A co-worker with no knowledge of CSS is going to have a difficult time respecting your job of creating cross-browser compatible layouts. I think in this case we just need to remember how involved our own job can be, and assume that the job of so-and-so down the hall is just as involved.

    I think there is an excellent argument to be made here for the “Jack-Of-All-Trades” worker. Having at least a basic understanding of a variety of topics will help you to respect the work of the people using those languages or techniques (not to mention, at least in my opinion, make you a more attractive candidate for employment).

    In the end, it all comes down to communication. If we can find ways to effectively communicate to our clients and peers throughout our working process, we can hope to achieve some level of respect.

    ]]>
    Quicker DOM Traversing with CSS Selectors https://timkadlec.com/2008/03/quicker-dom-traversing-with-css-selectors/ Wed, 05 Mar 2008 19:00:00 +0000 https://timkadlec.com/2008/03/quicker-dom-traversing-with-css-selectors/ After looking at XPath and how it can be used to quickly traverse the document tree, I thought I’d also take a look at the W3C Selectors API as it falls along the same line. At this point, none of the major browsers support it in a final release. However, any WebKit build (Safari’s engine) since February 7th supports it, and it looks like IE8 will be supporting it as well. I’d be eager to hear if anyone knows where Opera and Firefox stand on getting it going here in the future.

    The Selectors API allows us to utilize CSS (1-3) selectors to collect nodes from the DOM. This is actually quite a common enhancement in a lot of Javascript libraries….CSS selectors are a very efficient and powerful way to quickly look up nodes, and since most people are familiar with CSS syntax, it is very user friendly. The Selectors API offers native browser support for CSS selectors using the querySelector and querySelectorAll methods.

    The querySelector method as defined by the W3C returns the first element matching the selector, or if no matching element is found, it returns a null value.

    The querySelectorAll method returns a StaticNodeList of all elements matching the selector, or if no matching elements were found, a null value. For anyone familiar with DOM traversal, you are probably familiar with NodeLists. NodeLists are returned by methods like getElementsByTagName. The main difference between a StaticNodeList and a NodeList is that NodeLists are live: if you remove an element from the document, the NodeList is affected and its indexes are altered. A StaticNodeList, however, is not affected…hence the Static part.
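    To make the difference concrete, here is a browser-only sketch (the element choice here is arbitrary):

    `

    var live = document.getElementsByTagName("li");   // live NodeList
    var frozen = document.querySelectorAll("li");     // StaticNodeList

    var first = live[0];
    first.parentNode.removeChild(first);              // remove a list-item

    // live.length has now gone down by one;
    // frozen.length still reports the original count.

    `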

    The querySelector and querySelectorAll methods are very easily used:

    `

    1. //returns all elements with an error class

    2. document.querySelectorAll(".error");

    3. //returns the first paragraph with an error class

    4. document.querySelector("p.error");

    5. //returns every other row of the table with an id of data

    6. document.querySelectorAll("#data tr:nth-child(even)");

    `

    In addition to calling the methods with a single selector, you can also pass groups of selectors separated by commas, like so:

    `

    1. document.querySelectorAll(“.error, .warning”);

    2. document.querySelector(“.error, .warning”);

    ` The first line above would return all elements with a class of error or a class of warning. The second line would return the first element with a class of either error or warning.

    You can see the advantage of having native support for the SelectorAPI by taking a look at some test results. SlickSpeed runs the test cases using the popular Javascript libraries Prototype, JQuery and ext as well as by using the Selectors API and the results are substantially quicker using the Selectors API. To run the native support test, you will need to go grab the WebKit nightly build. If you don’t want to do that, Robert Biggs ran the test in various browsers and has the test results up.
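    Until support is more widespread, one reasonable pattern is to feature-test for the native method and fall back to a library’s selector engine. A quick sketch (the fallback function name here is hypothetical):

    `

    function getAll(selector) {
        if (document.querySelectorAll) {
            // fast native path
            return document.querySelectorAll(selector);
        }
        // hypothetical fallback to a library's selector engine
        return librarySelect(selector);
    }

    `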

    ]]>
    SXSW Anticipation and Twitter https://timkadlec.com/2008/02/sxsw-anticipation-and-twitter/ Fri, 22 Feb 2008 19:00:00 +0000 https://timkadlec.com/2008/02/sxsw-anticipation-and-twitter/ After having been signed up to attend since early October, it just dawned on me yesterday that I only have one full “work week” left until SXSW. This will be my first major web conference, and to say I am excited about going is a vast understatement. I believe my wife is probably looking forward to it as much as I am, if only for the fact that once it is over she no longer has to hear every little update from me about panel programming and new social events.

    I can only imagine that being surrounded by that many people who are passionate about the web for 5 days will be quite inspiring and reinvigorating. While this is the first conference I will be attending, I hope that there will be many more.

    In fact, ideally I’d like to go to several each year. Listening to the presentations and having the chance to mingle with other web-minded folk seems to me an incredible way to keep in tune with the trends of the industry, and an effective way to find new techniques or skills to pursue.

    After having looked over the panels roughly 100 times, there are several that I am particularly excited to check out.

    Secrets of Javascript Libraries

    I was excited for this back when I thought only John Resig, he of JQuery fame, would be presenting. Now that I hear that people who either created or contributed to other major libraries like Dojo, Prototype and Scriptaculous are also going to be there, this panel has really shot up to the top of my list.

    Browser Wars: Deja Vu All Over Again?

    Finally, a question I have long wondered about will be answered: What happens when you stick major players at Firefox, Opera, and IE in a room together? Cage match anyone?

    Design Eye for South By

    I’ve heard nothing but great things about this panel from years past. Can’t wait to see what they come up with this time around.

    Everything I Know About Accessibility I Learned From Star Wars

    Honestly….Derek Featherstone had me at Star Wars. The fact that the presentation covers such an important topic like accessibility is really just gravy.

    Design is In the Details

    Actually not sure what to do here. Naz Hamid’s presentation sounds fantastic, but Slideshare is also talking about the lessons learned about AJAX and Flash while creating SlideShare.net during this time. Decisions, decisions.

    I could name many more that sound great, but then you would just get bored and move on if you haven’t already.

    In addition to the panel programming, from everything I hear, the networking opportunities are amazing at SXSW, and I am quite excited to have the opportunity to meet some people in person for the first time. I always enjoy running into another passionate web developer or designer. The discussions are always interesting.

    I am amazed by the number of social events currently scheduled. Should be a good time, but I am quite curious as to how people actually manage to stay at these things the whole time and then be ready to go again in the morning. I hope there is a Starbucks nearby.

    For anyone interested, I did break down and sign up for Twitter recently, in no small part because I hear last year it turned into quite an essential tool to stay in the loop as far as where to meet up with people and such. So, if any of you are going to be at SXSW, you should follow me on Twitter so we can meet up some place. I’d love the opportunity to meet some of you in person.

    And for those of you who aren’t going to be there, but want to follow me anyway, feel free to. I’m going to do my best to keep up with the updates there and I may even have something interesting to say from time to time.

    ]]>
    XPath in Javascript: Predicates and Compounds https://timkadlec.com/2008/02/path-in-javascript-predicates-and-compounds/ Mon, 18 Feb 2008 19:00:00 +0000 https://timkadlec.com/2008/02/path-in-javascript-predicates-and-compounds/ Welcome to the second part of my look at XPath and how it can be used in Javascript. Part one served as a real basic introduction to what XPath is, how it can traverse the document tree, and an introduction to using XPath expressions in Javascript using the evaluate method. So far what we have seen is really basic. There is some value in it, but we can build much more robust expressions with a bit more knowledge.

    Getting More Detailed with Compounds

    So far, we haven’t dealt with any compound location paths…each of our expressions has just gotten nodes that are direct children of the context node. However, we can continue to move up and down the document tree by combining single location paths. One of the ways we can do this (and this should look quite familiar to anyone who has moved through directories elsewhere) is by using the forward slash ‘/’. The forward slash continues to move us one step down in the tree, relative to the preceding step.

    For example, consider the following:

    
    var myXPath = "//div/h3/span";

    var results = document.evaluate(myXPath, document, null, XPathResult.ANY_TYPE, null);

    The expression above will first go to the root node thanks to our ‘//’. It will then get any div elements that are descendants of the root node. Then, we use the forward slash to move down one more level. Now we are saying to get all h3 elements that are direct descendants of one of the div elements that was returned. Finally, we once again use our forward slash to move down one more level, and tell the expression to return any span elements that are direct descendants of the h3 elements we already found.

    In addition, we can use the double period ‘..’ to select an element’s parent node. For example, if we use an expression like ‘//@title’, we will get all title attributes in the document. Let’s say that what we actually wanted is all elements in the document that have title attributes. Using the parent selector (..), we can do just that. The expression ‘//@title/..’ first grabs all title attributes. Then the double period tells the expression to step back up and grab the parent node for each of those title attributes.

    This is a pretty handy little feature. We can use the double period to select sibling elements by doing something like ‘//child/../sibling’ where child is the child element, and sibling is the sibling element we are looking for. For example, ‘//h3/../p’ would get all p elements that are siblings of h3 elements.

    Finally, we can use a single period ‘.’ to select the current node. You will see this become useful when we introduce the use of predicates.

    Speak Of the Devil

    Each expression we’ve seen returns a bunch of nodes matching criteria. Occasionally, we will want to refine this even further. We can do that using predicates, which are simply Boolean expressions that get tested for each node in our list. If the expression is false, the node is not returned in our results; if the expression is true, the node is returned.

    Predicates use the typical Boolean operators: ‘=’, ‘<’, ‘>’, ‘<=’, ‘>=’, ‘!=’, ‘and’ and ‘or’. As promised, the single period becomes much more useful when combined with predicates. For example, we can grab all h3 elements that have a value of “Yaddle” by using the following expression:

    
    //h3[.="Yaddle"]
    
    

    The dot tells the expression to check the value of the current node. If the value equals “Yaddle”, the h3 will be returned to us. Let’s take a look at another example, one that is a bit more practical. Let’s say you have a calendar of events, and you want to retrieve all the events that occurred between 2005 and 2007. Being the smart developers we are, we wrapped all the event years in a span with a class of year, like so:

    
    <span class="year">2007</span>
    
    

    Getting all the year spans where the value is between 2005 and 2007 is easy. We can simply do this:

    
    //span[@class="year"][.>=2005 and .<=2007]
    
    

    Ok…granted, at first glance that is pretty ugly, so let’s break it down.

    1. //span - Get all span elements

    2. [@class="year"] - Make sure the only span elements we grab have a class of ‘year’

    3. [.>=2005 and .<=2007] - Make sure the value of span is between 2005 and 2007. We use the ‘<=’ and ‘>=’ operators versus the ‘<’ and ‘>’ operators because we want to also return values in the years 2005 and 2007.
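    Tying this back to the evaluate method from part one, running the year expression in the browser might look like this (a sketch, reusing the markup above):

    
    var myXPath = '//span[@class="year"][.>=2005 and .<=2007]';
    var years = document.evaluate(myXPath, document, null, XPathResult.ANY_TYPE, null);

    var span = years.iterateNext();
    while (span) {
        // each result is a span node, so we read its textContent
        alert(span.textContent);
        span = years.iterateNext();
    }
    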

    Making sense out of all the slashes and brackets can take some getting used to, so don’t be discouraged if it takes you awhile before you can make sense out of what is happening there. Once you get more familiar with the syntax used, you will find you can create some really robust checks in one line of code that would have taken numerous iterations using DOM methods.

    ]]>
    XPath in Javascript: Introduction https://timkadlec.com/2008/02/xpath-in-javascript-introduction/ Tue, 12 Feb 2008 19:00:00 +0000 https://timkadlec.com/2008/02/xpath-in-javascript-introduction/ As reported by John Resig, Prototype, Dojo, and Mootools have all switched their CSS Selector engines to be using XPath expressions instead of traditional DOM methods. With the attention being placed on XPath expressions, now is a good time to get familiarized with them and what they can accomplish.

    This is going to be a multi-post series, as there is just so much you can accomplish by using XPath expressions that if I tried putting it into one post, no one would have the time to sit and read the whole thing.

    What is XPath?

    Any of you out there who are familiar with XSLT will no doubt be familiar with the XPath language. For the rest of you, XPath is used to identify different parts of XML documents by indicating nodes by position, relative position, type, content, etc.

    Similar to the DOM, XPath allows us to pick nodes and sets of nodes out of our XML tree. As far as the language is concerned, there are seven different node types XPath has access to (for most Javascript purposes the first four node types will most likely be sufficient):

    1. Root Node

    2. Element Nodes

    3. Text Nodes

    4. Attribute Nodes

    5. Comment Nodes

    6. Processing Instruction Nodes

    7. Namespace Nodes

    How Does XPath Traverse the Tree?

    XPath can use location paths, attribute location steps, and compound location paths to very quickly and efficiently retrieve nodes from our document. You can use simple location paths to quickly retrieve nodes you want to work with. There are two basic simple location paths - the root location path (/) and child element location paths.

    The forward slash (/) serves as the root location path…it selects the root node of the document. It is important to realize this is not going to retrieve the root element, but the entire document itself. The root location path is an absolute location path…no matter what the context node is, the root location path will always refer to the root node.

    Child element location steps are simply using a single element name. For example, the XPath p refers to all p children of our context node.

    One of the really handy things with XPath is we have quick access to all attributes as well by using the at sign ‘@’ followed by the attribute name we want to retrieve. So we can quickly retrieve all title attributes by using @title.

    Using XPath in Javascript

    That’s all well and fine, but how do we use this in Javascript? Right now, Opera, Firefox and Safari 3 all support the XPath specification (at least to some extent) and allow us to use the document.evaluate() method. Unfortunately at this time, IE offers no support for XPath expressions. (Let’s hope that changes in IE8)

    The document.evaluate method looks like this:

    
    var theResult = document.evaluate(expression, contextNode, namespaceResolver, resultType, result);
    
    

    The expression argument is simply a string containing the XPath expression we want evaluated. The contextNode is the node we want the expression evaluated against. The namespaceResolver can safely be set to null in most HTML applications. The resultType is a constant telling what type of result to return. Again, for most purposes, we can just use the XPathResult.ANY_TYPE constant which will return whatever the most natural result would be. Finally, the result argument is where we could pass in an existing XPathResult to use to store the results in. If we don’t have an XPathResult to pass in, we just set this value to null and a new XPathResult will be created.

    Ok…all that talk and still no code. Let’s remedy that shall we. Here’s a very simple XPath expression that will return all elements in our document with a title attribute.

    
        var titles = document.evaluate("//*[@title]", document, null, XPathResult.ANY_TYPE, null);
    
    

    If you take a look at the XPath expression we passed in, "//*[@title]", you will notice that we used the attribute location step followed by the attribute we want to find, ‘title’. The two forward slashes preceding the at sign are how we tell the browser to select from all descendants of the root node (the document). The asterisk says to grab any nodes regardless of type. Then we use the square brackets in combination with our attribute selector to limit our results only to nodes with a title attribute.

    The evaluate method in this case returns an UNORDERED_NODE_ITERATOR_TYPE, which we can now move through by using the iterateNext() method like so:

    
    var theTitle = titles.iterateNext();
    while (theTitle){
        alert(theTitle.textContent);
        theTitle = titles.iterateNext();
    }
    
    

    Since each item in the results is a node, we need to reference the text inside of it by using the textContent property (line 3). You can only iterate to a node once, so if you want to use your results later, you could save each node off into an array with something like below:

    
    var arrTitles = [];
    var theTitle = titles.iterateNext();

    while (theTitle){ arrTitles.push(theTitle.textContent); theTitle = titles.iterateNext(); }

    Now arrTitles is filled with your results and you can use them however often you wish.

    This is just the beginning…as we continue to look at XPath expressions and introduce predicates and XPath functions, you will start to see just how truly robust XPath expressions are. At this point, IE doesn’t support using XPath expressions in Javascript, but with each of the other major browsers having some support, and major Javascript Libraries placing an emphasis on using them, it’s only a matter of time before we can begin using these expressions to create more efficient code.

    ]]>
    Share Your Site with the Masses https://timkadlec.com/2008/02/share-your-site-with-the-masses/ Tue, 05 Feb 2008 19:00:00 +0000 https://timkadlec.com/2008/02/share-your-site-with-the-masses/ Originally, it was never going to get this complex. The internet was never meant to be this popular. However, as time has gone by and this wonderful beast of a resource has evolved, it has become important to be able to provide our content to a wide variety of devices. In addition to simply viewing a site on a computer screen, or printing it, our information may be accessed by Braille feedback devices, speech synthesizers, handheld devices, etc. More often than not, one set of styles will not be adequate to provide our content optimally to each of these devices. That is where media types come into play.

    Media types can be extremely useful. For example, there is very little reason to display a site’s navigation on a print-out. Using the print media type, we can set up a style that hides our navigation section. Handheld devices, which have very small screens and often low bandwidth, may benefit from not displaying a bunch of images.

    CSS 2 offered us 10 media types as a way to designate which styles are applied depending on the device that accesses our site:

    1. All - all devices (this is default)

    2. Aural - speech synthesizers

    3. Braille - Braille tactile feedback devices

    4. Embossed - paged Braille printers

    5. Handheld - handheld devices (usually small screen, low bandwidth, possibly monochrome)

    6. Print - printing or print preview

    7. Projection - projected presentations (projectors, printing on transparencies)

    8. Screen - computer screen

    9. Tty - media using a fixed-pitch character grid (terminals or teletypes)

    10. Tv - television devices

    If no media type is declared, the default is “all”. Using these media types, we can tell devices to only use certain sets of styles. There are three basic ways of doing this:

    Using Inline Styles

    
    <style type="text/css">
        @media print{
            body{ background-color:#FFFFFF; }
            #heading{ font-size:28px; }
        }
    </style>
    
    

    Inline style sheets are not a very good solution, as they do not separate content and presentation.

    Imported Stylesheets

    
    <style type="text/css" media="print">
        @import "print.css";
    </style>
    
    

    Imported style sheets are a much better solution, and are fairly widely used. A distinct advantage of imported style sheets is that a styles sheet is only downloaded if that specific media type is being used. For example, if I defined the above styles to be associated with the handheld media type and someone using a regular computer came to my site, they wouldn’t have to download the styles.

    Linked Stylesheets

      
    <link rel="stylesheet" type="text/css" media="print" href="print.css" />
    
    

    This is the most widely supported. As you may have guessed, a user will download each stylesheet regardless of the media type, and then use the appropriate ones. A bit unfortunate, as it wastes a little bit of time downloading styles we’re not really going to use.

    It is important to note that some styles only have meaning within a certain media type, and others are not applicable to certain media types. For example, the aural media type has no use for the font-size style while the page-break-before style is really only useful in the media types like projection, printing, and tv.

    Unfortunately, the support for most media types is quite minimal. You can pretty much depend on all, screen, and print. However, at this point, only Opera supports the projection media type, and the handheld media type isn’t widely supported yet on handheld devices. Feel free to use them anyway, as even if the user agent doesn’t recognize the media type named, it will just ignore it.

    Media Types on Steroids: Media Queries

    Media types will eventually become even more useful. CSS3 will implement media queries, which will allow us to check for certain criteria. For example, with media queries we can do something like the following:

      
    <link rel="stylesheet" type="text/css" media="screen and (color)" href="print.css" />
    
    

    What we are telling the user agent is to only use those styles if the device uses a screen media type AND the device is a color device, not monochromatic. The parentheses are required around the expression to indicate that it is a query. Media queries will allow us to check for items like width, height, max-width, max-height, min-width, min-height, color, resolution, etc.
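    As a sketch of what that will allow (the 480px breakpoint and the id names here are made up for illustration), a max-width query could serve a simplified layout to narrow screens:

```css
/* Hide the sidebar and let the content fill the viewport
   on screens 480px wide or narrower. The max-width feature
   comes from the CSS3 Media Queries draft. */
@media screen and (max-width: 480px) {
    #sidebar { display: none; }
    #content { width: 100%; }
}
```

    Wider screens simply ignore the block, so the same stylesheet can serve both layouts.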

    Opera already has some limited support for media queries. You can check for height and width values using the pixel measurement in Opera. Hopefully other browsers won’t be too far behind. Actually, to try and push the concept forward a bit, media queries are one of the criteria being built into the new Acid 3 browser test.

    You can check out a more detailed look at media queries by looking at the W3C candidate recommendation on the subject.

    ]]>
    It's All in the Details https://timkadlec.com/2008/02/its-all-in-the-details/ Mon, 04 Feb 2008 19:00:00 +0000 https://timkadlec.com/2008/02/its-all-in-the-details/ When coding and designing there are a lot of steps and techniques that may seem trivial and appear to have little importance in the grand scheme of things. Does it really matter if we are using meaningful names for our variables in our code, or for our CSS ids and classes? Who really cares if we use deprecated elements in our X/HTML so long as browsers recognize them? So what if I am not consistently formatting my code the same way?

    I am a huge fan of basketball, and find the history of the game particularly enjoyable. One of the basketball figures from the past that I have always admired the most was John Wooden, who coached the UCLA basketball team to 10 NCAA national titles, including 7 in a row at one point. He had four 30-0 seasons, and at one point his team won 88 consecutive games. Point being…the man was quite good at his job.

    Each year, Wooden started out his season by having all of his players come into the locker room for his first lesson. He’d sit them all down, then pull out a pair of socks and slowly demonstrate the proper way to put them on. He’d roll the socks over the toes, then the ball of the foot, arch and heel, and then pull the sock up tight. He would then have the players slowly run their fingers over the socks to make sure there were no wrinkles. Seems kind of trivial right?

    However think about it for a second…if he put that much attention into ensuring that such a small task was carried out so precisely, wouldn’t it follow that each task his team performed would be given the same kind of thought and attention to detail?

    It’s that way with programming and design as well. If we think details like semantic names, using progressive enhancement, and consistently formatting our code are important, won’t we also be concerned with much bigger details like making sure our code is efficient, our program is easy to use, and our design is effectively portraying the message we are trying to send?

    And what if we do decide that some of these “trivial” details are not important enough to worry about? Where do we then draw the line between what matters and what we can just kind of ignore? If it’s ok to not use meaningful names for our variables, is it also ok if our code takes a few more seconds to load, or if one of our scripts is not unobtrusive? When does something become important enough to matter?

    It may seem somewhat trivial to make sure that all our identifiers in CSS are meaningful names, and that in our programs we always format our functions the same way. However, if we put that kind of attention into all the little things that go into programming and design, just imagine the high quality finished product we will have. It is that attention to detail that separates the good programs from the great, a good looking design from a “wow” design.

    That’s why we can never sit still. We need to always push ourselves to find better solutions…more efficient code, more effective design. Just because something works doesn’t mean it works well. Only by taking time to pay close attention to the “minor” details that go into our development process can we be sure that our final, finished product will be one of high quality and durability.

    ]]>
    Detailed Look at Stacking in CSS https://timkadlec.com/2008/01/detailed-look-at-stacking-in-css/ Wed, 30 Jan 2008 19:00:00 +0000 https://timkadlec.com/2008/01/detailed-look-at-stacking-in-css/ Using the z-index to affect stacking order in CSS is a much deeper topic than it may appear at first. The idea seems quite simple, but if we take a look we can see that there is actually quite a bit going on here that warrants a closer examination.

    Most of the time, stacking order just kind of works behind the scenes and we don’t really pay any attention to it. However, once we use relative or absolute positioning to move an object around the screen, we will end up with several elements occupying the same space. Which element is displayed on top is determined by the elements’ stacking order. We can adjust an element’s stacking order by using the z-index property.

    The z-index is so named because it affects an element’s position along the z-axis, the axis that runs from front to back relative to the user. If we think of the x-axis and y-axis as height and width, then the z-axis would be the depth. The higher the z-index of an element, the closer it comes to the user; the lower the z-index, the further back on the screen it appears.

    If we do not specify any z-index values, the default stacking order from closest to the user to furthest back is as follows:

    1. Positioned elements, in order of appearance in source code
    2. Inline elements
    3. Non-positioned floating elements, in order of appearance in source code
    4. All non-positioned, non-floating, block elements in order of source code
    5. Root element backgrounds and borders

    Based on the default stacking order above, you can see that any element that has been positioned, whether relative or absolute, will be placed above any element that is not positioned. Both positioned and non-positioned elements are of course, above the background of our root element.

    Mixing Things Up A Bit

    Now let’s say we want to move some of our elements around in the stacking order so different elements appear on top. We can use the z-index property on any positioned elements to adjust their stacking order. The z-index property can accept an integer, the auto value, or an inherit value. When using integers, the higher the positive number, the further up in the stacking order it will appear. You can use negative z-index values to move the element further down in the stacking order. If we do not give an element a z-index value, it will render at stacking level 0 and will not be moved. The stacking order now looks like this:

    1. Positioned elements with z-index of greater than 0, first in order of z-index from lowest to highest, then in order of appearance in source code
    2. Positioned elements, in order of appearance in source code
    3. Inline elements
    4. Non-positioned floating elements, in order of appearance in source code
    5. All non-positioned, non-floating, block elements in order of source code
    6. Positioned elements with z-index of less than 0, first in order of z-index from highest to lowest, then in order of appearance in source code.
    7. Root element backgrounds and borders

    Stacking Context

    An interesting thing happens though when we give a positioned element a z-index value other than auto: we establish a new stacking context. Let’s say we set #front to have a z-index of 5. Now, we have just established a new stacking context for any element descending from (contained in) #front. If #middle is contained within #front, and I set its z-index to 2, it will still appear above #front. Why? Because since we set a z-index value on #front, every descendant of #front is now being stacked in relation to #front. It may be helpful to look at this as a multiple number system (as demonstrated by Eric Meyer in CSS: The Definitive Guide):

     
    #front 5.0
    #middle 5.2
    
    

    Since #front is the ancestor that sets the stacking context, its relative stacking level can be thought of as 0. Now when we set the z-index for #middle, we are merely setting its local stacking value. Of course 2 is higher than 0, and therefore even though in our CSS it looks like #middle should be displayed behind #front, we can see that it is actually displayed on top.

    For an example, consider the following code:

      
    <div id="one">
        <div id="two"></div>
    </div>
    <div id="three"></div>
    
    
    Now, using CSS we position these elements so that there is some overlap:
    
    #one{
        position: absolute;
        left: 0px;
        top: 20px;
        z-index: 10;
    }
    #two{
        position: absolute;
        left: 50px;
        top: 30px;
        z-index: 15;
    }
    #three{
        position: absolute;
        left: 100px;
        top: 30px;
        z-index: 12;
    }
    
    

    Z-Index Example

    The result is that #two shows up below #three, even though the z-index value we gave it (15) is higher than the z-index value we gave #three (12). This is because #two is a descendant of #one, which established a new stacking context. Which means if we use our numbering system, we would get the following stacking order:

    
    #three 12
    #two 10.15
    #one 10.0
    
    

    Firefox Gets It Wrong

    Ok…that felt weird to say. We are all used to Firefox getting most CSS things right, but this is one area it gets wrong. According to CSS 2.1, no element can be stacked below the background of the stacking context (the root element for that particular context). What this means is if we adjust the CSS above to give our #two element a negative z-index, the content of #one should overlap over the content of #two, but the background color should not. The way IE renders this is correct. Both results are shown below:

    Z-Index Example

    You can see that in IE, while the content of #one is still set above the content of #two, the background color remains behind it, as specified in CSS 2.1. Firefox on the other hand, shoves the entire #two element, background color and all, behind #one. Until this is fixed, be careful about using negative numbers for the z-index of an element.
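    To see the discrepancy yourself, you could give #two from the example above a negative z-index (a sketch; only the z-index changes from the earlier rule):

```css
/* #two drops below level 0 of the stacking context created by #one.
   Per CSS 2.1, #one's background should still paint behind #two's
   content, while #one's content paints in front of it. Firefox
   (at the time of writing) incorrectly paints the background in
   front of #two as well. */
#two {
    position: absolute;
    left: 50px;
    top: 30px;
    z-index: -1;
}
```

    Comparing the result in IE and Firefox makes the rendering difference described above easy to spot.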

    Go Forth and Experiment

    Definitely take this and play around with it. This is a topic that is best understood by setting up some positioned and non-positioned elements and experimenting with different z-index values. If you are feeling bold, check out the W3C’s really detailed breakdown of the stacking order of not just elements, but their background colors, background images, and borders. As with most topics in CSS, there is more here to understand than we first realize.

    ]]>
    Develop for the Next Guy https://timkadlec.com/2008/01/develop-for-the-next-guy/ Mon, 28 Jan 2008 19:00:00 +0000 https://timkadlec.com/2008/01/develop-for-the-next-guy/ All developers at one point or another will have to work with code that they didn’t develop. Whether we are replacing the person who created the application, or simply trying to work on a project developed by someone else in house, this is always a bit of an interesting experience. It becomes necessary to familiarize ourselves with the existing coding techniques used on the project so that we can quickly edit and maintain it for our purposes.

    Unfortunately, this is often a total mess of a job. The code we have to work with is often quite long, poorly documented, looks like ancient Greek, and leaves us angrily spewing silent (perhaps in some cases not so silent) insults at whoever the poor person was who created this mess. Not only does this leave us frustrated, but it also can frustrate our employers, as projects that should’ve been easily taken care of now require much more time and effort.

    Here then, are a few practices you can start using now to ensure that the next guy working on your application isn’t hoping for your demise.

    Start Commenting

    This is one practice that should be ingrained in your head from early on in your development career. In addition to making the code easier for you to navigate in a month or two, effective commenting can also make it much easier for a new developer working with your code to understand what is going on. Any section of code that may require more explanation (functions in particular) should have a comment explaining what is going on there.

    It can also be useful in some cases to explain why a particular solution was used instead of another one. If, when developing, you found that one solution resulted in better performance than another, comment about it. A person just trying to understand your code may not realize that there is a performance benefit to your code, and may ditch it in favor of something he/she is more familiar with.
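    As a hypothetical example (the function and the performance claim in its comment are made up for illustration), a “why” comment might look like this:

```javascript
// Build the markup as one string and return it in a single piece.
// Appending DOM nodes inside the loop triggered a reflow per item
// and tested noticeably slower in older browsers, so don't
// "simplify" this back to repeated appendChild calls.
function renderList(items) {
    var html = "";
    for (var i = 0; i < items.length; i++) {
        html += "<li>" + items[i] + "</li>";
    }
    return "<ul>" + html + "</ul>";
}
```

    A new developer reading this knows the string-building approach was deliberate, not accidental.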

    Use Descriptive Names

    Few things are more frustrating to someone trying to work with your code than coming across blocks like this:

          
    var j = 0;
    var a = getData();
    for (var i=0; i < a.length; i++) {
        var x = a[i].getName();
        if (x === 'John') {
            j++;
        }
    }
    
    

    Granted, a simple for loop like the one above is not too difficult to follow, but what exactly are the variables a, j, and x? This may seem to save you some typing initially, but coming back to this in a few months will drive you crazy. Variable names should make some sense.

          
    var counter = 0;
    var employees = getData();
    for (var i=0; i < employees.length; i++){
        var firstName = employees[i].getName();
        if (firstName === 'John'){
            counter++;
        }
    }
    
    

    Just by using better variable names, we have made the code much easier to understand. Even to someone completely unfamiliar with your application, it is easy to tell that we are looping through a bunch of employees, and counting how many of them have a first name of ‘John’. Not so easy to tell in our first example.

    Be Consistent

    This goes for naming conventions as well as formatting. Come up with a set way of naming variables and stick to it. Don’t have my_variable on one line, and then otherVariable on the next. If you are going to use underscores, stick to underscores. If you want to use camel casing, then use camel casing on each of your variables. It makes it much easier to tell at a glance what values are variables in your code.

    When you are declaring functions, decide how you want to display them. Some people like to use the following format:

      
    function getName()
    {
        ...
    }
    
    

    Others will stick the opening bracket of a function on the same line as the initial declaration.

          
    function getName(){
        ...
    }
    
    

    It doesn’t really matter which method you use, just so long as you continue using it throughout your code.

    Utilize Common Design Patterns

    Design patterns are documented solutions to specific programming problems, which allow developers to avoid solving the same problem again and again. They provide us with a way of quickly communicating the method used to resolve a problem. Common design patterns, like the factory pattern or the singleton pattern, have the added benefit of being used by programmers of many different languages. Anyone who recognizes the pattern can tell right away what is being done; it’s just a matter of figuring out the exact syntax of the specific language being used.
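    As a rough sketch (the Config object and its contents are made up for illustration), the singleton pattern in Javascript might look like this:

```javascript
// Singleton pattern: ensure only one instance of a shared
// configuration object ever exists, no matter how many times
// it is requested.
var Config = (function () {
    var instance; // private, shared across all calls

    function createInstance() {
        return { theme: "default", debug: false };
    }

    return {
        getInstance: function () {
            if (!instance) {
                instance = createInstance();
            }
            return instance;
        }
    };
})();
```

    Anyone familiar with the pattern can tell at a glance that Config.getInstance() will always hand back the same single object.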

    Be careful with this one though. Don’t just use a design pattern to be using design patterns. If you make sure your code can benefit from the use of a design pattern, then go ahead and implement one. Otherwise, you will just end up with over-engineered code that is more complicated than it may need to be.

    Make it Flexible

    Make sure your methods are flexible and can be used in a variety of different ways. You never know what may be required of your application in the future, so build your methods in a way that the data they return can be put to different uses. For example, let’s look at some very simple Javascript that involves getting an employee’s name and outputting it to a div.

      
    function getName(employee){
        var myDiv = document.getElementById("divName");
        var employeeName = employee.name;
        myDiv.innerHTML = employeeName;
    }
    
    

    This works perfectly fine for our solution. What happens though if in 3 months, we decide that we actually want to use the name in an alert instead? Now we have to go back, find our getName function and rework it. Instead, if we make the getName function more flexible, we can allow future developers to use it however they choose.

      
    function getName(employee){
        return employee.name;
    }
    
    

    Separate the retrieval of information from the usage of it. It makes the code more flexible, and much easier to adjust in the future.
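    With the retrieval separated from its use, the same function can serve both callers (a sketch; getName is repeated so the snippet is self-contained, and the employee object and div id are illustrative):

```javascript
// Retrieval only: no assumptions about how the name will be used.
function getName(employee) {
    return employee.name;
}

var employee = { name: "John" };
var employeeName = getName(employee);

// One caller can write it into a div...
// document.getElementById("divName").innerHTML = employeeName;
// ...another can show it in an alert, with no change to getName itself.
// alert(employeeName);
```

    Either presentation can change later without ever touching the retrieval code.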

    These are just five simple techniques you can use to ensure that your code is easier both to understand and to adapt for the next guy who comes along and has to modify it. It also has the added benefit of making your life a little easier when several months down the road, your boss tells you to change some of the functionality. It is now easy to both understand what the code is doing, and how to make it do what you want.

    ]]>
    IE's Questionable Version Targeting https://timkadlec.com/2008/01/ies-questionable-version-targeting/ Tue, 22 Jan 2008 19:00:00 +0000 https://timkadlec.com/2008/01/ies-questionable-version-targeting/ There has been an awful lot of talk around the web community about Microsoft’s new feature in IE8 - version targeting. Initially, I hated the idea. However, instead of jumping in blindly, I thought it deserved a more detailed look on my part.

    What Is It?

    Version targeting, as proposed by Microsoft, will use an X-UA-Compatible declaration, either via a META tag or as an HTTP header on the server, to determine which rendering engine the page will be displayed in. For example, the META tag below will tell IE to use the IE8 rendering engine to display the page:
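    The tag, as given in Microsoft's proposal, looks like this:

```html
<meta http-equiv="X-UA-Compatible" content="IE=8" />
```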

    If IE8 comes across a site that doesn’t have this declaration either in a META tag or as an HTTP header, then it will render the page using IE7’s rendering engine. This idea is not entirely new. DOCTYPE declarations have been switching IE browsers from ‘quirks mode’ to Web Standards mode since, I believe, IE6. There were some limitations with this. While using a DOCTYPE ensured standards mode, there is a definite difference between what standards mode is in IE6 versus IE7.

    The X-UA-Compatible declaration is meant to be more robust. Here, we can tell the browser exactly which version of IE to render the page in, thereby alleviating us from the headaches that may be caused by a different rendering engine in IE8 than in IE7 for example. We can also use the ‘edge’ keyword (which is apparently not recommended) instead of declaring a specific version. The ‘edge’ keyword is used below:
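    In META-tag form (again per Microsoft's proposal), the ‘edge’ declaration is:

```html
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
```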

    By using the ‘edge’ keyword, we are telling IE to always use the most current rendering engine available. This basically gives us the option of ignoring IE’s new feature. However, this seems like a flawed idea, because as Jeremy Keith said “…even if you want to opt out, you have to opt in.”

    Some Problems

    I agree with Keith in thinking that the idea was implemented wrong. The X-UA-Compatible declaration should be a tool to use, not a required feature. If I want my site rendered in the newest version of IE, I shouldn’t have to tell it that. It should assume that unless I tell it otherwise, I want my site rendered with the most current rendering engine, not the other way around. I guess I understand how this makes sense from a business perspective; this way, everything works at least as well as it did before. However, for a community that puts so much emphasis on progressive enhancement, this doesn’t seem to fit the mold.

    I am also not so sure that this is any better than using conditional comments. If I can develop for standards supporting browsers and then use conditional comments to “fix” the other ones, then what benefit do I really get from using the X-UA-Compatible declaration? Also, what happens years down the road, after IE9 and IE10 are released? If I am one of those people still using IE8 at that time, and I come across a site that declares it should render in IE10, how will IE8 handle that? I would like to assume it would just render it using the highest version it knows (IE8 can only render IE8 or lower, so an IE9 declaration results in IE8). Of course that just brings us back to using hacks again to ensure the older browsers still show our site reasonably well, and then we’re back at the beginning.

    I would also be interested to see if this is going to result in substantial code bloat for IE. If IE10 is potentially supporting four different rendering engines (quirks mode, standards mode in IE7, IE8, IE9) how is this going to affect the size of the browser code? I could see this potentially resulting in a pretty hefty amount of disk space being required in the future as more and more engines are being supported.

    Not All Bad

    The idea is not totally off base. It offers a nice feature: we don’t have to scramble to make sure our sites don’t break in the newest version of IE. I just think that it should be an optional feature…I either use the declaration and therefore ensure that my code will be rendered as always, or I don’t use it and allow progressive enhancement to work its magic.

    I still say kudos to IE for trying a new idea out. If nothing else, this has gotten the community discussing the advantages and disadvantages of Microsoft’s proposed solution, as well as talking about other routes we could take. Even after looking at it in more detail though, I just don’t think this is going to help solve much of anything. I don’t know that there is that big of an advantage offered by it, and I just don’t think that other browser vendors will think it is worth their time. Who knows though? Maybe in five years, people will be looking at this post and remarking about how short-sighted I was. I guess time will tell.

    Don’t Take My Word For It

    This is a very opinionated topic that has generated some great discussion already across the web. I encourage you to check out some of the varying opinions and arguments presented in the posts below:

    ]]>
    Display a Link's Href When Printing https://timkadlec.com/2008/01/display-a-links-href-when-printing/ Mon, 21 Jan 2008 19:00:00 +0000 https://timkadlec.com/2008/01/display-a-links-href-when-printing/ Using print stylesheets is a nice way to enhance a user’s experience of a site. Our screen stylesheets don’t necessarily turn out that nicely when printed, so by using a few different CSS rules in our print stylesheet we can increase readability and usability.

    One of those nice features we can add is to display a link’s href directly after the link on our print-outs. This allows someone who has printed the page to still see where the links on the page are pointing. We can do this with CSS by using the :after pseudo-element and some generated content.

         
    a[href]:after{
        content: " [" attr(href) "] ";
    }
    
    

    There are really four important parts of the statement above:

    1. a[href]: Here, we use an attribute selector to select all links in our page with an href attribute.

    2. :after: The :after pseudo-element allows us to insert some content after each link’s content and style it if necessary.

    3. content: This is what actually generates the content. We could just insert, for example, the letter “a” with a declaration like content: “a”.

    4. attr(href): This gets the href attribute of the link currently being styled. This way, each link will display its own href.

    If we put this style in our print stylesheet, all of our links that actually have an href will print out like this: TimKadlec.com [https://www.timkadlec.com]

    Obviously, this is a pretty handy enhancement to our print stylesheets. Now, the links printed out actually have some meaning to them. The problem is, Internet Explorer doesn’t support the :after pseudo-element, nor does it support the content property. So if a user is using Internet Explorer and tries to print our page, they still won’t see any hrefs displayed.

    Javascript to the Rescue

    We can use a little bit of browser specific Javascript to fix this problem. Internet Explorer (version 5.0 and up) has a little-known proprietary event called onbeforeprint. Just like it sounds, this event fires right before printing a page or viewing a print preview of the page. Since IE is the only major browser that doesn’t create the effect using CSS, a proprietary event is the perfect fix. Now, we can draw up a simple function like so:

          
    window.onbeforeprint = function(){
        var links = document.getElementsByTagName("a");
        for (var i=0; i < links.length; i++){
            var theContent = links[i].getAttribute("href");
            // skip links with no href, or with an empty one
            if (theContent){
                links[i].newContent = " [" + theContent + "] ";
                links[i].innerHTML = links[i].innerHTML + links[i].newContent;
            }
        }
    }
    
    

    Our function simply gets all the links on a page and appends their respective hrefs immediately after them, creating the same effect that we were able to achieve with CSS in other browsers. You might be wondering why we stored the new content as a property of each link. That’s because right after printing, or canceling out of the print preview screen, the hrefs would otherwise remain visible on the actual web page. We obviously don’t want this, and it’s simple enough to get rid of with another IE proprietary event, onafterprint.

      
    window.onafterprint = function(){
        var links = document.getElementsByTagName("a");
        for (var i=0; i < links.length; i++){
            // only touch links that we actually appended an href to
            if (links[i].newContent){
                var theContent = links[i].innerHTML;
                var theBracket = theContent.indexOf(links[i].newContent);
                if (theBracket !== -1){
                    links[i].innerHTML = theContent.substring(0, theBracket);
                }
            }
        }
    }
    
    

    Here we again loop through all the links, find the position of the new content we added, and remove it from the link. This returns the appearance of our site to the original view from before we tried to print.

    Obviously, it would be ideal if we could simply use CSS to manage this. However, as we’ve seen, there is no need to wait for IE to support this feature before we implement it. Some proprietary Javascript events allow us to replicate the effect until it is supported later on.

    The script/css effect has been tested in IE7, Opera, Firefox, and Safari. If you are interested, the complete Javascript to create the effect in IE is here: printlinks.js

    ]]>
    Branching Out https://timkadlec.com/2008/01/branching-out/ Thu, 17 Jan 2008 18:00:00 +0000 https://timkadlec.com/2008/01/branching-out/ Utilizing branching in Javascript can allow us to create more efficient code. Branching essentially allows us to create “conditional” functions at run-time so we don’t have to keep running the same verifications each time a function is called. That last sentence is probably as clear as mud, so let’s take a look at an example.

    A very common check to perform is whether a browser supports the getElementById() method, like so:

          
    if (!document.getElementById) return;
    var myContainer = document.getElementById('container');
    
    

    That is just a very simple verification. We check to see if the browser recognizes the getElementById() method. If it doesn’t, we quit what we are doing and don’t go any further. If it does, we continue on with our code. It can be quite annoying to have to type out document.getElementById each time you have to use it, so let’s create a shorter helper function.

      
    var id = function(attr){
        if (!document.getElementById) return undefined;
        return document.getElementById(attr);
    }
    var myContainer = id('container');
    
    

    Above, we create an id function that checks to see if the browser supports the getElementById() method, and if it does, it returns the value for us. There are two major benefits here. First, our function does the check for us to ensure the method is supported. Secondly, it’s less typing; instead of having to type document.getElementById() each time we want to get an element, we can just type id().

    However, let’s say that we have a pretty intensive script here and we have to use the id method let’s say 20 times. That means that 20 times over the course of our script, we are running a check to see if the browser supports the method, when we already know the answer after the first time we ran the check. Obviously, that isn’t very ideal.

    Using branching, we can make the check once on runtime, and then return a function that doesn’t require checking anymore.

      
    var id = function(){
        if (!document.getElementById){
            return function(){
                return undefined;
            };
        } else {
            return function(attr){
                return document.getElementById(attr);
            };
        }
    }();
    
    

    The key here is the pair of parentheses after the function’s closing brace (the }(); on the last line of the example). This makes the function run right away, as soon as the browser sees it.

    So while loading the page, the browser comes across this function and runs it. If the getElementById method is supported, it assigns a function that returns the element to the id variable. If the browser does not support the getElementById method, then it assigns a function that returns an undefined value to the id variable.

    It may help to look at it this way. By using branching in our function above, we have essentially applied one of two functions to our id variable:

          
    // if getElementById is not supported
    id = function(){
        return undefined;
    }
    // if getElementById is supported
    id = function(attr){
        return document.getElementById(attr);
    }
    //Example usage
    var myContainer = id('container');
    
    

    So now, when we are getting the element using the id function, it doesn’t run the check to see if it is supported, because it doesn’t need to. If we use our id function 20 times, the browser support check is only performed once: initially as the script is being loaded.

    It is important to note that branching is not always going to provide a performance increase. Using branching results in higher memory usage because we are creating multiple objects. So whenever you consider using branching, you need to be able to compare the benefits you will get from not having to run the comparison over and over versus the higher memory usage that branching requires. However, when used properly, branching can be a very handy tool for optimizing your Javascript performance.

    ]]>
    Don't Be Ashamed of Your Code https://timkadlec.com/2008/01/dont-be-ashamed-of-your-code/ Fri, 11 Jan 2008 19:00:00 +0000 https://timkadlec.com/2008/01/dont-be-ashamed-of-your-code/ I came across a post the other day by Guy Davis about 3 Levels of Programmers. In it, he stated that programmers fall into one of 3 Levels, the Good, the Lazy, and the Bad. According to his post, the bad are those that say “I can do that myself!”, and then create a solution that is rather messy. The lazy are those that don’t build from scratch and instead find existing solutions. The good are those who are the ones creating the frameworks and libraries.

    While he admits in his comments that he was mainly venting against what he calls “weak build-it-yourselfers,” I thought it highlighted an unfortunate opinion that seems to surface in this industry from time to time. Now please note, I am not trying to pick on Guy Davis at all. There are some very valid points raised in his post, and as I stated before, he admits he was mainly venting. I am sure all of us have vented about such things before.

    Developers sometimes give the cold shoulder to other developers whose code is not up to par. This is an unfair judgment, though. Particularly in an industry that moves as fast as web development, you cannot afford to wait until you are an “expert” to code. You have to code using the best ways you know how, and continue to learn. This means that there are undoubtedly some projects you wish you could pretend you never touched. I know I have them.

    I think that a better classification is to broaden it out a bit and say there are two kinds of developers. The first kind are those that are content just getting the job done. They are not particularly concerned with how it is accomplished, so long as it functions relatively well. If they have to use inline Javascript to create an effect, then so be it. They don’t push themselves to learn more and find better ways of doing things; they accept that what they do works well enough and see no need to progress further.

    The second type of developer, however, is never satisfied with where they are. Yes, they will use the knowledge they have to get the job done, which results in some unseemly coding at times. But they know that they have more to learn, and are constantly pushing themselves to find a better method of doing things. These are the types of developers that push the industry forward. They will take their bumps and bruises along the way, but they will continue to further their understanding and knowledge of whatever skill it is they are using.

    This doesn’t mean that they always develop the solution themselves, or never use libraries or frameworks. Sometimes the best solution is a library. It simply means that this kind of developer is always wondering: is there a better way to accomplish this?

    Therefore, I say if you are new to a language, be it CSS, Javascript, PHP, or whatever else, don’t be ashamed of the code you produce. As long as you are trying to learn more and come up with more fool-proof and efficient ways of developing, there is nothing to be embarrassed about.

    If you asked some of the biggest names in web development today if they had ever done a project using coding methods they aren’t exactly proud of, I am sure the answer would be “yes”. If everyone thought that someone smarter out there would develop a better solution, the industry would become quite stagnant.

    ]]>
    Getting Specific With CSS https://timkadlec.com/2008/01/getting-specific-with-css/ Sun, 06 Jan 2008 19:00:00 +0000 https://timkadlec.com/2008/01/getting-specific-with-css/ One very fundamental and integral part of CSS is understanding specificity. Understanding this basic concept can help to make your CSS development, and more specifically (no pun intended) your troubleshooting go more smoothly.

    Every rule in CSS has a specificity value that is calculated by the user agent (the web browser, for most web development purposes) and assigned to the rule. The user agent uses this value to determine which styles should be applied to an element when more than one rule matches that particular element.

    This is a basic concept most of us have at least a general understanding of. For example, most developers can tell you that the second declaration below carries more weight than the first:

     
    h1{color: blue;}
    h1#title{color: red;}
    
    

    If both styles are defined in the same stylesheet, any h1 with an id of ‘title’ will of course be red. But just how is this determined?

    Calculating Specificity

    Specificity in CSS is expressed as four comma-separated values. Each component of a selector contributes to the overall specificity:

    • Each id attribute value is assigned a specificity of 0,1,0,0.

    • Each class, attribute, or pseudo-class value is assigned a specificity of 0,0,1,0.

    • Each element or pseudo-element is assigned a specificity of 0,0,0,1.

    • Universal selectors are assigned a specificity of 0,0,0,0 and therefore add nothing to the specificity value of a rule.

    • Combinator selectors have no specificity. You will see how this differs from having a zero specificity later.

    So going back to our previous example, the first rule has one element value, so its specificity is 0,0,0,1. The second rule has one element value and an id attribute, so its specificity is 0,1,0,1. Looking at their respective specificity values, it becomes quite clear why the second rule carries more weight.

    Just so we are clear on how specificity is calculated, here are some more examples, listed in order of increasing specificity:

     
    h1{color: blue;} /* 0,0,0,1 */
    body h1{color: silver;} /* 0,0,0,2 */
    h1.title{color: purple;} /* 0,0,1,1 */
    h1#title{color: pink;} /* 0,1,0,1 */
    #wrap h1 em{color: red;} /* 0,1,0,2 */
    
    

    You should also note that the numbers go from left to right in order of importance. So a specificity of 0,0,1,0 wins over a specificity of 0,0,0,13.
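    To make the counting rules above concrete, here is a hypothetical helper (not from the article) that computes the last three specificity values for a simple selector. It handles only element, class, :pseudo-class, and id components; inline styles, attribute selectors, and the ::pseudo-element syntax are out of scope for this sketch.

    ```javascript
    // Count ids, classes/pseudo-classes, and element names in a selector,
    // returning the four-part specificity as an array (inline slot fixed at 0)
    function specificity(selector){
        var ids = (selector.match(/#[\w-]+/g) || []).length;
        var classes = (selector.match(/\.[\w-]+|:[\w-]+/g) || []).length;
        // strip everything already counted, plus the universal selector,
        // so only bare element names remain
        var rest = selector.replace(/#[\w-]+|\.[\w-]+|:[\w-]+|\*/g, " ");
        var elements = (rest.match(/[A-Za-z][\w-]*/g) || []).length;
        return [0, ids, classes, elements];
    }

    specificity("h1#title");    // [0, 1, 0, 1]
    specificity("#wrap h1 em"); // [0, 1, 0, 2]
    ```

    Comparing two results left to right, value by value, reproduces the "0,0,1,0 wins over 0,0,0,13" behavior described above.
    
    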

    At this point, you may be wondering where the fourth value comes into play. Actually, prior to CSS 2.1, there was no fourth value. However, now the value furthest to the left is reserved for inline styles, which carry a specificity of 1,0,0,0. So, obviously, inline styles carry more weight than styles defined elsewhere.

    It’s Important

    This can be changed, however, by the !important declaration. Important declarations always win out over standard declarations. In fact, they are considered separately from your standard declarations. To use the !important declaration, you simply insert !important directly in front of the semicolon. For example:

    
    h1.title{color:purple !important;}
    
    

    Now any h1 with a class of ‘title’ will be purple, regardless of what any inline styles may say.

    No Specificity

    As promised, here is the difference between no specificity and zero specificity. To see it, you need a basic understanding of inheritance in CSS. CSS allows us to define styles on an element and have those styles be picked up by the element’s descendants. For example:

     
    h1.title{color: purple;}
    
    
     
    <h1 class="title">This is <em>purple</em></h1>
    
    

    The em element above is a descendant of the h1 element, so it inherits the purple font color. Inherited values have no specificity, not even a zero specificity. That means that a zero specificity would overrule an inherited property:

    
    *{color: gray;} /* 0,0,0,0 */
    h1.title{color: purple;}
    
    
    
    <h1 class="title">This is <em>gray</em></h1>
    
    

    The em element inherits the purple font color as it is a descendant of h1. But remember, inherited values have no specificity. So even though our universal declaration has a specificity of 0,0,0,0, it still overrules the inherited value. The result is that the text inside the em element is gray, while the rest of the text is purple.

    Hopefully this introduction to specificity will help make your development process go more smoothly. It is not a new concept, or a terribly difficult one to learn, but understanding it can be very helpful.

    ]]>
    Using Prototypes in Javascript https://timkadlec.com/2008/01/using-prototypes-in-javascript/ Wed, 02 Jan 2008 19:00:00 +0000 https://timkadlec.com/2008/01/using-prototypes-in-javascript/ As mentioned in my previous post, I think using prototypes is powerful enough to deserve a more detailed explanation. To start off, let me say we are talking about the prototype property here, not the Prototype JavaScript library.

    Prototypes allow you to easily define methods for all instances of a particular object. The beauty is that the method is applied to the prototype, so it is only stored in memory once, but every instance of the object has access to it. Let’s use the Pet object that we created in the previous post. In case you don’t remember it or didn’t read the article (please do), here is the object again:

      
    function Pet(name, species){
        this.name = name;
        this.species = species;
    }
    function view(){
        return this.name + " is a " + this.species + "!";
    }
    Pet.prototype.view = view;
    var pet1 = new Pet('Gabriella', 'Dog');
    alert(pet1.view()); //Outputs "Gabriella is a Dog!"
    
    

    As you can see, simply by using prototype when we attached the view method, we have ensured that all Pet objects have access to it. You can use the prototype property to create much more robust effects. For example, let’s say we want to have a Dog object. The Dog object should inherit each of the methods and properties of the Pet object, and we also want a special function that only our Dog objects have access to. Prototype makes this possible.

      
    function Pet(name, species){
        this.name = name;
        this.species = species;
    }
    function view(){
        return this.name + " is a " + this.species + "!";
    }
    Pet.prototype.view = view;
    function Dog(name){
        Pet.call(this, name, "Dog");
    }
    Dog.prototype = new Pet();
    Dog.prototype.bark = function(){
        alert("Woof!");
    }
    
    

    We set up the Dog object and have it call the Pet function using the call() method. The call method allows us to invoke a function in the context of a particular object by passing in the object we want the function to run on (referenced by this inside the Dog constructor) followed by the arguments. Theoretically, we don’t need to do this. We could just create name and species properties inside of the Dog object instead of calling the Pet function, and our Dog object would still inherit from the Pet object because of the Dog.prototype = new Pet() assignment. However, that would be a little redundant. Why recreate these properties when we already have access to identical properties inside of the Pet object?

    Moving on, we then give Dog a custom method called bark that only Dog objects have access to. Keeping this in mind consider the following:

      
    var pet1 = new Pet('Trudy', 'Bird');
    var pet2 = new Dog('Gabriella');
    alert(pet2.view()); // Outputs "Gabriella is a Dog!"
    pet2.bark(); // Outputs "Woof!"
    pet1.bark(); // Error
    
    

    As you can see, the Dog object has inherited the view method from the Pet object, and it has a custom bark method that only Dog objects have access to. Since pet1 is just a Pet, not a Dog, it doesn’t have a bark method and when we try to call it we get an error.

    It is important to understand that prototype lookups follow a chain. When we called pet2.view(), the interpreter first checked pet2 itself, and then Dog.prototype (since pet2 is a Dog), for a view method. Neither has one, so it moves up a step: Dog inherits from Pet, so it next checks whether Pet.prototype has a view method. It does, so that is what runs. The bottommost link of the chain is Object.prototype itself; every object inherits from it. So, in theory, we could do this:

    
    Object.prototype.whoAmI = function(){
        alert("I am an object!");
    }
    pet1.whoAmI(); //Outputs 'I am an object!'
    pet2.whoAmI(); //Outputs 'I am an object!'
    
    

    Since all objects inherit from the Object.prototype, pet1 and pet2 both can run the whoAmI method. In short, prototype is an immensely powerful tool you can use in your coding. Once you understand how prototype inherits, and the chain of objects it inherits from, you can start to create some really advanced and powerful object combinations. Use the code examples used in this post to play around with and see the different ways you can use prototype to create more robust objects. With something like this, hands-on is definitely the best approach (at least I think so!).
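    If you want to poke at the chain yourself, here is a minimal, self-contained sketch (re-declaring the Pet and Dog objects from above) showing how hasOwnProperty and instanceof reveal where a method actually lives:

    ```javascript
    function Pet(name, species){
        this.name = name;
        this.species = species;
    }
    Pet.prototype.view = function(){
        return this.name + " is a " + this.species + "!";
    };
    function Dog(name){
        Pet.call(this, name, "Dog");
    }
    Dog.prototype = new Pet();

    var pet2 = new Dog('Gabriella');
    pet2.hasOwnProperty('view'); // false - view is not stored on the instance...
    'view' in pet2;              // true  - ...but it is found up the chain
    pet2 instanceof Dog;         // true
    pet2 instanceof Pet;         // true  - because Dog.prototype is a Pet
    pet2.view();                 // "Gabriella is a Dog!"
    ```
    
    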

    ]]>
    An Introduction to Classy Javascript https://timkadlec.com/2007/12/an-introduction-to-classy-javascript/ Fri, 28 Dec 2007 19:00:00 +0000 https://timkadlec.com/2007/12/an-introduction-to-classy-javascript/ Let me first tell you the why and then I will explain the what. By using classes in Javascript, you will notice a couple immediate benefits:

    1. Custom classes make your code more reusable. If many of your applications use similar functionality, you can define a class that encapsulates that functionality. Now you can use your new class in multiple projects to provide the common behavior. For example, let’s say you create a custom accordion effect. If you use classes to define the effect, you can provide the same effect on another page simply by reusing the class you created.

    2. Using classes helps to organize your code. If you are using classes, you will see that instead of one really long piece of code, your code becomes broken into smaller pieces of related methods and properties. This will make your code easier to maintain and troubleshoot.

    So what is this terrific-sounding little tool, and how do we use it? A class is used to define a common type of object that will be used in a given application. For example, let’s say that we are creating an application to keep track of animals in a pet store. Each animal will have a name and a species. We could do the following:

    
    var pet1 = new Object();
    pet1.name = 'Gabriella';
    pet1.species = 'Dog';
    var pet2 = new Object();
    pet2.name = 'Trudy';
    pet2.species = 'Bird';
    // and so on
    
    

    As you can hopefully see, that is just going to get long and annoying very quickly. If I have twenty different pets, it takes 60 lines of code just to create the objects. There is also no real organization to this. We have no indication that pet1 and pet2 are actually the same type of object. A much better way is to declare a class.

      
    function Pet(name, species){
        this.name = name;
        this.species = species;
    }
    var pet1 = new Pet('Gabriella', 'Dog');
    var pet2 = new Pet('Trudy', 'Bird');
    
    

    We have just created a custom Pet class. Each Pet object has two properties: a name and a species. Now we can tell at first glance that pet1 and pet2 are the same type of object, and our code instantly becomes more readable. It also takes only one line to declare an object, shortening the long code we would have had if we had created the objects each individually without a common class.

    What About Methods?

    We have seen how to set properties in classes, but we can also use these classes to define common methods to objects. We could do this by simply adding another line inside of our class declaration.

      
    function Pet(name, species){
        this.name = name;
        this.species = species;
        this.view = view;
    }
    function view(){
        return this.name + " is a " + this.species + "!";
    }
    var pet1 = new Pet('Gabriella', 'Dog');
    alert(pet1.view());
    
    

    We just added a view method to any object that is a Pet. The call above would return “Gabriella is a Dog!”. There is one problem here though. If we have 20 pets, each pet object carries its own view property, and had we defined the function inline in the constructor, each object would carry its own copy of the function. That may not seem like much, but as this pet store grows, and we have more and more pet objects, we are going to start running into memory problems.

    What we should be doing here instead is using the prototype property. The prototype property allows objects to inherit methods from the class they are members of. It is a very powerful tool, and I will go into more detail on it in a later post, but for now some basic understanding should suffice. For example, take a look at the code below:

    
    function Pet(name, species){
        this.name = name;
        this.species = species;
    }
    function view(){
        return this.name + " is a " + this.species + "!";
    }
    Pet.prototype.view = view;
    var pet1 = new Pet('Gabriella', 'Dog');
    alert(pet1.view());
    
    

    We have now dropped the view assignment from the initial construction of our class, saving us some memory. Using the prototype property, we have set a view method on the Pet object. Since pet1 is an instance of Pet, it has access to the function. Essentially, we have created the same effect as before, only now the view function is stored once, instead of once for each pet object declared.
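    The sharing is easy to verify directly. This short sketch (using the same Pet class as above) shows that two instances reference one and the same function object:

    ```javascript
    function Pet(name, species){
        this.name = name;
        this.species = species;
    }
    Pet.prototype.view = function(){
        return this.name + " is a " + this.species + "!";
    };

    var pet1 = new Pet('Gabriella', 'Dog');
    var pet2 = new Pet('Trudy', 'Bird');

    // Both instances point at the exact same function, stored once
    pet1.view === pet2.view; // true
    pet1.view();             // "Gabriella is a Dog!"
    pet2.view();             // "Trudy is a Bird!"
    ```
    
    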

    As you can see, classes are a very valuable coding tool. They help to provide organization, and help to make our code more reusable. When used in conjunction with the prototype property, they can be extremely powerful and provide a lot of flexibility. This article really only scratched the surface. There is so much you can do with this combination, and I highly recommend taking a deeper look. Once you start to use prototypes and classes in your applications, you will find them indispensable and wonder how you got along without them.

    ]]>
    A Less Painful CSS Experience https://timkadlec.com/2007/12/a-less-painful-css-experience/ Sun, 23 Dec 2007 19:00:00 +0000 https://timkadlec.com/2007/12/a-less-painful-css-experience/ CSS can be a tricky little fellow. It’s easy to learn, but difficult to master. There are, after all, 122 CSS Level 2 Properties. Add to that pseudo-classes, selectors, inheritance, and specificity, and you have yourselves quite a bit of information to try and remember. Here are a few things that have made CSS development a little smoother for me, and hopefully they can do the same for you.

    Know the common bugs

    Different browsers will handle CSS differently. This is something every CSS developer learns early on, sometimes painfully. Make sure when you come across a bug you force yourself to take a few minutes to look into it and gain an understanding of what is causing the problem. You will be surprised by how few fancy CSS hacks you will have to resort to if you know how to dodge the problems in the first place.

    Check your work often

    After every couple of rules you put into your stylesheet, you should check each browser you have access to so you can see what effect the rules had on the layout. The worst thing you can do, in my opinion, is to write your CSS in its entirety and only then check it in each browser. Now you have to wade through all your CSS and try to find where the problem is coming from. If, however, you check your work after every couple of rules, you will have a pretty good idea where the problem lies, and you will be able to fix it that much more quickly.

    Know your resources

    This may be the most important tip here. Like I said, with so many selectors, properties, bugs, etc. to try and memorize, you will undoubtedly have to look for help on many occasions. It becomes important to know where you can find a solution, and where the solution will be explained in enough detail for you to understand it and avoid the problem in the future. For example, when I run across a bug that I am not familiar with, the first place I turn to is Position Is Everything. They have wonderful write-ups on various bugs you will find in different browsers. If I just need to look up a CSS property that I don’t use very often, then I turn to “CSS: The Definitive Guide” by Eric Meyer. You need to know the places like this that you can turn to for answers.

    Know how to troubleshoot

    Knowing how to find the problem is half the battle. There are plenty of ways to go about doing this, so you just have to find the techniques that work for you. While I can say that I haven’t ever used diagnostic styling quite to the extent that Eric Meyer posted in his 24ways article, I am a huge fan of using bright colored borders on my block elements to help me locate problem areas. Commenting out blocks of code at a time can also help a lot when trying to find out which elements have the troublesome styles applied to them. And I cannot recommend the Web Developer Toolbar extension for Firefox highly enough. I am so attached to that thing and its many useful troubleshooting features that it pains me to work on a computer without it.

    Show patience and have a sense of humor

    Don’t worry if it seems like it is taking forever to get to the point where you don’t have to look up every little bug. Patience, young Padawan. There are a lot of bugs out there, and it can take awhile before you get to a point where you can recognize one right away.

    No matter how much you know, how many books you’ve read, or how many designs you’ve developed, there will still come times where a problem comes up that stumps you for awhile. There is just too much information to digest for you to expect to never run into problems. That’s when you just need to grin and bear it. Keep plugging away and be willing to laugh at simple mistakes you may make along the way. If CSS wasn’t challenging at times, wouldn’t that take some of the fun out of it?

    ]]>
    A Microsoft Christmas Miracle https://timkadlec.com/2007/12/a-microsoft-christmas-miracle/ Thu, 20 Dec 2007 19:00:00 +0000 https://timkadlec.com/2007/12/a-microsoft-christmas-miracle/ Just in time for the holiday season, Microsoft has let it be known that IE8 (due out sometime in 2008) passes the Acid 2 test in standards mode. This is excellent news for web developers, and quite refreshing to hear coming from the same people who said passing the Acid 2 test simply wasn’t a priority for IE7.

    For those of you who may be unaware, Acid 2 is a test page for web browser vendors set up by the Web Standards Project (WASP). The intention was for the Acid 2 test to be a tool for browser vendors to use to make sure their browsers could handle some features that we as web developers would love to use. It’s a pretty intense little test. If you’re curious, the WASP walks you through each of the items that Acid 2 tests for.

    The timing for Microsoft couldn’t have been any better. This announcement comes right after Opera announced they were filing a complaint against Microsoft for their lack of standards compliance.

    Now just because a browser actually passes the test doesn’t guarantee it will be standards compliant, but this is most definitely a step in the right direction. Add to this the rumor going around that hasLayout will be taken care of now in IE8, and I must say I am getting a little excited here. Of course, with the beta version coming out in the first half of 2008, it will still be quite some time before IE8 takes over the market share currently owned by other versions of the browser. Heck, IE7 still hasn’t passed IE6 as the dominant Microsoft browser.

    Not to be outdone, BetaNews claims that Firefox 3 Beta also successfully passed the Acid 2 Test. Looks like we may have a pretty intense battle for browser supremacy starting up here in the new year.

    ]]>
    Reinvent the Wheel https://timkadlec.com/2007/12/reinvent-the-wheel/ Sun, 16 Dec 2007 19:00:00 +0000 https://timkadlec.com/2007/12/reinvent-the-wheel/ A lot of people will tell you not to try and reinvent the wheel. If a script has been written, or a styling effect developed that accomplishes what you want, why spend time trying to create the effect yourself?

    I can see their point, and in some situations, I agree. If you are on a tight deadline for a project, you often don’t have time to develop that functionality from scratch, and it therefore makes more sense to adapt the structure already developed by someone else.

    I do feel, however, that web developers do need to try and create an effect from scratch when they have the opportunity. There are a couple reasons why I feel this is the case.

    First off, by forcing yourself to create that layout using CSS, or that form validation script in Javascript from scratch, you force yourself to analyze and learn the intricacies of the language you are dealing with. This knowledge will help to increase your understanding of both the concepts and techniques involved in arriving at a solution for the task. And as far as I know, more knowledge and understanding is never a bad thing.

    The other main reason for creating something yourself is because you never know how another point of view may help to create a superior solution to a common problem. Challenge yourself to see if you can improve the solution. I guess you could call this ‘modifying the wheel’. If you are going to try and develop a better solution, you should study the ones already out there. Try to see their strengths and weaknesses, and see how you can improve the weaknesses while not losing the strengths.

    So over all, I say go ahead and reinvent the wheel. Challenge yourself to create a better solution, and in the process, increase your knowledge. Remember, the first wheels were stone slabs. I tend to think the wheels currently being manufactured for cars, bikes, etc. are probably a little bit better solution.

    ]]>
    One Clear to Rule Them All https://timkadlec.com/2007/12/one-clear-to-rule-them-all/ Wed, 12 Dec 2007 19:00:00 +0000 https://timkadlec.com/2007/12/one-clear-to-rule-them-all/ One really common situation web developers run into is how to properly clear their floats. There are numerous approaches that have been discussed and used, but only recently have I come across a method that I believe is superior to the others I had used until now. In this post, we will first take a look at the problem caused by floats, and then we will look at some of the ways of fixing that problem.

    There are some simple styles that will stay consistent throughout the examples:

    
    #wrap{
        border: 1px solid #000;
    }
    .main{
        float: left;
        width: 70%;
    }
    .side{
        float: right;
        width: 25%;
    }
    
    

    The problem is that when you float an item, you are taking it out of the normal flow of the document, so other elements on the page act as if the floated element is not there. You can see this below (I am using white on black in my examples so they stand out more):

    Broken Float

    As you can see, #wrap doesn’t see .main or .side because they are floated, so our border doesn’t extend down. There are numerous proposed solutions to this problem.

    Extra Markup

    One tried and true method is to add another element inside of #wrap after both of the floats. For example, you could use a div with the class of bottomfix. Now you just set the style of bottomfix to clear: both, and your wrap will extend to contain the floats and the bottomfix.

    Obviously, if we are shooting for separation of presentation and content (as we should be), this is not an ideal situation. We now have an element in place simply to create a presentational effect.

    Instead, let’s take a look at some ways of creating the same effect using only CSS. To do so, you have to have a very brief understanding of how Internet Explorer handles floats.

    IE Floats

    So far it may seem that Internet Explorer (IE) handles floats the same as other browsers, but as we look a little closer, we see that is not the case. Internet Explorer has a proprietary property called hasLayout. For the purpose of this article, just understand that for an element to have “layout”, more often than not it will need either a width or a height. hasLayout can only be affected indirectly by your CSS styles; there is no hasLayout declaration.

    Why is this important to know? Because if an element’s hasLayout property is true, that element will auto-contain floated elements. This means that to get IE to clear the floats, we really only have to add width: 100% to #wrap. Now #wrap’s hasLayout property is true, and it will automatically extend to contain the two floated elements.

    It’s far from being ideal though. While #wrap will now extend properly, we have to be careful about our margins. Elements on the page may respect the containing element (#wrap), but they will not respect the floated elements.

    To show this, let’s add another div with an id of next. We’ll give this div a 1 pixel pink border just so it stands out. Let’s also add a 10px bottom margin to the main element. The results in IE are shown below:

    IE with layout

    As you can see, by adding the width to #wrap, IE will now allow #wrap to contain the floats. You can also see that our 10px margin had no effect. In fact, the top margin of our first paragraph and the bottom margin of our last paragraph are also ignored in IE. So, if you want some space here, you need to use padding, not margins. You can also set a margin on #wrap…since it is the containing element, its margins are still respected.

    Moving On

    So now we have cleared the floats in IE, and we understand why. What about the other browsers though? Most will allow you to use the :after pseudo-element to add some content and have that content clear the floats.

    
    #wrap:after{
        content: ".";
        display: block;
        clear: both;
        visibility: hidden;
        height: 0;
    }
    
    

    What this does in the browsers that recognize it is add a period after the content of #wrap and have it clear the floats. We then use the height and visibility properties to make sure the period doesn’t show up. Remember, IE still needs to have “layout” on #wrap because it doesn’t recognize the :after pseudo-element.

    One problem: IE/Mac doesn’t auto-contain floats, and doesn’t recognize the :after pseudo-element. So we have to use some hacks to get IE/Mac and IE/Win to play nicely together. I won’t be getting into this here; you can find a really nice article about it at Position Is Everything.

    An Easier Way

    Thankfully, there is an easier way that has been credited to Paul O’Brien. For most browsers to clear the float we simply need to add overflow: hidden to #wrap. Just make sure that there is also a width on #wrap so it has “layout” in IE and you are good to go. Our CSS ends up looking like this:

    
    #wrap{
        border: 1px solid #000;
        width: 100%;
        overflow: hidden;
    }
    .main{
        float: left;
        width: 70%;
    }
    .side{
        float: right;
        width: 25%;
    }
    
    

    No, seriously… that’s it. #wrap will now fully contain both of the floats. Just keep in mind that if you want to add some space around either of the floated elements, you will want to use padding instead of margins, because otherwise IE will ignore it.

    ]]>
    All For One Or One For All https://timkadlec.com/2007/12/all-for-one-or-one-for-all/ Thu, 06 Dec 2007 19:00:00 +0000 https://timkadlec.com/2007/12/all-for-one-or-one-for-all/ Most of us who are just starting in Javascript, and more specifically working with the DOM, can probably write some simple scripts using event handlers. However, there is a more memory-efficient method that someone relatively new to Javascript (heck, even some people who have been doing this awhile) might not be aware of: event delegation.

    Lucky for us, event delegation is not overly complex, and the jump from using event handlers to using event delegation can be made relatively easily.

    Let’s start by creating a simple script using event handlers, and then recreate it using event delegation. What we want from our simple script is this: whenever a link inside a specified list is clicked, we get the link’s ‘href’ alerted for us.

    First, we will set up the markup. Nothing fancy to see here, just a list with an id of ‘links’ which will serve as our hook.

    Now we can write a simple script that will go through and add an onclick event handler to each of the links in the list. (Note: for the purpose of simplicity, we will just have our functions below. In a real setting, you would want to do some scoping to protect your variables.)

    
    function prepareAnchors(){
        if (!document.getElementById) return false;
        var theList = document.getElementById("links");
        var anchors = theList.getElementsByTagName("a");
        for (var i = 0; i < anchors.length; i++){
            anchors[i].onclick = function(){
                alert(this.getAttribute("href"));
                return false;
            };
        }
    }
    
    

    Again, like I said, nothing spectacular. We just grab all the links inside of the ul, loop through them, and assign a function to each individual link’s onclick event. (Note: At this point, if you are not able to follow the function above, you are probably not going to get anything useful out of this article. I would instead recommend DOM Scripting by Jeremy Keith.) Now let’s recreate the effect using event delegation.

    
    function getTarget(x){
        x = x || window.event;
        return x.target || x.srcElement;
    }
    function prepareAnchors(){
        if (!document.getElementById) return false;
        var theList = document.getElementById("links");
        theList.onclick = function(e){
            var target = getTarget(e);
            if (target.nodeName.toLowerCase() === 'a'){
                alert(target.getAttribute("href"));
            }
            return false;
        };
    }
    
    

    This one probably requires a little more explanation. The getTarget() function simply gets the target of the event, or, in Internet Explorer’s terms, the source element of the event.

    In prepareAnchors() we get the ‘links’ list and assign an onclick event handler to the list as a whole. Now, when anything inside the list is clicked, we simply use getTarget() to find the element that was clicked. If the clicked element was a link, we alert the ‘href’; if not, we just ignore it.
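    To see the shape of that click-time check in isolation, here is a minimal sketch that exercises the same logic with plain objects standing in for real DOM events (the nodeName and href values are made up; in a browser these would be real DOM nodes):

    ```javascript
    function getTarget(x) {
        // In a browser we would also fall back to window.event for old IE;
        // here we always pass the event object in directly.
        return x.target || x.srcElement;
    }

    // The test the delegated handler performs on every click:
    function isAnchor(e) {
        var target = getTarget(e);
        return target.nodeName.toLowerCase() === 'a';
    }

    // Plain objects standing in for DOM events:
    console.log(isAnchor({ target: { nodeName: 'A', href: '/foo/' } })); // true
    console.log(isAnchor({ srcElement: { nodeName: 'LI' } }));           // false
    ```

    Clicks on anything inside the list bubble up to it, so this one check replaces a handler on every individual link.
    
    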

    What are the advantages to using event delegation? Well, for starters, by using one event handler versus many, there is less memory being used to accomplish the same task. On a script this small, you won’t be able to tell a performance difference, but larger, more intensive apps will most certainly perform better. Also, by using event delegation, we ensure that our script works even if the DOM has been modified since page load. To see an example of how modifying the DOM can alter the performance of a script using event handlers, take a look at the excellent comparison done by Chris Heilmann.
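    That second advantage can be sketched the same way: because the delegated handler inspects the target at click time, an anchor that did not exist when prepareAnchors() ran is still handled, with no re-binding needed (again, plain objects stand in for DOM nodes, and the href is hypothetical):

    ```javascript
    function getTarget(x) {
        return x.target || x.srcElement;
    }

    // The single handler attached to the list; returns the href it would alert.
    function listClickHandler(e) {
        var target = getTarget(e);
        if (target.nodeName.toLowerCase() === 'a') {
            return target.getAttribute('href');
        }
        return null;
    }

    // An anchor "added to the list after page load":
    var newAnchor = {
        nodeName: 'A',
        getAttribute: function (name) { return name === 'href' ? '/new-page/' : null; }
    };

    console.log(listClickHandler({ target: newAnchor })); // '/new-page/'
    ```

    With the event-handler version, the same new anchor would do nothing until we looped over the links and bound a handler to it again.
    
    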

    ]]>
    CSS, XHTML and Javascript...Oh My!! https://timkadlec.com/2007/11/css-xhtml-and-javascript-oh-my/ Fri, 30 Nov 2007 19:00:00 +0000 https://timkadlec.com/2007/11/css-xhtml-and-javascript-oh-my/ Congratulations! You have managed to stumble across my first attempt at having a personal site. While the site is admittedly a bit short on content right now, my goal is for this to eventually turn into a fairly interesting place to visit on a regular basis. Be patient; Rome, as they say, was not built in a day.

    What can you expect? Well, there will be many conversations about what I have learned or come across in the world of web development. In particular, you should eventually see information on things like CSS, XHTML, Javascript, and so on.

    There will be some personal updates mixed in I am sure, but I think it is fair to say that the vast majority of posts will be informative and educational in nature.

    This is a custom-built blog system, so if anything is quirky, or some feature that you feel is seriously important is not here (other than pretty permalinks; they’ll be coming soon), please feel free to let me know.

    So stay tuned, and hopefully I will have something a little more interesting for you to read soon.

    ]]>