Googlebot is essentially blind to Javascript content. It (mostly) treats web pages as static text and refuses to execute any code on the page. That means Google is incapable of indexing pages that rely on Javascript to render correctly. Not just fancy AJAX web apps, either: Google can't index web pages that render content client-side, or customize the display in Javascript, or do much of anything dynamically. My recent wind history project is essentially invisible to Google, for example.

Enter the hashbang convention, well documented by Google. The #! part is incidental; the spec is all about how site designers need to create a whole second set of statically generated HTML pages just for Googlebot, all behind URLs that include _escaped_fragment_ in their paths. There's a 1:1 mapping between the #! URLs for humans and the _escaped_fragment_ URLs for robots. My Twitter page at https://twitter.com/#!/nelson, for example, also exists at https://twitter.com?_escaped_fragment_=/nelson. A special page just for bots, which Google dutifully translates to #! URLs for humans.

It's all a big, gross, well-intentioned hack to work around a fundamental limitation in Google's indexing technology. I don't quite understand why Google hasn't tackled this core problem and figured out a way to run Javascript while indexing. It seems bad that Google can't see the web the way users see it. Google has an amazing Javascript engine, of course, and more than enough skilled engineers to apply it to web indexing. It may be a scaling problem; running the code for a page would probably increase their indexing workload by 10x to 100x. Or it may simply be that Google feels that with its market power it can require every modern web site in the world to build a special version just for it. Hopefully I'm too pessimistic and Google is working on indexing Javascript content already.
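Here's a rough sketch of that URL mapping in Python. The function names are just for illustration, and the real work for a site is serving a static HTML snapshot behind the crawler URL; this only shows how the two URL forms correspond under Google's AJAX crawling scheme.

```python
# Sketch of the #! <-> _escaped_fragment_ mapping from Google's AJAX
# crawling scheme. Helper names are illustrative, not any real API.
from urllib.parse import urlsplit, urlunsplit, quote, parse_qs

def to_crawler_url(hashbang_url):
    """Turn a #! URL (for humans) into the ?_escaped_fragment_= URL a crawler fetches."""
    base, _, fragment = hashbang_url.partition("#!")
    scheme, netloc, path, query, _ = urlsplit(base)
    sep = "&" if query else ""
    query = f"{query}{sep}_escaped_fragment_={quote(fragment, safe='')}"
    return urlunsplit((scheme, netloc, path, query, ""))

def to_hashbang_url(crawler_url):
    """Invert the mapping: recover the #! URL a crawler request stands in for."""
    scheme, netloc, path, query, _ = urlsplit(crawler_url)
    params = parse_qs(query, keep_blank_values=True)
    fragment = params["_escaped_fragment_"][0]  # parse_qs already URL-decodes
    rest = "&".join(f"{k}={quote(v[0], safe='')}" for k, v in params.items()
                    if k != "_escaped_fragment_")
    return urlunsplit((scheme, netloc, path, rest, "")) + "#!" + fragment

print(to_crawler_url("https://twitter.com/#!/nelson"))
# https://twitter.com/?_escaped_fragment_=%2Fnelson
print(to_hashbang_url("https://twitter.com/?_escaped_fragment_=/nelson"))
# https://twitter.com/#!/nelson
```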