The TurboGears community is pleased to announce that TurboGears 1.0.6 has been released. This version is a maintenance release that fixes bugs and glitches found over the last two months.
We are very active at the moment working on the 1.1 branch, which should enter beta tomorrow. The 1.1 release will be fully backward-compatible with the 1.0 branch but will introduce important new features:
SQLAlchemy as the default ORM
Genshi as the default templating engine
A new test system that is forward-compatible with the 1.5 and 2.0 branches (this is an important step toward migrating old applications written in 1.0 to 2.0 without breaking everything in the process)
RuleDispatch dropped in favor of PeakRules (no more C compilation)
Enhanced mod_wsgi compatibility to avoid past hacks that we had to do to deploy on Apache
Once 1.1 is out as a stable version, we will encourage all 1.x users to move to it and to update their unit tests by following the TestMigration procedure.
Major work is also underway in the 1.5 branch to bring you CherryPy 3.1 support and move TG further into the WSGI realm.
Enjoy this new release and give us feedback on our mailing list!
This is because I’ve been contributing patches to TurboGears 2 and other packages used by Animador (a TurboGears 2 application) since I started its development, in order to fix bugs and add features that I want in Animador. So now I can apply my changes myself!
We just cut another release for 1.9.7a3, and it’s even more backwards compatible with TG1.
I occasionally get questions about why we are working on version 2.0 of TurboGears and what it means for users.
The ability to retrieve whole graphs of objects in a single query through the ORM
Commit entire graphs of object changes back to the database in one step
The ability to support multiple databases easily
Out of the box support for a powerful web-based interactive debugger
Full WSGI support
Your app is a WSGI app out of the box
You can run multiple TG2 apps in a single process
You can easily call any WSGI app from inside TG2
You can easily create and add middleware to your TG2 app
Easy access to a large library of helper functions in your templates
Out of the box support for using Routes to override object dispatch for unusual URLs
Flexible out-of-the-box caching for pages, intermediary data, etc.
Improved object dispatch, to better support resource-oriented URLs
Support for Dojo, jQuery, Ext JS, MooTools, and other JavaScript libraries via ToscaWidgets
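To make the WSGI points above concrete, here’s a minimal sketch using only the standard library (the sample app, the middleware class, and the header name are my own stand-ins, not TG2 API). Because a TG2 app is itself a WSGI callable, wrapping it in middleware looks just like this:

```python
# A tiny WSGI app and a middleware wrapper. Any TG2 app is a WSGI
# callable, so it slots into the position of `app` below.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from a WSGI app"]

class HeaderMiddleware:
    """Middleware that tags every response with an extra header."""
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __call__(self, environ, start_response):
        def tagging_start_response(status, headers, exc_info=None):
            # Add our header, then delegate to the server's start_response.
            headers.append(("X-Framework", "TurboGears"))
            return start_response(status, headers, exc_info)
        return self.wrapped(environ, tagging_start_response)

wrapped_app = HeaderMiddleware(app)
```

The wrapped app is still a plain WSGI callable, which is why middleware like profilers and memory-leak finders compose so cheaply.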
This means that TG2 apps have more flexibility and can scale better than TG1, and we’re working on making TurboGears 2 better documented than any other framework. Because we have full WSGI support, you can easily mount existing WSGI applications in your TurboGears site, and you can also get things like profiling middleware, middleware that helps you find memory leaks, or any of a whole host of other interesting middleware almost for free.
There’s lots more to be done before we hit 2.0 final: we need to transparently support atomic commits across database boundaries (when the underlying stores support it), we need to make it even easier to build reusable site components with TG2, and we need to continue to improve the TG2 documentation.
But I think we’re making huge progress, and I’m looking forward to the next release. The current plan is to release a 1.9.7 stable release in the next 4-8 weeks, and to release 2.0 (with the above mentioned extra features) later this year.
While I do not advertise it directly on this site, in addition to my “day job” I also provide Python training services for companies and individuals that want a comprehensive introduction to the Python programming language on a compressed schedule. One of the most difficult topics to condense and explain to a group of new Python programmers is metaclasses. Over the last year or so I’ve spent a lot of time refining my teaching methods to make this complex topic approachable.
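To give a flavor of the topic (this sketch is mine, not from the article or the slides, and it uses modern Python 3 syntax; at the time it would have been spelled with `__metaclass__`): a metaclass hooks class *creation* itself, which is what makes tricks like automatic registration possible.

```python
# A metaclass runs when a class is defined, not when instances are
# created. Here it auto-registers every subclass in a plugin registry.
class PluginMeta(type):
    registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:  # skip the abstract base class itself
            PluginMeta.registry[name] = cls
        return cls

class Plugin(metaclass=PluginMeta):
    pass

class CsvExporter(Plugin):
    pass

# PluginMeta.registry now maps "CsvExporter" to the class, with no
# explicit registration call anywhere in the subclass.
```

The point that tends to demystify the topic for students is exactly this: `type` is itself a class, and subclassing it lets you intercept the moment a class statement executes.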
A few months ago, I presented a version of my slides on metaclasses at a PyATL meeting. Following the meeting, I was approached by Doug Hellmann about the possibility of writing an article on metaclasses for Python Magazine. Well, I am pleased to say that my article, entitled Metaclasses Demystified appears in the July 2008 issue of Python Magazine.
This is my first professionally published technical article, and I am really excited that my work will be read by many Python programmers. Depending on feedback from readers, expect to see more contributions to Python Magazine from me in the future. It was a lot of fun crafting the article!
The video runs about 16 minutes. I go through a complete example, from installing Sphinx, to inserting a doctest in your code, checking it with nosetests, and finally publishing it to the HTML pages.
I think virtualenv is horribly useful, because it is very, very common that I want to use more than one version of a library. Sometimes this is just because I want to test TurboGears 2.0 against some new version of a dependency, sometimes it’s because I’m running some old version of a project with outdated requirements, and other times it’s just because I want to keep my OS’s system Python clean. Because there are lots of virtual environments on my machine, and because I’m always switching between them, I was very happy to find Doug Hellmann’s article on how to more easily manage switching between the various virtual environments that I’m working on at any given point in time.
If you use virtualenvs — and if you’re a Python developer you should ;) — take a look at this article; it might make your life easier.
For a couple of projects now, I've wanted a grid layout engine similar to how desktops display lists of icons: nearly fixed-width items, varying slightly in height, displayed on a variable-width page, so the layout could end up with 1 column or 8 depending on the width of the browser window. Tables are no good because they're always a fixed number of columns. Div elements using float work, so long as you make all the elements a fixed width, but they also have to be the same height, or you'll end up with gaps. I'm thinking it's going to have to be JavaScript-driven, including redrawing when the page size changes and manually sizing all items to the tallest item in the row, but I can't seem to find an example on the web anywhere (or my Google fu is weak).
Dear lazy web, can you point me in the right direction?
I’ve not posted to this blog in over a month, mainly because I have been busy. Busy is a good thing, I think. I have been helping clearwired get their newest web application off the ground, working on a number of screencasts, helping Mark with the latest batches of TG2 releases, and learning about new and interesting OSS Python projects.
One of the more interesting things I have started to use with some frequency is Sphinx. Sphinx is a documentation system which allows you to create web pages (among other things) from .rst files. It also pulls directly from your modules and inserts your docstrings (including your doctests). It is great to have code which is not only documented, but also tested using nose. Look for a screencast from me in the near future on this one.
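The docstring-plus-doctest combination looks like this (a made-up example function, not from any of the projects above): Sphinx’s autodoc renders the docstring, and nose with `--with-doctest` executes the embedded example as a test.

```python
def slugify(title):
    """Turn a page title into a URL slug.

    The example below is a doctest: Sphinx renders it as part of the
    documentation, and nose (or the doctest module) executes it.

    >>> slugify("Hello TurboGears World")
    'hello-turbogears-world'
    """
    # lower-case the title and join the words with hyphens
    return "-".join(title.lower().split())
```

One function, documented and tested in a single place.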
Two of the recent screencasts I have written show how to use virtualenv, PasteScript, and Nose. I go through a ground-up example showing how to create a virtualenv, a project package, and finally how to test that package and provide robustness with code coverage. The final screencast is about how to install TG2, which has become much easier with the advent of TG2’s first alpha release.
TG2 is actually nearing its second alpha release. I think this is the best release yet. I spent some time moving DBSprockets to PEP 8 compliance, and the two things TG2 depends on (tg.ext.silverplate and tgcrud) are now both using the newest release of DBSprockets. I also released basketweaver, which allows you to create a simple local PyPI made of static HTML files. Special thanks to Chris McDonough, who wrote the makeindex.py script. All I did was fix it up a little, package it using PasteScript, and provide a console_script for easy usage.
Along with Sphinx, I have been trying to wrap my head around Rum, a new project to generate forms from database schemas, much like DBSprockets. Although the internals of the system are somewhat mystifying, the API is squeaky clean, and is in my mind what DBSprockets was eventually planned to become. The great news is that Alberto has done such a good job laying the groundwork for Rum that I should be able to jump right in and apply lessons learned from DBSprockets. At this point it is safe to say that I will continue to maintain DBSprockets for bug fixes, but that its internals are going to be converted over to using Rum before being deprecated altogether. DBSprockets had a great run, but it is time to move on and move forward with a superior design which promises to integrate so many WSGI technologies.
Here’s a quick note on how to use IIS to serve up your TurboGears applications. This is something we need to document better if we want to increase Python/TurboGears penetration in the Windows market.
It’s not like ASP.NET is so good that Windows developers don’t need TurboGears. And TurboGears’ multi-threaded plus multi-process model works better on Windows than many of the other “dynamic language” web frameworks, which depend solely on the multi-process model.
TurboGears has used a tree of controller objects to do URL dispatch since 1.0. This is nice and easy to understand, and makes getting started very quick. But it wasn’t always apparent how you should use it to do RESTful dispatch, since the HTTP verbs all ended up going to the same controller method.
You could dispatch within that method, but that never felt totally clean to me, and I’ve been thinking and talking about better ways to do this. The good news is that Rick Copeland just “made it happen” after a conversation in Atlanta last week.
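I won’t reproduce Rick’s recipe here, but the core idea can be sketched in a few lines (my own illustration, with made-up controller names, not the actual recipe): look at the request’s HTTP method and fan out to a handler method of the same name.

```python
# Sketch of verb-based dispatch: one entry point inspects the HTTP
# method and forwards to a handler named after the verb.
class RestController:
    def default(self, request_method, *args, **kw):
        handler = getattr(self, request_method.lower(), None)
        if handler is None:
            raise NotImplementedError("405 Method Not Allowed")
        return handler(*args, **kw)

class BookController(RestController):
    def get(self, book_id):
        return "fetching book %s" % book_id

    def post(self, book_id, **data):
        return "updating book %s" % book_id
```

In a real controller the verb would come from the request object rather than being passed in explicitly, but the fan-out is the whole trick.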
And then HTTP GET and POST verbs will be routed to their respective methods. This makes working with RESTful APIs easier. And on that front I’m very much looking forward to Dojo 1.2, which has all kinds of RESTful JSON data store goodness.
Hopefully Rick’s recipe can find its way into the 1.x core code somewhere so that it’s even easier to do. But for now, a couple dozen lines of code give you everything you need for RESTful goodness.
Looks like the TG 1.x team has been doing good things. CherryPy 3 support is very good news; hearing that it’s been integrated into TG1 is great, because this sets the stage for a much longer support cycle for TG1.
CherryPy 3 is a more solid base to work with, and it’s easier to envision supporting it for several years than it was with CherryPy 2.x. And the test-refactoring work that has gone into making this happen is a huge benefit as well, because you will be able to write application-level tests that work in TG 1.5 and will still work in 2.0, easing that migration path.
Florent Aide may be on vacation, but things are still moving forward very quickly on the 1.5 front. I’m looking forward to seeing a release with all this great stuff in it.
We’re hard at work trying to make the TurboGears 2 docs best in class. There’s a long way to go, but the toolchain we’re using keeps getting better and better, thanks to Sphinx and to Bruno José de Moraes Melo and his GSoC work.
Sphinx all by itself provides a great system for turning reStructuredText (ReST) files into usable documentation. Since the TG docs were in MoinMoin already, and we’d been careful to use ReST, getting started with Sphinx was pretty easy. We updated the docstrings in TG2 to take advantage of the Sphinx autodoc features to do API-level documentation.
But the missing piece was pulling example code in from working sample projects. Sphinx had some include helpers, but they weren’t flexible enough for our needs. So Bruno has created a set of Sphinx extensions to help us. The first of these imports working files from projects checked into Subversion. This allows us to show progressive enhancement of projects, and it allows us to keep the working example code under the same source control system as the docs.
So now our Wiki20 source doc has sections that look like this:
Thanks to the Sphinx goodness, the code this pulls in is automatically color-coded. Bruno has also added the ability to mark off specific snippets of source code so that you can import just the bit you want to show off, like this:
and where the PageName section is marked off like this:
<!-- ##{PageName} -->
Page Name Goes Here
<!-- ## -->
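For illustration, here’s a rough sketch of how such markers could be matched to pull out just the named snippet (my own code, not Bruno’s actual extension):

```python
import re

def extract_snippet(text, name):
    """Return the text between <!-- ##{name} --> and <!-- ## --> markers."""
    pattern = re.compile(
        r"##\{%s\}\s*-->\s*(.*?)<!--\s*##" % re.escape(name),
        re.DOTALL)
    match = pattern.search(text)
    if match is None:
        raise KeyError("no snippet named %r" % name)
    return match.group(1).strip()
```

The lazy `(.*?)` with `re.DOTALL` grabs everything between the opening and closing markers, across line breaks, without running past the next marker pair.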
Bruno is working on a few more Sphinx extensions that automatically test project code before adding it to the docs, and that automatically zip up the sample project and make it available so that documentation users can easily download it.
I’m pretty excited about how the docs are shaping up, and I’m hoping that a few more people get involved in writing good docs, because the TG team is very much committed to having the best possible docs. Docs aren’t sexy, but good docs can make or break the use of a framework, both for new users getting started and for advanced users who are trying to do complex things.
When I first heard about Apple’s new MobileMe service, I was excited by the prospect of being able to keep all of my devices in sync instantly, with no waiting. I was so excited, that I purchased a boxed version of .Mac so that I’d be one of the first to get my hands on the service when it debuted.
One of the main reasons I purchased MobileMe was because of Apple’s beautiful marketing site describing the features of MobileMe. The site states:
MobileMe stores all your email, contacts, and calendars in the cloud and pushes them down to your iPhone, iPod touch, Mac, and PC. When you make a change on one device, the cloud updates the others. Push happens automatically, instantly, and continuously. You don’t have to wait for it or remember to do anything — such as docking your iPhone and syncing manually — to stay up to date.
As it turns out, this is not only misleading but essentially a bald-faced lie. While the “push” does occur from your iPhone to the cloud and vice versa, your Mac only sends and receives updates once every 15 minutes, even if you instruct it to sync “Automatically” in the MobileMe preference pane on your Mac. Honestly, I expected better from Apple.
In addition to this very irritating issue, I have several other problems with the service, which is billed as “Exchange for the rest of us,” that greatly diminish its usefulness.
I would love to utilize the push email service to manage my personal email. However, I have a personal domain where all of my email is delivered. In addition, I subscribe to many mailing lists, and receive several hundred emails a day. All of my email is neatly organized into folders automatically, using server-side mail filters through the excellent procmail utility.
MobileMe email has two massive shortcomings that make it useless for my situation:
You cannot set up MobileMe to handle email for a personal domain, unless you set up your existing mail server to forward email on to MobileMe. I could live with this one, if it were not for the second problem.
While rules that you set up in Apple mail “sync” to MobileMe, they are not applied server side. In order for your rules to be applied, you must have a version of Apple Mail running on some computer connected to the internet to apply those rules client-side.
MobileMe is a very compelling service if you just read the marketing, but in the end, it falls extremely short of what I expect from a company like Apple. I have sent my complaints on to MobileMe customer support and am awaiting a response. Hopefully they’ll at least acknowledge the misleading marketing.
I’ve been doing package management related stuff a lot recently, and thanks to a nice little script Chris McDonough showed me, it’s easy to turn a directory full of eggs into a PyPI (Python Package Index) style index page.
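The script itself isn’t reproduced here, but the idea is simple enough to sketch (my own approximation, not Chris McDonough’s actual makeindex.py):

```python
import os

def make_index(egg_dir):
    """Build a static, PyPI-style index page linking every egg in a directory."""
    links = []
    for name in sorted(os.listdir(egg_dir)):
        # Link only distributable artifacts, skipping stray files.
        if name.endswith((".egg", ".tar.gz", ".zip")):
            links.append('<a href="%s">%s</a><br/>' % (name, name))
    return "<html><body>\n%s\n</body></html>" % "\n".join(links)
```

easy_install can consume a page of links like this via its find-links option (`-f`), which is what makes a plain directory of eggs behave like a package index.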
And now I’m pretty much sold on the idea of having separate indexes for all our major releases, so that there’s a known, tested set of versions and dependencies that can be easy_installed with one command to get a working install. When combined with Ian Bicking’s virtualenv package, this pretty much solves the package management problem for users, because they can keep a lightweight virtual environment for each version of the software they want to have, and because they can have a reliable and repeatable way to get a specific version of TurboGears along with all of its dependencies.
For projects like TurboGears, Pylons, or Zope, where there’s a pretty large set of dependencies, it makes good sense for the development team to take on the burden of managing packages and versions for users. This is of course conceptually similar to creating a Linux distribution, but fortunately it’s orders of magnitude easier. We have around 40 packages in the TurboGears 2 index, but Ubuntu has well over 10,000 packages. And in a world of interconnected components, one of the core value-adding propositions that a “megaframework” adds is the ability to get a pre-vetted set of packages that are known to work together.
And it’s in that light that I’m thinking a time-based release schedule makes sense. It’s not so much that we need to change the core TG2 code all that often, but there are lots of components and interactions, and it seems like there’s value in providing regular “distributions” where we’ve tested everything and proved that it all works together well.
It’s the framework maker’s job to make doing the right thing easy. I was reading an article this morning, and came across this quote:
The right thing should be the right thing partly because it’s easy and natural to do. If the right thing is unnatural, that is kind of an environment smell. It’s beyond a code smell. It’s telling you something.
– Rod Johnson (Founder of the Java Spring framework).
Which is another way of saying:
There should be one — and preferably only one — obvious way to do it.
I think TurboGears is actually pretty good about making the right thing seem natural. Heck, Bruce Eckel has said:
“I think this is the first time a web framework’s functionality has been so obvious.”
But just because we’ve done a pretty good job in the past does not mean that we shouldn’t be working to do better. And there’s one area which has been bothering me a bit recently.
I want RESTful web services to feel more natural to write in TurboGears. The new TG2 lookup method allows you to use the URL as a way to instantiate a resource object and call methods on it, which is an important first step. We probably should also extend the object dispatch mechanism to call different methods based on HTTP verbs, which I think will help.
And of course in TG2 you can just use Routes to create more RESTful resources, but this feels unnatural to some TG users who are very invested in object dispatch.
Either way, REST is often (though not always) the right thing to do, and we need to make it feel natural to TG programmers.
On July first we cut another 1.0.x TurboGears release.
I see this release as part of an ongoing effort to take care of our 1.0.x users and provide support and enhancements for existing applications.
While I am working with Mark and others on the 2.0 branch to make a release happen, I also work on the 1.5 front (formerly known as the 1.1 branch) to get this intermediary release out.
The 1.5 release aims to be 100% compatible with 1.0.x applications (running on Python >= 2.4). I regularly test my development version against production applications that are currently running on top of TG 1.0.
The real differences in 1.5 compared to 1.0 are:
Different defaults for quickstarted projects: SQLAlchemy and Genshi instead of SQLObject and Kid.
A testing framework compatible with 2.0, to aid refactoring an application from 1.5 to 2.0 by using your test suite.
RuleDispatch dropped from the required TurboJson version and replaced by PeakRules. This should not change much for end users, but it will help maintainers because we won’t need to precompile binaries.
At the end of the day, I would say the TG scene is in pretty good shape and we are beginning to see some traction for the 2.0 branch, which is a good thing IMHO. And last but not least, releases always get more attention than simple SVN commits, and we get users’ feedback on a much greater scale on such occasions, which is essential for an open source project such as ours.
TG2 has been a wild and crazy place with quite a few API changes over the last 4 months. Cutting a release makes installation easier, at least in a project like TurboGears 2, where dependency management is part of the release process.
It makes writing components easier — it’s easier to certify against a release than against a range of SVN checkouts.
On the other hand, releases seem to make promises about API stability, that we as developers aren’t always ready to make. I think that’s often a bad excuse for not doing the work needed to cut a release. After all we have limited time, and perhaps it is better if we spend that time working on “new stuff” rather than on packaging. The problem with that thinking is that releases provide a way for the development community to manage breaking API changes better, informing users about them in a central place, and providing users with tools to manage the pace of change.
So, releases are even more important when there’s lots of activity and potential API changes. And they save time. Well, not my time personally, but every hour I spend doing releases makes getting started that much easier for hundreds, or thousands, of people. Hopefully karma will result in some of those people contributing back.
Which is why we just cut a release, and why I think we should keep cutting releases on a regular basis from now to 2.0 final.
But how should we manage that process?
Open source culture has a “we’ll release it when it’s done” philosophy which keeps all kinds of half-baked stuff from getting released in a critically buggy state. And that’s a good thing.
On the other hand Ubuntu has taught us that it’s possible to have stable, complete, compelling software releases, in an open source world — and that it can be done on a predictable schedule.
So, I’m hoping that we can do time-based mini-releases of TG2 over the next few months, on our way to 2.0. I’m still thinking this through, but I think we will do a first alpha release this month, and follow that up with monthly releases until we hit 2.0.
2.0 final is very likely several months out, but having monthly releases between now and then will make that process easier, and can help us gather more testers, who are willing to put up with a bit of well managed API change on the way to 2.0.
Long term, I’m considering doing a 6 month time based release schedule, because it seems to have worked out well for Ubuntu.
And TG2 is a bit like Ubuntu in that it brings together lots of little pieces to make something bigger, so we may benefit from structuring our release process after theirs. But that’s just daydreaming at this point, and I’m very interested in what other people think about how we can best manage the release process of TG.
At the same time, we probably should start thinking about version life cycles and support timelines. We’re planning on supporting 1.0 for at least a year after 2.0 final. But if there are people willing to help out, I think that support could go on for longer (even forever, if enough people want it; that’s the beauty of open source!).
I’m not sure exactly how long is long enough… Heck, in general, what do you think? How should we manage releases and support timelines in the TurboGears world? Is the Ubuntu system transferable to the web framework world?
We’re very serious about wanting to know what would make users most happy, so please let us know what you think.
I’d like to officially announce the release of TurboGears 1.9.7 alpha 1. We’ve been working on this for a while now, and while there’s still quite a bit of work to be done, we’ve now got a solid base to work from.
There’s a whole new section of the turbogears website devoted to TurboGears 2. We’re still working on TG2, and on the docs, as is expected with an alpha release, but I’m really excited about the new docs, which are generated using Sphinx.
Thanks to some help from the folks at the Repoze project, we’ve even got our own package index to use for installing TG2, so if you’ve already got setuptools installed, the install process is as easy as:
Of course, we’ve got more detailed install instructions, including information on how to install into a clean virtual environment, so that you eliminate any possibility of version conflicts in your main Python install:
There are a lot of new features in TG2, and a lot more to come, so be aware that this is an alpha release, and we’ll probably see some API changes between now and 2.0 final. We’ll document those changes and tell you what you need to know to upgrade your project, but we’re not going to be afraid to improve and refine our APIs between now and 2.0 final. So, if you’re not afraid of change, and are interested in helping to shape the next generation of dynamic web frameworks, hop on in and let us know what you think.
I think it’s worth reaffirming our commitment to the 1.x users. This does not mean we’re dropping support for TG1; if anything, it looks like 1.x development is accelerating these days. So, if you’re interested in a stable, well tested, growing environment, TurboGears 1 is still a great choice.
I’ll blog more this afternoon about my thoughts on the TurboGears 2 release schedule, and the release process. But, I’m very grateful to all of the people who helped out in order to make this release possible. Thanks a million times!
I keep getting reports that TurboGears seems to be stagnant, and from the inside that just does not make sense: we’ve had 202 checkins in the last 30 days. Which, by way of reference, is a tiny bit more than Django’s 183.
So, that seems to indicate that we’re definitely not quite dead yet. But, even that’s only half the story.
If you pull in checkins to a few components that are external to TG, but internal to Rails and Django (template language, ORM, forms + form validation, etc) that number grows significantly:
58 — SQLAlchemy — ORM
38 — Genshi — Template Language
113 — ToscaWidgets and tw.forms — Widgets, and Widget based Forms
That’s 411 in total. Now if you pull in the changes to Pylons, related middleware, and helpers, there’s another big jump:
56 — Pylons
106 — Paste
55 — WebHelpers
36 — Routes
6 — Beaker
Now, of course there are other components which are actively contributing to the development of stuff that TurboGears users get as part of the package. But even ignoring all of that we’re looking at over 600 commits in the last month.
TurboGears ecosystem growth:
There’s also a growing ecosystem of turbogears stuff, and it’s all moving forward very quickly too.
Here’s a quick sample of projects I’m watching:
26 — TGTools
33 — tw.openlayers
31 — tw.dynforms
4 — dbsprockets
7 — tw.dojo
16 — tw.jquery
There’s a new TG2 based CMS in the works too (https://code.google.com/p/lymon/). And there are a couple more very interesting projects that have not yet gone public.
Splitting our attention:
One thing that’s worth mentioning about all of this is that we’ve intentionally split our efforts over two areas:
Evolutionary improvements to TG 1.x
A new core in TG2.x
And that slows us down a bit. But it’s important because we want to take care of our installed userbase, and to try to grow into new areas at the same time. Doing one or the other would be way easier, but ignoring either one would be a huge mistake.
If it weren’t for all the amazing stuff going on in the WSGI component world, TG2 would not be possible.
But because the wider Python web community is developing new ways to work together around the WSGI model, TG2 development has been moving forward very well. And that’s one of the reasons why I’m so sold on the “component” model of framework development. Sure, it would be nice to have everything under one roof, and to have a stronger guiding hand on the whole process. But it’s not worth giving up all the innovation that happens “at the edges.”
TurboGears 1.x progress
On the evolutionary improvement front we’ve done lots to support SQLAlchemy, Genshi, ToscaWidgets, and DBSprockets, improved JSON support, created a brand new testing infrastructure, improved our authorization system, and otherwise made lots of positive changes. We’ve also had a half dozen new releases with feature enhancements, averaging about a release a month.
TurboGears 2 progress
TG2 constitutes our revolutionary front: we’ve added many, many new features and tools, maintained very significant API backwards compatibility, improved performance, and entirely redesigned the core of TurboGears. We’re approaching our first alpha release in the next few days, so I don’t want to belabor the new stuff here.
What we need to do better:
I understand that not all of this work has been very visible, and that’s our fault, for not engaging the wider python community better, and it’s my fault in particular for not getting the TurboGears 2 work out into the wider world more quickly. Which is something I definitely intend to change in the very near future!
So, if you’re a part of the TG community and your site or project needs to be better known, let me know. We need to raise the profile of some of the very interesting stuff that’s being done in the community, because outsiders seem to think that we’re not moving very fast, while insiders talk to me about the “blistering pace” of development.
Tomorrow night I will be speaking about agile technologies with SQLAlchemy at the Front Range Pythoneers monthly meeting. I spent a few hours last night working out my presentation and creating a number of screencasts which show how to use tools like virtualenv, paster, and nosetests. I also touch upon SQLAlchemy, and how one would set up a test environment for a database schema. Even if you don’t live in the Boulder/Denver area the information could be valuable to you, so I decided to set up a Google Code repository to store all of my tutorial-related materials. It is called PythonTutorials.
Lately I am finding that it is worthwhile to separate projects into their own packages, but since they are all in the same domain, I want them to share elements for importing purposes. Enter namespacing, which I believe is a little-used feature of setuptools that people should take a serious look at.
What is namespacing? Well, if you are familiar with creating packages, you know that they often share similar traits, which means it would be nice to have a sort of global package where they all reside; but then you would not be able to install the components of said package independently. Let’s say we have a solar system package, and inside it are elements for each planet. You might import from them like this:
from solarsystem.earth import ecosystem
from solarsystem.venus import atmosphere
But what if you don’t want to package simulations of all the planets, because sometimes you just want to install one or two of them at a time? Namespacing gives you a way of creating packages which are related to each other but do not clash.
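Under the hood, what makes this work is a small amount of setuptools configuration. A sketch for the hypothetical solarsystem.mars package might look like this (a config fragment for illustration; the tooling described next generates the equivalent for you):

```python
# setup.py -- the namespace_packages argument tells setuptools that
# "solarsystem" is shared among several independently installable
# distributions rather than owned by any one of them.
from setuptools import setup, find_packages

setup(
    name="solarsystem.mars",
    version="0.1",
    packages=find_packages(),
    namespace_packages=["solarsystem"],
)

# Each distribution also ships a shared solarsystem/__init__.py that
# contains only the namespace declaration:
#     __import__('pkg_resources').declare_namespace(__name__)
```

With that in place, solarsystem.mars and solarsystem.venus can be installed separately, yet both import from under the solarsystem name.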
First off, you are going to need setuptools if you don’t already have it. Download the script here and run it with Python. This will install the ubiquitous setuptools package, which contains easy_install.
The easiest way I have found to create a namespace package is to use ZopeSkel, which provides a basic_namespace template for your project. First, we easy_install ZopeSkel:
easy_install zopeskel
Now we can list the templates that paster provides. Paster is a tool which provides a developer with an easy way to create templates for packages, as well as other useful utilities for python file management. (Paster is automatically installed by zopeskel)
$ paster create --list-templates
Available templates:
archetype: A Plone project that uses Archetypes
basic_namespace: A project with a namespace package
basic_package: A basic setuptools-enabled package
basic_zope: A Zope project
nested_namespace: A project with two nested namespaces.
paste_deploy: A web application deployed through paste.deploy
plone: A Plone project
plone2.5_buildout: A buildout for Plone 2.5 projects
plone2.5_theme: A Theme for Plone 2.5
plone2_theme: A Theme Product for Plone 2.1 & Plone 2.5
plone3_buildout: A buildout for Plone 3 projects
plone3_portlet: A Plone 3 portlet
plone3_theme: A Theme for Plone 3.0
plone_app: A Plone App project
plone_hosting: Plone hosting: buildout with ZEO and any Plone version
plone_pas: A Plone PAS project
recipe: A recipe project for zc.buildout
silva_buildout: A buildout for Silva projects
At the top of the list is basic_namespace, which we will be using. For example, let’s make a solarsystem.mars package:
paster create -t basic_namespace solarsystem.mars
Paster will then prompt you with a series of questions, of which the first two are the most important:
Selected and implied templates:
ZopeSkel#basic_namespace A project with a namespace package
Variables:
egg: solarsystem.mars
package: solarsystemmars
project: solarsystem.mars
Enter namespace_package (Namespace package (like plone)) ['plone']: solarsystem
Enter package (The package contained namespace package (like example)) ['example']: mars
After you answer the rest of the questions, it will create a directory structure that looks something like:
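(Reconstructed from a typical zopeskel basic_namespace skeleton; the exact files vary by version.)

```
solarsystem.mars/
├── setup.py
├── README.txt
└── solarsystem/
    ├── __init__.py        <- declares the namespace
    └── mars/
        └── __init__.py    <- your actual module code goes here
```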
At which point, you are ready to create a development install of your project. Simply change to the directory:
cd solarsystem.mars
and:
python setup.py develop
Code for your new module goes in the solarsystem/mars/ folder, inside the new package. Any Python environment in which you have installed this new package will now be able to:
import solarsystem.mars
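Under the hood, two small fragments make this work. This is a sketch of what zopeskel generates for you (assuming the classic pkg_resources-style namespace declaration; details may differ by version):

```python
# solarsystem/__init__.py -- the namespace package contains ONLY this line,
# which lets setuptools stitch together solarsystem.* from separate installs:
__import__('pkg_resources').declare_namespace(__name__)

# setup.py -- tell setuptools which package is the shared namespace:
# setup(
#     name='solarsystem.mars',
#     packages=['solarsystem', 'solarsystem.mars'],
#     namespace_packages=['solarsystem'],
#     ...
# )
```

Because solarsystem itself is declared as a namespace, solarsystem.earth and solarsystem.mars can be installed (and versioned) completely independently.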
Namespacing is a great way to organize your work into digestible morsels, which can be installed one by one as needed by your prospective users, and it lets you take advantage of all of the tools which have been created to make distribution and versioning easier.
I purchased a first-generation iPhone the day that it came out. It’s been a wonderful purchase from the start, in spite of the fact that it cost me $599. Sure, that’s a lot of money, but I always felt like I was getting my money’s worth out of the hardware. The first iPhone was the best iPod, best mobile browsing device, and best phone. When Apple reduced the price to $399, I thought that this was an absolute steal.
When I first heard rumors that the iPhone 3G was going to cost less than the current iPhone, I had guessed that this was pure rumor and speculation. There is no way that Apple would sell the iPhone for less than $399, as it would force them to restructure the pricing for their iPod line. By the same token, I couldn’t fathom Apple allowing ATT to subsidize the phone, because this would mean that ATT would be making this subsidy back up by somehow gouging the customer in service fees. Apple did an excellent job of standing up for the consumer with the first iPhone, by negotiating a low-price data plan and allowing for at-home activation through iTunes.
Needless to say, I was shocked at the price point that was announced during the keynote. Only $199? And no mention of contracts, subsidies, or rate increases? Was this too good to be true?
As it turns out, yes.
Currently, I pay ATT $20/month for 200 text messages and unlimited data over their EDGE network. ATT has announced that the new iPhone 3G data plan will cost $30/month, and will include no text messages. ATT charges $5/month for 200 text messages, which means that the iPhone data plan is effectively being increased by $15/month, representing a 75% increase over the existing data plan.
Over the course of the two-year contract, an iPhone 3G will cost you a full $360 more than a first-generation iPhone would. This means that with the same ATT service plans, the $199 8GB iPhone 3G will actually end up costing you $559, whereas an 8GB iPhone 1.0 will cost you only $399, representing a savings of $160! The iPhone 3G isn’t cheaper at all; it is in fact far more expensive.
Compounding this is the fact that if I upgraded, I’d likely give my current iPhone to my wife, and have to pay an additional $20/month for her data plan. By my math, getting a new iPhone 3G would cost me a minimum of $1039 over two years. Yikes!
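The arithmetic above can be laid out explicitly (all figures are the 2008 US prices quoted in this post):

```python
# Back-of-the-envelope comparison of two-year costs.
MONTHS = 24

old_plan = 20          # $/month: unlimited EDGE data + 200 texts
new_plan = 30 + 5      # $/month: 3G data ($30) + 200 texts ($5)

extra_per_month = new_plan - old_plan            # $15/month increase
extra_over_contract = extra_per_month * MONTHS   # $360 over the contract

iphone_3g_total = 199 + extra_over_contract      # $559 effective price
iphone_1g_total = 399                            # first-gen, old plan

# Add a second line's $20/month data plan for the hand-me-down phone:
total_upgrade_cost = 199 + extra_over_contract + 20 * MONTHS

print(extra_over_contract)   # 360
print(iphone_3g_total)       # 559
print(total_upgrade_cost)    # 1039
```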
So, I’ll be passing on the new “cheaper” iPhone, and keeping my iPhone 1.0. I’ll be able to run all the great new third-party applications, and will have every feature that my iPhone 3G toting friends will have, apart from 3G speed and GPS. As much as I hate the EDGE network, and would like to have GPS, those features simply aren’t worth over a thousand dollars to me. Here’s hoping that Apple and ATT come up with something more compelling and consumer-friendly, rather than misleading consumers.
One common piece of user feedback from the TurboGears 1 community:
Authentication and authorization are tied together too closely by Identity.
At the same time, it was very, very nice to be able to offer people an out of the box solution to their total auth needs, and most brand-new web-application projects use a local database for both kinds of Auth, so TG1’s Identity module was good enough as the 80% solution.
TG2 however, while still aimed at making it easy to get started, and easy to build new web applications, is also aimed at solving some of the more “industrial strength” problems that the current generation of “dynamic” web frameworks has not yet addressed. In the case of Auth*, we’re basically talking about using pre-existing auth services. Identity supported this, but it was a non-trivial exercise to get everything working.
So, in TG2, we’re partnering with the Repoze project folks to build up a simple, standard interface for authentication service providers. We’ve got a plugin for Repoze which adds a simple database authentication provider, and it works great. But at the same time, we want to make database-backed authorization as simple as possible, so we’ve also included some authorization decorators (with the same API as the ones provided by Identity in TG1) and extended our basic authorization provider to “decorate” the request with authorization information as well as the basic user authentication information provided by the standard repoze.who middleware.
The nice thing is that while tg.ext.repoze.who provides both authentication and authorization, it’s much easier to separate them if you want. Also, I have high hopes that repoze.who becomes a standard authentication provider in the WSGI world, so non-TurboGears-2 WSGI apps can (and hopefully will) be designed to work with it out of the box.
There are LDAP and other plugins on the way, and the whole system is still evolving somewhat, but tg.ext.repoze.who is looking like a clean and useful library for both Authentication and Authorization, and it will provide a great platform for TG, Repoze, and hopefully other WSGI application and framework people to work together on the Auth problem.
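As an illustration only (the names require, in_group, and the identity dict here are hypothetical, not the actual tg.ext.repoze.who API, which was still evolving at the time), an Identity-style authorization decorator can be sketched like this:

```python
# Toy sketch of an Identity-style @require decorator with predicates.
# All names here are illustrative, not the real TG1/TG2 API.
from functools import wraps


class NotAuthorizedError(Exception):
    """Raised when the current identity fails the predicate check."""


def in_group(name):
    # Predicate factory: does the current identity belong to the group?
    def check(identity):
        return name in identity.get("groups", ())
    return check


def require(predicate):
    # Decorator: run the controller only if the predicate passes.
    def decorate(controller):
        @wraps(controller)
        def wrapper(identity, *args, **kw):
            if not predicate(identity):
                raise NotAuthorizedError("access denied")
            return controller(identity, *args, **kw)
        return wrapper
    return decorate


@require(in_group("admin"))
def delete_user(identity, username):
    return "deleted %s" % username
```

Calling `delete_user({"groups": ["admin"]}, "bob")` succeeds, while an identity without the admin group raises NotAuthorizedError. The real middleware-based design does the same check, but pulls the identity out of the WSGI environ rather than passing it explicitly.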
Florent Aide and Chris McDonough have done all the heavy lifting on this, and I’m very excited about what they’ve done.
Well, it’s official now, I have a new job with Predictix, doing open source TurboGears and Python web dev stuff. Predictix is very much invested in helping the TurboGears community to grow and thrive, and I’m proud to be working with their team. And I’m even more excited about the fact that they want me to do work on TurboGears 2 as part of my “real job.”
One of my main goals when looking for this job was to make sure that whoever I worked for was committed to growing and taking care of the TG development community. And I couldn’t have asked for anything better. Working for Predictix will help me to polish up the good work that’s already been done to get us to a TG2 beta release, and they already have a lot of fantastic stuff that they would like to open source, which I’m really excited about.
TG2 is moving forward like crazy. In the last three weeks, we’ve had two sprints, both of which had several people working on docs, and on adding the last few features needed for the beta, and cleaning up the show-stopper bugs in our ticket system.
I’m a bit burned out by all the activity, but at the same time I’m very excited about where we are going. I think 2008 is shaping up to be a really busy year for the TG dev team. I see my job in the very short term as creating some stability and consistency in the midst of the firestorm of new development that’s going on. So, my highest priority right now is getting a stable beta release out the door, and helping us to move forward the docs so that anybody who wants to try out TG2 has a stable base to work on.
My plan will be to do releases about once a month for the rest of the year (or until we have a TG2 final release), because there’s a lot going on, and I want to make that stuff available to people as soon as possible.
It’s been a while since I have blogged for a few reasons. One, I took a vacation. No work, open source or otherwise for 11 days. I barely even checked my email. One thing that is great about living in Colorado: I don’t have to go anywhere on vacation, and my family will come to me! I took my sister up the First Flatiron, my dad fishing in Cheeseman Canyon, and all of us took a nice stroll through Chautauqua Park. We ate out almost every night and enjoyed catching up since I have moved 2000 miles away.
Previous to that, I managed to win a skirmish in the framework war at work, and spent an obscene amount of time (on and off the clock) bringing up my first production-level TG2 site. This was a really fun project because my coworker and I were given the reins and let loose on implementation. In 3 hours we had a working TG2 site with data in an EXT grid, with the database schema expressed as a tree. In 3 weeks we had a complete reporting system (using EXT) with object-based security. We were also able to implement a file upload manager, and a select shuttle (for assigning groups to secure objects) thanks to the help of Sanjiv Singh, my GSoC protege. Thanks to all of you who have helped me, and TG2, get off the ground.
One of the reasons I was so absent from the OSS community was that I was trying to wrap my head around what was going on with the EXT licensing, which changed under my feet as I was developing our in house app. The move from LGPL to GPL/Commercial was a shock to the community, and in my opinion based primarily on one person’s greed. My personal choice is to finish up whatever EXT support I have to do for our in-house application, and never look back at it again. I will be moving on to Dojo, and have advised my GSoC students and other people involved with TurboGears to do the same.
Now that I am back from vacation/new project hell, I have been able to release a new module for TurboGears, tg.ext.silverplate, which is a plugin for TG2 providing customizable User Management and Profile pages. This all fits under the TGTools domain, which has become a home for tg.ext.repoze.who (authentication for TG2) and tg.ext.geo (an upcoming library for geographical support in TG2). If you get a chance, you might want to check out the new TGTools googlecode site.
I also did a new release of tg.ext.repoze.who, which provides authentication/authorization to TG2. The only functionality I was able to contribute was a change to the name-spacing, as well as a full test of its functionality. Right now I am working on an LDAP bridge which will allow your TG2 site to authenticate against LDAP, and then drop you into the TurboGears domain for all of the authorization stuff (groups, permissions, etc.). I should have this completed in a few days. Most of the work for tg.ext.repoze.who was done at the last world-wide WSGI-TurboGears sprint. It looks like TG is going to do a release in a few days.
This weekend I am headed to ABQ for some sprinting with Clearwired on their next-generation web application. I am hoping this time around TG2 is mature enough in their eyes to take advantage of everything it has to offer, since they have currently chosen to work simply with Pylons as a framework. Either way, I will be working with some of the best people in the business, and we should be able to push TG forward one way or another.
I just found this blog post about Paris Envies, and why Jon chose TurboGears for that project. I think we come out looking pretty good ;)
He was well acquainted with Ruby on Rails and Django when he was introduced to TurboGears:
I was sceptical at first, being in love with Django at that time. TurboGears taught me a lot of things and without it I’m sure Paris Envies wouldn’t be where it is today.
This site is a great example of what you can do with TurboGears, since it makes use of lots of TG features and add-ons. Widgets, JSON support, the user Registration module, and lots of other components are used. And the great thing is that he’s been able to rapidly add new features, and has been very happy with the flexibility of the TurboGears framework.
I made the Paris Envies mobile website in two days, no more. This included tests, integration of the WURFL mobile phone database (to get screen sizes) and Google Maps Static (I was using Yahoo Static Maps first, but google ones are much better ;)).
All in all, I would say this seems like a ringing endorsement of TurboGears.
I can code really fast with TurboGears, faster than with any other framework, and when I say faster, I mean it.
And of course all this was done with TurboGears 1, which is still viable, still competitive, and still very much supported. Sure, we’re working hard on TurboGears 2 to provide many more industrial strength solutions, and to make getting started even easier. But we’re also committed to maintaining and growing the TurboGears 1 platform at the same time. Unlike some other web frameworks out there, we’re not abandoning 1.0 in favor of 2.0. Sure, we’ll eventually phase out TG1 support when people don’t want it anymore — but that’s a long way off, and we don’t think shafting existing users is a path toward future success.
For various reasons, I released Paver 0.7.3 on Friday and Paver 0.8 yesterday. I have an idea of what the coming 1.0’s zc.buildout integration will be like, and I think it will be quite cool and useful.
In the meantime, though, I’ve got some new features that set the stage for things I need to do in 1.0. Specifically, you can now pass in a dictionary in your options search ordering. So, you can pull options from any source you’ve got at the time the task is running and stick them at the front of the line. I expect to use this in buildout options handling.
A nice new feature is the ability to set options on the command line. You can do something like:
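Assuming Paver's `name=value` command-line syntax, such an invocation might look like:

```shell
paver some.option=hello task1 some.option=goodbye task2
```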
Doing that, Paver will set some.option to hello, run task1, and then change the option to goodbye before running task2.
The new cog.include_markers and cog.delete_code options allow you to remove Cog’s markers from the output and instead put a nicer bit of text to say where the snippet of code came from. Letting the user know where a sample code snippet came from is quite valuable, so I want to make it possible to do so in as pleasing a way as possible.
For Paver’s Getting Started Guide, I ended up not using the new include_markers feature and instead just changed the Cog delimiters. I did this because Paver runs shell commands in addition to including file sections when generating the docs. I wanted those shell commands to be included. I think the new markers are more pleasant to look at, and I’ll be curious to get feedback since I heard from more than one person that the Cog delimiters looked like they were left in by mistake.
Paver is starting to get some traction as it has picked up its first patches from outsiders, and I’ve started to get some feedback on breakage from Windows users (fixed in 0.8). Mark let me put Paver into TurboGears 2, and I think it will help out there, so that will introduce quite a number more people to the project. As always, come and join us on the mailing list if you have any questions or problems!
Threads may not be the best way, or the only way, to scale out your code. Multi-process solutions seem more and more attractive to me.
Unfortunately multi-process and the JVM are currently two tastes that don’t taste great together. You can do it, but it’s not the kind of thing you want to do too much. So, the JRuby guys had a problem — Rails’s scalability story is multi-process only (Rails core is NOT thread-safe), and Java is not so good at that.
Solution: Running “multiple isolated execution environments” in a single java process.
I think that’s a neat hack. The JRuby team is to be congratulated for making this work. It lets Rails mix multi-process concurrency with multi-threaded concurrency, if only on the JVM. But it’s likely to incur some memory bloat, so it’s probably not as good as it would be if Rails itself were to become thread-safe.
I’m not sure that the Jython folks have done anything like this. And I’m not sure they should — it solves a problem Python folks don’t really have. Django used to have some thread-safety issues, but those have been worked out on some level. While the Django people aren’t promising anything about thread safety, it seems that there are enough people using it in a multi-threaded environment to notice if anything’s not working right.
At the same time, TurboGears has been thread-safe from the beginning, as have Pylons, Zope, and many other Python web dev tools. The point is, you have good web-framework options, without resorting to multiple Python environments in one JVM.
Why you actually want multi-threaded execution…
In TurboGears we’ve found that the combination of both multi-threaded and multi-process concurrency works significantly better than either one would alone. This allows us to use threads to maximize the throughput of one process up to the point where python’s interpreter lock becomes the bottleneck, and use multi-processing to scale beyond that point, and to provide additional system redundancy.
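A toy sketch of that hybrid model (not TurboGears code — just an illustration of several worker processes, each serving requests with a small thread pool):

```python
# Hybrid concurrency sketch: processes scale past the interpreter lock,
# threads maximize throughput within each process.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor


def handle_request(n):
    # Placeholder for per-request work (template rendering, DB calls, ...).
    return n * n


def worker_process(chunk):
    # Each worker process serves its share of requests with a thread pool.
    with ThreadPoolExecutor(max_workers=4) as threads:
        return list(threads.map(handle_request, chunk))


if __name__ == "__main__":
    # Split the "request stream" across two processes.
    chunks = [range(0, 10), range(10, 20)]
    with ProcessPoolExecutor(max_workers=2) as procs:
        results = [r for part in procs.map(worker_process, chunks)
                   for r in part]
    print(len(results))  # 20 requests handled across 2 processes
```

In practice the process-level fan-out is handled by your deployment stack (e.g. multiple server processes behind a load balancer) rather than in application code, but the division of labor is the same.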
A multi-threaded system is particularly important for people who use Windows, which makes multi-process computing much more memory intensive than it needs to be. As my Grandma always said, Windows “can’t fork worth a damn.” ;)
But, given how hard multi-threaded computing can be to get right, TurboGears and related projects work hard to keep our threads isolated and to avoid manipulating shared resources across threads. So, really, it’s kind of like shared-memory-optimized micro-processes running inside larger OS-level processes, and that makes multi-threaded applications a lot more reasonable to wrap your brain around. Once you start down the path of lock management, the non-deterministic character of the system can quickly overwhelm your brain.
As far as I can see, the same would be true for a Ruby web server on Ruby 1.9, where there is both OS-level thread support and an interpreter lock.
I’m well aware of the fact that stackless, twisted, and Nginx have proved that there are other (asynchronous) methods that can easily outperform the multi-threaded+multi-process model throughput/concurrency per unit of server hardware. The async model requires thinking about the problem space pretty differently, so it’s not a drop in replacement, but for some problems async is definitely the way to go.
Anyway, hats off to the JRuby team, and here’s hoping that Rails itself becomes thread-safe at some point in the future.
Today you’d be nuts not to look seriously at PHP, Python, and Ruby.
So, the rise of the so-called scripting languages is one of the inflection points, but it’s not the only one.
He singles out web-framework development as one place where there’s a lot of stuff happening, and a lot of new “rails-like” frameworks are cropping up all the time. TurboGears will live or die in the context of a much larger web-development revolution, and we need to be prepared to make our way forward in the midst of that.
What comes after rails will not be a rails clone. It will learn the right lessons from rails, avoid the pitfalls of rails, but it will also need to carve out something new and better than rails. For RDBMS users, I think the key difference between TG and Rails is the power and flexibility of SQLAlchemy. We need to “sell” this better.
There are a lot of other revolutions coming according to Tim. And I do think we’re looking at big changes in terms of everything from programming language choice, to web-development tools, to end-user desktops, and data persistence mechanisms. We’re also just beginning to see what the world of high-end javascript and other “rich” internet applications is going to do to our view of end-user software.
He doesn’t even mention the rise of EC2 and the Google App Engine as sea-changes in the way we buy computational resources, and I think that’s going to have a huge impact.
In the end my prediction is that the way we develop applications will change more in the next 5 years than it did in the last 5, and it’s time to start getting our heads wrapped around these issues, or we’ll be left behind.