The python packaging situation has been causing quite some controversy for some time. The venerable distutils has been augmented with setuptools, zc.buildout, pip, yolk and what not. Some people praise those tools, others despise them; in particular, discussion about setuptools keeps coming up in the python community, and almost every time the discussion goes nowhere, because what some people consider broken is a feature to others. It seems to me that the conclusion of those discussions is obvious: no tool can make everybody happy, so there has to be a system such that different tools can be used for different purposes, without interfering with each other. The solution is to agree on common formats and data/metadata, so that people can build on them and communicate with each other.
You can find a lot of information on people who like setuptools/eggs, and their rationale for it. A good summary, from a web developer’s point of view, is given by Ian Bicking. I thought it would be useful to give the other side of the story, that of people like me, whose needs are very different from those of the web-development crowd (the community which pushes eggs the most, AFAICS).
Distutils limitations
Most of those tools are built on top of distutils, which is a first problem. Distutils is a giant mess, with tight, undocumented coupling between vastly different parts. Distutils takes care of configuration (rarely used, except for projects like numpy which need to probe for fairly low-level system dependencies), build, installation and package building. I think that’s the fundamental issue of distutils: the installation and deployment parts do not need to know so much about each other, and should be split. The build part should be easily extensible, without too much magic or assumption, because different projects have different needs. The king here is of course make; ruby, for example, has rake and rant, etc…
A second problem of distutils is its design, which is not very good. Distutils is based on commands (one command does the build of C extensions, one command does the installation, one command builds eggs in the case of setuptools, etc…). Commands are fundamentally imperative in distutils: do this, and then that. This is far from ideal, for several reasons:
You can’t pass options between commands
For example, if you want to change the compilation flags, you have to pass them to every command concerned.
Building requires handling dependencies
You declare some targets, which depend on some other targets, and the build tool builds a dependency graph to build everything in the right order. AFAIK, this is the ONLY correct way to build software. Distutils commands are inherently incapable of doing that. That’s one example where the web-development crowd may be unaware of the need for it: Ian Bicking, for example, says that we do pretty well without it. Well, I know I don’t, and having a real dependency system for numpy/scipy would be wonderful. In the scientific area, large, compiled libraries won’t go away soon. A minimal sketch of what such a declarative build looks like is given below.
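To make the contrast concrete, here is a minimal sketch, in plain python, of a declarative, make-like build: targets declare what they depend on, and the tool derives the build order by walking the graph, instead of running a fixed sequence of commands. The target names are made up for illustration.

# Made-up targets: a C extension linked against a small static library,
# roughly the kind of layout numpy has.
graph = {
    "_umath.so": ["umath.o", "libnpymath.a"],
    "umath.o": ["umath.c"],
    "libnpymath.a": ["npy_math.o"],
    "npy_math.o": ["npy_math.c"],
    "umath.c": [],
    "npy_math.c": [],
}

def build_order(target, graph, seen=None):
    # post-order walk: dependencies come out before the targets that use them
    seen = set() if seen is None else seen
    order = []
    for dep in graph[target]:
        if dep not in seen:
            seen.add(dep)
            order.extend(build_order(dep, graph, seen))
    order.append(target)
    return order

print(build_order("_umath.so", graph))

A real tool would also compare timestamps and rebuild only what is out of date; the point is that ordering (and partial rebuilds) falls out of the dependency graph instead of being hard-coded in command sequences.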
Fragile extension system
Maybe even worse: extending distutils means extending commands, which makes code reuse quite difficult, or causes weird issues. In particular, in numpy, we need to extend distutils fairly extensively (for fortran support, etc…), and setuptools extends distutils as well. Problem: we have to take setuptools’ monkey-patching into account. It quickly becomes impractical when more tools are involved (the combinations grow exponentially).
Typical problem: how do you make setuptools and numpy.distutils extensions coexist? Another example: paver is a recent but interesting tool for doing common tasks related to the build. Paver extends setuptools commands, which means it does not (it cannot) work with numpy.distutils extensions. The problem can be somewhat summarized by: I have class A in project A, class B(A) in project B and class C(A) in project C – how do I handle B and C in a downstream package? I am starting to think it can’t be done reliably using inheritance (the current way); the sketch below shows the problem in miniature.
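Here is the problem in miniature, with deliberately simplified stand-ins for the real classes (the actual setuptools and numpy.distutils subclasses are much bigger):

from distutils.command.build_ext import build_ext

class setuptools_build_ext(build_ext):   # stand-in for what setuptools does
    def run(self):
        # ... egg-related additions would go here ...
        build_ext.run(self)

class numpy_build_ext(build_ext):        # stand-in for what numpy.distutils does
    def run(self):
        # ... fortran-related additions would go here ...
        build_ext.run(self)

# To get both behaviours, a third party has to write (and maintain) the
# merged class by hand, and hope the two run() implementations compose.
# Here they do not: setuptools_build_ext.run() calls build_ext.run()
# directly, so the numpy additions are silently skipped.
class combined_build_ext(setuptools_build_ext, numpy_build_ext):
    pass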
Extending commands is also particularly difficult for anything non-trivial, due to various issues: lack of documentation, the related distutils code is horrible (attributes added on the fly for no good reason), and nothing is very well specified. You can’t retrieve where distutils builds a given file (library, source file, .o file, etc…), for example. You can’t get the name of the sdist target (you have to recreate the logic yourself, which is platform dependent, as illustrated below). Etc…
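For instance, to know the name of the tarball that sdist will produce, you end up re-implementing something like the following hypothetical helper; the default archive format has to be guessed from the platform, which is exactly the kind of logic distutils should expose but does not:

import sys

def sdist_filename(name, version):
    # distutils defaults to a zip archive on Windows and a gzipped tarball
    # elsewhere; nothing in the sdist command gives you this name back.
    ext = ".zip" if sys.platform == "win32" else ".tar.gz"
    return "%s-%s%s" % (name, version, ext)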
Final problem: you can’t really call commands directly in setup.py. As a recent example encountered in numpy: I want to install a C library built through the libraries argument of setup. I can’t just add the file to the install command. Now, since we extend the install command in numpy.distutils, it should have been simple: just retrieve the name of the built library, and add it to the list of files to install. But you can’t retrieve the name of the built library from the install command, and the install command does not know about the build_clib one (the one which builds C libraries).
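To give an idea of what this forces you to do, here is a simplified sketch of a custom install command along those lines (the class is invented for illustration); note how the built filename has to be guessed, because distutils never exposes it, and the Unix-only naming is exactly the fragile part:

import os
from distutils.command.install import install

class install_clib(install):
    def run(self):
        install.run(self)
        build_clib = self.get_finalized_command("build_clib")
        for name in build_clib.get_library_names() or []:
            # fragile: assumes a Unix-style static library name
            libname = "lib%s.a" % name
            self.copy_file(os.path.join(build_clib.build_clib, libname),
                           self.install_lib)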
Packaging, dependency management
This is maybe the most controversial issue. By packaging, I mean putting everything which constitutes the software (configuration, .py, .so/.pyd, documentation, etc…) into a format which can be deployed on many machines in a consistent way. For web developers, it seems this means something which can be put on a couple of machines, in a known state. For packages like numpy, this means being able to install on many different kinds of platforms, with different capabilities (different C runtimes, different math libraries, different optimized libraries, etc…). And other cases exist as well.
For some people, the answer is: use a sane OS with package management, and life goes on. Other people consider the setuptools way of doing things almost perfect; it does everything they want, and they don’t understand those pesky Debian developers who complain about multiple versions, etc… I will try to summarize the different approaches here, and the related issues.
The underlying problem is simple: any non-trivial software depends on other things to work. Obviously, any python package needs a python interpreter. But most will also need other packages: for example, sphinx needs pygments and Jinja to work correctly. This becomes a problem because software evolves: unless you take great care about it, a new version will become incompatible with software written against an older one. For example, the package foo 1.1 decided to change the order of arguments in one function, so bar, which worked with foo 1.0, will not work with foo 1.1. There are basically three ways to deal with this problem:
- Forbid the situation. Foo 1.1 should not break software which works with foo 1.0. It is a bug, and foo should be fixed. That’s generally the preferred OS-vendor approach.
- Bypass the problem by bundling foo in bar. The idea is to distribute a snapshot of most of your dependencies, in a known working state. That’s the bundling approach.
- Install multiple versions: bar will require foo 1.1, but fubar still uses the old foo 1.0, so both foo 1.0 and foo 1.1 should be installed. That’s the “setuptools approach”.
Package management à la Linux is the most robust approach in the long term for the OS. If foo has a bug, only one version needs to be repackaged. For system administrators, that’s often the best solution. It has some problems, too: generally, things cannot be installed without admin privileges, and packages are often fairly old. The latter point is not really a problem, but inherent to the approach: you can’t request both stability and bleeding edge. And obviously, it does not work for the other OSes. It also means you are at the mercy of your OS vendor.
Bundling is the easiest. The developer works against a known working set of dependencies, and is not dependent on the OS vendor to ship an up-to-date version.
The third approach sounds like the best solution, but in my opinion, it is the worst, at least in the current state of affairs as far as python is concerned, and when the software targets “average users”. The first problem is that many people seem to ignore the problems caused by multiple, side-by-side installations. Once you start saying “depends on foo 1.1 and later, but not higher than 1.3”, you start creating a management hell, where many versions of every package are installed. The more it happens, the more likely you get into a situation like the following:
- A depends on B >= 1.1
- A depends on C which depends on B <= 1.0
Meaning a broken dependency (the small sketch below shows why no version of B can satisfy both). This situation has to be avoided as much as possible, and the best way to avoid it is to maintain compatibility, such that B 1.2 can be used as a drop-in replacement for 1.0. I think too often people request multiple versions as a poor man’s replacement for backward compatibility. I don’t think it is manageable. If you need a known version of a library which keeps changing, I think bundling is better – generally, if you want deployable software, you should really avoid depending on libraries which change too often; I think there is no way around it. If you don’t care about deploying on many machines (which seems to be the case for web deployment), then virtualenv and other similar tools are helpful; but they can’t seriously be suggested as a general deployment tool for the same audience as .deb/.rpm/.msi/.pkg. Deployment for testing is very different from deployment to many machines you can’t control at all (the users’ ones).
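A toy illustration of why these two requirements cannot be satisfied together (package names and versions are made up): collect every constraint on B and check whether any single version meets all of them.

# constraints on B coming from A (>= 1.1) and from C (<= 1.0)
constraints_on_B = [(">=", (1, 1)), ("<=", (1, 0))]

def satisfies(version, constraints):
    ops = {">=": lambda v, w: v >= w, "<=": lambda v, w: v <= w}
    return all(ops[op](version, bound) for op, bound in constraints)

candidates = [(1, 0), (1, 1), (1, 2)]
print([v for v in candidates if satisfies(v, constraints_on_B)])   # -> []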
Now, having a few major versions of the most common libraries should be possible – after all, it is done for C libraries (the same library installed under different versions with different sonames). But python, contrary to C loaders, does not support explicit version loading independently of the name. You can’t say something like “import foo with v >= 1.1”; you have to use a new name for the module, meaning changing every library user’s source code. So you end up with hacks as used by setuptools/easy_install, which are very fragile (sys.path overriding, PYTHONPATH mess, easy-install.pth, etc…). At least for me, that’s a constant source of frustration, to the point that I effectively forbid setuptools to do anything on my machine: easy-install.pth is read-only, and I always install with --single-version-externally-managed.
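For reference, this is roughly what the setuptools answer looks like at runtime; pkg_resources.require rearranges sys.path before the import, which is precisely the fragile part (the package name is made up):

import pkg_resources

# ask for a minimum version before importing; behind the scenes this
# manipulates sys.path so that a matching egg is found first
pkg_resources.require("foo >= 1.1")
import foo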
With things like virtualenv and pip freeze, I don’t understand the need for multiple versions of the same library installed system-wide. I can see how python does not make it easy to support tools like virtualenv and pip directly (that is, without setuptools), but maybe people should focus on enabling virtualenv/zc.buildout usage without the setuptools hacks (sys.path hacking, easy-install.pth), basically without setuptools, instead of pushing the multiple-versions thing on everyone?
Standardize on data, not on tools
As mentioned previously, I don’t think python should standardize on one tool. The problem is just too vast. I would be very frustrated if setuptools became the tool of choice for python – but I understand that it solves issues for some people. Instead, I hope the python community will be able to standardize on metadata. Most packages have relatively simple needs, which could be covered with a set of static metadata.
It looks like such a design already exists: cabal, the packaging tool for haskell (Thanks to Fernando Perez for pointing me to cabal):
http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/
Cabal works with two files:
- setup.hs -> the equivalent of our setup.py. It can use haskell, and as such can do pretty much anything.
- the .cabal file -> static metadata.
For example:
Name: HUnit
Version: 1.1.1
Cabal-Version: >= 1.2
License: BSD3
License-File: LICENSE
Author: Dean Herington
Homepage: http://hunit.sourceforge.net/
Category: Testing
Synopsis: A unit testing framework for Haskell
Library
  Build-Depends: base
  Exposed-modules:
    Test.HUnit.Base, Test.HUnit.Lang, Test.HUnit.Terminal,
    Test.HUnit.Text, Test.HUnit
  Extensions: CPP
Even for a developer who knows nothing about haskell (like me :) ), this looks obvious. Basically, the classifiers and arguments of the distutils setup function go into a static file. By being a simple, readable text file, other tools can use it pretty easily. Of course, we would provide an API to get at those data (a rough sketch is given below), but the common infrastructure would be the file format and metadata, not the API.
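To give an idea of how cheap consuming such a file would be, here is a rough sketch of a reader in python; the field names follow the example above, and the parsing rules are deliberately simplified (no sections, no line continuations):

def read_metadata(path):
    meta = {}
    with open(path) as f:
        for line in f:
            # a very naive "Field: value" parser, ignoring indented lines
            if ":" in line and not line[:1].isspace():
                key, _, value = line.partition(":")
                meta[key.strip().lower()] = value.strip()
    return meta

# meta = read_metadata("HUnit.cabal")
# meta["name"], meta["version"], meta["synopsis"]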
Note that the .cabal file allows for conditionals, albeit in a very structured form. I don’t know whether this should be followed or not: the point of a static file is that it is easily parsable, and having conditionals severely decreases that simplicity. OTOH, a simple way to add options is nice – and other almost-static metadata files for packaging, such as RPM .spec files, allow for this.
It should also be simple to convert many distutils packages to such a format; actually, I would be surprised if the majority of packages out there could not be automatically translated to it (see the sketch below).
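As a proof of concept, distutils itself can already hand back the metadata of an existing setup.py without running a build, via distutils.core.run_setup; a converter could start out as simple as the following sketch (the output field names mirror the .cabal example above and are otherwise invented):

from distutils.core import run_setup

dist = run_setup("setup.py", stop_after="init")
meta = dist.metadata
print("Name: %s" % meta.get_name())
print("Version: %s" % meta.get_version())
print("Author: %s" % meta.get_author())
print("Homepage: %s" % meta.get_url())
print("Synopsis: %s" % meta.get_description())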
Then, we could gradually deprecate some distutils commands (to end up with configure/build/install, with configure optional), such that different build tools could be plugged in for the build itself – distutils could be used for simple packages (the ones without compiled extensions), and other people could use other tools for more advanced needs (something like what I did with numscons, which bypasses distutils entirely for building C/C++/Fortran code).
Uninstall
Another often-requested feature. I think it is a difficult feature to support reliably. Uninstall is not just about removing files: if you install a daemon, you should stop it, you may ask about configuration files, etc… It should at least support pre-install/post-install hooks and the corresponding uninstall equivalents. But the main problem for python is how to keep a list of installed packages/files. Since python packages can be installed in many locations, there should be one db for each site-packages directory (the db could, and most likely should, be a simple flat file, as sketched below). I am not yet familiar with haskell module management, but it looks like that’s how haskell does it.
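One possible shape for such a flat-file db, just to fix ideas (the format is invented for illustration): one line per installed file, tagged with the owning package, which is enough to answer “what owns this file” and “what do I remove to uninstall foo”.

import os

def record_install(db_path, package, version, files):
    # append one "package version path" line per installed file
    with open(db_path, "a") as db:
        for path in files:
            db.write("%s %s %s\n" % (package, version, path))

def uninstall(db_path, package):
    kept = []
    with open(db_path) as db:
        for line in db:
            name, version, path = line.strip().split(" ", 2)
            if name == package:
                os.remove(path)
            else:
                kept.append(line)
    with open(db_path, "w") as db:
        db.writelines(kept)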
Conclusion
Different people have different needs. Any solution from one camp which prevents other solutions is very unhelpful and counterproductive. I don’t want to get my Ubuntu deployment system screwed up by some toy dependency system – but I don’t want to prevent web developers from using their workflow either. I can’t see a single system solving all of this at once – the problem has not been solved by anything I know of; it is too big a problem to hope for a general solution. Instead of piling complexity and hacks over complexity and hacks, we should standardize the commonalities (of which there are plenty), and make sure different systems can be used by different projects.