Progress for numpy on windows 64 bits

The numpy 1.3.0 installer for Windows 64 bits does not work very well. On some configurations, it does not even import without crashing. The crashes are most likely due to bad interactions between the 64 bits mingw compilers and python (built with Visual Studio 2008). Although I know building numpy with the MS compiler works, I had no interest in doing so, because gfortran does not work with VS 2008: the fortran runtime from gfortran is incompatible with the VS 2008 C runtime (I get some scary linking errors).

So the situation was: either build numpy with the MS compiler, with no hope of getting scipy afterwards, or build a numpy with crashes which are very difficult to track down. Today, I realized that I might get somewhere if I could somehow use gfortran without using the gfortran runtime (e.g. libgfortran.a). I first tried calling a gfortran-built blas/lapack from a C program built with VS 2008, and after a couple of hours, I managed to get it working. Building numpy itself with full blas/lapack was a no-brainer then.

Now, there is the problem of scipy. Since scipy has some fortran code, which itself depends on the gfortran runtime when built with gfortran, I am trying to ‘fake’ a minimal gfortran runtime built with the C compiler. Since this mini runtime is built with the MS compiler and with the same C runtime as used by python, it should work as long as it is ABI compatible with the gfortran one. As gfortran is open source, this may not be intractable :)

With this technique, I could go relatively far in a short time. Among the packages which build and pass most of the test suite:
 – scipy.fftpack
 – scipy.lapack
 – some scipy.sparse

Some packages like cluster or spatial are not ANSI C compatible, so they fail to build; this should not be too hard to fix. The main problem is scipy.special: the C code is horrible, and many hacks are needed to build it. The Fortran code needs quite a few functions from the fortran runtime, so this requires some work. But ~300 unit tests of scipy pass already, so this is encouraging.


Python packaging: a few observations, cabal for a solution ?

The python packaging situation has been causing quite some controversy for some time. The venerable distutils has been augmented with setuptools, zc.buildout, pip, yolk and what not. Some people praise those tools, some others despise them; in particular, discussion about setuptools keeps coming up in the python community, and almost every time, the discussion goes nowhere, because what some people consider broken is a feature for others. It seems to me that the conclusion of those discussions is obvious: no tool can make everybody happy, so there has to be a system such that different tools can be used for different usages, without interfering with each other. The solution is to agree on common formats and data/metadata, so that people can build on them and communicate with each other.

You can find a lot of information from people who like setuptools/eggs, and their rationale for it. A good summary, from a web-developer POV, is given by Ian Bicking. I thought it would be useful to give another side of the story: people like me, whose needs are very different from the web-development crowd (the community which pushes eggs the most AFAICS).

Distutils limitation

Most of those tools are built on top of distutils, which is a first problem. Distutils is a giant mess, with tight, undocumented coupling between vastly different parts. It takes care of configuration (rarely used, except for projects like numpy which need to probe for fairly low level system dependencies), build, installation and package building. I think that’s the fundamental issue of distutils: the installation and deployment parts do not need to know so much about each other, and should be split. The build part should be easily extensible, without too much magic or assumptions, because different projects have different needs. The king here is of course make; ruby, for example, has rake and rant, etc…

A second problem of distutils is its design, which is not very good. Distutils is based on commands (one command does the build of C extensions, one command does the installation, one command builds eggs in the case of setuptools, etc…). Commands are fundamentally imperative in distutils: do this, and then that. This is far from ideal for several reasons:

You can’t pass options between commands

For example, if you want to change the compilation flags, you have to pass them to every concerned command.

Building requires handling dependencies

You declare some targets, which depend on some other targets, and the build tool builds a dependency graph to build everything in the right order. AFAIK, this is the ONLY correct way to build software. Distutils commands are inherently incapable of doing that. That’s one example where the web development crowd may be unaware of the need for this: Ian Bicking, for example, says that we do pretty well without it. Well, I know I don’t, and having a real dependency system for numpy/scipy would be wonderful. In the scientific area, large, compiled libraries won’t go away soon.
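The dependency-graph approach described above can be sketched in a few lines of Python. This is a toy illustration, not distutils or make code: targets declare what they depend on, and a correct build order falls out of a topological sort.

```python
# Toy illustration of dependency-driven builds: each target declares
# what it depends on, and the order falls out of a depth-first walk.
def build_order(deps):
    """deps maps a target to the list of targets it depends on."""
    order, seen = [], set()

    def visit(target):
        if target in seen:
            return
        seen.add(target)
        for dep in deps.get(target, []):
            visit(dep)
        order.append(target)  # a target comes after all its dependencies

    for target in deps:
        visit(target)
    return order

# A C extension depends on its object file, which depends on the source.
deps = {"spam.so": ["spam.o"], "spam.o": ["spam.c"], "spam.c": []}
print(build_order(deps))  # ['spam.c', 'spam.o', 'spam.so']
```

With such a graph, changing a compilation flag means invalidating the affected nodes and rebuilding only what depends on them, instead of re-running imperative commands in a fixed order.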

Fragile extension system

Maybe even worse: extending distutils means extending commands, which makes code reuse quite difficult, or causes some weird issues. In particular, in numpy, we need to extend distutils fairly extensively (for fortran support, etc…), and setuptools extends distutils as well. Problem: we have to take setuptools monkey-patching into account. It quickly becomes impractical when more tools are involved (the combinations grow exponentially).

Typical problem: how to make setuptools and numpy.distutils extensions cohabit? Another example: paver is a recent but interesting tool for common build-related tasks. Paver extends setuptools commands, which means it can’t work with numpy.distutils extensions. The problem can be somewhat summarized by: I have class A in project A, class B(A) in project B and class C(A) in project C – how do I handle B and C in a later package? I am starting to think it can’t be done reliably using inheritance (the current way).
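The inheritance problem can be shown with a toy sketch (hypothetical class names, not the actual distutils ones): two projects independently subclass the same base command, and a fourth party has to combine them.

```python
class BuildCmd:                      # stands in for a distutils command
    def run(self):
        return ["base"]

class SetuptoolsBuild(BuildCmd):     # one project extends the command...
    def run(self):
        return super().run() + ["setuptools"]

class NumpyBuild(BuildCmd):          # ...and another does, independently
    def run(self):
        return super().run() + ["numpy"]

# Combining both only works if every class cooperates via super();
# real extensions that monkey-patch or call BuildCmd.run directly break this.
class Combined(SetuptoolsBuild, NumpyBuild):
    pass

print(Combined().run())  # ['base', 'numpy', 'setuptools']
```

Python’s MRO makes this particular diamond work, but only because every class calls super(); the moment one extension hard-codes its parent or patches the base class in place, the combination silently changes behavior.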

Extending commands is also particularly difficult for anything non trivial, due to various issues: lack of documentation, horrible distutils code (attributes added on the fly for no good reason), and nothing being very well specified. You can’t retrieve where distutils builds a given file (library, source file, .o file, etc…), for example. You can’t get the name of the sdist target (you have to recreate the logic yourself, which is platform dependent). Etc…

Final problem: you can’t really call commands directly from code. As a recent example encountered in numpy: I want to install a C library built through the libraries argument of setup. I can’t just add the file to the install command. Since we extend the install command in numpy.distutils, it should have been simple: just retrieve the name of the library, and add it to the list of files to install. But you can’t retrieve the name of the built library from the install command, and the install command does not know about the build_clib one (the one which builds C libs).

Packaging, dependency management

This is maybe the most controversial issue. By packaging, I mean putting everything which constitutes the software (configuration, .py, .so/.pyd, documentation, etc…) in a format which can be deployed on many machines in a consistent way. For web developers, it seems this means something which can be put on a couple of machines, in a known state. For packages like numpy, this means being able to install on many different kinds of platforms, with different capabilities (different C runtimes, different math libraries, different optimized libraries, etc…). And other cases exist as well.

For some people, the answer is: use a sane OS with package management, and life goes on. Other people consider the setuptools way of doing things almost perfect: it does everything they want, and they don’t understand those pesky Debian developers who complain about multiple versions, etc… I will try to summarize the different approaches here, and the related issues.

The underlying problem is simple: any non trivial software depends on other things to work. Obviously, any python package needs a python interpreter. But most will also need other packages: for example, sphinx needs pygments and Jinja to work correctly. This becomes a problem because software evolves: unless you take great care about it, software will become incompatible with its older versions. For example, package foo 1.1 decided to change the order of arguments in one function, so bar, which worked with foo 1.0, will not work with foo 1.1. There are basically three ways to deal with this problem:

  1. Forbid the situation. Foo 1.1 should not break software which works with foo 1.0. It is a bug, and foo should be fixed. That’s generally the preferred OS vendor approach.
  2. Bypass the problem by bundling foo in bar. The idea is to distribute a snapshot of most of your dependencies, in a known working state. That’s the bundling approach.
  3. Install multiple versions: bar will require foo 1.1, but fubar still uses the old foo 1.0, so both foo 1.0 and foo 1.1 should be installed. That’s the “setuptools approach”.
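The foo 1.0 vs 1.1 breakage above is worth seeing concretely; a contrived sketch (foo and its function are hypothetical, shown here as two side-by-side definitions):

```python
# What foo 1.0 shipped:
def connect_v10(host, port):
    return f"{host}:{port}"

# foo 1.1 swapped the argument order:
def connect_v11(port, host):
    return f"{host}:{port}"

# bar was written against foo 1.0 and calls connect("db", 5432).
# Against foo 1.1 the same call still runs, but silently does the
# wrong thing -- the worst kind of incompatibility.
print(connect_v10("db", 5432))  # db:5432
print(connect_v11("db", 5432))  # 5432:db
```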

Package management à la linux is the most robust approach in the long term for the OS. If foo has a bug, only one version needs to be repackaged. For system administrators, that’s often the best solution. It has some problems, too: generally, things cannot be installed without admin privileges, and packages are often fairly old. The latter point is not really a problem, but inherent to the approach: you can’t request both stability and bleeding edge. And obviously, it does not work for the other OSes. It also means you are at the mercy of your OS vendor.

Bundling is the easiest. The developer works with a known working set of dependencies, and is not dependent on the OS vendor to get an up to date version.

Approach 3 sounds like the best solution, but in my opinion, it is the worst, at least in the current state of affairs as far as python is concerned, and when the software targets “average users”. The first problem is that many people seem to ignore the problems caused by multiple, side-by-side installations. Once you start saying “depends on foo 1.1 and later, but not higher than 1.3”, you start creating a management hell, where many versions of every package are installed. The more it happens, the more likely you get into a situation like the following:

  • A depends on B >= 1.1
  • A depends on C which depends on B <= 1.0

Meaning a broken dependency. This situation has to be avoided as much as possible, and the best way to avoid it is to maintain compatibility, such that B 1.2 can be used as a drop-in replacement for 1.0. I think too often people request multiple versions as a poor man’s replacement for backward compatibility; I don’t think that is manageable. If you need a known version of a library which keeps changing, bundling is better – generally, if you want deployable software, you should really avoid depending on libraries which change too often; I think there is no way around it. If you don’t care about deploying on many machines (which seems to be the case for web deployment), then virtualenv and other similar tools are helpful; but they can’t seriously be suggested as a general deployment tool for the same audience as .deb/.rpm/.msi/.pkg. Deployment for testing is very different from deployment to many machines you can’t control at all (the users’ ones).
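Detecting the broken dependency above is mechanical; a toy constraint check (hypothetical packages A, B, C, with versions as tuples):

```python
# Constraints collected on B: A wants B >= 1.1, C wants B <= 1.0.
# Each constraint is a (minimum, maximum) pair; None means unbounded.
constraints = [((1, 1), None), (None, (1, 0))]

def satisfiable(constraints, candidates):
    """Return the candidate versions meeting every (min, max) constraint."""
    ok = []
    for v in candidates:
        if all((lo is None or v >= lo) and (hi is None or v <= hi)
               for lo, hi in constraints):
            ok.append(v)
    return ok

available = [(1, 0), (1, 1), (1, 2)]
print(satisfiable(constraints, available))  # [] -> no version of B works
```

The point is not that resolvers are hard to write, but that once such conflicts appear, no resolver can fix them; only backward compatibility (or bundling) avoids the dead end.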

Now, having a few major versions of the most common libraries should be possible – after all, it is done for C libraries (the same library installed under different versions with different sonames). But python, contrary to C loaders, does not support explicit version loading independently of the name. You can’t say something like “import foo with v >= 1.1”; you have to use a new name for the module – meaning changing every library user’s source code. So you end up with hacks as used by setuptools/easy_install, which are very fragile (sys.path overriding, PYTHONPATH mess, easy-install.pth, etc…). At least for me, that’s a constant source of frustration, to the point that I effectively forbid setuptools to do anything on my machine: easy-install.pth is read only, and I always install with --single-version-externally-managed.
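Since “import foo with v >= 1.1” does not exist, the closest thing is checking the version after the import has already happened – which is exactly what all the sys.path machinery tries to paper over. A minimal sketch (the `require` helper and the `foo` stand-in module are hypothetical):

```python
import types

def require(module, minimum):
    """Fail if an already-imported module is too old. Note the limitation:
    there is no way to pick among several installed versions here --
    whatever sys.path resolved first is what you already got."""
    version = tuple(int(x) for x in module.__version__.split("."))
    if version < minimum:
        raise ImportError(f"need {minimum}, found {module.__version__}")
    return module

foo = types.SimpleNamespace(__version__="1.0")  # stand-in for a real module
try:
    require(foo, (1, 1))
except ImportError as e:
    print(e)  # need (1, 1), found 1.0
```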

With things like virtualenv and pip freeze, I don’t understand the need for multiple versions of the same libraries installed system-wide. I can see how python does not make it easy to support tools like virtualenv and pip directly (that is, without setuptools), but maybe people should focus on enabling virtualenv/zc.buildout usage without setuptools hacks (sys.path hacking, easy-install.pth), basically without setuptools, instead of pushing the multiple-version thing on everyone?

Standardize on data, not on tools

As mentioned previously, I don’t think python should standardize on one tool; the problem is just too vast. I would be very frustrated if setuptools became the tool of choice for python – but I understand that it solves issues for some people. Instead, I hope the python community will be able to standardize on metadata. Most packages have relatively simple needs, which could be covered with a set of static metadata.

It looks like such a design already exists: cabal, the packaging tool for haskell (thanks to Fernando Perez for pointing me to cabal).

Cabal works with two files:

  • setup.hs -> the equivalent of our setup.py: it can use haskell, and as such can do pretty much anything
  • .cabal -> static metadata.

For example:

Name: HUnit
Version: 1.1.1
Cabal-Version: >= 1.2
License: BSD3
License-File: LICENSE
Author: Dean Herington
Category: Testing
Synopsis: A unit testing framework for Haskell
Build-Depends: base
Exposed-Modules:
  Test.HUnit.Base, Test.HUnit.Lang, Test.HUnit.Terminal,
  Test.HUnit.Text, Test.HUnit
Extensions: CPP

Even for a developer who knows nothing about haskell (like me :) ), this looks obvious. Basically, the classifiers and arguments of the distutils setup function go into the static file in haskell. By being a simple, readable text file, other tools can use it pretty easily. Of course, we would provide an API to get those data, but the common infrastructure would be the file format and metadata, not the API.
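Such a flat format really is trivial to consume from any language; a sketch of a parser for the simple `Field: value` part (deliberately ignoring cabal’s indentation, continuation lines and conditionals):

```python
def parse_metadata(text):
    """Parse simple 'Field: value' lines into a dict.
    Toy version: no continuation lines, no nesting, no conditionals."""
    meta = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        field, _, value = line.partition(":")
        meta[field.strip().lower()] = value.strip()
    return meta

sample = """\
Name: HUnit
Version: 1.1.1
License: BSD3
Synopsis: A unit testing framework for Haskell
"""
print(parse_metadata(sample)["version"])  # 1.1.1
```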

Note that the .cabal file allows for conditionals, albeit in a very structured form. I don’t know whether this should be followed or not: the point of a static file is that it is easily parsable, and having conditionals severely decreases the simplicity. OTOH, a simple way to add options is nice – and other almost-static metadata files for packaging, such as RPM .spec files, allow for this.

It could also be simple to convert many distutils packages to such a format; actually, I would be surprised if the majority of packages out there could not be automatically translated to such a mechanism.

Then, we could gradually deprecate some distutils commands (to end up with configure/build/install, with configure optional), such that different build tools could be plugged in for the build itself – distutils could be used for simple packages (the ones without compiled extensions), and other people could use other tools for more advanced needs (something like what I did with numscons, which bypasses distutils entirely for building C/C++/Fortran code).


Uninstall

Uninstall is another often requested feature. I think it is a difficult feature to support reliably. Uninstall is not just about removing files: if you install a daemon, you should stop it, you may ask about configuration files, etc… It should at least support pre-install/post-install hooks and the corresponding uninstall equivalents. But the main problem for python is how to keep a list of installed packages/files. Since python packages can be installed in many locations, there should be one db (which could, and most likely should, be a simple flat file) for each site-packages. I am not yet familiar with haskell module management, but it looks like that’s how haskell does it.
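A flat-file install db per site-packages could be as simple as recording, at install time, every file a package owns; a sketch (not any existing tool’s format, and the file/db names are made up):

```python
import os, tempfile

def record_install(db_path, package, files):
    """Append one 'package<TAB>file' line per installed file."""
    with open(db_path, "a") as db:
        for f in files:
            db.write(f"{package}\t{f}\n")

def installed_files(db_path, package):
    """Read back the files owned by a package -- the basis for uninstall."""
    with open(db_path) as db:
        return [line.split("\t", 1)[1].rstrip("\n")
                for line in db if line.startswith(package + "\t")]

# One db next to each site-packages; here a temp dir stands in for it.
db = os.path.join(tempfile.mkdtemp(), "installed-files.txt")
record_install(db, "foo", ["foo/__init__.py", "foo/_foo.so"])
print(installed_files(db, "foo"))  # ['foo/__init__.py', 'foo/_foo.so']
```

Removing the files is then easy; the hard parts the text mentions (daemons, configuration files, hooks) are policy, not bookkeeping.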


Conclusion

Different people have different needs. Any solution from one camp which prevents other solutions is very unhelpful and counterproductive. I don’t want to get my ubuntu deployment system screwed up by some toy dependency system – but I don’t want to prevent web developers from using their workflow either. I can’t see a single system solving all this: the problem has not been solved by anything I know of, and it is too big a problem to hope for a general solution. Instead of piling complexity and hacks over complexity and hacks, we should standardize the commonalities (of which there are plenty), and make sure different systems can be used by different projects.

From ctypes to cython for C library wrapping

Since the cython presentation by R. Bradshaw at Scipy08, I have wanted to give cython a shot for wrapping existing C libraries. Up to now, my method of choice has been ctypes, because it is relatively simple, and can be done in python directly.

The problem with ctypes

I was not entirely satisfied with ctypes, in particular because it is sometimes difficult to control some platform dependent details, like type sizes and so on; ctypes of course has the notion of platform-independent types with a given size (int32_t, etc…), but some libraries define their own types, with the underlying implementation depending on the platform. Also, making sure the function declarations match the real ones is awkward; ctypes’ author Thomas Heller developed a code generator to generate those declarations from headers, but they are dependent on the header you are using; some libraries unfortunately have platform-dependent headers, so in theory you should generate the declarations at installation time, but this is awkward because the code generator uses gccxml, which is not widely available.
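The kind of platform detail that bites here is easy to demonstrate: ctypes declarations live in Python and are never checked against the real header. A small illustration (no third-party library involved; the int64 comment describes a hypothetical header):

```python
import ctypes

# ctypes has fixed-size types, but a library's own typedef may be
# 32 or 64 bits depending on platform and build options:
print(ctypes.sizeof(ctypes.c_long))   # 8 on most 64-bit Unix, 4 on Windows

# Nothing stops you from writing an inconsistent declaration; ctypes
# cannot compare it against the C header, so the mismatch only shows
# up at runtime, if at all.
wrong = ctypes.c_int32 * 2     # the header may actually say int64_t[2]
print(ctypes.sizeof(wrong))    # 8 bytes where the library expects 16
```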

Here comes cython

One of the advantages of cython for low level C wrapping is that cython declarations need not be exact: in theory, you can’t pass an invalid pointer, for example, because even if the cython declaration is wrong, the C compiler will complain about the C file generated by cython. Since the generated C file uses the actual header file, you are also pretty sure to avoid any mismatch between declarations and usage; at worst, the failure will happen at compilation time.

Unfortunately, cython does not have a code generator like ctypes. For a long time, I have wanted to add sound output capabilities to audiolab, in particular for Mac OS X and ALSA (linux). Unfortunately, those APIs are fairly low level. For example, here is an extract of AudioHardware (the HAL of CoreAudio) usage:

AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                         &count, (void *) &(audio_data.device))

AudioDeviceGetProperty(audio_data.device, 0, false,
                       kAudioDevicePropertyBufferSize,
                       &count, &buffer_size)

The Mac OS X convention is that variables starting with k are enums, defined like:

kAudioDevicePropertyDeviceName = 'name',
kAudioDevicePropertyDeviceNameCFString = kAudioObjectPropertyName,
kAudioDevicePropertyDeviceManufacturer = 'makr',
kAudioDevicePropertyDeviceManufacturerCFString = kAudioObjectPropertyManufacturer,
kAudioDevicePropertyRegisterBufferList = 'rbuf',
kAudioDevicePropertyBufferSize = 'bsiz',
kAudioDevicePropertyBufferSizeRange = 'bsz#',
kAudioDevicePropertyChannelName = 'chnm',
kAudioDevicePropertyChannelNameCFString = kAudioObjectPropertyElementName,
kAudioDevicePropertyChannelCategoryName = 'ccnm',
kAudioDevicePropertyChannelNominalLineLevelNameForID = 'cnlv'
...
These use the implicit conversion from char[4] to int – which is not supported by cython AFAIK. With thousands of enums defined this way, any process which is not mostly automatic will be painful.
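Constants like 'bsiz' are four-character codes packed into a 32-bit integer; in Python, the conversion the C compiler does implicitly looks like:

```python
def fourcc(code):
    """Pack a 4-char code into the big-endian integer C makes of 'bsiz'."""
    assert len(code) == 4
    return int.from_bytes(code.encode("ascii"), "big")

print(hex(fourcc("bsiz")))  # 0x6273697a
```

So each enum value could be emitted as a plain integer in the generated cython file, but doing that by hand for thousands of constants is exactly the kind of work a code generator should do.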

During the Scipy08 cython presentation, I asked whether there was any plan toward automatic generation of cython ‘headers’, and Robert fairly answered “please feel free to do so”. As announced a couple of days ago, I have taken the idea of the ctypes code generator, and ‘ported’ it to cython; I have used it on scikits.audiolab to write a basic ALSA and CoreAudio player, and to convert my old ctypes-based wrapper to sndfile (a C library for audio file IO). This has worked really well: the optional typing in cython makes some parts of the wrapper easier to implement than in ctypes (I don’t need to check whether an int-like argument won’t overflow, for example). Kudos to the cython developers!

Usage on alsa

For completeness, I added a simple example on how to use the xml2cython codegen with ALSA, as used in scikits.audiolab. Hopefully, it should show how it can be used for other libraries. First, I parse the headers with gccxml, using the ctypes codegenlib helper:

h2xml /usr/include/alsa/asoundlib.h -o asoundlib.xml

Now, I use the xml2cython script to parse the xml file and generate some .pxd files. By default, the script will pull out almost everything from the xml file, which would generate a big cython file. xml2cython has a couple of basic filters, though, so that I only pull out what I want; in the alsa case, I was mostly interested in a couple of functions, so I used the input file filter:

xml2cython.py -i input -o alsa.pxd alsa/asoundlib.h asoundlib.xml

This generates alsa.pxd with declarations of the functions whose names match the list in input, plus all the typedefs/structures used as arguments (they are recursively pulled out, so if one argument is a function pointer, the types in the function pointer should hopefully be pulled out as well). The exception is enums: every enum defined in the parsed tree from the xml is put out automatically in the cython file, because ‘anonymous’ enums are usually not part of function declarations in C (enums are not typed in C, so it is not so useful). This means every enum coming from standard header files would be included as well, which is ugly – as well as making cython compilation much slower. So I used a location filter as well, which tells xml2cython to pull out only enums which are defined in files matched by the filter:

xml2cython.py -l alsa -i input -o alsa.pxd alsa/asoundlib.h asoundlib.xml

This works since every alsa header on my system is of the form /usr/include/alsa/*.h. I used something very similar on the AudioHardware.h header in CoreAudio. The generated cython can be seen in the scikits trunk here. Doing this kind of thing by hand would have been particularly error-prone…

cython-codegen: cython code generator from gccxml files

I have enjoyed using cython to wrap C libraries recently. Unfortunately, some libraries I was interested in (Alsa, CoreAudio) are quite big. In particular, they have a lot of structures, typedefs and enumerations which are easy to get wrong when done manually. Since the problem is quite similar to wrapping with ctypes (my former method of choice), I thought it would be interesting to do something like the ctypeslib code generator for cython – hence the cython-codegen “project”, available on github:

Basic usage goes like this to generate a .pyx file for the foo.h header:

gccxml -I. foo.h -o foo.xml
xml2cython.py -l 'foo' foo.h foo.xml

I can’t stress enough that this is little more than a throw-away script, and it is likely to fail on many header files, or to generate invalid cython code. I could use it successfully on non trivial headers though, like alsa or CoreAudio on Mac OS X. Your mileage may vary.

Going away from bzr toward git

(This is a small rant about why I like bzr less and less and git more and more; it is only a personal experience, not a general git vs bzr thing – take it as such.)

Source control systems are a vital tool for any serious software project. They provide a history of the project, are an invaluable tool for the release process, etc… When I started to develop some code outside school exercises, I wanted to learn one for my own projects.

Using svn

This was not so long ago – 3-4 years – and at that time, SVN was the logical choice. I wanted to use it on my machine, to keep history and be able to go back; since I mainly code for scientific research, the ability to roll back in time was particularly important.

Using SVN did not really make sense to me at that time: using it to track other projects was of course easy (checking out, log, commit), but I could not really understand how to use it for my own projects:

  • I could not understand their branches and tags concept. Note that I did not even know what those terms meant at the time; I did not understand why it would matter at all where I put the tags and branches, why I needed to copy things for tags, etc… From the svn-book, it was not really clear what the difference between branches and tags was.
  • Setting up svn on one machine is awkward: why should I create a repository somewhere, and populate it from somewhere else? How should I back up the repository?
  • Getting back in time is unintuitive: you have to “merge back” in time the revisions you want to roll back. This is really error prone.

Bzr, the first source control which made sense to me

In the end, I found it easier to just use tarballs to save the state of my projects (my projects are always quite small). Then, a bit more than two years ago, I discovered bzr (bzr-ng at that time): it was a better arch, the SCS developed by Tom Lord for distributed development. Arch had always intrigued me, but was extremely awkward: it could not handle windows very well, there were strange filenames, and it was invasive in the source code. Even checking out other projects like rhythmbox was painful. bzr, on the contrary, was really simple:

  – Creating a new project? bzr init in the top directory, then adding the code and committing. No separate directory for the db, no “bzradmin” to create the repository.
  – Branches and tags (tags came a bit later in bzr, starting at version 0.15 IIRC) were dead easy: bzr branch to create a branch, no need for copy commands, etc… Tags are even easier.

I have used bzr ever since for all my projects; in the meantime, I have become much more involved with several open source projects, which all use svn, and I always felt svn was an inferior, more complicated tool compared to bzr. With bzr, I understood what branches could be used for, and more generally how an SCS can be helpful for development.

Since bzr was so pleasant to use, I of course wanted to use it for the projects I was involved with, so I was really excited by bzr-svn for tracking svn repositories. Unfortunately, bzr-svn has never been a really pleasant experience. One problem was that the python wrappers for libsvn were really buggy (to the point that bzr-svn now has its own wrappers). Also, it was extremely slow to import revisions, and failed on some repositories I used it on. That’s how I started to look at other tools, in particular hg: hg had the ability to import svn, and it was more reliable than bzr-svn in my experience. But it was not really practical for committing back to svn repositories, so I never investigated it really deeply.

Bzr annoyances

At the same time, there were some things about bzr which never thrilled me. Two in particular:

One branch per directory

That’s a conscious design decision from the bzr developers. It means it is a bit simpler to know where you are (a branch is a path), but I find it awkward when you need to compare branches or “jump” from branch to branch. When you are deep down inside the tree of your project, comparing branches (diff, log, etc…) becomes annoying because you have to refer to branches by their paths.

Revision numbers

Each commit is assigned a revid by bzr, which is unique per repository. That’s the identifier bzr deals with internally. But for most UI purposes, you deal with revnos, that is, simple integer numbers: of course, because of the distributed nature of bzr, those numbers are not unique for a repository, only within a branch. I find this extremely confusing. Again, this appears most clearly when comparing several branches at the same time. For example, when I have not worked on a project for a long time, I may not remember the relative state of different branches: the bzr command missing is then very useful to know which commits are unique to one branch. But the numbers mean different things in different branches, which means they are useless in that case; being useless would actually have been ok, but they are in fact very confusing.

For example, I recently went back to a branch I had not worked on for more than one month. Let’s say my current development focus is in branch A, and I wanted to see the status of branch B. I can use bzr missing for that purpose. I can see that 5 revisions, from 300 to 305, are missing. I then go into branch B, and study the source code a bit, in particular with bzr blame. I see some code with revisions under 300 in branch B which I could not see in branch A. Now, this was confusing: any revision before 300 is in A too according to bzr missing, so how is it possible for bzr blame to report different code in A and B for a section committed with a revno < 300? The reason is that revision 305 is actually a merge, and when going through the detailed log in branch B, I can see that revision 305 contains 296.1.1, then 299.1, 299.2, 299.3 and 299.4. I can’t see how this is useful behavior. Maybe I am biased as someone doing a lot of math all day long, but having 296.1.1 after 304 does not make any sense to me. What’s the point of using supposedly simple numbers when they have an arbitrary ordering, which changes depending on where you are seeing them? SVN revnos were already quite confusing when using branches, but bzr made it worse in my opinion.


There were also things which were less significant for me, but still unpleasant: bzr startup is really slow, and its use in scripts is not really practical – if you want to do anything substantial, you have to study the plugin API. Also, it started to become a bit inflexible for some things: for example, incorporating a second project also tracked by bzr into a first project is difficult (if not impossible; I could never manage to do it), history-related operations are often slow, using a lot of branches takes a lot of space unless you use a shared repository, which feels like a hack more than a real solution, etc…

(Re)-Discovering git

About the same time, I had to use git for a project I was interested in. I found it much easier to use than when I had looked at it for the first time. There was no cogito anymore, and the basic commands were like bzr’s. I decided to give git-svn a try, and it was much faster than bzr-svn at importing some projects; the repositories were extremely small [1]. Also, although the git UI is still quite arcane, I found git itself a pleasure to use: it felt simple, because the concepts are simple – much more so than bzr, in fact. sha-1s for revisions are not awkward, because you barely use them at the UI level (the git UI is very powerful for human revision handling: no numbers, but you can easily ask for a parent in a branch or in the DAG relative to a given revision, you can look up by committer, by string in the commit or the code, by date, etc…); bzr revnos feel like a hack after being used to git. For example, wherever I am, if I want to compare branch2 to branch1, in git I can do:

git log branch1..branch2
git diff branch1..branch2

Also, git is scriptable, which appeals to the Unix user in me. I can understand the POV of the bzr developers concerning extensibility with plugins (it is not unlike the argument of UNIX pipes vs Windows COM extensions as developed by Miguel in his Let’s make Unix not suck [2]), but I prefer the git model in the end. Bzr’s decision to go toward extensibility with plugins is not without merit: I think the good error reporting of bzr is partly a consequence of this choice. OTOH, git messages can be cryptic; but git’s simplicity at the core level makes this much less significant than I first expected.

A key difference of git compared to bzr is that git is really just a content tracker. It does not track directories at all, or filenames, for example: instead, it tries to detect when you rename files. I remember at least one occasion when this was mentioned on the bzr ML [3], where a bzr developer argued that bzr could do like git while keeping explicit meta-information (recorded when you tell bzr to rename a file). One obvious drawback is that depending on how a change was made to the tree – patch vs merge, for example – bzr’s behavior will differ; this is very serious in my opinion. Especially for a language like python, where file and directory names matter, directory renames should be propagated quickly, and can never be done lightly anyway. And it means git can be much better at dealing with renames when importing external data, merging between unrelated branches, etc… Because its rename-detection algorithm is used all the time, it has to work quite well. It is a bit similar to the merge capability of distributed SCS: there is no reason for them to be inherently better at merging, but because they would be unusable without good merge tracking, this has to work reliably from the start in a DVCS. Even if in theory bzr could detect renames like git (in addition to its explicit rename handling), in practice it has not happened, and as far as I am aware, nobody has done any work in that direction.

Another advantage of git which I did not mention, because it has been rehashed ad nauseam and is the most obvious one to anyone using both tools: git is incredibly fast. Many things I would never do with bzr because they would take too much time are doable with git; sometimes git favors speed too much (in its rename detection, for example: you should really be aware of the -M and -C options of log and other history-related commands), but even when telling git to spend time detecting renames, it is still much faster than bzr.
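A quick sketch of what those options change in practice, using a throwaway repository (with rename detection explicitly disabled via --no-renames, a rename shows up as a delete plus an add):

```shell
# Throwaway repo demonstrating rename detection with -M and --follow.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo content > old.txt
git add old.txt
git commit -qm "add old.txt"
git mv old.txt new.txt
git commit -qm "rename old.txt to new.txt"

git log --oneline --stat --no-renames new.txt  # detection off: delete + add
git log --oneline --stat -M new.txt            # -M: detected as a rename
git log --oneline --follow new.txt             # --follow: history across the rename
```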

Finally, git is getting a lot of traction: it is used by Linux, Xorg, android, RoR, a lot of freedesktop projects, and is being discussed for KDE. This means it will become even better, and that other DVCS will have a very hard time competing. As a very concrete example: git UI improvements were much more significant than bzr speed improvements during the last year (bzr speed has not improved much in my experience since 0.92 and the pack format: a long history plus network access makes bzr almost unusable for big projects with a large history contributed by a large team across the world; OTOH, git 1.5.3 was the first git version I could use without hurting my head too much).

For all those reasons – simplicity of the core model, flexibility, scriptability, and speed – I think I will start to use git for all my projects, and give up on bzr. I think bzr is still superior to git for some things, and depending on the project or the tree you are tracking, bzr may be better (in particular because it tracks directories, which git does not, and this can matter; I am also not sure whether git would be appropriate for tracking /etc or your $HOME).

[1] For every project I have imported so far, the git clone is as big as or smaller than a svn checkout; you read that right: one revision checked out from svn is often bigger than the full history. I have imported the full history of numpy, scipy, and scikits on my github account, and I have not used much more than half of my 100 MB quota.



A python 2.5.2 binary for Mac OS X with dtrace enabled

As promised a few days ago, I took the time to build a .dmg of python from the official sources + my patch for dtrace. The binary is built with the script in the Mac/ directory of python, and except for the dtrace patch, no other modification has been made, so it should be usable as a drop-in replacement for the official binary from python.org. You can find the binary here.

Again, use it at your own risk. If you prefer building it yourself, or with different options, the patch can be found here.

Building dtrace-enabled python from sources on Mac OS X

One highlight of Mac OS X Leopard is dtrace. Providers for ruby and python are also available, but only with the “system” interpreters (the ones included out of the box). If you install python from python.org, you can’t use dtrace anymore. Since the code to make python dtrace-enabled is available in the open source corner of Apple, I thought it would be easy to apply it to the pristine sources available from python.org.

Unfortunately, for some strange reason, Apple only provides its changes in the form of ed scripts applied through a Makefile; the changes are not feature-specific: you just get a bunch of ed scripts, one per modified file, with the dtrace and Apple-specific changes all mixed together. I managed to extract the dtrace part of this mess, so that I can apply only the dtrace-related changes. The goal is to have a python as close as possible to the official binary available from python.org. The patch can be found there.

How to use it?

  • Untar the python 2.5.2 tarball
  • Apply the patch
  • Regenerate the configure scripts by running autoconf
  • configure as usual, with the additional option --enable-dtrace (the configure script is buggy, and will fail if you don’t enable dtrace, unfortunately)
  • build python (make, make install).
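Put together, the steps above look like this (a sketch: the tarball and patch file names below are placeholders, adjust them to whatever you actually downloaded):

```shell
# Build a dtrace-enabled python 2.5.2 from the pristine tarball plus the patch.
# Python-2.5.2.tgz and python-dtrace.patch are placeholder names.
tar xzf Python-2.5.2.tgz
cd Python-2.5.2
patch -p1 < ../python-dtrace.patch    # apply the dtrace patch
autoconf                              # regenerate the configure script
./configure --enable-dtrace           # configure fails without this option
make
sudo make install
```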
If time permits, I will post a .dmg. Needless to say, you run this at your own risk.