A few remarks on distutils2

Disclaimer: I am working on a project which may be seen as a competitor to
the distutils2 effort, and I am quite biased against the existing packaging
tools in python. On the other hand, I know distutils extremely well, having
maintained the numpy.distutils extensions for several years, and most of my
criticisms should stand on their own.

There is a strong consensus in the python community that the current packaging
tools (distutils) are too limited. There have been various attempts to improve
the situation, through setuptools, the distribute fork, etc… Since the
beginning of this year, the focus has shifted toward distutils2, which is
scheduled to be part of the stdlib for python 3.3, while staying compatible
with python 2.4 onwards. A first alpha was released recently, and I thought it
was a good occasion to look at what has happened in that space.

As far as I can see, distutils2 had at least the three following goals:

  • standardize a lot of setuptools practices through PEPs and implement them.
  • refactor distutils code and add a test suite with a significant coverage.
  • get rid of setup.py for most packages, while adding hooks for people who
    need to customize their build/installation/deployment process

I won’t say much about the first point: most setuptools features are useless
to the scipy community, and are generally poor reimplementations of existing
solutions anyway. As far as I can see, the third point is still being
discussed, and is not present in the mainline.

The second point is more interesting: distutils code quality was pretty low,
but the main issue was (and still is) the overall design. Unfortunately, adding
tests does not address the reliability issues which have plagued the scipy
community (and I am sure other communities as well). The main issues w.r.t.
build and installation remain:

  • unreliable installation: distutils installs things by simply copying the
    trees built into a build directory (build/ by default). This is a problem
    when you change your source code (e.g. by renaming some modules), as
    distutils will add things to the existing build tree, and install will then
    copy both the old and the new targets. Exactly as with distutils, the only
    way to get a reliable build is to first rm -rf build. This alone is a
    consistent source of issues for numpy/scipy, as many end-users are bitten
    by it. We somewhat alleviate this by distributing binary installers (which
    know how to uninstall things and are built by people familiar with
    distutils idiocy)
  • Inconsistencies between compiler classes. For example, the MSVCCompiler
    class defines its compiler executable as a string, set as the attribute
    cc, whereas most other compiler classes define a compiler_so attribute
    (which is a list in their case). They don’t share the same methods either
    (see the short illustration after this list).
  • No consistent, centralized API to obtain basic compilation options (CC
    flags, etc…)
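
To illustrate the compiler-class inconsistency concretely, here is a small,
simplified snippet (the attribute names are the real distutils ones, but the
MSVC part only makes sense on Windows with the MS compiler installed):

from distutils.ccompiler import new_compiler

# the unix compiler classes expose the compiler invocation as a list...
unix = new_compiler(compiler="unix")
print(unix.compiler_so)   # e.g. ['cc']

# ...while MSVCCompiler exposes a bare string, under a different name
msvc = new_compiler(compiler="msvc")
msvc.initialize()         # Windows only
print(msvc.cc)            # e.g. 'cl.exe'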

Even more significantly, it means that the fundamental issue of extensibility
has not been addressed at all, because the command-based design is still there.
This is by far the worst part of the original distutils design, and I fail to
see the point of a backward-incompatible successor to distutils which does not
address this issue.

Issues with command-based design

Distutils is built around commands, which correspond almost one to one to
command-line commands: when you run “python setup.py install”, distutils
essentially calls the install command’s run method after some initialization.
This by itself is a relatively common pattern, but the issue lies elsewhere.

Options handling

First, each command has its own set of options, but the options of one command
often affect other commands, and there is no easy way for one command to know
the options of another. For example, you may want to know the options of the
install command at build time. The usual pattern is to instantiate the command
whose options you want and query it, e.g. through get_finalized_command:

# e.g. from within another command, to find out where things get installed
install = self.get_finalized_command("install")
install_lib = install.install_lib

This is hard to use correctly because every command can be reset by other
commands, and some commands cannot be instantiated this way depending on the
context. Worse, this can cause unexpected issues later on if you are calling a
command which has not already been run (like the install command from within a
build command). Quite a few subtle bugs in setuptools and in numpy.distutils
were/are caused by this.

According to Tarek Ziade (the main maintainer of distutils2), this is addressed in a distutils2 development branch. I cannot comment on it as I have not looked at the code yet.

Sub-commands

Distutils has a notion of commands and “sub-commands”. Sub-commands may
override each other’s options through the set_undefined_options function, which
creates new attributes on the fly. This is every bit as bad as it sounds.

Moreover, the hardcoding of dependencies between commands and sub-commands
significantly hampers extensibility. For example, in numpy, we use some
templated source files which are processed into .c: this is done in the
build_src command. Now, because the build command of distutils does not know
about build_src, we need to override build as well so that it calls build_src.
Then came setuptools, which of course did not know about build_src either, so
we had to conditionally subclass from setuptools to run build_src too [1] (see
the sketch below). Every command which may potentially trigger this command may
need to be overridden, with all the complexity that follows. This is completely
insane.
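
To give a concrete idea, here is a rough, simplified sketch of the kind of
overriding involved (this is not the actual numpy.distutils code, and build_src
stands here for whatever source-generating command you need):

# conditionally subclass the setuptools command if setuptools is in use
try:
    from setuptools.command.build_ext import build_ext as _build_ext
except ImportError:
    from distutils.command.build_ext import build_ext as _build_ext

class build_ext(_build_ext):
    def run(self):
        # make sure the generated .c files exist before compiling extensions
        self.run_command("build_src")
        _build_ext.run(self)

# ...and the same dance is needed for every other command (build, sdist, ...)
# which may end up needing the generated sources.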

Hooks

Distutils2 has added the notion of hooks, which are functions to be run
before/after the command they hook into. But because they interact with
distutils2 through the command instances, they share all the aforementioned
issues, and I suspect they won’t be of much use.

More concretely, let’s consider a simple example: a file generated from a
template (say config.pkg.in), containing some information only known at build
time (like the version and build date). Doing this correctly is surprisingly
difficult:

  • you need to generate the file in a build command, and put it at the right
    place in the build directory
  • you need to install it at the right place (in-place vs normal build, egg
    install vs non-egg install vs externally_managed install)
  • you may want to automatically include the template file (config.pkg.in) in
    sdist
  • you may want the file to be installed in bdist/msi/mpkg, so you may need to
    know all the details of those commands

Each of these steps may be quite complex and error-prone. Some are impossible
with a simple hook: AFAIK, it is currently impossible to add files to sdist
without rewriting the sdist.run function.
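
As an illustration of the first bullet alone, here is a minimal sketch written
against plain distutils (config.pkg.in, the @VERSION@ placeholder and the mypkg
layout are invented for the example):

import os
from distutils.command.build_py import build_py as _build_py

class build_py(_build_py):
    def run(self):
        _build_py.run(self)
        # fill in the template and put the result into the build tree, so
        # that the install step picks it up from there
        with open("config.pkg.in") as f:
            content = f.read().replace("@VERSION@", "1.0")
        target = os.path.join(self.build_lib, "mypkg", "config.pkg")
        with open(target, "w") as f:
            f.write(content)

And this covers only the first step: installation layouts, sdist and the
various binary installers all need their own handling.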

To deal with this correctly, the whole command business needs a significant
redesign. Several extremely talented people in the scipy community have
independently attempted to improve this over the last decade or so, without any
success. Nothing short of a rewrite will work there, and commands constitute a
good third of the distutils code.

Build customization

distutils2 does not improve the situation w.r.t. building compiled code, but I
guess that’s relatively specific to big packages like numpy, scipy or pywin32.
Needless to say, the compiler classes are practically impossible to extend
(they don’t even share a consistent interface), and very few people know how to
add support for new compilers, new tools or new kinds of binaries (ctypes
extensions, for example).

Overall, I don’t quite understand the rationale for distutils2. It seems that
most of the setuptools standardization could have happened without breaking
backward compatibility, and the improvements are too minor for people with
significant distutils extensions to switch. I certainly don’t see myself
porting numpy.distutils to distutils2 anytime soon.

[1]: it should be noted that most setuptools issues are really distutils
issues, in the sense that distutils does not provide the right abstractions to
be extended.

Bento 0.0.4 released!

I have just released the new version of Bento, 0.0.4. You can get it on github as usual.

Bento itself did not change much, except for the support of sub-packages and a few other things. But bento can now build both numpy and scipy on the “easy” platforms (linux + Atlas + gcc/clang). This post shows a few cool things that you can do now with bento.

Full distribution check

The best way to use this version of bento is to do the following:

# Download bento and create bentomaker
git clone http://github.com/cournape/Bento.git bento-git
cd bento-git && python bootstrap.py && cd ..
# Download the _bento_build branch from numpy
git clone http://github.com/cournape/numpy.git numpy-git
cd numpy-git && git checkout -b bento_build origin/_bento_build
# Create a source tarball from numpy, configure, build and test numpy
# from that tarball
../bento-git/bentomaker distcheck

For reasons I am still unclear about, the test suite fails to run from distcheck for scipy, but that seems to be more of a nose issue than a bento one.

Building numpy with clang

Assuming you are on Linux, you can try to build numpy with clang, the LLVM-based C compiler. Clang compiles faster than gcc, and generally gives better error messages. Although bento itself does not have any support for clang yet, you can easily play with the bento scripts to do so. In the top bscript file from numpy, at the end of the post_configure hook, replace every compiler with clang, i.e.:

# override both the C compiler and the compiler used for python extensions
for flag in ["CC", "PYEXT_CC"]:
    yctx.env[flag] = ["clang"]

Once the project is configured, you can also get a detailed look at the configured options in the file build/default.env.py. You should not modify this file, but it is very useful for debugging build issues. Another aid for debugging configuration options is the build/config.log file. Not only does it list every configuration command (both successes and failures), it also shows the source content as well as the command output.

What’s coming next?

Version 0.0.5 will hopefully have a shorter release cycle than 0.0.4. The goal for 0.0.5 is to make bento good enough that other people can jump into bento development.

The main features I am thinking about are windows and python 3 support, plus a lot of code cleaning and documentation. Windows support should not be too difficult: it is mainly about ripping the Visual Studio support out of numscons/scons and adapting it to yaku. I have already started working on python 3 support as well – the main issue is bootstrapping bento, and finding an efficient process to work on both python 2 and 3 at the same time. Depending on the difficulty, I will also try to add proper dependency handling in yaku for compiled libraries and dependent headers: ATM, yaku does not detect header changes, nor does it rebuild an extension if the linked libraries change. An alternative is to bite the bullet and start working on integration with waf, which already does all this internally.

What’s coming in for bento 0.0.4

I initially intended to release the new version (0.0.4) of bento around
mid-to-late August, but it now seems like the end of September is more likely.
The reason is that I have been working pretty hard on bento and yaku for
complex builds in the last few weeks, where complex means numpy/scipy.

The main reason for making scipy buildable with bento is to get a “feel” for
how extensible bento really is. I think that since 0.0.3, bento has been fairly
usable, but extensibility is really what bento is about. The only way I know to
get an extensible design is to actually extend it in as many scenarios as
possible, and as far as complex distutils-based builds go, scipy is a pretty
good scenario.

The bottom line: I expect a fully working bento build of scipy within a few
days (most of the hairy fortran stuff now builds and passes the tests).

Major changes from 0.0.3

No backward-incompatible changes are required for the bento.info format. The
major change is recursive package support, which ended up being more complex
than anticipated. I already described this feature in a previous post: it
mostly boils down to splitting a big bento.info into several “sub-bento” files
in subdirectories.

Implementation-wise, it required a redesign of the internal representation for
files. The issue is how to know that two file names refer to the same file: I
quickly realized that using raw filenames is too complex and too fragile, and
decided to re-use the Node class from waf, which builds an internal
representation of the filesystem. The conversion is still going on, but it has
already simplified a lot of the hairy code I used to write in bento (and in
distutils previously). It particularly helps to compute the relative path
between two paths:

relpos = node.path_from(othernode)

If node is /foo/bar and othernode is /foo, relpos will be bar, and .. if node
and othernode are swapped. Doing this from the filenames alone has many corner
cases, and path name computations are surprisingly slow in python (the waf
Node class caches things like absolute path name computation).

Thanks to the waf Node class, I can now easily list the packages, extensions,
etc… specific to one sub bento relative to the sub bento directory, and
translate those packages, extensions, etc… as seen from the top directory.

I am happy with the internals, but the “API” for recursive build description is
not good, to put it mildly. To add a subpackage description with associated
bscript (hook file), you need to:

  • add the sub bento.info to the Subento field in the parent bento.info
  • add the bscript file into the list of the recursive decorator inside
    the parent bscript file. Even though the decorator may be put on e.g.
    the configure hook, the build command will also look there for sub
    bscript files, which is not intuitive at all.

You can see some examples there. I am still looking for a good solution to
this issue.

Yaku enhancements

Except for recursive package description, not much has changed in bento, and
most of the work has happened in yaku. The first big change is that yaku itself
also uses a waf-like Node class: although I resisted this at first, I think it
is for the best, and it also simplified a lot of hairy corner cases inside
yaku.

The other big change in yaku concerns how to override and extend it. I am
interested in the following cases:

  • adding a new tool (clang, intel compiler, etc…)
  • adding a new step in the chain (say building extensions from .c.src
    instead of .c, without monkey-patching the original code)
  • overriding flags for some extensions (say building one extension with -Os
    instead of -O2)
  • overriding the extension hook for some extensions. For example, in general,
    fortran source files are compiled directly into .o for “pure” (not using
    the python C API) libraries, but f2py allows building a python extension
    from the .f directly. Yaku now allows temporarily overriding the command
    associated with .f files.

All four of those cases are now implemented. Chaining a templating system to
cython (for .pyx.in -> .pyx -> .c -> .o -> .so/.pyd) is now very simple,
supporting new compilers can be done easily, and playing with compilation
options is straightforward internally. There are a few issues, though. Besides
what the API should look like, a thorny situation is dealing with dictionaries
of configurations. In yaku, each task has an environment attached to it, which
is a simple dictionary containing things like CFLAGS, CC, etc… Most of the
time, you want to share those dictionaries across tasks. Unfortunately, python
semantics for dictionaries don’t make that easy, and deepcopy is too expensive.
A copy-on-write dictionary, which internally shares common parts between
dictionaries, would be ideal, but I am afraid implementing one in python would
be very difficult.
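
To illustrate the idea (a naive sketch, not yaku code, and it glosses over the
hard part of mutable values such as flag lists being shared in place):

class COWDict(object):
    # Toy copy-on-write mapping: reads fall through to a shared parent,
    # writes only touch the local layer, so "copying" an environment is cheap.
    def __init__(self, parent=None):
        self._parent = parent if parent is not None else {}
        self._local = {}

    def __getitem__(self, key):
        if key in self._local:
            return self._local[key]
        return self._parent[key]

    def __setitem__(self, key, value):
        self._local[key] = value

    def __contains__(self, key):
        return key in self._local or key in self._parent

    def derive(self):
        # cheap "copy": share self as the parent of a new, empty layer
        return COWDict(self)

base = COWDict()
base["CFLAGS"] = ["-O2"]
env = base.derive()
env["CFLAGS"] = ["-Os"]   # only env sees -Os, base still returns ['-O2']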

I am also still not entirely convinced that yaku is warranted:
fbuild is nearly the ideal system if it were
not limited to python 3, and the new waf 1.6 looks great (T. Nagy, the waf
maintainer, recently updated fortran support for 1.6). Fortunately, bento is
build-tool agnostic from the start, and trying waf inside bento for a real
project is on the TODO list.

Other bento features

I have put the other features planned for 0.0.4 on hold. The main missing
features for bento are:

  • distutils compatibility mode (so that bento.info may be used within distutils)
  • wininst <-> egg conversion
  • good documentation
  • python 3 compatibility
  • virtualenv and pip support
  • automatic command dependency (e.g. automatically re-run configure before
    build if necessary)

Python 3 support will definitely not go into 0.0.4. Virtualenv/pip support
should not be difficult; automatic dependency handling between commands is
badly needed.

All that being said, I think bento is shaping up quite nicely. At work, I
constantly have to deal with distutils idiosyncrasies for the most trivial
things, and I am looking forward to seeing it replaced with something saner.

Recent progress on bento – build numpy!

I have spent the last few days on a relatively big feature for bento: recursive package description. The idea is to be able to simply describe packages in a deeply nested hierarchy without having to write long paths, and to split complicated package descriptions into several files.

At the bento.info level, the addition is easy:

Subento: numpy/core, numpy/lib ...

It took me more time to figure out a way to do it in the hook file. I ended up with a recurse decorator:

@recurse(["numpy/core/bscript", "numpy/lib/bscript"])
@post_configure
def some_func(ctx):
    ....

I am not sure it is the right solution yet, but it works for now. My first idea was to simply use a recurse function attached to hook contexts (the ctx argument), but I did not find a good way to guarantee an execution order (declaration order == execution order), and it was a bit unintuitive to integrate both the hook decorator and recurse together.

The reason why I am tackling this now is that bento is at a stage where it needs to be used on “real” builds to get a feeling of what works and what does not. The target is numpy, and hopefully scipy later. Although I still hope to integrate waf or scons in bento as the canonical way of building numpy/scipy with bento, this also gives a good test for yaku (my simple build system).

It took me less than half a day to port the scons scripts to bento/yaku. A full, unoptimized build of numpy with clang takes less than 10 seconds. A no-op build is ~150 ms, but as yaku does not yet have all the infrastructure for header dependency tracking, the no-op number is rather meaningless.