Adding a distutils compatibility layer to bento

From the beginning, it was clear that one of the major hurdles for bento would be the transition from distutils. This is a hard issue for any tool trying to improve on existing ones, but even more so for distribution/packaging tools, as it impacts everyone (both developers and users of the tools).

Since almost day one, bento has had some basic facilities to convert existing distutils projects to bento.info. I have now added something to do the exact opposite, that is, maintaining some distutils extensions which are driven by bento.info. Concretely, it means that if you have a bento package, you can write something like:

import setuptools # this comes first so that setuptools does its monkey dance
import bento.distutils # this monkey patches on top of setuptools
setuptools.setup()

as your setup.py, and it will give the “illusion” of a distutils package. Of course, it won’t give you all the goodies provided by bento (if it could, I would not have written bento in the first place), but it is good enough to enable the following:

  • installing through the usual “python setup.py install”
  • building source distributions
  • more significantly: it will make your package easy_install-able/pip-able

This feature will be in bento 0.0.5, which will be released very soon (before PyCon 2011, where I will present bento). More details may be found in bento’s documentation.

Bento at Pycon2011 and what’s coming in bento 0.0.5

I could not spend much time (if any) on bento during the last few weeks of 2010, but I fortunately got some time back to work on it this month. It is a good time to describe a bit what I hope will happen with bento in the next few months.

Bento poster @ Pycon2011

First, my bento proposal has been rejected for PyCon 2011, so it will only be presented as a poster. It is a bit unfortunate, because I think it would have worked much better as a talk than as a poster. Nevertheless, I hope it will help bring awareness of bento outside the scipy community, and give me a better understanding of people’s needs for packaging (a poster should be better for the latter point).

Bento 0.0.5

Bento 0.0.5 should be coming soon (mid-February). Contrary to the 0.0.4 release, this version won’t bring major user-visible features, but it has seen a lot of internal redesign to make bento easier to use:

Automatic command dependency

One does not need to run each command separately anymore. If you run “bentomaker install”, it will automatically run configure and build on its own, in the right order. What’s interesting is how the dependencies are specified. In distutils, subcommand order is hardcoded inside the parent command, which makes it virtually impossible to extend them. Bento does not suffer from this major deficiency:

  • Dependencies are specified outside the classes: you just need to say which class must be run before/after which
  • Class order is then computed at run time using a simple topological sort (see the sketch after this list). Although the API is not there yet, this will enable arbitrary insertion of new commands between existing commands without the need to monkey-patch anything
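
To make this concrete, here is a minimal sketch of the idea. The names (run_before, command_order) are illustrative, not bento’s actual API:

from collections import defaultdict

# command -> list of commands which must run before it
_before = defaultdict(list)

def run_before(first, second):
    # declare, outside any class, that `first` must run before `second`
    _before[second].append(first)

run_before("configure", "build")
run_before("build", "install")

def command_order(target):
    # a depth-first traversal yields a topological sort of the dependencies
    order, seen = [], set()
    def visit(cmd):
        if cmd in seen:
            return
        seen.add(cmd)
        for dep in _before[cmd]:
            visit(dep)
        order.append(cmd)
    visit(target)
    return order

print(command_order("install"))  # -> ['configure', 'build', 'install']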

Virtualenv support

If a bento package is installed under virtualenv, the package will be installed inside the virtualenv by default:

virtualenv .env
source .env/bin/activate
bentomaker install # this will install the package inside the virtualenv

Of course, if the install path has been customized (through prefix/eprefix), those settings take precedence over the virtualenv.
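
For reference, here is a minimal sketch of the usual virtualenv detection trick (not necessarily the exact check bento performs): virtualenv’s patched site.py sets sys.real_prefix, so its presence indicates that we are running inside a virtualenv:

import sys

def running_under_virtualenv():
    # virtualenv's site.py sets sys.real_prefix to the original interpreter
    # prefix; a vanilla python has no such attribute
    return hasattr(sys, "real_prefix")

def default_install_prefix():
    if running_under_virtualenv():
        return sys.prefix  # install inside the virtualenv
    return None            # fall back to the standard scheme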

List files to be installed

The install command can optionally print the list of files to be installed, together with their actual installation paths. This can be used to check where things will end up. By design, this list is exactly what bento would install, so it is much harder to run into weird corner cases where the list and what actually gets installed differ.
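
The principle is simple enough to sketch: the same manifest drives both the listing and the actual copy, so the two cannot drift apart (the names below are illustrative, not bento’s internals):

import shutil

def install(manifest, dry_run=False):
    # manifest is a list of (source, target) pairs computed beforehand;
    # the dry run prints exactly what the real run would copy
    for source, target in manifest:
        if dry_run:
            print("%s -> %s" % (source, target))
        else:
            shutil.copy2(source, target)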

First steps toward uninstall

An initial “transaction-based” install is available: in this mode, a transaction log is generated, which can be used to roll back an install. For example, if the install fails in the middle, the already installed files will be removed to keep the system in a clean state. This is a first step toward uninstall support.
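
A minimal sketch of what such a transaction log can look like (illustrative only, not bento’s actual format): every successful copy is recorded, and a failure replays the log backwards:

import os, shutil

def transactional_install(manifest, log_path="transaction.log"):
    installed = []
    try:
        log = open(log_path, "w")
        try:
            for source, target in manifest:
                shutil.copy2(source, target)
                log.write(target + "\n")  # record for potential rollback
                installed.append(target)
        finally:
            log.close()
    except Exception:
        # rollback: remove everything installed so far, most recent first
        for target in reversed(installed):
            if os.path.exists(target):
                os.remove(target)
        raise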

Refactoring to help using waf inside bento

Bento’s internals have been improved to enable easier customization of the build tool. I have a proof of concept where bento is customized to use waf to build extensions. The whole point is to be able to do so without changing bento’s code itself, of course. The same scheme can be used to build extensions with distutils (for compatibility reasons, to help complex packages move to bento one step at a time).

Bentoshop: a framework to manage installed packages

I am hoping to have at least a proof of concept of a package manager built around bento for PyCon 2011. As already stated on this blog, there are a few non-negotiable features that the design must follow:

  1. Robust by design: things that can be installed can be removed, avoiding synchronisation issues between metadata and installed packages
  2. Transparent: it should play well with native packaging tools and not get in the way of anyone’s workflow.
  3. No support whatsoever for multiple versions: this can be handled with virtualenv for trivial cases, and through native “virtualization” schemes when virtualenv is not enough (chroot for filesystem virtualization, or actual virtual machines for more)
  4. Efficient

This means PEP 376 is out of the question (it breaks points 1 and 4). A first proof of concept will follow the Haskell cabal and R (CRAN) systems, but backed with a db for performance.

The main design issue is point 2: ideally, one would want a user-specific, python-specific package manager to be aware of packages installed through the native system, but I am not sure this is really possible without breaking the other points.

A few remarks on distutils2

Disclaimer: I am working on a project which may be seen as a competitor to
the distutils2 effort, and I am quite biased against the existing packaging
tools in python. On the other hand, I know distutils extremely well, having
maintained the numpy.distutils extensions for several years, and most of my
criticisms should stand on their own.

There is a strong consensus in the python community that the current packaging
tools (distutils) are too limited. There have been various attempts to improve
the situation, through setuptools, the distribute fork, etc… Beginning this
year, the focus has shifted toward distutils2, which is scheduled to be part of
the stdlib for python 3.3, while staying compatible with python 2.4 onwards. A
first alpha has been released recently, and I thought it was a good occasion to
look at what has happened in that space.

As far as I can see, distutils2 had at least the following three goals:

  • standardize a lot of setuptools practices through PEPs and implement them.
  • refactor distutils code and add a test suite with a significant coverage.
  • get rid of setup.py for most packages, while adding hooks for people who
    need to customize their build/installation/deployment process

I won’t discuss the first point much: most setuptools features are useless to
the scipy community, and are generally poor reimplementations of existing
solutions anyway. As far as I can see, the third point is still being
discussed, and is not present in the mainline.

The second point is more interesting: distutils code quality was pretty low,
but the main issue was (and still is) the overall design. Unfortunately, adding
tests does not address the reliability issues which have plagued the scipy
community (and, I am sure, other communities as well). The main issues w.r.t.
build and installation remain:

  • unreliable installation: distutils installs things by simply copying the
    trees built into a build directory (build/ by default). This is a problem
    when you change your source code (e.g. renaming some modules), as distutils
    will add things to the existing build tree, and install will then copy both
    the old and the new targets. As with distutils, the only way to get a
    reliable build is to first rm -rf build. This alone is a consistent source
    of issues for numpy/scipy, as many end-users are bitten by it. We somewhat
    alleviate this by distributing binary installers (which know how to
    uninstall things, and are built by people familiar with distutils idiocy)
  • Inconsistencies between compiler classes. For example, the MSVCCompiler
    class defines its compiler executable as a string, set as the attribute
    cc, whereas most other compiler classes define the compiler_so attribute
    (which is a list in that case). They don’t have the same methods either.
  • No consistent, centralized API to obtain basic compilation options (CC
    flags, etc…)

Even more significantly, it means that the fundamental issue of extensibility
has not been addressed at all, because the command-based design is still there.
This is by far the worst part of the original distutils design, and I fail to
see the point of a backward-incompatible successor to distutils which does not
address this issue.

Issues with command-based design

Distutils is built around commands, which correspond almost one-to-one to
command-line commands: when you run “python setup.py install”, distutils
essentially calls the install command’s run method after some initialization
work. This by itself is a relatively common pattern, but the issue lies
elsewhere.

Options handling

First, each command has its own set of options, but the options of one command
often affect the other commands, and there is no easy way for one command to
know the options of another. For example, you may want to know the options of
the install command at build time. The usual pattern is to instantiate the
command whose options you need and query it once finalized, e.g. using
get_finalized_command:

# inside another command's methods (self is a distutils command instance)
install = self.get_finalized_command("install")
install_lib = install.install_lib  # the finalized installation directory

This is hard to use correctly, because every command can be reset by other
commands, and some commands cannot be instantiated this way depending on the
context. Worse, this can cause unexpected issues later on if you are calling a
command which has not been run yet (like the install command inside a build
command). Quite a few subtle bugs in setuptools and in numpy.distutils were/are
caused by this.


According to Tarek Ziade (the main maintainer of distutils2), this is addressed in a distutils2 development branch. I cannot comment on it, as I have not looked at the code yet.

Sub-commands

Distutils has a notion of commands and “sub-commands”. Sub-commands may
override each other’s options through the set_undefined_options function, which
creates new attributes on the fly. This is every bit as bad as it sounds.
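
For the curious, this is what the mechanism looks like in practice. This is a self-contained plain-distutils example (my own illustration, not distutils2 code):

from distutils.core import Command

class my_command(Command):
    user_options = []

    def initialize_options(self):
        # must be pre-declared as None so that set_undefined_options
        # considers it "undefined"
        self.install_lib = None

    def finalize_options(self):
        # pull install_lib from the (finalized) install command: attributes
        # are silently copied across command instances
        self.set_undefined_options("install", ("install_lib", "install_lib"))

    def run(self):
        pass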

Moreover, the hardcoding of dependencies between commands and sub-commands
significantly hampers extensibility. For example, in numpy, we use some
templated source files which are processed into .c: this is done in the
build_src command. Now, because the build command of distutils does not know
about build_src, we need to override build as well to call build_src. Then
came setuptools, which of course did not know about build_src either, so we had
to conditionally subclass from setuptools to run build_src too [1]. Every
command which may potentially trigger this command may need to be overridden,
with all the complexity that follows. This is completely insane.
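
As an illustration, here is a simplified reconstruction of the pattern (not numpy.distutils’ actual code): the only way to get build_src triggered is to subclass build itself and prepend to its hardcoded sub_commands list:

from distutils.command.build import build as _build

class build(_build):
    # sub_commands is a hardcoded list of (command_name, predicate) pairs;
    # extending it means subclassing the parent command, and every other
    # parent (setuptools' build, the bdist variants...) needs the same
    # treatment
    sub_commands = [("build_src", None)] + _build.sub_commands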

Hooks

Distutils2 has added the notion of hooks, which are functions to be run
before/after the command they hook into. But because they interact with
distutils2 through the command instances, they share all the issues mentioned
above, and I suspect they won’t be of much use.

More concretely, let’s consider a simple example: a file generated from a
template (say config.pkg.in), containing some information only known at
runtime (like the version and build time). Doing this correctly is
surprisingly difficult:

  • you need to generate the file in a build command, and put it at the right
    place in the build directory
  • you need to install it at the right place (in-place vs normal build, egg
    install vs non-egg install vs externally_managed install)
  • you may want to automatically include the config.pkg.in template in sdist
  • you may want the file to be installed in bdist/msi/mpkg, so you may need to
    know all the details of those commands

Each of these steps may be quite complex and error-prone. Some are impossible
with a simple hook: AFAIK, it is currently impossible to add files to sdist
without rewriting the sdist.run function.

To deal with this correctly, the whole command business needs a significant
redesign. Several extremely talented people in the scipy community have
independently attempted to improve this over the last decade or so, without
any success. Nothing short of a rewrite will work there, and commands
constitute a good third of the distutils code.

Build customization

distutils2 does not improve the situation w.r.t. building compiled code, but I
guess that’s relatively specific to big packages like numpy, scipy or pywin32.
Needless to say, the compiler classes are practically impossible to extend
(they don’t even share a consistent interface), and very few people know how
to add support for new compilers, new tools or new binaries (ctypes
extensions, for example).

Overall, I don’t quite understand the rationale for distutils2. It seems that
most of the setuptools standardization could have happened without breaking
backward compatibility, and the improvements are too minor for people with
significant distutils extensions to switch. Certainly, I don’t see myself
porting numpy.distutils to distutils2 anytime soon.

[1]: it should be noted that most setuptools issues are really distutils
issues, in the sense that distutils does not provide the right abstractions to
be extended.

Bento 0.0.4 released !

I have just released the new version of Bento, 0.0.4. You can get it on github as usual.


Bento itself did not change too much, except for the support of sub-packages and a few other things. But bento can now build both numpy and scipy on the “easy” platforms (linux + Atlas + gcc/clang). This post shows a few cool things that you can do now with bento.

Full distribution check

The best way to use this version of bento is to do the following:

# Download bento and create bentomaker
git clone http://github.com/cournape/Bento.git bento-git
cd bento-git && python bootstrap.py && cd ..
# Download the _bento_build branch from numpy
git clone http://github.com/cournape/numpy.git numpy-git
cd numpy-git && git checkout -b bento_build origin/_bento_build
# Create a source tarball from numpy, configure, build and test numpy
# from that tarball
../bento-git/bentomaker distcheck

For reasons I am still unclear about, the test suite fails to run from distcheck for scipy, but that seems to be more of a nose issue than a bento one.

Building numpy with clang

Assuming you are on Linux, you can try to build numpy with clang, the LLVM-based C compiler. Clang compiles faster than gcc, and generally gives better error messages. Although bento itself does not have any support for clang yet, you can easily play with the bento scripts to use it. In the top bscript file from numpy, at the end of the post_configure hook, replace every compiler with clang, i.e.:

for flag in ["CC", "PYEXT_CC"]:
    yctx.env[flag] = ["clang"]

Once the project is configured, you can also get a detailed look at the configured options, in the file build/default.env.py. You should not modify this file, but it is very useful to debug build issues. Another aid for debugging configuration options is the build/config.log file. Not only does it list every configuration command (both success and failures), but it also shows the source content as well as the command output.

What’s coming next ?

Version 0.0.5 will hopefully have a shorter release period than 0.0.4. The goal for 0.0.5 is to make bento good enough that other people can jump into bento development.

The main features I am thinking about are windows and python 3 support, plus a lot of code cleaning and documentation. Windows should not be too difficult: it is mainly a matter of ripping the Visual Studio support out of the numscons/scons code and adapting it to yaku. I have already started working on python 3 support as well – the main issue is bootstrapping bento, and finding an efficient process to work on both python 2 and 3 at the same time. Depending on the difficulty, I will also try to add proper dependency handling in yaku for compiled libraries and dependent headers: at the moment, yaku does not detect header changes, nor does it rebuild an extension if the linked libraries changed. An alternative is to bite the bullet and start working on integration with waf, which already does all this internally.

Bento (ex-toydist): what’s coming for 0.0.3

A lot has happened feature-wise since the 0.0.2 release of toydist. This is a
short summary of what is about to come in the 0.0.3 release.

Toydist renamed to bento

I have finally found a not-too-sucky name for toydist: bento. As you may know, a bento is a Japanese lunch-box. The idea is that those are often nicely prepared, and bentomaker becomes the command to get nicely packaged software :)

Integration of yaku, a micro build framework

The 0.0.2 release of toydist still depended on distutils to build C
extensions. I have since integrated a small package to build things, yaku
(“grill, bake” in Japanese). This gives the following features when building C
extensions:

  • basic dependency handling (soon, auto-detection of header file
    dependencies through compiler-specific extensions)
  • reliable out-of-date detection through file content checksums
  • reliable parallel execution

I still think complex packages should use a real build system like waf or
scons, and in that regard, bento will remain completely agnostic (the distutils
build is still available as a configuration option).

Hooks

Any command may now be overridden, and some hooks have been added as well.
Here is a list of possible customizations through hooks:

  • adding custom commands (for example build_doc to build doc)
  • adding dynamically generated files in sdist
  • using waf as a build tool
  • adding autoconf-like tests in configure

This opens a lot of possibilities. Some examples can be found in the hook subdirectory.

Distcheck command

This command configures, builds, installs and optionally tests a package from
the tarball generated by sdist. This is very useful to test a release.

The command is still very much in its infancy, but it is quite useful already.

One file distribution

Since bento is still in the planning phase, its API is subject to significant
changes, and I obviously don’t care about backward compatibility at this stage.
Nevertheless, several people want to use it already, so I intend to provide a
waf-like one-file distribution. It would be a self-extracting file which looks
like a python script, and could be included in projects to avoid any extra
dependency. This would solve both distribution and compatibility issues until
bento stabilizes. There is a nice explanation of how this works on the
waf-devel blog.
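
The gist of the trick is a python script with an archive payload appended to it, which unpacks itself on first use. A toy version of the idea (illustrative only; waf’s real implementation is more involved):

import base64, os, tarfile, StringIO

def unpack(payload, target_dir="./_bento"):
    # payload: a base64-encoded tarball, appended to the script when the
    # one-file distribution is built; extracted once, next to the project
    if not os.path.exists(target_dir):
        data = base64.b64decode(payload)
        tarfile.open(fileobj=StringIO.StringIO(data)).extractall(target_dir)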

Bug fixes, python 2.4 support

I have started to fix the numerous but mostly trivial issues under
python 2.4. Bento 0.0.3 should be compatible with any python version from 2.4
to 2.7. Although python 3.x support should not be too difficult, it is rather
low priority. Let me know if you think otherwise.

Yaku, a simple python build system for toydist

[EDIT] Of course, just after having written this post, I came across two
interesting projects: mem and fbuild. That’s what I get for not having had
Internet for weeks now… Both projects are based on memoization instead of a
dependency graph, and seem quite advanced feature-wise. Unfortunately, fbuild
requires python 3.1. Maybe mem would do; if so, consider yaku dead. [/EDIT]

While working on toydist, I at first considered re-using distutils’ ability to
build C code, with the idea that people would use waf/scons/etc… if they have
involved compilation needs. But distutils is so horrendous that I realized that
implementing something significantly better and simpler would be possible.
After a few hours of coding, I had something which could build extensions on a
few platforms: yaku (“bake” in Japanese).

Yaku’s main design goal is simplicity: I don’t want the core code to be more
than ~1000 LOC. Fortunately, this is more than enough to create something
significantly better than distutils. The current codebase is strongly inspired
by waf (and scons to some extent), and has the following features (see the
sketch after this list):

  • Task-based: a yaku task is like a rule in make, with a list of
    targets, dependencies, and a list of executable commands
  • Each task knows about its environment (e.g. flags for C compilation),
    and environment changes, as well as dependency changes, trigger a
    task (re-)execution
  • Extension through callbacks: adding support for new source files
    (cython, swig, fortran, etc…) requires neither monkey patching nor
    inheritance. This is one of my biggest gripes with distutils
  • Primitive autoconf-like features to check for headers, libraries, etc…
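
Here is a rough sketch of that task model (the names and details are made up for this post, not yaku’s actual API): a task bundles targets, dependencies, commands and environment, and a change in its signature is what triggers re-execution:

import hashlib

class Task(object):
    def __init__(self, targets, deps, commands, env):
        self.targets = targets      # files produced by the task
        self.deps = deps            # files the task depends on
        self.commands = commands    # commands to execute
        self.env = env              # e.g. {"CFLAGS": ["-O2"]}

    def signature(self):
        # checksum of dependency *contents* plus the environment: if either
        # changes, the signature changes and the task is re-run
        m = hashlib.md5()
        for dep in self.deps:
            f = open(dep, "rb")
            try:
                m.update(f.read())
            finally:
                f.close()
        m.update(repr(sorted(self.env.items())).encode("utf-8"))
        return m.hexdigest()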

Besides polishing the API, I intend to add the following features:

  • Parallel build
  • Automatically find header dependencies for C/C++ code (through
    scanning sources)

I want to emphasize that yaku is not meant as a replacement for a real build
tool. To keep it simple, yaku has no abstraction of the filesystem (the node
concept in scons and waf), which has a serious impact on its reliability and
power as a build tool. The graph of dependencies is also built in one shot, and
cannot be changed dynamically (so yaku will never be able to detect
dependencies on generated code, for example a foo.c which depends on a foo.h
generated from foo.h.in).

Nevertheless, I believe yaku’s features are significant enough to warrant the
project. If the project takes off, it may be possible to integrate yaku within
the Distribute project, for example, whereas integrating waf or scons is out of
the question.

First public release of toydist

Toydist 0.0.2 has just been announced. Since this is the first public release since I announced toydist at SciPy India 2009, I thought it would be a good occasion to summarize the current status of toydist, and where I see it going in the next few months.

Toydist is an experimental alternative to distutils/setuptools, and aims at replacing the whole packaging infrastructure for python software, without requiring people to throw away their current infrastructure. The main philosophy of toydist is simplicity + extensibility:

  • simple: it should be simpler than distutils for simple packages, to the point where it is difficult to get it wrong. Although packaging is difficult, there are known good practices, and the tools should at least hint at those practices.
  • extensible: it should be possible to do things as complex as wanted in some parts of packaging, while still benefiting from toydist capabilities otherwise.

In other words, the goal is to make packaging more pythonic, with OOWTDI, without getting in your way.

The present

The focus of this first release has been the design of a declarative package description, and implementing just enough features so that toydist can install itself. A simple command-line interface, called toymaker, is provided as well. Installing a package with toymaker is very similar to the autotools way:

toymaker configure
toymaker build
toymaker install
toymaker sdist # Assemble a tarball

I have also implemented preliminary support to build eggs and windows installers (.exe-based), through the buildegg and buildwininst commands.

This first release also brings a few distribution-related features which have been big pain points in distutils/setuptools. First, the flexibility of the autotools installation scheme is available at the configuration stage:

    toymaker configure --prefix=somepath --libdir=someotherpath --mandir=yetanotherpath

works as expected, and every customized path is available inside toydist from the beginning, instead of only becoming available at install time as in distutils.

Secondly, data files are correctly handled, instead of the distutils/setuptools mess. Toydist makes the difference between extra source files, which are not intended to be installed (say .rst source documentation), and data files, which are. For the latter, you can declare as many data files sections as you want, and each section potentially has a different installation path:

DataFiles: manpath
    SourceDir: doc/
    TargetDir: $manpath
    Files: man1/foo.1, man3/foo.3

This syntax, inspired by automake, will cause doc/man1/foo.1 to be installed as $manpath/man1/foo.1, and doc/man3/foo.3 as $manpath/man3/foo.3. As the TargetDir field accepts non-expanded path variables, and because you can define new path variables, you can be as flexible as needed.

For toydist to be successful at all, the transition from a setup.py-based build must be straightforward. For simple packages, it is as simple as running

toymaker convert

inside the same directory as setup.py. Packages such as Jinja2 and Sphinx can already be converted pretty accurately using this method. Packages which rely heavily on distutils extensions, like NumPy or Twisted, will most likely never be convertible this way.

As there is a lot of existing infrastructure based on distutils (and setuptools), with tools like virtualenv, pip or buildout, going from toydist to setup.py is also desirable. This can be done manually at the moment:

from distutils.core import setup
from toydist.core import PackageDescription

pkg = PackageDescription.from_file("toysetup.info")

METADATA = {
    'name': pkg.name,
    'version': pkg.version,
    'description': pkg.summary,
    'url': pkg.url,
    'author': pkg.author,
    'author_email': pkg.author_email,
    'license': pkg.license,
    'long_description': pkg.description,
    'platforms': 'any',
    'classifiers': pkg.classifiers,
}

PACKAGE_DATA = {
    'packages': pkg.packages,
}

if __name__ == '__main__':
    config = {}
    for d in (METADATA, PACKAGE_DATA):
        config.update(d)
    setup(**config)

Toydist’s own setup.py is basically the above. The next version of toydist will have a distutils compatibility layer, so that this will look as follows:

from toydist.distutils_compat import setup

if __name__ == '__main__':
    setup("toysetup.info")

Depending on the required compatibility level with distutils, one can write distutils commands to support some toydist features.

What’s coming next ?

Easy interoperation with distutils, setuptools, etc…

For toydist 0.0.3, I intend to add support for a single-file distribution of toydist, à la waf. In my experience, integrating the full code of the packaging program in a source distribution is sometimes quite useful (that is, to some degree, how the autotools manage their cross-platformness), and this would make distributing toydist-enabled packages easier.

Except on windows, it should be possible to make this single bootstrapping file no bigger than 100-200 kB, so space would not be an issue. Windows needs more, as building windows installers requires binaries which take up a lot of space.

Extensibility through commands hooks

My minimal threshold for considering toydist successful is the ability to build numpy and scipy. I am convinced that a packaging tool should leverage existing build tools for complex extension builds, be it scons, waf or even the venerable make. Toydist started as a prototype to make writing things like numscons easier, and that is still a major design principle I intend to follow throughout toydist’s development.

I am currently working on a hook API so that any toymaker command can be customized in an auxiliary python file. Toydist 0.0.3 will contain examples which build simple python C extensions with waf in a couple of lines of code. Building extensions with a real build system like waf brings automatic dependency handling, parallel builds and other features which are near impossible to implement correctly in distutils.

Replacement for pkg_resources

There are currently only two ways to retrieve data files from an installed python package: through __file__ and through pkg_resources. __file__ has the advantage of simplicity, but it is inflexible. pkg_resources is too complicated, it significantly slows down everything which uses it, and I have no use for its other features (plugins).

Using something akin to autoheader to generate data locations at install time should be easy to implement (a sketch follows the list below):

  • no more import slowdown (pkg_resources can easily increase import times by a factor of 2 to 3)
  • much more robust, without the possibility of breaking other packages (pkg_resources is a single point of failure for every package which uses it – I have had the experience of installing one setuptools package breaking unrelated existing packages on my system).
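
To make the idea concrete, here is what the install step could generate (the file name and variable names are hypothetical):

# Generated at install time into the package, e.g. as foo/_paths.py, with
# the configure-time paths baked in:
PREFIX = "/usr/local"
DATADIR = "/usr/local/share/foo"

Client code would then retrieve data locations with a plain, fast import (from foo._paths import DATADIR), with no runtime lookup machinery involved.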