numscons, part 2: Why scons?

This is the 2nd part of the series about numscons. This part will present scons in more detail, to show how it can solve the problems mentioned in part 1.

scons is software intended as a replacement for the venerable make. It is written in python, which makes it a logical candidate for building complex extension code like numpy and scipy. The scons process is driven by a scons script, just as the make process is driven by a Makefile. Like makefiles, scons scripts are declarative, and scons automatically builds a Directed Acyclic Graph (DAG) from the description in the scons scripts so that the software is built in the correct order. The comparison stops there, though, because scons is fundamentally different from make in many respects.

Scons scripts are python scripts

Not only is Scons itself written in python, scons scripts are themselves python scripts. Almost anything possible in python is possible in a scons script; rules in makefiles are mostly replaced by Builders in scons parlance, which are python functions. This also means that anything fancy done in numpy.distutils can be reused in a scons script if the need arises, which is not a small feat.
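
For example, here is a minimal sketch of a scons script using ordinary python constructs (the file names are made up for illustration):

env = Environment()
sources = ['core.c', 'utils.c']
# Plain python logic: conditionally extend the source list.
if env['PLATFORM'] == 'win32':
    sources.append('win32_compat.c')
env.Program('hello', sources)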

Scons has a top-notch dependency system

This is one of the reasons people move from make to scons. Although make does handle dependencies, you have to set them up yourself in the rules. For example, for a simple object file hello.o built from hello.c, which includes the header hello.h:

hello.o : hello.c hello.h
        $(CC) -c hello.c -o hello.o

If you forget to list hello.h and change hello.h later, make will not detect the change and will consider hello.o up to date. This quickly becomes intractable for large projects, and thus several tools exist to automatically track dependencies and generate rules for make. Automake (used in most projects using autotools) does this, for example; distutils itself does this too, but it is not really reliable. With makefiles, you also have to regenerate the makefiles every time the dependencies change.

On the contrary, scons does this automatically: if you have #include "hello.h" in your source file, scons will automatically add hello.h as a dependency of hello.c. It does so by scanning the content of hello.c. Even better, scons automatically adds, for each target, a dependency on the code and commands used to build that target; concretely, if you build some C code and the compiler changes, scons detects it.
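
As a minimal sketch (assuming the hello.c/hello.h files above), the scons equivalent of the makefile rule needs no explicit dependency at all:

env = Environment()
# scons scans hello.c, finds #include "hello.h", and rebuilds hello.o
# whenever hello.c, hello.h, or the build command changes.
env.Object('hello.c')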

Thus, scons solves the dependency problem for free, one of the fundamental problems of distutils for extension code (this problem is the first in the list of distutils revamp goals).

Build configurations are handled in objects, not in code

Another fundamental problem of distutils is the way it stores the knowledge about how to build a particular kind of target: the compilation flags, compilers, and paths are embedded in the code of distutils itself, and not available programmatically. Some of it is available through distutils.sysconfig, but not always (in particular, it is not available for a python built with MS Visual Studio).

Scons, on the other hand, stores compiler flags and any kind of build-specific knowledge in environment objects. In that regard, Environment instances are like python dictionaries, which store the compiler, compiler flags, etc… Those environments can be copied and modified at will. They can also be used to compile different source files differently, for example with different optimization or warning levels. For example:

warnflags = ['-Wall', '-W']
env = Environment()
warnenv = env.Clone(CFLAGS = warnflags)

This will create two environments: any build command using env will use the default compiler flags, whereas warnenv will use the warning flags. This also makes customization by the user much easier. People often have trouble compiling numpy with different options, for example for more aggressive compilation:

CFLAGS="-O3 -funroll-loops" python setup.py build

This does not work because the CFLAGS environment variable overrides the CFLAGS used internally by distutils, and all compiler flags are kept in the same variable (flags from distutils and flags from the user are stored in the same place). With scons, those can easily be kept in different places. With numscons, the following work out of the box:

python setup.py build # Default build
CFLAGS="-W -Wall -Wextra -DDEBUG -g" python setup.py build # Unoptimized, debug build
CFLAGS="-funroll-loops -O3" python setup.py build # Agressive build

scons enables straightforward compilation customization through the command line. This is important for users who like to build numpy/scipy on special configurations (which is quite common in the scientific community), and also for packagers, who complain a lot about distutils and its weird argument handling.
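
To illustrate why this is possible, here is a minimal sketch (not the actual numscons code) of how user-provided flags can be kept separate from the mandatory ones:

import os

env = Environment()
# The flags required to build python extensions stay in the environment;
# the user's CFLAGS are appended on top instead of replacing them.
env.Append(CFLAGS=os.environ.get('CFLAGS', '').split())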

Scons is extensible

scons is also extensible. Although it has some quirks, in particular some unpythonic ways of doing things, it is built with customization in mind. As mentioned earlier, scons generates targets from sources (for example hello.o from hello.c) through special methods called Builders. It is possible and relatively easy to create your own builder (a small sketch follows the list below). Builders can be complex, but that's because they are very flexible:

  • Builders can have their own scanner. For example, the f2py builder in numscons has its own scanner to automatically handle dependencies in <include_file=…> f2py directives.
  • Builders can have their own emitters: an emitter is a function which generates the list of targets from the list of sources. It can be used to dynamically add new source files and to modify the list of targets. For example, when building f2py extensions, some extra files are needed, and an emitter is a way to handle that.
  • Builders have many other options which I won’t talk about here.
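
To give an idea of what this looks like, here is a minimal sketch of a custom builder with an emitter, assuming a hypothetical 'frob' tool which turns .frob files into .c files and also produces a .h file as a side effect:

def frob_emitter(target, source, env):
    # Declare the extra .h file generated next to each .c target.
    target = target + [str(t).replace('.c', '.h') for t in target]
    return target, source

frob = Builder(action='frob $SOURCE -o $TARGET',
               suffix='.c', src_suffix='.frob',
               emitter=frob_emitter)

env = Environment()
env.Append(BUILDERS={'Frob': frob})
env.Frob('hello.frob')  # declares hello.c and hello.h as targets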

The scons wiki also contains a vast range of builders for different kinds of tasks (building documentation, tarballs, etc…). With builders, building code using swig, cython, or ctypes is possible, and does not require any distutils magic: if you know how to build them from the command line, implementing builders for them is relatively straightforward, as long as they fit in the DAG view (f2py, for example, was quite difficult to fit there).

Scons has a configure subsystem

When building numpy/scipy, we need to check for dependencies such as BLAS/LAPACK, fft libraries, etc… The way numpy.distutils does it is to look for files in some paths. This is highly unreliable, because the mere existence of a file does not mean it is usable; it may be too old, or not usable by the compiler being used, etc… Scons has a configure subsystem which works in a manner similar to autotools: to check for libfoo with the foo.h header, scons will try to compile a code snippet including foo.h, and try to link it with -lfoo (or /LIB:foo.lib with the MS compiler). This is much more robust.

Robustness is important here because people often try to build their own blas/lapack, make some mistake in the process, and can still build numpy successfully; only once they try to run numpy do they notice problems. Another problem with the current scheme in numpy.distutils is that it is fragile and difficult to modify for people with unusual configurations (using Intel or AMD optimized libraries, for example); thus, only the few people who know enough about numpy.distutils can do it. Finally, the scons configure subsystem is much easier to use:

env = Environment()
config = Configure(env)
config.CheckLibWithHeader('foo', 'foo.h', 'c')
env = config.Finish()

This is straightforward, whereas the same thing in numpy.distutils takes around 50 lines of code. Out of the box, the scons configure subsystem provides the following checks:

  • CheckHeader: to check for the availability of a C/C++ header
  • CheckLib: to check for the availability of a library
  • CheckType/CheckTypeSize: to check for the availability of a type and its size
  • CheckDeclaration: to check for #define
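
A minimal sketch exercising these checks (the header, type, and symbol being checked here are arbitrary examples):

env = Environment()
config = Configure(env)
config.CheckHeader('stdint.h')
config.CheckTypeSize('long double')
config.CheckDeclaration('M_PI', includes='#include <math.h>\n')
env = config.Finish()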

An example I find striking is comparing the setup.py and the scons script for numpy.core: thanks to the configure subsystem, the scons script is much easier to understand IMHO.

Now, the scons configure subsystem is not ideal either: internally, it relies heavily on some obscure features of scons itself for dependency handling, which means it is quite fragile. For most usages (in particular checking for libraries/headers, which is the only thing the vast majority of numscons users will need), this works perfectly. For some advanced uses, it is problematic: the fortran configuration subsystem of numscons, for example, requires grepping through the output (both stdout and stderr) of the builders inside the checkers, and this does not work well in scons (I basically have to bypass the configure builders).

Conclusion

Looking at the list prepared by David M. Cook for distutils improvements, one can see that scons already solves most of the items:

  • better dependency handling: done by scons DAG handling
  • make it easier to use a specific compiler or compiler option: through scons environments
  • allow .c files to specify what options they should/shouldn’t be compiled with (such as using -O1 when optimization screws up, or not using -Wall for .c files generated from Pyrex): through scons environments
  • simplify system_info so that adding checks for libraries, etc., is easier: through the scons configure subsystem
  • a more “pluggable” architecture: adding source file generators (such as Pyrex or SWIG) should be easy: through builders, actions, etc…

More interestingly for me, when I see problems in scons, I can solve them upstream, so that the fixes benefit other people, not just numpy/scipy. In particular, fortran support was problematic in scons, and since scons 0.98.2 my work on new fortran support is available. CheckTypeSize and CheckDeclaration, as well as some improvements to configuration header generation, were also committed upstream.

In Part 3, I will explain the basic design of numscons, and how it brings the power of scons into the numpy build system.

numscons, part 1: the problems with building numpy/scipy with distutils

This will be the first post of a series about numscons, a project I have been working on for a bit more than 6 months. Simply put, numscons is an alternative build system for numpy/scipy and other python software which relies heavily on compiled code. Before talking about numscons, this first post will list the problems with the current build system.

Current flaws in distutils/numpy.distutils:

Here are some things that several people, including myself, would like to be able to do:

  1. If a package depends on a library, it is difficult to test for the dependency (header, library). In autoconf, testing for headers/libraries takes one line. With numpy.distutils, you have to use 50 lines of code, and it is quite fragile.
  2. It is not possible to build ctypes extensions in a portable way.
  3. It is not possible to compile different parts of a package with different compilation options.
  4. There is no dependency system: if you change some C code, the only reliable way to rebuild correctly is to start from scratch.
  5. CFLAGS/FFLAGS/LDFLAGS do not have the expected semantics: instead of prepending options to the ones used for the actual compilation, they override the flags, which means that doing something like CFLAGS="-O3" will break, since -fPIC and all the other options necessary to build python extensions are missing.
  6. The way to use different BLAS/LAPACK libraries or compilers is arcane, with too many options, which may fail in different ways.

Why not improve the current build system?

Last year I sent an email to the numpy ML explaining the problems I had with distutils and its extension numpy.distutils. The majority agreed that the current situation was less than ideal, but the people who knew enough about the current system to improve it could not spend a lot of time on it. The current build system is a set of extensions around distutils, the standard package for building/distributing python code. Here lies the first problem: distutils is a big mess. The code is ugly, badly designed, and not documented. In particular:

  1. It is difficult to extend: although in theory distutils has the Command class which can be inherited from, a lot of magic is going on, and there is no clear public API. Depending on the way you call distutils, the classes have different attributes!
  2. Distutils fundamentally works as a set of commands: you first do this, then that, then that. That is the wrong model for building software; the right model is a DAG of dependencies (à la make). In particular, for numpy/scipy, when you change some C code, the only way to reliably rebuild the package is to start from scratch.
  3. The compilation options are spread everywhere in the code. Depending on the platform, they are available in distutils.sysconfig (UNIX) or not (windows). On the latter, it is not possible to retrieve the options used for compilation. This, combined with the lack of extensibility, means that simple things like building ctypes extensions are much more difficult than they should be.

Using scons to build compiled extensions:

For this reason, I thought it might be better to use a build system which knows about dependencies and compiled code, preferably written in python. The best-known contender with those characteristics is scons, a make replacement written 100% in python. In particular:

  1. scons is built around the DAG concept. Its dependency system is top-notch: if you change a link option, it will only relink; if header files change, scons automatically detects it.
  2. scons has a primitive but working system to check for dependencies (headers, libraries, etc…). It works like autoconf, that is, instead of looking for files, it tries to build code snippets. This is much more robust than the current numpy.distutils way, because if, for example, your blas/lapack is buggy, you can detect it. Since many people build their own blas/lapack for numpy/scipy and get it wrong, this is important.
  3. scons is heavily commented, reasonably well documented, and some relatively high-profile companies use it, so it is proven software (vmware uses it for some of its main products, Intel uses it, and Doom and the other id Software games on Linux are built with scons; it seems that scons is generally quite popular in the gaming community, both open source and proprietary).

Scons also has some disadvantages:

  1. It targets ancient python (compatible with 1.5.2). This has many unfortunate consequences IMO, and the advantages of such compatibility are outweighed by its disadvantages. In particular, some code is quite arcane because of it (use of apply instead of the foo(*args, **kw) idiom).
  2. A lot of things are 'unpythonic', and a lot of the logic is hardcoded in the main entry point, meaning you cannot really use it as a library within your project: you have to let scons drive the whole process.
  3. It misses a lot of essential features for packaging, meaning it is not often used for open source projects.
  4. It is relatively slow, although this is not a big problem for numpy/scipy.
  5. The scons developer community is not large: it is mainly the work of 2-3 people, and I believe this is partly a consequence of 1 and 3.

Nevertheless, I decided to use scons, and I believe it was the right choice. One thing which pleased me is that instead of improving numpy.distutils, a fragile system that nobody outside numpy/scipy will use anyway, I spent time implementing missing features in scons, some of which are already integrated upstream (better fortran support, better support for some fortran compilers, etc…). This way, everybody can benefit from those new features.

The next post in the series will be about the features I was interested in implementing in numscons, and how I implemented them.

Using stow with setuptools

setuptools annoys me. There are so many warts in it for so-called easy installation that even simple things go totally wrong. Typically, it refuses to install a package if the installed package will not be on your PYTHONPATH, which means it is not possible to use it with stow out of the box. You have to use arcane command lines with weird option names to do it:

From https://www.zooko.com/log.html

How To Install Into That Place With Distutils

python ./setup.py install --prefix=/usr/local/stow/$PKG_NAME

How To Install Into That Place With Setuptools

python ./setup.py install --single-version-externally-managed --record=/usr/local/stow/$PKG_NAME/$PKG_NAME-install.log --prefix=/usr/local/stow/$PKG_NAME
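
Once installed into the stow directory, the package is then activated the usual stow way (assuming /usr/local/stow is your stow directory and /usr/local the target):

cd /usr/local/stow && stow $PKG_NAME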