I am glad to see discussions about the problem of distributing python programs in the wild. A recent post by Glyph articulates the main issues better than I could. The developers vs end-users focus is indeed critical, as is making the platform an implementation detail.
There is one solution that Glyph did not mention: the freeze tool in python itself. While not for the faint of heart, it allows building a single, self-contained executable. Since the process is not really documented, I thought I would document it here.
Setting up a statically linked python
The freeze tool is not installed by default, so you need to get it from the sources, e.g. from one of the source tarballs. You also need to build python statically, which is itself a bit of an adventure.
I prepared a special build of static python on OS X which statically links sqlite (3.8.11) and ssl (1.0.2d), both from homebrew.
Building a single-file, hello world binary
Let’s say you have a script hello.py with the following content (any simple script will do):
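print("hello world")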
To freeze it, simply do as follows:
<static-python>/bin/python <static-python>/lib/python2.7/freeze/freeze.py hello.py make
You should now have an executable called hello of approximately 7-8 MB. This binary should be relatively portable across machines, although in this case I built the binary on Yosemite, so I am not sure whether it would work on older OS X versions.
How does it work?
The freeze tool works by byte-compiling every module the script depends on, and creating a corresponding .c file containing the bytecode as a string. Every module is then statically linked into a new executable.
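To make the mechanism concrete, here is a minimal sketch of the per-module step, using only the stdlib (illustrative only; freeze's actual code generation is more involved):

import marshal

source = 'print("hello world")'
code = compile(source, "hello.py", "exec")  # byte-compile the module
frozen = marshal.dumps(code)                # the bytecode as a string of bytes
# freeze emits these bytes as a C array (e.g. unsigned char M_hello[] = {...}),
# which the interpreter unmarshals and executes at import time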
I have used this process successfully to build non-trivial applications depending on dozens of libraries. If you want a single executable, the main limitation is that you cannot depend on C extensions.
More generally, the main limitations are:
1. you need to build python statically
2. you have to use unix
3. you cannot depend on C extensions
4. none of your dependencies uses shenanigans for package data or imports
1 and 2 are linked: there is no reason why this should not work on windows, but statically linking python on windows is even less well supported than on unix. It would be nice for python itself to better support static builds.
3 is a problem that has been solved over and over by the various freezer tools. It would be nice to have a minimal, well-written library solving it. Alternatively, a way to load C extensions from within a single file would be even better, but not every platform can do this.
4 is actually the main issue in practice; it would be nice to have a good solution here. Something like pkg_resources, but more hackable and better tested.
I would argue that the pieces for a better deployment story in python are already there: what is needed is to assemble the existing pieces into a cohesive solution.
This is a quick post to show how to build NumPy/SciPy with OpenBLAS on Mac OS X. OpenBLAS is a recently open-sourced version of BLAS/LAPACK that is competitive with the proprietary implementations, without being as hard to build as Atlas.
Note: this is experimental, largely untested, and I would not recommend using this for anything worthwhile at the moment.
After checking out the sources from github, I had the most luck building OpenBLAS with a custom-built clang (I used llvm 3.1). With the Apple-provided clang, I got errors related to unsupported opcodes (fsubp).
With the correct version of clang, building is a simple matter of running make (CPU is automatically detected).
I have just added initial support for customizable blas/lapack in the bento build of NumPy (and SciPy). You will need a very recent clone of the NumPy git repo, and a recent bento. The single-file distribution of bento is the simplest way to make this work:
./bentomaker.py configure --with-blas-lapack-libdir=$OPENBLAS_DIRECTORY --blas-lapack-type=openblas
./bentomaker.py build -j4 # build with 4 processes in parallel
Same for SciPy. The code for bento’s blas/lapack detection is neither very robust nor well tested, so it will likely not work on most platforms.
Armin wrote an article on why he loves setuptools, and one of the main takeaways of his text is that one should not replace X with Y without understanding why X was created in the first place. There is another takeaway, though: none of the features Armin mentioned matters much to me. This is not to say they are not important: given the success of setuptools and pip, it would be stupid not to recognize that they fill an important gap for a lot of people.
But while those solutions provide a useful set of features, it is important to realize what they prevent as well. Nick touches on this topic a bit on python-dev, but I mean something slightly different here. Some examples:
- First, the way setuptools installs eggs, by adding entries to sys.path, causes a lot of additional stat calls on the filesystem. In the scientific community (and in corporate environments as well), people often have to use NFS. This can cause imports to take a lot of time (above 1 minute is not unheard of).
- Setuptools monkey-patches distutils. This has serious consequences for people who have their own distutils extensions, since they essentially have to deal with two code paths for anything that setuptools monkey-patches.
As mentioned by Armin, setuptools had to do the things it did to support multi-versioning. But this means it carries a significant cost for people who do not care about having multiple versions of the same package. This matters less today than it used to, though, thanks to virtualenv, and to pip installing things as non-eggs.
A similar argument can be made about monkey-patching: distutils is not designed to be extensible, especially because of how tightly commands are coupled together. You effectively cannot extend distutils without monkey-patching it significantly.
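To make this concrete, here is a minimal sketch (a hypothetical example, not setuptools’ actual code) of the kind of monkey-patching that extending distutils ends up requiring: to add a step to every package’s build, you have to replace the class in distutils’ own module, so that later lookups pick up your version.

import distutils.command.build

_original_build = distutils.command.build.build

class patched_build(_original_build):
    def run(self):
        print("running an extra pre-build step")  # the actual extension
        _original_build.run(self)

# the monkey-patch: anything that looks up distutils.command.build.build
# from now on gets the patched class
distutils.command.build.build = patched_build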
A couple of years ago, I decided that I could not put up with numpy.distutils extensions and the aforementioned distutils issues anymore. I started working on Bento sometime around fall 2009, with the intent to bootstrap it by reusing the low-level distutils code while getting rid of the command and distribution classes. I also wanted to experiment with simpler solutions to some of the more questionable setuptools designs, such as data resources with pkg_resources.
I think hackable solutions are the key to helping people solve their packaging problems. There is no solution that will work for everyone, because the use cases are so different and clash with each other. Personally, having a system that works like apt-get (reliable and fast metadata search, reliable install/uninstall, etc.) is the holy grail, but I understand that that’s not what other people are after.
What matters the most is to only put in the stdlib what is uncontroversial and battle-tested in the wild. The efforts by Tarek and the rest of the packaging team to specify and write PEPs around the metadata are a very good step in that direction. The metadata PEPs work well because they essentially specify things that have been used successfully (and are relatively uncontroversial).
But an elusive PEP around compilers, as has been suggested, is not that interesting IMO: I could write something pointing out every API issue with how compilation works in distutils, but that sounds pointless without a proposal for a better system. And I don’t want to design a better system, I want to be able to use one (waf, scons, fbuild, gyp, whatever). Writing bento is my way of discovering a good design to do just that.
From the beginning, it was clear that one of the major hurdles for bento would be the transition from distutils. This is a hard issue for any tool trying to improve on existing ones, but even more so for distribution/packaging tools, as they impact everyone (both developers and users of the tools).
Since almost day one, bento has had some basic facilities to convert existing distutils projects into bento.info. I have now added something to do the exact opposite, that is, maintaining a distutils extension driven by bento.info. Concretely, it means that if you have a bento package, you can write something like:
import setuptools # this comes first so that setuptools does its monkey dance
import bento.distutils # this monkey-patches on top of setuptools

setuptools.setup()
as your setup.py, and it will give the “illusion” of a distutils package. Of course, it won’t give you all the goodies provided by bento (if it could, I would not have written bento in the first place), but it is good enough to enable the following:
- installing through the usual “python setup.py install”
- building source distributions
- more significantly: it will make your package easy_install-able/pip-able
This feature will be in bento 0.0.5, which will be released very soon (before PyCon 2011, where I will present bento). More details may be found in bento’s documentation.
I could not spend much time (if any) on bento the last few weeks of 2010, but I fortunately got back some time to work on it this month. It is a good time to describe a bit what I hope will happen in bento in the next few months.
Bento poster @ PyCon 2011
First, my bento talk proposal was rejected for PyCon 2011, so it will only be presented as a poster. It is a bit unfortunate, because I think it would have worked much better as a talk than as a poster. Nevertheless, I hope it will help bring awareness of bento outside the scipy community, and give me a better understanding of people’s needs around packaging (a poster should be better for the latter point).
Bento 0.0.5 should be coming soon (mid-February). Contrary to the 0.0.4 release, this version won’t bring major user-visible features, but it brings a lot of internal redesigns that make bento easier to use:
Automatic command dependency
One does not need to run each command separately anymore: if you run “bentomaker install”, it will automatically run configure and build on its own, in the right order. What’s interesting is how the dependencies are specified. In distutils, subcommand order is hardcoded inside the parent command, which makes it virtually impossible to extend. Bento does not suffer from this major deficiency:
- Dependencies are specified outside the classes: you just need to say which command must run before/after which
- Command order is then computed at run time using a simple topological sort (see the sketch after this list). Although the API is not there yet, this will enable arbitrary insertion of new commands between existing ones, without the need to monkey-patch anything
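A minimal sketch of the idea (my illustration, not bento’s actual API): dependencies live in a table outside the command classes, and the run order is derived from it at run time.

# which commands must run before a given command
before = {
    "build": ["configure"],
    "install": ["build"],
}

def command_order(target, resolved=None, seen=None):
    """Return the commands to run, dependencies first (topological sort)."""
    if resolved is None:
        resolved, seen = [], set()
    if target in seen:
        raise ValueError("dependency cycle at %r" % target)
    seen.add(target)
    for dep in before.get(target, []):
        if dep not in resolved:
            command_order(dep, resolved, seen)
    resolved.append(target)
    return resolved

print(command_order("install"))  # ['configure', 'build', 'install']

Inserting a new command between two existing ones is then just a matter of adding entries to the table, with no subclassing or monkey-patching needed.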
Virtualenv support
If a bento package is installed under a virtualenv, the package will be installed inside the virtualenv by default:
virtualenv .env
source .env/bin/activate
bentomaker install # this will install the package inside the virtualenv
Of course, if the install path has been customized (through prefix/eprefix), those take precedence over virtualenv.
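For reference, a rough sketch of the usual virtualenv-detection heuristic (this is the common approach, not necessarily bento’s exact code):

import sys

def in_virtualenv():
    # classic virtualenv exposes the original interpreter prefix
    # as sys.real_prefix
    return hasattr(sys, "real_prefix")

if in_virtualenv():
    default_prefix = sys.prefix  # i.e. install inside the virtualenv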
List files to be installed
The install command can optionally print the list of files to be installed and their actual installation paths. This can be used to check where things are installed. By design, this list is exactly what bento would install, so it is harder to run into weird corner cases where the list and what is actually installed differ.
First steps toward uninstall
An initial “transaction-based” install is available: in this mode, a transaction log is generated, which can be used to roll back an install. For example, if the install fails in the middle, the already-installed files are removed to keep the system in a clean state. This is a first step toward uninstall support.
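A minimal sketch of the idea (illustrative only, not bento’s implementation): log each file as it is installed, and remove the logged files if anything fails.

import os, shutil

def transactional_install(files, log_path="install-transaction.log"):
    installed = []
    try:
        with open(log_path, "w") as log:
            for source, target in files:
                shutil.copy2(source, target)
                installed.append(target)
                log.write(target + "\n")  # the transaction log
    except Exception:
        # rollback: remove everything installed so far
        for target in installed:
            os.remove(target)
        os.remove(log_path)
        raise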
Refactoring to help using waf inside bento
Bento’s internals have been improved to allow easier customization of the build tool. I have a proof of concept where bento is customized to use waf to build extensions. The whole point is to be able to do so without changing bento’s code itself, of course. The same scheme can be used to build extensions with distutils (for compatibility reasons, to help complex packages move to bento one step at a time).
Bentoshop: a framework to manage installed packages
I am hoping to have at least a proof of concept of a package manager built around bento for PyCon 2011. As already stated on this blog, there are a few non-negotiable features that the design must follow:
1. Robust by design: things that can be installed can be removed; avoid synchronisation issues between metadata and installed packages
2. Transparent: it should play well with native packaging tools and not get in the way of anyone’s workflow
3. No support whatsoever for multiple versions: this can be handled with virtualenv for trivial cases, and through native “virtualization” schemes when virtualenv is not enough (chroot for filesystem virtualization, or actual virtual machines for more)
This means PEP376 is out of the question (it breaks points 1 and 4). I will build a first proof of concept following the Haskell (cabal) and R (CRAN) systems, but backed by a db for performance.
The main design issue is point 2: ideally, one would want a user-specific, python-specific package manager to be aware of packages installed through the native system, but I am not sure this is really possible without breaking the other points.
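To make the db-backed idea concrete, a minimal sketch (my illustration, not bento’s design) of an install database that keeps metadata queries fast and file lists reliable:

import sqlite3

conn = sqlite3.connect("installed.db")
conn.execute("""CREATE TABLE IF NOT EXISTS installed_files
                (package TEXT, path TEXT)""")

def register_install(package, paths):
    # a single transaction: the metadata cannot end up half-written
    with conn:
        conn.executemany("INSERT INTO installed_files VALUES (?, ?)",
                         [(package, p) for p in paths])

def files_of(package):
    # fast query; also the exact manifest an uninstall would remove
    rows = conn.execute("SELECT path FROM installed_files WHERE package = ?",
                        (package,))
    return [row[0] for row in rows]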
Getting this error on a new chef client:
/usr/lib/ruby/1.8/net/http.rb:2101:in `error!’: 404 “Not Found” (Net::HTTPServerException)
is actually caused by having an old chef-client. Took me a while to realize, and google was not that helpful.
I have just submitted a talk proposal for bento at pycon 2011. If accepted, the talk will be a good deadline to get a first alpha ready.
In the meantime, I have added windows support, and I can now build numpy on 64-bit windows with the MKL library. There are still a few rough edges, but I think bento will soon be on par with numscons as far as supported platforms go.
Disclaimer: I am working on a project which may be seen as a competitor to the distutils2 effort, and I am quite biased against the existing packaging tools in python. On the other hand, I know distutils extremely well, and have been maintaining numpy.distutils extensions for several years, and most of my criticisms should stand on their own.
There is a strong consensus in the python community that the current packaging tools (distutils) are too limited. There have been various attempts to improve the situation, through setuptools, the distribute fork, etc. Beginning this year, the focus has shifted toward distutils2, which is scheduled to be part of the stdlib for python 3.3, while staying compatible with python 2.4 onwards. A first alpha has been released recently, and I thought it was a good occasion to look at what has happened in that space.
As far as I can see, distutils2 had at least the three following goals:
- standardize a lot of setuptools practices through PEPs, and implement them
- refactor the distutils code and add a test suite with significant coverage
- get rid of setup.py for most packages, while adding hooks for people who need to customize their build/installation/deployment process
I won’t discuss the first point much: most setuptools features are useless to the scipy community, and are generally poor reimplementations of existing solutions anyway. As far as I can see, the third point is still being discussed, and is not present in the mainline.
The second point is more interesting: distutils code quality was pretty low, but the main issue was (and still is) the overall design. Unfortunately, adding tests does not address the reliability issues which have plagued the scipy community (and I am sure other communities as well). The main issues w.r.t. build and installation remain:
- Unreliable installation: distutils installs things by simply copying trees built into a build directory (build/ by default). This is a problem when you change your source code (e.g. renaming some modules), as distutils will add things to the existing build tree, and install will then copy both the old and the new targets. As with distutils, the only way to get a reliable build is to first rm -rf build. This alone is a consistent source of issues for numpy/scipy, as many end-users are bitten by it. We somewhat alleviate this by distributing binary installers (which know how to uninstall things, and are built by people familiar with distutils idiocy)
- Inconsistencies between compiler classes. For example, the MSVCCompiler class defines its compiler executable as a string, set as the attribute cc, whereas most other compiler classes define the compiler_so attribute (a list in that case). They also don’t have the same interface
- No consistent, centralized API to obtain basic compilation options (CC, CFLAGS, etc.)
Even more significantly, the fundamental issue of extensibility has not been addressed at all, because the command-based design is still there. This is by far the worst part of the original distutils design, and I fail to see the point of a backward-incompatible successor to distutils which does not address this issue.
Issues with command-based design
Distutils is built around commands, which correspond almost one-to-one to command-line commands: when you run “python setup.py install”, distutils will essentially call install.run after some initialization. This by itself is a relatively common pattern, but the issue lies elsewhere.
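Roughly, the dispatch looks like this (a simplified sketch, not distutils’ actual code path):

from distutils.dist import Distribution

dist = Distribution({"name": "example"})
cmd = dist.get_command_obj("install")  # look up and instantiate the command
cmd.ensure_finalized()                 # resolve its options
# cmd.run()                            # ...and this is what setup.py triggers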
First, each command has its own set of options, but the options of one command often affect the other commands, and there is no easy way for one command to know the options of another. For example, you may want to know the options of the install command at build time. The usual pattern to do so is to instantiate the command whose options you want, finalize it, and read its options, using e.g. get_finalized_command:
install = self.get_finalized_command("install")
install_lib = install.install_lib
This is hard to use correctly, because every command can be reset by other commands, and some commands cannot be instantiated this way depending on the context. Worse, it can cause unexpected issues later on if you are calling a command which has not been run yet (like the install command in a build command). Quite a few subtle bugs in setuptools and in numpy.distutils were/are caused by this.
According to Tarek Ziade (the main maintainer of distutils2), this is addressed in a distutils2 development branch. I cannot comment on it as I have not looked at the code yet.
Distutils has a notion of commands and “sub-commands”. Sub-commands may override each other’s options through the set_undefined_options function, which creates new attributes on the fly. This is every bit as bad as it sounds.
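A small runnable sketch of the pattern (simplified; real commands do this inside their finalize_options):

from distutils.dist import Distribution

dist = Distribution()
build = dist.get_command_obj("build")
build.build_lib = "my-build-dir"  # pretend the user passed --build-lib

install_lib = dist.get_command_obj("install_lib")
# copies build.build_lib into install_lib.build_dir if the latter is
# still None, creating the attribute on the fly
install_lib.set_undefined_options("build", ("build_lib", "build_dir"))
print(install_lib.build_dir)  # my-build-dir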
Moreover, the hardcoding of dependencies between commands and sub-commands significantly hampers extensibility. For example, in numpy, we use some templated source files which are processed into .c files: this is done in the build_src command. Now, because the build command of distutils does not know about build_src, we had to override build as well to make it call build_src. Then came setuptools, which of course did not know about build_src, so we had to conditionally subclass from setuptools to run build_src too. Every command which may potentially trigger this command may need to be overridden, with all the complexity that follows. This is completely insane.
Distutils2 has added the notion of hooks, which are functions run before/after the command they hook into. But because they interact with distutils2 through the command instances, they share all the aforementioned issues, and I suspect they won’t be of much use.
More concretely, let’s consider a simple example: a file generated from a template (say config.pkg.in), containing some information only known at build time (like the version and the time of the build). Doing this correctly is surprisingly involved:
- you need to generate the file in a build command, and put it at the right place in the build directory
- you need to install it at the right place (in-place vs normal build, egg install vs non-egg install vs externally_managed install)
- you may want to automatically include config.pkg.in in sdist
- you may want the file to be installed in bdist/msi/mpkg, so you may need to know all the details of those commands
Each of these steps may be quite complex and error-prone. Some are impossible with a simple hook: it is currently impossible to add files to sdist without rewriting the sdist.run function, AFAIK.
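The generation step itself is trivial, which makes the integration burden all the more frustrating. A sketch (file names and values are just for illustration):

import os, string, time

template = string.Template('version = "$version"\nbuild_time = "$build_time"\n')
content = template.substitute(version="1.0", build_time=time.ctime())

build_dir = "build"  # must match whatever the build command decided
if not os.path.isdir(build_dir):
    os.makedirs(build_dir)
with open(os.path.join(build_dir, "config.pkg"), "w") as f:
    f.write(content)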
To deal with this correctly, the whole command business needs a significant redesign. Several extremely talented people in the scipy community have independently attempted to improve this over the last decade or so, without any success. Nothing short of a rewrite will work here, and commands constitute a good third of distutils code.
distutils2 does not improve the situation w.r.t. building compiled code, but I guess that’s relatively specific to big packages like numpy, scipy or pywin32. Needless to say, the compiler classes are practically impossible to extend (they don’t even share a consistent interface), and very few people know how to add support for new compilers, new tools or new binaries (ctypes extensions, for example).
Overall, I don’t quite understand the rationale for distutils2. It seems that most of the setuptools standardization could have happened without breaking backward compatibility, and the improvements are too minor for people with significant distutils extensions to switch. Certainly, I don’t see myself porting numpy.distutils to distutils2 anytime soon.
Note: most setuptools issues are really distutils issues, in the sense that distutils does not provide the right abstractions to build upon.
I have just released the new version of Bento, 0.0.4. You can get it on github as usual.
Bento itself did not change too much, except for the support of sub-packages and a few other things. But bento can now build both numpy and scipy on the “easy” platforms (linux + Atlas + gcc/clang). This post shows a few cool things that you can do with bento now.
Full distribution check
The best way to use this version of bento is to do the following:
# Download bento and create bentomaker
git clone http://github.com/cournape/Bento.git bento-git
cd bento-git && python bootstrap.py && cd ..
# Download the _bento_build branch from numpy
git clone http://github.com/cournape/numpy.git numpy-git
cd numpy-git && git checkout -b bento_build origin/_bento_build
# Create a source tarball from numpy, configure, build and test numpy
# from that tarball
../bento-git/bentomaker distcheck
For some reason I am still unclear about, the test suite fails to run from distcheck for scipy, but that seems to be more of a nose issue than a problem in bento proper.
Building numpy with clang
Assuming you are on Linux, you can try building numpy with clang, the LLVM-based C compiler. Clang compiles faster than gcc, and generally gives better error messages. Although bento itself does not have any support for clang yet, you can easily tweak the bento scripts to use it. In the top bscript file from numpy, at the end of the post_configure hook, replace every compiler with clang, i.e.:
for flag in ["CC", "PYEXT_CC"]:
    yctx.env[flag] = ["clang"]
Once the project is configured, you can also get a detailed look at the configured options, in the file build/default.env.py. You should not modify this file, but it is very useful to debug build issues. Another aid for debugging configuration options is the build/config.log file. Not only does it list every configuration command (both success and failures), but it also shows the source content as well as the command output.
What’s coming next?
Version 0.0.5 will hopefully have a shorter release cycle than 0.0.4. The goal for 0.0.5 is to make bento good enough that other people can jump into bento development.
The main features I am thinking about are windows and python 3 support, plus a lot of code cleaning and documentation. Windows support should not be too difficult: it is mainly a matter of ripping the Visual Studio support out of numscons/scons and adapting it to yaku. I have already started working on python 3 support as well; the main issue is bootstrapping bento, and finding an efficient process to work on both python 2 and 3 at the same time. Depending on the difficulty, I will also try to add proper dependency handling in yaku for compiled libraries and dependent headers: at the moment, yaku does not detect header changes, nor does it rebuild an extension if the linked libraries change. An alternative is to bite the bullet and start working on integration with waf, which already does all of this internally.