The “every Linux distribution should have the same package manager” fallacy

I have heard several times the claim that every Linux distribution should have the same package manager (with the understanding that rpm vs deb is already one too many), and it came up once again recently in a well publicized video (see the Linux Hater blog).

The argument goes as follows: doing packaging takes time, and making packages for every distribution is a waste of time. If every distribution used the same package system, it would be much better for third-party distributors. Many people answer that competition is good, and that having many distributions is what makes Linux great – [insert usual stuff about how good Linux is].

While it is true that multiple package systems mean more work, saying that there should be only one is kinda clueless – I wonder if anyone pushing for this has ever done any rpm/deb packaging. What makes deb vs rpm a problem is not that they are different formats, like say zip vs gzip, but that they are deployed on different systems. A RHEL rpm won't work well on Mandrake, and even if many Debian .deb packages work on Ubuntu, it is not always ideal. The problem is that each distribution-specific package needs to be designed for the target distribution. To build a rpm or a deb package, you need:

  • To decide where to put what
  • To encode the exact versions for the dependencies
  • To decide how to handle configuration files, set up start/stop scripts for servers, etc…

Basically, almost everything which makes the difference between distributions A and B! For file locations, the LSB tries to standardize things, but some details still differ, like where to put 64 vs 32 bits libraries. One distribution may ship libfoo 1.2, another one 1.3, so even if the two are compatible, you cannot use the same dependency for every distribution. And some libraries do not even have the same name under different distributions.
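As a concrete illustration of the naming problem, here is how the same zlib build dependency is spelled in the two packaging systems (the package names below follow the conventional Fedora and Debian archives; treat this as an indicative sketch, not a complete spec/control file):

```
# rpm .spec file (Fedora/RHEL naming):
BuildRequires: zlib-devel

# debian/control (Debian/Ubuntu naming):
Build-Depends: zlib1g-dev
```

Same library, two different names: a dependency on zlib-devel cannot be resolved on a Debian system, whatever the archive format is.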

So requesting the same package manager for every distribution is almost equivalent to asking that every distribution be the same. You can't have one without the other. You can argue that there should be only one distribution, but don't forget that Ubuntu appeared only about five years ago.

Why people should stop talking about git speed

As I have already written in a previous post, I have moved away from bzr to git for most of my software projects (I still prefer bzr for documents, like my research papers). Many, if not most, comparisons of git vs other tools focus on speed. True, git is quite fast for source code management, but I think this kind of misses the point of git. It took me time to appreciate it, but one of git's killer features for source code control is the notion of content tracking. Bzr (and I believe hg, although I could not find good information on that point) uses file ids, i.e. it tracks files, and a tree is a set of files. Git, on the contrary, tracks content, not files. In other words, it does not treat files individually, but always internally considers the whole tree.

This may seem like an internal detail, and an annoyance, because it leaks at the UI level quite a lot (the so-called index is linked to this). But it means that git can record the history of code, instead of files, quite accurately. This is especially visible with git blame. One example: I recently started a massive surgery on the numpy C source code. Because of some C limitations, the numpy core C code was in a couple of gigantic source files, and I split it into more logical units. But this kind of reorganization breaks svn blame heavily. If you just rename a file, svn can more or less follow it, but if you split one file into two, svn blame becomes useless. Because git tracks the whole tree, the blame command can be asked to detect code moves across files. For example, git blame with rename detection gives me the following on one file in numpy:

dc35f24e numpy/core/src/arrayobject.c         1) #define PY_SSIZE_T_CLEAN
dc35f24e numpy/core/src/arrayobject.c         2) #include <Python.h>
dc35f24e numpy/core/src/arrayobject.c         3) #include "structmember.h"
dc35f24e numpy/core/src/arrayobject.c         4)
65d13826 numpy/core/src/arrayobject.c         5) /*#include <stdio.h>*/
5568f288 scipy/base/src/multiarraymodule.c    6) #define _MULTIARRAYMODULE
2f91f91e numpy/core/src/multiarraymodule.c    7) #define NPY_NO_PREFIX
2f91f91e numpy/core/src/multiarraymodule.c    8) #include "numpy/arrayobject.h"
dc35f24e numpy/core/src/arrayobject.c         9) #include "numpy/arrayscalars.h"
38f46d90 numpy/core/src/multiarray/common.c  10)
38f46d90 numpy/core/src/multiarray/common.c  11) #include "config.h"
0f81da6f numpy/core/src/multiarray/common.c  12)
71875d5c numpy/core/src/multiarray/common.c  13) #include "usertypes.h"
71875d5c numpy/core/src/multiarray/common.c  14)  
0f81da6f numpy/core/src/multiarray/common.c  15) #include "common.h"
5568f288 scipy/base/src/arrayobject.c        16)
65d13826 numpy/core/src/arrayobject.c        17) /*
65d13826 numpy/core/src/arrayobject.c        18)  * new reference
65d13826 numpy/core/src/arrayobject.c        19)  * doesn't alter refcount of chktype or mintype ---
65d13826 numpy/core/src/arrayobject.c        20)  * unless one of them is returned
65d13826 numpy/core/src/arrayobject.c        21)  */

Notice that the original file can be found for every line of code in the new file. The original author and date can be recovered as well; I just removed them for the blog post.

This is truly impressive, and is one of the reasons why git is so far ahead of the competition IMHO. This kind of feature is extremely useful for open source projects, much more so than rename support. I am ready to deal with quite a few (real) git UI annoyances for this.


It looks like my example was not very clear. I am not interested in following renames of a file: in the example above, the file was not arrayobject.c first, then renamed to multiarraymodule.c, and later to common.c. The file was created from scratch, with content taken from those files at some point. You can try the following simplified example. First, create two files, sum.c and prod.c:

#include <math.h>

double sum(const double* in, int n)
{
        int i;
        double acc = 0;

        for(i = 0; i < n; ++i) {
                acc += in[i];
        }

        return acc;
}

and prod.c:

#include <math.h>

double prod(const double* in, int n)
{
        int i;
        double acc = 1;

        for(i = 0; i < n; ++i) {
                acc *= in[i];
        }

        return acc;
}

Commit to your favorite VCS. Then, you reorganize the code, and in particular you put the code of both files into a new file common.c:

#include <math.h>

double prod(const double* in, int n)
{
        int i;
        double acc = 1;

        for(i = 0; i < n; ++i) {
                acc *= in[i];
        }

        return acc;
}

double sum(const double* in, int n)
{
        int i;
        double acc = 0;

        for(i = 0; i < n; ++i) {
                acc += in[i];
        }

        return acc;
}

And commit. Then, try blame. Rename tracking won't help at all, since nothing was renamed. On this very simple example, you could improve things by first renaming, say, sum.c to common.c and then adding the content of prod.c to common.c, but you would still lose the fact that the prod function comes from prod.c. git blame -C -M gives me the following:

^ae7f28a prod.c  1) #include <math.h>
^ae7f28a prod.c  2)
^ae7f28a prod.c  3) double prod(const double* in, int n)
^ae7f28a prod.c  4) {
^ae7f28a prod.c  5)         int i;
^ae7f28a prod.c  6)         double acc = 1;
^ae7f28a prod.c  7)
^ae7f28a prod.c  8)         for(i = 0; i < n; ++i) {
^ae7f28a prod.c  9)                 acc *= in[i];
^ae7f28a prod.c 10)         }
^ae7f28a prod.c 11)
^ae7f28a prod.c 12)         return acc;
^ae7f28a prod.c 13) }
^ae7f28a sum.c  14)
^ae7f28a sum.c  15) double sum(const double* in, int n)
^ae7f28a sum.c  16) {
^ae7f28a sum.c  17)         int i;
^ae7f28a sum.c  18)         double acc = 0;
^ae7f28a sum.c  19)
^ae7f28a sum.c  20)         for(i = 0; i < n; ++i) {
^ae7f28a sum.c  21)                 acc += in[i];
^ae7f28a sum.c  22)         }
^ae7f28a sum.c  23)
^ae7f28a sum.c  24)         return acc;
^ae7f28a sum.c  25) }
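For reference, the whole experiment can be scripted; the following is a sketch (it assumes git is available, and sets a throwaway identity so that the commits succeed):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.name demo
git config user.email demo@example.com

# Two independent files, each with its own function.
cat > sum.c <<'EOF'
#include <math.h>

double sum(const double* in, int n)
{
        int i;
        double acc = 0;

        for(i = 0; i < n; ++i) {
                acc += in[i];
        }

        return acc;
}
EOF

cat > prod.c <<'EOF'
#include <math.h>

double prod(const double* in, int n)
{
        int i;
        double acc = 1;

        for(i = 0; i < n; ++i) {
                acc *= in[i];
        }

        return acc;
}
EOF

git add sum.c prod.c
git commit -q -m "add sum.c and prod.c"

# Merge both files into common.c and delete the originals, in one
# commit (the duplicated #include does not matter for the demo).
cat prod.c sum.c > common.c
git rm -q sum.c prod.c
git add common.c
git commit -q -m "move everything into common.c"

# -C asks blame to look for lines copied from the other files touched
# by the same commit; -M detects lines moved around inside a file.
git blame -C -M common.c
```

Without -C, every line of common.c is simply attributed to the commit that created it; with -C, the filename column points back to prod.c and sum.c as in the output above.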

hg blame, on the contrary, tells me that everything comes from common.c. Even when using the rename trick, I cannot get more than the following with hg blame -f -c:

81c4468e59f9    sum.c: #include <math.h>
81c4468e59f9    sum.c:
81c4468e59f9    sum.c: double sum(const double* in, int n)
81c4468e59f9    sum.c: {
81c4468e59f9    sum.c:         int i;
81c4468e59f9    sum.c:         double acc = 0;
81c4468e59f9    sum.c:
81c4468e59f9    sum.c:         for(i = 0; i < n; ++i) {
81c4468e59f9    sum.c:                 acc += in[i];
81c4468e59f9    sum.c:         }
81c4468e59f9    sum.c:
81c4468e59f9    sum.c:         return acc;
81c4468e59f9    sum.c: }
3c1ac7db76ba common.c:
3c1ac7db76ba common.c: double prod(const double* in, int n)
3c1ac7db76ba common.c: {
3c1ac7db76ba common.c:         int i;
3c1ac7db76ba common.c:         double acc = 1;
3c1ac7db76ba common.c:
3c1ac7db76ba common.c:         for(i = 0; i < n; ++i) {
3c1ac7db76ba common.c:                 acc *= in[i];
3c1ac7db76ba common.c:         }
3c1ac7db76ba common.c:
3c1ac7db76ba common.c:         return acc;
3c1ac7db76ba common.c: }

First steps toward C code coverage in numpy

For quite some time, I wanted to add code coverage to the C part of numpy. The upcoming port to python 3k will make this even more useful, and besides, Stefan Van Der Walt promised me a beer if I could do it.

There are several tools to do code coverage of C code – the most well known is gcov (I obviously discard non-free tools; those tend to be fairly expensive anyway). The problem with gcov is its inability to do code coverage of dynamically loaded code, such as python extensions. The solution is thus to build numpy and statically link it into python, which is not totally straightforward.

Statically linking simple extensions

I first looked into simpler extensions: the basic solution is to add the source files of the extension to Modules/Setup.local in the python sources. For example, to build the zlib module statically, you add

zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz

Then run make; this will statically link the zlib module into python. One simple way to check whether the extension is indeed statically linked is to look at the __file__ attribute of the extension module. In the dynamically loaded case, __file__ gives the location of the .so file, but the attribute does not exist at all in the static case.
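For instance (a sketch: ./python is the interpreter built from the patched sources, and the fallback string is arbitrary):

```shell
# Prints the path of the .so for a dynamically loaded zlib, or the
# fallback message when zlib is statically linked into the interpreter.
./python -c "import zlib; print(getattr(zlib, '__file__', 'no __file__: statically linked'))"
```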

Code coverage

To use gcov, two compilation flags are needed, and one link flag:

gcc -c -fprofile-arcs -ftest-coverage …
gcc … -lgcov

Note that -lgcov must be near the end of the link command (after the other library flags). To do code coverage of e.g. the zlib module, the following works in Modules/Setup.local:

zlib zlibmodule.c -I$(prefix)/include -fprofile-arcs -ftest-coverage -L$(exec_prefix)/lib -lz -lgcov

If everything goes right, after a make call you should have two files, zlibmodule.gcda and zlibmodule.gcno, in your Modules directory. You can now run gcov in Modules to get the code coverage:

cd Modules && gcov zlibmodule

Of course, since nothing has been run yet, the code coverage is 0. After running the zlib test suite, things look better:

./python Lib/test/ && gcov -o Modules Modules/zlibmodule

The -o option tells gcov where to look for the coverage data (the .gcda and .gcno files), and the output is

File ‘./Modules/zlibmodule.c’
Lines executed:74.55% of 448

Building numpy statically

I quickly added a hack to numscons to build the numpy C code statically instead of dynamically; it lives in the static_build branch, available on github. As is, numpy will not work: some source code modifications are needed to make it work. The modifications reside in the static_link branch on github as well.

Then, to statically build numpy with code coverage:

LINKFLAGSEND="-lgcov" CFLAGS="-pg -fprofile-arcs -ftest-coverage" $PYTHON scons --static=1

where $PYTHON refers to the python you built from sources. This will build every extension as a static library. To link them into the python binary, I simply added a fake source file, and linked the numpy libraries against the fake source in Modules/Setup.local:

multiarray fake.c -L$LIBPATH -lmultiarray -lnpymath
umath fake.c -L$LIBPATH -lumath -lnpymath
_sort fake.c -L$LIBPATH -l_sort -lnpymath

where LIBPATH refers to the path where the static numpy libraries can be found (e.g. build/scons/numpy/core in your numpy source tree). To run the test suite, one has to make sure to import a numpy from which the multiarray, umath and _sort extensions have been removed; it will crash otherwise, as the extensions would be present twice in the python process (once dynamically loaded, once statically linked). The test suite more or less runs (~1500 tests), and one can get code coverage afterwards. For the multiarray extension, here is what I get:

File ‘build/scons/numpy/core/src/multiarray/common.c’
Lines executed:52.56% of 293
build/scons/numpy/core/src/multiarray/common.c:creating ‘common.c.gcov’

File ‘build/scons/numpy/core/include/numpy/npy_math.h’
Lines executed:50.00% of 12
build/scons/numpy/core/include/numpy/npy_math.h:creating ‘npy_math.h.gcov’

File ‘build/scons/numpy/core/src/multiarray/arraytypes.c’
Lines executed:62.23% of 1030
build/scons/numpy/core/src/multiarray/arraytypes.c:creating ‘arraytypes.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/hashdescr.c’
Lines executed:68.38% of 117
build/scons/numpy/core/src/multiarray/hashdescr.c:creating ‘hashdescr.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/numpyos.c’
Lines executed:81.48% of 189
build/scons/numpy/core/src/multiarray/numpyos.c:creating ‘numpyos.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/scalarapi.c’
Lines executed:47.43% of 350
build/scons/numpy/core/src/multiarray/scalarapi.c:creating ‘scalarapi.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/descriptor.c’
Lines executed:61.96% of 1028
build/scons/numpy/core/src/multiarray/descriptor.c:creating ‘descriptor.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/flagsobject.c’
Lines executed:42.31% of 208
build/scons/numpy/core/src/multiarray/flagsobject.c:creating ‘flagsobject.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/ctors.c’
Lines executed:64.69% of 1583
build/scons/numpy/core/src/multiarray/ctors.c:creating ‘ctors.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/iterators.c’
Lines executed:70.41% of 774
build/scons/numpy/core/src/multiarray/iterators.c:creating ‘iterators.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/mapping.c’
Lines executed:77.95% of 721
build/scons/numpy/core/src/multiarray/mapping.c:creating ‘mapping.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/number.c’
Lines executed:51.80% of 361
build/scons/numpy/core/src/multiarray/number.c:creating ‘number.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/getset.c’
Lines executed:44.09% of 372
build/scons/numpy/core/src/multiarray/getset.c:creating ‘getset.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/sequence.c’
Lines executed:50.00% of 60
build/scons/numpy/core/src/multiarray/sequence.c:creating ‘sequence.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/methods.c’
Lines executed:47.35% of 942
build/scons/numpy/core/src/multiarray/methods.c:creating ‘methods.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/convert_datatype.c’
Lines executed:56.11% of 442
build/scons/numpy/core/src/multiarray/convert_datatype.c:creating ‘convert_datatype.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/convert.c’
Lines executed:66.67% of 183
build/scons/numpy/core/src/multiarray/convert.c:creating ‘convert.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/shape.c’
Lines executed:76.81% of 345
build/scons/numpy/core/src/multiarray/shape.c:creating ‘shape.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/item_selection.c’
Lines executed:55.07% of 937
build/scons/numpy/core/src/multiarray/item_selection.c:creating ‘item_selection.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/calculation.c’
Lines executed:59.08% of 523
build/scons/numpy/core/src/multiarray/calculation.c:creating ‘calculation.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/usertypes.c’
Lines executed:0.00% of 111
build/scons/numpy/core/src/multiarray/usertypes.c:creating ‘usertypes.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/refcount.c’
Lines executed:66.67% of 129
build/scons/numpy/core/src/multiarray/refcount.c:creating ‘refcount.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/conversion_utils.c’
Lines executed:59.49% of 316
build/scons/numpy/core/src/multiarray/conversion_utils.c:creating ‘conversion_utils.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/buffer.c’
Lines executed:56.00% of 25
build/scons/numpy/core/src/multiarray/buffer.c:creating ‘buffer.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/scalartypes.c’
Lines executed:42.42% of 877
build/scons/numpy/core/src/multiarray/scalartypes.c:creating ‘scalartypes.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/ucsnarrow.c’
Lines executed:89.36% of 47
build/scons/numpy/core/src/multiarray/ucsnarrow.c:creating ‘ucsnarrow.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/arrayobject.c’
Lines executed:58.75% of 514
build/scons/numpy/core/src/multiarray/arrayobject.c:creating ‘arrayobject.c.gcov’

File ‘build/scons/numpy/core/src/multiarray/multiarraymodule.c’
Lines executed:49.12% of 1134
build/scons/numpy/core/src/multiarray/multiarraymodule.c:creating ‘multiarraymodule.c.gcov’

The figures themselves are not that meaningful ATM, since the test suite does not run completely, and the built numpy is a quite bastardized version of the real numpy.

The numpy modifications, although small, are very hackish – I just wanted to see whether this could work at all. If time permits, I hope to automate most of this, and to have a system which can be integrated into the trunk. I am still not sure about the best way to build the extensions themselves. I can see other solutions, such as producing a single file per extension, with every internal numpy header/source integrated, so that they could easily be built from Setup.local. Or maybe a patch to the python sources so that make in the python tree would automatically build numpy.