Building MacVim with full Python EPD support

January 2, 2011

If you want to use the :python or :pyfile commands or the omnicompletion plugin of Vim on a Mac with EPD, you will most probably want to recompile it with the Python support linked against the EPD installation instead of the system Python.

Here are the steps to follow (done on the latest stable 7.3 sources):

1. As EPD 6.x 64-bit on Mac OS X does not have the full ETS, you will have to compile against the 32-bit version. So, set the right flags to make it work:


export CFLAGS=-m32
export CPPFLAGS=-m32

Then configure it to use the EPD installation. If your PATH is updated to point to EPD, it is straightforward:


./configure --enable-pythoninterp --with-macarchs=i386

The output should contain something like the following lines:


...
checking Python version... 2.6
checking Python is 1.4 or better... yep
checking Python's install prefix... /Library/Frameworks/Python.framework/Versions/6.3
checking Python's execution prefix... /Library/Frameworks/Python.framework/Versions/6.3
checking Python's configuration directory... /Library/Frameworks/Python.framework/Versions/6.3/lib/python2.6/config
...

Calling make will then run into an issue when generating the icons: they have to be built with the default Mac Python (the EPD Python is missing some of the required modules), but the default Python runs in 64-bit mode. To fix that, edit src/MacVim/icons/Makefile and change the python calls to look like the following:


arch -i386 /usr/bin/python make_icons.py $(OUTDIR)

This ensures the default Mac installation is used in 32-bit mode. Do the same for every python call in the Makefile.

After that, the app should build fine and you can benefit from the full EPD module list from within MacVim.
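
To quickly check which Python MacVim is linked against, you can for instance run the following from within MacVim; it should print the EPD installation prefix (/Library/Frameworks/Python.framework/Versions/6.3 in the configure output above):

:python import sys; print sys.prefix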


Interfacing GSL with Python using Cython / comparison with weave

April 22, 2010

The second example I wrote while testing how easy it is to use a C library from Python with Cython is based on the GNU Scientific Library (GSL). The goal here is to compare how easy it is to do with Cython versus weave.

The first function is a Bessel function (from gsl/gsl_sf_bessel.h):

double gsl_sf_bessel_J0(const double x);
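
As a preview of the rest of the post, a minimal Cython wrapper for this function could look like the following sketch (the file and function names are mine; the extension only needs the GSL headers and to be linked against gsl and gslcblas):

# bessel.pyx -- minimal sketch of a hand-written Cython binding to GSL
cdef extern from "gsl/gsl_sf_bessel.h":
    double gsl_sf_bessel_J0(double x)

def J0(double x):
    """Python-callable wrapper around GSL's Bessel function J0."""
    return gsl_sf_bessel_J0(x)

Once compiled (e.g. through a distutils setup.py passing libraries=['gsl', 'gslcblas']), calling J0(5.0) from Python returns the same value as the C function.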



Interfacing ta-lib with Python using Cython: moving average function example

April 22, 2010

Following up on my previous post about how Cython can be used to improve performance, I wanted to show how easy it is to interact with a C library. Here is the example of ta-lib: TA-Lib is widely used by trading software developers who need to perform technical analysis of financial market data. ta-lib ships a SWIG Python interface, but I wanted my own quick and dirty interface to one specific function. It would be interesting to compare the effectiveness of the SWIG interface with the Cython one (for a later post).

1. Interfacing ta-lib

What you need is a compiled version of the library. On Windows with EPD, I had to install msys to get the commands needed by autotools build systems. With msys and the EPD MinGW install, I compiled ta-lib with a standard ./configure, make, make install. The library and header files were then available under c:\msys\1.0\local\lib and c:\msys\1.0\local\include.
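
As a teaser for the rest of the post, here is a rough Cython sketch of what a hand-written binding to the moving average function TA_MA could look like. The prototype and the TA_MAType constants should be double-checked against ta_func.h and ta_defs.h in your install; the file and function names are mine.

# talib_ma.pyx -- rough sketch, to be checked against the actual ta-lib headers
import numpy as np

cdef extern from "ta-lib/ta_func.h":
    ctypedef int TA_RetCode
    ctypedef int TA_MAType
    TA_RetCode TA_MA(int startIdx, int endIdx, double *inReal,
                     int optInTimePeriod, TA_MAType optInMAType,
                     int *outBegIdx, int *outNBElement, double *outReal)

def moving_average(double[:] data, int period=30, int ma_type=0):
    """Moving average of `data`; ma_type 0 should select the simple moving average."""
    cdef double[:] out = np.zeros(data.shape[0])
    cdef int begidx = 0, nbelem = 0
    TA_MA(0, data.shape[0] - 1, &data[0], period, ma_type,
          &begidx, &nbelem, &out[0])
    # TA_MA fills nbelem values, corresponding to input indices starting at begidx
    return np.asarray(out)[:nbelem]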


Surface plots using a 4th parameter for colormap in Mayavi

March 30, 2010

Playing with Mayavi, I wanted to find an easy way to use the colormap in an mlab.surf() call to display a fourth dimension. It is a very nice way to explore datasets. The image below shows an option valuation tool that overlays the Black & Scholes call delta on top of the surface made from the strikes, the time to expiry and the Black & Scholes call value.

How can you do it?
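
One way to do it is to pass the fourth variable as a separate scalars array. Here is a small self-contained sketch (my own toy data, not the option valuation tool above) using mlab.mesh, which accepts such an array through its scalars keyword:

import numpy as np
from mayavi import mlab   # on EPD 6.x: from enthought.mayavi import mlab

x, y = np.mgrid[-3:3:100j, -3:3:100j]
z = np.sin(x * y)      # the surface height (third dimension)
w = np.cos(x + y)      # the extra quantity mapped onto the colormap

# scalars decouples the color from the elevation: the surface is z,
# but the colormap now encodes the fourth variable w
mlab.mesh(x, y, z, scalars=w, colormap='jet')
mlab.colorbar(title='w')
mlab.show()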


NumPy/MKL vs Matlab performance

March 16, 2010

Further to the post I wrote on the MKL performance improvement in NumPy, I have tried to get some figures comparing it to Matlab. Here are some results. Any suggestion to improve the comparison is welcome; I will update the post with the values I collect.

Here is the Matlab script I used to compare against the Python code:

disp('Eig');tic;data=rand(500,500);eig(data);toc;
disp('Svd');tic;data=rand(1000,1000);[u,s,v]=svd(data);s=svd(data);toc;
disp('Inv');tic;data=rand(1000,1000);result=inv(data);toc;
disp('Det');tic;data=rand(1000,1000);result=det(data);toc;
disp('Dot');tic;a=rand(1000,1000);b=inv(a);result=a*b-eye(1000);toc;
disp('Done');

Each line is linked to the corresponding Python function in my test script (see my MKL post).

The results are interesting. Tests have been made with the R2007a and R2008a versions and compared to EPD 6.1 with NumPy using the MKL.

Here are the timings on an Intel dual core computer running Matlab R2007a :

Eig : Elapsed time is 0.718934 seconds.
Svd : Elapsed time is 17.039340 seconds.
Inv : Elapsed time is 0.525181 seconds.
Det : Elapsed time is 0.200815 seconds.
Dot : Elapsed time is 0.958015 seconds.

And those are the timings on an Intel Xeon 8 cores machine running Matlab R2008a :

Eig : Elapsed time is 1.235884 seconds.
Svd : Elapsed time is 25.971139 seconds.
Inv : Elapsed time is 0.277503 seconds.
Det : Elapsed time is 0.142898 seconds.
Dot : Elapsed time is 0.354413 seconds.

Compared to the NumPy/MKL tests, here are the results:

Function          Core2Duo with MKL   Core2Duo R2007a   Speedup using NumPy
test_eigenvalue   752ms               718ms             0.96
test_svd          4608ms              17039ms           3.70
test_inv          418ms               525ms             1.26
test_det          186ms               200ms             1.07
test_dot          666ms               958ms             1.44

Function          Xeon 8-core with MKL   Xeon 8-core R2008a   Speedup using NumPy
test_eigenvalue   772ms                  986ms                1.28
test_svd          2119ms                 26081ms              12.5
test_inv          153ms                  230ms                1.52
test_det          65ms                   105ms                1.61
test_dot          235ms                  287ms                1.23

NumPy performance improvement with the MKL

January 15, 2010

After the release of EPD 6.0, which now links numpy against the Intel MKL library (10.2), I wanted to get some insight into the performance impact of using the MKL.

What impact does the MKL have on numpy performance?

I have very roughly started a basic benchmark comparing EPD 5.1 with EPD 6.0. The former uses numpy 1.3 with BLAS and the latter numpy 1.4 with the MKL. I am using a Thinkpad T60 with an Intel dual-core 2 GHz CPU running 32-bit Windows.

Note: the benchmarking methodology is really rough and could be made much more realistic, but it gives a first insight.

Contrary to what I said at the last LFPUG meeting on Wednesday, you can control the maximum number of threads used by the system with the OMP_NUM_THREADS environment variable. I have updated the benchmark script to show its value when running it.
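
For example, the variable can be set (or inspected) from Python itself, before NumPy first loads the MKL; a small hypothetical snippet:

import os

# cap the number of threads the MKL/OpenMP runtime may use;
# this generally has to happen before numpy (and thus the MKL) is loaded
os.environ['OMP_NUM_THREADS'] = '2'
print(os.environ.get('OMP_NUM_THREADS'))

import numpy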

Here are some results :

  1. Testing linear algebra functions

I took some of the most often used functions and simply compared the CPU time using the IPython timeit command.

Example 1: eigenvalues

import numpy
from numpy.random import random   # imports shared by all the examples below

def test_eigenvalue():
    i = 500
    data = random((i, i))
    result = numpy.linalg.eig(data)

The results are interesting: 752ms for the MKL version versus 3376ms for the ATLAS one. That is about 4.5x faster. Testing the very same code on Matlab 7.4 (R2007a) gives a timing of 790ms.

Example 2: singular value decomposition

def test_svd():
    i = 1000
    data = random((i, i))
    result = numpy.linalg.svd(data)
    result = numpy.linalg.svd(data, full_matrices=False)

Results are 4608ms with the MKL versus 15990ms without. This is nearly 3.5x faster.

Example 3: matrix inversion

def test_inv():
    i = 1000
    data = random((i, i))
    result = numpy.linalg.inv(data)

Results are 418ms with the MKL versus 1457ms without. This is 3.5x faster.

Example 4: det()

def test_det():
    i = 1000
    data = random((i, i))
    result = numpy.linalg.det(data)

Results are 186ms with the MKL versus 400ms without. This is 2x faster.

Example 5: dot()

def test_dot():
    i = 1000
    a = random((i, i))
    b = numpy.linalg.inv(a)
    result = numpy.dot(a, b) - numpy.eye(i)

Results are 666ms with the MKL versus 2444ms without. This is 3.5x faster.
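
For reference, once the test_* functions above are defined, a minimal harness along these lines (my own sketch, not the original benchmark script) reproduces this kind of measurement:

import timeit

for name in ('test_eigenvalue', 'test_svd', 'test_inv', 'test_det', 'test_dot'):
    timer = timeit.Timer('%s()' % name,
                         setup='from __main__ import %s' % name)
    best = min(timer.repeat(repeat=3, number=1))
    print('%s: %.0f ms' % (name, best * 1000))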

Conclusion:

Linear algebra functions show a clear performance improvement. I am happy to collect more information on this if you have some home-made benchmarks. If enough information comes in, we should consider publishing the results as an official benchmark somewhere.

Function          Without MKL   With MKL   Speedup
test_eigenvalue   3376ms        752ms      4.5x
test_svd          15990ms       4608ms     3.5x
test_inv          1457ms        418ms      3.5x
test_det          400ms         186ms      2x
test_dot          2444ms        666ms      3.5x

For those of you wanting to test your environment, feel free to use the script here below.