Python/Harmattan/Performance Considerations for Python Apps

Based on notes from "Python faster (for fmms initially)" and on Qt startup time tips.

See the python.org page for general Python optimizations.


Profiling

Do not worry about performance unless you notice a problem. Then only optimize what you can justify with profiling.

To profile Python code, run it with

$ python -m cProfile -o .profile TOPLEVEL_SCRIPT.py

To then analyze the results

$ python -m pstats .profile
> sort cumulative
> stats 40

This sorts the results by cumulative time (the time spent in a function plus in all of the functions it calls) and then displays the top 40 entries.
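The same analysis can be done programmatically with the pstats module, which is handy for dumping a report from a script (the file name matches the -o argument above):

# Load the profile written by cProfile and print the 40 most expensive
# entries, sorted by cumulative time.
import pstats

stats = pstats.Stats(".profile")
stats.sort_stats("cumulative")
stats.print_stats(40)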

See the python.org page for more information on profiling.

Improving Performance

Interpreter Choice

Unladen Swallow

PEP 3146 - Merging Unladen Swallow into CPython

Currently Unladen Swallow does not provide much of a performance benefit, while it has a longer startup time and uses more memory.

Psyco / Cython

Do these work on ARM?

Shedskin

C with CTypes
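Hot inner loops can be moved into a C library and called from Python through the standard-library ctypes module. A minimal sketch (the library name assumes a glibc system; a real project would ship and load its own shared library):

# Call into libc instead of doing the work in Python.
import ctypes

libc = ctypes.CDLL("libc.so.6")
print(libc.strlen(b"hello maemo"))   # prints 11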

Startup

/usr/bin/python Startup

Preloaders exist, such as PyLauncher, that keep a Python process around with heavyweight imports such as gtk already loaded. On application launch, the preloader process is forked.

Preloaders were favored back in the Maemo 4.1 days but have fallen out of favor lately. Concerns center on always keeping around an otherwise unused Python process with heavy pieces of code imported [1].
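The idea behind such a preloader, reduced to a rough sketch (this is not the actual PyLauncher code):

# Pay for the heavy imports once, then fork a copy of the warmed-up
# interpreter for each application launch request.
import os
import gtk   # example of a heavyweight import paid only once

def launch(app_main):
    pid = os.fork()
    if pid == 0:        # child becomes the application process
        app_main()
        os._exit(0)
    return pid          # parent stays resident, waiting for the next launch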

Parsing .py files

Stripping the Code

Stripping here means removing blank lines and comments from the shipped .py files so there is less text for the interpreter to parse at startup; a rough sketch of such a step follows the benchmarks below. A major downside is that the code your users run is different from the code you develop with, so any stack traces that users provide will be a bit more complicated to decipher.

Benchmarks from stripping code [2]:

First test - Normal code

2104 lines of code
580 blank lines
215 code lines
Load time from icon click to fully loaded - 10.04 seconds

Second test - Cleared up code

2104 lines of code
0 blank lines
80 code lines
Load time from icon click to fully loaded - 9.25 seconds

Third test - Cleared up code, further reduced

1469 lines of code
0 blank lines
80 code lines
Load time from icon click to fully loaded - 8.40 seconds (5 tests, ranging from 8.09 to 8.60)
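A build step for this kind of stripping could look roughly like the sketch below. It is deliberately naive: it only drops blank lines and comment-only lines, and a real tool would use the tokenize module to also handle inline comments, shebang lines and coding declarations safely.

# Naive stripper: rewrite each file with blank and comment-only lines removed.
import sys

def strip_source(path):
    with open(path) as source:
        lines = source.readlines()
    kept = [line for line in lines
            if line.strip() and not line.strip().startswith("#")]
    with open(path, "w") as stripped:
        stripped.writelines(kept)

if __name__ == "__main__":
    for name in sys.argv[1:]:
        strip_source(name)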

Generating pyc/pyo files

After importing a module, Python saves the compiled bytecode (a .pyc file) to avoid re-parsing the source on the next import. It writes these files next to the .py files, which means that if the user does not have write access to that directory, Python will not be able to cache them.

Generating pyc/pyo files should be done in the package's postinst/postrm scripts, per Debian Python Policy [3].

Approaches

$ python -m py_compile src/*.py [4]
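For whole directories, the compileall module (see the notes on .pyo files below) does the same job. A sketch of what an install-time compile step could do, with the path only as an example:

# Byte-compile every module under the package's installed directory so the
# .pyc files exist even though the end user cannot write there.
import compileall

compileall.compile_dir("/usr/lib/python2.6/site-packages/myapp", quiet=True)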

pyo Files

A decent description of pyo files [5]

  • When the Python interpreter is invoked with the -O flag, optimized code is generated and stored in ‘.pyo’ files. The optimizer currently doesn't help much; it only removes assert statements. When -O is used, all bytecode is optimized; .pyc files are ignored and .py files are compiled to optimized bytecode.
  • Passing two -O flags to the Python interpreter (-OO) will cause the bytecode compiler to perform optimizations that could in some rare cases result in malfunctioning programs. Currently only __doc__ strings are removed from the bytecode, resulting in more compact ‘.pyo’ files. Since some programs may rely on having these available, you should only use this option if you know what you're doing.
  • A program doesn't run any faster when it is read from a ‘.pyc’ or ‘.pyo’ file than when it is read from a ‘.py’ file; the only thing that's faster about ‘.pyc’ or ‘.pyo’ files is the speed with which they are loaded.
  • When a script is run by giving its name on the command line, the bytecode for the script is never written to a ‘.pyc’ or ‘.pyo’ file. Thus, the startup time of a script may be reduced by moving most of its code to a module and having a small bootstrap script that imports that module. It is also possible to name a ‘.pyc’ or ‘.pyo’ file directly on the command line.
  • The module ‘compileall’ can create ‘.pyc’ files (or ‘.pyo’ files when -O is used) for all modules in a directory.
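Putting that together, running compileall under the different interpreter flags produces the different file types:

$ python -m compileall src/        # writes .pyc files
$ python -O -m compileall src/     # writes .pyo files (assert statements removed)
$ python -OO -m compileall src/    # writes .pyo files with docstrings also removed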

Perceived Startup Performance

hildon_gtk_window_take_screenshot takes advantage of user perception to make the app seem to launch faster: once the UI is fully drawn, a screenshot of the window is cached so it can be shown immediately on the next launch while the real UI is still loading.
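As a rough sketch, and assuming the python-hildon bindings expose the C function under the same name (worth verifying before relying on it):

# Assumption: python-hildon exposes hildon_gtk_window_take_screenshot() at
# module level, mirroring the C API.
import hildon

def on_window_ready(window):
    # Call this once the UI is fully drawn; it caches a screenshot that the
    # system can show instantly the next time the application is launched.
    hildon.hildon_gtk_window_take_screenshot(window, True)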

Responsiveness

Worker Threads

The One Ring keeps its DBus logic and its networking logic on separate threads. It achieves this separation through a worker thread that the DBus thread posts tasks to; results come back as callbacks in the DBus thread.

See AsyncLinearExecutor and some example code
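A stripped-down version of that pattern, using only the standard library (the real AsyncLinearExecutor is more involved):

# One worker thread drains a queue of jobs posted by the DBus/main thread.
# Here the callback fires on the worker thread; real code would marshal it
# back to the main loop (for example with gobject.idle_add).
import threading
import Queue   # named "queue" on Python 3

class AsyncExecutor(object):

    def __init__(self):
        self._tasks = Queue.Queue()
        worker = threading.Thread(target=self._run)
        worker.setDaemon(True)
        worker.start()

    def submit(self, func, on_result):
        self._tasks.put((func, on_result))

    def _run(self):
        while True:
            func, on_result = self._tasks.get()
            on_result(func())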

Memory Usage

FAQ

Is Python slow?

The standard response is "it depends". For a graphical application that is not doing much processing, a user will probably not notice that it is written in Python. Compare that to an experiment by epage of writing a GStreamer video filter in Python, which at best ran at 2 seconds per frame.