I get a lot of use out of PyPy. In fact, it’s become my default Python interpreter, replacing CPython, at least for Python 2.x code. Python 3.x support in PyPy is coming real soon now: most of the tests are passing, so the next release will probably include it.
So, I wanted to write a little about the virtues of PyPy, and its potential to be your default Python interpreter, too. I also want to talk about the main issues that might present roadblocks at the moment, and how you might work around them. Finally, while most benchmarks focus on PyPy’s speed, I’m also going to examine its memory usage.
Before we talk about improvements, it’s difficult to overstate how important compatibility is. PyPy is, at least for my use cases, a drop-in replacement for CPython 2.7. It’s so compatible that I’ve symlinked <code>/usr/local/bin/python2.7</code> to pypy, and have been using it that way for so long, without incident, that I’d forgotten I still had that set up. There are lots of Python “variants” out there, like Cython, but when Python is the basis for many tools you use daily, compatibility is the first stumbling block for any would-be replacement.
There are some compatibility issues. As I understand it, CPython’s C-level extension API isn’t fully supported yet. PyPy has its own C-level interface, CFFI, and support for ctypes is a newer addition. So, a few native extension libraries for Python will not work with PyPy as yet. That may sound bad, but it’s never really been an issue for me. Without paying attention to which of my libraries are pure Python code, and which are native code, virtually everything I’ve tried to do with PyPy has just worked, first time.
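For a sense of what the ctypes route looks like, here’s a minimal sketch that calls the C library’s <code>strlen()</code> directly. It assumes a Unix-like system; the library soname varies by platform, so I fall back to a common Linux name:

```python
import ctypes
import ctypes.util

# Locate and load the C standard library (the soname is platform-specific).
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# Declare strlen()'s signature so ctypes marshals arguments correctly.
libc.strlen.restype = ctypes.c_size_t
libc.strlen.argtypes = [ctypes.c_char_p]

print(libc.strlen(b"hello, pypy"))  # 11
```

Because ctypes works at the shared-library level rather than against CPython’s internal C API, code like this is much easier for an alternative interpreter to support.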
PyPy provides replacements for some of these libraries, like NumPy and SciPy, which are actually better optimised by being part of PyPy itself. Would any other major libraries be an issue? Probably. Do you actually use or need those libraries? Possibly not. Also consider that, even if you’re using some native extension library, another library implemented in pure Python may be a valid alternative. That’s because, with PyPy, pure Python code is fast: almost as fast as C code in some cases, or at least in the same ballpark.
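To make that claim concrete, here’s the kind of tight, pure-Python loop that PyPy’s JIT optimises well. The prime-counting function is my own illustrative example, not a standard benchmark; run it under both interpreters and compare the timings:

```python
import timeit

def count_primes(limit):
    """Naive trial division: a tight, pure-Python loop of the
    kind that a tracing JIT compiles down to fast machine code."""
    count = 0
    for n in range(2, limit):
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                break
        else:
            count += 1
    return count

# Run this script under both CPython and PyPy to compare.
print(count_primes(10000))  # 1229 primes below 10,000
print(timeit.timeit(lambda: count_primes(10000), number=10))
```

On workloads like this, the gap between the two interpreters is usually far larger than the average sixfold figure.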
For example, I believe PyPy now implements cElementTree as native code, just as CPython does, but in PyPy 1.7 it was pure Python code. Stefan Welts’ benchmarks showed that in one or two of five tests (a 4.5 MB structure, and the 274 KB hamlet.xml), speed was greatly improved over CPython running the same pure-Python ElementTree code.
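For reference, the ElementTree API being benchmarked looks like this; the toy document below is my own stand-in, not one of the benchmark inputs:

```python
import xml.etree.ElementTree as ET

# A toy document standing in for the benchmark inputs.
doc = "<scene><actor id='1'>Hamlet</actor><actor id='2'>Horatio</actor></scene>"

root = ET.fromstring(doc)
names = [actor.text for actor in root.iter("actor")]
print(names)  # ['Hamlet', 'Horatio']
```

The same code runs unmodified whether the module underneath is implemented in C or in pure Python, which is exactly why the interpreter is free to swap the implementation.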
Of course, this doesn’t help if you really need a C library’s functionality, for, say, accessing some new piece of hardware. In that case, it’s a matter of porting the code to CFFI.
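As a taste of what such a port involves, here’s a minimal CFFI sketch binding the C library’s <code>sqrt()</code>. It assumes a Unix-like system with the cffi package installed, and the library soname is platform-specific:

```python
from cffi import FFI
import ctypes.util

ffi = FFI()

# Paste the C declaration, as it appears in the header, into cdef().
ffi.cdef("double sqrt(double x);")

# Load the math library and call the function directly.
libm = ffi.dlopen(ctypes.util.find_library("m") or "libm.so.6")
print(libm.sqrt(2.0))
```

The appeal of CFFI is that the binding is just the C declaration copied verbatim, so porting is mostly a matter of transcribing headers rather than writing interpreter-specific glue code.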
There’s a full breakdown of PyPy compatibility on the official website.
If PyPy is about any one thing, it’s about boosting speed. There are lots of great performance boosts to be had, just by running your code in PyPy instead of CPython. Overall, according to the speed.pypy.org site, PyPy is currently, on average, around six times faster than CPython.
Now, a sixfold improvement in performance isn’t always a big deal: not if your application is IO-bound, or mostly gluing other libraries together. Consider that your average Python app will load some data from disk, then call a C library to generate a UI, or open a socket to another machine and start talking at WAN speeds. Even the slowest language can probably keep up with relatively simple, lightweight desktop applications like a basic email client. When Python is only used as glue to bind these separate system/C libraries together, you might not care about PyPy’s speed. What might matter to you is the memory footprint of running many Python apps, but we’ll get to memory later.
Does performance matter to Python?
Setting IO-bound apps and “glue apps” aside, there are still many areas where speed is relatively critical. Python is increasingly popular for data crunching, thanks to its simplicity, power, and wealth of libraries that let you just get on with the actual task at hand, rather than yak shaving. Did you know that the US Securities and Exchange Commission is pushing Python as the language of choice for processing financial transactions? High-profile, high-impact libraries like SciPy and NumPy also help a lot. There are also multimedia applications, processing sound waves, handling many players and events on a game’s battlefield, applications running simulations of many people in crowded buildings during emergencies, etc. The list goes on.
Probably the most frequent and compelling need for high performance in Python, though, is in server-side applications. Consider web apps like those based on Django or TurboGears: for every single page requested, they’re probably doing many of the following:
- Running security middleware, with all its contextual information, to handle authentication and authorisation
- Parsing URLs requested, and routing through layers of the application to the right function which can handle that request
- Talking to a database, parsing the results
- Compiling multiple template files into a single page
- Doing subsequent database lookups, merging and filtering lists of objects (if it couldn’t be optimised into a single database query)
- Parsing and reformatting application-level data
- Applying business logic
- Generating forms from widgets
- Looking up default values and previously posted values
- Checking for errors on forms
- Storing the results of updates, by modifying databases etc.
- Posting errors to the user
- Merging it all into an output page.
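The URL-routing and template-merging steps above can be sketched in a few lines of pure Python. This is a hypothetical micro-framework of my own for illustration, not Django or TurboGears code:

```python
import re

ROUTES = []

def route(pattern):
    """Register a handler for a URL pattern (hypothetical micro-framework)."""
    def register(func):
        ROUTES.append((re.compile(pattern), func))
        return func
    return register

@route(r"^/articles/(\d+)$")
def show_article(article_id):
    # Stand-in for a database lookup, merged into a template string.
    record = {"id": article_id, "title": "Why PyPy?"}
    return "<h1>{title}</h1> (article {id})".format(**record)

def dispatch(path):
    """Parse the requested URL and route it to the matching handler."""
    for pattern, handler in ROUTES:
        match = pattern.match(path)
        if match:
            return handler(*match.groups())
    return "404 Not Found"

print(dispatch("/articles/42"))  # <h1>Why PyPy?</h1> (article 42)
```

Every single request exercises pure-Python code like this, over and over, which is exactly where a JIT pays off.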
So for me, crunching data, and for lots of others, crunching web requests and the like, having Python run six times faster is a huge improvement.
Bear in mind that PyPy will do this essentially for free, because it’s almost a drop-in replacement.
Memory overhead is another issue that doesn’t matter for some applications; well, if the application is nothing more than a small script, at least. Do you really want your little systray app using 8 MB of memory, though, or your server app using 4 GB instead of 2 GB?
In the C language, when you create a 4-byte, 32-bit integer, that’s exactly the memory you require: 4 bytes, on the stack. The only other overhead is updating the stack pointer: adding 4 to a number, in other words. That’s it: creating a new value means updating a number that counts how much you’ve created so far. It’s very fast, and very lightweight. When you name a variable and say what type it will be, you’re saying, “let me have a little space for this, right here.”
In contrast, in languages like Python and Java, the underlying details can be very different: you have “boxed” objects, which contain a lot of hidden, runtime information: what type the thing you just made is, where it is in memory, how many times that new object has been referenced, and so on. All of this is overhead: not the information you wanted, but meta-information, i.e. information about the information that you wanted.
Now, those boxes of metadata can be great. You can use them for lots of cool things at run-time, like asking what kind of data someone passed into a function. You can use them for debugging, to say, “I’ve been passed this object. It’s an int, and its value is x.” In languages like C, some of that is possible for some types of data (like class objects), but not all of it, and not for the simplest, native data types. The upshot is that, compared to Python, languages like C can do a lot more in the same amount of system memory. In Python, all objects require some metadata, and because Python is such a powerfully dynamic language, it’s (relatively speaking) a lot of metadata.
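You can see this per-object overhead directly with <code>sys.getsizeof</code> under CPython (PyPy reports sizes differently, if at all). The exact numbers are implementation- and version-specific; these are typical of a 64-bit CPython 3.x:

```python
import sys

# A C int occupies 4 bytes; a CPython int object also carries type,
# reference-count, and size fields on top of the payload.
print(sys.getsizeof(1))  # typically 28 bytes on 64-bit CPython 3.x

# A million small objects multiply that overhead by a million,
# and the list's own array of pointers adds roughly 8 MB more.
numbers = list(range(1000000))
print(sys.getsizeof(numbers))
```

Note that <code>getsizeof</code> on the list measures only its pointer array; the boxed integers it points to are counted separately.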
What does all this have to do with CPython vs. PyPy? Simply that PyPy is better at this: like for like, PyPy will probably use about half as much memory as CPython. That matters if you’ve just loaded 1,000,000 lines of text and parsed them into 1,000,000 objects, for example. It also matters a little for performance, when all that data needs to be created, moved around, copied, and updated.
In part 2, I’ll post some code, and the benchmarks which result from it, for comparison.