"Too slow", says I, "you must be doing something wrong".
"Make it faster then", he said.
My final version completed the same work in under 5 seconds, an improvement of around 40,000 times (more than four orders of magnitude). I was thinking about putting together a talk on optimization for the next DFW Pythoneer meeting, but after looking around for information, it seems there is plenty on the web already, written by people wiser than me, so instead here is a blog post.
Rules of optimization (applicable to all code, not just Python):
- Premature optimization is the root of all evil (D. Knuth)
- Make it right, then make it fast: there is not much point in getting a wrong answer quickly.
- Measure, don't guess. You might think you know what is fast and what is not, but you may be surprised.
- Rewrite the inner loop (Guido van Rossum). If you don't have any loops then there may be little to improve, but otherwise most of your CPU time is spent in the innermost loop, so fix that first.
- Rethink the algorithm. Sometimes the profiled and optimized code can still be made faster, but only by accepting that you need to rethink your approach. For example, you could check whether a number is prime by dividing it by every smaller number, but that would be silly (see the sketch after this list).
- timeit.py - a great script that ships with Python (look in the lib/python directory), useful for measuring small snippets like the one below.
- profile - (cProfile or hotshot) collects information about the number of function calls, the time per call, and accumulated time.
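For example, here is a sketch that uses timeit to compare the naive primality test with one that stops at the square root (both functions are my own, written for illustration):

    import timeit

    def is_prime_naive(n):
        # Divide by every smaller number: O(n) work per test.
        if n < 2:
            return False
        return all(n % d for d in range(2, n))

    def is_prime_sqrt(n):
        # Stop at the square root: any divisor above sqrt(n)
        # implies one below it, so O(sqrt(n)) work per test.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    for func in ("is_prime_naive", "is_prime_sqrt"):
        t = timeit.timeit("%s(104729)" % func,
                          setup="from __main__ import %s" % func,
                          number=100)
        print(func, t)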
When I used the profiler, I discovered that lxml's xpath queries are expensive, so I rewrote the lookups to take advantage of my knowledge of the structure of the document.
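A minimal sketch of how to get such a report with cProfile (process_document is a hypothetical entry point standing in for your own code):

    import cProfile
    import pstats

    # Profile one top-level call and save the raw data to a file.
    cProfile.run("process_document()", "profile.out")

    # Print the 10 most expensive calls by cumulative time;
    # costly library calls (e.g. lxml xpath) show up here.
    stats = pstats.Stats("profile.out")
    stats.sort_stats("cumulative").print_stats(10)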
Rewrite the inner loop (or move things out of loops):
- Prefer generators/iterators/map/reduce/filter over hand-written loops and take advantage of the C-level speed-up (and the reduced chance of error); there is a comparison after this list.
- Use opportunities for caching (need to look something up? store the result) or pooling (re-use threads or database connections and amortize start-up costs); a simple cache sketch also follows.
- Prefer iterators over lookups: if you know you need to process most of the contents of a list, iterate over the entire list rather than performing multiple lookups in random order (as was the case in my original problem).
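To illustrate the first point, here is the same sum written both ways (a toy example):

    values = list(range(1000))

    # Hand-written loop: every step runs in the interpreter.
    total = 0
    for v in values:
        if v % 2 == 0:
            total += v * v

    # Same work as a generator expression fed to the built-in sum:
    # shorter, fewer chances for error, and the iteration happens
    # at C speed.
    total = sum(v * v for v in values if v % 2 == 0)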
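And a minimal dictionary-based cache, with expensive_query standing in for the real work:

    _cache = {}

    def expensive_query(key):
        # Stand-in for slow work (a database hit, an xpath query, ...).
        return key.upper()

    def cached_query(key):
        # Compute the answer once, store it, and serve repeat
        # requests straight from the dictionary.
        try:
            return _cache[key]
        except KeyError:
            result = _cache[key] = expensive_query(key)
            return result

    print(cached_query("entity-1"))  # computed
    print(cached_query("entity-1"))  # served from the cache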
In the original code, XML entities were discovered and then a string ID was passed to the inner loop, where a query looked up the entity again and found its child elements. Replacing all of this with lxml iterators resulted in a huge speed-up.
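The shape of the change was roughly this (the element names and the process function are made up for illustration):

    from lxml import etree

    doc = etree.fromstring(
        "<catalog>"
        "<entity id='a'><child/></entity>"
        "<entity id='b'><child/></entity>"
        "</catalog>")

    def process(children):
        # Stand-in for the real per-entity work.
        print(len(children))

    # Slow shape: re-query the whole document for each ID.
    for entity_id in ("a", "b"):
        entity = doc.xpath("//entity[@id='%s']" % entity_id)[0]
        process(entity.findall("child"))

    # Fast shape: walk the tree once and handle each entity
    # as it is encountered.
    for entity in doc.iter("entity"):
        process(entity.findall("child"))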
Go Psyco!
Psyco is a great optimizer but it doesn't help with every problem. If most of the work is done by pure Python code then you should get a great improvement, but if you are using C extensions then it may not make any difference. I was using lxml, which is mostly written in C, and Psyco actually slowed things down a little. Of course, you can pick which functions to speed up by looking at your profiler report. You do have a profiler report, don't you?
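For reference, the two usual ways to apply it (note that Psyco is Python 2 only, and inner_loop here is just a stand-in for whatever your profiler flagged):

    import psyco  # Python 2 only; Psyco was never ported to Python 3

    def inner_loop(data):
        # Stand-in for whatever the profiler says is hot.
        total = 0
        for x in data:
            total += x * x
        return total

    # Compile just the hot spot rather than the whole program:
    psyco.bind(inner_loop)

    # Or, if you want everything compiled regardless:
    # psyco.full()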
Prove It
Don't guess, test! And don't test with just one set of data: try scaling up. If you are expecting linear behaviour and you don't get it, it may be time to check things. Again, the output from the profiler will help you.
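A quick way to run that check (a sketch; sorted stands in for the code under test):

    import time

    def check_scaling(func, sizes):
        # Time the same function at several input sizes: if you
        # expect linear behaviour, doubling the input should roughly
        # double the time, and the profiler will show where it doesn't.
        for n in sizes:
            data = list(range(n))
            start = time.time()
            func(data)
            print(n, time.time() - start)

    check_scaling(sorted, [10**4, 10**5, 10**6])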
Links to Other Stuff:
Guido's essay
EffBot's example - discusses profiling
David Goodger's presentation on idiomatic Python