Python class, dict, named tuple performance and memory usage

While most would say to just use C, C++, Rust, C#, or Java… I decided I wanted to look at the edges of Python performance and memory usage. Specifically, I set out to figure out the best approach for efficiently approximating C structs: classes that are more about properties than functionality.

I wanted to find which Python 3 “class” or dictionary approach was the most memory efficient while also being fast for creating, updating, and reading a single object. I chose to look at the following (a minimal sketch of each appears after the list):

  • Python dict (the old standby)
  • Python class
  • Python class with __slots__ (added after a suggestion from an engineer)
  • dataclass
  • recordclass (still beta)
  • NamedTuple (typing.NamedTuple, a typed extension of collections.namedtuple)
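
To make the comparison concrete, here’s roughly what each of these shapes looks like in code. This is just an illustrative sketch: the field names (x, y, z) are placeholders rather than the fields used in my actual benchmark, and recordclass is a third-party package (pip install recordclass).

```python
from dataclasses import dataclass
from typing import NamedTuple

# Third-party; its factory mirrors collections.namedtuple but
# produces mutable instances.
from recordclass import recordclass


def make_dict(x, y, z):
    # Plain dict "record"
    return {"x": x, "y": y, "z": z}


class PlainClass:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z


class SlotsClass:
    __slots__ = ("x", "y", "z")  # no per-instance __dict__

    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z


@dataclass
class DataClass:
    x: int
    y: int
    z: int


RecordClass = recordclass("RecordClass", "x y z")


class TypedTuple(NamedTuple):
    x: int
    y: int
    z: int
```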

In the end I borrowed from these gists to create some Python code to test all of the above. I also found a “total size” function in this gist to estimate the size of the data structures in memory. Here’s the code I used to measure and test Python’s performance and memory usage.
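
I haven’t reproduced the gist here, but a simplified version of that kind of “total size” helper looks something like the sketch below. It’s an approximation that recursively follows containers and instance attributes, not the exact code from the gist.

```python
import sys


def total_size(obj, seen=None):
    """Roughly estimate the memory footprint of obj and everything it holds."""
    if seen is None:
        seen = set()
    if id(obj) in seen:  # don't double-count shared objects
        return 0
    seen.add(id(obj))

    size = sys.getsizeof(obj)

    # Follow container contents.
    if isinstance(obj, dict):
        size += sum(total_size(k, seen) + total_size(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(total_size(item, seen) for item in obj)

    # Follow instance attributes stored in __dict__ or __slots__.
    if hasattr(obj, "__dict__"):
        size += total_size(vars(obj), seen)
    slots = getattr(obj, "__slots__", ())
    if isinstance(slots, str):
        slots = (slots,)
    for slot in slots:
        if hasattr(obj, slot):
            size += total_size(getattr(obj, slot), seen)

    return size
```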

The test involved the following steps (a rough sketch of the timing harness follows the list):

  1. Create a dictionary of 100,000 objects for each of the various “classes”.
  2. Read each object entry’s value.
  3. Read a sub-property of each entry.
  4. Read two sub-properties and make a small calculation from them.
  5. Make a top-level change/overwrite to each object.
  6. Read a property via a class method rather than directly.
  7. Change/overwrite a property via a class method rather than directly.
  8. Measure the memory footprint of the 100,000-object dictionary.
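
To give a sense of the harness, here is a rough sketch of how each step was timed, shown for the plain-dict case. The field names, the nested “sub” structure, and the timed helper are my own illustrative choices rather than the original gist’s code, and steps 6–7 (reads and updates through a class method) only apply to the class-based variants.

```python
import time

N = 100_000


def timed(label, fn):
    start = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {N / elapsed:,.0f} ops/sec")


store = {}

# 1. Create a dictionary of 100,000 objects keyed by index.
timed("creates", lambda: store.update(
    (i, {"x": i, "y": i * 2, "sub": {"a": i, "b": i + 1}}) for i in range(N)))

# 2. Read each entry.
timed("reads", lambda: [store[i] for i in range(N)])

# 3. Read a sub-property of each entry.
timed("sub-reads", lambda: [store[i]["sub"]["a"] for i in range(N)])

# 4. Read two sub-properties and combine them.
timed("read + calc", lambda: [store[i]["sub"]["a"] + store[i]["sub"]["b"]
                              for i in range(N)])


# 5. Overwrite a top-level value on each entry.
def top_level_change():
    for i in range(N):
        store[i]["x"] = i + 1


timed("top-level change", top_level_change)
```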

Raw performance and memory test results

| Test | Python dict | Python class | Python class + slots | dataclass | recordclass | NamedTuple |
| --- | --- | --- | --- | --- | --- | --- |
| creates / sec | 369,377 | 264,405 | 354,373 | 274,175 | 418,359 | 307,269 |
| reads / sec | 17,076,394 | 24,402,513 | 25,380,031 | 21,810,119 | 28,528,798 | 24,930,480 |
| sub-reads / sec | 8,383,577 | 7,880,326 | 10,508,616 | 7,355,073 | 8,597,880 | 6,196,159 |
| read + calc / sec | 1,434,553 | 1,386,120 | 1,478,981 | 1,183,287 | 1,298,735 | 1,129,088 |
| top-level change / sec | 6,586,636 | 5,969,859 | 8,098,519 | 6,318,340 | 7,849,210 | 849,501 |
| class read / sec | 1,437,646 | 1,165,268 | 1,265,406 | 1,143,719 | 1,101,867 | 1,000,530 |
| class update / sec | 10,171,955 | 4,178,176 | 5,626,841 | 4,727,680 | 4,761,006 | 787,321 |
| bytes per entry (memory) | 658 | 170 | 178 | 170 | 154 | 346 |

Overall, performance was pretty high across the board. However, creating new objects was consistently the slowest operation for every approach, no doubt because memory has to be allocated and managed at some point. It was encouraging to see that Python can generally manage millions of reads and updates per second in a single process and a single thread. It’s also pretty apparent that the default dictionary approach does carry a real cost in terms of memory.

Conclusion: What is the best-performing approach for managing “objects” in Python?

Let’s go with a ranked list, from best to worst:

  1. Python class + slots – This approach balanced everything: high performance and low memory usage (a short demo of why it wins on memory follows this list).
  2. recordclass – This could have taken the #1 spot, but its reads were a bit slower and it’s still considered beta.
  3. (Tie) Python class & dataclass – Both of these approaches did pretty well, though their create rates were the slowest of the bunch.
  4. Python dict – If you don’t care about memory or “class-style” properties, then a Python dict is very good, but with nearly 4x the memory overhead of the leaders it moves down the list.
  5. NamedTuple – This approach doesn’t really buy you anything compared to everything else on the list: more memory, and slower performance because updates have to work around immutability. It’s the best of no worlds.
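
As a quick illustration of why the __slots__ approach wins on memory: a regular class instance carries its own per-instance __dict__, while a slotted instance does not. A tiny demo (class names here are hypothetical, and exact byte counts will vary by Python version):

```python
import sys


class Plain:
    def __init__(self, x, y):
        self.x, self.y = x, y


class Slotted:
    __slots__ = ("x", "y")  # attributes live in fixed slots, no __dict__

    def __init__(self, x, y):
        self.x, self.y = x, y


p, s = Plain(1, 2), Slotted(1, 2)

# The plain instance pays for the object plus its attribute dict.
print(sys.getsizeof(p) + sys.getsizeof(p.__dict__))

# The slotted instance has no __dict__ at all to pay for.
print(hasattr(s, "__dict__"), sys.getsizeof(s))
```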