One of Python’s long-standing weaknesses, its inability to scale well in multithreaded environments, is the target of a new proposal among the core developers of the popular programming language.
If accepted, the proposal, from developer Sam Gross, would rewrite the way Python serialises access to objects in its runtime from multiple threads, and would boost multithreaded performance significantly.
The GIL, or Global Interpreter Lock, has long been seen as an obstacle to better multithreaded performance in CPython (and thus Python generally). Many efforts have been made to remove it over the years, but they have typically come at the cost of hurting single-threaded performance -- in other words, of making the vast majority of existing Python applications slower.
Python’s current mechanisms for dealing with threading and multiprocessing don’t make high parallelism impossible. But they make it hard enough that developers often turn to third-party modules like Dask to get the job done.
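The limitation is easy to see with the standard library alone: CPU-bound work split across threads still executes one bytecode stream at a time under the GIL, so a thread pool yields correct results without a wall-clock speed-up. A minimal sketch (the prime-counting task here is just an illustrative stand-in for any CPU-bound job):

```python
# Illustrative sketch: CPU-bound work divided across threads.
# Under the GIL, only one thread runs Python bytecode at a time,
# so this parallelises the bookkeeping, not the computation.
from concurrent.futures import ThreadPoolExecutor

def count_primes(lo, hi):
    """Naive CPU-bound task: count primes in [lo, hi)."""
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def threaded_count(limit, workers=4):
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # cover any remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda c: count_primes(*c), chunks))

print(threaded_count(10_000))  # -> 1229, same answer as a serial loop
```

Swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` is the usual workaround, at the cost of process start-up and serialisation overhead -- which is exactly the friction that pushes developers toward tools like Dask.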
The new proposal changes the way reference counting works for Python objects, so that references from the thread that owns an object are handled differently from those coming from other threads -- a technique known as biased reference counting.
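The bookkeeping being changed is observable from pure Python today: CPython’s `sys.getrefcount` reports an object’s reference count (plus one for the call’s own temporary reference). A small illustration of the counting that the proposal would split between cheap owner-thread updates and slower cross-thread ones:

```python
# CPython implementation detail: every object carries a reference count,
# and sys.getrefcount reports it (the call itself adds one temporary ref).
import sys

obj = object()
base = sys.getrefcount(obj)

alias = obj                      # new reference -> count goes up by one
assert sys.getrefcount(obj) == base + 1

del alias                        # reference dropped -> count goes back down
assert sys.getrefcount(obj) == base
```

In today’s CPython these increments and decrements must be serialised by the GIL; making them thread-safe without a global lock is the crux of the proposal.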
The overall effect of this change, combined with a number of others, is actually a slight boost to single-threaded performance -- around 10 per cent, according to some benchmarks comparing a forked version of the interpreter against the mainline CPython 3.9 interpreter.
Multithreaded performance, on some benchmarks, scales almost linearly with each new thread in the best case: with 20 threads, one benchmark shows an 18.1× speed-up and another a 19.8× speed-up.
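"Almost linearly" can be made concrete as parallel efficiency, the measured speed-up divided by the thread count:

```python
# Parallel efficiency = measured speed-up / number of threads.
# Perfect linear scaling would be 100%.
threads = 20
for speedup in (18.1, 19.8):
    efficiency = speedup / threads
    print(f"{speedup}x on {threads} threads -> {efficiency:.1%} efficiency")
```

That works out to roughly 90 and 99 per cent efficiency for the two benchmarks -- close to the ideal of one full core of extra throughput per added thread.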
These changes are major enough that a fair number of existing Python libraries that work directly with Python’s internals (e.g., Cython) would need to be rewritten. Given the cadence of Python’s release schedule, such breaking changes would need to land in a major release rather than a minor point release.