Garbage collector design
========================
Abstract
========
The main garbage collection algorithm used by CPython is reference counting. The basic idea is
that CPython counts how many different places there are that have a reference to an
object. Such a place could be another object, or a global (or static) C variable, or
a local variable in some C function. When an object’s reference count becomes zero,
the object is deallocated. If it contains references to other objects, their
reference counts are decremented. Those other objects may be deallocated in turn, if
this decrement makes their reference count become zero, and so on. The reference
count field can be examined using the `sys.getrefcount()` function (notice that the
value returned by this function is always 1 more than you might expect, because the
function's argument itself holds a temporary reference to the object while it is called):
```pycon
>>> import sys
>>> x = object()
>>> sys.getrefcount(x)
2
>>> y = x
>>> sys.getrefcount(x)
3
>>> del y
>>> sys.getrefcount(x)
2
```
The main problem with the reference counting scheme is that it does not handle reference
cycles. For instance, consider this code:
```pycon
>>> import sys
>>> container = []
>>> container.append(container)
>>> sys.getrefcount(container)
3
>>> del container
```
In this example, `container` holds a reference to itself, so even when we remove
our reference to it (the variable "container") the reference count never falls to 0
because it still has its own internal reference. Therefore it would never be
cleaned just by simple reference counting. For this reason some additional machinery
is needed to clean these reference cycles between objects once they become
unreachable. This is the cyclic garbage collector, usually called just Garbage
Collector (GC), even though reference counting is also a form of garbage collection.
Starting in version 3.13, CPython contains two GC implementations:
- The default build implementation relies on the
[global interpreter lock](https://docs.python.org/3/glossary.html#term-global-interpreter-lock)
for thread safety.
- The free-threaded build implementation pauses other executing threads when
performing a collection for thread safety.
Both implementations use the same basic algorithms, but operate on different
data structures. See the section on
[Differences between GC implementations](#Differences-between-GC-implementations)
for the details.
Memory layout and object structure
==================================
The garbage collector requires additional fields in Python objects to support
garbage collection. These extra fields are different in the default and the
free-threaded builds.
GC for the default build
------------------------
Normally the C structure supporting a regular Python object looks as follows:
```
object -----> +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ \
| ob_refcnt | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ | PyObject_HEAD
| *ob_type | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ /
| ... |
```
In order to support the garbage collector, the memory layout of objects is altered
to accommodate extra information **before** the normal layout:
```
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ \
| *_gc_next | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ | PyGC_Head
| *_gc_prev | |
object -----> +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ /
| ob_refcnt | \
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ | PyObject_HEAD
| *ob_type | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ /
| ... |
```
In this way the object can be treated as a normal Python object and, when the extra
information associated with the GC is needed, the preceding fields can be accessed with a
simple type cast from the original object: `((PyGC_Head *)(the_object)-1)`.
As is explained later in the
[Optimization: reusing fields to save memory](#optimization-reusing-fields-to-save-memory)
section, these two extra fields are normally used to keep doubly linked lists of all the
objects tracked by the garbage collector (these lists are the GC generations, more on
that in the [Optimization: incremental collection](#Optimization-incremental-collection) section), but
they are also reused to fulfill other purposes when the full doubly linked list
structure is not needed as a memory optimization.
Doubly linked lists are used because they efficiently support the most frequently required operations. In
general, the collection of all objects tracked by GC is partitioned into disjoint sets, each in its own
doubly linked list. Between collections, objects are partitioned into "generations", reflecting how
often they've survived collection attempts. During collections, the generation(s) being collected
are further partitioned into, for example, sets of reachable and unreachable objects. Doubly linked lists
support moving an object from one partition to another, adding a new object, removing an object
entirely (objects tracked by GC are most often reclaimed by the refcounting system when GC
isn't running at all!), and merging partitions, all with a small constant number of pointer updates.
With care, they also support iterating over a partition while objects are being added to - and
removed from - it, which is frequently required while GC is running.
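The following is a minimal pure-Python sketch (not CPython code, which embeds the
pointers in `PyGC_Head` itself) of why a doubly linked list with a sentinel head
gives each of these operations a small constant cost:
```python
# Hypothetical sketch of the O(1) list operations the GC relies on.
class Node:
    def __init__(self):
        # A node starts linked to itself; a lone Node also serves as
        # the sentinel head of an empty list.
        self.prev = self.next = self

def unlink(node):
    # Remove a node from whatever list it is in: two pointer updates.
    node.prev.next = node.next
    node.next.prev = node.prev

def append(head, node):
    # Insert a node at the tail (just before the sentinel head).
    last = head.prev
    last.next = node
    node.prev = last
    node.next = head
    head.prev = node

def move(dest_head, node):
    # Move an object between partitions, e.g. from "tentatively
    # unreachable" back to "reachable": still constant time.
    unlink(node)
    append(dest_head, node)

def merge(dest_head, src_head):
    # Splice one whole partition onto another in O(1).
    if src_head.next is not src_head:  # skip if the source is empty
        first, last = src_head.next, src_head.prev
        dest_head.prev.next = first
        first.prev = dest_head.prev
        last.next = dest_head
        dest_head.prev = last
        src_head.prev = src_head.next = src_head
```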
GC for the free-threaded build
------------------------------
In the free-threaded build, Python objects contain a 1-byte field
`ob_gc_bits` that is used to track garbage collection related state. The
field exists in all objects, including ones that do not support cyclic
garbage collection. The field is used to identify objects that are tracked
by the collector, ensure that finalizers are called only once per object,
and, during garbage collection, differentiate reachable vs. unreachable objects.
```
object -----> +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ \
| ob_tid | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ |
| pad | ob_mutex | ob_gc_bits | ob_ref_local | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ | PyObject_HEAD
| ob_ref_shared | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ |
| *ob_type | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ /
| ... |
```
Note that not all fields are to scale. `pad` is two bytes, `ob_mutex` and
`ob_gc_bits` are each one byte, and `ob_ref_local` is four bytes. The
other fields, `ob_tid`, `ob_ref_shared`, and `ob_type`, are all
pointer-sized (that is, eight bytes on a 64-bit platform).
The garbage collector also temporarily repurposes the `ob_tid` (thread ID)
and `ob_ref_local` (local reference count) fields for other purposes during
collections.
C APIs
------
Specific APIs are offered to allocate, deallocate, initialize, track, and untrack
objects with GC support. These APIs can be found in the
[Garbage Collector C API documentation](https://docs.python.org/3/c-api/gcsupport.html).
Apart from this object structure, the type object for objects supporting garbage
collection must include the `Py_TPFLAGS_HAVE_GC` flag in its `tp_flags` slot and
provide an implementation of the `tp_traverse` handler. Unless it can be proven
that the objects cannot form reference cycles with only objects of its type or unless
the type is immutable, a `tp_clear` implementation must also be provided.
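From Python code the flag can be observed through `type.__flags__`; the bit value
below is the one defined for `Py_TPFLAGS_HAVE_GC` in CPython's headers:
```pycon
>>> Py_TPFLAGS_HAVE_GC = 1 << 14  # value from Include/object.h
>>> bool(list.__flags__ & Py_TPFLAGS_HAVE_GC)   # lists can hold references
True
>>> bool(int.__flags__ & Py_TPFLAGS_HAVE_GC)    # ints cannot form cycles
False
```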
Identifying reference cycles
============================
The algorithm that CPython uses to detect those reference cycles is
implemented in the `gc` module. The garbage collector **only focuses**
on cleaning container objects (that is, objects that can contain a reference
to one or more objects). These can be arrays, dictionaries, lists, custom
class instances, classes in extension modules, etc. One could think that
cycles are uncommon but the truth is that many internal references needed by
the interpreter create cycles everywhere. Some notable examples:
- Exceptions contain traceback objects that contain a list of frames that
contain the exception itself.
- Module-level functions reference the module's dict (which is needed to resolve globals),
which in turn contains entries for the module-level functions.
- Instances have references to their class which itself references its module, and the module
contains references to everything that is inside (and maybe other modules)
and this can lead back to the original instance.
- When representing data structures like graphs, it is very typical for them to
have internal links to themselves.
To correctly dispose of these objects once they become unreachable, they need
to be identified first. To understand how the algorithm works, let’s take
the case of a circular linked list which has one link referenced by a
variable `A`, and one self-referencing object which is completely
unreachable:
```pycon
>>> import gc
>>> class Link:
... def __init__(self, next_link=None):
... self.next_link = next_link
>>> link_3 = Link()
>>> link_2 = Link(link_3)
>>> link_1 = Link(link_2)
>>> link_3.next_link = link_1
>>> A = link_1
>>> del link_1, link_2, link_3
>>> link_4 = Link()
>>> link_4.next_link = link_4
>>> del link_4
# Collect the unreachable Link object (and its .__dict__ dict).
>>> gc.collect()
2
```
The GC starts with a set of candidate objects it wants to scan. In the
default build, these "objects to scan" might be all container objects or a
smaller subset (or "generation"). In the free-threaded build, the collector
always scans all container objects.
The objective is to identify all the unreachable objects. The collector does
this by identifying reachable objects; the remaining objects must be
unreachable. The first step is to identify all of the "to scan" objects that
are **directly** reachable from outside the set of candidate objects. These
objects have a refcount larger than the number of incoming references from
within the candidate set.
Every object that supports garbage collection will have an extra reference
count field initialized to the reference count (`gc_ref` in the figures)
of that object when the algorithm starts. This is because the algorithm needs
to modify the reference count to do the computations and in this way the
interpreter will not modify the real reference count field.
![gc-image1](images/python-cyclic-gc-1-new-page.png)
The GC then iterates over all containers in the first list and decrements by one the
`gc_ref` field of any other object that container is referencing. Doing
this makes use of the `tp_traverse` slot in the container class (implemented
using the C API or inherited by a superclass) to know what objects are referenced by
each container. After all the objects have been scanned, only the objects that have
references from outside the “objects to scan” list will have `gc_ref > 0`.
![gc-image2](images/python-cyclic-gc-2-new-page.png)
Notice that having `gc_ref == 0` does not imply that the object is unreachable.
This is because another object that is reachable from the outside (`gc_ref > 0`)
can still have references to it. For instance, the `link_2` object in our example
ended up with `gc_ref == 0` but is still referenced by the `link_1` object, which
is reachable from the outside. To obtain the set of objects that are really
unreachable, the garbage collector re-scans the container objects using the
`tp_traverse` slot; this time with a different traverse function that marks objects with
`gc_ref == 0` as "tentatively unreachable" and then moves them to the
tentatively unreachable list. The following image depicts the state of the lists in a
moment when the GC processed the `link_3` and `link_4` objects but has not
processed `link_1` and `link_2` yet.
![gc-image3](images/python-cyclic-gc-3-new-page.png)
Then the GC scans the next `link_1` object. Because it has `gc_ref == 1`,
the GC does not do anything special, because it knows it has to be reachable (and it is
already in what will become the reachable list):
![gc-image4](images/python-cyclic-gc-4-new-page.png)
When the GC encounters an object which is reachable (`gc_ref > 0`), it traverses
its references using the `tp_traverse` slot to find all the objects that are
reachable from it, moving them to the end of the list of reachable objects (where
they started originally) and setting their `gc_ref` fields to 1. This is what happens
to `link_2` and `link_3` below as they are reachable from `link_1`. From the
state in the previous image and after examining the objects referred to by `link_1`
the GC knows that `link_3` is reachable after all, so it is moved back to the
original list and its `gc_ref` field is set to 1 so that if the GC visits it again,
it will know that it's reachable. To avoid visiting an object twice, the GC marks all
objects that have already been visited once (by unsetting the `PREV_MASK_COLLECTING`
flag) so that if an object that has already been processed is referenced by some other
object, the GC does not process it twice.
![gc-image5](images/python-cyclic-gc-5-new-page.png)
Notice that an object that was marked as "tentatively unreachable" and was later
moved back to the reachable list will be visited again by the garbage collector
as now all the references that the object has need to be processed as well. This
process is really a breadth-first search over the object graph. Once all the objects
are scanned, the GC knows that all container objects in the tentatively unreachable
list are really unreachable and can thus be garbage collected.
Pragmatically, it's important to note that no recursion is required by any of this,
and neither does it in any other way require additional memory proportional to the
number of objects, number of pointers, or the lengths of pointer chains. Apart from
`O(1)` storage for internal C needs, the objects themselves contain all the storage
the GC algorithms require.
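To make the two passes concrete, here is a hypothetical pure-Python model of the
algorithm. The dictionaries stand in for the `gc_ref` field and for what the
`tp_traverse` handlers would report, and `find_unreachable` is an illustrative
name, not a CPython function:
```python
from collections import deque

# Hypothetical pure-Python model of the cycle detection passes.
def find_unreachable(objects, refcount, references):
    # Pass 1: copy each refcount into gc_ref, then subtract one for
    # every reference that originates inside the candidate set.
    gc_ref = {obj: refcount[obj] for obj in objects}
    for obj in objects:
        for ref in references[obj]:
            if ref in gc_ref:
                gc_ref[ref] -= 1
    # Pass 2: objects with gc_ref > 0 are reachable from outside; a
    # breadth-first search marks everything reachable from them.
    to_visit = deque(obj for obj in objects if gc_ref[obj] > 0)
    reachable = set(to_visit)
    while to_visit:
        obj = to_visit.popleft()
        for ref in references[obj]:
            if ref in gc_ref and ref not in reachable:
                reachable.add(ref)
                to_visit.append(ref)
    return [obj for obj in objects if obj not in reachable]

# The Link example from above: `A` keeps link_1 alive from outside the
# candidate set, while link_4 only references itself.
references = {
    "link_1": ["link_2"],
    "link_2": ["link_3"],
    "link_3": ["link_1"],
    "link_4": ["link_4"],
}
refcount = {"link_1": 2, "link_2": 1, "link_3": 1, "link_4": 1}
print(find_unreachable(list(references), refcount, references))
# ['link_4']
```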
Why moving unreachable objects is better
----------------------------------------
It sounds logical to move the unreachable objects under the premise that most objects
are usually reachable, until you think about it: the reason it pays isn't actually
obvious.
Suppose we create objects A, B, C in that order. They appear in the young generation
in the same order. If B points to A, and C to B, and C is reachable from outside,
then the adjusted refcounts after the first step of the algorithm runs will be 0, 0,
and 1 respectively because the only reachable object from the outside is C.
When the next step of the algorithm finds A, A is moved to the unreachable list. The
same for B when it's first encountered. Then C is traversed, B is moved *back* to
the reachable list. B is eventually traversed, and then A is moved back to the reachable
list.
So instead of not moving at all, the reachable objects B and A are each moved twice.
Why is this a win? A straightforward algorithm to move the reachable objects instead
would move A, B, and C once each. The key is that this dance leaves the objects in
order C, B, A - it's reversed from the original order. On all *subsequent* scans,
none of them will move. Since most objects aren't in cycles, this can save an
unbounded number of moves across an unbounded number of later collections. The only
time the cost can be higher is the first time the chain is scanned.
Destroying unreachable objects
==============================
Once the GC knows the list of unreachable objects, a very delicate process starts
with the objective of completely destroying these objects. Roughly, the process
follows these steps in order:
1. Handle and clear weak references (if any). Weak references to unreachable objects
are set to `None`. If the weak reference has an associated callback, the callback
is enqueued to be called once the clearing of weak references is finished. We only
invoke callbacks for weak references that are themselves reachable. If both the weak
reference and the pointed-to object are unreachable we do not execute the callback.
This is partly for historical reasons: the callback could resurrect an unreachable
object and support for weak references predates support for object resurrection.
Ignoring the weak reference's callback is fine because both the object and the weakref
are going away, so it's legitimate to say the weak reference is going away first.
2. If an object has legacy finalizers (`tp_del` slot) move it to the
`gc.garbage` list.
3. Call the finalizers (`tp_finalize` slot) and mark the objects as already
finalized to avoid calling finalizers twice if the objects are resurrected or
if other finalizers have removed the object first.
4. Deal with resurrected objects. If some objects have been resurrected, the GC
finds the new subset of objects that are still unreachable by running the cycle
detection algorithm again and continues with them.
5. Call the `tp_clear` slot of every object so all internal links are broken and
the reference counts fall to 0, triggering the destruction of all unreachable
objects.
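As an illustration of steps 3-5, objects with `__del__` methods that form a cycle
have been collectable since PEP 442; the session below (addresses elided) shows the
finalizers of a cyclic pair running exactly once before the cycle is destroyed:
```pycon
>>> import gc
>>> class Noisy:
...     def __del__(self):
...         print("finalizing", self)
...
>>> a, b = Noisy(), Noisy()
>>> a.partner, b.partner = b, a  # a two-object reference cycle
>>> del a, b                     # unreachable, but refcounts stay above zero
>>> _ = gc.collect()             # tp_finalize runs once per object, then tp_clear
finalizing <__main__.Noisy object at 0x...>
finalizing <__main__.Noisy object at 0x...>
```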
Optimization: incremental collection
====================================
In order to bound the length of each garbage collection pause, the GC implementation
for the default build uses incremental collection with two generations.
Generational garbage collection takes advantage of what is known as the weak
generational hypothesis: Most objects die young.
This has proven to be very close to the reality of many Python
programs as many temporary objects are created and destroyed very quickly.
To take advantage of this fact, all container objects are segregated into
two generations: young and old. Every new object starts in the young generation.
Each garbage collection scans the entire young generation and part of the old generation.
The time taken to scan the young generation can be controlled by controlling its
size, but the size of the old generation cannot be controlled.
In order to keep pause times down, scanning of the old generation of the heap
occurs in increments.
To keep track of what has been scanned, the old generation contains two lists:
* Those objects that have not yet been scanned, referred to as the `pending` list.
* Those objects that have been scanned, referred to as the `visited` list.
To detect and collect all unreachable objects in the heap, the garbage collector
must scan the whole heap. This whole heap scan is called a full scavenge.
Increments
----------
Each full scavenge is performed in a series of increments.
For each full scavenge, the combined increments will cover the whole heap.
Each increment is made up of:
* The young generation
* The old generation's least recently scanned objects
* All objects reachable from those objects that have not yet been scanned this full scavenge
The surviving objects (those that are not collected) are moved to the back of the
`visited` list in the old generation.
When a full scavenge starts, no objects in the heap are considered to have been scanned,
so all objects in the old generation must be in the `pending` list.
When all objects in the heap have been scanned, the full scavenge ends and all
objects must be moved back to the `pending` list. To avoid having to traverse the
entire list, which list is `pending` and which is `visited` is determined by a field
in the `GCState` struct; the two lists can be swapped by toggling this bit.
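A hypothetical pure-Python sketch of that bookkeeping (the real field lives in the
C `GCState` struct; the class below is purely illustrative):
```python
# Hypothetical sketch: "pending" and "visited" are roles, not physical
# locations, so a full scavenge ends with one bit flip instead of
# moving every object back to the pending list.
class GCState:
    def __init__(self):
        self.spaces = ([], [])  # the two physical lists
        self.visited_space = 0  # which list currently acts as "visited"

    @property
    def visited(self):
        return self.spaces[self.visited_space]

    @property
    def pending(self):
        return self.spaces[1 - self.visited_space]

    def finish_full_scavenge(self):
        # Every object is now in "visited"; relabel that list as
        # "pending" for the next scavenge in O(1).
        self.visited_space = 1 - self.visited_space
```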
Correctness
-----------
The [algorithm for identifying cycles](#Identifying-reference-cycles) will find all
unreachable cycles in a list of objects, but will not find any cycles that are
even partly outside of that list.
Therefore, to be guaranteed that a full scavenge will find all unreachable cycles,
each cycle must be fully contained within a single increment.
To make sure that no partial cycles are included in the increment we perform a
[transitive closure](https://en.wikipedia.org/wiki/Transitive_closure)
over reachable, unscanned objects from the initial increment.
Since the transitive closure of objects reachable from an object must be a (non-strict)
superset of any unreachable cycle including that object, we are guaranteed that a
transitive closure cannot contain any partial cycles.
We can exclude scanned objects, as they must have been reachable when scanned.
If a scanned object becomes part of an unreachable cycle after being scanned, it will
not be collected at this time, but it will be collected in the next full scavenge.
> [!NOTE]
> The GC implementation for the free-threaded build does not use incremental collection.
> Every collection operates on the entire heap.
In order to decide when to run, the collector keeps track of the number of object
allocations and deallocations since the last collection. When the number of
allocations minus the number of deallocations exceeds `threshold0`,
collection starts. `threshold1` determines the fraction of the old
generation that is included in the increment.
The fraction is inversely proportional to `threshold1`,
as historically a larger `threshold1` meant that old generation
collections were performed less frequently.
`threshold2` is ignored.
These thresholds can be examined using the
[`gc.get_threshold()`](https://docs.python.org/3/library/gc.html#gc.get_threshold)
function:
```pycon
>>> import gc
>>> gc.get_threshold()
(700, 10, 10)
```
The content of these generations can be examined using the
`gc.get_objects(generation=NUM)` function and collections can be triggered
specifically in a generation by calling `gc.collect(generation=NUM)`.
```pycon
>>> import gc
>>> class MyObj:
... pass
...
# Move everything to the old generation so it's easier to inspect
# the young generation.
>>> gc.collect()
0
# Create a reference cycle.
>>> x = MyObj()
>>> x.self = x
# Initially the object is in the young generation.
>>> gc.get_objects(generation=0)
[..., <__main__.MyObj object at 0x7fbcc12a3400>, ...]
# After a collection of the youngest generation the object
# moves to the old generation.
>>> gc.collect(generation=0)
0
>>> gc.get_objects(generation=0)
[]
>>> gc.get_objects(generation=1)
[]
>>> gc.get_objects(generation=2)
[..., <__main__.MyObj object at 0x7fbcc12a3400>, ...]
```
Optimization: reusing fields to save memory
===========================================
In order to save memory, the two linked list pointers in every object with GC
support are reused for several purposes. This is a common optimization known
as "fat pointers" or "tagged pointers": pointers that carry additional data,
"folded" into the pointer, meaning stored inline in the data representing the
address, taking advantage of certain properties of memory addressing. This is
possible as most architectures align certain types of data
to the size of the data, often a word or multiple thereof. This discrepancy
leaves a few of the least significant bits of the pointer unused, which can be
used for tags or to keep other information – most often as a bit field (each
bit a separate tag) – as long as code that uses the pointer masks out these
bits before accessing memory. For example, on a 32-bit architecture (for both
addresses and word size), a word is 32 bits = 4 bytes, so word-aligned
addresses are always a multiple of 4, hence end in `00`, leaving the last 2 bits
available; while on a 64-bit architecture, a word is 64 bits = 8 bytes, so
word-aligned addresses end in `000`, leaving the last 3 bits available.
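The toy example below shows the masking discipline; the flag names mirror the
CPython flags described in the following list, but the code itself is purely
illustrative:
```python
# Purely illustrative: storing flag bits in the low bits of an
# 8-byte-aligned address, as the GC does with the PyGC_Head pointers.
PREV_MASK_COLLECTING = 0x1  # names mirror the CPython flags
PREV_MASK_FINALIZED = 0x2
FLAG_MASK = 0x3

address = 0x7fbcc12a3400                # aligned, so the low bits are free
tagged = address | PREV_MASK_FINALIZED  # fold a flag into the pointer

real_pointer = tagged & ~FLAG_MASK      # strip the tags before dereferencing
finalized = bool(tagged & PREV_MASK_FINALIZED)
assert real_pointer == address and finalized
```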
The CPython GC makes use of two fat pointers that correspond to the extra fields
of `PyGC_Head` discussed in the
[Memory layout and object structure](#memory-layout-and-object-structure) section:
> [!WARNING]
> Because of the presence of extra information, "tagged" or "fat" pointers cannot be
> dereferenced directly and the extra information must be stripped off before
> obtaining the real memory address. Special care needs to be taken with
> functions that directly manipulate the linked lists, as these functions
> normally assume the pointers inside the lists are in a consistent state.
- The `_gc_prev` field is normally used as the "previous" pointer to maintain the
doubly linked list but its lowest two bits are used to keep the flags
`PREV_MASK_COLLECTING` and `_PyGC_PREV_MASK_FINALIZED`. Between collections,
the only flag that can be present is `_PyGC_PREV_MASK_FINALIZED` that indicates
if an object has been already finalized. During collections `_gc_prev` is
temporarily used for storing a copy of the reference count (`gc_ref`), in
addition to two flags, and the GC linked list becomes a singly linked list until
`_gc_prev` is restored.
- The `_gc_next` field is used as the "next" pointer to maintain the doubly linked
list but during collection its lowest bit is used to keep the
`NEXT_MASK_UNREACHABLE` flag that indicates if an object is tentatively
unreachable during the cycle detection algorithm. This is a drawback to using only
doubly linked lists to implement partitions: while most needed operations are
constant-time, there is no efficient way to determine which partition an object is
currently in. Instead, when that's needed, ad hoc tricks (like the
`NEXT_MASK_UNREACHABLE` flag) are employed.
Optimization: delay tracking containers
=======================================
Certain types of containers cannot participate in a reference cycle, and so do
not need to be tracked by the garbage collector. Untracking these objects
reduces the cost of garbage collection. However, determining which objects may
be untracked is not free, and the costs must be weighed against the benefits
for garbage collection. There are two possible strategies for when to untrack
a container:
1. When the container is created.
2. When the container is examined by the garbage collector.
As a general rule, instances of atomic types aren't tracked and instances of
non-atomic types (containers, user-defined objects...) are. However, some
type-specific optimizations can be present in order to suppress the garbage
collector footprint of simple instances. Some examples of native types that
benefit from delayed tracking:
- Tuples containing only immutable objects (integers, strings etc,
and recursively, tuples of immutable objects) do not need to be tracked. The
interpreter creates a large number of tuples, many of which will not survive
until garbage collection. It is therefore not worthwhile to untrack eligible
tuples at creation time. Instead, all tuples except the empty tuple are tracked
when created. During garbage collection it is determined whether any surviving
tuples can be untracked. A tuple can be untracked if all of its contents are
already not tracked. Tuples are examined for untracking in all garbage collection
cycles. It may take more than one cycle to untrack a tuple.
- Dictionaries containing only immutable objects also do not need to be tracked.
Dictionaries are untracked when created. If a tracked item is inserted into a
dictionary (either as a key or value), the dictionary becomes tracked. During a
full garbage collection (all generations), the collector will untrack any dictionaries
whose contents are not tracked.
The garbage collector module provides the Python function `is_tracked(obj)`, which returns
the current tracking status of the object. Subsequent garbage collections may change the
tracking status of the object.
```pycon
>>> gc.is_tracked(0)
False
>>> gc.is_tracked("a")
False
>>> gc.is_tracked([])
True
>>> gc.is_tracked({})
False
>>> gc.is_tracked({"a": 1})
False
>>> gc.is_tracked({"a": []})
True
```
Differences between GC implementations
======================================
This section summarizes the differences between the GC implementation in the
default build and the implementation in the free-threaded build.
The default build implementation makes extensive use of the `PyGC_Head` data
structure, while the free-threaded build implementation does not use that
data structure.
- The default build implementation stores all tracked objects in a doubly
linked list using `PyGC_Head`. The free-threaded build implementation
instead relies on the embedded mimalloc memory allocator to scan the heap
for tracked objects.
- The default build implementation uses `PyGC_Head` for the unreachable
object list. The free-threaded build implementation repurposes the
`ob_tid` field to store a linked list of unreachable objects.
- The default build implementation stores flags in the `_gc_prev` field of
`PyGC_Head`. The free-threaded build implementation stores these flags
in `ob_gc_bits`.
The default build implementation relies on the
[global interpreter lock](https://docs.python.org/3/glossary.html#term-global-interpreter-lock)
for thread safety. The free-threaded build implementation has two "stop the
world" pauses, in which all other executing threads are temporarily paused so
that the GC can safely access reference counts and object attributes.
The default build implementation is a generational collector. The
free-threaded build is non-generational; each collection scans the entire
heap.
- Keeping track of object generations is simple and inexpensive in the default
build. The free-threaded build relies on mimalloc for finding tracked
objects; identifying "young" objects without scanning the entire heap would
be more difficult.
> [!NOTE]
> **Document history**
>
> Pablo Galindo Salgado - Original author
>
> Irit Katriel - Convert to Markdown