To further illustrate the problem with shared state, let's look at a simple example of a counter that is shared between two threads:

  import threading
  from time import sleep
  
  counter = [0]
  
  def increment():
      count = counter[0]
      sleep(0) # try to force a switch to the other thread
      counter[0] = count + 1
  
  other = threading.Thread(target=increment, args=())
  other.start()
  increment()
  print('count is now: ', counter[0])
  

In this program, two threads attempt to increment the same counter. The CPython interpreter can switch between threads at almost any time. Only the most basic operations are atomic, meaning that they appear to occur instantly, with no switch possible during their evaluation or execution. Incrementing a counter requires multiple basic operations: read the old value, add one to it, and write the new value. The interpreter can switch threads between any of these operations.
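
The disassembler in Python's standard library makes these separate operations visible. The following is a minimal illustrative sketch, not part of the example above (the helper name unsafe_increment is chosen only for this illustration): dis.dis prints the bytecode for an update to the counter, and the output contains distinct instructions that read counter[0], add one, and write the result back, between any two of which the interpreter may switch threads.

  import dis

  counter = [0]

  def unsafe_increment():
      counter[0] = counter[0] + 1

  # The disassembly lists separate bytecode instructions for reading
  # counter[0], adding 1, and storing the result back into the list;
  # the exact opcode names vary between CPython versions.
  dis.dis(unsafe_increment)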

In order to show what happens when the interpreter switches threads at the wrong time, we have attempted to force a switch by sleeping for 0 seconds. When this code is run, the interpreter often does switch threads at the sleep call. This can result in the following sequence of operations:

  Thread 0                    Thread 1
  read counter[0]: 0
                              read counter[0]: 0
  calculate 0 + 1: 1
  write 1 -> counter[0]
                              calculate 0 + 1: 1
                              write 1 -> counter[0]
  

The end result is that the counter has a value of 1, even though it was incremented twice! Worse, the interpreter may switch at the wrong time only rarely, which makes this kind of bug difficult to reproduce and debug. Even with the sleep call, this program sometimes produces a correct count of 2 and sometimes an incorrect count of 1.
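
One way to make such a rare failure visible is to repeat the experiment many times and check how often an update is lost. The following is an illustrative sketch rather than part of the original example, and the names run_trial and THREADS are arbitrary: each trial starts many threads that all increment the same list element, waits for them with join, and reports the final count. Any result below THREADS means that at least one increment was lost.

  import threading
  from time import sleep

  THREADS = 50  # number of concurrent increments per trial (arbitrary choice)

  def run_trial():
      counter = [0]

      def increment():
          count = counter[0]
          sleep(0)  # encourage a switch to another thread mid-update
          counter[0] = count + 1

      threads = [threading.Thread(target=increment) for _ in range(THREADS)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()  # wait until every increment has finished
      return counter[0]

  # Any result below THREADS indicates lost updates.
  print([run_trial() for _ in range(10)])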

This problem arises only in the presence of shared data that may be mutated by one thread while another thread accesses it. Such a conflict is called a race condition, and it is an example of a bug that only exists in the parallel world.

In order to avoid race conditions, shared data that may be mutated and accessed by multiple threads must be protected against concurrent access. For example, if we can ensure that thread 1 only accesses the counter after thread 0 finishes accessing it, or vice versa, we can guarantee that the right result is computed. We say that shared data is synchronized if it is protected from concurrent access. In the next few subsections, we will see multiple mechanisms providing synchronization.
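
As a preview of one such mechanism, Python's threading module provides a Lock object that only one thread can hold at a time. The sketch below is illustrative rather than the official solution developed later, and the names counter_lock and safe_increment are chosen only for this example: wrapping the read-add-write sequence in a with block over the lock prevents the two increments from interleaving, so the printed count is always 2.

  import threading
  from time import sleep

  counter = [0]
  counter_lock = threading.Lock()  # guards every access to counter

  def safe_increment():
      # Only one thread at a time may hold the lock, so the read, add,
      # and write below can never interleave with another increment.
      with counter_lock:
          count = counter[0]
          sleep(0)  # a switch here no longer causes a lost update
          counter[0] = count + 1

  other = threading.Thread(target=safe_increment, args=())
  other.start()
  safe_increment()
  other.join()  # wait for the other thread before reading the counter
  print('count is now: ', counter[0])

Note that a lock protects shared data only if every thread that touches the data acquires the same lock; the lock object itself does nothing unless all accesses go through it.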