
gh-116738: Make _json module safe in the free-threading build #119438

Open

eendebakpt wants to merge 36 commits into main

Conversation

@eendebakpt eendebakpt commented May 22, 2024

(updated description)

Writing JSON files (or encoding to a string) is not thread-safe in the sense that if another thread mutates the data while it is being encoded, the result is not well-defined (this is true for both the normal and the free-threading build). But the free-threading build can additionally crash the interpreter while writing JSON, because of the use of C API calls such as PySequence_Fast_GET_ITEM. In this PR we make the free-threading build safe by adding locks in three places in the JSON encoder.

Reading from a JSON file is safe: the objects constructed during decoding are known only to the executing thread. Encoding data to JSON needs a bit more care: mutable Python objects such as a list or a dict could be modified by another thread during encoding.

  • When encoding a list, use Py_BEGIN_CRITICAL_SECTION_SEQUENCE_FAST to protect against mutation of the list (see the sketch below).
  • When encoding a dict, we use a critical section for the iteration over exact dicts (PyDict_Next is used there). Non-exact dicts go through PyMapping_Items, which creates a list of tuples. PyMapping_Items itself is assumed to be thread-safe, but the resulting list is not a copy and can be mutated.

Update 2025-02-10: refactored to avoid using Py_EXIT_CRITICAL_SECTION_SEQUENCE_FAST
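For reference, a minimal sketch of the locking pattern described above, not the exact PR diff: the macros come from CPython's internal pycore_critical_section.h, and encode_item/encode_pair are hypothetical stand-ins for the encoder's per-item work.

static int
encode_list_like(PyObject *seq)
{
    PyObject *s_fast = PySequence_Fast(seq, "expected a sequence");
    if (s_fast == NULL) {
        return -1;
    }
    int rv = 0;
    /* Locks the sequence only when it is an exact list, so the size and
       item reads below cannot race with a resize in another thread. */
    Py_BEGIN_CRITICAL_SECTION_SEQUENCE_FAST(seq);
    for (Py_ssize_t i = 0; i < PySequence_Fast_GET_SIZE(s_fast); i++) {
        PyObject *obj = PySequence_Fast_GET_ITEM(s_fast, i);  /* borrowed */
        if (encode_item(obj) < 0) {
            rv = -1;
            break;
        }
    }
    Py_END_CRITICAL_SECTION_SEQUENCE_FAST();
    Py_DECREF(s_fast);
    return rv;
}

static int
encode_exact_dict(PyObject *dict)
{
    int rv = 0;
    Py_ssize_t pos = 0;
    PyObject *key, *value;
    /* A plain critical section keeps PyDict_Next's iteration position
       valid while other threads insert or delete keys. */
    Py_BEGIN_CRITICAL_SECTION(dict);
    while (PyDict_Next(dict, &pos, &key, &value)) {
        if (encode_pair(key, value) < 0) {
            rv = -1;
            break;
        }
    }
    Py_END_CRITICAL_SECTION();
    return rv;
}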

  • The script below was used to test the free-threading implementation. Similar code was added to the tests.
Test script
import json
import time
from threading import Thread


class JsonThreadingTest:
    """Repeatedly encode shared data to JSON while other threads mutate it."""

    def __init__(self, number_of_threads=4, number_of_json_dumps=10):
        self.data = [[], [], {}, {}, {}]
        self.json = {str(ii): d for ii, d in enumerate(self.data)}
        self.results = []
        self.number_of_threads = number_of_threads
        self.number_of_json_dumps = number_of_json_dumps

    def modify(self, index):
        # Keep mutating the shared lists and dicts until the main thread stops us.
        while self.continue_thread:
            for d in self.data:
                if isinstance(d, list):
                    if len(d) > 20:
                        d.clear()
                    else:
                        d.append(index)
                else:
                    if len(d) > 20:
                        try:
                            d.pop(list(d)[0])
                        except KeyError:
                            pass
                    else:
                        if index % 2:
                            d[index] = index
                        else:
                            d[bytes(index)] = bytes(index)

    def test(self):
        self.continue_thread = True
        self.modifying_threads = []
        for ii in range(self.number_of_threads):
            t = Thread(target=self.modify, args=[ii])
            self.modifying_threads.append(t)

        self.results.clear()
        for t in self.modifying_threads:
            print(f'start {t}')
            t.start()

        for ii in range(self.number_of_json_dumps):
            print(f'dump {ii}')
            time.sleep(0.01)

            indent = ii if ii % 3 == 0 else None
            if ii % 5 == 0:
                try:
                    j = json.dumps(self.data, indent=indent, skipkeys=True)
                except TypeError:
                    continue  # non-serializable snapshot of the data; skip this dump
            else:
                j = json.dumps(self.data, indent=indent)
            self.results.append(j)
        self.continue_thread = False

        print([hash(r) for r in self.results])


t = JsonThreadingTest(number_of_json_dumps=102, number_of_threads=8)
t0 = time.time()
t.test()
dt = time.time() - t0
print(t.results[-1])
print(f'Done: {dt:.2f}')
  • The test script with t=JsonThreadingTest(number_of_json_dumps=102, number_of_threads=8) is about a factor of 25 faster on the free-threading build. Nice!

@nineteendo (Contributor) commented

You need to include the file that defines that macro.

@nineteendo (Contributor) left a review comment:
Revert newlines

@eendebakpt changed the title from "Draft: gh-116738: Make _json module thread-safe #117530" to "gh-116738: Make _json module thread-safe #117530" on May 31, 2024
@eendebakpt changed the title from "gh-116738: Make _json module thread-safe #117530" to "gh-116738: Make _json module thread-safe" on May 31, 2024
@eendebakpt changed the title from "gh-116738: Make _json module thread-safe" to "gh-116738: Make _json module safe in the free-threading build" on Aug 14, 2024
@nineteendo (Contributor) commented

The Python implementation first checks whether the list is empty and then iterates over it, instead of making a shallow copy of the list, checking the length of the copy, and iterating over that. A different thread could make the list empty between these two steps (which is what the subclass is simulating):

if not lst:
    yield "[]"
    return

time.sleep(10) # allow thread to modify the list
for value in lst:
    ...

My question is: do we fix just the broken subclass or also this?

@eendebakpt (Contributor, Author) commented

Quoting the question above ("do we fix just the broken subclass or also this?"):

In my opinion there is nothing to fix: when different threads are mutating the underlying data, we give no guarantees on the output, but we do guarantee that we will not crash the Python interpreter. The Python implementation cannot crash, since each individual Python statement is thread-safe. In this PR we modify the C implementation so that no crashes can occur: on the C side we have to make sure that if the underlying list is emptied, we do not index into deallocated memory, which would crash the interpreter. (Note: for the JSON encoder, the C call that makes the list access unsafe is PyList_GET_ITEM.)
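To make the crash scenario concrete, here is a sketch (not code from the module) of the kind of unguarded loop that is dangerous on the free-threading build:

/* Unsafe on the free-threading build: another thread may shrink or
   clear the list between the size read and the item read, and
   PyList_GET_ITEM performs no bounds check, so this can read freed
   memory. Holding a critical section on lst across the loop removes
   the race. */
for (Py_ssize_t i = 0; i < PyList_GET_SIZE(lst); i++) {
    PyObject *item = PyList_GET_ITEM(lst, i);  /* borrowed, unchecked */
    /* ... encode item ... */
}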

There are some other PRs addressing safety under the free-threading build, and the feedback there was similar: address the crashes, but do not sacrifice performance to guarantee correct output. See #120496 for an example.

@nineteendo (Contributor) commented Aug 20, 2024

There's a precedent for guarding against a broken int.__repr__() and float.__repr__(), so I've created an issue: #123183.

@eendebakpt eendebakpt marked this pull request as draft February 10, 2025 10:48
@eendebakpt eendebakpt marked this pull request as ready for review February 10, 2025 11:05
@eendebakpt (Contributor, Author) commented

@colesbury @mpage Would one of you be able to review this PR? Thanks!

@kumaraditya303 kumaraditya303 requested review from kumaraditya303 and removed request for nineteendo August 8, 2025 05:44
    Py_ssize_t indent_level, PyObject *indent_cache, PyObject *separator)
{
    for (Py_ssize_t i = 0; i < PySequence_Fast_GET_SIZE(s_fast); i++) {
        PyObject *obj = PySequence_Fast_GET_ITEM(s_fast, i);
Review comment (Contributor):

Using a borrowed reference is not safe here, because if the critical section on the sequence gets suspended, another thread can decref or free the object.
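A sketch of the safer pattern this points at, with encode_item as a hypothetical stand-in for the per-item work: take a strong reference before doing anything that might suspend the critical section, such as calling back into Python code.

PyObject *obj = Py_NewRef(PySequence_Fast_GET_ITEM(s_fast, i));  /* strong ref */
int rv = encode_item(obj);  /* may re-enter Python and suspend the lock */
Py_DECREF(obj);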
