slpmodule_new() doesn't initialize extra members added by type_new() #22

Closed

ghost opened this issue Nov 29, 2013 · 9 comments

ghost commented Nov 29, 2013

Originally reported by: RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew)


(originally reported in Trac by ndade on 2013-06-08 02:47:27)

slpmodule_new() allocates the module using

        m = PyObject_GC_New(PySlpModuleObject, PySlpModule_TypePtr);

and that uses PySlpModule_TypePtr->tp_basicsize to determine how many bytes to allocate. On amd64, for example, we expect there to be 40 bytes allocated (sizeof(*m)), but in fact 48 are allocated[1] because tp_basicsize was set to 48 back when the SLP module type was derived in flextype.c. type_new() computed 48 by taking the base type's tp_basicsize of 40 and adding space for one more pointer to hold the weak reference list. Grep for "may_add_weak" in type_new() for the details.

Because this silently added field in *m is never initialized, when the slpmodule object is cleaned up at exit any non-NULL value left in it is interpreted as the head of a list of PyWeakReference objects, and garbage ensues.

Apparently with the default memory allocators, and perhaps because the slpmodule is allocated so early, the field happened to be NULL. But I've been replacing the GC allocator with my own; there the memory was recycled and the pointer was non-NULL.
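
For illustration only, here is a small standalone C program (not Stackless code; FakeModule and its fields are made-up stand-ins for PySlpModuleObject) that reproduces the shape of the problem: the allocation is sized from tp_basicsize, the initialization covers only the smaller struct the C code knows about, and the trailing pointer slot keeps whatever the allocator left behind.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for PySlpModuleObject; the real layout differs, only the
       size mismatch matters here. */
    typedef struct {
        void *md_dict;
        void *md_tasklet;   /* hypothetical, analogous to m->__tasklet__ above */
    } FakeModule;

    int main(void)
    {
        /* type_new() added room for one extra pointer: the weak reference list. */
        size_t tp_basicsize = sizeof(FakeModule) + sizeof(void *);

        /* Simulate a recycling allocator: the returned block is not zeroed. */
        char *raw = malloc(tp_basicsize);
        if (raw == NULL)
            return 1;
        memset(raw, 0xAB, tp_basicsize);

        FakeModule *m = (FakeModule *)raw;
        m->md_dict = NULL;      /* like slpmodule_new(), initialize only the known fields */
        m->md_tasklet = NULL;

        /* The slot past the end of the struct still holds allocator garbage;
           during cleanup it would be misread as the head of a PyWeakReference list. */
        void *weaklist;
        memcpy(&weaklist, raw + sizeof(FakeModule), sizeof weaklist);
        printf("trailing weaklist slot: %p\n", weaklist);

        free(raw);
        return 0;
    }

With the default allocators that slot tends to come back zeroed, which is consistent with the observation above that the bug only surfaced once a recycling allocator was swapped in.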

The fix is to initialize the entire memory allocated for the object. Something like

@@ -806,6 +806,8 @@
        m->__tasklet__ = NULL;
        nameobj = PyString_FromString(name);
        m->md_dict = PyDict_New();
+        /* set the extra fields to 0 (type_new() added a weak reference list field automatically) */
+        memset(m+1, 0, PySlpModule_TypePtr->tp_basicsize - sizeof(*m));
        if (m->md_dict == NULL || nameobj == NULL)
                goto fail;
        if (PyDict_SetItemString(m->md_dict, "__name__", nameobj) != 0)

does the trick.
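
A slightly more defensive variant of the same fix (only a sketch, not the change that was committed) computes the tail size explicitly and skips the memset when the two sizes already agree:

    /* Zero whatever type_new() appended past the struct slpmodule_new() knows
       about; a no-op if tp_basicsize and sizeof(*m) already match. */
    Py_ssize_t extra = PySlpModule_TypePtr->tp_basicsize - (Py_ssize_t)sizeof(*m);
    if (extra > 0)
        memset((char *)m + sizeof(*m), 0, (size_t)extra);

Either way the point is the same: every byte that PyObject_GC_New() allocates based on tp_basicsize has to be initialized, not just the fields declared in the C struct.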

You can also reproduce this by making the base memory allocator memset() the memory to random values.

[1] The 48 bytes do not include the PyGC_Head header, which adds another 24 or 32 bytes depending on alignment requirements.


ghost commented Jan 5, 2014

Original comment by Anselm Kruis (Bitbucket: akruis, GitHub: akruis):


I wonder if we should include this patch in 2.7.6-slp?

ghost commented Jan 5, 2014

Original comment by RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew):


I think we need to do releases in a timely fashion, but we also need to ensure that all outstanding bug fixes are applied, including this one.

However, note that the reporter clearly states that in order to reproduce this without his custom allocator, you need to manually corrupt the memory. I was going to say that CCP used Stackless for years and never encountered this problem, but I don't think that's a valid claim. What I can say is that I've used the Stackless interpreter personally for over a decade and this has never been an issue for me. Is this a real problem? Is there a better fix?

ghost commented Jan 6, 2014

Original comment by Anonymous:


This is clearly a fix, and a very small one. Given how hard this bug is to trigger, I consider it lower priority; provided the change is confirmed not to create other problems, I leave the decision to your judgement.

What I would really consider an improvement, though, is removing the problem completely: it all comes from flextype, which made a lot of sense in the early days, when Python was not at all optimized.

I think flextype should go away, and we should no longer have a module with methods; that can be done with other approaches at the Python level.

So in the (hopefully not so) long term, please consider what it would take to get rid of flextype and the extra slpmodule type. If you can do that for the next release, I would appreciate it very much.

Less code, less complication, same functionality, more simplicity

ghost commented Jan 9, 2014

Original comment by RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew):


Will put on hold pending issue #47 being resolved.

ghost commented Jan 9, 2014

Original comment by RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew):


See previous comment.

ghost commented Feb 13, 2014

Original comment by Kristján Valur Jónsson (Bitbucket: krisvale, GitHub: kristjanvalur):


Prerequisite issue #47 has been closed.

ghost commented Feb 13, 2014

Original comment by Kristján Valur Jónsson (Bitbucket: krisvale, GitHub: kristjanvalur):


This problem is likely to have gone away now. Richard, can you verify?

ghost commented Feb 13, 2014

Original comment by RMTEW FULL NAME (Bitbucket: rmtew, GitHub: rmtew):


This wasn't my problem, nor something I've ever reproduced; it was merely created under my name when the issues were ported over.

If the size of the object is now consistent with the mainline Python object, then we should just close this.

ghost commented Feb 14, 2014

Original comment by Kristján Valur Jónsson (Bitbucket: krisvale, GitHub: kristjanvalur):


Resolved as a side effect of removing flextype.

This issue was closed.