
Add Pickle support #371

Closed

Conversation

nada-ben-ali
Collaborator

I am working on caching the PDX config parsing result to avoid reparsing it each time, using pickle for serialization.

During pickling, no issues were encountered. However, during unpickling, the following exception occurred: "maximum recursion depth exceeded", originating in the `NamedItemList` class.
The recursion was caused by infinite calls to `NamedItemList.__getattr__`: the code attempted to access a property that was uninitialized at the time, so `__getattr__` was invoked repeatedly during the reconstruction of the object.
To address this, I added a `__reduce__` method, which explicitly defines how an object is serialized and reconstructed during pickling and unpickling. After adding this method, the recursion issue was resolved.
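For illustration, here is a minimal self-contained sketch of the failure mode and the fix. The class below is a hypothetical stand-in, not odxtools' actual `NamedItemList`:

```python
import pickle


class NamedItemList:
    """Minimal stand-in for a list whose items are addressable by name
    (hypothetical sketch, not the real odxtools class)."""

    def __init__(self, items):
        self._item_dict = dict(items)

    def __getattr__(self, name):
        # Only invoked when normal attribute lookup fails. Without
        # __reduce__, pickle recreates the object via __new__ and then
        # probes it for methods such as __setstate__; at that point
        # self._item_dict does not exist yet, so the lookup below fails,
        # __getattr__ re-enters itself, and the recursion limit is hit.
        if name not in self._item_dict:
            raise AttributeError(name)
        return self._item_dict[name]

    def __reduce__(self):
        # Reconstruct via __init__ so that no attribute is ever read
        # from a half-initialized instance during unpickling.
        return (self.__class__, (list(self._item_dict.items()),))


nil = NamedItemList([("short_name", "abc")])
restored = pickle.loads(pickle.dumps(nil))
assert restored.short_name == "abc"
```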

After fixing the recursion issue, I got the following exception with some PDX files:

[screenshot of the exception traceback omitted]

This issue arises because `NamedItemList` assumes all its elements conform to the `OdxNamed` interface, i.e. that they have a `short_name` attribute. While the PDX files were correctly parsed initially, the `TableRow` objects were reconstructed in a state where their `short_name` attribute was not properly restored or initialized.
Thus, the solution was to add a `__reduce__` method to the `TableRow` class as well, ensuring its attributes are correctly restored during unpickling.

odxtools: 8.3.4

andlaus and others added 28 commits December 9, 2024 09:41
due to some copy-and-pasto, `DynDefinedSpec._build_odxlinks()`
unconditionally called itself, and the `oid` parameter was missing from
the fallback code for DTC decoding. (Be aware that the latter points
towards an incomplete dataset.)

Signed-off-by: Andreas Lauser <andreas.lauser@mbition.io>
Signed-off-by: David Helwig <david.helwig@mbition.io>
Fix the issue of short name reference when generating ODX
Co-authored-by: Andreas Lauser <github@poware.org>
this turned out to be quite a bit more involved than anticipated: In
contrast to what the term "linked" implies, linked DTC DOPs are not
just informative, but an inheritance mechanism for DTCs...

Signed-off-by: Andreas Lauser <andreas.lauser@mbition.io>
Signed-off-by: Gerrit Ecke <gerrit.ecke@mbition.io>
as usual, thanks to @kayoub5 for the review.

Signed-off-by: Andreas Lauser <andreas.lauser@mbition.io>
Signed-off-by: Gerrit Ecke <gerrit.ecke@mbition.io>
…z#359)

* change list command output from tabulate to rich tables

* Add indentation to rich tables

* fix coding style errors

* Resolve static type checking error in _print_utils.py

* Update browse.py from tabulate to rich table

* Remove commented code

* Fix lint coding style issues

* Fix lint code quality issue

* Fix static type definition for parameters

* Fix static type definitions for variables defined as List[Any]
table columns are now always of type `List[str]`, and the column lists
have better variable names (`*_column`) ...

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
Signed-off-by: Katja Köhler <katja.koehler@mercedes-benz.com>
this is now replaced by `rich.table`.

Note to self: the `compare` tool *really* needs cleanup.

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
Signed-off-by: Katja Köhler <katja.koehler@mercedes-benz.com>
thanks to @kayoub5 for the nudge!

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
Signed-off-by: Katja Köhler <katja.koehler@mercedes-benz.com>
the code for handling the upper limit was indented one level too deep
which caused exceptions for `free_parameters_info()`.

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
Signed-off-by: Christian Hackenbeck <christian.hackenbeck@mercedes-benz.com>
Codecs are any objects that can be encoded into and decoded from PDUs.
The approach so far was to derive all "composite" codecs from
`BasicStructure` and to use a subset of that API for "simple" codable
objects like DOPs or parameters. In particular, the previous approach
to composite objects was sub-optimal because `BASIC-STRUCTURE`
features subtags that other codable objects like `Request` and
`Response` do not exhibit (e.g., `.byte_size`).

Since codecs do not play very nicely with inheritance, the new
approach is based on typing protocols (Java would call these
"interfaces"): `codec.py` defines a few runtime checkable
`typing.Protocol` classes which codecs must implement: `Codec` for the
basic functionality, `CompositeCodec` defines the API for codecs that
are composed of a list of parameters like structures, requests,
responses or environment datas, and `ToplevelCodec` defines the
"external" API for requests and responses that is supposed to be
called by user scripts. Any codable class implementing these APIs can
check if it is compliant using `isinstance()` in the `__post_init__()`
method of the respective class.

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
Signed-off-by: Alexander Walz <alexander.walz@mercedes-benz.com>
Signed-off-by: Gunnar Andersson <gunnar.andersson@mercedes-benz.com>
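The protocol-based approach described above can be sketched as follows. This is a minimal illustration of runtime-checkable `typing.Protocol` classes with an `isinstance()` self-check in `__post_init__()`; the class and method names here are invented for the example and are not odxtools' actual API:

```python
from dataclasses import dataclass
from typing import Protocol, runtime_checkable


@runtime_checkable
class Codec(Protocol):
    """Hypothetical codec interface (names assumed for illustration)."""

    def encode(self, value: int) -> bytes: ...

    def decode(self, data: bytes) -> int: ...


@dataclass
class UInt8Codec:
    """A trivial codec for single unsigned bytes."""

    byte_order: str = "big"

    def __post_init__(self) -> None:
        # isinstance() works on @runtime_checkable protocols; it verifies
        # that all required methods exist (signatures are not checked).
        assert isinstance(self, Codec)

    def encode(self, value: int) -> bytes:
        return value.to_bytes(1, self.byte_order)

    def decode(self, data: bytes) -> int:
        return int.from_bytes(data, self.byte_order)


codec = UInt8Codec()
assert codec.decode(codec.encode(0x42)) == 0x42
```

Note that `@runtime_checkable` protocols only check for the *presence* of the required methods at runtime; full signature checking is still left to `mypy`.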
…odec APIs at runtime

as @kayoub5 correctly notes, this is already done by `mypy` ahead of time.

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
this property has been deprecated for more than a year, and after this
pull request, request and response objects are no longer structures.

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
the intention of this was to make the code more robust in non-strict
mode, but come to think of it, exceptions raised by
`._resolve_snref()` will not be caught even in non-strict mode, so we
can just as well skip the exception handling entirely. (Besides this,
`context.response` is now properly set in
`Response._resolve_snrefs()`.)

thanks to @kayoub5 for noticing this...

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
As @kayoub5 rightfully notes, this was not really used anywhere.
(Also, the `Response` class did not even strictly implement it,
because its `.decode()` and `.encode()` methods accept the additional
`coded_request` argument.)

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
this was an artifact stemming from the development history of this PR:
originally, the `Codec` type protocol mandated this property, but it
turned out that the better approach was to move handling of
statically-sized structures from the generic function
(`composite_codec_get_static_bit_length()`) to the
`get_static_bit_length()` method of structures.

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
Thanks for the catch, @kayoub5.

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
thanks to @kayoub5

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
thanks to @kayoub5

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
since parameters no longer ensure that they adhere to the codec
protocol, this can now be done. (Though it is still slightly ugly, IMO.)

as usual, this was noted by @kayoub5...

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
…ttributes

as usual, thanks to @kayoub5...

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
As proposed by @kayoub5, this will be done in a separate PR. Stay
tuned.

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
This emerged in the context of mercedes-benz#364. We remove all deprecated
functionality. All of it has been deprecated for at least a year, so
the impact should not be too big. (In any case, since this is a
potentially breaking change, the next odxtools release after this
patch will be 9.0.0.)

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
Signed-off-by: Gerrit Ecke <gerrit.ecke@mercedes-benz.com>
since it seems that nobody ever tried to use odxtools with comparam subsets and
comparam specs that use SNREFs, this did not crop up earlier...

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
Signed-off-by: Gerrit Ecke <gerrit.ecke@mercedes-benz.com>
The spec defines `OdxCategory` as the base for any top-level content
tag in an ODX XML file. odxtools currently implements
`DIAG-LAYER-CONTAINER`, `COMPARAM-SPEC` and `COMPARAM-SUBSET` (all of
them are relevant for diagnostics based on UDS). Currently missing are
the categories `ECU-CONFIG` (variant coding), `FLASH` (firmware blobs
for flashing), `FUNCTION-DICTIONARY` (functionality distributed over
multiple ECUs) and `MULTIPLE-ECU-JOB-SPEC` (multiple-ecu jobs).

Signed-off-by: Andreas Lauser <andreas.lauser@mercedes-benz.com>
Signed-off-by: Christian Hackenbeck <christian.hackenbeck@mercedes-benz.com>
@nada-ben-ali nada-ben-ali deleted the NaBe/pickle_support branch December 9, 2024 09:22
@andlaus
Collaborator

andlaus commented Dec 9, 2024

I would not recommend pickling or deep-copying large ODX databases directly: the heavy cross-referencing within ODX interferes with the inner workings of these mechanisms, so they often do not work (due to, e.g., cyclic references or very deep nesting) or, if they do, they are often slower than loading the PDX file from scratch. What is beneficial is to copy only the dataclass fields of the objects and to pickle/unpickle those:

import dataclasses
from copy import copy
from typing import Any

from odxtools.database import Database

def deepcopy_dataclass(obj: Any, memo: dict[int, Any]) -> Any:
    # the memo dict maps id(original) -> copy, so shared and cyclic
    # references resolve to a single copy
    obj_id = id(obj)
    if (obj_copy := memo.get(obj_id)) is not None:
        return obj_copy

    if isinstance(obj, list):
        result: list[Any] = []
        memo[obj_id] = result
        for x in obj:
            result.append(deepcopy_dataclass(x, memo))
        return result
    elif obj is None:
        return None
    elif isinstance(obj, int | float | str | bytes | bytearray):
        # immutable leaf objects can be shared with the original
        tmp = obj
        memo[obj_id] = tmp
        return tmp
    elif dataclasses.is_dataclass(obj):
        # create the instance without running __init__ and register it
        # in the memo *before* recursing into the fields, so that cyclic
        # references point at the copy
        tmp2 = type(obj).__new__(type(obj))  # type: ignore
        memo[obj_id] = tmp2

        kwargs = {}
        for field in dataclasses.fields(obj):
            kwargs[field.name] = deepcopy_dataclass(getattr(obj, field.name), memo)

        tmp2.__init__(**kwargs)  # type: ignore

        memo[obj_id] = tmp2
        return tmp2
    else:
        # fall back to a shallow copy for everything else
        tmp3 = copy(obj)
        memo[obj_id] = tmp3

        return tmp3

def copy_db(db: Database) -> Database:
    new_db = copy(db)
    new_db._diag_layer_containers = deepcopy_dataclass(db._diag_layer_containers, {})
    new_db._comparam_subsets = deepcopy_dataclass(db._comparam_subsets, {})
    new_db._comparam_specs = deepcopy_dataclass(db._comparam_specs, {})

    return new_db

the relevant parts of an existing ODX database can then be duplicated using `db2 = copy_db(db)`, and the `db2` object should play nicely with `pickle`, `deepcopy()`, etc. (Before you use it in "normal" code, be sure to call `db2.refresh()` to resolve the references, deal with inheritance, etc.)
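For the caching use case from the top of this conversation, the pickled copy could be wrapped in a small file cache along these lines. This is a generic sketch; `load_cached` is a hypothetical helper, and for odxtools one would pass a loader such as `lambda: copy_db(odxtools.load_pdx_file(path))` and call `.refresh()` on the returned database before use:

```python
import pickle
from pathlib import Path
from typing import Any, Callable


def load_cached(cache_file: Path, load_fresh: Callable[[], Any]) -> Any:
    """Return the cached object if present, otherwise build it via
    load_fresh() and store the pickled result for next time."""
    if cache_file.exists():
        return pickle.loads(cache_file.read_bytes())
    obj = load_fresh()
    cache_file.write_bytes(pickle.dumps(obj))
    return obj
```

On subsequent runs, `load_fresh` is never called, so the (expensive) PDX parsing step is skipped entirely.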

@nada-ben-ali nada-ben-ali mentioned this pull request Dec 10, 2024
@nada-ben-ali
Collaborator Author

@andlaus, I tried your suggestion and added a comment with the results I got in the new PR. (This PR is just a mess, so I closed it, sorry :) )
