Added eu analog conversion to test_decom #274
Conversation
```python
assert engineering_data.shape == validation_data.shape

# Define a tolerance for floating-point precision
tolerance = 1e-6  # Adjust this tolerance as needed
```
This can be deleted. I put it there as an "option" in case we need something like this.
I think you might be able to do a numpy array comparison with allclose for your tolerance:
https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html#numpy.testing.assert_allclose
If you don't need a tolerance you should remove it though so we don't accidentally introduce some math slop somewhere.
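A minimal sketch of the suggested `assert_allclose` usage, with made-up array values standing in for the real engineering/validation data:

```python
import numpy as np

# Hypothetical stand-ins for the real engineering and validation data;
# the values differ only by floating-point noise well inside rtol.
engineering_data = np.array([1.0000001, 2.0000001, 3.0])
validation_data = np.array([1.0, 2.0, 3.0])

# Compares element-wise within a relative tolerance and raises an
# AssertionError with a diagnostic report on any mismatch.
np.testing.assert_allclose(engineering_data, validation_data, rtol=1e-6)
```

Unlike a hand-rolled `abs(a - b) < tolerance` check, a failure prints the mismatched positions and the max difference, which helps when chasing down exactly where "math slop" crept in.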
I don't think it is needed. I was just thinking about the decimal places because they aren't whole numbers. Definitely do not want math slop. I will look into the links.
```python
)

for mnemonic in validation_data.columns:
    matching_row = filtered_analog[filtered_analog["mnemonic"] == mnemonic]
```
I need to update `filtered_analog`. Right now it doesn't exist, sorry.
```python
].values[0]

raw_data = validation_data[mnemonic]
engineering_unit_data = (
```
I think @tech3371 was using NumPy's Polynomial class for this; are you able to do the same?
https://numpy.org/doc/stable/reference/generated/numpy.polynomial.polynomial.Polynomial.html#numpy.polynomial.polynomial.Polynomial
```python
np.polynomial.Polynomial(coefficients)(raw_data)
```
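For reference, a toy example of that evaluation pattern, with invented coefficients (the real ones come from the packet definition):

```python
import numpy as np

# Invented linear conversion eu = 1.5 + 0.25 * raw; Polynomial takes
# coefficients in increasing order (c0, c1, ...).
coefficients = [1.5, 0.25]
raw_data = np.array([0.0, 4.0, 8.0])

# Constructing the Polynomial and calling it evaluates it element-wise
# over the whole raw array in one step.
engineering_unit_data = np.polynomial.Polynomial(coefficients)(raw_data)
# → array([1.5, 2.5, 3.5])
```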
@tech3371 did mention this to me in person and I saw it in her PR. I will look at her code again and look into this.
```python
assert engineering_data.shape == validation_data.shape

# Define a tolerance for floating-point precision
tolerance = 1e-6  # Adjust this tolerance as needed
```
```python
# Loop through each column and each row to compare the values
for column in validation_data.columns:
    for index, row in validation_data.iterrows():
        # Compare each value in the column
        assert abs(engineering_data.at[index, column] - row[column]) < tolerance
```
Can you compare the arrays directly without needing to loop over the quantities?
```python
np.testing.assert_allclose(engineering_data, validation_data, rtol=1e-6)
```
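A sketch of how the nested loop above collapses into one call, using toy DataFrames with invented mnemonics:

```python
import numpy as np
import pandas as pd

# Toy frames with invented column names; the engineering values carry
# only tiny floating-point noise relative to the validation values.
validation_data = pd.DataFrame({"TEMP": [10.0, 20.0], "VOLT": [3.3, 5.0]})
engineering_data = validation_data + 1e-8

# assert_allclose accepts array-likes such as DataFrames, so the whole
# table is compared element-wise in a single vectorized call.
np.testing.assert_allclose(engineering_data, validation_data, rtol=1e-6)
```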
@tech3371 can you review this and make suggestions please? There is an issue in the last test.
@bourque can you review and see what I need to change? What did I miss from yesterday's convo?
```diff
@@ -28,13 +31,13 @@ def decom_test_data():

 @pytest.fixture(scope="session")
-def validation_data():
+def raw_data():
```
I would revert that name to what you had before. It made more sense.
```python
dataset = xr.Dataset()
dataset["First_mnemonic"] = xr.DataArray(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
```
You are doing right by creating a dataset as input to convert_raw_to_eu(), but it looks like we are doing it in the wrong place. When we call convert_raw_to_eu() from imap_processing/utils.py, we assume the raw packet data has already been unpacked and saved in an L1A CDF file, so the input is an xr.Dataset. I am not sure of the best way to do it here, and I see why you did it this way: they gave you raw unpacked data in a CSV file. I feel like you would want to store the unpacked packet data in an xr.Dataset, keep that CSV data as validation, and then use that dataset as input to the function that converts raw to engineering units.
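One way to follow that suggestion, sketched with invented mnemonics and values (the real ones would come from the decommutated packets, with the CSV kept purely as the validation reference):

```python
import numpy as np
import xarray as xr

# Invented unpacked packet values keyed by mnemonic, e.g. gathered
# while iterating over the decommutated packets.
unpacked = {"MNEMONIC_A": [1, 2, 3], "MNEMONIC_B": [10, 20, 30]}

# Store the raw unpacked data as an xr.Dataset, the input type that
# convert_raw_to_eu() expects.
dataset = xr.Dataset(
    {name: xr.DataArray(np.array(values)) for name, values in unpacked.items()}
)
```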
```python
return raw_data


def convert_raw_to_eu(dataset: xr.Dataset, packet_name):
```
I know I guided you to copy the function for development. Will rebasing with the latest changes conflict with your work? If not, I would rebase and then import that function into the test and use it. This function looks similar to what we have in imap_processing/utils.py, but you are missing the last operation, which converts the raw data to EU values.
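A rough sketch of that missing final step, assuming each mnemonic has polynomial coefficients in increasing order. The `coefficients` argument and its shape are invented for illustration, not the actual imap_processing signature:

```python
import numpy as np
import xarray as xr


def convert_raw_to_eu(dataset: xr.Dataset, coefficients: dict) -> xr.Dataset:
    """Evaluate each mnemonic's analog-conversion polynomial over its raw
    values. `coefficients` maps mnemonic -> (c0, c1, ...) and is an
    assumption for this sketch, not the real utils.py interface."""
    eu_dataset = dataset.copy()
    for mnemonic, coeffs in coefficients.items():
        # This is the final raw -> EU operation the comment refers to.
        eu_dataset[mnemonic] = xr.DataArray(
            np.polynomial.Polynomial(coeffs)(dataset[mnemonic].data)
        )
    return eu_dataset


# Toy usage: eu = 1.0 + 0.5 * raw
raw = xr.Dataset({"TEMP_RAW": xr.DataArray(np.array([0.0, 2.0, 4.0]))})
eu = convert_raw_to_eu(raw, {"TEMP_RAW": [1.0, 0.5]})
```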
```diff
 # Return the data
-return validation_data
+return raw_data
```
You could do something similar to what I did in create_dataset(). If you look at its references, you can do the same thing here. Its input is the packet list, which you already have from decom_test_data(). create_dataset() gives back an xr.Dataset, which you can then use as input to convert_raw_to_eu().
I suggest renaming this to use underscores instead of whitespace (i.e. `COD_NHK_20230822_122700_2.csv`). Things can get really messy in Python when dealing with whitespace in filenames.
```python
dataset = xr.Dataset()
dataset["First_mnemonic"] = xr.DataArray(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
```
remove this?
```python
# Determine the number of CCSDS header fields
num_ccsds_header_fields = 7

# Compare EU values of housekeeping data, skipping CCSDS header fields
for idx, field in enumerate(eu_hk_data):
    # Skip the first num_ccsds_header_fields fields
    if idx < num_ccsds_header_fields:
        continue

    # Extract the specific EU values for the field
    eu_values = eu_hk_data[field].data

    # Extract the corresponding validation data for the field
    validation_values = validation_data[field].values

    # Compare each individual element
    for eu_val, validation_val in zip(eu_values, validation_values):
        assert round(eu_val, 5) == round(validation_val, 5)
```
Really nice work! You could move all of the above plus these new changes into a new function, because it is testing the engineering units of housekeeping data. You could call it something like test_eu_hk_data().
I made final updates and cleaned up based on @tech3371's review. I am going to merge for now and wrap it up.
…ter#274) * added eu analog conversion to test_decom
Change Summary
This is a WIP draft. The new test does not pass right now, so I will be fixing and updating it. I just wanted to share the progress for ticket #247.
Overview
This PR will apply analog conversions to validation_data and test against the EU CSV.
New Files
Updated Files
test_decom.py
Testing
The new test function test_unit_conversion will not pass. TODO: update and get feedback on how to pass this.
Closes #247