
with weights = 'equal', mne.combine_evoked computes sum, not mean #7865

Closed
SophieHerbst opened this issue Jun 5, 2020 · 8 comments · Fixed by #7869

@SophieHerbst
Contributor

Following up on a strange variation in the scale of my evokeds, I realized that
mne.combine_evoked computes the sum, rather than the mean, when weights='equal'
(mne version: '0.21.dev0'):

evoked.py, line 876:

naves = np.array([evk.nave for evk in all_evoked], float)
if isinstance(weights, str):
    _check_option('weights', weights, ['nave', 'equal'])
    if weights == 'nave':
        weights = naves / naves.sum()
    else:
        weights = np.ones_like(naves)
else:
    weights = np.array(weights, float)

Shouldn't it be
weights = np.ones_like(naves) / len(all_evoked)
instead of
weights = np.ones_like(naves) ?

The documentation says

The weights to apply to the data of each evoked instance. Can also be 'nave' to weight according to evoked.nave, or "equal" to use equal weighting (each weighted as 1/N).
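As a quick sanity check, here is a minimal NumPy sketch (toy arrays standing in for the data of three evoked instances, not real MNE objects) showing that all-ones weighting produces a result that scales with the number of inputs, while the documented 1/N weighting does not:

```python
import numpy as np

# Hypothetical stand-ins for the data of three evoked instances.
data = [np.array([1.0, 2.0]), np.array([1.0, 2.0]), np.array([1.0, 2.0])]
n = len(data)

# Current behavior of weights='equal' (all-ones weights): a sum.
sum_result = sum(w * d for w, d in zip(np.ones(n), data))

# What the docstring promises (1/N weighting): a mean.
mean_result = sum(w * d for w, d in zip(np.ones(n) / n, data))

print(sum_result)   # [3. 6.] -- scales with the number of evokeds
print(mean_result)  # [1. 2.] -- independent of the number of evokeds
```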

@larsoner
Member

larsoner commented Jun 5, 2020

@drammock @jasmainak thoughts?

@drammock
Member

drammock commented Jun 5, 2020

A quick glance suggests that the code and documentation do not match. I'll look more closely later today.

@SophieHerbst
Contributor Author

SophieHerbst commented Jun 5, 2020

Besides the mismatch, I think the sum is difficult to interpret, because the amplitude depends on the number of evokeds that one tries to combine.

@drammock
Member

drammock commented Jun 5, 2020

There is definitely something wrong here. If I recall correctly the discussion among myself, @jasmainak, @jona-sassenhagen, and @agramfort, it was decided that weights='equal' should work the way it does in order for subtractions of evokeds to work sensibly. I think between that concern and trying hard to get the nave computation correct, we lost sight of the common use case where users want 1/N weighting. For a quick work-around you could pass an array of 1/N values as weights, but then nave will be wrong. I think it's time to re-think this implementation so that the most common use cases are correct, obvious, and hard to mess up.

@drammock drammock self-assigned this Jun 5, 2020
@jasmainak
Member

jasmainak commented Jun 5, 2020

The problem is that if you do:

combine_evoked([evoked1, -evoked2], 'equal')

vs

combine_evoked([evoked1, evoked2], 'equal')

they should do the same thing but with sign flipped. If you want to do a proper average, you should use weights='nave'.
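A toy NumPy sketch (hypothetical arrays standing in for evoked data, not real MNE objects) of the behavior being defended here: with all-ones weights, passing [evoked1, -evoked2] yields the plain difference, whereas 1/N weights would halve the contrast:

```python
import numpy as np

# Hypothetical stand-ins for the data of two evoked instances.
evoked1 = np.array([3.0, 1.0])
evoked2 = np.array([1.0, 3.0])

# Current weights='equal' (all ones): negating one input gives a plain difference.
contrast_sum = 1.0 * evoked1 + 1.0 * (-evoked2)    # evoked1 - evoked2

# With 1/N weights (what the docstring promises), the contrast is halved.
contrast_mean = 0.5 * evoked1 + 0.5 * (-evoked2)   # (evoked1 - evoked2) / 2

print(contrast_sum)   # [ 2. -2.]
print(contrast_mean)  # [ 1. -1.]
```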

@SophieHerbst
Contributor Author

SophieHerbst commented Jun 5, 2020

I see your point now @jasmainak, but that didn't become clear to me from the documentation.
Also, there might be cases where one wants to combine two conditions with equal weights, even if the trial numbers are different.
The clearest solution to me would be to keep 'equal' for weights that are truly equal (no sign changes), but divide them by N, and introduce a third option for 'custom' weights, such as -1/+1 or other contrasts, with an example in the documentation.

@jasmainak
Member

> but that didn't become clear to me from the documentation.

Hmm ... how would you suggest we improve the wording?

> Also, there might be cases where one wants to combine two conditions with equal weights, even if the trial numbers are different.

you can still do this with:

combine_evoked([evoked1, evoked2], [0.5, 0.5])

> The clearest solution to me would be to keep 'equal' for weights that are truly equal (no sign changes), but divide them by N, and introduce a third option for 'custom' weights, such as -1/+1 or other contrasts, with an example in the documentation.

I think EEG people do combine_evoked([evoked1, -evoked2], 'equal') very often to compute contrasts, e.g. in oddball paradigms. So I wouldn't exactly call it a "custom" scenario. I remember this was introduced because of @jona-sassenhagen

@drammock
Member

drammock commented Jun 5, 2020

It seems to me that the definitive question here is: "what is the more common use case: an equally-weighted sum, or an equally-weighted average?" Both are possible; it's a question of which we want to make easy for users via a string argument to weights. My impression is that the equally-weighted average is more common, and is also slightly more difficult for users to implement themselves:

equal_weighted_sum = combine_evoked(evks, weights=np.ones(len(evks)))
equal_weighted_avg = combine_evoked(evks, weights=np.ones(len(evks)) / len(evks))

Therefore I'm preparing a PR to make the code align with the docstring for weights='equal' and also to change the docstring's recommendation about how to do a subtraction, since it will no longer be correct to use weights='equal' for subtraction after that change.
