metrics output giving all zeros / nans #8

Closed
ifahim opened this issue May 22, 2018 · 5 comments
ifahim commented May 22, 2018

Hi,
I am trying to run motmetrics in a conda environment using the instructions provided, but I am getting all zeros / NaNs when running on the example data provided in the repository.

(motmetrics-env) my-mac $ python -m motmetrics.apps.eval_motchallenge motmetrics/data/TUD-Campus/gt.txt  motmetrics/data/TUD-Campus/test.txt 
05:01:09 INFO - Found 0 groundtruths and 0 test files.
05:01:09 INFO - Available LAP solvers ['scipy']
05:01:09 INFO - Default LAP solver 'scipy'
05:01:09 INFO - Loading files.
05:01:09 INFO - Running metrics
~/Desktop/py-motmetrics/motmetrics/metrics.py:378: RuntimeWarning: invalid value encountered in double_scalars
  return 2 * idtp / (num_objects + num_predictions)
~/Desktop/py-motmetrics/motmetrics/metrics.py:370: RuntimeWarning: invalid value encountered in double_scalars
  return idtp / (idtp + idfp)
~/Desktop/py-motmetrics/motmetrics/metrics.py:374: RuntimeWarning: invalid value encountered in double_scalars
  return idtp / (idtp + idfn)
~/Desktop/py-motmetrics/motmetrics/metrics.py:302: RuntimeWarning: invalid value encountered in long_scalars
  return num_detections / num_objects
~/Desktop/py-motmetrics/motmetrics/metrics.py:298: RuntimeWarning: invalid value encountered in long_scalars
  return num_detections / (num_false_positives + num_detections)
~/Desktop/py-motmetrics/motmetrics/metrics.py:294: RuntimeWarning: invalid value encountered in long_scalars
  return 1. - (num_misses + num_switches + num_false_positives) / num_objects
~/Desktop/py-motmetrics/motmetrics/metrics.py:290: RuntimeWarning: invalid value encountered in double_scalars
  return df.noraw['D'].sum() / num_detections
        IDF1  IDP  IDR Rcll Prcn GT MT PT ML FP FN IDs  FM MOTA MOTP
OVERALL nan% nan% nan% nan% nan%  0  0  0  0  0  0   0   0 nan%  nan
05:01:09 INFO - Completed

Then I tried to run pytest and got the following failures. Any idea what might be wrong?

$ pytest
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: ~/Desktop/py-motmetrics, inifile:
collected 18 items                                                                                                                                                            

motmetrics/tests/test_distances.py ...                                                                                                                                  [ 16%]
motmetrics/tests/test_io.py ..                                                                                                                                          [ 27%]
motmetrics/tests/test_lap.py ..                                                                                                                                         [ 38%]
motmetrics/tests/test_metrics.py ...F.F                                                                                                                                 [ 72%]
motmetrics/tests/test_mot.py F...F                                                                                                                                      [100%]

================================================================================== FAILURES ===================================================================================
_______________________________________________________________________________ test_mota_motp ________________________________________________________________________________

   def test_mota_motp():
       acc = mm.MOTAccumulator()
   
       # All FP
       acc.update([], ['a', 'b'], [], frameid=0)
       # All miss
       acc.update([1, 2], [], [], frameid=1)
       # Match
       acc.update([1, 2], ['a', 'b'], [[1, 0.5], [0.3, 1]], frameid=2)
       # Switch
       acc.update([1, 2], ['a', 'b'], [[0.2, np.nan], [np.nan, 0.1]], frameid=3)
       # Match. Better new match is available but should prefer history
       acc.update([1, 2], ['a', 'b'], [[5, 1], [1, 5]], frameid=4)
       # No data
       acc.update([], [], [], frameid=5)
   
       mh = mm.metrics.create()
       metr = mh.compute(acc, metrics=['motp', 'mota', 'num_predictions'], return_dataframe=False, return_cached=True)
   
       assert metr['num_matches'] == 4
       assert metr['num_false_positives'] == 2
       assert metr['num_misses'] == 2
       assert metr['num_switches'] == 2
       assert metr['num_detections'] == 6
>       assert metr['num_objects'] == 8
E       assert 10 == 8

motmetrics/tests/test_metrics.py:90: AssertionError
___________________________________________________________________________ test_motchallenge_files ___________________________________________________________________________

self = <pandas.core.indexing._LocIndexer object at 0x10f626e58>, key = 'nan', axis = 0

   @Appender(_NDFrameIndexer._validate_key.__doc__)
   def _validate_key(self, key, axis):
       ax = self.obj._get_axis(axis)
   
       # valid for a label where all labels are in the index
       # slice of labels (where start-end in labels)
       # slice of integers (only if in the labels)
       # boolean
   
       if isinstance(key, slice):
           return
   
       elif com.is_bool_indexer(key):
           return
   
       elif not is_list_like_indexer(key):
   
           def error():
               if isna(key):
                   raise TypeError("cannot use label indexing with a null "
                                   "key")
               raise KeyError(u"the label [{key}] is not in the [{axis}]"
                              .format(key=key,
                                      axis=self.obj._get_axis_name(axis)))
   
           try:
               key = self._convert_scalar_indexer(key, axis)
               if not ax.contains(key):
>                   error()

../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1790: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

   def error():
       if isna(key):
           raise TypeError("cannot use label indexing with a null "
                           "key")
       raise KeyError(u"the label [{key}] is not in the [{axis}]"
                      .format(key=key,
>                              axis=self.obj._get_axis_name(axis)))
E       KeyError: 'the label [nan] is not in the [index]'

../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1785: KeyError

During handling of the above exception, another exception occurred:

   def test_motchallenge_files():
       dnames = [
           'TUD-Campus',
           'TUD-Stadtmitte',
       ]
   
       def compute_motchallenge(dname):
           df_gt = mm.io.loadtxt(os.path.join(dname,'gt.txt'))
           df_test = mm.io.loadtxt(os.path.join(dname,'test.txt'))
           return mm.utils.compare_to_groundtruth(df_gt, df_test, 'iou', distth=0.5)
   
       accs = [compute_motchallenge(os.path.join(DATA_DIR, d)) for d in dnames]
   
       # For testing
       # [a.events.to_pickle(n) for (a,n) in zip(accs, dnames)]
   
       mh = mm.metrics.create()
>       summary = mh.compute_many(accs, metrics=mm.metrics.motchallenge_metrics, names=dnames, generate_overall=True)

motmetrics/tests/test_metrics.py:133: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
motmetrics/metrics.py:191: in compute_many
   partials = [self.compute(acc, metrics=metrics, name=name) for acc, name in zip(dfs, names)]
motmetrics/metrics.py:191: in <listcomp>
   partials = [self.compute(acc, metrics=metrics, name=name) for acc, name in zip(dfs, names)]
motmetrics/metrics.py:142: in compute
   cache[mname] = self._compute(df_map, mname, cache, parent='summarize')
motmetrics/metrics.py:203: in _compute
   v = cache[depname] = self._compute(df_map, depname, cache, parent=name)
motmetrics/metrics.py:203: in _compute
   v = cache[depname] = self._compute(df_map, depname, cache, parent=name)
motmetrics/metrics.py:205: in _compute
   return minfo['fnc'](df_map, *vals)
motmetrics/metrics.py:335: in id_global_assignment
   df_o = df.loc[o, 'D'].dropna()
../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1472: in __getitem__
   return self._getitem_tuple(key)
../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:870: in _getitem_tuple
   return self._getitem_lowerdim(tup)
../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:998: in _getitem_lowerdim
   section = self._getitem_axis(key, axis=i)
../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1911: in _getitem_axis
   self._validate_key(key, axis)
../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1798: in _validate_key
   error()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

   def error():
       if isna(key):
           raise TypeError("cannot use label indexing with a null "
                           "key")
       raise KeyError(u"the label [{key}] is not in the [{axis}]"
                      .format(key=key,
>                              axis=self.obj._get_axis_name(axis)))
E       KeyError: 'the label [nan] is not in the [index]'

../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1785: KeyError
_________________________________________________________________________________ test_events _________________________________________________________________________________

   def test_events():
       acc = mm.MOTAccumulator()
   
       # All FP
       acc.update([], ['a', 'b'], [], frameid=0)
       # All miss
       acc.update([1, 2], [], [], frameid=1)
       # Match
       acc.update([1, 2], ['a', 'b'], [[1, 0.5], [0.3, 1]], frameid=2)
       # Switch
       acc.update([1, 2], ['a', 'b'], [[0.2, np.nan], [np.nan, 0.1]], frameid=3)
       # Match. Better new match is available but should prefer history
       acc.update([1, 2], ['a', 'b'], [[5, 1], [1, 5]], frameid=4)
       # No data
       acc.update([], [], [], frameid=5)
   
       expect = mm.MOTAccumulator.new_event_dataframe()
       expect.loc[(0, 0), :] = ['RAW', np.nan, 'a', np.nan]
       expect.loc[(0, 1), :] = ['RAW', np.nan, 'b', np.nan]
       expect.loc[(0, 2), :] = ['FP', np.nan, 'a', np.nan]
       expect.loc[(0, 3), :] = ['FP', np.nan, 'b', np.nan]
   
       expect.loc[(1, 0), :] = ['RAW', 1, np.nan, np.nan]
       expect.loc[(1, 1), :] = ['RAW', 2, np.nan, np.nan]
       expect.loc[(1, 2), :] = ['MISS', 1, np.nan, np.nan]
       expect.loc[(1, 3), :] = ['MISS', 2, np.nan, np.nan]
   
       expect.loc[(2, 0), :] = ['RAW', 1, 'a', 1.0]
       expect.loc[(2, 1), :] = ['RAW', 1, 'b', 0.5]
       expect.loc[(2, 2), :] = ['RAW', 2, 'a', 0.3]
       expect.loc[(2, 3), :] = ['RAW', 2, 'b', 1.0]
       expect.loc[(2, 4), :] = ['MATCH', 1, 'b', 0.5]
       expect.loc[(2, 5), :] = ['MATCH', 2, 'a', 0.3]
   
       expect.loc[(3, 0), :] = ['RAW', 1, 'a', 0.2]
       expect.loc[(3, 1), :] = ['RAW', 1, 'b', np.nan]
       expect.loc[(3, 2), :] = ['RAW', 2, 'a', np.nan]
       expect.loc[(3, 3), :] = ['RAW', 2, 'b', 0.1]
       expect.loc[(3, 4), :] = ['SWITCH', 1, 'a', 0.2]
       expect.loc[(3, 5), :] = ['SWITCH', 2, 'b', 0.1]
   
       expect.loc[(4, 0), :] = ['RAW', 1, 'a', 5.]
       expect.loc[(4, 1), :] = ['RAW', 1, 'b', 1.]
       expect.loc[(4, 2), :] = ['RAW', 2, 'a', 1.]
       expect.loc[(4, 3), :] = ['RAW', 2, 'b', 5.]
       expect.loc[(4, 4), :] = ['MATCH', 1, 'a', 5.]
       expect.loc[(4, 5), :] = ['MATCH', 2, 'b', 5.]
       # frame 5 generates no events
   
>       assert pd.DataFrame.equals(acc.events, expect)
E       AssertionError: assert False
E        +  where False = <function NDFrame.equals at 0x1067a3488>(                 Type  OId  HId    D\nFrameId Event                       \n0       0         RAW  nan    a  NaN\n       ...  a  1.0\n        3         RAW    2    b  5.0\n        4       MATCH    1    a  5.0\n        5       MATCH    2    b  5.0,                  Type  OId  HId    D\nFrameId Event                       \n0       0         RAW  NaN    a  NaN\n       ...  a  1.0\n        3         RAW    2    b  5.0\n        4       MATCH    1    a  5.0\n        5       MATCH    2    b  5.0)
E        +    where <function NDFrame.equals at 0x1067a3488> = <class 'pandas.core.frame.DataFrame'>.equals
E        +      where <class 'pandas.core.frame.DataFrame'> = pd.DataFrame
E        +    and                    Type  OId  HId    D\nFrameId Event                       \n0       0         RAW  nan    a  NaN\n       ...  a  1.0\n        3         RAW    2    b  5.0\n        4       MATCH    1    a  5.0\n        5       MATCH    2    b  5.0 = <motmetrics.mot.MOTAccumulator object at 0x10f7f2550>.events

motmetrics/tests/test_mot.py:57: AssertionError
____________________________________________________________________________ test_merge_dataframes ____________________________________________________________________________

   def test_merge_dataframes():
       acc = mm.MOTAccumulator()
   
       acc.update([], ['a', 'b'], [], frameid=0)
       acc.update([1, 2], [], [], frameid=1)
       acc.update([1, 2], ['a', 'b'], [[1, 0.5], [0.3, 1]], frameid=2)
       acc.update([1, 2], ['a', 'b'], [[0.2, np.nan], [np.nan, 0.1]], frameid=3)
   
       r, mappings = mm.MOTAccumulator.merge_event_dataframes([acc.events, acc.events], return_mappings=True)
   
       expect = mm.MOTAccumulator.new_event_dataframe()
       expect.loc[(0, 0), :] = ['RAW', np.nan, mappings[0]['hid_map']['a'], np.nan]
       expect.loc[(0, 1), :] = ['RAW', np.nan, mappings[0]['hid_map']['b'], np.nan]
       expect.loc[(0, 2), :] = ['FP', np.nan, mappings[0]['hid_map']['a'], np.nan]
       expect.loc[(0, 3), :] = ['FP', np.nan, mappings[0]['hid_map']['b'], np.nan]
   
>       expect.loc[(1, 0), :] = ['RAW', mappings[0]['oid_map'][1], np.nan, np.nan]
E       KeyError: 1

motmetrics/tests/test_mot.py:126: KeyError
===================================================================== 4 failed, 14 passed in 2.18 seconds =====================================================================
cheind (Owner) commented May 22, 2018

Note the usage in

https://github.com/cheind/py-motmetrics/blob/master/motmetrics/apps/eval_motchallenge.py

Layout for ground truth data
    <GT_ROOT>/<SEQUENCE_1>/gt/gt.txt
    <GT_ROOT>/<SEQUENCE_2>/gt/gt.txt
    ...
Layout for test data
    <TEST_ROOT>/<SEQUENCE_1>.txt
    <TEST_ROOT>/<SEQUENCE_2>.txt
    ...
Sequences of ground truth and test will be matched according to the `<SEQUENCE_X>`
string.""", formatter_class=argparse.RawTextHelpFormatter)

    parser.add_argument('groundtruths', type=str, help='Directory containing ground truth files.')   
    parser.add_argument('tests', type=str, help='Directory containing tracker result files')
    parser.add_argument('--loglevel', type=str, help='Log level', default='info')
    parser.add_argument('--fmt', type=str, help='Data format', default='mot15-2D')
    parser.add_argument('--solver', type=str, help='LAP solver to use')

So you should pass the parent directories containing the ground-truth and test files, not the files themselves.
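
For example, with a (hypothetical) layout such as

    ./gt_root/TUD-Campus/gt/gt.txt
    ./res_root/TUD-Campus.txt

the call would name the two directories rather than individual files:

    python -m motmetrics.apps.eval_motchallenge ./gt_root ./res_root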

Best,
Christoph

cheind closed this as completed May 22, 2018
Sebastian-Vogt commented Sep 2, 2022

I have the same problem, but I think I used the correct format:
Filesystem:
GT:
"..\GT\Bdd100k_MOT\gt\gt.txt"
"..\GT\Stadtmitte_MOT\gt\gt.txt"
Tracker:
"..\TracksMOTA\Bdd100k_MOT.txt"
"..\TracksMOTA\Stadtmitte_MOT.txt"

I ran python -m motmetrics.apps.eval_motchallenge "..\GT" "..\TracksMOTA" and got all 0s, NaN and -inf%.
I checked that the boxes overlap. Even when I use the tracker files both as GT and as tracker output, which should be a 100% match, it still gives only 0s.

Here are some sample lines from one file (used as gt and tracker):
1,1,399,423,327,377,-1,-1,-1,-1
1,2,92,161,91,324,-1,-1,-1,-1
1,3,179,216,95,281,-1,-1,-1,-1
1,5,177,256,92,314,-1,-1,-1,-1
1,6,356,433,75,329,-1,-1,-1,-1
1,7,439,522,91,329,-1,-1,-1,-1
1,8,577,646,78,277,-1,-1,-1,-1
1,9,526,561,111,243,-1,-1,-1,-1
2,1,398.922,422.961,326.806,376.728,-1,-1,-1,-1
2,2,86.375,156.438,91.4375,324.313,-1,-1,-1,-1
2,3,178.91,216.03,94.8955,280.552,-1,-1,-1,-1
2,5,181.357,260.179,92.0714,313.571,-1,-1,-1,-1
2,6,357,435.809,75.1618,328.441,-1,-1,-1,-1

Any ideas?

jvlmdr (Collaborator) commented Sep 5, 2022

The original issue was that no files were found. Did you also see a line like the following in the logs?

INFO - Found 0 groundtruths and 0 test files.

The issue in your case could be the backslash/forward-slash difference between Windows and Unix. Perhaps we should change this line:

gtfiles = glob.glob(os.path.join(args.groundtruths, '*/gt/gt.txt'))

to something like glob.glob(os.path.join(args.groundtruths, '*', 'gt', 'gt.txt'))
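
A minimal sketch of that variant (groundtruths below is just an illustrative stand-in for args.groundtruths in the script):

    import glob
    import os

    groundtruths = r'..\GT'  # hypothetical Windows-style root, as in the report above

    # Joining the components separately lets os.path.join insert the platform's
    # own separator instead of hard-coding '/' inside the pattern.
    gtfiles = glob.glob(os.path.join(groundtruths, '*', 'gt', 'gt.txt'))
    print(gtfiles)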

Sebastian-Vogt commented:
The output is:

08:48:34 INFO - Found 2 groundtruths and 2 test files.
08:48:34 INFO - Available LAP solvers ['scipy']
08:48:34 INFO - Default LAP solver 'scipy'
08:48:34 INFO - Loading files.
08:48:34 INFO - Comparing Bdd100k_MOT...
08:48:34 INFO - Comparing Stadtmitte_MOT...
08:48:35 INFO - Running metrics
08:48:35 INFO - partials: 0.168 seconds.
08:48:35 INFO - mergeOverall: 0.170 seconds.
IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP IDt IDa IDm
Bdd100k_MOT 0.0% 0.0% NaN NaN 0.0% 0 0 0 0 709 0 0 0 -inf% NaN 0 0 0
Stadtmitte_MOT 0.0% 0.0% NaN NaN 0.0% 0 0 0 0 1165 0 0 0 -inf% NaN 0 0 0
OVERALL 0.0% 0.0% NaN NaN 0.0% 0 0 0 0 1874 0 0 0 -inf% NaN 0 0 0
08:48:35 INFO - Completed

I also uploaded the input where I replaced the GT with the tracker output, so it should be all 100%.
input.zip

jvlmdr (Collaborator) commented Sep 5, 2022

Looking at your gt data, it seems like the confidence column is set to -1 (see https://motchallenge.net/instructions/).

The eval_motchallenge script enforces a minimum threshold of 1 for ground-truth data:

gt = OrderedDict([(Path(f).parts[-3], mm.io.loadtxt(f, fmt=args.fmt, min_confidence=1)) for f in gtfiles])

This would result in your ground truth being empty, which could explain the zeros and NaNs.
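
As a quick sanity check you could also load the files directly with the library, which, if I read mm.io.loadtxt right, keeps confidence -1 boxes unless min_confidence is raised. A rough sketch (paths are just examples matching your layout):

    import motmetrics as mm

    # Load ground truth and tracker output without the script's min_confidence=1 filter.
    gt = mm.io.loadtxt(r'..\GT\Stadtmitte_MOT\gt\gt.txt', fmt='mot15-2D')
    ts = mm.io.loadtxt(r'..\TracksMOTA\Stadtmitte_MOT.txt', fmt='mot15-2D')

    # Match by IoU and compute the usual MOT challenge metrics.
    acc = mm.utils.compare_to_groundtruth(gt, ts, 'iou', distth=0.5)
    mh = mm.metrics.create()
    summary = mh.compute(acc, metrics=mm.metrics.motchallenge_metrics, name='Stadtmitte_MOT')
    print(mm.io.render_summary(summary))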

Did you generate this ground truth yourself? If so, change the confidence from -1 to 1 for the ground-truth boxes. Otherwise, perhaps we need to make an exception that allows values of -1 in the ground truth.
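
If you go that route, a rough one-off sketch for rewriting the confidence column (assuming a plain MOT15-style CSV; the path is just an example):

    import pandas as pd

    # Columns are frame, id, x, y, w, h, conf, ... ; set conf (index 6) from -1 to 1
    # so that loadtxt(..., min_confidence=1) no longer discards the boxes.
    path = r'..\GT\Stadtmitte_MOT\gt\gt.txt'  # example path, adjust to your layout
    df = pd.read_csv(path, header=None)
    df[6] = 1
    df.to_csv(path, header=False, index=False)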

In any case, it seems unrelated to the original issue, which has already been closed. Can you please open a new issue if you need to respond?

Thanks!
