<gluonts.transform.split.InstanceSplitter object at 0x0000016230000E50> #180

Open
youandyourself opened this issue Jan 19, 2025 · 4 comments

@youandyourself

When I run predictor = estimator.train(dataset_train, num_workers=8) in Time-Grad-Electricity.ipynb, this error occurs:

Exception Traceback (most recent call last)
Cell In[13], line 1
----> 1 predictor = estimator.train(dataset_train, num_workers=8)

File c:\Users\youandyourself\Desktop\zcy\AA_test\model\pytorch-ts-master\pts\model\estimator.py:179, in PyTorchEstimator.train(self, training_data, validation_data, num_workers, prefetch_factor, shuffle_buffer_length, cache_data, **kwargs)
169 def train(
170 self,
171 training_data: Dataset,
(...)
177 **kwargs,
178 ) -> PyTorchPredictor:
--> 179 return self.train_model(
180 training_data,
181 validation_data,
182 num_workers=num_workers,
183 prefetch_factor=prefetch_factor,
184 shuffle_buffer_length=shuffle_buffer_length,
185 cache_data=cache_data,
186 **kwargs,
187 ).predictor

File c:\Users\youandyourself\Desktop\zcy\AA_test\model\pytorch-ts-master\pts\model\estimator.py:151, in PyTorchEstimator.train_model(self, training_data, validation_data, num_workers, prefetch_factor, shuffle_buffer_length, cache_data, **kwargs)
133 validation_iter_dataset = TransformedIterableDataset(
134 dataset=validation_data,
135 transform=transformation
(...)
139 cache_data=cache_data,
140 )
141 validation_data_loader = DataLoader(
142 validation_iter_dataset,
143 batch_size=self.trainer.batch_size,
(...)
148 **kwargs,
149 )
--> 151 self.trainer(
152 net=trained_net,
153 train_iter=training_data_loader,
154 validation_iter=validation_data_loader,
155 )
157 return TrainOutput(
158 transformation=transformation,
159 trained_net=trained_net,
(...)
162 ),
163 )

File c:\Users\youandyourself\Desktop\zcy\AA_test\model\pytorch-ts-master\pts\trainer.py:63, in Trainer.__call__(self, net, train_iter, validation_iter)
61 # training loop
62 with tqdm(train_iter, total=total) as it:
---> 63 for batch_no, data_entry in enumerate(it, start=1):
64 optimizer.zero_grad()
66 inputs = [v.to(self.device) for v in data_entry.values()]

File d:\Anaconda\Lib\site-packages\tqdm\notebook.py:259, in tqdm_notebook.__iter__(self)
257 try:
258 it = super(tqdm_notebook, self).__iter__()
--> 259 for obj in it:
260 # return super(tqdm...) will not catch exception
261 yield obj
262 # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt

File d:\Anaconda\Lib\site-packages\tqdm\std.py:1195, in tqdm.__iter__(self)
1192 time = self._time
1194 try:
-> 1195 for obj in iterable:
1196 yield obj
1197 # Update and possibly print the progressbar.
1198 # Note: does not call self.update(1) for speed optimisation.

File d:\Anaconda\Lib\site-packages\torch\utils\data\dataloader.py:631, in _BaseDataLoaderIter.__next__(self)
628 if self._sampler_iter is None:
629 # TODO(pytorch/pytorch#76750)
630 self._reset() # type: ignore[call-arg]
--> 631 data = self._next_data()
632 self._num_yielded += 1
633 if self._dataset_kind == _DatasetKind.Iterable and
634 self._IterableDataset_len_called is not None and
635 self._num_yielded > self._IterableDataset_len_called:

File d:\Anaconda\Lib\site-packages\torch\utils\data\dataloader.py:1346, in _MultiProcessingDataLoaderIter._next_data(self)
1344 else:
1345 del self._task_info[idx]
-> 1346 return self._process_data(data)

File d:\Anaconda\Lib\site-packages\torch\utils\data\dataloader.py:1372, in _MultiProcessingDataLoaderIter._process_data(self, data)
1370 self._try_put_index()
1371 if isinstance(data, ExceptionWrapper):
-> 1372 data.reraise()
1373 return data

File d:\Anaconda\Lib\site-packages\torch\_utils.py:722, in ExceptionWrapper.reraise(self)
718 except TypeError:
719 # If the exception takes multiple arguments, don't try to
720 # instantiate since we don't know how to
721 raise RuntimeError(msg) from None
--> 722 raise exception

Exception: Caught Exception in DataLoader worker process 0.
Original Traceback (most recent call last):
File "d:\Anaconda\Lib\site-packages\torch\utils\data_utils\worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
^^^^^^^^^^^^^^^^^^^^
File "d:\Anaconda\Lib\site-packages\torch\utils\data_utils\fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
^^^^^^^^^^^^^^^^^^^^^^^
File "d:\Anaconda\Lib\site-packages\gluonts\transform_base.py", line 111, in iter
yield from self.transformation(
File "d:\Anaconda\Lib\site-packages\gluonts\transform_base.py", line 132, in call
for data_entry in data_it:
File "d:\Anaconda\Lib\site-packages\gluonts\transform_base.py", line 132, in call
for data_entry in data_it:
File "d:\Anaconda\Lib\site-packages\gluonts\transform_base.py", line 197, in call
raise Exception(
Exception: Reached maximum number of idle transformation calls.
This means the transformation looped over 1 inputs without returning any output.
This occurred in the following transformation:
<gluonts.transform.split.InstanceSplitter object at 0x0000016230000E50>
When I write the code exactly as the README.md describes, the same error occurs. How can I solve it?

@youandyourself
Author

I know how to deal with it:
pip install gluonts==0.10.0
Then download the gluonts 0.10.x source from https://github.com/awslabs/gluonts,
find src/gluonts/torch/distributions in the 0.10.x source and copy it to your Lib/site-packages/gluonts/torch/distributions,
then rename distribution_output.py -> output.py
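
For anyone following the same workaround, those steps correspond roughly to the PowerShell commands below. This is only a sketch: the v0.10.0 tag name, the gluonts-src clone directory, and the d:\Anaconda site-packages path are assumptions based on this thread, so adjust them to your own setup.

pip install gluonts==0.10.0
# fetch the matching 0.10.x sources (assuming the release is tagged v0.10.0)
git clone --depth 1 --branch v0.10.0 https://github.com/awslabs/gluonts.git gluonts-src
# copy the torch distributions module over the installed package (site-packages path is an example)
Copy-Item -Recurse -Force gluonts-src\src\gluonts\torch\distributions\* d:\Anaconda\Lib\site-packages\gluonts\torch\distributions\
# rename as described above so the expected module name exists
Rename-Item d:\Anaconda\Lib\site-packages\gluonts\torch\distributions\distribution_output.py output.py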

@kashif
Collaborator

kashif commented Jan 20, 2025

@youandyourself can you kindly try to use the 0.7.0 branch?
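
For reference, installing directly from that branch should look roughly like the command below. This is a sketch only: the branch name version-0.7.0 is an assumption (based on the pytorch-ts-version-0.7.0 folder mentioned in the next comment), so check the repository for the exact name.

pip install git+https://github.com/zalandoresearch/pytorch-ts.git@version-0.7.0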

@youandyourself
Author

@youandyourself can you kindly try to use the 0.7.0 branch?
My issue has been resolved using the methods mentioned above.

I also tried pytorch-ts-version-0.7.0, but gave up because of the following error:
ImportError Traceback (most recent call last)
Cell In[3], line 1
----> 1 from pts.model.tempflow import TempFlowEstimator
2 from pts.model.time_grad import TimeGradEstimator
3 from pts.model.transformer_tempflow import TransformerTempFlowEstimator

ImportError: cannot import name 'TempFlowEstimator' from 'pts.model.tempflow' (c:\Users\youandyourself\Desktop\zcy\AA_test\model\pytorch-ts-version-0.7.0\pts\model\tempflow\__init__.py)

@lhyuehh

lhyuehh commented Feb 27, 2025

When I ran Time-Grad-Electricity.ipynb with predictor = estimator.train(dataset_train, num_workers=8), I hit the same InstanceSplitter "Reached maximum number of idle transformation calls" exception as in the original issue above.

I solved this problem by downgrading pip (python -m pip install pip==24.0) and then running pip install gluonts==0.10.0.
