Hello everyone,

I hope you're having a great day. I am currently working on a personal learning project involving financial time-series prediction using LTC models. Specifically, I aim to predict the price of natural gas as a case study.
I am trying to understand the "ltc_example_sinusoidal.ipynb" Google Colab notebook (https://colab.research.google.com/drive/1IvVXVSC7zZPo5w-PfL3mk1MC3PIPw7Vs?usp=sharing). There are a few aspects that I'm struggling to grasp, and I would greatly appreciate any insights or clarifications.
In the section "Plotting the prediction of the trained model," I encountered the line "prediction = model(data_x).numpy()". It seems that the prediction is made on the same initial data that was used for training. Shouldn't predictions be made on a new set of data to properly test the model's performance? Am I misunderstanding something here?
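For concreteness, this is roughly the kind of evaluation I expected to see (my own sketch, not code from the notebook: I reuse the notebook's sinusoidal input, assume its double-frequency sine target, and use a plain LSTM only to keep the example short):

```python
import numpy as np
import tensorflow as tf

# Sinusoidal data as in the notebook; the tail of the sequence is held out for testing.
N = 48
data_x = np.stack(
    [np.sin(np.linspace(0, 3 * np.pi, N)), np.cos(np.linspace(0, 3 * np.pi, N))], axis=1
)
data_x = np.expand_dims(data_x, axis=0).astype(np.float32)           # shape (1, N, 2)
data_y = np.sin(np.linspace(0, 6 * np.pi, N)).reshape([1, N, 1]).astype(np.float32)

split = int(0.75 * N)                      # first 75% of timesteps for training
train_x, test_x = data_x[:, :split], data_x[:, split:]
train_y, test_y = data_y[:, :split], data_y[:, split:]

# Any sequence-to-sequence model works here; a plain LSTM keeps the sketch short.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(None, 2)),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss="mean_squared_error")
model.fit(train_x, train_y, batch_size=1, epochs=200, verbose=0)

# Predict on timesteps the model never saw during training, instead of model(data_x).
prediction = model(test_x).numpy()
print("held-out MSE:", float(np.mean((prediction - test_y) ** 2)))
```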
For my specific use case, I am preparing a dataset containing daily natural gas prices in the US for the last 20 years, along with related variables such as daily average temperature, production levels, storage data, etc. I plan to use these vectors as input features (data_x), similar to the example in the notebook: "data_x = np.stack([np.sin(np.linspace(0, 3 * np.pi, N)), np.cos(np.linspace(0, 3 * np.pi, N))], axis=1)". The target variable (data_y) will be the price of the next day or week (a rough sketch of how I plan to arrange this data follows after the two questions below).
a. Should I provide the entire historical dataset for 20 years and let the model run on it, or would it be more appropriate to use year-long batches for training?
b. Is there any guideline or indication of how many neurons of each type I should use in the model relative to the dataset size or the number of expected cause-consequence patterns?
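To make these questions concrete, here is roughly how I am planning to arrange the data (entirely my own sketch: the file name, column names, and one-year window length are hypothetical placeholders):

```python
import numpy as np
import pandas as pd

# Hypothetical daily dataset; the file and column names are placeholders for my real data.
df = pd.read_csv("natural_gas_daily.csv", parse_dates=["date"]).sort_values("date")
feature_cols = ["price", "avg_temperature", "production", "storage"]

def make_windows(frame, feature_cols, target_col="price", window=365, horizon=1):
    """Slice the series into (window, features) inputs and the price `horizon` days ahead."""
    values = frame[feature_cols].to_numpy(dtype=np.float32)
    target = frame[target_col].to_numpy(dtype=np.float32)
    xs, ys = [], []
    for start in range(len(frame) - window - horizon + 1):
        xs.append(values[start : start + window])
        ys.append(target[start + window + horizon - 1])
    return np.stack(xs), np.array(ys, dtype=np.float32)

data_x, data_y = make_windows(df, feature_cols)      # data_x: (samples, 365, 4)
# Keep the most recent year aside as a test set instead of training on everything.
train_x, test_x = data_x[:-365], data_x[-365:]
train_y, test_y = data_y[:-365], data_y[-365:]
```

With the data arranged this way, question (a) essentially becomes whether a fixed one-year window like this is sensible, or whether the model should instead see the full 20-year history as one long sequence.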
In theory, the trained LTC model should have learned the causal relations within the dataset. My intention is then to predict the gas price for the next day or week by calling "prediction = model(NEWdata_x).numpy()", where NEWdata_x would be the set of feature vectors from the last year.
Based on your experience with LTC models, does this approach make sense for financial time-series prediction?
I apologize if some of these questions sound basic; I'm relatively new to this area of study. Any insights would be greatly appreciated. Thank you for your time and support!
Hello @vladyskai,
You can use the LTC or CfC layer like any other tf.keras layer when building your model.
For a better understanding of time-series forecasting with neural networks, please have a look at the articles available online; I followed one that uses an LSTM for forecasting.
Replace the LSTM layer with an LTC or CfC layer (or a combination of both) and it will run just fine.
How to add LTC/CfC layers to a keras model?
Follow the code mentioned in the following link.
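As a rough illustration of what I mean by a drop-in replacement (a minimal sketch assuming the ncps package, which provides LTC and CfC layers for tf.keras; the unit counts and feature dimension are placeholders, not recommendations):

```python
import tensorflow as tf
from ncps.tf import LTC, CfC        # pip install ncps

n_features = 4                      # placeholder: input features per timestep

# A typical LSTM forecasting model ...
lstm_model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(None, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),       # e.g. next-day price
])

# ... the same model with the recurrent layer swapped for CfC (LTC works the same way) ...
cfc_model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(None, n_features)),
    CfC(32),
    tf.keras.layers.Dense(1),
])

# ... or a combination of both layers, stacked.
mixed_model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(None, n_features)),
    LTC(32, return_sequences=True),
    CfC(32),
    tf.keras.layers.Dense(1),
])

for m in (lstm_model, cfc_model, mixed_model):
    m.compile(optimizer=tf.keras.optimizers.Adam(0.001), loss="mean_squared_error")
```

The rest of the LSTM tutorial (windowing, scaling, train/validation split) should carry over unchanged.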