
Scalar writing frequency too high will lead to data loss? #201

Closed
chivee opened this issue Jul 30, 2018 · 5 comments


chivee commented Jul 30, 2018

Hi @lanpa ,

Thanks for your great work. I really enjoy the tensorboardx journey.
However, a data-loss issue has been haunting me: if the SummaryWriter writes too many scalars, some of the data seems to be lost.

Do you have any comment on this?


lanpa commented Jul 30, 2018

I just saw a bug report about a missing text event. What did you mean by "too many", and how did you spot the missing values? It would be good if you could provide some code for me to debug.


chivee commented Jul 30, 2018

[image attachment]

import random
from tensorboardX import SummaryWriter

# One writer per run directory
writer_list = []
for i in range(10):
    writer_list.append(SummaryWriter('test/{0}'.format(i)))

# Ten scalar tags, shared across all writers
writer_item = []
for i in range(10):
    writer_item.append("data/test_name_{0}".format(i))

for n_iter in range(100000):
    for i in range(10):
        writer = writer_list[i]
        for j in range(10):
            rnd_value = random.uniform(0.9, 1.1)
            writer.add_scalar(writer_item[j], rnd_value, n_iter)

# export scalar data to JSON for external processing (last writer only)
writer.export_scalars_to_json("./all_scalars.json")
for w in writer_list:
    w.close()


chivee commented Jul 30, 2018

@lanpa this code reproduces the issue


lanpa commented Aug 4, 2018

I didn't run the code, but I suspect the data points are all there. See #44
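For context on #44: TensorBoard's UI downsamples long scalar series before plotting, keeping only a bounded random subsample per tag, so points can look "lost" in the chart while still being present on disk in the event file. A minimal sketch of that kind of reservoir downsampling in plain Python (the function name and sizes here are illustrative, not TensorBoard's actual implementation):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of at most k items from a stream.

    TensorBoard's scalar dashboard does something similar in spirit:
    the full series stays in the event file, but only a bounded
    subsample is kept in memory and drawn in the chart.
    """
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # Replace an existing element with decreasing probability k/(i+1)
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# 100000 logged steps, but only 1000 end up in the plotted subsample
shown = reservoir_sample(range(100000), 1000)
print(len(shown))  # 1000
```

So "missing" points in the web UI do not by themselves indicate data loss; reading the event file directly (or raising the sample count) shows the full series.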


lanpa commented Jul 17, 2019

[screenshot attachment: Screen Shot 2019-07-17 at 10.54.34 PM]

Confirmed that it works as expected: the event file is about 53 MB per writer, which implies roughly 53 bytes per record. That is reasonable.
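A quick back-of-the-envelope check of those numbers, using the counts from the repro script above (the file size is taken from the comment, not re-measured):

```python
# Sanity check: records per writer and bytes per record.
iterations = 100_000          # outer n_iter loop in the repro script
tags_per_writer = 10          # scalar tags written each iteration
records = iterations * tags_per_writer

event_file_size = 53_000_000  # ~53 MB per writer, as reported above

print(records)                     # 1000000 scalar records per writer
print(event_file_size // records)  # 53 bytes per record
```

One million records per writer at a few dozen bytes each is consistent with every logged point being present in the event file.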

@lanpa lanpa closed this as completed Jul 17, 2019