
C/C++: Read/Write to temporary file is adding Overhead to Duration in Compact format #159

Open
alizalisan opened this issue Jul 3, 2024 · 3 comments



No description provided.

tpatki (Member) commented Jul 3, 2024

@alizalisan @loudsun1997

On the Python side, we don't have this problem: the "around" decorator wraps the call directly, so we don't need separate before/after functions to mimic "around", and as a result we don't have to keep track of values separately in a file (or in memory).

As a first cut, we're going to make sure that the start timestamp is updated here:

`myfile << std::to_string(my_ts) << "\n";`

Essentially, either we create the event here or we update `my_ts` here, so that we don't include the file creation time (PFA overhead) in our measurement of the user/application function.
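A minimal sketch of that first cut, assuming hypothetical names (`before_function`, `tmpfile_path`) and a chrono-based clock; the point is only that the timestamp is captured after the file is already open, immediately before the write:

```cpp
#include <chrono>
#include <fstream>
#include <string>

// Hypothetical "before" hook: open the temp file first, then take the
// timestamp, so file-creation (PFA) overhead is excluded from the duration.
void before_function(const std::string& tmpfile_path) {
    std::ofstream myfile(tmpfile_path);           // file-open overhead here
    auto now = std::chrono::steady_clock::now();  // timestamp taken afterwards
    auto my_ts = std::chrono::duration_cast<std::chrono::microseconds>(
                     now.time_since_epoch())
                     .count();
    myfile << std::to_string(my_ts) << "\n";      // write right after capture
}
```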

In the future, we need to think about how to handle this differently: is there a convenient way to avoid using files altogether? The file I/O here can add to the overheads.

alizalisan (Collaborator, Author) commented Jul 14, 2024

@tpatki @loudsun1997

  1. The ending timestamp is recorded at the correct and same point for both the compact and verbose formats: at the very beginning of the after function. It does not need to be moved, and it does not include the time spent opening and reading the file.

  2. The beginning timestamp can be updated in the before function up to the point where it is written to the file, which makes only a small difference. The real issue is that the single file write operation takes around 250-430 microseconds.

  3. The difference is more significant for tiny functions like bas or bar that are not compute intensive; for compute-intensive (real) functions the overhead becomes insignificant.

  4. Below are the durations for the compact and verbose formats for the three functions from smoketest3.py. This data includes the change where the beginning timestamp is recorded just before it is written to the file.

| Function | Verbose    | Compact     |
| -------- | ---------- | ----------- |
| foo      | 309ms 87us | 309ms 651us |
| bar      | 108us      | 2ms 209us   |
| bas      | 55us       | 439us       |
  • Handling of the multiple threads and processes use case: the file name consists of the function name plus its process and thread IDs (see the naming sketch after this list). This ensures that the beginning timestamp for a given function is read from the correct file in the after function.

  • Do we delete temp files? Yes, we do, in the after function.

  • For now, we have decided to leave the functionality as it is.
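A hypothetical sketch of the naming scheme described above (the exact format in the code may differ): each (function, process, thread) combination maps to its own temp file, so concurrent writers never collide.

```cpp
#include <sstream>
#include <string>
#include <thread>
#include <unistd.h>  // getpid()

// Hypothetical helper: build a temp-file name from the function name,
// the process ID, and the thread ID.
std::string temp_file_name(const std::string& func_name) {
    std::ostringstream name;
    name << func_name << "_" << getpid() << "_"
         << std::this_thread::get_id() << ".tmp";
    return name.str();
}
```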


tpatki commented Oct 8, 2024

@slabasan Tagging you on this issue.

One efficient solution is probably a per-thread "stack": function timestamps are pushed in order of arrival and popped as the functions exit. We push the timestamp in "before", so we capture calls in arrival order, and we pop from the stack as we enter "after". The ideal way to implement this is a dynamically allocated per-thread stack (it can be built on std::deque). This can be complex to track, as PFA would need to be aware of the number of threads (and the associated process IDs) to ensure we create enough of these stacks. But such an approach should reduce our overhead drastically compared to the potentially hundreds of files we are creating right now; a rough sketch follows.
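A minimal sketch of the per-thread stack idea, with hypothetical names. Using thread_local storage is one way to get a stack per thread without PFA having to track thread counts itself, which is a variation on the bookkeeping described above:

```cpp
#include <chrono>
#include <cstdint>
#include <deque>

using clock_type = std::chrono::steady_clock;

// One timestamp stack per thread; nested annotated calls push and pop
// in LIFO order, so no temp files (and no file I/O) are needed.
thread_local std::deque<clock_type::time_point> ts_stack;

// Hypothetical "before" hook: record arrival order with no file I/O.
void before_function() {
    ts_stack.push_back(clock_type::now());
}

// Hypothetical "after" hook: pop the matching start time and return the
// duration of the annotated function in microseconds.
std::int64_t after_function() {
    const auto start = ts_stack.back();
    ts_stack.pop_back();
    return std::chrono::duration_cast<std::chrono::microseconds>(
               clock_type::now() - start)
        .count();
}
```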

As of right now, we're creating num_processes * num_threads * all_annotated_functions temp files (!!), which was our first cut at supporting this two years ago. This overwhelms the file system and causes massive overhead when computation time is small. If you have a lot of function calls, it can also trigger ulimit problems with open file descriptors and file sizes. Note that in a very compute-intensive application this overhead is small, but in an app with less compute, the I/O time from all the files we create dominates.
