pull: PR on stable-23 and hammersim23 rebase #4
Closed
Conversation
Change-Id: Ia68f1bef2bb9e4e5e18476b6100be80f8cf1c799
Change-Id: I58215dddc34476695c7aedc77b55d338e0304198
Change-Id: I243046c17264eb5c522285096ecf9c7e5e968322
Prior to this change we were limited to a root partition with only 60GB of space, which caused issues when running larger simulations (see: gem5/gem5#165). Two factors contributed to this issue, both resolved by this patch: 1. The root partition in the VM was capped at 60GB despite the virtual machine's size being capped at 128GB. This resulted in libvirt giving the VM free space it couldn't use. To fix this, `lvextend` was added to the "provision_root.sh" script to resize the root partition to fill the available space. 2. The virtual machine size can be set via the `machine_virtual_size` parameter. The minimum and default value is 128GB. This wasn't exposed previously. Now, if required, we can increase the size of the VM/root partition (though I believe 128GB is more than sufficient for now). Fixes: gem5/gem5#165 Change-Id: I82dd500d8807ee0164f92d91515729d5fbd598e3
This patch removed the bespoke "vm_manager.sh" script in favor of a Multi-Machine Vagrantfile. With this, users need only change the variables in the Vagrantfile, then use the standard `vagrant` commands to launch the VMs/Runners. Change-Id: Ida5d2701319fd844c6a5b6fa7baf0c48b67db975
Change-Id: I01e637f09084acb6c5fbd7800b3e578a43487849
Change-Id: If9ecf467efa5c7118d34166953630e6c436c55a4
1. All VMs are deployable from a single Vagrantfile (per host machine). 2. Runners within VMs are now ephemeral. They cease to exist after a job is complete, after which the VM cleans the workspace and creates a new runner. This will reduce old data, scripts, and images causing space issues on our VMs. 3. No more "vm_manager.sh" script. The standard `vagrant` commands to manage the VMs will work. 4. Adds Copyright notices where missing.
This PR fixes the CHI fromSequencer helper function, which was making use of an undefined TBE entry. This was broken by #177. Change-Id: I52feff4b5ab2faf0aa91edd6572e3e767c88e257
The syscall emulation of brk() incorrectly did not ensure that newly allocated memory was zero-initialized, which Linux guarantees and which seems to be the expectation of glibc's malloc() and free() implementation. This patch fixes the incorrect behavior by zero-initializing all memory allocations via brk(). GitHub issue: gem5/gem5#342 Change-Id: I53cf29d6f3f83285c8e813e18c06c2e9a69d7cc2
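The guaranteed semantics can be illustrated with a minimal Python sketch (the class and method names are hypothetical, not gem5's actual implementation): memory exposed by growing the program break must read as zeros, even if stale data was left there by an earlier shrink.

```python
class BrkEmulator:
    """Toy model of an SE-mode brk(): grown memory must read as zeros."""

    def __init__(self, initial_brk=0x1000):
        self.brk = initial_brk
        self.memory = {}  # addr -> byte value; unset addresses read as 0

    def sys_brk(self, new_brk):
        if new_brk > self.brk:
            # The fix: explicitly zero the newly allocated range, matching
            # the Linux guarantee that glibc's malloc/free rely on.
            for addr in range(self.brk, new_brk):
                self.memory[addr] = 0
            self.brk = new_brk
        return self.brk

    def read(self, addr):
        return self.memory.get(addr, 0)


emu = BrkEmulator()
emu.memory[0x1005] = 0xAB     # stale data lingering past the old break
emu.sys_brk(0x2000)           # grow the heap over it
assert emu.read(0x1005) == 0  # newly allocated memory reads as zero
```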
This info can be used during TLB invalidation Change-Id: I81247e40b11745f0207178b52c47845ca1b92870 Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
This is still trying to completely remove any artifact which implies virtualization is only supported in non-secure mode (NS=1) Change-Id: I83fed1c33cc745ecdf3c5ad60f4f356f3c58aad5 Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
Change-Id: I7eb020573420e49a8a54e1fc7a89eb6e2236dacb Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
- A new abstract BTB class is created to enable different BTB implementations. The new BTB class gets its own parameters and stats.
- An enum is added to differentiate branch instruction types. This enum is used to enhance statistics and BPU management.
- The existing BTB is moved into `simple_btb` as the default.
- An additional function is added to store the static instruction in the BTB. This function is used for the decoupled front-end.
- Update configs to match the new BTB parameters.

Change-Id: I99b29a19a1b57e59ea2b188ed7d62a8b79426529 Signed-off-by: David Schall <david.schall@ed.ac.uk>
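The shape of the restructuring can be sketched in plain Python (class, method, and enum names here are illustrative, not the patch's actual identifiers): an abstract interface, a branch-type enum, and the old BTB moved behind it as the default implementation.

```python
from abc import ABC, abstractmethod
from enum import Enum, auto


class BranchType(Enum):
    """Illustrative enum differentiating branch instruction types."""
    DirectCond = auto()
    DirectUncond = auto()
    Indirect = auto()
    Return = auto()


class BranchTargetBuffer(ABC):
    """Abstract BTB interface; concrete implementations plug in behind it."""

    @abstractmethod
    def lookup(self, pc): ...

    @abstractmethod
    def update(self, pc, target, static_inst=None, branch_type=None): ...


class SimpleBTB(BranchTargetBuffer):
    """The pre-existing BTB, now the default behind the interface."""

    def __init__(self, num_entries=4096):
        self.num_entries = num_entries
        self.entries = {}  # pc -> (target, static_inst)

    def lookup(self, pc):
        entry = self.entries.get(pc)
        return entry[0] if entry else None

    def update(self, pc, target, static_inst=None, branch_type=None):
        # Storing the static instruction supports a decoupled front-end,
        # which needs to re-derive instruction info from BTB hits.
        self.entries[pc] = (target, static_inst)


btb = SimpleBTB()
btb.update(0x400, 0x800, branch_type=BranchType.DirectUncond)
assert btb.lookup(0x400) == 0x800
assert btb.lookup(0x404) is None
```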
Modified the x86 KVM-in-SE syscall handler to flush the TLB following each syscall, in case the page table has been modified. This is done by reloading the value in %cr3. Doing this requires an intermediate GPR, which we store in a new scratch buffer following the syscall code at address `syscallDataBuf`. GitHub issue: gem5/gem5#409
We define a new parent (ClusterSystem) to model a system with one or more cpu clusters within it. The idea is to make this new base class reusable by SE systems/scripts as well (like starter_se.py) Change-Id: I1398d773813db565f6ad5ce62cb4c022cb12a55a Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com> Reviewed-by: Richard Cooper <richard.cooper@arm.com>
Change-Id: I9d120fbaf0c61c5a053163ec1e5f4f93c583df52 Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com> Reviewed-by: Richard Cooper <richard.cooper@arm.com>
Change-Id: I742e280e7a2a4047ac4bb3d783a28ee97f461480 Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com> Reviewed-by: Richard Cooper <richard.cooper@arm.com>
According to the original paper [1], the elastic trace generation process requires a CPU with a large number of entries in the ROB, LQ and SQ, so that there are no stalls due to resource limitations. At the moment these numbers are copy-pasted from the CpuConfig.config_etrace method [2].

[1]: https://ieeexplore.ieee.org/document/7818336
[2]: https://github.com/gem5/gem5/blob/stable/configs/common/CpuConfig.py#L40

Change-Id: I00fde49e5420e420a4eddb7b49de4b74360348c9 Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com> Reviewed-by: Richard Cooper <richard.cooper@arm.com>
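The idea can be sketched as a gem5 config fragment (not runnable standalone; the import path and parameter values are illustrative assumptions, not the patch's actual numbers): derive the etrace CPU from the existing O3 model and oversize its buffers so resource stalls cannot occur.

```python
# Hypothetical sketch: oversize ROB/LQ/SQ so that recorded timing reflects
# trace dependencies rather than structural hazards.
from devices import O3_ARM_v7a_3  # assumed import path for the base model


class O3_ARM_v7a_3_Etrace(O3_ARM_v7a_3):
    numROBEntries = 512  # oversized so the ROB never fills
    LQEntries = 128      # oversized load queue
    SQEntries = 128      # oversized store queue
```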
The new script will automatically use the newly defined O3_ARM_v7a_3_Etrace CPU to run a simple SE simulation while generating elastic trace files. The script is based on starter_se.py, but has the following limitations: 1) No L2 cache, as it might affect computational delay calculations. 2) Supports SimpleMemory only, with minimal memory latency. These restrictions were inherited from the existing elastic trace generation logic in the common library (found by grepping for elastic_trace_en) [1][2][3].

Example usage:
build/ARM/gem5.opt configs/example/arm/etrace_se.py \
    --inst-trace-file [INSTRUCTION TRACE] \
    --data-trace-file [DATA TRACE] \
    [WORKLOAD]

[1]: https://github.com/gem5/gem5/blob/stable/configs/common/MemConfig.py#L191
[2]: https://github.com/gem5/gem5/blob/stable/configs/common/MemConfig.py#L232
[3]: https://github.com/gem5/gem5/blob/stable/configs/common/CacheConfig.py#L130

Change-Id: I021fc84fa101113c5c2f0737d50a930bb4750f76 Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com> Reviewed-by: Richard Cooper <richard.cooper@arm.com>
Change-Id: If8c37bdccf35a070870900c06dc4640348f0f063 Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
Change-Id: Ifb8c8dc1729cc21007842b950273fe38129d9539 Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
Change-Id: I0396f5938c09b68fcc3303a6fdda1e4dde290869 Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
All runners are now equal, so these labels are pointless. Change-Id: I9d5fb31e20e95d30e9726d4bf0353dc87af614d7
These were previously only running on single-threaded machines. Now they'll be running on 4-core VMs, so we may as well run tests in parallel. Change-Id: I7ee86512dc72851cea307dfd800dcf9a02f2f738
Change-Id: Ib7b2eba5f08a1d8a311dc20cb55f540a5cd7dc7b
This is the first PR in a series of enhancements to the BPU proposed in #358. However, putting everything into one PR would be hard to review and prone to oversights on my part. This PR restructures the BTB:
- A new abstract BTB class is created to enable different BTB implementations. The new BTB class gets its own parameters and stats.
- An enum is added to differentiate branch instruction types. This enum is used to enhance statistics and BPU management.
- The existing BTB is moved into `simple_btb` as the default.
- An additional function is added to store the static instruction in the BTB. This function is used for the decoupled front-end.
- Update configs to match the new BTB parameters.
This comment was left in the codebase in error. The `set_se_binary_workload` function works fine with multi-threaded applications. This hasn't been a restriction for some time.
…n (#151) Added a parameter (_disk_device) to kernel_disk_workload which allows users to change the disk device location. get_disk_device() now chooses between the parameter and, if no parameter was passed, calls a new function _get_default_disk_device(), which is implemented by each board and returns a default disk device appropriate to that board, e.g. /dev/hda on the x86_board. The previous way of setting a disk device still exists as a default; however, with the new function users can now override this default.
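The selection logic can be sketched with a toy Python model (the class and method names mirror the commit text but are simplified, not gem5's actual stdlib signatures): an explicit parameter wins, otherwise the board-specific default applies.

```python
class Board:
    """Toy model of kernel_disk_workload's disk-device selection."""

    def _get_default_disk_device(self):
        # Each board supplies its own default (e.g. /dev/hda on the
        # x86 board, per the commit message).
        return "/dev/hda"

    def set_kernel_disk_workload(self, disk_device=None):
        # New behavior: an explicit parameter overrides the default;
        # otherwise fall back to the board-specific device.
        self._disk_device = disk_device or self._get_default_disk_device()
        return self._disk_device


board = Board()
assert board.set_kernel_disk_workload() == "/dev/hda"  # board default
assert board.set_kernel_disk_workload(disk_device="/dev/vda") == "/dev/vda"
```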
The Stride Prefetcher will skip this number of strides ahead of the first identified prefetch, then generate `degree` prefetches at `stride` intervals. A value of zero indicates no skip (i.e. start prefetching from the next identified prefetch address). This parameter can be used to increase the timeliness of prefetches by starting to prefetch far enough ahead of the demand stream to cover the memory system latency.

[Richard Cooper <richard.cooper@arm.com>:
- Added detail to commit comment and `distance` Param documentation.
- Changed `distance` Param from `Param.Int` to `Param.Unsigned`.]

Change-Id: I6c4e744079b53a7b804d8eab93b0f07b566f0c08 Reviewed-by: Giacomo Travaglini <giacomo.travaglini@arm.com> Signed-off-by: Richard Cooper <richard.cooper@arm.com>
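In a gem5 Python config, the new parameter might be used like this (a fragment, not runnable standalone; the values chosen are illustrative):

```python
# Hypothetical config fragment: a stride prefetcher that begins `distance`
# strides ahead of the demand stream to help hide memory latency.
from m5.objects import StridePrefetcher

prefetcher = StridePrefetcher(
    degree=4,    # prefetches generated per trigger
    distance=8,  # skip 8 strides ahead before the first prefetch
)
```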
This commit optimizes the address generation logic in the strided prefetcher by introducing the following changes (d is the degree of the prefetcher):
* Evaluate the fixed prefetch_stride only once (and not d times)
* Replace 2d multiplications (d * prefetch_stride and distance * prefetch_stride) with additions by updating the new base prefetch address while looping

Change-Id: I49c52333fc4c7071ac3d73443f2ae07bfcd5b8e4 Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com> Reviewed-by: Richard Cooper <richard.cooper@arm.com> Reviewed-by: Tiberiu Bucur <tiberiu.bucur@arm.com>
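The strength-reduction can be demonstrated with a small Python sketch (the function names and indexing convention are illustrative, not the actual gem5 code): both versions produce the same addresses, but the optimized one replaces the per-iteration multiplications with a running addition.

```python
def gen_prefetch_addrs_naive(base, stride, distance, degree):
    # Original shape: multiplications inside the loop for every address.
    return [base + (distance + d) * stride for d in range(1, degree + 1)]


def gen_prefetch_addrs_optimized(base, stride, distance, degree):
    # Optimized shape: compute the distance offset once, then advance the
    # running prefetch address by a single addition per iteration.
    addr = base + distance * stride
    addrs = []
    for _ in range(degree):
        addr += stride
        addrs.append(addr)
    return addrs


naive = gen_prefetch_addrs_naive(0x1000, 64, 8, 4)
fast = gen_prefetch_addrs_optimized(0x1000, 64, 8, 4)
assert naive == fast  # same addresses, fewer multiplications
```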
- resources field in workload now supports a dict with resource ids and versions.
- Older workload JSONs are still supported, but a deprecation warning has been added.
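A minimal sketch of how both shapes might be normalized (the function, field names, and resource id are hypothetical, not gem5's actual implementation): the new dict form pins an id to a version, while the old list form is accepted with a deprecation warning.

```python
import warnings


def parse_resources(resources):
    """Normalize old (list) and new (dict) resource specs to (id, version)."""
    if isinstance(resources, dict):
        # New style: resource id -> pinned version.
        return [(rid, version) for rid, version in resources.items()]
    # Old style: a bare list of resource ids, with no version pinning.
    warnings.warn("list-style resources are deprecated", DeprecationWarning)
    return [(rid, None) for rid in resources]


new_style = parse_resources({"riscv-disk-img": "1.0.0"})
old_style = parse_resources(["riscv-disk-img"])
assert new_style == [("riscv-disk-img", "1.0.0")]
assert old_style == [("riscv-disk-img", None)]
```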
Currently, the GPU SQC (L1I$) and TCP (L1D$) have a performance bug where they do not behave correctly when multiple requests to the same cache line overlap one another. The intended behavior is that if the first request that arrives at the Ruby code for the SQC/TCP misses, it should send a request to the GPU TCC (L2$). If any requests to the same cache line occur while this first request is pending, they should wait locally at the L1 in the MSHRs (TBEs) until the first request has returned. At that point they can be serviced, and assuming the line has not been evicted, they should hit. For example, in the following test (on 1 GPU thread, in 1 WG):

load Arr[0]
load Arr[1]
load Arr[2]

The expected behavior (confirmed via profiling on real GPUs) is that we should get 1 miss (Arr[0]) and 2 hits (Arr[1], Arr[2]) for such a program. However, the current support in the VIPER SQC/TCP code does not model this correctly. Instead it lets all 3 concurrent requests go straight through to the TCC instead of stopping the Arr[1] and Arr[2] requests locally while Arr[0] is serviced. This causes all 3 requests to be classified as misses. To resolve this, this patch adds support into the SQC/TCP code to prevent subsequent, concurrent requests to a pending cache line from being sent in parallel with the original one. To do this, we add an additional transient state (IV) to indicate that a load is pending to this cache line. If a subsequent request of any kind to the same cache line occurs while this load is pending, the requests are put on the local wait buffer and woken up when the first request returns to the SQC/TCP. Likewise, when the first load is returned to the SQC/TCP, it transitions from IV --> V.
As part of this support, additional transitions were also added to account for corner cases, such as what happens when the line is evicted by another request that maps to the same set index while the first load is pending (the line is immediately given to the new request; when the load returns it completes and wakes up any pending requests to the same line, but does not attempt to change the state of the line), and how GPU bypassing loads and stores should interact with the pending requests (they are forced to wait if they reach the L1 after the pending, non-bypassing load; but if they reach the L1 before the non-bypassing load, they make sure not to change the state of the line from IV if they return before the non-bypassing load). As part of this change, we also move the MSHR behavior for loads from inside the GPUCoalescer to the Ruby code (like all other requests). This is important for getting correct hits and misses in stats and other prints, since the GPUCoalescer MSHR behavior assumes all requests serviced out of its MSHR also miss if the original request to that line missed. Although the SQC does not support stores, the TCP does. Thus, we could have applied a similar change to the GPU stores at the TCP. However, since the TCP support assumes write-through caches and does not attempt to allocate space in the TCP, we elected not to add this support, since it seems to run contrary to the intended behavior (i.e., the intended behavior seems to be that writes just bypass the TCP and thus should not need to wait for another write to the same cache line to complete). Additionally, making these changes introduced issues with deadlocks at the TCC. Specifically, some Pannotia applications have accesses to the same cache line where some of the accesses are GLC (i.e., they bypass the GPU L1 cache) and others are non-GLC (i.e., they want to be cached in the GPU L1 cache). Per-CU support for this already exists in the code described above.
However, the problem here is that these requests are coming from different CUs and happening concurrently (seemingly because different WGs are at different points in the kernel around the same time). This causes a problem because our support at the TCC overwrites the TBE information about the GPU bypassing bits (SLC, GLC) every time. Thus, when the second (non-GLC) load reaches the TCC, it overwrites the SLC/GLC information for the first (GLC) load, and when the first load returns from the directory/memory, it no longer has the GLC bit set, which causes an assert failure at the TCP. After talking with other developers, it was decided that the best way to handle this, and to model real hardware more closely, was to move the point at which requests are put to sleep on the wakeup buffer from the TCC to the directory. Accordingly, this patch includes support for that -- now when multiple loads (bypassing or non-bypassing) from different CUs reach the directory, all but the first one will be forced to wait there until the first one completes, then will be woken up and performed. This required updating the WTRequestor information at the TCC to pass the information about which CU performed the original request for loads as well (otherwise, since the TBE can be updated by multiple pending loads, we can't tell where to send the final result). Thus, I renamed the field to CURequestor instead of WTRequestor, since it is now used for more than stores. Moreover, I also updated the directory to take this new field and the GLC information from incoming TCC requests and then pass that information back to the TCC on the response -- without doing this, because the TBE can be updated by multiple pending, concurrent requests, we cannot determine whether a given memory request was a bypassing or non-bypassing request.
Finally, these changes introduced a lot of additional contention and protocol stalls at the directory, so this patch converted all directory uses of z_stall to instead put requests on the wakeup buffer (waking them up when the current request completes). Without this, protocol stalls cause many applications to deadlock at the directory. However, this exposed another issue at the TCC: other applications (e.g., HACC) have a mix of atomics and non-atomics to the same cache line in the same kernel, and the TCC transitions to the A state when an atomic arrives. For example, the first pending load may return to the TCC from the directory, causing the TCC state to become V, while there are still other pending loads at the TCC; if an atomic then moves the line to A, invalid transition errors occur at the TCC when those pending loads return, because the A state treats them as atomics and decrements the pending atomic count (plus the loads are never sent to the TCP as returning loads). This patch fixes this by changing the TCC TBEs to model the number of pending requests, and by not allowing atomics to be issued from the TCC until all prior, pending non-atomic requests have returned. Change-Id: I37f8bda9f8277f2355bca5ef3610f6b63ce93563
The comp_anr parameter is currently unused. Both parameters (comp_wu and comp_anr) are set to false by default. Change-Id: If09567504540dbee082191d46fcd53f1363d819f Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com>
This pull request contains a set of small patches which fix some bugs in the gem5 prefetchers, and aligns out-of-the-box prefetcher performance more closely with what a typical user would expect. The performance patches have been tested with an out-of-the-box (untuned) Stride prefetcher configuration against a set of SPEC 2017 SimPoints, and show a modest IPC uplift across the board, with no IPC degradation. The new defaults were identified as part of work on gem5 prefetchers undertaken by Nikolaos Kyparissas while on internship at Arm.
This dockerfile is used to *build* applications (e.g., from gem5-resources) which can be run using full system mode in a GPU build. The next release's disk image will use ROCm 5.4.2, therefore bump the version from 4.2 to that version. Again, this is used to *build* input applications only and is not needed to run or compile gem5 with GPUFS. For example:

$ docker build -t rocm54-build .
/some/gem5-resources/src/gpu/lulesh$ docker run --rm -u $UID:$GID -v \
    ${PWD}:${PWD} -w ${PWD} rocm54-build make

Change-Id: If169c8d433afb3044f9b88e883ff3bb2f4bc70d2
gem5/gem5#386 included two cases in "src/dev/reg_bank.hh" where `std::min` was used to compare an integer of type `size_t` and another of type `Addr`. This caused an error on my Apple Silicon Mac, as this is a comparison between an "unsigned long" and an "unsigned long long" which (at least on my setup) was not permitted. To fix this issue, `reg_size` was changed from `size_t` to `Addr`, as were the types of the values it was derived from and the variable used to hold the return from the `std::min` calls. Change-Id: I31e9c04a8e0327d4f6f5390bc5a743c629db4746
gem5/gem5#386 included two cases in "src/dev/reg_bank.hh" where `std::min` was used to compare an integer of type `size_t` and another of type `Addr`. This causes an error on my Apple Silicon Mac, as the comparison between an "unsigned long" and an "unsigned long long" is not permitted. To fix this issue, this patch changes `reg_size` from `size_t` to `Addr`, as well as the types of the values it was derived from and the variable used to hold the return from the `std::min` calls. While not completely correct typing from a labelling perspective (`reg_bytes` is not an address), functions in "src/dev/reg_bank.hh" already use `Addr` in this way frequently (for example, `bytes` in the `write` function).
The destination for the response is set twice.
This was deprecated in C++14 and removed in C++17, and has been replaced with std::random. This has been implemented to ensure reproducible results despite (pseudo)random behavior. Change-Id: Idd52bc997547c7f8c1be88f6130adff8a37b4116
Change-Id: I3bbcfd4dd9798149b37d4a2824fe63652e29786c
This is a simple copy of the current state of the .github on the develop branch, as of 2023-07-27. The stable branch .github dir should never be ahead of that on develop. Therefore this should be safe to do. Change-Id: I1e39de2d1f923d1834d0a77f79a1ff3220964bba
Change-Id: I07e5e5f3bc95b78459b77c0f1170923f6c9daf18
Change-Id: Icd673083f23a465205bea12407bf265e2ba6fb4a
Change-Id: I5ae5081e0ac5524271e6c8300917d7d1e16d71ee
Change-Id: I7755f90c1b5d81ff1cf66920f229be921d47e844
This is done periodically as the GitHub Actions infrastructure reads files from the repo's main branch (`stable`), whereas changes are made to `develop`'s ".github", which should be made live ASAP. This does not affect the gem5 build or any configuration scripts, only how testing is performed via GitHub and other info used by GitHub (e.g., ".github/ISSUE_TEMPLATE", which outlines the templates for creating issues via our GitHub issues page).
This change updates DRAMInterface to be compatible with gem5 v23. Signed-off-by: Kaustav Goswami <kggoswami@ucdavis.edu>
This change adds information on how to use HammerSim in the README.md file. Signed-off-by: Kaustav Goswami <kggoswami@ucdavis.edu>
This change adds a simple statistically generated variation map. It uses VARIUS to generate the initial map. The map is formatted into a JSON file. Change-Id: I2ad252b8fd8ffee95678293892c7232df890f18a Signed-off-by: Kaustav Goswami <kggoswami@ucdavis.edu>
This change updates the default path for the device_file in the util/hammersim directory. Change-Id: I33aa2df7a474276ee9fe2e85df61add209babc47 Signed-off-by: Kaustav Goswami <kggoswami@ucdavis.edu>
This change adds sample configs to simulate rowhammer using hammersim. Change-Id: I8840c55498ad379e953550f765b4880901a120ca Signed-off-by: Kaustav Goswami <kggoswami@ucdavis.edu>
This change cleans most of the non-important comments from the hammersim codebase. Change-Id: I316a067da79ca12c1e6dbb930887436237a687df Signed-off-by: Kaustav Goswami <kggoswami@ucdavis.edu>
kaustav-goswami force-pushed the hammersim23 branch from 311167b to 82f57d2 on November 22, 2023 06:14
This change resolves all the conflicts encountered while rebasing. Signed-off-by: Kaustav Goswami <kggoswami@ucdavis.edu>