Releases · linkedin/dynamometer
v0.1.7
v0.1.6
v0.1.5
v0.1.4
v0.1.3
v0.1.2
v0.1.1
v0.1.0
Release notes were automatically generated by Shipkit
0.1.0
- 2019-03-01 - 44 commits by 6 authors - published to
- Commits: Erik Krogen (36), Christopher Gregorian (3), Chao Sun (2), lfengnan (1), Peretz Cohen (1), Szczepan Faber (1)
- Workaround for bintray user issue (#86)
- Fix bintray config, moving user config to top level (#85)
- Closes #60. Fix issues with the start-workload.sh script. (#84)
- Add travis build step and fix javadoc formatting (#80)
- Add Shipkit integration for Bintray deployment (#79)
- Update parse start script to handle the case that fsimage txid is cov… (#78)
- Add support for a reducer step that aggregates per-user metrics (#76)
- Create a new TimedInputFormat with the time-based virtual split logic used by CreateFileMapper, so that future workloads can also use time-based mapper logic (see the sketch after this list). (#75)
- Clean up MiniYARNCluster setup to not use fixed ports (#74)
- Add the ability to specify additional dependencies for containers (#73)
- Closes #67. Change the way DataNode host names are specified to be more resilient to hostname resolution. (#68)
- Build is failing on Travis CI due to change in host name resolution (#67)
- Update README.md with minor wording fixes (#61)
- Fix README (replace hadoop_binary with hadoop_binary_path) (#59)
- Closes #57. Overwrite existing files when uploading FSImage. (#58)
- upload-fsimage.sh should check whether VERSION exists before uploading? (#57)
- Closes #52. Fix a bug in -help option processing in the client. (#55)
- Closes #53. Enhance the blockReportThread to continue requesting reports until there are no more DataNodes which match the criteria (#54)
- Continue requesting block reports until all DataNodes have reported (#53)
- start-dynamometer-cluster.sh doesn't print help message as expected (#52)
- Closes #50. Fix a bug in the parse-metrics.sh script which did not properly parse values in scientific notation. (#51)
- parse-metrics script does not properly parse scientific notation (#50)
- Closes #48. Make the percentage-based thresholds for infra app NameNode readiness configurable. (#49)
- Add configurability for NameNode readiness criteria (#48)
- Bump Hadoop 2.8 version in tests to 2.8.4 (#47)
- Closes #41. Remove unused DataNodeLayoutVersionFetcher utility. (#46)
- Closes #43. Support build environment overrides. (#45)
- Closes #42. Add in an Azkaban runner to support running Dynamometer as a HadoopJavaJob. (#44)
- Support build environment overrides. (#43)
- Support being run via Azkaban (#42)
- Remove DataNodeLayoutVersionFetcher (#41)
- Closes #30. Add the ability to specify custom NameNode name and edits dirs. (#40)
- Closes #28. Remove the check for truncate in the integration test. (#39)
- Closes #37. Add in the ability to trigger block reports on DataNodes that haven't reported everything yet. (#38)
- Add the ability to trigger block reports on simulated DataNodes (#37)
- Fix test failures by bumping default Hadoop version to 2.7.6. (#36)
- Closes #31. Add the ability to specify the view ACLs. Piggyback off of the MR config. (#35)
- Closes #32. Add the ability to specify workload-job-only configurations. (#34)
- Closes #29. Support specifying node labels for the DataNode and NameNode containers. (#33)
- Add the ability to specify configurations which apply only to the workload job (#32)
- Provide a way to configure the view ACLs for logs (#31)
- Add in the ability to override the NameNode name and edits directory (#30)
- Add in the ability to configure node labels for Dynamometer NameNode / DataNode containers (#29)
- Remove check for truncate command in integration test (#28)
- Follow-on to PR #26. Improve TravisCI build: Remove failing/old Hadoop 2.6 build, increase logging for debuggability, and add a status badge. (#27)
- Closes #25. Set up Travis CI and add in a Gradle wrapper. (#26)
- Set up Travis CI to run against new commits (#25)
- Fix TestWorkloadGenerator which was broken by PR #18. (#24)
- Closes #20. Add a progress updating thread to the audit workload replay (#23)
- Closes #21. Add the ability to specify resources from the unpacked JAR. (#22)
- Add the ability to specify resources from the unpacked JAR (#21)
- Add a progress updating thread to the Audit replay workload (#20)
- Closes #16. Use a single shared queue between all threads in each workload mapper. (#19)
- Closes #14. Proxy in workload job to enable permission checking. (#18)
- Closes #15. Support rate-factor parameter to artificially increase intensity of a given workload by speeding it up. (#17)
- Workload replay threads should share a single DelayQueue (#16)
- Enable the ability to replay workloads at increased / decreased speed (#15)
- Audit workload replay should proxy to use real permissions (#14)
- Closes #11. Bump the Hadoop version to 2.7.5 (#13)
- Closes #9. Gradle to pass relevant system properties into the dynamometer-infra test task. (#12)
- Bump Hadoop version in test dependency up to 2.7.5 (#11)
- Gradle should pass relevant system properties into tests (#9)
- Closes #7. Replace usage of package-private dfs field within DistributedFileSystem with getClient. (#8)
- Fix usage of DFSClient (follow-on to #1) (#7)
- Closes #4. Use MiniDFSCluster / SimulatedFSDataset to be more efficient (#5)
- Use fully in-memory DataNodes in the same JVM (#4)
- Closes #1. Improvements to accuracy of workload audit replay. (#2)
- Improvements to audit replay accuracy (#1)
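
The TimedInputFormat change (#75) above describes a reusable mechanism rather than a one-off fix: mappers are driven by "virtual" splits that carry no data, and each mapper runs for a configured wall-clock duration. As a rough illustration only — this is not the Dynamometer source, and the class, package, and configuration key names below are invented — a time-based virtual-split InputFormat for Hadoop MapReduce might look like the following.

```java
// Hypothetical sketch of a time-based "virtual split" InputFormat, in the
// spirit of Dynamometer's TimedInputFormat (#75). Class names and config
// keys here are assumptions, not the project's actual API.
import java.io.DataInput;
import java.io.DataOutput;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

public class TimeBasedInputFormat extends InputFormat<LongWritable, NullWritable> {
  // Hypothetical config keys; the real Dynamometer keys may differ.
  public static final String NUM_MAPPERS_KEY = "timed.inputformat.mappers";
  public static final String DURATION_MS_KEY = "timed.inputformat.duration.ms";

  @Override
  public List<InputSplit> getSplits(JobContext context) {
    // One empty "virtual" split per desired mapper; no real input data exists.
    int numMappers = context.getConfiguration().getInt(NUM_MAPPERS_KEY, 1);
    List<InputSplit> splits = new ArrayList<>();
    for (int i = 0; i < numMappers; i++) {
      splits.add(new VirtualSplit());
    }
    return splits;
  }

  @Override
  public RecordReader<LongWritable, NullWritable> createRecordReader(
      InputSplit split, TaskAttemptContext context) {
    long durationMs = context.getConfiguration().getLong(DURATION_MS_KEY, 60_000L);
    return new TimedRecordReader(durationMs);
  }

  /** A split that carries no data; it exists only to launch one mapper. */
  public static class VirtualSplit extends InputSplit implements Writable {
    @Override public long getLength() { return 0; }
    @Override public String[] getLocations() { return new String[0]; }
    @Override public void write(DataOutput out) { /* nothing to serialize */ }
    @Override public void readFields(DataInput in) { /* nothing to deserialize */ }
  }

  /** Emits an incrementing counter as the key until the duration has elapsed. */
  public static class TimedRecordReader extends RecordReader<LongWritable, NullWritable> {
    private final long durationMs;
    private long startMs;
    private final LongWritable key = new LongWritable(0);

    TimedRecordReader(long durationMs) { this.durationMs = durationMs; }

    @Override public void initialize(InputSplit split, TaskAttemptContext context) {
      startMs = System.currentTimeMillis();
    }

    @Override public boolean nextKeyValue() {
      // Terminate based on elapsed time rather than exhausting input data.
      if (System.currentTimeMillis() - startMs >= durationMs) {
        return false;
      }
      key.set(key.get() + 1);
      return true;
    }

    @Override public LongWritable getCurrentKey() { return key; }
    @Override public NullWritable getCurrentValue() { return NullWritable.get(); }

    @Override public float getProgress() {
      // Progress is elapsed time over configured duration, capped at 1.0.
      return Math.min(1.0f, (System.currentTimeMillis() - startMs) / (float) durationMs);
    }

    @Override public void close() {}
  }
}
```

The appeal of this approach for load generation is that mapper count and run length become configuration knobs independent of any input dataset, which is what lets CreateFileMapper, and potentially other workloads, run for a fixed duration.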