path: root/src/tests/perf_tests/README.md
Diffstat (limited to 'src/tests/perf_tests/README.md')
 src/tests/perf_tests/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/tests/perf_tests/README.md b/src/tests/perf_tests/README.md
index 8581f2535..25cb5b4e4 100644
--- a/src/tests/perf_tests/README.md
+++ b/src/tests/perf_tests/README.md
@@ -8,7 +8,7 @@ You can also build your own dashboards. For example, a comparison of ANGLE's bac
 You can follow the usual instructions to [check out and build ANGLE](../../../doc/DevSetup.md). Build the `angle_perftests` target. Note that all test scores are higher-is-better. You should also ensure `is_debug=false` in your build. Running with `angle_assert_always_on` or debug validation enabled is not recommended.
-Variance can be a problem when benchmarking. We have a test harness to run a single test in an infinite loop and print some statistics to help mitigate variance. See [`scripts/perf_test_runner.py`](https://chromium.googlesource.com/angle/angle/+/master/scripts/perf_test_runner.py). To use the script first compile `angle_perftests` into a folder with the word `Release` in it. Then provide the name of the test as the argument to the script. The script will automatically pick up the most current `angle_perftests` and run in an infinite loop.
+Variance can be a problem when benchmarking. We have a test harness to run a single test in an infinite loop and print some statistics to help mitigate variance. See [`scripts/perf_test_runner.py`](https://chromium.googlesource.com/angle/angle/+/main/scripts/perf_test_runner.py). To use the script first compile `angle_perftests` into a folder with the word `Release` in it. Then provide the name of the test as the argument to the script. The script will automatically pick up the most current `angle_perftests` and run in an infinite loop.
 ### Choosing the Test to Run
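The diff above only swaps `master` for `main` in the script URL; the surrounding README text describes the actual workflow. As a hedged sketch of that workflow (the output directory, build invocation, and test name below are illustrative assumptions, not taken from the commit):

```shell
# Sketch of the workflow the README describes (names here are assumptions).
# Build angle_perftests into a directory whose path contains "Release",
# with is_debug=false so assertions and debug validation are off:
gn gen out/Release --args="is_debug=false"
autoninja -C out/Release angle_perftests

# Run a single test in an infinite loop; the script picks up the most
# recently built angle_perftests and prints variance statistics.
# (DrawCallPerfBenchmark is a hypothetical test name.)
python3 scripts/perf_test_runner.py DrawCallPerfBenchmark
```

The commands require an ANGLE checkout with the Chromium `gn`/`autoninja` toolchain on `PATH`; they are not runnable standalone.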