Auto merge of #19257 - asajeffrey:test-perf-include-date-in-csv, r=jdm

Include a YYYYMMDD date field in the raw test-perf CSV to make Google Data Studio happy


AFAICT Google Data Studio needs a YYYYMMDD date string in the raw data to be able to provide time-indexed graphs; its `TODATE` function does not convert unix timestamps to YYYYMMDD.

https://www.en.advertisercommunity.com/t5/Data-Studio/Can-Data-Studio-Date-Dimension-use-epoch-in-seconds-from/td-p/839649

This PR works around that by including a date field directly in the raw CSV.

---
- [X] `./mach build -d` does not report any errors
- [X] `./mach test-tidy` does not report any errors
- [X] These changes do not require tests because this is test infrastructure


This commit is contained in: commit b5a205d92e (bors-servo, 2017-11-16 16:46:26 -06:00, committed by GitHub)


```diff
@@ -10,11 +10,15 @@ import itertools
 import json
 import os
 import subprocess
+from datetime import datetime
 from functools import partial
 from statistics import median, StatisticsError
 from urllib.parse import urlsplit, urlunsplit, urljoin
 
 
+DATE = datetime.now().strftime("%Y%m%d")
+
+
 def load_manifest(filename):
     with open(filename, 'r') as f:
         text = f.read()
@@ -169,6 +173,7 @@ def parse_log(log, testcase, url):
     # rather than the url.
     def set_testcase(timing, testcase=None):
         timing['testcase'] = testcase
+        timing['date'] = DATE
         return timing
 
     valid_timing_for_case = partial(valid_timing, url=url)
@@ -240,6 +245,7 @@ def save_result_json(results, filename, manifest, expected_runs, base):
 
 def save_result_csv(results, filename, manifest, expected_runs, base):
     fieldnames = [
+        'date',
         'testcase',
         'title',
         'connectEnd',
```