Update to latest wptrunner

James Graham 2016-02-02 09:54:39 +00:00
parent 9baa59a6b4
commit f234d99ac1
15 changed files with 229 additions and 87 deletions

View file

@@ -29,7 +29,7 @@ following are most significant:
   The path to a binary file for the product (browser) to test against.
 ``--webdriver-binary`` (required if product is `chrome`)
-  The path to a `*driver` binary; e.g., a `chromedriver` binary.
+  The path to a `driver` binary; e.g., a `chromedriver` binary.
 ``--certutil-binary`` (required if product is `firefox` [#]_)
   The path to a `certutil` binary (for tests that must be run over https).
@@ -43,13 +43,18 @@ following are most significant:
 ``--prefs-root`` (required only when testing a Firefox binary)
   The path to a directory containing Firefox test-harness preferences. [#]_
+``--config`` (should default to `wptrunner.default.ini`)
+  The path to the config (ini) file.

 .. [#] The ``--certutil-binary`` option is required when the product is
        ``firefox`` unless ``--ssl-type=none`` is specified.

 .. [#] The ``--metadata`` path is to a directory that contains:

-       * a ``MANIFEST.json`` file (the web-platform-tests documentation has
-         instructions on generating this file); and
+       * a ``MANIFEST.json`` file (instructions on generating this file are
+         available in the `detailed documentation
+         <http://wptrunner.readthedocs.org/en/latest/usage.html#installing-wptrunner>`_);
+         and

        * (optionally) any expectation files (see below)

 .. [#] Example ``--prefs-root`` value: ``~/mozilla-central/testing/profiles``.
@@ -125,7 +130,7 @@ input to the `wptupdate` tool.
 Expectation File Format
 ~~~~~~~~~~~~~~~~~~~~~~~

-Metadat about tests, notably including their expected results, is
+Metadata about tests, notably including their expected results, is
 stored in a modified ini-like format that is designed to be human
 editable, but also to be machine updatable.
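
For orientation, a minimal expectation file in this format looks roughly like the following sketch (file, test, and subtest names invented)::

  [filename.html]
    type: testharness

    [subtest name]
      expected: FAIL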

View file

@@ -28,19 +28,19 @@ environment created as above::
   pip install -e ./

 In addition to the dependencies installed by pip, wptrunner requires
-a copy of the web-platform-tests repository. That can be located
-anywhere on the filesystem, but the easiest option is to put it within
-the wptrunner checkout directory, as a subdirectory named ``tests``::
+a copy of the web-platform-tests repository. This can be located
+anywhere on the filesystem, but the easiest option is to put it
+under the same parent directory as the wptrunner checkout::

-  git clone https://github.com/w3c/web-platform-tests.git tests
+  git clone https://github.com/w3c/web-platform-tests.git

 It is also necessary to generate a web-platform-tests ``MANIFEST.json``
-file. It's recommended to put that within the wptrunner
-checkout directory, in a subdirectory named ``meta``::
+file. It's recommended to also put that under the same parent directory as
+the wptrunner checkout, in a directory named ``meta``::

   mkdir meta
-  cd tests
-  python tools/scripts/manifest.py ../meta/MANIFEST.json
+  cd web-platform-tests
+  python manifest --path ../meta/MANIFEST.json

 The ``MANIFEST.json`` file needs to be regenerated each time the
 web-platform-tests checkout is updated. To aid with the update process
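
Concretely, refreshing after pulling new tests just repeats the command above from the updated checkout::

  cd web-platform-tests
  git pull
  python manifest --path ../meta/MANIFEST.json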
@@ -74,6 +74,9 @@ takes multiple options, of which the following are most significant:
 ``--prefs-root`` (required only when testing a Firefox binary)
   The path to a directory containing Firefox test-harness preferences. [#]_
+``--config`` (should default to `wptrunner.default.ini`)
+  The path to the config (ini) file.

 .. [#] The ``--certutil-binary`` option is required when the product is
        ``firefox`` unless ``--ssl-type=none`` is specified.
@@ -94,10 +97,17 @@ The following examples show how to start wptrunner with various options.
 Starting wptrunner
 ------------------

+The examples below assume the following directory layout,
+though no specific folder structure is required::
+
+  ~/testtwf/wptrunner          # wptrunner checkout
+  ~/testtwf/web-platform-tests # web-platform-tests checkout
+  ~/testtwf/meta               # metadata
+
 To test a Firefox Nightly build in an OS X environment, you might start
 wptrunner using something similar to the following example::

-  wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
+  wptrunner --metadata=~/testtwf/meta/ --tests=~/testtwf/web-platform-tests/ \
     --binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/dist/Nightly.app/Contents/MacOS/firefox \
     --certutil-binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/security/nss/cmd/certutil/certutil \
     --prefs-root=~/mozilla-central/testing/profiles
@@ -106,7 +116,7 @@ wptrunner using something similar to the following example::
 And to test a Chromium build in an OS X environment, you might start
 wptrunner using something similar to the following example::

-  wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
+  wptrunner --metadata=~/testtwf/meta/ --tests=~/testtwf/web-platform-tests/ \
     --binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
     --webdriver-binary=/usr/local/bin/chromedriver --product=chrome
@@ -118,7 +128,7 @@ To restrict a test run just to tests in a particular web-platform-tests
 subdirectory, specify the directory name in the positional arguments after
 the options; for example, run just the tests in the `dom` subdirectory::

-  wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
+  wptrunner --metadata=~/testtwf/meta --tests=~/testtwf/web-platform-tests/ \
     --binary=/path/to/firefox --certutil-binary=/path/to/certutil \
     --prefs-root=/path/to/testing/profiles \
     dom
@@ -180,7 +190,7 @@ Configuration File
 wptrunner uses a ``.ini`` file to control some configuration
 sections. The file has three sections; ``[products]``,
-``[paths]`` and ``[web-platform-tests]``.
+``[manifest:default]`` and ``[web-platform-tests]``.

 ``[products]`` is used to
 define the set of available products. By default this section is empty
@@ -195,12 +205,12 @@ e.g.::
   chrome =
   netscape4 = path/to/netscape.py

-``[paths]`` specifies the default paths for the tests and metadata,
+``[manifest:default]`` specifies the default paths for the tests and metadata,
 relative to the config file. For example::

-  [paths]
-  tests = checkouts/web-platform-tests
-  metadata = /home/example/wpt/metadata
+  [manifest:default]
+  tests = ~/testtwf/web-platform-tests
+  metadata = ~/testtwf/meta

 ``[web-platform-tests]`` is used to set the properties of the upstream
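
Assembled, a config file in the new scheme might look like this sketch; the ``[web-platform-tests]`` keys shown are illustrative assumptions, not taken from this change::

  [products]
  firefox =

  [manifest:default]
  tests = ~/testtwf/web-platform-tests
  metadata = ~/testtwf/meta

  [web-platform-tests]
  remote_url = https://github.com/w3c/web-platform-tests.git
  branch = master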

View file

@@ -192,7 +192,7 @@ class B2GExecutorBrowser(ExecutorBrowser):
         import sys, subprocess

-        self.device = mozdevice.ADBDevice()
+        self.device = mozdevice.ADBB2G()
         self.device.forward("tcp:%s" % self.marionette_port,
                             "tcp:2828")

         self.executor = None

View file

@@ -28,7 +28,8 @@ __wptrunner__ = {"product": "firefox",
                  "browser_kwargs": "browser_kwargs",
                  "executor_kwargs": "executor_kwargs",
                  "env_options": "env_options",
-                 "run_info_extras": "run_info_extras"}
+                 "run_info_extras": "run_info_extras",
+                 "update_properties": "update_properties"}


 def check_args(**kwargs):
@@ -54,7 +55,7 @@ def executor_kwargs(test_type, server_config, cache_manager, run_info_data,
                                            cache_manager, **kwargs)
     executor_kwargs["close_after_done"] = True
     if kwargs["timeout_multiplier"] is None:
-        if kwargs["gecko_e10s"] and test_type == "reftest":
+        if test_type == "reftest":
             if run_info_data["debug"]:
                 executor_kwargs["timeout_multiplier"] = 4
             else:
@@ -71,9 +72,14 @@ def env_options():
             "certificate_domain": "web-platform.test",
             "supports_debugger": True}


 def run_info_extras(**kwargs):
     return {"e10s": kwargs["gecko_e10s"]}


+def update_properties():
+    return ["debug", "e10s", "os", "version", "processor", "bits"], {"debug", "e10s"}
+
+
 class FirefoxBrowser(Browser):
     used_ports = set()
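
Listing ``debug`` and ``e10s`` in the boolean set means the expression builder (``make_expr``, further down in this diff) writes them as bare conditions rather than ``== true`` comparisons, so updated expectation files read like this sketch (test name invented)::

  [hypothetical-test.html]
    expected:
      if debug and e10s: FAIL
      TIMEOUT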

View file

@@ -17,7 +17,9 @@ __wptrunner__ = {"product": "servo",
                              "reftest": "ServoRefTestExecutor"},
                  "browser_kwargs": "browser_kwargs",
                  "executor_kwargs": "executor_kwargs",
-                 "env_options": "env_options"}
+                 "env_options": "env_options",
+                 "run_info_extras": "run_info_extras",
+                 "update_properties": "update_properties"}


 def check_args(**kwargs):
@@ -47,8 +49,16 @@ def env_options():
             "supports_debugger": True}


+def run_info_extras(**kwargs):
+    return {"backend": kwargs["servo_backend"]}
+
+
+def update_properties():
+    return ["debug", "os", "version", "processor", "bits", "backend"], None
+
+
 def render_arg(render_backend):
-    return {"cpu": "--cpu"}[render_backend]
+    return {"cpu": "--cpu", "webrender": "--webrender"}[render_backend]


 class ServoBrowser(NullBrowser):

View file

@@ -23,7 +23,9 @@ __wptrunner__ = {"product": "servodriver",
                              "reftest": "ServoWebDriverRefTestExecutor"},
                  "browser_kwargs": "browser_kwargs",
                  "executor_kwargs": "executor_kwargs",
-                 "env_options": "env_options"}
+                 "env_options": "env_options",
+                 "run_info_extras": "run_info_extras",
+                 "update_properties": "update_properties"}

 hosts_text = """127.0.0.1 web-platform.test
 127.0.0.1 www.web-platform.test
@@ -59,6 +61,14 @@ def env_options():
             "supports_debugger": True}


+def run_info_extras(**kwargs):
+    return {"backend": kwargs["servo_backend"]}
+
+
+def update_properties():
+    return ["debug", "os", "version", "processor", "bits", "backend"], None
+
+
 def make_hosts_file():
     hosts_fd, hosts_path = tempfile.mkstemp()
     with os.fdopen(hosts_fd, "w") as f:
@@ -88,6 +98,7 @@ class ServoWebDriverBrowser(Browser):
         env = os.environ.copy()
         env["HOST_FILE"] = self.hosts_path
+        env["RUST_BACKTRACE"] = "1"

         debug_args, command = browser_command(self.binary,
                                               [render_arg(self.render_backend), "--hard-fail",

View file

@@ -107,12 +107,6 @@ class MarionetteProtocol(Protocol):
         return True

     def after_connect(self):
-        # Turn off debug-level logging by default since this is so verbose
-        with self.marionette.using_context("chrome"):
-            self.marionette.execute_script("""
-                Components.utils.import("resource://gre/modules/Log.jsm");
-                Log.repository.getLogger("Marionette").level = Log.Level.Info;
-                """)
         self.load_runner("http")

     def load_runner(self, protocol):

View file

@@ -87,7 +87,7 @@ class ServoTestharnessExecutor(ProcessTestExecutor):
         env = os.environ.copy()
         env["HOST_FILE"] = self.hosts_path
+        env["RUST_BACKTRACE"] = "1"

         if not self.interactive:
@@ -223,6 +223,7 @@ class ServoRefTestExecutor(ProcessTestExecutor):
         env = os.environ.copy()
         env["HOST_FILE"] = self.hosts_path
+        env["RUST_BACKTRACE"] = "1"

         if not self.interactive:
             self.proc = ProcessHandler(self.command,

View file

@@ -8,6 +8,7 @@ The manifest is represented by a tree of IncludeManifest objects, the root
 representing the file and each subnode representing a subdirectory that should
 be included or excluded.
 """
+import glob
 import os
 import urlparse
@@ -90,29 +91,36 @@ class IncludeManifest(ManifestItem):
             variant += "?" + query

         maybe_path = os.path.join(rest, last)
+        paths = glob.glob(maybe_path)

-        if os.path.exists(maybe_path):
-            for manifest, data in test_manifests.iteritems():
-                rel_path = os.path.relpath(maybe_path, data["tests_path"])
-                if ".." not in rel_path.split(os.sep):
-                    url = data["url_base"] + rel_path.replace(os.path.sep, "/") + variant
-                    break
+        if paths:
+            urls = []
+            for path in paths:
+                for manifest, data in test_manifests.iteritems():
+                    rel_path = os.path.relpath(path, data["tests_path"])
+                    if ".." not in rel_path.split(os.sep):
+                        urls.append(data["url_base"] + rel_path.replace(os.path.sep, "/") + variant)
+                        break
+        else:
+            urls = [url]

         assert direction in ("include", "exclude")
-        components = self._get_components(url)

-        node = self
-        while components:
-            component = components.pop()
-            if component not in node.child_map:
-                new_node = IncludeManifest(DataNode(component))
-                node.append(new_node)
-                new_node.set("skip", node.get("skip", {}))
-
-            node = node.child_map[component]
-
-        skip = False if direction == "include" else True
-        node.set("skip", str(skip))
+        for url in urls:
+            components = self._get_components(url)
+
+            node = self
+            while components:
+                component = components.pop()
+                if component not in node.child_map:
+                    new_node = IncludeManifest(DataNode(component))
+                    node.append(new_node)
+                    new_node.set("skip", node.get("skip", {}))
+
+                node = node.child_map[component]
+
+            skip = False if direction == "include" else True
+            node.set("skip", str(skip))

     def add_include(self, test_manifests, url_prefix):
         """Add a rule indicating that tests under a url path

View file

@@ -49,13 +49,18 @@ def data_cls_getter(output_node, visited_node):

 class ExpectedManifest(ManifestItem):
-    def __init__(self, node, test_path=None, url_base=None):
+    def __init__(self, node, test_path=None, url_base=None, property_order=None,
+                 boolean_properties=None):
         """Object representing all the tests in a particular manifest

         :param node: AST Node associated with this object. If this is None,
                      a new AST is created to associate with this manifest.
         :param test_path: Path of the test file associated with this manifest.
-        :param url_base: Base url for serving the tests in this manifest
+        :param url_base: Base url for serving the tests in this manifest.
+        :param property_order: List of properties to use in expectation metadata
+                               from most to least significant.
+        :param boolean_properties: Set of properties in property_order that should
+                                   be treated as boolean.
         """
         if node is None:
             node = DataNode(None)
@@ -65,6 +70,8 @@ class ExpectedManifest(ManifestItem):
         self.url_base = url_base
         assert self.url_base is not None
         self.modified = False
+        self.boolean_properties = boolean_properties
+        self.property_order = property_order

     def append(self, child):
         ManifestItem.append(self, child)
@@ -229,7 +236,10 @@ class TestNode(ManifestItem):
             self.set("expected", status, condition=None)
             final_conditionals.append(self._data["expected"][-1])
         else:
-            for conditional_node, status in group_conditionals(self.new_expected):
+            for conditional_node, status in group_conditionals(
+                    self.new_expected,
+                    property_order=self.root.property_order,
+                    boolean_properties=self.root.boolean_properties):
                 if status != unconditional_status:
                     self.set("expected", status, condition=conditional_node.children[0])
                     final_conditionals.append(self._data["expected"][-1])
@@ -308,18 +318,30 @@ class SubtestNode(TestNode):
         return True


-def group_conditionals(values):
+def group_conditionals(values, property_order=None, boolean_properties=None):
     """Given a list of Result objects, return a list of
     (conditional_node, status) pairs representing the conditional
     expressions that are required to match each status

-    :param values: List of Results"""
+    :param values: List of Results
+    :param property_order: List of properties to use in expectation metadata
+                           from most to least significant.
+    :param boolean_properties: Set of properties in property_order that should
+                               be treated as boolean."""

     by_property = defaultdict(set)
     for run_info, status in values:
         for prop_name, prop_value in run_info.iteritems():
             by_property[(prop_name, prop_value)].add(status)

+    if property_order is None:
+        property_order = ["debug", "os", "version", "processor", "bits"]
+
+    if boolean_properties is None:
+        boolean_properties = set(["debug"])
+    else:
+        boolean_properties = set(boolean_properties)
+
     # If we have more than one value, remove any properties that are common
     # for all the values
     if len(values) > 1:
@@ -328,11 +350,9 @@ def group_conditionals(values):
             del by_property[key]

     properties = set(item[0] for item in by_property.iterkeys())
-    prop_order = ["debug", "e10s", "os", "version", "processor", "bits"]
-
     include_props = []

-    for prop in prop_order:
+    for prop in property_order:
         if prop in properties:
             include_props.append(prop)
@@ -343,28 +363,33 @@ def group_conditionals(values):
         if prop_set in conditions:
             continue

-        expr = make_expr(prop_set, status)
+        expr = make_expr(prop_set, status, boolean_properties=boolean_properties)
         conditions[prop_set] = (expr, status)

     return conditions.values()


-def make_expr(prop_set, status):
+def make_expr(prop_set, status, boolean_properties=None):
     """Create an AST that returns the value ``status`` given all the
-    properties in prop_set match."""
+    properties in prop_set match.
+
+    :param prop_set: tuple of (property name, value) pairs for each
+                     property in this expression and the value it must match
+    :param status: Status on RHS when all the given properties match
+    :param boolean_properties: Set of properties in property_order that should
+                               be treated as boolean.
+    """
     root = ConditionalNode()

     assert len(prop_set) > 0

-    no_value_props = set(["debug", "e10s"])
-
     expressions = []
     for prop, value in prop_set:
         number_types = (int, float, long)
         value_cls = (NumberNode
                      if type(value) in number_types
                      else StringNode)
-        if prop not in no_value_props:
+        if prop not in boolean_properties:
             expressions.append(
                 BinaryExpressionNode(
                     BinaryOperatorNode("=="),
@@ -397,24 +422,32 @@ def make_expr(prop_set, status):
     return root


-def get_manifest(metadata_root, test_path, url_base):
+def get_manifest(metadata_root, test_path, url_base, property_order=None,
+                 boolean_properties=None):
     """Get the ExpectedManifest for a particular test path, or None if there is no
     metadata stored for that test path.

     :param metadata_root: Absolute path to the root of the metadata directory
     :param test_path: Path to the test(s) relative to the test root
     :param url_base: Base url for serving the tests in this manifest
-    """
+    :param property_order: List of properties to use in expectation metadata
+                           from most to least significant.
+    :param boolean_properties: Set of properties in property_order that should
+                               be treated as boolean."""
     manifest_path = expected.expected_path(metadata_root, test_path)
     try:
         with open(manifest_path) as f:
-            return compile(f, test_path, url_base)
+            return compile(f, test_path, url_base, property_order=property_order,
+                           boolean_properties=boolean_properties)
     except IOError:
         return None


-def compile(manifest_file, test_path, url_base):
+def compile(manifest_file, test_path, url_base, property_order=None,
+            boolean_properties=None):
     return conditional.compile(manifest_file,
                                data_cls_getter=data_cls_getter,
                                test_path=test_path,
-                               url_base=url_base)
+                               url_base=url_base,
+                               property_order=property_order,
+                               boolean_properties=boolean_properties)

View file

@@ -32,7 +32,7 @@ def load_test_manifests(serve_root, test_paths):

 def update_expected(test_paths, serve_root, log_file_names,
                     rev_old=None, rev_new="HEAD", ignore_existing=False,
-                    sync_root=None):
+                    sync_root=None, property_order=None, boolean_properties=None):
     """Update the metadata files for web-platform-tests based on
     the results obtained in a previous run"""
@@ -51,7 +51,9 @@ def update_expected(test_paths, serve_root, log_file_names,
     expected_map_by_manifest = update_from_logs(manifests,
                                                 *log_file_names,
-                                                ignore_existing=ignore_existing)
+                                                ignore_existing=ignore_existing,
+                                                property_order=property_order,
+                                                boolean_properties=boolean_properties)

     for test_manifest, expected_map in expected_map_by_manifest.iteritems():
         url_base = manifests[test_manifest]["url_base"]
@@ -127,14 +129,19 @@ def unexpected_changes(manifests, change_data, files_changed):

 def update_from_logs(manifests, *log_filenames, **kwargs):
-    ignore_existing = kwargs.pop("ignore_existing", False)
+    ignore_existing = kwargs.get("ignore_existing", False)
+    property_order = kwargs.get("property_order")
+    boolean_properties = kwargs.get("boolean_properties")

     expected_map = {}
     id_test_map = {}

     for test_manifest, paths in manifests.iteritems():
-        expected_map_manifest, id_path_map_manifest = create_test_tree(paths["metadata_path"],
-                                                                       test_manifest)
+        expected_map_manifest, id_path_map_manifest = create_test_tree(
+            paths["metadata_path"],
+            test_manifest,
+            property_order=property_order,
+            boolean_properties=boolean_properties)
         expected_map[test_manifest] = expected_map_manifest
         id_test_map.update(id_path_map_manifest)
@@ -284,15 +291,22 @@ class ExpectedUpdater(object):
             del self.test_cache[test_id]


-def create_test_tree(metadata_path, test_manifest):
+def create_test_tree(metadata_path, test_manifest, property_order=None,
+                     boolean_properties=None):
     expected_map = {}
     id_test_map = {}
     exclude_types = frozenset(["stub", "helper", "manual"])
     include_types = set(manifest.item_types) - exclude_types
     for test_path, tests in test_manifest.itertypes(*include_types):
-        expected_data = load_expected(test_manifest, metadata_path, test_path, tests)
+        expected_data = load_expected(test_manifest, metadata_path, test_path, tests,
+                                      property_order=property_order,
+                                      boolean_properties=boolean_properties)
         if expected_data is None:
-            expected_data = create_expected(test_manifest, test_path, tests)
+            expected_data = create_expected(test_manifest,
+                                            test_path,
+                                            tests,
+                                            property_order=property_order,
+                                            boolean_properties=boolean_properties)

         for test in tests:
             id_test_map[test.id] = (test_manifest, test)
@@ -301,17 +315,23 @@ def create_test_tree(metadata_path, test_manifest):
     return expected_map, id_test_map


-def create_expected(test_manifest, test_path, tests):
-    expected = manifestupdate.ExpectedManifest(None, test_path, test_manifest.url_base)
+def create_expected(test_manifest, test_path, tests, property_order=None,
+                    boolean_properties=None):
+    expected = manifestupdate.ExpectedManifest(None, test_path, test_manifest.url_base,
+                                               property_order=property_order,
+                                               boolean_properties=boolean_properties)
     for test in tests:
         expected.append(manifestupdate.TestNode.create(test.item_type, test.id))
     return expected


-def load_expected(test_manifest, metadata_path, test_path, tests):
+def load_expected(test_manifest, metadata_path, test_path, tests, property_order=None,
+                  boolean_properties=None):
     expected_manifest = manifestupdate.get_manifest(metadata_path,
                                                     test_path,
-                                                    test_manifest.url_base)
+                                                    test_manifest.url_base,
+                                                    property_order=property_order,
+                                                    boolean_properties=boolean_properties)
     if expected_manifest is None:
         return

View file

@@ -55,3 +55,18 @@ def load_product(config, product):
             browser_cls, browser_kwargs,
             executor_classes, executor_kwargs,
             env_options, run_info_extras)
+
+
+def load_product_update(config, product):
+    """Return tuple of (property_order, boolean_properties) indicating the
+    run_info properties to use when constructing the expectation data for
+    this product. None for either key indicates that the default keys
+    appropriate for distinguishing based on platform will be used."""
+
+    module = product_module(config, product)
+    data = module.__wptrunner__
+
+    update_properties = (getattr(module, data["update_properties"])()
+                         if "update_properties" in data else (None, None))
+
+    return update_properties
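
Read together with the firefox module above, this helper returns exactly the tuple from that product's hook; a quick sketch of the expected values (``config`` being the loaded ini data)::

  property_order, boolean_properties = load_product_update(config, "firefox")
  # property_order == ["debug", "e10s", "os", "version", "processor", "bits"]
  # boolean_properties == {"debug", "e10s"}

Products whose ``__wptrunner__`` dict lacks the key fall back to ``(None, None)``, i.e. the platform-only defaults in ``group_conditionals``.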

View file

@@ -4,10 +4,21 @@

 import os

-from .. import metadata
+from .. import metadata, products
 from base import Step, StepRunner


+class GetUpdatePropertyList(Step):
+    provides = ["property_order", "boolean_properties"]
+
+    def create(self, state):
+        property_order, boolean_properties = products.load_product_update(
+            state.config, state.product)
+        state.property_order = property_order
+        state.boolean_properties = boolean_properties
+
+
 class UpdateExpected(Step):
     """Do the metadata update on the local checkout"""
@@ -24,7 +35,9 @@ class UpdateExpected(Step):
                                  state.run_log,
                                  rev_old=None,
                                  ignore_existing=state.ignore_existing,
-                                 sync_root=sync_root)
+                                 sync_root=sync_root,
+                                 property_order=state.property_order,
+                                 boolean_properties=state.boolean_properties)


 class CreateMetadataPatch(Step):
@@ -57,5 +70,6 @@ class CreateMetadataPatch(Step):

 class MetadataUpdateRunner(StepRunner):
     """(Sub)Runner for updating metadata"""
-    steps = [UpdateExpected,
+    steps = [GetUpdatePropertyList,
+             UpdateExpected,
              CreateMetadataPatch]

View file

@@ -91,6 +91,8 @@ class UpdateMetadata(Step):
         state.ignore_existing = kwargs["ignore_existing"]
         state.no_patch = kwargs["no_patch"]
         state.suite_name = kwargs["suite_name"]
+        state.product = kwargs["product"]
+        state.config = kwargs["config"]

         runner = MetadataUpdateRunner(self.logger, state)
         runner.run()

View file

@@ -155,7 +155,7 @@ def create_parser(product_choices=None):
     gecko_group.add_argument("--prefs-root", dest="prefs_root", action="store", type=abs_path,
                              help="Path to the folder containing browser prefs")
     gecko_group.add_argument("--e10s", dest="gecko_e10s", action="store_true",
-                             help="Path to the folder containing browser prefs")
+                             help="Run tests with electrolysis preferences")

     b2g_group = parser.add_argument_group("B2G-specific")
     b2g_group.add_argument("--b2g-no-backup", action="store_true", default=False,
@@ -338,12 +338,25 @@ def check_args(kwargs):
     return kwargs


-def create_parser_update():
+def check_args_update(kwargs):
+    set_from_config(kwargs)
+
+    if kwargs["product"] is None:
+        kwargs["product"] = "firefox"
+
+
+def create_parser_update(product_choices=None):
     from mozlog.structured import commandline

+    import products
+
+    if product_choices is None:
+        config_data = config.load()
+        product_choices = products.products_enabled(config_data)
+
     parser = argparse.ArgumentParser("web-platform-tests-update",
                                      description="Update script for web-platform-tests tests.")
+    parser.add_argument("--product", action="store", choices=product_choices,
+                        default=None, help="Browser for which metadata is being updated")
     parser.add_argument("--config", action="store", type=abs_path, help="Path to config file")
     parser.add_argument("--metadata", action="store", type=abs_path, dest="metadata_root",
                         help="Path to the folder containing test metadata"),
@@ -386,7 +399,7 @@ def parse_args():
 def parse_args_update():
     parser = create_parser_update()
     rv = vars(parser.parse_args())
-    set_from_config(rv)
+    check_args_update(rv)
     return rv
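
With the new option wired through ``check_args_update``, a product-specific metadata update might be invoked as follows (a sketch: the ``wptupdate`` entry point is named in the docs above, and the positional run-log argument is assumed)::

  wptupdate --product=firefox --config=wptrunner.ini \
    --metadata=~/testtwf/meta firefox-run.log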