Mirror of https://github.com/servo/servo.git (synced 2025-08-03 12:40:06 +01:00)

Commit f234d99ac1 ("Update to latest wptrunner"), parent 9baa59a6b4
15 changed files with 229 additions and 87 deletions

@@ -29,7 +29,7 @@ following are most significant:
   The path to a binary file for the product (browser) to test against.

 ``--webdriver-binary`` (required if product is `chrome`)
-  The path to a `*driver` binary; e.g., a `chromedriver` binary.
+  The path to a `driver` binary; e.g., a `chromedriver` binary.

 ``--certutil-binary`` (required if product is `firefox` [#]_)
   The path to a `certutil` binary (for tests that must be run over https).

@@ -43,13 +43,18 @@ following are most significant:
 ``--prefs-root`` (required only when testing a Firefox binary)
   The path to a directory containing Firefox test-harness preferences. [#]_

+``--config`` (should default to `wptrunner.default.ini`)
+  The path to the config (ini) file.
+
 .. [#] The ``--certutil-binary`` option is required when the product is
    ``firefox`` unless ``--ssl-type=none`` is specified.

 .. [#] The ``--metadata`` path is to a directory that contains:

-       * a ``MANIFEST.json`` file (the web-platform-tests documentation has
-         instructions on generating this file); and
+       * a ``MANIFEST.json`` file (instructions on generating this file are
+         available in the `detailed documentation
+         <http://wptrunner.readthedocs.org/en/latest/usage.html#installing-wptrunner>`_);
+         and
        * (optionally) any expectation files (see below)

 .. [#] Example ``--prefs-root`` value: ``~/mozilla-central/testing/profiles``.

@@ -125,7 +130,7 @@ input to the `wptupdate` tool.
 Expectation File Format
 ~~~~~~~~~~~~~~~~~~~~~~~

-Metadat about tests, notably including their expected results, is
+Metadata about tests, notably including their expected results, is
 stored in a modified ini-like format that is designed to be human
 editable, but also to be machine updatable.

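Editor's note (illustration, not part of the commit): a minimal expectation file in this format, assuming a testharness test with a single subtest, might look like::

    [test.html]
        type: testharness
        [The name of a subtest]
            expected:
                if debug: FAIL
                TIMEOUT

Here ``debug`` is a run-info property usable in conditions, and the bare ``TIMEOUT`` is the fallback status when no condition matches.
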
@@ -28,19 +28,19 @@ environment created as above::
   pip install -e ./

 In addition to the dependencies installed by pip, wptrunner requires
-a copy of the web-platform-tests repository. That can be located
-anywhere on the filesystem, but the easiest option is to put it within
-the wptrunner checkout directory, as a subdirectory named ``tests``::
+a copy of the web-platform-tests repository. This can be located
+anywhere on the filesystem, but the easiest option is to put it
+under the same parent directory as the wptrunner checkout::

-  git clone https://github.com/w3c/web-platform-tests.git tests
+  git clone https://github.com/w3c/web-platform-tests.git

 It is also necessary to generate a web-platform-tests ``MANIFEST.json``
-file. It's recommended to put that within the wptrunner
-checkout directory, in a subdirectory named ``meta``::
+file. It's recommended to also put that under the same parent directory as
+the wptrunner checkout, in a directory named ``meta``::

   mkdir meta
-  cd tests
-  python tools/scripts/manifest.py ../meta/MANIFEST.json
+  cd web-platform-tests
+  python manifest --path ../meta/MANIFEST.json

 The ``MANIFEST.json`` file needs to be regenerated each time the
 web-platform-tests checkout is updated. To aid with the update process

@@ -74,6 +74,9 @@ takes multiple options, of which the following are most significant:
 ``--prefs-root`` (required only when testing a Firefox binary)
   The path to a directory containing Firefox test-harness preferences. [#]_

+``--config`` (should default to `wptrunner.default.ini`)
+  The path to the config (ini) file.
+
 .. [#] The ``--certutil-binary`` option is required when the product is
    ``firefox`` unless ``--ssl-type=none`` is specified.

@@ -94,10 +97,17 @@ The following examples show how to start wptrunner with various options.
 Starting wptrunner
 ------------------

+The examples below assume the following directory layout,
+though no specific folder structure is required::
+
+  ~/testtwf/wptrunner          # wptrunner checkout
+  ~/testtwf/web-platform-tests # web-platform-tests checkout
+  ~/testtwf/meta               # metadata
+
 To test a Firefox Nightly build in an OS X environment, you might start
 wptrunner using something similar to the following example::

-  wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
+  wptrunner --metadata=~/testtwf/meta/ --tests=~/testtwf/web-platform-tests/ \
     --binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/dist/Nightly.app/Contents/MacOS/firefox \
     --certutil-binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/security/nss/cmd/certutil/certutil \
     --prefs-root=~/mozilla-central/testing/profiles

@@ -106,7 +116,7 @@ wptrunner using something similar to the following example::
 And to test a Chromium build in an OS X environment, you might start
 wptrunner using something similar to the following example::

-  wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
+  wptrunner --metadata=~/testtwf/meta/ --tests=~/testtwf/web-platform-tests/ \
     --binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
     --webdriver-binary=/usr/local/bin/chromedriver --product=chrome

@@ -118,7 +128,7 @@ To restrict a test run just to tests in a particular web-platform-tests
 subdirectory, specify the directory name in the positional arguments after
 the options; for example, run just the tests in the `dom` subdirectory::

-  wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
+  wptrunner --metadata=~/testtwf/meta --tests=~/testtwf/web-platform-tests/ \
     --binary=/path/to/firefox --certutil-binary=/path/to/certutil \
     --prefs-root=/path/to/testing/profiles \
     dom

@@ -180,7 +190,7 @@ Configuration File

 wptrunner uses a ``.ini`` file to control some configuration
 sections. The file has three sections; ``[products]``,
-``[paths]`` and ``[web-platform-tests]``.
+``[manifest:default]`` and ``[web-platform-tests]``.

 ``[products]`` is used to
 define the set of available products. By default this section is empty

@@ -195,12 +205,12 @@ e.g.::
   chrome =
   netscape4 = path/to/netscape.py

-``[paths]`` specifies the default paths for the tests and metadata,
+``[manifest:default]`` specifies the default paths for the tests and metadata,
 relative to the config file. For example::

-  [paths]
-  tests = checkouts/web-platform-tests
-  metadata = /home/example/wpt/metadata
+  [manifest:default]
+  tests = ~/testtwf/web-platform-tests
+  metadata = ~/testtwf/meta


 ``[web-platform-tests]`` is used to set the properties of the upstream

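Editor's note (illustration, not part of the commit): combining the fragments above, a minimal complete config under the new section name might read::

    [products]
    chrome =

    [manifest:default]
    tests = ~/testtwf/web-platform-tests
    metadata = ~/testtwf/meta
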
@@ -192,7 +192,7 @@ class B2GExecutorBrowser(ExecutorBrowser):

         import sys, subprocess

-        self.device = mozdevice.ADBDevice()
+        self.device = mozdevice.ADBB2G()
         self.device.forward("tcp:%s" % self.marionette_port,
                             "tcp:2828")
         self.executor = None

@@ -28,7 +28,8 @@ __wptrunner__ = {"product": "firefox",
                  "browser_kwargs": "browser_kwargs",
                  "executor_kwargs": "executor_kwargs",
                  "env_options": "env_options",
-                 "run_info_extras": "run_info_extras"}
+                 "run_info_extras": "run_info_extras",
+                 "update_properties": "update_properties"}


 def check_args(**kwargs):

@@ -54,7 +55,7 @@ def executor_kwargs(test_type, server_config, cache_manager, run_info_data,
                                            cache_manager, **kwargs)
     executor_kwargs["close_after_done"] = True
     if kwargs["timeout_multiplier"] is None:
-        if kwargs["gecko_e10s"] and test_type == "reftest":
+        if test_type == "reftest":
             if run_info_data["debug"]:
                 executor_kwargs["timeout_multiplier"] = 4
             else:

@@ -71,9 +72,14 @@ def env_options():
             "certificate_domain": "web-platform.test",
             "supports_debugger": True}


 def run_info_extras(**kwargs):
     return {"e10s": kwargs["gecko_e10s"]}


+def update_properties():
+    return ["debug", "e10s", "os", "version", "processor", "bits"], {"debug", "e10s"}
+
+
 class FirefoxBrowser(Browser):
     used_ports = set()

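Editor's note (illustration, not part of the commit): per the docstrings added to ``group_conditionals`` later in this diff, the pair returned by ``update_properties`` is ``(property_order, boolean_properties)``: the list ranks run-info keys from most to least significant when grouping results into conditions, and the set names keys written as bare booleans rather than equality comparisons. A sketch of the contract::

    # Hypothetical check mirroring the firefox values above.
    property_order, boolean_properties = update_properties()
    assert property_order[0] == "debug"             # most significant first
    assert boolean_properties == {"debug", "e10s"}  # written as bare names
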
@@ -17,7 +17,9 @@ __wptrunner__ = {"product": "servo",
                               "reftest": "ServoRefTestExecutor"},
                  "browser_kwargs": "browser_kwargs",
                  "executor_kwargs": "executor_kwargs",
-                 "env_options": "env_options"}
+                 "env_options": "env_options",
+                 "run_info_extras": "run_info_extras",
+                 "update_properties": "update_properties"}


 def check_args(**kwargs):

@@ -47,8 +49,16 @@ def env_options():
             "supports_debugger": True}


+def run_info_extras(**kwargs):
+    return {"backend": kwargs["servo_backend"]}
+
+
+def update_properties():
+    return ["debug", "os", "version", "processor", "bits", "backend"], None
+
+
 def render_arg(render_backend):
-    return {"cpu": "--cpu"}[render_backend]
+    return {"cpu": "--cpu", "webrender": "--webrender"}[render_backend]


 class ServoBrowser(NullBrowser):

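Editor's note (illustration, not part of the commit): ``render_arg`` maps the ``servo_backend`` run-info value onto a Servo command-line flag, so the new dictionary entry lets a webrender run be launched with ``--webrender``::

    # Hypothetical usage mirroring the mapping above.
    assert render_arg("cpu") == "--cpu"
    assert render_arg("webrender") == "--webrender"
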
|
@ -23,7 +23,9 @@ __wptrunner__ = {"product": "servodriver",
|
|||
"reftest": "ServoWebDriverRefTestExecutor"},
|
||||
"browser_kwargs": "browser_kwargs",
|
||||
"executor_kwargs": "executor_kwargs",
|
||||
"env_options": "env_options"}
|
||||
"env_options": "env_options",
|
||||
"run_info_extras": "run_info_extras",
|
||||
"update_properties": "update_properties"}
|
||||
|
||||
hosts_text = """127.0.0.1 web-platform.test
|
||||
127.0.0.1 www.web-platform.test
|
||||
|
@@ -59,6 +61,14 @@ def env_options():
             "supports_debugger": True}


+def run_info_extras(**kwargs):
+    return {"backend": kwargs["servo_backend"]}
+
+
+def update_properties():
+    return ["debug", "os", "version", "processor", "bits", "backend"], None
+
+
 def make_hosts_file():
     hosts_fd, hosts_path = tempfile.mkstemp()
     with os.fdopen(hosts_fd, "w") as f:

@@ -88,6 +98,7 @@ class ServoWebDriverBrowser(Browser):

         env = os.environ.copy()
         env["HOST_FILE"] = self.hosts_path
+        env["RUST_BACKTRACE"] = "1"

         debug_args, command = browser_command(self.binary,
                                               [render_arg(self.render_backend), "--hard-fail",

@@ -107,12 +107,6 @@ class MarionetteProtocol(Protocol):
         return True

     def after_connect(self):
-        # Turn off debug-level logging by default since this is so verbose
-        with self.marionette.using_context("chrome"):
-            self.marionette.execute_script("""
-                Components.utils.import("resource://gre/modules/Log.jsm");
-                Log.repository.getLogger("Marionette").level = Log.Level.Info;
-                """)
         self.load_runner("http")

     def load_runner(self, protocol):

@@ -87,7 +87,7 @@ class ServoTestharnessExecutor(ProcessTestExecutor):

         env = os.environ.copy()
         env["HOST_FILE"] = self.hosts_path
-
+        env["RUST_BACKTRACE"] = "1"

         if not self.interactive:

@@ -223,6 +223,7 @@ class ServoRefTestExecutor(ProcessTestExecutor):

         env = os.environ.copy()
         env["HOST_FILE"] = self.hosts_path
+        env["RUST_BACKTRACE"] = "1"

         if not self.interactive:
             self.proc = ProcessHandler(self.command,

@@ -8,6 +8,7 @@ The manifest is represented by a tree of IncludeManifest objects, the root
 representing the file and each subnode representing a subdirectory that should
 be included or excluded.
 """
+import glob
 import os
 import urlparse

@@ -90,15 +91,22 @@ class IncludeManifest(ManifestItem):
             variant += "?" + query

         maybe_path = os.path.join(rest, last)
+        paths = glob.glob(maybe_path)

-        if os.path.exists(maybe_path):
-            for manifest, data in test_manifests.iteritems():
-                rel_path = os.path.relpath(maybe_path, data["tests_path"])
-                if ".." not in rel_path.split(os.sep):
-                    url = data["url_base"] + rel_path.replace(os.path.sep, "/") + variant
-                    break
+        if paths:
+            urls = []
+            for path in paths:
+                for manifest, data in test_manifests.iteritems():
+                    rel_path = os.path.relpath(path, data["tests_path"])
+                    if ".." not in rel_path.split(os.sep):
+                        urls.append(data["url_base"] + rel_path.replace(os.path.sep, "/") + variant)
+                        break
+        else:
+            urls = [url]

         assert direction in ("include", "exclude")

-        components = self._get_components(url)
+        for url in urls:
+            components = self._get_components(url)

-        node = self
+            node = self

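Editor's note (not part of the commit): replacing ``os.path.exists`` with ``glob.glob`` means a single include/exclude path may now expand to several tests, so a hypothetical pattern such as ``dom/nodes/Document-*`` resolves every matching path to a URL instead of requiring one exact filename; the new ``urls`` list and the ``for url in urls:`` loop carry those multiple matches through.
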
@@ -49,13 +49,18 @@ def data_cls_getter(output_node, visited_node):


 class ExpectedManifest(ManifestItem):
-    def __init__(self, node, test_path=None, url_base=None):
+    def __init__(self, node, test_path=None, url_base=None, property_order=None,
+                 boolean_properties=None):
         """Object representing all the tests in a particular manifest

         :param node: AST Node associated with this object. If this is None,
                      a new AST is created to associate with this manifest.
         :param test_path: Path of the test file associated with this manifest.
-        :param url_base: Base url for serving the tests in this manifest
+        :param url_base: Base url for serving the tests in this manifest.
+        :param property_order: List of properties to use in expectation metadata
+                               from most to least significant.
+        :param boolean_properties: Set of properties in property_order that should
+                                   be treated as boolean.
         """
         if node is None:
             node = DataNode(None)

@@ -65,6 +70,8 @@ class ExpectedManifest(ManifestItem):
         self.url_base = url_base
         assert self.url_base is not None
         self.modified = False
+        self.boolean_properties = boolean_properties
+        self.property_order = property_order

     def append(self, child):
         ManifestItem.append(self, child)

@@ -229,7 +236,10 @@ class TestNode(ManifestItem):
                 self.set("expected", status, condition=None)
                 final_conditionals.append(self._data["expected"][-1])
         else:
-            for conditional_node, status in group_conditionals(self.new_expected):
+            for conditional_node, status in group_conditionals(
+                    self.new_expected,
+                    property_order=self.root.property_order,
+                    boolean_properties=self.root.boolean_properties):
                 if status != unconditional_status:
                     self.set("expected", status, condition=conditional_node.children[0])
                     final_conditionals.append(self._data["expected"][-1])

@@ -308,18 +318,30 @@ class SubtestNode(TestNode):
         return True


-def group_conditionals(values):
+def group_conditionals(values, property_order=None, boolean_properties=None):
     """Given a list of Result objects, return a list of
     (conditional_node, status) pairs representing the conditional
     expressions that are required to match each status

-    :param values: List of Results"""
+    :param values: List of Results
+    :param property_order: List of properties to use in expectation metadata
+                           from most to least significant.
+    :param boolean_properties: Set of properties in property_order that should
+                               be treated as boolean."""

     by_property = defaultdict(set)
     for run_info, status in values:
         for prop_name, prop_value in run_info.iteritems():
             by_property[(prop_name, prop_value)].add(status)

+    if property_order is None:
+        property_order = ["debug", "os", "version", "processor", "bits"]
+
+    if boolean_properties is None:
+        boolean_properties = set(["debug"])
+    else:
+        boolean_properties = set(boolean_properties)
+
     # If we have more than one value, remove any properties that are common
     # for all the values
     if len(values) > 1:

@@ -328,11 +350,9 @@ def group_conditionals(values):
             del by_property[key]

     properties = set(item[0] for item in by_property.iterkeys())

-    prop_order = ["debug", "e10s", "os", "version", "processor", "bits"]
-
     include_props = []

-    for prop in prop_order:
+    for prop in property_order:
         if prop in properties:
             include_props.append(prop)

@@ -343,28 +363,33 @@ def group_conditionals(values):
         if prop_set in conditions:
             continue

-        expr = make_expr(prop_set, status)
+        expr = make_expr(prop_set, status, boolean_properties=boolean_properties)
         conditions[prop_set] = (expr, status)

     return conditions.values()


-def make_expr(prop_set, status):
+def make_expr(prop_set, status, boolean_properties=None):
     """Create an AST that returns the value ``status`` given all the
-    properties in prop_set match."""
+    properties in prop_set match.
+
+    :param prop_set: tuple of (property name, value) pairs for each
+                     property in this expression and the value it must match
+    :param status: Status on RHS when all the given properties match
+    :param boolean_properties: Set of properties in property_order that should
+                               be treated as boolean.
+    """
     root = ConditionalNode()

     assert len(prop_set) > 0

-    no_value_props = set(["debug", "e10s"])
-
     expressions = []
     for prop, value in prop_set:
         number_types = (int, float, long)
         value_cls = (NumberNode
                      if type(value) in number_types
                      else StringNode)
-        if prop not in no_value_props:
+        if prop not in boolean_properties:
             expressions.append(
                 BinaryExpressionNode(
                     BinaryOperatorNode("=="),

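Editor's note (illustration, not part of the commit): the distinction ``make_expr`` draws shows up directly in the generated metadata; a boolean property appears as a bare name in the condition, while any other property is compared with ``==``, e.g.::

    expected:
        if debug: FAIL
        if os == "win": TIMEOUT
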
@@ -397,24 +422,32 @@ def make_expr(prop_set, status):
     return root


-def get_manifest(metadata_root, test_path, url_base):
+def get_manifest(metadata_root, test_path, url_base, property_order=None,
+                 boolean_properties=None):
     """Get the ExpectedManifest for a particular test path, or None if there is no
     metadata stored for that test path.

     :param metadata_root: Absolute path to the root of the metadata directory
     :param test_path: Path to the test(s) relative to the test root
     :param url_base: Base url for serving the tests in this manifest
-    """
+    :param property_order: List of properties to use in expectation metadata
+                           from most to least significant.
+    :param boolean_properties: Set of properties in property_order that should
+                               be treated as boolean."""
     manifest_path = expected.expected_path(metadata_root, test_path)
     try:
         with open(manifest_path) as f:
-            return compile(f, test_path, url_base)
+            return compile(f, test_path, url_base, property_order=property_order,
+                           boolean_properties=boolean_properties)
     except IOError:
         return None


-def compile(manifest_file, test_path, url_base):
+def compile(manifest_file, test_path, url_base, property_order=None,
+            boolean_properties=None):
     return conditional.compile(manifest_file,
                                data_cls_getter=data_cls_getter,
                                test_path=test_path,
-                               url_base=url_base)
+                               url_base=url_base,
+                               property_order=property_order,
+                               boolean_properties=boolean_properties)

|
@ -32,7 +32,7 @@ def load_test_manifests(serve_root, test_paths):
|
|||
|
||||
def update_expected(test_paths, serve_root, log_file_names,
|
||||
rev_old=None, rev_new="HEAD", ignore_existing=False,
|
||||
sync_root=None):
|
||||
sync_root=None, property_order=None, boolean_properties=None):
|
||||
"""Update the metadata files for web-platform-tests based on
|
||||
the results obtained in a previous run"""
|
||||
|
||||
|
@@ -51,7 +51,9 @@ def update_expected(test_paths, serve_root, log_file_names,

     expected_map_by_manifest = update_from_logs(manifests,
                                                 *log_file_names,
-                                                ignore_existing=ignore_existing)
+                                                ignore_existing=ignore_existing,
+                                                property_order=property_order,
+                                                boolean_properties=boolean_properties)

     for test_manifest, expected_map in expected_map_by_manifest.iteritems():
         url_base = manifests[test_manifest]["url_base"]

@@ -127,14 +129,19 @@ def unexpected_changes(manifests, change_data, files_changed):


 def update_from_logs(manifests, *log_filenames, **kwargs):
-    ignore_existing = kwargs.pop("ignore_existing", False)
+    ignore_existing = kwargs.get("ignore_existing", False)
+    property_order = kwargs.get("property_order")
+    boolean_properties = kwargs.get("boolean_properties")

     expected_map = {}
     id_test_map = {}

     for test_manifest, paths in manifests.iteritems():
-        expected_map_manifest, id_path_map_manifest = create_test_tree(paths["metadata_path"],
-                                                                       test_manifest)
+        expected_map_manifest, id_path_map_manifest = create_test_tree(
+            paths["metadata_path"],
+            test_manifest,
+            property_order=property_order,
+            boolean_properties=boolean_properties)
         expected_map[test_manifest] = expected_map_manifest
         id_test_map.update(id_path_map_manifest)

@@ -284,15 +291,22 @@ class ExpectedUpdater(object):
         del self.test_cache[test_id]


-def create_test_tree(metadata_path, test_manifest):
+def create_test_tree(metadata_path, test_manifest, property_order=None,
+                     boolean_properties=None):
     expected_map = {}
     id_test_map = {}
     exclude_types = frozenset(["stub", "helper", "manual"])
     include_types = set(manifest.item_types) - exclude_types
     for test_path, tests in test_manifest.itertypes(*include_types):
-        expected_data = load_expected(test_manifest, metadata_path, test_path, tests)
+        expected_data = load_expected(test_manifest, metadata_path, test_path, tests,
+                                      property_order=property_order,
+                                      boolean_properties=boolean_properties)
         if expected_data is None:
-            expected_data = create_expected(test_manifest, test_path, tests)
+            expected_data = create_expected(test_manifest,
+                                            test_path,
+                                            tests,
+                                            property_order=property_order,
+                                            boolean_properties=boolean_properties)

         for test in tests:
             id_test_map[test.id] = (test_manifest, test)

@@ -301,17 +315,23 @@ def create_test_tree(metadata_path, test_manifest):
     return expected_map, id_test_map


-def create_expected(test_manifest, test_path, tests):
-    expected = manifestupdate.ExpectedManifest(None, test_path, test_manifest.url_base)
+def create_expected(test_manifest, test_path, tests, property_order=None,
+                    boolean_properties=None):
+    expected = manifestupdate.ExpectedManifest(None, test_path, test_manifest.url_base,
+                                               property_order=property_order,
+                                               boolean_properties=boolean_properties)
     for test in tests:
         expected.append(manifestupdate.TestNode.create(test.item_type, test.id))
     return expected


-def load_expected(test_manifest, metadata_path, test_path, tests):
+def load_expected(test_manifest, metadata_path, test_path, tests, property_order=None,
+                  boolean_properties=None):
     expected_manifest = manifestupdate.get_manifest(metadata_path,
                                                     test_path,
-                                                    test_manifest.url_base)
+                                                    test_manifest.url_base,
+                                                    property_order=property_order,
+                                                    boolean_properties=boolean_properties)
     if expected_manifest is None:
         return

@@ -55,3 +55,18 @@ def load_product(config, product):
             browser_cls, browser_kwargs,
             executor_classes, executor_kwargs,
             env_options, run_info_extras)
+
+
+def load_product_update(config, product):
+    """Return tuple of (property_order, boolean_properties) indicating the
+    run_info properties to use when constructing the expectation data for
+    this product. None for either key indicates that the default keys
+    appropriate for distinguishing based on platform will be used."""
+
+    module = product_module(config, product)
+    data = module.__wptrunner__
+
+    update_properties = (getattr(module, data["update_properties"])()
+                         if "update_properties" in data else (None, None))
+
+    return update_properties

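Editor's note (illustration, not part of the commit): a sketch of how a caller might consume this helper; ``config`` stands in for a loaded configuration object, and the fallbacks mirror the defaults in ``group_conditionals`` above::

    property_order, boolean_properties = load_product_update(config, "firefox")
    if property_order is None:
        property_order = ["debug", "os", "version", "processor", "bits"]
    if boolean_properties is None:
        boolean_properties = set(["debug"])
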
@@ -4,10 +4,21 @@

 import os

-from .. import metadata
+from .. import metadata, products

 from base import Step, StepRunner

+
+class GetUpdatePropertyList(Step):
+    provides = ["property_order", "boolean_properties"]
+
+    def create(self, state):
+        property_order, boolean_properties = products.load_product_update(
+            state.config, state.product)
+        state.property_order = property_order
+        state.boolean_properties = boolean_properties
+
+
 class UpdateExpected(Step):
     """Do the metadata update on the local checkout"""

@@ -24,7 +35,9 @@ class UpdateExpected(Step):
                                 state.run_log,
                                 rev_old=None,
                                 ignore_existing=state.ignore_existing,
-                                sync_root=sync_root)
+                                sync_root=sync_root,
+                                property_order=state.property_order,
+                                boolean_properties=state.boolean_properties)


 class CreateMetadataPatch(Step):

@@ -57,5 +70,6 @@ class CreateMetadataPatch(Step):

 class MetadataUpdateRunner(StepRunner):
     """(Sub)Runner for updating metadata"""
-    steps = [UpdateExpected,
+    steps = [GetUpdatePropertyList,
+             UpdateExpected,
              CreateMetadataPatch]

@@ -91,6 +91,8 @@ class UpdateMetadata(Step):
         state.ignore_existing = kwargs["ignore_existing"]
         state.no_patch = kwargs["no_patch"]
         state.suite_name = kwargs["suite_name"]
+        state.product = kwargs["product"]
+        state.config = kwargs["config"]
         runner = MetadataUpdateRunner(self.logger, state)
         runner.run()

@@ -155,7 +155,7 @@ def create_parser(product_choices=None):
     gecko_group.add_argument("--prefs-root", dest="prefs_root", action="store", type=abs_path,
                              help="Path to the folder containing browser prefs")
     gecko_group.add_argument("--e10s", dest="gecko_e10s", action="store_true",
-                             help="Path to the folder containing browser prefs")
+                             help="Run tests with electrolysis preferences")

     b2g_group = parser.add_argument_group("B2G-specific")
     b2g_group.add_argument("--b2g-no-backup", action="store_true", default=False,

@@ -338,12 +338,25 @@ def check_args(kwargs):

     return kwargs

+
+def check_args_update(kwargs):
+    set_from_config(kwargs)
+
+    if kwargs["product"] is None:
+        kwargs["product"] = "firefox"
+
+
-def create_parser_update():
+def create_parser_update(product_choices=None):
     from mozlog.structured import commandline

+    import products
+
+    if product_choices is None:
+        config_data = config.load()
+        product_choices = products.products_enabled(config_data)
+
     parser = argparse.ArgumentParser("web-platform-tests-update",
                                      description="Update script for web-platform-tests tests.")
+    parser.add_argument("--product", action="store", choices=product_choices,
+                        default=None, help="Browser for which metadata is being updated")
     parser.add_argument("--config", action="store", type=abs_path, help="Path to config file")
     parser.add_argument("--metadata", action="store", type=abs_path, dest="metadata_root",
                         help="Path to the folder containing test metadata"),

@@ -386,7 +399,7 @@ def parse_args():

 def parse_args_update():
     parser = create_parser_update()
     rv = vars(parser.parse_args())
-    set_from_config(rv)
+    check_args_update(rv)
     return rv