Update wptrunner.

Ms2ger 2015-06-13 11:30:37 +02:00
parent 4624fc18d2
commit c670894aed
119 changed files with 928 additions and 383 deletions

View file

@@ -25,9 +25,15 @@ following are most significant:

 ``--product`` (defaults to `firefox`)
   The product to test against: `b2g`, `chrome`, `firefox`, or `servo`.

-``--binary`` (required)
+``--binary`` (required if product is `firefox` or `servo`)
   The path to a binary file for the product (browser) to test against.

+``--webdriver-binary`` (required if product is `chrome`)
+  The path to a `*driver` binary; e.g., a `chromedriver` binary.
+
+``--certutil-binary`` (required if product is `firefox` [#]_)
+  The path to a `certutil` binary (for tests that must be run over https).
+
 ``--metadata`` (required)
   The path to a directory containing test metadata. [#]_
@@ -37,6 +43,9 @@ following are most significant:

 ``--prefs-root`` (required only when testing a Firefox binary)
   The path to a directory containing Firefox test-harness preferences. [#]_

+.. [#] The ``--certutil-binary`` option is required when the product is
+   ``firefox`` unless ``--ssl-type=none`` is specified.
+
 .. [#] The ``--metadata`` path is to a directory that contains:

   * a ``MANIFEST.json`` file (the web-platform-tests documentation has
@@ -56,7 +65,8 @@ To test a Firefox Nightly build in an OS X environment, you might start
 wptrunner using something similar to the following example::

   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
-    --binary=~/mozilla-central/obj-x86_64-apple-darwin14.0.0/dist/Nightly.app/Contents/MacOS/firefox \
+    --binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/dist/Nightly.app/Contents/MacOS/firefox \
+    --certutil-binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/security/nss/cmd/certutil/certutil \
     --prefs-root=~/mozilla-central/testing/profiles

 And to test a Chromium build in an OS X environment, you might start
@@ -64,18 +74,20 @@ wptrunner using something similar to the following example::

   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
     --binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
-    --product=chrome
+    --webdriver-binary=/usr/local/bin/chromedriver --product=chrome

 -------------------------------------
 Example: How to run a subset of tests
 -------------------------------------

 To restrict a test run just to tests in a particular web-platform-tests
-subdirectory, use ``--include`` with the directory name; for example::
+subdirectory, specify the directory name in the positional arguments after
+the options; for example, run just the tests in the `dom` subdirectory::

   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
-    --binary=/path/to/firefox --prefs-root=/path/to/testing/profiles \
-    --include=dom
+    --binary=/path/to/firefox --certutil-binary=/path/to/certutil \
+    --prefs-root=/path/to/testing/profiles \
+    dom

 Output
 ~~~~~~
@@ -95,7 +107,8 @@ log to a file and a human-readable summary to stdout, you might start
 wptrunner using something similar to the following example::

   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
-    --binary=/path/to/firefox --prefs-root=/path/to/testing/profiles
+    --binary=/path/to/firefox --certutil-binary=/path/to/certutil \
+    --prefs-root=/path/to/testing/profiles \
     --log-raw=output.log --log-mach=-

 Expectation Data

View file

@@ -56,9 +56,15 @@ takes multiple options, of which the following are most significant:

 ``--product`` (defaults to `firefox`)
   The product to test against: `b2g`, `chrome`, `firefox`, or `servo`.

-``--binary`` (required)
+``--binary`` (required if product is `firefox` or `servo`)
   The path to a binary file for the product (browser) to test against.

+``--webdriver-binary`` (required if product is `chrome`)
+  The path to a `*driver` binary; e.g., a `chromedriver` binary.
+
+``--certutil-binary`` (required if product is `firefox` [#]_)
+  The path to a `certutil` binary (for tests that must be run over https).
+
 ``--metadata`` (required only when not `using default paths`_)
   The path to a directory containing test metadata. [#]_
@@ -68,6 +74,9 @@ takes multiple options, of which the following are most significant:

 ``--prefs-root`` (required only when testing a Firefox binary)
   The path to a directory containing Firefox test-harness preferences. [#]_

+.. [#] The ``--certutil-binary`` option is required when the product is
+   ``firefox`` unless ``--ssl-type=none`` is specified.
+
 .. [#] The ``--metadata`` path is to a directory that contains:

   * a ``MANIFEST.json`` file (the web-platform-tests documentation has
@@ -89,26 +98,30 @@ To test a Firefox Nightly build in an OS X environment, you might start
 wptrunner using something similar to the following example::

   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
-    --binary=~/mozilla-central/obj-x86_64-apple-darwin14.0.0/dist/Nightly.app/Contents/MacOS/firefox \
+    --binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/dist/Nightly.app/Contents/MacOS/firefox \
+    --certutil-binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/security/nss/cmd/certutil/certutil \
     --prefs-root=~/mozilla-central/testing/profiles

 And to test a Chromium build in an OS X environment, you might start
 wptrunner using something similar to the following example::

   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
     --binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
-    --product=chrome
+    --webdriver-binary=/usr/local/bin/chromedriver --product=chrome

 --------------------
 Running test subsets
 --------------------

 To restrict a test run just to tests in a particular web-platform-tests
-subdirectory, use ``--include`` with the directory name; for example::
+subdirectory, specify the directory name in the positional arguments after
+the options; for example, run just the tests in the `dom` subdirectory::

   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
-    --binary=/path/to/firefox --prefs-root=/path/to/testing/profiles \
-    --include=dom
+    --binary=/path/to/firefox --certutil-binary=/path/to/certutil \
+    --prefs-root=/path/to/testing/profiles \
+    dom

 -------------------
 Running in parallel

View file

@@ -0,0 +1,2 @@
+prefs: ["browser.display.foreground_color:#FF0000",
+        "browser.display.background_color:#000000"]
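Each entry in the `prefs` list above is a single `name:value` string; splitting on the first colon recovers the pref name and its value. A minimal standalone sketch of that parsing (the helper name is hypothetical, but it mirrors the `prefs` handling added to `manifestexpected.py` later in this commit):

```python
def parse_pref(entry):
    # Split on the first colon only: pref names never contain colons,
    # but values such as color codes or URLs might.
    name, _, value = entry.partition(":")
    return name, value

prefs = ["browser.display.foreground_color:#FF0000",
         "browser.display.background_color:#000000"]
print(dict(parse_pref(p) for p in prefs))
```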

View file

@@ -0,0 +1,2 @@
+[test_pref_reset.html]
+  prefs: [@Reset]

View file

@@ -0,0 +1 @@
+disabled: true

View file

@@ -0,0 +1,2 @@
+[testharness_1.html]
+  disabled: @False

View file

@@ -0,0 +1 @@
+tags: [dir-tag-1, dir-tag-2]

View file

@@ -0,0 +1,4 @@
+tags: [file-tag]
+
+[testharness_0.html]
+  tags: [test-tag]

View file

@@ -0,0 +1,2 @@
+[testharness_0.html]
+  tags: [test-1-tag]

View file

@@ -0,0 +1,4 @@
+tags: [file-tag]
+
+[testharness_2.html]
+  tags: [test-2-tag, @Reset]

View file

@@ -101,6 +101,8 @@ def settings_to_argv(settings):

 def set_from_args(settings, args):
     if args.test:
         settings["include"] = args.test
+    if args.tags:
+        settings["tags"] = args.tags

 def run(config, args):
     logger = structuredlog.StructuredLogger("web-platform-tests")
@@ -139,6 +141,8 @@ def get_parser():
                         help="Specific product to include in test run")
     parser.add_argument("--pdb", action="store_true",
                         help="Invoke pdb on uncaught exception")
+    parser.add_argument("--tag", action="append", dest="tags",
+                        help="tags to select tests")
     parser.add_argument("test", nargs="*",
                         help="Specific tests to include in test run")
     return parser
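The new `--tag` option relies on argparse's `append` action, so each repetition of the flag adds one entry to `args.tags`. A small self-contained illustration of that behavior:

```python
import argparse

parser = argparse.ArgumentParser()
# Same shape as the option added above: repeatable, collected into args.tags.
parser.add_argument("--tag", action="append", dest="tags",
                    help="tags to select tests")

args = parser.parse_args(["--tag", "dir-tag-1", "--tag", "test-tag"])
print(args.tags)  # ['dir-tag-1', 'test-tag']

# When --tag is never passed, args.tags is None rather than an empty list,
# which is why set_from_args guards with `if args.tags:`.
print(parser.parse_args([]).tags)  # None
```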

View file

@@ -0,0 +1,10 @@
+<!doctype html>
+<title>Example pref test</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<p>Test requires the pref browser.display.foreground_color to be set to #00FF00</p>
+<script>
+test(function() {
+  assert_equals(getComputedStyle(document.body).color, "rgb(255, 0, 0)");
+}, "Test that pref was set");
+</script>

View file

@@ -0,0 +1,10 @@
+<!doctype html>
+<title>Example pref test</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<p>Test requires the pref browser.display.foreground_color to be set to #00FF00</p>
+<script>
+test(function() {
+  assert_equals(getComputedStyle(document.body).color, "rgb(0, 0, 0)");
+}, "Test that pref was reset");
+</script>

View file

@@ -0,0 +1,10 @@
+<!doctype html>
+<title>Example pref test</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<p>Test requires the pref browser.display.foreground_color to be set to #FF0000</p>
+<script>
+test(function() {
+  assert_equals(getComputedStyle(document.body).color, "rgb(255, 0, 0)");
+}, "Test that pref was set");
+</script>

View file

@@ -1,5 +1,5 @@
 <!doctype html>
-<title>Example https test</title>
+<title>Example pref test</title>
 <script src="/resources/testharness.js"></script>
 <script src="/resources/testharnessreport.js"></script>
 <p>Test requires the pref browser.display.foreground_color to be set to #00FF00</p>

View file

@@ -0,0 +1,9 @@
+<!doctype html>
+<title>Test should be enabled</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+test(function() {
+  assert_true(true);
+}, "Test that should pass");
+</script>

View file

@@ -0,0 +1,9 @@
+<!doctype html>
+<title>Test</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+test(function() {
+  assert_true(true);
+}, "Test that should pass");
+</script>

View file

@@ -0,0 +1,9 @@
+<!doctype html>
+<title>Test</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+test(function() {
+  assert_true(true);
+}, "Test that should pass");
+</script>

View file

@@ -0,0 +1,9 @@
+<!doctype html>
+<title>Test</title>
+<script src="/resources/testharness.js"></script>
+<script src="/resources/testharnessreport.js"></script>
+<script>
+test(function() {
+  assert_true(true);
+}, "Test that should pass");
+</script>

View file

@@ -1,12 +1,8 @@
 <!doctype html>
-<title>Simple testharness.js usage</title>
+<title>Test should be disabled</title>
 <script src="/resources/testharness.js"></script>
 <script src="/resources/testharnessreport.js"></script>
 <script>
-test(function() {
-  assert_true(true);
-}, "Test that should pass");
 test(function() {
   assert_true(false);
 }, "Test that should fail");

View file

@@ -42,7 +42,8 @@ def browser_kwargs(test_environment, **kwargs):
             "no_backup": kwargs.get("b2g_no_backup", False)}

-def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
+def executor_kwargs(test_type, server_config, cache_manager, run_info_data,
+                    **kwargs):
     timeout_multiplier = kwargs["timeout_multiplier"]
     if timeout_multiplier is None:
         timeout_multiplier = 2

View file

@@ -20,7 +20,7 @@ __wptrunner__ = {"product": "chrome",

 def check_args(**kwargs):
-    require_arg(kwargs, "binary")
+    require_arg(kwargs, "webdriver_binary")

 def browser_kwargs(**kwargs):
@@ -28,15 +28,16 @@ def browser_kwargs(**kwargs):
             "webdriver_binary": kwargs["webdriver_binary"]}

-def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
+def executor_kwargs(test_type, server_config, cache_manager, run_info_data,
+                    **kwargs):
     from selenium.webdriver import DesiredCapabilities
     executor_kwargs = base_executor_kwargs(test_type, server_config,
                                            cache_manager, **kwargs)
     executor_kwargs["close_after_done"] = True
-    executor_kwargs["capabilities"] = dict(DesiredCapabilities.CHROME.items() +
-                                           {"chromeOptions":
-                                            {"binary": kwargs["binary"]}}.items())
+    executor_kwargs["capabilities"] = dict(DesiredCapabilities.CHROME.items())
+    if kwargs["binary"] is not None:
+        executor_kwargs["capabilities"]["chromeOptions"] = {"binary": kwargs["binary"]}
     return executor_kwargs
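The new capabilities logic above only attaches `chromeOptions` when a browser binary was actually supplied, so chromedriver can locate Chrome on its own otherwise. A selenium-free sketch of that shape (the base capability dict here is a stand-in, not selenium's real `DesiredCapabilities.CHROME`):

```python
# Stand-in for selenium's DesiredCapabilities.CHROME.
BASE_CHROME_CAPABILITIES = {"browserName": "chrome", "javascriptEnabled": True}

def build_capabilities(binary):
    # Copy the base capabilities, then pin a specific browser binary
    # only when one was given on the command line.
    capabilities = dict(BASE_CHROME_CAPABILITIES.items())
    if binary is not None:
        capabilities["chromeOptions"] = {"binary": binary}
    return capabilities

print(build_capabilities(None))
print(build_capabilities("/path/to/Chromium"))
```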

View file

@@ -46,10 +46,13 @@ def browser_kwargs(**kwargs):
             "ca_certificate_path": kwargs["ssl_env"].ca_cert_path()}

-def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
+def executor_kwargs(test_type, server_config, cache_manager, run_info_data,
+                    **kwargs):
     executor_kwargs = base_executor_kwargs(test_type, server_config,
                                            cache_manager, **kwargs)
     executor_kwargs["close_after_done"] = True
+    if run_info_data["debug"] and kwargs["timeout_multiplier"] is None:
+        executor_kwargs["timeout_multiplier"] = 3
     return executor_kwargs
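The Firefox change above triples the timeout multiplier for debug builds unless the user set one explicitly. Restated as a tiny standalone function (the names are illustrative, not wptrunner API):

```python
def pick_timeout_multiplier(run_info, requested):
    # Debug builds run markedly slower, so default to 3x there;
    # an explicitly requested multiplier always wins.
    if run_info["debug"] and requested is None:
        return 3
    return requested

print(pick_timeout_multiplier({"debug": True}, None))   # 3
print(pick_timeout_multiplier({"debug": True}, 5))      # 5
print(pick_timeout_multiplier({"debug": False}, None))  # None
```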

View file

@@ -29,7 +29,8 @@ def browser_kwargs(**kwargs):
             "debug_info": kwargs["debug_info"]}

-def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
+def executor_kwargs(test_type, server_config, cache_manager, run_info_data,
+                    **kwargs):
     rv = base_executor_kwargs(test_type, server_config,
                               cache_manager, **kwargs)
     rv["pause_after_test"] = kwargs["pause_after_test"]

View file

@@ -99,7 +99,7 @@ class TestExecutor(object):
         self.timeout_multiplier = timeout_multiplier
         self.debug_info = debug_info
         self.last_environment = {"protocol": "http",
-                                 "prefs": []}
+                                 "prefs": {}}
         self.protocol = None  # This must be set in subclasses

     @property

View file

@@ -139,55 +139,62 @@ class MarionetteProtocol(Protocol):
     def on_environment_change(self, old_environment, new_environment):
         #Unset all the old prefs
-        for name, _ in old_environment.get("prefs", []):
+        for name in old_environment.get("prefs", {}).iterkeys():
             value = self.executor.original_pref_values[name]
             if value is None:
                 self.clear_user_pref(name)
             else:
                 self.set_pref(name, value)

-        for name, value in new_environment.get("prefs", []):
+        for name, value in new_environment.get("prefs", {}).iteritems():
             self.executor.original_pref_values[name] = self.get_pref(name)
             self.set_pref(name, value)

     def set_pref(self, name, value):
+        if value.lower() not in ("true", "false"):
+            try:
+                int(value)
+            except ValueError:
+                value = "'%s'" % value
+        else:
+            value = value.lower()
+
         self.logger.info("Setting pref %s (%s)" % (name, value))
-        self.marionette.set_context(self.marionette.CONTEXT_CHROME)
         script = """
             let prefInterface = Components.classes["@mozilla.org/preferences-service;1"]
                                           .getService(Components.interfaces.nsIPrefBranch);
             let pref = '%s';
             let type = prefInterface.getPrefType(pref);
+            let value = %s;
             switch(type) {
                 case prefInterface.PREF_STRING:
-                    prefInterface.setCharPref(pref, '%s');
+                    prefInterface.setCharPref(pref, value);
                     break;
                 case prefInterface.PREF_BOOL:
-                    prefInterface.setBoolPref(pref, %s);
+                    prefInterface.setBoolPref(pref, value);
                     break;
                 case prefInterface.PREF_INT:
-                    prefInterface.setIntPref(pref, %s);
+                    prefInterface.setIntPref(pref, value);
                     break;
             }
-            """ % (name, value, value, value)
-        self.marionette.execute_script(script)
-        self.marionette.set_context(self.marionette.CONTEXT_CONTENT)
+            """ % (name, value)
+        with self.marionette.using_context(self.marionette.CONTEXT_CHROME):
+            self.marionette.execute_script(script)

     def clear_user_pref(self, name):
         self.logger.info("Clearing pref %s" % (name))
-        self.marionette.set_context(self.marionette.CONTEXT_CHROME)
         script = """
             let prefInterface = Components.classes["@mozilla.org/preferences-service;1"]
                                           .getService(Components.interfaces.nsIPrefBranch);
             let pref = '%s';
             prefInterface.clearUserPref(pref);
             """ % name
-        self.marionette.execute_script(script)
-        self.marionette.set_context(self.marionette.CONTEXT_CONTENT)
+        with self.marionette.using_context(self.marionette.CONTEXT_CHROME):
+            self.marionette.execute_script(script)

     def get_pref(self, name):
-        self.marionette.set_context(self.marionette.CONTEXT_CHROME)
-        self.marionette.execute_script("""
+        script = """
             let prefInterface = Components.classes["@mozilla.org/preferences-service;1"]
                                           .getService(Components.interfaces.nsIPrefBranch);
             let pref = '%s';
@@ -202,8 +209,9 @@ class MarionetteProtocol(Protocol):
                 case prefInterface.PREF_INVALID:
                     return null;
             }
-            """ % (name))
-        self.marionette.set_context(self.marionette.CONTEXT_CONTENT)
+            """ % name
+        with self.marionette.using_context(self.marionette.CONTEXT_CHROME):
+            self.marionette.execute_script(script)

 class MarionetteRun(object):
     def __init__(self, logger, func, marionette, url, timeout):
@@ -383,10 +391,7 @@ class MarionetteRefTestExecutor(RefTestExecutor):
                                   timeout).run()

     def _screenshot(self, marionette, url, timeout):
-        try:
-            marionette.navigate(url)
-        except errors.MarionetteException:
-            raise ExecutorException("ERROR", "Failed to load url %s" % (url,))
+        marionette.navigate(url)
         marionette.execute_async_script(self.wait_script)
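The value coercion added to `set_pref` turns the pref string from the metadata file into something that substitutes cleanly into the chrome-context script as a JavaScript literal. Extracted as a standalone function (a Python 3 sketch of the Python 2 original), it behaves like this:

```python
def coerce_pref_value(value):
    # Booleans are normalized to lowercase, integers pass through
    # unchanged, and anything else is single-quoted so it lands in the
    # generated script as a JS string literal.
    if value.lower() not in ("true", "false"):
        try:
            int(value)
        except ValueError:
            value = "'%s'" % value
    else:
        value = value.lower()
    return value

print(coerce_pref_value("True"))     # true
print(coerce_pref_value("42"))       # 42
print(coerce_pref_value("#FF0000"))  # '#FF0000'
```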

View file

@@ -62,9 +62,8 @@ class ServoTestharnessExecutor(ProcessTestExecutor):
         self.result_data = None
         self.result_flag = threading.Event()

-        debug_args, command = browser_command(self.binary, ["--cpu", "--hard-fail", "-z",
-                                                            "-u", "Servo/wptrunner",
-                                                            self.test_url(test)],
+        debug_args, command = browser_command(self.binary,
+            ["--cpu", "--hard-fail", "-u", "Servo/wptrunner", "-z", self.test_url(test)],
                                               self.debug_info)

         self.command = command
@@ -101,16 +100,19 @@ class ServoTestharnessExecutor(ProcessTestExecutor):
             self.proc.wait()

         proc_is_running = True
-        if self.result_flag.is_set() and self.result_data is not None:
-            self.result_data["test"] = test.url
-            result = self.convert_result(test, self.result_data)
-        else:
-            if self.proc.poll() is not None:
+
+        if self.result_flag.is_set():
+            if self.result_data is not None:
+                self.result_data["test"] = test.url
+                result = self.convert_result(test, self.result_data)
+            else:
+                self.proc.wait()
                 result = (test.result_cls("CRASH", None), [])
                 proc_is_running = False
-            else:
-                result = (test.result_cls("TIMEOUT", None), [])
+        else:
+            result = (test.result_cls("TIMEOUT", None), [])

         if proc_is_running:
             if self.pause_after_test:
                 self.logger.info("Pausing until the browser exits")
@@ -188,8 +190,8 @@ class ServoRefTestExecutor(ProcessTestExecutor):
         with TempFilename(self.tempdir) as output_path:
             self.command = [self.binary, "--cpu", "--hard-fail", "--exit",
-                            "-Z", "disable-text-aa,disable-canvas-aa", "--output=%s" % output_path,
-                            full_url]
+                            "-u", "Servo/wptrunner", "-Z", "disable-text-aa",
+                            "--output=%s" % output_path, full_url]

             env = os.environ.copy()
             env["HOST_FILE"] = self.hosts_path
@@ -200,7 +202,8 @@ class ServoRefTestExecutor(ProcessTestExecutor):
             try:
                 self.proc.run()
-                rv = self.proc.wait(timeout=test.timeout)
+                timeout = test.timeout * self.timeout_multiplier + 5
+                rv = self.proc.wait(timeout=timeout)
             except KeyboardInterrupt:
                 self.proc.kill()
                 raise
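The reftest change above now scales the per-test budget by the timeout multiplier and pads it with five seconds of slack before the Servo process is considered hung. As a one-line helper (name is illustrative):

```python
def servo_wait_timeout(test_timeout, timeout_multiplier):
    # Scale by the multiplier, then add five seconds of slack so a test
    # that legitimately uses its whole budget is not reported as hung.
    return test_timeout * timeout_multiplier + 5

print(servo_wait_timeout(10, 1))  # 15
print(servo_wait_timeout(10, 3))  # 35
```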

View file

@@ -84,7 +84,7 @@ class ServoWebDriverProtocol(Protocol):

 class ServoWebDriverRun(object):
-    def __init__(self, func, session, url, timeout):
+    def __init__(self, func, session, url, timeout, current_timeout=None):
         self.func = func
         self.result = None
         self.session = session
@@ -93,18 +93,10 @@ class ServoWebDriverRun(object):
         self.result_flag = threading.Event()

     def run(self):
-        timeout = self.timeout
-        try:
-            self.session.timeouts.script = timeout + extra_timeout
-        except IOError:
-            self.logger.error("Lost webdriver connection")
-            return Stop
-
         executor = threading.Thread(target=self._run)
         executor.start()

-        flag = self.result_flag.wait(timeout + 2 * extra_timeout)
+        flag = self.result_flag.wait(self.timeout + extra_timeout)
         if self.result is None:
             assert not flag
             self.result = False, ("EXTERNAL-TIMEOUT", None)
@@ -144,6 +136,7 @@ class ServoWebDriverTestharnessExecutor(TestharnessExecutor):
         self.protocol = ServoWebDriverProtocol(self, browser, capabilities=capabilities)
         with open(os.path.join(here, "testharness_servodriver.js")) as f:
             self.script = f.read()
+        self.timeout = None

     def on_protocol_change(self, new_protocol):
         pass
@@ -154,10 +147,20 @@ class ServoWebDriverTestharnessExecutor(TestharnessExecutor):
     def do_test(self, test):
         url = self.test_url(test)

+        timeout = test.timeout * self.timeout_multiplier + extra_timeout
+
+        if timeout != self.timeout:
+            try:
+                self.protocol.session.timeouts.script = timeout
+                self.timeout = timeout
+            except IOError:
+                self.logger.error("Lost webdriver connection")
+                return Stop
+
         success, data = ServoWebDriverRun(self.do_testharness,
                                           self.protocol.session,
                                           url,
-                                          test.timeout * self.timeout_multiplier).run()
+                                          timeout).run()

         if success:
             return self.convert_result(test, data)
@@ -172,8 +175,9 @@ class ServoWebDriverTestharnessExecutor(TestharnessExecutor):
                                        "url": strip_server(url),
                                        "timeout_multiplier": self.timeout_multiplier,
                                        "timeout": timeout * 1000}))
-        if "test" not in result:
-            result["test"] = strip_server(url)
+        # Prevent leaking every page in history until Servo develops a more sane
+        # page cache
+        session.back()
         return result
@@ -194,7 +198,7 @@ class ServoWebDriverRefTestExecutor(RefTestExecutor):
         self.protocol = ServoWebDriverProtocol(self, browser,
                                                capabilities=capabilities)
         self.implementation = RefTestImplementation(self)
+        self.timeout = None
         with open(os.path.join(here, "reftest-wait_servodriver.js")) as f:
             self.wait_script = f.read()
@@ -217,7 +221,17 @@ class ServoWebDriverRefTestExecutor(RefTestExecutor):
             return test.result_cls("ERROR", message), []

     def screenshot(self, test):
-        timeout = test.timeout * self.timeout_multiplier if self.debug_info is None else None
+        timeout = (test.timeout * self.timeout_multiplier + extra_timeout
+                   if self.debug_info is None else None)
+
+        if self.timeout != timeout:
+            try:
+                self.protocol.session.timeouts.script = timeout
+                self.timeout = timeout
+            except IOError:
+                self.logger.error("Lost webdriver connection")
+                return Stop
+
         return ServoWebDriverRun(self._screenshot,
                                  self.protocol.session,
                                  self.test_url(test),
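Both servodriver executors now cache the last script timeout they pushed to the session and only re-send it when it changes, since each update costs a webdriver round trip. A sketch of that caching pattern with a stubbed session object (the class names here are stand-ins, not wptrunner or webdriver API):

```python
class FakeTimeouts:
    # Stand-in for the webdriver session's timeouts object; counts how
    # often the script timeout is actually transmitted over the wire.
    def __init__(self):
        self.script = None
        self.round_trips = 0

class Executor:
    def __init__(self):
        self.timeout = None  # last timeout pushed to the session

    def update_script_timeout(self, timeouts, new_timeout):
        # Only talk to the session when the timeout actually changed.
        if new_timeout != self.timeout:
            timeouts.script = new_timeout
            timeouts.round_trips += 1
            self.timeout = new_timeout

timeouts, executor = FakeTimeouts(), Executor()
for t in (15, 15, 15, 25):
    executor.update_script_timeout(timeouts, t)
print(timeouts.round_trips)  # 2
```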

View file

@@ -5,17 +5,20 @@
 window.wrappedJSObject.timeout_multiplier = %(timeout_multiplier)d;
 window.wrappedJSObject.explicit_timeout = %(explicit_timeout)d;

-window.wrappedJSObject.done = function(tests, status) {
+window.wrappedJSObject.addEventListener("message", function listener(event) {
+  if (event.data.type != "complete") {
+    return;
+  }
+  window.wrappedJSObject.removeEventListener("message", listener);
   clearTimeout(timer);
-  var test_results = tests.map(function(x) {
-    return {name:x.name, status:x.status, message:x.message, stack:x.stack}
-  });
+  var tests = event.data.tests;
+  var status = event.data.status;
   marionetteScriptFinished({test:"%(url)s",
-                            tests:test_results,
+                            tests: tests,
                             status: status.status,
                             message: status.message,
                             stack: status.stack});
-}
+}, false);

 window.wrappedJSObject.win = window.open("%(abs_url)s", "%(window_id)s");

View file

@@ -5,17 +5,16 @@
 var callback = arguments[arguments.length - 1];
 window.timeout_multiplier = %(timeout_multiplier)d;

-window.done = function(tests, status) {
+window.addEventListener("message", function(event) {
+  var tests = event.data[0];
+  var status = event.data[1];
   clearTimeout(timer);
-  var test_results = tests.map(function(x) {
-    return {name:x.name, status:x.status, message:x.message, stack:x.stack}
-  });
   callback({test:"%(url)s",
-            tests:test_results,
+            tests: tests,
             status: status.status,
             message: status.message,
             stack: status.stack});
-}
+}, false);

 window.win = window.open("%(abs_url)s", "%(window_id)s");

View file

@@ -444,7 +444,7 @@ class Session(object):
             body = {"id": frame.json()}
         else:
             body = {"id": frame}
-        print body
         return self.send_command("POST", url, body)

     @command

View file

@@ -29,6 +29,42 @@ def data_cls_getter(output_node, visited_node):
     raise ValueError

+def disabled(node):
+    """Boolean indicating whether the test is disabled"""
+    try:
+        return node.get("disabled")
+    except KeyError:
+        return None
+
+def tags(node):
+    """Set of tags that have been applied to the test"""
+    try:
+        value = node.get("tags")
+        if isinstance(value, (str, unicode)):
+            return {value}
+        return set(value)
+    except KeyError:
+        return set()
+
+def prefs(node):
+    def value(ini_value):
+        if isinstance(ini_value, (str, unicode)):
+            return tuple(ini_value.split(":", 1))
+        else:
+            return (ini_value, None)
+
+    try:
+        node_prefs = node.get("prefs")
+        if type(node_prefs) in (str, unicode):
+            prefs = {value(node_prefs)}
+        rv = dict(value(item) for item in node_prefs)
+    except KeyError:
+        rv = {}
+    return rv
+
 class ExpectedManifest(ManifestItem):
     def __init__(self, name, test_path, url_base):
         """Object representing all the tests in a particular manifest
@ -71,6 +107,32 @@ class ExpectedManifest(ManifestItem):
return urlparse.urljoin(self.url_base, return urlparse.urljoin(self.url_base,
"/".join(self.test_path.split(os.path.sep))) "/".join(self.test_path.split(os.path.sep)))
@property
def disabled(self):
return disabled(self)
@property
def tags(self):
return tags(self)
@property
def prefs(self):
return prefs(self)
class DirectoryManifest(ManifestItem):
@property
def disabled(self):
return disabled(self)
@property
def tags(self):
return tags(self)
@property
def prefs(self):
return prefs(self)
class TestNode(ManifestItem): class TestNode(ManifestItem):
def __init__(self, name): def __init__(self, name):
@ -100,21 +162,17 @@ class TestNode(ManifestItem):
def id(self): def id(self):
return urlparse.urljoin(self.parent.url, self.name) return urlparse.urljoin(self.parent.url, self.name)
@property
def disabled(self): def disabled(self):
"""Boolean indicating whether the test is disabled""" return disabled(self)
try:
return self.get("disabled")
except KeyError:
return False
@property
def tags(self):
return tags(self)
@property
def prefs(self): def prefs(self):
try: return prefs(self)
prefs = self.get("prefs")
if type(prefs) in (str, unicode):
prefs = [prefs]
return [item.split(":", 1) for item in prefs]
except KeyError:
return []
def append(self, node): def append(self, node):
"""Add a subtest to the current test """Add a subtest to the current test
@ -159,9 +217,28 @@ def get_manifest(metadata_root, test_path, url_base, run_info):
manifest_path = expected.expected_path(metadata_root, test_path) manifest_path = expected.expected_path(metadata_root, test_path)
try: try:
with open(manifest_path) as f: with open(manifest_path) as f:
return static.compile(f, run_info, return static.compile(f,
run_info,
data_cls_getter=data_cls_getter, data_cls_getter=data_cls_getter,
test_path=test_path, test_path=test_path,
url_base=url_base) url_base=url_base)
except IOError: except IOError:
return None return None
def get_dir_manifest(metadata_root, path, run_info):
"""Get the ExpectedManifest for a particular test path, or None if there is no
metadata stored for that test path.
:param metadata_root: Absolute path to the root of the metadata directory
:param path: Path to the ini file relative to the metadata root
:param run_info: Dictionary of properties of the test run for which the expectation
values should be computed.
"""
full_path = os.path.join(metadata_root, path)
try:
with open(full_path) as f:
return static.compile(f,
run_info,
data_cls_getter=lambda x,y: DirectoryManifest)
except IOError:
return None
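`get_dir_manifest` above compiles a single `__dir__.ini`, and the loader asks for one per ancestor directory of a test. A minimal sketch of that ancestor-path computation (`dir_metadata_paths` is a hypothetical helper invented here for illustration, not part of the diff):

```python
import os


def dir_metadata_paths(test_path):
    """Return the __dir__.ini paths that would apply to a test, outermost
    first: for a/b/c/t.html this is a/__dir__.ini, a/b/__dir__.ini and
    a/b/c/__dir__.ini."""
    dirname = os.path.dirname(test_path)
    if not dirname:
        return []
    path_parts = dirname.split(os.path.sep)
    return [os.path.join(os.path.sep.join(path_parts[:i + 1]), "__dir__.ini")
            for i in range(len(path_parts))]
```

Each returned path is then looked up (and cached) exactly once per run.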


@@ -153,17 +153,32 @@ def update_from_logs(manifests, *log_filenames, **kwargs):
     return expected_map

+def directory_manifests(metadata_path):
+    rv = []
+    for dirpath, dirnames, filenames in os.walk(metadata_path):
+        if "__dir__.ini" in filenames:
+            rel_path = os.path.relpath(dirpath, metadata_path)
+            rv.append(os.path.join(rel_path, "__dir__.ini"))
+    return rv
+
 def write_changes(metadata_path, expected_map):
     # First write the new manifest files to a temporary directory
     temp_path = tempfile.mkdtemp(dir=os.path.split(metadata_path)[0])
     write_new_expected(temp_path, expected_map)

+    # Keep all __dir__.ini files (these are not in expected_map because they
+    # aren't associated with a specific test)
+    keep_files = directory_manifests(metadata_path)
+
     # Copy all files in the root to the temporary location since
     # these cannot be ini files
-    keep_files = [item for item in os.listdir(metadata_path) if
-                  not os.path.isdir(os.path.join(metadata_path, item))]
+    keep_files.extend(item for item in os.listdir(metadata_path) if
+                      not os.path.isdir(os.path.join(metadata_path, item)))
+
     for item in keep_files:
+        dest_dir = os.path.dirname(os.path.join(temp_path, item))
+        if not os.path.exists(dest_dir):
+            os.makedirs(dest_dir)
         shutil.copyfile(os.path.join(metadata_path, item),
                         os.path.join(temp_path, item))


@@ -56,8 +56,8 @@ class Reducer(object):
         self.test_loader = wptrunner.TestLoader(kwargs["tests_root"],
                                                 kwargs["metadata_root"],
                                                 [self.test_type],
-                                                test_filter,
-                                                run_info)
+                                                run_info,
+                                                manifest_filters=[test_filter])
         if kwargs["repeat"] == 1:
             logger.critical("Need to specify --repeat with more than one repetition")
             sys.exit(1)


@@ -9,7 +9,9 @@ add_completion_callback(function() {
     var test_results = tests.map(function(x) {
         return {name:x.name, status:x.status, message:x.message, stack:x.stack}
     });
-    var results = JSON.stringify({tests:test_results,
+    var id = location.pathname + location.search + location.hash;
+    var results = JSON.stringify({test: id,
+                                  tests:test_results,
                                   status: status.status,
                                   message: status.message,
                                   stack: status.stack});


@ -3,7 +3,8 @@
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */ * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
 var props = {output:%(output)d,
-             explicit_timeout: true};
+             explicit_timeout: true,
+             message_events: ["completion"]};
if (window.opener && "timeout_multiplier" in window.opener) { if (window.opener && "timeout_multiplier" in window.opener) {
props["timeout_multiplier"] = window.opener.timeout_multiplier; props["timeout_multiplier"] = window.opener.timeout_multiplier;
@ -16,6 +17,14 @@ if (window.opener && window.opener.explicit_timeout) {
setup(props); setup(props);
add_completion_callback(function() { add_completion_callback(function() {
add_completion_callback(function(tests, status) { add_completion_callback(function(tests, status) {
window.opener.done(tests, status) var harness_status = {
"status": status.status,
"message": status.message,
"stack": status.stack
};
var test_results = tests.map(function(x) {
return {name:x.name, status:x.status, message:x.message, stack:x.stack}
});
window.opener.postMessage([test_results, harness_status], "*");
}) })
}); });


@@ -1,9 +1,10 @@
 import json
 import os
+import sys
 import urlparse

 from abc import ABCMeta, abstractmethod
 from Queue import Empty
-from collections import defaultdict, OrderedDict
+from collections import defaultdict, OrderedDict, deque
 from multiprocessing import Queue

 import manifestinclude
import manifestinclude import manifestinclude
@@ -25,6 +26,7 @@ class TestChunker(object):
         self.total_chunks = total_chunks
         self.chunk_number = chunk_number
         assert self.chunk_number <= self.total_chunks
+        self.logger = structured.get_default_logger()

     def __call__(self, manifest):
         raise NotImplementedError
@@ -47,18 +49,15 @@ class HashChunker(TestChunker):
             if hash(test_path) % self.total_chunks == chunk_index:
                 yield test_path, tests

 class EqualTimeChunker(TestChunker):
-    """Chunker that uses the test timeout as a proxy for the running time of the test"""
+    def _group_by_directory(self, manifest_items):
+        """Split the list of manifest items into an ordered dict that groups
+        tests so that anything in the same subdirectory beyond a depth of 3 is
+        in the same group. So all tests in a/b/c, a/b/c/d and a/b/c/e will be
+        grouped together, and separately from tests in a/b/f.

-    def _get_chunk(self, manifest_items):
-        # For each directory containing tests, calculate the maximum execution time after running all
-        # the tests in that directory. Then work out the index into the manifest corresponding to the
-        # directories at fractions of m/N of the running time where m=1..N-1 and N is the total number
-        # of chunks. Return an array of these indicies
-
-        total_time = 0
-        by_dir = OrderedDict()
+        Returns: tuple (ordered dict of {test_dir: PathData}, total estimated runtime)
+        """

         class PathData(object):
             def __init__(self, path):
@@ -66,73 +65,8 @@ class EqualTimeChunker(TestChunker):
                 self.time = 0
                 self.tests = []

-        class Chunk(object):
-            def __init__(self):
-                self.paths = []
-                self.tests = []
-                self.time = 0
-
-            def append(self, path_data):
-                self.paths.append(path_data.path)
-                self.tests.extend(path_data.tests)
-                self.time += path_data.time
-
-        class ChunkList(object):
-            def __init__(self, total_time, n_chunks):
-                self.total_time = total_time
-                self.n_chunks = n_chunks
-                self.remaining_chunks = n_chunks
-                self.chunks = []
-                self.update_time_per_chunk()
-
-            def __iter__(self):
-                for item in self.chunks:
-                    yield item
-
-            def __getitem__(self, i):
-                return self.chunks[i]
-
-            def sort_chunks(self):
-                self.chunks = sorted(self.chunks, key=lambda x: x.paths[0])
-
-            def get_tests(self, chunk_number):
-                return self[chunk_number - 1].tests
-
-            def append(self, chunk):
-                if len(self.chunks) == self.n_chunks:
-                    raise ValueError("Tried to create more than %i chunks" % self.n_chunks)
-                self.chunks.append(chunk)
-                self.remaining_chunks -= 1
-
-            @property
-            def current_chunk(self):
-                if self.chunks:
-                    return self.chunks[-1]
-
-            def update_time_per_chunk(self):
-                self.time_per_chunk = (self.total_time - sum(item.time for item in self)) / self.remaining_chunks
-
-            def create(self):
-                rv = Chunk()
-                self.append(rv)
-                return rv
-
-            def add_path(self, path_data):
-                sum_time = self.current_chunk.time + path_data.time
-                if sum_time > self.time_per_chunk and self.remaining_chunks > 0:
-                    overshoot = sum_time - self.time_per_chunk
-                    undershoot = self.time_per_chunk - self.current_chunk.time
-                    if overshoot < undershoot:
-                        self.create()
-                        self.current_chunk.append(path_data)
-                    else:
-                        self.current_chunk.append(path_data)
-                        self.create()
-                else:
-                    self.current_chunk.append(path_data)
+        by_dir = OrderedDict()
+        total_time = 0

         for i, (test_path, tests) in enumerate(manifest_items):
             test_dir = tuple(os.path.split(test_path)[0].split(os.path.sep)[:3])
@@ -144,42 +78,238 @@ class EqualTimeChunker(TestChunker):
             time = sum(wpttest.DEFAULT_TIMEOUT if test.timeout !=
                        "long" else wpttest.LONG_TIMEOUT for test in tests)
             data.time += time
+            total_time += time
             data.tests.append((test_path, tests))

-            total_time += time
+        return by_dir, total_time

-        chunk_list = ChunkList(total_time, self.total_chunks)
+    def _maybe_remove(self, chunks, i, direction):
+        """Trial moving a path from the given chunk into an adjacent one.
+
+        :param chunks: - the list of all chunks
+        :param i: - the chunk index in the list of chunks to try removing from
+        :param direction: either "next" if we are going to move from the end to
+                          the subsequent chunk, or "prev" if we are going to move
+                          from the start into the previous chunk.
+
+        :returns bool: Did a path get moved?"""
+        source_chunk = chunks[i]
+        if direction == "next":
+            target_chunk = chunks[i+1]
+            path_index = -1
+            move_func = lambda: target_chunk.appendleft(source_chunk.pop())
+        elif direction == "prev":
+            target_chunk = chunks[i-1]
+            path_index = 0
+            move_func = lambda: target_chunk.append(source_chunk.popleft())
+        else:
+            raise ValueError("Unexpected move direction %s" % direction)
+
+        return self._maybe_move(source_chunk, target_chunk, path_index, move_func)
+
+    def _maybe_add(self, chunks, i, direction):
+        """Trial moving a path into the given chunk from an adjacent one.
+
+        :param chunks: - the list of all chunks
+        :param i: - the chunk index in the list of chunks to try adding to
+        :param direction: either "next" if we are going to remove from the
+                          subsequent chunk, or "prev" if we are going to remove
+                          from the previous chunk.
+
+        :returns bool: Did a path get moved?"""
+        target_chunk = chunks[i]
+        if direction == "next":
+            source_chunk = chunks[i+1]
+            path_index = 0
+            move_func = lambda: target_chunk.append(source_chunk.popleft())
+        elif direction == "prev":
+            source_chunk = chunks[i-1]
+            path_index = -1
+            move_func = lambda: target_chunk.appendleft(source_chunk.pop())
+        else:
+            raise ValueError("Unexpected move direction %s" % direction)
+
+        return self._maybe_move(source_chunk, target_chunk, path_index, move_func)
+
+    def _maybe_move(self, source_chunk, target_chunk, path_index, move_func):
+        """Move a path from one chunk to another, assess the change in badness,
+        and keep the move iff it decreases the badness score.
+
+        :param source_chunk: chunk to move from
+        :param target_chunk: chunk to move to
+        :param path_index: 0 if we are moving from the start or -1 if we are
+                           moving from the end
+        :param move_func: Function that actually moves between chunks"""
+        if len(source_chunk.paths) <= 1:
+            return False
+
+        move_time = source_chunk.paths[path_index].time
+
+        new_source_badness = self._badness(source_chunk.time - move_time)
+        new_target_badness = self._badness(target_chunk.time + move_time)
+
+        delta_badness = ((new_source_badness + new_target_badness) -
+                         (source_chunk.badness + target_chunk.badness))
+        if delta_badness < 0:
+            move_func()
+            return True
+
+        return False
+
+    def _badness(self, time):
+        """Metric of badness for a specific chunk
+
+        :param time: the time for a specific chunk"""
+        return (time - self.expected_time)**2
+
+    def _get_chunk(self, manifest_items):
+        by_dir, total_time = self._group_by_directory(manifest_items)

         if len(by_dir) < self.total_chunks:
             raise ValueError("Tried to split into %i chunks, but only %i subdirectories included" % (
                 self.total_chunks, len(by_dir)))

-        # Put any individual dirs with a time greater than the time per chunk into their own
-        # chunk
+        self.expected_time = float(total_time) / self.total_chunks
+
+        chunks = self._create_initial_chunks(by_dir)
+
         while True:
-            to_remove = []
-            for path_data in by_dir.itervalues():
-                if path_data.time > chunk_list.time_per_chunk:
-                    to_remove.append(path_data)
-            if to_remove:
-                for path_data in to_remove:
-                    chunk = chunk_list.create()
-                    chunk.append(path_data)
-                    del by_dir[path_data.path]
-                chunk_list.update_time_per_chunk()
-            else:
+            # Move a test from one chunk to the next until doing so no longer
+            # reduces the badness
+            got_improvement = self._update_chunks(chunks)
+            if not got_improvement:
                 break

-        chunk = chunk_list.create()
-        for path_data in by_dir.itervalues():
-            chunk_list.add_path(path_data)
+        self.logger.debug(self.expected_time)
+        for i, chunk in chunks.iteritems():
+            self.logger.debug("%i: %i, %i" % (i + 1, chunk.time, chunk.badness))

-        assert len(chunk_list.chunks) == self.total_chunks, len(chunk_list.chunks)
-        assert sum(item.time for item in chunk_list) == chunk_list.total_time
+        assert self._all_tests(by_dir) == self._chunked_tests(chunks)

-        chunk_list.sort_chunks()
-
-        return chunk_list.get_tests(self.chunk_number)
+        return self._get_tests(chunks)
+
+    @staticmethod
+    def _all_tests(by_dir):
+        """Return a set of all tests in the manifest from a grouping by directory"""
+        return set(x[0] for item in by_dir.itervalues()
+                   for x in item.tests)
+
+    @staticmethod
+    def _chunked_tests(chunks):
+        """Return a set of all tests in the manifest from the chunk list"""
+        return set(x[0] for chunk in chunks.itervalues()
+                   for path in chunk.paths
+                   for x in path.tests)
+
+    def _create_initial_chunks(self, by_dir):
+        """Create an initial unbalanced list of chunks.
+
+        :param by_dir: All tests in the manifest grouped by subdirectory
+        :returns list: A list of Chunk objects"""
+
+        class Chunk(object):
+            def __init__(self, paths, index):
+                """List of PathData objects that together form a single chunk of
+                tests"""
+                self.paths = deque(paths)
+                self.time = sum(item.time for item in paths)
+                self.index = index
+
+            def appendleft(self, path):
+                """Add a PathData object to the start of the chunk"""
+                self.paths.appendleft(path)
+                self.time += path.time
+
+            def append(self, path):
+                """Add a PathData object to the end of the chunk"""
+                self.paths.append(path)
+                self.time += path.time
+
+            def pop(self):
+                """Remove PathData object from the end of the chunk"""
+                assert len(self.paths) > 1
+                self.time -= self.paths[-1].time
+                return self.paths.pop()
+
+            def popleft(self):
+                """Remove PathData object from the start of the chunk"""
+                assert len(self.paths) > 1
+                self.time -= self.paths[0].time
+                return self.paths.popleft()
+
+            @property
+            def badness(self_):
+                """Badness metric for this chunk"""
+                return self._badness(self_.time)
+
+        initial_size = len(by_dir) / self.total_chunks
+        chunk_boundaries = [initial_size * i
+                            for i in xrange(self.total_chunks)] + [len(by_dir)]
+
+        chunks = OrderedDict()
+        for i, lower in enumerate(chunk_boundaries[:-1]):
+            upper = chunk_boundaries[i + 1]
+            paths = by_dir.values()[lower:upper]
+            chunks[i] = Chunk(paths, i)
+
+        assert self._all_tests(by_dir) == self._chunked_tests(chunks)
+        return chunks
+
+    def _update_chunks(self, chunks):
+        """Run a single iteration of the chunk update algorithm.
+
+        :param chunks: - List of chunks
+        """
+        # TODO: consider replacing this with a heap
+        sorted_chunks = sorted(chunks.values(), key=lambda x: -x.badness)
+        got_improvement = False
+
+        for chunk in sorted_chunks:
+            if chunk.time < self.expected_time:
+                f = self._maybe_add
+            else:
+                f = self._maybe_remove
+
+            if chunk.index == 0:
+                order = ["next"]
+            elif chunk.index == self.total_chunks - 1:
+                order = ["prev"]
+            else:
+                if chunk.time < self.expected_time:
+                    # First try to add a test from the neighboring chunk with the
+                    # greatest total time
+                    if chunks[chunk.index + 1].time > chunks[chunk.index - 1].time:
+                        order = ["next", "prev"]
+                    else:
+                        order = ["prev", "next"]
+                else:
+                    # First try to remove a test and add to the neighboring chunk
+                    # with the lowest total time
+                    if chunks[chunk.index + 1].time > chunks[chunk.index - 1].time:
+                        order = ["prev", "next"]
+                    else:
+                        order = ["next", "prev"]
+
+            for direction in order:
+                if f(chunks, chunk.index, direction):
+                    got_improvement = True
+                    break
+
+            if got_improvement:
+                break
+
+        return got_improvement
+
+    def _get_tests(self, chunks):
+        """Return the list of tests corresponding to the chunk number we are running.
+
+        :param chunks: List of chunks"""
+        tests = []
+        for path in chunks[self.chunk_number - 1].paths:
+            tests.extend(path.tests)
+
+        return tests

     def __call__(self, manifest_iter):
         manifest = list(manifest_iter)
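The rebalancing idea in the chunker rewrite — move a directory between neighbouring chunks only when it lowers the squared deviation from the ideal per-chunk runtime — can be sketched independently of the chunker classes. This toy version uses plain lists of path times in place of the real `Chunk`/`PathData` objects (all names here are invented for illustration):

```python
def badness(time, expected):
    # Squared distance from the ideal per-chunk runtime.
    return (time - expected) ** 2


def maybe_move(source, target, expected):
    """Move the last item of `source` onto the front of `target` iff doing
    so reduces the combined badness of the two chunks."""
    if len(source) <= 1:
        return False
    move_time = source[-1]
    old = badness(sum(source), expected) + badness(sum(target), expected)
    new = (badness(sum(source) - move_time, expected) +
           badness(sum(target) + move_time, expected))
    if new - old < 0:
        target.insert(0, source.pop())
        return True
    return False
```

Iterating `maybe_move` between adjacent chunks until no move helps is exactly the fixed-point loop that `_update_chunks` drives.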
@@ -214,6 +344,14 @@ class TestFilter(object):
             if include_tests:
                 yield test_path, include_tests

+class TagFilter(object):
+    def __init__(self, tags):
+        self.tags = set(tags)
+
+    def __call__(self, test_iter):
+        for test in test_iter:
+            if test.tags & self.tags:
+                yield test
+
 class ManifestLoader(object):
     def __init__(self, test_paths, force_manifest_update=False):
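The new `TagFilter` keeps a test whenever its tag set intersects the requested tags. A minimal self-contained sketch, with a stand-in test object (`StubTest` is invented here; the real filter receives wpttest instances):

```python
class StubTest(object):
    # Only the attributes TagFilter consults are modelled.
    def __init__(self, test_id, tags):
        self.id = test_id
        self.tags = set(tags)


class TagFilter(object):
    def __init__(self, tags):
        self.tags = set(tags)

    def __call__(self, test_iter):
        # Keep any test whose tag set intersects the requested tags.
        for test in test_iter:
            if test.tags & self.tags:
                yield test
```

Because it operates on loaded tests rather than manifest paths, it plugs in as a meta filter, after metadata (including inherited `__dir__.ini` tags) has been applied.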
@@ -276,20 +414,30 @@ class ManifestLoader(object):
         return manifest_file

+def iterfilter(filters, iter):
+    for f in filters:
+        iter = f(iter)
+    for item in iter:
+        yield item
+
 class TestLoader(object):
     def __init__(self,
                  test_manifests,
                  test_types,
-                 test_filter,
                  run_info,
+                 manifest_filters=None,
+                 meta_filters=None,
                  chunk_type="none",
                  total_chunks=1,
                  chunk_number=1,
                  include_https=True):

         self.test_types = test_types
-        self.test_filter = test_filter
         self.run_info = run_info
+        self.manifest_filters = manifest_filters if manifest_filters is not None else []
+        self.meta_filters = meta_filters if meta_filters is not None else []
         self.manifests = test_manifests
         self.tests = None
         self.disabled_tests = None
@@ -305,6 +453,9 @@ class TestLoader(object):
                                        chunk_number)

         self._test_ids = None
+
+        self.directory_manifests = {}
+
         self._load_tests()

     @property
@@ -316,22 +467,39 @@ class TestLoader(object):
             self._test_ids += [item.id for item in test_dict[test_type]]
         return self._test_ids

-    def get_test(self, manifest_test, expected_file):
-        if expected_file is not None:
-            expected = expected_file.get_test(manifest_test.id)
-        else:
-            expected = None
+    def get_test(self, manifest_test, inherit_metadata, test_metadata):
+        if test_metadata is not None:
+            inherit_metadata.append(test_metadata)
+            test_metadata = test_metadata.get_test(manifest_test.id)

-        return wpttest.from_manifest(manifest_test, expected)
+        return wpttest.from_manifest(manifest_test, inherit_metadata, test_metadata)

-    def load_expected_manifest(self, test_manifest, metadata_path, test_path):
-        return manifestexpected.get_manifest(metadata_path, test_path, test_manifest.url_base, self.run_info)
+    def load_dir_metadata(self, test_manifest, metadata_path, test_path):
+        rv = []
+        path_parts = os.path.dirname(test_path).split(os.path.sep)
+        for i in xrange(1, len(path_parts) + 1):
+            path = os.path.join(os.path.sep.join(path_parts[:i]), "__dir__.ini")
+            if path not in self.directory_manifests:
+                self.directory_manifests[path] = manifestexpected.get_dir_manifest(
+                    metadata_path, path, self.run_info)
+            manifest = self.directory_manifests[path]
+            if manifest is not None:
+                rv.append(manifest)
+        return rv
+
+    def load_metadata(self, test_manifest, metadata_path, test_path):
+        inherit_metadata = self.load_dir_metadata(test_manifest, metadata_path, test_path)
+        test_metadata = manifestexpected.get_manifest(
+            metadata_path, test_path, test_manifest.url_base, self.run_info)
+        return inherit_metadata, test_metadata

     def iter_tests(self):
         manifest_items = []
         for manifest in self.manifests.keys():
-            manifest_items.extend(self.test_filter(manifest.itertypes(*self.test_types)))
+            manifest_iter = iterfilter(self.manifest_filters,
+                                       manifest.itertypes(*self.test_types))
+            manifest_items.extend(manifest_iter)

         if self.chunker is not None:
             manifest_items = self.chunker(manifest_items)
@@ -339,12 +507,15 @@ class TestLoader(object):
         for test_path, tests in manifest_items:
             manifest_file = iter(tests).next().manifest
             metadata_path = self.manifests[manifest_file]["metadata_path"]
-            expected_file = self.load_expected_manifest(manifest_file, metadata_path, test_path)
+            inherit_metadata, test_metadata = self.load_metadata(manifest_file, metadata_path, test_path)

-            for manifest_test in tests:
-                test = self.get_test(manifest_test, expected_file)
-                test_type = manifest_test.item_type
-                yield test_path, test_type, test
+            for test in iterfilter(self.meta_filters,
+                                   self.iter_wpttest(inherit_metadata, test_metadata, tests)):
+                yield test_path, test.test_type, test
+
+    def iter_wpttest(self, inherit_metadata, test_metadata, tests):
+        for manifest_test in tests:
+            yield self.get_test(manifest_test, inherit_metadata, test_metadata)

     def _load_tests(self):
         """Read in the tests from the manifest file and add them to a queue"""
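`iterfilter` composes the manifest and meta filters by threading one lazy generator through the next; nothing is consumed until the final iterator is driven. A standalone sketch (the two example filters are invented here):

```python
def iterfilter(filters, iterable):
    # Each filter wraps the previous iterator; the chain stays lazy.
    for f in filters:
        iterable = f(iterable)
    for item in iterable:
        yield item


def evens(it):
    return (x for x in it if x % 2 == 0)


def small(it):
    return (x for x in it if x < 10)
```

With an empty filter list it degenerates to a plain pass-through, which is why `TestLoader` can default `manifest_filters` and `meta_filters` to `[]`.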


@@ -293,8 +293,8 @@ class TestRunnerManager(threading.Thread):
                     # reason
                     # Need to consider the unlikely case where one test causes the
                     # runner process to repeatedly die
-                    self.logger.info("Last test did not complete, requeueing")
-                    self.requeue_test()
+                    self.logger.critical("Last test did not complete")
+                    break
                 self.logger.warning(
                     "More tests found, but runner process died, restarting")
                 self.restart_count += 1
@@ -466,10 +466,6 @@ class TestRunnerManager(threading.Thread):
     def start_next_test(self):
         self.send_message("run_test")

-    def requeue_test(self):
-        self.test_source.requeue(self.test)
-        self.test = None
-
     def test_start(self, test):
         self.test = test
         self.logger.test_start(test.id)


@@ -86,6 +86,14 @@ def create_parser(product_choices=None):
                         default=False,
                         help="List the tests that are disabled on the current platform")

+    build_type = parser.add_mutually_exclusive_group()
+    build_type.add_argument("--debug-build", dest="debug", action="store_true",
+                            default=None,
+                            help="Build is a debug build (overrides any mozinfo file)")
+    build_type.add_argument("--release-build", dest="debug", action="store_false",
+                            default=None,
+                            help="Build is a release build (overrides any mozinfo file)")
+
     test_selection_group = parser.add_argument_group("Test Selection")
     test_selection_group.add_argument("--test-types", action="store",
                                       nargs="*", default=["testharness", "reftest"],
@@ -97,6 +105,8 @@ def create_parser(product_choices=None):
                                       help="URL prefix to exclude")
     test_selection_group.add_argument("--include-manifest", type=abs_path,
                                       help="Path to manifest listing tests to include")
+    test_selection_group.add_argument("--tag", action="append", dest="tags",
+                                      help="Labels applied to tests to include in the run. Labels starting dir: are equivalent to top-level directories.")

     debugging_group = parser.add_argument_group("Debugging")
     debugging_group.add_argument('--debugger', const="__default__", nargs="?",
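The build-type flags use a stock argparse pattern: a mutually exclusive group whose two switches store opposite booleans into the same `dest`, leaving `None` when neither is passed (so a mozinfo file can still decide). A minimal sketch of just that pattern:

```python
import argparse

parser = argparse.ArgumentParser()
# Two flags writing opposite values into one dest; argparse rejects
# passing both on the same command line.
group = parser.add_mutually_exclusive_group()
group.add_argument("--debug-build", dest="debug", action="store_true",
                   default=None)
group.add_argument("--release-build", dest="debug", action="store_false",
                   default=None)
```

The tri-state (`None`/`True`/`False`) is the point: only an explicit flag overrides whatever the build's own metadata says.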


@@ -115,6 +115,9 @@ class Compiler(NodeVisitor):
     def visit_ValueNode(self, node):
         return (lambda x: True, node.data)

+    def visit_AtomNode(self, node):
+        return (lambda x: True, node.data)
+
     def visit_ConditionalNode(self, node):
         return self.visit(node.children[0]), self.visit(node.children[1])


@@ -68,6 +68,9 @@ class Compiler(NodeVisitor):
     def visit_ValueNode(self, node):
         return node.data

+    def visit_AtomNode(self, node):
+        return node.data
+
     def visit_ListNode(self, node):
         return [self.visit(child) for child in node.children]


@@ -93,6 +93,10 @@ class ValueNode(Node):
         raise TypeError

+class AtomNode(ValueNode):
+    pass
+
 class ConditionalNode(Node):
     pass


@@ -44,6 +44,9 @@ binary_operators = ["==", "!=", "and", "or"]
 operators = ["==", "!=", "not", "and", "or"]

+atoms = {"True": True,
+         "False": False,
+         "Reset": object()}
+
 def decode(byte_str):
     return byte_str.decode("utf8")
@@ -55,7 +58,7 @@ def precedence(operator_node):
 class TokenTypes(object):
     def __init__(self):
-        for type in ["group_start", "group_end", "paren", "list_start", "list_end", "separator", "ident", "string", "number", "eof"]:
+        for type in ["group_start", "group_end", "paren", "list_start", "list_end", "separator", "ident", "string", "number", "atom", "eof"]:
             setattr(self, type, type)

 token_types = TokenTypes()
@@ -232,6 +235,8 @@ class Tokenizer(object):
             self.state = self.eol_state
         elif self.char() == ",":
             raise ParseError(self.filename, self.line_number, "List item started with separator")
+        elif self.char() == "@":
+            self.state = self.list_value_atom_state
         else:
             self.state = self.list_value_state
@@ -267,6 +272,11 @@ class Tokenizer(object):
         if rv:
             yield (token_types.string, decode(rv))

+    def list_value_atom_state(self):
+        self.consume()
+        for _, value in self.list_value_state():
+            yield token_types.atom, value
+
     def list_end_state(self):
         self.consume()
         yield (token_types.list_end, "]")
@@ -282,7 +292,14 @@ class Tokenizer(object):
                 self.state = self.comment_state
             else:
                 self.state = self.line_end_state
+        elif self.char() == "@":
+            self.consume()
+            for _, value in self.value_inner_state():
+                yield token_types.atom, value
         else:
+            self.state = self.value_inner_state
+
+    def value_inner_state(self):
         rv = ""
         spaces = 0
         while True:
@@ -544,12 +561,17 @@ class Parser(object):
         if self.token[0] == token_types.string:
             self.value()
             self.eof_or_end_group()
+        elif self.token[0] == token_types.atom:
+            self.atom()
         else:
             raise ParseError

     def list_value(self):
         self.tree.append(ListNode())
-        while self.token[0] == token_types.string:
-            self.value()
+        while self.token[0] in (token_types.atom, token_types.string):
+            if self.token[0] == token_types.atom:
+                self.atom()
+            else:
+                self.value()
         self.expect(token_types.list_end)
         self.tree.pop()
@@ -571,6 +593,13 @@ class Parser(object):
             self.consume()
         self.tree.pop()

+    def atom(self):
+        if self.token[1] not in atoms:
+            raise ParseError(self.tokenizer.filename, self.tokenizer.line_number, "Unrecognised symbol @%s" % self.token[1])
+        self.tree.append(AtomNode(atoms[self.token[1]]))
+        self.consume()
+        self.tree.pop()
+
     def expr_start(self):
         self.expr_builder = ExpressionBuilder(self.tokenizer)
         self.expr_builders.append(self.expr_builder)
@@ -605,21 +634,21 @@ class Parser(object):
         elif self.token[0] == token_types.number:
             self.expr_number()
         else:
-            raise ParseError
+            raise ParseError(self.tokenizer.filename, self.tokenizer.line_number, "Unrecognised operand")

     def expr_unary_op(self):
         if self.token[1] in unary_operators:
             self.expr_builder.push_operator(UnaryOperatorNode(self.token[1]))
             self.consume()
         else:
-            raise ParseError(self.filename, self.tokenizer.line_number, "Expected unary operator")
+            raise ParseError(self.tokenizer.filename, self.tokenizer.line_number, "Expected unary operator")

     def expr_bin_op(self):
         if self.token[1] in binary_operators:
             self.expr_builder.push_operator(BinaryOperatorNode(self.token[1]))
             self.consume()
         else:
-            raise ParseError(self.filename, self.tokenizer.line_number, "Expected binary operator")
+            raise ParseError(self.tokenizer.filename, self.tokenizer.line_number, "Expected binary operator")

     def expr_value(self):
         node_type = {token_types.string: StringNode,

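The parser hunks above add `@`-prefixed atoms (`@True`, `@False`, `@Reset`) to the manifest grammar, rejecting any symbol that is not in the known-atoms table. A minimal sketch of that validation rule, using stand-in `atoms` and `ParseError` objects rather than the real wptmanifest ones:

```python
# Simplified stand-ins for wptmanifest's atom table and ParseError;
# the real table maps "Reset" to a sentinel object, not a bool.
atoms = {"True": True, "False": False, "Reset": object()}

class ParseError(Exception):
    pass

def parse_atom(symbol):
    # symbol is the text after "@", e.g. "True" from "@True".
    # Lookup is case-sensitive, so "@true" is rejected.
    if symbol not in atoms:
        raise ParseError("Unrecognised symbol @%s" % symbol)
    return atoms[symbol]

print(parse_atom("True"))  # True
```

This mirrors why the new `test_atom_1` below expects `key: @true` (lowercase) to raise a `ParseError`.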
View file
@@ -3,7 +3,9 @@
 # You can obtain one at http://mozilla.org/MPL/2.0/.

 from node import NodeVisitor, ValueNode, ListNode, BinaryExpressionNode
-from parser import precedence
+from parser import atoms, precedence
+
+atom_names = {v:"@%s" % k for (k,v) in atoms.iteritems()}

 named_escapes = set(["\a", "\b", "\f", "\n", "\r", "\t", "\v"])

@@ -80,6 +82,9 @@ class ManifestSerializer(NodeVisitor):
             quote = ""
         return [quote + escape(node.data, extras=quote) + quote]

+    def visit_AtomNode(self, node):
+        return [atom_names[node.data]]
+
     def visit_ConditionalNode(self, node):
         return ["if %s: %s" % tuple(self.visit(item)[0] for item in node.children)]

View file
@@ -67,5 +67,13 @@ key:
   ]]]]]]
 )

+    def test_atom_0(self):
+        with self.assertRaises(parser.ParseError):
+            self.parse("key: @Unknown")
+
+    def test_atom_1(self):
+        with self.assertRaises(parser.ParseError):
+            self.parse("key: @true")
+
 if __name__ == "__main__":
     unittest.main()
View file
@@ -209,3 +209,19 @@ class TokenizerTest(unittest.TestCase):
     def test_escape_11(self):
         self.compare(r"""key: \\ab
 """)
+
+    def test_atom_1(self):
+        self.compare(r"""key: @True
+""")
+
+    def test_atom_2(self):
+        self.compare(r"""key: @False
+""")
+
+    def test_atom_3(self):
+        self.compare(r"""key: @Reset
+""")
+
+    def test_atom_4(self):
+        self.compare(r"""key: [a, @Reset, b]
+""")
View file
@@ -40,20 +40,27 @@ def setup_logging(*args, **kwargs):
     global logger
     logger = wptlogging.setup(*args, **kwargs)

-def get_loader(test_paths, product, ssl_env, debug=False, **kwargs):
+def get_loader(test_paths, product, ssl_env, debug=None, **kwargs):
     run_info = wpttest.get_run_info(kwargs["run_info"], product, debug=debug)

     test_manifests = testloader.ManifestLoader(test_paths, force_manifest_update=kwargs["manifest_update"]).load()

-    test_filter = testloader.TestFilter(include=kwargs["include"],
-                                        exclude=kwargs["exclude"],
-                                        manifest_path=kwargs["include_manifest"],
-                                        test_manifests=test_manifests)
+    manifest_filters = []
+    meta_filters = []
+
+    if kwargs["include"] or kwargs["exclude"] or kwargs["include_manifest"]:
+        manifest_filters.append(testloader.TestFilter(include=kwargs["include"],
+                                                      exclude=kwargs["exclude"],
+                                                      manifest_path=kwargs["include_manifest"],
+                                                      test_manifests=test_manifests))
+
+    if kwargs["tags"]:
+        meta_filters.append(testloader.TagFilter(tags=kwargs["tags"]))

     test_loader = testloader.TestLoader(test_manifests,
                                         kwargs["test_types"],
-                                        test_filter,
                                         run_info,
+                                        manifest_filters=manifest_filters,
+                                        meta_filters=meta_filters,
                                         chunk_type=kwargs["chunk_type"],
                                         total_chunks=kwargs["total_chunks"],
                                         chunk_number=kwargs["this_chunk"],

@@ -111,7 +118,7 @@ def run_tests(config, test_paths, product, **kwargs):
     check_args(**kwargs)

     if "test_loader" in kwargs:
-        run_info = wpttest.get_run_info(kwargs["run_info"], product, debug=False)
+        run_info = wpttest.get_run_info(kwargs["run_info"], product, debug=None)
         test_loader = kwargs["test_loader"]
     else:
         run_info, test_loader = get_loader(test_paths, product, ssl_env,

@@ -163,6 +170,7 @@ def run_tests(config, test_paths, product, **kwargs):
             executor_kwargs = get_executor_kwargs(test_type,
                                                   test_environment.external_config,
                                                   test_environment.cache_manager,
+                                                  run_info,
                                                   **kwargs)

             if executor_cls is None:

@@ -212,7 +220,7 @@ def main():
         elif kwargs["list_disabled"]:
             list_disabled(**kwargs)
         else:
-            return run_tests(**kwargs)
+            return not run_tests(**kwargs)
     except Exception:
         import pdb, traceback
         print traceback.format_exc()

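The `return not run_tests(**kwargs)` change in `main()` follows from `run_tests` returning a success boolean while a process exit status treats 0 as success: negating the boolean maps `True` to exit code 0 and `False` to 1. A toy sketch of the convention (the `run_tests` here is a stand-in, not the real wptrunner function):

```python
# Stand-in for wptrunner's run_tests: returns True when all tests
# ran with their expected results, False otherwise.
def run_tests(all_passed):
    return all_passed

def main(all_passed):
    # Invert the success boolean so it can be used as an exit status:
    # True (success) -> not True -> False -> exit code 0.
    return not run_tests(all_passed)

print(int(main(True)), int(main(False)))  # 0 1
```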
View file
@@ -9,6 +9,9 @@ import os

 import mozinfo

+from wptmanifest.parser import atoms
+
+atom_reset = atoms["Reset"]

 class Result(object):
     def __init__(self, status, message, expected=None, extra=None):

@@ -58,8 +61,11 @@ class RunInfo(dict):
         self._update_mozinfo(metadata_root)
         self.update(mozinfo.info)
         self["product"] = product
-        if not "debug" in self:
+        if debug is not None:
             self["debug"] = debug
+        elif "debug" not in self:
+            # Default to release
+            self["debug"] = False

     def _update_mozinfo(self, metadata_root):
         """Add extra build information from a mozinfo.json file in a parent

@@ -83,27 +89,26 @@ class B2GRunInfo(RunInfo):
 class Test(object):
     result_cls = None
     subtest_result_cls = None
+    test_type = None

-    def __init__(self, url, expected_metadata, timeout=DEFAULT_TIMEOUT, path=None,
+    def __init__(self, url, inherit_metadata, test_metadata, timeout=DEFAULT_TIMEOUT, path=None,
                  protocol="http"):
         self.url = url
-        self._expected_metadata = expected_metadata
+        self._inherit_metadata = inherit_metadata
+        self._test_metadata = test_metadata
         self.timeout = timeout
         self.path = path
-        if expected_metadata:
-            prefs = expected_metadata.prefs()
-        else:
-            prefs = []
-        self.environment = {"protocol": protocol, "prefs": prefs}
+        self.environment = {"protocol": protocol, "prefs": self.prefs}

     def __eq__(self, other):
         return self.id == other.id

     @classmethod
-    def from_manifest(cls, manifest_item, expected_metadata):
+    def from_manifest(cls, manifest_item, inherit_metadata, test_metadata):
         timeout = LONG_TIMEOUT if manifest_item.timeout == "long" else DEFAULT_TIMEOUT
         return cls(manifest_item.url,
-                   expected_metadata,
+                   inherit_metadata,
+                   test_metadata,
                    timeout=timeout,
                    path=manifest_item.path,
                    protocol="https" if hasattr(manifest_item, "https") and manifest_item.https else "http")

@@ -117,22 +122,57 @@ class Test(object):
     def keys(self):
         return tuple()

-    def _get_metadata(self, subtest):
-        if self._expected_metadata is None:
-            return None
-        if subtest is not None:
-            metadata = self._expected_metadata.get_subtest(subtest)
-        else:
-            metadata = self._expected_metadata
-        return metadata
+    def _get_metadata(self, subtest=None):
+        if self._test_metadata is not None and subtest is not None:
+            return self._test_metadata.get_subtest(subtest)
+        else:
+            return self._test_metadata
+
+    def itermeta(self, subtest=None):
+        for metadata in self._inherit_metadata:
+            yield metadata
+        if self._test_metadata is not None:
+            yield self._get_metadata()
+            if subtest is not None:
+                subtest_meta = self._get_metadata(subtest)
+                if subtest_meta is not None:
+                    yield subtest_meta

     def disabled(self, subtest=None):
-        metadata = self._get_metadata(subtest)
-        if metadata is None:
-            return False
-        return metadata.disabled()
+        for meta in self.itermeta(subtest):
+            disabled = meta.disabled
+            if disabled is not None:
+                return disabled
+        return None
+
+    @property
+    def tags(self):
+        tags = set()
+        for meta in self.itermeta():
+            meta_tags = meta.tags
+            if atom_reset in meta_tags:
+                tags = meta_tags.copy()
+                tags.remove(atom_reset)
+            else:
+                tags |= meta_tags
+
+        tags.add("dir:%s" % self.id.lstrip("/").split("/")[0])
+
+        return tags
+
+    @property
+    def prefs(self):
+        prefs = {}
+        for meta in self.itermeta():
+            meta_prefs = meta.prefs
+            if atom_reset in meta_prefs:
+                prefs = meta_prefs.copy()
+                del prefs[atom_reset]
+            else:
+                prefs.update(meta_prefs)
+        return prefs

     def expected(self, subtest=None):
         if subtest is None:

@@ -153,6 +193,7 @@ class Test(object):
 class TestharnessTest(Test):
     result_cls = TestharnessResult
     subtest_result_cls = TestharnessSubtestResult
+    test_type = "testharness"

     @property
     def id(self):

@@ -160,6 +201,8 @@ class TestharnessTest(Test):

 class ManualTest(Test):
+    test_type = "manual"
+
     @property
     def id(self):
         return self.url

@@ -167,9 +210,10 @@ class ManualTest(Test):

 class ReftestTest(Test):
     result_cls = ReftestResult
+    test_type = "reftest"

-    def __init__(self, url, expected, references, timeout=DEFAULT_TIMEOUT, path=None, protocol="http"):
-        Test.__init__(self, url, expected, timeout, path, protocol)
+    def __init__(self, url, inherit_metadata, test_metadata, references, timeout=DEFAULT_TIMEOUT, path=None, protocol="http"):
+        Test.__init__(self, url, inherit_metadata, test_metadata, timeout, path, protocol)

         for _, ref_type in references:
             if ref_type not in ("==", "!="):

@@ -180,7 +224,8 @@ class ReftestTest(Test):
     @classmethod
     def from_manifest(cls,
                       manifest_test,
-                      expected_metadata,
+                      inherit_metadata,
+                      test_metadata,
                       nodes=None,
                       references_seen=None):

@@ -194,7 +239,8 @@ class ReftestTest(Test):
         url = manifest_test.url

         node = cls(manifest_test.url,
-                   expected_metadata,
+                   inherit_metadata,
+                   test_metadata,
                    [],
                    timeout=timeout,
                    path=manifest_test.path,

@@ -219,11 +265,12 @@ class ReftestTest(Test):
                 manifest_node = manifest_test.manifest.get_reference(ref_url)
                 if manifest_node:
                     reference = ReftestTest.from_manifest(manifest_node,
+                                                          [],
                                                           None,
                                                           nodes,
                                                           references_seen)
                 else:
-                    reference = ReftestTest(ref_url, None, [])
+                    reference = ReftestTest(ref_url, [], None, [])

                 node.references.append((reference, ref_type))

@@ -243,7 +290,7 @@ manifest_test_cls = {"reftest": ReftestTest,
                      "manual": ManualTest}

-def from_manifest(manifest_test, expected_metadata):
+def from_manifest(manifest_test, inherit_metadata, test_metadata):
     test_cls = manifest_test_cls[manifest_test.item_type]
-    return test_cls.from_manifest(manifest_test, expected_metadata)
+    return test_cls.from_manifest(manifest_test, inherit_metadata, test_metadata)

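The new `Test.prefs` and `Test.tags` properties above fold metadata in from most-general to most-specific, accumulating values, with the `@Reset` atom discarding everything inherited so far. A self-contained sketch of that merge rule for prefs, where `ATOM_RESET` is a stand-in for `atoms["Reset"]`:

```python
# Stand-in sentinel for wptmanifest's atoms["Reset"].
ATOM_RESET = object()

def merge_prefs(levels):
    # levels: pref dicts ordered from most-general (inherited)
    # to most-specific (per-test), as itermeta() yields them.
    prefs = {}
    for meta_prefs in levels:
        if ATOM_RESET in meta_prefs:
            # @Reset: throw away everything inherited so far and
            # start again from this level's own prefs.
            prefs = meta_prefs.copy()
            del prefs[ATOM_RESET]
        else:
            prefs.update(meta_prefs)
    return prefs

inherited = {"dom.disable": "false", "gfx.layers": "1"}
local = {ATOM_RESET: None, "gfx.layers": "0"}
print(merge_prefs([inherited, local]))  # {'gfx.layers': '0'}
```

Without the `@Reset` atom in `local`, the two levels would simply be merged, with the more specific level winning on conflicts.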
View file
@@ -1,3 +1,5 @@
 [Document-createElement-namespace.html]
   type: testharness
-  expected: TIMEOUT
+  expected:
+    if os == "mac": TIMEOUT
+    if os == "linux": CRASH
View file
@@ -1,3 +1,3 @@
 [Range-cloneContents.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [Range-deleteContents.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [Range-extractContents.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [Range-insertNode.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [Range-surroundContents.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [TreeWalker-acceptNode-filter.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [Create-Secure-blocked-port.htm]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [004.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [005.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [007.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [008.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [010.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [011.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [012.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [017.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [021.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [022.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [002.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [004.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [005.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [007.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [interfaces.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [bufferedAmount-defineProperty-getter.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [bufferedAmount-defineProperty-setter.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [bufferedAmount-initial.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [bufferedAmount-readonly.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [close-basic.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [close-connecting.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [close-multiple.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [close-nested.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [close-replace.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [close-return.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [001.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [002.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [003.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [004.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [001.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [002.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [003.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [004.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [006.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [007.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [008.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [009.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [010.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [011.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [012.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [013.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [014.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [020.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

View file

@@ -1,3 +1,3 @@
 [001.html]
   type: testharness
-  expected: TIMEOUT
+  expected: CRASH

Some files were not shown because too many files have changed in this diff.