Put a copy of the wptrunner harness in-tree.

This is the same configuration as Gecko and is convenient for making changes
compared to using releases from PyPI.
James Graham 2015-03-27 21:15:04 +00:00
parent f7ff2aa558
commit 168b81773e
120 changed files with 11690 additions and 0 deletions

tests/wpt/harness/.gitignore
View file

@ -0,0 +1,7 @@
*.py[co]
*~
*#
\#*
_virtualenv
test/test.cfg
test/metadata/MANIFEST.json

View file

@ -0,0 +1,13 @@
exclude MANIFEST.in
include requirements.txt
include wptrunner/browsers/b2g_setup/*
include wptrunner.default.ini
include wptrunner/testharness_runner.html
include wptrunner/testharnessreport.js
include wptrunner/testharnessreport-servo.js
include wptrunner/executors/testharness_marionette.js
include wptrunner/executors/testharness_webdriver.js
include wptrunner/executors/reftest.js
include wptrunner/executors/reftest-wait.js
include wptrunner/config.json
include wptrunner/browsers/server-locations.txt

View file

@ -0,0 +1,224 @@
wptrunner: A web-platform-tests harness
=======================================
wptrunner is a harness for running the W3C `web-platform-tests testsuite`_.
.. contents::
Installation
~~~~~~~~~~~~
wptrunner is expected to be installed into a virtualenv using pip. For
development, it can be installed using the `-e` option::
pip install -e ./
Running the Tests
~~~~~~~~~~~~~~~~~
After installation, the command ``wptrunner`` should be available to run
the tests.
The ``wptrunner`` command takes multiple options, of which the
following are most significant:
``--product`` (defaults to `firefox`)
The product to test against: `b2g`, `chrome`, `firefox`, or `servo`.
``--binary`` (required)
The path to a binary file for the product (browser) to test against.
``--metadata`` (required)
The path to a directory containing test metadata. [#]_
``--tests`` (required)
The path to a directory containing a web-platform-tests checkout.
``--prefs-root`` (required only when testing a Firefox binary)
The path to a directory containing Firefox test-harness preferences. [#]_
.. [#] The ``--metadata`` path is to a directory that contains:
* a ``MANIFEST.json`` file (the web-platform-tests documentation has
instructions on generating this file); and
* (optionally) any expectation files (see below)
.. [#] Example ``--prefs-root`` value: ``~/mozilla-central/testing/profiles``.
There are also a variety of other options available; use ``--help`` to
list them.
-------------------------------
Example: How to start wptrunner
-------------------------------
To test a Firefox Nightly build in an OS X environment, you might start
wptrunner using something similar to the following example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=~/mozilla-central/obj-x86_64-apple-darwin14.0.0/dist/Nightly.app/Contents/MacOS/firefox \
--prefs-root=~/mozilla-central/testing/profiles
And to test a Chromium build in an OS X environment, you might start
wptrunner using something similar to the following example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
--product=chrome
-------------------------------------
Example: How to run a subset of tests
-------------------------------------
To restrict a test run just to tests in a particular web-platform-tests
subdirectory, use ``--include`` with the directory name; for example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=/path/to/firefox --prefs-root=/path/to/testing/profiles \
--include=dom
Output
~~~~~~
By default wptrunner just dumps its entire output as raw JSON messages
to stdout. This is convenient for piping into other tools, but not ideal
for humans reading the output.
As an alternative, you can use the ``--log-mach`` option, which provides
output in a reasonable format for humans. The option requires a value:
either the path for a file to write the `mach`-formatted output to, or
"`-`" (a hyphen) to write the `mach`-formatted output to stdout.
When using ``--log-mach``, output of the full raw JSON log is still
available via the ``--log-raw`` option. So to output the full raw JSON
log to a file and a human-readable summary to stdout, you might start
wptrunner using something similar to the following example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=/path/to/firefox --prefs-root=/path/to/testing/profiles
--log-raw=output.log --log-mach=-
Expectation Data
~~~~~~~~~~~~~~~~
wptrunner is designed to be used in an environment where it is not
just necessary to know which tests passed, but to compare the results
between runs. For this reason it is possible to store the results of a
previous run in a set of ini-like "expectation files". This format is
documented below. To generate the expectation files use `wptrunner` with
the `--log-raw=/path/to/log/file` option. This can then be used as
input to the `wptupdate` tool.
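For example (paths illustrative), one might record a baseline run and then
feed the resulting log to the update tool::

    wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
        --binary=/path/to/firefox --prefs-root=/path/to/testing/profiles \
        --log-raw=expected.log
    wptupdate [options] expected.log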
Expectation File Format
~~~~~~~~~~~~~~~~~~~~~~~
Metadata about tests, notably including their expected results, is
stored in a modified ini-like format that is designed to be human
editable, but also to be machine updatable.
Each test file that requires metadata to be specified (because it has
a non-default expectation or because it is disabled, for example) has
a corresponding expectation file in the `metadata` directory. For
example a test file `html/test1.html` containing a failing test would
have an expectation file called `html/test1.html.ini` in the
`metadata` directory.
An example of an expectation file is::
example_default_key: example_value
[filename.html]
type: testharness
[subtest1]
expected: FAIL
[subtest2]
expected:
if os == 'win': TIMEOUT
if os == 'osx': ERROR
FAIL
[filename.html?query=something]
type: testharness
disabled: bug12345
The file consists of two kinds of element: key-value pairs and
sections.
Sections are delimited by headings enclosed in square brackets. Any
closing square bracket in the heading itself may be escaped with a
backslash. Each section may then contain any number of key-value pairs
followed by any number of subsections. So that it is clear which data
belongs to each section without the use of end-section markers, the
data for each section (i.e. the key-value pairs and subsections) must
be indented using spaces. Indentation need only be consistent, but
using two spaces per level is recommended.
In a test expectation file, each resource provided by the file has a
single section, with the section heading being the part after the last
`/` in the test URL. Tests that have subtests may have subsections
for those subtests, in which the heading is the name of the subtest.
Simple key-value pairs are of the form::
key: value
Note that unlike ini files, only `:` is a valid separator; `=` will
not work as expected. Key-value pairs may also have conditional
values of the form::
key:
if condition1: value1
if condition2: value2
default
In this case each conditional is evaluated in turn and the value is
that on the right hand side of the first matching conditional. In the
case that no condition matches, the unconditional default is used. If
no condition matches and no default is provided, it is equivalent to
the key not being present. Conditionals use a simple Python-like
expression language, e.g.::
if debug and (os == "linux" or os == "osx"): FAIL
For test expectations the available variables are those in the
`run_info`, which for desktop are `version`, `os`, `bits`, `processor`,
`debug` and `product`.
Key-value pairs specified at the top level of the file before any
sections are special as they provide defaults for the rest of the file
e.g.::
key1: value1
[section 1]
key2: value2
[section 2]
key1: value3
In this case, inside section 1, `key1` would have the value `value1`
and `key2` the value `value2` whereas in section 2 `key1` would have
the value `value3` and `key2` would be undefined.
The web-platform-test harness knows about several keys:
`expected`
Must evaluate to a possible test status indicating the expected
result of the test. The implicit default is PASS or OK when the
field isn't present.
`disabled`
Any value indicates that the test is disabled.
`type`
The test type e.g. `testharness` or `reftest`.
`reftype`
The type of comparison for reftests; either `==` or `!=`.
`refurl`
The reference url for reftests.
.. _`web-platform-tests testsuite`: https://github.com/w3c/web-platform-tests

View file

@ -0,0 +1,177 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/wptrunner.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/wptrunner.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/wptrunner"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/wptrunner"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

Image file added (Size: 19 KiB); file diff suppressed.

View file

@ -0,0 +1,267 @@
# -*- coding: utf-8 -*-
#
# wptrunner documentation build configuration file, created by
# sphinx-quickstart on Mon May 19 18:14:20 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.viewcode',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'wptrunner'
copyright = u''
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.3'
# The full version, including alpha/beta/rc tags.
release = '0.3'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'wptrunnerdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'wptrunner.tex', u'wptrunner Documentation',
u'James Graham', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'wptrunner', u'wptrunner Documentation',
[u'James Graham'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'wptrunner', u'wptrunner Documentation',
u'James Graham', 'wptrunner', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'python': ('http://docs.python.org/', None),
'mozlog': ('http://mozbase.readthedocs.org/en/latest/', None)}

View file

@ -0,0 +1,106 @@
wptrunner Design
================
The design of wptrunner is intended to meet the following
requirements:
* Possible to run tests from W3C web-platform-tests.
* Tests should be run as fast as possible. In particular it should
not be necessary to restart the browser between tests, or similar.
* As far as possible, the tests should run in a "normal" browser and
browsing context. In particular many tests assume that they are
running in a top-level browsing context, so we must avoid the use
of an ``iframe`` test container.
* It must be possible to deal with all kinds of behaviour of the
browser under test, for example, crashing, hanging, etc.
* It should be possible to add support for new platforms and browsers
with minimal code changes.
* It must be possible to run tests in parallel to further improve
performance.
* Test output must be in a machine readable form.
Architecture
------------
In order to meet the above requirements, wptrunner is designed to
push as much of the test scheduling as possible into the harness. This
allows the harness to monitor the state of the browser and perform
appropriate action if it gets into an unwanted state e.g. kill the
browser if it appears to be hung.
The harness will typically communicate with the browser via some remote
control protocol such as WebDriver. However for browsers where no such
protocol is supported, other implementation strategies are possible,
typically at the expense of speed.
The overall architecture of wptrunner is shown in the diagram below:
.. image:: architecture.svg
The main entry point to the code is :py:func:`run_tests` in
``wptrunner.py``. This is responsible for setting up the test
environment, loading the list of tests to be executed, and invoking
the remainder of the code to actually execute some tests.
The test environment is encapsulated in the
:py:class:`TestEnvironment` class. This defers to code in
``web-platform-tests`` which actually starts the required servers to
run the tests.
The set of tests to run is defined by the
:py:class:`TestLoader`. This is constructed with a
:py:class:`TestFilter` (not shown), which takes any filter arguments
from the command line to restrict the set of tests that will be
run. The :py:class:`TestLoader` reads both the ``web-platform-tests``
JSON manifest and the expectation data stored in ini files and
produces a :py:class:`multiprocessing.Queue` of tests to run, and
their expected results.
Actually running the tests happens through the
:py:class:`ManagerGroup` object. This takes the :py:class:`Queue` of
tests to be run and starts a :py:class:`testrunner.TestRunnerManager` for each
instance of the browser under test that will be started. These
:py:class:`TestRunnerManager` instances are each started in their own
thread.
A :py:class:`TestRunnerManager` coordinates starting the product under
test, and outputting results from the test. In the case that the test
has timed out or the browser has crashed, it has to restart the
browser to ensure the test run can continue. The functionality for
initialising the browser under test, and probing its state
(e.g. whether the process is still alive) is implemented through a
:py:class:`Browser` object. An implementation of this class must be
provided for each product that is supported.
The functionality for actually running the tests is provided by a
:py:class:`TestRunner` object. :py:class:`TestRunner` instances are
run in their own child process created with the
:py:mod:`multiprocessing` module. This allows them to run concurrently
and to be killed and restarted as required. Communication between the
:py:class:`TestRunnerManager` and the :py:class:`TestRunner` is
provided by a pair of queues, one for sending messages in each
direction. In particular test results are sent from the
:py:class:`TestRunner` to the :py:class:`TestRunnerManager` using one
of these queues.
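As a minimal, self-contained illustration of this pattern (a sketch, not
wptrunner's real classes), a runner living in a child process can receive
commands on one queue and report results on another::

    from multiprocessing import Process, Queue

    def runner_process(command_queue, result_queue):
        # Child process: pull commands until told to stop, report results back.
        while True:
            command, data = command_queue.get()
            if command == "stop":
                break
            elif command == "run_test":
                # In wptrunner this step would defer to a product-specific
                # executor; here we just report a canned result.
                result_queue.put(("test_result", {"test": data, "status": "PASS"}))

    if __name__ == "__main__":
        commands, results = Queue(), Queue()
        runner = Process(target=runner_process, args=(commands, results))
        runner.start()
        commands.put(("run_test", "/dom/example.html"))
        print(results.get())  # ("test_result", {"test": ..., "status": "PASS"})
        commands.put(("stop", None))
        runner.join()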
The :py:class:`TestRunner` object is generic in that the same
:py:class:`TestRunner` is used regardless of the product under
test. However the details of how to run the test may vary greatly with
the product since different products support different remote control
protocols (or none at all). These protocol-specific parts are placed
in the :py:class:`Executor` object. There is typically a different
:py:class:`Executor` class for each combination of control protocol
and test type. The :py:class:`TestRunner` is responsible for pulling
each test off the :py:class:`Queue` of tests and passing it down to
the :py:class:`Executor`.
The executor often requires access to details of the particular
browser instance that it is testing, so that it knows e.g. which port
to connect to in order to send commands to the browser. These details are
encapsulated in the :py:class:`ExecutorBrowser` class.
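In outline, the split might look like this (the class names follow those
mentioned above, but the signatures are illustrative rather than the real
wptrunner API)::

    class ExecutorBrowser(object):
        # Holds the details the executor needs about the running browser
        # instance, e.g. the port of its remote-control server.
        def __init__(self, port):
            self.port = port

    class Executor(object):
        # Protocol- and test-type-specific logic lives here, keeping the
        # generic TestRunner free of product-specific details.
        def __init__(self, browser):
            self.browser = browser  # an ExecutorBrowser instance

        def run_test(self, test):
            # e.g. send the test URL over WebDriver to the browser listening
            # on self.browser.port, wait for the result and return it.
            raise NotImplementedError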

View file

@ -0,0 +1,244 @@
Expectation Data
================
Introduction
------------
For use in continuous integration systems, and other scenarios where
regression tracking is required, wptrunner supports storing and
loading the expected result of each test in a test run. Typically
these expected results will initially be generated by running the
testsuite in a baseline build. They may then be edited by humans as
new features are added to the product that change the expected
results. The expected results may also vary for a single product
depending on the platform on which it is run. Therefore, the raw
structured log data is not a suitable format for storing these
files. Instead something is required that is:
* Human readable
* Human editable
* Machine readable / writable
* Capable of storing test id / result pairs
* Suitable for storing in a version control system (i.e. text-based)
The need for different results per platform means either having
multiple expectation files for each platform, or having a way to
express conditional values within a certain file. The former would be
rather cumbersome for humans updating the expectation files, so the
latter approach has been adopted, leading to the requirement:
* Capable of storing result values that are conditional on the platform.
There are few extant formats that meet these requirements, so
wptrunner uses a bespoke ``expectation manifest`` format, which is
closely based on the standard ``ini`` format.
Directory Layout
----------------
Expectation manifest files must be stored under the ``metadata``
directory passed to the test runner. The directory layout follows that
of web-platform-tests with each test path having a corresponding
manifest file. Tests that differ only by query string, or reftests
with the same test path but different ref paths, share the same
manifest file. The file name is taken from the last /-separated part
of the path, suffixed with ``.ini``.
As an optimisation, files which produce only default results
(i.e. ``PASS`` or ``OK``) don't require a corresponding manifest file.
For example a test with url::
/spec/section/file.html?query=param
would have an expectation file ::
metadata/spec/section/file.html.ini
.. _wptupdate-label:
Generating Expectation Files
----------------------------
wptrunner provides the tool ``wptupdate`` to generate expectation
files from the results of a set of baseline test runs. The basic
syntax for this is::
wptupdate [options] [logfile]...
Each ``logfile`` is a structured log file from a previous run. These
can be generated from wptrunner using the ``--log-raw`` option
e.g. ``--log-raw=structured.log``. The default behaviour is to update
all the test data for the particular combination of hardware and OS
used in the run corresponding to the log data, whilst leaving any
other expectations untouched.
wptupdate takes several useful options:
``--sync``
Pull the latest version of web-platform-tests from the
upstream specified in the config file. If this is specified in
combination with logfiles, it is assumed that the results in the log
files apply to the post-update tests.
``--no-check-clean``
Don't attempt to check if the working directory is clean before
doing the update (assuming that the working directory is a git or
mercurial tree).
``--patch``
Create a git commit, or an mq patch, with the changes made by wptupdate.
``--ignore-existing``
Overwrite all the expectation data for any tests that have a result
in the passed log files, not just data for the same platform.
Examples
~~~~~~~~
Update the local copy of web-platform-tests without changing the
expectation data and commit (or create a mq patch for) the result::
wptupdate --patch --sync
Update all the expectations from a set of cross-platform test runs::
wptupdate --no-check-clean --patch osx.log linux.log windows.log
Add expectation data for some new tests that are expected to be
platform-independent::
wptupdate --no-check-clean --patch --ignore-existing tests.log
Manifest Format
---------------
The format of the manifest files is based on the ini format. Files are
divided into sections, each (apart from the root section) having a
heading enclosed in square brackets. Within each section are key-value
pairs. There are several notable differences from standard .ini files,
however:
* Sections may be hierarchically nested, with significant whitespace
indicating nesting depth.
* Only ``:`` is valid as a key/value separator
A simple example of a manifest file is::
root_key: root_value
[section]
section_key: section_value
[subsection]
subsection_key: subsection_value
[another_section]
another_key: another_value
Conditional Values
~~~~~~~~~~~~~~~~~~
In order to support values that depend on some external data, the
right hand side of a key/value pair can take a set of conditionals
rather than a plain value. These values are placed on a new line
following the key, with significant indentation. Conditional values
are prefixed with ``if`` and terminated with a colon, for example::
key:
if cond1: value1
if cond2: value2
value3
In this example, the value associated with ``key`` is determined by
first evaluating ``cond1`` against external data. If that is true,
``key`` is assigned the value ``value1``, otherwise ``cond2`` is
evaluated in the same way. If both ``cond1`` and ``cond2`` are false,
the unconditional ``value3`` is used.
Conditions themselves use a Python-like expression syntax. Operands
can either be variables, corresponding to data passed in, numbers
(integer or floating point; exponential notation is not supported) or
quote-delimited strings. Equality is tested using ``==`` and
inequality by ``!=``. The operators ``and``, ``or`` and ``not`` are
used in the expected way. Parentheses can also be used for
grouping. For example::
key:
if (a == 2 or a == 3) and b == "abc": value1
if a == 1 or b != "abc": value2
value3
Here ``a`` and ``b`` are variables, the value of which will be
supplied when the manifest is used.
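As a minimal sketch of these first-match-wins semantics (not wptrunner's
actual condition parser; the conditions from the example above are
hand-translated into Python callables)::

    def evaluate_conditional(conditions, default, variables):
        # Return the value paired with the first condition that holds,
        # otherwise the unconditional default.
        for condition, value in conditions:
            if condition(variables):
                return value
        return default

    conditions = [
        (lambda v: (v["a"] == 2 or v["a"] == 3) and v["b"] == "abc", "value1"),
        (lambda v: v["a"] == 1 or v["b"] != "abc", "value2"),
    ]
    print(evaluate_conditional(conditions, "value3", {"a": 3, "b": "abc"}))
    # -> value1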
Expectation Manifests
---------------------
When used for expectation data, manifests have the following format:
* A section per test URL described by the manifest, with the section
heading being the part of the test URL following the last ``/`` in
the path (this allows multiple tests in a single manifest file with
the same path part of the URL, but different query parts).
* A subsection per subtest, with the heading being the title of the
subtest.
* A key ``type`` indicating the test type. This takes the values
``testharness`` and ``reftest``.
* For reftests, keys ``reftype`` indicating the reference type
(``==`` or ``!=``) and ``refurl`` indicating the URL of the
reference.
* A key ``expected`` giving the expectation value of each (sub)test.
* A key ``disabled`` which can be set to any value to indicate that
the (sub)test is disabled and should either not be run (for tests)
or that its results should be ignored (subtests).
* Variables ``debug``, ``os``, ``version``, ``processor`` and
``bits`` that describe the configuration of the browser under
test. ``debug`` is a boolean indicating whether a build is a debug
build. ``os`` is a string indicating the operating system, and
``version`` a string indicating the particular version of that
operating system. ``processor`` is a string indicating the
processor architecture and ``bits`` an integer indicating the
number of bits. This information is typically provided by
:py:mod:`mozinfo`.
* Top level keys are taken as defaults for the whole file. So, for
example, a top level key with ``expected: FAIL`` would indicate
that all tests and subtests in the file are expected to fail,
unless they have an ``expected`` key of their own.
A simple example manifest might look like::
[test.html?variant=basic]
type: testharness
[Test something unsupported]
expected: FAIL
[test.html?variant=broken]
expected: ERROR
[test.html?variant=unstable]
disabled: http://test.bugs.example.org/bugs/12345
A more complex manifest with conditional properties might be::
[canvas_test.html]
expected:
if os == "osx": FAIL
if os == "windows" and version == "XP": FAIL
PASS
Note that ``PASS`` in the above works, but is unnecessary; ``PASS``
(or ``OK``) is always the default expectation for (sub)tests.

View file

@ -0,0 +1,24 @@
.. wptrunner documentation master file, created by
sphinx-quickstart on Mon May 19 18:14:20 2014.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to wptrunner's documentation!
=====================================
Contents:
.. toctree::
:maxdepth: 2
usage
expectation
design
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View file

@ -0,0 +1,242 @@
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\wptrunner.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\wptrunner.qhc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end

View file

@ -0,0 +1,215 @@
Getting Started
===============
Installing wptrunner
--------------------
The easiest way to install wptrunner is into a virtualenv, using pip::
virtualenv wptrunner
cd wptrunner
source bin/activate
pip install wptrunner
This will install the base dependencies for wptrunner, but not any
extra dependencies required to test against specific browsers. In
order to do this you must use the extra requirements files in
``$VIRTUAL_ENV/requirements/requirements_browser.txt``. For example,
in order to test against Firefox you would have to run::
pip install -r requirements/requirements_firefox.txt
If you intend to work on the code, the ``-e`` option to pip should be
used in combination with a source checkout i.e. inside a virtual
environment created as above::
git clone https://github.com/w3c/wptrunner.git
cd wptrunner
pip install -e ./
In addition to the dependencies installed by pip, wptrunner requires
a copy of the web-platform-tests repository. That can be located
anywhere on the filesystem, but the easiest option is to put it within
the wptrunner checkout directory, as a subdirectory named ``tests``::
git clone https://github.com/w3c/web-platform-tests.git tests
It is also necessary to generate a web-platform-tests ``MANIFEST.json``
file. It's recommended to put that within the wptrunner
checkout directory, in a subdirectory named ``meta``::
mkdir meta
cd tests
python tools/scripts/manifest.py ../meta/MANIFEST.json
The ``MANIFEST.json`` file needs to be regenerated each time the
web-platform-tests checkout is updated. To aid with the update process
there is a tool called ``wptupdate``, which is described in
:ref:`wptupdate-label`.
Running the Tests
-----------------
A test run is started using the ``wptrunner`` command. The command
takes multiple options, of which the following are most significant:
``--product`` (defaults to `firefox`)
The product to test against: `b2g`, `chrome`, `firefox`, or `servo`.
``--binary`` (required)
The path to a binary file for the product (browser) to test against.
``--metadata`` (required only when not `using default paths`_)
The path to a directory containing test metadata. [#]_
``--tests`` (required only when not `using default paths`_)
The path to a directory containing a web-platform-tests checkout.
``--prefs-root`` (required only when testing a Firefox binary)
The path to a directory containing Firefox test-harness preferences. [#]_
.. [#] The ``--metadata`` path is to a directory that contains:
* a ``MANIFEST.json`` file (the web-platform-tests documentation has
instructions on generating this file)
* (optionally) any expectation files (see :ref:`wptupdate-label`)
.. [#] Example ``--prefs-root`` value: ``~/mozilla-central/testing/profiles``.
There are also a variety of other command-line options available; use
``--help`` to list them.
The following examples show how to start wptrunner with various options.
------------------
Starting wptrunner
------------------
To test a Firefox Nightly build in an OS X environment, you might start
wptrunner using something similar to the following example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=~/mozilla-central/obj-x86_64-apple-darwin14.0.0/dist/Nightly.app/Contents/MacOS/firefox \
--prefs-root=~/mozilla-central/testing/profiles
And to test a Chromium build in an OS X environment, you might start
wptrunner using something similar to the following example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
--product=chrome
--------------------
Running test subsets
--------------------
To restrict a test run just to tests in a particular web-platform-tests
subdirectory, use ``--include`` with the directory name; for example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=/path/to/firefox --prefs-root=/path/to/testing/profiles \
--include=dom
-------------------
Running in parallel
-------------------
To speed up the testing process, use the ``--processes`` option to have
wptrunner run multiple browser instances in parallel. For example, to
have wptrunner attempt to run tests with six browser instances
in parallel, specify ``--processes=6``. But note that behaviour in this
mode is necessarily less deterministic than with ``--processes=1`` (the
default), so there may be more noise in the test results.
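For example, extending the earlier Firefox invocation (paths illustrative)::

    wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
        --binary=/path/to/firefox --prefs-root=/path/to/testing/profiles \
        --processes=6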
-------------------
Using default paths
-------------------
The (otherwise-required) ``--tests`` and ``--metadata`` command-line
options/flags may be omitted if a configuration file is found that
contains a section specifying the ``tests`` and ``metadata`` keys.
See the `Configuration File`_ section for more information about
configuration files, including information about their expected
locations.
The content of the ``wptrunner.default.ini`` default configuration file
makes wptrunner look for tests (that is, a web-platform-tests checkout)
as a subdirectory of the current directory named ``tests``, and for
metadata files in a subdirectory of the current directory named ``meta``.
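In other words, the effect is roughly as if the configuration contained the
following (illustrative; see `Configuration File`_ below for the full
format)::

    [paths]
    tests = tests
    metadata = meta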
Output
------
wptrunner uses the :py:mod:`mozlog.structured` package for output. This
structures events such as test results or log messages as JSON objects
that can then be fed to other tools for interpretation. More details
about the message format are given in the
:py:mod:`mozlog.structured` documentation.
By default the raw JSON messages are dumped to stdout. This is
convenient for piping into other tools, but not ideal for humans
reading the output. :py:mod:`mozlog` comes with several other
formatters, which are accessible through command line options. The
general format of these options is ``--log-name=dest``, where ``name``
is the name of the format and ``dest`` is a path to a destination
file, or ``-`` for stdout. The raw JSON data is written by the ``raw``
formatter, so the default setup corresponds to ``--log-raw=-``.
A reasonable output format for humans is provided as ``mach``. So in
order to output the full raw log to a file and a human-readable
summary to stdout, one might pass the options::
--log-raw=output.log --log-mach=-
Configuration File
------------------
wptrunner uses a ``.ini`` file for some of its configuration. The file
has three sections: ``[products]``, ``[paths]`` and
``[web-platform-tests]``.
``[products]`` is used to
define the set of available products. By default this section is empty,
which means that all the products distributed with wptrunner are
enabled (although their dependencies may not be installed). The set
of enabled products can be set by using the product name as the
key. For built-in products the value is empty. It is also possible to
provide the path to a script implementing the browser functionality,
e.g.::
[products]
chrome =
netscape4 = path/to/netscape.py
``[paths]`` specifies the default paths for the tests and metadata,
relative to the config file. For example::
[paths]
tests = checkouts/web-platform-tests
metadata = /home/example/wpt/metadata
``[web-platform-tests]`` is used to set the properties of the upstream
repository when updating the paths. ``remote_url`` specifies the git
url to pull from; ``branch`` the branch to sync against and
``sync_path`` the local path, relative to the configuration file, to
use when checking out the tests e.g.::
[web-platform-tests]
remote_url = https://github.com/w3c/web-platform-tests.git
branch = master
sync_path = sync
A configuration file must contain all the above fields; falling back
to the default values for unspecified fields is not yet supported.
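Putting the pieces together, a complete configuration file might look like
this (values illustrative, reusing the snippets above)::

    [products]

    [paths]
    tests = checkouts/web-platform-tests
    metadata = /home/example/wpt/metadata

    [web-platform-tests]
    remote_url = https://github.com/w3c/web-platform-tests.git
    branch = master
    sync_path = sync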
The ``wptrunner`` and ``wptupdate`` commands will use configuration
files in the following order:
* Any path supplied with a ``--config`` flag to the command.
* A file called ``wptrunner.ini`` in the current directory
* The default configuration file (``wptrunner.default.ini`` in the
source directory)

View file

@ -0,0 +1,5 @@
html5lib >= 0.99
mozinfo >= 0.7
mozlog >= 2.8
# Unfortunately, just for gdb flags
mozrunner >= 6.1

View file

@ -0,0 +1,7 @@
fxos_appgen >= 0.5
mozdevice >= 0.41
gaiatest >= 0.26
marionette_client >= 0.7.10
moznetwork >= 0.24
mozprofile >= 0.21
mozrunner >= 6.1

View file

@ -0,0 +1,2 @@
mozprocess >= 0.19
selenium >= 2.41.0

View file

@ -0,0 +1,4 @@
marionette_client >= 0.7.10
mozprofile >= 0.21
mozprocess >= 0.19
mozcrash >= 0.13

View file

@ -0,0 +1 @@
mozprocess >= 0.19

View file

@ -0,0 +1,73 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import glob
import os
import sys
import textwrap
from setuptools import setup, find_packages
here = os.path.split(__file__)[0]
PACKAGE_NAME = 'wptrunner'
PACKAGE_VERSION = '1.14'
# Dependencies
with open(os.path.join(here, "requirements.txt")) as f:
deps = f.read().splitlines()
# Browser-specific requirements
requirements_files = glob.glob(os.path.join(here, "requirements_*.txt"))
profile_dest = None
dest_exists = False
setup(name=PACKAGE_NAME,
version=PACKAGE_VERSION,
description="Harness for running the W3C web-platform-tests against various products",
author='Mozilla Automation and Testing Team',
author_email='tools@lists.mozilla.org',
license='MPL 2.0',
packages=find_packages(exclude=["tests", "metadata", "prefs"]),
entry_points={
'console_scripts': [
'wptrunner = wptrunner.wptrunner:main',
'wptupdate = wptrunner.update:main',
]
},
zip_safe=False,
platforms=['Any'],
classifiers=['Development Status :: 4 - Beta',
'Environment :: Console',
'Intended Audience :: Developers',
'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',
'Operating System :: OS Independent'],
package_data={"wptrunner": ["executors/testharness_marionette.js",
"executors/testharness_webdriver.js",
"executors/reftest.js",
"executors/reftest-wait.js",
"testharnessreport.js",
"testharness_runner.html",
"config.json",
"wptrunner.default.ini",
"browsers/server-locations.txt",
"browsers/b2g_setup/*",
"prefs/*"]},
include_package_data=True,
data_files=[("requirements", requirements_files)],
install_requires=deps
)
if "install" in sys.argv:
path = os.path.relpath(os.path.join(sys.prefix, "requirements"), os.curdir)
print textwrap.fill("""In order to use with one of the built-in browser
products, you will need to install the extra dependencies. These are provided
as requirements_[name].txt in the %s directory and can be installed using
e.g.""" % path, 80)
print """
pip install -r %s/requirements_firefox.txt
""" % path

View file

@ -0,0 +1,3 @@
[reftest_and_fail.html]
type: reftest
expected: FAIL

View file

@ -0,0 +1,3 @@
[reftest_cycle_fail.html]
type: reftest
expected: FAIL

View file

@ -0,0 +1,3 @@
[reftest_match_fail.html]
type: reftest
expected: FAIL

View file

@ -0,0 +1,3 @@
[reftest_mismatch_fail.html]
type: reftest
expected: FAIL

View file

@ -0,0 +1,3 @@
[reftest_ref_timeout.html]
type: reftest
expected: TIMEOUT

View file

@ -0,0 +1,3 @@
[reftest_timeout.html]
type: reftest
expected: TIMEOUT

View file

@ -0,0 +1,4 @@
[testharness_0.html]
type: testharness
[Test that should fail]
expected: FAIL

View file

@ -0,0 +1,3 @@
[testharness_error.html]
type: testharness
expected: ERROR

View file

@ -0,0 +1,3 @@
[testharness_timeout.html]
type: testharness
expected: TIMEOUT

View file

@ -0,0 +1,16 @@
[general]
tests=/path/to/web-platform-tests/
metadata=/path/to/web-platform-tests/
ssl-type=none
# [firefox]
# binary=/path/to/firefox
# prefs-root=/path/to/gecko-src/testing/profiles/
# [servo]
# binary=/path/to/servo-src/components/servo/target/servo
# exclude=testharness # Because it needs a special testharness.js
# [chrome]
# binary=/path/to/chrome
# webdriver-binary=/path/to/chromedriver

View file

@ -0,0 +1,162 @@
import ConfigParser
import argparse
import json
import os
import sys
import tempfile
import threading
import time
from StringIO import StringIO
from mozlog.structured import structuredlog, reader
from mozlog.structured.handlers import BaseHandler, StreamHandler, StatusHandler
from mozlog.structured.formatters import MachFormatter
from wptrunner import wptcommandline, wptrunner
here = os.path.abspath(os.path.dirname(__file__))
def setup_wptrunner_logging(logger):
structuredlog.set_default_logger(logger)
wptrunner.logger = logger
wptrunner.wptlogging.setup_stdlib_logger()
class ResultHandler(BaseHandler):
def __init__(self, verbose=False, logger=None):
self.inner = StreamHandler(sys.stdout, MachFormatter())
BaseHandler.__init__(self, self.inner)
self.product = None
self.verbose = verbose
self.logger = logger
self.register_message_handlers("wptrunner-test", {"set-product": self.set_product})
def set_product(self, product):
self.product = product
def __call__(self, data):
if self.product is not None and data["action"] in ["suite_start", "suite_end"]:
# Hack: mozlog sets some internal state to prevent multiple suite_start or
# suite_end messages. We actually want that here (one from the metaharness
# and one from the individual test type harness), so override that internal
# state (a better solution might be to not share loggers, but this works well
# enough)
self.logger._state.suite_started = True
return
if (not self.verbose and
(data["action"] == "process_output" or
data["action"] == "log" and data["level"] not in ["error", "critical"])):
return
if "test" in data:
data = data.copy()
data["test"] = "%s: %s" % (self.product, data["test"])
return self.inner(data)
def test_settings():
return {
"include": "_test",
"manifest-update": "",
"no-capture-stdio": ""
}
def read_config():
parser = ConfigParser.ConfigParser()
parser.read("test.cfg")
rv = {"general":{},
"products":{}}
rv["general"].update(dict(parser.items("general")))
# This only allows a single configuration per product for now
for product in parser.sections():
if product != "general":
dest = rv["products"][product] = {}
for key, value in parser.items(product):
dest[key] = value
return rv
def run_tests(product, kwargs):
kwargs["test_paths"]["/_test/"] = {"tests_path": os.path.join(here, "testdata"),
"metadata_path": os.path.join(here, "metadata")}
wptrunner.run_tests(**kwargs)
def settings_to_argv(settings):
rv = []
for name, value in settings.iteritems():
key = "--%s" % name
if not value:
rv.append(key)
elif isinstance(value, list):
for item in value:
rv.extend([key, item])
else:
rv.extend([key, value])
return rv
def set_from_args(settings, args):
if args.test:
settings["include"] = args.test
def run(config, args):
logger = structuredlog.StructuredLogger("web-platform-tests")
logger.add_handler(ResultHandler(logger=logger, verbose=args.verbose))
setup_wptrunner_logging(logger)
parser = wptcommandline.create_parser()
logger.suite_start(tests=[])
for product, product_settings in config["products"].iteritems():
if args.product and product not in args.product:
continue
settings = test_settings()
settings.update(config["general"])
settings.update(product_settings)
settings["product"] = product
set_from_args(settings, args)
kwargs = vars(parser.parse_args(settings_to_argv(settings)))
wptcommandline.check_args(kwargs)
logger.send_message("wptrunner-test", "set-product", product)
run_tests(product, kwargs)
logger.send_message("wptrunner-test", "set-product", None)
logger.suite_end()
def get_parser():
parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="store_true", default=False,
help="verbose log output")
parser.add_argument("--product", action="append",
help="Specific product to include in test run")
parser.add_argument("--pdb", action="store_true",
help="Invoke pdb on uncaught exception")
parser.add_argument("test", nargs="*", type=wptcommandline.slash_prefixed,
help="Specific tests to include in test run")
return parser
def main():
config = read_config()
args = get_parser().parse_args()
try:
run(config, args)
except Exception:
if args.pdb:
import pdb, traceback
print traceback.format_exc()
pdb.post_mortem()
else:
raise
if __name__ == "__main__":
main()

View file

@ -0,0 +1,4 @@
<link rel=match href=green.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,3 @@
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,3 @@
<style>
:root {background-color:red}
</style>

View file

@ -0,0 +1,9 @@
<link rel=match href=green.html>
<style>
:root {background-color:red}
</style>
<script>
if (window.location.protocol === "https:") {
document.documentElement.style.backgroundColor = "green";
}
</script>

View file

@ -0,0 +1,5 @@
<title>Reftest chain that should fail</title>
<link rel=match href=reftest_and_fail_0-ref.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,5 @@
<title>Reftest chain that should fail</title>
<link rel=match href=red.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,5 @@
<title>Reftest with cycle, all match</title>
<link rel=match href=reftest_cycle_0-ref.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,5 @@
<title>OR match that should pass</title>
<link rel=match href=reftest_cycle_1-ref.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,5 @@
<title>Reftest with cycle, all match</title>
<link rel=match href=reftest_cycle.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,5 @@
<title>Reftest with cycle, fails</title>
<link rel=match href=reftest_cycle_fail_0-ref.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,5 @@
<title>Reftest with cycle, fails</title>
<link rel=mismatch href=reftest_cycle_fail.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,5 @@
<title>rel=match that should pass</title>
<link rel=match href=green.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,5 @@
<title>rel=match that should fail</title>
<link rel=match href=red.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,5 @@
<title>rel=mismatch that should pass</title>
<link rel=mismatch href=red.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,5 @@
<title>rel=mismatch that should fail</title>
<link rel=mismatch href=green.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,6 @@
<title>OR match that should pass</title>
<link rel=match href=red.html>
<link rel=match href=green.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,6 @@
<html class="reftest-wait">
<title>rel=match that should time out in the ref</title>
<link rel=match href=reftest_ref_timeout-ref.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,6 @@
<html>
<title>rel=match that should time out in the ref</title>
<link rel=match href=reftest_ref_timeout-ref.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,6 @@
<html class="reftest-wait">
<title>rel=match that should timeout</title>
<link rel=match href=green.html>
<style>
:root {background-color:green}
</style>

View file

@ -0,0 +1,11 @@
<title>rel=match that should fail</title>
<link rel=match href=red.html>
<style>
:root {background-color:red}
</style>
<body class="reftest-wait">
<script>
setTimeout(function() {
document.documentElement.style.backgroundColor = "green";
document.body.className = "";
}, 2000);

View file

@ -0,0 +1,10 @@
<!doctype html>
<title>Example https test</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
test(function() {
assert_equals(window.location.protocol, "https:");
}, "Test that file was loaded with the correct protocol");
</script>

View file

@ -0,0 +1,13 @@
<!doctype html>
<title>Simple testharness.js usage</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
test(function() {
assert_true(true);
}, "Test that should pass");
test(function() {
assert_true(false);
}, "Test that should fail");
</script>

View file

@ -0,0 +1,7 @@
<!doctype html>
<title>testharness.js test that should error</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
undefined_function()
</script>

View file

@ -0,0 +1,9 @@
<!doctype html>
<title>testharness.js test with long timeout</title>
<meta name=timeout content=long>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
var t = async_test("Long timeout test");
setTimeout(t.step_func_done(function() {assert_true(true)}), 15*1000);
</script>

View file

@ -0,0 +1,6 @@
<!doctype html>
<title>Simple testharness.js usage</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<!-- This file should time out because it loads the harness but never runs any tests -->

View file

@ -0,0 +1,11 @@
[products]
[web-platform-tests]
remote_url = https://github.com/w3c/web-platform-tests.git
branch = master
sync_path = %(pwd)s/sync
[manifest:default]
tests = %(pwd)s/tests
metadata = %(pwd)s/meta
url_base = /

View file

@ -0,0 +1,3 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.

View file

@ -0,0 +1,32 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
"""Subpackage where each product is defined. Each product is created by adding a
a .py file containing a __wptrunner__ variable in the global scope. This must be
a dictionary with the fields
"product": Name of the product, assumed to be unique.
"browser": String indicating the Browser implementation used to launch that
product.
"executor": Dictionary with keys as supported test types and values as the name
of the Executor implementation that will be used to run that test
type.
"browser_kwargs": String naming function that takes product, binary,
prefs_root and the wptrunner.run_tests kwargs dict as arguments
and returns a dictionary of kwargs to use when creating the
Browser class.
"executor_kwargs": String naming a function that takes http server url and
timeout multiplier and returns kwargs to use when creating
the executor class.
"env_options": String naming a funtion of no arguments that returns the
arguments passed to the TestEnvironment.
All classes and functions named in the above dict must be imported into the
module global scope.
"""
product_list = ["b2g",
"chrome",
"firefox",
"servo"]

View file

@ -0,0 +1,248 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import tempfile
import shutil
import subprocess
import fxos_appgen
import gaiatest
import mozdevice
import moznetwork
import mozrunner
from marionette import expected
from marionette.by import By
from marionette.wait import Wait
from mozprofile import FirefoxProfile, Preferences
from .base import get_free_port, BrowserError, Browser, ExecutorBrowser
from ..executors.executormarionette import MarionetteTestharnessExecutor
from ..hosts import HostsFile, HostsLine
here = os.path.split(__file__)[0]
__wptrunner__ = {"product": "b2g",
"check_args": "check_args",
"browser": "B2GBrowser",
"executor": {"testharness": "B2GMarionetteTestharnessExecutor"},
"browser_kwargs": "browser_kwargs",
"executor_kwargs": "executor_kwargs",
"env_options": "env_options"}
def check_args(**kwargs):
pass
def browser_kwargs(test_environment, **kwargs):
return {"prefs_root": kwargs["prefs_root"],
"no_backup": kwargs.get("b2g_no_backup", False)}
def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
timeout_multiplier = kwargs["timeout_multiplier"]
if timeout_multiplier is None:
timeout_multiplier = 2
executor_kwargs = {"server_config": server_config,
"timeout_multiplier": timeout_multiplier,
"close_after_done": False}
if test_type == "reftest":
executor_kwargs["cache_manager"] = cache_manager
return executor_kwargs
def env_options():
return {"host": "web-platform.test",
"bind_hostname": "false",
"test_server_port": False}
class B2GBrowser(Browser):
used_ports = set()
init_timeout = 180
def __init__(self, logger, prefs_root, no_backup=False):
Browser.__init__(self, logger)
logger.info("Waiting for device")
subprocess.call(["adb", "wait-for-device"])
self.device = mozdevice.DeviceManagerADB()
self.marionette_port = get_free_port(2828, exclude=self.used_ports)
self.used_ports.add(self.marionette_port)
self.cert_test_app = None
self.runner = None
self.prefs_root = prefs_root
self.no_backup = no_backup
self.backup_path = None
self.backup_paths = []
self.backup_dirs = []
def setup(self):
self.logger.info("Running B2G setup")
self.backup_path = tempfile.mkdtemp()
self.logger.debug("Backing up device to %s" % (self.backup_path,))
if not self.no_backup:
self.backup_dirs = [("/data/local", os.path.join(self.backup_path, "local")),
("/data/b2g/mozilla", os.path.join(self.backup_path, "profile"))]
self.backup_paths = [("/system/etc/hosts", os.path.join(self.backup_path, "hosts"))]
for remote, local in self.backup_dirs:
self.device.getDirectory(remote, local)
for remote, local in self.backup_paths:
self.device.getFile(remote, local)
self.setup_hosts()
def start(self):
profile = FirefoxProfile()
profile.set_preferences({"dom.disable_open_during_load": False,
"marionette.defaultPrefs.enabled": True})
self.logger.debug("Creating device runner")
self.runner = mozrunner.B2GDeviceRunner(profile=profile)
self.logger.debug("Starting device runner")
self.runner.start()
self.logger.debug("Device runner started")
def setup_hosts(self):
hostnames = ["web-platform.test",
"www.web-platform.test",
"www1.web-platform.test",
"www2.web-platform.test",
"xn--n8j6ds53lwwkrqhv28a.web-platform.test",
"xn--lve-6lad.web-platform.test"]
host_ip = moznetwork.get_ip()
temp_dir = tempfile.mkdtemp()
hosts_path = os.path.join(temp_dir, "hosts")
remote_path = "/system/etc/hosts"
try:
self.device.getFile("/system/etc/hosts", hosts_path)
with open(hosts_path) as f:
hosts_file = HostsFile.from_file(f)
for canonical_hostname in hostnames:
hosts_file.set_host(HostsLine(host_ip, canonical_hostname))
with open(hosts_path, "w") as f:
hosts_file.to_file(f)
self.logger.info("Installing hosts file")
self.device.remount()
self.device.removeFile(remote_path)
self.device.pushFile(hosts_path, remote_path)
finally:
os.unlink(hosts_path)
os.rmdir(temp_dir)
def load_prefs(self):
prefs_path = os.path.join(self.prefs_root, "prefs_general.js")
if os.path.exists(prefs_path):
preferences = Preferences.read_prefs(prefs_path)
else:
self.logger.warning("Failed to find base prefs file in %s" % prefs_path)
preferences = []
return preferences
def stop(self):
pass
def on_output(self):
raise NotImplementedError
def cleanup(self):
self.logger.debug("Running browser cleanup steps")
self.device.remount()
for remote, local in self.backup_dirs:
self.device.removeDir(remote)
self.device.pushDir(local, remote)
for remote, local in self.backup_paths:
self.device.removeFile(remote)
self.device.pushFile(local, remote)
shutil.rmtree(self.backup_path)
self.device.reboot(wait=True)
def pid(self):
return None
def is_alive(self):
return True
def executor_browser(self):
return B2GExecutorBrowser, {"marionette_port": self.marionette_port}
class B2GExecutorBrowser(ExecutorBrowser):
# The following methods are called from a different process
def __init__(self, *args, **kwargs):
ExecutorBrowser.__init__(self, *args, **kwargs)
import sys, subprocess
self.device = mozdevice.ADBDevice()
self.device.forward("tcp:%s" % self.marionette_port,
"tcp:2828")
self.executor = None
self.marionette = None
self.gaia_device = None
self.gaia_apps = None
def after_connect(self, executor):
self.executor = executor
self.marionette = executor.marionette
self.executor.logger.debug("Running browser.after_connect steps")
self.gaia_apps = gaiatest.GaiaApps(marionette=executor.marionette)
self.executor.logger.debug("Waiting for homescreen to load")
# Moved out of gaia_test temporarily
self.executor.logger.info("Waiting for B2G to be ready")
self.wait_for_homescreen(timeout=60)
self.install_cert_app()
self.use_cert_app()
def install_cert_app(self):
"""Install the container app used to run the tests"""
if fxos_appgen.is_installed("CertTest App"):
self.executor.logger.info("CertTest App is already installed")
return
self.executor.logger.info("Installing CertTest App")
app_path = os.path.join(here, "b2g_setup", "certtest_app.zip")
fxos_appgen.install_app("CertTest App", app_path, marionette=self.marionette)
self.executor.logger.debug("Install complete")
def use_cert_app(self):
"""Start the app used to run the tests"""
self.executor.logger.info("Homescreen loaded")
self.gaia_apps.launch("CertTest App")
def wait_for_homescreen(self, timeout):
self.executor.logger.info("Waiting for home screen to load")
Wait(self.marionette, timeout).until(expected.element_present(
By.CSS_SELECTOR, '#homescreen[loading-state=false]'))
class B2GMarionetteTestharnessExecutor(MarionetteTestharnessExecutor):
def after_connect(self):
self.browser.after_connect(self)
MarionetteTestharnessExecutor.after_connect(self)

View file

@ -0,0 +1,148 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import platform
import socket
from abc import ABCMeta, abstractmethod
from ..wptcommandline import require_arg
here = os.path.split(__file__)[0]
def cmd_arg(name, value=None):
prefix = "-" if platform.system() == "Windows" else "--"
rv = prefix + name
if value is not None:
rv += "=" + value
return rv
def get_free_port(start_port, exclude=None):
"""Get the first port number after start_port (inclusive) that is
not currently bound.
:param start_port: Integer port number at which to start testing.
:param exclude: Set of port numbers to skip"""
port = start_port
while True:
if exclude and port in exclude:
port += 1
continue
s = socket.socket()
try:
s.bind(("127.0.0.1", port))
except socket.error:
port += 1
else:
return port
finally:
s.close()
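# Usage sketch (mirroring how the browser classes in this package call it):
#
#     port = get_free_port(2828, exclude=used_ports)
#     used_ports.add(port)
#
# Sharing the exclude set between instances avoids handing the same port to
# two browsers before either has bound it.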
class BrowserError(Exception):
pass
class Browser(object):
__metaclass__ = ABCMeta
process_cls = None
init_timeout = 30
def __init__(self, logger):
"""Abstract class serving as the basis for Browser implementations.
The Browser is used in the TestRunnerManager to start and stop the browser
process, and to check the state of that process. This class also acts as a
context manager, enabling it to do browser-specific setup at the start of
the testrun and cleanup after the run is complete.
:param logger: Structured logger to use for output.
"""
self.logger = logger
def __enter__(self):
self.setup()
return self
def __exit__(self, *args, **kwargs):
self.cleanup()
def setup(self):
"""Used for browser-specific setup that happens at the start of a test run"""
pass
@abstractmethod
def start(self):
"""Launch the browser object and get it into a state where is is ready to run tests"""
pass
@abstractmethod
def stop(self):
"""Stop the running browser process."""
pass
@abstractmethod
def pid(self):
"""pid of the browser process or None if there is no pid"""
pass
@abstractmethod
def is_alive(self):
"""Boolean indicating whether the browser process is still running"""
pass
def setup_ssl(self, hosts):
"""Return a certificate to use for tests requiring ssl that will be trusted by the browser"""
raise NotImplementedError("ssl testing not supported")
def cleanup(self):
"""Browser-specific cleanup that is run after the testrun is finished"""
pass
def executor_browser(self):
"""Returns the ExecutorBrowser subclass for this Browser subclass and the keyword arguments
with which it should be instantiated"""
return ExecutorBrowser, {}
def log_crash(self, process, test):
"""Return a list of dictionaries containing information about crashes that happend
in the browser, or an empty list if no crashes occurred"""
self.logger.crash(process, test)
class NullBrowser(Browser):
"""No-op browser to use in scenarios where the TestRunnerManager shouldn't
actually own the browser process (e.g. Servo where we start one browser
per test)"""
def start(self):
pass
def stop(self):
pass
def pid(self):
return None
def is_alive(self):
return True
def on_output(self, line):
raise NotImplementedError
class ExecutorBrowser(object):
def __init__(self, **kwargs):
"""View of the Browser used by the Executor object.
This is needed because the Executor runs in a child process and
we can't ship Browser instances between processes on Windows.
Typically this will have a few product-specific properties set,
but in some cases it may have more elaborate methods for setting
up the browser from the runner process.
"""
for k, v in kwargs.iteritems():
setattr(self, k, v)
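# In practice a Browser subclass returns this class together with whatever its
# executor needs to connect, e.g. {"marionette_port": ...} for Firefox/B2G or
# {"webdriver_url": ...} for Chrome (see the product modules in this patch).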

View file

@ -0,0 +1,79 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
from .base import Browser, ExecutorBrowser, require_arg
from .webdriver import ChromedriverLocalServer
from ..executors import executor_kwargs as base_executor_kwargs
from ..executors.executorselenium import (SeleniumTestharnessExecutor,
SeleniumRefTestExecutor)
__wptrunner__ = {"product": "chrome",
"check_args": "check_args",
"browser": "ChromeBrowser",
"executor": {"testharness": "SeleniumTestharnessExecutor",
"reftest": "SeleniumRefTestExecutor"},
"browser_kwargs": "browser_kwargs",
"executor_kwargs": "executor_kwargs",
"env_options": "env_options"}
def check_args(**kwargs):
require_arg(kwargs, "binary")
def browser_kwargs(**kwargs):
return {"binary": kwargs["binary"],
"webdriver_binary": kwargs["webdriver_binary"]}
def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
from selenium.webdriver import DesiredCapabilities
executor_kwargs = base_executor_kwargs(test_type, server_config,
cache_manager, **kwargs)
executor_kwargs["close_after_done"] = True
executor_kwargs["capabilities"] = dict(DesiredCapabilities.CHROME.items() +
{"chromeOptions":
{"binary": kwargs["binary"]}}.items())
return executor_kwargs
def env_options():
return {"host": "web-platform.test",
"bind_hostname": "true"}
class ChromeBrowser(Browser):
"""Chrome is backed by chromedriver, which is supplied through
``browsers.webdriver.ChromedriverLocalServer``."""
def __init__(self, logger, binary, webdriver_binary="chromedriver"):
"""Creates a new representation of Chrome. The `binary` argument gives
the browser binary to use for testing."""
Browser.__init__(self, logger)
self.binary = binary
self.driver = ChromedriverLocalServer(self.logger, binary=webdriver_binary)
def start(self):
self.driver.start()
def stop(self):
self.driver.stop()
def pid(self):
return self.driver.pid
def is_alive(self):
# TODO(ato): This only indicates the driver is alive,
# and doesn't say anything about whether a browser session
# is active.
return self.driver.is_alive()
def cleanup(self):
self.stop()
def executor_browser(self):
return ExecutorBrowser, {"webdriver_url": self.driver.url}

View file

@ -0,0 +1,223 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import subprocess
import mozinfo
from mozprocess import ProcessHandler
from mozprofile import FirefoxProfile, Preferences
from mozprofile.permissions import ServerLocations
from mozrunner import FirefoxRunner
from mozcrash import mozcrash
from .base import get_free_port, Browser, ExecutorBrowser, require_arg, cmd_arg
from ..executors import executor_kwargs as base_executor_kwargs
from ..executors.executormarionette import MarionetteTestharnessExecutor, MarionetteRefTestExecutor
here = os.path.join(os.path.split(__file__)[0])
__wptrunner__ = {"product": "firefox",
"check_args": "check_args",
"browser": "FirefoxBrowser",
"executor": {"testharness": "MarionetteTestharnessExecutor",
"reftest": "MarionetteRefTestExecutor"},
"browser_kwargs": "browser_kwargs",
"executor_kwargs": "executor_kwargs",
"env_options": "env_options"}
def check_args(**kwargs):
require_arg(kwargs, "binary")
if kwargs["ssl_type"] != "none":
require_arg(kwargs, "certutil_binary")
def browser_kwargs(**kwargs):
return {"binary": kwargs["binary"],
"prefs_root": kwargs["prefs_root"],
"debug_args": kwargs["debug_args"],
"interactive": kwargs["interactive"],
"symbols_path": kwargs["symbols_path"],
"stackwalk_binary": kwargs["stackwalk_binary"],
"certutil_binary": kwargs["certutil_binary"],
"ca_certificate_path": kwargs["ssl_env"].ca_cert_path()}
def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
executor_kwargs = base_executor_kwargs(test_type, server_config,
cache_manager, **kwargs)
executor_kwargs["close_after_done"] = True
return executor_kwargs
def env_options():
return {"host": "127.0.0.1",
"external_host": "web-platform.test",
"bind_hostname": "false",
"certificate_domain": "web-platform.test",
"encrypt_after_connect": True}
class FirefoxBrowser(Browser):
used_ports = set()
def __init__(self, logger, binary, prefs_root, debug_args=None, interactive=None,
symbols_path=None, stackwalk_binary=None, certutil_binary=None,
ca_certificate_path=None):
Browser.__init__(self, logger)
self.binary = binary
self.prefs_root = prefs_root
self.marionette_port = None
self.used_ports.add(self.marionette_port)
self.runner = None
self.debug_args = debug_args
self.interactive = interactive
self.profile = None
self.symbols_path = symbols_path
self.stackwalk_binary = stackwalk_binary
self.ca_certificate_path = ca_certificate_path
self.certutil_binary = certutil_binary
def start(self):
self.marionette_port = get_free_port(2828, exclude=self.used_ports)
env = os.environ.copy()
env["MOZ_CRASHREPORTER"] = "1"
env["MOZ_CRASHREPORTER_SHUTDOWN"] = "1"
env["MOZ_CRASHREPORTER_NO_REPORT"] = "1"
env["MOZ_DISABLE_NONLOCAL_CONNECTIONS"] = "1"
locations = ServerLocations(filename=os.path.join(here, "server-locations.txt"))
preferences = self.load_prefs()
ports = {"http": "8000",
"https": "8443",
"ws": "8888"}
self.profile = FirefoxProfile(locations=locations,
proxy=ports,
preferences=preferences)
self.profile.set_preferences({"marionette.defaultPrefs.enabled": True,
"marionette.defaultPrefs.port": self.marionette_port,
"dom.disable_open_during_load": False})
if self.ca_certificate_path is not None:
self.setup_ssl()
self.runner = FirefoxRunner(profile=self.profile,
binary=self.binary,
cmdargs=[cmd_arg("marionette"), "about:blank"],
env=env,
process_class=ProcessHandler,
process_args={"processOutputLine": [self.on_output]})
self.logger.debug("Starting Firefox")
self.runner.start(debug_args=self.debug_args, interactive=self.interactive)
self.logger.debug("Firefox Started")
def load_prefs(self):
prefs_path = os.path.join(self.prefs_root, "prefs_general.js")
if os.path.exists(prefs_path):
preferences = Preferences.read_prefs(prefs_path)
else:
self.logger.warning("Failed to find base prefs file in %s" % prefs_path)
preferences = []
return preferences
def stop(self):
self.logger.debug("Stopping browser")
if self.runner is not None:
try:
self.runner.stop()
except OSError:
# This can happen on Windows if the process is already dead
pass
def pid(self):
if self.runner.process_handler is None:
return None
try:
return self.runner.process_handler.pid
except AttributeError:
return None
def on_output(self, line):
"""Write a line of output from the firefox process to the log"""
self.logger.process_output(self.pid(),
line.decode("utf8", "replace"),
command=" ".join(self.runner.command))
def is_alive(self):
if self.runner:
return self.runner.is_running()
return False
def cleanup(self):
self.stop()
def executor_browser(self):
assert self.marionette_port is not None
return ExecutorBrowser, {"marionette_port": self.marionette_port}
def log_crash(self, process, test):
dump_dir = os.path.join(self.profile.profile, "minidumps")
mozcrash.log_crashes(self.logger,
dump_dir,
symbols_path=self.symbols_path,
stackwalk_binary=self.stackwalk_binary,
process=process,
test=test)
def setup_ssl(self):
"""Create a certificate database to use in the test profile. This is configured
to trust the CA Certificate that has signed the web-platform.test server
certificate."""
self.logger.info("Setting up ssl")
# Make sure the certutil libraries from the source tree are loaded when using a
# local copy of certutil
# TODO: Maybe only set this if certutil won't launch?
env = os.environ.copy()
certutil_dir = os.path.dirname(self.binary)
if mozinfo.isMac:
env_var = "DYLD_LIBRARY_PATH"
elif mozinfo.isUnix:
env_var = "LD_LIBRARY_PATH"
else:
env_var = "PATH"
env[env_var] = (os.path.pathsep.join([certutil_dir, env[env_var]])
if env_var in env else certutil_dir)
def certutil(*args):
cmd = [self.certutil_binary] + list(args)
self.logger.process_output("certutil",
subprocess.check_output(cmd,
env=env,
stderr=subprocess.STDOUT),
" ".join(cmd))
pw_path = os.path.join(self.profile.profile, ".crtdbpw")
with open(pw_path, "w") as f:
# Use empty password for certificate db
f.write("\n")
cert_db_path = self.profile.profile
# Create a new certificate db
certutil("-N", "-d", cert_db_path, "-f", pw_path)
# Add the CA certificate to the database and mark as trusted to issue server certs
certutil("-A", "-d", cert_db_path, "-f", pw_path, "-t", "CT,,",
"-n", "web-platform-tests", "-i", self.ca_certificate_path)
# List all certs in the database
certutil("-L", "-d", cert_db_path)

View file

@ -0,0 +1,38 @@
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# See /build/pgo/server-locations.txt for documentation on the format
http://localhost:8000 primary
http://web-platform.test:8000
http://www.web-platform.test:8000
http://www1.web-platform.test:8000
http://www2.web-platform.test:8000
http://xn--n8j6ds53lwwkrqhv28a.web-platform.test:8000
http://xn--lve-6lad.web-platform.test:8000
http://web-platform.test:8001
http://www.web-platform.test:8001
http://www1.web-platform.test:8001
http://www2.web-platform.test:8001
http://xn--n8j6ds53lwwkrqhv28a.web-platform.test:8001
http://xn--lve-6lad.web-platform.test:8001
https://web-platform.test:8443
https://www.web-platform.test:8443
https://www1.web-platform.test:8443
https://www2.web-platform.test:8443
https://xn--n8j6ds53lwwkrqhv28a.web-platform.test:8443
https://xn--lve-6lad.web-platform.test:8443
# These are actually ws servers, but until mozprofile is
# fixed we have to pretend that they are http servers
http://web-platform.test:8888
http://www.web-platform.test:8888
http://www1.web-platform.test:8888
http://www2.web-platform.test:8888
http://xn--n8j6ds53lwwkrqhv28a.web-platform.test:8888
http://xn--lve-6lad.web-platform.test:8888

View file

@ -0,0 +1,55 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import os
from .base import NullBrowser, ExecutorBrowser, require_arg
from ..executors import executor_kwargs as base_executor_kwargs
from ..executors.executorservo import ServoTestharnessExecutor, ServoRefTestExecutor
here = os.path.join(os.path.split(__file__)[0])
__wptrunner__ = {"product": "servo",
"check_args": "check_args",
"browser": "ServoBrowser",
"executor": {"testharness": "ServoTestharnessExecutor",
"reftest": "ServoRefTestExecutor"},
"browser_kwargs": "browser_kwargs",
"executor_kwargs": "executor_kwargs",
"env_options": "env_options"}
def check_args(**kwargs):
require_arg(kwargs, "binary")
def browser_kwargs(**kwargs):
return {"binary": kwargs["binary"],
"debug_args": kwargs["debug_args"],
"interactive": kwargs["interactive"]}
def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
rv = base_executor_kwargs(test_type, server_config,
cache_manager, **kwargs)
rv["pause_after_test"] = kwargs["pause_after_test"]
return rv
def env_options():
return {"host": "localhost",
"bind_hostname": "true",
"testharnessreport": "testharnessreport-servo.js"}
class ServoBrowser(NullBrowser):
def __init__(self, logger, binary, debug_args=None, interactive=False):
NullBrowser.__init__(self, logger)
self.binary = binary
self.debug_args = debug_args
self.interactive = interactive
def executor_browser(self):
return ExecutorBrowser, {"binary": self.binary,
"debug_args": self.debug_args,
"interactive": self.interactive}

View file

@ -0,0 +1,137 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import errno
import socket
import time
import traceback
import urlparse
import mozprocess
from .base import get_free_port, cmd_arg
__all__ = ["SeleniumLocalServer", "ChromedriverLocalServer"]
class LocalServer(object):
used_ports = set()
default_endpoint = "/"
def __init__(self, logger, binary, port=None, endpoint=None):
self.logger = logger
self.binary = binary
self.port = port
self.endpoint = endpoint or self.default_endpoint
if self.port is None:
self.port = get_free_port(4444, exclude=self.used_ports)
self.used_ports.add(self.port)
self.url = "http://127.0.0.1:%i%s" % (self.port, self.endpoint)
self.proc, self.cmd = None, None
def start(self):
self.proc = mozprocess.ProcessHandler(
self.cmd, processOutputLine=self.on_output)
try:
self.proc.run()
except OSError as e:
if e.errno == errno.ENOENT:
raise IOError(
"Server executable not found: %s" % self.binary)
raise
self.logger.debug(
"Waiting for server to become accessible: %s" % self.url)
surl = urlparse.urlparse(self.url)
addr = (surl.hostname, surl.port)
try:
wait_service(addr)
except:
self.logger.error(
"Server was not accessible within the timeout:\n%s" % traceback.format_exc())
raise
else:
self.logger.info("Server listening on port %i" % self.port)
def stop(self):
if hasattr(self.proc, "proc"):
self.proc.kill()
def is_alive(self):
if hasattr(self.proc, "proc"):
exitcode = self.proc.poll()
return exitcode is None
return False
def on_output(self, line):
self.logger.process_output(self.pid,
line.decode("utf8", "replace"),
command=" ".join(self.cmd))
@property
def pid(self):
if hasattr(self.proc, "proc"):
return self.proc.pid
class SeleniumLocalServer(LocalServer):
default_endpoint = "/wd/hub"
def __init__(self, logger, binary, port=None):
LocalServer.__init__(self, logger, binary, port=port)
self.cmd = ["java",
"-jar", self.binary,
"-port", str(self.port)]
def start(self):
self.logger.debug("Starting local Selenium server")
LocalServer.start(self)
def stop(self):
LocalServer.stop(self)
self.logger.info("Selenium server stopped listening")
class ChromedriverLocalServer(LocalServer):
default_endpoint = "/wd/hub"
def __init__(self, logger, binary="chromedriver", port=None, endpoint=None):
LocalServer.__init__(self, logger, binary, port=port, endpoint=endpoint)
# TODO: verbose logging
self.cmd = [self.binary,
cmd_arg("port", str(self.port)) if self.port else "",
cmd_arg("url-base", self.endpoint) if self.endpoint else ""]
def start(self):
self.logger.debug("Starting local chromedriver server")
LocalServer.start(self)
def stop(self):
LocalServer.stop(self)
self.logger.info("chromedriver server stopped listening")
def wait_service(addr, timeout=15):
"""Waits until network service given as a tuple of (host, port) becomes
available or the `timeout` duration is reached, at which point
``socket.error`` is raised."""
end = time.time() + timeout
while end > time.time():
so = socket.socket()
try:
so.connect(addr)
except socket.timeout:
pass
except socket.error as e:
if e[0] != errno.ECONNREFUSED:
raise
else:
return True
finally:
so.close()
time.sleep(0.5)
raise socket.error("Service is unavailable: %s:%i" % addr)
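# Usage sketch: LocalServer.start() above blocks on this until the driver's
# HTTP endpoint accepts connections, e.g.
#
#     wait_service(("127.0.0.1", 4444), timeout=15)
#
# where 4444 is the default start port handed to get_free_port in this module.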

View file

@ -0,0 +1,7 @@
{"host": "%(host)s",
"ports":{"http":[8000, 8001],
"https":[8443],
"ws":[8888]},
"check_subdomains":false,
"bind_hostname":%(bind_hostname)s,
"ssl":{}}

View file

@ -0,0 +1,64 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import ConfigParser
import os
import sys
from collections import OrderedDict
here = os.path.split(__file__)[0]
class ConfigDict(dict):
def __init__(self, base_path, *args, **kwargs):
self.base_path = base_path
dict.__init__(self, *args, **kwargs)
def get_path(self, key, default=None):
if key not in self:
return default
path = self[key]
path = os.path.expanduser(path)
return os.path.abspath(os.path.join(self.base_path, path))
def read(config_path):
config_path = os.path.abspath(config_path)
config_root = os.path.split(config_path)[0]
parser = ConfigParser.SafeConfigParser()
success = parser.read(config_path)
assert config_path in success, success
subns = {"pwd": os.path.abspath(os.path.curdir)}
rv = OrderedDict()
for section in parser.sections():
rv[section] = ConfigDict(config_root)
for key in parser.options(section):
rv[section][key] = parser.get(section, key, False, subns)
return rv
def path(argv=None):
if argv is None:
argv = []
path = None
for i, arg in enumerate(argv):
if arg == "--config":
if i + 1 < len(argv):
path = argv[i + 1]
elif arg.startswith("--config="):
path = arg.split("=", 1)[1]
if path is not None:
break
if path is None:
if os.path.exists("wptrunner.ini"):
path = os.path.abspath("wptrunner.ini")
else:
path = os.path.join(here, "..", "wptrunner.default.ini")
return os.path.abspath(path)
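# For example (illustrative path): path(["--config", "/home/user/wptrunner.ini"])
# returns the absolute path to that file, while path([]) falls back to
# ./wptrunner.ini if it exists and otherwise to the bundled
# wptrunner.default.ini.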
def load():
return read(path(sys.argv))

View file

@ -0,0 +1,226 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import json
import os
import multiprocessing
import socket
import sys
import time
from mozlog.structured import get_default_logger, handlers
from wptlogging import LogLevelRewriter
here = os.path.split(__file__)[0]
serve = None
sslutils = None
def do_delayed_imports(logger, test_paths):
global serve, sslutils
serve_root = serve_path(test_paths)
sys.path.insert(0, serve_root)
failed = []
try:
from tools.serve import serve
except ImportError:
failed.append("serve")
try:
import sslutils
except ImportError:
failed.append("sslutils")
if failed:
logger.critical(
"Failed to import %s. Ensure that tests path %s contains web-platform-tests" %
(", ".join(failed), serve_root))
sys.exit(1)
def serve_path(test_paths):
return test_paths["/"]["tests_path"]
def get_ssl_kwargs(**kwargs):
if kwargs["ssl_type"] == "openssl":
args = {"openssl_binary": kwargs["openssl_binary"]}
elif kwargs["ssl_type"] == "pregenerated":
args = {"host_key_path": kwargs["host_key_path"],
"host_cert_path": kwargs["host_cert_path"],
"ca_cert_path": kwargs["ca_cert_path"]}
else:
args = {}
return args
def ssl_env(logger, **kwargs):
ssl_env_cls = sslutils.environments[kwargs["ssl_type"]]
return ssl_env_cls(logger, **get_ssl_kwargs(**kwargs))
class TestEnvironmentError(Exception):
pass
class StaticHandler(object):
def __init__(self, path, format_args, content_type, **headers):
with open(path) as f:
self.data = f.read() % format_args
self.resp_headers = [("Content-Type", content_type)]
for k, v in headers.iteritems():
self.resp_headers.append((k.replace("_", "-"), v))
self.handler = serve.handlers.handler(self.handle_request)
def handle_request(self, request, response):
return self.resp_headers, self.data
def __call__(self, request, response):
rv = self.handler(request, response)
return rv
class TestEnvironment(object):
def __init__(self, test_paths, ssl_env, pause_after_test, options):
"""Context manager that owns the test environment i.e. the http and
websockets servers"""
self.test_paths = test_paths
self.ssl_env = ssl_env
self.server = None
self.config = None
self.external_config = None
self.pause_after_test = pause_after_test
self.test_server_port = options.pop("test_server_port", True)
self.options = options if options is not None else {}
self.cache_manager = multiprocessing.Manager()
self.routes = self.get_routes()
def __enter__(self):
self.ssl_env.__enter__()
self.cache_manager.__enter__()
self.setup_server_logging()
self.config = self.load_config()
serve.set_computed_defaults(self.config)
self.external_config, self.servers = serve.start(self.config, self.ssl_env,
self.routes)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.cache_manager.__exit__(exc_type, exc_val, exc_tb)
self.ssl_env.__exit__(exc_type, exc_val, exc_tb)
for scheme, servers in self.servers.iteritems():
for port, server in servers:
server.kill()
def load_config(self):
default_config_path = os.path.join(serve_path(self.test_paths), "config.default.json")
local_config_path = os.path.join(here, "config.json")
with open(default_config_path) as f:
default_config = json.load(f)
with open(local_config_path) as f:
data = f.read()
local_config = json.loads(data % self.options)
#TODO: allow non-default configuration for ssl
local_config["external_host"] = self.options.get("external_host", None)
local_config["ssl"]["encrypt_after_connect"] = self.options.get("encrypt_after_connect", False)
config = serve.merge_json(default_config, local_config)
config["doc_root"] = serve_path(self.test_paths)
if not self.ssl_env.ssl_enabled:
config["ports"]["https"] = [None]
host = self.options.get("certificate_domain", config["host"])
hosts = [host]
hosts.extend("%s.%s" % (item[0], host) for item in serve.get_subdomains(host).values())
key_file, certificate = self.ssl_env.host_cert_path(hosts)
config["key_file"] = key_file
config["certificate"] = certificate
return config
def setup_server_logging(self):
server_logger = get_default_logger(component="wptserve")
assert server_logger is not None
log_filter = handlers.LogLevelFilter(lambda x:x, "info")
# Downgrade errors to warnings for the server
log_filter = LogLevelRewriter(log_filter, ["error"], "warning")
server_logger.component_filter = log_filter
try:
#Set as the default logger for wptserve
serve.set_logger(server_logger)
serve.logger = server_logger
except Exception:
# This happens if logging has already been set up for wptserve
pass
def get_routes(self):
routes = serve.default_routes()
for path, format_args, content_type, route in [
("testharness_runner.html", {}, "text/html", "/testharness_runner.html"),
(self.options.get("testharnessreport", "testharnessreport.js"),
{"output": self.pause_after_test}, "text/javascript",
"/resources/testharnessreport.js")]:
handler = StaticHandler(os.path.join(here, path), format_args, content_type)
routes.insert(0, (b"GET", str(route), handler))
for url, paths in self.test_paths.iteritems():
if url == "/":
continue
path = paths["tests_path"]
url = "/%s/" % url.strip("/")
for (method,
suffix,
handler_cls) in [(b"*",
b"*.py",
serve.handlers.PythonScriptHandler),
(b"GET",
"*.asis",
serve.handlers.AsIsHandler),
(b"GET",
"*",
serve.handlers.FileHandler)]:
route = (method, b"%s%s" % (str(url), str(suffix)), handler_cls(path, url_base=url))
routes.insert(-3, route)
if "/" not in self.test_paths:
routes = routes[:-3]
return routes
def ensure_started(self):
# Pause for a while to ensure that the server has a chance to start
time.sleep(2)
for scheme, servers in self.servers.iteritems():
for port, server in servers:
if self.test_server_port:
s = socket.socket()
try:
s.connect((self.config["host"], port))
except socket.error:
raise EnvironmentError(
"%s server on port %d failed to start" % (scheme, port))
finally:
s.close()
if not server.is_alive():
raise EnvironmentError("%s server on port %d failed to start" % (scheme, port))

View file

@ -0,0 +1,8 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
from base import (executor_kwargs,
testharness_result_converter,
reftest_result_converter,
TestExecutor)

View file

@ -0,0 +1,301 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import hashlib
import json
import os
import traceback
import urlparse
from abc import ABCMeta, abstractmethod
from ..testrunner import Stop
here = os.path.split(__file__)[0]
def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
timeout_multiplier = kwargs["timeout_multiplier"]
if timeout_multiplier is None:
timeout_multiplier = 1
executor_kwargs = {"server_config": server_config,
"timeout_multiplier": timeout_multiplier,
"debug_args": kwargs["debug_args"]}
if test_type == "reftest":
executor_kwargs["screenshot_cache"] = cache_manager.dict()
return executor_kwargs
def strip_server(url):
"""Remove the scheme and netloc from a url, leaving only the path and any query
or fragment.
url - the url to strip
e.g. http://example.org:8000/tests?id=1#2 becomes /tests?id=1#2"""
url_parts = list(urlparse.urlsplit(url))
url_parts[0] = ""
url_parts[1] = ""
return urlparse.urlunsplit(url_parts)
class TestharnessResultConverter(object):
harness_codes = {0: "OK",
1: "ERROR",
2: "TIMEOUT"}
test_codes = {0: "PASS",
1: "FAIL",
2: "TIMEOUT",
3: "NOTRUN"}
def __call__(self, test, result):
"""Convert a JSON result into a (TestResult, [SubtestResult]) tuple"""
assert result["test"] == test.url, ("Got results from %s, expected %s" %
(result["test"], test.url))
harness_result = test.result_cls(self.harness_codes[result["status"]], result["message"])
return (harness_result,
[test.subtest_result_cls(subtest["name"], self.test_codes[subtest["status"]],
subtest["message"], subtest.get("stack", None)) for subtest in result["tests"]])
testharness_result_converter = TestharnessResultConverter()
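# For reference, a hypothetical result dict consumed by the converter above is
# shaped roughly like:
#
#     {"test": "/dom/example.html",
#      "status": 0,        # harness status, mapped via harness_codes (0 -> OK)
#      "message": None,
#      "tests": [{"name": "example subtest",
#                 "status": 0,  # subtest status, mapped via test_codes (0 -> PASS)
#                 "message": None,
#                 "stack": None}]}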
def reftest_result_converter(self, test, result):
return (test.result_cls(result["status"], result["message"],
extra=result.get("extra")), [])
class ExecutorException(Exception):
def __init__(self, status, message):
self.status = status
self.message = message
class TestExecutor(object):
__metaclass__ = ABCMeta
test_type = None
convert_result = None
def __init__(self, browser, server_config, timeout_multiplier=1,
debug_args=None):
"""Abstract Base class for object that actually executes the tests in a
specific browser. Typically there will be a different TestExecutor
subclass for each test type and method of executing tests.
:param browser: ExecutorBrowser instance providing properties of the
browser that will be tested.
:param server_config: Dictionary of wptserve server configuration of the
form stored in TestEnvironment.external_config
:param timeout_multiplier: Multiplier relative to base timeout to use
when setting test timeout.
"""
self.runner = None
self.browser = browser
self.server_config = server_config
self.timeout_multiplier = timeout_multiplier
self.debug_args = debug_args
self.last_protocol = "http"
self.protocol = None # This must be set in subclasses
@property
def logger(self):
"""StructuredLogger for this executor"""
if self.runner is not None:
return self.runner.logger
def setup(self, runner):
"""Run steps needed before tests can be started e.g. connecting to
browser instance
:param runner: TestRunner instance that is going to run the tests"""
self.runner = runner
self.protocol.setup(runner)
def teardown(self):
"""Run cleanup steps after tests have finished"""
self.protocol.teardown()
def run_test(self, test):
"""Run a particular test.
:param test: The test to run"""
if test.protocol != self.last_protocol:
self.on_protocol_change(test.protocol)
try:
result = self.do_test(test)
except Exception as e:
result = self.result_from_exception(test, e)
if result is Stop:
return result
if result[0].status == "ERROR":
self.logger.debug(result[0].message)
self.last_protocol = test.protocol
self.runner.send_message("test_ended", test, result)
def server_url(self, protocol):
return "%s://%s:%s" % (protocol,
self.server_config["host"],
self.server_config["ports"][protocol][0])
def test_url(self, test):
return urlparse.urljoin(self.server_url(test.protocol), test.url)
@abstractmethod
def do_test(self, test):
"""Test-type and protocol specific implmentation of running a
specific test.
:param test: The test to run."""
pass
def on_protocol_change(self, new_protocol):
pass
def result_from_exception(self, test, e):
if hasattr(e, "status") and e.status in test.result_cls.statuses:
status = e.status
else:
status = "ERROR"
message = unicode(getattr(e, "message", ""))
if message:
message += "\n"
message += traceback.format_exc(e)
return test.result_cls(status, message), []
class TestharnessExecutor(TestExecutor):
convert_result = testharness_result_converter
class RefTestExecutor(TestExecutor):
convert_result = reftest_result_converter
def __init__(self, browser, server_config, timeout_multiplier=1, screenshot_cache=None,
debug_args=None):
TestExecutor.__init__(self, browser, server_config,
timeout_multiplier=timeout_multiplier,
debug_args=debug_args)
self.screenshot_cache = screenshot_cache
class RefTestImplementation(object):
def __init__(self, executor):
self.timeout_multiplier = executor.timeout_multiplier
self.executor = executor
# Cache of url:(screenshot hash, screenshot). Typically the
# screenshot is None, but we set this value if a test fails
# and the screenshot was taken from the cache so that we may
# retrieve the screenshot from the cache directly in the future
self.screenshot_cache = self.executor.screenshot_cache
self.message = None
@property
def logger(self):
return self.executor.logger
def get_hash(self, test):
timeout = test.timeout * self.timeout_multiplier
if test.url not in self.screenshot_cache:
success, data = self.executor.screenshot(test)
if not success:
return False, data
screenshot = data
hash_value = hashlib.sha1(screenshot).hexdigest()
self.screenshot_cache[test.url] = (hash_value, None)
rv = True, (hash_value, screenshot)
else:
rv = True, self.screenshot_cache[test.url]
self.message.append("%s %s" % (test.url, rv[1][0]))
return rv
def is_pass(self, lhs_hash, rhs_hash, relation):
assert relation in ("==", "!=")
self.message.append("Testing %s %s %s" % (lhs_hash, relation, rhs_hash))
return ((relation == "==" and lhs_hash == rhs_hash) or
(relation == "!=" and lhs_hash != rhs_hash))
def run_test(self, test):
self.message = []
# Depth-first search of reference tree, with the goal
# of reaching a leaf node with only pass results
stack = list(((test, item[0]), item[1]) for item in reversed(test.references))
while stack:
hashes = [None, None]
screenshots = [None, None]
nodes, relation = stack.pop()
for i, node in enumerate(nodes):
success, data = self.get_hash(node)
if success is False:
return {"status": data[0], "message": data[1]}
hashes[i], screenshots[i] = data
if self.is_pass(hashes[0], hashes[1], relation):
if nodes[1].references:
stack.extend(list(((nodes[1], item[0]), item[1]) for item in reversed(nodes[1].references)))
else:
# We passed
return {"status":"PASS", "message": None}
# We failed, so construct a failure message
for i, (node, screenshot) in enumerate(zip(nodes, screenshots)):
if screenshot is None:
success, screenshot = self.retake_screenshot(node)
if success:
screenshots[i] = screenshot
log_data = [{"url": nodes[0].url, "screenshot": screenshots[0]}, relation,
{"url": nodes[1].url, "screenshot": screenshots[1]}]
return {"status": "FAIL",
"message": "\n".join(self.message),
"extra": {"reftest_screenshots": log_data}}
def retake_screenshot(self, node):
success, data = self.executor.screenshot(node)
if not success:
return False, data
hash_val, _ = self.screenshot_cache[node.url]
self.screenshot_cache[node.url] = hash_val, data
return True, data
class Protocol(object):
def __init__(self, executor, browser):
self.executor = executor
self.browser = browser
@property
def logger(self):
return self.executor.logger
def setup(self, runner):
pass
def teardown(self):
pass
def wait(self):
pass

View file

@ -0,0 +1,322 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import hashlib
import os
import socket
import sys
import threading
import time
import traceback
import urlparse
import uuid
from collections import defaultdict
marionette = None
here = os.path.join(os.path.split(__file__)[0])
from .base import (ExecutorException,
Protocol,
RefTestExecutor,
RefTestImplementation,
TestExecutor,
TestharnessExecutor,
testharness_result_converter,
reftest_result_converter,
strip_server)
from ..testrunner import Stop
# Extra timeout to use after internal test timeout at which the harness
# should force a timeout
extra_timeout = 5 # seconds
def do_delayed_imports():
global marionette
global errors
try:
import marionette
from marionette import errors
except ImportError:
from marionette_driver import marionette, errors
class MarionetteProtocol(Protocol):
def __init__(self, executor, browser):
do_delayed_imports()
Protocol.__init__(self, executor, browser)
self.marionette = None
self.marionette_port = browser.marionette_port
def setup(self, runner):
"""Connect to browser via Marionette."""
Protocol.setup(self, runner)
self.logger.debug("Connecting to marionette on port %i" % self.marionette_port)
self.marionette = marionette.Marionette(host='localhost', port=self.marionette_port)
# XXX Move this timeout somewhere
self.logger.debug("Waiting for Marionette connection")
while True:
success = self.marionette.wait_for_port(60)
# When running in a debugger, wait indefinitely for Firefox to start
if success or self.executor.debug_args is None:
break
session_started = False
if success:
try:
self.logger.debug("Starting Marionette session")
self.marionette.start_session()
except Exception as e:
self.logger.warning("Starting marionette session failed: %s" % e)
else:
self.logger.debug("Marionette session started")
session_started = True
if not success or not session_started:
self.logger.warning("Failed to connect to Marionette")
self.executor.runner.send_message("init_failed")
else:
try:
self.after_connect()
except Exception:
self.logger.warning("Post-connection steps failed")
self.logger.error(traceback.format_exc())
self.executor.runner.send_message("init_failed")
else:
self.executor.runner.send_message("init_succeeded")
def teardown(self):
try:
self.marionette.delete_session()
except Exception:
# This is typically because the session never started
pass
del self.marionette
def is_alive(self):
"""Check if the marionette connection is still active"""
try:
# Get a simple property over the connection
self.marionette.current_window_handle
except Exception:
return False
return True
def after_connect(self):
self.load_runner("http")
def load_runner(self, protocol):
# Check if we previously had a test window open, and if we did make sure it's closed
self.marionette.execute_script("if (window.wrappedJSObject.win) {window.wrappedJSObject.win.close()}")
url = urlparse.urljoin(self.executor.server_url(protocol), "/testharness_runner.html")
self.logger.debug("Loading %s" % url)
try:
self.marionette.navigate(url)
except Exception as e:
self.logger.critical(
"Loading initial page %s failed. Ensure that the "
"there are no other programs bound to this port and "
"that your firewall rules or network setup does not "
"prevent access.\e%s" % (url, traceback.format_exc(e)))
self.marionette.execute_script(
"document.title = '%s'" % threading.current_thread().name.replace("'", '"'))
def wait(self):
while True:
try:
self.marionette.execute_async_script("")
except errors.ScriptTimeoutException:
pass
except (socket.timeout, errors.InvalidResponseException, IOError):
break
except Exception as e:
self.logger.error(traceback.format_exc(e))
break
class MarionetteRun(object):
def __init__(self, logger, func, marionette, url, timeout):
self.logger = logger
self.result = None
self.marionette = marionette
self.func = func
self.url = url
self.timeout = timeout
self.result_flag = threading.Event()
def run(self):
timeout = self.timeout
try:
if timeout is not None:
self.marionette.set_script_timeout((timeout + extra_timeout) * 1000)
else:
# We just want it to never time out, really, but marionette doesn't
# make that possible. It also seems to time out immediately if the
# timeout is set too high. This works at least.
self.marionette.set_script_timeout(2**31 - 1)
except (IOError, errors.InvalidResponseException):
self.logger.error("Lost marionette connection before starting test")
return Stop
executor = threading.Thread(target = self._run)
executor.start()
if timeout is not None:
wait_timeout = timeout + 2 * extra_timeout
else:
wait_timeout = None
flag = self.result_flag.wait(wait_timeout)
if self.result is None:
self.logger.debug("Timed out waiting for a result")
assert not flag
self.result = False, ("EXTERNAL-TIMEOUT", None)
return self.result
def _run(self):
try:
self.result = True, self.func(self.marionette, self.url, self.timeout)
except errors.ScriptTimeoutException:
self.logger.debug("Got a marionette timeout")
self.result = False, ("EXTERNAL-TIMEOUT", None)
except (socket.timeout, errors.InvalidResponseException, IOError):
# This can happen on a crash
# Also, should check after the test if the firefox process is still running
# and otherwise ignore any other result and set it to crash
self.result = False, ("CRASH", None)
except Exception as e:
message = getattr(e, "message", "")
if message:
message += "\n"
message += traceback.format_exc(e)
self.result = False, ("ERROR", e)
finally:
self.result_flag.set()
class MarionetteTestharnessExecutor(TestharnessExecutor):
def __init__(self, browser, server_config, timeout_multiplier=1, close_after_done=True,
debug_args=None):
"""Marionette-based executor for testharness.js tests"""
TestharnessExecutor.__init__(self, browser, server_config,
timeout_multiplier=timeout_multiplier,
debug_args=debug_args)
self.protocol = MarionetteProtocol(self, browser)
self.script = open(os.path.join(here, "testharness_marionette.js")).read()
self.close_after_done = close_after_done
self.window_id = str(uuid.uuid4())
if marionette is None:
do_delayed_imports()
def is_alive(self):
return self.protocol.is_alive()
def on_protocol_change(self, new_protocol):
self.protocol.load_runner(new_protocol)
def do_test(self, test):
timeout = (test.timeout * self.timeout_multiplier if self.debug_args is None
else None)
success, data = MarionetteRun(self.logger,
self.do_testharness,
self.protocol.marionette,
self.test_url(test),
timeout).run()
if success:
return self.convert_result(test, data)
return (test.result_cls(*data), [])
def do_testharness(self, marionette, url, timeout):
if self.close_after_done:
marionette.execute_script("if (window.wrappedJSObject.win) {window.wrappedJSObject.win.close()}")
if timeout is not None:
timeout_ms = str(timeout * 1000)
else:
timeout_ms = "null"
script = self.script % {"abs_url": url,
"url": strip_server(url),
"window_id": self.window_id,
"timeout_multiplier": self.timeout_multiplier,
"timeout": timeout_ms,
"explicit_timeout": timeout is None}
return marionette.execute_async_script(script, new_sandbox=False)
class MarionetteRefTestExecutor(RefTestExecutor):
def __init__(self, browser, server_config, timeout_multiplier=1,
screenshot_cache=None, close_after_done=True, debug_args=None):
"""Marionette-based executor for reftests"""
RefTestExecutor.__init__(self,
browser,
server_config,
screenshot_cache=screenshot_cache,
timeout_multiplier=timeout_multiplier,
debug_args=debug_args)
self.protocol = MarionetteProtocol(self, browser)
self.implementation = RefTestImplementation(self)
self.close_after_done = close_after_done
self.has_window = False
with open(os.path.join(here, "reftest.js")) as f:
self.script = f.read()
with open(os.path.join(here, "reftest-wait.js")) as f:
self.wait_script = f.read()
def is_alive(self):
return self.protocol.is_alive()
def do_test(self, test):
if self.close_after_done and self.has_window:
self.protocol.marionette.close()
self.protocol.marionette.switch_to_window(
self.protocol.marionette.window_handles[-1])
self.has_window = False
if not self.has_window:
self.protocol.marionette.execute_script(self.script)
self.protocol.marionette.switch_to_window(self.protocol.marionette.window_handles[-1])
self.has_window = True
result = self.implementation.run_test(test)
return self.convert_result(test, result)
def screenshot(self, test):
timeout = test.timeout if self.debug_args is None else None
test_url = self.test_url(test)
return MarionetteRun(self.logger,
self._screenshot,
self.protocol.marionette,
test_url,
timeout).run()
def _screenshot(self, marionette, url, timeout):
try:
marionette.navigate(url)
except errors.MarionetteException:
raise ExecutorException("ERROR", "Failed to load url %s" % (url,))
marionette.execute_async_script(self.wait_script)
screenshot = marionette.screenshot()
# strip off the "data:image/png;base64," prefix of the data url
if screenshot.startswith("data:image/png;base64,"):
screenshot = screenshot.split(",", 1)[1]
return screenshot
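The script dispatched by do_testharness() is a Python %-format template (testharness_marionette.js, later in this commit). A minimal sketch of the substitution mapping it receives, with illustrative values only:

import uuid

# Illustrative only, not part of the harness: the kind of mapping
# do_testharness() builds before running self.script via
# marionette.execute_async_script(script, new_sandbox=False).
params = {"abs_url": "http://web-platform.test:8000/dom/historical.html",
          "url": "/dom/historical.html",
          "window_id": str(uuid.uuid4()),
          "timeout_multiplier": 1,
          "timeout": "10000",         # milliseconds as a string, or "null"
          "explicit_timeout": False}  # substituted with %d, so False becomes 0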

View file

@@ -0,0 +1,267 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import socket
import sys
import threading
import time
import traceback
import urlparse
import uuid
from .base import (ExecutorException,
Protocol,
RefTestExecutor,
RefTestImplementation,
TestExecutor,
TestharnessExecutor,
testharness_result_converter,
reftest_result_converter,
strip_server)
from ..testrunner import Stop
here = os.path.join(os.path.split(__file__)[0])
webdriver = None
exceptions = None
extra_timeout = 5
def do_delayed_imports():
global webdriver
global exceptions
from selenium import webdriver
from selenium.common import exceptions
class SeleniumProtocol(Protocol):
def __init__(self, executor, browser, capabilities, **kwargs):
do_delayed_imports()
Protocol.__init__(self, executor, browser)
self.capabilities = capabilities
self.url = browser.webdriver_url
self.webdriver = None
def setup(self, runner):
"""Connect to browser via Selenium's WebDriver implementation."""
self.runner = runner
self.logger.debug("Connecting to Selenium on URL: %s" % self.url)
session_started = False
try:
self.webdriver = webdriver.Remote(
self.url, desired_capabilities=self.capabilities)
except:
self.logger.warning(
"Connecting to Selenium failed:\n%s" % traceback.format_exc())
else:
self.logger.debug("Selenium session started")
session_started = True
if not session_started:
self.logger.warning("Failed to connect to Selenium")
self.executor.runner.send_message("init_failed")
else:
try:
self.after_connect()
except:
print >> sys.stderr, traceback.format_exc()
self.logger.warning(
"Failed to connect to navigate initial page")
self.executor.runner.send_message("init_failed")
else:
self.executor.runner.send_message("init_succeeded")
def teardown(self):
self.logger.debug("Hanging up on Selenium session")
try:
self.webdriver.quit()
except:
pass
del self.webdriver
def is_alive(self):
try:
# Get a simple property over the connection
self.webdriver.current_window_handle
# TODO what exception?
except (socket.timeout, exceptions.ErrorInResponseException):
return False
return True
def after_connect(self):
self.load_runner("http")
def load_runner(self, protocol):
url = urlparse.urljoin(self.executor.server_url(protocol),
"/testharness_runner.html")
self.logger.debug("Loading %s" % url)
self.webdriver.get(url)
self.webdriver.execute_script("document.title = '%s'" %
threading.current_thread().name.replace("'", '"'))
def wait(self):
while True:
try:
self.webdriver.execute_async_script("")
except exceptions.TimeoutException:
pass
except (socket.timeout, exceptions.NoSuchWindowException,
exceptions.ErrorInResponseException, IOError):
break
except Exception as e:
self.logger.error(traceback.format_exc(e))
break
class SeleniumRun(object):
def __init__(self, logger, func, webdriver, url, timeout):
self.logger = logger
self.func = func
self.result = None
self.webdriver = webdriver
self.url = url
self.timeout = timeout
self.result_flag = threading.Event()
def run(self):
timeout = self.timeout
try:
# Unlike marionette, selenium's set_script_timeout takes seconds
self.webdriver.set_script_timeout(timeout + extra_timeout)
except exceptions.ErrorInResponseException:
self.logger.error("Lost webdriver connection")
return Stop
executor = threading.Thread(target=self._run)
executor.start()
flag = self.result_flag.wait(timeout + 2 * extra_timeout)
if self.result is None:
assert not flag
self.result = False, ("EXTERNAL-TIMEOUT", None)
return self.result
def _run(self):
try:
self.result = True, self.func(self.webdriver, self.url, self.timeout)
except exceptions.TimeoutException:
self.result = False, ("EXTERNAL-TIMEOUT", None)
except (socket.timeout, exceptions.ErrorInResponseException):
self.result = False, ("CRASH", None)
except Exception as e:
message = getattr(e, "message", "")
if message:
message += "\n"
message += traceback.format_exc(e)
self.result = False, ("ERROR", e)
finally:
self.result_flag.set()
class SeleniumTestharnessExecutor(TestharnessExecutor):
def __init__(self, browser, server_config, timeout_multiplier=1,
close_after_done=True, capabilities=None, debug_args=None):
"""Selenium-based executor for testharness.js tests"""
TestharnessExecutor.__init__(self, browser, server_config,
timeout_multiplier=timeout_multiplier,
debug_args=debug_args)
self.protocol = SeleniumProtocol(self, browser, capabilities)
with open(os.path.join(here, "testharness_webdriver.js")) as f:
self.script = f.read()
self.close_after_done = close_after_done
self.window_id = str(uuid.uuid4())
def is_alive(self):
return self.protocol.is_alive()
def on_protocol_change(self, new_protocol):
self.protocol.load_runner(new_protocol)
def do_test(self, test):
url = self.test_url(test)
success, data = SeleniumRun(self.logger,
self.do_testharness,
self.protocol.webdriver,
url,
test.timeout * self.timeout_multiplier).run()
if success:
return self.convert_result(test, data)
return (test.result_cls(*data), [])
def do_testharness(self, webdriver, url, timeout):
return webdriver.execute_async_script(
self.script % {"abs_url": url,
"url": strip_server(url),
"window_id": self.window_id,
"timeout_multiplier": self.timeout_multiplier,
"timeout": timeout * 1000})
class SeleniumRefTestExecutor(RefTestExecutor):
def __init__(self, browser, server_config, timeout_multiplier=1,
screenshot_cache=None, close_after_done=True,
debug_args=None, capabilities=None):
"""Selenium WebDriver-based executor for reftests"""
RefTestExecutor.__init__(self,
browser,
server_config,
screenshot_cache=screenshot_cache,
timeout_multiplier=timeout_multiplier,
debug_args=debug_args)
self.protocol = SeleniumProtocol(self, browser,
capabilities=capabilities)
self.implementation = RefTestImplementation(self)
self.close_after_done = close_after_done
self.has_window = False
with open(os.path.join(here, "reftest.js")) as f:
self.script = f.read()
with open(os.path.join(here, "reftest-wait_webdriver.js")) as f:
self.wait_script = f.read()
def is_alive(self):
return self.protocol.is_alive()
def do_test(self, test):
self.logger.info("Test requires OS-level window focus")
if self.close_after_done and self.has_window:
self.protocol.webdriver.close()
self.protocol.webdriver.switch_to_window(
self.protocol.webdriver.window_handles[-1])
self.has_window = False
if not self.has_window:
self.protocol.webdriver.execute_script(self.script)
self.protocol.webdriver.switch_to_window(
self.protocol.webdriver.window_handles[-1])
self.has_window = True
result = self.implementation.run_test(test)
return self.convert_result(test, result)
def screenshot(self, test):
return SeleniumRun(self.logger,
self._screenshot,
self.protocol.webdriver,
self.test_url(test),
test.timeout).run()
def _screenshot(self, webdriver, url, timeout):
webdriver.get(url)
webdriver.execute_async_script(self.wait_script)
screenshot = webdriver.get_screenshot_as_base64()
# strip off the "data:image/png;base64," prefix of the data url
if screenshot.startswith("data:image/png;base64,"):
screenshot = screenshot.split(",", 1)[1]
return screenshot
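A minimal sketch of the connection SeleniumProtocol establishes, assuming a Selenium server is already listening; the endpoint URL and capabilities are illustrative, in the harness they come from browser.webdriver_url and the product configuration:

from selenium import webdriver

driver = webdriver.Remote("http://127.0.0.1:4444/wd/hub",
                          desired_capabilities={"browserName": "firefox"})
driver.get("http://web-platform.test:8000/testharness_runner.html")
driver.quit()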

View file

@@ -0,0 +1,220 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import base64
import hashlib
import json
import os
import subprocess
import tempfile
import threading
import urlparse
import uuid
from collections import defaultdict
from mozprocess import ProcessHandler
from .base import (ExecutorException,
Protocol,
RefTestImplementation,
testharness_result_converter,
reftest_result_converter)
from .process import ProcessTestExecutor
hosts_text = """127.0.0.1 web-platform.test
127.0.0.1 www.web-platform.test
127.0.0.1 www1.web-platform.test
127.0.0.1 www2.web-platform.test
127.0.0.1 xn--n8j6ds53lwwkrqhv28a.web-platform.test
127.0.0.1 xn--lve-6lad.web-platform.test
"""
def make_hosts_file():
hosts_fd, hosts_path = tempfile.mkstemp()
with os.fdopen(hosts_fd, "w") as f:
f.write(hosts_text)
return hosts_path
class ServoTestharnessExecutor(ProcessTestExecutor):
convert_result = testharness_result_converter
def __init__(self, browser, server_config, timeout_multiplier=1, debug_args=None,
pause_after_test=False):
ProcessTestExecutor.__init__(self, browser, server_config,
timeout_multiplier=timeout_multiplier,
debug_args=debug_args)
self.pause_after_test = pause_after_test
self.result_data = None
self.result_flag = None
self.protocol = Protocol(self, browser)
self.hosts_path = make_hosts_file()
def teardown(self):
try:
os.unlink(self.hosts_path)
except OSError:
pass
ProcessTestExecutor.teardown(self)
def do_test(self, test):
self.result_data = None
self.result_flag = threading.Event()
self.command = [self.binary, "--cpu", "--hard-fail", "-z", self.test_url(test)]
if self.pause_after_test:
self.command.remove("-z")
if self.debug_args:
self.command = list(self.debug_args) + self.command
env = os.environ.copy()
env["HOST_FILE"] = self.hosts_path
self.proc = ProcessHandler(self.command,
processOutputLine=[self.on_output],
onFinish=self.on_finish,
env=env)
try:
self.proc.run()
timeout = test.timeout * self.timeout_multiplier
# Now wait to get the output we expect, or until we reach the timeout
if self.debug_args is None and not self.pause_after_test:
wait_timeout = timeout + 5
else:
wait_timeout = None
self.result_flag.wait(wait_timeout)
proc_is_running = True
if self.result_flag.is_set() and self.result_data is not None:
self.result_data["test"] = test.url
result = self.convert_result(test, self.result_data)
else:
if self.proc.proc.poll() is not None:
result = (test.result_cls("CRASH", None), [])
proc_is_running = False
else:
result = (test.result_cls("TIMEOUT", None), [])
if proc_is_running:
if self.pause_after_test:
self.logger.info("Pausing until the browser exits")
self.proc.wait()
else:
self.proc.kill()
except KeyboardInterrupt:
self.proc.kill()
raise
return result
def on_output(self, line):
prefix = "ALERT: RESULT: "
line = line.decode("utf8", "replace")
if line.startswith(prefix):
self.result_data = json.loads(line[len(prefix):])
self.result_flag.set()
else:
if self.interactive:
print line
else:
self.logger.process_output(self.proc.pid,
line,
" ".join(self.command))
def on_finish(self):
self.result_flag.set()
class TempFilename(object):
def __init__(self, directory):
self.directory = directory
self.path = None
def __enter__(self):
self.path = os.path.join(self.directory, str(uuid.uuid4()))
return self.path
def __exit__(self, *args, **kwargs):
try:
os.unlink(self.path)
except OSError:
pass
class ServoRefTestExecutor(ProcessTestExecutor):
convert_result = reftest_result_converter
def __init__(self, browser, server_config, binary=None, timeout_multiplier=1,
screenshot_cache=None, debug_args=None, pause_after_test=False):
ProcessTestExecutor.__init__(self,
browser,
server_config,
timeout_multiplier=timeout_multiplier,
debug_args=debug_args)
self.protocol = Protocol(self, browser)
self.screenshot_cache = screenshot_cache
self.implementation = RefTestImplementation(self)
self.tempdir = tempfile.mkdtemp()
self.hosts_path = make_hosts_file()
def teardown(self):
try:
os.unlink(self.hosts_path)
except OSError:
pass
os.rmdir(self.tempdir)
ProcessTestExecutor.teardown(self)
def screenshot(self, test):
full_url = self.test_url(test)
with TempFilename(self.tempdir) as output_path:
self.command = [self.binary, "--cpu", "--hard-fail", "--exit",
"--output=%s" % output_path, full_url]
env = os.environ.copy()
env["HOST_FILE"] = self.hosts_path
self.proc = ProcessHandler(self.command,
processOutputLine=[self.on_output],
env=env)
try:
self.proc.run()
rv = self.proc.wait(timeout=test.timeout)
except KeyboardInterrupt:
self.proc.kill()
raise
if rv is None:
self.proc.kill()
return False, ("EXTERNAL-TIMEOUT", None)
if rv != 0 or not os.path.exists(output_path):
return False, ("CRASH", None)
with open(output_path) as f:
# Might need to strip variable headers or something here
data = f.read()
return True, base64.b64encode(data)
def do_test(self, test):
result = self.implementation.run_test(test)
return self.convert_result(test, result)
def on_output(self, line):
line = line.decode("utf8", "replace")
if self.interactive:
print line
else:
self.logger.process_output(self.proc.pid,
line,
" ".join(self.command))

View file

@@ -0,0 +1,23 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
from .base import TestExecutor
class ProcessTestExecutor(TestExecutor):
def __init__(self, *args, **kwargs):
TestExecutor.__init__(self, *args, **kwargs)
self.binary = self.browser.binary
self.interactive = self.browser.interactive
def setup(self, runner):
self.runner = runner
self.runner.send_message("init_succeeded")
return True
def is_alive(self):
return True
def do_test(self, test):
raise NotImplementedError

View file

@@ -0,0 +1,22 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
function test(x) {
log("classList: " + root.classList);
if (!root.classList.contains("reftest-wait")) {
observer.disconnect();
marionetteScriptFinished();
}
}
var root = document.documentElement;
var observer = new MutationObserver(test);
observer.observe(root, {attributes: true});
if (document.readyState != "complete") {
onload = test
} else {
test();
}

View file

@@ -0,0 +1,23 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
var callback = arguments[arguments.length - 1];
function test(x) {
if (!root.classList.contains("reftest-wait")) {
observer.disconnect();
callback()
}
}
var root = document.documentElement;
var observer = new MutationObserver(test);
observer.observe(root, {attributes: true});
if (document.readyState != "complete") {
onload = test;
} else {
test();
}

View file

@@ -0,0 +1,5 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
var win = window.open("about:blank", "test", "width=600,height=600");

View file

@@ -0,0 +1,28 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
window.wrappedJSObject.timeout_multiplier = %(timeout_multiplier)d;
window.wrappedJSObject.explicit_timeout = %(explicit_timeout)d;
window.wrappedJSObject.done = function(tests, status) {
clearTimeout(timer);
var test_results = tests.map(function(x) {
return {name:x.name, status:x.status, message:x.message, stack:x.stack}
});
marionetteScriptFinished({test:"%(url)s",
tests:test_results,
status: status.status,
message: status.message,
stack: status.stack});
}
window.wrappedJSObject.win = window.open("%(abs_url)s", "%(window_id)s");
var timer = null;
if (%(timeout)s) {
timer = setTimeout(function() {
log("Timeout fired");
window.wrappedJSObject.win.timeout();
}, %(timeout)s);
}

View file

@@ -0,0 +1,25 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
var callback = arguments[arguments.length - 1];
window.timeout_multiplier = %(timeout_multiplier)d;
window.done = function(tests, status) {
clearTimeout(timer);
var test_results = tests.map(function(x) {
return {name:x.name, status:x.status, message:x.message, stack:x.stack}
});
callback({test:"%(url)s",
tests:test_results,
status: status.status,
message: status.message,
stack: status.stack});
}
window.win = window.open("%(abs_url)s", "%(window_id)s");
var timer = setTimeout(function() {
window.win.timeout();
window.win.close();
}, %(timeout)s);

View file

@@ -0,0 +1,18 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
def expected_path(metadata_path, test_path):
"""Path to the expectation data file for a given test path.
This is defined as metadata_path + relative_test_path + .ini
:param metadata_path: Path to the root of the metadata directory
:param test_path: Relative path to the test file from the test root
"""
args = list(test_path.split("/"))
args[-1] += ".ini"
return os.path.join(metadata_path, *args)
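A worked example of the mapping expected_path() implements (paths are illustrative):

expected_path("/data/metadata", "dom/historical.html")
# -> "/data/metadata/dom/historical.html.ini" on POSIX systems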

View file

@@ -0,0 +1,104 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
from __future__ import unicode_literals
class HostsLine(object):
def __init__(self, ip_address, canonical_hostname, aliases=None, comment=None):
self.ip_address = ip_address
self.canonical_hostname = canonical_hostname
self.aliases = aliases if aliases is not None else []
self.comment = comment
if self.ip_address is None:
assert self.canonical_hostname is None
assert not self.aliases
assert self.comment is not None
@classmethod
def from_string(cls, line):
if not line.strip():
return
line = line.strip()
ip_address = None
canonical_hostname = None
aliases = []
comment = None
comment_parts = line.split("#", 1)
if len(comment_parts) > 1:
comment = comment_parts[1]
data = comment_parts[0].strip()
if data:
fields = data.split()
if len(fields) < 2:
raise ValueError("Invalid hosts line")
ip_address = fields[0]
canonical_hostname = fields[1]
aliases = fields[2:]
return cls(ip_address, canonical_hostname, aliases, comment)
class HostsFile(object):
def __init__(self):
self.data = []
self.by_hostname = {}
def set_host(self, host):
if host.canonical_hostname is None:
self.data.append(host)
elif host.canonical_hostname in self.by_hostname:
old_host = self.by_hostname[host.canonical_hostname]
old_host.ip_address = host.ip_address
old_host.aliases = host.aliases
old_host.comment = host.comment
else:
self.data.append(host)
self.by_hostname[host.canonical_hostname] = host
@classmethod
def from_file(cls, f):
rv = cls()
for line in f:
host = HostsLine.from_string(line)
if host is not None:
rv.set_host(host)
return rv
def to_string(self):
field_widths = [0, 0]
for line in self.data:
if line.ip_address is not None:
field_widths[0] = max(field_widths[0], len(line.ip_address))
field_widths[1] = max(field_widths[1], len(line.canonical_hostname))
lines = []
for host in self.data:
line = ""
if host.ip_address is not None:
ip_string = host.ip_address.ljust(field_widths[0])
hostname_str = host.canonical_hostname
if host.aliases:
hostname_str = "%s %s" % (hostname_str.ljust(field_widths[1]),
" ".join(host.aliases))
line = "%s %s" % (ip_string, hostname_str)
if host.comment:
if line:
line += " "
line += "#%s" % host.comment
lines.append(line)
lines.append("")
return "\n".join(lines)
def to_file(self, f):
f.write(self.to_string().encode("utf8"))
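A round-trip sketch using the classes above; the hosts content is illustrative:

from cStringIO import StringIO

hosts = HostsFile.from_file(
    StringIO("127.0.0.1 web-platform.test www.web-platform.test # wpt\n"))
hosts.set_host(HostsLine("127.0.0.1", "www1.web-platform.test"))
print hosts.to_string()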

View file

@@ -0,0 +1,158 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import urlparse
from wptmanifest.backends import static
from wptmanifest.backends.static import ManifestItem
import expected
"""Manifest structure used to store expected results of a test.
Each manifest file is represented by an ExpectedManifest that
has one or more TestNode children, one per test in the manifest.
Each TestNode has zero or more SubtestNode children, one for each
known subtest of the test.
"""
def data_cls_getter(output_node, visited_node):
# visited_node is intentionally unused
if output_node is None:
return ExpectedManifest
if isinstance(output_node, ExpectedManifest):
return TestNode
if isinstance(output_node, TestNode):
return SubtestNode
raise ValueError
class ExpectedManifest(ManifestItem):
def __init__(self, name, test_path, url_base):
"""Object representing all the tests in a particular manifest
:param name: Name of the AST Node associated with this object.
Should always be None since this should always be associated with
the root node of the AST.
:param test_path: Path of the test file associated with this manifest.
:param url_base: Base url for serving the tests in this manifest
"""
if name is not None:
raise ValueError("ExpectedManifest should represent the root node")
if test_path is None:
raise ValueError("ExpectedManifest requires a test path")
if url_base is None:
raise ValueError("ExpectedManifest requires a base url")
ManifestItem.__init__(self, name)
self.child_map = {}
self.test_path = test_path
self.url_base = url_base
def append(self, child):
"""Add a test to the manifest"""
ManifestItem.append(self, child)
self.child_map[child.id] = child
def _remove_child(self, child):
del self.child_map[child.id]
ManifestItem.remove_child(self, child)
assert len(self.child_map) == len(self.children)
def get_test(self, test_id):
"""Get a test from the manifest by ID
:param test_id: ID of the test to return."""
return self.child_map.get(test_id)
@property
def url(self):
return urlparse.urljoin(self.url_base,
"/".join(self.test_path.split(os.path.sep)))
class TestNode(ManifestItem):
def __init__(self, name):
"""Tree node associated with a particular test in a manifest
:param name: name of the test"""
assert name is not None
ManifestItem.__init__(self, name)
self.updated_expected = []
self.new_expected = []
self.subtests = {}
self.default_status = None
self._from_file = True
@property
def is_empty(self):
required_keys = set(["type"])
if set(self._data.keys()) != required_keys:
return False
return all(child.is_empty for child in self.children)
@property
def test_type(self):
return self.get("type")
@property
def id(self):
return urlparse.urljoin(self.parent.url, self.name)
def disabled(self):
"""Boolean indicating whether the test is disabled"""
try:
return self.get("disabled")
except KeyError:
return False
def append(self, node):
"""Add a subtest to the current test
:param node: AST Node associated with the subtest"""
child = ManifestItem.append(self, node)
self.subtests[child.name] = child
def get_subtest(self, name):
"""Get the SubtestNode corresponding to a particular subtest, by name
:param name: Name of the node to return"""
if name in self.subtests:
return self.subtests[name]
return None
class SubtestNode(TestNode):
def __init__(self, name):
"""Tree node associated with a particular subtest in a manifest
:param name: name of the subtest"""
TestNode.__init__(self, name)
@property
def is_empty(self):
if self._data:
return False
return True
def get_manifest(metadata_root, test_path, url_base, run_info):
"""Get the ExpectedManifest for a particular test path, or None if there is no
metadata stored for that test path.
:param metadata_root: Absolute path to the root of the metadata directory
:param test_path: Path to the test(s) relative to the test root
:param url_base: Base url for serving the tests in this manifest
:param run_info: Dictionary of properties of the test run for which the expectation
values should be computed.
"""
manifest_path = expected.expected_path(metadata_root, test_path)
try:
with open(manifest_path) as f:
return static.compile(f, run_info,
data_cls_getter=data_cls_getter,
test_path=test_path,
url_base=url_base)
except IOError:
return None
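A hedged sketch of looking up expectation data with this module; the metadata path, test id and run_info values are hypothetical:

run_info = {"os": "linux", "version": "ubuntu", "processor": "x86_64",
            "bits": 64, "debug": False}
expected = get_manifest("/data/metadata", "dom/historical.html", "/", run_info)
if expected is not None:
    test = expected.get_test("/dom/historical.html")
    if test is not None and not test.disabled():
        # Conditionals were already evaluated by static.compile for this
        # run_info; test.get("expected"), if present, is the expected status.
        pass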

View file

@@ -0,0 +1,114 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
"""Manifest structure used to store paths that should be included in a test run.
The manifest is represented by a tree of IncludeManifest objects, the root
representing the file and each subnode representing a subdirectory that should
be included or excluded.
"""
import os
from wptmanifest.node import DataNode
from wptmanifest.backends import conditional
from wptmanifest.backends.conditional import ManifestItem
class IncludeManifest(ManifestItem):
def __init__(self, node):
"""Node in a tree structure representing the paths
that should be included or excluded from the test run.
:param node: AST Node corresponding to this Node.
"""
ManifestItem.__init__(self, node)
self.child_map = {}
@classmethod
def create(cls):
"""Create an empty IncludeManifest tree"""
node = DataNode(None)
return cls(node)
def append(self, child):
ManifestItem.append(self, child)
self.child_map[child.name] = child
assert len(self.child_map) == len(self.children)
def include(self, test):
"""Return a boolean indicating whether a particular test should be
included in a test run, based on the IncludeManifest tree rooted on
this object.
:param test: The test object"""
path_components = self._get_path_components(test)
return self._include(test, path_components)
def _include(self, test, path_components):
if path_components:
next_path_part = path_components.pop()
if next_path_part in self.child_map:
return self.child_map[next_path_part]._include(test, path_components)
node = self
while node:
try:
skip_value = self.get("skip", {"test_type": test.item_type}).lower()
assert skip_value in ("true", "false")
return False if skip_value == "true" else True
except KeyError:
if node.parent is not None:
node = node.parent
else:
# Include by default
return True
def _get_path_components(self, test):
test_url = test.url
assert test_url[0] == "/"
return [item for item in reversed(test_url.split("/")) if item]
def _add_rule(self, test_manifests, url, direction):
maybe_path = os.path.abspath(os.path.join(os.curdir, url))
if os.path.exists(maybe_path):
for manifest, data in test_manifests.iteritems():
rel_path = os.path.relpath(maybe_path, data["tests_path"])
if ".." not in rel_path.split(os.sep):
url = rel_path
assert direction in ("include", "exclude")
components = [item for item in reversed(url.split("/")) if item]
node = self
while components:
component = components.pop()
if component not in node.child_map:
new_node = IncludeManifest(DataNode(component))
node.append(new_node)
node = node.child_map[component]
skip = False if direction == "include" else True
node.set("skip", str(skip))
def add_include(self, test_manifests, url_prefix):
"""Add a rule indicating that tests under a url path
should be included in test runs
:param url_prefix: The url prefix to include
"""
return self._add_rule(test_manifests, url_prefix, "include")
def add_exclude(self, test_manifests, url_prefix):
"""Add a rule indicating that tests under a url path
should be excluded from test runs
:param url_prefix: The url prefix to exclude
"""
return self._add_rule(test_manifests, url_prefix, "exclude")
def get_manifest(manifest_path):
with open(manifest_path) as f:
return conditional.compile(f, data_cls_getter=lambda x, y: IncludeManifest)
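A small sketch of building an include manifest programmatically with the API above. _Test stands in for a loaded test object, and the empty dict is passed where the real caller supplies the loaded test manifests (they are only consulted when the rule is an existing filesystem path):

include = IncludeManifest.create()
include.add_exclude({}, "/")      # skip everything by default
include.add_include({}, "/dom")   # but run tests under /dom

class _Test(object):
    url = "/dom/historical.html"
    item_type = "testharness"

include.include(_Test())          # True; a test outside /dom would give False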

View file

@@ -0,0 +1,420 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import urlparse
from collections import namedtuple, defaultdict
from wptmanifest.node import (DataNode, ConditionalNode, BinaryExpressionNode,
BinaryOperatorNode, VariableNode, StringNode, NumberNode,
UnaryExpressionNode, UnaryOperatorNode, KeyValueNode)
from wptmanifest.backends import conditional
from wptmanifest.backends.conditional import ManifestItem
import expected
"""Manifest structure used to update the expected results of a test
Each manifest file is represented by an ExpectedManifest that has one
or more TestNode children, one per test in the manifest. Each
TestNode has zero or more SubtestNode children, one for each known
subtest of the test.
In these representations, conditionals expressions in the manifest are
not evaluated upfront but stored as python functions to be evaluated
at runtime.
When a result for a test is to be updated set_result on the
[Sub]TestNode is called to store the new result, alongside the
existing conditional that result's run info matched, if any. Once all
new results are known, coalesce_expected is called to compute the new
set of results and conditionals. The AST of the underlying parsed manifest
is updated with the changes, and the result is serialised to a file.
"""
Result = namedtuple("Result", ["run_info", "status"])
def data_cls_getter(output_node, visited_node):
# visited_node is intentionally unused
if output_node is None:
return ExpectedManifest
elif isinstance(output_node, ExpectedManifest):
return TestNode
elif isinstance(output_node, TestNode):
return SubtestNode
else:
raise ValueError
class ExpectedManifest(ManifestItem):
def __init__(self, node, test_path=None, url_base=None):
"""Object representing all the tests in a particular manifest
:param node: AST Node associated with this object. If this is None,
a new AST is created to associate with this manifest.
:param test_path: Path of the test file associated with this manifest.
:param url_base: Base url for serving the tests in this manifest
"""
if node is None:
node = DataNode(None)
ManifestItem.__init__(self, node)
self.child_map = {}
self.test_path = test_path
self.url_base = url_base
assert self.url_base is not None
self.modified = False
def append(self, child):
ManifestItem.append(self, child)
if child.id in self.child_map:
print "Warning: Duplicate heading %s" % child.id
self.child_map[child.id] = child
def _remove_child(self, child):
del self.child_map[child.id]
ManifestItem._remove_child(self, child)
def get_test(self, test_id):
"""Return a TestNode by test id, or None if no test matches
:param test_id: The id of the test to look up"""
return self.child_map[test_id]
def has_test(self, test_id):
"""Boolean indicating whether the current test has a known child test
with id test id
:param test_id: The id of the test to look up"""
return test_id in self.child_map
@property
def url(self):
return urlparse.urljoin(self.url_base,
"/".join(self.test_path.split(os.path.sep)))
class TestNode(ManifestItem):
def __init__(self, node):
"""Tree node associated with a particular test in a manifest
:param node: AST node associated with the test"""
ManifestItem.__init__(self, node)
self.updated_expected = []
self.new_expected = []
self.subtests = {}
self.default_status = None
self._from_file = True
@classmethod
def create(cls, test_type, test_id):
"""Create a TestNode corresponding to a given test
:param test_type: The type of the test
:param test_id: The id of the test"""
url = test_id
name = url.split("/")[-1]
node = DataNode(name)
self = cls(node)
self.set("type", test_type)
self._from_file = False
return self
@property
def is_empty(self):
required_keys = set(["type"])
if set(self._data.keys()) != required_keys:
return False
return all(child.is_empty for child in self.children)
@property
def test_type(self):
"""The type of the test represented by this TestNode"""
return self.get("type", None)
@property
def id(self):
"""The id of the test represented by this TestNode"""
return urlparse.urljoin(self.parent.url, self.name)
def disabled(self, run_info):
"""Boolean indicating whether this test is disabled when run in an
environment with the given run_info
:param run_info: Dictionary of run_info parameters"""
return self.get("disabled", run_info) is not None
def set_result(self, run_info, result):
"""Set the result of the test in a particular run
:param run_info: Dictionary of run_info parameters corresponding
to this run
:param result: Status of the test in this run"""
if self.default_status is not None:
assert self.default_status == result.default_expected
else:
self.default_status = result.default_expected
# Add this result to the list of results satisfying
# any condition in the list of updated results it matches
for (cond, values) in self.updated_expected:
if cond(run_info):
values.append(Result(run_info, result.status))
if result.status != cond.value:
self.root.modified = True
break
else:
# We didn't find a previous value for this
self.new_expected.append(Result(run_info, result.status))
self.root.modified = True
def coalesce_expected(self):
"""Update the underlying manifest AST for this test based on all the
added results.
This will update existing conditionals if they got the same result in
all matching runs in the updated results, will delete existing conditionals
that get more than one different result in the updated run, and add new
conditionals for anything that doesn't match an existing conditional.
Conditionals not matched by any added result are not changed."""
final_conditionals = []
try:
unconditional_status = self.get("expected")
except KeyError:
unconditional_status = self.default_status
for conditional_value, results in self.updated_expected:
if not results:
# The conditional didn't match anything in these runs so leave it alone
final_conditionals.append(conditional_value)
elif all(results[0].status == result.status for result in results):
# All the new values for this conditional matched, so update the node
result = results[0]
if (result.status == unconditional_status and
conditional_value.condition_node is not None):
self.remove_value("expected", conditional_value)
else:
conditional_value.value = result.status
final_conditionals.append(conditional_value)
elif conditional_value.condition_node is not None:
# Blow away the existing condition and rebuild from scratch
# This isn't guaranteed to work if we have a conditional later that matches
# these values too, but we can hope, verify that we get the results
# we expect, and if not let a human sort it out
self.remove_value("expected", conditional_value)
self.new_expected.extend(results)
elif conditional_value.condition_node is None:
self.new_expected.extend(result for result in results
if result.status != unconditional_status)
# It is an invariant that nothing in new_expected matches an existing
# condition except for the default condition
if self.new_expected:
if all(self.new_expected[0].status == result.status
for result in self.new_expected) and not self.updated_expected:
status = self.new_expected[0].status
if status != self.default_status:
self.set("expected", status, condition=None)
final_conditionals.append(self._data["expected"][-1])
else:
for conditional_node, status in group_conditionals(self.new_expected):
if status != unconditional_status:
self.set("expected", status, condition=conditional_node.children[0])
final_conditionals.append(self._data["expected"][-1])
if ("expected" in self._data and
len(self._data["expected"]) > 0 and
self._data["expected"][-1].condition_node is None and
self._data["expected"][-1].value == self.default_status):
self.remove_value("expected", self._data["expected"][-1])
if ("expected" in self._data and
len(self._data["expected"]) == 0):
for child in self.node.children:
if (isinstance(child, KeyValueNode) and
child.data == "expected"):
child.remove()
break
def _add_key_value(self, node, values):
ManifestItem._add_key_value(self, node, values)
if node.data == "expected":
self.updated_expected = []
for value in values:
self.updated_expected.append((value, []))
def clear_expected(self):
"""Clear all the expected data for this test and all of its subtests"""
self.updated_expected = []
if "expected" in self._data:
for child in self.node.children:
if (isinstance(child, KeyValueNode) and
child.data == "expected"):
child.remove()
del self._data["expected"]
break
for subtest in self.subtests.itervalues():
subtest.clear_expected()
def append(self, node):
child = ManifestItem.append(self, node)
self.subtests[child.name] = child
def get_subtest(self, name):
"""Return a SubtestNode corresponding to a particular subtest of
the current test, creating a new one if no subtest with that name
already exists.
:param name: Name of the subtest"""
if name in self.subtests:
return self.subtests[name]
else:
subtest = SubtestNode.create(name)
self.append(subtest)
return subtest
class SubtestNode(TestNode):
def __init__(self, node):
assert isinstance(node, DataNode)
TestNode.__init__(self, node)
@classmethod
def create(cls, name):
node = DataNode(name)
self = cls(node)
return self
@property
def is_empty(self):
if self._data:
return False
return True
def group_conditionals(values):
"""Given a list of Result objects, return a list of
(conditional_node, status) pairs representing the conditional
expressions that are required to match each status
:param values: List of Results"""
by_property = defaultdict(set)
for run_info, status in values:
for prop_name, prop_value in run_info.iteritems():
by_property[(prop_name, prop_value)].add(status)
# If we have more than one value, remove any properties that are common
# for all the values
if len(values) > 1:
for key, statuses in by_property.copy().iteritems():
if len(statuses) == len(values):
del by_property[key]
properties = set(item[0] for item in by_property.iterkeys())
prop_order = ["debug", "os", "version", "processor", "bits"]
include_props = []
for prop in prop_order:
if prop in properties:
include_props.append(prop)
conditions = {}
for run_info, status in values:
prop_set = tuple((prop, run_info[prop]) for prop in include_props)
if prop_set in conditions:
continue
expr = make_expr(prop_set, status)
conditions[prop_set] = (expr, status)
return conditions.values()
def make_expr(prop_set, status):
"""Create an AST that returns the value ``status`` given all the
properties in prop_set match."""
root = ConditionalNode()
assert len(prop_set) > 0
no_value_props = set(["debug"])
expressions = []
for prop, value in prop_set:
number_types = (int, float, long)
value_cls = (NumberNode
if type(value) in number_types
else StringNode)
if prop not in no_value_props:
expressions.append(
BinaryExpressionNode(
BinaryOperatorNode("=="),
VariableNode(prop),
value_cls(unicode(value))
))
else:
if value:
expressions.append(VariableNode(prop))
else:
expressions.append(
UnaryExpressionNode(
UnaryOperatorNode("not"),
VariableNode(prop)
))
if len(expressions) > 1:
prev = expressions[-1]
for curr in reversed(expressions[:-1]):
node = BinaryExpressionNode(
BinaryOperatorNode("and"),
curr,
prev)
prev = node
else:
node = expressions[0]
root.append(node)
root.append(StringNode(status))
return root
def get_manifest(metadata_root, test_path, url_base):
"""Get the ExpectedManifest for a particular test path, or None if there is no
metadata stored for that test path.
:param metadata_root: Absolute path to the root of the metadata directory
:param test_path: Path to the test(s) relative to the test root
:param url_base: Base url for serving the tests in this manifest
"""
manifest_path = expected.expected_path(metadata_root, test_path)
try:
with open(manifest_path) as f:
return compile(f, test_path, url_base)
except IOError:
return None
def compile(manifest_file, test_path, url_base):
return conditional.compile(manifest_file,
data_cls_getter=data_cls_getter,
test_path=test_path,
url_base=url_base)
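A hedged sketch of the set_result / coalesce_expected cycle described in the module docstring; the paths, run_info and stand-in result object are hypothetical (in the harness the result comes from wpttest and serialisation happens in metadata.py):

from collections import namedtuple

FakeResult = namedtuple("FakeResult", ["status", "default_expected"])

expected = get_manifest("/data/metadata", "dom/historical.html", "/")
if expected is not None and expected.has_test("/dom/historical.html"):
    test = expected.get_test("/dom/historical.html")
    test.set_result({"os": "linux", "debug": False}, FakeResult("FAIL", "PASS"))
    test.coalesce_expected()
    # expected.modified is now True if anything changed; the updated AST in
    # expected.node can then be serialised back to the .ini file.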

View file

@@ -0,0 +1,315 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import shutil
import sys
import tempfile
import types
import uuid
from collections import defaultdict
from mozlog.structured import reader
from mozlog.structured import structuredlog
import expected
import manifestupdate
import testloader
import wptmanifest
import wpttest
from vcs import git
manifest = None # Module that will be imported relative to test_root
logger = structuredlog.StructuredLogger("web-platform-tests")
def load_test_manifests(serve_root, test_paths):
do_delayed_imports(serve_root)
manifest_loader = testloader.ManifestLoader(test_paths, False)
return manifest_loader.load()
def update_expected(test_paths, serve_root, log_file_names,
rev_old=None, rev_new="HEAD", ignore_existing=False,
sync_root=None):
"""Update the metadata files for web-platform-tests based on
the results obtained in a previous run"""
manifests = load_test_manifests(serve_root, test_paths)
change_data = {}
if sync_root is not None:
if rev_old is not None:
rev_old = git("rev-parse", rev_old, repo=sync_root).strip()
rev_new = git("rev-parse", rev_new, repo=sync_root).strip()
if rev_old is not None:
change_data = load_change_data(rev_old, rev_new, repo=sync_root)
expected_map_by_manifest = update_from_logs(manifests,
*log_file_names,
ignore_existing=ignore_existing)
for test_manifest, expected_map in expected_map_by_manifest.iteritems():
url_base = manifests[test_manifest]["url_base"]
metadata_path = test_paths[url_base]["metadata_path"]
write_changes(metadata_path, expected_map)
results_changed = [item.test_path for item in expected_map.itervalues() if item.modified]
return unexpected_changes(manifests, change_data, results_changed)
def do_delayed_imports(serve_root):
global manifest
from manifest import manifest
def files_in_repo(repo_root):
return git("ls-tree", "-r", "--name-only", "HEAD").split("\n")
def rev_range(rev_old, rev_new, symmetric=False):
joiner = ".." if not symmetric else "..."
return "".join([rev_old, joiner, rev_new])
def paths_changed(rev_old, rev_new, repo):
data = git("diff", "--name-status", rev_range(rev_old, rev_new), repo=repo)
lines = [tuple(item.strip() for item in line.strip().split("\t", 1))
for line in data.split("\n") if line.strip()]
output = set(lines)
return output
def load_change_data(rev_old, rev_new, repo):
changes = paths_changed(rev_old, rev_new, repo)
rv = {}
status_keys = {"M": "modified",
"A": "new",
"D": "deleted"}
# TODO: deal with renames
for item in changes:
rv[item[1]] = status_keys[item[0]]
return rv
def unexpected_changes(manifests, change_data, files_changed):
files_changed = set(files_changed)
root_manifest = None
for manifest, paths in manifests.iteritems():
if paths["url_base"] == "/":
root_manifest = manifest
break
else:
return []
return [fn for fn, tests in root_manifest if fn in files_changed and change_data.get(fn) != "modified"]
# For each testrun
# Load all files and scan for the suite_start entry
# Build a hash of filename: properties
# For each different set of properties, gather all chunks
# For each chunk in the set of chunks, go through all tests
# for each test, make a map of {conditionals: [(platform, new_value)]}
# Repeat for each platform
# For each test in the list of tests:
# for each conditional:
# If all the new values match (or there aren't any) retain that conditional
# If any new values mismatch mark the test as needing human attention
# Check if all the RHS values are the same; if so collapse the conditionals
def update_from_logs(manifests, *log_filenames, **kwargs):
ignore_existing = kwargs.pop("ignore_existing", False)
expected_map = {}
id_test_map = {}
for test_manifest, paths in manifests.iteritems():
expected_map_manifest, id_path_map_manifest = create_test_tree(paths["metadata_path"],
test_manifest)
expected_map[test_manifest] = expected_map_manifest
id_test_map.update(id_path_map_manifest)
updater = ExpectedUpdater(manifests, expected_map, id_test_map,
ignore_existing=ignore_existing)
for log_filename in log_filenames:
with open(log_filename) as f:
updater.update_from_log(f)
for manifest_expected in expected_map.itervalues():
for tree in manifest_expected.itervalues():
for test in tree.iterchildren():
for subtest in test.iterchildren():
subtest.coalesce_expected()
test.coalesce_expected()
return expected_map
def write_changes(metadata_path, expected_map):
# First write the new manifest files to a temporary directory
temp_path = tempfile.mkdtemp(dir=os.path.split(metadata_path)[0])
write_new_expected(temp_path, expected_map)
# Copy all files in the root to the temporary location since
# these cannot be ini files
keep_files = [item for item in os.listdir(metadata_path) if
not os.path.isdir(os.path.join(metadata_path, item))]
for item in keep_files:
shutil.copyfile(os.path.join(metadata_path, item),
os.path.join(temp_path, item))
# Then move the old manifest files to a new location
temp_path_2 = metadata_path + str(uuid.uuid4())
os.rename(metadata_path, temp_path_2)
# Move the new files to the destination location and remove the old files
os.rename(temp_path, metadata_path)
shutil.rmtree(temp_path_2)
def write_new_expected(metadata_path, expected_map):
# Serialize the data back to a file
for tree in expected_map.itervalues():
if not tree.is_empty:
manifest_str = wptmanifest.serialize(tree.node, skip_empty_data=True)
assert manifest_str != ""
path = expected.expected_path(metadata_path, tree.test_path)
dir = os.path.split(path)[0]
if not os.path.exists(dir):
os.makedirs(dir)
with open(path, "w") as f:
f.write(manifest_str.encode("utf8"))
class ExpectedUpdater(object):
def __init__(self, test_manifests, expected_tree, id_path_map, ignore_existing=False):
self.test_manifests = test_manifests
self.expected_tree = expected_tree
self.id_path_map = id_path_map
self.ignore_existing = ignore_existing
self.run_info = None
self.action_map = {"suite_start": self.suite_start,
"test_start": self.test_start,
"test_status": self.test_status,
"test_end": self.test_end}
self.tests_visited = {}
self.test_cache = {}
def update_from_log(self, log_file):
self.run_info = None
log_reader = reader.read(log_file)
reader.each_log(log_reader, self.action_map)
def suite_start(self, data):
self.run_info = data["run_info"]
def test_id(self, id):
if type(id) in types.StringTypes:
return id
else:
return tuple(id)
def test_start(self, data):
test_id = self.test_id(data["test"])
try:
test_manifest, test = self.id_path_map[test_id]
expected_node = self.expected_tree[test_manifest][test].get_test(test_id)
except KeyError:
print "Test not found %s, skipping" % test_id
return
self.test_cache[test_id] = expected_node
if test_id not in self.tests_visited:
if self.ignore_existing:
expected_node.clear_expected()
self.tests_visited[test_id] = set()
def test_status(self, data):
test_id = self.test_id(data["test"])
test = self.test_cache.get(test_id)
if test is None:
return
test_cls = wpttest.manifest_test_cls[test.test_type]
subtest = test.get_subtest(data["subtest"])
self.tests_visited[test.id].add(data["subtest"])
result = test_cls.subtest_result_cls(
data["subtest"],
data["status"],
data.get("message"))
subtest.set_result(self.run_info, result)
def test_end(self, data):
test_id = self.test_id(data["test"])
test = self.test_cache.get(test_id)
if test is None:
return
test_cls = wpttest.manifest_test_cls[test.test_type]
if data["status"] == "SKIP":
return
result = test_cls.result_cls(
data["status"],
data.get("message"))
test.set_result(self.run_info, result)
del self.test_cache[test_id]
def create_test_tree(metadata_path, test_manifest):
expected_map = {}
id_test_map = {}
exclude_types = frozenset(["stub", "helper", "manual"])
include_types = set(manifest.item_types) - exclude_types
for test_path, tests in test_manifest.itertypes(*include_types):
expected_data = load_expected(test_manifest, metadata_path, test_path, tests)
if expected_data is None:
expected_data = create_expected(test_manifest, test_path, tests)
for test in tests:
id_test_map[test.id] = (test_manifest, test)
expected_map[test] = expected_data
return expected_map, id_test_map
def create_expected(test_manifest, test_path, tests):
expected = manifestupdate.ExpectedManifest(None, test_path, test_manifest.url_base)
for test in tests:
expected.append(manifestupdate.TestNode.create(test.item_type, test.id))
return expected
def load_expected(test_manifest, metadata_path, test_path, tests):
expected_manifest = manifestupdate.get_manifest(metadata_path,
test_path,
test_manifest.url_base)
if expected_manifest is None:
return
tests_by_id = {item.id: item for item in tests}
# Remove expected data for tests that no longer exist
for test in expected_manifest.iterchildren():
if test.id not in tests_by_id:
test.remove()
# Add tests that don't have expected data
for test in tests:
if not expected_manifest.has_test(test.id):
expected_manifest.append(manifestupdate.TestNode.create(test.item_type, test.id))
return expected_manifest
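A hedged sketch of driving an update through the entry point above; the directory layout, url_base key and log file name are hypothetical:

test_paths = {"/": {"tests_path": "/data/web-platform-tests",
                    "metadata_path": "/data/metadata"}}
unexpected = update_expected(test_paths,
                             serve_root="/data/web-platform-tests",
                             log_file_names=["wpt_raw.log"],
                             ignore_existing=False)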

View file

@@ -0,0 +1,55 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import importlib
import imp
from .browsers import product_list
def products_enabled(config):
names = config.get("products", {}).keys()
if not names:
return product_list
else:
return names
def product_module(config, product):
here = os.path.join(os.path.split(__file__)[0])
product_dir = os.path.join(here, "browsers")
if product not in products_enabled(config):
raise ValueError("Unknown product %s" % product)
path = config.get("products", {}).get(product, None)
if path:
module = imp.load_source('wptrunner.browsers.' + product, path)
else:
module = importlib.import_module("wptrunner.browsers." + product)
if not hasattr(module, "__wptrunner__"):
raise ValueError("Product module does not define __wptrunner__ variable")
return module
def load_product(config, product):
module = product_module(config, product)
data = module.__wptrunner__
check_args = getattr(module, data["check_args"])
browser_cls = getattr(module, data["browser"])
browser_kwargs = getattr(module, data["browser_kwargs"])
executor_kwargs = getattr(module, data["executor_kwargs"])
env_options = getattr(module, data["env_options"])()
executor_classes = {}
for test_type, cls_name in data["executor"].iteritems():
cls = getattr(module, cls_name)
executor_classes[test_type] = cls
return (check_args,
browser_cls, browser_kwargs,
executor_classes, executor_kwargs,
env_options)
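load_product() relies on each product module publishing a __wptrunner__ dict naming its entry points; a sketch of the shape it reads, where every right-hand value is a hypothetical attribute of that module:

__wptrunner__ = {"check_args": "check_args",
                 "browser": "ExampleBrowser",
                 "browser_kwargs": "browser_kwargs",
                 "executor_kwargs": "executor_kwargs",
                 "env_options": "env_options",
                 "executor": {"testharness": "ExampleTestharnessExecutor",
                              "reftest": "ExampleRefTestExecutor"}}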

View file

@@ -0,0 +1,197 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import sys
import tempfile
from cStringIO import StringIO
from collections import defaultdict
import wptrunner
import wpttest
from mozlog.structured import commandline, reader
logger = None
def setup_logging(args, defaults):
global logger
logger = commandline.setup_logging("web-platform-tests-unstable", args, defaults)
wptrunner.setup_stdlib_logger()
for name in args.keys():
if name.startswith("log_"):
args.pop(name)
return logger
def group(items, size):
rv = []
i = 0
while i < len(items):
rv.append(items[i:i + size])
i += size
return rv
def next_power_of_two(num):
rv = 1
while rv < num:
rv = rv << 1
return rv
class Reducer(object):
def __init__(self, target, **kwargs):
self.target = target
self.test_type = kwargs["test_types"][0]
run_info = wpttest.get_run_info(kwargs["metadata_root"],
kwargs["product"],
debug=False)
test_filter = wptrunner.TestFilter(include=kwargs["include"])
self.test_loader = wptrunner.TestLoader(kwargs["tests_root"],
kwargs["metadata_root"],
[self.test_type],
test_filter,
run_info)
if kwargs["repeat"] == 1:
logger.critical("Need to specify --repeat with more than one repetition")
sys.exit(1)
self.kwargs = kwargs
def run(self):
all_tests = self.get_initial_tests()
tests = all_tests[:-1]
target_test = [all_tests[-1]]
if self.unstable(target_test):
return target_test
if not self.unstable(all_tests):
return []
chunk_size = next_power_of_two(int(len(tests) / 2))
logger.debug("Using chunk size %i" % chunk_size)
while chunk_size >= 1:
logger.debug("%i tests remain" % len(tests))
chunks = group(tests, chunk_size)
chunk_results = [None] * len(chunks)
for i, chunk in enumerate(chunks):
logger.debug("Running chunk %i/%i of size %i" % (i + 1, len(chunks), chunk_size))
trial_tests = []
chunk_str = ""
for j, inc_chunk in enumerate(chunks):
if i != j and chunk_results[j] in (None, False):
chunk_str += "+"
trial_tests.extend(inc_chunk)
else:
chunk_str += "-"
logger.debug("Using chunks %s" % chunk_str)
trial_tests.extend(target_test)
chunk_results[i] = self.unstable(trial_tests)
# if i == len(chunks) - 2 and all(item is False for item in chunk_results[:-1]):
# Dangerous? optimisation that if you got stability for 0..N-1 chunks
# it must be unstable with the Nth chunk
# chunk_results[i+1] = True
# continue
new_tests = []
keep_str = ""
for result, chunk in zip(chunk_results, chunks):
if not result:
keep_str += "+"
new_tests.extend(chunk)
else:
keep_str += "-"
logger.debug("Keeping chunks %s" % keep_str)
tests = new_tests
chunk_size = int(chunk_size / 2)
return tests + target_test
def unstable(self, tests):
logger.debug("Running with %i tests" % len(tests))
self.test_loader.tests = {self.test_type: tests}
stdout, stderr = sys.stdout, sys.stderr
sys.stdout = StringIO()
sys.stderr = StringIO()
with tempfile.NamedTemporaryFile() as f:
args = self.kwargs.copy()
args["log_raw"] = [f]
args["capture_stdio"] = False
wptrunner.setup_logging(args, {})
wptrunner.run_tests(test_loader=self.test_loader, **args)
wptrunner.logger.remove_handler(wptrunner.logger.handlers[0])
is_unstable = self.log_is_unstable(f)
sys.stdout, sys.stderr = stdout, stderr
logger.debug("Result was unstable with chunk removed"
if is_unstable else "stable")
return is_unstable
def log_is_unstable(self, log_f):
log_f.seek(0)
statuses = defaultdict(set)
def handle_status(item):
if item["test"] == self.target:
statuses[item["subtest"]].add(item["status"])
def handle_end(item):
if item["test"] == self.target:
statuses[None].add(item["status"])
reader.each_log(reader.read(log_f),
{"test_status": handle_status,
"test_end": handle_end})
logger.debug(str(statuses))
if not statuses:
logger.error("Didn't get any useful output from wptrunner")
log_f.seek(0)
for item in reader.read(log_f):
logger.debug(item)
return None
return any(len(item) > 1 for item in statuses.itervalues())
def get_initial_tests(self):
# Need to pass in arguments
all_tests = self.test_loader.tests[self.test_type]
tests = []
for item in all_tests:
tests.append(item)
if item.url == self.target:
break
logger.debug("Starting with tests: %s" % ("\n".join(item.id for item in tests)))
return tests
def do_reduce(**kwargs):
target = kwargs.pop("target")
reducer = Reducer(target, **kwargs)
unstable_set = reducer.run()
return unstable_set

View file

@@ -0,0 +1,6 @@
<!doctype html>
<title></title>
<script>
var timeout_multiplier = 1;
var win = null;
</script>

View file

@@ -0,0 +1,18 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
var props = {output:%(output)d};
setup(props);
add_completion_callback(function (tests, harness_status) {
alert("RESULT: " + JSON.stringify({
tests: tests.map(function(t) {
return { name: t.name, status: t.status, message: t.message, stack: t.stack}
}),
status: harness_status.status,
message: harness_status.message,
stack: harness_status.stack,
}));
});

View file

@@ -0,0 +1,21 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
var props = {output:%(output)d,
explicit_timeout: true};
if (window.opener && "timeout_multiplier" in window.opener) {
props["timeout_multiplier"] = window.opener.timeout_multiplier;
}
if (window.opener && window.opener.explicit_timeout) {
props["explicit_timeout"] = window.opener.explicit_timeout;
}
setup(props);
add_completion_callback(function() {
add_completion_callback(function(tests, status) {
window.opener.done(tests, status)
})
});

View file

@@ -0,0 +1,451 @@
import json
import os
import urlparse
from abc import ABCMeta, abstractmethod
from Queue import Empty
from collections import defaultdict, OrderedDict
from multiprocessing import Queue
import manifestinclude
import manifestexpected
import wpttest
from mozlog import structured
manifest = None
manifest_update = None
def do_delayed_imports():
# This relies on an already loaded module having set the sys.path correctly :(
global manifest, manifest_update
from manifest import manifest
from manifest import update as manifest_update
class TestChunker(object):
def __init__(self, total_chunks, chunk_number):
self.total_chunks = total_chunks
self.chunk_number = chunk_number
assert self.chunk_number <= self.total_chunks
def __call__(self, manifest):
raise NotImplementedError
class Unchunked(TestChunker):
def __init__(self, *args, **kwargs):
TestChunker.__init__(self, *args, **kwargs)
assert self.total_chunks == 1
def __call__(self, manifest):
for item in manifest:
yield item
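# Chunker that assigns each test path to a chunk based on a hash of the path;
# the split is deterministic but makes no attempt to balance running time.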
class HashChunker(TestChunker):
def __call__(self, manifest):
chunk_index = self.chunk_number - 1
for test_path, tests in manifest:
if hash(test_path) % self.total_chunks == chunk_index:
yield test_path, tests
class EqualTimeChunker(TestChunker):
"""Chunker that uses the test timeout as a proxy for the running time of the test"""
def _get_chunk(self, manifest_items):
# For each directory containing tests, calculate the maximum execution time after running all
# the tests in that directory. Then work out the index into the manifest corresponding to the
# directories at fractions of m/N of the running time where m=1..N-1 and N is the total number
# of chunks. Return an array of these indices
total_time = 0
by_dir = OrderedDict()
class PathData(object):
def __init__(self, path):
self.path = path
self.time = 0
self.tests = []
class Chunk(object):
def __init__(self):
self.paths = []
self.tests = []
self.time = 0
def append(self, path_data):
self.paths.append(path_data.path)
self.tests.extend(path_data.tests)
self.time += path_data.time
class ChunkList(object):
def __init__(self, total_time, n_chunks):
self.total_time = total_time
self.n_chunks = n_chunks
self.remaining_chunks = n_chunks
self.chunks = []
self.update_time_per_chunk()
def __iter__(self):
for item in self.chunks:
yield item
def __getitem__(self, i):
return self.chunks[i]
def sort_chunks(self):
self.chunks = sorted(self.chunks, key=lambda x:x.paths[0])
def get_tests(self, chunk_number):
return self[chunk_number - 1].tests
def append(self, chunk):
if len(self.chunks) == self.n_chunks:
raise ValueError("Tried to create more than %i chunks" % self.n_chunks)
self.chunks.append(chunk)
self.remaining_chunks -= 1
@property
def current_chunk(self):
if self.chunks:
return self.chunks[-1]
def update_time_per_chunk(self):
self.time_per_chunk = (self.total_time - sum(item.time for item in self)) / self.remaining_chunks
def create(self):
rv = Chunk()
self.append(rv)
return rv
def add_path(self, path_data):
sum_time = self.current_chunk.time + path_data.time
if sum_time > self.time_per_chunk and self.remaining_chunks > 0:
overshoot = sum_time - self.time_per_chunk
undershoot = self.time_per_chunk - self.current_chunk.time
if overshoot < undershoot:
self.create()
self.current_chunk.append(path_data)
else:
self.current_chunk.append(path_data)
self.create()
else:
self.current_chunk.append(path_data)
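# Estimate the running time of each test directory by summing the timeouts of
# its tests (the timeout is used as a proxy for how long a test takes to run).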
for i, (test_path, tests) in enumerate(manifest_items):
test_dir = tuple(os.path.split(test_path)[0].split(os.path.sep)[:3])
if test_dir not in by_dir:
by_dir[test_dir] = PathData(test_dir)
data = by_dir[test_dir]
time = sum(wpttest.DEFAULT_TIMEOUT if test.timeout !=
"long" else wpttest.LONG_TIMEOUT for test in tests)
data.time += time
data.tests.append((test_path, tests))
total_time += time
chunk_list = ChunkList(total_time, self.total_chunks)
if len(by_dir) < self.total_chunks:
raise ValueError("Tried to split into %i chunks, but only %i subdirectories included" % (
self.total_chunks, len(by_dir)))
# Put any individual dirs with a time greater than the time per chunk into their own
# chunk
while True:
to_remove = []
for path_data in by_dir.itervalues():
if path_data.time > chunk_list.time_per_chunk:
to_remove.append(path_data)
if to_remove:
for path_data in to_remove:
chunk = chunk_list.create()
chunk.append(path_data)
del by_dir[path_data.path]
chunk_list.update_time_per_chunk()
else:
break
chunk = chunk_list.create()
for path_data in by_dir.itervalues():
chunk_list.add_path(path_data)
assert len(chunk_list.chunks) == self.total_chunks, len(chunk_list.chunks)
assert sum(item.time for item in chunk_list) == chunk_list.total_time
chunk_list.sort_chunks()
return chunk_list.get_tests(self.chunk_number)
def __call__(self, manifest_iter):
manifest = list(manifest_iter)
tests = self._get_chunk(manifest)
for item in tests:
yield item
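# Filter that yields only the tests allowed by the include/exclude manifest,
# dropping any test path whose set of included tests ends up empty.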
class TestFilter(object):
def __init__(self, test_manifests, include=None, exclude=None, manifest_path=None):
test_manifests = test_manifests
if manifest_path is not None and include is None:
self.manifest = manifestinclude.get_manifest(manifest_path)
else:
self.manifest = manifestinclude.IncludeManifest.create()
if include:
self.manifest.set("skip", "true")
for item in include:
self.manifest.add_include(test_manifests, item)
if exclude:
for item in exclude:
self.manifest.add_exclude(test_manifests, item)
def __call__(self, manifest_iter):
for test_path, tests in manifest_iter:
include_tests = set()
for test in tests:
if self.manifest.include(test):
include_tests.add(test)
if include_tests:
yield test_path, include_tests
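# Loads the MANIFEST.json for each configured test root, creating or updating
# it when missing or when a forced update is requested, and returns a mapping
# from manifest object to its path data.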
class ManifestLoader(object):
def __init__(self, test_paths, force_manifest_update=False):
do_delayed_imports()
self.test_paths = test_paths
self.force_manifest_update = force_manifest_update
self.logger = structured.get_default_logger()
if self.logger is None:
self.logger = structured.structuredlog.StructuredLogger("ManifestLoader")
def load(self):
rv = {}
for url_base, paths in self.test_paths.iteritems():
manifest_file = self.load_manifest(url_base=url_base,
**paths)
path_data = {"url_base": url_base}
path_data.update(paths)
rv[manifest_file] = path_data
return rv
def create_manifest(self, manifest_path, tests_path, url_base="/"):
self.update_manifest(manifest_path, tests_path, url_base, recreate=True)
def update_manifest(self, manifest_path, tests_path, url_base="/",
recreate=False):
self.logger.info("Updating test manifest %s" % manifest_path)
json_data = None
if not recreate:
try:
with open(manifest_path) as f:
json_data = json.load(f)
except IOError:
# If the file doesn't exist, just create one from scratch
pass
if not json_data:
manifest_file = manifest.Manifest(None, url_base)
else:
try:
manifest_file = manifest.Manifest.from_json(tests_path, json_data)
except manifest.ManifestVersionMismatch:
manifest_file = manifest.Manifest(None, url_base)
manifest_update.update(tests_path, url_base, manifest_file)
manifest.write(manifest_file, manifest_path)
def load_manifest(self, tests_path, metadata_path, url_base="/"):
manifest_path = os.path.join(metadata_path, "MANIFEST.json")
if (not os.path.exists(manifest_path) or
self.force_manifest_update):
self.update_manifest(manifest_path, tests_path, url_base)
manifest_file = manifest.load(tests_path, manifest_path)
if manifest_file.url_base != url_base:
self.logger.info("Updating url_base in manifest from %s to %s" % (manifest_file.url_base,
url_base))
manifest_file.url_base = url_base
manifest.write(manifest_file, manifest_path)
return manifest_file
class TestLoader(object):
def __init__(self,
test_manifests,
test_types,
test_filter,
run_info,
chunk_type="none",
total_chunks=1,
chunk_number=1,
include_https=True):
self.test_types = test_types
self.test_filter = test_filter
self.run_info = run_info
self.manifests = test_manifests
self.tests = None
self.disabled_tests = None
self.include_https = include_https
self.chunk_type = chunk_type
self.total_chunks = total_chunks
self.chunk_number = chunk_number
self.chunker = {"none": Unchunked,
"hash": HashChunker,
"equal_time": EqualTimeChunker}[chunk_type](total_chunks,
chunk_number)
self._test_ids = None
self._load_tests()
@property
def test_ids(self):
if self._test_ids is None:
self._test_ids = []
for test_dict in [self.disabled_tests, self.tests]:
for test_type in self.test_types:
self._test_ids += [item.id for item in test_dict[test_type]]
return self._test_ids
def get_test(self, manifest_test, expected_file):
if expected_file is not None:
expected = expected_file.get_test(manifest_test.id)
else:
expected = None
return wpttest.from_manifest(manifest_test, expected)
def load_expected_manifest(self, test_manifest, metadata_path, test_path):
return manifestexpected.get_manifest(metadata_path, test_path, test_manifest.url_base, self.run_info)
def iter_tests(self):
manifest_items = []
for manifest in self.manifests.keys():
manifest_items.extend(self.test_filter(manifest.itertypes(*self.test_types)))
if self.chunker is not None:
manifest_items = self.chunker(manifest_items)
for test_path, tests in manifest_items:
manifest_file = iter(tests).next().manifest
metadata_path = self.manifests[manifest_file]["metadata_path"]
expected_file = self.load_expected_manifest(manifest_file, metadata_path, test_path)
for manifest_test in tests:
test = self.get_test(manifest_test, expected_file)
test_type = manifest_test.item_type
yield test_path, test_type, test
def _load_tests(self):
"""Read in the tests from the manifest file and add them to a queue"""
tests = {"enabled":defaultdict(list),
"disabled":defaultdict(list)}
for test_path, test_type, test in self.iter_tests():
enabled = not test.disabled()
if not self.include_https and test.protocol == "https":
enabled = False
key = "enabled" if enabled else "disabled"
tests[key][test_type].append(test)
self.tests = tests["enabled"]
self.disabled_tests = tests["disabled"]
def groups(self, test_types, chunk_type="none", total_chunks=1, chunk_number=1):
groups = set()
for test_type in test_types:
for test in self.tests[test_type]:
group = test.url.split("/")[1]
groups.add(group)
return groups
class TestSource(object):
__metaclass__ = ABCMeta
@abstractmethod
def queue_tests(self, test_queue):
pass
@abstractmethod
def requeue_test(self, test):
pass
def __enter__(self):
return self
def __exit__(self, *args, **kwargs):
pass
class SingleTestSource(TestSource):
def __init__(self, test_queue):
self.test_queue = test_queue
@classmethod
def queue_tests(cls, test_queue, test_type, tests):
for test in tests[test_type]:
test_queue.put(test)
def get_queue(self):
if self.test_queue.empty():
return None
return self.test_queue
def requeue_test(self, test):
self.test_queue.put(test)
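# Test source that queues tests in groups sharing a common URL path prefix
# (up to `depth` components), so each directory's tests are handed to a test
# runner together.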
class PathGroupedSource(TestSource):
def __init__(self, test_queue):
self.test_queue = test_queue
self.current_queue = None
@classmethod
def queue_tests(cls, test_queue, test_type, tests, depth=None):
if depth is True:
depth = None
prev_path = None
group = None
for test in tests[test_type]:
path = urlparse.urlsplit(test.url).path.split("/")[1:-1][:depth]
if path != prev_path:
group = []
test_queue.put(group)
prev_path = path
group.append(test)
def get_queue(self):
if not self.current_queue or self.current_queue.empty():
try:
data = self.test_queue.get(block=True, timeout=1)
self.current_queue = Queue()
for item in data:
self.current_queue.put(item)
except Empty:
return None
return self.current_queue
def requeue_test(self, test):
self.current_queue.put(test)
def __exit__(self, *args, **kwargs):
if self.current_queue:
self.current_queue.close()

View file

@@ -0,0 +1,664 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
from __future__ import unicode_literals
import multiprocessing
import sys
import threading
import traceback
from Queue import Empty
from multiprocessing import Process, current_process, Queue
from mozlog.structured import structuredlog
# Special value used as a sentinel in various commands
Stop = object()
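# Logger proxy used inside the TestRunner process: rather than logging
# directly, it forwards every log call over the result queue so the parent
# manager thread can do the actual logging.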
class MessageLogger(object):
def __init__(self, message_func):
self.send_message = message_func
def _log_data(self, action, **kwargs):
self.send_message("log", action, kwargs)
def process_output(self, process, data, command):
self._log_data("process_output", process=process, data=data, command=command)
def _log_func(level_name):
def log(self, message):
self._log_data(level_name.lower(), message=message)
log.__doc__ = """Log a message with level %s
:param message: The string message to log
""" % level_name
log.__name__ = str(level_name).lower()
return log
# Create all the methods on StructuredLog for debug levels
for level_name in structuredlog.log_levels:
setattr(MessageLogger, level_name.lower(), _log_func(level_name))
class TestRunner(object):
def __init__(self, test_queue, command_queue, result_queue, executor):
"""Class implementing the main loop for running tests.
This class delegates the job of actually running a test to the executor
that is passed in.
:param test_queue: multiprocessing.Queue containing the tests to run
:param command_queue: multiprocessing.Queue used to send commands to the
process
:param result_queue: multiprocessing.Queue used to send results to the
parent TestManager process
:param executor: TestExecutor object that will actually run a test.
"""
self.test_queue = test_queue
self.command_queue = command_queue
self.result_queue = result_queue
self.executor = executor
self.name = current_process().name
self.logger = MessageLogger(self.send_message)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.teardown()
def setup(self):
self.executor.setup(self)
def teardown(self):
self.executor.teardown()
self.send_message("runner_teardown")
self.result_queue = None
self.command_queue = None
self.browser = None
def run(self):
"""Main loop accepting commands over the pipe and triggering
the associated methods"""
self.setup()
commands = {"run_test": self.run_test,
"stop": self.stop,
"wait": self.wait}
while True:
command, args = self.command_queue.get()
try:
rv = commands[command](*args)
except Exception:
self.send_message("error",
"Error running command %s with arguments %r:\n%s" %
(command, args, traceback.format_exc()))
else:
if rv is Stop:
break
def stop(self):
return Stop
def run_test(self):
if not self.executor.is_alive():
self.send_message("restart_runner")
return
try:
# Need to block here just to allow for contention with other processes
test = self.test_queue.get(block=True, timeout=1)
except Empty:
# If we are running tests in groups (e.g. by-dir) then this queue might be
# empty but there could be other test queues. restart_runner won't actually
# start the runner if there aren't any more tests to run
self.send_message("restart_runner")
return
else:
self.send_message("test_start", test)
try:
return self.executor.run_test(test)
except Exception:
self.logger.critical(traceback.format_exc())
raise
def wait(self):
self.executor.protocol.wait()
self.send_message("after_test_ended", True)
def send_message(self, command, *args):
self.result_queue.put((command, args))
def start_runner(test_queue, runner_command_queue, runner_result_queue,
executor_cls, executor_kwargs,
executor_browser_cls, executor_browser_kwargs,
stop_flag):
"""Launch a TestRunner in a new process"""
try:
browser = executor_browser_cls(**executor_browser_kwargs)
executor = executor_cls(browser, **executor_kwargs)
with TestRunner(test_queue, runner_command_queue, runner_result_queue, executor) as runner:
try:
runner.run()
except KeyboardInterrupt:
stop_flag.set()
except Exception:
runner_result_queue.put(("log", ("critical", {"message": traceback.format_exc()})))
print >> sys.stderr, traceback.format_exc()
stop_flag.set()
finally:
runner_command_queue = None
runner_result_queue = None
manager_count = 0
def next_manager_number():
global manager_count
local = manager_count = manager_count + 1
return local
class TestRunnerManager(threading.Thread):
init_lock = threading.Lock()
def __init__(self, suite_name, test_queue, test_source_cls, browser_cls, browser_kwargs,
executor_cls, executor_kwargs, stop_flag, pause_after_test=False,
pause_on_unexpected=False, debug_args=None):
"""Thread that owns a single TestRunner process and any processes required
by the TestRunner (e.g. the Firefox binary).
TestRunnerManagers are responsible for launching the browser process and the
runner process, and for logging the test progress. The actual test running
is done by the TestRunner. In particular they:
* Start the binary of the program under test
* Start the TestRunner
* Tell the TestRunner to start a test, if any
* Log that the test started
* Log the test results
* Take any remedial action required e.g. restart crashed or hung
processes
"""
self.suite_name = suite_name
self.test_queue = test_queue
self.test_source_cls = test_source_cls
self.browser_cls = browser_cls
self.browser_kwargs = browser_kwargs
self.executor_cls = executor_cls
self.executor_kwargs = executor_kwargs
self.test_source = None
self.browser = None
self.browser_pid = None
# Flags used to shut down this thread if we get a sigint
self.parent_stop_flag = stop_flag
self.child_stop_flag = multiprocessing.Event()
self.pause_after_test = pause_after_test
self.pause_on_unexpected = pause_on_unexpected
self.debug_args = debug_args
self.manager_number = next_manager_number()
self.command_queue = Queue()
self.remote_queue = Queue()
self.test_runner_proc = None
threading.Thread.__init__(self, name="Thread-TestrunnerManager-%i" % self.manager_number)
# This is started in the actual new thread
self.logger = None
# The test that is currently running
self.test = None
self.unexpected_count = 0
# This may not really be what we want
self.daemon = True
self.init_fail_count = 0
self.max_init_fails = 5
self.init_timer = None
self.restart_count = 0
self.max_restarts = 5
def run(self):
"""Main loop for the TestManager.
TestManagers generally receive commands from their
TestRunner updating them on the status of a test. They
may also have a stop flag set by the main thread indicating
that the manager should shut down the next time the event loop
spins."""
self.logger = structuredlog.StructuredLogger(self.suite_name)
with self.browser_cls(self.logger, **self.browser_kwargs) as browser, self.test_source_cls(self.test_queue) as test_source:
self.browser = browser
self.test_source = test_source
try:
if self.init() is Stop:
return
while True:
commands = {"init_succeeded": self.init_succeeded,
"init_failed": self.init_failed,
"test_start": self.test_start,
"test_ended": self.test_ended,
"after_test_ended": self.after_test_ended,
"restart_runner": self.restart_runner,
"runner_teardown": self.runner_teardown,
"log": self.log,
"error": self.error}
try:
command, data = self.command_queue.get(True, 1)
except IOError:
if not self.should_stop():
self.logger.error("Got IOError from poll")
self.restart_count += 1
if self.restart_runner() is Stop:
break
except Empty:
command = None
if self.should_stop():
self.logger.debug("A flag was set; stopping")
break
if command is not None:
self.restart_count = 0
if commands[command](*data) is Stop:
break
else:
if not self.test_runner_proc.is_alive():
if not self.command_queue.empty():
# We got a new message so process that
continue
# If we got to here the runner presumably shut down
# unexpectedly
self.logger.info("Test runner process shut down")
if self.test is not None:
# This could happen if the test runner crashed for some other
# reason
# Need to consider the unlikely case where one test causes the
# runner process to repeatedly die
self.logger.info("Last test did not complete, requeueing")
self.requeue_test()
self.logger.warning(
"More tests found, but runner process died, restarting")
self.restart_count += 1
if self.restart_runner() is Stop:
break
finally:
self.logger.debug("TestRunnerManager main loop terminating, starting cleanup")
self.stop_runner()
self.teardown()
self.logger.debug("TestRunnerManager main loop terminated")
def should_stop(self):
return self.child_stop_flag.is_set() or self.parent_stop_flag.is_set()
def init(self):
"""Launch the browser that is being tested,
and the TestRunner process that will run the tests."""
# It seems that this lock is helpful to prevent some race that otherwise
# sometimes stops the spawned processes initialising correctly, and
# leaves this thread hung
if self.init_timer is not None:
self.init_timer.cancel()
self.logger.debug("Init called, starting browser and runner")
def init_failed():
# This is called from a separate thread, so we send a message to the
# main loop so we get back onto the manager thread
self.logger.debug("init_failed called from timer")
if self.command_queue:
self.command_queue.put(("init_failed", ()))
else:
self.logger.debug("Setting child stop flag in init_failed")
self.child_stop_flag.set()
with self.init_lock:
# Guard against problems initialising the browser or the browser
# remote control method
if self.debug_args is None:
self.init_timer = threading.Timer(self.browser.init_timeout, init_failed)
test_queue = self.test_source.get_queue()
if test_queue is None:
self.logger.info("No more tests")
return Stop
try:
if self.init_timer is not None:
self.init_timer.start()
self.browser.start()
self.browser_pid = self.browser.pid()
self.start_test_runner(test_queue)
except:
self.logger.warning("Failure during init %s" % traceback.format_exc())
if self.init_timer is not None:
self.init_timer.cancel()
self.logger.error(traceback.format_exc())
succeeded = False
else:
succeeded = True
# This has to happen after the lock is released
if not succeeded:
self.init_failed()
def init_succeeded(self):
"""Callback when we have started the browser, started the remote
control connection, and we are ready to start testing."""
self.logger.debug("Init succeeded")
if self.init_timer is not None:
self.init_timer.cancel()
self.init_fail_count = 0
self.start_next_test()
def init_failed(self):
"""Callback when starting the browser or the remote control connect
fails."""
self.init_fail_count += 1
self.logger.warning("Init failed %i" % self.init_fail_count)
if self.init_timer is not None:
self.init_timer.cancel()
if self.init_fail_count < self.max_init_fails:
self.restart_runner()
else:
self.logger.critical("Test runner failed to initialise correctly; shutting down")
return Stop
def start_test_runner(self, test_queue):
# Note that we need to be careful to start the browser before the
# test runner to ensure that any state set when the browser is started
# can be passed in to the test runner.
assert self.command_queue is not None
assert self.remote_queue is not None
self.logger.info("Starting runner")
executor_browser_cls, executor_browser_kwargs = self.browser.executor_browser()
args = (test_queue,
self.remote_queue,
self.command_queue,
self.executor_cls,
self.executor_kwargs,
executor_browser_cls,
executor_browser_kwargs,
self.child_stop_flag)
self.test_runner_proc = Process(target=start_runner,
args=args,
name="Thread-TestRunner-%i" % self.manager_number)
self.test_runner_proc.start()
self.logger.debug("Test runner started")
def send_message(self, command, *args):
self.remote_queue.put((command, args))
def cleanup(self):
if self.init_timer is not None:
self.init_timer.cancel()
self.logger.debug("TestManager cleanup")
while True:
try:
self.logger.warning(" ".join(map(repr, self.command_queue.get_nowait())))
except Empty:
break
while True:
try:
self.logger.warning(" ".join(map(repr, self.remote_queue.get_nowait())))
except Empty:
break
def teardown(self):
self.logger.debug("teardown in testrunnermanager")
self.test_runner_proc = None
self.command_queue.close()
self.remote_queue.close()
self.command_queue = None
self.remote_queue = None
def ensure_runner_stopped(self):
if self.test_runner_proc is None:
return
self.test_runner_proc.join(10)
if self.test_runner_proc.is_alive():
# This might leak a file handle from the queue
self.logger.warning("Forcibly terminating runner process")
self.test_runner_proc.terminate()
self.test_runner_proc.join(10)
else:
self.logger.debug("Testrunner exited with code %i" % self.test_runner_proc.exitcode)
def runner_teardown(self):
self.ensure_runner_stopped()
return Stop
def stop_runner(self):
"""Stop the TestRunner and the Firefox binary."""
self.logger.debug("Stopping runner")
if self.test_runner_proc is None:
return
try:
self.browser.stop()
if self.test_runner_proc.is_alive():
self.send_message("stop")
self.ensure_runner_stopped()
finally:
self.cleanup()
def start_next_test(self):
self.send_message("run_test")
def requeue_test(self):
self.test_source.requeue_test(self.test)
self.test = None
def test_start(self, test):
self.test = test
self.logger.test_start(test.id)
def test_ended(self, test, results):
"""Handle the end of a test.
Output the result of each subtest, and the result of the overall
harness to the logs.
"""
assert test == self.test
# Write the result of each subtest
file_result, test_results = results
subtest_unexpected = False
for result in test_results:
if test.disabled(result.name):
continue
expected = test.expected(result.name)
is_unexpected = expected != result.status
if is_unexpected:
self.unexpected_count += 1
self.logger.debug("Unexpected count in this thread %i" % self.unexpected_count)
subtest_unexpected = True
self.logger.test_status(test.id,
result.name,
result.status,
message=result.message,
expected=expected,
stack=result.stack)
# TODO: consider changing result if there is a crash dump file
# Write the result of the test harness
expected = test.expected()
status = file_result.status if file_result.status != "EXTERNAL-TIMEOUT" else "TIMEOUT"
is_unexpected = expected != status
if is_unexpected:
self.unexpected_count += 1
self.logger.debug("Unexpected count in this thread %i" % self.unexpected_count)
if status == "CRASH":
self.browser.log_crash(process=self.browser_pid, test=test.id)
self.logger.test_end(test.id,
status,
message=file_result.message,
expected=expected,
extra=file_result.extra)
self.test = None
restart_before_next = (file_result.status in ("CRASH", "EXTERNAL-TIMEOUT") or
subtest_unexpected or is_unexpected)
if (self.pause_after_test or
(self.pause_on_unexpected and (subtest_unexpected or is_unexpected))):
self.logger.info("Pausing until the browser exits")
self.send_message("wait")
else:
self.after_test_ended(restart_before_next)
def after_test_ended(self, restart_before_next):
# Handle starting the next test, with a runner restart if required
if restart_before_next:
return self.restart_runner()
else:
return self.start_next_test()
def restart_runner(self):
"""Stop and restart the TestRunner"""
if self.restart_count >= self.max_restarts:
return Stop
self.stop_runner()
return self.init()
def log(self, action, kwargs):
getattr(self.logger, action)(**kwargs)
def error(self, message):
self.logger.error(message)
self.restart_runner()
class TestQueue(object):
def __init__(self, test_source_cls, test_type, tests, **kwargs):
self.queue = None
self.test_source_cls = test_source_cls
self.test_type = test_type
self.tests = tests
self.kwargs = kwargs
self.queue = None
def __enter__(self):
if not self.tests[self.test_type]:
return None
self.queue = Queue()
has_tests = self.test_source_cls.queue_tests(self.queue,
self.test_type,
self.tests,
**self.kwargs)
# There is a race condition that means sometimes we continue
# before the tests have been written to the underlying pipe.
# Polling the pipe for data here avoids that
self.queue._reader.poll(10)
assert not self.queue.empty()
return self.queue
def __exit__(self, *args, **kwargs):
if self.queue is not None:
self.queue.close()
self.queue = None
class ManagerGroup(object):
def __init__(self, suite_name, size, test_source_cls, test_source_kwargs,
browser_cls, browser_kwargs,
executor_cls, executor_kwargs,
pause_after_test=False,
pause_on_unexpected=False,
debug_args=None):
"""Main thread object that owns all the TestManager threads."""
self.suite_name = suite_name
self.size = size
self.test_source_cls = test_source_cls
self.test_source_kwargs = test_source_kwargs
self.browser_cls = browser_cls
self.browser_kwargs = browser_kwargs
self.executor_cls = executor_cls
self.executor_kwargs = executor_kwargs
self.pause_after_test = pause_after_test
self.pause_on_unexpected = pause_on_unexpected
self.debug_args = debug_args
self.pool = set()
# Event that is polled by threads so that they can gracefully exit in the face
# of sigint
self.stop_flag = threading.Event()
self.logger = structuredlog.StructuredLogger(suite_name)
self.test_queue = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.stop()
def run(self, test_type, tests):
"""Start all managers in the group"""
self.logger.debug("Using %i processes" % self.size)
self.test_queue = TestQueue(self.test_source_cls,
test_type,
tests,
**self.test_source_kwargs)
with self.test_queue as test_queue:
if test_queue is None:
self.logger.info("No %s tests to run" % test_type)
return
for _ in range(self.size):
manager = TestRunnerManager(self.suite_name,
test_queue,
self.test_source_cls,
self.browser_cls,
self.browser_kwargs,
self.executor_cls,
self.executor_kwargs,
self.stop_flag,
self.pause_after_test,
self.pause_on_unexpected,
self.debug_args)
manager.start()
self.pool.add(manager)
self.wait()
def is_alive(self):
"""Boolean indicating whether any manager in the group is still alive"""
return any(manager.is_alive() for manager in self.pool)
def wait(self):
"""Wait for all the managers in the group to finish"""
for item in self.pool:
item.join()
def stop(self):
"""Set the stop flag so that all managers in the group stop as soon
as possible"""
self.stop_flag.set()
self.logger.debug("Stop flag set in ManagerGroup")
def unexpected_count(self):
return sum(item.unexpected_count for item in self.pool)

View file

@@ -0,0 +1,3 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.

View file

@@ -0,0 +1,79 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import unittest
import sys
sys.path.insert(0, "..")
from wptrunner import wptrunner
class MockTest(object):
def __init__(self, id, timeout=10):
self.id = id
self.item_type = "testharness"
self.timeout = timeout
def make_mock_manifest(*items):
rv = []
for dir_path, num_tests in items:
for i in range(num_tests):
rv.append((dir_path + "/%i.test" % i, set([MockTest(i)])))
return rv
class TestEqualTimeChunker(unittest.TestCase):
def test_include_all(self):
tests = make_mock_manifest(("a", 10), ("a/b", 10), ("c", 10))
chunk_1 = list(wptrunner.EqualTimeChunker(3, 1)(tests))
chunk_2 = list(wptrunner.EqualTimeChunker(3, 2)(tests))
chunk_3 = list(wptrunner.EqualTimeChunker(3, 3)(tests))
self.assertEquals(tests[:10], chunk_1)
self.assertEquals(tests[10:20], chunk_2)
self.assertEquals(tests[20:], chunk_3)
def test_include_all_1(self):
tests = make_mock_manifest(("a", 5), ("a/b", 5), ("c", 10), ("d", 10))
chunk_1 = list(wptrunner.EqualTimeChunker(3, 1)(tests))
chunk_2 = list(wptrunner.EqualTimeChunker(3, 2)(tests))
chunk_3 = list(wptrunner.EqualTimeChunker(3, 3)(tests))
self.assertEquals(tests[:10], chunk_1)
self.assertEquals(tests[10:20], chunk_2)
self.assertEquals(tests[20:], chunk_3)
def test_long(self):
tests = make_mock_manifest(("a", 100), ("a/b", 1), ("c", 1))
chunk_1 = list(wptrunner.EqualTimeChunker(3, 1)(tests))
chunk_2 = list(wptrunner.EqualTimeChunker(3, 2)(tests))
chunk_3 = list(wptrunner.EqualTimeChunker(3, 3)(tests))
self.assertEquals(tests[:100], chunk_1)
self.assertEquals(tests[100:101], chunk_2)
self.assertEquals(tests[101:102], chunk_3)
def test_long_1(self):
tests = make_mock_manifest(("a", 1), ("a/b", 100), ("c", 1))
chunk_1 = list(wptrunner.EqualTimeChunker(3, 1)(tests))
chunk_2 = list(wptrunner.EqualTimeChunker(3, 2)(tests))
chunk_3 = list(wptrunner.EqualTimeChunker(3, 3)(tests))
self.assertEquals(tests[:1], chunk_1)
self.assertEquals(tests[1:101], chunk_2)
self.assertEquals(tests[101:102], chunk_3)
def test_too_few_dirs(self):
with self.assertRaises(ValueError):
tests = make_mock_manifest(("a", 1), ("a/b", 100), ("c", 1))
list(wptrunner.EqualTimeChunker(4, 1)(tests))
if __name__ == "__main__":
unittest.main()

View file

@@ -0,0 +1,59 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import unittest
import sys
from cStringIO import StringIO
sys.path.insert(0, "..")
import hosts
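# Round-trip tests: parse a hosts file from a string and check that the
# re-serialised output matches the expected canonical form.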
class HostsTest(unittest.TestCase):
def do_test(self, input, expected):
host_file = hosts.HostsFile.from_file(StringIO(input))
self.assertEquals(host_file.to_string(), expected)
def test_simple(self):
self.do_test("""127.0.0.1 \tlocalhost alias # comment
# Another comment""",
"""127.0.0.1 localhost alias # comment
# Another comment
""")
def test_blank_lines(self):
self.do_test("""127.0.0.1 \tlocalhost alias # comment
\r
\t
# Another comment""",
"""127.0.0.1 localhost alias # comment
# Another comment
""")
def test_whitespace(self):
self.do_test(""" \t127.0.0.1 \tlocalhost alias # comment \r
\t# Another comment""",
"""127.0.0.1 localhost alias # comment
# Another comment
""")
def test_alignment(self):
self.do_test("""127.0.0.1 \tlocalhost alias
192.168.1.1 another_host another_alias
""","""127.0.0.1 localhost alias
192.168.1.1 another_host another_alias
"""
)
def test_multiple_same_name(self):
# The semantics are that we overwrite earlier entries with the same name
self.do_test("""127.0.0.1 \tlocalhost alias
192.168.1.1 localhost another_alias""","""192.168.1.1 localhost another_alias
"""
)
if __name__ == "__main__":
unittest.main()

View file

@@ -0,0 +1,322 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import unittest
import StringIO
from .. import metadata, manifestupdate
from mozlog.structured import structuredlog, handlers, formatters
class TestExpectedUpdater(unittest.TestCase):
def create_manifest(self, data, test_path="path/to/test.ini"):
f = StringIO.StringIO(data)
return manifestupdate.compile(f, test_path)
def create_updater(self, data, **kwargs):
expected_tree = {}
id_path_map = {}
for test_path, test_ids, manifest_str in data:
if isinstance(test_ids, (str, unicode)):
test_ids = [test_ids]
expected_tree[test_path] = self.create_manifest(manifest_str, test_path)
for test_id in test_ids:
id_path_map[test_id] = test_path
return metadata.ExpectedUpdater(expected_tree, id_path_map, **kwargs)
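# Build an in-memory structured log containing suite_start, the supplied
# events, and suite_end, formatted as JSON, for update_from_log to consume.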
def create_log(self, *args, **kwargs):
logger = structuredlog.StructuredLogger("expected_test")
data = StringIO.StringIO()
handler = handlers.StreamHandler(data, formatters.JSONFormatter())
logger.add_handler(handler)
log_entries = ([("suite_start", {"tests": [], "run_info": kwargs.get("run_info", {})})] +
list(args) +
[("suite_end", {})])
for item in log_entries:
action, kwargs = item
getattr(logger, action)(**kwargs)
logger.remove_handler(handler)
data.seek(0)
return data
def coalesce_results(self, trees):
for tree in trees:
for test in tree.iterchildren():
for subtest in test.iterchildren():
subtest.coalesce_expected()
test.coalesce_expected()
def test_update_0(self):
prev_data = [("path/to/test.htm.ini", ["/path/to/test.htm"], """[test.htm]
type: testharness
[test1]
expected: FAIL""")]
new_data = self.create_log(("test_start", {"test": "/path/to/test.htm"}),
("test_status", {"test": "/path/to/test.htm",
"subtest": "test1",
"status": "PASS",
"expected": "FAIL"}),
("test_end", {"test": "/path/to/test.htm",
"status": "OK"}))
updater = self.create_updater(prev_data)
updater.update_from_log(new_data)
new_manifest = updater.expected_tree["path/to/test.htm.ini"]
self.coalesce_results([new_manifest])
self.assertTrue(new_manifest.is_empty)
def test_update_1(self):
test_id = "/path/to/test.htm"
prev_data = [("path/to/test.htm.ini", [test_id], """[test.htm]
type: testharness
[test1]
expected: ERROR""")]
new_data = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "FAIL",
"expected": "ERROR"}),
("test_end", {"test": test_id,
"status": "OK"}))
updater = self.create_updater(prev_data)
updater.update_from_log(new_data)
new_manifest = updater.expected_tree["path/to/test.htm.ini"]
self.coalesce_results([new_manifest])
self.assertFalse(new_manifest.is_empty)
self.assertEquals(new_manifest.get_test(test_id).children[0].get("expected"), "FAIL")
def test_new_subtest(self):
test_id = "/path/to/test.htm"
prev_data = [("path/to/test.htm.ini", [test_id], """[test.htm]
type: testharness
[test1]
expected: FAIL""")]
new_data = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "FAIL",
"expected": "FAIL"}),
("test_status", {"test": test_id,
"subtest": "test2",
"status": "FAIL",
"expected": "PASS"}),
("test_end", {"test": test_id,
"status": "OK"}))
updater = self.create_updater(prev_data)
updater.update_from_log(new_data)
new_manifest = updater.expected_tree["path/to/test.htm.ini"]
self.coalesce_results([new_manifest])
self.assertFalse(new_manifest.is_empty)
self.assertEquals(new_manifest.get_test(test_id).children[0].get("expected"), "FAIL")
self.assertEquals(new_manifest.get_test(test_id).children[1].get("expected"), "FAIL")
def test_update_multiple_0(self):
test_id = "/path/to/test.htm"
prev_data = [("path/to/test.htm.ini", [test_id], """[test.htm]
type: testharness
[test1]
expected: FAIL""")]
new_data_0 = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "FAIL",
"expected": "FAIL"}),
("test_end", {"test": test_id,
"status": "OK"}),
run_info={"debug": False, "os": "osx"})
new_data_1 = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "TIMEOUT",
"expected": "FAIL"}),
("test_end", {"test": test_id,
"status": "OK"}),
run_info={"debug": False, "os": "linux"})
updater = self.create_updater(prev_data)
updater.update_from_log(new_data_0)
updater.update_from_log(new_data_1)
new_manifest = updater.expected_tree["path/to/test.htm.ini"]
self.coalesce_results([new_manifest])
self.assertFalse(new_manifest.is_empty)
self.assertEquals(new_manifest.get_test(test_id).children[0].get(
"expected", {"debug": False, "os": "osx"}), "FAIL")
self.assertEquals(new_manifest.get_test(test_id).children[0].get(
"expected", {"debug": False, "os": "linux"}), "TIMEOUT")
def test_update_multiple_1(self):
test_id = "/path/to/test.htm"
prev_data = [("path/to/test.htm.ini", [test_id], """[test.htm]
type: testharness
[test1]
expected: FAIL""")]
new_data_0 = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "FAIL",
"expected": "FAIL"}),
("test_end", {"test": test_id,
"status": "OK"}),
run_info={"debug": False, "os": "osx"})
new_data_1 = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "TIMEOUT",
"expected": "FAIL"}),
("test_end", {"test": test_id,
"status": "OK"}),
run_info={"debug": False, "os": "linux"})
updater = self.create_updater(prev_data)
updater.update_from_log(new_data_0)
updater.update_from_log(new_data_1)
new_manifest = updater.expected_tree["path/to/test.htm.ini"]
self.coalesce_results([new_manifest])
self.assertFalse(new_manifest.is_empty)
self.assertEquals(new_manifest.get_test(test_id).children[0].get(
"expected", {"debug": False, "os": "osx"}), "FAIL")
self.assertEquals(new_manifest.get_test(test_id).children[0].get(
"expected", {"debug": False, "os": "linux"}), "TIMEOUT")
self.assertEquals(new_manifest.get_test(test_id).children[0].get(
"expected", {"debug": False, "os": "windows"}), "FAIL")
def test_update_multiple_2(self):
test_id = "/path/to/test.htm"
prev_data = [("path/to/test.htm.ini", [test_id], """[test.htm]
type: testharness
[test1]
expected: FAIL""")]
new_data_0 = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "FAIL",
"expected": "FAIL"}),
("test_end", {"test": test_id,
"status": "OK"}),
run_info={"debug": False, "os": "osx"})
new_data_1 = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "TIMEOUT",
"expected": "FAIL"}),
("test_end", {"test": test_id,
"status": "OK"}),
run_info={"debug": True, "os": "osx"})
updater = self.create_updater(prev_data)
updater.update_from_log(new_data_0)
updater.update_from_log(new_data_1)
new_manifest = updater.expected_tree["path/to/test.htm.ini"]
self.coalesce_results([new_manifest])
self.assertFalse(new_manifest.is_empty)
self.assertEquals(new_manifest.get_test(test_id).children[0].get(
"expected", {"debug": False, "os": "osx"}), "FAIL")
self.assertEquals(new_manifest.get_test(test_id).children[0].get(
"expected", {"debug": True, "os": "osx"}), "TIMEOUT")
def test_update_multiple_3(self):
test_id = "/path/to/test.htm"
prev_data = [("path/to/test.htm.ini", [test_id], """[test.htm]
type: testharness
[test1]
expected:
if debug: FAIL
if not debug and os == "osx": TIMEOUT""")]
new_data_0 = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "FAIL",
"expected": "FAIL"}),
("test_end", {"test": test_id,
"status": "OK"}),
run_info={"debug": False, "os": "osx"})
new_data_1 = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "TIMEOUT",
"expected": "FAIL"}),
("test_end", {"test": test_id,
"status": "OK"}),
run_info={"debug": True, "os": "osx"})
updater = self.create_updater(prev_data)
updater.update_from_log(new_data_0)
updater.update_from_log(new_data_1)
new_manifest = updater.expected_tree["path/to/test.htm.ini"]
self.coalesce_results([new_manifest])
self.assertFalse(new_manifest.is_empty)
self.assertEquals(new_manifest.get_test(test_id).children[0].get(
"expected", {"debug": False, "os": "osx"}), "FAIL")
self.assertEquals(new_manifest.get_test(test_id).children[0].get(
"expected", {"debug": True, "os": "osx"}), "TIMEOUT")
def test_update_ignore_existing(self):
test_id = "/path/to/test.htm"
prev_data = [("path/to/test.htm.ini", [test_id], """[test.htm]
type: testharness
[test1]
expected:
if debug: TIMEOUT
if not debug and os == "osx": NOTRUN""")]
new_data_0 = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "FAIL",
"expected": "PASS"}),
("test_end", {"test": test_id,
"status": "OK"}),
run_info={"debug": False, "os": "linux"})
new_data_1 = self.create_log(("test_start", {"test": test_id}),
("test_status", {"test": test_id,
"subtest": "test1",
"status": "FAIL",
"expected": "PASS"}),
("test_end", {"test": test_id,
"status": "OK"}),
run_info={"debug": True, "os": "windows"})
updater = self.create_updater(prev_data, ignore_existing=True)
updater.update_from_log(new_data_0)
updater.update_from_log(new_data_1)
new_manifest = updater.expected_tree["path/to/test.htm.ini"]
self.coalesce_results([new_manifest])
self.assertFalse(new_manifest.is_empty)
self.assertEquals(new_manifest.get_test(test_id).children[0].get(
"expected", {"debug": True, "os": "osx"}), "FAIL")
self.assertEquals(new_manifest.get_test(test_id).children[0].get(
"expected", {"debug": False, "os": "osx"}), "FAIL")

View file

@@ -0,0 +1,51 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import sys
from mozlog.structured import structuredlog, commandline
from .. import wptcommandline
from update import WPTUpdate
def remove_logging_args(args):
"""Take logging args out of the dictionary of command line arguments so
they are not passed in as kwargs to the update code. This is particularly
necessary here because the arguments are often of type file, which cannot
be serialized.
:param args: Dictionary of command line arguments.
"""
for name in args.keys():
if name.startswith("log_"):
args.pop(name)
def setup_logging(args, defaults):
"""Use the command line arguments to set up the logger.
:param args: Dictionary of command line arguments.
:param defaults: Dictionary of {formatter_name: stream} to use if
no command line logging is specified"""
logger = commandline.setup_logging("web-platform-tests-update", args, defaults)
remove_logging_args(args)
return logger
def run_update(logger, **kwargs):
updater = WPTUpdate(logger, **kwargs)
return updater.run()
def main():
args = wptcommandline.parse_args_update()
logger = setup_logging(args, {"mach": sys.stdout})
assert structuredlog.get_default_logger() is not None
success = run_update(logger, **args)
sys.exit(0 if success else 1)

View file

@@ -0,0 +1,69 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
exit_unclean = object()
exit_clean = object()
class Step(object):
provides = []
def __init__(self, logger):
self.logger = logger
def run(self, step_index, state):
"""Base class for state-creating steps.
When a Step is run() the current state is checked to see
if the state from this step has already been created. If it
has the restore() method is invoked. Otherwise the create()
method is invoked with the state object. This is expected to
add items with all the keys in __class__.provides to the state
object.
"""
name = self.__class__.__name__
try:
stored_step = state.steps[step_index]
except IndexError:
stored_step = None
if stored_step == name:
self.restore(state)
elif stored_step is None:
self.create(state)
assert set(self.provides).issubset(set(state.keys()))
state.steps = state.steps + [name]
else:
raise ValueError("Expected a %s step, got a %s step" % (name, stored_step))
def create(self, data):
raise NotImplementedError
def restore(self, state):
self.logger.debug("Step %s using stored state" % (self.__class__.__name__,))
for key in self.provides:
assert key in state
class StepRunner(object):
steps = []
def __init__(self, logger, state):
"""Class that runs a specified series of Steps with a common State"""
self.state = state
self.logger = logger
if "steps" not in state:
state.steps = []
def run(self):
rv = None
for step_index, step in enumerate(self.steps):
self.logger.debug("Starting step %s" % step.__name__)
rv = step(self.logger).run(step_index, self.state)
if rv in (exit_clean, exit_unclean):
break
return rv

View file

@@ -0,0 +1,61 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
from .. import metadata
from base import Step, StepRunner
class UpdateExpected(Step):
"""Do the metadata update on the local checkout"""
provides = ["needs_human"]
def create(self, state):
if state.sync_tree is not None:
sync_root = state.sync_tree.root
else:
sync_root = None
state.needs_human = metadata.update_expected(state.paths,
state.serve_root,
state.run_log,
rev_old=None,
ignore_existing=state.ignore_existing,
sync_root=sync_root)
class CreateMetadataPatch(Step):
"""Create a patch/commit for the metadata checkout"""
def create(self, state):
if state.no_patch:
return
local_tree = state.local_tree
sync_tree = state.sync_tree
if sync_tree is not None:
name = "web-platform-tests_update_%s_metadata" % sync_tree.rev
message = "Update %s expected data to revision %s" % (state.suite_name, sync_tree.rev)
else:
name = "web-platform-tests_update_metadata"
message = "Update %s expected data" % state.suite_name
local_tree.create_patch(name, message)
if not local_tree.is_clean:
metadata_paths = [manifest_path["metadata_path"]
for manifest_path in state.paths.itervalues()]
for path in metadata_paths:
local_tree.add_new(os.path.relpath(path, local_tree.root))
local_tree.update_patch(include=metadata_paths)
local_tree.commit_patch()
class MetadataUpdateRunner(StepRunner):
"""(Sub)Runner for updating metadata"""
steps = [UpdateExpected,
CreateMetadataPatch]

View file

@@ -0,0 +1,137 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import cPickle as pickle
here = os.path.abspath(os.path.split(__file__)[0])
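# Persistent state for the update process: attribute writes are pickled to a
# file on disk so that an interrupted update can be resumed where it left off.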
class State(object):
filename = os.path.join(here, ".wpt-update.lock")
def __new__(cls, logger):
rv = cls.load(logger)
if rv is not None:
logger.debug("Existing state found")
return rv
logger.debug("No existing state found")
return object.__new__(cls, logger)
def __init__(self, logger):
"""Object containing state variables created when running Steps.
On write the state is serialized to disk, such that it can be restored in
the event that the program is interrupted before all steps are complete.
Note that this only works well if the values are immutable; mutating an
existing value will not cause the data to be serialized.
Variables are set and get as attributes e.g. state_obj.spam = "eggs".
:param logger: Structured logger used for debug output.
"""
if hasattr(self, "_data"):
return
self._data = [{}]
self._logger = logger
self._index = 0
def __getstate__(self):
rv = self.__dict__.copy()
del rv["_logger"]
return rv
@classmethod
def load(cls, logger):
"""Load saved state from a file"""
try:
with open(cls.filename) as f:
try:
rv = pickle.load(f)
logger.debug("Loading data %r" % (rv._data,))
rv._logger = logger
rv._index = 0
return rv
except EOFError:
logger.warning("Found empty state file")
except IOError:
logger.debug("IOError loading stored state")
def push(self, init_values):
"""Push a new clean state dictionary
:param init_values: List of variable names in the current state dict to copy
into the new state dict."""
return StateContext(self, init_values)
def save(self):
"""Write the state to disk"""
with open(self.filename, "w") as f:
pickle.dump(self, f)
def is_empty(self):
return len(self._data) == 1 and self._data[0] == {}
def clear(self):
"""Remove all state and delete the stored copy."""
try:
os.unlink(self.filename)
except OSError:
pass
self._data = [{}]
def __setattr__(self, key, value):
if key.startswith("_"):
object.__setattr__(self, key, value)
else:
self._data[self._index][key] = value
self.save()
def __getattr__(self, key):
if key.startswith("_"):
raise AttributeError
try:
return self._data[self._index][key]
except KeyError:
raise AttributeError
def __contains__(self, key):
return key in self._data[self._index]
def update(self, items):
"""Add a dictionary of {name: value} pairs to the state"""
self._data[self._index].update(items)
self.save()
def keys(self):
return self._data[self._index].keys()
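# Context manager returned by State.push(): entering pushes a fresh state
# dict seeded with the named init_values, exiting pops it again.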
class StateContext(object):
def __init__(self, state, init_values):
self.state = state
self.init_values = init_values
def __enter__(self):
if len(self.state._data) == self.state._index + 1:
# This is the case where there is no stored state
new_state = {}
for key in self.init_values:
new_state[key] = self.state._data[self.state._index][key]
self.state._data.append(new_state)
self.state._index += 1
self.state._logger.debug("Incremented index to %s" % self.state._index)
def __exit__(self, *args, **kwargs):
if len(self.state._data) > 1:
assert self.state._index == len(self.state._data) - 1
self.state._data.pop()
self.state._index -= 1
self.state._logger.debug("Decremented index to %s" % self.state._index)
assert self.state._index >= 0
else:
raise ValueError("Tried to pop the top state")

View file

@@ -0,0 +1,175 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import shutil
import uuid
from .. import testloader
from base import Step, StepRunner
from tree import Commit
here = os.path.abspath(os.path.split(__file__)[0])
bsd_license = """W3C 3-clause BSD License
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of works must retain the original copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the original copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the W3C nor the names of its contributors may be
used to endorse or promote products derived from this work without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
"""
def copy_wpt_tree(tree, dest):
"""Copy the working copy of a Tree to a destination directory.
:param tree: The Tree to copy.
:param dest: The destination directory"""
if os.path.exists(dest):
assert os.path.isdir(dest)
shutil.rmtree(dest)
os.mkdir(dest)
for tree_path in tree.paths():
source_path = os.path.join(tree.root, tree_path)
dest_path = os.path.join(dest, tree_path)
dest_dir = os.path.split(dest_path)[0]
if not os.path.isdir(source_path):
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
shutil.copy2(source_path, dest_path)
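# Copy the harness's own runner and report files into the destination tree,
# replacing the upstream resources/testharnessreport.js with the local one.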
for source, destination in [("testharness_runner.html", ""),
("testharnessreport.js", "resources/")]:
source_path = os.path.join(here, os.pardir, source)
dest_path = os.path.join(dest, destination, os.path.split(source)[1])
shutil.copy2(source_path, dest_path)
add_license(dest)
def add_license(dest):
"""Write the bsd license string to a LICENSE file.
:param dest: Directory in which to place the LICENSE file."""
with open(os.path.join(dest, "LICENSE"), "w") as f:
f.write(bsd_license)
class UpdateCheckout(Step):
"""Pull changes from upstream into the local sync tree."""
provides = ["local_branch"]
def create(self, state):
sync_tree = state.sync_tree
state.local_branch = uuid.uuid4().hex
sync_tree.update(state.sync["remote_url"],
state.sync["branch"],
state.local_branch)
class GetSyncTargetCommit(Step):
"""Find the commit that we will sync to."""
provides = ["sync_commit"]
def create(self, state):
if state.target_rev is None:
#Use upstream branch HEAD as the base commit
state.sync_commit = state.sync_tree.get_remote_sha1(state.sync["remote_url"],
state.sync["branch"])
else:
state.sync_commit = Commit(state.sync_tree, state.rev)
state.sync_tree.checkout(state.sync_commit.sha1, state.local_branch, force=True)
self.logger.debug("New base commit is %s" % state.sync_commit.sha1)
class LoadManifest(Step):
"""Load the test manifest"""
provides = ["test_manifest"]
def create(self, state):
state.test_manifest = testloader.ManifestLoader(state.tests_path).load_manifest(
state.tests_path, state.metadata_path,
)
class UpdateManifest(Step):
"""Update the manifest to match the tests in the sync tree checkout"""
provides = ["initial_rev"]
def create(self, state):
from manifest import manifest, update
test_manifest = state.test_manifest
state.initial_rev = test_manifest.rev
update.update(state.sync["path"], "/", test_manifest)
manifest.write(test_manifest, os.path.join(state.metadata_path, "MANIFEST.json"))
class CopyWorkTree(Step):
"""Copy the sync tree over to the destination in the local tree"""
def create(self, state):
copy_wpt_tree(state.sync_tree,
state.tests_path)
class CreateSyncPatch(Step):
"""Add the updated test files to a commit/patch in the local tree."""
def create(self, state):
if state.no_patch:
return
local_tree = state.local_tree
sync_tree = state.sync_tree
local_tree.create_patch("web-platform-tests_update_%s" % sync_tree.rev,
"Update %s to revision %s" % (state.suite_name, sync_tree.rev))
local_tree.add_new(os.path.relpath(state.tests_path,
local_tree.root))
updated = local_tree.update_patch(include=[state.tests_path,
state.metadata_path])
local_tree.commit_patch()
if not updated:
self.logger.info("Nothing to sync")
class SyncFromUpstreamRunner(StepRunner):
"""(Sub)Runner for doing an upstream sync"""
steps = [UpdateCheckout,
GetSyncTargetCommit,
LoadManifest,
UpdateManifest,
CopyWorkTree,
CreateSyncPatch]

Some files were not shown because too many files have changed in this diff