Update web-platform-tests to revision b'b728032f59a396243864b0f8584e7211e3632005'

This commit is contained in:
WPT Sync Bot 2022-11-10 01:22:36 +00:00
parent ace9b32b1c
commit df68c4e5d1
15632 changed files with 514865 additions and 155000 deletions


@@ -115,7 +115,8 @@ no [parse errors](https://validator.nu).
This is not, however, to discourage testing of edge cases or
interactions between multiple features; such tests are an essential
part of ensuring interoperability of the web platform. When possible, use the
canonical support libraries provided by features; for more information, see the documentation on [testing interactions between features][interacting-features].
Tests should pass when the feature under test exposes the expected behavior,
and they should fail when the feature under test is not implemented or is
@@ -158,12 +159,12 @@ should be used.
### Be Self-Contained
Tests must not depend on external network resources. When these tests
are run on CI systems, they are typically configured with access to
external resources disabled, so tests that try to access them will
fail. Where tests want to use multiple hosts, this is possible through
a known set of subdomains and the [text substitution features of
wptserve](server-features).
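As an illustration, here is a minimal sketch of reaching a second origin through those substitutions (the template keys follow common WPT usage but should be checked against the wptserve documentation linked above; the file name and frame are made up):

```js
// In a file whose name contains ".sub" (for example, example.sub.html), wptserve
// rewrites the templates below before serving it: {{domains[www]}} expands to the
// "www" subdomain of the test host and {{ports[https][0]}} to an HTTPS port.
const crossOriginBase = "https://{{domains[www]}}:{{ports[https][0]}}";
const frame = document.createElement("iframe");
frame.src = crossOriginBase + "/common/blank.html";
document.body.appendChild(frame);
```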
### Be Self-Describing
@@ -221,6 +222,7 @@ for CSS have some additional requirements for:
[css-metadata]: css-metadata
[css-user-styles]: css-user-styles
[file-name-flags]: file-names
[interacting-features]: interacting-features
[mozilla-firefox]: https://mozilla.org/firefox
[google-chrome]: https://google.com/chrome/browser/desktop/
[apple-safari]: https://apple.com/safari


@@ -116,7 +116,7 @@ For additional information, please see the [GitHub docs][github-fork-docs].
## Configure your environment
If all you intend to do is to load [manual tests](../writing-tests/manual) or [reftests](../writing-tests/reftests) from your local file system,
the above setup should be sufficient.
But many tests (and in particular, all [testharness.js tests](../writing-tests/testharness)) require a local web server.


@@ -0,0 +1,25 @@
# Testing interactions between features
Web platform features do not exist in isolation; tests often need to cover the interaction between two features.
To support this, many directories contain libraries intended to be used from other directories.
These are not WPT server features, but canonical usages of one feature that tests for other features can build against.
This allows a feature's tests to be decoupled as much as possible from the specifics of the other features it integrates with.
## Web Platform Feature Testing Support Libraries
### Common
There are several useful utilities in the `/common/` directory.
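For example, two helpers that are commonly loaded from `/common/` (a sketch; the property name used at the end is an assumption about the shape of `get_host_info()`'s return value):

```js
// Loaded in the test page with:
//   <script src="/common/utils.js"></script>
//   <script src="/common/get-host-info.sub.js"></script>

const id = token();                 // token() returns a unique identifier string
const host_info = get_host_info();  // origins and ports for the test server's hosts
const remote = host_info.HTTPS_REMOTE_ORIGIN;  // assumed property; see the script for the full list
```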
### Cookies
Features which need to test their interaction with cookies can use the scripts in `cookies/resources` to control which cookies are set on a given request.
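A hedged sketch of what using one of those scripts could look like (the resource name `set-cookie.py` and its query parameters are assumptions; check `cookies/resources` for the scripts that actually exist):

```js
promise_test(async () => {
  // Ask the server-side helper to set a cookie on the response, then verify
  // that the document can see it.
  await fetch("/cookies/resources/set-cookie.py?name=example&path=/", {
    credentials: "include",
  });
  assert_true(document.cookie.includes("example"));
}, "cookie set by the helper resource is visible to the document");
```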
### Permissions Policy
Features which integrate with Permissions Policy can make use of the `permissions-policy.js` support library to generate a set of tests for that integration.
### Reporting
Testing integration with the Reporting API can be done with the help of the common report collector. This service collects reports sent from tests and provides an API to retrieve them. See the documentation at `reporting/resources/README.md`.


@@ -67,6 +67,17 @@ import importlib
myhelper = importlib.import_module('common.security-features.myhelper')
```
**Note on `__init__` files**: Importing helper scripts like this
requires a 'path' of empty `__init__.py` files in every directory down
to the helper. For example, if your helper is
`css/css-align/resources/myhelper.py`, you need to have:
```
css/__init__.py
css/css-align/__init__.py
css/css-align/resources/__init__.py
```
## Example: Dynamic HTTP headers
The following code defines a Python handler that allows the requester to


@@ -35,28 +35,6 @@ channel][matrix] if you have an issue. There is no need to announce
your review request; as soon as you make a Pull Request, GitHub will
inform interested parties.
## Previews
The website [http://w3c-test.org](http://w3c-test.org) exists to help
contributors demonstrate their proposed changes to others. If you are [a GitHub
collaborator](https://help.github.com/en/articles/permission-levels-for-a-user-account-repository)
on WPT, then the content of your pull requests will be available at
`http://w3c-test.org/submissions/{{pull request ID}}`, where "pull request ID"
is the numeric identifier for the pull request.
For example, a pull request at https://github.com/web-platform-tests/wpt/pull/3
has a pull request ID `3`. Its contents can be viewed at
http://w3c-test.org/submissions/3.
If you are *not* a GitHub collaborator, then your submission may be made
available if a collaborator makes the following comment on your pull request:
"w3c-test:mirror".
Previews are not created automatically for non-collaborators because the WPT
server will execute Python code in the mirrored submissions. Collaborators are
encouraged to enable the preview by making the special comment only if they
trust the authors not to submit malicious code.
[repo]: https://github.com/web-platform-tests/wpt/
[github flow]: https://guides.github.com/introduction/flow/
[public-test-infra]: https://lists.w3.org/Archives/Public/public-test-infra/


@@ -50,6 +50,8 @@ the global scope.
### Cookies ###
```eval_rst
.. js:autofunction:: test_driver.delete_all_cookies
.. js:autofunction:: test_driver.get_all_cookies
.. js:autofunction:: test_driver.get_named_cookie
```
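For instance, a minimal sketch of using these helpers from a promise test (the cookie name is made up, and the shape of the resolved cookie object is assumed to mirror the WebDriver cookie serialization):

```js
promise_test(async () => {
  // Start from a clean slate, set a cookie, then read it back.
  await test_driver.delete_all_cookies();
  document.cookie = "wpt_example=1; path=/";
  const cookie = await test_driver.get_named_cookie("wpt_example");
  assert_equals(cookie.value, "1");
}, "get_named_cookie returns the cookie set by the test");
```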
### Permissions ###
@@ -129,9 +131,9 @@ function that can be used to send arbitrary messages to the test
window. For example, in an auxiliary browsing context:
```js
test_driver.set_test_context(window.opener)
await test_driver.click(document.getElementsByTagName("button")[0])
test_driver.message_test("click complete")
```
The requirement to have a handle to the test window does mean it's
@@ -167,11 +169,11 @@ let actions = new test_driver.Actions()
.setContext(frames[0])
.keyDown("p")
.keyUp("p");
await actions.send();
```
Note that if an action uses an element reference, the context will be
derived from that element, and must match any explicitly set
context. Using elements in multiple contexts in a single action chain
is not supported.
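For example, here is a sketch of an action chain whose context is derived from an element rather than set explicitly (the button element is hypothetical):

```js
const button = document.querySelector("button");
// No setContext() call: the context is derived from the element used as the
// pointer origin, and it must not conflict with an explicitly set context.
const actions = new test_driver.Actions()
    .pointerMove(0, 0, {origin: button})
    .pointerDown()
    .pointerUp();
await actions.send();
```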


@@ -237,7 +237,7 @@ promise_test(() => {
}, "DOMContentLoaded");
```
**Note:** Unlike asynchronous tests, testharness.js queues promise
tests so the next test won't start running until after the previous
promise test finishes. [When mixing promise-based logic and async
steps](https://github.com/web-platform-tests/wpt/pull/17924), the next
@@ -627,7 +627,7 @@ harness will assume there are no more results to come when:
1. There are no `Test` objects that have been created but not completed
2. The load event on the document has fired
For single page tests, or when the `explicit_done` property has been
set in the [setup](#setup), the [`done`](#done) function must be used.
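A minimal sketch of the `explicit_done` pattern (the fetched resource is just an example):

```js
setup({explicit_done: true});

fetch("/common/blank.html").then(resp => {
  test(() => assert_equals(resp.status, 200), "resource is served");
  // Tell the harness that no further results will be produced.
  done();
});
```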
```eval_rst


@@ -1,11 +1,5 @@
# testharness.js tutorial
<!--
Note to maintainers:
@@ -31,8 +25,10 @@ purposes of this guide, we'll only consider the features we need to test the
behavior of `fetch`.
```eval_rst
.. contents:: Table of Contents
:depth: 3
:local:
:backlinks: none
```
## Setting up your workspace


@@ -167,6 +167,9 @@ are:
* `jsshell`: to be run in a JavaScript shell, without access to the DOM
(currently only supported in SpiderMonkey, and skipped in wptrunner)
* `worker`: shorthand for the dedicated, shared, and service worker scopes
* `shadowrealm`: runs the test code in a
[ShadowRealm](https://github.com/tc39/proposal-shadowrealm) context hosted in
an ordinary Window context; to be run at <code><var>x</var>.any.shadowrealm.html</code>
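As an illustration, these scope keywords are normally selected with a `META: global` comment at the top of the `.any.js` file; a minimal sketch (the file name and assertion are made up):

```js
// META: global=window,worker,shadowrealm
// With this metadata, an example.any.js file is served as example.any.html,
// example.any.worker.html, example.any.shadowrealm.html, and so on.

test(() => {
  assert_true(typeof globalThis === "object");
}, "globalThis is exposed in every scope");
```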
To check if your test is run from a window or worker, you can use the following
two methods that will be made available by the framework:
@@ -238,7 +241,7 @@ otherwise too many tests to complete inside the timeout. For example:
<meta name="variant" content="?2001-last">
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script src="/common/subset-tests.js">
<script src="/common/subset-tests.js"></script>
<script>
const tests = [
{ fn: t => { ... }, name: "..." },