Auto merge of #21863 - servo:tc-windows, r=Manishearth

Build on and for Windows on Taskcluster CI

I’ve configured a `servo-win2016` Taskcluster worker type and built an AMI for it. The docs and scripts for this are in `etc/taskcluster/windows` in this PR. They don’t strictly need to be in this repository, but it’s as good a place as any.

This PR also adds a new Windows task similar to Buildbot’s `windows-msvc-dev` job. Like the other tasks triggered on `github-push` events (in particular pushes by Homu to the `auto` branch), it needs to succeed for a PR to be merged.

CC https://github.com/servo/saltfs/issues/559

bors-servo 2018-10-10 04:31:42 -04:00
commit 78327fcba5
18 changed files with 1267 additions and 454 deletions

.gitignore (vendored)

@ -19,6 +19,7 @@
*.csv
*.rej
*.orig
.coverage
.DS_Store
Servo.app
.config.mk.last

.taskcluster.yml

@ -5,7 +5,7 @@ policy:
tasks:
- $if: 'tasks_for == "github-push"'
then:
$if: 'event.ref in ["refs/heads/auto", "refs/heads/try"]'
$if: 'event.ref in ["refs/heads/auto", "refs/heads/try", "refs/heads/try-taskcluster"]'
then:
# NOTE: when updating this consider whether the daily hook needs similar changes:
@ -53,4 +53,4 @@ tasks:
cd repo &&
git fetch --depth 1 "$GIT_URL" "$GIT_REF" &&
git reset --hard "$GIT_SHA" &&
python3 etc/taskcluster/decision-task.py
python3 etc/taskcluster/decision_task.py

etc/memory_reports_over_time.py (Normal file → Executable file)

etc/taskcluster/README.md

@ -40,7 +40,7 @@ to build an arbitrary [task graph].
## Servo's decision task
This repository's [`.taskcluster.yml`][tc.yml] schedules a single task
that runs the Python 3 script [`etc/taskcluster/decision-task.py`](decision-task.py).
that runs the Python 3 script [`etc/taskcluster/decision_task.py`](decision_task.py).
It is called a *decision task* as it is responsible for deciding what other tasks to schedule.
The Docker image that runs the decision task
@ -101,7 +101,7 @@ together with multiple testing tasks that each depend on the build task
(wait until it successfully finishes before they can start)
and start by downloading the artifact that was saved earlier.
The logic for all this is in [`decision-task.py`](decision-task.py)
The logic for all this is in [`decision_task.py`](decision_task.py)
and can be modified in any pull request.
[web-platform-tests]: https://github.com/web-platform-tests/wpt
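For a condensed picture of that pattern, here is a hypothetical sketch using the chainable API from [`decisionlib.py`](decisionlib.py) (shown later in this diff); the names and paths are illustrative, and the real logic lives in `decision_task.py`:

```python
# Illustrative sketch only, not the real decision_task.py: one build task
# uploads an artifact, and a testing task depends on it and downloads it.
from decisionlib import DockerWorkerTask

build_task = (
    DockerWorkerTask("Linux x64: release build")
    .with_dockerfile("etc/taskcluster/docker/build.dockerfile")
    .with_script("./mach build --release && tar -czf /target.tar.gz target/release/servo")
    .with_artifacts("/target.tar.gz")
    .create()  # returns the new task ID
)

(
    DockerWorkerTask("Linux x64: WPT chunk 1 / 2")
    .with_dockerfile("etc/taskcluster/docker/run.dockerfile")
    .with_dependencies(build_task)  # wait for the build task to succeed
    .with_env(BUILD_TASK_ID=build_task)
    # Start by downloading the artifact that was saved earlier:
    .with_early_script(
        "./etc/taskcluster/curl-artifact.sh $BUILD_TASK_ID target.tar.gz | tar -xz"
    )
    .with_script("./mach test-wpt --release --total-chunks 2 --this-chunk 1")
    .create()
)
```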
@ -162,7 +162,7 @@ to edit that role in the web UI and grant more scopes to these tasks
The [`project-servo/daily`] hook in Taskcluster's [Hooks service]
is used to run some tasks automatically every 24 hours.
In this case as well we use a decision task.
The `decision-task.py` script can differentiate this from a GitHub push
The `decision_task.py` script can differentiate this from a GitHub push
based on the `$TASK_FOR` environment variable.
Daily tasks can also be triggered manually.
@ -221,7 +221,7 @@ To modify those, submit a pull request.
* The [`.taskcluster.yml`][tc.yml] file,
for starting decision tasks in reaction to GitHub events
* The [`etc/ci/decision-task.py`](decision-task.py) file,
* The [`etc/taskcluster/decision_task.py`](decision_task.py) file,
defining what other tasks to schedule
However some configuration needs to be handled separately.

etc/taskcluster/decision-task.py (deleted)

@ -1,245 +0,0 @@
# coding: utf8
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os.path
import subprocess
from decisionlib import DecisionTask
def main():
task_for = os.environ["TASK_FOR"]
if task_for == "github-push":
linux_tidy_unit()
#linux_wpt()
android_arm32()
# https://tools.taskcluster.net/hooks/project-servo/daily
elif task_for == "daily":
daily_tasks_setup()
with_rust_nightly()
android_arm32()
else:
raise ValueError("Unrecognized $TASK_FOR value: %r" % task_for)
ping_on_daily_task_failure = "SimonSapin, nox, emilio"
build_artifacts_expiry = "1 week"
log_artifacts_expiry = "1 year"
build_env = {
"RUST_BACKTRACE": "1",
"RUSTFLAGS": "-Dwarnings",
"CARGO_INCREMENTAL": "0",
"SCCACHE_IDLE_TIMEOUT": "1200",
"CCACHE": "sccache",
"RUSTC_WRAPPER": "sccache",
"SHELL": "/bin/dash", # For SpiderMonkeys build system
}
def linux_tidy_unit():
return decision.create_task(
task_name="Linux x86_64: tidy + dev build + unit tests",
script="""
./mach test-tidy --no-progress --all
./mach build --dev
./mach test-unit
./mach package --dev
./mach test-tidy --no-progress --self-test
python2.7 ./etc/memory_reports_over_time.py --test
python3 ./etc/taskcluster/mock.py
./etc/ci/lockfile_changed.sh
./etc/ci/check_no_panic.sh
""",
**build_kwargs
)
def with_rust_nightly():
return decision.create_task(
task_name="Linux x86_64: with Rust Nightly",
script="""
echo "nightly" > rust-toolchain
./mach build --dev
./mach test-unit
""",
**build_kwargs
)
def android_arm32():
return decision.find_or_create_task(
index_bucket="build.android_armv7_release",
index_key=os.environ["GIT_SHA"], # Set in .taskcluster.yml
index_expiry=build_artifacts_expiry,
task_name="Android ARMv7: build",
# file: NDK parses $(file $SHELL) to tell x86_64 from x86
# wget: servo-media-gstreamer's build script
script="""
apt-get install -y --no-install-recommends openjdk-8-jdk-headless file wget
./etc/ci/bootstrap-android-and-accept-licences.sh
./mach build --android --release
""",
artifacts=[
"/repo/target/armv7-linux-androideabi/release/servoapp.apk",
"/repo/target/armv7-linux-androideabi/release/servoview.aar",
],
**build_kwargs
)
def linux_wpt():
release_build_task = linux_release_build()
total_chunks = 2
for i in range(total_chunks):
this_chunk = i + 1
wpt_chunk(release_build_task, total_chunks, this_chunk, extra=(this_chunk == 1))
def linux_release_build():
return decision.find_or_create_task(
index_bucket="build.linux_x86-64_release",
index_key=os.environ["GIT_SHA"], # Set in .taskcluster.yml
index_expiry=build_artifacts_expiry,
task_name="Linux x86_64: release build",
script="""
./mach build --release --with-debug-assertions -p servo
./etc/ci/lockfile_changed.sh
tar -czf /target.tar.gz \
target/release/servo \
target/release/build/osmesa-src-*/output \
target/release/build/osmesa-src-*/out/lib/gallium
""",
artifacts=[
"/target.tar.gz",
],
**build_kwargs
)
def wpt_chunk(release_build_task, total_chunks, this_chunk, extra):
if extra:
name_extra = " + extra"
script_extra = """
./mach test-wpt-failure
./mach test-wpt --release --binary-arg=--multiprocess --processes 24 \
--log-raw test-wpt-mp.log \
--log-errorsummary wpt-mp-errorsummary.log \
eventsource
"""
else:
name_extra = ""
script_extra = ""
script = """
./mach test-wpt \
--release \
--processes 24 \
--total-chunks "$TOTAL_CHUNKS" \
--this-chunk "$THIS_CHUNK" \
--log-raw test-wpt.log \
--log-errorsummary wpt-errorsummary.log \
--always-succeed
./mach filter-intermittents\
wpt-errorsummary.log \
--log-intermittents intermittents.log \
--log-filteredsummary filtered-wpt-errorsummary.log \
--tracker-api default
"""
# FIXME: --reporter-api default
# IndexError: list index out of range
# File "/repo/python/servo/testing_commands.py", line 533, in filter_intermittents
# pull_request = int(last_merge.split(' ')[4][1:])
create_run_task(
build_task=release_build_task,
task_name="Linux x86_64: WPT chunk %s / %s%s" % (this_chunk, total_chunks, name_extra),
script=script_extra + script,
env={
"TOTAL_CHUNKS": total_chunks,
"THIS_CHUNK": this_chunk,
},
)
def create_run_task(*, build_task, script, **kwargs):
fetch_build = """
./etc/taskcluster/curl-artifact.sh ${BUILD_TASK_ID} target.tar.gz | tar -xz
"""
kwargs.setdefault("env", {})["BUILD_TASK_ID"] = build_task
kwargs.setdefault("dependencies", []).append(build_task)
kwargs.setdefault("artifacts", []).extend(
("/repo/" + word, log_artifacts_expiry)
for word in script.split() if word.endswith(".log")
)
return decision.create_task(
script=fetch_build + script,
max_run_time_minutes=60,
dockerfile=dockerfile_path("run"),
**kwargs
)
def daily_tasks_setup():
# ':' is not accepted in an index namespace:
# https://docs.taskcluster.net/docs/reference/core/taskcluster-index/references/api
now = decision.now.strftime("%Y-%m-%d_%H-%M-%S")
index_path = "%s.daily.%s" % (decision.index_prefix, now)
# Index this task manually rather than with a route,
# so that it is indexed even if it fails.
decision.index_service.insertTask(index_path, {
"taskId": os.environ["TASK_ID"],
"rank": 0,
"data": {},
"expires": decision.from_now_json(log_artifacts_expiry),
})
# Unlike when reacting to a GitHub event,
# the commit hash is not known until we clone the repository.
os.environ["GIT_SHA"] = \
subprocess.check_output(["git", "rev-parse", "HEAD"]).decode("utf8").strip()
# On failure, notify a few people on IRC
# https://docs.taskcluster.net/docs/reference/core/taskcluster-notify/docs/usage
notify_route = "notify.irc-channel.#servo.on-failed"
decision.routes_for_all_subtasks.append(notify_route)
decision.scopes_for_all_subtasks.append("queue:route:" + notify_route)
decision.task_name_template = "Servo daily: %s. On failure, ping: " + ping_on_daily_task_failure
def dockerfile_path(name):
return os.path.join(os.path.dirname(__file__), "docker", name + ".dockerfile")
decision = DecisionTask(
task_name_template="Servo: %s",
index_prefix="project.servo.servo",
worker_type="servo-docker-worker",
)
# https://docs.taskcluster.net/docs/reference/workers/docker-worker/docs/caches
cache_scopes = [
"docker-worker:cache:cargo-*",
]
build_caches = {
"cargo-registry-cache": "/root/.cargo/registry",
"cargo-git-cache": "/root/.cargo/git",
"cargo-rustup": "/root/.rustup",
"cargo-sccache": "/root/.cache/sccache",
}
build_kwargs = {
"max_run_time_minutes": 60,
"dockerfile": dockerfile_path("build"),
"env": build_env,
"scopes": cache_scopes,
"cache": build_caches,
}
if __name__ == "__main__":
main()

etc/taskcluster/decision_task.py (new file)

@ -0,0 +1,299 @@
# coding: utf8
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os.path
from decisionlib import *
def main(task_for, mock=False):
if task_for == "github-push":
if CONFIG.git_ref in ["refs/heads/auto", "refs/heads/try", "refs/heads/try-taskcluster"]:
linux_tidy_unit()
android_arm32()
windows_dev()
if mock:
windows_release()
linux_wpt()
linux_build_task("Indexed by task definition").find_or_create()
# https://tools.taskcluster.net/hooks/project-servo/daily
elif task_for == "daily":
daily_tasks_setup()
with_rust_nightly()
android_arm32()
else: # pragma: no cover
raise ValueError("Unrecognized $TASK_FOR value: %r" % task_for)
ping_on_daily_task_failure = "SimonSapin, nox, emilio"
build_artifacts_expire_in = "1 week"
build_dependencies_artifacts_expire_in = "1 month"
log_artifacts_expire_in = "1 year"
build_env = {
"RUST_BACKTRACE": "1",
"RUSTFLAGS": "-Dwarnings",
"CARGO_INCREMENTAL": "0",
}
linux_build_env = {
"CCACHE": "sccache",
"RUSTC_WRAPPER": "sccache",
"SCCACHE_IDLE_TIMEOUT": "1200",
"SHELL": "/bin/dash", # For SpiderMonkeys build system
}
windows_build_env = {
"LIB": "%HOMEDRIVE%%HOMEPATH%\\gst\\gstreamer\\1.0\\x86_64\\lib;%LIB%",
}
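# Check out only what the Windows build needs: skip the bulky WPT test data,
# but keep the web-platform-tests tools that mach relies on.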
windows_sparse_checkout = [
"/*",
"!/tests/wpt/metadata",
"!/tests/wpt/mozilla",
"!/tests/wpt/webgl",
"!/tests/wpt/web-platform-tests",
"/tests/wpt/web-platform-tests/tools",
]
def linux_tidy_unit():
return linux_build_task("Linux x64: tidy + dev build + unit tests").with_script("""
./mach test-tidy --no-progress --all
./mach build --dev
./mach test-unit
./mach package --dev
./mach test-tidy --no-progress --self-test
./etc/memory_reports_over_time.py --test
./etc/taskcluster/mock.py
./etc/ci/lockfile_changed.sh
./etc/ci/check_no_panic.sh
""").create()
def with_rust_nightly():
return linux_build_task("Linux x64: with Rust Nightly").with_script("""
echo "nightly" > rust-toolchain
./mach build --dev
./mach test-unit
""").create()
def android_arm32():
return (
linux_build_task("Android ARMv7: build")
# file: NDK parses $(file $SHELL) to tell x64 host from x86
# wget: servo-media-gstreamer's build script
.with_script("""
apt-get install -y --no-install-recommends openjdk-8-jdk-headless file wget
./etc/ci/bootstrap-android-and-accept-licences.sh
./mach build --android --release
""")
.with_artifacts(
"/repo/target/armv7-linux-androideabi/release/servoapp.apk",
"/repo/target/armv7-linux-androideabi/release/servoview.aar",
)
.find_or_create("build.android_armv7_release." + CONFIG.git_sha)
)
def windows_dev():
return (
windows_build_task("Windows x64: dev build + unit tests")
.with_script(
# Not necessary as this would be done at the start of `build`,
# but this allows timing it separately.
"mach fetch",
"mach build --dev",
"mach test-unit",
"mach package --dev",
)
.with_artifacts("repo/target/debug/msi/Servo.exe",
"repo/target/debug/msi/Servo.zip")
.find_or_create("build.windows_x64_dev." + CONFIG.git_sha)
)
def windows_release():
return (
windows_build_task("Windows x64: release build")
.with_script("mach build --release",
"mach package --release")
.with_artifacts("repo/target/release/msi/Servo.exe",
"repo/target/release/msi/Servo.zip")
.find_or_create("build.windows_x64_release." + CONFIG.git_sha)
)
def linux_wpt():
release_build_task = linux_release_build()
total_chunks = 2
for i in range(total_chunks):
this_chunk = i + 1
wpt_chunk(release_build_task, total_chunks, this_chunk)
def linux_release_build():
return (
linux_build_task("Linux x64: release build")
.with_script("""
./mach build --release --with-debug-assertions -p servo
./etc/ci/lockfile_changed.sh
tar -czf /target.tar.gz \
target/release/servo \
target/release/build/osmesa-src-*/output \
target/release/build/osmesa-src-*/out/lib/gallium
""")
.with_artifacts("/target.tar.gz")
.find_or_create("build.linux_x64_release." + CONFIG.git_sha)
)
def wpt_chunk(release_build_task, total_chunks, this_chunk):
name = "Linux x64: WPT chunk %s / %s" % (this_chunk, total_chunks)
script = """
./mach test-wpt \
--release \
--processes 24 \
--total-chunks "$TOTAL_CHUNKS" \
--this-chunk "$THIS_CHUNK" \
--log-raw test-wpt.log \
--log-errorsummary wpt-errorsummary.log \
--always-succeed
./mach filter-intermittents\
wpt-errorsummary.log \
--log-intermittents intermittents.log \
--log-filteredsummary filtered-wpt-errorsummary.log \
--tracker-api default
"""
# FIXME: --reporter-api default
# IndexError: list index out of range
# File "/repo/python/servo/testing_commands.py", line 533, in filter_intermittents
# pull_request = int(last_merge.split(' ')[4][1:])
if this_chunk == 1:
name += " + extra"
script += """
./mach test-wpt-failure
./mach test-wpt --release --binary-arg=--multiprocess --processes 24 \
--log-raw test-wpt-mp.log \
--log-errorsummary wpt-mp-errorsummary.log \
eventsource
"""
return (
linux_run_task(name, release_build_task, script)
.with_env(TOTAL_CHUNKS=total_chunks, THIS_CHUNK=this_chunk)
.create()
)
def linux_run_task(name, build_task, script):
return (
linux_task(name)
.with_dockerfile(dockerfile_path("run"))
.with_early_script("""
./etc/taskcluster/curl-artifact.sh ${BUILD_TASK_ID} target.tar.gz | tar -xz
""")
.with_env(BUILD_TASK_ID=build_task)
.with_dependencies(build_task)
.with_script(script)
.with_index_and_artifacts_expire_in(log_artifacts_expire_in)
.with_artifacts(*[
"/repo/" + word
for word in script.split() if word.endswith(".log")
])
.with_max_run_time_minutes(60)
)
def daily_tasks_setup():
# ':' is not accepted in an index namespace:
# https://docs.taskcluster.net/docs/reference/core/taskcluster-index/references/api
now = SHARED.now.strftime("%Y-%m-%d_%H-%M-%S")
index_path = "%s.daily.%s" % (CONFIG.index_prefix, now)
# Index this task manually rather than with a route,
# so that it is indexed even if it fails.
SHARED.index_service.insertTask(index_path, {
"taskId": CONFIG.decision_task_id,
"rank": 0,
"data": {},
"expires": SHARED.from_now_json(log_artifacts_expire_in),
})
# Unlike when reacting to a GitHub event,
# the commit hash is not known until we clone the repository.
CONFIG.git_sha_is_current_head()
# On failure, notify a few people on IRC
# https://docs.taskcluster.net/docs/reference/core/taskcluster-notify/docs/usage
notify_route = "notify.irc-channel.#servo.on-failed"
CONFIG.routes_for_all_subtasks.append(notify_route)
CONFIG.scopes_for_all_subtasks.append("queue:route:" + notify_route)
CONFIG.task_name_template = "Servo daily: %s. On failure, ping: " + ping_on_daily_task_failure
def dockerfile_path(name):
return os.path.join(os.path.dirname(__file__), "docker", name + ".dockerfile")
def linux_task(name):
return DockerWorkerTask(name).with_worker_type("servo-docker-worker")
def windows_task(name):
return WindowsGenericWorkerTask(name).with_worker_type("servo-win2016")
def linux_build_task(name):
return (
linux_task(name)
# https://docs.taskcluster.net/docs/reference/workers/docker-worker/docs/caches
# FIXME: move to servo-* cache names
.with_scopes("docker-worker:cache:cargo-*")
.with_caches(**{
"cargo-registry-cache": "/root/.cargo/registry",
"cargo-git-cache": "/root/.cargo/git",
"cargo-rustup": "/root/.rustup",
"cargo-sccache": "/root/.cache/sccache",
})
.with_index_and_artifacts_expire_in(build_artifacts_expire_in)
.with_max_run_time_minutes(60)
.with_dockerfile(dockerfile_path("build"))
.with_env(**build_env, **linux_build_env)
.with_repo()
)
def windows_build_task(name):
return (
windows_task(name)
.with_max_run_time_minutes(60)
.with_env(**build_env, **windows_build_env)
.with_repo(sparse_checkout=windows_sparse_checkout)
.with_python2()
.with_rustup()
.with_repacked_msi(
url="https://gstreamer.freedesktop.org/data/pkg/windows/" +
"1.14.3/gstreamer-1.0-devel-x86_64-1.14.3.msi",
sha256="b13ea68c1365098c66871f0acab7fd3daa2f2795b5e893fcbb5cd7253f2c08fa",
path="gst",
)
.with_directory_mount(
"https://github.com/wixtoolset/wix3/releases/download/wix3111rtm/wix311-binaries.zip",
sha256="37f0a533b0978a454efb5dc3bd3598becf9660aaf4287e55bf68ca6b527d051d",
path="wix",
)
.with_path_from_homedir("wix")
)
CONFIG.task_name_template = "Servo: %s"
CONFIG.index_prefix = "project.servo.servo"
CONFIG.docker_images_expire_in = build_dependencies_artifacts_expire_in
CONFIG.repacked_msi_files_expire_in = build_dependencies_artifacts_expire_in
if __name__ == "__main__": # pragma: no cover
main(task_for=os.environ["TASK_FOR"])

etc/taskcluster/decisionlib.py

@ -13,236 +13,619 @@
Project-independent library for Taskcluster decision tasks
"""
import base64
import datetime
import hashlib
import json
import os
import re
import subprocess
import sys
import taskcluster
class DecisionTask:
# Public API
__all__ = [
"CONFIG", "SHARED", "Task", "DockerWorkerTask",
"GenericWorkerTask", "WindowsGenericWorkerTask",
]
class Config:
"""
Holds some project-specific configuration and provides higher-level functionality
on top of the `taskcluster` package a.k.a. `taskcluster-client.py`.
Global configuration, for users of the library to modify.
"""
def __init__(self):
self.task_name_template = "%s"
self.index_prefix = "garbage.servo-decisionlib"
self.scopes_for_all_subtasks = []
self.routes_for_all_subtasks = []
self.docker_images_expire_in = "1 month"
self.repacked_msi_files_expire_in = "1 month"
DOCKER_IMAGE_ARTIFACT_FILENAME = "image.tar.lz4"
# Set by docker-worker:
# https://docs.taskcluster.net/docs/reference/workers/docker-worker/docs/environment
self.decision_task_id = os.environ.get("TASK_ID")
# https://github.com/servo/taskcluster-bootstrap-docker-images#image-builder
DOCKER_IMAGE_BUILDER_IMAGE = "servobrowser/taskcluster-bootstrap:image-builder@sha256:" \
"0a7d012ce444d62ffb9e7f06f0c52fedc24b68c2060711b313263367f7272d9d"
# Set in the decision tasks payload, such as defined in .taskcluster.yml
self.task_owner = os.environ.get("TASK_OWNER")
self.task_source = os.environ.get("TASK_SOURCE")
self.git_url = os.environ.get("GIT_URL")
self.git_ref = os.environ.get("GIT_REF")
self.git_sha = os.environ.get("GIT_SHA")
def __init__(self, *, index_prefix="garbage.servo-decisionlib", task_name_template="%s",
worker_type="github-worker", docker_image_cache_expiry="1 year",
routes_for_all_subtasks=None, scopes_for_all_subtasks=None):
self.task_name_template = task_name_template
self.index_prefix = index_prefix
self.worker_type = worker_type
self.docker_image_cache_expiry = docker_image_cache_expiry
self.routes_for_all_subtasks = routes_for_all_subtasks or []
self.scopes_for_all_subtasks = scopes_for_all_subtasks or []
def git_sha_is_current_head(self):
output = subprocess.check_output(["git", "rev-parse", "HEAD"])
self.git_sha = output.decode("utf8").strip()
# https://docs.taskcluster.net/docs/reference/workers/docker-worker/docs/features#feature-taskclusterproxy
class Shared:
"""
Global shared state.
"""
def __init__(self):
self.now = datetime.datetime.utcnow()
self.found_or_created_indexed_tasks = {}
# taskclusterProxy URLs:
# https://docs.taskcluster.net/docs/reference/workers/docker-worker/docs/features
self.queue_service = taskcluster.Queue(options={"baseUrl": "http://taskcluster/queue/v1/"})
self.index_service = taskcluster.Index(options={"baseUrl": "http://taskcluster/index/v1/"})
self.now = datetime.datetime.utcnow()
self.found_or_created_indices = {}
def from_now_json(self, offset):
"""
Same as `taskcluster.fromNowJSON`, but uses the creation time of `self` for now.
"""
return taskcluster.stringDate(taskcluster.fromNow(offset, dateObj=self.now))
def find_or_create_task(self, *, index_bucket, index_key, index_expiry, artifacts, **kwargs):
"""
Find a task indexed in the given bucket (kind, category, …) and cache key,
or schedule a new one if there isn't one yet.
Returns the task ID.
"""
index_path = "%s.%s.%s" % (self.index_prefix, index_bucket, index_key)
CONFIG = Config()
SHARED = Shared()
task_id = self.found_or_created_indices.get(index_path)
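# Helper for the chainable `with_*` methods below: wrap a mutating operation
# `op` on attribute `attr` into a method that applies it and returns `self`,
# so calls can be chained fluently.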
def chaining(op, attr):
def method(self, *args, **kwargs):
op(self, attr, *args, **kwargs)
return self
return method
def append_to_attr(self, attr, *args): getattr(self, attr).extend(args)
def prepend_to_attr(self, attr, *args): getattr(self, attr)[0:0] = list(args)
def update_attr(self, attr, **kwargs): getattr(self, attr).update(kwargs)
class Task:
"""
A task definition, waiting to be created.
Typical use is to chain the `with_*` methods to set or extend this object's attributes,
then call the `create` or `find_or_create` method to schedule a task.
This is an abstract class that needs to be specialized for different worker implementations.
"""
def __init__(self, name):
self.name = name
self.description = ""
self.scheduler_id = "taskcluster-github"
self.provisioner_id = "aws-provisioner-v1"
self.worker_type = "github-worker"
self.deadline_in = "1 day"
self.expires_in = "1 year"
self.index_and_artifacts_expire_in = self.expires_in
self.dependencies = []
self.scopes = []
self.routes = []
self.extra = {}
# All `with_*` methods return `self`, so multiple method calls can be chained.
with_description = chaining(setattr, "description")
with_scheduler_id = chaining(setattr, "scheduler_id")
with_provisioner_id = chaining(setattr, "provisioner_id")
with_worker_type = chaining(setattr, "worker_type")
with_deadline_in = chaining(setattr, "deadline_in")
with_expires_in = chaining(setattr, "expires_in")
with_index_and_artifacts_expire_in = chaining(setattr, "index_and_artifacts_expire_in")
with_dependencies = chaining(append_to_attr, "dependencies")
with_scopes = chaining(append_to_attr, "scopes")
with_routes = chaining(append_to_attr, "routes")
with_extra = chaining(update_attr, "extra")
def build_worker_payload(self): # pragma: no cover
"""
Overridden by sub-classes to return a dictionary in a worker-specific format,
which is used as the `payload` property in a task definition request
passed to the Queue's `createTask` API.
<https://docs.taskcluster.net/docs/reference/platform/taskcluster-queue/references/api#createTask>
"""
raise NotImplementedError
def create(self):
"""
Call the Queue's `createTask` API to schedule a new task, and return its ID.
<https://docs.taskcluster.net/docs/reference/platform/taskcluster-queue/references/api#createTask>
"""
worker_payload = self.build_worker_payload()
assert CONFIG.decision_task_id
assert CONFIG.task_owner
assert CONFIG.task_source
queue_payload = {
"taskGroupId": CONFIG.decision_task_id,
"dependencies": [CONFIG.decision_task_id] + self.dependencies,
"schedulerId": self.scheduler_id,
"provisionerId": self.provisioner_id,
"workerType": self.worker_type,
"created": SHARED.from_now_json(""),
"deadline": SHARED.from_now_json(self.deadline_in),
"expires": SHARED.from_now_json(self.expires_in),
"metadata": {
"name": CONFIG.task_name_template % self.name,
"description": self.description,
"owner": CONFIG.task_owner,
"source": CONFIG.task_source,
},
"payload": worker_payload,
}
scopes = self.scopes + CONFIG.scopes_for_all_subtasks
routes = self.routes + CONFIG.routes_for_all_subtasks
if any(r.startswith("index.") for r in routes):
self.extra.setdefault("index", {})["expires"] = \
SHARED.from_now_json(self.index_and_artifacts_expire_in)
dict_update_if_truthy(
queue_payload,
scopes=scopes,
routes=routes,
extra=self.extra,
)
task_id = taskcluster.slugId().decode("utf8")
SHARED.queue_service.createTask(task_id, queue_payload)
print("Scheduled %s" % self.name)
return task_id
def find_or_create(self, index_path=None):
"""
Try to find a task in the Index and return its ID.
The index path used is `{CONFIG.index_prefix}.{index_path}`.
`index_path` defaults to `by-task-definition.{sha256}`
with a hash of the worker payload and worker type.
If no task is found in the index,
it is created with a route to add it to the index at that same path if it succeeds.
<https://docs.taskcluster.net/docs/reference/core/taskcluster-index/references/api#findTask>
"""
if not index_path:
worker_type = self.worker_type
index_by = json.dumps([worker_type, self.build_worker_payload()]).encode("utf-8")
index_path = "by-task-definition." + hashlib.sha256(index_by).hexdigest()
index_path = "%s.%s" % (CONFIG.index_prefix, index_path)
task_id = SHARED.found_or_created_indexed_tasks.get(index_path)
if task_id is not None:
return task_id
try:
result = self.index_service.findTask(index_path)
task_id = result["taskId"]
task_id = SHARED.index_service.findTask(index_path)["taskId"]
except taskcluster.TaskclusterRestFailure as e:
if e.status_code == 404:
task_id = self.create_task(
routes=[
"index." + index_path,
],
extra={
"index": {
"expires": self.from_now_json(self.docker_image_cache_expiry),
},
},
artifacts=[
(artifact, index_expiry)
for artifact in artifacts
],
**kwargs
)
else:
if e.status_code != 404: # pragma: no cover
raise
self.routes.append("index." + index_path)
task_id = self.create()
self.found_or_created_indices[index_path] = task_id
SHARED.found_or_created_indexed_tasks[index_path] = task_id
return task_id
def find_or_build_docker_image(self, dockerfile):
class GenericWorkerTask(Task):
"""
Find a task that built a Docker image based on this `dockerfile`,
or schedule a new image-building task if needed.
Task definition for a worker type that runs the `generic-worker` implementation.
Returns the task ID.
This is an abstract class that needs to be specialized for different operating systems.
<https://github.com/taskcluster/generic-worker>
"""
dockerfile_contents = expand_dockerfile(dockerfile)
digest = hashlib.sha256(dockerfile_contents).hexdigest()
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.max_run_time_minutes = 30
self.env = {}
self.mounts = []
self.artifacts = []
return self.find_or_create_task(
index_bucket="docker-image",
index_key=digest,
index_expiry=self.docker_image_cache_expiry,
with_max_run_time_minutes = chaining(setattr, "max_run_time_minutes")
with_mounts = chaining(append_to_attr, "mounts")
with_env = chaining(update_attr, "env")
task_name="Docker image: " + image_name(dockerfile),
script="""
echo "$DOCKERFILE" | docker build -t taskcluster-built -
docker save taskcluster-built | lz4 > /%s
""" % self.DOCKER_IMAGE_ARTIFACT_FILENAME,
env={
"DOCKERFILE": dockerfile_contents,
},
def build_command(self): # pragma: no cover
"""
Overridden by sub-classes to return the `command` property of the worker payload,
in the format appropriate for the operating system.
"""
raise NotImplementedError
def build_worker_payload(self):
"""
Return a `generic-worker` worker payload.
<https://docs.taskcluster.net/docs/reference/workers/generic-worker/docs/payload>
"""
worker_payload = {
"command": self.build_command(),
"maxRunTime": self.max_run_time_minutes * 60
}
return dict_update_if_truthy(
worker_payload,
env=self.env,
mounts=self.mounts,
artifacts=[
"/" + self.DOCKER_IMAGE_ARTIFACT_FILENAME,
{
"type": type_,
"path": path,
"name": "public/" + url_basename(path),
"expires": SHARED.from_now_json(self.index_and_artifacts_expire_in),
}
for type_, path in self.artifacts
],
max_run_time_minutes=20,
docker_image=self.DOCKER_IMAGE_BUILDER_IMAGE,
features={
"dind": True, # docker-in-docker
},
with_repo=False,
)
def create_task(self, *, task_name, script, max_run_time_minutes,
docker_image=None, dockerfile=None, # One of these is required
artifacts=None, dependencies=None, env=None, cache=None, scopes=None,
routes=None, extra=None, features=None,
with_repo=True):
def with_artifacts(self, *paths, type="file"):
"""
Schedule a new task. Only supports `docker-worker` for now.
Add each path in `paths` as a task artifact
that expires in `self.index_and_artifacts_expire_in`.
Returns the new task ID.
`type` can be `"file"` or `"directory"`.
One of `docker_image` or `dockerfile` (but not both) must be given.
If `dockerfile` is given, the corresponding Docker image is built as needed and cached.
`with_repo` indicates whether `script` should start in a clone of the git repository.
Paths are relative to the task's home directory.
"""
if docker_image and dockerfile:
raise TypeError("cannot use both `docker_image` or `dockerfile`")
if not docker_image and not dockerfile:
raise TypeError("need one of `docker_image` or `dockerfile`")
self.artifacts.extend((type, path) for path in paths)
return self
# https://docs.taskcluster.net/docs/reference/workers/docker-worker/docs/environment
decision_task_id = os.environ["TASK_ID"]
def _mount_content(self, url_or_artifact_name, task_id, sha256):
if task_id:
content = {"taskId": task_id, "artifact": url_or_artifact_name}
else:
content = {"url": url_or_artifact_name}
if sha256:
content["sha256"] = sha256
return content
dependencies = [decision_task_id] + (dependencies or [])
def with_file_mount(self, url_or_artifact_name, task_id=None, sha256=None, path=None):
"""
Make `generic-worker` download a file before the task starts
and make it available at `path` (which is relative to the task's home directory).
if dockerfile:
image_build_task = self.find_or_build_docker_image(dockerfile)
dependencies.append(image_build_task)
docker_image = {
"type": "task-image",
"taskId": image_build_task,
"path": "public/" + self.DOCKER_IMAGE_ARTIFACT_FILENAME,
If `sha256` is provided, `generic-worker` will hash the downloaded file
and check it against the provided signature.
If `task_id` is provided, this task will depend on that task
and `url_or_artifact_name` is the name of an artifact of that task.
"""
return self.with_mounts({
"file": path or url_basename(url_or_artifact_name),
"content": self._mount_content(url_or_artifact_name, task_id, sha256),
})
def with_directory_mount(self, url_or_artifact_name, task_id=None, sha256=None, path=None):
"""
Make `generic-worker` download an archive before the task starts,
and uncompress it at `path` (which is relative to the task's home directory).
`url_or_artifact_name` must end in one of `.rar`, `.tar.bz2`, `.tar.gz`, or `.zip`.
The archive must be in the corresponding format.
If `sha256` is provided, `generic-worker` will hash the downloaded archive
and check it against the provided signature.
If `task_id` is provided, this task will depend on that task
and `url_or_artifact_name` is the name of an artifact of that task.
"""
supported_formats = ["rar", "tar.bz2", "tar.gz", "zip"]
for fmt in supported_formats:
suffix = "." + fmt
if url_or_artifact_name.endswith(suffix):
return self.with_mounts({
"directory": path or url_basename(url_or_artifact_name[:-len(suffix)]),
"content": self._mount_content(url_or_artifact_name, task_id, sha256),
"format": fmt,
})
raise ValueError(
"%r does not appear to be in one of the supported formats: %r"
% (url_or_artifact_name, ", ".join(supported_formats))
) # pragma: no cover
class WindowsGenericWorkerTask(GenericWorkerTask):
"""
Task definition for a `generic-worker` task running on Windows.
Scripts are written as `.bat` files executed with `cmd.exe`.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.scripts = []
with_script = chaining(append_to_attr, "scripts")
with_early_script = chaining(prepend_to_attr, "scripts")
def build_command(self):
return [deindent(s) for s in self.scripts]
def with_path_from_homedir(self, *paths):
"""
Interpret each path in `paths` as relative to the task's home directory,
and add it to the `PATH` environment variable.
"""
for p in paths:
self.with_early_script("set PATH=%HOMEDRIVE%%HOMEPATH%\\{};%PATH%".format(p))
return self
def with_repo(self, sparse_checkout=None):
"""
Make a shallow clone of the git repository at the start of the task.
This uses `CONFIG.git_url`, `CONFIG.git_ref`, and `CONFIG.git_sha`,
and creates the clone in a `repo` directory in the task's home directory.
If `sparse_checkout` is given, it must be a list of path patterns
to be used in `.git/info/sparse-checkout`.
See <https://git-scm.com/docs/git-read-tree#_sparse_checkout>.
"""
git = """
git init repo
cd repo
"""
if sparse_checkout:
git += """
git config core.sparsecheckout true
echo %SPARSE_CHECKOUT_BASE64% > .git\\info\\sparse.b64
certutil -decode .git\\info\\sparse.b64 .git\\info\\sparse-checkout
type .git\\info\\sparse-checkout
"""
self.env["SPARSE_CHECKOUT_BASE64"] = base64.b64encode(
"\n".join(sparse_checkout).encode("utf-8"))
git += """
git fetch --depth 1 %GIT_URL% %GIT_REF%
git reset --hard %GIT_SHA%
"""
return self \
.with_git() \
.with_script(git) \
.with_env(**git_env())
def with_git(self):
"""
Make the task download `git-for-windows` and make it available for `git` commands.
This is implied by `with_repo`.
"""
return self \
.with_path_from_homedir("git\\cmd") \
.with_directory_mount(
"https://github.com/git-for-windows/git/releases/download/" +
"v2.19.0.windows.1/MinGit-2.19.0-64-bit.zip",
sha256="424d24b5fc185a9c5488d7872262464f2facab4f1d4693ea8008196f14a3c19b",
path="git",
)
def with_rustup(self):
"""
Download rustup.rs and make it available to task commands,
but does not download any default toolchain.
"""
return self \
.with_path_from_homedir(".cargo\\bin") \
.with_early_script(
"%HOMEDRIVE%%HOMEPATH%\\rustup-init.exe --default-toolchain none -y"
) \
.with_file_mount(
"https://static.rust-lang.org/rustup/archive/" +
"1.13.0/i686-pc-windows-gnu/rustup-init.exe",
sha256="43072fbe6b38ab38cd872fa51a33ebd781f83a2d5e83013857fab31fc06e4bf0",
)
def with_repacked_msi(self, url, sha256, path):
"""
Download an MSI file from `url`, extract the files in it with `lessmsi`,
and make them available in the directory at `path` (relative to the task's home directory).
`sha256` is required and the MSI file must have that hash.
The file extraction (and recompression in a ZIP file) is done in a separate task,
which is indexed based on `sha256` and cached for `CONFIG.repacked_msi_files_expire_in`.
<https://github.com/activescott/lessmsi>
"""
repack_task = (
WindowsGenericWorkerTask("MSI repack: " + url)
.with_worker_type(self.worker_type)
.with_max_run_time_minutes(20)
.with_file_mount(url, sha256=sha256, path="input.msi")
.with_directory_mount(
"https://github.com/activescott/lessmsi/releases/download/" +
"v1.6.1/lessmsi-v1.6.1.zip",
sha256="540b8801e08ec39ba26a100c855898f455410cecbae4991afae7bb2b4df026c7",
path="lessmsi"
)
.with_directory_mount(
"https://www.7-zip.org/a/7za920.zip",
sha256="2a3afe19c180f8373fa02ff00254d5394fec0349f5804e0ad2f6067854ff28ac",
path="7zip",
)
.with_path_from_homedir("lessmsi", "7zip")
.with_script("""
lessmsi x input.msi extracted\\
cd extracted\\SourceDir
7za a repacked.zip *
""")
.with_artifacts("extracted/SourceDir/repacked.zip")
.with_index_and_artifacts_expire_in(CONFIG.repacked_msi_files_expire_in)
.find_or_create("repacked-msi." + sha256)
)
return self \
.with_dependencies(repack_task) \
.with_directory_mount("public/repacked.zip", task_id=repack_task, path=path)
def with_python2(self):
"""
Make Python 2, pip, and virtualenv accessible to the task's commands.
For Python 3, use `with_directory_mount` and the "embeddable zip file" distribution
from python.org.
You may need to remove `python37._pth` from the ZIP in order to work around
<https://bugs.python.org/issue34841>.
"""
return self \
.with_repacked_msi(
"https://www.python.org/ftp/python/2.7.15/python-2.7.15.amd64.msi",
sha256="5e85f3c4c209de98480acbf2ba2e71a907fd5567a838ad4b6748c76deb286ad7",
path="python2"
) \
.with_early_script("""
python -m ensurepip
pip install virtualenv==16.0.0
""") \
.with_path_from_homedir("python2", "python2\\Scripts")
class DockerWorkerTask(Task):
"""
Task definition for a worker type that runs the `docker-worker` implementation.
Scripts are interpreted with `bash`.
<https://github.com/taskcluster/docker-worker>
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.docker_image = "ubuntu:bionic-20180821"
self.max_run_time_minutes = 30
self.scripts = []
self.env = {}
self.caches = {}
self.features = {}
self.artifacts = []
with_docker_image = chaining(setattr, "docker_image")
with_max_run_time_minutes = chaining(setattr, "max_run_time_minutes")
with_artifacts = chaining(append_to_attr, "artifacts")
with_script = chaining(append_to_attr, "scripts")
with_early_script = chaining(prepend_to_attr, "scripts")
with_caches = chaining(update_attr, "caches")
with_env = chaining(update_attr, "env")
def build_worker_payload(self):
"""
Return a `docker-worker` worker payload.
<https://docs.taskcluster.net/docs/reference/workers/docker-worker/docs/payload>
"""
worker_payload = {
"image": self.docker_image,
"maxRunTime": self.max_run_time_minutes * 60,
"command": [
"/bin/bash", "--login", "-x", "-e", "-c",
deindent("\n".join(self.scripts))
],
}
return dict_update_if_truthy(
worker_payload,
env=self.env,
cache=self.caches,
features=self.features,
artifacts={
"public/" + url_basename(path): {
"type": "file",
"path": path,
"expires": SHARED.from_now_json(self.index_and_artifacts_expire_in),
}
for path in self.artifacts
},
)
# Set in .taskcluster.yml
task_owner = os.environ["TASK_OWNER"]
task_source = os.environ["TASK_SOURCE"]
def with_features(self, *names):
"""
Enable the given `docker-worker` features.
env = env or {}
<https://docs.taskcluster.net/docs/reference/workers/docker-worker/docs/features>
"""
self.features.update({name: True for name in names})
return self
if with_repo:
# Set in .taskcluster.yml
for k in ["GIT_URL", "GIT_REF", "GIT_SHA"]:
env[k] = os.environ[k]
def with_repo(self):
"""
Make a shallow clone of the git repository at the start of the task.
This uses `CONFIG.git_url`, `CONFIG.git_ref`, and `CONFIG.git_sha`,
and creates the clone in a `/repo` directory
at the root of the Docker container's filesystem.
script = """
`git` and `ca-certificates` need to be installed in the Docker image.
"""
return self \
.with_env(**git_env()) \
.with_early_script("""
git init repo
cd repo
git fetch --depth 1 "$GIT_URL" "$GIT_REF"
git reset --hard "$GIT_SHA"
""" + script
""")
payload = {
"taskGroupId": decision_task_id,
"dependencies": dependencies or [],
"schedulerId": "taskcluster-github",
"provisionerId": "aws-provisioner-v1",
"workerType": self.worker_type,
"created": self.from_now_json(""),
"deadline": self.from_now_json("1 day"),
"metadata": {
"name": self.task_name_template % task_name,
"description": "",
"owner": task_owner,
"source": task_source,
},
"scopes": (scopes or []) + self.scopes_for_all_subtasks,
"routes": (routes or []) + self.routes_for_all_subtasks,
"extra": extra or {},
"payload": {
"cache": cache or {},
"maxRunTime": max_run_time_minutes * 60,
"image": docker_image,
"command": [
"/bin/bash",
"--login",
"-x",
"-e",
"-c",
deindent(script)
],
"env": env,
"artifacts": {
"public/" + os.path.basename(path): {
"type": "file",
"path": path,
"expires": self.from_now_json(expires),
}
for path, expires in artifacts or []
},
"features": features or {},
},
}
task_id = taskcluster.slugId().decode("utf8")
self.queue_service.createTask(task_id, payload)
print("Scheduled %s" % task_name)
return task_id
def image_name(dockerfile):
def with_dockerfile(self, dockerfile):
"""
Guess a short name based on the path `dockerfile`.
Build a Docker image based on the given `Dockerfile`, and use it for this task.
`dockerfile` is a path in the filesystem where this code is running.
Some non-standard syntax is supported, see `expand_dockerfile`.
The image is indexed based on a hash of the expanded `Dockerfile`,
and cached for `CONFIG.docker_images_expire_in`.
Images are built without any *context*.
<https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#understand-build-context>
"""
basename = os.path.basename(dockerfile)
suffix = ".dockerfile"
if basename == "Dockerfile":
return os.path.basename(os.path.dirname(os.path.abspath(dockerfile)))
elif basename.endswith(suffix):
return basename[:-len(suffix)]
else:
return basename
assert basename.endswith(suffix)
image_name = basename[:-len(suffix)]
dockerfile_contents = expand_dockerfile(dockerfile)
digest = hashlib.sha256(dockerfile_contents).hexdigest()
image_build_task = (
DockerWorkerTask("Docker image: " + image_name)
.with_worker_type(self.worker_type)
.with_max_run_time_minutes(30)
.with_index_and_artifacts_expire_in(CONFIG.docker_images_expire_in)
.with_features("dind")
.with_env(DOCKERFILE=dockerfile_contents)
.with_artifacts("/image.tar.lz4")
.with_script("""
echo "$DOCKERFILE" | docker build -t taskcluster-built -
docker save taskcluster-built | lz4 > /image.tar.lz4
""")
.with_docker_image(
# https://github.com/servo/taskcluster-bootstrap-docker-images#image-builder
"servobrowser/taskcluster-bootstrap:image-builder@sha256:" \
"0a7d012ce444d62ffb9e7f06f0c52fedc24b68c2060711b313263367f7272d9d"
)
.find_or_create("docker-image." + digest)
)
return self \
.with_dependencies(image_build_task) \
.with_docker_image({
"type": "task-image",
"path": "public/image.tar.lz4",
"taskId": image_build_task,
})
def expand_dockerfile(dockerfile):
@ -263,5 +646,26 @@ def expand_dockerfile(dockerfile):
return b"\n".join([expand_dockerfile(path), rest])
def git_env():
assert CONFIG.git_url
assert CONFIG.git_ref
assert CONFIG.git_sha
return {
"GIT_URL": CONFIG.git_url,
"GIT_REF": CONFIG.git_ref,
"GIT_SHA": CONFIG.git_sha,
}
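# Copy only the truthy values from `kwargs` into `d`, so empty lists and dicts
# are omitted from the task definition entirely.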
def dict_update_if_truthy(d, **kwargs):
for key, value in kwargs.items():
if value:
d[key] = value
return d
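# Reduce any run of indentation to a single space, so that multi-line script
# strings keep working however deeply they are indented in the Python source.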
def deindent(string):
return re.sub("\n +", " \n ", string).strip()
return re.sub("\n +", "\n ", string).strip()
def url_basename(url):
return url.rpartition("/")[-1]

etc/taskcluster/docker/base.dockerfile

@ -15,7 +15,7 @@ RUN \
ca-certificates \
#
# Running mach
python2.7 \
python \
virtualenv \
#
# Installing rustup and sccache (build dockerfile) or fetching build artifacts (run tasks)

etc/taskcluster/docker/build.dockerfile

@ -2,6 +2,9 @@
RUN \
apt-get install -qy --no-install-recommends \
#
# Testing decisionlib (see etc/taskcluster/mock.py)
python3-coverage \
#
# Multiple C/C++ dependencies built from source
g++ \

etc/taskcluster/mock.py

@ -1,4 +1,4 @@
#!/usr/bin/python3
#!/bin/bash
# Copyright 2018 The Servo Project Developers. See the COPYRIGHT
# file at the top-level directory of this distribution.
@ -9,6 +9,12 @@
# option. This file may not be copied, modified, or distributed
# except according to those terms.
''''set -e
python3 -m coverage run $0
python3 -m coverage report -m --fail-under 100
exit
'''
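# The quoting above makes this file both a bash script and a Python module:
# bash runs `set -e` and the coverage commands, then exits, while Python
# parses those same lines as a single string literal and ignores them.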
"""
Run the decision task with fake Taskcluster APIs, to catch Python errors before pushing.
"""
@ -29,17 +35,20 @@ class Index:
raise TaskclusterRestFailure
Queue = stringDate = fromNow = slugId = MagicMock()
stringDate = str
slugId = b"id".lower
Queue = fromNow = MagicMock()
sys.modules["taskcluster"] = sys.modules[__name__]
sys.dont_write_bytecode = True
code = open(os.path.join(os.path.dirname(__file__), "decision-task.py"), "rb").read()
for k in "TASK_ID TASK_OWNER TASK_SOURCE GIT_URL GIT_REF GIT_SHA".split():
os.environ[k] = k
os.environ.update(**{k: k for k in "TASK_ID TASK_OWNER TASK_SOURCE GIT_URL GIT_SHA".split()})
os.environ["GIT_REF"] = "refs/heads/auto"
import decision_task
print("Push:")
os.environ["TASK_FOR"] = "github-push"
exec(code)
print("\n# Push:")
decision_task.main("github-push", mock=True)
print("Daily:")
os.environ["TASK_FOR"] = "daily"
exec(code)
print("\n# Push with hot caches:")
decision_task.main("github-push", mock=True)
print("\n# Daily:")
decision_task.main("daily", mock=True)

etc/taskcluster/windows/.gitignore (vendored, new file)

@ -0,0 +1 @@
*.id_rsa

etc/taskcluster/windows/README.md (new file)

@ -0,0 +1,88 @@
# Windows AMIs for Servo on Taskcluster
Unlike Linux tasks on `docker-worker` where each task is executed in a container
based on a Docker image provided with the task,
Windows tasks on Taskcluster are typically run by `generic-worker`
where tasks are executed directly in the worker's environment.
So we may want to install some tools globally on the system, to make them available to tasks.
With the [AWS provisioner], this means building a custom AMI.
We need to boot an instance on a base Windows AMI,
install what we need (including `generic-worker` itself),
then take an image of that instance.
The [`worker_types`] directory in `generic-worker`'s repository
has some scripts that automate this,
in order to make it more reproducible than clicking around.
The trick is that a PowerShell script to run on boot can be provided
when starting a Windows instance on EC2, and of course AWS has an API.
[AWS provisioner]: https://docs.taskcluster.net/docs/reference/integrations/aws-provisioner/references/api
[`worker_types`]: https://github.com/taskcluster/generic-worker/blob/master/worker_types/
## Building and deploying a new image
* Install and configure the [AWS command-line tool].
* Make your changes to `first-boot.ps1` and/or `base-ami.txt`.
* Run `python3 build-ami.py`. Note that it can take many minutes to complete.
* Save the administrator password together with the image ID
in Servo's shared 1Password account, in the *Taskcluster Windows AMIs* note.
* In the [worker type definition], edit `ImageId` and `DeploymentId`.
Note that the new worker type definition will only apply to newly-provisioned workers.
`DeploymentId` can be any string. It can for example include the image ID.
Workers check it between tasks (if more than `checkForNewDeploymentEverySecs` seconds have elapsed since the last check).
If it has changed, they shut down in order to leave room for new workers with the new definition.
The [EC2 Resources] page has a red *Terminate All Instances* button,
but that will make any running task fail.
[AWS command-line tool]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
[worker type definition]: https://tools.taskcluster.net/aws-provisioner/servo-win2016/edit
[EC2 Resources]: https://tools.taskcluster.net/aws-provisioner/servo-win2016/resources
## FIXME: possible improvement
* Have a separate staging worker type to try new AMIs without affecting the production CI
* Automate cleaning up old, unused AMIs and their backing EBS snapshots (see the sketch after this list)
* Use multiple AWS regions
* Use the Taskcluster API to automate updating worker type definitions?
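For the AMI-cleanup item above, a rough, untested sketch of what that automation could look like, reusing the `ec2()` subprocess helper from `build-ami.py` (included below); the function name is hypothetical:

```python
# Hypothetical sketch: deregister an old AMI, then delete the EBS snapshots
# backing it, using the ec2() helper defined in build-ami.py.
def cleanup_ami(image_id):
    image = ec2("describe-images", "--image-ids", image_id)["Images"][0]
    ec2("deregister-image", "--image-id", image_id)
    for mapping in image.get("BlockDeviceMappings", []):
        snapshot_id = mapping.get("Ebs", {}).get("SnapshotId")
        if snapshot_id:
            ec2("delete-snapshot", "--snapshot-id", snapshot_id)
```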
## Picking a base AMI
Amazon provides an overwhelming number of different Windows images,
so it's hard to find what's relevant.
Their console might show a paginated view like this:
> ⇤ ← 1 to 50 of 13,914 AMIs → ⇥
Let's grep through this with the API:
```sh
aws ec2 describe-images --owners amazon --filters 'Name=platform,Values=windows' \
--query 'Images[*].[ImageId,Name,Description]' --output table > /tmp/images
< /tmp/images less -S
```
It turns out that these images are all based on Windows Server,
but their number is explained by the presence of many (all?) combinations of:
* Multiple OS versions
* Many available locales
* *Full* (a.k.a. *with Desktop Experience*), or *Core*
* *Base* with only the OS, or multiple flavors with tools like SQL Server pre-installed
If we make some choices and filter the list:
```sh
< /tmp/images grep 2016-English-Full-Base | less -S
```
… we get a much more manageable handful of images with names like
`Windows_Server-2016-English-Full-Base-2018.09.15` or other dates.
Let's set `base-ami.txt` to `Windows_Server-2016-English-Full-Base-*`,
and have `build-ami.py` pick the most recently-created AMI whose name matches that pattern.

etc/taskcluster/windows/base-ami.txt (new file)

@ -0,0 +1 @@
Windows_Server-2016-English-Full-Base-*


@ -0,0 +1,55 @@
# Use this script to get a build environment
# when booting a Windows EC2 instance outside of Taskcluster.
[Environment]::SetEnvironmentVariable("Path", $env:Path +
";C:\git\cmd;C:\python2;C:\python2\Scripts;C:\Users\Administrator\.cargo\bin",
[EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("Lib", $env:Lib +
";C:\gstreamer\1.0\x86_64\lib",
[EnvironmentVariableTarget]::Machine)
# use TLS 1.2 (see bug 1443595)
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
# For making http requests
$client = New-Object system.net.WebClient
$shell = new-object -com shell.application
# Optional ($client must be defined before this download)
$client.DownloadFile(
"http://download.tuxfamily.org/dvorak/windows/bepo.exe",
"C:\bepo.exe"
)
# Download a zip file and extract it
function Expand-ZIPFile($file, $destination, $url)
{
$client.DownloadFile($url, $file)
$zip = $shell.NameSpace($file)
foreach($item in $zip.items())
{
$shell.Namespace($destination).copyhere($item)
}
}
md C:\git
Expand-ZIPFile -File "C:\git.zip" -Destination "C:\git" -Url `
"https://github.com/git-for-windows/git/releases/download/v2.19.0.windows.1/MinGit-2.19.0-64-bit.zip"
$client.DownloadFile(
"https://static.rust-lang.org/rustup/archive/1.13.0/i686-pc-windows-gnu/rustup-init.exe",
"C:\rustup-init.exe"
)
Start-Process C:\rustup-init.exe -Wait -NoNewWindow -ArgumentList `
"--default-toolchain none -y"
md C:\python2
Expand-ZIPFile -File "C:\python2.zip" -Destination "C:\python2" -Url `
"https://queue.taskcluster.net/v1/task/RIuts6jOQtCSjMbuaOU6yw/runs/0/artifacts/public/repacked.zip"
Expand-ZIPFile -File "C:\gst.zip" -Destination "C:\" -Url `
"https://queue.taskcluster.net/v1/task/KAzPF1ZYSFmg2BQKLt0LwA/runs/0/artifacts/public/repacked.zip"

etc/taskcluster/windows/build-ami.py (new file)

@ -0,0 +1,116 @@
#!/usr/bin/python3
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import json
import datetime
import subprocess
REGION = "us-west-2"
WORKER_TYPE = "servo-win2016"
AWS_PROVISIONER_USER_ID = "692406183521"
def main():
base_ami_pattern = read_file("base-ami.txt").strip()
base_ami = most_recent_ami(base_ami_pattern)
print("Starting an instance with base image:", base_ami["ImageId"], base_ami["Name"])
key_name = "%s_%s" % (WORKER_TYPE, REGION)
key_filename = key_name + ".id_rsa"
ec2("delete-key-pair", "--key-name", key_name)
result = ec2("create-key-pair", "--key-name", key_name)
write_file(key_filename, result["KeyMaterial"].encode("utf-8"))
user_data = b"<powershell>\n%s\n</powershell>" % read_file("first-boot.ps1")
result = ec2(
"run-instances", "--image-id", base_ami["ImageId"],
"--key-name", key_name,
"--user-data", user_data,
"--instance-type", "c4.xlarge",
"--block-device-mappings",
"DeviceName=/dev/sda1,Ebs={VolumeSize=75,DeleteOnTermination=true,VolumeType=gp2}",
"--instance-initiated-shutdown-behavior", "stop"
)
assert len(result["Instances"]) == 1
instance_id = result["Instances"][0]["InstanceId"]
ec2("create-tags", "--resources", instance_id, "--tags",
"Key=Name,Value=TC %s base instance" % WORKER_TYPE)
print("Waiting for password data to be available…")
ec2_wait("password-data-available", "--instance-id", instance_id)
result = ec2("get-password-data", "--instance-id", instance_id,
"--priv-launch-key", here(key_filename))
print("Administrator password:", result["PasswordData"])
print("Waiting for the instance to finish executing first-boot.ps1 and shut down…")
ec2_wait("instance-stopped", "--instance-id", instance_id)
now = datetime.datetime.utcnow().strftime("%Y-%m-%d_%H.%M.%S")
image_id = ec2("create-image", "--instance-id", instance_id,
"--name", "TC %s %s" % (WORKER_TYPE, now))["ImageId"]
print("Started creating image with ID %s" % image_id)
ec2_wait("image-available", "--image-ids", image_id)
ec2("modify-image-attribute", "--image-id", image_id,
"--launch-permission", "Add=[{UserId=%s}]" % AWS_PROVISIONER_USER_ID)
print("Image available. Terminating the temporary instance…")
ec2("terminate-instances", "--instance-ids", instance_id)
def most_recent_ami(name_pattern):
result = ec2(
"describe-images", "--owners", "amazon",
"--filters", "Name=platform,Values=windows", b"Name=name,Values=" + name_pattern,
)
return max(result["Images"], key=lambda x: x["CreationDate"])
def ec2_wait(*args):
# https://docs.aws.amazon.com/cli/latest/reference/ec2/wait/password-data-available.html
# “It will poll every 15 seconds until a successful state has been reached.
# This will exit with a return code of 255 after 40 failed checks.”
while True:
try:
return ec2("wait", *args)
except subprocess.CalledProcessError as err:
if err.returncode != 255:
raise
def try_ec2(*args):
try:
return ec2(*args)
except subprocess.CalledProcessError:
return None
def ec2(*args):
args = ["aws", "ec2", "--region", REGION, "--output", "json"] + list(args)
output = subprocess.check_output(args)
if output:
return json.loads(output)
def read_file(filename):
with open(here(filename), "rb") as f:
return f.read()
def write_file(filename, contents):
with open(here(filename), "wb") as f:
f.write(contents)
def here(filename, base=os.path.dirname(__file__)):
return os.path.join(base, filename)
if __name__ == "__main__":
main()

etc/taskcluster/windows/first-boot.ps1 (new file)

@ -0,0 +1,81 @@
Start-Transcript -Path "C:\first_boot.txt"
Get-ChildItem Env: | Out-File "C:\install_env.txt"
# DisableIndexing: Disable indexing on all disk volumes (for performance)
Get-WmiObject Win32_Volume -Filter "IndexingEnabled=$true" | Set-WmiInstance -Arguments @{IndexingEnabled=$false}
# Disable Windows Defender
# https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-antivirus/windows-defender-antivirus-on-windows-server-2016#install-or-uninstall-windows-defender-av-on-windows-server-2016
Uninstall-WindowsFeature -Name Windows-Defender
# use TLS 1.2 (see bug 1443595)
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
# For making http requests
$client = New-Object system.net.WebClient
$shell = new-object -com shell.application
# Download a zip file and extract it
function Expand-ZIPFile($file, $destination, $url)
{
$client.DownloadFile($url, $file)
$zip = $shell.NameSpace($file)
foreach($item in $zip.items())
{
$shell.Namespace($destination).copyhere($item)
}
}
# Open up firewall for livelog (both PUT and GET interfaces)
New-NetFirewallRule -DisplayName "Allow livelog PUT requests" `
-Direction Inbound -LocalPort 60022 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow livelog GET requests" `
-Direction Inbound -LocalPort 60023 -Protocol TCP -Action Allow
# Install generic-worker and dependencies
md C:\generic-worker
$client.DownloadFile("https://github.com/taskcluster/generic-worker/releases/download" +
"/v10.11.3/generic-worker-windows-amd64.exe", "C:\generic-worker\generic-worker.exe")
$client.DownloadFile("https://github.com/taskcluster/livelog/releases/download" +
"/v1.1.0/livelog-windows-amd64.exe", "C:\generic-worker\livelog.exe")
Expand-ZIPFile -File "C:\nssm-2.24.zip" -Destination "C:\" `
-Url "http://www.nssm.cc/release/nssm-2.24.zip"
Start-Process C:\generic-worker\generic-worker.exe -ArgumentList `
"new-openpgp-keypair --file C:\generic-worker\generic-worker-gpg-signing-key.key" `
-Wait -NoNewWindow -PassThru `
-RedirectStandardOutput C:\generic-worker\generate-signing-key.log `
-RedirectStandardError C:\generic-worker\generate-signing-key.err
Start-Process C:\generic-worker\generic-worker.exe -ArgumentList (
"install service --nssm C:\nssm-2.24\win64\nssm.exe " +
"--config C:\generic-worker\generic-worker.config"
) -Wait -NoNewWindow -PassThru `
-RedirectStandardOutput C:\generic-worker\install.log `
-RedirectStandardError C:\generic-worker\install.err
# # For debugging, let us know the worker's IP address through:
# # ssh servo-master.servo.org tail -f /var/log/nginx/access.log | grep ping
# Start-Process C:\nssm-2.24\win64\nssm.exe -ArgumentList `
# "install", "servo-ping", "powershell", "-Command", @"
# (New-Object system.net.WebClient).DownloadData(
# 'http://servo-master.servo.org/ping/generic-worker')
# "@
# # This "service" isn't a long-running service: it runs once on boot and then terminates.
# Start-Process C:\nssm-2.24\win64\nssm.exe -ArgumentList `
# "set", "servo-ping", "AppExit", "Default", "Exit"
# Visual C++ Build Tools
# https://blogs.msdn.microsoft.com/vcblog/2016/11/16/introducing-the-visual-studio-build-tools/
$client.DownloadFile("https://aka.ms/vs/15/release/vs_buildtools.exe", "C:\vs_buildtools.exe")
Start-Process C:\vs_buildtools.exe -ArgumentList (`
"--passive --norestart --includeRecommended " +
"--add Microsoft.VisualStudio.Workload.VCTools " +
"--add Microsoft.VisualStudio.Component.VC.ATL " +
"--add Microsoft.VisualStudio.Component.VC.ATLMFC"
) -Wait
# Now shutdown, in preparation for creating an image
shutdown -s

mach.bat

@ -25,13 +25,13 @@ IF EXIST "%VS_VCVARS%" (
call "%VS_VCVARS%" x64
) ELSE (
ECHO 32-bit Windows is currently unsupported.
EXIT /B
EXIT /B 1
)
)
) ELSE (
ECHO Visual Studio 2015 or 2017 is not installed.
ECHO Download and install Visual Studio 2015 or 2017 from https://www.visualstudio.com/
EXIT /B
EXIT /B 1
)
popd

python/tidy/servo_tidy/tidy.py

@ -69,7 +69,7 @@ files = [
"./tests/wpt/mozilla/tests/css/pre_with_tab.html",
"./tests/wpt/mozilla/tests/mozilla/textarea_placeholder.html",
# Python 3 syntax causes "E901 SyntaxError" when flake8 runs in Python 2
"./etc/taskcluster/decision-task.py",
"./etc/taskcluster/decision_task.py",
"./etc/taskcluster/decisionlib.py",
]
# Directories that are ignored for the non-WPT tidy check.