
The JavaScript / Wasm runtime that powers Cloudflare Workers

Home Page: https://blog.cloudflare.com/workerd-open-source-workers-runtime/

License: Apache License 2.0



👷 workerd, Cloudflare's JavaScript/Wasm Runtime


workerd (pronounced: "worker-dee") is a JavaScript / Wasm server runtime based on the same code that powers Cloudflare Workers.

You might use it:

  • As an application server, to self-host applications designed for Cloudflare Workers.
  • As a development tool, to develop and test such code locally.
  • As a programmable HTTP proxy (forward or reverse), to efficiently intercept, modify, and route network requests.

Introduction

Design Principles

  • Server-first: Designed for servers, not CLIs nor GUIs.

  • Standard-based: Built-in APIs are based on web platform standards, such as fetch().

  • Nanoservices: Split your application into components that are decoupled and independently deployable, like microservices, but with the performance of a local function call. When one nanoservice calls another, the callee runs in the same thread and process.

  • Homogeneous deployment: Instead of deploying different microservices to different machines in your cluster, deploy all your nanoservices to every machine in the cluster, making load balancing much easier.

  • Capability bindings: workerd configuration uses capabilities instead of global namespaces to connect nanoservices to each other and external resources. The result is code that is more composable -- and immune to SSRF attacks.

  • Always backwards compatible: Updating workerd to a newer version will never break your JavaScript code. workerd's version number is simply a date, corresponding to the maximum "compatibility date" supported by that version. You can always configure your worker to a past date, and workerd will emulate the API as it existed on that date.
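
The nanoservice and capability-binding principles can be sketched in config form. The following is illustrative only; the service names, file names, and binding name are hypothetical, and the format itself is described under "Configuring workerd" below:

```capnp
using Workerd = import "/workerd/workerd.capnp";

# Sketch: two nanoservices in one process. "frontend" can reach "backend"
# only through its `backend` binding (a capability); there is no global
# namespace through which it could address arbitrary services or hosts.
const config :Workerd.Config = (
  services = [
    (name = "frontend", worker = (
      compatibilityDate = "2023-02-28",
      modules = [(name = "frontend.js", esModule = embed "frontend.js")],
      bindings = [(name = "backend", service = "backend")],
    )),
    (name = "backend", worker = (
      compatibilityDate = "2023-02-28",
      modules = [(name = "backend.js", esModule = embed "backend.js")],
    )),
  ],
);
```

A call like `env.backend.fetch(...)` in frontend.js then runs the backend worker in the same thread and process.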

Read the blog post to learn more about these principles.

WARNING: This is a beta. Work in progress

Although most of workerd's code has been used in Cloudflare Workers for years, the workerd configuration format and top-level server code are brand new. We don't yet have much experience running this in production, so there will be rough edges, maybe even a few ridiculous bugs. Deploy to production at your own risk (but please tell us what goes wrong!).

The config format may change in backwards-incompatible ways before workerd leaves beta, but should remain stable after that.

A few caveats:

  • General error logging is awkward. Traditionally we have separated error logs into "application errors" (e.g. a Worker threw an exception from JavaScript) and "internal errors" (bugs in the implementation which the Workers team should address). We then sent these errors to completely different places. In the workerd world, the server admin wants to see both of these, so logging has become entirely different and, at the moment, is a bit ugly. For now, it may help to run workerd with the --verbose flag, which causes application errors to be written to standard error in the same way that internal errors are (but may also produce more noise). We'll be working on making this better out-of-the-box.
  • Binary packages are only available via npm, not as distro packages. This works well for testing with Miniflare, but is awkward for a production server that doesn't actually use Node at all.
  • Multi-threading is not implemented. workerd runs in a single-threaded event loop. For now, to utilize multiple cores, we suggest running multiple instances of workerd and balancing load across them. We will likely add some built-in functionality for this in the near future.
  • Performance tuning has not been done yet, and there is low-hanging fruit here. workerd performs decently as-is, but not spectacularly. Experiments suggest we can roughly double performance on a "hello world" load test with some tuning of compiler optimization flags and memory allocators.
  • Durable Objects currently always run on the same machine that requested them, using local disk storage. This is sufficient for testing and small services that fit on a single machine. In scalable production, though, you would presumably want Durable Objects to be distributed across many machines, always choosing the same machine for the same object.
  • Parameterized workers are not implemented yet. This is a new feature specified in the config schema, which doesn't have any precedent on Cloudflare.
  • Tests for most APIs are conspicuously missing. This is because the testing harness we have used for the past five years is deeply tied to the internal version of the codebase. Ideally, we need to translate those tests into the new workerd test format and move them to this repo; this is an ongoing effort. For the time being, we will be counting on the internal tests to catch bugs. We understand this is not ideal for external contributors trying to test their changes.
  • Documentation is growing quickly but is definitely still a work in progress.

WARNING: workerd is not a hardened sandbox

workerd tries to isolate each Worker so that it can only access the resources it is configured to access. However, workerd on its own does not contain suitable defense-in-depth against the possibility of implementation bugs. When using workerd to run possibly-malicious code, you must run it inside an appropriate secure sandbox, such as a virtual machine. The Cloudflare Workers hosting service in particular uses many additional layers of defense-in-depth.

With that said, if you discover a bug that allows malicious code to break out of workerd, please submit it to Cloudflare's bug bounty program for a reward.

Getting Started

Supported Platforms

In theory, workerd should work on any POSIX system supported by V8, as well as on Windows.

In practice, workerd is tested on:

  • Linux and macOS (x86-64 and arm64 architectures)
  • Windows (x86-64 architecture)

On other platforms, you may have to do some tinkering to make things work.

Building workerd

To build workerd, you need:

  • Bazel
    • If you use Bazelisk (recommended), it will automatically download and use the right version of Bazel for building workerd.
  • On Linux:
    • We use the clang/LLVM toolchain to build workerd and support version 15 and higher. Earlier versions of clang may still work, but are not officially supported.
    • Clang 15+ (e.g. package clang-15 on Debian Bookworm).
    • libc++ 15+ (e.g. packages libc++-15-dev and libc++abi-15-dev)
    • LLD 15+ (e.g. package lld-15).
    • python3, python3-distutils, and tcl8.6
  • On macOS:
    • Xcode 15 installation (available on macOS 13 and higher)
    • Homebrew installed tcl-tk package (provides Tcl 8.6)
  • On Windows:
    • Install App Installer from the Microsoft Store to get the winget package manager, then run install-deps.bat from an administrator prompt to install bazel, LLVM, and the other dependencies required to build workerd on Windows.
    • Add startup --output_user_root=C:/tmp to the .bazelrc file in your user directory.
    • When developing at the command-line, run bazel-env.bat in your shell first to select tools and Windows SDK versions before running bazel.

You may then build workerd at the command-line with:

bazel build //src/workerd/server:workerd

You can also build from within Visual Studio Code using the instructions in docs/vscode.md.

The compiled binary will be located at bazel-bin/src/workerd/server/workerd.

If you run a Bazel build before you've installed some dependencies (such as clang or libc++) and then install them, you must resync the locally cached toolchains or clean Bazel's cache; otherwise you might get strange errors:

bazel sync --configure

If that fails, you can try:

bazel clean --expunge

The cache will now be cleaned and you can try building again.

If you have a fairly recent clang package installed, you can build a more performant release version of workerd:

bazel build --config=thin-lto //src/workerd/server:workerd

Configuring workerd

workerd is configured using a config file written in Cap'n Proto text format.

A simple "Hello World!" config file might look like:

using Workerd = import "/workerd/workerd.capnp";

const config :Workerd.Config = (
  services = [
    (name = "main", worker = .mainWorker),
  ],

  sockets = [
    # Serve HTTP on port 8080.
    ( name = "http",
      address = "*:8080",
      http = (),
      service = "main"
    ),
  ]
);

const mainWorker :Workerd.Worker = (
  serviceWorkerScript = embed "hello.js",
  compatibilityDate = "2023-02-28",
  # Learn more about compatibility dates at:
  # https://developers.cloudflare.com/workers/platform/compatibility-dates/
);

Where hello.js contains:

addEventListener("fetch", event => {
  event.respondWith(new Response("Hello World"));
});
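
For comparison, the same worker can be written in ES-module syntax, configured via the `modules` field instead of `serviceWorkerScript` (a sketch, not taken verbatim from the workerd samples):

```javascript
// Sketch of the same worker in ES-module syntax. A real worker file would
// end with `export default worker;`; the export is left as a comment so the
// snippet also runs standalone in Node 18+, which provides Request/Response.
const worker = {
  async fetch(request, env, ctx) {
    return new Response("Hello World");
  },
};
// export default worker;
```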

Complete reference documentation is provided by the comments in workerd.capnp.

There is also a library of sample config files.

(TODO: Provide a more extended tutorial.)

Running workerd

To serve your config, do:

workerd serve my-config.capnp

For more details about command-line usage, use workerd --help.

Prebuilt binaries are distributed via npm. Run npx workerd ... to use these. If you're running a prebuilt binary, you'll need to make sure your system has the right dependencies installed:

  • On Linux:
    • glibc 2.31 or higher (already included on e.g. Ubuntu 20.04, Debian Bullseye)
  • On macOS:
    • macOS 11.5 or higher
    • The Xcode command line tools, which can be installed with xcode-select --install

Local Worker development with wrangler

You can use Wrangler (v3.0 or greater) to develop Cloudflare Workers locally, using workerd. Run:

wrangler dev

Serving in production

workerd is designed to be unopinionated about how it runs.

One good way to manage workerd in production is using systemd. Particularly useful is systemd's ability to open privileged sockets on workerd's behalf while running the service itself under an unprivileged user account. To help with this, workerd supports inheriting sockets from the parent process using the --socket-fd flag.

Here's an example system service file, assuming your config defines two sockets named http and https:

# /etc/systemd/system/workerd.service
[Unit]
Description=workerd runtime
After=local-fs.target remote-fs.target network-online.target
Requires=local-fs.target remote-fs.target workerd.socket
Wants=network-online.target

[Service]
Type=exec
ExecStart=/usr/bin/workerd serve /etc/workerd/config.capnp --socket-fd http=3 --socket-fd https=4
Sockets=workerd.socket

# If workerd crashes, restart it.
Restart=always

# Run under an unprivileged user account.
User=nobody
Group=nogroup

# Hardening measure: Do not allow workerd to run suid-root programs.
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target

And corresponding sockets file:

# /etc/systemd/system/workerd.socket
[Unit]
Description=sockets for workerd
PartOf=workerd.service

[Socket]
ListenStream=0.0.0.0:80
ListenStream=0.0.0.0:443

[Install]
WantedBy=sockets.target

(TODO: Fully explain how to get systemd to recognize these files and start the service.)

workerd's People

Contributors

a-robinson, bcaimano, dom96, edevil, elithrar, fhanau, frederik-baetens, garrettgu10, geelen, harrishancock, hoodmane, irvinebroque, jasnell, jclee, jp4a50, jqmmes, jspspike, justin-mp, kentonv, kflansburg, mellowyarker, mikea, mrbbot, obsidianminor, ohodson, penalosa, smerritt, vlovich, warfields, xortive


workerd's Issues

username/password silently dropped from durable object request urls

Given the following worker:

export default {
  fetch(req, env) {
    let url = new URL(req.url)
    url.username = 'kentonv'
    console.log(url.href) // ie, http://kentonv@localhost:8080/
    return env.do.get(env.do.newUniqueId()).fetch(url, req)
  }
}

export class DO {
  fetch(req) {
    return new Response(req.url)
  }
}

The URL returned by the durable object's response does not contain the username set in the worker, despite serializing correctly (ie, shows up in url.href) inside the worker itself.

It seems as though username and password are being silently dropped, despite being valid URL components. Ideally URLs would be sent as-is, but if for some unfortunate reason workerd strips them intentionally, fetch should fail in the worker itself before reaching the DO (same as protocol errors a la fetch_refuses_unknown_protocols).
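
For reference, the WHATWG URL class itself does serialize credentials, which is easy to confirm in any spec-compliant runtime (Node 18+ works):

```javascript
// username/password are valid URL components and survive serialization.
const url = new URL("http://localhost:8080/");
url.username = "kentonv";
console.log(url.href); // "http://kentonv@localhost:8080/"
```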

๐Ÿ› BUG: Warning about "redundant `response.clone()`" even if you call `request.clone()`

What version of Wrangler are you using?

2.0.23

What operating system are you using?

Windows

Describe the Bug

If you have redundant calls to response.clone() on non-local wrangler dev, you normally get this message:

Your worker called response.clone(), but did not read the body of both clones. This is wasteful, as it forces the system to buffer the entire response body in memory, rather than streaming it through. This may cause your worker to be unexpectedly terminated for going over the memory limit. If you only meant to copy the response headers and metadata (e.g. in order to be able to modify them), use new Response(response.body, response) instead.

However, you get this same message if you have redundant calls to request.clone() as well.
Ideally when this happens, it should say:

- Your worker called response.clone(), but...
+ Your worker called request.clone(), but...

to be less confusing. Or at least, a more generalized message could be used:

Your worker called either response.clone() or request.clone(), but...

Since I couldn't find this string in the Wrangler source, and it doesn't occur when using Miniflare, I'd imagine it exists in another codebase.
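
As an aside, the header-copying pattern that the warning message recommends looks like this in practice (runnable in Node 18+ as well as in Workers; the header names are illustrative):

```javascript
// Copy status and headers from an existing Response without clone(),
// so the body stream is passed through rather than buffered twice.
const original = new Response("payload", {
  status: 201,
  headers: { "x-example": "1" },
});
const copy = new Response(original.body, original);
copy.headers.set("x-modified", "true");
```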

The Binary in npm package is not executable

I think this can be solved by following the npm docs: "To use this, supply a bin field in your package.json which is a map of command name to local file name. On install, npm will symlink that file into prefix/bin for global installs, or ./node_modules/.bin/ for local installs."

For example, myapp could have this:

{ "bin" : { "myapp" : "./cli.js" } }

So, when you install myapp, it'll create a symlink from the cli.js script to /usr/local/bin/myapp.

Support Offscreen Canvas, FileReader, Image, createImageBitmap APIs

Please support OffscreenCanvas, FileReader, Image, and createImageBitmap, just like a regular web worker in the browser would.

It would allow resizing images and/or playing with TensorFlow on the server, which is a secure environment compared to the browser.
(The browser is not secure because extensions act as proxies and can break/rewrite the CSP before serving the page to the user.)

This feature has been requested repeatedly since 2019 (3 years):

Of course, there is the option of using Cloudflare Images, but the use cases are limited (only resizing).
Playing with composable primitives (image analysis, pixel-by-pixel processing, etc.) is preferred.

[feature request] console output

I would like a simple way to have workerd push just the userland script's stdout to a file. Currently, with the --verbose flag, userland output is (a) sent to stderr, and (b) wrapped and escaped beyond repair.

workerd/io/worker.c++:1465: info: console.log(); message() = ["my escaped console log\n"]

The fact that the verbose logs are escaped and wrapped removes the ability to e.g. capture a universal library's test suite TAP output to evaluate.

Expected Performance/Throughput

I have been doing some benchmarking of workerd against other JS runtimes and I am not seeing very good results. Do you have any recommendations for config for peak performance for benchmarking, or an expectation of what kind of numbers we should see?

RPS using wrk for a simple hello-world benchmark is currently only one tenth of what I see with Deno or Bun.sh, and tail latencies are also very high.

Memory usage is also very high compared to Deno - ~220 MB for a simple hello world.

feature(proposal): Bindings for system environment variable

At the moment I have to put the values for environment variables into key bindings. I'd like a way to read system environment variables, similar to process.env in Node.js. One way to make it work could be explicitly binding to a system environment variable; another could be specifying it via the runtime/CLI.
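
For reference, this is how a worker reads such values from its bindings today; the binding name GREETING is hypothetical and would be declared in the config file, with the proposal being to populate it from a system environment variable instead:

```javascript
// Sketch: a worker reading a value from its environment bindings. In
// workerd today the value comes from the config; the proposal is to let
// it come from the system environment.
const worker = {
  async fetch(request, env) {
    return new Response(env.GREETING ?? "GREETING is not bound");
  },
};
```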

Tests fail to run on macOS arm64 with _lol_html_attribute_name_get symbol not found

A number of tests (eg, api:actor-state-test, api:api-rtti-test, api:basics-test, etc) currently fail on macOS (Apple M1 Pro). Example test.log output:

exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //src/workerd/api:actor-state-test
-----------------------------------------------------------------------------
dyld[19611]: symbol not found in flat namespace (_lol_html_attribute_name_get)

FAILED: Build did NOT complete successfully - workerd Ubuntu 22.04.01 x86-64

1664488957.661890805: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /sys/fs/cgroup
1664488957.661898383: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /sys/fs/pstore
1664488957.661905646: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /sys/fs/bpf
1664488957.661911495: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /sys/kernel/debug
1664488957.661916752: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /sys/kernel/tracing
1664488957.661921753: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /sys/kernel/config
1664488957.661926458: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /sys/fs/fuse/connections
1664488957.661932116: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /proc
1664488957.661936706: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /proc/sys/fs/binfmt_misc
1664488957.661943104: src/main/tools/linux-sandbox-pid1.cc:391: remount(nullptr, /proc/sys/fs/binfmt_misc, nullptr, 2101281, nullptr) failure (Operation not permitted) ignored
1664488957.661948677: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /proc/sys/fs/binfmt_misc
1664488957.661953427: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /snap/core20/1518
1664488957.661958087: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /snap/lxd/22923
1664488957.661962319: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /snap/snapd/16292
1664488957.661978338: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /boot/efi
1664488957.661982909: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /snap/snapd/17029
1664488957.661986858: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /snap/core20/1623
1664488957.661990698: src/main/tools/linux-sandbox-pid1.cc:371: remount ro: /snap/lxd/23541
1664488957.661995210: src/main/tools/linux-sandbox-pid1.cc:371: remount rw: /root/.cache/bazel/_bazel_root/e9082452e993cc03c3ef8055d948726b/sandbox/linux-sandbox/1/execroot/workerd
1664488957.662000220: src/main/tools/linux-sandbox-pid1.cc:371: remount rw: /root/.cache/bazel/_bazel_root/e9082452e993cc03c3ef8055d948726b/sandbox/linux-sandbox/1/execroot/workerd
1664488957.662011398: src/main/tools/linux-sandbox-pid1.cc:371: remount rw: /tmp
1664488957.662015858: src/main/tools/linux-sandbox-pid1.cc:371: remount rw: /dev/shm
1664488957.662060759: src/main/tools/linux-sandbox-pid1.cc:460: calling fork...
1664488957.662174897: src/main/tools/linux-sandbox-pid1.cc:490: child started with PID 2
bazel-out/k8-opt-exec-2B5CBBC6/bin/external/capnp-cpp/src/capnp/capnpc-c++: plugin failed: Killed
1664488959.589073935: src/main/tools/linux-sandbox-pid1.cc:507: wait returned pid=2, status=0x100
1664488959.589088742: src/main/tools/linux-sandbox-pid1.cc:525: child exited normally with code 1
1664488959.589825799: src/main/tools/linux-sandbox.cc:233: child exited normally with code 1
Target //src/workerd/server:workerd failed to build
INFO: Elapsed time: 36.602s, Critical Path: 2.03s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully

Support node-like module resolution in dev mode

Adding support for node-like module resolution in workerd's dev mode would make workerd play nicely with existing solutions.

Problem

Currently, when people use meta-frameworks such as qwik, remix, etc., they all expect to use the built-in dev server with all kinds of DX features, and most of these frameworks depend on Vite's dev server to provide those features.

However, Vite's dev server's SSR support is not compatible with workerd (or any other edge runtime that I'm aware of). Vite's dev server expects SSR to happen in the node runtime, so it loads the code "in-place" and performs SSR in node instead of the edge runtime. To make things faster, Vite resolves and transforms the requested modules on the fly as node imports them, so everything is processed lazily. I created an issue here (vitejs/vite#10770) proposing that the dev server be able to create a bundled script on request.

On the other side, even if Vite does provide a bundled script, some frameworks (e.g. qwik) may still not work with workerd, because in dev mode qwik may load component code with import. I created an issue here: QwikDev/qwik#1984.

Root cause

All these incompatibilities happen because these tools were designed without edge-runtime limitations in mind, so they optimize by assuming everything happens in the node runtime, with fs access and in the same process.

Solution

I'm wondering if it's possible, on the workerd side, to provide an interface for a special dev mode that is more compatible with node module resolution behavior, so we can hook workerd into existing tools and frameworks more easily. None of the issues above are trivial to fix; some parts of them even contradict the optimizations those tools promise.

If, for example, in node we could programmatically have workerd load a script with a provided import/require, then all of these issues could be fixed much more easily, and we wouldn't have to fix all existing and (probably most) new tools and frameworks by changing their design.

Support locales Chromium disables

I notice that workerd appears to use Chromium's locale data. This is unfortunate, since Chromium deliberately hobbles locale support to match the full translations of its own UI, removing support for a substantial number of languages. Node.js and Edge appear to fix this, presumably by using alternative ICU data files.

This prevents me from using either Cloudflare Workers or workerd for any task that requires sorting strings in my native language, and since the client is just as likely to be Chrome I can't rely on the browser knowing how to do it either. While that can be worked around, the workarounds almost always involve running a server somewhere that does know how to do collation.

Would it be possible to at least investigate including more complete locale data?
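
To illustrate what complete locale data enables, here is a collation difference that root-locale ordering cannot reproduce (Node ships with full ICU by default, so this runs there; a runtime with reduced locale data may fall back to root-locale ordering):

```javascript
// Swedish places "ä" after "z"; German sorts "ä" together with "a".
const sv = new Intl.Collator("sv").compare;
const de = new Intl.Collator("de").compare;
console.log(["ä", "z"].sort(sv)); // [ 'z', 'ä' ]
console.log(["ä", "z"].sort(de)); // [ 'ä', 'z' ]
```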

jaeger-test fails to compile on macOS arm64 with long conversion

$ bazel test //src/workerd/io:jaeger-test
# ...

src/workerd/io/jaeger-test.c++:223:40: error: no viable conversion from 'long' to 'kj::OneOf<bool, long long, double, kj::String>'
  spanData.tags.insert("int64_tag"_kj, 123l);
                                       ^~~~
bazel-out/darwin_arm64-fastbuild/bin/external/capnp-cpp/src/kj/_virtual_includes/kj/kj/one-of.h:366:3: note: candidate constructor not viable: no known conversion from 'long' to 'const kj::OneOf<bool, long long, double, kj::String> &' for 1st argument
  OneOf(const OneOf& other) { copyFrom(other); }
  ^
bazel-out/darwin_arm64-fastbuild/bin/external/capnp-cpp/src/kj/_virtual_includes/kj/kj/one-of.h:367:3: note: candidate constructor not viable: no known conversion from 'long' to 'kj::OneOf<bool, long long, double, kj::String> &' for 1st argument
  OneOf(OneOf& other) { copyFrom(other); }
  ^
bazel-out/darwin_arm64-fastbuild/bin/external/capnp-cpp/src/kj/_virtual_includes/kj/kj/one-of.h:368:3: note: candidate constructor not viable: no known conversion from 'long' to 'kj::OneOf<bool, long long, double, kj::String> &&' for 1st argument
  OneOf(OneOf&& other) { moveFrom(other); }
  ^
bazel-out/darwin_arm64-fastbuild/bin/external/capnp-cpp/src/kj/_virtual_includes/kj/kj/one-of.h:372:3: note: candidate template ignored: could not match 'OneOf<type-parameter-0-0...>' against 'long'
  OneOf(const OneOf<OtherVariants...>& other) { copyFromSubset(other); }
  ^
bazel-out/darwin_arm64-fastbuild/bin/external/capnp-cpp/src/kj/_virtual_includes/kj/kj/one-of.h:374:3: note: candidate template ignored: could not match 'OneOf<type-parameter-0-0...>' against 'long'
  OneOf(OneOf<OtherVariants...>& other) { copyFromSubset(other); }
  ^
bazel-out/darwin_arm64-fastbuild/bin/external/capnp-cpp/src/kj/_virtual_includes/kj/kj/one-of.h:376:3: note: candidate template ignored: could not match 'OneOf<type-parameter-0-0...>' against 'long'
  OneOf(OneOf<OtherVariants...>&& other) { moveFromSubset(other); }
  ^
bazel-out/darwin_arm64-fastbuild/bin/external/capnp-cpp/src/kj/_virtual_includes/kj/kj/one-of.h:380:3: note: candidate template ignored: substitution failure [with T = long]: no type named 'Success' in 'kj::OneOf<bool, long long, double, kj::String>::HasAll<0, long>'
  OneOf(T&& other): tag(typeIndex<Decay<T>>()) {
  ^
bazel-out/darwin_arm64-fastbuild/bin/external/capnp-cpp/src/kj/_virtual_includes/kj/kj/map.h:61:32: note: passing argument to parameter 'value' here
  Entry& insert(Key key, Value value);
                               ^
1 error generated.
Target //src/workerd/io:jaeger-test failed to build

Support SIMD

SIMD instructions (https://v8.dev/features/simd) are interesting for parallel processing tasks like

  • Machine Learning (Tensorflow)
  • Image manipulation
  • ...etc.

For example, WASM VIPS (https://github.com/kleisauke/wasm-vips), used for image manipulation, requires a platform that supports SIMD.

For V8-based engines, at least version 9.1.54 is required to match the final SIMD opcodes, this corresponds to Chrome 91, Node.js 16.4.0 and Deno 1.9.0.

For Spidermonkey-based engines, the JavaScript engine used in Mozilla Firefox and whose version numbers are aligned, at least version 89 is required.

JavaScriptCore-based (e.g. Safari, Bun) engines are currently not supported.

WASM VIPS cannot currently run on Cloudflare Workers for other reasons (the SharedArrayBuffer API).
See kleisauke/wasm-vips#2

That being said, supporting SIMD would bring Cloudflare Workers a step closer to the browser's web workers.
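
A runtime's SIMD support can be probed from JavaScript by validating a tiny module that contains a SIMD instruction. The module bytes below are adapted from the wasm-feature-detect project and should be treated as illustrative rather than authoritative:

```javascript
// Feature-detect Wasm SIMD: validate a minimal module that uses SIMD
// opcodes. Bytes adapted from wasm-feature-detect (assumption; verify
// against that project before relying on them).
const simdModule = new Uint8Array([
  0, 97, 115, 109, 1, 0, 0, 0, 1, 5, 1, 96, 0, 1, 123, 3,
  2, 1, 0, 10, 10, 1, 8, 0, 65, 0, 253, 15, 253, 98, 11,
]);
const hasSimd = WebAssembly.validate(simdModule);
console.log("SIMD supported:", hasSimd);
```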

Update `workers-types` `README` to make it clear only types can be imported

I'd like to directly import required types, and not utilize the types field in tsconfig.json.

During development it all seems to work fine, but whenever I run wrangler dev I get errors like this:
No matching export in "node_modules/@cloudflare/workers-types/2022-08-04/index.ts" for import "Response".

What's the correct way to import?

๐Ÿš€ Feature Request: Breakpoint debugging support in `workerd`

Hi,

Basically, I don't know how to debug a worker with VS Code. Clicking "Run/Debug" in VS Code invokes whatever is configured in .vscode/launch.json, and I have no idea what should be written there, since the dev environment is wrapped by wrangler.

Since most people use VS Code and probably have a similar problem, I assume this would be a nice addition and would improve the developer experience significantly.

I've also posted a related thread here: https://community.cloudflare.com/t/how-to-debug-use-breakpoints-with-vs-code/423775

Thanks for your consideration.

`network` shorthands don't allow connecting to a LAN address

Trying to connect to 192.168.1.105.

Setting (name = "internet", network = (allow = ["public", "private", "local", "network"])) doesn't work (connect() blocked by restrictPeers()).

But (name = "internet", network = (allow = ["0.0.0.0/0"])) does work.

Additionally, specifying multiple CIDRs also works (e.g. (name = "internet", network = (allow = ["192.168.1.104/32", "192.168.1.105/32", "192.168.1.106/32"])), so it doesn't seem to be a problem with unioning the list together.

Allow "fetch" to use IP address instead of a domain

Is there an option to allow using an IP instead of a full domain when using fetch()?

Edit: I found the option, but I cannot get it to work.


Config:

using Workerd = import "/workerd/workerd.capnp";

# A constant of type `Workerd.Config` will be recognized as the top-level configuration.
const config :Workerd.Config = (
  services = [
    # The site worker contains JavaScript logic to serve static files from a directory. The logic
    # includes things like setting the right content-type (based on file name), defaulting to
    # `index.html`, and so on.
    (name = "site-worker", worker = .siteWorker),
    
    # Underneath the site worker we have a second service which provides direct access to files on
    # disk. We only configure site-worker to be able to access this (via a binding, below), so it
    # won't be served publicly as-is. (Note that disk access is read-only by default, but there is
    # a `writable` option which enables PUT requests.)
    (name = "site-files", disk = "content-dir"),
	
    # Fetch
    # (name = "internal-network", external = ( address = "127.0.0.1:7379" ) ), # First try
    (name = "internal-network", network = ( allow = ["public", "private"], ) ), # Second try

    #KV
    (name = "kv", disk = ( path = "kv", writable = true, allowDotfiles = false ) )
  ],

  # We export it via HTTP on port 8080.
  sockets = [ ( name = "http", address = "*:8080", http = (), service = "site-worker" ) ],
);

# For legibility we define the Worker's config as a separate constant.
const siteWorker :Workerd.Worker = (
  # All Workers must declare a compatibility date, which ensures that if `workerd` is updated to
  # a newer version with breaking changes, it will emulate the API as it existed on this date, so
  # the Worker won't break.
  compatibilityDate = "2022-09-16",

  # This worker is modules-based.
  modules = [
    (name = "hello.js", esModule = embed "hello.js"),
  ],

  bindings = [
    # Give this worker permission to request files on disk, via the "site-files" service we
    # defined earlier.
    ( name = "files", service = "site-files"),

    # Fetch
    ( name ="internalNetwork", service = "internal-network" ),    

    # KV
    ( name = "KV", kvNamespace = ( name = "kv" ) ),

    # This worker supports some configuration options via a JSON binding. Here we set the option
    # so that we hide the `.html` extension from URLs. (See the code for all config options.)
    (name = "config", json = "{\"hideHtmlExtension\": true}")
  ],
);

This is how I call the fetch function in the code: env.internalNetwork.fetch("http://127.0.0.1:7379/endpoint")

Initially, I used the ExternalServer option. I then tried using the Network option. Neither worked.
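A hedged guess at a third configuration to try, assuming workerd's network service treats loopback as a distinct "local" address class separate from "private" (check workerd.capnp for the exact class names):

```
# Third try (sketch): additionally allow loopback addresses.
(name = "internal-network", network = ( allow = ["public", "private", "local"] ) ),
```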

The macOS build change appears to have broken refresh_compile_commands

Trying to run bazel run //:refresh_compile_commands now fails with...

INFO: Build option --features has changed, discarding analysis cache.
INFO: Analyzed target //:refresh_compile_commands (75 packages loaded, 438 targets configured).
INFO: Found 1 target...
Target //:refresh_compile_commands up-to-date:
  bazel-bin/refresh_compile_commands.py
  bazel-bin/refresh_compile_commands
INFO: Elapsed time: 0.499s, Critical Path: 0.00s
INFO: 5 processes: 5 internal.
INFO: Build completed successfully, 5 total actions
INFO: Build completed successfully, 5 total actions
>>> Automatically added //external workspace link:
    This link makes it easy for you--and for build tooling--to see the external dependencies you bring in. It also makes your source tree have the same directory structure as the build sandbox.
    It's a win/win: It's easier for you to browse the code you use, and it eliminates whole categories of edge cases for build tooling.
>>> Analyzing commands used in @//...
ERROR: /home/james/cloudflare/workerd/BUILD.bazel: no such target '//:macos_arm64': target 'macos_arm64' not declared in package '' defined by /home/james/cloudflare/workerd/BUILD.bazel
ERROR: /home/james/cloudflare/workerd/rust-deps/BUILD.bazel:40:14: errors encountered resolving select() keys for //rust-deps:crates_vendor
ERROR: Analysis of target '//rust-deps:crates_vendor' failed; build aborted: 
Bazel aquery failed. Command: ['bazel', 'aquery', "mnemonic('(Objc|Cpp)Compile',deps(@//...))", '--output=jsonproto', '--include_artifacts=false', '--ui_event_filters=-info', '--noshow_progress', '--features=-compiler_param_file']
>>> Failed extracting commands for @//...
    Continuing gracefully...

Likewise, running bazel build //... also fails:

james@DESKTOP-5KK9VIR:~/cloudflare/workerd$ bazel build //...
INFO: Build option --features has changed, discarding analysis cache.
ERROR: /home/james/cloudflare/workerd/BUILD.bazel: no such target '//:macos_arm64': target 'macos_arm64' not declared in package '' defined by /home/james/cloudflare/workerd/BUILD.bazel
INFO: Repository rust_linux_x86_64__x86_64-unknown-linux-gnu_tools instantiated at:
  /home/james/cloudflare/workerd/WORKSPACE:130:25: in <toplevel>
  /home/james/.cache/bazel/_bazel_james/728d89b94395895921db6cbedc832917/external/rules_rust/rust/repositories.bzl:163:14: in rust_register_toolchains
  /home/james/.cache/bazel/_bazel_james/728d89b94395895921db6cbedc832917/external/bazel_tools/tools/build_defs/repo/utils.bzl:233:18: in maybe
  /home/james/.cache/bazel/_bazel_james/728d89b94395895921db6cbedc832917/external/rules_rust/rust/repositories.bzl:586:61: in rust_repository_set
  /home/james/.cache/bazel/_bazel_james/728d89b94395895921db6cbedc832917/external/rules_rust/rust/repositories.bzl:403:36: in rust_toolchain_repository
Repository rule rust_toolchain_tools_repository defined at:
  /home/james/.cache/bazel/_bazel_james/728d89b94395895921db6cbedc832917/external/rules_rust/rust/repositories.bzl:241:50: in <toplevel>
INFO: Repository rules_proto instantiated at:
  /home/james/cloudflare/workerd/WORKSPACE:84:14: in <toplevel>
  /home/james/.cache/bazel/_bazel_james/728d89b94395895921db6cbedc832917/external/com_google_protobuf/protobuf_deps.bzl:55:21: in protobuf_deps
Repository rule http_archive defined at:
  /home/james/.cache/bazel/_bazel_james/728d89b94395895921db6cbedc832917/external/bazel_tools/tools/build_defs/repo/http.bzl:355:31: in <toplevel>
INFO: Repository 'rules_proto' used the following cache hits instead of downloading the corresponding file.
 * Hash 'a4382f78723af788f0bc19fd4c8411f44ffe0a72723670a34692ffad56ada3ac' for https://github.com/bazelbuild/rules_proto/archive/f7a30f6f80006b591fa7c437fe5a951eb10bcbcf.zip
If the definition of 'rules_proto' was updated, verify that the hashes were also updated.
ERROR: /home/james/cloudflare/workerd/rust-deps/BUILD.bazel:40:14: errors encountered resolving select() keys for //rust-deps:crates_vendor
ERROR: Analysis of target '//rust-deps:crates_vendor' failed; build aborted: 
INFO: Elapsed time: 0.267s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (2 packages loaded, 848 targets configured)
    currently loading: @com_google_protobuf// ... (2 packages)
    Fetching @com_googlesource_chromium_icu; Restarting.
    Fetching @cxxbridge_vendor__cxxbridge-cmd-1.0.75; fetching
    Fetching @v8; Cloning 35952837d5c420b727642b88e28651cb80c3f538 of https://chromium.googlesource.com/v8/v8.git
    Fetching @workerd-v8; Restarting.
    Fetching @com_cloudflare_lol_html; fetching
    Fetching @zlib; Restarting.
    Fetching ...1.0.75; Extracting /home/james/.cache/bazel/_bazel_james/728d89b94395895921db6cbedc832917/external/cxxbridge_vendor__cxxbridge-cmd-1.0.75/temp5227076737996925508/download.tar.gz
    Fetching ..._proto; Extracting /home/james/.cache/bazel/_bazel_james/728d89b94395895921db6cbedc832917/external/rules_proto/temp5294362049385986586/f7a30f6f80006b591fa7c437fe5a951eb10bcbcf.zip ... (9 fetches)
james@DESKTOP-5KK9VIR:~/cloudflare/workerd$ 

Running bazel build //src/workerd/server:workerd works as expected.

/cc @mrbbot @dom96

Support class exports with constructors for stateless workers

Durable Objects let me do this

export class Worker {
  constructor(state, env) {
    // hell yea constructors
    this.state = state
    this.env = env
  }

  async fetch(req) {
    return new Response(`Hello world`)
  }
}

and that's cool. It's especially helpful with SubtleCrypto since a lot of the API uses promises for some reason. I can then call SubtleCrypto APIs once on Worker construction to make keys and store them in the class using a call to state.blockConcurrencyWhile.

this.state.blockConcurrencyWhile((async () => {
  this.key = await crypto.subtle.importKey(
    'raw',
    this.env.keyData,
    { name: 'HMAC', hash: 'SHA-256' },
    false,
    ['verify']
  );
})());

But what if stateless workers could do this?

export default class {
  constructor(ctx, env) {
    // hell yea constructors
    this.ctx = ctx
    this.env = env
    this.ctx.blockConcurrencyWhile((async () => {
      this.key = await crypto.subtle.importKey(
        'raw',
        this.env.keyData,
        { name: 'HMAC', hash: 'SHA-256' },
        false,
        ['verify']
      );
    })());
  }
  
  async fetch(req) {
    return new Response(`Hello world`)
  }
}

This way users can do things like SubtleCrypto key imports once instead of on every request, in a way that makes more obvious sense (the fact that the env object is reused and that modifications to it persist across requests is neither obvious nor pure).

Invalid RsaKeyAlgorithm publicExponent returned

crypto.subtle.generateKey({
  name: 'RSA-PSS',
  hash: { name: 'SHA-256' },
  modulusLength: 2048,
  publicExponent: new Uint8Array([0x01, 0x00, 0x01]),
}, false, ['sign', 'verify'])

An RSA-PSS, RSASSA-PKCS1-v1_5, or RSA-OAEP crypto key, whether generated or imported, is supposed to have an algorithm.publicExponent property that is a Uint8Array. ref.

workerd (and most likely the Workers platform as well) returns an ArrayBuffer instead.

cc @jasnell

[feature request] script eval

I would like a simple way to have workerd evaluate a script input. The runtime globals would be present. No need to register an event listener.

Having this capability would make it so much easier to test universal js libraries for this runtime.

Run test suite in workerd

Hi there,

Is there a way to enable globalAsyncIO, globalTimers, and globalRandom for workerd in order to use the runtime for executing a test suite that requires async IO, timers, and random generation outside handlers?

I can find the relevant options in the Miniflare API but I'd prefer to run the tests with workerd directly.

Thanks in advance.

bug: devtools logs some errors even when they're caught

(Reported by Mozzy#9999 on discord)

Given this script

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  if (request.method === "POST") {
    // Issue(?) with "wrangler dev":
    // Expected: Send a POST request with an empty body or a body that has invalid JSON -> catch and return 400 with no errors reported
    // Reality: Send a POST request with an empty body or a body that has invalid JSON -> reports "X [ERROR] Uncaught SyntaxError: Unexpected end of JSON input" in the console, AND returns the 400
    // Post request is sent via Postman

    let invalidBody = false;
    const { item1, item2 } = await request.json().catch((e) => {
      console.log("xxx", e);
      invalidBody = true;
      return {};
    });

    if (invalidBody)
      return new Response(
        "invalidBody is true, meaning request.json() failed.",
        { headers: { "Content-Type": "text/plain" }, status: 400 }
      );

    return new Response(JSON.stringify({ item1, item2 }), {
      headers: { "Content-Type": "application/json" },
      status: 200,
    });
  }

  if (request.method === "GET")
    return new Response("I see your GET.", {
      headers: { "Content-Type": "text/plain" },
      status: 200,
    });
}

Run the above with wrangler dev.

Make another script

// post.js 

async function run() {
  console.log(
    await (await fetch("http://localhost:8787", { method: "POST" })).text()
  );
}

run();

Call it with node post.js.

You'll see the uncaught SyntaxError logged in the terminal/devtools, despite having the .catch(). The worker itself doesn't error, and returns the response as requested.

So it looks like our runtime is being a little overenthusiastic here and sending that logging message across the wire. It can be confusing to a dev, so we should fix this.

Invalid JWK import "use" check

The following imports should all succeed but currently don't in workerd and fail with

Error: Asymmetric "jwk" key import with usages requires a JSON Web Key with Public Key Use parameter "use" ("enc") equal to "sig".

These succeed in all other runtimes.

{
  const jwk = {
    kty: 'RSA',
    ext: false,
    use: 'enc',
    n: 'wbdxI55VaanZXPY29Lg5hdmv2XhvqAhoxUkanfzf2-5zVUxa6prHRrI4pP1AhoqJRlZfYtWWd5mmHRG2pAHIlh0ySJ9wi0BioZBl1XP2e-C-FyXJGcTy0HdKQWlrfhTm42EW7Vv04r4gfao6uxjLGwfpGrZLarohiWCPnkNrg71S2CuNZSQBIPGjXfkmIy2tl_VWgGnL22GplyXj5YlBLdxXp3XeStsqo571utNfoUTU8E4qdzJ3U1DItoVkPGsMwlmmnJiwA7sXRItBCivR4M5qnZtdw-7v4WuR4779ubDuJ5nalMv2S66-RPcnFAzWSKxtBDnFJJDGIUe7Tzizjg1nms0Xq_yPub_UOlWn0ec85FCft1hACpWG8schrOBeNqHBODFskYpUc2LC5JA2TaPF2dA67dg1TTsC_FupfQ2kNGcE1LgprxKHcVWYQb86B-HozjHZcqtauBzFNV5tbTuB-TpkcvJfNcFLlH3b8mb-H_ox35FjqBSAjLKyoeqfKTpVjvXhd09knwgJf6VKq6UC418_TOljMVfFTWXUxlnfhOOnzW6HSSzD1c9WrCuVzsUMv54szidQ9wf1cYWf3g5qFDxDQKis99gcDaiCAwM3yEBIzuNeeCa5dartHDb1xEB_HcHSeYbghbMjGfasvKn0aZRsnTyC0xhWBlsolZE',
    e: 'AQAB',
    d: 'n7fzJc3_WG59VEOBTkayzuSMM780OJQuZjN_KbH8lOZG25ZoA7T4Bxcc0xQn5oZE5uSCIwg91oCt0JvxPcpmqzaJZg1nirjcWZ-oBtVk7gCAWq-B3qhfF3izlbkosrzjHajIcY33HBhsy4_WerrXg4MDNE4HYojy68TcxT2LYQRxUOCf5TtJXvM8olexlSGtVnQnDRutxEUCwiewfmmrfveEogLx9EA-KMgAjTiISXxqIXQhWUQX1G7v_mV_Hr2YuImYcNcHkRvp9E7ook0876DhkO8v4UOZLwA1OlUX98mkoqwc58A_Y2lBYbVx1_s5lpPsEqbbH-nqIjh1fL0gdNfihLxnclWtW7pCztLnImZAyeCWAG7ZIfv-Rn9fLIv9jZ6r7r-MSH9sqbuziHN2grGjD_jfRluMHa0l84fFKl6bcqN1JWxPVhzNZo01yDF-1LiQnqUYSepPf6X3a2SOdkqBRiquE6EvLuSYIDpJq3jDIsgoL8Mo1LoomgiJxUwL_GWEOGu28gplyzm-9Q0U0nyhEf1uhSR8aJAQWAiFImWH5W_IQT9I7-yrindr_2fWQ_i1UgMsGzA7aOGzZfPljRy6z-tY_KuBG00-28S_aWvjyUc-Alp8AUyKjBZ-7CWH32fGWK48j1t-zomrwjL_mnhsPbGs0c9WsWgRzI-K8gE',
    p: '7_2v3OQZzlPFcHyYfLABQ3XP85Es4hCdwCkbDeltaUXgVy9l9etKghvM4hRkOvbb01kYVuLFmxIkCDtpi-zLCYAdXKrAK3PtSbtzld_XZ9nlsYa_QZWpXB_IrtFjVfdKUdMz94pHUhFGFj7nr6NNxfpiHSHWFE1zD_AC3mY46J961Y2LRnreVwAGNw53p07Db8yD_92pDa97vqcZOdgtybH9q6uma-RFNhO1AoiJhYZj69hjmMRXx-x56HO9cnXNbmzNSCFCKnQmn4GQLmRj9sfbZRqL94bbtE4_e0Zrpo8RNo8vxRLqQNwIy85fc6BRgBJomt8QdQvIgPgWCv5HoQ',
    q: 'zqOHk1P6WN_rHuM7ZF1cXH0x6RuOHq67WuHiSknqQeefGBA9PWs6ZyKQCO-O6mKXtcgE8_Q_hA2kMRcKOcvHil1hqMCNSXlflM7WPRPZu2qCDcqssd_uMbP-DqYthH_EzwL9KnYoH7JQFxxmcv5An8oXUtTwk4knKjkIYGRuUwfQTus0w1NfjFAyxOOiAQ37ussIcE6C6ZSsM3n41UlbJ7TCqewzVJaPJN5cxjySPZPD3Vp01a9YgAD6a3IIaKJdIxJS1ImnfPevSJQBE79-EXe2kSwVgOzvt-gsmM29QQ8veHy4uAqca5dZzMs7hkkHtw1z0jHV90epQJJlXXnH8Q',
    dp: '19oDkBh1AXelMIxQFm2zZTqUhAzCIr4xNIGEPNoDt1jK83_FJA-xnx5kA7-1erdHdms_Ef67HsONNv5A60JaR7w8LHnDiBGnjdaUmmuO8XAxQJ_ia5mxjxNjS6E2yD44USo2JmHvzeeNczq25elqbTPLhUpGo1IZuG72FZQ5gTjXoTXC2-xtCDEUZfaUNh4IeAipfLugbpe0JAFlFfrTDAMUFpC3iXjxqzbEanflwPvj6V9iDSgjj8SozSM0dLtxvu0LIeIQAeEgT_yXcrKGmpKdSO08kLBx8VUjkbv_3Pn20Gyu2YEuwpFlM_H1NikuxJNKFGmnAq9LcnwwT0jvoQ',
    dq: 'S6p59KrlmzGzaQYQM3o0XfHCGvfqHLYjCO557HYQf72O9kLMCfd_1VBEqeD-1jjwELKDjck8kOBl5UvohK1oDfSP1DleAy-cnmL29DqWmhgwM1ip0CCNmkmsmDSlqkUXDi6sAaZuntyukyflI-qSQ3C_BafPyFaKrt1fgdyEwYa08pESKwwWisy7KnmoUvaJ3SaHmohFS78TJ25cfc10wZ9hQNOrIChZlkiOdFCtxDqdmCqNacnhgE3bZQjGp3n83ODSz9zwJcSUvODlXBPc2AycH6Ci5yjbxt4Ppox_5pjm6xnQkiPgj01GpsUssMmBN7iHVsrE7N2iznBNCeOUIQ',
    qi: 'FZhClBMywVVjnuUud-05qd5CYU0dK79akAgy9oX6RX6I3IIIPckCciRrokxglZn-omAY5CnCe4KdrnjFOT5YUZE7G_Pg44XgCXaarLQf4hl80oPEf6-jJ5Iy6wPRx7G2e8qLxnh9cOdf-kRqgOS3F48Ucvw3ma5V6KGMwQqWFeV31XtZ8l5cVI-I3NzBS7qltpUVgz2Ju021eyc7IlqgzR98qKONl27DuEES0aK0WE97jnsyO27Yp88Wa2RiBrEocM89QZI1seJiGDizHRUP4UZxw9zsXww46wy0P6f9grnYp7t8LkyDDk8eoI4KX6SNMNVcyVS9IWjlq8EzqZEKIA',
  }

  await crypto.subtle.importKey(
    'jwk',
    jwk,
    {
      name: 'RSA-OAEP',
      hash: 'SHA-1',
    },
    false,
    ['decrypt', 'unwrapKey'],
  )
}

{
  const jwk = {
    kty: 'RSA',
    ext: false,
    use: 'enc',
    n: 'wbdxI55VaanZXPY29Lg5hdmv2XhvqAhoxUkanfzf2-5zVUxa6prHRrI4pP1AhoqJRlZfYtWWd5mmHRG2pAHIlh0ySJ9wi0BioZBl1XP2e-C-FyXJGcTy0HdKQWlrfhTm42EW7Vv04r4gfao6uxjLGwfpGrZLarohiWCPnkNrg71S2CuNZSQBIPGjXfkmIy2tl_VWgGnL22GplyXj5YlBLdxXp3XeStsqo571utNfoUTU8E4qdzJ3U1DItoVkPGsMwlmmnJiwA7sXRItBCivR4M5qnZtdw-7v4WuR4779ubDuJ5nalMv2S66-RPcnFAzWSKxtBDnFJJDGIUe7Tzizjg1nms0Xq_yPub_UOlWn0ec85FCft1hACpWG8schrOBeNqHBODFskYpUc2LC5JA2TaPF2dA67dg1TTsC_FupfQ2kNGcE1LgprxKHcVWYQb86B-HozjHZcqtauBzFNV5tbTuB-TpkcvJfNcFLlH3b8mb-H_ox35FjqBSAjLKyoeqfKTpVjvXhd09knwgJf6VKq6UC418_TOljMVfFTWXUxlnfhOOnzW6HSSzD1c9WrCuVzsUMv54szidQ9wf1cYWf3g5qFDxDQKis99gcDaiCAwM3yEBIzuNeeCa5dartHDb1xEB_HcHSeYbghbMjGfasvKn0aZRsnTyC0xhWBlsolZE',
    e: 'AQAB',
  }

  await crypto.subtle.importKey(
    'jwk',
    jwk,
    {
      name: 'RSA-OAEP',
      hash: 'SHA-1',
    },
    false,
    ['encrypt', 'wrapKey'],
  )
}

{
  const jwk = {
    kty: 'EC',
    ext: false,
    use: 'enc',
    crv: 'P-384',
    x: 'YU4rRUzdmVqmRtWOs2OpDE_T5fsNIodcG8G5FWPrTPMyxpzsSOGaQLpe2FpxBmu2',
    y: 'A8-yxCHxkfBz3hKZfI1jUYMjUhsEveZ9THuwFjH2sCNdtksRJU7D5-SkgaFL1ETP',
    d: 'iTx2pk7wW-GqJkHcEkFQb2EFyYcO7RugmaW3mRrQVAOUiPommT0IdnYK2xDlZh-j',
  }

  await crypto.subtle.importKey(
    'jwk',
    jwk,
    {
      name: 'ECDH',
      namedCurve: 'P-384',
    },
    false,
    ['deriveBits', 'deriveKey'],
  )
}

{
  const jwk = {
    kty: 'EC',
    ext: false,
    use: 'enc',
    crv: 'P-384',
    x: 'YU4rRUzdmVqmRtWOs2OpDE_T5fsNIodcG8G5FWPrTPMyxpzsSOGaQLpe2FpxBmu2',
    y: 'A8-yxCHxkfBz3hKZfI1jUYMjUhsEveZ9THuwFjH2sCNdtksRJU7D5-SkgaFL1ETP',
  }

  await crypto.subtle.importKey(
    'jwk',
    jwk,
    {
      name: 'ECDH',
      namedCurve: 'P-384',
    },
    false,
    [],
  )
}

Binary in npm package is not executable

Testing this on Linux, not sure if it affects other platforms.

The workerd binary in the workerd npm package is not marked executable, resulting in

/bin/sh: 1: /home/erisa/worker-links/node_modules/@cloudflare/workerd-linux-64/bin/workerd: Permission denied

With wrangler dev --experimental-local this manifests as

โŽ” Starting an experimental local server...
Running the ๐Ÿฆ„ Cloudflare Workers Runtime ๐Ÿฆ„ natively โšก๏ธ...
/bin/sh: 1: /home/erisa/worker-links/node_modules/@cloudflare/workerd-linux-64/bin/workerd: Permission denied
โœ˜ [ERROR] local worker: MiniflareCoreError [ERR_RUNTIME_FAILURE]: The Workers runtime failed to start. There is likely additional logging output above.

      at Miniflare.#waitForRuntime
  (/home/erisa/worker-links/node_modules/@miniflare/tre/dist/src/index.js:4770:13)
      at processTicksAndRejections (node:internal/process/task_queues:96:5)
      at async Miniflare.#init
  (/home/erisa/worker-links/node_modules/@miniflare/tre/dist/src/index.js:4680:9)
      at async startLocalWorker
  (/home/erisa/worker-links/node_modules/wrangler/wrangler-dist/cli.js:144649:9) {
    code: 'ERR_RUNTIME_FAILURE',
    cause: undefined
  }

Running chmod +x on the path in the error causes it to work as intended.

Worth noting that I am using Yarn:

$ yarn --version
1.22.19

Builds take too long -- can we use bazel cache?

A clean build takes a very long time, largely due to the need to build V8 and other dependencies. #8 introduces GitHub Actions for CI and those builds are now at 2 hours and counting. On my local 32-core EPYC I can build in 10 minutes.

Incremental builds are comparatively very fast (as long as you didn't modify V8).

We need to do something about this. Probably, the answer is "something something bazel cache something", but I don't know enough about bazel cache at the moment to say exactly how this should work.
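For reference, Bazel's remote caching is configured with a couple of flags; a `.bazelrc` sketch (the bucket URL is hypothetical):

```
# Read and write a shared remote cache.
build --remote_cache=https://storage.googleapis.com/workerd-bazel-cache
build --remote_upload_local_results=true

# For untrusted PR builds, make the cache read-only:
# build --noremote_upload_local_results
```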

How to support inspector?

What is the plan for inspector support? Just looking for some understanding on what you've been thinking here @kentonv.

A few somewhat random questions and notes:

Would inspector details be added to the config file or added as command line options? For instance, if embedded in the configuration, one could conceive of variations where there could be inspector options on the Worker service configuration, or a separate Inspector Service configuration that the Worker points to, or anything else. Or, will inspector details be passed on the command line?

Given that each worker has its own Inspector, and we can have multiple workers, and each worker can have ingress Socket configured, are we just going to accept inspector requests on the same ingress Socket we use for requests to the worker itself and rely on headers to differentiate, or are we going to have a separate Socket configuration that will listen for inspector requests?

How will the user enable inspector support for a particular workerd process? For example, using an alternative command such as workerd inspect config.capnp to explicitly say "serve this config with inspector enabled" ... or will we rely on the config file to determine if it is enabled?

Really just trying to get an idea of what you're thinking around this currently, as I'm starting to think about how to implement.

WebCrypto types need to be cleaned up

Opening this to track this work, but the fixes should live in the internal Cloudflare runtime code (either as code changes or overrides). The various operations are too generic. For example, we should export an AesGcmParams type with a fixed name of AES-GCM, whereas currently we have a catch-all SubtleCryptoEncryptAlgorithm that is the union of all possible values. Same for things like SubtleCryptoImportKeyAlgorithm and SubtleCryptoHashAlgorithm, which don't technically exist. As a nicety, we should hard-code the set of legal values for params that have only a fixed set of valid values.
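A sketch of the narrower shape this would take (field list per the WebCrypto spec's AesGcmParams dictionary, abbreviated):

```typescript
// A fixed literal `name` and a hard-coded set of legal tagLength values,
// instead of the current catch-all SubtleCryptoEncryptAlgorithm union.
interface AesGcmParams {
  name: 'AES-GCM';
  iv: ArrayBuffer | ArrayBufferView;
  additionalData?: ArrayBuffer | ArrayBufferView;
  tagLength?: 32 | 64 | 96 | 104 | 112 | 120 | 128;
}

// A misspelled name or an illegal tagLength now fails to type-check.
const params: AesGcmParams = { name: 'AES-GCM', iv: new Uint8Array(12), tagLength: 128 };
```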

CPU/Memory limits

Is there a way to configure CPU or memory limits for workers? I don't see anything related in workerd.capnp; in the code, I see the IsolateLimitEnforcer interface, but no non-null implementations.

Unable to distinguish tail message type from the websocket message payload

What version of Wrangler are you using?

0.0.0

What operating system are you using?

Mac

Describe the Bug

(sent here from the #do-alarms Discord channel by Matt @ cloudflare)

Please add a new field to every tail websocket wire protocol message to distinguish the various contexts now present in workers. Right now it is impossible to always tell the difference between eyeball requests, DO requests, cron requests, and now DO alarm requests [*].

Could be a type code number if necessary to keep the payload small.

Would be immediately useful, thanks! I don't use wrangler, but use the WS directly.

* strictly speaking cron requests can be identified by the payload properties, but the others are often ambiguous
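A sketch of what the addition might look like on the wire (field and value names are hypothetical):

```typescript
// A discriminator on every tail message, so consumers can tell eyeball, DO,
// cron, and DO alarm contexts apart without guessing from payload shape.
type TailContext = 'eyeball' | 'durableObject' | 'cron' | 'alarm';

interface TailMessage {
  context: TailContext; // or a small numeric code to keep the payload small
  // ...existing payload fields unchanged
}

const example: TailMessage = { context: 'alarm' };
```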

Requires `libc++-dev` for execution

libc++-dev is listed as a build dependency in the README, but prebuilt workerd from npm will not execute without it installed.

Running Ubuntu Jammy Jellyfish.

Support for extending or customising the runtime?

Is this planned/possible to support?

Some use cases I'm thinking of:

  • Changing the exports/event handler semantics
  • Adding new global JS APIs
  • Adding JS APIs that can use new types of bindings/services

[V8] getsectdatafromheader_64 is deprecated in macOS 13.0

Builds on macOS 13.0 will fail since getsectdatafromheader_64 is deprecated as of 13.0, replaced by getsectiondata(). This is in V8's codebase as opposed to workerd, so it may just be worth noting in the README (or not, since 13.0 is still in beta).

Culprit file

V8: external/v8/src/base/platform/platform-darwin.cc:56:22

Build output

external/v8/src/base/platform/platform-darwin.cc:56:22: error: 'getsectdatafromheader_64' is deprecated: first deprecated in macOS 13.0 [-Werror,-Wdeprecated-declarations]
    char* code_ptr = getsectdatafromheader_64(
                     ^~~~~~~~~~~~~~~~~~~~~~~~
                     use getsectiondata()
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/mach-o/getsect.h:130:14: note: 'getsectdatafromheader_64' has been explicitly marked deprecated here
extern char *getsectdatafromheader_64(
             ^
1 error generated.
Target //src/workerd/server:workerd failed to build

System information

System Version:	macOS 13.0 (22A5352e)
Kernel Version:	Darwin 22.1.0

RSASSA-PKCS1-v1_5 invalid key use "sign"

Hey,

We may have found a bug in the workerd runtime, per the Mozilla docs - https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/sign#rsassa-pkcs1-v1_5

SubtleCrypto sign supports RSASSA-PKCS1-v1_5 however doing this in the workerd runtime throws: "Attempt to import public RSASSA-PKCS1-v1_5 key with invalid usage "sign"."

Repro code:

/*
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCd78p5mdexgywgdd0MJEiJ0eHa
X0PLYodYsIl6pvk8KuHEllUM8FQpRh8S5DRzCyYWTIAQJiZOSuITKF93IGk1UwmC
FhG6ZFu+oT7GApe/kx6k3tp3ruWA7+FBkbC01rDNdviEUQ0QzONMMDWNFppGppMl
w392z1/RXYXxFfT11QIDAQAB
-----END PUBLIC KEY-----
*/
const PUBLIC_KEY = 'MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCd78p5mdexgywgdd0MJEiJ0eHaX0PLYodYsIl6pvk8KuHEllUM8FQpRh8S5DRzCyYWTIAQJiZOSuITKF93IGk1UwmCFhG6ZFu+oT7GApe/kx6k3tp3ruWA7+FBkbC01rDNdviEUQ0QzONMMDWNFppGppMlw392z1/RXYXxFfT11QIDAQAB';

export default {
  async fetch() {
    try {
      const key = await this.keyFromPublicKey(PUBLIC_KEY);
      const encrypted = await this.encryptRsa(key, 'Hello world');

      return new Response(encrypted);
    } catch(e) {
      return new Response(e.message + '\n' + e.stack, { status: 500 });
    }
  },

  base64ToArrayBuffer(base64) {
    const binaryString = atob(base64);
    const len = binaryString.length;
    const bytes = new Uint8Array(len);
    for (let i = 0; i < len; i++) {
        bytes[i] = binaryString.charCodeAt(i);
    }
    return bytes.buffer;
  },

  keyFromPublicKey(publicKey) {
    const key = this.base64ToArrayBuffer(publicKey);

    return crypto.subtle.importKey(
      'spki',
      key,
      {
        name: 'RSASSA-PKCS1-v1_5',
        length: 256,
        hash: 'SHA-256',
      },
      false,
      ['sign']
    );
  },

  async encryptRsa(key, str) {
    const encoder = new TextEncoder();
    const sig = await crypto.subtle.sign('RSASSA-PKCS1-v1_5', key, encoder.encode(str));

    return btoa(new Uint8Array(sig).reduce((data, byte) => data + String.fromCharCode(byte), ''));
  }
}

and it produces the same error with encrypt. It only seems to work with verify.

Same kinda thing in node:

import * as crypto from 'crypto';

const PUBLIC_KEY = `-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCd78p5mdexgywgdd0MJEiJ0eHa
X0PLYodYsIl6pvk8KuHEllUM8FQpRh8S5DRzCyYWTIAQJiZOSuITKF93IGk1UwmC
FhG6ZFu+oT7GApe/kx6k3tp3ruWA7+FBkbC01rDNdviEUQ0QzONMMDWNFppGppMl
w392z1/RXYXxFfT11QIDAQAB
-----END PUBLIC KEY-----`;

async function run() {
  const encryptedData = crypto.publicEncrypt(
    {
      key: PUBLIC_KEY,
      padding: crypto.constants.RSA_PKCS1_PADDING,
    },
    Buffer.from('Hello world')
  );

  console.log(encryptedData.toString('base64'));
}

run();

As far as I know, you should be able to sign/encrypt with this per the Mozilla docs.

workerd --version: workerd 2022-11-08

Compiled from 1f8a561

Better handling of SIGINT

Currently if I run workerd server config.capnp and ctrl-c to exit, the output to the console is a very unfriendly...

bazel-bin/src/workerd/server/workerd serve /home/james/cloudflare/woerd/samples/helloworld/config.capnp 
^C*** Received signal #2: Interrupt
stack: /lib/x86_64-linux-gnu/libc.so.6@11f46d bazel-bin/src/workerd/server/workerd@7347d23 bazel-bin/src/workerd/server/workerd@72849c0 bazel-bin/src/workerd/server/workerd@7285748 bazel-bin/src/workerd/server/workerd@23cb7ce bazel-bin/src/workerd/server/workerd@23ca7d9 bazel-bin/src/workerd/server/workerd@23ca368 bazel-bin/src/workerd/server/workerd@23ca33e bazel-bin/src/workerd/server/workerd@23ca273 bazel-bin/src/workerd/server/workerd@23ca243 bazel-bin/src/workerd/server/workerd@73af5ef bazel-bin/src/workerd/server/workerd@73abe1d bazel-bin/src/workerd/server/workerd@73b5f50 bazel-bin/src/workerd/server/workerd@73afcd1 bazel-bin/src/workerd/server/workerd@73aabe1 bazel-bin/src/workerd/server/workerd@73b5f50 bazel-bin/src/workerd/server/workerd@73afcd1 bazel-bin/src/workerd/server/workerd@73ad4eb bazel-bin/src/workerd/server/workerd@73ad488 bazel-bin/src/workerd/server/workerd@7363cef bazel-bin/src/workerd/server/workerd@73a7ecc bazel-bin/src/workerd/server/workerd@73a7cbd bazel-bin/src/workerd/server/workerd@23a32dd /lib/x86_64-linux-gnu/libc.so.6@24082 bazel-bin/src/workerd/server/workerd@23a302d
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here
    ??:0: returning here

In this case, it would be ideal if there wasn't a gnarly stack printed (or any stack)... Perhaps if the --verbose option is enabled, printing the stack would be fine?

SCRAM key signatures are derived incorrectly

I have been trying to connect from a cloudflare worker to postgres, using SCRAM (which is the new default since PG14).

For that, a modified version of https://github.com/cloudflare/worker-template-postgres was used, which showed that the worker can never connect to PG when the SCRAM auth method is used, while MD5 and passwordless methods worked just fine.

When a similar example was run from Deno, it worked for all auth methods, including SCRAM, against the same postgres.

I've narrowed down the case to the following snippet:

const text_encoder = new TextEncoder();

// interface KeySignatures {
//   client: Uint8Array;
//   server: Uint8Array;
//   stored: Uint8Array;
// }

// See https://github.com/denodrivers/postgres/blob/8a07131efa17f4a6bcab86fd81407f149de93449/connection/scram.ts#L93
// async function deriveKeySignatures(
//   password: string,
//   salt: Uint8Array,
//   iterations: number,
// ): Promise<KeySignatures> {
async function deriveKeySignatures(
  password,
  salt,
  iterations,
) {
  console.log(`Input: \n${JSON.stringify({
    password,
    salt: arrayToPrettyString(salt),
    iterations
  }, false, 2)}\n================\n`);

  const pbkdf2_password = await crypto.subtle.importKey(
    "raw",
    text_encoder.encode(password),
    "PBKDF2",
    false,
    ["deriveBits", "deriveKey"],
  );
  const key = await crypto.subtle.deriveKey(
    {
      hash: "SHA-256",
      iterations,
      name: "PBKDF2",
      salt,
    },
    pbkdf2_password,
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["sign"],
  );

  const client = new Uint8Array(
    await crypto.subtle.sign("HMAC", key, text_encoder.encode("Client Key")),
  );
  const server = new Uint8Array(
    await crypto.subtle.sign("HMAC", key, text_encoder.encode("Server Key")),
  );
  const stored = new Uint8Array(await crypto.subtle.digest("SHA-256", client));
  return { client, server, stored };
}

////////////////////////

// See https://www.rfc-editor.org/rfc/rfc7677
// See https://github.com/denodrivers/postgres/blob/8a07131efa17f4a6bcab86fd81407f149de93449/tests/auth_test.ts#L6
function decode(b64) {
  const binString = atob(b64);
  const size = binString.length;
  const bytes = new Uint8Array(size);
  for (let i = 0; i < size; i++) {
    bytes[i] = binString.charCodeAt(i);
  }
  return bytes;
}
const password = 'pencil'
const salt = decode('W22ZaJ0SNY7soEsUEjb6gQ==');
const iterations = 4096

function signaturesToString(signatures) {
  let { client, server, stored } = signatures;
  return JSON.stringify({
    client: arrayToPrettyString(client),
    server: arrayToPrettyString(server),
    stored: arrayToPrettyString(stored),
  }, false, 2);
}

function arrayToPrettyString(array) {
  return `[${Array.apply([], array).join(",")}]`
}

////////////////////////   DENO
(async () => {
  const signatures = await deriveKeySignatures(password, salt, iterations);
  console.log(`Output: \n${signaturesToString(signatures)}`);
})();

////////////////////////   workerd
addEventListener("fetch", async event => {
  const signatures = await deriveKeySignatures(password, salt, iterations);
  const signaturesString = signaturesToString(signatures);
  console.log(`Output: \n${signaturesString}`);
  event.respondWith(new Response(signaturesString));
});

The snippet reuses slightly modified code from postgres deno driver:
https://github.com/denodrivers/postgres/blob/8a07131efa17f4a6bcab86fd81407f149de93449/connection/scram.ts#L93
and a test for that: https://github.com/denodrivers/postgres/blob/8a07131efa17f4a6bcab86fd81407f149de93449/tests/auth_test.ts#L6
based on https://www.rfc-editor.org/rfc/rfc7677
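An independent derivation of the same three signatures with Node's built-in crypto (same password, salt, and iteration count as the snippet) may help pin down which runtime is off:

```javascript
// Derive the SCRAM-SHA-256 key signatures per RFC 5802/7677 using node:crypto
// instead of WebCrypto, for comparison with the outputs below.
import { pbkdf2Sync, createHmac, createHash } from 'node:crypto';

const salted = pbkdf2Sync(
  'pencil',
  Buffer.from('W22ZaJ0SNY7soEsUEjb6gQ==', 'base64'),
  4096,
  32,
  'sha256',
);
const client = createHmac('sha256', salted).update('Client Key').digest();
const server = createHmac('sha256', salted).update('Server Key').digest();
const stored = createHash('sha256').update(client).digest();

console.log({
  client: [...client].join(','),
  server: [...server].join(','),
  stored: [...stored].join(','),
});
```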


When launched from deno, it outputs the following:

~/work/workerd_configs master*โ€‹โ€‹ โฏ deno run ./crypto/crypto.ts
Input:
{
  "password": "pencil",
  "salt": "[91,109,153,104,157,18,53,142,236,160,75,20,18,54,250,129]",
  "iterations": 4096
}
================

Output:
{
  "client": "[166,15,201,35,214,126,134,68,169,45,22,185,110,218,94,244,101,107,12,114,92,72,67,116,190,37,83,85,118,153,110,139]",
  "server": "[193,243,203,193,193,58,157,53,161,76,9,144,238,217,118,41,234,34,88,99,229,102,164,49,74,185,159,63,0,229,217,213]",
  "stored": "[88,110,93,242,131,230,220,235,92,62,121,29,139,133,40,236,25,30,102,64,69,206,151,23,146,226,230,181,187,19,226,166]"
}

When deployed to a Cloudflare Worker, it outputs the following (sometimes twice):

Input: 
{
  "password": "pencil",
  "salt": "[91,109,153,104,157,18,53,142,236,160,75,20,18,54,250,129]",
  "iterations": 4096
}
================

worker.js:96 Output: 
{
  "client": "[39,109,21,199,34,123,195,118,110,149,158,193,245,83,213,158,108,10,144,29,179,78,203,90,42,67,86,166,43,73,194,174]",
  "server": "[191,207,32,144,249,160,146,168,90,192,19,74,137,113,122,251,66,75,42,5,24,220,131,66,45,238,213,39,31,56,245,224]",
  "stored": "[177,174,171,65,56,228,4,105,247,107,133,55,230,13,215,114,1,133,143,68,20,119,43,239,251,225,142,245,250,194,40,71]"
}

When run with workerd (built from f7bd5f4) locally, it outputs the following:

work/workerd_configs/crypto master* ❯ cat crypto.capnp
using Workerd = import "/workerd/workerd.capnp";

const config :Workerd.Config = (
  services = [
    (name = "main", worker = .mainWorker),
  ],

  sockets = [
    # Serve HTTP on port 8080.
    ( name = "http",
      address = "*:8080",
      http = (),
      service = "main"
    ),
  ]
);

const mainWorker :Workerd.Worker = (
  serviceWorkerScript = embed "crypto.ts",
  compatibilityDate = "2022-09-16",
);

work/workerd_configs/crypto master* ❯ ../../workerd/bazel-bin/src/workerd/server/workerd serve "crypto.capnp"
workerd/io/worker-entrypoint.c++:209: error: e = kj/async-io-unix.c++:1652: failed: connect() blocked by restrictPeers()
stack: 104c30cf4 104c32160 104c1d5b8 104c1da5c 1049fbb6c 104a0c764 104a2f6c0 104a37308 104a540ac 104c0f8cc 104a540ac 102b59fa0; sentryErrorContext = workerEntrypoint
workerd/server/server.c++:1834: error: Uncaught exception: kj/async-io-unix.c++:1652: failed: worker_do_not_log; Request failed due to internal error
stack: 104c30cf4 104c32160 104c1d5b8 104c1da5c 1049fbb6c 104a0c764 104a2f6c0 104a37308 104a540ac 104c0f8cc 104a540ac 102b59fa0 104a066cc 104a097f0
workerd/io/worker-entrypoint.c++:209: error: e = kj/async-io-unix.c++:1652: failed: connect() blocked by restrictPeers()
stack: 104c30cf4 104c32160 104c1d5b8 104c1da5c 1049fbb6c 104a0c764 104a2f6c0 104a37308 104a540ac 104c0f8cc 104a540ac 102b59fa0; sentryErrorContext = workerEntrypoint
workerd/server/server.c++:1834: error: Uncaught exception: kj/async-io-unix.c++:1652: failed: worker_do_not_log; Request failed due to internal error
stack: 104c30cf4 104c32160 104c1d5b8 104c1da5c 1049fbb6c 104a0c764 104a2f6c0 104a37308 104a540ac 104c0f8cc 104a540ac 102b59fa0 104a066cc 104a097f0

and returns an internal server error.
According to https://github.com/cloudflare/workerd/blame/f7bd5f40adb3909cefcc63b7b06a2d6c1baafbd5/README.md#L40, this is a bug:

"internal errors" (bugs in the implementation which the Workers team should address)

I would expect both the locally built workerd and the one running on Cloudflare to compute signatures identical to what the Deno script produces for the same input.

Switch to hard tabs for indentation

First of all, so excited for this release! I'm barely 1/10th done reading through the source 👀


Prior precedent:

Everyone has their own opinions about how much visual indentation they want in their code. Some prefer 2 spaces, some prefer 4, and then some users may be visually impaired and need higher font sizes with smaller tab widths, etc.

There's been a tonne of debate over the years about which is better, and it has always come down to opinion-based arguments, but the one major and consequential difference between the two is this:

Tabs allow individuals in the same codebase to select their indentation widths across every tool.

To affirm the accessibility issue, I'd recommend reading through https://www.reddit.com/r/javascript/comments/c8drjo/nobody_talks_about_the_real_reason_to_use_tabs/:

coworkers who unfortunately are highly visually impaired, and each has a different visual impairment:

  • one of them uses tab-width 1 because he uses such a gigantic font-size
  • the other uses tab-width 8 and a really wide monitor
  • these guys have serious problems using codebases with spaces, they have to convert, do their work, and then unconvert before committing

Other references:

In the JavaScript ecosystem, Prettier is likely to switch to useTabs by default with the next major version too, for this exact reason: prettier/prettier#7475

Other than the obvious code change, shipping an .editorconfig in this repo with some sensible defaults would probably be a good idea too, to combat one of the most common counter-arguments: how some tools render tabs by default. This way, tools like GitHub can render tabs at a 4-character width by default instead of the usual 8, while folks can still customise this themselves if they so choose. Something like this:

# http://editorconfig.org
root = true

[*]
indent_style = tab
tab_width = 4

I'd like to avoid any debate around personal preference here and focus on the accessibility issue of continuing to indent code with spaces.

latest workerd-linux-64 fails in github CI

See https://github.com/panva/oauth4webapi/actions/runs/3459753365/jobs/5775500941 for the npm clean-install result in GitHub CI.

npm ERR! code 1
npm ERR! path /home/runner/work/oauth4webapi/oauth4webapi/node_modules/workerd
npm ERR! command failed
npm ERR! command sh -c -- node install.js
npm ERR! node:internal/errors:863
npm ERR!   const err = new Error(message);
npm ERR!               ^
npm ERR! 
npm ERR! Error: Command failed: /home/runner/work/oauth4webapi/oauth4webapi/node_modules/workerd/bin/workerd --version
npm ERR! /home/runner/work/oauth4webapi/oauth4webapi/node_modules/workerd/bin/workerd: error while loading shared libraries: libunwind.so.1: cannot open shared object file: No such file or directory
npm ERR! 
npm ERR!     at checkExecSyncError (node:child_process:871:11)
npm ERR!     at Object.execFileSync (node:child_process:907:15)
npm ERR!     at validateBinaryVersion (/home/runner/work/oauth4webapi/oauth4webapi/node_modules/workerd/install.js:56:47)
npm ERR!     at /home/runner/work/oauth4webapi/oauth4webapi/node_modules/workerd/install.js:213:5 {
npm ERR!   status: 127,
npm ERR!   signal: null,
npm ERR!   output: [
npm ERR!     null,
npm ERR!     Buffer(0) [Uint8Array] [],
npm ERR!     Buffer(190) [Uint8Array] [
npm ERR!        47, 104, 111, 109, 101,  47, 114, 117, 110, 110, 101, 114,
npm ERR!        47, 119, 111, 114, 107,  47, 111,  97, 117, 116, 104,  52,
npm ERR!       119, 101,  98,  97, 112, 105,  47, 111,  97, 117, 116, 104,
npm ERR!        52, 119, 101,  98,  97, 112, 105,  47, 110, 111, 100, 101,
npm ERR!        95, 109, 111, 100, 117, 108, 101, 115,  47, 119, 111, 114,
npm ERR!       107, 101, 114, 100,  47,  98, 105, 110,  47, 119, 111, 114,
npm ERR!       107, 101, 114, 100,  58,  32, 101, 114, 114, 111, 114,  32,
npm ERR!       119, 104, 105, 108, 101,  32, 108, 111,  97, 100, 105, 110,
npm ERR!       103,  32, 115, 104,
npm ERR!       ... 90 more items
npm ERR!     ]
npm ERR!   ],
npm ERR!   pid: 1743,
npm ERR!   stdout: Buffer(0) [Uint8Array] [],
npm ERR!   stderr: Buffer(190) [Uint8Array] [
npm ERR!      47, 104, 111, 109, 101,  47, 114, 117, 110, 110, 101, 114,
npm ERR!      47, 119, 111, 114, 107,  47, 111,  97, 117, 116, 104,  52,
npm ERR!     119, 101,  98,  97, 112, 105,  47, 111,  97, 117, 116, 104,
npm ERR!      52, 119, 101,  98,  97, 112, 105,  47, 110, 111, 100, 101,
npm ERR!      95, 109, 111, 100, 117, 108, 101, 115,  47, 119, 111, 114,
npm ERR!     107, 101, 114, 100,  47,  98, 105, 110,  47, 119, 111, 114,
npm ERR!     107, 101, 114, 100,  58,  32, 101, 114, 114, 111, 114,  32,
npm ERR!     119, 104, 105, 108, 101,  32, 108, 111,  97, 100, 105, 110,
npm ERR!     103,  32, 115, 104,
npm ERR!     ... 90 more items
npm ERR!   ]
npm ERR! }
npm ERR! 
npm ERR! Node.js v18.12.0

1.20220926.3 did not have this issue; it also had no post-install script configured.
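The underlying failure is the dynamic loader not finding libunwind.so.1. As a quick diagnostic (a sketch, assuming a Linux host; the binary path is the one from the log above):

```shell
# List the shared libraries a binary fails to resolve; missing ones show
# up as "not found" lines in ldd output.
bin="node_modules/workerd/bin/workerd"   # path from the log above
if [ -e "$bin" ]; then
  ldd "$bin" | grep 'not found' || echo "all shared libraries resolved"
else
  echo "binary not present at $bin"
fi
```

On Debian/Ubuntu, libunwind.so.1 ships with LLVM's libunwind packages (e.g. libunwind-14); the exact package name varies by distro and is an assumption here, not something the report confirms.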

Unable to build `workerd` on Ubuntu 22.04.1

Expected outcome

workerd should build fine using Bazel.

Current outcome

When I try to compile workerd, it fails with the following error message:

jonas@jonas-asus:~/repos/clones/cloudflare/workerd$ bazel build -c opt //src/workerd/server:workerd
INFO: Analyzed target //src/workerd/server:workerd (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
ERROR: /home/jonas/.cache/bazel/_bazel_jonas/c86f39fb6ae3f61b12a0ae374033648e/external/capnp-cpp/src/kj/BUILD.bazel:5:11: Compiling src/kj/source-location.c++ failed: undeclared inclusion(s) in rule '@capnp-cpp//src/kj:kj':
this rule is missing dependency declarations for the following files included by 'src/kj/source-location.c++':
  '/usr/lib/llvm-14/include/c++/v1/initializer_list'
  '/usr/lib/llvm-14/include/c++/v1/__config'
  '/usr/lib/llvm-14/include/c++/v1/__config_site'
  '/usr/lib/llvm-14/include/c++/v1/cstddef'
  '/usr/lib/llvm-14/include/c++/v1/version'
  '/usr/lib/llvm-14/include/c++/v1/__nullptr'
  '/usr/lib/llvm-14/include/c++/v1/stddef.h'
  '/usr/lib/llvm-14/include/c++/v1/cstring'
  '/usr/lib/llvm-14/include/c++/v1/string.h'
Target //src/workerd/server:workerd failed to build
INFO: Elapsed time: 1.298s, Critical Path: 0.52s
INFO: 18 processes: 17 internal, 1 linux-sandbox.
FAILED: Build did NOT complete successfully

Tries

I have tried the following:

  • Install clang-14 using apt
  • Install libc++-dev using apt
  • Install libc++abi-dev using apt
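One more avenue that may be worth trying: Bazel's "undeclared inclusion(s)" errors often come from stale action metadata after the host toolchain changed underneath an existing output base. A workaround sketch under that assumption (not a confirmed fix for this report):

```shell
# Point the build at the intended compiler and clear cached analysis so
# Bazel re-discovers header dependencies under the new toolchain.
export CC=clang-14   # assumption: building workerd with clang-14
if command -v bazel >/dev/null 2>&1; then
  bazel clean --expunge
  bazel build -c opt //src/workerd/server:workerd
else
  echo "bazel not found on PATH"
fi
```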

OS info

jonas@jonas-asus:~/repos/clones/cloudflare/workerd$ cat /etc/os-release 
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Github Actions Always Updates

The issue
To speed up actions and reduce bandwidth, we should cache an image after installing dependencies, then grab the cached image for testing, releasing, etc.

What to keep in mind
The new cached image should be refreshed every once in a while in case there is an update to one of the dependencies (e.g. clang, build-essential, etc.).
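A hypothetical fragment of what the caching step could look like with actions/cache (the path and cache key below are assumptions and would need to match the real workflow):

```yaml
# Hypothetical workflow step: cache Bazel's output base between runs so
# dependencies are not rebuilt from scratch every time.
- name: Cache Bazel outputs
  uses: actions/cache@v3
  with:
    path: ~/.cache/bazel
    key: bazel-${{ runner.os }}-${{ hashFiles('WORKSPACE') }}
    restore-keys: bazel-${{ runner.os }}-
```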

Another bazel build failure on a fresh checkout

Fresh checkout of the repo...

jasnell@Cloudflare:~/projects/workerd$ bazel build //src/workerd/server:workerd
INFO: Repository v8_python_deps instantiated at:
  /home/jasnell/projects/workerd/WORKSPACE:200:12: in <toplevel>
  /home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/rules_python/python/pip.bzl:53:19: in pip_install
Repository rule pip_repository defined at:
  /home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/rules_python/python/pip_install/pip_repository.bzl:67:33: in <toplevel>
ERROR: An error occurred during the fetch of repository 'v8_python_deps':
   Traceback (most recent call last):
        File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/rules_python/python/pip_install/pip_repository.bzl", line 63, column 13, in _pip_repository_impl
                fail("rules_python_external failed: %s (%s)" % (result.stdout, result.stderr))
Error in fail: rules_python_external failed:  (Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/__main__.py", line 16, in <module>
    from pip._internal.main import main as _main  # isort:skip # noqa
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/main.py", line 13, in <module>
    from pip._internal.cli.autocompletion import autocomplete
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/cli/autocompletion.py", line 11, in <module>
    from pip._internal.cli.main_parser import create_main_parser
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/cli/main_parser.py", line 7, in <module>
    from pip._internal.cli import cmdoptions
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/cli/cmdoptions.py", line 19, in <module>
    from distutils.util import strtobool
ModuleNotFoundError: No module named 'distutils.util'
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/rules_python/python/pip_install/extract_wheels/__main__.py", line 5, in <module>
    main()
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/rules_python/python/pip_install/extract_wheels/__init__.py", line 87, in main
    subprocess.run(pip_args, check=True)
  File "/usr/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'pip', 'wheel', '-r', '/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/v8/bazel/requirements.txt', '--require-hashes']' returned non-zero exit status 1.
)
ERROR: /home/jasnell/projects/workerd/WORKSPACE:200:12: fetching pip_repository rule //external:v8_python_deps: Traceback (most recent call last):
        File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/rules_python/python/pip_install/pip_repository.bzl", line 63, column 13, in _pip_repository_impl
                fail("rules_python_external failed: %s (%s)" % (result.stdout, result.stderr))
Error in fail: rules_python_external failed:  (Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/__main__.py", line 16, in <module>
    from pip._internal.main import main as _main  # isort:skip # noqa
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/main.py", line 13, in <module>
    from pip._internal.cli.autocompletion import autocomplete
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/cli/autocompletion.py", line 11, in <module>
    from pip._internal.cli.main_parser import create_main_parser
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/cli/main_parser.py", line 7, in <module>
    from pip._internal.cli import cmdoptions
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/cli/cmdoptions.py", line 19, in <module>
    from distutils.util import strtobool
ModuleNotFoundError: No module named 'distutils.util'
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/rules_python/python/pip_install/extract_wheels/__main__.py", line 5, in <module>
    main()
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/rules_python/python/pip_install/extract_wheels/__init__.py", line 87, in main
    subprocess.run(pip_args, check=True)
  File "/usr/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'pip', 'wheel', '-r', '/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/v8/bazel/requirements.txt', '--require-hashes']' returned non-zero exit status 1.
)
INFO: Repository ssl instantiated at:
  /home/jasnell/projects/workerd/WORKSPACE:44:13: in <toplevel>
Repository rule http_archive defined at:
  /home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/bazel_tools/tools/build_defs/repo/http.bzl:355:31: in <toplevel>
ERROR: /home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/workerd-v8/BUILD.bazel:1:11: @workerd-v8//:v8 depends on @v8//:v8_icu in repository @v8 which failed to fetch. no such package '@v8_python_deps//': rules_python_external failed:  (Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/__main__.py", line 16, in <module>
    from pip._internal.main import main as _main  # isort:skip # noqa
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/main.py", line 13, in <module>
    from pip._internal.cli.autocompletion import autocomplete
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/cli/autocompletion.py", line 11, in <module>
    from pip._internal.cli.main_parser import create_main_parser
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/cli/main_parser.py", line 7, in <module>
    from pip._internal.cli import cmdoptions
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/pypi__pip/pip/_internal/cli/cmdoptions.py", line 19, in <module>
    from distutils.util import strtobool
ModuleNotFoundError: No module named 'distutils.util'
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/rules_python/python/pip_install/extract_wheels/__main__.py", line 5, in <module>
    main()
  File "/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/rules_python/python/pip_install/extract_wheels/__init__.py", line 87, in main
    subprocess.run(pip_args, check=True)
  File "/usr/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'pip', 'wheel', '-r', '/home/jasnell/.cache/bazel/_bazel_jasnell/d8d6f6915f66665582ff8c3cdeaeb6c6/external/v8/bazel/requirements.txt', '--require-hashes']' returned non-zero exit status 1.
)
ERROR: Analysis of target '//src/workerd/server:workerd' failed; build aborted: 
INFO: Elapsed time: 2.363s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 16 targets configured)
    currently loading: @v8//
jasnell@Cloudflare:~/projects/workerd$ ^C
jasnell@Cloudflare:~/projects/workerd$ 
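The root ModuleNotFoundError suggests the system Python cannot import distutils, which Debian/Ubuntu split into a separate package (the package name below is an assumption; verify for your distro). A quick check:

```shell
# Check whether the system Python can import distutils.util, which the
# pip invocation in the traceback requires.
if python3 -c 'import distutils.util' 2>/dev/null; then
  msg="distutils present"
else
  msg="distutils missing: try 'sudo apt-get install python3-distutils'"
fi
echo "$msg"
```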
