tilblechschmidt / webgrid

Decentralized, scalable and robust implementation of a selenium-grid equivalent. Based on the WebDriver specification by the W3C.

Home Page: https://webgrid.dev

License: MIT License

Dockerfile 0.29% Shell 1.29% Rust 91.88% Makefile 0.33% Mustache 1.01% JavaScript 0.96% HTML 0.11% Svelte 3.96% TypeScript 0.17%
selenium selenium-grid selenium-node rust webdriver scaleable kubernetes docker helm-chart w3c

webgrid's Introduction

WebGrid

Banner

Install | Usage | Docs

Contributor Covenant GitHub
Maintenance GitHub last commit
You have an idea for a logo? Submit it here!


  • Cluster ready. Designed with concurrency and on-demand scalability1 in mind
  • Debuggable. Provides browser screen recordings, extensive logs, and tracing
  • Fast. Built for speed and performance on a single grid instance
  • W3C Specification compliant. Fully compatible with existing Selenium 4 clients

1 All the way down to zero, obviously.


Install

Below are quick-start tutorials to get you started. For a more detailed introduction visit the dedicated Getting Started guide!

🐳 Docker

To run a basic grid in Docker you can use Docker Compose. Below is a bare-bones example of getting all required components up and running!

# Create prerequisites
docker network create webgrid

# Download compose file
curl -fsSLO webgrid.dev/docker-compose.yml

# Launch the grid
docker-compose up

You can now point your Selenium client to localhost:8080 and browse the API at /api.
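
For example, with the Python Selenium 4 client (a minimal sketch, assuming the selenium package is installed; the Firefox options are only an example, any W3C-compliant client and browser configuration works):

from selenium import webdriver
from selenium.webdriver.firefox.options import Options

# Point the remote driver at the grid entrypoint from the compose file.
options = Options()
driver = webdriver.Remote(command_executor="http://localhost:8080", options=options)

driver.get("https://webgrid.dev")
print(driver.title)
driver.quit()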

☸️ Kube

For deployment to Kubernetes, a Helm repository is available. The default values provide a good starting point for basic cluster setups like K3s or microk8s.

# Add the repository
helm repo add webgrid https://webgrid.dev/

# List all available versions
helm search repo --versions --devel webgrid/demo

# Install the chart
helm install example webgrid/demo --version "<pick-a-version-from-the-list>"

# Make it accessible locally for evaluation
kubectl port-forward service/example-webgrid 8080:80

Your grid is now available at localhost:8080.

If you are deploying to an RBAC-enabled cluster you might have to tweak some settings. Take a look at the documentation on how to use your own ServiceAccount and PersistentVolumeClaims.

Usage

Once you have your grid up and running, there are a couple of things you can do!

πŸš€ Launch browser instances

Point your Selenium client to http://localhost:8080 to create a new browser container/pod and interact with it! You can use all features supported by Selenium.
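
Vendor-specific settings can be passed through the webgrid:options capability. Here is a sketch with the Python Selenium 4 client; the metadata structure and the build/name placeholders mirror the capabilities visible in the grid's own session logs:

from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
# Optional metadata that is forwarded with the session creation request.
options.set_capability("webgrid:options", {"metadata": {"build": "test-build", "name": "test-name"}})

driver = webdriver.Remote(command_executor="http://localhost:8080", options=options)
driver.get("https://example.com")
driver.quit()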

πŸ” Browse the API

The grid provides a GraphQL API at /api with a Playground for you to explore. It exposes all available metadata about sessions, grid health and advanced features like video recordings.
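
Since it is a standard GraphQL-over-HTTP endpoint, it can also be queried programmatically. A minimal sketch using the Python requests package (the introspection-style query below works on any GraphQL endpoint; the actual fields are best explored in the Playground):

import requests

# POST a GraphQL query document to the grid's /api endpoint.
response = requests.post(
    "http://localhost:8080/api",
    json={"query": "{ __typename }"},
    timeout=10,
)
print(response.json())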

πŸ“Ί Watch your browsers

You can take a live look at what your browsers are doing by taking the Session ID of an instance and visiting localhost:8080. You can also embed the videos in your existing tools! Head over to the embedding documentation to learn how.

Note:
Video recordings are disabled by default in K8s as every cluster has specific requirements for file storage. The storage documentation explains how to enable it.

Developing

If you want to build the project locally you can use the Makefile. To create Docker images for every component and run them locally, run these commands:

# Build docker images
make

# Start components in docker
make install

To start individual components outside of Docker or set up the development environment, see the development environment documentation.

License

This project is licensed under the MIT License. While this grants you a lot of freedom in how you use the software and keeps the legal headache to a minimum, it also no longer requires you to publish modifications made to the project. The original intention behind the previously used AGPL license was to encourage contributions by users who added features for their own use.

Since this project is so small at this stage, it heavily relies on feedback and contributions from the community (that's you!). So please strongly consider contributing any changes you make for the benefit of all users πŸ™‚

webgrid's People

Contributors

dependabot[bot], jpjuni0r, tilblechschmidt


webgrid's Issues

Convert TODOs to Issues

The code base still contains a large number of TODOs and related thoughts that have been added on a whim. This makes it hard to prioritise and reference issues. To remedy this, all remaining TODOs should be removed from the code and converted into issues on GitHub, linking back to the location in the code.

Pretty simple, really.

Readiness probes of some components not correct

πŸ› Bug description

It appears that some components do not propagate the correct readiness probe state regarding connectivity to the Redis server. It seems to affect the manager, orchestrator, and gangway. The collector and API probably suffer from the same issue, but in the observed scenario they kept crashing because the MongoDB server was unavailable.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Deploy a fresh WebGrid
  2. Make sure the Redis and/or MongoDB servers don't come up
  3. Watch it burn πŸ”₯

🎯 Expected behaviour

This is more of a philosophical discussion on whether the software should crash upon encountering an error or just report a negative readiness state. Probably the latter; however, even that is currently not the case. Redis connectivity should be reflected in the readiness state!

πŸ“Ί Screenshots

image

Hire a chaos monkey

Recent experiments on the graceful shutdown branch have confirmed that WebGrid survives the random (graceful) termination of components.

Make this into a test case and run it in the pipeline!

Unable to create WebDriver session

πŸ› Bug description

A default WebGrid started with docker-compose can't run the ParallelSeleniumTest.

🦢 Reproduction steps

Start WebGrid with docker-compose.
Run in ParallelSeleniumTest: cargo run -- http://localhost:8080/ 1

🎯 Expected behaviour

ParallelSeleniumTest should pass.

πŸ“Ί Screenshots

I can log in to WebGrid at localhost:8080/watch and /api.

ParallelSeleniumTest:

cargo run --color=always -- http://localhost:8080/ 1
    Finished dev [unoptimized + debuginfo] target(s) in 0.06s
     Running `target/debug/basic-test 'http://localhost:8080/' 1`
 2021-10-01T01:43:53.664Z INFO  basic_test > Running 1 tests against 'http://localhost:8080/'
 2021-10-01T01:48:53.722Z INFO  basic_test > Test #0 failed: Unable to create WebDriver session:     
    Status: 500
    Additional info:
        Timed out while waiting for orchestrator to respond
        Error: session not created
 2021-10-01T01:48:53.722Z INFO  basic_test > All tests finished. 0 / 1 succeeded.

WebGrid:

docker-compose up
Starting webgrid-redis ... done
Starting webgrid-storage      ... done
Starting webgrid-api          ... done
Starting webgrid-gc           ... done
Starting webgrid-manager      ... done
Starting webgrid-proxy        ... done
Starting webgrid-orchestrator ... done
Attaching to webgrid-redis, webgrid-storage, webgrid-api, webgrid-manager, webgrid-orchestrator, webgrid-proxy, webgrid-gc
webgrid-api     |  2021-10-01T01:32:25.888Z INFO  webgrid > v0.5.1-beta
webgrid-api     |  2021-10-01T01:32:25.888Z DEBUG webgrid::libraries::lifecycle::heart > Heart starts beating
webgrid-api     |  2021-10-01T01:32:25.888Z INFO  jatsl::scheduler                     > Startup          jatsl::status_server
webgrid-api     |  2021-10-01T01:32:25.888Z INFO  jatsl::scheduler                     > Startup          webgrid::services::api::jobs::server
webgrid-api     |  2021-10-01T01:32:25.888Z INFO  jatsl::scheduler                     > Startup          webgrid::libraries::net::advertise
webgrid-api     |  2021-10-01T01:32:25.888Z INFO  jatsl::status_server                 > Status server is disabled, exiting.
webgrid-api     |  2021-10-01T01:32:25.888Z INFO  jatsl::scheduler                     > Finished         jatsl::status_server
webgrid-api     |  2021-10-01T01:32:25.889Z DEBUG webgrid::libraries::resources::redis > Reusing existing shared connection!
webgrid-api     |  2021-10-01T01:32:25.890Z INFO  webgrid::services::api::jobs::server > Serving files and API at 0.0.0.0:40007
webgrid-api     |  2021-10-01T01:32:25.890Z INFO  jatsl::scheduler                     > Ready            webgrid::services::api::jobs::server
webgrid-manager |  2021-10-01T01:32:25.910Z INFO  webgrid > v0.5.1-beta
webgrid-manager |  2021-10-01T01:32:25.910Z DEBUG webgrid::libraries::lifecycle::heart > Heart starts beating
webgrid-manager |  2021-10-01T01:32:25.910Z INFO  jatsl::scheduler                     > Startup          jatsl::status_server
webgrid-manager |  2021-10-01T01:32:25.910Z INFO  jatsl::scheduler                     > Startup          webgrid::libraries::lifecycle::heart_beat
webgrid-manager |  2021-10-01T01:32:25.910Z INFO  jatsl::scheduler                     > Startup          webgrid::libraries::metrics::processor
webgrid-manager |  2021-10-01T01:32:25.910Z INFO  jatsl::scheduler                     > Startup          webgrid::libraries::net::advertise
webgrid-orchestrator |  2021-10-01T01:32:26.171Z INFO  webgrid > v0.5.1-beta
webgrid-api     |  2021-10-01T01:32:25.890Z INFO  jatsl::scheduler                     > Ready            webgrid::libraries::net::advertise
webgrid-manager |  2021-10-01T01:32:25.910Z INFO  jatsl::status_server                 > Status server is disabled, exiting.
webgrid-manager |  2021-10-01T01:32:25.910Z INFO  jatsl::scheduler                     > Startup          webgrid::services::manager::jobs::session_handler
webgrid-manager |  2021-10-01T01:32:25.910Z INFO  jatsl::scheduler                     > Finished         jatsl::status_server
webgrid-orchestrator |  2021-10-01T01:32:26.172Z DEBUG webgrid::libraries::lifecycle::heart_beat > Added heartbeat orchestrator:example-orchestrator:heartbeat
webgrid-orchestrator |  2021-10-01T01:32:26.172Z DEBUG webgrid::libraries::lifecycle::heart_beat > Added heartbeat orchestrator:example-orchestrator:retain
webgrid-manager |  2021-10-01T01:32:25.910Z DEBUG webgrid::libraries::resources::redis > Reusing existing shared connection!
webgrid-redis   | 1:C 01 Oct 2021 01:32:24.789 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
webgrid-redis   | 1:C 01 Oct 2021 01:32:24.789 # Redis version=6.2.5, bits=64, commit=00000000, modified=0, pid=1, just started
webgrid-redis   | 1:C 01 Oct 2021 01:32:24.789 # Configuration loaded
webgrid-manager |  2021-10-01T01:32:25.910Z INFO  webgrid::services::manager::jobs::session_handler > Listening at 0.0.0.0:40001
webgrid-manager |  2021-10-01T01:32:25.910Z INFO  jatsl::scheduler                                  > Ready            webgrid::services::manager::jobs::session_handler
webgrid-storage |  2021-10-01T01:32:25.472Z INFO  webgrid > v0.5.1-beta
webgrid-storage |  2021-10-01T01:32:25.472Z DEBUG webgrid::libraries::storage::storage_handler > Attempting to read storage ID from /storage/.webgrid-storage
webgrid-manager |  2021-10-01T01:32:25.911Z INFO  jatsl::scheduler                                  > Ready            webgrid::libraries::metrics::processor
webgrid-redis   | 1:M 01 Oct 2021 01:32:24.790 * monotonic clock: POSIX clock_gettime
webgrid-redis   | 1:M 01 Oct 2021 01:32:24.791 * Running mode=standalone, port=6379.
webgrid-storage |  2021-10-01T01:32:25.474Z DEBUG webgrid::services::storage                   > Size threshold: 10000000000 bytes
webgrid-storage |  2021-10-01T01:32:25.474Z DEBUG webgrid::services::storage                   > Cleanup target: 2000000000 bytes
webgrid-storage |  2021-10-01T01:32:25.474Z DEBUG webgrid::libraries::lifecycle::heart         > Heart starts beating
webgrid-storage |  2021-10-01T01:32:25.474Z INFO  jatsl::scheduler                             > Startup          jatsl::status_server
webgrid-storage |  2021-10-01T01:32:25.474Z INFO  jatsl::scheduler                             > Startup          webgrid::services::storage::jobs::server
webgrid-storage |  2021-10-01T01:32:25.474Z INFO  jatsl::scheduler                             > Startup          webgrid::libraries::metrics::processor
webgrid-redis   | 1:M 01 Oct 2021 01:32:24.791 # Server initialized
webgrid-redis   | 1:M 01 Oct 2021 01:32:24.791 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
webgrid-manager |  2021-10-01T01:32:25.911Z DEBUG webgrid::libraries::resources::redis              > Reusing existing shared connection!
webgrid-orchestrator |  2021-10-01T01:32:26.172Z DEBUG webgrid::libraries::lifecycle::heart      > Heart starts beating
webgrid-storage |  2021-10-01T01:32:25.474Z INFO  jatsl::status_server                         > Status server is disabled, exiting.
webgrid-storage |  2021-10-01T01:32:25.474Z INFO  jatsl::scheduler                             > Finished         jatsl::status_server
webgrid-manager |  2021-10-01T01:32:25.911Z INFO  jatsl::scheduler                                  > Ready            webgrid::libraries::lifecycle::heart_beat
webgrid-storage |  2021-10-01T01:32:25.474Z INFO  jatsl::scheduler                             > Startup          webgrid::services::storage::jobs::cleanup
webgrid-gc      |  2021-10-01T01:32:26.225Z INFO  webgrid > v0.5.1-beta
webgrid-gc      |  2021-10-01T01:32:26.225Z DEBUG webgrid::libraries::lifecycle::heart > Heart starts beating
webgrid-gc      |  2021-10-01T01:32:26.225Z INFO  jatsl::scheduler                     > Startup          webgrid::services::gc::jobs::garbage_collector
webgrid-gc      |  2021-10-01T01:32:26.225Z INFO  jatsl::scheduler                     > Startup          jatsl::status_server
webgrid-storage |  2021-10-01T01:32:25.475Z INFO  jatsl::scheduler                             > Startup          webgrid::services::storage::jobs::metadata
webgrid-storage |  2021-10-01T01:32:25.475Z INFO  webgrid::services::storage::jobs::cleanup    > Synchronising filesystem
webgrid-redis   | 1:M 01 Oct 2021 01:32:24.791 * Ready to accept connections
webgrid-proxy   |  2021-10-01T01:32:26.183Z INFO  webgrid > v0.5.1-beta
webgrid-storage |  2021-10-01T01:32:25.475Z INFO  jatsl::scheduler                             > Startup          webgrid::libraries::net::advertise
webgrid-manager |  2021-10-01T01:32:25.911Z INFO  jatsl::scheduler                                  > Ready            webgrid::libraries::net::advertise
webgrid-gc      |  2021-10-01T01:32:26.225Z INFO  jatsl::status_server                 > Status server is disabled, exiting.
webgrid-gc      |  2021-10-01T01:32:26.225Z INFO  jatsl::scheduler                     > Finished         jatsl::status_server
webgrid-gc      |  2021-10-01T01:32:26.225Z INFO  jatsl::scheduler                     > Ready            webgrid::services::gc::jobs::garbage_collector
webgrid-gc      |  2021-10-01T01:32:26.225Z INFO  webgrid::services::gc::jobs::garbage_collector > Retaining session metadata for 7 days
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Startup          webgrid::libraries::lifecycle::heart_beat
webgrid-proxy   |  2021-10-01T01:32:26.183Z DEBUG webgrid::libraries::lifecycle::heart > Heart starts beating
webgrid-proxy   |  2021-10-01T01:32:26.183Z INFO  jatsl::scheduler                     > Startup          jatsl::status_server
webgrid-proxy   |  2021-10-01T01:32:26.183Z INFO  jatsl::scheduler                     > Startup          webgrid::services::proxy::jobs::proxy
webgrid-proxy   |  2021-10-01T01:32:26.184Z INFO  jatsl::scheduler                     > Startup          webgrid::libraries::net::discovery
webgrid-proxy   |  2021-10-01T01:32:26.183Z INFO  jatsl::scheduler                     > Startup          webgrid::libraries::metrics::processor
webgrid-proxy   |  2021-10-01T01:32:26.184Z INFO  jatsl::status_server                 > Status server is disabled, exiting.
webgrid-proxy   |  2021-10-01T01:32:26.184Z INFO  jatsl::scheduler                     > Finished         jatsl::status_server
webgrid-storage |  2021-10-01T01:32:25.475Z INFO  jatsl::scheduler                             > Ready            webgrid::services::storage::jobs::cleanup
webgrid-storage |  2021-10-01T01:32:25.475Z DEBUG webgrid::libraries::resources::redis         > Reusing existing shared connection!
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Startup          jatsl::status_server
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Startup          webgrid::services::orchestrator::core::jobs::registration
webgrid-storage |  2021-10-01T01:32:25.475Z INFO  webgrid::services::storage::jobs::server     > Listening at 0.0.0.0:40006/storage/383357a0-9375-4c34-b520-b1f743a6ca48
webgrid-storage |  2021-10-01T01:32:25.475Z INFO  jatsl::scheduler                             > Ready            webgrid::services::storage::jobs::server
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Startup          webgrid::services::orchestrator::core::jobs::slot_reclaim
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Startup          webgrid::services::orchestrator::core::jobs::processor
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Startup          webgrid::services::orchestrator::core::jobs::slot_count_adjuster
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Startup          webgrid::services::orchestrator::core::jobs::node_watcher
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Startup          webgrid::services::orchestrator::core::jobs::slot_recycle
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Startup          webgrid::libraries::net::discovery
webgrid-storage |  2021-10-01T01:32:25.476Z INFO  jatsl::scheduler                             > Ready            webgrid::services::storage::jobs::metadata
webgrid-proxy   |  2021-10-01T01:32:26.184Z DEBUG webgrid::libraries::resources::redis > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T01:32:26.226Z DEBUG webgrid::services::gc::jobs::garbage_collector > Running garbage collector cycle
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::status_server                      > Status server is disabled, exiting.
webgrid-orchestrator |  2021-10-01T01:32:26.172Z DEBUG webgrid::libraries::resources::redis      > Reusing existing shared connection!
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Finished         jatsl::status_server
webgrid-gc      |  2021-10-01T01:32:26.227Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T01:32:26.227Z INFO  webgrid::services::gc::jobs::garbage_collector > Terminated 0 and purged 0 sessions
webgrid-gc      |  2021-10-01T01:32:26.227Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-proxy   |  2021-10-01T01:32:26.184Z INFO  webgrid::services::proxy::jobs::proxy > Listening on 0.0.0.0:40005
webgrid-storage |  2021-10-01T01:32:25.476Z INFO  jatsl::scheduler                             > Ready            webgrid::libraries::metrics::processor
webgrid-proxy   |  2021-10-01T01:32:26.184Z INFO  jatsl::scheduler                      > Ready            webgrid::services::proxy::jobs::proxy
webgrid-gc      |  2021-10-01T01:32:26.227Z INFO  webgrid::services::gc::jobs::garbage_collector > Purged 0 orchestrators
webgrid-storage |  2021-10-01T01:32:25.476Z INFO  jatsl::scheduler                             > Ready            webgrid::libraries::net::advertise
webgrid-proxy   |  2021-10-01T01:32:26.184Z INFO  jatsl::scheduler                      > Ready            webgrid::libraries::metrics::processor
webgrid-proxy   |  2021-10-01T01:32:26.184Z INFO  jatsl::scheduler                      > Ready            webgrid::libraries::net::discovery
webgrid-storage |  2021-10-01T01:32:25.500Z DEBUG webgrid::services::storage::jobs::cleanup    > Running cleanup cycle #0
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Ready            webgrid::services::orchestrator::core::jobs::processor
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Ready            webgrid::services::orchestrator::core::jobs::slot_recycle
webgrid-storage |  2021-10-01T01:32:25.500Z DEBUG webgrid::libraries::storage::storage_handler > Used bytes: 0
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Ready            webgrid::services::orchestrator::core::jobs::node_watcher
webgrid-orchestrator |  2021-10-01T01:32:26.172Z DEBUG webgrid::libraries::resources::redis      > Reusing existing shared connection!
webgrid-orchestrator |  2021-10-01T01:32:26.172Z DEBUG webgrid::libraries::resources::redis      > Reusing existing shared connection!
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Ready            webgrid::services::orchestrator::core::jobs::slot_reclaim
webgrid-orchestrator |  2021-10-01T01:32:26.172Z DEBUG webgrid::libraries::resources::redis      > Reusing existing shared connection!
webgrid-orchestrator |  2021-10-01T01:32:26.172Z INFO  jatsl::scheduler                          > Ready            webgrid::libraries::net::discovery
webgrid-orchestrator |  2021-10-01T01:32:26.173Z INFO  jatsl::scheduler                          > Ready            webgrid::libraries::lifecycle::heart_beat
webgrid-orchestrator |  2021-10-01T01:32:26.173Z INFO  webgrid::services::orchestrator::core::jobs::slot_count_adjuster::subjobs > Adjusting slot amount from 0 -> 5
webgrid-orchestrator |  2021-10-01T01:32:26.173Z INFO  webgrid::services::orchestrator::core::jobs::slot_recycle                 > Recycled slot 37d53a1e-cd18-4590-867f-05b4664dddc0
webgrid-orchestrator |  2021-10-01T01:32:26.173Z INFO  webgrid::services::orchestrator::core::jobs::slot_reclaim                 > Reclaim cycle executed (D: [], O: [])
webgrid-orchestrator |  2021-10-01T01:32:26.173Z INFO  webgrid::services::orchestrator::core::jobs::slot_recycle                 > Recycled slot 3a7cf84e-cb9d-4553-955f-4d99d5e45f4a
webgrid-orchestrator |  2021-10-01T01:32:26.173Z INFO  jatsl::scheduler                                                          > Ready            webgrid::services::orchestrator::core::jobs::registration
webgrid-orchestrator |  2021-10-01T01:32:26.173Z INFO  webgrid::services::orchestrator::core::jobs::slot_recycle                 > Recycled slot 5e91430a-b17a-43cc-b887-6f13f2773b17
webgrid-orchestrator |  2021-10-01T01:32:26.173Z INFO  webgrid::services::orchestrator::core::jobs::slot_recycle                 > Recycled slot a6d9bbd1-e884-40d6-b759-97d1ea2b68ca
webgrid-orchestrator |  2021-10-01T01:32:26.173Z INFO  webgrid::services::orchestrator::core::jobs::slot_recycle                 > Recycled slot 029d8d3c-e7c7-42cd-bf40-0175802897da
webgrid-orchestrator |  2021-10-01T01:32:26.173Z INFO  webgrid::services::orchestrator::core::jobs::slot_count_adjuster::subjobs > Managed slots: 
webgrid-orchestrator | 	"a6d9bbd1-e884-40d6-b759-97d1ea2b68ca\n\t37d53a1e-cd18-4590-867f-05b4664dddc0\n\t5e91430a-b17a-43cc-b887-6f13f2773b17\n\t3a7cf84e-cb9d-4553-955f-4d99d5e45f4a\n\t029d8d3c-e7c7-42cd-bf40-0175802897da"
webgrid-orchestrator |  2021-10-01T01:32:26.173Z INFO  jatsl::scheduler                                                          > Ready            webgrid::services::orchestrator::core::jobs::slot_count_adjuster
webgrid-proxy   |  2021-10-01T01:32:43.940Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-manager:40001
webgrid-proxy   |  2021-10-01T01:32:43.940Z DEBUG webgrid::services::proxy::jobs::proxy > POST /session -> webgrid-manager:40001
webgrid-proxy   |  2021-10-01T01:32:43.941Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-manager |  2021-10-01T01:32:43.942Z INFO  webgrid::services::manager::jobs::session_handler > Session creation requested from 172.20.0.7:35402
webgrid-manager | {"firstMatch":[{}],"alwaysMatch":{"browserName":"firefox","webgrid:options":{"metadata":{"build":"test-build","name":"test-name"}}}}
webgrid-manager |  2021-10-01T01:32:43.943Z DEBUG webgrid::services::manager::tasks::create_session > Created session object d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2
webgrid-manager |  2021-10-01T01:32:43.943Z DEBUG webgrid::libraries::lifecycle::heart_beat         > Added heartbeat session:d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2:heartbeat.manager
webgrid-orchestrator |  2021-10-01T01:32:43.944Z DEBUG webgrid::libraries::resources::redis                                      > Reusing existing shared connection!
webgrid-orchestrator |  2021-10-01T01:32:43.944Z INFO  webgrid::services::orchestrator::core::tasks::provision::subtasks         > Provisioning d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2
webgrid-orchestrator | Matching Capabilities { strict_file_interactability: None, accept_insecure_certs: None, browser_name: Some("firefox"), browser_version: None, platform_name: None, page_load_strategy: None, proxy: None, timeouts: None, unhandled_prompt_behavior: None, webgrid_options: Some(WebGridOptions { metadata: Some({"build": "test-build", "name": "test-name"}) }), extension_capabilities: {} } against webgrid/node-firefox:v0.5.1-beta firefox::68.7.0esr
webgrid-orchestrator | true true
webgrid-orchestrator |  2021-10-01T01:32:43.944Z DEBUG webgrid::services::orchestrator::provisioners::docker::provisioner        > Creating docker container webgrid-session-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2
webgrid-orchestrator |  2021-10-01T01:32:43.944Z DEBUG bollard::docker                                                           > {"Hostname":"webgrid-session-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2","Env":["REDIS=redis://webgrid-redis/","ID=d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2","RUST_LOG=debug,hyper=warn,warp=warn,sqlx=warn,tower=warn,h2=warn","HOST=webgrid-session-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2","STORAGE_DIRECTORY=/storage","TRACE_ENDPOINT=http://webgrid-otelcol:4317"],"Image":"webgrid/node-firefox:v0.5.1-beta","HostConfig":{"Binds":["webgrid:/storage"],"NetworkMode":"webgrid","AutoRemove":true,"ShmSize":2147483648}}
webgrid-orchestrator |  2021-10-01T01:32:43.944Z DEBUG bollard::uri                                                              > Parsing uri: unix://2f7661722f72756e2f646f636b65722e736f636b/containers/create?name=webgrid-session-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2, client_type: Unix, socket: /var/run/docker.sock
webgrid-orchestrator |  2021-10-01T01:32:43.944Z DEBUG bollard::docker                                                           > unix://2f7661722f72756e2f646f636b65722e736f636b/containers/create?name=webgrid-session-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2
webgrid-orchestrator |  2021-10-01T01:32:43.944Z DEBUG bollard::docker                                                           > request: Request { method: POST, uri: unix://2f7661722f72756e2f646f636b65722e736f636b/containers/create?name=webgrid-session-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2, version: HTTP/1.1, headers: {"content-type": "application/json"}, body: Body(Full(b"{\"Hostname\":\"webgrid-session-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2\",\"Env\":[\"REDIS=redis://webgrid-redis/\",\"ID=d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2\",\"RUST_LOG=debug,hyper=warn,warp=warn,sqlx=warn,tower=warn,h2=warn\",\"HOST=webgrid-session-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2\",\"STORAGE_DIRECTORY=/storage\",\"TRACE_ENDPOINT=http://webgrid-otelcol:4317\"],\"Image\":\"webgrid/node-firefox:v0.5.1-beta\",\"HostConfig\":{\"Binds\":[\"webgrid:/storage\"],\"NetworkMode\":\"webgrid\",\"AutoRemove\":true,\"ShmSize\":2147483648}}")) }
webgrid-orchestrator |  2021-10-01T01:32:43.946Z ERROR webgrid::services::orchestrator::core::tasks::provision                   > Failed to provision node d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2 API responded with a 404 not found: {"message":"No such image: webgrid/node-firefox:v0.5.1-beta"}
webgrid-orchestrator | 
webgrid-orchestrator |  2021-10-01T01:32:43.946Z DEBUG webgrid::services::orchestrator::provisioners::docker::provisioner        > Killing docker container webgrid-node-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2
webgrid-orchestrator |  2021-10-01T01:32:43.946Z DEBUG bollard::uri                                                              > Parsing uri: unix://2f7661722f72756e2f646f636b65722e736f636b/containers/webgrid-node-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2/kill, client_type: Unix, socket: /var/run/docker.sock
webgrid-orchestrator |  2021-10-01T01:32:43.946Z DEBUG bollard::docker                                                           > unix://2f7661722f72756e2f646f636b65722e736f636b/containers/webgrid-node-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2/kill
webgrid-orchestrator |  2021-10-01T01:32:43.946Z DEBUG bollard::docker                                                           > request: Request { method: POST, uri: unix://2f7661722f72756e2f646f636b65722e736f636b/containers/webgrid-node-d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2/kill, version: HTTP/1.1, headers: {"content-type": "application/json"}, body: Body(Empty) }
webgrid-manager |  2021-10-01T01:32:50.913Z DEBUG tonic::transport::service::reconnect              > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-manager |  2021-10-01T01:32:50.913Z DEBUG tonic::transport::service::reconnect              > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-manager | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-orchestrator |  2021-10-01T01:32:51.175Z DEBUG tonic::transport::service::reconnect                                      > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-orchestrator |  2021-10-01T01:32:51.175Z DEBUG tonic::transport::service::reconnect                                      > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-orchestrator | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-proxy   |  2021-10-01T01:32:51.188Z DEBUG tonic::transport::service::reconnect  > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-proxy   |  2021-10-01T01:32:51.188Z DEBUG tonic::transport::service::reconnect  > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-proxy   | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-storage |  2021-10-01T01:37:25.502Z DEBUG webgrid::services::storage::jobs::cleanup    > Running cleanup cycle #1
webgrid-storage |  2021-10-01T01:37:25.502Z DEBUG webgrid::libraries::storage::storage_handler > Used bytes: 0
webgrid-orchestrator |  2021-10-01T01:37:26.174Z INFO  webgrid::services::orchestrator::core::jobs::slot_reclaim                 > Reclaim cycle executed (D: [], O: [])
webgrid-manager |  2021-10-01T01:37:43.950Z WARN  webgrid::services::manager::tasks::create_session > Failed to setup session d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2 SchedulingTimeout
webgrid-manager |  2021-10-01T01:37:43.950Z DEBUG webgrid::libraries::lifecycle::heart_beat         > Removed heartbeat session:d5d44bb1-48f0-4d32-a7a4-0fb7df2e17c2:heartbeat.manager
webgrid-manager |  2021-10-01T01:37:43.951Z WARN  webgrid::services::manager::jobs::session_handler > Failed to create session: Timed out while waiting for orchestrator to respond
webgrid-manager |  2021-10-01T01:37:50.916Z DEBUG tonic::transport::service::reconnect              > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-manager |  2021-10-01T01:37:50.916Z DEBUG tonic::transport::service::reconnect              > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-manager | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-proxy   |  2021-10-01T01:37:51.190Z DEBUG tonic::transport::service::reconnect  > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-proxy   |  2021-10-01T01:37:51.190Z DEBUG tonic::transport::service::reconnect  > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-proxy   | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-storage |  2021-10-01T01:42:25.503Z DEBUG webgrid::services::storage::jobs::cleanup    > Running cleanup cycle #2
webgrid-storage |  2021-10-01T01:42:25.504Z DEBUG webgrid::libraries::storage::storage_handler > Used bytes: 0
webgrid-orchestrator |  2021-10-01T01:42:26.174Z INFO  webgrid::services::orchestrator::core::jobs::slot_recycle                 > Recycled slot 029d8d3c-e7c7-42cd-bf40-0175802897da
webgrid-orchestrator |  2021-10-01T01:42:26.174Z INFO  webgrid::services::orchestrator::core::jobs::slot_reclaim                 > Reclaim cycle executed (D: ["029d8d3c-e7c7-42cd-bf40-0175802897da"], O: [])
webgrid-gc      |  2021-10-01T01:42:26.227Z DEBUG webgrid::services::gc::jobs::garbage_collector > Running garbage collector cycle
webgrid-gc      |  2021-10-01T01:42:26.227Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T01:42:26.227Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T01:42:26.229Z INFO  webgrid::services::gc::jobs::garbage_collector > Terminated 0 and purged 0 sessions
webgrid-gc      |  2021-10-01T01:42:26.229Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T01:42:26.230Z INFO  webgrid::services::gc::jobs::garbage_collector > Purged 0 orchestrators
webgrid-proxy   |  2021-10-01T01:43:53.673Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-manager:40001
webgrid-proxy   |  2021-10-01T01:43:53.673Z DEBUG webgrid::services::proxy::jobs::proxy > POST /session -> webgrid-manager:40001
webgrid-proxy   |  2021-10-01T01:43:53.674Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-manager |  2021-10-01T01:43:53.674Z INFO  webgrid::services::manager::jobs::session_handler > Session creation requested from 172.20.0.7:35404
webgrid-manager | {"firstMatch":[{}],"alwaysMatch":{"browserName":"firefox","webgrid:options":{"metadata":{"build":"test-build","name":"test-name"}}}}
webgrid-manager |  2021-10-01T01:43:53.674Z DEBUG webgrid::services::manager::tasks::create_session > Created session object d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74
webgrid-manager |  2021-10-01T01:43:53.674Z DEBUG webgrid::libraries::lifecycle::heart_beat         > Added heartbeat session:d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74:heartbeat.manager
webgrid-orchestrator |  2021-10-01T01:43:53.675Z DEBUG webgrid::libraries::resources::redis                                      > Reusing existing shared connection!
webgrid-orchestrator |  2021-10-01T01:43:53.675Z INFO  webgrid::services::orchestrator::core::tasks::provision::subtasks         > Provisioning d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74
webgrid-orchestrator | Matching Capabilities { strict_file_interactability: None, accept_insecure_certs: None, browser_name: Some("firefox"), browser_version: None, platform_name: None, page_load_strategy: None, proxy: None, timeouts: None, unhandled_prompt_behavior: None, webgrid_options: Some(WebGridOptions { metadata: Some({"build": "test-build", "name": "test-name"}) }), extension_capabilities: {} } against webgrid/node-firefox:v0.5.1-beta firefox::68.7.0esr
webgrid-orchestrator | true true
webgrid-orchestrator |  2021-10-01T01:43:53.675Z DEBUG webgrid::services::orchestrator::provisioners::docker::provisioner        > Creating docker container webgrid-session-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74
webgrid-orchestrator |  2021-10-01T01:43:53.675Z DEBUG bollard::docker                                                           > {"Hostname":"webgrid-session-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74","Env":["REDIS=redis://webgrid-redis/","ID=d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74","RUST_LOG=debug,hyper=warn,warp=warn,sqlx=warn,tower=warn,h2=warn","HOST=webgrid-session-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74","STORAGE_DIRECTORY=/storage","TRACE_ENDPOINT=http://webgrid-otelcol:4317"],"Image":"webgrid/node-firefox:v0.5.1-beta","HostConfig":{"Binds":["webgrid:/storage"],"NetworkMode":"webgrid","AutoRemove":true,"ShmSize":2147483648}}
webgrid-orchestrator |  2021-10-01T01:43:53.675Z DEBUG bollard::uri                                                              > Parsing uri: unix://2f7661722f72756e2f646f636b65722e736f636b/containers/create?name=webgrid-session-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74, client_type: Unix, socket: /var/run/docker.sock
webgrid-orchestrator |  2021-10-01T01:43:53.675Z DEBUG bollard::docker                                                           > unix://2f7661722f72756e2f646f636b65722e736f636b/containers/create?name=webgrid-session-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74
webgrid-orchestrator |  2021-10-01T01:43:53.675Z DEBUG bollard::docker                                                           > request: Request { method: POST, uri: unix://2f7661722f72756e2f646f636b65722e736f636b/containers/create?name=webgrid-session-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74, version: HTTP/1.1, headers: {"content-type": "application/json"}, body: Body(Full(b"{\"Hostname\":\"webgrid-session-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74\",\"Env\":[\"REDIS=redis://webgrid-redis/\",\"ID=d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74\",\"RUST_LOG=debug,hyper=warn,warp=warn,sqlx=warn,tower=warn,h2=warn\",\"HOST=webgrid-session-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74\",\"STORAGE_DIRECTORY=/storage\",\"TRACE_ENDPOINT=http://webgrid-otelcol:4317\"],\"Image\":\"webgrid/node-firefox:v0.5.1-beta\",\"HostConfig\":{\"Binds\":[\"webgrid:/storage\"],\"NetworkMode\":\"webgrid\",\"AutoRemove\":true,\"ShmSize\":2147483648}}")) }
webgrid-orchestrator |  2021-10-01T01:43:53.677Z ERROR webgrid::services::orchestrator::core::tasks::provision                   > Failed to provision node d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74 API responded with a 404 not found: {"message":"No such image: webgrid/node-firefox:v0.5.1-beta"}
webgrid-orchestrator | 
webgrid-orchestrator |  2021-10-01T01:43:53.677Z DEBUG webgrid::services::orchestrator::provisioners::docker::provisioner        > Killing docker container webgrid-node-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74
webgrid-orchestrator |  2021-10-01T01:43:53.677Z DEBUG bollard::uri                                                              > Parsing uri: unix://2f7661722f72756e2f646f636b65722e736f636b/containers/webgrid-node-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74/kill, client_type: Unix, socket: /var/run/docker.sock
webgrid-orchestrator |  2021-10-01T01:43:53.677Z DEBUG bollard::docker                                                           > unix://2f7661722f72756e2f646f636b65722e736f636b/containers/webgrid-node-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74/kill
webgrid-orchestrator |  2021-10-01T01:43:53.677Z DEBUG bollard::docker                                                           > request: Request { method: POST, uri: unix://2f7661722f72756e2f646f636b65722e736f636b/containers/webgrid-node-d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74/kill, version: HTTP/1.1, headers: {"content-type": "application/json"}, body: Body(Empty) }
webgrid-manager |  2021-10-01T01:44:00.916Z DEBUG tonic::transport::service::reconnect              > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-manager |  2021-10-01T01:44:00.916Z DEBUG tonic::transport::service::reconnect              > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-manager | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-orchestrator |  2021-10-01T01:44:01.178Z DEBUG tonic::transport::service::reconnect                                      > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-orchestrator |  2021-10-01T01:44:01.178Z DEBUG tonic::transport::service::reconnect                                      > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-orchestrator | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-proxy   |  2021-10-01T01:44:01.190Z DEBUG tonic::transport::service::reconnect  > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-proxy   |  2021-10-01T01:44:01.190Z DEBUG tonic::transport::service::reconnect  > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-proxy   | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-proxy   |  2021-10-01T01:45:32.399Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.399Z DEBUG webgrid::services::proxy::jobs::proxy > GET / -> webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.400Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-proxy   |  2021-10-01T01:45:32.548Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.548Z DEBUG webgrid::services::proxy::jobs::proxy > GET /build/main.css -> webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.548Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-proxy   |  2021-10-01T01:45:32.549Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.549Z DEBUG webgrid::services::proxy::jobs::proxy > GET /build/main.js -> webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.549Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-proxy   |  2021-10-01T01:45:32.557Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.557Z DEBUG webgrid::services::proxy::jobs::proxy > GET /build/main-d3d7089a.js -> webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.557Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-proxy   |  2021-10-01T01:45:32.635Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.635Z DEBUG webgrid::services::proxy::jobs::proxy > GET /build/_layout-9636e18b.js -> webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.635Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-proxy   |  2021-10-01T01:45:32.636Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.636Z DEBUG webgrid::services::proxy::jobs::proxy > GET /build/index-853220de.js -> webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.636Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-proxy   |  2021-10-01T01:45:32.641Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.641Z DEBUG webgrid::services::proxy::jobs::proxy > GET /favicon.png -> webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.641Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-proxy   |  2021-10-01T01:45:32.647Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.647Z DEBUG webgrid::services::proxy::jobs::proxy > GET /build/api-a91f66fa.js -> webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:45:32.647Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-proxy   |  2021-10-01T01:45:41.186Z DEBUG tonic::transport::service::reconnect  > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-proxy   |  2021-10-01T01:45:41.186Z DEBUG tonic::transport::service::reconnect  > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-proxy   | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-proxy   |  2021-10-01T01:46:33.998Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:46:33.998Z DEBUG webgrid::services::proxy::jobs::proxy > GET /watch/nepomnu? -> webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:46:33.999Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-proxy   |  2021-10-01T01:46:34.236Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:46:34.236Z DEBUG webgrid::services::proxy::jobs::proxy > GET /build/[sessionID]-228b6f09.js -> webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:46:34.236Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-proxy   |  2021-10-01T01:46:34.302Z DEBUG webgrid::services::proxy::jobs::proxy > Attempting connection to webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:46:34.302Z DEBUG webgrid::services::proxy::jobs::proxy > POST /api -> webgrid-api:40007
webgrid-proxy   |  2021-10-01T01:46:34.302Z WARN  hyper::proto::h2                      > Connection header illegal in HTTP/2: connection 
webgrid-proxy   |  2021-10-01T01:46:41.189Z DEBUG tonic::transport::service::reconnect  > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-proxy   |  2021-10-01T01:46:41.189Z DEBUG tonic::transport::service::reconnect  > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-proxy   | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-storage |  2021-10-01T01:47:25.507Z DEBUG webgrid::services::storage::jobs::cleanup    > Running cleanup cycle #3
webgrid-storage |  2021-10-01T01:47:25.508Z DEBUG webgrid::libraries::storage::storage_handler > Used bytes: 0
webgrid-orchestrator |  2021-10-01T01:47:26.174Z INFO  webgrid::services::orchestrator::core::jobs::slot_reclaim                 > Reclaim cycle executed (D: [], O: [])
webgrid-manager |  2021-10-01T01:48:53.717Z WARN  webgrid::services::manager::tasks::create_session > Failed to setup session d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74 SchedulingTimeout
webgrid-manager |  2021-10-01T01:48:53.717Z DEBUG webgrid::libraries::lifecycle::heart_beat         > Removed heartbeat session:d2bd0c8a-d6a5-4b81-9343-bf26c1df6a74:heartbeat.manager
webgrid-manager |  2021-10-01T01:48:53.717Z WARN  webgrid::services::manager::jobs::session_handler > Failed to create session: Timed out while waiting for orchestrator to respond
webgrid-manager |  2021-10-01T01:49:00.916Z DEBUG tonic::transport::service::reconnect              > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-manager |  2021-10-01T01:49:00.916Z DEBUG tonic::transport::service::reconnect              > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-manager | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-proxy   |  2021-10-01T01:49:01.190Z DEBUG tonic::transport::service::reconnect  > reconnect::poll_ready: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: Try again" })) 
webgrid-proxy   |  2021-10-01T01:49:01.190Z DEBUG tonic::transport::service::reconnect  > error: error trying to connect: dns error: failed to lookup address information: Try again 
webgrid-proxy   | OpenTelemetry trace error occurred Other(Custom("failed to export batch ExportFailed(Status(Status { code: Unknown, message: \"transport error: error trying to connect: dns error: failed to lookup address information: Try again\" }))"))
webgrid-storage |  2021-10-01T01:52:25.510Z DEBUG webgrid::services::storage::jobs::cleanup    > Running cleanup cycle #4
webgrid-storage |  2021-10-01T01:52:25.511Z DEBUG webgrid::libraries::storage::storage_handler > Used bytes: 0
webgrid-orchestrator |  2021-10-01T01:52:26.174Z INFO  webgrid::services::orchestrator::core::jobs::slot_recycle                 > Recycled slot 029d8d3c-e7c7-42cd-bf40-0175802897da
webgrid-orchestrator |  2021-10-01T01:52:26.175Z INFO  webgrid::services::orchestrator::core::jobs::slot_reclaim                 > Reclaim cycle executed (D: ["029d8d3c-e7c7-42cd-bf40-0175802897da"], O: [])
webgrid-gc      |  2021-10-01T01:52:26.228Z DEBUG webgrid::services::gc::jobs::garbage_collector > Running garbage collector cycle
webgrid-gc      |  2021-10-01T01:52:26.228Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T01:52:26.228Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T01:52:26.230Z INFO  webgrid::services::gc::jobs::garbage_collector > Terminated 0 and purged 0 sessions
webgrid-gc      |  2021-10-01T01:52:26.230Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T01:52:26.231Z INFO  webgrid::services::gc::jobs::garbage_collector > Purged 0 orchestrators
webgrid-storage |  2021-10-01T01:57:25.513Z DEBUG webgrid::services::storage::jobs::cleanup    > Running cleanup cycle #5
webgrid-storage |  2021-10-01T01:57:25.514Z DEBUG webgrid::libraries::storage::storage_handler > Used bytes: 0
webgrid-orchestrator |  2021-10-01T01:57:26.174Z INFO  webgrid::services::orchestrator::core::jobs::slot_reclaim                 > Reclaim cycle executed (D: [], O: [])
webgrid-storage |  2021-10-01T02:02:25.516Z DEBUG webgrid::services::storage::jobs::cleanup    > Running cleanup cycle #6
webgrid-storage |  2021-10-01T02:02:25.522Z DEBUG webgrid::libraries::storage::storage_handler > Used bytes: 0
webgrid-orchestrator |  2021-10-01T02:02:26.174Z INFO  webgrid::services::orchestrator::core::jobs::slot_reclaim                 > Reclaim cycle executed (D: [], O: [])
webgrid-gc      |  2021-10-01T02:02:26.227Z DEBUG webgrid::services::gc::jobs::garbage_collector > Running garbage collector cycle
webgrid-gc      |  2021-10-01T02:02:26.227Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T02:02:26.228Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T02:02:26.228Z INFO  webgrid::services::gc::jobs::garbage_collector > Terminated 0 and purged 0 sessions
webgrid-gc      |  2021-10-01T02:02:26.228Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T02:02:26.228Z INFO  webgrid::services::gc::jobs::garbage_collector > Purged 0 orchestrators
webgrid-storage |  2021-10-01T02:07:25.525Z DEBUG webgrid::services::storage::jobs::cleanup    > Running cleanup cycle #7
webgrid-storage |  2021-10-01T02:07:25.526Z DEBUG webgrid::libraries::storage::storage_handler > Used bytes: 0
webgrid-orchestrator |  2021-10-01T02:07:26.174Z INFO  webgrid::services::orchestrator::core::jobs::slot_reclaim                 > Reclaim cycle executed (D: [], O: [])
webgrid-storage |  2021-10-01T02:12:25.528Z DEBUG webgrid::services::storage::jobs::cleanup    > Running cleanup cycle #8
webgrid-storage |  2021-10-01T02:12:25.529Z DEBUG webgrid::libraries::storage::storage_handler > Used bytes: 0
webgrid-orchestrator |  2021-10-01T02:12:26.174Z INFO  webgrid::services::orchestrator::core::jobs::slot_reclaim                 > Reclaim cycle executed (D: [], O: [])
webgrid-gc      |  2021-10-01T02:12:26.228Z DEBUG webgrid::services::gc::jobs::garbage_collector > Running garbage collector cycle
webgrid-gc      |  2021-10-01T02:12:26.228Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T02:12:26.228Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T02:12:26.230Z INFO  webgrid::services::gc::jobs::garbage_collector > Terminated 0 and purged 0 sessions
webgrid-gc      |  2021-10-01T02:12:26.230Z DEBUG webgrid::libraries::resources::redis           > Reusing existing shared connection!
webgrid-gc      |  2021-10-01T02:12:26.231Z INFO  webgrid::services::gc::jobs::garbage_collector > Purged 0 orchestrators

Context

Version
Latest docker

Where did the problem occur?

  • 🐳 Docker

idleTimeout set in the capabilities does not override the global IDLE_TIMEOUT

πŸ› Bug description

idleTimeout set in the capabilities does not override the IDLE_TIMEOUT global variable.

🦢 Reproduction steps

Steps to reproduce the behaviour:

  1. Configure idleTimeout to 900 using webgrid:options
  2. Check that the global IDLE_TIMEOUT is set to its default value of 120
  3. Establish a session with WebGrid
  4. Send a Selenium command
  5. Wait

🎯 Expected behaviour

The node should be terminated automatically after 900 seconds.

πŸ› Actual behaviour

The node is killed after 120 seconds instead.
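
As a rough sketch of the expected precedence (the types below are illustrative, not the actual WebGrid internals), the node would prefer the per-session capability and only fall back to the global default:

use std::env;
use std::time::Duration;

// Hypothetical capability value parsed from `webgrid:options`.
struct WebgridOptions {
    idle_timeout: Option<u64>,
}

// Prefer the per-session `idleTimeout` capability; fall back to the global
// IDLE_TIMEOUT environment variable, then to a built-in default.
fn resolve_idle_timeout(options: &WebgridOptions) -> Duration {
    let seconds = options
        .idle_timeout
        .or_else(|| env::var("IDLE_TIMEOUT").ok().and_then(|v| v.parse().ok()))
        .unwrap_or(120);
    Duration::from_secs(seconds)
}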

Version
0.8.0

Where did the problem occur?
List the platform on which you encountered the issue.

  • ☸️ Kubernetes

Which browsers cause the bug?

Additional context

HeartBeat does not consider service health

πŸ› Bug description

Currently, the HeartBeat struct does not take any external status indicators into account. For manager and session heartbeats this can cause issues: a job may become unavailable and compromise the overall service health, yet the heartbeat persists. This bug presumably does not manifest itself at the moment because the only resource that could become unavailable is the database, which the HeartBeat struct itself uses. However, to ease the future addition of resources this behaviour should be fixed; crashes of services, albeit unlikely, can still surface it.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Compromise a job by e.g. letting it crash
  2. Launch the service (manager, session or storage)
  3. The heartbeat stays in the database even though /status reports Degraded

🎯 Expected behaviour

When the JobScheduler is in a degraded state, the heartbeat should be temporarily removed from the database.
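
A minimal sketch of the desired coupling, assuming a hypothetical health probe and storage trait (the real HeartBeat and JobScheduler types will differ):

use std::time::Duration;
use tokio::time::sleep;

enum Health { Operational, Degraded }

// Hypothetical abstraction over the database-backed heartbeat key.
trait HeartbeatStore {
    fn refresh(&self, ttl: Duration);
    fn remove(&self);
}

// Refresh the heartbeat only while the scheduler reports itself healthy;
// drop it as soon as the status becomes Degraded.
async fn heartbeat_loop(store: &dyn HeartbeatStore, status: impl Fn() -> Health) {
    loop {
        match status() {
            Health::Operational => store.refresh(Duration::from_secs(30)),
            Health::Degraded => store.remove(),
        }
        sleep(Duration::from_secs(10)).await;
    }
}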


Context

Version
This bug is not version related.

Where did the problem occur?

  • ☸️ Kubernetes
  • 🐳 Docker
  • πŸ‘¨β€πŸ’» Locally

Which browsers cause the bug?
This bug is not browser related

Failed to pull image "webgrid/api"

πŸ› Bug description

When installing the Helm chart, the images (webgrid-api, webgrid-manager, webgrid-metrics, webgrid-orchestrator, webgrid-proxy) fail to pull and the pods end up in ImagePullBackOff.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Add helm repo - helm repo add webgrid https://webgrid.dev/
  2. Install the chart - helm install example webgrid/webgrid

🎯 Expected behaviour

Images should be downloaded

πŸ“Ί Screenshots

Failed to pull image "webgrid/api": rpc error: code = Unknown desc = Error response from daemon: manifest for webgrid/api:latest not found: manifest unknown: manifest unknown



Context

Version
On which version of WebGrid was the bug present?

Where did the problem occur?
List the platform on which you encountered the issue.

  • ☸️ Kubernetes - Minikube

Additional context
Do I have to change any versions in the Helm chart?

404 links in readme

πŸ› Bug description

The shields at the top of the readme are linking to pages that don't exist. The relevant badges are for the code of conduct, license, and "maintained".

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Click all the badges in the readme
  2. See 404 errors

🎯 Expected behaviour

Badges should link to pages that exist. Alternatively, you could just display the badges without a link.

Redis disconnect yields unexpected ServiceRunner termination

πŸ› Bug description

When the Redis connection is interrupted, the ServiceRunner loop just terminates with an "Ok" status code.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Launch the collector locally and connect to a remote redis through kubectl port-forward
  2. Terminate the redis in K8s
  3. Watch as "unexpected EOF" errors are thrown
  4. See error

🎯 Expected behaviour

It should log the error and either try reconnecting or be restarted by jatsl!

πŸ“Ί Screenshots

 2021-08-17T13:34:57.877Z DEBUG webgrid::harness::redis > unexpected end of file
 2021-08-17T13:34:57.877Z DEBUG webgrid::harness::redis > unexpected end of file
 2021-08-17T13:34:57.885Z ERROR webgrid::library::communication::implementation::redis::queue_provider > Encountered error reading from redis stream unexpected end of file
 2021-08-17T13:34:57.885Z ERROR webgrid::library::communication::implementation::redis::queue_provider > Encountered error reading from redis stream unexpected end of file
 2021-08-17T13:34:57.893Z INFO  jatsl::scheduler                                                       > Finished         ServiceRunner(SchedulingWatcherService)
 2021-08-17T13:34:57.893Z INFO  jatsl::scheduler                                                       > Finished         ServiceRunner(CreationWatcherService)
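
A sketch of the retry behaviour described above; the actual ServiceRunner/jatsl integration is assumed, so the wrapper below is purely illustrative:

use std::time::Duration;
use tokio::time::sleep;

// Restart the service whenever it exits with an error (e.g. a dropped Redis
// connection) instead of letting the runner finish silently with Ok.
async fn run_with_retry<F, Fut>(mut service: F)
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<(), Box<dyn std::error::Error>>>,
{
    let mut backoff = Duration::from_secs(1);
    loop {
        match service().await {
            Ok(()) => break, // clean, intentional shutdown
            Err(e) => {
                eprintln!("service failed: {e}, retrying in {backoff:?}");
                sleep(backoff).await;
                backoff = (backoff * 2).min(Duration::from_secs(60));
            }
        }
    }
}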

Context

Version
On the architecture rework branch head 🙂

Great dependency bump of 2021

Since the project was last updated, a number of its dependencies have had major releases. Update all of them 😉

  • Core Rust dependencies (mostly as a result of Tokio 1.0, might cause some code breaks)
  • rust-musl-builder (now has root building support natively 🥳)
  • API NodeJS dependencies

Version mismatch between browser and WebDriver

πŸ› Bug description

The WebDriver install script currently uses a pinned version for the driver but installs the latest version of the selected browser. This is known to cause issues due to incompatibilities, especially with Google Chrome.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Wait until a new Browser version is released
  2. Attempt to build the node image (chrome is more likely to cause issues)
  3. Run the grid locally in docker
  4. Launch a new session
  5. Watch it burn on startup 🔥

🎯 Expected behaviour

When selecting a browser version the install script should always download a matching WebDriver executable.


Context

Version
This bug is not version dependent.

Where did the problem occur?

  • ☸️ Kubernetes
  • 🐳 Docker
  • πŸ‘¨β€πŸ’» Locally

Which browsers cause the bug?
Chrome is more likely to break with a version mismatch, but it is expected to happen with Firefox as well.

Additional context
A related issue is that there are seemingly no past binary releases of Google Chrome, so we can't pin the browser to prevent issues. To remedy this, there has to be a reliable method of choosing a chromedriver version that matches the latest Chrome.
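
One possible approach is ChromeDriver's LATEST_RELEASE_<major> metadata files, which map a Chrome major version to a compatible driver release. A sketch, assuming the reqwest crate (blocking feature) and that the endpoint covers the installed Chrome version:

// Look up the ChromeDriver version matching a given Chrome major version (e.g. "94").
fn matching_chromedriver(chrome_major: &str) -> Result<String, reqwest::Error> {
    let url = format!(
        "https://chromedriver.storage.googleapis.com/LATEST_RELEASE_{chrome_major}"
    );
    reqwest::blocking::get(url)?.text()
}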

Support for OpenTelemetry Tracing

Is your feature request related to a problem? Please describe.
Recent networking issues local to our cluster deployment brought up the requirement for proper request tracing. This would reduce the workload of collecting all the timestamps and events related to e.g. session creation.

Describe the solution you'd like
Full support for OpenTelemetry traces on session creation and requests to active sessions.
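
A minimal sketch of what instrumenting session creation could look like with the tracing crate, which can export spans through an OpenTelemetry pipeline; the function and field names are illustrative:

use tracing::{info, instrument};

// Every call opens a span carrying the session id; events logged inside it
// end up attached to that span when exported via OpenTelemetry.
#[instrument]
async fn create_session(session_id: String) {
    info!("scheduling session");
    // ... provisioning, startup, readiness checks ...
    info!("session operational");
}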

CI Build not running for PRs from forks

πŸ› Bug description

The CI build pipeline is not executed for PRs from forks.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Create a fork
  2. Make some changes
  3. Send a PR
  4. Watch as only the pull_request (Code Validation) pipeline runs and not the push (Build) pipeline

🎯 Expected behaviour

Both pipelines should trigger, although the build pipeline needs a slug like pr10 instead of the commit hash.

Pipeline does not check Chrome sessions

πŸ› Bug description

When running the test pipeline, it only checks whether Firefox sessions work. Chrome is not checked, so e.g. version mismatches between the WebDriver and Chrome will not be caught (as happened a few hours ago).

To resolve this, the Firefox test should be duplicated and executed for Chrome (any supported browser, really) as well.

Add project logo

A logo improves a project's discoverability and can be used in many places like the documentation, the GitHub repository, slides, etc.

I'm searching for your cool logo idea! Don't be shy, just write a comment if you have one 🙂

Workspace refactoring broke documentation links

Refactoring the core project into a workspace broke countless documentation links 😒

 Documenting library v0.1.0 (/home/rust/src/library)
warning: unresolved link to `super::domain`
 --> library/src/lib.rs:7:35
  |
7 | //! extracted into the [`domain`](super::domain) module.
  |                                   ^^^^^^^^^^^^^ no item named `super` in scope
  |
  = note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default

warning: unresolved link to `ConsumerIdentifier`
  --> library/src/communication/event/queue_provider.rs:13:5
   |
13 | /     /// Subscribes to new notifications on a given queue joining the specified [`ConsumerGroup`](ConsumerGroupDescriptor)
14 | |     /// with the given [`ConsumerIdentifier`] or creates it if it does not exist.
   | |_________________________________________________________________________________^
   |
   = note: the link appears in this line:

           with the given [`ConsumerIdentifier`] or creates it if it does not exist.
                           ^^^^^^^^^^^^^^^^^^^^
   = note: no item named `ConsumerIdentifier` in scope
   = help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`

warning: unresolved link to `self::library::communication::event::QueueEntry`
  --> library/src/communication/implementation/redis/queue_entry.rs:12:54
   |
12 | /// Redis based implementation of the [`QueueEntry`](crate::library::communication::event::QueueEntry) trait
   |                                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `library` in module `library`

    Checking domain v0.1.0 (/home/rust/src/domain)
 Documenting domain v0.1.0 (/home/rust/src/domain)
warning: `library` (lib doc) generated 3 warnings
    Checking harness v0.1.0 (/home/rust/src/harness)
 Documenting harness v0.1.0 (/home/rust/src/harness)
warning: unresolved link to `self::module::api`
 --> domain/src/discovery.rs:8:17
  |
8 |     /// [`Api`](crate::module::api) instance
  |                 ^^^^^^^^^^^^^^^^^^ no item named `module` in module `domain`
  |
  = note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default

warning: unresolved link to `self::module::node`
  --> domain/src/discovery.rs:10:18
   |
10 |     /// [`Node`](crate::module::node) instance
   |                  ^^^^^^^^^^^^^^^^^^^ no item named `module` in module `domain`

warning: unresolved link to `super::super::library::communication::event::Notification`
 --> domain/src/event/mod.rs:1:38
  |
1 | //! Domain specific [`Notification`](super::super::library::communication::event::Notification) structures
  |                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `super` in module `domain`

warning: unresolved link to `self::library::communication::event::QueueDescriptorExtension`
  --> domain/src/event/provisioner.rs:23:67
   |
23 | /// It is intended to be used with a [`QueueDescriptorExtension`](crate::library::communication::event::QueueDescriptorExtension)
   |                                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `library` in module `domain`

warning: unresolved link to `self::domain::webdriver::CapabilitiesRequest`
  --> domain/src/event/provisioner.rs:30:37
   |
30 |     /// Raw [`CapabilitiesRequest`](crate::domain::webdriver::CapabilitiesRequest) json string used for scheduling.
   |                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `domain` in module `domain`

warning: unresolved link to `self::domain::webdriver::CapabilitiesRequest`
  --> domain/src/event/session/created.rs:19:37
   |
19 |     /// Raw [`CapabilitiesRequest`](crate::domain::webdriver::CapabilitiesRequest) json string provided by the end-user
   |                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `domain` in module `domain`

warning: unresolved link to `self::library::communication::discovery`
  --> domain/src/event/session/operational.rs:12:20
   |
12 | /// [discoverable](crate::library::communication::discovery), and can
   |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `library` in module `domain`

warning: unresolved link to `self::domain::webdriver::Capabilities`
  --> domain/src/event/session/operational.rs:19:30
   |
19 |     /// Raw [`Capabilities`](crate::domain::webdriver::Capabilities) json string
   |                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `domain` in module `domain`

warning: unresolved link to `Heart`
  --> domain/src/event/session/terminated.rs:36:11
   |
36 |     /// [`Heart`] provided by module died
   |           ^^^^^ no item named `Heart` in scope
   |
   = help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`

warning: unresolved link to `super::super::library::communication::request::Request`
 --> domain/src/request/mod.rs:1:33
  |
1 | //! Domain specific [`Request`](super::super::library::communication::request::Request) structures
  |                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `super` in module `domain`

warning: unresolved link to `self::domain::webdriver::CapabilitiesRequest`
   --> domain/src/webdriver/capabilities.rs:238:33
    |
238 | /// Raw [`CapabilitiesRequest`](crate::domain::webdriver::CapabilitiesRequest) json string containing all fields
    |                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `domain` in module `domain`

warning: unresolved link to `self::module::api`
 --> domain/src/discovery.rs:8:17
  |
8 |     /// [`Api`](crate::module::api) instance
  |                 ^^^^^^^^^^^^^^^^^^ no item named `module` in module `domain`

    Checking modules v0.1.0 (/home/rust/src/modules)
 Documenting modules v0.1.0 (/home/rust/src/modules)
warning: public documentation for `RedisCommunicationFactory` links to private item `MonitoredRedisFactory`
  --> harness/src/redis/factory.rs:95:38
   |
95 | /// Communication factory based on [`MonitoredRedisFactory`]
   |                                      ^^^^^^^^^^^^^^^^^^^^^ this item is private
   |
   = note: `#[warn(rustdoc::private_intra_doc_links)]` on by default
   = note: this link will resolve properly if you pass `--document-private-items`

warning: unresolved link to `self::library::communication::request::Responder`
  --> harness/src/service.rs:23:60
   |
23 |     /// function would return an instance of [`Responder`](crate::library::communication::request::Responder)
   |                                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `library` in module `harness`
   |
   = note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default

warning: public documentation for `RedisCommunicationFactory` links to private item `MonitoredRedisFactory`
  --> harness/src/redis/factory.rs:95:38
   |
95 | /// Communication factory based on [`MonitoredRedisFactory`]
   |                                      ^^^^^^^^^^^^^^^^^^^^^ this item is private
   |
   = note: this link will resolve properly if you pass `--document-private-items`

warning: unresolved link to `self::library::communication::request::Responder`
  --> harness/src/service.rs:23:60
   |
23 |     /// function would return an instance of [`Responder`](crate::library::communication::request::Responder)
   |                                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `library` in module `harness`

warning: `domain` (lib doc) generated 12 warnings
warning: `harness` (lib doc) generated 4 warnings
warning: unresolved link to `self::domain::request::ProvisionerMatchRequest`
 --> modules/src/orchestrator/services/matching/mod.rs:1:67
  |
1 | //! Structures related to processing [`ProvisionerMatchRequests`](crate::domain::request::ProvisionerMatchRequest)
  |                                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no item named `domain` in module `modules`
  |
  = note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default

warning: `modules` (lib doc) generated 1 warning
 Documenting webgrid v0.1.0 (/home/rust/src/webgrid)
    Finished release [optimized] target(s) in 25.26s
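
Most of the warnings stem from link paths that still assume a single crate. After the split, items in a sibling crate have to be linked through that crate's name (or through a path that is valid inside the current crate). A hedged before/after sketch:

// Before (broken after the workspace split):
/// Redis based implementation of the [`QueueEntry`](crate::library::communication::event::QueueEntry) trait
struct Broken;

// After: `library` is its own crate and a dependency, so link through its name.
/// Redis based implementation of the [`QueueEntry`](library::communication::event::QueueEntry) trait
struct Fixed;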

Auto-deletion of old events

Is your feature request related to a problem? Please describe.
When the grid is shut down with pending events for whatever reason and then restarted at a later point in time, those sessions will still be scheduled even though the clients that requested them are long gone.

Describe the solution you'd like
Add a filter to the managers to discard (and terminate) session creation events which are older than a configurable time interval.
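
A sketch of such an age filter; the event type and the timestamp field are assumptions about what the manager has available:

use std::time::{Duration, SystemTime};

// Hypothetical session-creation event carrying the time it was enqueued.
struct SessionCreationEvent {
    requested_at: SystemTime,
}

// Discard (and subsequently terminate) events older than the configured interval.
fn is_stale(event: &SessionCreationEvent, max_age: Duration) -> bool {
    event
        .requested_at
        .elapsed()
        .map(|age| age > max_age)
        .unwrap_or(false)
}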

Database access through proxy

Is your feature request related to a problem? Please describe.
When running the grid in K8s and adding external orchestrators/nodes, accessing the database is problematic.

Describe the solution you'd like
Being able to provide the proxy URL as the database endpoint, with either auto-detection or a custom protocol in the URL. This could be realized by routing the TCP traffic through a WebSocket.

Describe alternatives you've considered
An alternative would be to expose the database with a NodePort but that requires manual configuration of the IP and may break eventually/unexpectedly.

Sessions not timing out on startup with database unavailable

πŸ› Bug description

When a new session is created but the database is unavailable it does not terminate after the configured session timeout period.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Modify the docker orchestrator to pass a bogus/unreachable Redis URL to the session
  2. Build and start the grid in docker
  3. Launch a new session
  4. Wait for the default timeout of 60 seconds
  5. The session does not terminate; it waits indefinitely until the database becomes available again

🎯 Expected behaviour

The session should bail out after a specified startup period, which could either be a new timeout or re-use the existing one.
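
A sketch of bailing out during startup, assuming the session wraps its initial database connection attempt in a deadline (tokio is used purely for illustration):

use std::time::Duration;
use tokio::time::timeout;

// Abort startup if the database cannot be reached within the startup window,
// instead of retrying indefinitely.
async fn connect_with_deadline<F, Fut, C, E>(connect: F, window: Duration) -> Result<C, String>
where
    F: FnOnce() -> Fut,
    Fut: std::future::Future<Output = Result<C, E>>,
    E: std::fmt::Display,
{
    match timeout(window, connect()).await {
        Ok(Ok(connection)) => Ok(connection),
        Ok(Err(e)) => Err(format!("database connection failed: {e}")),
        Err(_) => Err(format!("database unreachable after {window:?}, bailing out")),
    }
}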


Context

Version
This bug is not version dependent.

Where did the problem occur?

  • ☸️ Kubernetes
  • 🐳 Docker
  • πŸ‘¨β€πŸ’» Locally

Which browsers cause the bug?
Not browser specific.

Additional context
Although unlikely, this bug will probably manifest itself when a node has limited or misconfigured network connectivity. A clean failure would indicate the issue to the administrator more clearly during configuration.

Allow docker volume mounts for session containers

Is your feature request related to a problem? Please describe.
A user attempted to download a PDF file and expected it to be locally available on-disk.

Describe the solution you'd like
It should be possible for session containers to have user-defined volume mounts.

Describe alternatives you've considered
Providing a directory whose contents get uploaded to the S3 backend upon session termination seems like a more generic and clean solution. It is a bit more involved, but it would make it possible to retrieve files from K8s sessions without relying on ReadWriteMany storage classes.

Additional context
See TilBlechschmidt/ParallelSeleniumTest#2

MongoDB & API/Collector not coming up

πŸ› Bug description

The current Helm chart mistakenly does not contain the API service. Additionally, the MongoDB instance is not able to start for some reason.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Install the latest Helm chart version
  2. Look at the pods crashing

🎯 Expected behaviour

The database should come up and the API should be deployed.


Context

Version
Latest.

Where did the problem occur?

  • ☸️ Kubernetes

Additional context
The API service is not in the chart and the database crash seems to be related to the volume mount.

Docker container does not have webgrid in PATH

πŸ› Bug description

When you docker run the core Docker container, it errors with the following output:

docker run webgrid/core:v0.1.3-beta
Unable to find image 'webgrid/core:v0.1.3-beta' locally
v0.1.3-beta: Pulling from webgrid/core
2d27e6858d13: Pull complete 
Digest: sha256:2787f2be59c33fb3a2e3d260ab41dc6dd6481218c5e4b10b98986f6c7018a091
Status: Downloaded newer image for webgrid/core:v0.1.3-beta
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "webgrid": executable file not found in $PATH: unknown.
ERRO[0002] error waiting for container: context canceled

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. docker run webgrid/core:v0.1.3-beta
  2. See console output

🎯 Expected behaviour

I assume it's supposed to start the webgrid CLI (granted, it would need arguments to run a specific function, but it should still find the webgrid CLI).

If I build the Docker container using the Makefile, the locally built container works fine.

Context

Version
I tried all official Docker containers.

I have also used the Helm chart, which has the same outcome:

resource "helm_release" "webgrid" {
  depends_on = [kubernetes_namespace.selenium]
  name       = "webgrid"
  namespace  = kubernetes_namespace.selenium.metadata[0].name
  repository = "https://webgrid.dev/"
  chart      = "webgrid"
}

Where did the problem occur?
List the platform on which you encountered the issue.

  • ☸️ Kubernetes
  • 🐳 Docker

Required metadata properties

Is your feature request related to a problem? Please describe.
When running a grid in-house with multiple teams, having statistics on which team uses how many resources could be helpful.

Describe the solution you'd like
Requiring clients to submit certain metadata properties, e.g. project, would provide a reasonable solution.

Describe alternatives you've considered
Adding explicit identification keys, using IP-based identification, and guesswork based on other metadata. All of those options seem inferior and more complex than simply making a single metadata value mandatory.
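
A sketch of how a mandatory metadata key could be enforced while parsing capabilities; the capability layout shown here is an assumption (serde_json used for illustration):

use serde_json::Value;

// Reject session requests that do not declare the required metadata keys,
// e.g. a `project` entry under `webgrid:options.metadata`.
fn validate_required_metadata(capabilities: &Value, required: &[&str]) -> Result<(), String> {
    let metadata = capabilities
        .pointer("/webgrid:options/metadata")
        .ok_or("missing webgrid:options.metadata")?;
    for key in required {
        if metadata.get(*key).is_none() {
            return Err(format!("required metadata property '{key}' is missing"));
        }
    }
    Ok(())
}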

Proxy does not fetch upstreams on startup

πŸ› Bug description

On startup, the proxy component does not query the Redis server for currently available upstreams but instead relies on the components broadcasting their next heartbeat. Despite this, the proxy reports itself as Operational through its status server. This will inevitably cause issues when dynamically scaling it up in cluster environments, as the proxy is not fully operational until it has loaded all upstreams.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Deploy the grid to Kubernetes
  2. Reduce the replica count of the proxy to 0
  3. Increase the replica count back to 1
  4. Immediately send a request to the proxy
  5. It reports No available upstream

🎯 Expected behaviour

The expected behaviour would be for the proxy to either stay in Startup until it is confident that it has collected all upstreams, or to fetch them from the database on startup. The latter would be preferable as it provides a faster startup time.
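
A sketch of the preferred variant: populate the routing table from the database before the status endpoint flips to Operational (the storage trait is an assumption):

use std::collections::HashMap;

// Hypothetical in-memory routing table: session/manager id -> upstream address.
type RoutingTable = HashMap<String, String>;

trait UpstreamStore {
    fn list_upstreams(&self) -> RoutingTable;
}

// Load the currently known upstreams first, then report Operational;
// heartbeats only keep the table up to date afterwards.
fn startup(store: &dyn UpstreamStore) -> RoutingTable {
    let table = store.list_upstreams();
    println!("loaded {} upstreams, reporting Operational", table.len());
    table
}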


Context

Version
This bug is not version related.

Where did the problem occur?

  • ☸️ Kubernetes
  • 🐳 Docker
  • πŸ‘¨β€πŸ’» Locally

Which browsers cause the bug?
This bug is not browser specific.

Wrong description for subcommands

πŸ› Bug description

Currently, the descriptions of the subcommands contain text derived from sub-objects that have been merged into each module's root options object. See the screenshot below.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Run cargo run -- help

🎯 Expected behaviour

Each module should have a clear and concise description of its function in the help text.

πŸ“Ί Screenshots

SUBCOMMANDS:
    api             Options regarding the permanent storage backend
    collector       Options regarding the permanent storage backend
    gangway         Options for connection to a storage backend
    help            Prints this message or the help of the given subcommand(s)
    manager         Options for connecting to the Redis server
    node            Options for connection to a storage backend
    orchestrator    Variants of provisioners
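
With a structopt/clap-style derive, the fix is most likely to give each subcommand variant its own doc comment (or explicit about text) instead of inheriting the description of a merged options struct. A sketch with illustrative descriptions and placeholder option structs:

use structopt::StructOpt;

// Placeholder option structs; the real ones hold each module's flags.
#[derive(StructOpt)] struct ApiOptions {}
#[derive(StructOpt)] struct CollectorOptions {}
#[derive(StructOpt)] struct OrchestratorOptions {}

// The doc comment on each variant becomes that subcommand's help text,
// independent of whatever shared options are flattened into it.
#[derive(StructOpt)]
enum Module {
    /// Serve the GraphQL API and playground
    Api(ApiOptions),
    /// Collect session metadata and video recordings
    Collector(CollectorOptions),
    /// Provision browser containers or pods
    Orchestrator(OrchestratorOptions),
}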

Docker orchestrator sometimes goes above limit

πŸ› Bug description

When creating a large number of sessions (e.g. 30) and closely observing the Docker orchestrator, it sometimes deploys exactly twice the number of containers it is supposed to create.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. make install
  2. cargo run -- http://127.0.0.1:8080/ 35 firefox (with this)
  3. Observe number of containers closely
  4. See that sometimes 10 instead of 5 containers are started

🎯 Expected behaviour

The orchestrator should never go above its limit.
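
If the duplication comes from the same provisioning event being handled twice (an assumption), one sketch of keeping provisioning idempotent is to check in-flight session ids and the configured limit before creating a container:

use std::collections::HashSet;

// Track sessions that already have a container scheduled so a replayed or
// duplicated provisioning event cannot create a second one.
struct Provisioner {
    in_flight: HashSet<String>,
    limit: usize,
}

impl Provisioner {
    fn try_provision(&mut self, session_id: &str) -> bool {
        if self.in_flight.contains(session_id) || self.in_flight.len() >= self.limit {
            return false;
        }
        self.in_flight.insert(session_id.to_owned());
        true // the caller may now create the container
    }
}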

Empty response when a manager is not available

πŸ› Bug description

When the proxy selects a manager and this manager is unreachable for whatever reason, it replies to the client with an empty response.

🦢 Reproduction steps

Steps to reproduce the behavior:

  1. Deploy a grid to docker
  2. Kill the manager container
  3. Use redis-client to create a dummy manager that does not exist
  4. Send a new-session request to the grid

🎯 Expected behaviour

The proxy server is expected to check the availability of upstreams before forwarding data to them and to fall back to another upstream if one isn't available. If no upstream is available, a reasonable error message should be returned.
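
A sketch of probing upstream reachability before forwarding, falling back to the next candidate and returning an explicit error when none respond (a plain TCP connect with a short timeout stands in for a real health check):

use std::time::Duration;
use tokio::net::TcpStream;
use tokio::time::timeout;

// Return the first upstream that accepts a connection, or an error message
// the proxy can send to the client instead of an empty response.
async fn pick_upstream(candidates: &[String]) -> Result<String, String> {
    for addr in candidates {
        if let Ok(Ok(_)) = timeout(Duration::from_secs(1), TcpStream::connect(addr.as_str())).await {
            return Ok(addr.clone());
        }
    }
    Err("no available upstream (all managers unreachable)".to_string())
}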


Context

Version
This bug is not version related.

Where did the problem occur?

  • ☸️ Kubernetes
  • 🐳 Docker
  • πŸ‘¨β€πŸ’» Locally

Which browsers cause the bug?
This bug is not browser related

Additional context
This problem is not limited to managers and can be observed with the API and Storage components as well.

Tracking Issue: Architecture overhaul feature parity

Since the project has reached a state of functional maturity, it has become time to put a focus on code quality, maintainability, and most importantly testability. Most of the new architecture has already been discussed and tested internally. However, implementation is still in progress. This issue is intended to be a collection of small tasks that are required to reach feature parity with the current release version.

Any previously opened issues which are resolved by the refactoring and reimplementation of core services may be closed with a reference to this issue. New bugs and issues discovered in the alpha version of the new architecture will receive their own issue.

Merge Milestones

  • Deploy a browser (punch through the whole stack once, basic functionality check)
    • Docker
    • Kubernetes
  • Update Helm Chart and Docker Compose file to match new architecture and use StatefulSets
    • Parametrize update strategy for StatefulSets
  • Re-implement tracing
    • Basic event tracing
    • Environmental information
    • Control over sampling
  • Add video recording capability based on S3 backed (but abstracted) storage
  • Extend capabilities with additional switches
  • Add timed and global session metadata storage
  • Extend/fix the dashboard
    • Use new metadata API
    • Add search functionality
  • Add/update documentation
    • Architecture
    • Technical overview
    • Features

Enable conditional profiling of Firefox

Is your feature request related to a problem? Please describe.
Sometimes Firefox takes a long time to start up on certain hardware and container solutions. To debug such scenarios more efficiently, a performance profile of the browsers would be useful.

Describe the solution you'd like
A flag, preferably passed through the requested capabilities, which adds the following environment variables to Firefox:

.env("MOZ_PROFILER_STARTUP", "1")
.env("MOZ_PROFILER_SHUTDOWN", "/profile.json")

Additionally, the created file should be uploaded to the storage provider used for video recording. Making this as generic as possible would be of interest as it would allow arbitrary files stored within the container to be uploaded on shutdown (e.g. files downloaded during a test run).

An example profile can be found here: https://share.firefox.dev/3jGOd8u
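
A sketch of gating those variables behind a capability-driven flag; the flag name and the Command-based launch are assumptions mirroring the snippet above:

use std::process::Command;

// Only inject the profiler variables when the session requested profiling,
// e.g. via a hypothetical `webgrid:options` switch.
fn firefox_command(profile_requested: bool) -> Command {
    let mut cmd = Command::new("firefox");
    if profile_requested {
        cmd.env("MOZ_PROFILER_STARTUP", "1")
            .env("MOZ_PROFILER_SHUTDOWN", "/profile.json");
    }
    cmd
}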
