docker-compose-wait's Introduction

docker-compose-wait


A small command-line utility to wait for other docker images to be started while using docker-compose (or Kubernetes or docker stack or whatever).

It permits waiting for:

  • a fixed amount of seconds
  • until a TCP port is open on a target image
  • until a file or directory is present on the local filesystem

Usage

This utility should be used in the docker build process and launched before your application starts.

For example, suppose your application "MySuperApp" uses MongoDB, Postgres and MySql (wow!) and you want to be sure that, when it starts, all of those systems are available. Then simply customize your Dockerfile this way:

## Use whatever base image
FROM alpine

## Add the wait script to the image
COPY --from=ghcr.io/ufoscout/docker-compose-wait:latest /wait /wait

## Alternatively, you can download the executable directly from the GitHub releases. E.g.:
#  ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.11.0/wait /wait
#  RUN chmod +x /wait

## Add your application to the docker image
ADD MySuperApp.sh /MySuperApp.sh

## Launch the wait tool and then your application
CMD /wait && /MySuperApp.sh

Done! The image is ready.

Now let's modify the docker-compose.yml file:

version: "3"

services:
  mongo:
    image: mongo:3.4
    hostname: mongo
    ports:
      - "27017:27017"

  postgres:
    image: "postgres:9.4"
    hostname: postgres
    ports:
      - "5432:5432"

  mysql:
    image: "mysql:5.7"
    hostname: mysql
    ports:
      - "3306:3306"

  mySuperApp:
    image: "mySuperApp:latest"
    hostname: mySuperApp
    environment:
      WAIT_HOSTS: postgres:5432, mysql:3306, mongo:27017

When docker-compose is started (or Kubernetes or docker stack or whatever), your application will be started only when all the host:port pairs in the WAIT_HOSTS variable are available. The WAIT_HOSTS environment variable is not mandatory; if it is not declared, the script executes without waiting.

If you want to use the script directly in docker-compose.yml instead of the Dockerfile, please note that the command: configuration option is limited to a single command, so you should wrap it in a sh call. For example:

command: sh -c "/wait && /MySuperApp.sh"

This is discussed further here and here.

Usage in images that do not have a shell

When using distroless images or building images FROM scratch, it is common not to have sh available. In this case, the command for wait to run must be specified explicitly. The command will be invoked with any arguments configured for it and will completely replace the wait process in your container via a syscall to exec. Because there is no shell to expand arguments, wait must be the ENTRYPOINT of the container and has to be specified in exec form. Note also that, with no shell to perform expansion, arguments like * must be interpreted by the program that receives them.

FROM golang

COPY myApp /app
WORKDIR /app
RUN go build -o /myApp -ldflags '-s -w -extldflags -static' ./...

## ----------------

FROM scratch

COPY --from=ghcr.io/ufoscout/docker-compose-wait:latest /wait /wait

COPY --from=0 /myApp /myApp
ENV WAIT_COMMAND="/myApp arg1 argN..."
ENTRYPOINT ["/wait"]

Additional configuration options

The behaviour of the wait utility can be configured with the following environment variables (a combined example follows the list):

  • WAIT_LOGGER_LEVEL: the output logger level. Valid values are: debug, info, error, off. The default is debug.
  • WAIT_HOSTS: comma-separated list of host:port pairs for which you want to wait.
  • WAIT_PATHS: comma-separated list of paths (i.e. files or directories) on the local filesystem for which you want to wait until they exist.
  • WAIT_COMMAND: command and arguments to run once the waiting completes. The invoked command will completely replace the wait process. The default is none.
  • WAIT_TIMEOUT: max number of seconds to wait for all the hosts/paths to become available before failing. The default is 30 seconds.
  • WAIT_HOST_CONNECT_TIMEOUT: timeout, in seconds, of a single TCP connection to a remote host before a new connection is attempted. The default is 5 seconds.
  • WAIT_BEFORE: number of seconds to wait (sleep) before starting to check for the hosts/paths availability. The default is 0 seconds.
  • WAIT_AFTER: number of seconds to wait (sleep) once all the hosts/paths are available. The default is 0 seconds.
  • WAIT_SLEEP_INTERVAL: number of seconds to sleep between retries. The default is 1 second.
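
A minimal combined sketch, run directly from a shell (all hosts, paths, and values here are illustrative):

# Sketch: configure wait via environment variables, then run it.
export WAIT_HOSTS="postgres:5432,redis:6379"   # wait for two TCP ports
export WAIT_PATHS="/tmp/db-initialized"        # and for a file to exist
export WAIT_TIMEOUT=60                         # fail after 60 seconds
export WAIT_SLEEP_INTERVAL=2                   # retry every 2 seconds
export WAIT_LOGGER_LEVEL=info                  # less verbose than the debug default
/wait && echo "all hosts and paths are available"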

Supported architectures

From release 2.11.0, the following executables are available for download:

  • wait: This is the executable intended for Linux x64 systems
  • wait_x86_64: This is the very same executable as wait
  • wait_aarch64: This is the executable to be used for aarch64 architectures
  • wait_arm7: This is the executable to be used for arm7 architectures

All executables are built with MUSL for maximum portability.

To use any of these executables, simply replace the executable name in the download link: https://github.com/ufoscout/docker-compose-wait/releases/download/{{VERSION}}/{{executable_name}}
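
For example, a sketch fetching the aarch64 executable of release 2.11.0 (the version is pinned here purely for illustration, and wget is assumed to be available):

# Download the aarch64 build and make it executable.
wget -O /wait https://github.com/ufoscout/docker-compose-wait/releases/download/2.11.0/wait_aarch64
chmod +x /wait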

Docker images

Official docker images based on scratch can be found here: https://github.com/users/ufoscout/packages/container/package/docker-compose-wait

Using on other systems

The simplest way of getting the wait executable is to download it from

https://github.com/ufoscout/docker-compose-wait/releases/download/{{VERSION}}/wait

or to use one of the pre-built docker images.

If you need it for an architecture for which a pre-built file is not available, you should clone this repository and build it for your target.

As it has no external dependencies and is written in the mighty Rust programming language, the build process is a simple cargo build --release (you do, of course, need to install the Rust compiler first).

For everything involving cross-compilation, you should take a look at Cross.

For example, to build for a Raspberry Pi, all you have to do is:

  1. Install the latest stable rust toolchain using rustup
  2. Correctly configure Docker on your machine
  3. Open a terminal and type:
cargo install cross
cross build --target=armv7-unknown-linux-musleabihf --release

Use your shiny new executable on your Raspberry Pi!

Notes

This utility was explicitly written to be used with docker-compose; however, it can be used everywhere since it has no dependencies on docker.

docker-compose-wait's People

Contributors

anthonyblond, dependabot-preview[bot], dependabot[bot], edumco, fossabot, jonasbn, ryantate13, ufoscout, wagnerpmc, zhangt2333


docker-compose-wait's Issues

[BUG] WAIT_COMMAND does not work correctly

Hello,

I noticed that the executable's name is also inserted into the argument list of that same command:

let argv = shell_words::split(&command_string)?;
Ok(Some((
    Command {
        program: argv[0].clone(),
        argv,
    },
    command_string,
)))

In this way, a configured "hello man" command turns into "hello hello man".
To prevent this from happening, we need to remove the first element of the vector from the argv variable, in the same place.

Quick fix:

let mut argv = shell_words::split(&command_string)?;

Ok(Some((
    Command {
        program: argv.remove(0),
        argv,
    },
    command_string,
)))

If it's a hassle, I can create a PR for a quick fix - just let me know.

Configure the sleep time

Checking the code, I saw that it's hardcoded to 1 second. I think it'd be easy and useful to make the sleep time configurable via an environment variable!

Great work! Thanks

Added wait script but not working

Check my Dockerfile and docker-compose: config-server is not waiting for MYSQL-SERVER to come up.

#Dockerfile

FROM openjdk:8
EXPOSE 8888
WORKDIR usr/jpop
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait wait
RUN chmod +x wait
COPY ./config-server/target/config-server-0.0.1-SNAPSHOT.jar config-server-0.0.1-SNAPSHOT.jar
COPY config-store config-store
CMD wait && java -jar config-server-0.0.1-SNAPSHOT.jar

#Docker Compose file

version: '3'
services:
  MYSQL-SERVER:
    image: mysql:8.0.20
    container_name: MYSQL-SERVER
    environment:
    - MYSQL_ROOT_PASSWORD=root
    volumes:
    - ./docker:/docker-entrypoint-initdb.d
    ports:
    - 3307:3306
    networks:
      - spring-cloud-network
    command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8mb4 --explicit_defaults_for_timestamp --default-authentication-plugin=mysql_native_password
  config-server:
    container_name: config-server
    build:
      context: config-store
      dockerfile: Dockerfile
    image: jpop/config-server-service:latest
    depends_on:
      - MYSQL-SERVER
    environment:
    - spring.profiles.active=native
    - spring.cloud.config.server.native.search-locations=config-store
    - WAIT_HOSTS=MYSQL-SERVER:3306
    ports:
      - 8888:8888
    networks:
      - spring-cloud-network
 volumes:
  database:
    driver: local
networks:
  spring-cloud-network:
    driver: bridge

Odd behavior with should_execute_without_wait test with arm64

I am building wait for arm64 in a Docker container because I do not have a Rust build environment in my regular environment (this uses the recent Docker qemu feature, but the same thing happens if I run it on a Raspberry Pi 4). I am not familiar with Rust, but I wanted to show that running cargo test fails on the should_execute_without_wait test:

$ docker run --rm -i --platform linux/arm64 --workdir /work --env WAIT_VERSION=2.9.0 rust:1.52.1-buster bash -s <<'EOF' 
set -euo pipefail
set -x
git clone https://github.com/ufoscout/docker-compose-wait.git source
cd source
git checkout "${WAIT_VERSION}"
R_TARGET="$( rustup target list --installed | grep -- '-gnu' | tail -1 | awk '{print $1}'| sed 's/gnu/musl/' )"
rustup target add "$R_TARGET"
cargo test
cargo build --release --target="$R_TARGET"
strip ./target/"$R_TARGET"/release/wait
target/"$R_TARGET"/release/wait
EOF

Output:

+ git clone https://github.com/ufoscout/docker-compose-wait.git source
Cloning into 'source'...
+ cd source
+ git checkout 2.9.0
Note: checking out '2.9.0'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 5098f22 Merge pull request #52 from ufoscout/feature/wait_for_path
++ rustup target list --installed
++ grep -- -gnu
++ tail -1
++ awk '{print $1}'
++ sed s/gnu/musl/
+ R_TARGET=aarch64-unknown-linux-musl
+ rustup target add aarch64-unknown-linux-musl
info: downloading component 'rust-std' for 'aarch64-unknown-linux-musl'
info: installing component 'rust-std' for 'aarch64-unknown-linux-musl'
info: using up to 500.0 MiB of RAM to unpack components
+ cargo test
    Updating crates.io index
 Downloading crates ...
  Downloaded lazy_static v1.4.0
  Downloaded rand_core v0.6.2
  Downloaded rand v0.8.3
  Downloaded env_logger v0.8.3
  Downloaded cfg-if v1.0.0
  Downloaded atomic-counter v1.0.1
  Downloaded log v0.4.14
  Downloaded rand_chacha v0.3.0
  Downloaded ppv-lite86 v0.2.10
  Downloaded port_check v0.1.5
  Downloaded libc v0.2.94
  Downloaded getrandom v0.2.2
   Compiling cfg-if v1.0.0
   Compiling libc v0.2.94
   Compiling getrandom v0.2.2
   Compiling log v0.4.14
   Compiling port_check v0.1.5
   Compiling ppv-lite86 v0.2.10
   Compiling lazy_static v1.4.0
   Compiling atomic-counter v1.0.1
   Compiling env_logger v0.8.3
   Compiling wait v2.9.0 (/work/source)
   Compiling rand_core v0.6.2
   Compiling rand_chacha v0.3.0
   Compiling rand v0.8.3
    Finished test [unoptimized + debuginfo] target(s) in 2m 19s
     Running unittests (target/debug/deps/wait-9844c520be7e6e56)

running 11 tests
test sleeper::test::should_not_wait ... ok
test test::should_return_int_value ... ok
test test::should_return_zero_when_empty_value ... ok
test test::should_return_zero_when_invalid_value ... ok
test test::should_return_zero_when_negative_value ... ok
test env_reader::test::should_return_an_env_variable ... ok
test test::should_get_config_values_from_env ... ok
test test::should_get_default_config_values ... ok
test test::config_should_use_default_values ... ok
test env_reader::test::should_return_the_default_value_if_env_variable_not_present ... ok
test sleeper::test::should_wait_for_a_second ... ok

test result: ok. 11 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 3.07s

     Running unittests (target/debug/deps/wait-060dd7f6789cdbf7)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running tests/integration_test.rs (target/debug/deps/integration_test-330808617b3ccde1)

running 16 tests
test should_exit_on_path_timeout ... ok
test should_execute_without_wait ... FAILED
test should_exit_on_host_timeout ... ok
test should_wait_before_and_after ... ok
test should_wait_for_10_seconds_after ... ok
test should_wait_for_5_seconds_before ... ok
test should_identify_the_open_port ... ok
test should_fail_if_not_all_hosts_are_available ... ok
test should_fail_if_hosts_are_available_but_paths_are_not ... ok
test should_wait_for_multiple_hosts ... ok
test should_fail_if_paths_are_available_but_hosts_are_not ... ok
test should_wait_for_multiple_paths ... ok
test should_fail_if_not_all_paths_are_available ... ok
test should_wait_for_multiple_hosts_and_paths ... ok
test should_sleep_the_specified_time_between_path_checks ... ok
test should_sleep_the_specified_time_between_host_checks ... ok

failures:

---- should_execute_without_wait stdout ----
Millis elapsed 35
thread 'should_execute_without_wait' panicked at 'assertion failed: millis_elapsed(start) <= 5', tests/integration_test.rs:56:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    should_execute_without_wait

test result: FAILED. 15 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 2.21s

error: test failed, to rerun pass '--test integration_test'

I ran it again with RUST_BACKTRACE=full in the environment, which you can see by expanding below:

Full output of `cargo test`:
  Updating crates.io index
Downloading crates ...
Downloaded env_logger v0.8.3
Downloaded rand_core v0.6.2
Downloaded rand v0.8.3
Downloaded ppv-lite86 v0.2.10
Downloaded libc v0.2.94
Downloaded cfg-if v1.0.0
Downloaded atomic-counter v1.0.1
Downloaded log v0.4.14
Downloaded rand_chacha v0.3.0
Downloaded port_check v0.1.5
Downloaded lazy_static v1.4.0
Downloaded getrandom v0.2.2
 Compiling cfg-if v1.0.0
 Compiling libc v0.2.94
 Compiling getrandom v0.2.2
 Compiling log v0.4.14
 Compiling port_check v0.1.5
 Compiling ppv-lite86 v0.2.10
 Compiling lazy_static v1.4.0
 Compiling atomic-counter v1.0.1
 Compiling env_logger v0.8.3
 Compiling wait v2.9.0 (/work/source)
 Compiling rand_core v0.6.2
 Compiling rand_chacha v0.3.0
 Compiling rand v0.8.3
  Finished test [unoptimized + debuginfo] target(s) in 2m 19s
   Running unittests (target/debug/deps/wait-9844c520be7e6e56)

running 11 tests
test sleeper::test::should_not_wait ... ok
test test::should_return_int_value ... ok
test env_reader::test::should_return_an_env_variable ... ok
test test::should_return_zero_when_invalid_value ... ok
test test::should_return_zero_when_empty_value ... ok
test test::should_return_zero_when_negative_value ... ok
test test::config_should_use_default_values ... ok
test test::should_get_default_config_values ... ok
test test::should_get_config_values_from_env ... ok
test env_reader::test::should_return_the_default_value_if_env_variable_not_present ... ok
test sleeper::test::should_wait_for_a_second ... ok

test result: ok. 11 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 3.06s

   Running unittests (target/debug/deps/wait-060dd7f6789cdbf7)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Running tests/integration_test.rs (target/debug/deps/integration_test-330808617b3ccde1)

running 16 tests
test should_exit_on_path_timeout ... ok
test should_exit_on_host_timeout ... ok
test should_execute_without_wait ... FAILED
test should_wait_before_and_after ... ok
test should_wait_for_10_seconds_after ... ok
test should_wait_for_5_seconds_before ... ok
test should_identify_the_open_port ... ok
test should_fail_if_not_all_hosts_are_available ... ok
test should_fail_if_hosts_are_available_but_paths_are_not ... ok
test should_fail_if_paths_are_available_but_hosts_are_not ... ok
test should_wait_for_multiple_paths ... ok
test should_wait_for_multiple_hosts ... ok
test should_fail_if_not_all_paths_are_available ... ok
test should_wait_for_multiple_hosts_and_paths ... ok
test should_sleep_the_specified_time_between_host_checks ... ok
test should_sleep_the_specified_time_between_path_checks ... ok

failures:

---- should_execute_without_wait stdout ----
Millis elapsed 43
thread 'should_execute_without_wait' panicked at 'assertion failed: millis_elapsed(start) <= 5', tests/integration_test.rs:56:5
stack backtrace:
 0:       0x550008e438 - std::backtrace_rs::backtrace::libunwind::trace::hda657801c412a178
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/../../backtrace/src/backtrace/libunwind.rs:90:5
 1:       0x550008e438 - std::backtrace_rs::backtrace::trace_unsynchronized::ha5cf1c0cc7b28d87
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
 2:       0x550008e438 - std::sys_common::backtrace::_print_fmt::h06f4428783169e20
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/sys_common/backtrace.rs:67:5
 3:       0x550008e438 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hbd0b65b71b30cbfa
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/sys_common/backtrace.rs:46:22
 4:       0x55000aa024 - core::fmt::write::hd6f2f4af62bce3bf
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/fmt/mod.rs:1092:17
 5:       0x550008818c - std::io::Write::write_fmt::ha49aaf2a7ffaeb5c
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/io/mod.rs:1572:15
 6:       0x5500090488 - std::sys_common::backtrace::_print::hac02f6d3130c889e
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/sys_common/backtrace.rs:49:5
 7:       0x5500090488 - std::sys_common::backtrace::print::hf0f7bc9e2e1ec9fe
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/sys_common/backtrace.rs:36:9
 8:       0x5500090488 - std::panicking::default_hook::{{closure}}::h6906422a8d9cd315
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:208:50
 9:       0x550008ffc0 - std::panicking::default_hook::h249340b3b0e2b967
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:222:9
10:       0x5500090a64 - std::panicking::rust_panic_with_hook::hfacf5c096789c749
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:591:17
11:       0x55000905f4 - std::panicking::begin_panic_handler::{{closure}}::h9fe6008fcc526910
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:495:13
12:       0x550008e8a8 - std::sys_common::backtrace::__rust_end_short_backtrace::h14e2d4a0ec7cd5ed
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/sys_common/backtrace.rs:141:18
13:       0x5500090588 - rust_begin_unwind
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:493:5
14:       0x5500015750 - core::panicking::panic_fmt::he3e4e24148ccccf4
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/panicking.rs:92:14
15:       0x55000156d4 - core::panicking::panic::h10de7e903b95002f
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/panicking.rs:50:5
16:       0x5500016560 - integration_test::should_execute_without_wait::hc987e53b31165050
                             at /work/source/tests/integration_test.rs:56:5
17:       0x5500016490 - integration_test::should_execute_without_wait::{{closure}}::h0796158b6d016d1e
                             at /work/source/tests/integration_test.rs:48:1
18:       0x5500020c64 - core::ops::function::FnOnce::call_once::h1853cc29f09ccc98
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/ops/function.rs:227:5
19:       0x55000576c0 - core::ops::function::FnOnce::call_once::hc614cfda2d76a421
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/ops/function.rs:227:5
20:       0x55000576c0 - test::__rust_begin_short_backtrace::h3782a60a0c46ece1
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/test/src/lib.rs:567:5
21:       0x55000563b8 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h114659ff9311b761
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/alloc/src/boxed.rs:1546:9
22:       0x55000563b8 - <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once::h6d8f493d6ab774de
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panic.rs:344:9
23:       0x55000563b8 - std::panicking::try::do_call::h00b2d6daface8b3a
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:379:40
24:       0x55000563b8 - std::panicking::try::h424df84fd72e5d4a
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:343:19
25:       0x55000563b8 - std::panic::catch_unwind::h735f98ec1ddd14b1
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panic.rs:431:14
26:       0x55000563b8 - test::run_test_in_process::hd98fe513aee31462
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/test/src/lib.rs:589:18
27:       0x55000563b8 - test::run_test::run_test_inner::{{closure}}::h64f247ac1bfeb38e
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/test/src/lib.rs:486:39
28:       0x5500035ae4 - test::run_test::run_test_inner::{{closure}}::h850a7b5ab90773f3
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/test/src/lib.rs:511:37
29:       0x5500035ae4 - std::sys_common::backtrace::__rust_begin_short_backtrace::h8330955d6196bd5c
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/sys_common/backtrace.rs:125:18
30:       0x550003a340 - std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}}::h858cd23d2ab84db0
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/thread/mod.rs:474:17
31:       0x550003a340 - <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once::h8b6676efe30f1133
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panic.rs:344:9
32:       0x550003a340 - std::panicking::try::do_call::h5422fdfdb0191ed8
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:379:40
33:       0x550003a340 - std::panicking::try::h99380dad0c88fe3c
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:343:19
34:       0x550003a340 - std::panic::catch_unwind::h2a5ad8f0807b9bc9
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panic.rs:431:14
35:       0x550003a340 - std::thread::Builder::spawn_unchecked::{{closure}}::hf9e63a3826fa6fc0
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/thread/mod.rs:473:30
36:       0x550003a340 - core::ops::function::FnOnce::call_once{{vtable.shim}}::h0af8eb1afa95af3e
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/ops/function.rs:227:5
37:       0x5500095d38 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h256369ac5fdc57d7
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/alloc/src/boxed.rs:1546:9
38:       0x5500095d38 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hc713c7cbccbe973f
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/alloc/src/boxed.rs:1546:9
39:       0x5500095d38 - std::sys::unix::thread::Thread::new::thread_start::hcdb827e8c49af92b
                             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/sys/unix/thread.rs:71:17
40:       0x5501aaa7e4 - start_thread
41:       0x5501a00adc - <unknown>
42:                0x0 - <unknown>


failures:
  should_execute_without_wait

test result: FAILED. 15 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 2.23s

error: test failed, to rerun pass '--test integration_test'

I do appear to be able to get the tests to pass if I exclude should_execute_without_wait from the full set of tests, and it also appears to pass on its own:

docker run --rm -i --platform linux/arm64 --workdir /work --env WAIT_VERSION=2.9.0 rust:1.52.1-buster bash -s <<'EOF' 
set -euo pipefail
set -x
git clone https://github.com/ufoscout/docker-compose-wait.git source
cd source
git checkout "${WAIT_VERSION}"
R_TARGET="$( rustup target list --installed | grep -- '-gnu' | tail -1 | awk '{print $1}'| sed 's/gnu/musl/' )"
rustup target add "$R_TARGET"
cargo test should_execute_without_wait -- --exact
cargo test -- --skip should_execute_without_wait
cargo build --release --target="$R_TARGET"
strip ./target/"$R_TARGET"/release/wait
target/"$R_TARGET"/release/wait
EOF

Output:

+ git clone https://github.com/ufoscout/docker-compose-wait.git source
Cloning into 'source'...
+ cd source
+ git checkout 2.9.0
Note: checking out '2.9.0'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 5098f22 Merge pull request #52 from ufoscout/feature/wait_for_path
++ rustup target list --installed
++ grep -- -gnu
++ tail -1
++ awk '{print $1}'
++ sed s/gnu/musl/
+ R_TARGET=aarch64-unknown-linux-musl
+ rustup target add aarch64-unknown-linux-musl
info: downloading component 'rust-std' for 'aarch64-unknown-linux-musl'
info: installing component 'rust-std' for 'aarch64-unknown-linux-musl'
info: using up to 500.0 MiB of RAM to unpack components
+ cargo test should_execute_without_wait -- --exact
    Updating crates.io index
 Downloading crates ...
  Downloaded atomic-counter v1.0.1
  Downloaded port_check v0.1.5
  Downloaded rand_core v0.6.2
  Downloaded cfg-if v1.0.0
  Downloaded rand_chacha v0.3.0
  Downloaded rand v0.8.3
  Downloaded ppv-lite86 v0.2.10
  Downloaded libc v0.2.94
  Downloaded log v0.4.14
  Downloaded lazy_static v1.4.0
  Downloaded getrandom v0.2.2
  Downloaded env_logger v0.8.3
   Compiling cfg-if v1.0.0
   Compiling libc v0.2.94
   Compiling log v0.4.14
   Compiling getrandom v0.2.2
   Compiling port_check v0.1.5
   Compiling ppv-lite86 v0.2.10
   Compiling lazy_static v1.4.0
   Compiling atomic-counter v1.0.1
   Compiling env_logger v0.8.3
   Compiling wait v2.9.0 (/work/source)
   Compiling rand_core v0.6.2
   Compiling rand_chacha v0.3.0
   Compiling rand v0.8.3
    Finished test [unoptimized + debuginfo] target(s) in 2m 12s
     Running unittests (target/debug/deps/wait-9844c520be7e6e56)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 11 filtered out; finished in 0.00s

     Running unittests (target/debug/deps/wait-060dd7f6789cdbf7)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running tests/integration_test.rs (target/debug/deps/integration_test-330808617b3ccde1)

running 1 test
test should_execute_without_wait ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 15 filtered out; finished in 0.04s

+ cargo test -- --skip should_execute_without_wait
    Finished test [unoptimized + debuginfo] target(s) in 0.48s
     Running unittests (target/debug/deps/wait-9844c520be7e6e56)

running 11 tests
test sleeper::test::should_not_wait ... ok
test test::should_return_int_value ... ok
test env_reader::test::should_return_an_env_variable ... ok
test test::should_return_zero_when_empty_value ... ok
test test::should_return_zero_when_invalid_value ... ok
test test::config_should_use_default_values ... ok
test test::should_return_zero_when_negative_value ... ok
test test::should_get_default_config_values ... ok
test test::should_get_config_values_from_env ... ok
test env_reader::test::should_return_the_default_value_if_env_variable_not_present ... ok
test sleeper::test::should_wait_for_a_second ... ok

test result: ok. 11 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 3.06s

     Running unittests (target/debug/deps/wait-060dd7f6789cdbf7)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running tests/integration_test.rs (target/debug/deps/integration_test-330808617b3ccde1)

running 15 tests
test should_exit_on_path_timeout ... ok
test should_exit_on_host_timeout ... ok
test should_wait_before_and_after ... ok
test should_wait_for_10_seconds_after ... ok
test should_wait_for_5_seconds_before ... ok
test should_identify_the_open_port ... ok
test should_fail_if_not_all_hosts_are_available ... ok
test should_fail_if_hosts_are_available_but_paths_are_not ... ok
test should_wait_for_multiple_hosts ... ok
test should_fail_if_paths_are_available_but_hosts_are_not ... ok
test should_wait_for_multiple_paths ... ok
test should_fail_if_not_all_paths_are_available ... ok
test should_wait_for_multiple_hosts_and_paths ... ok
test should_sleep_the_specified_time_between_host_checks ... ok
test should_sleep_the_specified_time_between_path_checks ... ok

test result: ok. 15 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 2.19s

   Doc-tests wait

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

+ cargo build --release --target=aarch64-unknown-linux-musl
   Compiling log v0.4.14
   Compiling cfg-if v1.0.0
   Compiling port_check v0.1.5
   Compiling env_logger v0.8.3
   Compiling wait v2.9.0 (/work/source)
    Finished release [optimized] target(s) in 1m 12s
+ strip ./target/aarch64-unknown-linux-musl/release/wait
+ target/aarch64-unknown-linux-musl/release/wait
[INFO  wait] --------------------------------------------------------
[INFO  wait]  docker-compose-wait 2.9.0
[INFO  wait] ---------------------------
[DEBUG wait] Starting with configuration:
[DEBUG wait]  - Hosts to be waiting for: []
[DEBUG wait]  - Paths to be waiting for: []
[DEBUG wait]  - Timeout before failure: 30 seconds
[DEBUG wait]  - TCP connection timeout before retry: 5 seconds
[DEBUG wait]  - Sleeping time before checking for hosts/paths availability: 0 seconds
[DEBUG wait]  - Sleeping time once all hosts/paths are available: 0 seconds
[DEBUG wait]  - Sleeping time between retries: 1 seconds
[DEBUG wait] --------------------------------------------------------
[INFO  wait] docker-compose-wait - Everything's fine, the application can now start!
[INFO  wait] --------------------------------------------------------

From what I can glean from https://github.com/ufoscout/docker-compose-wait/blob/2.9.0/tests/integration_test.rs#L459-L477, this is what the test is attempting to do:

env \
  WAIT_HOSTS="" \
  WAIT_PATHS="" \
  WAIT_TIMEOUT=1 \
  WAIT_BEFORE=0 \
  WAIT_AFTER=0 \
  WAIT_SLEEP_INTERVAL=1 \
  WAIT_HOST_CONNECT_TIMEOUT=1 \
  ./wait

And it runs without issue for me.

Given the above, would it be okay to run this binary in a production environment?

WAIT_SLEEP_INTERVAL not working

Hi,
thanks for this amazing app! I really like the documentation; it helped me integrate your script into my docker-compose application.

However, I found that the WAIT_SLEEP_INTERVAL environment variable doesn't change anything. It always defaults to 1.

After that I looked into your integration tests, and there is no test where WAIT_SLEEP_INTERVAL differs from 1.

Could you please investigate this?

Have a great day!

Capability to disable logs

It would be nice to be able to disable the logging output by choice (I'm thinking of an env variable). This could help in production, where we send logs to ELK, usually in JSON format, so we don't really want these extra logs there.
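
For reference, the WAIT_LOGGER_LEVEL variable documented in the README above covers this; a minimal sketch, assuming a release that supports it (the start script is a hypothetical placeholder):

export WAIT_LOGGER_LEVEL=off   # suppress all wait log output
/wait && ./start-app.sh        # ./start-app.sh stands in for your application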

Cannot resolve localhost

export WAIT_HOST=localhost:3000
/wait && echo 'Ok' || echo 'Fail'

> Host localhost:3000 not yet available...

But

export WAIT_HOST=127.0.0.1:3000
/wait && echo 'Ok' || echo 'Fail'

> --------------------------------------------------------
 docker-compose-wait 2.7.2
---------------------------
Starting with configuration:
 - Hosts to be waiting for: [127.0.0.1:3000]
 - Timeout before failure: 30 seconds
 - TCP connection timeout before retry: 5 seconds
 - Sleeping time before checking for hosts availability: 0 seconds
 - Sleeping time once all hosts are available: 0 seconds
 - Sleeping time between retries: 1 seconds
--------------------------------------------------------
Checking availability of 127.0.0.1:3000
Host 127.0.0.1:3000 is now available!
--------------------------------------------------------
docker-compose-wait - Everything's fine, the application can now start!
--------------------------------------------------------
OK

Support for docker images that don't have a shell

It's possible that a docker image is based on scratch and contains nothing but the executable. In that case, the command part of this image should be something like:

command: ["/the_executable", "--foo", "bar"]

Without a shell, && won't be available either. Could such a thin image have the favor of wait?

Some ENV variable are never read/set

WAIT_TIMEOUT, WAIT_BEFORE and WAIT_AFTER never get set and always end up with their defaults. I have tried all of the other ones and they work fine, but I really need WAIT_TIMEOUT.

All of these use the legacy_or_new method, so I suspect something is going on with that.

Everything's fine, but not loaded

version: '3.8'

services:
  postgres:
    restart: always
    image: postgres:13.1
    container_name: postgres
    volumes:
      - ./postgres:/var/lib/postgresql/data
    ports:
      - 5432:5432
    environment:
      POSTGRES_DB: data
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      PGDATA: /var/lib/postgresql/data/

  mongodb:
    image: mongo:4.4
    container_name: mongodb
    volumes:
      - ./mongodb:/data/db
    ports:
      - 27017:27017
    environment: 
      - MONGO_INITDB_DATABASE=data
      - MONGO_INITDB_ROOT_USERNAME=user
      - MONGO_INITDB_ROOT_PASSWORD=pass

  redis:
    image: redis:6.2-rc
    container_name: redis
    ports:
      - 6379:6379

  bot:
    build: .
    volumes:
      - ".:/bot"
    container_name: bot
    ports:
      - 5000:5000
    depends_on:
      - postgres
      - mongodb
      - redis
    environment: 
      WAIT_HOSTS: postgres:5432, mongodb:27017, redis:6379

docker-compose-wait - Everything's fine, the application can now start!
UnhandledPromiseRejectionWarning: Error: connect ECONNREFUSED 127.0.0.1:5432

[ioredis] Unhandled error event: Error: connect ECONNREFUSED 127.0.0.1:6379

Control the startup of the container

I am trying to connect my container nzbget through my VPN.
As soon as I have to reboot my Docker host, I can't access my nzbget container anymore.

I tried to control the startup of the container with dockerize and docker-compose-wait, but it didn't work.

Here's my docker-compose file:

services:
  vpn:
    image: ghcr.io/bubuntux/nordvpn:latest
    container_name: vpn
    network_mode: bridge
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    devices:
      - /dev/net/tun
    environment:
      - USER=xxxxxxxxxxxxxx
      - PASS=xxxxxxxxxxxxxxxx
      - CONNECT_COUNTRY_CODE=xxxxxxxxxxxxxxx
      - TECHNOLOGY=NordLynx
      - NETWORK=192.168.2.0/24
    ports:
      - 6789:6789
      - 8081:8081
    restart: always

  nzbget:
    build: ./nzbget
    image: "nzbget:latest"
    network_mode: service:vpn
    container_name: nzbget
    volumes:
      - /srv/AppData/Nzbget:/config
      - /srv/Downloads:/downloads
    environment:
      - WAIT_HOSTS=vpn:6789
      - WAIT_HOSTS_TIMEOUT=90
      - WAIT_SLEEP_INTERVAL=10
      - WAIT_HOST_CONNECT_TIMEOUT=10
    depends_on:
      - vpn
    command: sh -c "/wait"

Log:
docker-compose-wait 2.7.2
Starting with configuration:

Hosts to be waiting for: [vpn:6789]
Timeout before failure: 90 seconds
TCP connection timeout before retry: 10 seconds
Sleeping time before checking for hosts availability: 0 seconds
Sleeping time once all hosts are available: 0 seconds
Sleeping time between retries: 10 seconds
Checking availability of vpn:6789
[INFO] nzbget 21.1 server-mode
Host vpn:6789 not yet available...
Host vpn:6789 not yet available...
Host vpn:6789 not yet available...
Host vpn:6789 not yet available...
Host vpn:6789 not yet available...
Host vpn:6789 not yet available...
Host vpn:6789 not yet available...
Host vpn:6789 not yet available...
Host vpn:6789 not yet available...

Thanks.

NC command not found

I was running this in my container using the node image and got "nc: command not found"; I had to install netcat for it to work. I think using curl for the ping would be better, i.e.

curl $host:$port instead of nc -z -w1 $host $port

Q: Wait and Chmod in Azure DevOps Pipeline-built container

I'm trying to run this dockerfile:

FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /app

COPY . ./
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.5.0/wait /wait
#RUN chmod +x /wait
RUN /bin/bash -c 'ls -la /wait; chmod +x /wait; ls -la /wait'

CMD /wait && dotnet test --logger trx --results-directory /var/temp /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura && mv /app/tests/Ardalis.Specification.UnitTests/coverage.cobertura.xml /var/temp/coverage.unit.cobertura.xml && mv /app/tests/Ardalis.Specification.IntegrationTests/coverage.cobertura.xml /var/temp/coverage.integration.cobertura.xml

It works fine on my local Windows machine (with either just the RUN chmod or the RUN /bin/bash version). But when I run the script as part of a build in Azure Pipelines I get this error:

(error screenshot omitted)

I don't think this is necessarily related to this tool, but I wonder if you have any idea why this would work locally but not from Azure DevOps? It should behave the same in both places - that's part of the beauty of docker, right?

Thanks for any ideas you may have.

Feature request: support for aarch64

The binary provided does not work on aarch64.
Tried on Ubuntu 22.04 for ARM.
It would be nice to include support for this architecture as well.

Better debugging ability

I'm trying to set up wait on my own docker container system, and I want to see if wait is running and what it's doing, but there is no way to make wait more verbose. Any chance you can add a -v to give a more verbose output?

Thanks in advance,
Andrew

Using docker-compose-wait without a Dockerfile?

I have a simple use case where I just want to load postgresql (timescaledb) and then Hasura. Problem is that I initialize postgresql with lots of data, after which hasura can set up complex metadata/migrations. Each process can take up to a minute.

Then on top of that I have my application, which can be modified to use docker-compose-wait to wait on Hasura, yes. But first I want Hasura to wait on Postgres. Basically I want to use docker-compose-wait for "third-party images". Is this possible?

Take, for example, the following docker-compose.yaml file. I would like to tell graphql-engine to wait for port 5432 to be available (instead of using depends_on), but I "can't" change timescaledb because it's not "mine". However, I'm a bit of a novice, so I'm most likely missing a very obvious point here.

Please enlighten me! Great project!

version: '3.6'

services:
  postgres:
    image: timescale/timescaledb:latest-pg12
    ports:
    - "5432:5432"
    restart: always
    volumes:
    - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: testdb
      POSTGRES_PASSWORD: postgrespassword
      TIMESCALEDB_TELEMETRY: "off"

  graphql-engine:
    image: hasura/graphql-engine:v1.3.0-beta.3.cli-migrations-v2
    ports:
    - "8080:8080"
    depends_on:
    - "postgres"
    restart: always
    volumes:
      - ./hasura/migrations:/hasura-migrations
      - ./hasura/metadata:/hasura-metadata
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/testdb
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
      HASURA_GRAPHQL_ENABLE_TELEMETRY: "false"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log

volumes:
  db_data:

What about wait for existence of a file?

Sometimes, like when we are initializing a database, the port is accessible but the database is not yet initialized, so we need to wait for the database file to be created.
This would be a simple feature, but a useful one.
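
For reference, the WAIT_PATHS variable documented in the README above addresses exactly this case; a minimal sketch, with an illustrative path and a hypothetical start script:

export WAIT_HOSTS="db:5432"                    # wait for the port being accessible...
export WAIT_PATHS="/var/lib/db/.initialized"   # ...and for the marker file to exist
/wait && ./start-app.sh                        # ./start-app.sh is a placeholder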

Timeout is Incorrectly Treated As Number of Tries

In my configuration I have the WAIT_HOSTS_TIMEOUT set to 300.

In my CI environment I noticed that instead of timing out after 300 seconds, it actually timed out after 300 tries.

From the logs:

Docker-compose-wait starting with configuration:
------------------------------------------------
 - Hosts to be waiting for: [plug-nginx:4400,plug-server-services:4401,plug-server-services:4402]
 - Timeout before failure: 300 seconds 
 - Sleeping time before checking for hosts availability: 10 seconds
 - Sleeping time once all hosts are available: 15 seconds
------------------------------------------------

Waiting 10 seconds before checking for hosts availability
...
(Line 14) Host plug-server-services:4401 not yet available
Host plug-server-services:4401 not yet available
Host plug-server-services:4401 not yet available
...
Host plug-server-services:4401 not yet available
(Line 314) Host plug-server-services:4401 not yet available
(Line 315) Timeout! After 300 seconds some hosts are still not reachable

I'm not familiar with Rust, but in lib.rs line 43 it looks like a retry count is being compared against the timeout read from the env.

No big deal, I think I prefer it to be number of tries instead of time, but figured you might want to update the docs or fix the code to reflect the intention.

Great tool by the way!

docker-compose-wait doesn't start app

Hello! I've got an issue with docker-compose-wait on Ubuntu 20.04.

my Dockerfile:

FROM node:18-alpine AS development

WORKDIR /usr/src/app

RUN apk update && apk upgrade && \
    apk add --no-cache bash git openssh

RUN yarn add glob rimraf

COPY ./package.json .
COPY ./yarn.lock .
COPY ./tsconfig.json .
COPY ./tsconfig.build.json .
COPY ./nest-cli.json .

# RUN yarn --frozen-lockfile --production=false

COPY [^node_modules]* .

ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.12.0/wait /usr/src/wait
RUN chmod +x /usr/src/wait

ENV GIT_WORK_TREE=/usr/src/app
ENV GIT_DIR=/usr/src/app/.git

my docker-compose.yml:

version: "3"

services:
  app:
    container_name: 'app'
    environment:
      - NODE_ENV=development
      - WAIT_HOSTS=postgres:5432
    networks:
      - network
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3021:3000
    restart: unless-stopped
    depends_on:
      - postgres
    volumes:
      - ./:/usr/src/app
      - ./node_modules:/usr/src/app/node_modules
    command: sh -c "/usr/src/./wait && npm run typeorm:bootstrap && npm run typeorm:migration:run && npm run typeorm:seed:run && npx nest start --watch"


  postgres:
    image: postgres
    container_name: postgres
    restart: always
    networks:
      - network
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=root
      - POSTGRES_DATABASE=db
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - '5434:5432'

networks:
  network:
    driver: bridge

Everything works well on macOS Ventura 13.2.1. Docker version 24.0.6

The same config on Ubuntu 20.04 doesn't start the application.
After docker compose up, the database starts, but docker-compose-wait doesn't catch it.
Postgres logs:

{"log":"\n","stream":"stdout","time":"2023-11-10T13:27:40.688396631Z"}
{"log":"PostgreSQL Database directory appears to contain a database; Skipping initialization\n","stream":"stdout","time":"2023-11-10T13:27:40.688432051Z"}
{"log":"\n","stream":"stdout","time":"2023-11-10T13:27:40.688438191Z"}
{"log":"2023-11-10 13:27:40.726 UTC [1] LOG:  starting PostgreSQL 16.0 (Debian 16.0-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit\n","stream":"stderr","time":"2023-11-10T13:27:40.727207193Z"}
{"log":"2023-11-10 13:27:40.729 UTC [1] LOG:  listening on IPv4 address \"0.0.0.0\", port 5432\n","stream":"stderr","time":"2023-11-10T13:27:40.729945457Z"}
{"log":"2023-11-10 13:27:40.729 UTC [1] LOG:  listening on IPv6 address \"::\", port 5432\n","stream":"stderr","time":"2023-11-10T13:27:40.730078787Z"}
{"log":"2023-11-10 13:27:40.734 UTC [1] LOG:  listening on Unix socket \"/var/run/postgresql/.s.PGSQL.5432\"\n","stream":"stderr","time":"2023-11-10T13:27:40.734309639Z"}
{"log":"2023-11-10 13:27:40.739 UTC [28] LOG:  database system was shut down at 2023-11-10 12:40:25 UTC\n","stream":"stderr","time":"2023-11-10T13:27:40.740877331Z"}
{"log":"2023-11-10 13:27:40.749 UTC [1] LOG:  database system is ready to accept connections\n","stream":"stderr","time":"2023-11-10T13:27:40.749522264Z"}
{"log":"2023-11-10 13:32:40.839 UTC [26] LOG:  checkpoint starting: time\n","stream":"stderr","time":"2023-11-10T13:32:40.84036884Z"}
{"log":"2023-11-10 13:32:40.851 UTC [26] LOG:  checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.004 s, sync=0.001 s, total=0.013 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB; lsn=0/1954F50, redo lsn=0/1954F18\n","stream":"stderr","time":"2023-11-10T13:32:40.852040042Z"}

App logs:

{"log":"[INFO  wait] --------------------------------------------------------\n","stream":"stderr","time":"2023-11-10T13:34:13.153880588Z"}
{"log":"[INFO  wait]  docker-compose-wait 2.12.1\n","stream":"stderr","time":"2023-11-10T13:34:13.153931698Z"}
{"log":"[INFO  wait] ---------------------------\n","stream":"stderr","time":"2023-11-10T13:34:13.153935578Z"}
{"log":"[DEBUG wait] Starting with configuration:\n","stream":"stderr","time":"2023-11-10T13:34:13.153938658Z"}
{"log":"[DEBUG wait]  - Hosts to be waiting for: [postgres:5432]\n","stream":"stderr","time":"2023-11-10T13:34:13.153941357Z"}
{"log":"[DEBUG wait]  - Paths to be waiting for: []\n","stream":"stderr","time":"2023-11-10T13:34:13.153944148Z"}
{"log":"[DEBUG wait]  - Timeout before failure: 30 seconds \n","stream":"stderr","time":"2023-11-10T13:34:13.153946597Z"}
{"log":"[DEBUG wait]  - TCP connection timeout before retry: 5 seconds \n","stream":"stderr","time":"2023-11-10T13:34:13.153949308Z"}
{"log":"[DEBUG wait]  - Sleeping time before checking for hosts/paths availability: 0 seconds\n","stream":"stderr","time":"2023-11-10T13:34:13.153951668Z"}
{"log":"[DEBUG wait]  - Sleeping time once all hosts/paths are available: 0 seconds\n","stream":"stderr","time":"2023-11-10T13:34:13.153954528Z"}
{"log":"[DEBUG wait]  - Sleeping time between retries: 1 seconds\n","stream":"stderr","time":"2023-11-10T13:34:13.153957157Z"}
{"log":"[DEBUG wait] --------------------------------------------------------\n","stream":"stderr","time":"2023-11-10T13:34:13.153959448Z"}
{"log":"[INFO  wait] Checking availability of host [postgres:5432]\n","stream":"stderr","time":"2023-11-10T13:34:13.153962208Z"}
{"log":"[INFO  wait] Host [postgres:5432] not yet available...\n","stream":"stderr","time":"2023-11-10T13:34:18.159907725Z"}
{"log":"[INFO  wait] Host [postgres:5432] not yet available...\n","stream":"stderr","time":"2023-11-10T13:34:24.16508254Z"}
{"log":"[INFO  wait] Host [postgres:5432] not yet available...\n","stream":"stderr","time":"2023-11-10T13:34:30.170718887Z"}
{"log":"[INFO  wait] Host [postgres:5432] not yet available...\n","stream":"stderr","time":"2023-11-10T13:34:36.177156683Z"}
{"log":"[INFO  wait] Host [postgres:5432] not yet available...\n","stream":"stderr","time":"2023-11-10T13:34:42.182452694Z"}
{"log":"[INFO  wait] Host [postgres:5432] not yet available...\n","stream":"stderr","time":"2023-11-10T13:34:48.18792846Z"}
{"log":"[ERROR wait] Timeout! After 30 seconds some hosts are still not reachable\n","stream":"stderr","time":"2023-11-10T13:34:48.187997991Z"}

Error because not valid HostIP

Hello again,
I experienced another issue. As I described in #18, I used a wrong version, which caused the sleep interval to not work correctly. After upgrading to the most current version, 2.7.1, I now get the exception:

worker     | thread 'main' panicked at 'The host IP should be valid: AddrParseError(())', src/libcore/result.rs:1165:5
worker     | note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
worker     | Checking availability of HOSTNAME:PORT

This seems weird, considering it worked before in version 2.2.1. Did you change the API since then? This config gets printed:

worker     | --------------------------------------------------------
worker     |  docker-compose-wait 2.7.1
worker     | ---------------------------
worker     | Starting with configuration:
worker     |  - Hosts to be waiting for: [HOSTNAME:PORT]
worker     |  - Timeout before failure: 180 seconds 
worker     |  - TCP connection timeout before retry: 5 seconds 
worker     |  - Sleeping time before checking for hosts availability: 10 seconds
worker     |  - Sleeping time once all hosts are available: 0 seconds
worker     |  - Sleeping time between retries: 5 seconds
worker     | --------------------------------------------------------
worker     | Waiting 10 seconds before checking for hosts availability
worker     | --------------------------------------------------------

I hope you can help me. The error is probably on my side...

exec format error

Hi, I'm running the script on a Raspberry Pi 4 with Raspbian Buster. I get an "exec format error" for the script.

FROM arm32v7/node
ENV NODE_VERSION 11.15.0
WORKDIR ./

# install dependencies
RUN apt-get update;\
    apt-get install qemu qemu-user-static binfmt-support -y;\
    apt-get install i2c-tools -y;

# nodejs packages
COPY package*.json ./
RUN npm install --only=production

# copy app
COPY  src/ ./

# add logs folder
RUN mkdir -p /logs/

# wait for MongoDB launch
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.5.1/wait /wait
RUN chmod +x /wait

EXPOSE 8080

STOPSIGNAL SIGTERM

CMD /wait && npm run start

I found that adding a shebang on top of the script helps, e.g.
http://kimh.github.io/blog/en/docker/gotchas-in-writing-dockerfile-en/

But wait is of type ELF 64-bit LSB executable, x86-64, version 1 (SYSV), so I guess that won't help.

I never worked with Rust, so I'm not sure whether adding one on top of the entry file would work. I could add one (probably here: https://github.com/ufoscout/docker-compose-wait/blob/master/src/lib.rs) and make a PR?

Any help appreciated, thx!

Build break when updating time to version 0.2.0 or superior

Looks like the time crate changed get_time():

error[E0425]: cannot find function `get_time` in crate `time`
  --> src/env_reader/mod.rs:39:33
   |
39 |         let mut nanosec = time::get_time().nsec;
   |                                 ^^^^^^^^ not found in `time`

error: aborting due to previous error

For more information about this error, try `rustc --explain E0425`.

error: could not compile `wait`

Using exec instead of &&

It would be nice to support syntax that uses exec instead of a subshell.

The syntax for usage would be simple:

entrypoint: [
    "wait",
    "--",
    "myApp"
]

That way no shell is needed and signal handling becomes easier for the app since there's no longer a shell in between.
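
For reference, the WAIT_COMMAND variable documented in the README above provides exactly this exec behaviour; a minimal sketch using plain docker run (the image name and app path are illustrative):

docker run \
  -e WAIT_HOSTS="db:5432" \
  -e WAIT_COMMAND="/myApp --foo bar" \
  --entrypoint /wait \
  my-image:latest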

unreachable IP prevents timeouts

# export WAIT_HOSTS=11.22.33.44:5555
# ./bin/wait && ping google.com
Checking availability of 11.22.33.44:5555
Host 11.22.33.44:5555 not yet available
Host 11.22.33.44:5555 not yet available
...forever

The command runs forever.

I would expect it to timeout and exit after a while.

Implementation example

I have a doubt about how to implement it in my own code. I have 3 images:
- Project image
- MongoDb image
- MariaDb image

I have followed the steps, but I think I am doing something wrong when implementing it.
Can you give me an example docker-compose.yml using it in practice with those images?

docker-compose-wait works fine but doesn't start the container

Hello, I am having difficulty using docker-compose-wait for my project.

I have this docker file:

#building code
FROM node:lts-alpine

ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait

RUN mkdir -p /home/node/api && chown -R node:node /home/node/api

WORKDIR /home/node/api

COPY ormconfig.json .env package.json yarn.* ./

USER node

RUN yarn

COPY --chown=node:node . .
EXPOSE 4000

CMD ["/wait", "yarn", "dev" ]

and this docker compose:

version: '3.7'
services:
  db-pg:
    image: postgres:12
    container_name: db-pg
    ports:
      - '${DB_PORT}:5432'
    environment:
      ALLOW_EMPTY_PASSWORD: 'no'
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASS}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - ci-postgres-data:/data

  ci-api:
    build: .
    container_name: ci-api
    volumes:
      - .:/home/node/api
    ports:
      - '${SERVER_PORT}:${SERVER_PORT}'
    depends_on:
      - db-pg
    environment:
      WAIT_HOSTS: db-pg:5432
    logging:
      driver: 'json-file'
      options:
        max-size: '10m'
        max-file: '5'

volumes:
  ci-postgres-data:

I got this:

ci-api    | --------------------------------------------------------
ci-api    |  docker-compose-wait 2.7.3
ci-api    | ---------------------------
ci-api    | Starting with configuration:
ci-api    |  - Hosts to be waiting for: [db-pg:5432]
ci-api    |  - Timeout before failure: 30 seconds
ci-api    |  - TCP connection timeout before retry: 5 seconds
ci-api    |  - Sleeping time before checking for hosts availability: 0 seconds
ci-api    |  - Sleeping time once all hosts are available: 0 seconds
ci-api    |  - Sleeping time between retries: 1 seconds
ci-api    | --------------------------------------------------------
ci-api    | Checking availability of db-pg:5432
ci-api    | Host db-pg:5432 is now available!
ci-api    | --------------------------------------------------------
ci-api    | docker-compose-wait - Everything's fine, the application can now start!

but my Node.js project is not starting.

Not working with multiple command parameters

When starting a node app or a pm2 config file, the usual docker command is:

CMD [ "node", "app.js" ]
//or
CMD [ "pm2-runtime", "app.yml" ]

When trying to run with docker-compose-wait:

CMD /wait && [ "pm2-runtime", "app.yml" ]

it fails with the following error:
/bin/sh: 1: [: pm2-runtime,: unexpected operator
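
For reference, the sh -c wrapping shown in the README's usage section applies here as well; a minimal sketch of a working form, reusing the command from this report:

CMD sh -c "/wait && pm2-runtime app.yml"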

What is the alternative to docker-compose-wait for Kubernetes?

Hello! Thanks for docker-compose-wait!
I'm trying to transfer my docker-compose setup to Kubernetes.
But Kubernetes doesn't support WAIT_HOSTS out of the box.
What is the alternative to docker-compose-wait for Kubernetes?
How do I wait for another pod/service in Kubernetes?
Thanks!

WAIT_TIMEOUT doesn't work

I'm setting WAIT_TIMEOUT (or the deprecated WAIT_HOSTS_TIMEOUT) to 180 in the docker-compose environment, but it always defaults to 30 seconds (version 2.9.0):

| [DEBUG wait]  - Hosts to be waiting for: [cassandra:9042, zipkin:9411, localstack:4566]
| [DEBUG wait]  - Paths to be waiting for: []
| [DEBUG wait]  - Timeout before failure: 30 seconds
| [DEBUG wait]  - TCP connection timeout before retry: 5 seconds
| [DEBUG wait]  - Sleeping time before checking for hosts/paths availability: 0 seconds
| [DEBUG wait]  - Sleeping time once all hosts/paths are available: 0 seconds
| [DEBUG wait]  - Sleeping time between retries: 1 seconds

Allow host:port to accept connection url format

Currently, the host and port must be specified in the form "{host}:{port}".

In many cases the url to check is already present in the system configuration in form of connection parameter, e.g.: postgres://username:password@hostname:5432/database.

However, the current implementation does not seem to accept such a host and port specification.

Proposed solution:

  • require presence of explicit host and port in the url
  • rsplit the url on colon :
    • hostname: take the left part: rsplit on / or @, take the last element
    • port: take the right part, split on /, take the first element. Check that it includes only digits.

There are very likely better options; I just wanted to express that, to me, it seems very acceptable to require an explicit hostname and port in the URL to keep the implementation small and simple.
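
In the meantime, a small shell sketch of extracting host:port from such a URL before handing it to wait (POSIX parameter expansion; the URL is the example from above):

url="postgres://username:password@hostname:5432/database"
hostport="${url#*://}"       # strip the scheme      -> username:password@hostname:5432/database
hostport="${hostport##*@}"   # strip credentials     -> hostname:5432/database
hostport="${hostport%%/*}"   # strip the db path     -> hostname:5432
export WAIT_HOSTS="$hostport"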

WAIT_HOSTS without expose ports

The documentation says: "WAIT_HOSTS: comma-separated list of pairs host:port for which you want to wait". Is there any way to use it when you do not expose ports?
