bretfisher / node-docker-good-defaults

sample node app for Docker examples

License: MIT License

JavaScript 68.11% Dockerfile 24.49% Shell 7.39%
nodejs docker dockerfile docker-compose npm nodemon vscode

node-docker-good-defaults's Introduction

Node.js + Docker for Showing Good Defaults in Using Node.js with Docker


This tries to be a "good defaults" example of starting to use Node.js in Docker for local development and shipping to production with basic bells, whistles, and best practices. Issues/PRs welcome.

Note: I have more advanced examples of Node.js Dockerfiles and Compose files in my DockerCon 2022 talk and repository. I also cover everything Docker and Node.js in my 8-hour video course Docker for Node.js.

Also note: I have other resources on Docker and Kubernetes here.

Local Development Features

  • Dev as close to prod as you can. Docker Compose builds a local development image that is just like the production image except for the dev-only features below. The goal is to have the dev environment be as close to test and prod as possible while still giving you all the nice tools to make you a happy dev.
  • Prevent needing node/npm on host. This installs node_modules outside the app root in the container image, so local development won't run into the problem of bind-mounting over it with local source code. This means npm install runs once at container build, and you don't need to run npm on the host or on each docker run. It re-runs on build if you change package.json.
  • One line startup. Uses docker compose up for single-line build and run of local development server.
  • Edit locally while code runs in container. Docker Compose uses proper bind-mounts of host source code into container so you can edit locally while running code in Linux container.
  • Use nodemon in container. Docker Compose uses nodemon for development for auto-restarting Node.js in container when you change files on host.
  • Enable debug from host to container. Opens the inspect port 9229 for using host-based debugging like chrome tools or VS Code. Nodemon enables --inspect by default in Docker Compose.
  • Provides VSCode debug configs and tasks for tests. For Visual Studio Code fans, the .vscode directory has the goods, thanks to @JPLemelin.
  • Small image and quick re-builds. COPY in package.json and run npm install before COPYing in your source code. This saves big on build time and keeps the container image lean.
  • Bind-mount package.json. This allows adding packages in real time without rebuilding images, e.g. docker compose exec -w /opt/node_app node npm install --save <package name>
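The dev-only pieces above can be sketched as a minimal docker-compose.yml. This is a hedged sketch, not the repo's actual compose file; the service name, paths, image command, and port mappings are assumptions:

```yaml
services:
  node:
    build:
      context: .
      args:
        NODE_ENV: development          # dev image, otherwise same as prod
    # nodemon restarts node on file change; --inspect opens the debug port
    command: ../node_modules/nodemon/bin/nodemon.js --inspect=0.0.0.0:9229
    ports:
      - "80:3000"                      # app
      - "9229:9229"                    # host-based debugging (Chrome, VS Code)
    volumes:
      - .:/opt/node_app/app                        # edit locally, run in container
      - ./package.json:/opt/node_app/package.json  # add packages without a rebuild
```

Note how the source bind-mount targets the app dir while node_modules lives one level up, so the mount never shadows the installed modules.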

Production-minded Features

  • Use Docker built-in healthchecks. This uses Dockerfile HEALTHCHECK with /healthz route to help Docker know if your container is running properly (example always returns 200, but you get the idea).
  • Proper NODE_ENV use. Defaults to NODE_ENV=production in Dockerfile and overrides to development in docker-compose for local dev.
  • Don't add dev dependencies into the production image. Proper NODE_ENV use means dev dependencies won't be installed in the image by default. Using Docker Compose will build with them by default.
  • Enables proper SIGTERM/SIGINT for graceful exit. Defaults to node index.js rather than npm to allow graceful shutdown of node. npm doesn't pass SIGTERM/SIGINT properly (you can't ctrl-c when running docker run in the foreground). To get node index.js to exit gracefully, extra signal-catching code is needed. The Dockerfile and index.js document the options and link to known issues.
  • Run Node.js in the container as node user, not root.
  • Use docker-stack.yml example for Docker Swarm deployments.
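Several of the defaults above can be sketched in a Dockerfile. This is a hedged sketch of the patterns described, not the repo's actual Dockerfile; the base image, port, and paths are assumptions:

```dockerfile
FROM node:18-slim
# defaults to production; compose overrides this build arg for local dev
ARG NODE_ENV=production
ENV NODE_ENV=$NODE_ENV
WORKDIR /opt/node_app
COPY package.json package-lock.json* ./
# npm skips devDependencies when NODE_ENV=production
RUN npm install && npm cache clean --force
WORKDIR /opt/node_app/app
COPY . .
USER node                           # run as node, not root
EXPOSE 3000
# assumes the app serves a /healthz route that returns 200
HEALTHCHECK --interval=30s CMD node healthcheck.js
# node directly (not npm) so SIGTERM/SIGINT reach the process
CMD ["node", "index.js"]
```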

Assumptions

  • You have Docker and Docker Compose installed (Docker Desktop for Mac/Windows/Linux).
  • You want to use Docker for local development (i.e. never need to install Node.js/npm on host) and have dev and prod Docker images be as close as possible.
  • You don't want to lose fidelity in your dev workflow. You want an easy environment setup using local editors, Node.js debug/inspect, and a local code repository, while the Node.js server runs in a container.
  • You use docker-compose for local development only (docker-compose was never intended to be a production deployment tool anyway).
  • The docker-compose.yml is not meant for docker stack deploy in Docker Swarm, it's meant for happy local development. Use docker-stack.yml for Swarm.

Getting Started

If this was your Node.js app, to start local development you would:

  • Running docker compose up is all you need. It will:
    • Build a custom local image enabled for development (nodemon, NODE_ENV=development).
    • Start a container from that image with ports 80 and 9229 open (on localhost).
    • Start nodemon to restart Node.js on file change in the host pwd.
    • Mount the pwd to the app dir in the container.
  • If you need other services like databases, just add to compose file and they'll be added to the custom Docker network for this app on up.
  • Compose won't rebuild automatically, so either run docker compose build after changing package.json or do what I do and always run docker compose up --build.
  • Be sure to use docker compose down to clean up after you're done dev'ing.

If you wanted to add a package while docker-compose was running your app:

  • docker compose exec -w /opt/node_app node npm install --save <package name>
  • This installs it inside the running container.
  • Nodemon will detect the change and restart.
  • --save will add it to package.json for the next docker compose build.

To execute the unit-tests, you would:

  • Execute docker compose exec node npm test. It will:
    • Run npm test in the container.
  • You can use VS Code to debug unit tests with the config Docker Test (Attach 9230 --inspect). It will:
    • Start a debugging process in the container and wait for the debugger to attach (this is done by VS Code tasks).
    • It will also kill a previous debugging process if one exists.

Ways to improve security

Run Node.js as Non-Root User

As mentioned in the official Docker Node.js image docs, Docker runs the image as root by default. This can pose a potential security issue.

As a security best practice, it is recommended for Node.js apps to listen on non-privileged ports as mentioned here.
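For example (a hypothetical compose fragment): let the app bind an unprivileged port such as 3000 inside the container, and publish it as 80 on the host, so the node user never needs a port below 1024:

```yaml
services:
  node:
    ports:
      - "80:3000"   # host port 80 -> unprivileged container port 3000
```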

Other Resources

node-docker-good-defaults's People

Contributors

0x24d, adambro, aidengaripoli, ajmueller, akalipetis, atoreson, blazeu, bretfisher, brunoluiz, craigh1015, dependabot[bot], earnubs, galileo, hugodias, iamwillbar, joebowbeer, jplemelin, justinpage, jveldboom, kbariotis, mcculleydj, mikesir87, outbits, remnantkevin, shawnstrickland, smerlos, snyk-bot


node-docker-good-defaults's Issues

thoughts about enabling warnings?

There are node options to enable stack traces for deprecations, warnings, and synchronous IO

node --help

  --trace-deprecation        show stack traces on deprecations
  --trace-warnings           show stack traces on process warnings
  --trace-sync-io            show stack trace when use of sync IO

Do you think it's a good idea to enable these for development?

Do you think there's a problem with having these enabled in production builds?

Typescript integration

Hi Bret,

I'm using typescript with node.

the command 'npm run build' generates the compiled sources in the 'dist' folder.

I want to ask how you would update the Dockerfile to use a precompiler such as TypeScript or Babel with node? What is the best practice in this case?

Thanks,
Diego

EACCESS on `docker-compose exec node npm i <package>`

First of all, a huge thank you (and all the contributors) for this knowledge nugget of a repo. Being a beginner to containers, it was a blast to go through the files.

I am getting an EACCES error when trying to install a package using the standard dce node npm i <package>.

AFAIU, the error comes from a volume-ownership issue when operating as non-root, since the excluded node_modules (via volume mount) is root-owned by default, blocking writes and forcing dep installs with dce -w <parentDir> npm i <package>.

It turns out there's a simplified version of this workaround to the volume problem. By creating the target node_modules as a non-root user at build time, the ownership stays the same when the volume is mounted. Besides resolving my issue and simplifying the install command, this has the added benefits of having one package* file in the container and avoiding issues related to file bind mounts (see #28).
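In Dockerfile terms, the proposed workaround might look roughly like this (a hypothetical sketch, not the merged fix; the paths are assumptions):

```dockerfile
WORKDIR /opt/node_app
# chown the files so the install below can write as the node user
COPY --chown=node:node package.json package-lock.json* ./
USER node
# node_modules is created node-owned, so a later `npm i` inside the
# running container works without root
RUN npm install && npm cache clean --force
```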

If this is to your liking, I am willing (and happy 😄) to make a PR with the fix. Thanks for your work!

Another Way For Graceful Shutdown

I got a solution that works with PM2; it fixes some problems around:

  • Correct PID 1 signals Handling & Forwarding
  • Graceful application Start and Shutdown
  • Seamless application clustering to increase performance and reliability

Docker-PM2-NodeJS

EBUSY on npm install because of rewrite of mounted package.json

Hi! Thanks for your work of gathering all this stuff in one place!

But how did you manage to make npm install work with direct package.json mounts? I get these errors every time:

 resource busy or locked, rename '/opt/package.json.3249071875' -> '/opt/package.json'

because npm tries to rewrite the file from scratch rather than making atomic writes.

Multi-stage builds / decreasing size

First of all, thank you (+ contributors) so much for creating this example project. It really helped me get past cold feet and actually set up a first nice Docker project!

As this guide popped up as my starting point, I imagine others starting here as well. For completeness, I think it's worth highlighting multi-stage builds and smaller parent images to decrease your image size.
It's worth noting that the production version of this Hello World app still includes all the devDependencies, unit tests, and files that are not strictly necessary. Of course, multi-stage builds make more sense in larger applications with client-side JS.

Also, just using node:alpine as the parent image helps get from 600 MB+ to ±70 MB, worth mentioning too.
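A multi-stage sketch of this suggestion (hedged; the stage names, base images, and file names are assumptions):

```dockerfile
# build stage: full image, devDependencies available for tests/compilation
FROM node:18 AS build
WORKDIR /opt/node_app
COPY package*.json ./
RUN npm install
COPY . .

# production stage: small parent image, production deps only
FROM node:18-alpine
ENV NODE_ENV=production
WORKDIR /opt/node_app
COPY package*.json ./
RUN npm install --omit=dev && npm cache clean --force
COPY --from=build /opt/node_app/index.js ./
CMD ["node", "index.js"]
```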

If leaving those out was intentional, that's fine; you point to a good further-reading resource. If you agree, I'm more than happy to create a PR with these comments/changes implemented.

Cheers!

Change example code and compose file to be multi-service with Mongo

My very old repo on a mongo node docker example is very outdated and should be archived, but I get asked about it because this simple repo is just a bit too simple. It doesn't show an example of how node would talk to a mongo database (one of the most popular databases for node apps, I think).

  • Add modern express code to talk to a mongo db. The host, port, user, and password should be based on files for user/password (if they exist) and env vars for all 4 (look at mongo image entrypoint script for examples). It doesn't need to store or return data in this example, just verify on the page that the connection was successful or not.
  • Add mongo official image to compose file and add the envvars needed for the 4 settings above. (URI might be better as one setting of host:port, I'm not sure)
  • Support swarm secrets for node/mongo username/password.
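The compose side of this might look roughly like the following (a hypothetical sketch; the node-side variable names are assumptions, while the `_FILE` variables on the mongo side follow the official image's documented convention):

```yaml
services:
  node:
    build: .
    environment:
      MONGO_HOST: mongo
      MONGO_PORT: "27017"
      MONGO_USERNAME_FILE: /run/secrets/mongo_user   # file-based, swarm-secret friendly
      MONGO_PASSWORD_FILE: /run/secrets/mongo_pass
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongo_user
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongo_pass
```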

docker-compose up - fails on Windows

Hi,

Just cloned this and running "docker-compose up" fails (eventually) to build on my Windows machine:

`Building node
Step 1/17 : FROM node:10
---> f09e7c96b6de
Step 2/17 : ARG NODE_ENV=production
---> Using cache
---> 8798a1c8075e
Step 3/17 : ENV NODE_ENV $NODE_ENV
---> Using cache
---> f5deba597200
Step 4/17 : ARG PORT=3000
---> Using cache
---> 3cfb4711b51f
Step 5/17 : ENV PORT $PORT
---> Using cache
---> df1e71469d97
Step 6/17 : EXPOSE $PORT 9229 9230
---> Using cache
---> 0812ab63129c
Step 7/17 : RUN npm i npm@latest -g
---> Using cache
---> 5b75fadb1282
Step 8/17 : WORKDIR /opt
---> Using cache
---> d4f2926ed1c9
Step 9/17 : COPY package.json package-lock.json* ./
---> Using cache
---> 96500661a754
Step 10/17 : RUN npm install --no-optional && npm cache clean --force
---> Running in 7f7629f34a6f
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/is-obj-34b0d206/readme.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/is-accessor-descriptor-a0138ffa/index.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/color-convert-9072abfc/conversions.js'
npm ERR! code E404
npm ERR! 404 Not Found: [email protected]
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/kind-of-9b8b4aa1/LICENSE'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/arr-flatten-d8e87bef/index.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/kind-of-9b8b4aa1/index.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/nan-53be3a9b/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/he-bba7ac0f/LICENSE-MIT.txt'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/unset-value-3bf1a3af/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/unset-value-3bf1a3af/LICENSE'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/unset-value-3bf1a3af/index.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/lodash.debounce-d98c7fea/LICENSE'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/lodash.debounce-d98c7fea/index.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/fs.realpath-a0512e6a/old.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/atob-fe7edb82/LICENSE.DOCS'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/mongodb-core-41aba90b/lib/topologies/shared.js'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/readdirp-675a2d24/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/proxy-addr-c6c31365/LICENSE'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/safe-buffer-a5a71a84/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/base-e8c8abd4/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/color-name-6f95ad24/README.md'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/color-name-6f95ad24/LICENSE'
npm WARN tar ENOENT: no such file or directory, open '/opt/node_modules/.staging/color-name-6f95ad24/index.js'

npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2018-11-28T23_20_17_863Z-debug.log
ERROR: Service 'node' failed to build: The command '/bin/sh -c npm install --no-optional && npm cache clean --force' returned a non-zero code: 1`

I've not changed anything in the source files.

Docker version:

Engine 18.09.0
Compose 1.23.1
Machine 0.16.0

Any ideas?

Is the recommended version still version 2?

For a non-swarm configuration, should we still use version 2?
According to the official docs the recommended version is 3. They also added the --compatibility flag, which should mitigate some (all?) v3 drawbacks.

Installing modules workflow

What is the recommended way to install new modules in development?

The problem is that if I install in the container, only the package.json in /opt is changed, so when I rebuild the container we lose the new module definition in the package.json on the host, which is what we actually copy in the Dockerfile.
The only way I've found is to manually edit package.json on the host and rebuild the container.

Improve SIGTERM/INT handling by waiting for connections to close

Currently this repo will call server.close(), but that only stops new connections, and it will not exit if existing long-polling or websocket connections exist. A more complete way would be:

  1. On receiving SIGTERM/INT, run server.close() to stop new connections (note this might have a problem with front-end LB's that still have this container in their rotation. They would still send connections to it but it won't accept them. Orchestrator problems.).
  2. Wait for x seconds for long polling to timeout or just for long enough to feel good about it.
  3. process.exit() to hard stop remaining connections and stop container.

More info:

Bind mountings not working as expected

I created the following example to highlight what I can't make work:

I don't have any local node_modules. I'm running everything through Docker.

As soon as I bootstrap the project with $ docker-compose up everything looks great.
If I try to change the Hello World! sentence in index.js nodemon reloads the server as expected.

But as soon as I want to add a dependency like lodash while developing and while my container is up and running the package.json is not updated.

As explained in this repo (node-docker-good-defaults) I'm using $ docker-compose exec -w /opt/node_app node npm install --save lodash to install the new dependency and reflect the changes on my host project. Unfortunately it isn't working.

Any hints?

Explain role and use of file_env function in docker-entrypoint.sh

Thank you for a great project!

I was wondering if someone could explain the role and use of the file_env function in docker-entrypoint.sh a bit more?

I have a lot of env. variables I set and have not seen the need to use file_env at all, so when specifically would I need to use it?

Some Question For Babel

If I use babel and babel-node, should I build the code inside docker?
Your target is to build the docker image for production, and use docker-compose to fit the development env.
Please give me some tips about babel for dev and prod. Thanks!

Node new --inspect doesn't work with VSCode in Docker for Mac

When Node and VSCode are set to use the new node inspector, I see VSCode connect to the container port 9229, but when trying to set a breakpoint I get:

Breakpoint ignored because generated code not found (source map problem?)

Haven't had a chance to dig in much, just putting this here in case others have it or know how to quickly solve it. Might help to try node newer than 6.10... just spitballin'

VSCode 1.12.2
Docker for Mac 17.05.0-ce-mac11
Node 6.10.3

CMD in container is: node --inspect ../node_modules/nodemon/bin/nodemon.js
Ports are open and responding
VSCode launch.json config:

        {
            "name": "Attach 9229 --inspect",
            "type": "node",
            "request": "attach",
            "protocol": "inspector",
            "port": 9229
        }

Using a private package within package.json

Hello,

We currently use a private package or repository within one of our services. We're following this project and are running into the issue where we can't install a package because it does not have permission to access our private repository.

[2/4] Fetching packages...
error Command failed.
Exit code: 128
Command: git
Arguments: ls-remote --tags --heads ssh://[email protected]:9999/secret/private.git
Directory: /opt
Output:
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

I see a wide variety of ways to solve this problem. Some include copying the host SSH key into the container and others recommending the use of docker secrets. I would love to hear your recommendations and perhaps some project examples that solve this problem.

Might add that the example above uses yarn in our Dockerfile and we're hosting our private code inside a private bitbucket instance.

Thanks!

Kubernetes support?

I don't see kubernetes support in this repo, just docker-compose/docker swarm. If this is desired I would be happy to work on a PR; otherwise I will just fork for my own uses. My goal is just to create a skeleton repo that people can get off the ground with quickly, and this repo is an excellent foundation! Obviously there are many production aspects that would need to be customized to the user's specific use case, but it may be useful for someone learning how everything fits together. I would consider myself only a novice kubernetes user, so I would enjoy the exercise.

HELP: Can you explain this a little more ?

https://github.com/BretFisher/node-docker-good-defaults/blob/master/bin/www#L22

After getting through the docker-compose section of your class, I figured I'd try to get my local env running before continuing to docker swarm.

I used this repo as kind of a guide. My set up isn't complicated services = [nest.js, redis, postgres] (nest.js is a framework around express)

I'm also using yarn and nodemon installed locally.
CMD ['yarn', 'start:dev'] #=> nodemon

nodemon.json
{
  "watch": ["src"],
  "ext": "ts",
  "ignore": ["src/**/*.spec.ts", "src/graphql.schema.ts"],
  "exec": "ts-node -r tsconfig-paths/register src/main.ts"
}

docker-compose up works just fine, but when I save a file it fails ....

nest_1      | [Nest] 45   - 12/10/2018, 1:11:09 PM   [RoutesResolver] AppController {/}: +124ms
nest_1      | [Nest] 45   - 12/10/2018, 1:11:09 PM   [RouterExplorer] Mapped {/, GET} route +11ms
nest_1      | [Nest] 45   - 12/10/2018, 1:11:12 PM   [NestApplication] Nest application successfully started +3203ms
nest_1      | Error: listen EADDRINUSE :::4000
nest_1      |     at Server.setupListenHandle [as _listen2] (net.js:1286:14)
nest_1      |     at listenInCluster (net.js:1334:12)
nest_1      |     at Server.listen (net.js:1421:7)
nest_1      |     at NestApplication.listen (/opt/app/node_modules/@nestjs/core/nest-application.js:205:25)
nest_1      | [nodemon] app crashed - waiting for file changes before starting...

Any ideas?

Failing test after following the README.md instruction

I recently cloned the repo and did a fresh docker-compose up. After visiting localhost, it works as expected.

Moving forward I tried to run some tests, but found that the /documents test was failing. After looking at some logs, I found that it's unable to connect to the database. I'm not sure why it's happening, but I suspect it may be because the mocha timeout doesn't wait long enough for the db connection.

Please advise, Thank you :)


docker-entrypoint.sh executable file not found on docker-compose up

Hi,

Thanks for providing these docker-good-defaults 👍
I'm having an issue on docker-compose up though, getting the error:

ERROR: for node  Cannot start service node: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown

docker-entrypoint.sh sits in the root of my project; I've not modified it from your example, nor the part of the Dockerfile that refers to it:

COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]

Any ideas?

show sample data example for mongo

Now that this example includes mongo, another common question for local development is "how do I load sample data into the app for development?"

Sure, you could use lots of manual methods with docker exec and maybe bind-mount some .js data, but the easiest way IMO is to use the mongo image's built-in way of auto-running anything in:

/docker-entrypoint-initdb.d/*.js

Ideally, we would bind-mount the files to execute there in the compose file just for local development.

  • create some sample/test data in a .js file that is seen in the /documents endpoint of this app.
  • bind-mount that in the docker-compose.yml with the mongo service.
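The bind-mount could be sketched like this in the compose file (hypothetical file name):

```yaml
services:
  mongo:
    image: mongo
    volumes:
      # any *.js here runs once at first startup, against an empty data dir
      - ./sample-data.js:/docker-entrypoint-initdb.d/sample-data.js:ro
```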

How to get app code to see /opt/node_modules

Trying to follow your example: when you build your image, you put the node modules that get installed into /opt/node_modules. That folder will also hold the needed dev dependencies. Am I missing the part where you put them into /opt/app/node_modules?

I cloned your project and it seems like your node_modules folder is empty as well.


About vscode debugging

Maybe I'm too much of a newbie with VS Code, but I don't fully understand how to debug this nodejs app.

If I start the docker container with docker-compose up and then try to debug the www file, for example, I'm not able to do it.
So, I have some questions:

  • Is there a way to launch the container in debug mode at the start?
  • When I change a file, nodemon detects changes and restarts the node server. I have to re-attach my debugger; is that right, or am I missing something?
  • Why are some statements not debuggable? For example, in healthcheck.js I always get unverified breakpoints, as well as in index.js (see attachment).


Race condition around Mongo restart due to using secure mongo setup

With the current configuration the MongoDB server starts up, sets up the secure user account, and then restarts. On my Mac this appears to be fine. However, on Windows 7 the timing of the restart causes the connection that has been acquired by the node app to be dropped, and node fails to start up.

Locally I have resolved this by making Mongo insecure which then prevents the restart.

I am not sure of the correct solution, but options may be:

  1. Change config to insecure to prevent the restart
  2. A much more complicated and robust Mongo connection process
  3. Use a locally built image for Mongo that has already performed the initialization step
  4. Do not start up node until Mongo is fully ready to go (not recommended in the Docker docs)

Would be interested in others input on a preferred direction.

Permission denied missing write access to /opt/app/node_modules

Following the instructions from the README, I cannot install additional Node packages with docker-compose exec node npm install --save <package name> due to the write permissions of the 'node' user. If I update docker-compose.yml to set the user as 'root', I can install packages again. I'm not sure if this is the best way to solve the install permissions or not.

Code as root, but node_modules have node user

Not sure if this is intended or not, but the package.json file and code are copied as the root user, while node_modules uses the node user. To reproduce, run:

docker build . -t dockerfile_test && docker run -it --rm dockerfile_test ls -al /opt/node_app

The container process is started as the node user, which is the desired behavior, but what about the application files that are owned by the root user?

How should this be used for a swarm?

Having taken the excellent course, I've taken this and added the deploy section to deploy to a swarm, but had a number of issues. Firstly around the use of volumes, and then, when I removed the volumes (in a bid to simplify, to try to understand what was happening), my app fails to start up with a 137 error every time. This seems to be a memory-related issue, but I'm not sure how to go about debugging it. Could it be that the removal of the volumes has caused too much data to be loaded into memory?

nodemon app crashed

Hi! I recently cloned the project and everything works well, but when I make some changes nodemon crashes with this error:

node_1   | [nodemon] restarting due to changes...
node_1   | [nodemon] starting `node --inspect=0.0.0.0:9229 ./bin/www`
node_1   | Starting inspector on 0.0.0.0:9229 failed: address already in use
node_1   | [nodemon] app crashed - waiting for file changes before starting...

and the changes I made don't take effect, only after I do docker-compose down and then docker-compose up again.

VSCode 1.39
Docker for Mac 2.2.0.3
Node 12.16.0

any help? thanks!

Clone or as submodule?

Hi,
What is the best practice for using this project?
Should I copy all the files into my project directory, or use a git submodule?
Copying is easy, but with a submodule I can track new improvements.

node-docker-good-defaults_node_1 exited with code 1

The node service exits prematurely. The output from the node container is:

PS C:\Users\johnf\Desktop> docker logs node-docker-good-defaults_node_1
standard_init_linux.go:207: exec user process caused "no such file or directory"

Please advise.

(Also, when is the Udemy Node course available?)

Question: Where to put app config files?

Hi,

usually you have a config/ folder containing a basic default.js file, which exports an object of configuration values,
e.g.:

module.exports = {
   session: {
      key1: "abcde"
   },
   mailserver: {
      relayHost: "127.0.0.1"
   }
  // ... a lot more values
}

And then for each env you create a specific copy of that, e.g. config/production.js, which is gitignored.

What is your best practice for handling (complex & nested) config values, which are different for each environment?
Of course you don't want to use docker secrets for that, right?

Thank you

Can't install dependencies

When I try to install a dependency in a running container using the command listed in the repo's readme, docker-compose exec server npm install --save package, it throws the following error:

xxxx-API git:(feature/firbase-push-notifications) โœ— docker-compose exec server npm install --save firebase-admin
npm WARN checkPermissions Missing write access to /opt/server/node_modules
npm WARN [email protected] No repository field.
npm WARN [email protected] No license field.
npm WARN The package @types/uuid is included as both a dev and production dependency.

npm ERR! path /opt/server/node_modules
npm ERR! code EACCES
npm ERR! errno -13
npm ERR! syscall access
npm ERR! Error: EACCES: permission denied, access '/opt/server/node_modules'
npm ERR!  { [Error: EACCES: permission denied, access '/opt/server/node_modules']
npm ERR!   stack:
npm ERR!    'Error: EACCES: permission denied, access \'/opt/server/node_modules\'',
npm ERR!   errno: -13,
npm ERR!   code: 'EACCES',
npm ERR!   syscall: 'access',
npm ERR!   path: '/opt/server/node_modules' }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It is likely you do not have the permissions to access this file as the current user
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator (though this is not recommended).

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/node/.npm/_logs/2019-02-10T21_18_38_008Z-debug.log

Cannot find node_modules

I just cloned the repo, and when I docker-compose up, the app crashes because it can't find node_modules: "Error: Cannot find module 'express'". Do I need to change anything to get the test app working? It seems like it's not using the /opt/app_dep/node_modules folder.
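For context, the trick the README describes looks roughly like this in a Dockerfile (the paths here are illustrative, not necessarily this repo's exact ones): dependencies are installed one directory above the app root, and Node's module resolution walks up parent directories to find them, so a host bind-mount over the app directory never shadows node_modules.

```dockerfile
# Install dependencies one level ABOVE the app root...
WORKDIR /opt/node_app
COPY package.json package-lock.json ./
RUN npm install && npm cache clean --force
ENV PATH=/opt/node_app/node_modules/.bin:$PATH

# ...then put the source code BELOW it, so bind-mounting host source
# onto /opt/node_app/app leaves /opt/node_app/node_modules intact
WORKDIR /opt/node_app/app
COPY . .
```

If "Cannot find module 'express'" appears anyway, the usual causes are a Compose volume that mounts over the parent directory too, or a stale image — docker-compose build --no-cache is worth a try.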

Add compose sync example, maybe as default

Compose now has an alpha feature for syncing files, avoiding bind-mounts for local development. This is great for several reasons:

  1. increased performance
  2. reduced complexity in the Compose file and Dockerfile
  3. no more node_modules relocation and volume workarounds

First, we need an example (maybe living in a compose-sync directory). Once the feature is out of alpha/beta, we can make it the default example, with a legacy directory for the old bind-mount approach.
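A sketch of what that could look like with Compose's watch syntax (the service name, paths, and container target here are assumptions based on this repo's layout, not a tested config):

```yaml
services:
  node:
    build: .
    develop:
      watch:
        # copy source changes into the container instead of bind-mounting
        - action: sync
          path: .
          target: /opt/node_app/app
          ignore:
            - node_modules/
        # rebuild the image when dependencies change
        - action: rebuild
          path: package.json
```

In recent Compose releases this is driven with docker compose watch, which replaces the bind-mount volume entry entirely.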

Back to basics... Problem using $ docker build / $ docker run. No browser function.

Bret, this is a pretty awesome project, and I'm really trying to put it to good use. There is something I just don't understand.

If I do a pure clone of the project and then attempt to build and run a container using $ docker, I can't see the server on localhost. It works fine using $ docker-compose.
Am I doing this correctly?

  1. Clone the repo.
  2. $ docker build -t <yourWebAppName> . # -t flag tag the image with a custom name
    $ docker run <yourWebAppName>
  3. I can't find the port anywhere. The terminal shows things apparently running, but the browser is totally dead at localhost:80, localhost:8080, etc. I can see all the healthz 200 requests in the terminal.
    $ docker ps -a ## identify running container
    $ docker port running_container_hash returns null
  4. close all the containers, remove all docker images and containers for fresh new start
  5. $ docker-compose up
  6. $ docker port <hash> returns as expected. http://localhost in the browser works fine.
80/tcp -> 0.0.0.0:80
9229/tcp -> 0.0.0.0:9229
5858/tcp -> 0.0.0.0:5858
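For what it's worth, plain docker run publishes no ports by default; the Compose file declares the port mappings, which is why docker-compose works here and bare docker run does not. A sketch of the equivalent manual run, with the port numbers taken from the docker port output above (verify them against this repo's compose file):

```shell
docker build -t yourwebappname .

# -p HOST:CONTAINER publishes a port; without any -p flags,
# `docker port <container>` returns nothing and localhost is dead
docker run -d -p 80:80 -p 9229:9229 -p 5858:5858 yourwebappname
```

For the interactive A-B-A testing mentioned below, docker run -it (and stdin_open: true plus tty: true in the Compose file) is the usual way to get a console attached in both setups.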

I'm hoping to use your simple repo to do A-B-A testing between $ docker build / run and $ docker-compose. I'd like to input a tiny bit of data on the console, and I'm having trouble with the docker-compose.yml methods. I'm trying to understand how interactive modes work with Docker. The fact that I can't see localhost when I $ docker build/run makes me nervous. Am I doing this right?

Note: I'm running the mac version of docker. I'm running everything in the command line. Not using any IDE or Kitematic for anything.

Again, many thanks to you for the work (and documentation) you've done on this project.

--LB

Non root user

Hi Bret,

  1. Aren't Alpine-based images recommended for production usage?
  2. As per Docker best practices, it's recommended that the app runs under non-root privileges:
    https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#user

We have tried to do the same, incorporating your suggestions.

https://github.com/MumbaiHackerspace/Visage/blob/master/services/photos/Dockerfile, but using the slim version for now. In fact, in our next iteration we will use a multi-stage build in the Dockerfile.
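On point 2, a minimal non-root sketch (the official node images ship a node user; the paths, base tag, and server.js entrypoint here are illustrative assumptions, not this repo's exact Dockerfile):

```dockerfile
FROM node:20-slim
ENV NODE_ENV=production
WORKDIR /opt/node_app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force

# Drop root before copying and running the app; --chown keeps the
# source writable by the unprivileged user
USER node
COPY --chown=node:node . .
CMD ["node", "server.js"]
```

Placing USER before COPY and CMD means both the build-time file ownership and the runtime process are non-root, which also sidesteps the node_modules EACCES problems reported elsewhere in these issues.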

Node legacy --debug doesn't work with VSCode in Docker for Mac

node --debug when running VSCode without Docker (node running directly on the Mac) works with breakpoints.

Trying it when node is inside a container looks like it works, but VSCode never stops for breakpoints. Just like this user describes: microsoft/vscode#22306 (comment)

I've followed documentation from:
https://code.visualstudio.com/docs/nodejs/nodejs-debugging
and
https://github.com/weinand/vscode-recipes/tree/master/Docker-TypeScript

And I believe this setup worked in early 2017, but it no longer does.

VSCode 1.12.2
Docker for Mac 17.05.0-ce-mac11
Node 6.10.3

CMD in container: node --debug=5858 ../node_modules/nodemon/bin/nodemon.js
Ports are open and responding
VSCode launch.json config:

        {
            "name": "Attach 5858 --debug",
            "type": "node",
            "request": "attach",
            "protocol": "legacy",
            "port": 5858,
            "address": "localhost",
            "restart": false,
            "sourceMaps": false,
            "outFiles": [],
            "localRoot": "${workspaceRoot}",
            "remoteRoot": "/opt/app"
        }
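For anyone landing here on Node 8+: the legacy --debug protocol was removed, and its inspector replacement must explicitly listen on all interfaces inside a container (it defaults to 127.0.0.1, which is unreachable through Docker's port mapping). A hedged sketch, reusing the paths from this issue: change the container CMD to node --inspect=0.0.0.0:9229 ../node_modules/nodemon/bin/nodemon.js, publish port 9229, and attach with a config like:

```json
{
    "name": "Attach 9229 --inspect",
    "type": "node",
    "request": "attach",
    "port": 9229,
    "address": "localhost",
    "localRoot": "${workspaceRoot}",
    "remoteRoot": "/opt/app"
}
```

If the debugger attaches but breakpoints still never bind, the localRoot/remoteRoot mapping is the usual suspect — it must map the host source directory to the exact path the code runs from inside the container.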
