
Containerizing microservices

Note
This repository contains the guide documentation source. To view the guide in published form, view it on the Open Liberty website at https://openliberty.io/guides/containerize.html.

Learn how to containerize and run your microservices with Open Liberty using Docker.

What you’ll learn

You can easily deploy your microservices in different environments in a lightweight and portable manner by using containers. From development to production and across your DevOps environments, you can deploy your microservices consistently and efficiently with containers. You can run a container from a container image. Each container image is a package of what you need to run your microservice or application, from the code to its dependencies and configuration.

You’ll learn how to build container images and run containers by using Docker for your microservices. You’ll construct Dockerfiles, create Docker images by using the docker build command, and run the images as Docker containers by using the docker run command.

The two microservices that you’ll be working with are called system and inventory. The system microservice returns the JVM system properties of the running container. The inventory microservice adds the properties from the system microservice to the inventory. This guide demonstrates how both microservices can run and communicate with each other in different Docker containers.

Additional prerequisites

Before you begin, Docker needs to be installed. For installation instructions, refer to the official Docker documentation. You will build and run the microservices in Docker containers.

Make sure to start your Docker daemon before you proceed.

Packaging your microservices

Navigate to the start directory to begin.

You can find the starting Java project in the start directory. It is a multi-module Maven project that is made up of the system and inventory microservices. Each microservice lives in its own corresponding directory, system and inventory.

To try out the microservices by using Maven, run the following Maven goal to build the system microservice and run it inside Open Liberty:

mvn -pl system liberty:run

Open another command-line session and run the following Maven goal to build the inventory microservice and run it inside Open Liberty:

mvn -pl inventory liberty:run

To access the inventory service, which displays the current contents of the inventory, see http://localhost:9081/inventory/systems.

The system service shows the system properties of the running JVM and can be found at http://localhost:9080/system/properties.

The system properties of your localhost can be added to the inventory at http://localhost:9081/inventory/systems/localhost.
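If you prefer the command line, you can check the same endpoints with curl, assuming that curl is installed on your system:

curl http://localhost:9080/system/properties
curl http://localhost:9081/inventory/systems/localhost
curl http://localhost:9081/inventory/systems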

After you are finished checking out the microservices, stop the Open Liberty servers by pressing CTRL+C in the command-line sessions where you ran the servers. Alternatively, you can run the liberty:stop goal in another command-line session:

mvn -pl system liberty:stop
mvn -pl inventory liberty:stop

From the start directory, run the Maven package goal to build the application .war files. The .war files are created in the system/target and inventory/target directories.

mvn package

To learn more about RESTful web services and how to build them, see Creating a RESTful web service for details about how to build the system service. The inventory service is built in a similar way.

Building your Docker images

A Docker image is a binary file. It is made up of multiple layers and is used to run code in a Docker container. Images are built from instructions in Dockerfiles to create a containerized version of the application.

A Dockerfile is a collection of instructions for building a Docker image that can then be run as a container. As each instruction is run in a Dockerfile, a new Docker layer is created. These layers, which are known as intermediate images, are created when a change is made to your Docker image.

Every Dockerfile begins with a parent or base image over which various commands are run. For example, you can start your image from scratch and run commands that download and install a Java runtime, or you can start from an image that already contains a Java installation.

Learn more about Docker on the official Docker page.

Creating your Dockerfiles

You will be creating two Docker images to run the inventory service and system service. The first step is to create Dockerfiles for both services.

Create the Dockerfile for the inventory service.
inventory/Dockerfile

link:finish/inventory/Dockerfile[role=include]

The FROM instruction initializes a new build stage, which indicates the parent image of the built image. If you don’t need a parent image, then you can use FROM scratch, which makes your image a base image.

In this case, you’re using the recommended production image, openliberty/open-liberty:kernel-java8-openj9-ubi, as your parent image. If you don’t want any additional runtime features for your kernel image, define the FROM instruction as FROM open-liberty:kernel. To use the default image that comes with the Open Liberty runtime, define the FROM instruction as FROM open-liberty. You can find all the official images and ubi images on the open-liberty Docker Hub.

It is also recommended to label your Docker images with the LABEL command, as the label information can help you manage your images. For more information, see Best practices for writing Dockerfiles.

The COPY instructions are structured as COPY [--chown=<user>:<group>] <source> <destination>. They copy local files into the specified destination within your Docker image. In this case, the inventory server configuration files that are located at src/main/liberty/config are copied to the /config/ destination directory. The inventory application WAR file inventory.war, which was created from running mvn package, is copied to the /config/apps destination directory.

The COPY instructions use the 1001 user ID and 0 group because the openliberty/open-liberty:kernel-java8-openj9-ubi image runs by default with the USER 1001 (non-root) user for security purposes. Otherwise, the files and directories that are copied over are owned by the root user.

Place the RUN configure.sh command at the end to get a pre-warmed Docker image, which improves the startup time of your Docker container.
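The Dockerfile contents are pulled in from the finish directory by the include above. As a rough sketch only, a Dockerfile that follows the instructions described in this section might look like the following; the exact labels and any additional instructions in the finished file can differ:

# Start from the recommended Open Liberty production image.
FROM openliberty/open-liberty:kernel-java8-openj9-ubi

# Example label; the finished Dockerfile may define different labels.
LABEL org.opencontainers.image.authors="Your Name"

# Copy the server configuration and the packaged application,
# owned by the default non-root user (1001) and group 0.
COPY --chown=1001:0 src/main/liberty/config /config/
COPY --chown=1001:0 target/inventory.war /config/apps

# Pre-warm the image to improve container startup time.
RUN configure.sh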

The Dockerfile for the system service follows the same instructions as the inventory service, except that some labels are updated, and the system.war archive is copied into /config/apps.

Create the Dockerfile for the system service.
system/Dockerfile

link:finish/system/Dockerfile[role=include]

Building your Docker image

Now that your microservices are packaged and you have written your Dockerfiles, you will build your Docker images by using the docker build command.

Run the following commands to build container images for your application:

docker build -t system:1.0-SNAPSHOT system/.
docker build -t inventory:1.0-SNAPSHOT inventory/.

The -t flag in the docker build command allows the Docker image to be labeled (tagged) in the name[:tag] format. The tag for an image describes the specific image version. If the optional [:tag] tag is not specified, the latest tag is created by default.
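For example, if you omit the tag, the image is tagged with latest:

docker build -t inventory inventory/.

This command produces an image named inventory:latest.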

To verify that the images are built, run the docker images command to list all local Docker images:

docker images

Alternatively, run the docker images command with the --filter option to list only your images:

docker images -f "label=org.opencontainers.image.authors=Your Name"

Your two images, inventory and system, should appear in the list of all Docker images:

REPOSITORY    TAG             IMAGE ID        CREATED          SIZE
inventory     1.0-SNAPSHOT    08fef024e986    4 minutes ago    471MB
system        1.0-SNAPSHOT    1dff6d0b4f31    5 minutes ago    470MB

Running your microservices in Docker containers

Now that you have your two images built, you will run your microservices in Docker containers:

docker run -d --name system -p 9080:9080 system:1.0-SNAPSHOT
docker run -d --name inventory -p 9081:9081 inventory:1.0-SNAPSHOT

The flags are described in the table below:

Flag      Description
-d        Runs the container in the background.
--name    Specifies a name for the container.
-p        Maps the host ports to the container ports. For example: -p <HOST_PORT>:<CONTAINER_PORT>

Next, run the docker ps command to verify that your containers are started:

docker ps

Make sure that your containers are running and show Up as their status:

CONTAINER ID    IMAGE                   COMMAND                  CREATED          STATUS          PORTS                                        NAMES
2b584282e0f5    inventory:1.0-SNAPSHOT  "/opt/ol/helpers/run…"   2 seconds ago    Up 1 second     9080/tcp, 9443/tcp, 0.0.0.0:9081->9081/tcp   inventory
99a98313705f    system:1.0-SNAPSHOT     "/opt/ol/helpers/run…"   3 seconds ago    Up 2 seconds    0.0.0.0:9080->9080/tcp, 9443/tcp             system

If a problem occurs and your containers exit prematurely, they don’t appear in the container list that the docker ps command displays. Instead, they appear with an Exited status when you run the docker ps -a command. Run the docker logs system and docker logs inventory commands to view the container logs for any potential problems. Run the docker stats system and docker stats inventory commands to display a live stream of usage statistics for your containers. You can also double-check that your Dockerfiles are correct. When you find the cause of the issues, remove the faulty containers with the docker rm system and docker rm inventory commands, rebuild your images, and start the containers again.
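For example, to investigate and clean up a failed system container, you might run the following sequence of commands; the same commands apply to the inventory container:

docker ps -a
docker logs system
docker stats system
docker rm system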

To access the application, point your browser to the http://localhost:9081/inventory/systems URL. An empty list is expected because no system properties are stored in the inventory yet.

Next, retrieve the system container’s IP address by using the container name, system, that you defined when you ran the container. Run the following command to retrieve the system IP address:

docker inspect -f "{{.NetworkSettings.IPAddress }}" system

The command returns the system container’s IP address:

172.17.0.2

In this case, the IP address for the system service is 172.17.0.2. Take note of this IP address to add the system properties to the inventory service.

Point your browser to http://localhost:9081/inventory/systems/[system-ip-address] by replacing [system-ip-address] with the IP address you obtained earlier. You see a result in JSON format with the system properties of your local JVM. When you visit this URL, these system properties are automatically stored in the inventory. Go back to http://localhost:9081/inventory/systems and you see a new entry for [system-ip-address].
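You can also trigger the same registration from the command line with curl, assuming that curl is installed; replace the example IP address with the one that you retrieved:

curl http://localhost:9081/inventory/systems/172.17.0.2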

Externalizing server configuration

inventory/server.xml

link:finish/inventory/src/main/liberty/config/server.xml[role=include]

As mentioned at the beginning of this guide, one of the advantages of using containers is that they are portable and can be moved and deployed efficiently across all of your DevOps environments. Configuration often changes across different environments, and by externalizing your server configuration, you can simplify the development process.

Imagine a scenario where you develop an Open Liberty application on port 9081, but to deploy it to production, it must be available on port 9091. To manage this scenario, you could keep two different versions of the server.xml file: one for production and one for development. However, maintaining two versions of a file can lead to mistakes. A better solution is to externalize the configuration of the port number and use the value of an environment variable that is set in each environment.

In this example, you will use an environment variable to externally configure the HTTP port number of the inventory service.

In the inventory/server.xml file, the default.http.port variable is declared and is used in the httpEndpoint element to define the service endpoint. The default value of the default.http.port variable is 9081. However, this value is only used if no other value is specified. To find a value for this variable, Open Liberty looks for the following environment variables, in order:

  • default.http.port

  • default_http_port

  • DEFAULT_HTTP_PORT
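The server.xml contents are pulled in from the finish directory by the include at the start of this section. As a rough sketch only, the relevant configuration might look like the following; the finished file contains additional elements:

<variable name="default.http.port" defaultValue="9081"/>

<httpEndpoint id="defaultHttpEndpoint" host="*"
              httpPort="${default.http.port}"/>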

When you previously ran the inventory container, none of these environment variables were defined, so the default value of 9081 was used.

Run the following commands to stop and remove the inventory container and rerun it with the default.http.port environment variable set:

docker stop inventory
docker rm inventory
docker run -d --name inventory -e default.http.port=9091 -p 9091:9091 inventory:1.0-SNAPSHOT

The -e flag can be used to create and set the values of environment variables in a Docker container. In this case, you are setting the default.http.port environment variable to 9091 for the inventory container.

Now, when the service is starting up, Open Liberty finds the default.http.port environment variable and uses it to set the value of the default.http.port variable to be used in the HTTP endpoint.

The inventory service is now available on the new port number that you specified. You can see the contents of the inventory at http://localhost:9091/inventory/systems. You can add your local system properties at http://localhost:9091/inventory/systems/[system-ip-address] by replacing [system-ip-address] with the IP address that you obtained in the previous section. The system service remains unchanged and is available at http://localhost:9080/system/properties.

You can externalize the configuration of more than just the port numbers. To learn more about Open Liberty server configuration, check out the Server Configuration Overview docs.

Testing the microservices

You can test your microservices manually by hitting the endpoints or with automated tests that check your running Docker containers.

Create the SystemEndpointIT class.
system/src/test/java/it/io/openliberty/guides/system/SystemEndpointIT.java

SystemEndpointIT.java

link:finish/system/src/test/java/it/io/openliberty/guides/system/SystemEndpointIT.java[role=include]

The testGetProperties() method checks for a 200 response code from the system service endpoint.
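The test class is pulled in from the finish directory by the include above. As a rough sketch only, the kind of check that testGetProperties() performs might look like the following, assuming a JAX-RS client and JUnit 5; the actual class reads the host and port from system properties, and the JAX-RS package namespace (javax versus jakarta) depends on the platform level in use:

import static org.junit.jupiter.api.Assertions.assertEquals;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

import org.junit.jupiter.api.Test;

public class SystemEndpointIT {

    @Test
    public void testGetProperties() {
        // The port is supplied on the command line, for example -Dsystem.http.port=9080.
        String port = System.getProperty("system.http.port", "9080");
        String url = "http://localhost:" + port + "/system/properties";

        Client client = ClientBuilder.newClient();
        Response response = client.target(url).request().get();

        // The system service is expected to respond with HTTP 200.
        assertEquals(200, response.getStatus());

        response.close();
        client.close();
    }
}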

Create the InventoryEndpointIT class.
inventory/src/test/java/it/io/openliberty/guides/inventory/InventoryEndpointIT.java

InventoryEndpointIT.java

link:finish/inventory/src/test/java/it/io/openliberty/guides/inventory/InventoryEndpointIT.java[role=include]
  • The testEmptyInventory() method checks that the inventory service has a total of 0 systems before anything is added to it.

  • The testHostRegistration() method checks that the system service was added to inventory properly.

  • The testSystemPropertiesMatch() method checks that the system properties match what was added to the inventory service.

  • The testUnknownHost() method checks that an error is raised when an unknown host name is added to the inventory service.

  • The systemServiceIp variable contains the same IP address that you retrieved in the previous section when you manually added the system service to the inventory. This IP address is passed in when you run the tests.
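As a rough sketch only, a test might build its request URL from the system properties that are passed on the Maven command line in the next section; the class and variable names here are hypothetical:

public class InventoryTestUrlExample {
    public static void main(String[] args) {
        // Values supplied on the Maven command line, for example:
        //   -Dsystem.ip=172.17.0.2 -Dinventory.http.port=9091
        String systemServiceIp = System.getProperty("system.ip");
        String inventoryPort = System.getProperty("inventory.http.port", "9091");

        // URL that registers and then returns the system properties for that host.
        String url = "http://localhost:" + inventoryPort
                + "/inventory/systems/" + systemServiceIp;
        System.out.println("Inventory registration URL: " + url);
    }
}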

Running the tests

Run the Maven package goal to compile the test classes. Then, run the Maven failsafe goal to test the services that are running in the Docker containers, replacing [system-ip-address] with the IP address that you determined previously:

mvn package
mvn failsafe:integration-test -Dsystem.ip=[system-ip-address] -Dinventory.http.port=9091 -Dsystem.http.port=9080

If the tests pass, you see output similar to the following:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.653 s - in it.io.openliberty.guides.system.SystemEndpointIT

Results:

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.inventory.InventoryEndpointIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.935 s - in it.io.openliberty.guides.inventory.InventoryEndpointIT

Results:

Tests run: 4, Failures: 0, Errors: 0, Skipped: 0

When you are finished with the services, run the following commands to stop and remove your containers:

docker stop inventory system
docker rm inventory system

Great work! You’re done!

You have just built Docker images and run two microservices on Open Liberty in containers.
