googlecloudplatform / microservices-demo

Sample cloud-first application with 11 microservices showcasing Kubernetes, Istio, and gRPC.

Home Page: https://cymbal-shops.retail.cymbal.dev

License: Apache License 2.0

C# 8.27% Dockerfile 4.14% Shell 6.56% Go 28.47% JavaScript 4.22% Python 28.39% HTML 9.97% Java 3.08% CSS 4.68% HCL 2.22%
gcp gke google-cloud grpc istio kubernetes kustomize sample-application samples skaffold terraform

microservices-demo's Introduction


Online Boutique is a cloud-first microservices demo application. The application is a web-based e-commerce app where users can browse items, add them to the cart, and purchase them.

Google uses this application to demonstrate how developers can modernize enterprise applications using Google Cloud products, including: Google Kubernetes Engine (GKE), Anthos Service Mesh (ASM), gRPC, Cloud Operations, Spanner, Memorystore, AlloyDB, and Gemini. This application works on any Kubernetes cluster.

If you’re using this demo, please ★Star this repository to show your interest!

Note to Googlers: Please fill out the form at go/microservices-demo.

Architecture

Online Boutique is composed of 11 microservices written in different languages that talk to each other over gRPC.

Architecture of microservices

Find the Protocol Buffer descriptions in the ./protos directory; a minimal client sketch follows the table below.

Service Language Description
frontend Go Exposes an HTTP server to serve the website. Does not require signup/login and generates session IDs for all users automatically.
cartservice C# Stores the items in the user's shopping cart in Redis and retrieves them.
productcatalogservice Go Provides the list of products from a JSON file and the ability to search products and get individual products.
currencyservice Node.js Converts one money amount to another currency. Uses real values fetched from the European Central Bank. It's the highest-QPS service.
paymentservice Node.js Charges the given credit card info (mock) with the given amount and returns a transaction ID.
shippingservice Go Gives shipping cost estimates based on the shopping cart. Ships items to the given address (mock).
emailservice Python Sends users an order confirmation email (mock).
checkoutservice Go Retrieves the user's cart, prepares the order, and orchestrates payment, shipping, and the email notification.
recommendationservice Python Recommends other products based on what's in the cart.
adservice Java Provides text ads based on given context words.
loadgenerator Python/Locust Continuously sends requests imitating realistic user shopping flows to the frontend.
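
For illustration, here is a minimal Go sketch of one service calling another over gRPC. The generated-stub import path, the method names, and the productcatalogservice address (port 3550, as seen in the issue logs further below) follow the repo's conventions but are assumptions, not verified code:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"

        // assumed import path for the stubs generated from ./protos/demo.proto
        pb "github.com/GoogleCloudPlatform/microservices-demo/src/frontend/genproto"
    )

    func main() {
        // Service addresses are injected via env vars in the Kubernetes
        // manifests; the address is hardcoded here only for the sketch.
        conn, err := grpc.Dial("productcatalogservice:3550", grpc.WithInsecure())
        if err != nil {
            log.Fatalf("could not connect: %v", err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        // ListProducts is assumed from the ProductCatalogService in demo.proto.
        res, err := pb.NewProductCatalogServiceClient(conn).ListProducts(ctx, &pb.Empty{})
        if err != nil {
            log.Fatalf("ListProducts failed: %v", err)
        }
        log.Printf("catalog has %d products", len(res.GetProducts()))
    }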

Screenshots

Home Page Checkout Screen
Screenshot of store homepage Screenshot of checkout screen

Quickstart (GKE)

  1. Ensure you have the following requirements: a Google Cloud project, and a shell environment with gcloud, git, and kubectl installed.

  2. Clone the repository.

    git clone https://github.com/GoogleCloudPlatform/microservices-demo
    cd microservices-demo/
  3. Set the Google Cloud project and region and ensure the Google Kubernetes Engine API is enabled.

    export PROJECT_ID=<PROJECT_ID>
    export REGION=us-central1
    gcloud services enable container.googleapis.com \
      --project=${PROJECT_ID}

    Substitute <PROJECT_ID> with the ID of your Google Cloud project.

  4. Create a GKE cluster and get the credentials for it.

    gcloud container clusters create-auto online-boutique \
      --project=${PROJECT_ID} --region=${REGION}

    Creating the cluster may take a few minutes.

  5. Deploy Online Boutique to the cluster.

    kubectl apply -f ./release/kubernetes-manifests.yaml
  6. Wait for the pods to be ready.

    kubectl get pods

    After a few minutes, you should see the Pods in a Running state:

    NAME                                     READY   STATUS    RESTARTS   AGE
    adservice-76bdd69666-ckc5j               1/1     Running   0          2m58s
    cartservice-66d497c6b7-dp5jr             1/1     Running   0          2m59s
    checkoutservice-666c784bd6-4jd22         1/1     Running   0          3m1s
    currencyservice-5d5d496984-4jmd7         1/1     Running   0          2m59s
    emailservice-667457d9d6-75jcq            1/1     Running   0          3m2s
    frontend-6b8d69b9fb-wjqdg                1/1     Running   0          3m1s
    loadgenerator-665b5cd444-gwqdq           1/1     Running   0          3m
    paymentservice-68596d6dd6-bf6bv          1/1     Running   0          3m
    productcatalogservice-557d474574-888kr   1/1     Running   0          3m
    recommendationservice-69c56b74d4-7z8r5   1/1     Running   0          3m1s
    redis-cart-5f59546cdd-5jnqf              1/1     Running   0          2m58s
    shippingservice-6ccc89f8fd-v686r         1/1     Running   0          2m58s
    
  7. Access the web frontend in a browser using the frontend's external IP.

    kubectl get service frontend-external | awk '{print $4}'

    Visit http://EXTERNAL_IP in a web browser to access your instance of Online Boutique.

  8. Congrats! You've deployed the default Online Boutique. To deploy a different variation of Online Boutique (e.g., with Google Cloud Operations tracing, Istio, etc.), see Deploy Online Boutique variations with Kustomize.

  9. Once you are done with it, delete the GKE cluster.

    gcloud container clusters delete online-boutique \
      --project=${PROJECT_ID} --region=${REGION}

    Deleting the cluster may take a few minutes.

Additional deployment options

  • Terraform: See these instructions to learn how to deploy Online Boutique using Terraform.
  • Istio / Anthos Service Mesh: See these instructions to deploy Online Boutique alongside an Istio-backed service mesh.
  • Non-GKE clusters (Minikube, Kind, etc): See the Development guide to learn how you can deploy Online Boutique on non-GKE clusters.
  • AI assistant using Gemini: See these instructions to deploy a Gemini-powered AI assistant that suggests products to purchase based on an image.
  • And more: The /kustomize directory contains instructions for customizing the deployment of Online Boutique with other variations.

Documentation

  • Development: learn how to run and develop this app locally.

Demos featuring Online Boutique

microservices-demo's People

Contributors

ahmetb, arbrown, askmeegs, bluphy, bourgeoisor, daniel-sanche, davidstanke, dependabot[bot], didier-durand, djmailhot, google-cloud-policy-bot[bot], j-windsor, jaspermai, jba, jkwlui, mathieu-benoit, michaelawyu, minherz, mmcloud, mtwo, muncus, nimjay, orthros, renovate-bot, rghetia, sebright, smeet07, tpryan, xtineskim, ymotongpoo


microservices-demo's Issues

Document features

Document and explain:

  • architecture diagram (image)
  • screenshots
  • features
    • native Kubernetes/GKE support
    • Stackdriver support
    • gRPC usage

Stretch goals:

  • development principles (minimal config required etc.)

cartservice: make the image smaller

Currently the cartservice image is 1.8 GiB. It takes forever to build and push, and probably accounts for a significant chunk of the initial build time.

There are base images like dotnet:2.1-aspnetruntime-alpine which can help reduce this.

AWS Go dependencies

I was wondering why we have a bunch of AWS Go dependencies that, AFAIK, we are not using within the Go services' code. A good example is in this file.

Deploy it on a local Kubernetes cluster with istio Jaeger tracing enabled

Hi,

I wanted to deploy the application on a local Kubernetes cluster, where I have installed istio. Istio Jaeger tracing is enabled. I was wondering if the microservices have any instrumentation done for Jaeger?
Also, what are the steps to deploy the application on a local Kubernetes cluster?

kubectl apply -f ../kubernetes-manifests/

Looks like the images are not pushed to a public Docker repo:
default adservice-7d87b789dc-gcxxf 0/1 ImagePullBackOff 0 45s
default adservice-fbcc45f96-vpsqd 0/2 Terminating 0 9m
default cartservice-7dd67f5c98-68jvt 0/1 ImagePullBackOff 0 45s
default checkoutservice-78b864646c-g6cmx 0/1 ImagePullBackOff 0 45s
default currencyservice-7b4b9b8995-4r6dp 0/1 ImagePullBackOff 0 45s
default emailservice-7f98964f6-nvvmv 0/1 ImagePullBackOff 0 45s
default frontend-6997f7b6d8-vsmmg 0/1 ImagePullBackOff 0 45s
default loadgenerator-6bd89f959b-b74x9 0/1 Init:0/1 0 45s
default paymentservice-6b5564854c-48482 0/1 ErrImagePull 0 44s
default productcatalogservice-b4c4966d9-gkst4 0/1 ImagePullBackOff 0 44s
default recommendationservice-7c77f6bff5-5448r 0/1 ImagePullBackOff 0 44s
default redis-cart-77d754f696-v76fh 1/1 Running 0 44s
default shippingservice-6bd888b795-l4pxw 0/1 ImagePullBackOff 0 44s

kubectl describe pod cartservice-7dd67f5c98-68jvt
Failed to pull image "cartservice": rpc error: code = Unknown desc = Error response from daemon: repository cartservice not found: does not exist or no pull access

Thanks!

adservice: minify docker image

Currently adservice image is around 900 MB. This probably can be reduced by using an alpine-based image, or moving the compiled artifacts into an alpine-based jre image.

Consider moving demo.proto to pb/hipstershop

Very much related to issue #1.

It seems like we'll soon end up with:

pb
├── demo.proto  <-- this doesn't look good here
├── googleapis
│   └── types
│       └── money.proto
└── grpc
    └── health
        └── v1
            └── health.proto

Therefore we should consider moving Hipster Shop related proto(s) to their own directory.

Introducing bug in ProductCatalogService for demos

Hi,

As discussed offline with @ahmetb, I introduced a bug in ProductCatalogService for a keynote demo of Next London 18.

I had some constraints for this demo:

  • The bug needed to consume some CPU cycles for the problem to be easily visible in Stackdriver Profiler
  • I needed to be able to trigger the bug without having to recreate a pod: in the new Stackdriver IRM product, we can show correlations between an alert and another metric. It's easier with an existing pod: if you trigger the bug with a new pod, it's harder to get a correlation because all metrics for the pod begin at exactly the same time.

Here is what I came up with:

  • The bug is just a loop in the parseCatalog function that runs for 200 microseconds. This loop consumes some CPU cycles.
  • The bug is triggered by a SIGUSR1, and removed (if needed) by SIGUSR2.

I think that this bug can be useful to other people for other demos.
Should I push a PR for that?
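
For reference, a minimal sketch of what such a toggle could look like in Go (names are hypothetical; the actual patch may differ): SIGUSR1 enables a busy-loop inside parseCatalog, SIGUSR2 disables it, so the bug can be flipped on a running pod.

    package main

    import (
        "os"
        "os/signal"
        "sync/atomic"
        "syscall"
        "time"
    )

    var slowCatalog int32 // 1 = bug enabled

    // watchBugSignals flips the bug at runtime without recreating the pod.
    func watchBugSignals() {
        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGUSR1, syscall.SIGUSR2)
        go func() {
            for sig := range sigs {
                if sig == syscall.SIGUSR1 {
                    atomic.StoreInt32(&slowCatalog, 1) // enable the bug
                } else {
                    atomic.StoreInt32(&slowCatalog, 0) // remove the bug
                }
            }
        }()
    }

    // parseCatalog stands in for the real function; when the bug is enabled
    // it busy-waits for 200 microseconds, burning CPU cycles that show up
    // in Stackdriver Profiler.
    func parseCatalog() {
        if atomic.LoadInt32(&slowCatalog) == 1 {
            start := time.Now()
            for time.Since(start) < 200*time.Microsecond {
                // spin
            }
        }
        // ... actual catalog parsing ...
    }

    func main() {
        watchBugSignals()
        parseCatalog()
        select {} // stand-in for the real gRPC server loop
    }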

grpc: Implement gRPC health check rpcs

We can also consider implementing a CLI tool that pings the server, so that we can rely on it as an exec livenessProbe (instead of a TCP ping); a rough sketch follows the checklist below.

  • adservice
  • cartservice
  • checkoutservice
  • currencyservice
  • emailservice
  • frontend
  • paymentservice
  • productcatalogservice
  • recommendationservice
  • shippingservice
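
A rough sketch of such a CLI probe in Go, assuming services expose the standard gRPC health service (the default address is illustrative): it exits non-zero on failure, so it can be wired up as an exec livenessProbe.

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        "google.golang.org/grpc"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        addr := "localhost:8080" // illustrative default; take from a flag in practice
        if len(os.Args) > 1 {
            addr = os.Args[1]
        }

        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, addr, grpc.WithInsecure(), grpc.WithBlock())
        if err != nil {
            fmt.Fprintf(os.Stderr, "failed to connect: %v\n", err)
            os.Exit(1)
        }
        defer conn.Close()

        // Empty service name asks for the overall server health.
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
            fmt.Fprintf(os.Stderr, "unhealthy: status=%v err=%v\n", resp.GetStatus(), err)
            os.Exit(1)
        }
        fmt.Println("SERVING")
    }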

Update skaffold config for compatibility with newer versions

Running skaffold run on v0.16.0 gives the error: config version out of date: run `skaffold fix`

...fortunately, skaffold fix seems to work, but we should probably update the config. I'm not too familiar with backwards compatibility in skaffold. Should we add a 'minimum skaffold version' note to the documentation?

add continuous builds

It would be good to have Docker image build checks to begin with. We recently had code merged that failed to compile.

update Go services to use Modules

Right now, all Go services are copy/pasting the

  • initTracing()
  • initProfiling()
  • initStats()
  • [initDebugger() – does not exist yet]

methods. The problem with extracting these into a utility package is that each src/{SERVICE} is self-contained and doesn't depend on src/{ANOTHER_DIR}.

Not sure how to solve this.

JSON Structured logs for Stackdriver Logging

Currently the services in this demo, at least those written in Go, emit logs to stderr, which makes the logs confusing: some info-level logs are treated as error-level ones due to Stackdriver Logging's default handling. Changing the log destination to stdout solves this.

Also, all log messages are plain text payloads. Though this is primarily a GKE demonstration, it would be great if it demonstrated the features of Stackdriver Logging as well. Using JSON (structured log format) would show how Stackdriver Logging can handle logs more effectively, for example with handy filters.
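
A possible fix for the Go services, sketched with logrus (which they already use), assuming a logrus version whose JSONFormatter supports FieldMap: write JSON to stdout and rename the level field to "severity" so Stackdriver Logging picks it up.

    package main

    import (
        "os"

        "github.com/sirupsen/logrus"
    )

    func newLogger() *logrus.Logger {
        log := logrus.New()
        log.Out = os.Stdout // stderr is treated as ERROR by default
        log.Formatter = &logrus.JSONFormatter{
            FieldMap: logrus.FieldMap{
                logrus.FieldKeyLevel: "severity", // Stackdriver reads this field
                logrus.FieldKeyMsg:   "message",
                logrus.FieldKeyTime:  "timestamp",
            },
        }
        return log
    }

    func main() {
        log := newLogger()
        log.Info("checkout started") // emitted as {"severity":"info",...} on stdout
    }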

Progress tracker

(added by ahmetb, cc: @ymotongpoo )

  • adservice (#59)
  • cartservice
  • checkoutservice (#48)
  • currencyservice (#66)
  • emailservice (#66)
  • frontend
  • loadgenerator (not eligible, produces human-readable output)
  • paymentservice (#66)
  • productcatalogservice (#48)
  • recommendationservice (#66)
  • shippingservice (#48)

Missing libcrypt dependency in adservice

When starting up, adservice throws a large stack trace pointing to what seems to be a missing libcrypt dependency:

java.lang.IllegalArgumentException: Failed to load any of the given libraries: [netty_tcnative_linux_x86_64, netty_tcnative_linux_x86_64_fedora, netty_tcnative_x86_64, netty_tcnative]
        at io.netty.util.internal.NativeLibraryLoader.loadFirstAvailable(NativeLibraryLoader.java:93)
        at io.netty.handler.ssl.OpenSsl.loadTcNative(OpenSsl.java:440)
        at io.netty.handler.ssl.OpenSsl.<clinit>(OpenSsl.java:97)
        at io.grpc.netty.GrpcSslContexts.defaultSslProvider(GrpcSslContexts.java:244)
        at io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:171)
        at io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:120)
        at io.grpc.netty.NettyChannelBuilder$NettyTransportFactory$DefaultNettyTransportCreationParamsFilterFactory.<init>(NettyChannelBuilder.java:561)
        at io.grpc.netty.NettyChannelBuilder$NettyTransportFactory$DefaultNettyTransportCreationParamsFilterFactory.<init>(NettyChannelBuilder.java:554)
        at io.grpc.netty.NettyChannelBuilder$NettyTransportFactory.<init>(NettyChannelBuilder.java:489)
        at io.grpc.netty.NettyChannelBuilder.buildTransportFactory(NettyChannelBuilder.java:337)
        at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:405)
        at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:206)
        at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:157)
        at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:149)
        at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:151)
        at com.google.cloud.trace.v2.stub.GrpcTraceServiceStub.create(GrpcTraceServiceStub.java:70)
        at com.google.cloud.trace.v2.stub.TraceServiceStubSettings.createStub(TraceServiceStubSettings.java:99)
        at com.google.cloud.trace.v2.TraceServiceClient.<init>(TraceServiceClient.java:137)
        at com.google.cloud.trace.v2.TraceServiceClient.create(TraceServiceClient.java:118)
        at io.opencensus.exporter.trace.stackdriver.StackdriverV2ExporterHandler.createWithCredentials(StackdriverV2ExporterHandler.java:160)
        at io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter.createAndRegister(StackdriverTraceExporter.java:87)
        at hipstershop.AdService.initStackdriver(AdService.java:214)
        at hipstershop.AdService$2.run(AdService.java:244)
        at java.lang.Thread.run(Thread.java:748)
        Suppressed: java.lang.UnsatisfiedLinkError: /tmp/libnetty_tcnative_linux_x86_646356280454963582202.so: Error loading shared library libcrypt.so.1: No such file or directory (needed by /tmp/libnetty_tcnative_linux_x86_646356280454963582202.so)
                at java.lang.ClassLoader$NativeLibrary.load(Native Method)
                at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
                at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
                at java.lang.Runtime.load0(Runtime.java:809)
                at java.lang.System.load(System.java:1086)
                at io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:36)
                at io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:243)
                at io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:187)
                at io.netty.util.internal.NativeLibraryLoader.loadFirstAvailable(NativeLibraryLoader.java:85)
                ... 23 more

Provide tagged releases/images

It should be easy for someone who is just playing with this app, or deploying it for a demo, to take the YAMLs and apply them to a cluster.

This requires:

  1. tagging git releases
  2. making pre-built container images available on gcr publicly (with :tags)
  3. since tutorials etc will rely on this, also need long-term storage for images (e.g. gcr.io/google-samples)

AdService trace logging is reported as Error instead of Info.

Here is the sample log.

insertId: "8m85fufgieo83"
labels: {
compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-pmqdr"
container.googleapis.com/namespace_name: "default"
container.googleapis.com/pod_name: "adservice-74bd8cb69-sw8wj"
container.googleapis.com/stream: "stderr"
}
logName: "projects/microservices-demo-223307/logs/server"
receiveTimestamp: "2018-11-30T11:52:04.363633212Z"
resource: {
labels: {
cluster_name: "demo"
container_name: "server"
instance_id: "8772787556939461677"
namespace_id: "default"
pod_id: "adservice-74bd8cb69-sw8wj"
project_id: "microservices-demo-223307"
zone: "europe-west4-b"
}
type: "container"
}
severity: "ERROR"
textPayload: "INFO: SpanData{context=SpanContext{traceId=TraceId{traceId=0000000000000000aa4969b250fd2c88}, spanId=SpanId{spanId=edb2a981d3d2dfa0}, traceOptions=TraceOptions{sampled=true}}, parentSpanId=SpanId{spanId=3527fd550f2d50ee}, hasRemoteParent=true, name=Recv.hipstershop.AdService.GetAds, kind=null, startTimestamp=Timestamp{seconds=1543578721, nanos=420000047}, attributes=Attributes{attributeMap={}, droppedAttributesCount=0}, annotations=TimedEvents{events=[], droppedEventsCount=0}, messageEvents=TimedEvents{events=[TimedEvent{timestamp=Timestamp{seconds=1543578721, nanos=420086897}, event=MessageEvent{type=RECEIVED, messageId=0, uncompressedMessageSize=0, compressedMessageSize=0}}, TimedEvent{timestamp=Timestamp{seconds=1543578721, nanos=420566443}, event=MessageEvent{type=SENT, messageId=0, uncompressedMessageSize=116, compressedMessageSize=116}}], droppedEventsCount=0}, links=Links{links=[], droppedLinksCount=0}, childSpanCount=null, status=Status{canonicalCode=OK, description=null}, endTimestamp=Timestamp{seconds=1543578721, nanos=420769048}}
"
timestamp: "2018-11-30T11:52:02Z"

Cannot build Go services

Building [gcr.io/bamboo-shift-504/productcatalogservice]...
2018/08/05 08:29:51 No matching credentials found for index.docker.io, falling back on anonymous
Sending build context to Docker daemon  114.2kB
Step 1/15 : FROM golang:1.10-alpine as builder
 ---> 34d3217973fd
Step 2/15 : RUN apk add --no-cache ca-certificates git &&       wget -qO/go/bin/dep https://github.com/golang/dep/releases/download/v0.5.0/dep-linux-amd64 &&       chmod +x /go/bin/dep
 ---> Using cache
 ---> f4220d16f83a
Step 3/15 : ENV PROJECT github.com/GoogleCloudPlatform/microservices-demo/src/productcatalogservice
 ---> Using cache
 ---> aef520226cc9
Step 4/15 : WORKDIR /go/src/$PROJECT
 ---> Using cache
 ---> 189a2e5a473b
Step 5/15 : COPY Gopkg.* ./
 ---> Using cache
 ---> 610e30458416
Step 6/15 : RUN dep ensure --vendor-only -v
 ---> Running in 8bfb1fab44f5
(1/16) Wrote golang.org/x/oauth2@master
(2/16) Wrote google.golang.org/[email protected]
(3/16) Failed to write github.com/google/[email protected]
(4/16) Failed to write github.com/google/pprof@master
(5/16) Failed to write github.com/googleapis/[email protected]
(6/16) Failed to write [email protected]
(7/16) Failed to write golang.org/x/net@master
(8/16) Failed to write golang.org/x/sys@master
(9/16) Failed to write google.golang.org/[email protected]
(10/16) Failed to write google.golang.org/genproto@master
(11/16) Failed to write golang.org/x/[email protected]
(12/16) Failed to write golang.org/x/sync@master
(13/16) Failed to write github.com/golang/[email protected]
(14/16) Failed to write cloud.google.com/[email protected]
(15/16) Failed to write contrib.go.opencensus.io/exporter/[email protected]
(16/16) Failed to write google.golang.org/api@master
grouped write of manifest, lock and vendor: error while writing out vendor tree: failed to write dep tree: failed to export github.com/google/go-cmp: failed to fetch source for https://github.com/google/go-cmp: unable to get repository: Cloning into '/go/pkg/dep/sources/https---github.com-google-go--cmp'...
fatal: unable to access 'https://github.com/google/go-cmp/': Could not resolve host: github.com
: command failed: [git clone --recursive -v --progress https://github.com/google/go-cmp /go/pkg/dep/sources/https---github.com-google-go--cmp]: exit status 128
FATA[0048] build step: building [gcr.io/bamboo-shift-504/productcatalogservice]: build artifact: running build: The command '/bin/sh -c dep ensure --vendor-only -v' returned a non-zero code: 1

Frontend doesn't compile.

Compiling the frontend at this commit (currently master) produces this error:

# github.com/googlecloudplatform/microservices-demo/src/frontend
./main.go:20:2: imported and not used: "log"
./main.go:168:13: not enough arguments in call to initStats
        have (*stackdriver.Exporter)
        want (logrus.FieldLogger, *stackdriver.Exporter)

/cc @rghetia

adservice: revise jvm flags for container awareness

from https://jaxenter.com/nobody-puts-java-container-139373.html:

Memory
The JVM will now consider cgroups memory limits if the following flags are specified:

  • -XX:+UseCGroupMemoryLimitForHeap
  • -XX:+UnlockExperimentalVMOptions

In that case the Max Heap space will automatically (if not overridden) be set to the limit specified by the cgroup. As we discussed earlier, the JVM uses memory besides the Heap, so this will not prevent the OOM killer from removing containers. But, especially given that the garbage collector becomes more aggressive as the Heap fills up, this is already a great improvement.

If we start running into OOM kills in adservice, this likely would be the underlying reason. It would be good to understand and follow the best practice for this if Java isn't doing the right thing by default.

Standardize port numbers for grpc services

To prevent port bookkeeping across the board.

Similarly, we should probably standardize port numbers across all prometheus/opencensus/pprof endpoints as well (see the sketch after the TODO list below).

TODO

  • find out what's the preferred port number for grpc serving in the community.
  • update all gRPC services to expose the same server port
  • update Kubernetes manifests for containerport / match service port to container port?
  • update Istio manifests
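
As a sketch of the convention this could converge on (the default port here is purely illustrative): every service reads a PORT environment variable and falls back to one agreed default, so manifests only ever set a single well-known value.

    package main

    import (
        "fmt"
        "net"
        "os"
    )

    // listenAddr returns the standardized serving address: the PORT env var
    // if set, otherwise a hypothetical project-wide default.
    func listenAddr() string {
        port := os.Getenv("PORT")
        if port == "" {
            port = "8080" // illustrative default for gRPC serving
        }
        return fmt.Sprintf(":%s", port)
    }

    func main() {
        lis, err := net.Listen("tcp", listenAddr())
        if err != nil {
            panic(err)
        }
        defer lis.Close()
        // register and serve the gRPC server on lis ...
    }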

service pending

Hi,
Thank you for this great demo.
I'm running the demo locally with Docker Desktop. I can't access the home page, and the following pods are pending: paymentservice, productcatalogservice, recommendationservice, and shippingservice.

Please help!

rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.100.182.255:3550: i/o timeout"
could not retrieve products
main.(*frontendServer).homeHandler
/go/src/github.com/GoogleCloudPlatform/microservices-demo/src/frontend/handlers.go:53
main.(*frontendServer).(main.homeHandler)-fm
/go/src/github.com/GoogleCloudPlatform/microservices-demo/src/frontend/main.go:122
net/http.HandlerFunc.ServeHTTP
/usr/local/go/src/net/http/server.go:1947
github.com/GoogleCloudPlatform/microservices-demo/src/frontend/vendor/github.com/gorilla/mux.(*Router).ServeHTTP
/go/src/github.com/GoogleCloudPlatform/microservices-demo/src/frontend/vendor/github.com/gorilla/mux/mux.go:162
main.(*logHandler).ServeHTTP
/go/src/github.com/GoogleCloudPlatform/microservices-demo/src/frontend/middleware.go:81
main.ensureSessionID.func1
/go/src/github.com/GoogleCloudPlatform/microservices-demo/src/frontend/middleware.go:103
net/http.HandlerFunc.ServeHTTP
/usr/local/go/src/net/http/server.go:1947
github.com/GoogleCloudPlatform/microservices-demo/src/frontend/vendor/go.opencensus.io/plugin/ochttp.(*Handler).ServeHTTP
/go/src/github.com/GoogleCloudPlatform/microservices-demo/src/frontend/vendor/go.opencensus.io/plugin/ochttp/server.go:82
net/http.serverHandler.ServeHTTP
/usr/local/go/src/net/http/server.go:2697
net/http.(*conn).serve
/usr/local/go/src/net/http/server.go:1830
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:2361

currencyservice - unable to access external endpoint with istio

Currency service uses this endpoint - http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml

Maybe this is due to a recent change to the endpoint, but it redirects to an https URL,
so if we enable Istio, we run into the issue described here.

There are two options:

  1. Open all egress traffic [disable Istio egress interception]
  2. Find another alternative, or host an http equivalent of this endpoint?

Thanks,

Go Microservices Build Error: OpenCensus Stackdriver Exporter Moved to New Repo

Build error for Go microservices: the OpenCensus Stackdriver exporter moved to a different repo.
When I try to build all images from source with Skaffold, I get the following error:

package go.opencensus.io/exporter/stackdriver: no Go files in /go/src/go.opencensus.io/exporter/stackdriver

I investigated the OpenCensus exporters and realized that the community exporters were moved to the
contrib.go.opencensus.io subdomain. So, to build the Go microservices successfully, change the Dockerfiles and .go files that use the Stackdriver exporter.

Changes in Dockerfile:

from

RUN go get -d golang.org/x/net/context \
 ...\
  go.opencensus.io/exporter/stackdriver \
...

to

RUN go get -d golang.org/x/net/context \
...\
  contrib.go.opencensus.io/exporter/stackdriver \
...

Changes in .go files:

from
"go.opencensus.io/exporter/stackdriver"

to
"contrib.go.opencensus.io/exporter/stackdriver"

Add observability endpoints to the frontend

I think we can make it easy to provide a single pane of glass that:

  • (if onGCE) provides links to Stackdriver console
  • offer proxies to /varz, /debug/{tracez,rpcz}, /healthz

of each service.

This creates unnecessary coupling between the frontend and all backend services (although only on demand).

Then we can offer this at /_admin/monitor/{service}/{feature} with a link from the footer.
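
A rough sketch of what the proposed /_admin/monitor/{service}/{feature} endpoint could look like, using gorilla/mux (already vendored by the frontend); the address map and route shape are assumptions, not the actual implementation.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"

        "github.com/gorilla/mux"
    )

    // hypothetical map from service name to its debug/metrics HTTP address
    var debugAddrs = map[string]string{
        "checkoutservice": "http://checkoutservice:8081",
        "adservice":       "http://adservice:8081",
    }

    func adminMonitorHandler(w http.ResponseWriter, r *http.Request) {
        vars := mux.Vars(r)
        target, ok := debugAddrs[vars["service"]]
        if !ok {
            http.NotFound(w, r)
            return
        }
        u, err := url.Parse(target)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        // Forward /_admin/monitor/{service}/{feature} to /{feature} on the target,
        // e.g. feature = healthz or debug/tracez.
        r.URL.Path = "/" + vars["feature"]
        httputil.NewSingleHostReverseProxy(u).ServeHTTP(w, r)
    }

    func main() {
        r := mux.NewRouter()
        r.HandleFunc("/_admin/monitor/{service}/{feature:.*}", adminMonitorHandler)
        log.Fatal(http.ListenAndServe(":8080", r))
    }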

"No Space Left on Device Error" - some recommendations in main README.md

One suggestion: since most users will run this on GKE from Google Cloud Shell, many will run into a "No Space Left on Device" error. If the main README.md specified the command skaffold run -p gcb to use Google Cloud Container Builder, it would save users the time and frustration of figuring that out.

#24

Also, please include in README.md that the Google Cloud Container Builder API needs to be enabled.

Thanks!

multiple service restart loops after starting to use grpc_health_probe

  • cartservice = low crash rate
  • recommendationservice, productcatalogservice = high crash rate

This is happening due to the health rpc not finishing within 1 second.

kubectl describe is showing:

health check rpc failed: rpc error: code = DeadlineExceeded desc = Deadline Exceeded
  Warning  BackOff    8m49s (x12099 over 3d)  kubelet, gke-demo-app-default-pool-e4de3ba2-b7ss  Back-off restarting failed container
  Warning  Unhealthy  2m44s (x4308 over 3d)   kubelet, gke-demo-app-default-pool-e4de3ba2-b7ss  Readiness probe failed: config:
> addr=:8080 conn_timeout=1s rpc_timeout=1s
> tls=false
establishing connection
health check rpc failed: rpc error: code = DeadlineExceeded desc = Deadline Exceeded

We probably need to dig into why certain services are responding late. According to the collected traces, the same set of services shows elevated response latencies (which could be because the pods are crashlooping, but could also be why the probes are failing).

Instrument all services with OpenCensus tracing

Right now there are several issues like:

  • Python OC libraries cause memory leak with 1.0 sampling rate
  • OC libraries for runtimes like .NET Core are not ready to use
  • Some microservices use Stackdriver tracing directly instead of OC.

Ideally the solution should be vendor-native (and work through some vendor config to export the traces).
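
For the Go services, the vendor-neutral setup would look roughly like this (a sketch using the OpenCensus APIs; the sampling rate is illustrative): register the Stackdriver exporter and a sampler once, and never call Stackdriver tracing APIs directly.

    package main

    import (
        "log"

        "contrib.go.opencensus.io/exporter/stackdriver"
        "go.opencensus.io/trace"
    )

    func initTracing(projectID string) {
        exporter, err := stackdriver.NewExporter(stackdriver.Options{ProjectID: projectID})
        if err != nil {
            log.Printf("tracing disabled: %v", err)
            return
        }
        trace.RegisterExporter(exporter)
        // A fractional sampler avoids the memory issues seen with 1.0 sampling.
        trace.ApplyConfig(trace.Config{DefaultSampler: trace.ProbabilitySampler(0.1)})
    }

    func main() {
        initTracing("my-project") // hypothetical project ID
        // ... start the service ...
    }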

image build time doubled after image size optimizations

Below is a snapshot of the build times listed at https://travis-ci.com/GoogleCloudPlatform/microservices-demo/builds.

It looks like after merging PRs

  1. #51 emailservice image optimization
  2. #52 loadgenerator image optimizations

the build times have risen from 12 minutes on average to 22 minutes (after #51), and then to 26 minutes (after #52).

So we're probably providing a faster image push experience, but the overall build time has doubled. I assume we're now serving developers with high upload bandwidth worse.

(screenshot of build times)

cc: @orthros

cartservice: unhealthy signals from grpc

cartservice (@ 3b6d386) restarts about once every 30 minutes and has a bunch of failed probe events.

Events:
  Type     Reason     Age                      From                                           Message
  ----     ------     ----                     ----                                           -------
  Warning  Unhealthy  39m (x13 over 3h33m)     kubelet, gke-istio-default-pool-d396f934-mvhs  Liveness probe failed: service unhealthy (responded with "NOT_SERVING")
  Warning  Unhealthy  39m (x12 over 3h33m)     kubelet, gke-istio-default-pool-d396f934-mvhs  Readiness probe failed: service unhealthy (responded with "NOT_SERVING")
  Warning  Unhealthy  21m (x83 over 4h16m)     kubelet, gke-istio-default-pool-d396f934-mvhs  Readiness probe failed: timeout: health rpc did not complete within 1s
  Warning  Unhealthy  6m22s (x103 over 4h16m)  kubelet, gke-istio-default-pool-d396f934-mvhs  Liveness probe failed: timeout: health rpc did not complete within 1s

most prominently the timeout: health rpc did not complete within 1s error from grpc_health_probe.

This requires some investigation as to why Check() can't complete within 1s.

Don't hardcode the GCP project name

The demo hardcodes a GCP project name (microservices-demo-app). Users who don't have access to the hardcoded project cannot run the instructions due to ACL issues.
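
One way to fix this, sketched in Go (the env var name is an assumption): resolve the project ID at runtime from the environment, falling back to the GCE/GKE metadata server, instead of hardcoding it.

    package main

    import (
        "log"
        "os"

        "cloud.google.com/go/compute/metadata"
    )

    // projectID resolves the GCP project at runtime rather than hardcoding it.
    func projectID() string {
        if id := os.Getenv("GOOGLE_CLOUD_PROJECT"); id != "" {
            return id
        }
        if metadata.OnGCE() {
            if id, err := metadata.ProjectID(); err == nil {
                return id
            }
        }
        log.Fatal("could not determine GCP project ID")
        return ""
    }

    func main() {
        log.Printf("using project: %s", projectID())
    }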

cartservice: Create genproto.sh

Currently cartservice only has generate_protos.bat, which won't work on Unix-like systems.

  • Create genproto.sh
  • Rename generate_protos.bat to genproto.bat for consistency.

grpc v1.15 will break health check protocol

grpc 1.15 has added a new rpc to health.proto:

  rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);	

Since we implement the service Health directly, along with its rpc Check, the addition of this method will break at least some parts of microservices-demo.

For Go, this 100% breaks all services with [email protected]. So I went ahead and pinned all grpc-go dependencies to the =v1.14.0 constraint in Gopkg.toml.

For dynamic languages like Node, Python: I would not be surprised if next time we update requirements.txt/package.json, health servers will stop starting because rpc Watch isn't implemented.
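
For Go, one way to stay compatible going forward is sketched below: register the stock health server from google.golang.org/grpc/health, which implements Check and (in newer grpc-go releases) Watch, instead of hand-implementing the Health service.

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
        "google.golang.org/grpc/health"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        lis, err := net.Listen("tcp", ":3550")
        if err != nil {
            log.Fatal(err)
        }
        srv := grpc.NewServer()

        // The stock health server tracks whatever RPCs health.proto grows.
        hs := health.NewServer()
        hs.SetServingStatus("", healthpb.HealthCheckResponse_SERVING)
        healthpb.RegisterHealthServer(srv, hs)

        // register application services here ...
        log.Fatal(srv.Serve(lis))
    }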

frontend: decouple healthcheck endpoint from path=/

Right now, the health check for the frontend service is GET /.

This RPC depends on health of:

  • productservice
  • adservice (it was bringing the frontend down even though we were ignoring its errors, because no rpc timeout was set)
  • cartservice
    • which depends on redis-cart
  • currencyservice

Ideally these should be decoupled, so we need a dedicated /healthz for the frontend.
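
A minimal sketch of such a /healthz (handler name and port are assumptions): it reports only the process's own liveness and deliberately makes no backend calls.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func healthzHandler(w http.ResponseWriter, r *http.Request) {
        // No calls to productcatalog/ad/cart/currency services here:
        // liveness of the frontend should not depend on its backends.
        w.WriteHeader(http.StatusOK)
        fmt.Fprintln(w, "ok")
    }

    func main() {
        http.HandleFunc("/healthz", healthzHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }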
