amazon-archives / service-discovery-ecs-dns

ARCHIVED: Service Discovery via DNS with ECS.

License: Apache License 2.0

Go 68.47% Python 19.11% HTML 11.49% Makefile 0.93%

service-discovery-ecs-dns's Introduction

Archived

This project is archived; it is no longer needed now that Amazon ECS provides built-in service discovery.


Service Discovery for AWS EC2 Container Service

Goals

This project was created to make it easier to build microservices on top of AWS ECS.

Some of the tenets are:

  • Start services in any order
  • Stop services with confidence
  • Automatically register/de-register services when started/stopped
  • Load balance access to services
  • Monitor the health of the service

Installation

You need a private hosted zone in Route53 to register all the containers for each service.

To create an ECS cluster with all the required configuration, the Route53 private hosted zone, and the example microservices, you can use the CloudFormation template.
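If you are not using the template, the hosted zone itself can be declared in CloudFormation along these lines (a minimal sketch; the zone name servicediscovery.internal matches the agent's default, while the logical name and the VPC reference are illustrative):

    "PrivateHostedZone": {
      "Type": "AWS::Route53::HostedZone",
      "Properties": {
        "Name": "servicediscovery.internal",
        "VPCs": [
          {
            "VPCId": { "Ref": "VPC" },
            "VPCRegion": { "Ref": "AWS::Region" }
          }
        ]
      }
    }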

You should create a Lambda function to monitor the services, in case a host fails completely and the agent cannot delete the records. You can also use the Lambda function to perform HTTP health checks on your containers.

Create a role for the Lambda function. The role should have full access to Route53 (or at least to the internal hosted zone), read-only access to ECS, and read-only access to EC2 and your VPC. The Lambda function needs to call the AWS APIs, so it should be placed in a subnet that provides internet access via a NAT gateway and given a security group that allows the corresponding outbound traffic.

Example CloudFormation template for the role:

    "LambdaRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version" : "2012-10-17",
          "Statement": [ {
            "Effect": "Allow",
            "Principal": {
              "Service": [ "lambda.amazonaws.com" ]
            },
            "Action": [ "sts:AssumeRole" ]
          } ]
        },
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole",
          "arn:aws:iam::aws:policy/AmazonRoute53FullAccess",
          "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess"
        ],
        "Policies": [
          {
            "PolicyName": "ecs-read-only",
            "PolicyDocument": {
              "Statement": [
                {
                  "Effect": "Allow",
                  "Action": [
                    "ecs:Describe*",
                    "ecs:List*"
                  ],
                  "Resource": "*"
                }
              ]
            }
          }
        ],
        "Path": "/"
      }
    }
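The function itself then references this role and your VPC configuration. A minimal sketch only; the logical names, handler, code location, and timeout below are illustrative placeholders rather than values from this project's template:

    "HealthCheckLambda": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Runtime": "python3.6",
        "Handler": "lambda_health_check.lambda_handler",
        "Role": { "Fn::GetAtt": [ "LambdaRole", "Arn" ] },
        "Timeout": 300,
        "Code": {
          "S3Bucket": { "Ref": "CodeBucket" },
          "S3Key": "lambda_health_check.zip"
        },
        "VpcConfig": {
          "SubnetIds": [ { "Ref": "PrivateSubnet" } ],
          "SecurityGroupIds": [ { "Ref": "LambdaSecurityGroup" } ]
        }
      }
    }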

Create a Lambda function using this code. You can modify the following parameters in the function:

  • ecs_clusters: An array of the clusters where the agent is installed. You can leave it empty, and the function will get the list of clusters from your account.
  • check_health: Indicates whether to perform an HTTP health check on all the containers.
  • check_health_path: The path of the health check URL in the containers.

You should then schedule the Lambda function (using the python3.6 runtime) to run every 5 minutes.
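For example, a CloudWatch Events rule can provide that schedule. A minimal sketch, assuming the function is declared as HealthCheckLambda as in the snippet above (logical names are illustrative):

    "HealthCheckSchedule": {
      "Type": "AWS::Events::Rule",
      "Properties": {
        "ScheduleExpression": "rate(5 minutes)",
        "State": "ENABLED",
        "Targets": [
          {
            "Arn": { "Fn::GetAtt": [ "HealthCheckLambda", "Arn" ] },
            "Id": "HealthCheckLambdaTarget"
          }
        ]
      }
    },
    "HealthCheckSchedulePermission": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:InvokeFunction",
        "FunctionName": { "Ref": "HealthCheckLambda" },
        "Principal": "events.amazonaws.com",
        "SourceArn": { "Fn::GetAtt": [ "HealthCheckSchedule", "Arn" ] }
      }
    }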

Usage

Once the cluster is created, you can start launching tasks and services into the ECS cluster. For each task you want to register as a microservice, specify an environment variable in the task definition. The name of the variable should be SERVICE_<port>_NAME, where <port> is the port your service listens on inside the container, and the value is the name of the microservice using the standard _service._proto scheme (see https://en.wikipedia.org/wiki/SRV_record), for example SERVICE_8081_NAME=_calc._tcp. You can define multiple services per container by using different ports.

You should publish the container's port using the PortMappings properties. When you publish the port, it is recommended not to specify the HostPort and to let it be assigned randomly within the ephemeral port range; this way you can have multiple containers of the same service running on the same server.
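For illustration, the relevant part of a task definition for the calc example could look like the following (CloudFormation syntax; the image and memory values are placeholders rather than values taken from this repository). The HostPort is omitted on purpose so that it is assigned from the ephemeral range, as recommended above:

    "ContainerDefinitions": [
      {
        "Name": "calc",
        "Image": "<your-registry>/calc:latest",
        "Memory": 128,
        "Environment": [
          { "Name": "SERVICE_8081_NAME", "Value": "_calc._tcp" }
        ],
        "PortMappings": [
          { "ContainerPort": 8081 }
        ]
      }
    ]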

When the service starts and the container is launched on one of the servers, the ecssd agent automatically registers a new DNS record with the name <serviceName>.servicediscovery.internal and the type SRV. For each instance, the agent also creates an A record so that the host name resolves within the private hosted zone.

You can use this name to access the service from your consumers; Route53 balances the requests between your different containers for the same service. For example, in Go you can use:

// getServiceEndpoint looks up the SRV record for the service and returns a "host:port" endpoint.
// net.LookupSRV prepends the underscores, so for a service registered as _calc._tcp
// you would pass "calc" and "tcp".
func getServiceEndpoint() (string, error) {
	var addrs []*net.SRV
	var err error
	if _, addrs, err = net.LookupSRV("serviceName", "tcp", "servicediscovery.internal"); err != nil {
		return "", err
	}
	// Use the first answer; Route53 balances the records returned for the service.
	for _, addr := range addrs {
		return addr.Target + ":" + strconv.Itoa(int(addr.Port)), nil
	}
	return "", errors.New("no record found")
}

Example

We've included an example of using the service discovery. The example is composed of the following containers:

  • time: This container is a web service that receives a string with a time format and returns the current time in that format. The format is expressed as a combination of elements from the reference date "Mon Jan 2 15:04:05 -0700 MST 2006" (Go's time layout), for example "15:04 Jan 2". To test the service you can use:
curl -u admin:password 127.0.0.1:32804/time/15:04%20Jan%202
  • calc: This container is a web service that evaluates a mathematical formula. The input is a formula, for example "(2+2)*3", and it returns the result. To test the service you can use:
curl -u admin:password 127.0.0.1:32799/calc/\(2+2\)*3
  • portal: This is a web service that provides a web portal with two boxes to test the time and calc services. The portal uses the service discovery DNS to find the other services, sends the requests to them, and shows the results on the web page.

You can launch the examples using the CloudFormation template, then connect to the portal from a browser and test both microservices and the service discovery.

You can review the Route53 records created by the service (only for time and calc, because portal is not a microservice and it doesn't provide the SERVICE_<port>_NAME environment variable), and stop a container to see how the Route53 records change automatically.

service-discovery-ecs-dns's People

Contributors

chdanielmueller, csanchiz, hyandell, ingmarstein, javierros, jim3ma, mvanholsteijn, nmchae, transitorybliss


service-discovery-ecs-dns's Issues

SRV records for stopped instances don't always get removed

We were getting intermittent 502 errors following deployments; upon investigation we found that SRV records existed for tasks that had been stopped. Curious if anyone has seen similar issues with this approach? Is there a best practice for keeping the SRV records healthy?

Make ecssd_agent more resilient

It seems like the agent may crash for various reasons (#27, #31). Until the agent is restarted, it will miss Docker events and services might not get registered. I've also seen that when the health check Lambda happens to execute during a deployment, it can delete an SRV record before the container is fully running, and the record won't be recreated afterwards.

So, even when those crashes are fixed, there might be reasons why the agent should periodically do a full sync, i.e. iterate over all running services and make sure they're still registered.

New records are created for each container in a service

I've been trying out this agent in our ECS clusters as I'm keen to use Prometheus for our microservices and it supports DNS SRV records. I've found that this agent creates a new DNS record for each task that is created instead of updating the existing RecordSet to include all the task addresses.

For example what I am seeing is:

If I have a service called 'example' with two tasks then I get two DNS records
example.servicediscovery.internal 1 1 32778 <private dns 1>
example.servicediscovery.internal 1 1 32779 <private dns 2>

so when I do an nslookup I only get one of the records.

I was hoping to get (formatting for clarity):
example.servicediscovery.internal {
1 1 32778 <private dns 1>
1 1 32779 <private dns 2>
}
so that Prometheus can discover all of the hosts and then poll the endpoint.

My questions are:

  1. Is the desired behaviour to create a new DNS record for each task in a service? If so, how do you envision these records being used for service discovery?
  2. Do you see the behaviour I was expecting as something you would ever implement with this agent?

Thank you for your time and for providing this agent as I've been pulling my hair out trying to find a solution for Service Discovery with ECS that doesn't involve building a Rube Goldberg machine.

multiple ports are not working

When I configure two environment variables for different ports, e.g. SERVICE_5672_NAME and SERVICE_15672_NAME, only one DNS record gets created, and which one is random.

Is ecssd_agent using the correct AWS API endpoints?

I'm getting this error when the ecssd_agent attempts to register DNS records for my Docker service:

AccessDenied: The resource hostedzone/XXXXXXXXX can only be managed through servicediscovery.amazonaws.com (arn:aws:servicediscovery:us-east-1:385298791949:namespace/ns-n5xn65imrgpwaztt)\n\tstatus code: 403

Full logs:

Dec 29 22:42:50 ip-XX-X-XXX-XXX ecssd_agent[2088]: time="2017-12-29T22:42:50Z" level=error msg="AccessDenied: The resource hostedzone/XXXXXXXXX can only be managed through servicediscovery.amazonaws.com (arn:aws:servicediscovery:us-east-1:385298791949:namespace/ns-n5xn65imrgpwaztt)\n\tstatus code: 403, request id: 99baf104-ece9-11e7-b3f7-410794b4d19b"

Dec 29 22:42:50 ip-XX-X-XXX-XXX ecssd_agent[2088]: time="2017-12-29T22:42:50Z" level=info msg="Record _test._tcp.servicediscovery.internal created (1 1 9091 ip-XX-X-XXX-XXX.ec2.internal)"

Dec 29 22:42:50 ip-XX-X-XXX-XXX ecssd_agent[2088]: time="2017-12-29T22:42:50Z" level=error msg="Error creating DNS record"

The second log message indicating success seems to be a bug, as no records of any type (A or SRV) get created in the given hosted zone.

Private hosted zone vs compute.internal

I'm trying to set up ecssd_agent in a similar way to the example template.

I've passed my private hosted zone xyz.internal as the first argument to ecssd_agent and it successfully creates SRV records there. However, those records point to the internal DNS name of the ECS host like ip-10-0-101-95.eu-west-1.compute.internal which cannot be resolved on other instances because their search domain is configured as xyz.internal.

So, I'm looking for guidance on how to either change the DNS domain of the ECS host to my private hosted zone on Route 53 or change the record generation… or something that I may be missing :)

Lambda function is stopping running tasks

I have an ECS cluster with two EC2 instances. Both instances are running the ecssd_agent and when new deployments are launched, their SRV record is properly created in Route53 in the servicediscovery.internal. zone.

I set up the Lambda function per the instructions in the README. I ran into some problems with access rights, but I eventually got it running.

The problem is that the Lambda function is stopping all running tasks... I'm assuming that's not the intended behavior, because it just guarantees that all my ECS containers lose their DNS entries every 5 minutes, which seems counterproductive. :)

I have the check_health set to false. Any help is appreciated.

deleteDNSRecord failed with nil pointer reference

When a container starts and aborts immediately, a new DNS record is created every time but fails to be deleted. I believe the DNS record has not yet been saved to Route53 when deleteDNSRecord is called.

Dynamic DNS name in ecssd_agent.go is required

The DNSNAME in ecssd_agent.go is hard-coded to "servicediscovery.internal". We need to make it dynamic so that it is suitable for any Route 53 zone.

When I tried to run the ecssd_agent script I got multiple errors

go run ecssd_agent.go

command-line-arguments

./ecssd_agent.go:73:21: cannot assign <-chan "vendor/github.com/docker/docker/api/types/events".Message to e.events (type <-chan "github.com/docker/docker/api/types/events".Message) in multiple assignment
./ecssd_agent.go:73:69: cannot use "github.com/docker/docker/api/types".EventsOptions literal (type "github.com/docker/docker/api/types".EventsOptions) as type "vendor/github.com/docker/docker/api/types".EventsOptions in argument to e.dockerClient.Events
./ecssd_agent.go:73:77: cannot use filters (type "github.com/docker/docker/api/types/filters".Args) as type "vendor/github.com/docker/docker/api/types/filters".Args in field value
./ecssd_agent.go:403:96: cannot use "github.com/docker/docker/api/types".ContainerListOptions literal (type "github.com/docker/docker/api/types".ContainerListOptions) as type "vendor/github.com/docker/docker/api/types".ContainerListOptions in argument to dockerClient.ContainerList
./ecssd_agent.go:479:46: cannot use container (type "vendor/github.com/docker/docker/api/types".ContainerJSON) as type "github.com/docker/docker/api/types".ContainerJSON in argument to getNetworkPortAndServiceName
./ecssd_agent.go:582:33: cannot use config (type *"github.com/aws/aws-sdk-go/aws".Config) as type *"vendor/github.com/aws/aws-sdk-go/aws".Config in argument to session.NewSession
./ecssd_agent.go:698:45: cannot use container (type "vendor/github.com/docker/docker/api/types".ContainerJSON) as type "github.com/docker/docker/api/types".ContainerJSON in argument to getNetworkPortAndServiceName
./ecssd_agent.go:727:45: cannot use container (type "vendor/github.com/docker/docker/api/types".ContainerJSON) as type "github.com/docker/docker/api/types".ContainerJSON in argument to getNetworkPortAndServiceName

lambda_health_check doesn't work for multiple ECS clusters

get_ecs_data returns on the first iteration when looping through the list of clusters.

Candidate fix:

def get_ecs_data():
    list_ecs_private_ips = []
    list_ec2_instances = {}
    list_instance_arns = {}
    list_tasks = {}
    for cluster_name in ecs_clusters:
        response = ecs.list_container_instances(cluster=cluster_name)
        for instance_arn in response['containerInstanceArns']:
            list_instance_arns[instance_arn] = {'cluster': cluster_name}
        if len(list_instance_arns.keys()) > 0:
            response = ecs.describe_container_instances(
                cluster=cluster_name,
                containerInstances=list(list_instance_arns.keys()))
            for instance in response['containerInstances']:
                list_ec2_instances[instance['ec2InstanceId']] = {'instanceArn': instance['containerInstanceArn']}
                list_instance_arns[instance['containerInstanceArn']]['instanceId'] = instance['ec2InstanceId']
            if len(list_ec2_instances.keys()) > 0:
                response = ec2.describe_instances(InstanceIds=list(list_ec2_instances.keys()))
                for reservation in response['Reservations']:
                    for instance in reservation['Instances']:
                        list_ec2_instances[instance['InstanceId']]['privateIP'] = instance['PrivateIpAddress']
                        list_ecs_private_ips.append(instance['PrivateIpAddress'])
        response = ecs.list_tasks(cluster=cluster_name, desiredStatus='RUNNING')
        if len(response['taskArns']) > 0:
            responseTasks = ecs.describe_tasks(cluster = cluster_name, tasks = response['taskArns'])
            for task in responseTasks['tasks']:
                list_tasks[task['taskArn']] = {'instance': task['containerInstanceArn'], 'containers': []}
                responseDefinition = ecs.describe_task_definition(taskDefinition=task['taskDefinitionArn'])
                for container in task['containers']:
                    containerDefinition = get_definition_for_container(container['name'], responseDefinition['taskDefinition']['containerDefinitions'])
                    for networkBinding in container['networkBindings']:
                        service = get_service_for_port(networkBinding['containerPort'], containerDefinition['environment'])
                        if service != "":
                            list_tasks[task['taskArn']]['containers'].append({'service': service, 'port': str(networkBinding['hostPort'])})

    return {'instanceArns': list_instance_arns, 'ec2Instances': list_ec2_instances, 'tasks': list_tasks, 'clusterPrivateIPs': list_ecs_private_ips}

InvalidChangeBatch: RRSet with DNS name

Can someone help me with this error? Any idea why I get this error from ecssd_agent.go? The instances are in the same VPC, and I activated the two flags for the private zone.

Thank you very much.

Waiting events
time="2016-11-01T06:45:32Z" level=info msg="Processing event: &docker.APIEvents{Action:\"start\", Type:\"container\", Actor:docker.APIActor{ID:\"7ef595cd2eb2785d00ad47e0d19b567457c1dd1d80bcf31037XXXXXXXXXXXXX\", Attributes:map[string]string{\"image\":\"XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/node:latest\", \"name\":\"ecs-node-server-api-gateway-4-node-server-9696fee6dcfa8cafba01\", \"com.amazonaws.ecs.cluster\":\"teamfluent-cluster\", \"com.amazonaws.ecs.container-name\":\"node-server\", \"com.amazonaws.ecs.task-arn\":\"arn:aws:ecs:us-east-1:XXXXXXXXXX:task/bddf9917-51f2-43dd-ba31-9a07dbce5f01\", \"com.amazonaws.ecs.task-definition-family\":\"node-server-api-gateway\", \"com.amazonaws.ecs.task-definition-version\":\"4\"}}, Status:\"start\", ID:\"7ef595cd2eb2785d00ad47e0d19b567457c1dd1d80bcf31037XXXXXXXXXXXXX\", From:\"XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/node:latest\", Time:1477982732, TimeNano:1477982732704550646}" 
time="2016-11-01T06:45:32Z" level=error msg="InvalidChangeBatch: RRSet with DNS name apigateway.servicediscovery.internal. is not permitted in zone servicediscovery.local.\n\tstatus code: 400, request id: c92da539-9ffe-11e6-ac5b-15a8ebe54804" 

Record apigateway.servicediscovery.internal created (1 1 32773 ip-172-31-0-198.ec2.internal)
time="2016-11-01T06:45:33Z" level=error msg="InvalidChangeBatch: RRSet with DNS name apigateway.servicediscovery.internal. is not permitted in zone servicediscovery.local.\n\tstatus code: 400, request id: c9cec69b-9ffe-11e6-81fb-ffa9ee8ee6ef" 

Record apigateway.servicediscovery.internal created (1 1 32773 ip-172-31-0-198.ec2.internal)
time="2016-11-01T06:45:36Z" level=error msg="InvalidChangeBatch: RRSet with DNS name apigateway.servicediscovery.internal. is not permitted in zone servicediscovery.local.\n\tstatus code: 400, request id: cba2c3df-9ffe-11e6-ac5b-15a8ebe54804" 

Record apigateway.servicediscovery.internal created (1 1 32773 ip-172-31-0-198.ec2.internal)
time="2016-11-01T06:45:41Z" level=error msg="InvalidChangeBatch: RRSet with DNS name apigateway.servicediscovery.internal. is not permitted in zone servicediscovery.local.\n\tstatus code: 400, request id: cea81497-9ffe-11e6-ad24-155f74e64b8a" 

Record apigateway.servicediscovery.internal created (1 1 32773 ip-172-31-0-198.ec2.internal)
time="2016-11-01T06:45:49Z" level=error msg="InvalidChangeBatch: RRSet with DNS name apigateway.servicediscovery.internal. is not permitted in zone servicediscovery.local.\n\tstatus code: 400, request id: d2e0188a-9ffe-11e6-87dd-a3e0a5d30019" 

Record apigateway.servicediscovery.internal created (1 1 32773 ip-172-31-0-198.ec2.internal)
time="2016-11-01T06:45:49Z" level=error msg="Error creating DNS record" 
Docker 7ef595cd2eb2785d00ad47e0d19b567457c1dd1d80bcf31037630a8d4f016e3d started

Issues accessing SRV records programmatically

I have got the service working on my cluster. I can see it register and deregister SRV records in the console for my Route53 servicediscovery.internal zone...

But I cannot get dig working to view the SRV records. Additionally, I wrapped the simple Go function (example access) in a main and did not get any data back.

If I insert a manual 'A' record using the aws cli I can see it using dig.

I assume there is something fundamental that I am missing? I am not using the cloudformation template but an existing cluster I had configured...

This is the command I am using:

dig <svc_name>.servicediscovery.internal SRV

How is everyone accessing the SRV record?

SRV records creation fails with SRV record doesn't have 4 fields

my task registration fails with the following error message:

[root@ip-10-0-6-110 ~]# time="2017-08-09T10:50:08Z" level=error msg="InvalidChangeBatch: Invalid Resource Record: FATAL problem: SRVRRDATANotFourFields (SRV record doesn't have 4 fields) encountered with '1 1 32769 ip-10-0-6-110.testapi. eu-central-1.compute.internal.'\n\tstatus code: 400, request id: 82832f05-7cf0-11e7-acce-611aaa346953"
Record marksweb.testapi. created (1 1 32769 ip-10-0-6-110.testapi. eu-central-1.compute.internal. )

As you can see the hostname is specified as ip-10-0-6-110.testapi. eu-central-1.compute.internal.

The root cause is this:

# curl http://169.254.169.254/latest/meta-data/hostname
ip-10-0-6-110.testapi. eu-central-1.compute.internal.

I have specified both the internal domain name and the AWS domain as search domains.

Events are being processed but entries are not being created in Route53

I have followed the instructions found at https://aws.amazon.com/blogs/compute/service-discovery-for-amazon-ecs-using-dns/ to deploy the ecssd_agent. When the agent starts up, I see the following output in the ecssd_agent.log:

time="2017-06-08T14:37:52Z" level=info msg="Processing event: &docker.APIEvents{Action:"start", Type:"container", Actor:docker.APIActor{ID:"959f120fbf284120b8422431e04a763bbed80a8d8703aa390268853dc05ee82a", Attributes:map[string]string{"name":"ecs-RabbitMQ-Deployment-1-rabbitmq-a8bcfc90c2d3b7fc6e00", "com.amazonaws.ecs.cluster":"awseb-feature7034-Deployment-Environment-6w37gh2u6k", "com.amazonaws.ecs.container-name":"rabbitmq", "com.amazonaws.ecs.task-arn":"arn:aws:ecs:us-west-2:233532778289:task/c4a6e5f5-30ec-4035-8012-a54e76cf07c8", "com.amazonaws.ecs.task-definition-family":"RabbitMQ-Deployment", "com.amazonaws.ecs.task-definition-version":"1", "image":"rabbitmq:3.6.8-management"}}, Status:"start", ID:"959f120fbf284120b8422431e04a763bbed80a8d8703aa390268853dc05ee82a", From:"rabbitmq:3.6.8-management", Time:1496932672, TimeNano:1496932672052372543}"
Docker 959f120fbf284120b8422431e04a763bbed80a8d8703aa390268853dc05ee82a started

When I go to Route53 and look at the hosted zone (servicediscovery.internal) there isn't an entry for this service. Why? What should the output look like for an event that creates a Route53 entry?

the agent does not receive all die events

When starting 20 nginx containers

C=0 
while [[ $C -lt 20 ]]; do 
   docker run -d -P -e SERVICE_80_NAME=myapp nginx; 
   C=$(($C+1)); 
done

and stopping them:

docker ps  | awk '/nginx/{print $1}' | xargs docker stop

Only 11 die events are reported in the log.

creating SRV records but not A records

It was previously creating A records fine, but it's now only creating SRV records. I don't see any errors in the logs, only successful SRV creation.

Record redis.foo.bar.dev created (1 1 6379 ip-blah-eu-west-1.compute.internal)

Any ideas how I can troubleshoot this?

ecssd_agent crashes on deleting a SRV record

The ecssd_agent crashes when it encounters a ResourceRecord without a SetIdentifier while searching for the appropriate SRV record.

time="2017-08-10T10:53:56Z" level=info msg="Processing event: &docker.APIEvents{Action:\"die\", Type:\"container\", Actor:docker.APIActor{ID:\"281d25e874277a3e21aa46c708be812068b92807e1e1bd0d8cd471793057a876\", Attributes:map[string]string{\"exitCode\":\"0\", \"image\":\"nginx\", \"name\":\"ecstatic_easley\"}}, Status:\"die\", ID:\"281d25e874277a3e21aa46c708be812068b92807e1e1bd0d8cd471793057a876\", From:\"nginx\", Time:1502362436, TimeNano:1502362436823482445}"
281d25e87427
[ec2-user@ip-10-0-6-110 ~]$ panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0xb901f8]

goroutine 80 [running]:
main.deleteDNSRecord(0xc4202a60f0, 0x4, 0xc420525180, 0x40, 0x1, 0x0)
	/app/src/github.com/awslabs/service-discovery-ecs-dns/ecssd_agent.go:237 +0x398

Container port should be left blank?

You should publish the port of the container using the portMappings properties. When you publish the port I recommend you to not specify the containerPort and leave it to be assigned randomly, this way you could have multiple containers of the same service running in the same server.

I think this part is confusing (at least to me) because the documentation on AWS says:

The port number on the container that is bound to the user-specified or automatically assigned host port. If you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range (for more information, see hostPort).

Which one is true?

It doesn't actually create SRV records.

How to reproduce:

  1. Create the stack from this template
  2. Check the records with aws route53 list-resource-record-sets --hosted-zone-id %hostedzoneid% and see that there are only A records.
[root@ip-10-5-10-105 ec2-user]# /usr/local/bin/ecssd_agent -sync
ERRO[0000] InvalidChangeBatch: Tried to create resource record set [name='ip-10-5-10-105.servicediscovery.internal.', type='A', set-identifier='ip-10-5-10-105.eu-west-1.compute.internal'] but it already exists
	status code: 400, request id: 9802ac73-06b1-11e8-a4eb-11c465327fe3
ERRO[0000] Error creating host A record
INFO[0000] Zone 'servicediscovery.internal' for host 'ip-10-5-10-105.eu-west-1.compute.internal' out of sync, adding 1 and removing 0 records

ecssd_agent continuously shows adding 1, but nothing happens
