
quickstart-mongodb's People

Contributors

andrew-glenn, annaone, aws-ia-ci, bmoller, davmayd, dhruvrai17, handans, irecinius, ismailyenigul, jaydestro, jaymccon, santiagocardenas, tbulding, tlammens, tonynv, tyomo4ka, vsnyc


quickstart-mongodb's Issues

4.0 works but 4.2 does not with the same parameters

I tried setting up two stacks, one with 4.0 and one with 4.2; the 4.2 stack consistently fails, while the 4.0 stack works on the first try.

I'm using the existing-VPC option.

WaitCondition timed out. Received 0 conditions when expecting 1

The following resource(s) failed to create: [PrimaryReplicaNode0WaitForNodeInstall, SecondaryReplicaNode0WaitForNodeInstall, SecondaryReplicaNode1WaitForNodeInstall]. . Rollback requested by user.
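When the wait conditions receive 0 signals, the node-side bootstrap usually died before cfn-signal ever ran. A triage sketch (the log paths are the usual cfn-init locations on Amazon Linux and may differ by AMI; the helper function is hypothetical):

```shell
#!/bin/sh
# Print ERROR/CRITICAL lines from a bootstrap log; returns non-zero if any found.
check_cfn_log() {
    if grep -Ei 'error|critical' "$1"; then
        return 1   # failures present
    fi
    return 0
}

# On the stuck node (run before the stack rolls back and terminates it):
#   check_cfn_log /var/log/cfn-init.log
#   check_cfn_log /var/log/cloud-init-output.log
```

Creating the stack with rollback on failure disabled keeps the instances alive long enough to inspect these logs.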

rs.initiate() failed in mongo 3.4

{
"ok" : 0,
"errmsg" : "No host described in new configuration 1 for replica set #### maps to this node",
"code" : 93,
"codeName" : "InvalidReplicaSetConfig"
}

MongoDB QuickStart timeout and rollback

Currently the MongoDB deployment times out at node creation. The reason is that the node template attempts to copy files from an S3 bucket to which it has no access.
For example, the EC2 instance's policy grants access to the bucket aws-quickstart-eu-central-1, but the bootstrap script attempts to download from the bucket aws-quickstart. As a result the bootstrap script fails, the final-status signal is never sent, and the whole stack is rolled back when the timeout is reached.

I found that a similar issue was reported here, although I don't know whether the root cause is the same:
https://forums.aws.amazon.com/thread.jspa?messageID=936559
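A quick way to confirm this kind of mismatch (the regional bucket-name pattern is taken from this report, not verified against the template; the helper is hypothetical):

```shell
#!/bin/sh
# Compose the regional quickstart bucket name the instance policy grants
# (pattern assumed from the report above: aws-quickstart-<region>).
regional_bucket() {
    echo "aws-quickstart-$1"
}

# Compare the bucket the policy allows with the one the bootstrap downloads from:
#   aws s3 ls "s3://$(regional_bucket eu-central-1)/quickstart-mongodb/"
#   aws s3 ls "s3://aws-quickstart/quickstart-mongodb/"
```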

Problem with scripts deploying MongoDb on the instances

When I deploy the CFT from the link provided in the Quick Start documentation, the IGW, VPC, Subnets, Bastion, and Instances spin up fine. What I don't see is any MongoDB database having been installed on the instance(s).

I tried running the shell commands from the mongodb-node.yaml file manually on the master EC2 instance and ran into access-denied errors (so I re-ran the commands with sudo). The next issue was a lack of permissions on the bastion host, seen below.

  1. An error occurred (UnauthorizedOperation) when calling the DescribeTags operation: You are not authorized to perform this operation. (I added this permission and it moved on.)
  2. An error occurred (AccessDeniedException) when calling the Scan operation: User: arn:aws:sts::799480191532:assumed-role/MongoDBDefaultTemplate-BastionStac-BastionHostRole-ZGYB178JU1YI/i-0e27c1187822d2d72 is not . (I added this permission and it moved on.)

Then the init.sh completes but upon running the database it says that it is unable to find the server.

Is there anything I am not understanding about this deployment? Please let me know if you need logs or any details for a better understanding of my situation, all help would be greatly appreciated.

Thanks

mongodb.template is not compatible with China mainland regions

There are hard-coded service principals and partitions in the code, which lead to failures when deploying in China mainland regions.

Replace the following code in mongodb.template:

"ec2.amazonaws.com"
replace with:
{"Fn::Sub":"ec2.${AWS::URLSuffix}"}

"Fn::Sub": "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/MONGODB_"
replace with:
"Fn::Sub": "arn:${AWS::Partition}:dynamodb:${AWS::Region}:${AWS::AccountId}:table/MONGODB_"

Also replace the following code in mongodb-node.template:
"arn:aws:automate:",
replace with
{"Fn::Sub": "arn:${AWS::Partition}:automate:"},

Bastion Auto scaling group: received 1 FAILURE signal(s) out of 1

[Bastion Auto scaling group] received 1 FAILURE signal(s) out of 1. Unable to satisfy 100% MinSuccessfulInstancesPercent requirement.

I'm using the "launch MongoDB into a new VPC" option.
Regions: us-east-1 and ap-northeast-1; all failed with the same message above.
Mongo version: 4.0.
S3 folders all use the defaults; I only changed the Mongo password, username, etc.

S3 error: Access Denied - when running template through CloudFormation web Console

I'm getting the following message when I try to run the template through CloudFormation's web console:

S3 error: Access Denied For more information check 
http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html

I tried filling out the form in two different ways: providing a new S3 bucket and prefix, and providing an existing bucket and prefix. Either way I got the same error, and the result was a rollback.

Any ideas?

Q: Reattach existing volumes

Thanks for the scripts. Is there a scenario for "host death" that handles re-attaching existing data volumes, like StatefulSets in Kubernetes? Seriously, in my line of work I've got to handle both. When a server dies in GCP GKE, we just re-attach the existing stateful drive automatically. I could not find a procedure here to handle that scenario in AWS (through CloudFormation). Any help appreciated.

MongoDB version should be updated

The only MongoDB versions available in the template are 2.6 and 3.0, and 3.0 dates back to March 2015. I think the template should also support the current stable version (3.4.2 as of today).

error with signalFinalStatus.sh

The script signalFinalStatus.sh does not run as expected, stopping the build process from completing successfully.

signalFinalStatus.sh: line 5: $'\r': command not found
signalFinalStatus.sh: line 6: $'\r': command not found
signalFinalStatus.sh: line 12: syntax error near unexpected token `$'{\r''
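The $'\r' messages are the classic symptom of a script saved with Windows (CRLF) line endings. A minimal fix sketch (dos2unix does the same if it is installed):

```shell
#!/bin/sh
# Strip trailing carriage returns so bash stops choking on $'\r' tokens.
strip_crlf() {
    sed -i 's/\r$//' "$1"
}

# On the node:
#   strip_crlf signalFinalStatus.sh && bash -n signalFinalStatus.sh  # syntax check
```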

Changing node name tag breaks template

Hello, in your mongodb.yaml template there's a parameter called ReplicaNodeNameTag which defines the name of the node. I found that changing it causes the CloudFormation stack to eventually time out: orchestrator.sh becomes stuck in an infinite loop and the log just says "Waiting for Master to create table..". I wanted to name my nodes along the lines of db-primary0.mydomain.com, db-secondary0.mydomain.com, db-secondary1.mydomain.com, but it doesn't work and the template times out.

Is there a hard link between the node names and the scripts? Is it possible to change the node name tag in the template at all? I find that if I keep the original names and append a suffix it still works, for example PrimaryReplicaNode0-Dev or SecondaryReplicaNode0-UAT.

Can't change Volume size?

(screenshot: 2018-11-02, 12:58 AM)

Somehow I can't increase the EBS volume size through the stack; each time I try, the update is rolled back. I can make other changes, such as changing the instance type, but I can't figure out what the problem is with growing the volume.

Accessing database using elastic ip

Hi,

I have created the MongoDB cluster using the CloudFormation templates. Now I want to access the DB from the internet, so I added an Elastic IP to all 3 nodes and added my public IP to the inbound rules of the security group. But I am not able to access the database at elasticIP:27017.

What could be the problem?

Thanks
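One hedged guess at the cause: mongod binds only to localhost by default, so an Elastic IP plus a security-group rule isn't enough; the bindIp setting in /etc/mongod.conf (path assumed) must also include a non-loopback address. Exposing a database directly to the internet is risky; an SSH tunnel or VPN is usually safer.

```shell
#!/bin/sh
# Check whether a mongod config binds beyond loopback (config path assumed).
binds_externally() {
    grep -E 'bind_?[Ii]p' "$1" | grep -vq '127\.0\.0\.1'
}

# On each node:
#   binds_externally /etc/mongod.conf || echo 'mongod is loopback-only'
#   # after editing bindIp: sudo service mongod restart
```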

/etc/init.d/mongod conflicts with /etc/init.d/mongos on server reboot

I used this script some time ago to deploy a mongo cluster. The parameters were 2 shards, each replicated.

Everything had been working fine until, three days ago, PrimaryReplicaNode10 rebooted for some reason beyond my knowledge, and that node started running mongod on 27017 instead of mongos.

This caused some trouble in the application layer: the application did not notice the difference between mongod and mongos, and all the data appeared to be deleted, because the mongod is empty and has no query-routing functionality.

Trying to locate the problem, I ssh-ed into PrimaryReplicaNode10 and found the script had created three service files for me: /etc/init.d/mongod, /etc/init.d/mongod0, and /etc/init.d/mongos. From my understanding of the script, mongod0 is there because of the code block with the "When there is sharding, make sure at least one microshard" comment, and it is running on 27018 (a different port), which seems correct. mongos is configured to run on 27017, which is also expected.

Now what confuses me is the /etc/init.d/mongod service. It is also configured to run on 27017, so it will prevent mongos from starting. In other words, the final running instances would either be mongos (27017) + mongod0 (27018), which is expected, or mongod (27017) + mongod0 (27018), which makes no sense to me.

I manually removed the /etc/init.d/mongod service and all is good now. Why is that file not removed in the first place? I don't see a reason for a separate mongod service and its configuration files to be left dangling there when micro-sharding is enabled.

Also, the mongod service wasn't started during initial setup because it was in the else branch of if [ "${MICROSHARDS}" != "0" ]; then. So the problem only occurs after a reboot, and the exact outcome depends on the start order of the mongod and mongos services: if mongod starts first, the problem is reproduced.

P.S. Strictly speaking, the mongod service isn't chkconfig'd either. But a chkconfig --list | grep mongo shows that service is enabled and I think the yum package did that.
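A sketch of the apparent intent, inferred from this report rather than taken from the quickstart scripts; the actual remedy is disabling the leftover init script so mongos wins the race on 27017:

```shell
#!/bin/sh
# Which init services should survive a reboot, given the micro-shard count
# (logic inferred from the issue above, not from the quickstart code).
enabled_services() {
    if [ "$1" != "0" ]; then
        echo "mongos mongod0"   # router on 27017, micro-shard on 27018
    else
        echo "mongod"           # plain replica member on 27017
    fi
}

# On an affected sharded node, align reality with the intent:
#   sudo chkconfig mongod off        # stop mongod from racing mongos on 27017
#   sudo rm -f /etc/init.d/mongod    # or remove the script entirely, as you did
```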

1 node setup with mongo 3.4.4 fails to rs.initiate()

With mongo 3.4.4 rs.initiate() fails. rs.initiate() should be called with a parameter to explicitly set up the replica.

Error:
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "ip-10-1-17-132:27017",
"ok" : 0,
"errmsg" : "No host described in new configuration 1 for replica set s0 maps to this node",
"code" : 93,
"codeName" : "InvalidReplicaSetConfig"
}
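A sketch of the suggested fix: pass rs.initiate() an explicit config whose host field maps to the node itself, instead of letting it guess (the replica-set name s0 comes from the error above; the rs_config helper is hypothetical):

```shell
#!/bin/sh
# Build an explicit rs.initiate() config document as a string.
rs_config() {
    printf "{_id: '%s', members: [{_id: 0, host: '%s:27017'}]}" "$1" "$2"
}

# On the node, using its own private IP so the config maps to itself:
#   mongo --eval "rs.initiate($(rs_config s0 "$(hostname -i)"))"
```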

Help connecting to instances

Hello, I'm having some difficulty connecting to the database through the NAT gateway, being kind of new to AWS resources.

How should I proceed to connect from the outside (internet) to the database?

DynamoDB tables are not dropped

Hello. Thanks for your work.
We found a flaw: we created a stack, then deleted it, and wanted to create another one with the same name and VPC. However, we got an error, because the instance was trying to connect to the wrong IP address. It turns out the DynamoDB tables are not dropped when the stack is deleted. When we dropped the old tables, the problem was resolved.
Please pay attention to this.
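A cleanup sketch before re-creating a stack with the same name; the MONGODB substring in the table names is taken from reports in this thread, so verify the candidate list before deleting anything:

```shell
#!/bin/sh
# Decide whether a DynamoDB table name looks like quickstart orchestration state.
is_orchestrator_table() {
    case "$1" in
        *MONGODB*) return 0 ;;
        *)         return 1 ;;
    esac
}

# List candidates first, then delete deliberately:
#   aws dynamodb list-tables --output text --query 'TableNames[]'
#   aws dynamodb delete-table --table-name <leftover-table-name>
```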

Setting replica set members to 1 still asking for secondary nodes

Hi,

The quickstart provides options to choose from 1 or 3 member replica set.
I believe choosing 1 will create a standalone MongoDB converted into a replica set, right?
But why is it mandatory to choose the secondary nodes' subnets if I set the cluster replica set count to 1?

Thanks

Possible buggy code in orchestrator.sh opt parsing

Hi,

When orchestrator.sh tries to parse the opts (see below), there are two options with the same character, -i: one used for initiating the environment and the other used for inserting a new key-value pair into DynamoDB.

I was trying to modify the code to insert a new key-value pair during setup, but it turned out that getopts cannot tell -i from -i key=value.

I would suggest using another character for the insert option (e.g. -u for update).

while getopts "hcbpdgikfs:i:n:q:w:" o; do
  case "${o}" in
    h) usage && exit 0
    ;;
    c) CREATE=1
    ;;
    p) PRINT=1
    ;;
    b) BLOCK_UNTIL_TABLE_LIVE=1
    ;;
    d) DELETE_TABLE=1
    ;;
    g) GET_IPv4_TYPE=1
    ;;
    q) QUERY_STATUS=${OPTARG}
    ;;
    s) NEW_STATUS=${OPTARG}
    ;;
    k) CREATE_KEY=1
    ;;
    f) FETCH_KEY=1
    ;;
    i) NEW_ITEM_PAIR=${OPTARG}
    ;;
    n) TABLE_NAME=${OPTARG}
    ;;
    w) WAIT_STATUS_COUNT_PAIR=${OPTARG}
    ;;
    i) INIT_ENV=1
    ;;
  esac
done
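A sketch of the suggested change with the insert option moved to -u (the option letter is this thread's suggestion, not upstream's); i now appears only once in the option string, as the bare init-env flag:

```shell
#!/bin/sh
# Corrected option string: 'i' appears once (flag), the item pair moves to 'u:'.
parse_opts() {
    OPTIND=1            # reset so the function can be called repeatedly
    NEW_ITEM_PAIR=""
    INIT_ENV=0
    while getopts "hcbpdgikfs:u:n:q:w:" o; do
        case "${o}" in
            u) NEW_ITEM_PAIR=${OPTARG} ;;  # formerly the colliding -i key=value
            i) INIT_ENV=1 ;;               # init-env keeps -i
            *) : ;;                        # remaining flags as in the original
        esac
    done
}

# getopts can now tell the two options apart:
parse_opts -u color=blue -i
```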

what is the purpose of orchestrator.sh when launching MongoDB to VPC

Hi,

A while ago I used your quick-start template to launch 3 MongoDB instances into a new VPC. Later on, I took a look at the scripts and found that orchestrator.sh performs a lot of operations during stack creation to interact with DynamoDB. But after the creation, I did not see anything that had been created in DynamoDB in the same region.

So what's the purpose of using DynamoDB, and when does the script actually do something?

Thanks!

Handling change of AZs number

After the Mongo stack (the new-VPC version) is created, a subsequent update of the stack that increases the number of AZs from 2 to 3 results in the following behaviour:

  • one of the Mongo secondary replicas is created in the new availability zone and the old one is terminated
  • the whole stack update finishes with success (false positive)
  • the newly created replica is not fully functional: the orchestrator script loops waiting for a DynamoDB table that does not exist (this table is normally created during the first stack creation, while all Mongo nodes are being created, to synchronize the process of creating the nodes)

So currently the shell scripts don't handle the case of a node being recreated.

The workaround for this is:

  1. Log in to the new Mongo instance and check install.log.
  2. Add the table named s0_MONGODB__YourMongoStackName_YourVpcID, for example using:
    aws dynamodb create-table \
      --table-name ${TABLE_NAME} \
      --attribute-definitions AttributeName=PrivateIpAddress,AttributeType=S \
      --key-schema AttributeName=PrivateIpAddress,KeyType=HASH \
      --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
  3. Add the Mongo instances' IPs to the table with the field Status=WORKING. Watch install.log, change Status to FINISHED and then SECURED, and wait for the scripts to end.
  4. Copy the authorization key from another Mongo replica to the new one.
  5. Log in to Mongo and rebuild the replica set:
    cfg = rs.conf()
    cfg.members[index_of_old_secondary_replica].host = "IP_OF_THE_NEW_REPLICA:27017"
    rs.reconfig(cfg)

BastionSecurityGroup creation failed for unknown reason

I'm trying to create a Mongo cluster in a new VPC. The script always fails with this error, and I don't understand why.

CREATE_FAILED AWS::CloudFormation::Stack BastionStack Embedded stack arn:aws:cloudformation:us-east-1:139587094038:stack/FashionMongoDB-BastionStack-YKHO65CGR2OP/e49790d0-cc48-11e8-b1c6-503aca2616fd was not successfully created: The following resource(s) failed to create: [BastionSecurityGroup, EIP1].

Unable to access Mongo cluster in vpc from web service node

I was able to create a MongoDB cluster with mongodb-vpc.template via CloudFormation. I can access the PrimaryReplicator node on 27017 over SSH, but I am unable to access it from outside, because the only node exposed to the public is the NAT node, which does not expose port 27017. Can someone help?

I have web services running in EC2, and I am wondering how I can access MongoDB from my EC2 instances (in web service code). Is there any way other than forcing the web service nodes and the Mongo nodes to be in the same subnet?

Add support for 4.4

Hello,

Would it be possible to update the template and add support for MongoDB 4.4?

Why all the AMIs are customized instead of standard Linux AMI?

This is not an issue, actually; I'm wondering why all the AMIs are customized community AMIs. How are they different from the standard ones, e.g. any pre-installed software?

I tried launching with standard AMIs but failed; it seems the Auto Scaling group cannot receive the cfn success signal. Could you kindly explain? Thanks!

Problem with updating yum, when deploying in private subnet

Hi.

I adapted the CloudFormation templates a little, since I already have a VPC set up and do not need a bastion host; basically I would like to deploy one instance in the development environment, and 1 primary plus 2 secondaries in production. I am trying to deploy everything in a private subnet, but the deployment always fails with "The following resource(s) failed to create: [PrimaryReplicaNode0WaitForNodeInstall]. . Rollback requested by user." and "WaitCondition timed out. Received 0 conditions when expecting 1". I checked the logs from the EC2 instance, and the problem is updating Amazon Linux with yum. The private subnet has a NAT gateway attached; I also checked and set up the S3 VPC endpoint, but nothing helped. The NACL inbound rules are open to 0.0.0.0/0 on all ports. I do not know what else to try.

Best regards.
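A first triage step, offered as a hedged sketch: confirm the instance actually has a default route (the NAT path) before digging further into NACLs; the helper only inspects a route-dump string:

```shell
#!/bin/sh
# Does a route-table dump contain a default route?
has_default_route() {
    echo "$1" | grep -q '^default'
}

# On the instance:
#   has_default_route "$(ip route)" || echo 'no default route: yum cannot reach the repos'
# Then check the subnet's AWS route table points 0.0.0.0/0 at the NAT gateway:
#   aws ec2 describe-route-tables --filters Name=association.subnet-id,Values=<subnet-id>
```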

Q: Is there a way to create encrypted ebs volumes

When using the CloudFormation template, there is no way, from what I can tell, to set the volumes to be encrypted. Is there a way to do this after creation?

Or is there an easier way to modify the stack to add this in?
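If re-creating the stack is acceptable, one low-touch route is the account-level EBS "encryption by default" setting rather than editing the template (this is a general EC2 feature, not something this quickstart documents):

```shell
# Account-level switch: once enabled, newly created volumes in the region come
# up encrypted, so a re-created stack gets encrypted volumes without template
# edits. Existing volumes are not converted; those need a snapshot plus an
# encrypted restore. Run in the target region:
#
#   aws ec2 enable-ebs-encryption-by-default --region us-east-1
#   aws ec2 get-ebs-encryption-by-default --region us-east-1    # verify
```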

Description for secondary subnets reads "Primary"

Just noticed a small bug in templates/mongodb.template; pretty sure these descriptions should read "Secondary node(s)":

         "Secondary0NodeSubnet": {
             "Type": "AWS::EC2::Subnet::Id",
             "Description": "Subnet-ID the existing subnet in your VPC where you want to deploy Primary node(s)."
        },
         "Secondary1NodeSubnet": {
             "Type": "AWS::EC2::Subnet::Id",
             "Description": "Subnet-ID the existing subnet in your VPC where you want to deploy Primary node(s)."
         }

Issue with run replicas

Hello,

I need your help with setting up replicas.
I have used this template for a long time with just 1 node.

Now I need to create replicas, but when I change the Cluster Replica Set Count from 1 to 3, CloudFormation creates the instances but cannot create the replicas. It also fails to clean up, and then deletes my replica nodes.

Changes:
(screenshot: 2018-03-22, 5:55 PM)

From main stack:
(screenshot: 2018-03-22, 5:56 PM)

At PrimaryReplicaNode0
(screenshot: 2018-03-22, 5:58 PM)

At SecondaryReplicaNode0 everything is ok (CREATE_COMPLETE)

Regarding the s3 setup in the cloudformation script

Hi
I am new to CloudFormation and have the following queries; please help me sort them out:

  1. Why are we using DynamoDB? Is it necessary for running the MongoDB replica set? How can I remove the DynamoDB part from the CloudFormation script?

  2. How can I use my own S3 bucket instead of aws-quickstart? What are the steps I need to follow?
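For the question about using your own S3 bucket, a hedged sketch: the quickstart-mongodb/ prefix and the QSS3BucketName/QSS3KeyPrefix parameter names follow the usual AWS quickstart convention, so confirm them against your copy of the template before relying on them:

```shell
# Mirror the quickstart assets into your own bucket...
#
#   aws s3 sync s3://aws-quickstart/quickstart-mongodb/ s3://<your-bucket>/quickstart-mongodb/
#
# ...then launch the stack with QSS3BucketName=<your-bucket> and
# QSS3KeyPrefix=quickstart-mongodb/ so the bootstrap downloads from your copy.
```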
