awsdocs / amazon-dynamodb-developer-guide

The open source version of the Amazon DynamoDB docs. You can submit feedback & requests for changes by submitting issues in this repo or by making proposed changes & submitting a pull request.

License: Other


amazon-dynamodb-developer-guide's Issues

ClusterDaxAsyncClient example code doesn't work

On the page documenting how to modify an existing project to use DAX:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.client.modify-your-app.html

the following call doesn't seem to exist in ClusterDaxAsyncClient:
ClusterDaxAsyncClient.builder()

There is no builder() method in the ClusterDaxAsyncClient javadoc:
http://dax-sdk.s3-website-us-west-2.amazonaws.com/javadocs/hosted/com/amazon/dax/client/dynamodbv2/ClusterDaxAsyncClient.html#ClusterDaxAsyncClient-com.amazon.dax.client.dynamodbv2.ClientConfig-

The constructor in the documentation looks like this:
public ClusterDaxAsyncClient(ClientConfig cfg)

Am I missing something?
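For reference, a minimal sketch of constructing the async client through the public constructor shown in the javadoc rather than a builder(). The endpoint string is a placeholder, and the ClientConfig.withEndpoints setter is assumed from the DAX client samples:

    import com.amazon.dax.client.dynamodbv2.ClientConfig;
    import com.amazon.dax.client.dynamodbv2.ClusterDaxAsyncClient;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBAsync;

    public class DaxAsyncSketch {
        public static void main(String[] args) {
            // Placeholder cluster endpoint; withEndpoints is assumed from the DAX samples.
            ClientConfig cfg = new ClientConfig()
                    .withEndpoints("mycluster.abc123.clustercfg.dax.use1.cache.amazonaws.com:8111");

            // The javadoc documents a public constructor, not a builder() method.
            AmazonDynamoDBAsync dax = new ClusterDaxAsyncClient(cfg);
            System.out.println("DAX async client created: " + dax);
        }
    }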

begin_with filter expression support for a random key alone

I have a table with the structure:
| id | name | entry_date | exit_date | ... |
where id is the partition key. The format of entry_date is "E MMM dd HH:mm:ss z yyyy".
I want the pairs <id, entry_date> for all items with entry_date >= givenStartDate and entry_date <= givenEndDate; alternatively, just the ids with entry_date = givenStartDate would work.
I have tried Scan with >, <, and BETWEEN filters in the DynamoDB console, but it doesn't give the exact result there, and the same goes for a Java query. In the console, though, a begins_with filter works fine if I consider only givenStartDate. But when querying with the Java SDK, the partition key must be specified in the KeyConditionExpression, which in my case isn't feasible.

Can anyone suggest a suitable query for this problem? Any useful documentation link would also help. Thanks!
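Not an authoritative answer, but a sketch: because "E MMM dd HH:mm:ss z yyyy" strings do not sort lexicographically in chronological order, string comparisons such as >, <, and BETWEEN cannot match date ranges, which would explain the inexact results. A Scan, unlike a Query, requires no key condition, so the begins_with approach can be reproduced from the Java SDK (v1) roughly as follows, with hypothetical table and attribute names:

    import java.util.Map;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.dynamodbv2.model.ScanRequest;

    public class EntryDateScan {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

            // Scan needs no partition key, but it still reads (and bills for)
            // every item in the table, unlike a Query.
            ScanRequest scan = new ScanRequest()
                    .withTableName("MyTable")                         // hypothetical name
                    .withProjectionExpression("id, entry_date")
                    .withFilterExpression("begins_with(entry_date, :d)")
                    .withExpressionAttributeValues(Map.of(
                            ":d", new AttributeValue("Wed Oct 23"))); // prefix of givenStartDate
            client.scan(scan).getItems().forEach(System.out::println);
        }
    }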

Outdated TryDaxHelper.java

Hello,

I must say that I am really disappointed and will never use your solution in production, because your documentation says a lot about you. First of all, I wanted to check how DAX works with DynamoDB, but of course I could not, because I could not find any example of how to create a DAX cluster. When I found one, I saw that it was out of date with the API even as of January this year (sic!).

Simple outdated example:
https://github.com/awsdocs/amazon-dynamodb-developer-guide/blob/master/doc_source/DAX.client.run-application-java.TryDaxHelper.md

Misleading info - GetItem events are not put on a stream

There is some misleading information on the page https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/best-practices-security-detective.html where it says

AWS CloudTrail does not support logging of DynamoDB data-plane operations, such as GetItem and PutItem. So you might want to consider using Amazon DynamoDB Streams as a source for these events occurring in your environment.

This suggests that you can use streams to audit GetItem events, but streams only generate a record when an item is changed, inserted, or deleted.

I believe the reference to GetItem should be removed from the page.

Potentially wrong or confusing PartiQL example

Hello, I do not understand why this condition does NOT result in a full table scan:

https://github.com/awsdocs/amazon-dynamodb-developer-guide/blob/master/doc_source/ql-reference.select.md?plain=1#L53

SELECT * 
FROM Orders 
WHERE OrderID = 100 or pk = 200

Is this a mistake in the example, or did I miss something?

A couple of lines below, there is the following example, which according to the docs results in a full table scan:

SELECT * 
FROM Orders 
WHERE OrderID = 100 OR Address='some address'

Why exactly would or pk be OK, if OR Address is not? (I do understand why the second example results in a full table scan; I'm just confused about the first one.)

Incorrect data in the example section of Read Capacity Units and Write Capacity Units

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html

As the above document says "One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. Transactional read requests require two read capacity units to perform one read per second for items up to 4 KB."

With respect to the above definition, the highlighted line in the screenshot below should read "Perform transactional read requests of up to 12 KB per second" instead of "Perform transactional read requests of up to 3 KB per second."

(screenshot of the example section omitted)
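Working backward from the numbers: transactional reads cost 2 RCUs per read of up to 4 KB, so a table provisioned with 6 read capacity units (which the example's other figures appear to assume) supports 6 ÷ 2 = 3 transactional reads per second × 4 KB = 12 KB per second. The erroneous 3 KB figure instead matches transactional writes (6 WCUs ÷ 2 = 3 writes per second × 1 KB = 3 KB per second).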

Documentation mismatch on Amazon DynamoDB: How It Works

AWS Doc:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.html

GitHub Doc:

https://github.com/awsdocs/amazon-dynamodb-developer-guide/blob/master/doc_source/HowItWorks.md

Problem:

One of the bullets under Topics differs between the GitHub and official docs.

# GitHub
- Throughput Capacity for Reads and Writes

# AWS Doc
- Read/Write Capacity Mode

I don't know if I am missing something, but the markdown page Read/Write Capacity Mode no longer exists, so the Edit on GitHub link on that page throws a 404 Not Found.

DaxClientError

I am new to AWS and to DynamoDB. I am trying to set up DAX for my DynamoDB clusters.
I am going through the examples given in the doc, and I see some issues.

When I run python 03-getitem-test.py, I can see the execution.
But when I run sudo python 03-getitem-test.py, I see
DaxClientError('Failed to configure cluster endpoints from {}'.format(seeds), DaxErrorCode.NoRoute).amazondax.DaxError.DaxClientError: An error occurred (NoRouteException) when calling the operation: Failed to configure cluster endpoints.

Since this amazondax client works at the TCP level, do I need to do anything to run this as root?

Can someone help me understand the difference between running this as python vs. sudo python on EC2 Linux?

Get AmazonDynamoDBException

Amazon.DynamoDBv2.AmazonDynamoDBException: 'Either the KeyConditions or KeyConditionExpression parameter must be specified in the request.'
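For context, this exception (here from the .NET SDK) means the Query request supplied neither KeyConditions nor KeyConditionExpression: a Query must always name the partition key in an equality condition. A minimal well-formed Query in the Java SDK (v1), with hypothetical table and attribute names:

    import java.util.Map;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.dynamodbv2.model.QueryRequest;

    public class QuerySketch {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

            // Omitting withKeyConditionExpression here reproduces the error above.
            QueryRequest req = new QueryRequest()
                    .withTableName("Music")                  // hypothetical table
                    .withKeyConditionExpression("Artist = :a")
                    .withExpressionAttributeValues(Map.of(
                            ":a", new AttributeValue("No One You Know")));
            client.query(req).getItems().forEach(System.out::println);
        }
    }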

Incorrect permission in DAX Cluster guide

amazon-dynamodb-developer-guide/doc_source/DAX.create-cluster.md:33:

The `iam:CreateRole`, `iam:CreatePolicy`, `iam:AttachPolicy` and `iam:PassRole` permissions are not included in the AWS-managed policies for DynamoDB. This is by design, because these permissions provide the possibility of privilege escalation: A user could use these permissions to create a new administrator policy, and then attach that policy to an existing role. For this reason, you (the administrator of your DAX cluster) must explicitly add these permissions to your policy.

iam:AttachPolicy is not a valid permission (at least, not anymore); this should be iam:AttachRolePolicy (to be congruent with line 27 of the same file).

I can submit a PR if need be.

Missing `DynamoDBMapperConfig.SaveBehavior`s PUT, APPEND_SET, UPDATE_SKIP_NULL_ATTRIBUTES

According to the API documentation, the DynamoDB Mapper supports five save behaviors.

Currently, the Developer Guide documents only two:

+ A `DynamoDBMapperConfig.SaveBehavior` enumeration value - Specifies how the mapper instance should deal with attributes during save operations:
+ `UPDATE`—during a save operation, all modeled attributes are updated, and unmodeled attributes are unaffected. Primitive number types (byte, int, long) are set to 0. Object types are set to null.
+ `CLOBBER`—clears and replaces all attributes, including unmodeled ones, during a save operation. This is done by deleting the item and re-creating it. Versioned field constraints are also disregarded.

Presumably the other three should be added along these lines (see the usage sketch after this list):

+ `APPEND_SET`--treats scalar attributes (String, Number, Binary) the same as `UPDATE_SKIP_NULL_ATTRIBUTES` does.
+ `PUT`--during a save operation, clears and replaces all attributes, including unmodeled ones, but fails on conditional writes if values do not match what is persisted, and does not overwrite auto-generated values.
+ `UPDATE_SKIP_NULL_ATTRIBUTES`--similar to `UPDATE`, except that it ignores any null-valued attributes and will NOT remove them from the item in DynamoDB.
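For reference, a minimal, hypothetical sketch of how a save behavior is applied per call with the Java SDK (v1) mapper; the item class and table name are invented for illustration:

    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapperConfig;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapperConfig.SaveBehavior;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

    public class SaveBehaviorSketch {

        @DynamoDBTable(tableName = "MyTable")  // hypothetical table
        public static class MyItem {
            private String id;
            @DynamoDBHashKey
            public String getId() { return id; }
            public void setId(String id) { this.id = id; }
        }

        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
            DynamoDBMapper mapper = new DynamoDBMapper(client);

            MyItem item = new MyItem();
            item.setId("item-1");

            // Leave null attributes in place on the stored item instead of removing them.
            mapper.save(item, DynamoDBMapperConfig.builder()
                    .withSaveBehavior(SaveBehavior.UPDATE_SKIP_NULL_ATTRIBUTES)
                    .build());
        }
    }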

Inconsistent terminology

The terms base table and parent table seem to be used interchangeably throughout the docs describing secondary indexes. My recommendation is to use one term or the other, but not both. Or, if both are to be used, it should be explained somewhere what the difference is (if indeed there is one).

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-indexes-general.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-indexes-general-sparse-indexes.html

Example of `How It Works: DynamoDB Time to Live (TTL)` is not correct

In AWS Doc:

In this example, each item has an ExpirationTime attribute value set when it is created. Consider the following table item.

SessionData

| UserName | SessionId | CreationTime | ExpirationTime (TTL) | SessionInfo |
| --- | --- | --- | --- | --- |
| user1 | 74686572652773 | 1571820360 | 1571827560 | {JSON Document} |
| ... | ... | ... | ... | ... |

In this example, the item CreationTime is set to Wednesday, October 23 08:46 AM UTC 2019, and the ExpirationTime is set 2 hours later at Wednesday, October 23 10:46 AM UTC 2019. The item expires when the current time, in epoch format, is greater than the time in the ExpirationTime attribute. In this case, the item with the key { Username: user1, SessionId: 74686572652773} expires 10:00 AM (1571827560).

The last line should be: "In this case, the item with the key { Username: user1, SessionId: 74686572652773 } expires at 10:46 AM (1571827560)."
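For reference, the arithmetic supports the correction: 1571827560 - 1571820360 = 7200 seconds, exactly two hours after the stated 08:46 AM creation time, so the expiration time is 10:46 AM, not 10:00 AM.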

Documentation uses contains condition in a KeyConditionExpression

On this page https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.ReadData.Query.html there is an example that uses a contains condition in the KeyConditionExpression:

// Return all of the songs by an artist, with a particular word in the title...
// ...but only if the price is less than 1.00

{
    TableName: "Music",
    KeyConditionExpression: "Artist = :a and contains(SongTitle, :t)",
    FilterExpression: "price < :p",
    ExpressionAttributeValues: {
        ":a": "No One You Know",
        ":t": "Today",
        ":p": 1.00
    }
}

While on this page https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html it is stated that you can only use equality for the partition key, and only greater than, greater than or equal, less than, less than or equal, equal, BETWEEN, and begins_with for the sort key.

Key Condition Expression

To specify the search criteria, you use a key condition expression—a string that determines the items to be read from the table or index.

You must specify the partition key name and value as an equality condition.

You can optionally provide a second condition for the sort key (if present). The sort key condition must use one of the following comparison operators:

    a = b — true if the attribute a is equal to the value b

    a < b — true if a is less than b

    a <= b — true if a is less than or equal to b

    a > b — true if a is greater than b

    a >= b — true if a is greater than or equal to b

    a BETWEEN b AND c — true if a is greater than or equal to b, and less than or equal to c.

The following function is also supported:

    begins_with (a, substr)— true if the value of attribute a begins with a particular substring.

The following AWS CLI examples demonstrate the use of key condition expressions. Note that these expressions use placeholders (such as :name and :sub) instead of actual values. For more information, see Expression Attribute Names and Expression Attribute Values. 

One of these must be wrong, and I suspect it is the example in https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.ReadData.Query.html; presumably the contains(SongTitle, :t) clause belongs in the FilterExpression rather than the KeyConditionExpression.

bp-modeling-nosql-B inconsistent

Hello,

I think the Query conditions are not up to date with the database schema.
For example, for Access pattern 2, the PK of GSI-1 is an id, not the employee name;
the employee name seems to be in the SK of GSI-1.

Missing full list of possible DAX `Cluster.Status` values

The docs currently leave the full list of possible DAX cluster Status values unclear.
I saw these two statuses (quoted below) at this link.

The cluster status changes to `modifying` when you modify the replication factor. The status changes to `available` when the modification is complete.

But it would be great to know the full list of possible Status values, for instance like the ones shown for Amazon Aurora DB clusters below (table truncated).

| DB cluster status | Billed | Description |
| --- | --- | --- |
| available | Billed | The DB cluster is healthy and available. When an Aurora Serverless cluster is available and paused, you're billed for storage only. |
| backing-up | Billed | The DB cluster is currently being backed up. |
| backtracking | Billed | The DB cluster is currently being backtracked. This status only applies to Aurora MySQL. |

Confusing description of classes implementing interfaces

The descriptions of the DynamoDB Java SDK programmatic interfaces are confusing at points, because in context they seem to suggest (wrongly) that the classes under discussion implement Java interfaces in the sense of the Java programming language. Basically, the terms implements and interface are keywords with specific meanings in Java, but they are used very colloquially in the docs.

The com.amazonaws.services.dynamodbv2.document.DynamoDB class implements the DynamoDB document interface.

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.SDKs.Interfaces.Document.html

The com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper class implements the DynamoDB object persistence interface.

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.SDKs.Interfaces.Mapper.html

GSI max number

According to the training book, 5 GSIs are allowed, which does not match the 20 in the documentation.

Very important part in a doc page is missing

I've just read the JavaScript and DynamoDB doc page. In my opinion, a very important explanation is missing there: when to use such an implementation (direct access to DynamoDB from client-side JavaScript). I may be wrong about this, but I think it is very risky to create public web applications that access DynamoDB directly from JavaScript, for a simple reason: a malicious user would be able to issue arbitrary queries to the DB, either changing the data maliciously, reading data that shouldn't be available to the user, or at the very least running many scans/selects and thus causing throttling and costs. If such access is possible, shouldn't there be a note like: "Use this only in internal apps used by trusted users", or something like that?

Inconsistent "OK to retry" judgments in Programming.Errors.md

Programming.Errors.md describes what "OK to retry" means and then gives a "Yes" or "No" judgment for each error type, without an explanation why; in some cases this judgment seems wrong or inconsistent.

One example is ItemCollectionSizeLimitExceededException, for which "OK to retry" is given as "YES". However, retrying the same operation cannot possibly succeed, because the limit is still in place. So why "YES"? True, if some parallel operations remove items we may go below the limit, so maybe this explains the YES? I don't think so... Consider ConditionalCheckFailedException, which is marked NO. But there too, parallel requests may cause the condition to succeed, so a client may want to retry. In short, it's not clear why ItemCollectionSizeLimitExceededException and ConditionalCheckFailedException should have different "OK to retry" judgments.

Another interesting case is UnrecognizedClientException. If the client has a wrong key, it is pointless to retry: the key will still be wrong. And yet this error is marked "YES". Does this suggest that there are cases where AWS may be unable to verify a key (e.g., because of a network problem) and will temporarily return UnrecognizedClientException instead of Internal Server Error (which I would have expected)? Does this mean a client should always retry on UnrecognizedClientException because of this possibility? This is not just a theoretical issue; people have actually been troubled by this question, see boto/boto3#509.

To summarize, I think the YES/NO judgments in this document should be carefully reviewed, and wherever a decision is not obvious, a short explanation of why it was chosen would be welcome.

Discrepancy in DynamoDB Local

I am not sure if this is the most appropriate place to report this, but here it goes...

The documentation on the page DynamoDB Streams and Time to Live shows the type and principalId keys in camelCase, which is correct, but when using the installable version of DynamoDB Local (the Asia Pacific (Mumbai) Region download) during development and testing, the keys are actually in PascalCase.

We actually faced a production issue because of this: our assumption that functionality working on DynamoDB Local would also work seamlessly on the actual DB cost us. This should be fixed by the team concerned at AWS, or at least pointed out in the Usage Notes.

Inconsistent descriptions of when a Query operation can be performed

API_Query says:

The Query operation finds items based on primary key values. You can query any table or secondary index that has a composite primary key (a partition key and a sort key).

Meanwhile, Primary Key docs says:

DynamoDB supports two different kinds of primary keys:
Partition key – A simple primary key, composed of one attribute known as the partition key.
...
Partition key and sort key - Referred to as a composite primary key, this type of key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key.

But a Query can also be performed on a table or secondary index that doesn't have a composite primary key (i.e., on a table or secondary index with a simple primary key).

Wrong translation regarding "Query" and "Scan"

In Best Practices for Querying and Scanning Data, it's recommended to use Query instead of Scan.

For faster response times, design your tables and indexes so that your applications can use Query instead of Scan.

But on the Japanese page, this is translated as the following sentence:

高速な応答時間を得るには、アプリケーションが Query ではなく Scan を使用できるようにテーブルとインデックスを設計します
(literally: "For faster response times, design your tables and indexes so that your applications can use Scan instead of Query.")

Here, "Query" and "Scan" have been swapped.

Where are the pages for the PartiQL reference?

I noticed the new section added to the reference for PartiQL and DynamoDB. This is the docs section I'm talking about.

However, the Getting Started section has no Node.js examples, so I wanted to edit the page and add some examples in a pull request, but the docs for that section seem to live in a different repo.

Can you point me in the direction of where the docs for the reference are? I'm also guessing that since this is a new feature, just released in November 2020, the docs haven't been properly consolidated here yet.

Partition keys do need to have a large number of distinct values relative to the number of items in the table

The following example from the documentation seems to suggest that partition keys do not need to have a large number of distinct values relative to the number of items in the table:

"Suppose that the Pets table has a composite primary key consisting of AnimalType (partition key) and Name (sort key)."

I think using the Name as a partition key would be a better, more correct example given the following Amazon recommendation:

"DynamoDB is optimized for uniform distribution of items across a table's partitions, no matter how many partitions there may be. We recommend that you choose a partition key that can have a large number of distinct values relative to the number of items in the table."

I suggest that the example be updated to use Name as the partition key and AnimalType as the sort key, to help developers design better solutions.

ProvisionedThroughput is not specified for index: GameTitleIndex

Hi there,

your documentation is not up to date: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GCICli.html

When I call

aws dynamodb create-table \
    --table-name GameScores \
    --attribute-definitions AttributeName=UserId,AttributeType=S \
                            AttributeName=GameTitle,AttributeType=S \
                            AttributeName=TopScore,AttributeType=N  \
    --key-schema AttributeName=UserId,KeyType=HASH \
                 AttributeName=GameTitle,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5 \
    --global-secondary-indexes \
        "[
            {
                \"IndexName\": \"GameTitleIndex\",
                \"KeySchema\": [{\"AttributeName\":\"GameTitle\",\"KeyType\":\"HASH\"},
                                {\"AttributeName\":\"TopScore\",\"KeyType\":\"RANGE\"}],
                \"Projection\":{
                    \"ProjectionType\":\"INCLUDE\",
                    \"NonKeyAttributes\":[\"UserId\"]
                }
            }
        ]"

I get

An error occurred (ValidationException) when calling the CreateTable operation: One or more parameter values were invalid: ProvisionedThroughput is not specified for index: GameTitleIndex

Adding a ProvisionedThroughput like

aws dynamodb create-table \
    --table-name GameScores \
    --attribute-definitions AttributeName=UserId,AttributeType=S \
                            AttributeName=GameTitle,AttributeType=S \
                            AttributeName=TopScore,AttributeType=N  \
    --key-schema AttributeName=UserId,KeyType=HASH \
                 AttributeName=GameTitle,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5 \
    --global-secondary-indexes \
        "[
            {
                \"IndexName\": \"GameTitleIndex\",
                \"KeySchema\": [{\"AttributeName\":\"GameTitle\",\"KeyType\":\"HASH\"},
                                {\"AttributeName\":\"TopScore\",\"KeyType\":\"RANGE\"}],
                \"Projection\":{
                    \"ProjectionType\":\"INCLUDE\",
                    \"NonKeyAttributes\":[\"UserId\"]
                },
                \"ProvisionedThroughput\":{
                    \"ReadCapacityUnits\": 1,
                    \"WriteCapacityUnits\": 1
                }
            }
        ]"

fixes the error.

Regards

Can't create table right after delete

There is no repo or public issue tracker for dynamodb-local, so I have to put this here.

Steps to reproduce:

  1. delete table.
  2. wait for table deletion. (using a waiter)
  3. describe table (just to be sure), and receive a ResourceNotFoundException
  4. create table

Step 4 results in:
ResourceInUseException: Cannot create preexisting table

This is non-deterministic; it only happens sometimes, but often enough. I have to loop creating the table until it finally succeeds, since there is no other way to get dynamodb-local to confirm that the table is really gone and the name is ready for reuse.

I'm using the latest AWS SDK for Go, but I don't think this is Go-SDK related.

documentation on pagination is inconsistent

amazon-dynamodb-developer-guide/doc_source/Query.md initially says: "If LastEvaluatedKey is present in the response and is non-null, you will need to paginate the result set."

However, under Paginating the Results, it then goes on to say: "If the result contains a LastEvaluatedKey element, proceed to step 2. If there is not a LastEvaluatedKey in the result, then there are no more items to be retrieved."

And most further mentions of LastEvaluatedKey, including code samples, no longer say that it needs to be both present and non-null.

Therefore, as an application developer, I'm now not sure whether I should check LastEvaluatedKey to be non-null or not.

To avoid confusion, would it make sense to clarify which of the statements is true and make them all consistent?
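For what it's worth, in the Java SDK (v1) an absent LastEvaluatedKey is surfaced as a null map, so checking for null (and, defensively, for an empty map) satisfies both phrasings at once. A minimal sketch with hypothetical table and key names:

    import java.util.Map;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.dynamodbv2.model.QueryRequest;
    import com.amazonaws.services.dynamodbv2.model.QueryResult;

    public class PaginatedQuery {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

            Map<String, AttributeValue> startKey = null;
            do {
                QueryRequest req = new QueryRequest()
                        .withTableName("Music")                    // hypothetical table
                        .withKeyConditionExpression("Artist = :a")
                        .withExpressionAttributeValues(Map.of(
                                ":a", new AttributeValue("No One You Know")))
                        .withExclusiveStartKey(startKey);          // null on the first page
                QueryResult page = client.query(req);
                page.getItems().forEach(System.out::println);

                // Absent => null in this SDK; treat null or empty as "no more pages".
                startKey = page.getLastEvaluatedKey();
            } while (startKey != null && !startKey.isEmpty());
        }
    }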

Code samples for aws-sdk-go-v2 are missing

Code samples for the most common DynamoDB operations are missing from both the Go v2 SDK documentation and the DynamoDB documentation.
There is no simple, clear listing of all the Golang modules one may need to use the service. I was not aware of the github.com/aws/aws-sdk-go-v2/service/dynamodb/dynamodbattribute module and was trying to reinvent the wheel.
Some guidance on the usage of the dynamodbav tag would be super helpful.
Thanks!

Atomic Counters might require ConsistentRead to work properly in concurrency scenarios

I faced an issue of undercounting when I was trying to use atomic counters, and I found the information in this guide slightly incomplete.

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.AtomicCounters says:
"...If an UpdateItem operation should fail, the application could simply retry the operation. This would risk updating the counter twice, but you could probably tolerate a slight overcounting or undercounting of website visitors.
An atomic counter would not be appropriate where overcounting or undercounting cannot be tolerated (For example, in a banking application). In this case, it is safer to use a conditional update instead of an atomic counter."

The way it is written, it might give the impression that this issue only happens if there is a problem in your application logic. I faced this behavior when keeping atomic counters in high-concurrency scenarios, even though my app was behaving correctly.

I also discovered that adding the ConsistentRead parameter to that update command avoids the problem, like this:

aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id": { "N": "601" }}' \
    --update-expression "SET counter = counter + :incr" \
    --expression-attribute-values '{":incr":{"N":"1"}}' \
    --consistent-read \
    --return-values UPDATED_NEW

Is that the correct approach? If it is, I believe it would be helpful to update the atomic counters example with the one above.
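For comparison, the "conditional update" that the quoted guide text recommends for exact counts can be sketched with the Java SDK (v1) as follows; the table, key, and counter attribute names are hypothetical. The write succeeds only if the counter still holds the value that was read, so retrying after any failure cannot double-count:

    import java.util.Map;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
    import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
    import com.amazonaws.services.dynamodbv2.model.UpdateItemRequest;

    public class ConditionalCounter {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
            Map<String, AttributeValue> key = Map.of("Id", new AttributeValue().withN("601"));

            while (true) {
                // Strongly consistent read of the current counter value.
                long current = Long.parseLong(client.getItem(new GetItemRequest()
                        .withTableName("ProductCatalog")
                        .withKey(key)
                        .withConsistentRead(true))
                        .getItem().get("MyCounter").getN());      // hypothetical attribute

                try {
                    // The write applies only if nobody changed the counter in between,
                    // so a retry after a failed request can never double-count.
                    client.updateItem(new UpdateItemRequest()
                            .withTableName("ProductCatalog")
                            .withKey(key)
                            .withUpdateExpression("SET MyCounter = :new")
                            .withConditionExpression("MyCounter = :old")
                            .withExpressionAttributeValues(Map.of(
                                    ":old", new AttributeValue().withN(Long.toString(current)),
                                    ":new", new AttributeValue().withN(Long.toString(current + 1)))));
                    return;  // success
                } catch (ConditionalCheckFailedException e) {
                    // Lost the race; re-read and try again.
                }
            }
        }
    }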

Errors in bp-gsi-overloading.md

Unless I'm missing something, the example in bp-gsi-overloading.md has one serious error and a couple of cases of confusing terminology:

The error is to suggest that the user create an index with the base table's sort key as the partition key and "Data" as the sort key. I think it should be the other way around: "Data" should be the index's partition key, and the sort key should be the same as in the base table. In this example, the table's original sort key has only a few distinct values, so it is a bad choice for a partition key. At the same time, the "Data" string has many distinct values, so it is a good choice for the partition key.

The smaller points:

  • You keep repeating that the different Data values have different "types". They don't; they are all strings, and that's important for this technique to work. Even the dollar amount is a string like "$5,477", not a number. This trick wouldn't work if some items had non-string data.

  • Please give the base table's sort key a name. It is hard to refer to that attribute without a name, and key attributes have names anyway.

NoRouteException

Hi, I am getting the error below when connecting to the DAX cluster using the Python amazon-dax-client from an EC2 instance:
DaxClientError: An error occurred (NoRouteException) when calling the operation: Failed to configure cluster endpoints.

Can someone help?
