We are running this tool with a fairly large number of snapshots being generated (one per hour, kept for 7 days, across multiple clusters).
[ERROR] 2018-10-08T02:30:33.680Z 6153e97f-733f-4764-899c-a89c052312c8 Exception sharing dev-aurora-cluster20181001042229986400000001-2018-10-04-23-00: An error occurred (Throttling) when calling the ModifyDBClusterSnapshotAttribute operation (reached max retries: 4): Rate exceeded
An error occurred (Throttling) when calling the ListTagsForResource operation (reached max retries: 4): Rate exceeded: ClientError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 49, in lambda_handler
    ResourceName=snapshot_arn)
  File "/var/runtime/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (Throttling) when calling the ListTagsForResource operation (reached max retries: 4): Rate exceeded
This happens often enough that the retries get exhausted, so it fails not only the Lambda but also the surrounding Step Function execution.
Is there a way to reduce the number of API calls the tool makes, or are we simply going beyond what is possible with the current implementation?
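For context, one workaround we have been considering on our side is wrapping the throttled calls in client-side exponential backoff with jitter, on top of botocore's built-in retries. The sketch below is pure stdlib and hypothetical (the `with_backoff` helper and its parameters are our own names, not part of the tool); in real code you would catch `botocore.exceptions.ClientError` and inspect the error code rather than matching on the message string.

```python
import random
import time


def with_backoff(call, max_attempts=8, base_delay=1.0, max_delay=60.0):
    """Retry `call` with exponential backoff and full jitter.

    Hypothetical helper for throttled AWS API calls: exceptions whose
    message mentions "Throttling" are retried with a randomized sleep;
    anything else (or the final failure) propagates to the caller.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:  # real code: botocore.exceptions.ClientError
            if "Throttling" not in str(exc) or attempt == max_attempts - 1:
                raise
            # Sleep a random amount up to min(max_delay, base * 2**attempt),
            # so concurrent Lambdas don't retry in lockstep.
            time.sleep(random.uniform(0, min(max_delay, base_delay * 2 ** attempt)))
```

Usage would look something like `with_backoff(lambda: rds.list_tags_for_resource(ResourceName=snapshot_arn))`. That spreads the retries out, but it doesn't reduce the total call volume, hence the question above.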