aws / aws-encryption-sdk-python

AWS Encryption SDK

Home Page: https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html

License: Apache License 2.0

Python 99.97% Shell 0.03% Roff 0.01%

aws-encryption-sdk-python's Introduction

aws-encryption-sdk


The AWS Encryption SDK for Python provides a fully compliant, native Python implementation of the AWS Encryption SDK.

The latest full documentation can be found at Read the Docs.

Find us on GitHub.

Security issue notifications

See Support Policy for details on the current support status of all major versions of this library.

Getting Started

Required Prerequisites

  • Python 3.8+
  • cryptography >= 3.4.6
  • boto3 >= 1.10.0
  • attrs

Installation

Note

If you have not already installed cryptography, you might need to install additional prerequisites as detailed in the cryptography installation guide for your operating system.

$ pip install aws-encryption-sdk

Concepts

There are four main concepts that you need to understand to use this library:

Cryptographic Materials Managers

Cryptographic materials managers (CMMs) are resources that collect cryptographic materials and prepare them for use by the Encryption SDK core logic.

An example of a CMM is the default CMM, which is automatically generated anywhere a caller provides a master key provider. The default CMM collects encrypted data keys from all master keys referenced by the master key provider.

An example of a more advanced CMM is the caching CMM, which caches cryptographic materials provided by another CMM.

Master Key Providers

Master key providers are resources that provide master keys. An example of a master key provider is AWS KMS.

To encrypt data in this client, a MasterKeyProvider object must contain at least one MasterKey object.

MasterKeyProvider objects can also contain other MasterKeyProvider objects.

Master Keys

Master keys generate, encrypt, and decrypt data keys. An example of a master key is a KMS customer master key (CMK).

Data Keys

Data keys are the encryption keys that are used to encrypt your data. If your algorithm suite uses a key derivation function, the data key is used to generate the key that directly encrypts the data.
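To illustrate what a key derivation function does here, the sketch below is a minimal HKDF-style derivation (RFC 5869, HMAC-SHA256) written with only the Python standard library. It is an illustration of the concept, not the Encryption SDK's internal implementation, and the inputs are made up.

```python
import hashlib
import hmac


def hkdf_sha256(data_key: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """Derive `length` bytes of encryption-key material from a data key (RFC 5869 style)."""
    # Extract: mix the data key with a salt to produce a pseudorandom key.
    prk = hmac.new(salt or b"\x00" * 32, data_key, hashlib.sha256).digest()
    # Expand: stretch the pseudorandom key into the requested output length.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]


# Hypothetical inputs: the "data key" never encrypts the message directly;
# the derived key does.
derived = hkdf_sha256(b"\x0b" * 22, bytes(range(13)), b"message-key", 32)
```

With an algorithm suite that uses a KDF, compromising a derived key does not directly expose the data key, which is one reason the suites that include key derivation are preferred.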

Usage

EncryptionSDKClient

To use this module, you (the caller) must first create an instance of the EncryptionSDKClient class. The constructor to this class accepts an optional keyword argument, commitment_policy, that controls which algorithm suites can be used for encryption and decryption. If no value is provided for this argument, a default value of REQUIRE_ENCRYPT_REQUIRE_DECRYPT is used. Unless you have specialized performance requirements or are in the process of migrating from an older version of the AWS Encryption SDK, we recommend using the default value.

import aws_encryption_sdk
from aws_encryption_sdk.identifiers import CommitmentPolicy


client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)

You must then create an instance of either a master key provider or a CMM. The examples in this readme use the StrictAwsKmsMasterKeyProvider class.

StrictAwsKmsMasterKeyProvider

A StrictAwsKmsMasterKeyProvider is configured with an explicit list of AWS KMS CMKs with which to encrypt and decrypt data. On encryption, it encrypts the plaintext with all configured CMKs. On decryption, it only attempts to decrypt ciphertexts that have been wrapped with a CMK that matches one of the configured CMK ARNs.

To create a StrictAwsKmsMasterKeyProvider you must provide one or more CMKs. For providers that will only be used for encryption, you can use any valid KMS key identifier. For providers that will be used for decryption, you must use the key ARN; key ids, alias names, and alias ARNs are not supported.

Because the StrictAwsKmsMasterKeyProvider uses the boto3 SDK to interact with AWS KMS, it requires AWS Credentials. To provide these credentials, use the standard means by which boto3 locates credentials or provide a pre-existing instance of a botocore session to the StrictAwsKmsMasterKeyProvider. This latter option can be useful if you have an alternate way to store your AWS credentials or you want to reuse an existing instance of a botocore session in order to decrease startup costs.

If you configure the StrictAwsKmsMasterKeyProvider with multiple CMKs, the final message will include a copy of the data key encrypted by each configured CMK.

import aws_encryption_sdk

kms_key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[
    'arn:aws:kms:us-east-1:2222222222222:key/22222222-2222-2222-2222-222222222222',
    'arn:aws:kms:us-east-1:3333333333333:key/33333333-3333-3333-3333-333333333333'
])

You can add CMKs from multiple regions to the StrictAwsKmsMasterKeyProvider.

import aws_encryption_sdk

kms_key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[
    'arn:aws:kms:us-east-1:2222222222222:key/22222222-2222-2222-2222-222222222222',
    'arn:aws:kms:us-west-2:3333333333333:key/33333333-3333-3333-3333-333333333333',
    'arn:aws:kms:ap-northeast-1:4444444444444:key/44444444-4444-4444-4444-444444444444'
])

DiscoveryAwsKmsMasterKeyProvider

We recommend using a StrictAwsKmsMasterKeyProvider in order to ensure that you can only encrypt and decrypt data using the AWS KMS CMKs you expect. However, if you are unable to explicitly identify the AWS KMS CMKs that should be used for decryption, you can instead use a DiscoveryAwsKmsMasterKeyProvider for decryption operations. This provider attempts decryption of any ciphertexts as long as they match a DiscoveryFilter that you configure. A DiscoveryFilter consists of a list of AWS account ids and an AWS partition.

import aws_encryption_sdk
from aws_encryption_sdk.key_providers.kms import DiscoveryFilter

discovery_filter = DiscoveryFilter(
    account_ids=['222222222222', '333333333333'],
    partition='aws'
)
kms_key_provider = aws_encryption_sdk.DiscoveryAwsKmsMasterKeyProvider(
    discovery_filter=discovery_filter
)

If you do not want to filter the set of allowed accounts, you can also omit the discovery_filter argument.

Note that a DiscoveryAwsKmsMasterKeyProvider cannot be used for encryption operations.

Encryption and Decryption

After you create an instance of an EncryptionSDKClient and a MasterKeyProvider, you can use either of the client's two encrypt/decrypt functions to encrypt and decrypt your data.

import aws_encryption_sdk
from aws_encryption_sdk.identifiers import CommitmentPolicy

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.FORBID_ENCRYPT_ALLOW_DECRYPT
)

kms_key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[
    'arn:aws:kms:us-east-1:2222222222222:key/22222222-2222-2222-2222-222222222222',
    'arn:aws:kms:us-east-1:3333333333333:key/33333333-3333-3333-3333-333333333333'
])
my_plaintext = b'This is some super secret data!  Yup, sure is!'

my_ciphertext, encryptor_header = client.encrypt(
    source=my_plaintext,
    key_provider=kms_key_provider
)

decrypted_plaintext, decryptor_header = client.decrypt(
    source=my_ciphertext,
    key_provider=kms_key_provider
)

assert my_plaintext == decrypted_plaintext
assert encryptor_header.encryption_context == decryptor_header.encryption_context

You can provide an encryption context: a form of additional authenticating information.

import aws_encryption_sdk
from aws_encryption_sdk.identifiers import CommitmentPolicy

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.FORBID_ENCRYPT_ALLOW_DECRYPT
)

kms_key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[
    'arn:aws:kms:us-east-1:2222222222222:key/22222222-2222-2222-2222-222222222222',
    'arn:aws:kms:us-east-1:3333333333333:key/33333333-3333-3333-3333-333333333333'
])
my_plaintext = b'This is some super secret data!  Yup, sure is!'

my_ciphertext, encryptor_header = client.encrypt(
    source=my_plaintext,
    key_provider=kms_key_provider,
    encryption_context={
        'not really': 'a secret',
        'but adds': 'some authentication'
    }
)

decrypted_plaintext, decryptor_header = client.decrypt(
    source=my_ciphertext,
    key_provider=kms_key_provider
)

assert my_plaintext == decrypted_plaintext
assert encryptor_header.encryption_context == decryptor_header.encryption_context

Streaming

If you are handling large files or simply do not want to put the entire plaintext or ciphertext in memory at once, you can use this library's streaming clients directly. The streaming clients are file-like objects, and behave exactly as you would expect a Python file object to behave, offering context manager and iteration support.

import aws_encryption_sdk
from aws_encryption_sdk.identifiers import CommitmentPolicy
import filecmp

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.FORBID_ENCRYPT_ALLOW_DECRYPT
)

kms_key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[
    'arn:aws:kms:us-east-1:2222222222222:key/22222222-2222-2222-2222-222222222222',
    'arn:aws:kms:us-east-1:3333333333333:key/33333333-3333-3333-3333-333333333333'
])
plaintext_filename = 'my-secret-data.dat'
ciphertext_filename = 'my-encrypted-data.ct'

with open(plaintext_filename, 'rb') as pt_file, open(ciphertext_filename, 'wb') as ct_file:
    with client.stream(
        mode='e',
        source=pt_file,
        key_provider=kms_key_provider
    ) as encryptor:
        for chunk in encryptor:
            ct_file.write(chunk)

new_plaintext_filename = 'my-decrypted-data.dat'

with open(ciphertext_filename, 'rb') as ct_file, open(new_plaintext_filename, 'wb') as pt_file:
    with client.stream(
        mode='d',
        source=ct_file,
        key_provider=kms_key_provider
    ) as decryptor:
        for chunk in decryptor:
            pt_file.write(chunk)

assert filecmp.cmp(plaintext_filename, new_plaintext_filename)
assert encryptor.header.encryption_context == decryptor.header.encryption_context

Performance Considerations

Adjusting the frame size can significantly improve the performance of encrypt/decrypt operations with this library.

Processing each frame in a framed message involves a certain amount of overhead. If you are encrypting a large file, increasing the frame size can offer potentially significant performance gains. We recommend that you tune these values to your use-case in order to obtain peak performance.
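To see why larger frames help, a rough back-of-the-envelope calculation is enough: per-message framing overhead scales with the number of frames. The constant below is an illustrative assumption (roughly an IV, auth tag, and sequence number per frame), not the exact AWS Encryption SDK message-format figure.

```python
import math


def framed_overhead_bytes(plaintext_len: int, frame_length: int,
                          per_frame_overhead: int = 44) -> int:
    """Rough framing overhead for one message: frame count times a fixed
    per-frame cost. per_frame_overhead is an illustrative constant."""
    frames = max(1, math.ceil(plaintext_len / frame_length))
    return frames * per_frame_overhead


# 100 MiB plaintext: small frames vs 1 MiB frames
small_frames = framed_overhead_bytes(100 * 2**20, 4096)
large_frames = framed_overhead_bytes(100 * 2**20, 2**20)
```

The same trade-off applies to CPU: each frame is a separate authenticated-encryption operation, so fewer, larger frames mean fewer per-frame setup costs.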

Thread safety

The EncryptionSDKClient and all provided CryptoMaterialsManager implementations are thread safe. However, instances of BaseKMSMasterKeyProvider must NOT be shared between threads, for the reasons outlined in the boto3 docs.

Because the BaseKMSMasterKeyProvider creates a new boto3 session per region, users do not need to create a client for every region in every thread; a new BaseKMSMasterKeyProvider per thread is sufficient.

(The BaseKMSMasterKeyProvider is the internal parent class of all the KMS providers.)

Finally, while the CryptoMaterialsCache is thread safe, sharing entries in that cache across threads needs to be done carefully (see the note about partition name in the API docs).

aws-encryption-sdk-python's People

Contributors

alex-chew, ansanper, caitlin-tibbetts, cjkindel, dependabot[bot], farleyb-amazon, github-actions[bot], gliptak, johnwalker, josecorella, jpeddicord, juneb, karlw00t, lavaleri, lizroth, lucasmcdonald3, lucasrc, mattsb42-aws, meghashetty, nitinnt, nraval1729, praus, ragona, ritvikkapila, robin-aws, salusasecondus, seanmcl, seebees, shubhamchaturvedi7, texastony


aws-encryption-sdk-python's Issues

attrs 17.3.0 breaks some of our tests

Problem

The backwards-incompatible change in attrs 17.3.0 broke some of our tests. This is apparently due to our use of a feature that was deprecated in 16.1.

Solution

The good news is that none of our actual client code uses this deprecated feature, so users should not be affected by this. Because of this, we just need to update our affected tests to move away from this access pattern.

add pytest markers and refactor testenvs to separate unit/func/integ/accept

Problem

Currently, all the tests run as a big batch, with the integration tests skipped unless a specific environment variable is found.

This is a bit clunky and makes it difficult to tell in Travis whether integration tests were run or skipped.

Solution

  1. Add pytest markers to all functional, integration, and acceptance (xcompat) tests.
  2. Update tox to have separate testenvs for unit/func/integ/accept tests.
  3. Update Travis CI configuration to run the new tox testenvs.

cross-region performance optimization

Hello again my friends - just finished implementation, using this library called from within a child process in node. I initially thought my slow speeds were due to the buildup and takedown of the process itself, but it seems now, after some timing tests, that this isn't the case.

The decrypt function, by itself, takes 1+ seconds. Is this normal behaviour? I'm calling it a single time, with a string payload no larger than 400 characters long. Sometimes I call it with a longer string, but even with strings ~50 characters in length it seems to take this full second of time.

For our use case (calling it within an API endpoint) this doesn't really work. Am I doing something wrong? Let me know if you need to see my code, but it only has a few additions from the example code you guys gave, and I'm timing only the call to the SDK itself.

Thanks guys!

Add support for additional SHA2 hashes in RawMasterKey RSA-OAEP-MGF1 wrapping algorithms

Problem

RawMasterKey provides a compatible implementation of the behavior exhibited by the JceMasterKey provided in the AWS Encryption SDK for Java. Unfortunately, because we did not define constraints for JceMasterKey, when used with an RSA keypair it will accept any JCE Standard Name wrapping algorithm for RSA. This is only constrained by the Standard Names that your JCE Provider supports.

RawMasterKey, by way of WrappingAlgorithm, is much more opinionated and will only accept the specific algorithms that we have pre-defined. This list was defined as PKCS1v15, OAEP-MGF1-SHA1, and OAEP-MGF1-SHA256 because those are the only algorithms defined in the JCE implementation requirements.

We will explicitly not be supporting all possible algorithms for several reasons, including but not limited to:

  1. No constraints are set in the JCE specification, so any JCE Provider could in theory support any names that they want to, including fully custom names/algorithms.
  2. We will explicitly never support some commonly supported algorithms, such as some of those supported by the SunJCE Provider. These include NoPadding and OAEP-MGF1-MD5.
  3. At some point we do need to better define the constraints on the algorithms allowed by JceMasterKey. What exactly that will look like, especially considering compatibility requirements, remains to be seen and requires discussion.

Solution

We should add allowed WrappingAlgorithm definitions for RSA-OAEP-MGF1 with additional valid SHA2 algorithms. We should at least add SHA512. Whether we should add SHA384/etc is pending discussion.

Unable to decrypt any data key

code:

import base64
import os

import aws_encryption_sdk

kms_key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[
    os.environ["ENCRYPTION_KEY_ARN"]
])


def encrypt(plaintext):
    ciphertext, _ = aws_encryption_sdk.encrypt(
        source=plaintext,
        key_provider=kms_key_provider
    )

    return base64.b64encode(ciphertext)


def decrypt(ciphertext_b64):
    ciphertext = base64.b64decode(ciphertext_b64)

    plaintext, _ = aws_encryption_sdk.decrypt(            
        source=ciphertext,
        key_provider=kms_key_provider,
    )
    return plaintext

FYI, this is using v1.3.2.

I get the following error:

Unable to decrypt any data key

When I print out the ciphertext I can clearly see the correct encryption key ARN, so I am not sure what's wrong here.

TypeError: unhashable type: 'EncryptedDataKey' - possible incompatibility with attrs-17.1.0

Hi

I'm getting the above error when trying to encrypt using aws_encryption_sdk.encrypt() as per the README.md instructions here at this location:

/aws_encryption_sdk/internal/utils.py", line 173, in prepare_data_keys
    encrypted_data_keys.add(encrypted_key)
TypeError: unhashable type: 'EncryptedDataKey'

I'm using Python 2.7.4 with aws-encryption-sdk-1.2.0 and attrs-17.1.0 (which I think is the cause of the problem). Installation of the SDK looked fine, and the dependencies installed fine too.

The __hash__ method on encrypted_key is None, so it's unable to add it to the set being returned by prepare_data_keys().

When I edit aws_encryption_sdk/structures.py and add frozen=True to the @attr.s() decoration on class MasterKeyInfo(object) and class EncryptedDataKey(object), the objects become hashable courtesy of attrs, and it seems to work OK.

The changes to attrs could be the problem here, looking at commits since 16.3.0, there are a few relating to frozen classes and hashing behaviour.

Reverting to attrs-16.3.0 works fine, which would seem to confirm this.

I'm using a fresh CMK loaded into a KMSMasterKeyProvider as per the instructions:

kms_key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[
    'arn:aws:kms:ap-southeast-2:123456789012:key/749cd1b6-dc9b-4a1d-9a90-86ef2b11012a'
])

The EncryptedDataKey encrypted_key that's obtained and returned looks OK and is:

EncryptedDataKey(key_provider=MasterKeyInfo(provider_id=u'aws-kms', key_info='arn:aws:kms:ap-southeast-2:123456789012:key/749cd1b6-dc9b-4a1d-9a90-86ef2b11012a'), encrypted_data_key="\x01\x02\x02\x00x\x1b\x825\xa7\x9aZ^k:0\xa5\xd2rD\xb8\xc3U\xea+\x1d\xe5\x90\xe8\x1b\x90\xe7'X\x00\xa7s+\x01\xbb\x87\xa2w\x04\xe7y\xe6u\xd4I\xa5\x813j+\x00\x00\x00~0|\x06\t*\x86H\x86\xf7\r\x01\x07\x06\xa0o0m\x02\x01\x000h\x06\t*\x86H\x86\xf7\r\x01\x07\x010\x1e\x06\t`\x86H\x01e\x03\x04\x01.0\x11\x04\x0c\xc7\xc3\xe0\x1f\r\xd8\xe0c\x13r?\xd1\x02\x01\x10\x80;J5c\x943\xf9\x965\xd6p%\x1a\xb1S\x9e\x13\x8f\xceV\\\xe6v\xacD9o\x89\xd7\x147\xca\xa8\x84\xc0i\x169?\xef\xe1\xcan~\xe0\xd20_^\xd1M\xc4\x166\x9am\xbe\xa47\xcb")

Please let me know if you need any further information?

Thanks
Richard

DecryptionStream should handle simplest FileObj

So I'm trying to stream a download into a decryption stream, and my code looks something like this:

downloadable = self.bucket.Object(key).get()
destination = open(path_to_file, "wb")

decrypted_data = aws_encryption_sdk.stream(
            mode='d',
            source=downloadable["Body"],
            key_provider=self.static_master_key_provider
)

for chunk in decrypted_data:
    destination.write(chunk)

When I do this, I get an error after a single successful chunk read:
AttributeError: 'StreamingBody' object has no attribute 'closed'
I think it should try and handle this case (object with nothing but read and close).

Unless you think this is an inappropriate way of piping an s3 blob into the decryption streamer.
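Until the library handles minimal file-like objects, one workaround is to wrap a read/close-only object (such as botocore's StreamingBody) in an adapter that supplies the attributes the decryption stream expects. This is a hypothetical helper, not part of the SDK; `_ReadCloseOnly` below is only a stand-in for demonstration.

```python
import io


class FileLikeAdapter:
    """Wraps an object that exposes only read()/close() so it looks like a file."""

    def __init__(self, raw):
        self._raw = raw
        self._closed = False

    def read(self, size=-1):
        return self._raw.read(size)

    def close(self):
        self._raw.close()
        self._closed = True

    @property
    def closed(self):
        return self._closed

    def readable(self):
        return not self._closed


class _ReadCloseOnly:
    """Stand-in for a minimal stream like botocore's StreamingBody."""

    def __init__(self, data):
        self._buffer = io.BytesIO(data)

    def read(self, size=-1):
        return self._buffer.read(size)

    def close(self):
        self._buffer.close()


adapter = FileLikeAdapter(_ReadCloseOnly(b"ciphertext bytes"))
first_read = adapter.read()
readable_before_close = adapter.readable()
adapter.close()
```

In the original snippet this would mean passing `source=FileLikeAdapter(downloadable["Body"])` to `aws_encryption_sdk.stream`.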

user agent is not actually being modified

Problem

I could have sworn I checked this, but apparently the user agent modification we are doing is not actually taking effect.

Solution

It looks like this needs to be modified on the botocore level, not on the boto3 client level. Unfortunately, this means that we cannot modify the user agent if a user provides a custom KMS client, as we should not be modifying a botocore client that might be used for something else.

In KMSMasterKeyProvider we can modify the botocore client if we build it directly. However, we should not modify a botocore client that was provided to us.

TODO

Make sure that there is not a way to modify the user agent at the boto3 client layer.

StreamDecryptor._read_header() fails when source stream does not support tell() and seek()

StreamDecryptor._read_header() currently assumes that the input stream supports tell() and seek() and uses these methods to rewind the input read after deserializing the header to feed into the header auth hash. If these methods are not supported (ex: for stdin), decryption will fail.

We cannot simply read the entire thing into memory, as we have no way of knowing how large the input stream will be. We cannot use a BufferedReader because we have no way of reasonably knowing how much we need to buffer (somewhere between ~600B and 8MB).

Proposed fix is to create internal.utils.TeeStream, based on internal.utils.ROStream, which will record all data which is read to an alternate stream. This alternate stream will then be read out after reading the header and the contents returned from internal.deserialize.deserialize_header along with the deserialized header.
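The proposed TeeStream could be sketched as a read-only wrapper that copies every byte read from the source into a side buffer, so the header bytes can be replayed for the header auth check without seek(). This is a sketch of the proposal, not the eventual implementation.

```python
import io


class TeeStream:
    """Read-only wrapper that records all bytes read into a tee buffer."""

    def __init__(self, source, tee):
        self._source = source
        self.tee = tee

    def read(self, size=-1):
        data = self._source.read(size)
        self.tee.write(data)  # keep a replayable copy of everything consumed
        return data


# Demo: read a few "header" bytes from a non-seekable-style source;
# the tee buffer retains exactly what was consumed.
header_source = TeeStream(io.BytesIO(b"\x01\x80\x00\x14rest-of-header"), io.BytesIO())
prefix = header_source.read(4)
replayable = header_source.tee.getvalue()
```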

update xcompat tests to use public test vectors

Problem

Initial public test vectors are published now: https://github.com/awslabs/aws-encryption-sdk-test-vectors

The xcompat test suites need to be updated to use these rather than just looking for local copies.

Solution

The plan for this is to:

  1. Add a git submodule directing to the test vectors repo at the path test/vectors
  2. Update xcompat tests to two-level parameterization:
    1. For each zip file found in test/vectors/vectors
    2. Load test suites from manifest in zip file
      • Read data from zip file for each test

expose header_auth on StreamEncryptor

Problem

StreamDecryptor exposes the header auth after the header is read. However, StreamEncryptor does not expose the header auth after the header is written.

Solution

Expose the header auth object on StreamEncryptor after it is created.

readable() from File Object from aws_encryption_sdk.stream always False

I'm trying to upload a file to an S3 bucket with boto3's upload_fileobj. I wanted to create an StreamEncryptor so that I could chain together the encryption and upload without much copying. But I find that the StreamEncryptor object won't take because it's not compatible.

Compatibility means calling readable() on the object if it exists, and ensuring it returns True. But in this case, my stream object returns False on the call. This is unexpected, because I can call read() and get data -- which seems wrong because False means read() should raise an error.

I can't see where readable is implemented in this class, it seems deep in the base/meta class. But maybe it's not implemented at all and defaulting to this value. But I think it should either return a semantically correct value, or not have this function at all.
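As an interim workaround for the upload_fileobj compatibility check, the stream can be wrapped in a thin delegating object whose readable() reflects the closed state. This is a hypothetical wrapper written for illustration, not an SDK API; io.BytesIO stands in for the StreamEncryptor in the demo.

```python
import io


class ReadableWrapper:
    """Delegates to the wrapped stream but reports readable() based on closed state."""

    def __init__(self, stream):
        self._stream = stream

    def read(self, size=-1):
        return self._stream.read(size)

    def readable(self):
        return not getattr(self._stream, "closed", False)

    def __getattr__(self, name):
        # Pass everything else (close, tell, ...) through to the wrapped stream.
        return getattr(self._stream, name)


wrapped = ReadableWrapper(io.BytesIO(b"encrypted chunk"))
is_readable = wrapped.readable()
data = wrapped.read()
```

With this, `boto3`'s `upload_fileobj(ReadableWrapper(encryptor), ...)` would see a stream that claims to be readable while it is open.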

Cache KMS plaintext and encrypted datakeys with some TTL?

Hi,

I've noticed that each call to encrypt() in the SDK results in network calls to KMS for GenerateDataKey and Encrypt. In my use case, I am using Kinesis and I would like to encrypt each record in Kinesis using aws-encryption-sdk-python. Since I can have many records I need to encrypt per second, it would be nice if the SDK would cache data keys for some TTL, so that I'm not making network calls for each record I need to encrypt, as there can be hundreds to thousands of these per second. Current measurements suggest that each call to encrypt takes ~50ms, the majority of this time due to the network calls.

Is there a recommended way to handle such use case?

If not, I would suggest adding two optional kwargs to the KMS key provider:

  • cache_kms_client_response
  • kms_client_response_cache_ttl,

such that I can create a key provider that caches the keys for some time to fix this problem.
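The caching CMM described earlier in this README is the mechanism that addresses this use case; as an illustration of the requested behavior, a minimal TTL cache for data keys might look like the following. This is a hypothetical sketch with an injectable clock for testing, not the SDK's CryptoMaterialsCache.

```python
import time


class TTLCache:
    """Minimal TTL cache; entries expire ttl_seconds after insertion."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock
        self._entries = {}

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if self._clock() - stored_at > self._ttl:
            del self._entries[key]  # expired: evict and report a miss
            return None
        return value

    def put(self, key, value):
        self._entries[key] = (value, self._clock())


# Demo with a fake clock so expiry is deterministic.
fake_now = [0.0]
cache = TTLCache(10, clock=lambda: fake_now[0])
cache.put("key-arn", b"plaintext-data-key")
hit = cache.get("key-arn")
fake_now[0] = 11.0
miss = cache.get("key-arn")
```

Note that a real data-key cache also needs limits on messages/bytes encrypted per cached key, which is what the caching CMM's configuration provides.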

custom endpoint support for KMS master key and master key provider

When the KMS master key and provider were designed, there was only one possible KMS endpoint per region. Between FIPS endpoints and VPC endpoints, that is no longer the case.

We should determine the best way to provide this configuration for these two structures.

Sphinx does not follow paths for class aliases for parameter types

Problem

Even though Algorithm is an alias for AlgorithmSuite, it looks like the sphinx module that generates code links for parameter types doesn't realize that and just treats it as an unknown class.

ex:

render:

https://aws-encryption-sdk-python.readthedocs.io/en/latest/generated/aws_encryption_sdk.html#aws_encryption_sdk.encrypt

src:

https://github.com/aws/aws-encryption-sdk-python/blob/master/src/aws_encryption_sdk/__init__.py#L72-L73

Solution

Replace parameter type references with AlgorithmSuite.

raise a better error when decryption of a message with an uncompressed EC point is attempted

Problem

In testing to verify what happens if a caller attempts to decrypt a file created by mrcrypt, I realized that if you attempt to decrypt a message that was written with an uncompressed EC point (note: this is not supported under the AWS Encryption SDK spec), you get a very unhelpful error.

$ mrcrypt encrypt alias/exampleKey test_plaintext -r us-west-2 
$ python
>>> import aws_encryption_sdk
>>> mkp = aws_encryption_sdk.KMSMasterKeyProvider()
>>> with open('test_plaintext.encrypted', 'rb') as f:
...     ct = f.read()
... 
>>> aws_encryption_sdk.decrypt(source=ct, key_provider=mkp)
No handlers could be found for logger "aws_encryption_sdk.streaming_client"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/bullocm/tmp/mrt/lib/python2.7/site-packages/aws_encryption_sdk/__init__.py", line 121, in decrypt
    plaintext = decryptor.read()
  File "/Users/bullocm/tmp/mrt/lib/python2.7/site-packages/aws_encryption_sdk/streaming_client.py", line 201, in read
    self._prep_message()
  File "/Users/bullocm/tmp/mrt/lib/python2.7/site-packages/aws_encryption_sdk/streaming_client.py", line 687, in _prep_message
    self._header, self.header_auth = self._read_header()
  File "/Users/bullocm/tmp/mrt/lib/python2.7/site-packages/aws_encryption_sdk/streaming_client.py", line 719, in _read_header
    decryption_materials = self.config.materials_manager.decrypt_materials(request=decrypt_materials_request)
  File "/Users/bullocm/tmp/mrt/lib/python2.7/site-packages/aws_encryption_sdk/materials_managers/default.py", line 149, in decrypt_materials
    encryption_context=request.encryption_context
  File "/Users/bullocm/tmp/mrt/lib/python2.7/site-packages/aws_encryption_sdk/materials_managers/default.py", line 129, in _load_verification_key_from_encryption_context
    encoded_point=encoded_verification_key
  File "/Users/bullocm/tmp/mrt/lib/python2.7/site-packages/aws_encryption_sdk/internal/crypto/authentication.py", line 159, in from_encoded_point
    compressed_point=base64.b64decode(encoded_point)
  File "/Users/bullocm/tmp/mrt/lib/python2.7/site-packages/aws_encryption_sdk/internal/crypto/elliptic_curve.py", line 182, in _ecc_public_numbers_from_compressed_point
    x, y = _ecc_decode_compressed_point(curve, compressed_point)
  File "/Users/bullocm/tmp/mrt/lib/python2.7/site-packages/aws_encryption_sdk/internal/crypto/elliptic_curve.py", line 140, in _ecc_decode_compressed_point
    y_order = y_order_map[raw_y]
KeyError: '\x04'

Solution

Catch appropriate errors in internal.crypto.elliptic_curve._ecc_decode_compressed_point and raise NotSupportedError('Uncompressed points are not supported')
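The proposed fix amounts to translating the opaque KeyError into a descriptive exception at the point where the point-prefix byte is looked up. The sketch below is hypothetical; the prefix-to-parity mapping values are illustrative, not the SDK's internal ones.

```python
class NotSupportedError(Exception):
    """Raised when a message uses a feature the SDK does not support."""


# Compressed SEC 1 points start with 0x02 or 0x03; 0x04 marks an
# uncompressed point, which the AWS Encryption SDK spec does not allow.
# Mapping values here are illustrative placeholders for y-parity handling.
_Y_ORDER_MAP = {b"\x02": -1, b"\x03": 1}


def decode_point_prefix(raw_prefix: bytes) -> int:
    try:
        return _Y_ORDER_MAP[raw_prefix]
    except KeyError:
        raise NotSupportedError("Uncompressed points are not supported")


even_y = decode_point_prefix(b"\x02")
try:
    decode_point_prefix(b"\x04")  # the byte from the mrcrypt-written message
except NotSupportedError as exc:
    message = str(exc)
```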

deserialize encryption context should fail on malformed encryption context

If the client deserialization receives a malformed ciphertext that defines the AAD length as 0 and then also defines the AAD field count as 0, the deserialization logic SHOULD raise an error. It does not. Instead, it accidentally interprets it as an empty encryption context.

https://github.com/awslabs/aws-encryption-sdk-python/blob/master/src/aws_encryption_sdk/internal/formatting/encryption_context.py#L159-L164

Example malformed test vector:

b'AYAAFJwN8IgQ9+0sxyy7+90cCCgAAgAAAAEAE1dFQi1DUllQVE8tUlNBLU9BRVAAKDhDRUQyRkQyMEZDODhBOUMwNkVGREIwNzM3MDdFQjFFRjE2NTU3ODABAFbIi+gmSrvejfOCjbE08rTYHym2uLWsiizQHnTy3z8/VeR+7MKvNv7ZfPf5LX7i9amYwxCMISvY+BCcndLakH/RlDUdgz5/Q0KAxrE5LX7DHxO/wMviJCi+qXWMb+5u0mhwepRihO/dk+3kGqyaLhnGuA6xqYmThUlCZR5BwfyEddSango7umEWw1YQ8vokjqUzCKRyk3VpXwQTXQLLrBz7ZmZ7Anzn0SoaLYk8D0rPWhKHvUXQDJYDYdQ7vpedxpsE5vliLI98CAcIWllkst964DIBwKgAX6Ic8Nj+8T7VurdK2SFuTH4LIvkebmEGCxngdRpfopEU/Rd0LYXZik4CAAAAAAwAAAAGAAAAAAAAAAAAAAAAK9vNRvymDkoxO6dy67pDuf////8AAAABAAAAAAAAAAAAAAABAAAABTAqmilQragTFTYdPz23w1NMR+c8Uw=='
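A hypothetical sketch of the stricter check: when AAD bytes are present at all, a serialized pair count of zero should be rejected rather than silently treated as an empty context. The field layout below (2-byte big-endian lengths) is simplified for illustration and is not the SDK's actual deserializer.

```python
import struct


def deserialize_encryption_context(serialized: bytes) -> dict:
    """Reject an AAD section that is present but declares zero key-value pairs."""
    if not serialized:
        return {}  # no AAD section at all: genuinely empty encryption context
    (pair_count,) = struct.unpack(">H", serialized[:2])
    if pair_count == 0:
        raise ValueError("Malformed encryption context: zero pairs in non-empty AAD")
    context, pos = {}, 2
    for _ in range(pair_count):
        (key_len,) = struct.unpack(">H", serialized[pos:pos + 2]); pos += 2
        key = serialized[pos:pos + key_len].decode("utf-8"); pos += key_len
        (val_len,) = struct.unpack(">H", serialized[pos:pos + 2]); pos += 2
        value = serialized[pos:pos + val_len].decode("utf-8"); pos += val_len
        context[key] = value
    return context


well_formed = struct.pack(">HH", 1, 1) + b"k" + struct.pack(">H", 1) + b"v"
parsed = deserialize_encryption_context(well_formed)
try:
    deserialize_encryption_context(struct.pack(">H", 0))  # present but zero pairs
except ValueError as exc:
    error_message = str(exc)
```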

deserialize_header function needs to be refactored

As pylint and flake8 justifiably point out, internal.formatting.deserialize.deserialize_header is too complex. Refactor to simplify.

186590df9307:aws-encryption-sdk-python bullocm$ tox -e flake8,pylint
GLOB sdist-make: /Users/bullocm/git/aws-encryption-sdk-python/setup.py
flake8 inst-nodeps: /Users/bullocm/git/aws-encryption-sdk-python/.tox/dist/aws-encryption-sdk-1.3.2.zip
flake8 installed: asn1crypto==0.23.0,attrs==17.2.0,aws-encryption-sdk==1.3.2,boto3==1.4.7,botocore==1.7.17,cffi==1.11.0,cryptography==2.0.3,docutils==0.14,flake8==3.4.1,flake8-docstrings==1.1.0,flake8-import-order==0.13,flake8-polyfill==1.0.1,idna==2.6,jmespath==0.9.3,mccabe==0.6.1,pycodestyle==2.3.1,pycparser==2.18,pydocstyle==2.0.0,pyflakes==1.5.0,python-dateutil==2.6.1,s3transfer==0.1.11,six==1.11.0,snowballstemmer==1.2.1,wrapt==1.10.11
flake8 runtests: PYTHONHASHSEED='1582386464'
flake8 runtests: commands[0] | flake8 src/aws_encryption_sdk/ setup.py
src/aws_encryption_sdk/internal/formatting/deserialize.py:63:1: C901 'deserialize_header' is too complex (15)
ERROR: InvocationError: '/Users/bullocm/git/aws-encryption-sdk-python/.tox/flake8/bin/flake8 src/aws_encryption_sdk/ setup.py'
pylint inst-nodeps: /Users/bullocm/git/aws-encryption-sdk-python/.tox/dist/aws-encryption-sdk-1.3.2.zip
pylint installed: asn1crypto==0.23.0,astroid==1.5.3,attrs==17.2.0,aws-encryption-sdk==1.3.2,boto3==1.4.7,botocore==1.7.17,cffi==1.11.0,coverage==4.4.1,cryptography==2.0.3,docutils==0.14,idna==2.6,isort==4.2.15,jmespath==0.9.3,lazy-object-proxy==1.3.1,mccabe==0.6.1,mock==2.0.0,pbr==3.1.1,py==1.4.34,pycparser==2.18,pyflakes==1.6.0,pylint==1.7.2,pytest==3.2.2,pytest-catchlog==1.2.2,pytest-cov==2.5.1,pytest-mock==1.6.3,python-dateutil==2.6.1,s3transfer==0.1.11,six==1.11.0,wrapt==1.10.11
pylint runtests: PYTHONHASHSEED='1582386464'
pylint runtests: commands[0] | pylint --rcfile=pylintrc src/aws_encryption_sdk/ setup.py
************* Module aws_encryption_sdk.internal.formatting.deserialize
src/aws_encryption_sdk/internal/formatting/deserialize.py:63: [R0914(too-many-locals), deserialize_header] Too many local variables (27/15)
src/aws_encryption_sdk/internal/formatting/deserialize.py:63: [R0915(too-many-statements), deserialize_header] Too many statements (54/50)

------------------------------------------------------------------
Your code has been rated at 9.99/10 (previous run: 9.99/10, +0.00)

ERROR: InvocationError: '/Users/bullocm/git/aws-encryption-sdk-python/.tox/pylint/bin/pylint --rcfile=pylintrc src/aws_encryption_sdk/ setup.py'
__________________________________________________________________________________________________ summary ___________________________________________________________________________________________________
ERROR:   flake8: commands failed
ERROR:   pylint: commands failed

enable write APIs on crypto streams

Our current crypto streams work well if we have an existing input and want the crypto stream to be the output, but do not work well if we have an unknown input and unknown output.

We need to figure out what this should look like and how it should behave.

avoid leaving hanging object references in helper functions

The encrypt and decrypt helper functions in the top-level aws_encryption_sdk module currently return the header object from the EncryptionStream objects. To avoid any possibility of those objects not being released for garbage collection because of active references, we should copy the header object and return the copy.

potential issue with large reads on nonframed messages

In tracking down an issue in an unrelated codebase, I discovered that at least in some cases, passing more than 2GiB to the pyca/cryptography Cipher.update() methods can cause issues with the underlying OpenSSL implementation.

This is not an issue with framed messages because our max frame size is 2GiB, but could be an issue for nonframed messages larger than 2GiB.

We should add tests that check for this edge case, and if it is an issue we can add chunking logic.
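If it does turn out to be a problem, the chunking logic could look something like this sketch (the exact byte limit is an assumption; the `update()` call shape follows the pyca/cryptography CipherContext interface):

```python
_MAX_UPDATE_BYTES = 2 ** 31 - 1  # stay under 2 GiB per update() call

def chunked_update(cipher_context, data):
    """Feed data to a cipher context in bounded chunks.

    cipher_context is anything exposing update(bytes) -> bytes, such as a
    pyca/cryptography CipherContext.
    """
    out = []
    for start in range(0, len(data), _MAX_UPDATE_BYTES):
        out.append(cipher_context.update(data[start:start + _MAX_UPDATE_BYTES]))
    return b"".join(out)
```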

Generate non-conflicting frozen upstream test requirements

pyca/cryptography runs downstream tests for this project and others to verify that their changes have not broken identified downstream dependencies.

Events today exposed a problem with the way we currently handle this integration. aws/aws-dynamodb-encryption-python#85

We need to define and automate creation of a special frozen test requirements file for upstream tests to use when testing this repository in order to avoid this happening again.

Add Python 3.7

Add Python 3.7 tests to CI and add 3.7 to supported versions.

streaming_client classes do not properly interpret short reads in source streams

Problem

Per PEP 3116:

.read(n: int) -> bytes

Read up to n bytes from the object and return them. Fewer than n bytes may be
returned if the operating system call returns fewer than n bytes. If 0 bytes are
returned, this indicates end of file. If the object is in non-blocking mode and no
bytes are available, the call returns None.

The current behavior of the clients in streaming_client is to interpret reads returning fewer than n bytes as EOF.

Solution

Update clients to properly handle short reads per PEP 3116.

How can I encrypt a folder?

I would like to encrypt and decrypt the E/Beispielordner1 folder. It would also be interesting to be able to
encrypt several folders in a single pass, e.g.

E/Beispielordner1

E/Beispielordner2

E/Beispielordner3

I would like to start this program from the console without GUI.

Can you help me? How can I customize your code?

RawMasterKeyProvider runtime error: AttributeError: can't set attribute

Hi,

I called the method below:

cycle_file("sample.txt")

and got this run-time error:


Traceback (most recent call last):
  File ".\testSdk.py", line 116, in <module>
    main()
  File ".\testSdk.py", line 109, in main
    cycle_file("sample.txt")
  File ".\testSdk.py", line 61, in cycle_file
    master_key_provider.add_master_key(key_id)
  File "C:\Users\A610620\AppData\Local\Continuum\Anaconda3\lib\site-packages\aws_encryption_sdk\key_providers\base.py", line 155, in add_master_key
    master_key = self._new_master_key(key_id)
  File "C:\Users\A610620\AppData\Local\Continuum\Anaconda3\lib\site-packages\aws_encryption_sdk\key_providers\raw.py", line 234, in _new_master_key
    wrapping_key=wrapping_key
  File "<attrs generated init 97ee74820011dd83c2b5dad60b4be0b7f1be967f>", line 3, in __init__
AttributeError: can't set attribute


The exception (a second exception!) is being thrown here, in _get_raw_key:

def _get_raw_key(self, key_id):
    """Returns a static, randomly-generated symmetric key for the specified key ID.

    :param str key_id: Key ID
    :returns: Wrapping key that contains the specified static key
    :rtype: :class:`aws_encryption_sdk.internal.crypto.WrappingKey`
    """
    try:
        static_key = self._static_keys[key_id]
    except KeyError:
        static_key = os.urandom(32)
        self._static_keys[key_id] = static_key
    try:
        return WrappingKey(
            wrapping_algorithm=WrappingAlgorithm.AES_256_GCM_IV12_TAG16_NO_PADDING,
            wrapping_key=static_key,
            wrapping_key_type=EncryptionKeyType.SYMMETRIC,
        )
    except Exception as e:
        print(e)

Encrypt function returning Byte Array instead of string

Hi there! I'm new to using this library, so forgive any stupid questions :). When I call the encrypt function, it returns a byte array, although the documentation says it is supposed to return a string. My implementation relies on the ciphertext being returned as a string. If I'm missing something and it can't be, is there an encoding for the byte array I can use to convert it into a string?

Thanks for your help!
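For context: `encrypt` returns ciphertext as `bytes`, and since ciphertext is arbitrary binary data it cannot be decoded directly as UTF-8. The usual answer when a downstream system needs text is Base64:

```python
import base64

ciphertext = b"\x01\x80\x02\x14\xfe\x00"  # stand-in bytes for real SDK output
as_text = base64.b64encode(ciphertext).decode("ascii")  # plain str, safe to store or ship
assert base64.b64decode(as_text) == ciphertext          # back to bytes before decrypting
```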

With attrs 17.4.0, "convert" is deprecated in favor of "converter"

Problem

The attr.ib(convert=callable) option is now deprecated in favor of attr.ib(converter=callable).

This is done to achieve consistency with other noun-based arguments like validator.

convert will keep working until at least January 2019 while raising a DeprecationWarning.

https://github.com/python-attrs/attrs/blob/master/CHANGELOG.rst#deprecations

Reported on the AWS forums: https://forums.aws.amazon.com/thread.jspa?threadID=271087

Solution

We'll need to rename each use of "convert" to "converter" and bump requirements to attrs >= 17.4.0.
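The rename is mechanical; a minimal sketch of the new spelling (assumes attrs >= 17.4.0 is installed, where both spellings still behave the same):

```python
import attr

@attr.s
class Example:
    # before: value = attr.ib(convert=int)   # DeprecationWarning on attrs >= 17.4.0
    value = attr.ib(converter=int)

assert Example("42").value == 42
```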

Can't pass an object with read() to StreamDecryptor

So now I'm trying to download an encrypted file, and I figured I'd pipe it through the StreamDecryptor. But when I try this, I get back

TypeError: 'StreamingBody' does not have the buffer interface

I suspect this is because in def prep_stream_data(data): (L138:internal/utils/__init__.py), my file-like object is checked with isinstance(data, (file, io.IOBase, six.StringIO)). But my object is a botocore StreamingBody, which doesn't inherit from any of these, so it gets passed to io.BytesIO and dies because it's a stream and not a buffer.

I'm not sure what this check is specifically requiring; it might be nicer to check for specific methods. I expected anything with read() to work here.
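A duck-typed version of the check, along the lines the reporter suggests (a sketch, not the SDK's actual code):

```python
import io

def prep_stream_data(data):
    """Accept anything with a read() method; wrap raw bytes/str in BytesIO."""
    if hasattr(data, "read"):
        # Already file-like: io streams, botocore StreamingBody, etc.
        return data
    if isinstance(data, str):
        data = data.encode("utf-8")
    return io.BytesIO(data)

assert prep_stream_data(b"abc").read() == b"abc"
```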

Underlying (and useful) exception details are hidden

This has bitten me numerous times!

All of the following instances catch ClientError from botocore and re-raise a new generic (and remarkably useless by comparison) exception:

  • key_providers.kms._generate_data_key:L213
  • key_providers.kms._encrypt_data_key:L250
  • key_providers.kms._decrypt_data_key:L283

I believe a different approach should be taken here to prevent hiding botocore Client exception details (network connectivity, insufficient permissions, etc).
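One option is Python 3's exception chaining, which keeps the original error reachable as `__cause__`. A sketch with plain exceptions standing in for botocore's ClientError and the SDK's wrapper (names illustrative):

```python
class GenerateKeyError(Exception):
    """Stand-in for the SDK's wrapper exception."""

def generate_data_key():
    try:
        # Stands in for a botocore ClientError (permissions, network, etc.)
        raise RuntimeError("AccessDeniedException: insufficient permissions")
    except RuntimeError as error:
        # 'raise ... from ...' preserves the original as __cause__,
        # so callers and tracebacks still see the underlying details.
        raise GenerateKeyError("Master key unable to generate data key") from error
```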

Remove frame-size block-length restriction

While we may need to support non-stream ciphers at some point, currently our only bulk cipher is AES-GCM, which has no block-size restrictions. Thus, we can (and should) remove the restriction that our frame sizes be a multiple of 16.

https://github.com/awslabs/aws-encryption-sdk-python/blob/master/src/aws_encryption_sdk/internal/utils/__init__.py#L51

This change should be completely backwards compatible as this restriction is only enforced on the encrypt path and not the decrypt path. As part of this change, backwards compatibility will be manually tested.

Additionally, this task will also be completed in aws-encryption-sdk-java. See issue aws-encryption-sdk-java#42.

How to get 'IV' used for encryption using kms

How do we get the IV used for encryption with KMS?
We are using S3 COPY to Redshift with client-side encryption, and the S3 object needs the encryption IV as metadata.
The error I see is:
"error: Failed writing body (0 != 16347) Cause: S3 object encryption key iv is of invalid size. Size=17. Expected size 16"

AttributeError while running examples in this repo

Hi,

when I run the "Encryption and Decryption" examples from this repo, I get the following error.

Traceback (most recent call last):
  File "kms-multi.py", line 11, in <module>
    key_provider=kms_key_provider
  File "/home/nbommu/python/local/lib/python2.7/site-packages/aws_encryption_sdk/__init__.py", line 76, in encrypt
    ciphertext = encryptor.read()
  File "/home/nbommu/python/local/lib/python2.7/site-packages/aws_encryption_sdk/streaming_client.py", line 202, in read
    self._prep_message()
  File "/home/nbommu/python/local/lib/python2.7/site-packages/aws_encryption_sdk/streaming_client.py", line 405, in _prep_message
    request=encryption_materials_request
  File "/home/nbommu/python/local/lib/python2.7/site-packages/aws_encryption_sdk/materials_managers/default.py", line 78, in get_encryption_materials
    signing_key = self._generate_signing_key_and_update_encryption_context(algorithm, encryption_context)
  File "/home/nbommu/python/local/lib/python2.7/site-packages/aws_encryption_sdk/materials_managers/default.py", line 54, in _generate_signing_key_and_update_encryption_context
    if algorithm.signing_algorithm_info is None:
AttributeError: 'tuple' object has no attribute 'signing_algorithm_info'

Name: Python
Version: 2.7.14
Name: boto3
Version: 1.7.48

Name aws-encryption

I'm now the owner of the PyPI name aws-encryption; trying to install it currently fails with the message "Did you mean to install aws-encryption-sdk?" (code).

Maybe you want it though?

blacken

Apply black and isort autoformatters and add to CI.
