
AWS CRT Java

Java Bindings for the AWS Common Runtime

License

This library is licensed under the Apache 2.0 License.

Platform

Linux/Unix

Requirements:

  • Clang 3.9+ or GCC 4.4+
  • cmake 3.1+
  • Java: Any JDK8 or above, ensure JAVA_HOME is set
  • Maven

Building:

  1. apt-get install cmake3 maven openjdk-8-jdk-headless -y
  2. git clone https://github.com/awslabs/aws-crt-java.git
  3. cd aws-crt-java
  4. git submodule update --init --recursive
  5. mvn compile

OSX

Requirements:

  • CMake 3.1+
  • ninja
  • Java: Any JDK8 or above, ensure JAVA_HOME is set
  • Maven

Building:

  1. brew install maven cmake (if you have homebrew installed, otherwise install these manually)
  2. git clone https://github.com/awslabs/aws-crt-java.git
  3. cd aws-crt-java
  4. git submodule update --init --recursive
  5. mvn compile

Windows

Requirements:

  • Visual Studio 2015 or above
  • CMake 3.1+
  • Java: Any JDK8 or above, ensure JAVA_HOME is set
  • Maven

Building:

  1. choco install maven (if you have chocolatey installed), otherwise install maven and the JDK manually
  2. git clone https://github.com/awslabs/aws-crt-java.git
  3. cd aws-crt-java
  4. git submodule update --init --recursive
  5. mvn compile

NOTE: Make sure you run this from a VS Command Prompt or have run VCVARSALL.BAT in your current shell so CMake can find Visual Studio.

Documentation

Java CRT Documentation

Installing

From the aws-crt-java directory: mvn install
From Maven: https://search.maven.org/artifact/software.amazon.awssdk.crt/aws-crt/

Platform-Specific JARs

The aws-crt JAR in Maven Central is a large "uber" jar that contains compiled C libraries for many different platforms (Windows, Linux, etc). If size is an issue, you can pick a smaller platform-specific JAR by setting the <classifier>.

Sample to use classifier from aws-crt:

        <!-- Platform-specific Linux x86_64 JAR -->
        <dependency>
            <groupId>software.amazon.awssdk.crt</groupId>
            <artifactId>aws-crt</artifactId>
            <version>0.20.5</version>
            <classifier>linux-x86_64</classifier>
        </dependency>
        <!-- "Uber" JAR that works on all platforms -->
        <dependency>
            <groupId>software.amazon.awssdk.crt</groupId>
            <artifactId>aws-crt</artifactId>
            <version>0.20.5</version>
        </dependency>

Available classifiers

  • linux-armv6 (no auto-detect)
  • linux-armv7 (no auto-detect)
  • linux-aarch_64
  • linux-x86_32
  • linux-x86_64
  • linux-x86_64-musl (no auto-detect)
  • linux-armv7-musl (no auto-detect)
  • linux-aarch_64-musl (no auto-detect)
  • osx-aarch_64
  • osx-x86_64
  • windows-x86_32
  • windows-x86_64
  • fips-where-available (no auto-detect)

Auto-detect

The os-maven-plugin can automatically detect your platform's classifier at build time.

NOTES: The auto-detected linux-arm_32 platform classifier is not supported; you must specify linux-armv6 or linux-armv7. Additionally, musl vs. glibc detection is not supported. If you are deploying to a musl-based system and wish to use a classifier-based JAR, you must specify the classifier name yourself.

<build>
    <extensions>
        <!-- Generate os.detected.classifier property -->
        <extension>
            <groupId>kr.motd.maven</groupId>
            <artifactId>os-maven-plugin</artifactId>
            <version>1.7.0</version>
        </extension>
    </extensions>
</build>

<dependencies>
    <dependency>
        <groupId>software.amazon.awssdk.crt</groupId>
        <artifactId>aws-crt</artifactId>
        <version>0.20.5</version>
        <classifier>${os.detected.classifier}</classifier>
    </dependency>
</dependencies>

FIPS Compliance

Currently the classifier fips-where-available provides an "uber" jar with FIPS compliance on some platforms.

Platforms without FIPS compliance are also included in this jar, for compatibility's sake. Check CRT.isFIPS() at runtime to ensure you are on a FIPS compliant platform. The current breakdown is:

  • FIPS compliant: linux-aarch_64, linux-x86_64
  • NOT compliant: linux-armv6, linux-armv7, linux-armv7-musl, linux-aarch_64-musl, linux-x86_32, linux-x86_64-musl, osx-aarch_64, osx-x86_64, windows-x86_32, windows-x86_64

Warning

The classifier and the set of FIPS-compliant platforms are subject to change in the future.

System Properties

  • To enable logging, set aws.crt.log.destination or aws.crt.log.level:
    • aws.crt.log.level - Log level. May be: "None", "Fatal", "Error", "Warn" (default), "Info", "Debug", "Trace".
    • aws.crt.log.destination - Log destination. May be: "Stderr" (default), "Stdout", "File", "None".
    • aws.crt.log.filename - File to use when aws.crt.log.destination is "File".
  • aws.crt.libc - (Linux only) Set to "musl" or "glibc" if CRT cannot properly detect which to use.
  • aws.crt.lib.dir - Set directory where CRT may extract its native library (by default, java.io.tmpdir is used)
  • aws.crt.memory.tracing - May be: "0" (default, no tracing), "1" (track bytes), "2" (more detail). Allows the CRT.nativeMemory() and CRT.dumpNativeMemory() functions to report native memory usage.
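
As a hedged illustration of the properties listed above, they can also be set programmatically, provided this happens before the first CRT class is loaded (the surrounding class is illustrative, not part of the library):

```java
// Illustrative sketch: setting CRT system properties programmatically.
// The property names come from the list above; they must be set before the
// native library is loaded, since the CRT reads them during initialization.
public class CrtPropertiesExample {
    public static void configure() {
        System.setProperty("aws.crt.log.level", "Debug");
        System.setProperty("aws.crt.log.destination", "File");
        System.setProperty("aws.crt.log.filename", "/tmp/aws-crt.log");
        System.setProperty("aws.crt.memory.tracing", "1"); // "1" = track bytes
    }

    public static void main(String[] args) {
        configure();
        System.out.println(System.getProperty("aws.crt.log.level"));
    }
}
```

Setting the same values with -D flags on the java command line is equivalent and avoids any load-order concerns.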

TLS Behavior

The CRT uses native libraries for TLS, rather than Java's typical Secure Socket Extension (JSSE), KeyStore, and TrustStore. On Windows and Apple devices, the built-in OS libraries are used. On Linux/Unix/etc., s2n-tls is used.

If you need to add certificates to the trust store, add them to your OS trust store. The CRT does not use the Java TrustStore. For more customization options, see TlsContextOptions and TlsConnectionOptions.

Mac-Only TLS Behavior

Please note that on Mac, once a private key is used with a certificate, that certificate-key pair is imported into the Mac Keychain. All subsequent uses of that certificate will use the stored private key and ignore anything passed in programmatically. Beginning in v0.6.6, when a stored private key from the Keychain is used, the following will be logged at the "info" log level:

static: certificate has an existing certificate-key pair that was previously imported into the Keychain.  Using key from Keychain instead of the one provided.

Testing

Some tests require pre-configured resources and proper environment variables to be set to run properly. These tests will be quietly skipped if the environment variables they require are not set.

IoT tests

Many IoT related tests require that you have set up an AWS IoT Thing.

  • Some required environment variables
    • AWS_TEST_MQTT311_IOT_CORE_HOST: AWS IoT service endpoint hostname for MQTT3
    • AWS_TEST_MQTT311_IOT_CORE_RSA_CERT: Path to the IoT thing certificate for MQTT3
    • AWS_TEST_MQTT311_IOT_CORE_RSA_KEY: Path to the IoT thing private key for MQTT3
    • AWS_TEST_MQTT311_IOT_CORE_ECC_CERT: Path to the IoT thing's ECC-based certificate for MQTT3
    • AWS_TEST_MQTT311_IOT_CORE_ECC_KEY: Path to the IoT thing's ECC private key for MQTT3 (the key file should contain only the ECC private key section in order to work on macOS)
    • AWS_TEST_MQTT311_ROOT_CA: Path to the root certificate

Other environment variables that can be set can be found in the SetupTestProperties() function in CrtTestFixture.java.

These can be set persistently via Maven settings (usually in ~/.m2/settings.xml):

<settings>
    ...
  <profiles>
    <profile>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
            <crt.test.endpoint>XXXXXXXXXX-ats.iot.us-east-1.amazonaws.com</crt.test.endpoint>
            <crt.test.certificate>/path/to/XXXXXXXX-certificate.pem.crt</crt.test.certificate>
            <crt.test.privatekey>/path/to/XXXXXXXX-private.pem.key</crt.test.privatekey>
            <crt.test.rootca>/path/to/AmazonRootCA1.pem</crt.test.rootca>
            ... etc ...
        </properties>
    </profile>
  </profiles>
</settings>

Proxy Tests

Most proxy-related tests require a pre-configured proxy host to run properly.

  • Required environment variables:
    • AWS_TEST_HTTP_PROXY_HOST: Hostname of proxy
    • AWS_TEST_HTTP_PROXY_PORT: Port of proxy
    • NETWORK_TESTS_DISABLED: Set this if tests are running in a constrained environment where network access is not guaranteed/allowed.

S3 Tests

Most S3-related tests require AWS credentials and a set of pre-configured S3 buckets. A helper script from aws-c-s3 (crt/aws-c-s3/tests/test_helper/test_helper.py) can be used to set up the test environment.

Example to use the helper and run the S3 tests:

cd aws-crt-java
python3 -m pip install boto3
export CRT_S3_TEST_BUCKET_NAME=<bucket_name>
python3 crt/aws-c-s3/tests/test_helper/test_helper.py init
# Run S3ClientTest. eg: mvn -Dtest=S3ClientTest test

More details about the helper can be found in its documentation in the aws-c-s3 repository.

  • Required environment variable:
    • CRT_S3_TEST_BUCKET_NAME: The basic bucket name for S3 tests.

IDEs

  • CMake is configured to export a compilation database at target/cmake-build/compile_commands.json
  • CLion: Build once with maven, then import the project as a Compilation Database Project
  • VSCode: will detect that this is a Java project; if you have the CMake extension, you can point it at CMakeLists.txt and the compilation database

Debugging

Tests can be debugged in Java/Kotlin via the built-in tooling in VSCode and IntelliJ. If you need to debug the native code, it's a bit trickier.

To debug native code with VSCode or CLion or any other IDE:

  1. Find your mvn launch script (e.g. realpath $(which mvn)) and pull the command line from the bottom of it. This changes between versions of Maven, so it is difficult to give consistent directions.

    As an example, for Maven 3.6.0 on Linux: /path/to/java -classpath /usr/share/java/plexus-classworlds-2.5.2.jar -Dclassworlds.conf=/usr/share/maven/bin/m2.conf -Dmaven.home=/usr/share/maven -Dlibrary.jansi.path=/usr/share/maven/lib/jansi-native -Dmaven.multiModuleProjectDirectory=. org.codehaus.plexus.classworlds.launcher.Launcher test -DforkCount=0 -Ddebug.native -Dtest=HttpClientConnectionManager#testMaxParallelConnections

    The important parts are:

    • -DforkCount=0 - prevents the Maven process from forking to run tests, so your debugger will be attached to the right process. You can ignore this if you configure your debugger to attach to child processes.
    • -Ddebug.native - Makes CMake compile the JNI bindings and core libraries in debug. By default, we compile in release with symbols, which will help for call stacks, but less so for live debugging.
  2. Set the executable to launch to be your java binary (e.g. /usr/bin/java)

  3. Set the parameters to be the ones used by the mvn script, as per above

  4. Set the working directory to the aws-crt-java directory

  5. On Windows, you will need to manually load the PDB via the Modules window in Visual Studio, as it is not embedded in the JAR. It will be in the target/cmake-build/lib/windows/<arch> folder.


aws-crt-java's Issues

Expose errorCode/awsErrorName in HttpException

getErrorCode() in software.amazon.awssdk.crt.http.HttpException is package-private, which means the only method available to classify errors is to parse the message. When we fail to acquire a connection, we throw different exceptions to our consumers (HttpFailureException for generic connection failures, and HttpConnectionTimeoutException if it was specifically a timeout), which leads to brittle and obviously incorrect classification such as the below:

if (err.getMessage() != null && err.getMessage().contains("negotiation failed")) {
    throw new HttpFailureException(err.getMessage());
}

final String msg = format("Failed to get connection within %d ms: %s",
        params.getConnectionTimeoutMillis(), err.getMessage());
throw new HttpConnectionTimeoutException(msg);

In general, I support a singular exception type as it makes error handling cleaner in cases when we don't particularly care about the cause of the exception. Exposing the errorCode or the result of CRT.awsErrorName(errorCode) would be sufficient for cases where we do care.
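
A hedged sketch of what code-based classification could look like if the error code were exposed. The exception class, getter, and numeric constants below are local stand-ins, not the real software.amazon.awssdk.crt.http.HttpException API (1029 appears elsewhere in this document as AWS_IO_TLS_ERROR_NEGOTIATION_FAILURE; the timeout value is purely illustrative):

```java
// Hypothetical sketch: classify by numeric error code instead of message text.
class DemoHttpException extends RuntimeException {
    private final int errorCode;

    DemoHttpException(int errorCode, String message) {
        super(message);
        this.errorCode = errorCode;
    }

    // public accessor -- the feature this issue asks for
    public int getErrorCode() { return errorCode; }
}

class ErrorClassifier {
    // illustrative values, not confirmed CRT error codes
    static final int DEMO_TLS_NEGOTIATION_FAILURE = 1029;
    static final int DEMO_CONNECTION_TIMEOUT = 1048;

    static String classify(DemoHttpException err) {
        // robust against message wording changes, unlike contains("...")
        if (err.getErrorCode() == DEMO_CONNECTION_TIMEOUT) {
            return "timeout";
        }
        return "generic-failure";
    }
}
```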

Add support to run on 32bit ARM Android

Android running on 32-bit ARM reports a custom architecture string that differs from the one used on Linux.
To support running the CRT on 32-bit ARM Android, we need to update the getArchIdentifier() function in src/main/java/software/amazon/awssdk/crt/CRT.java.
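
A hedged sketch of the kind of normalization getArchIdentifier() could perform. The Android architecture strings below (e.g. "armeabi-v7a") are assumptions for illustration, not values confirmed by the CRT source:

```java
// Illustrative sketch: normalizing os.arch-style strings to classifier-style
// architecture names. The mapping is an assumption, not the real CRT logic.
class ArchNormalizer {
    static String normalize(String rawArch) {
        String arch = rawArch.toLowerCase();
        if (arch.startsWith("armeabi") || arch.equals("armv7l")) {
            return "armv7"; // 32-bit ARM, including assumed Android spellings
        }
        if (arch.equals("aarch64") || arch.equals("arm64-v8a")) {
            return "armv8"; // 64-bit ARM
        }
        if (arch.equals("x86_64") || arch.equals("amd64")) {
            return "x86_64";
        }
        return arch; // pass through anything unrecognized
    }
}
```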

MqttClientConnectionEvents.onConnectionResumed is not called on first successful connect

The documentation at https://awslabs.github.io/aws-crt-java/software/amazon/awssdk/crt/mqtt/MqttClientConnectionEvents.html#onConnectionResumed(boolean) states that onConnectionResumed is "called on first successful connect, and whenever a reconnect succeeds".

Based on observations of the BasicPubSub client from aws-iot-device-sdk-java-v2, onConnectionResumed is never called on the first successful connect.

So there is a bug either in the documentation or in the code.

Tested with MQTT over WebSocket with

  • aws-crt-java v0.15.18
  • aws-iot-device-sdk-java-v2 v1.6.1
openjdk version "1.8.0_312"
OpenJDK Runtime Environment (Temurin)(build 1.8.0_312-b07)
OpenJDK 64-Bit Server VM (Temurin)(build 25.312-b07, mixed mode)
openjdk version "11.0.14" 2022-01-18
OpenJDK Runtime Environment Temurin-11.0.14+9 (build 11.0.14+9)
OpenJDK 64-Bit Server VM Temurin-11.0.14+9 (build 11.0.14+9, mixed mode)

onConnectionResumed is the only place where one should set up subscriptions to topics so they can survive reconnects. When this callback is not called on the first connect, user code must handle the first connect and every subsequent connect differently.
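
The special-case handling described above can be sketched as follows. The listener interface here is a simplified local stand-in for MqttClientConnectionEvents, not the real API:

```java
// Sketch: subscriptions must be established both after the first successful
// connect (explicitly, since onConnectionResumed reportedly does not fire then)
// and inside onConnectionResumed for later reconnects.
interface DemoConnectionEvents {
    void onConnectionResumed(boolean sessionPresent);
}

class ResubscribingClient implements DemoConnectionEvents {
    int subscribeCalls = 0;

    void subscribeToTopics() { subscribeCalls++; }

    // must be invoked manually by user code after the first connect succeeds
    void onFirstConnect() { subscribeToTopics(); }

    @Override
    public void onConnectionResumed(boolean sessionPresent) {
        if (!sessionPresent) {
            subscribeToTopics(); // broker kept no session state; re-subscribe
        }
    }
}
```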

withTimeoutMs will throw a null pointer exception if there is no SocketOptions previously set

Describe the bug

public TlsConnectionOptions withTimeoutMs(int timeoutMs) {

Expected Behavior

withTimeoutMs (and similar APIs) should check for null and create default socket options if required.
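
The proposed fix can be sketched with a lazy null-guard. The classes below are simplified local stand-ins for TlsConnectionOptions and SocketOptions, not the real CRT classes:

```java
// Sketch of the expected behavior: lazily create default socket options
// instead of dereferencing a null field and throwing an NPE.
class DemoSocketOptions {
    int connectTimeoutMs = 3000; // illustrative default, not the CRT's
}

class DemoTlsConnectionOptions {
    private DemoSocketOptions socketOptions; // null if never set by the caller

    DemoTlsConnectionOptions withTimeoutMs(int timeoutMs) {
        if (socketOptions == null) {
            socketOptions = new DemoSocketOptions(); // avoids the NPE
        }
        socketOptions.connectTimeoutMs = timeoutMs;
        return this;
    }

    int timeoutMs() { return socketOptions.connectTimeoutMs; }
}
```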

Current Behavior

Throws a NPE

Reproduction Steps

N/A

Possible Solution

No response

Additional Information/Context

No response

aws-crt-java version used

N/A

Java version used

N/A

Operating System and version

N/A

S3CrtAsyncClientBuilder with path-style access

Hi,

In order to build an async S3 client so that I can benefit from multipart downloads, I use S3CrtAsyncClientBuilder instead of S3AsyncClientBuilder, but then I lose the capability of using path-style access to S3 instead of virtual-hosted style.

Would it be possible to support something semantically equivalent to
S3AsyncClient.crtBuilder().forcePathStyle(true).build()

in the same way you would do it with S3AsyncClientBuilder:

S3AsyncClient.builder().forcePathStyle(true).build()

Thanks,

Bernat

Fatal error when trying to connect and no connection available

Hello,
we are using the aws-crt library in association with the aws-iot-device-sdk-java-v2 for our java based connector for AWS IoT core.
We have noticed that, if the device is not able to connect to the Internet and we try to connect to AWS IoT Core, the JVM is restarted. Looking at the errors reported, the following can be seen:

Fatal error condition occurred in /work/aws-common-runtime/aws-c-common/source/allocator.c:166: allocator != ((void *)0)
Exiting Application
################################################################################
Resolved stacktrace:
################################################################################
################################################################################
Raw stacktrace:
################################################################################

This happens with our current bundle with aws-crt-java in version 0.5.8 and aws-iot-device-sdk-java-v2 in version 1.1.1 but also with the latest combination available (0.6.5 and 1.2.5).

To be noticed that the JVM reports also this:

OpenJDK Client VM warning: You have loaded library /tmp/AWSCRT_15959522002727324888219554217087libaws-crt-jni.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.

CredentialsProvider should be an interface

Currently CredentialsProvider is an abstract base class that expects sub-classes to be native implementations.

This prevents a library from providing a pure Java/Kotlin implementation of a credentials provider.

This should be refactored to an interface and the abstract base class can be made internal. The implementation will need to provide a bridge that wraps externally provided providers that proxies to C and back.

Custom ports in the S3Client

This relates to the AWS Java SDK V2 pull request: aws/aws-sdk-java-v2#2976

The PR relates to using the SDK TransferManager to parallelize downloads of large objects. In our testing, this parallelization results in large objects downloading up to 5X faster than non-parallel downloads. The pull request refers to using the CRT S3Client class. In looking at CRT code, the S3Client class makes its way down into the JNI layer and ultimately winds up resolving a connection to either 80 or 443 (hard-coded.)

We would like to use the SDK TransferManager functionality in a Kubernetes environment with an S3-compliant object store (e.g. Minio) and would like the flexibility to use different ports both inside the cluster and also outside the cluster to support - for example - desktop testing via port-forwarding.

To support this, the CRT S3Client and the JNI s3ClientNew method (and related lower-level C functions) would need to be modified to allow a custom port, much like the custom endpoint that is currently accepted. This would replace the 80/443 hardcodes.

We've already done some experimentation with this and the change does not seem too pervasive if localized to the S3Client and below. Before investing the effort in a PR - I wanted to get your views on whether you would accept such a PR - or - if you're already working on changes to support custom ports in the CRT in the future. Thank you.

Improve Performance of Debug Native Object tracking

In a previous Pull Request a change was made to replace the backing native resource object map in debug mode from a ConcurrentHashMap to a HashMap that used the synchronized keyword on the class name around all accesses.

While this implementation is also correct, it has slower performance than continuing to use ConcurrentHashMap in debug mode. Synchronization locks the entire HashMap any time any thread adds or removes any item from the Map, and will cause contention if multiple threads make reads/writes simultaneously. ConcurrentHashMap, by contrast, only locks a small inner subset of the backing Map during reads/writes, making it much less likely that two threads will contend when both are making modifications simultaneously.
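
The two approaches discussed above can be contrasted in a minimal sketch (the tracker class is illustrative, not the CRT's actual debug-mode resource map):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal contrast: the synchronized variant serializes every access behind
// one class-level lock; ConcurrentHashMap uses fine-grained internal locking
// so simultaneous updates rarely contend.
class ResourceTracker {
    private final Map<Long, String> syncMap = new HashMap<>();
    private final Map<Long, String> concurrentMap = new ConcurrentHashMap<>();

    void trackSynchronized(long handle, String desc) {
        synchronized (ResourceTracker.class) { // whole-map lock on every call
            syncMap.put(handle, desc);
        }
    }

    void trackConcurrent(long handle, String desc) {
        concurrentMap.put(handle, desc); // per-bin locking inside the map
    }

    int trackedSynchronized() {
        synchronized (ResourceTracker.class) { return syncMap.size(); }
    }

    int trackedConcurrent() { return concurrentMap.size(); }
}
```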


Add HttpProxyOptions to S3 client

The HTTP client has a programmatic option for setting up a proxy configuration, but I don't see a way of doing this for the S3 client. It's only possible by setting the HTTPS_PROXY env variable.

JVM crash when using the CRT on Linux Alpine

The JVM crashes when I use S3's MRAP (Multi-Region Access Point) to generate a presigned URL.

My Dockerfile is:
FROM openjdk8
RUN ln -s /lib /lib64 && apk add --no-cache libc6-compat linux-pam krb5 krb5-libs

uname -a results:
Linux test-service-deployment-xxxx-xxxx 5.4.181-99.354.amzn2.x86_64 #1 SMP Wed Mar 2 18:50:46 UTC 2022 x86_64 Linux

and I am using this pom:

<aws.java.sdk.version>2.17.206</aws.java.sdk.version>

<dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>auth-crt</artifactId>
            <version>${aws.java.sdk.version}</version>
</dependency>

and the JVM crashed with this message:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x0000000000065eb6, pid=9, tid=0x00007f08b03fdb10
#
# JRE version: OpenJDK Runtime Environment (8.0_212-b04) (build 1.8.0_212-b04)
# Java VM: OpenJDK 64-Bit Server VM (25.212-b04 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 3.12.0
# Distribution: Custom build (Sat May  4 17:33:35 UTC 2019)
# Problematic frame:
# C  0x0000000000065eb6
#
# Core dump written. Default location: //core or core.9
#
# An error report file with more information is saved as:
# //hs_err_pid9.log
#
# If you would like to submit a bug report, please include
# instructions on how to reproduce the bug and visit:
#   https://icedtea.classpath.org/bugzilla
#

Support for developing on Mac M1

It seems that the latest JAR that is pushed to Maven does not include binaries for Mac M1s.

Unable to open library in jar for AWS CRT: /osx/armv8/libaws-crt-jni.dylib

Certificates from default truststore not working with TLS

Describe the bug

At work, we use the S3TransferManager. The excerpt below shows the AwsCrtAsyncHttpClient. Unfortunately, with this, TLS does not work.

SdkAsyncHttpClient httpClient = AwsCrtAsyncHttpClient.builder()
         .maxConcurrency(64)
         .build();

S3Configuration serviceConfiguration = S3Configuration.builder()
         .checksumValidationEnabled(false)
         .chunkedEncodingEnabled(true)
         .build();


var localS3AsyncClientBuilder = S3AsyncClient.builder().httpClient(httpClient)
         .region(Region.of("dus"))
         .serviceConfiguration(serviceConfiguration);

Later on, credentials are added

.credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(accessKeyId, secretAccessKey)))
.build()

and it is then passed to the transfer manager

S3TransferManager.builder()
         .s3Client(s3AsyncClient)
         .build(),

In the extended logs, I see something like this:
Client connection failed with error 1029 (AWS_IO_TLS_ERROR_NEGOTIATION_FAILURE).

And checking with Wireshark shows me:
Description: Certificate Unknown (46)

The same is true if I use the CRT client like this:

         S3AsyncClient.crtBuilder()
             .region(Region.of("dus"))
             .checksumValidationEnabled(false)
             .maxConcurrency(multipartMaxConcurrency.orElse(null))
             .minimumPartSizeInBytes(multipartMinPartSize.orElse(null))
             .targetThroughputInGbps(multipartTargetThroughput.orElse(null));

BUT, when I use the NettyNioAsyncHttpClient, it works. No TLS Handshake problem,
and even TLS 1.3 is used.

Now, I am stuck. I could not find any (documented) way to tell the CRT Client how
to find the relevant certificates needed for the TLS handshake to succeed.
Currently, I've simply added them to the default JDK truststore.

Any help is greatly appreciated.

If this is the wrong place for this issue or I simply missed something in the docs, then please bear with me.

Expected Behavior

AwsCrtAsyncHttpClient should behave the same as NettyNioAsyncHttpClient with respect to TLS handshake and negotiation.

Current Behavior

AwsCrtAsyncHttpClient is not usable. The TLS handshake fails, whereas with NettyNioAsyncHttpClient the TLS handshake and negotiation work like a charm.

Reproduction Steps

Unfortunately, I do not have a working, simple code fragment. All I can say is already provided in the detailed bug report.

Possible Solution

No response

Additional Information/Context

We use the AWS SDK to connect to our local Cloudian S3 storage. In addition we use Quarkus (not native, yet).

aws-crt-java version used

0.24.1

Java version used

OpenJdk 11 and GraalVM 22.3

Operating System and version

MacOS 12.3

mvn compile fails

I'm trying to use aws-crt-java sdk.

I'm stuck at the mvn compile part.

Plugin org.apache.maven.plugins:maven-jar-plugin:3.1.0 or one of its dependencies could not be resolved.

Add support for shading

It's currently not possible to shade aws-crt-java (rename its class files and native libraries and embed them into my own artifact). Renaming the class files itself works, but an UnsatisfiedLinkError is raised in software.amazon.awssdk.crt.CRT#awsCrtInit.

An ideal shading solution would allow for us to rename both the native libraries and the Java classes. As an example, Netty allows consumers to shade their classes and native libraries. They support a system property to allow customers to specify the prefix given to the libraries, though they later added support for automatically detecting the prefix. They also had to take special care to make sure multiple shaded versions of their library could coexist peacefully.
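
A Netty-style prefix mechanism could be sketched as below. The property name and the "aws-crt-jni" base library name are illustrative; this is not an existing aws-crt-java feature:

```java
// Hedged sketch: derive a shaded native library name from a consumer-supplied
// prefix property, as the Netty approach described above allows.
class ShadedLibName {
    static String resolve() {
        // "demo.crt.native.prefix" is a hypothetical property for illustration
        String prefix = System.getProperty("demo.crt.native.prefix", "");
        // mapLibraryName adds platform decoration, e.g. "lib...so" on Linux
        return System.mapLibraryName(prefix + "aws-crt-jni");
    }
}
```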

Hard JVM crash under high concurrency

Using the test harness in my fork, I can reliably reproduce a hard JVM crash. I am using a c5.4xlarge instance running Amazon Linux 2, executing mvn -Dtest=software.amazon.awssdk.crt.test.PerformanceIntegrationTest#benchmark test to run the aforementioned test against Corretto 11.

Unfortunately, enabling trace CRT logging causes reproduction to fail. The best logging I can provide is at the debug level, as well as the JVM crash report.

Reducing the parameter NUMBER_OF_CRT_CONNECTIONS to something around the number of available processors (Runtime.getRuntime().availableProcessors()) causes the test to succeed instead of crashing. Values such as 64_000, 34_000, 1024 cause immediate crashes; 512 is pretty unstable, usually crashing; 256 will sometimes crash, sometimes generate NullPointerExceptions in application code.

ulimit -n prints 65535 on this instance.

perf_test_crash.tar.gz

CRT TLS context

Problem: The current CRT class S3Client does not support HTTP access to an S3 bucket. The code always establishes a TLS context - either by using the one provided to it, or by creating a default one. This then later results in the selection of HTTPS over 443. We want to use the AWS SDK + CRT in a Kubernetes service-meshed environment in which the endpoints are HTTP, and the mesh handles TLS.

Specifically:

In the CRT code, the code in src/native/s3_client.c in the function JNIEXPORT jlong JNICALL Java_software_amazon_awssdk_crt_s3_S3Client_s3ClientNew has code:

    ...
    struct aws_s3_client_config client_config = {
        .max_active_connections_override = max_connections,
        .region = region,
        .client_bootstrap = client_bootstrap,
        .tls_connection_options = tls_options,
        .signing_config = &signing_config,
        .part_size = (size_t)part_size,
        .throughput_target_gbps = throughput_target_gbps,
        .retry_strategy = retry_strategy,
        .shutdown_callback = s_on_s3_client_shutdown_complete_callback,
        .shutdown_callback_user_data = callback_data,
        .compute_content_md5 = compute_content_md5 ? AWS_MR_CONTENT_MD5_ENABLED : AWS_MR_CONTENT_MD5_DISABLED,
    };
    ....

In the aws_s3_client_config struct there is a member enum aws_s3_meta_request_tls_mode tls_mode; which is not initialized in the above block. Therefore, by default, this struct member has value zero which maps to enum AWS_MR_TLS_ENABLED. This enum is referenced in source/s3_client.c and has the effect of always enabling TLS.

In order to change this to support HTTP or HTTPS, I propose to submit a PR as follows:

  1. Add a parameter boolean tlsEnabled to the JNI method s3ClientNew signature
  2. Inside the JNI C code for that function, initialize the aforementioned struct member to enable or disable TLS on the basis of the parameter value. The code in source/s3_client.c already anticipates this, so no changes are required there
  3. Modify S3ClientOptions and S3Client to also accommodate tlsEnabled

So this change is localized in the CRT Java/JNI code, and doesn't touch any of the git sub-modules. We've already proofed this in-house. Does the idea of this PR seem acceptable? If so I will submit for review. Thanks.

add getTopicAliasMaximum() to software.amazon.awssdk.crt.mqtt5.packets.ConnAckPacket

Describe the feature

At the moment, the ConnAckPacket class does not provide the ability to read the CONNACK topic alias maximum property (id 34, 0x22).
I have tested another MQTT client against IoT Core and that property is actually sent, with value 8.

Please provide such a method, as well as parsing of the 0x22 property from raw CONNACK data if it is missing.

Use Case

Any CRT- or SDK-based MQTT client which uses the topic alias feature.

Proposed Solution

Just add the method and the new id in the parser.

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

createWithMtlsJavaKeystore should use custom key operations to support non-exportable keys

Feature Request:

public static TlsContextOptions createWithMtlsJavaKeystore(

createWithMtlsJavaKeystore extracts the key, assumes it is RSA and then creates the TLS options using the in-memory private key and certificate. There should be a way to use the Java KeyStore via custom key operations to provide security without exporting the key from secure storage such as PKCS11 or AndroidKeyStore.

This can be done by customers manually by writing the necessary code, but having a prebuilt implementation to call the necessary Java APIs to sign and verify using the secure key material would make a lot of sense.

After network disconnect, the connection never recovers.

Attached log
aws-crt-java-logs.txt

The app first starts with a good IoT connection. Then I unplug the network, wait for a while (about 2 minutes), then plug the network back in.
I expected the IoT connection to recover, but it hangs there forever.
Looking at the logs, it seems [dns] - static, resolving c1awrs03vpwtt1.credentials.iot.us-west-2.amazonaws.com, unsolicited resolve count happened too quickly and stopped at count 16.

I'm using AwsIotMqttConnectionBuilder with
withCleanSession(false)
withWebsockets(true)
and X509CredentialsProvider

I did subscribe to the same topic multiple times before I unplugged the network. Not sure if this is a good practice or not.

Please help. Thanks!

Using proxy options with authentication leads to access violation exception.

I'm glad to see that a proxy configuration is possible, but if I try to use the proxy options with basic authorization, username and password, I get the following error:

Proxy Configuration

var proxy = new HttpProxyOptions();
proxy.setAuthorizationType(HttpProxyAuthorizationType.Basic);
proxy.setHost("<proxy-address>");
proxy.setPort(8080);
proxy.setAuthorizationUsername("<username>");
proxy.setAuthorizationPassword("<password>");

Error Message

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x00007ffc6f64137b, pid=16680, tid=9252
#
# JRE version: OpenJDK Runtime Environment (13.0.1+9) (build 13.0.1+9)
# Java VM: OpenJDK 64-Bit Server VM (13.0.1+9, mixed mode, sharing, tiered, compressed oops, g1 gc, windows-amd64)
# Problematic frame:
# C  [VCRUNTIME140D.dll+0x137b]
#
# No core dump will be written. Minidumps are not enabled by default on client versions of Windows

Error in http_connection_manager.c (aws_jni_check_and_clear_exception)

Hello,

we receive this error:

(error screenshot attached: MicrosoftTeams-image)

If this error occurs, the entire java process gets hung up.

This error might be somehow related to concurrency as we observed that if we reduce the fork count in maven to 1 then the error would occur less often, but it still occurs.

Unfortunately, I cannot provide any more context on this error as it is not possible to debug in Java.

Regards

Boris

CRT doesn't handle graceful shutdown properly

Describe the bug
We are using the AWS Java CRT in the context of AWS SDK v2 with Transfer Manager for file uploads.
When our Spring Boot application is shut down gracefully, if a multipart upload is started through TransferManager (developer preview) during the graceful shutdown, the transfer is initiated but gets stuck.

Expected Behavior

A multipart upload through TransferManager should finish successfully even though it's started during a graceful shutdown.

Current Behavior

The multipart upload starts, as we can see from logs like:

2022-11-30 14:34:35.205  INFO 116 --- [rding-upload-14] s.a.a.t.s.p.LoggingTransferListener      : [] Transfer initiated...
2022-11-30 14:34:35.208  INFO 116 --- [rding-upload-14] s.a.a.t.s.p.LoggingTransferListener      : [] |                    | 0.0%
2022-11-30 14:34:35.233 DEBUG 116 --- [rding-upload-14] s.a.a.c.i.ExecutionInterceptorChain      : [] Creating an interceptor chain that will apply interceptors in the following order: [software.amazon.awssdk.core.internal.interceptor.HttpChecksumRequiredInterceptor@1366a521, software.amazon.awssdk.core.internal.interceptor.SyncHttpChecksumInTrailerInterceptor@113458ec, software.amazon.awssdk.core.internal.interceptor.HttpChecksumValidationInterceptor@275648d5, software.amazon.awssdk.core.internal.interceptor.AsyncRequestBodyHttpChecksumTrailerInterceptor@308d51cb, software.amazon.awssdk.core.internal.interceptor.HttpChecksumInHeaderInterceptor@16e4b556, software.amazon.awssdk.awscore.interceptor.HelpfulUnknownHostExceptionInterceptor@25a47c5e, software.amazon.awssdk.awscore.eventstream.EventStreamInitialRequestInterceptor@30a198df, software.amazon.awssdk.awscore.interceptor.TraceIdExecutionInterceptor@6af88409, software.amazon.awssdk.services.s3.endpoints.internal.S3ResolveEndpointInterceptor@5985d205, software.amazon.awssdk.services.s3.endpoints.internal.S3EndpointAuthSchemeInterceptor@5f4503e7, software.amazon.awssdk.services.s3.endpoints.internal.S3RequestSetEndpointInterceptor@639e0fb0, software.amazon.awssdk.services.s3.internal.handlers.EnableChunkedEncodingInterceptor@4d680706, software.amazon.awssdk.services.s3.internal.handlers.DisableDoubleUrlEncodingInterceptor@3a8af154, software.amazon.awssdk.services.s3.internal.handlers.EnableTrailingChecksumInterceptor@4da797d9, software.amazon.awssdk.services.s3.internal.handlers.CreateMultipartUploadRequestInterceptor@5e57f252, software.amazon.awssdk.services.s3.internal.handlers.GetObjectInterceptor@538a9ea9, software.amazon.awssdk.services.s3.internal.handlers.ExceptionTranslationInterceptor@3ee7d5dd, software.amazon.awssdk.transfer.s3.internal.ApplyUserAgentInterceptor@4f8f8487, software.amazon.awssdk.services.s3.internal.handlers.AsyncChecksumValidationInterceptor@2eab3a46, 
software.amazon.awssdk.services.s3.internal.handlers.GetBucketPolicyInterceptor@295396ec, software.amazon.awssdk.services.s3.internal.handlers.PutObjectInterceptor@299af726, software.amazon.awssdk.services.s3.internal.handlers.SyncChecksumValidationInterceptor@1a82e52e, software.amazon.awssdk.services.s3.internal.handlers.DecodeUrlEncodedResponseInterceptor@6d8c8532, software.amazon.awssdk.services.s3.internal.handlers.CreateBucketInterceptor@585d8a99, software.amazon.awssdk.services.s3.internal.handlers.CopySourceInterceptor@26807573, software.amazon.awssdk.services.s3.internal.crt.DefaultS3CrtAsyncClient$ValidateRequestInterceptor@2ba64f18, software.amazon.awssdk.services.s3.internal.crt.DefaultS3CrtAsyncClient$AttachHttpAttributesExecutionInterceptor@228aea4b]
2022-11-30 14:34:35.238 DEBUG 116 --- [rding-upload-14] s.a.a.c.i.ExecutionInterceptorChain      : [] Interceptor 'software.amazon.awssdk.services.s3.endpoints.internal.S3EndpointAuthSchemeInterceptor@5f4503e7' modified the message with its modifyRequest method.
2022-11-30 14:34:35.239 TRACE 116 --- [rding-upload-14] s.a.a.c.i.ExecutionInterceptorChain      : [] Old: PutObjectRequest(Bucket=bucket, Key=key, Tagging=tags)
New: PutObjectRequest(Bucket=bucket, Key=key, Tagging=tags)
2022-11-30 14:34:35.239 DEBUG 116 --- [rding-upload-14] s.a.a.c.i.ExecutionInterceptorChain      : [] Interceptor 'software.amazon.awssdk.transfer.s3.internal.ApplyUserAgentInterceptor@4f8f8487' modified the message with its modifyRequest method.
2022-11-30 14:34:35.240 TRACE 116 --- [rding-upload-14] s.a.a.c.i.ExecutionInterceptorChain      : [] Old: PutObjectRequest(Bucket=bucket, Key=key, Tagging=tags)
New: PutObjectRequest(Bucket=bucket, Key=key, Tagging=tags)
2022-11-30 14:34:35.257 DEBUG 116 --- [rding-upload-14] s.a.a.c.i.ExecutionInterceptorChain      : [] Interceptor 'software.amazon.awssdk.services.s3.endpoints.internal.S3RequestSetEndpointInterceptor@639e0fb0' modified the message with its modifyHttpRequest method.
2022-11-30 14:34:35.257 TRACE 116 --- [rding-upload-14] s.a.a.c.i.ExecutionInterceptorChain      : [] Old: DefaultSdkHttpFullRequest(httpMethod=PUT, protocol=http, host=172.18.0.7, port=4566, encodedPath=encodedpath, headers=[Content-Length, Content-Type, x-amz-tagging], queryParameters=[])
New: DefaultSdkHttpFullRequest(httpMethod=PUT, protocol=http, host=172.18.0.7, port=4566, encodedPath=/bucketencodedpath, headers=[Content-Length, Content-Type, x-amz-tagging], queryParameters=[])
2022-11-30 14:34:35.257 DEBUG 116 --- [rding-upload-14] s.a.a.c.i.ExecutionInterceptorChain      : [] Interceptor 'software.amazon.awssdk.services.s3.internal.handlers.PutObjectInterceptor@299af726' modified the message with its modifyHttpRequest method.
2022-11-30 14:34:35.257 TRACE 116 --- [rding-upload-14] s.a.a.c.i.ExecutionInterceptorChain      : [] Old: DefaultSdkHttpFullRequest(httpMethod=PUT, protocol=http, host=172.18.0.7, port=4566, encodedPath=/bucketencodedpath, headers=[Content-Length, Content-Type, x-amz-tagging], queryParameters=[])
New: DefaultSdkHttpFullRequest(httpMethod=PUT, protocol=http, host=172.18.0.7, port=4566, encodedPath=/bucketencodedpath, headers=[Content-Length, Content-Type, Expect, x-amz-tagging], queryParameters=[])
2022-11-30 14:34:35.257 DEBUG 116 --- [rding-upload-14] software.amazon.awssdk.request           : [] Sending Request: DefaultSdkHttpFullRequest(httpMethod=PUT, protocol=http, host=172.18.0.7, port=4566, encodedPath=/bucketencodedpath, headers=[amz-sdk-invocation-id, Content-Length, Content-Type, Expect, User-Agent, x-amz-tagging], queryParameters=[])

But then nothing happens, so seems like the multipart upload is stuck.

On the CRT side, we can see error logs like:

[ERROR] [2022-12-06T15:47:19Z] [00007f821ae37700] [S3MetaRequest] - id=0x7f82a0009090 Could not read from body stream.
[ERROR] [2022-12-06T15:47:19Z] [00007f821ae37700] [S3MetaRequest] - id=0x7f82a0009090 Could not prepare request 0x7f8278012fb0 due to error 9216 (Attempt to use a JVM that has already been destroyed). 

This shows the CRT has detected the start of JVM destruction, but it is not handling graceful shutdown.

Reproduction Steps

Spring boot 2.5.12
AWS SDK 2.18.3
s3-transfer-manager 2.18.3-PREVIEW

Possible Solution

No response

Additional Information/Context

No response

AWS Java SDK version used

2.18.3

JDK version used

17

Operating System and version

Docker Linux Ubuntu Focal 20

Add interfaces to make it possible to use different MQTT clients

The current concrete implementation of MqttClientConnection makes it so that there isn't a way to use a different MQTT client. A customer requested an example Greengrass Lambda function that uses this SDK to handle jobs on Greengrass - aws-samples/aws-greengrass-lambda-functions#792 - and if there was an interface I could use to replace the implementation with a Greengrass compatible one I could create the example.

I'll work on a PR for this.

Originally this issue was opened in the v2 Java Device SDK repo - aws/aws-iot-device-sdk-java-v2#24
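A minimal interface extraction might look like the following sketch; the method names loosely mirror MqttClientConnection but are hypothetical, not an agreed API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

// Hypothetical minimal MQTT connection interface; a Greengrass-compatible
// implementation could stand in for the CRT-backed MqttClientConnection.
public interface MqttConnection {
    CompletableFuture<Boolean> connect();
    CompletableFuture<Integer> publish(String topic, byte[] payload);
    CompletableFuture<Integer> subscribe(String topic, Consumer<byte[]> handler);
    CompletableFuture<Void> disconnect();
}
```

Code written against such an interface could accept either implementation via dependency injection.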

public artifact is bundling unneeded files

The public jar hosted on maven central contains some object files and other files that are not needed:

$ wget https://repo1.maven.org/maven2/software/amazon/awssdk/crt/aws-crt/0.15.2/aws-crt-0.15.2.jar
$ zipinfo aws-crt-0.15.2.jar | grep -v .class | grep -v \^d
Archive:  aws-crt-0.15.2.jar
Zip file size: 13872692 bytes, number of entries: 1506
-rw-r--r--  2.0 unx      384 bl defN 21-Sep-24 23:15 META-INF/MANIFEST.MF
-rw-r--r--  2.0 unx  3095628 bl defN 21-Sep-24 23:14 linux/armv7/libaws-crt-jni.so
-rw-r--r--  2.0 unx  3056688 bl defN 21-Sep-24 23:14 linux/armv6/libaws-crt-jni.so
-rw-r--r--  2.0 unx  1588224 bl defN 21-Sep-24 23:14 windows/x86_64/aws-crt-jni.dll
-rw-r--r--  2.0 unx  1222144 bl defN 21-Sep-24 23:14 windows/x86_32/aws-crt-jni.dll
-rw-r--r--  2.0 unx  1380520 bl defN 21-Sep-24 23:14 osx/x86_64/libaws-crt-jni.dylib
-rw-r--r--  2.0 unx    17461 bl defN 21-Sep-24 23:15 META-INF/maven/software.amazon.awssdk.crt/aws-crt/pom.xml
-rw-r--r--  2.0 unx      100 bl defN 21-Sep-24 23:15 META-INF/maven/software.amazon.awssdk.crt/aws-crt/pom.properties
-rw-r--r--  2.0 unx  3759864 bl defN 21-Sep-24 23:14 linux/x86_64/libaws-crt-jni.so
-rw-r--r--  2.0 unx  3676424 bl defN 21-Sep-24 23:14 linux/armv8/libaws-crt-jni.so
-rw-r--r--  2.0 unx  3408100 bl defN 21-Sep-24 23:14 linux/x86_32/libaws-crt-jni.so
-rw-r--r--  2.0 unx   292160 bl defN 21-Sep-24 23:14 libaws-c-event-stream.a
-rw-r--r--  2.0 unx 10337584 bl defN 21-Sep-24 23:14 libs2n.a
1506 files, 34201166 bytes uncompressed, 13563974 bytes compressed:  60.3%

Better naming for CRT-originated threads

Describe the feature

All threads created by the CRT should be identifiable as such.

Use Case

Right now I'm profiling a pretty sophisticated application that uses multiple libraries, each maintaining their own IO-related thread pools. It would be a huge time saver if the CRT library didn't create anonymous-sounding thread names such as Thread-10.

(profiler screenshot attached)

Proposed Solution

A naming scheme for created threads would be highly desirable, indicating whether the thread is I/O- or CPU-bound, e.g.

crt-nonblocking-1

or

crt-compute-1
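The proposed scheme is easy to express as a counting ThreadFactory; the sketch below is illustrative only (the names follow the suggested pattern, not anything the CRT currently does).

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative factory producing names like "crt-nonblocking-1".
public class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger(1);

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + "-" + counter.getAndIncrement());
        t.setDaemon(true); // background pool threads should not block JVM exit
        return t;
    }
}
```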

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

aws-crt jar is bundling S3 java code

As a consumer of the CRT in Java, I would prefer that S3-specific logic is vended in a separate artifact from the pure "utility" functionality of the CRT itself. My concern here is deployed code size.

STR

$ zipinfo aws-crt-0.15.5.jar | grep S3 | xclip -se c
-rw-r--r--  2.0 unx     5506 bl defN 21-Oct-15 21:20 software/amazon/awssdk/crt/s3/S3Client.class
-rw-r--r--  2.0 unx     2591 bl defN 21-Oct-15 21:20 software/amazon/awssdk/crt/s3/CrtS3RuntimeException.class
-rw-r--r--  2.0 unx     2712 bl defN 21-Oct-15 21:20 software/amazon/awssdk/crt/s3/S3MetaRequestOptions$MetaRequestType.class
-rw-r--r--  2.0 unx     3823 bl defN 21-Oct-15 21:20 software/amazon/awssdk/crt/s3/S3ClientOptions.class
-rw-r--r--  2.0 unx     1819 bl defN 21-Oct-15 21:20 software/amazon/awssdk/crt/s3/S3MetaRequestOptions.class
-rw-r--r--  2.0 unx     1552 bl defN 21-Oct-15 21:20 software/amazon/awssdk/crt/s3/S3MetaRequest.class
-rw-r--r--  2.0 unx     1566 bl defN 21-Oct-15 21:20 software/amazon/awssdk/crt/s3/S3MetaRequestResponseHandlerNativeAdapter.class
-rw-r--r--  2.0 unx      878 bl defN 21-Oct-15 21:20 software/amazon/awssdk/crt/s3/S3MetaRequestResponseHandler.class
-rw-r--r--  2.0 unx     1910 bl defN 21-Oct-15 21:20 com/amazonaws/s3/S3NativeClient$2.class
-rw-r--r--  2.0 unx     3496 bl defN 21-Oct-15 21:20 com/amazonaws/s3/S3NativeClient$3.class
-rw-r--r--  2.0 unx      227 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3KeyFilter$1.class
-rw-r--r--  2.0 unx     1597 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3KeyFilter.class
-rw-r--r--  2.0 unx     1761 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3Error.class
-rw-r--r--  2.0 unx     4398 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3Location$BuilderImpl.class
-rw-r--r--  2.0 unx     2113 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/AnalyticsS3BucketDestination.class
-rw-r--r--  2.0 unx      732 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/InventoryS3BucketDestination$Builder.class
-rw-r--r--  2.0 unx     2297 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/InventoryS3BucketDestination.class
-rw-r--r--  2.0 unx      248 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/SSES3$Builder.class
-rw-r--r--  2.0 unx     3325 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/AnalyticsS3ExportFileFormat.class
-rw-r--r--  2.0 unx     1339 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/SSES3$BuilderImpl.class
-rw-r--r--  2.0 unx     2777 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/AnalyticsS3BucketDestination$BuilderImpl.class
-rw-r--r--  2.0 unx      224 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3Location$1.class
-rw-r--r--  2.0 unx      209 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/SSES3$1.class
-rw-r--r--  2.0 unx      278 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/AnalyticsS3BucketDestination$1.class
-rw-r--r--  2.0 unx      386 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3Error$Builder.class
-rw-r--r--  2.0 unx      215 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3Error$1.class
-rw-r--r--  2.0 unx     2015 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3KeyFilter$BuilderImpl.class
-rw-r--r--  2.0 unx      619 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/AnalyticsS3BucketDestination$Builder.class
-rw-r--r--  2.0 unx      481 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3KeyFilter$Builder.class
-rw-r--r--  2.0 unx     1230 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/SSES3.class
-rw-r--r--  2.0 unx     2912 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3Location.class
-rw-r--r--  2.0 unx     2243 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3Error$BuilderImpl.class
-rw-r--r--  2.0 unx     3138 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/InventoryS3BucketDestination$BuilderImpl.class
-rw-r--r--  2.0 unx     1133 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/S3Location$Builder.class
-rw-r--r--  2.0 unx      278 bl defN 21-Oct-15 21:20 com/amazonaws/s3/model/InventoryS3BucketDestination$1.class
-rw-r--r--  2.0 unx      498 bl defN 21-Oct-15 21:20 com/amazonaws/s3/S3Exception$Builder.class
-rw-r--r--  2.0 unx      215 bl defN 21-Oct-15 21:20 com/amazonaws/s3/S3Exception$1.class
-rw-r--r--  2.0 unx     1728 bl defN 21-Oct-15 21:20 com/amazonaws/s3/S3Exception.class
-rw-r--r--  2.0 unx    23474 bl defN 21-Oct-15 21:20 com/amazonaws/s3/S3NativeClient.class
-rw-r--r--  2.0 unx     2391 bl defN 21-Oct-15 21:20 com/amazonaws/s3/S3Exception$BuilderImpl.class
-rw-r--r--  2.0 unx     3705 bl defN 21-Oct-15 21:20 com/amazonaws/s3/S3NativeClient$1.class

Provide an AWS CRT tailored to AWS Lambda

AWS CRT HTTP Client is relevant for applications on AWS Lambda. However, the 23 MB aws-crt library has an impact on bootstrap performance. Can you provide one that includes no libs other than linux/x86_64? It looks like you could bring it down to less than 6 MB this way.
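The "Platform-Specific JARs" mechanism covers exactly this case: depending on a single-platform artifact via a Maven classifier. The classifier value below is illustrative; check Maven Central for the exact names published.

```xml
<dependency>
  <groupId>software.amazon.awssdk.crt</groupId>
  <artifactId>aws-crt</artifactId>
  <version>0.24.0</version>
  <!-- illustrative classifier; pick the one matching your Lambda runtime -->
  <classifier>linux-x86_64</classifier>
</dependency>
```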

AWS CRT: architecture not supported: arm

Hello,
I'm trying to run the SDK on a Raspberry Pi with OpenJDK 8 (1.8.0_212-8u212-b01-1+rpi1-b01), but when trying to connect, the CRT class reports an architecture-not-supported exception:

Unable to determine platform for AWS CRT: software.amazon.awssdk.crt.CRT$UnknownPlatformException: AWS CRT: architecture not supported: arm
software.amazon.awssdk.crt.CRT$UnknownPlatformException: AWS CRT: architecture not supported: arm
	at software.amazon.awssdk.crt.CRT.getArchIdentifier(CRT.java:95)
	at software.amazon.awssdk.crt.CRT.loadLibraryFromJar(CRT.java:126)
	at software.amazon.awssdk.crt.CRT.<clinit>(CRT.java:38)
	at software.amazon.awssdk.crt.CrtResource.<clinit>(CrtResource.java:99)

It seems related to the fact that the JVM reports just "arm" for the os.arch property, which makes the code fail: https://github.com/awslabs/aws-crt-java/blob/master/src/main/java/software/amazon/awssdk/crt/CRT.java#L81

What SDK and NDK were used for the prebuilt library .so in the Android aar

Describe the bug

Currently we are adding this CRT, together with the AWS IoT device SDK jar, into our system, which is Android 12. We want to know the SDK version and NDK version used for the prebuilt library .so files in the aar file.

Expected Behavior

N/A

Current Behavior

N/A

Reproduction Steps

N/A

Possible Solution

No response

Additional Information/Context

No response

aws-crt-java version used

0.21.10

Java version used

8

Operating System and version

Android 12

Application crashes with the base image liberica-openjre-alpine in an `s3Transfer` copy operation

Describe the bug

Since the release of version 0.24.0 0.23.0, our application has been crashing during the execution of s3Transfer copy operation.

It's running on JDK 17 and we use bellsoft/liberica-openjre-alpine:17.0.8 as a base image.

The code has been tested and confirmed to run smoothly on amazoncorretto:17.0.8 and may also work on other base images.

Expected Behavior

It doesn't crash in liberica images.

Current Behavior

The application crashes. Here are the logs:

[error occurred during error reporting ((null)), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing fatal error message), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing type of error), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing exception/signal name), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing current thread and pid), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing error message), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing Java version string), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing problematic frame), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing core file information), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing jfr information), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing bug submit message), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing summary), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing VM option summary), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing summary machine and OS info), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing date and time), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing thread), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing current thread), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing current compile task), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing stack bounds), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing native stack), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing Java stack), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing target Java thread stack), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing siginfo), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (CDS archive access warning), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing register info), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing registers, top of stack, instructions near pc), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (inspecting top of stack), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing code blob if possible), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing VM operation), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[error occurred during error reporting (printing process), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]


[error occurred during error reporting (printing user info), id 0xb, SIGSEGV (0xb) at pc=0x00007ff917c0dcf4]

[Too many errors, abort]
[Too many errors, abort]

Reproduction Steps

I have created a sample project along with a guide on how to run it: https://github.com/cenkakin/aws-crt-liberica-problem

Please let me know if you need further assistance.

Possible Solution

No response

Additional Information/Context

No response

aws-crt-java version used

0.24.0 0.23.0

Java version used

17

Operating System and version

FROM bellsoft/liberica-openjre-alpine:17.0.8

Native image support

The AWS CRT is lacking the information to build a native executable with the native image compiler. I have added the necessary jni-config.json and resource-config.json to fix this in #464.

With this change we will be able to build native executables from Java code when someone references the AWS CRT for Java.

S3 upload with BlockingOutputStream leads to data corruption on writes using a shared byte array

Describe the bug

Using BlockingOutputStreamAsyncRequestBody (via AsyncRequestBody.forBlockingOutputStream(...)) and sharing a byte array between subsequent writes to the output stream leads to data corruption when uploading a stream to S3 using the CRT / async Java SDK.

At a high level, the write pattern is as follows (full code snippet below).

BlockingOutputStreamAsyncRequestBody body = AsyncRequestBody.forBlockingOutputStream(null);
// ... create and make the request to S3

Random random = new Random(3470);
OutputStream outputStream = body.outputStream();
// single buffer used for all writes
byte[] buffer = new byte[1024];
int bytesToWrite = 32;
for (int i = 0; i < 10; i++) {
  // re-fill this buffer between writes.
  random.nextBytes(buffer);
  outputStream.write(buffer, 0, bytesToWrite);
}

Expected Behavior

I expect that re-using a byte array between writes to an OutputStream does not lead to corrupt data.

Current Behavior

The data written to the output stream does not match the data that is written to s3.

Reproduction Steps

gradle imports:

    implementation(platform("software.amazon.awssdk:bom:2.20.118"))
    implementation("software.amazon.awssdk.crt:aws-crt:0.24.0")
    implementation("software.amazon.awssdk:s3-transfer-manager")
    implementation("software.amazon.awssdk:s3")
    implementation("software.amazon.awssdk:sso")

    // https://mvnrepository.com/artifact/commons-codec/commons-codec
    implementation("commons-codec:commons-codec:1.16.0")
Test code:

import org.apache.commons.codec.binary.Hex;
import org.junit.jupiter.api.Test;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.core.async.AsyncResponseTransformer;
import software.amazon.awssdk.core.async.BlockingOutputStreamAsyncRequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedDownload;
import software.amazon.awssdk.transfer.s3.model.DownloadRequest;
import software.amazon.awssdk.transfer.s3.model.Upload;
import software.amazon.awssdk.transfer.s3.model.UploadRequest;

import java.io.InputStream;
import java.io.OutputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Random;
import java.util.UUID;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class BlockingOutputStreamTest {
  // not necessary for repro, just how I connect to AWS.
  String profileName = "<some profile name>";
  String bucket = "<some bucket name>";
  String algorithm = "SHA-256";

  @Test
  public void reUsedBufferTest() throws NoSuchAlgorithmException {
    S3AsyncClient s3AsyncClient = S3AsyncClient.crtBuilder()
      .credentialsProvider(ProfileCredentialsProvider.create(profileName))
      .region(Region.US_WEST_2)
      .build();

    try (S3TransferManager transferManager = S3TransferManager.builder()
      .s3Client(s3AsyncClient)
      .build()) {

      // not 100% required, but it IS a race, and doesn't always happen on the first try.
      // this reproduction (10 chunks of 32 bytes) on my machine seems to always trigger it the first time.
      for (int iteration = 0; iteration < 30; iteration++) {
        String key = UUID.randomUUID().toString();
        System.out.println("key is: "+ key);

        // unknown content length (but seems to work when we know the length as well).
        BlockingOutputStreamAsyncRequestBody body = AsyncRequestBody.forBlockingOutputStream(null);

        UploadRequest uploadRequest = UploadRequest.builder()
          .putObjectRequest(PutObjectRequest.builder()
            .bucket(bucket)
            .key(key)
            // attempting to manually set the digest values here causes the requests to fail.
            .build())
          .requestBody(body)
          .build();

        Upload uploadResponse = transferManager.upload(uploadRequest);
        MessageDigest uploadMD = MessageDigest.getInstance(algorithm);
        // with a seed of 3470, we expect a crc32 digest of:
        // or sha256: "0e317c890dedbf007e6b4c25bbf347563f645d83e29b8eed85a1b293ea0d31ba"
        // or md5: 7052d097616126ae82c211a9834220f3

        Random random = new Random(3470);

        try (OutputStream outputStream = body.outputStream()) {
          byte[] buffer = new byte[1024];
          for (int i = 0; i < 10; i++) {
            // want new random bytes every time.
            random.nextBytes(buffer);
            int bytesToWrite = 32;

            uploadMD.update(buffer, 0, bytesToWrite);
            // My best guess here is that the BlockingOutputStreamAsyncRequestBody is not appropriately
            // copying the bytes. Instead, I believe that it is using the same buffer for each write
            // however, between each iteration we're overwriting that buffer.
            // then I believe that there's a race to consume those bytes before we get around to overwriting them
            outputStream.write(buffer, 0, bytesToWrite);

            // if you uncomment this line, it seems to work every time.
            // Thread.sleep(10);
          }
        } catch (Exception e) {
          // this typically throws if the upload fails due to bad credentials. It also takes some time (10s) to time out.
          // Don't throw, just let the below lines run, one of them will also throw letting us see the reason why
          // we failed to connect.
          e.printStackTrace();
        }

        // wait for the upload to finish.
        uploadResponse.completionFuture().join();

        String uploadDigest = Hex.encodeHexString(uploadMD.digest());
        System.out.println("ulDigest: " + uploadDigest);

        DownloadRequest<ResponseInputStream<GetObjectResponse>> downloadRequest = DownloadRequest.builder()
          .getObjectRequest(
            GetObjectRequest.builder()
              .bucket(bucket)
              .key(key)
              .build()
          ).responseTransformer(AsyncResponseTransformer.toBlockingInputStream())
          .build();

        CompletedDownload<ResponseInputStream<GetObjectResponse>> dl = transferManager.download(downloadRequest)
          .completionFuture()
          .join();

        MessageDigest responseDigest = MessageDigest.getInstance(algorithm);
        try (InputStream inStream = dl.result()) {
          byte[] allBytes = inStream.readAllBytes();
          responseDigest.update(allBytes);
        } catch (Exception e) {
          e.printStackTrace();
        }

        String downloadDigest = Hex.encodeHexString(responseDigest.digest());
        System.out.println("dlDigest: " + downloadDigest);

        assertEquals(uploadDigest, downloadDigest);
      }
    }
  }
}

Possible Solution

I believe this is happening because the bytes passed to the output stream are wrapped, but not copied.

In https://github.com/aws/aws-sdk-java-v2/blob/master/utils/src/main/java/software/amazon/awssdk/utils/async/OutputStreamPublisher.java#L71 the byte buffer is wrapped for compatibility with async nio / publisher APIs. However, due to a lack of immutability and an expectation of blocking behavior from the OutputStream API, this leads to the wrapped data being mutated before it is successfully passed to the CRT library.

  @Test
  public void wrapTest() {
    byte[] buffer = new byte[10]; // a new array is already zero-filled

    ByteBuffer wrapped = ByteBuffer.wrap(buffer, 0, 10);

    assertEquals(0, wrapped.get(0));
    // mutate the original buffer.
    buffer[0] = (byte) 1;

    // wrap() does NOT copy: the mutation is visible through the wrapper.
    assertEquals(1, wrapped.get(0));
  }

Likely what needs to happen here is the data needs to be copied before the write call returns.
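A defensive copy at the write boundary would break the aliasing. The sketch below is illustrative (class and method names are hypothetical, not the SDK's actual fix):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Sketch of the proposed fix: copy the caller's bytes before publishing the
// buffer downstream, so later mutation of the caller's array is harmless.
public class CopyOnWrite {
    public static ByteBuffer copyForPublish(byte[] src, int off, int len) {
        return ByteBuffer.wrap(Arrays.copyOfRange(src, off, off + len));
    }
}
```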

Additional Information/Context

No response

aws-crt-java version used

0.24.0

Java version used

open JDK 17

Operating System and version

Mac OSX 13.4.1 (M1)

CRT.getArchIdentifier() method cannot identify armv8l

I tried to run the aws-iot-device-sdk-v2 demo on Android, on a CPU reporting armv8l, and it throws UnknownPlatformException in the CRT.getArchIdentifier() method.
Taking a look at the source code, this if/else logic cannot identify 'armv8l' (it is not 'armv8'):

else if (arch.startsWith("arm64") || arch.startsWith("aarch64")) {
            return "armv8";
        } else if (arch.equals("arm")) {
            return "armv6";
        }
 root # uname -a
Linux localhost 4.9.125 #1 SMP PREEMPT Mon Apr 25 23:13:27 UTC 2022 armv8l

Does the CRT support 'armv8l' ?
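A more permissive matcher could map armv8l explicitly. Note that uname reporting armv8l usually means a 32-bit userspace on a 64-bit CPU, so a 32-bit JVM would need the 32-bit ARM binaries; the mapping below is an illustrative sketch, not the CRT's actual logic.

```java
// Hypothetical arch normalizer; the mapping choices are illustrative only.
public class ArchNormalizer {
    public static String normalize(String arch) {
        String a = arch.toLowerCase();
        if (a.startsWith("arm64") || a.startsWith("aarch64")) {
            return "armv8";
        } else if (a.equals("armv8l") || a.startsWith("armv7")) {
            // armv8l: 32-bit userspace on a 64-bit core -> use 32-bit ARM build
            return "armv7";
        } else if (a.equals("arm") || a.startsWith("armv6")) {
            return "armv6";
        }
        throw new IllegalArgumentException("unsupported arch: " + arch);
    }
}
```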

Hang on file upload after period of inactivity

Describe the bug

I'm attempting to use the CRT s3 client to upload files to s3 when I don't know their full size at time of upload.

Uploads / Downloads are successful while I use the application, but after some time passes between requests (seems like a few hours is required), the system will hang and I'll begin to receive errors.

java.lang.IllegalStateException: The service request was not made within 10 seconds of outputStream being invoked. Make sure to invoke the service request BEFORE invoking outputStream if your caller is single-threaded.

In the past I've noticed that this error is often due to invalid request parameters. However, if I catch (and ignore) that error and instead wait on the upload or download future, which would typically throw the underlying exception, it just hangs forever.

Expected Behavior

I expect that after some time has passed, the S3 client / transfer manager will continue to allow uploads rather than hanging forever / failing to make a request.

I'm guessing that I may be missing some health-check settings or need to change the OS keepalive settings,
but I would expect at least an error message somewhere indicating that the connection has failed, rather than an indefinite hang.

Current Behavior

Waiting on an upload or download future will hang (seemingly forever), and attempting to create an outputStream body will fail after the default 10-second timeout with the following exception:


java.lang.IllegalStateException: The service request was not made within 10 seconds of outputStream being invoked. Make sure to invoke the service request BEFORE invoking outputStream if your caller is single-threaded. at 
software.amazon.awssdk.core.async.BlockingOutputStreamAsyncRequestBody.waitForSubscriptionIfNeeded(BlockingOutputStreamAsyncRequestBody.java:93) at 
software.amazon.awssdk.core.async.BlockingOutputStreamAsyncRequestBody.outputStream(BlockingOutputStreamAsyncRequestBody.java:68) at
... my code

Reproduction Steps

The upload code looks roughly like this (Kotlin); let me know and I can probably reproduce it with Java.

// initialization
val s3ClientBuilder = S3AsyncClient.crtBuilder()
    .credentialsProvider(ProfileCredentialsProvider.create("SandboxAdmin"))

val s3Client = s3ClientBuilder.build()

val transferManager = S3TransferManager.builder().s3Client(s3Client).build()

Executed per request, using a transfer manager / S3 client that is built once at initialization:

val body = AsyncRequestBody.forBlockingOutputStream(null)
val req = UploadRequest.builder()
    .requestBody(body)
    .putObjectRequest { it.bucket(bucket).key(key) }
    .build()

val upload = transferManager.upload(req).completionFuture()

body.outputStream().use {
  // upload to stream
}

// wait for upload to complete
upload.join()

I also download files from S3 (again per request, using the pre-initialized transfer manager / S3 client):

val downloadRequest = DownloadRequest.builder()
    .getObjectRequest { it.bucket(bucket).key(key) }
    .responseTransformer(AsyncResponseTransformer.toBlockingInputStream())
    .build()

val downloadResponse = transferManager
    .download(downloadRequest)
    .completionFuture()

return downloadResponse.join().result()

Possible Solution

My best guess is that this is some sort of keepalive problem that can be worked around by changing the OS-level keepalives (or perhaps by using the httpConfiguration to set up a health check properly).

I have also tried setting up a health check on the s3client, but that also didn't fix the issue for me.

.httpConfiguration(S3CrtHttpConfiguration.builder()
    .connectionHealthConfiguration(S3CrtConnectionHealthConfiguration.builder()
        .minimumThroughputInBps(0)
        .minimumThroughputTimeout(Duration.ofSeconds(30))
        .build()
    )
    .build()
)
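
If the hang stems from silently dead connections, a shorter connection timeout alongside the health check may surface errors sooner. A hedged sketch of that configuration (builder methods from S3CrtHttpConfiguration in the AWS SDK for Java 2.x; whether this addresses the root cause here is unverified, and imports from software.amazon.awssdk and java.time are omitted):

```java
// Sketch: fail fast on stale connections instead of hanging indefinitely.
S3AsyncClient s3 = S3AsyncClient.crtBuilder()
    .httpConfiguration(S3CrtHttpConfiguration.builder()
        .connectionTimeout(Duration.ofSeconds(10)) // give up on dead connects quickly
        .connectionHealthConfiguration(S3CrtConnectionHealthConfiguration.builder()
            .minimumThroughputInBps(1L) // 0 effectively disables the check
            .minimumThroughputTimeout(Duration.ofSeconds(30))
            .build())
        .build())
    .build();
```

Note that a minimumThroughputInBps of 0, as in the snippet above, effectively disables the throughput check, which may be why the health check appeared to have no effect.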

Additional Information/Context

Seemingly related to:

The only successful reproduction is in a container based on public.ecr.aws/docker/library/eclipse-temurin:17-jdk-jammy running in ECS

Keepalive settings in the OS

# cat /proc/sys/net/ipv4/tcp_keepalive_time
7200
# cat /proc/sys/net/ipv4/tcp_keepalive_intvl 
75
# cat /proc/sys/net/ipv4/tcp_keepalive_probes
9

This also appears to happen on an x86 build, also running in ECS.

aws-crt-java version used

0.25.0

Java version used

Open JDK 17

Operating System and version

Linux hostname 5.10.186-179.751.amzn2.aarch64 #1 SMP Tue Aug 1 20:51:46 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

android libs not bundled

CRT fails to init on android (emulator or otherwise):

W/System.err: Unable to unpack AWS CRT lib: java.io.IOException: Unable to open library in jar for AWS CRT: /android/x86/libaws-crt-jni.so
    java.io.IOException: Unable to open library in jar for AWS CRT: /android/x86/libaws-crt-jni.so

I don't see any android platform dir in the release jar (neither 0.12.5 nor the latest 0.13.1):

> jar -xf ~/Downloads/aws-crt-0.13.1.jar                                                                                                                                                                                    
> ls
META-INF                com                     libaws-c-event-stream.a libs2n.a                linux                   osx                     software                windows

closing CRT in a JVM shutdown hook times out its usages in other shutdown hooks, e.g. Spring's

Describe the bug

When I try to use the S3-client during the Spring-application shutdown, I get cryptic time-out exceptions:

java.lang.IllegalStateException: The service request was not made within 10 seconds of outputStream being invoked. Make sure to invoke the service request BEFORE invoking outputStream if your caller is single-threaded.
	at software.amazon.awssdk.core.async.BlockingOutputStreamAsyncRequestBody.waitForSubscriptionIfNeeded(BlockingOutputStreamAsyncRequestBody.java:93)
	at software.amazon.awssdk.core.async.BlockingOutputStreamAsyncRequestBody.outputStream(BlockingOutputStreamAsyncRequestBody.java:68)
	at it.joom.quality.infra.storage.s3.RemoteS3FileTest.lambda$testHangs$1(RemoteS3FileTest.java:44)
	at java.base/java.lang.Thread.run(Thread.java:829)

Expected Behavior

I expect that it would be possible to use S3 in a shutdown hook.

Current Behavior

It hangs for 10 seconds and fails with the cryptic time-out exception.

Reproduction Steps

var s3 = S3AsyncClient.crtBuilder().region(Region.EU_CENTRAL_1).build();
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    try (s3) {
        var blockingOutputStreamAsyncRequestBody = AsyncRequestBody.forBlockingOutputStream(null);
        var response = s3.putObject(
                PutObjectRequest.builder().bucket("bucket").key("key").build(),
                blockingOutputStreamAsyncRequestBody
        );
        try (var ignored = blockingOutputStreamAsyncRequestBody.outputStream()) {
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        response.join();
    } catch (Exception e) {
        e.printStackTrace();
    }
}));

Possible Solution

No response

Additional Information/Context

No response

aws-crt-java version used

0.23.2

Java version used

openjdk version "17.0.2" 2022-01-18 OpenJDK Runtime Environment (build 17.0.2+8-86) OpenJDK 64-Bit Server VM (build 17.0.2+8-86, mixed mode, sharing)

Operating System and version

macOS 13.4.1, Ubuntu 18

Proposed changes to the API

  1. The method MqttClientConnection#publish should throw an exception if the message size exceeds 128 KB, the allowed limit. Currently it fails silently, which is not the expected behavior when it can't fulfill its function.
  2. The method MqttClientConnection#connect will hang if there was an authentication error. The CRT log will have some details, although not very helpful ones (things like error 1051 or error 1049). A better way to handle this would be to throw an exception on failure with a detailed description of the cause.
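
For the first point, a caller-side guard can be sketched without the CRT types. The 128 KB figure is AWS IoT Core's documented per-message payload limit; PublishGuard and checkPayloadSize are hypothetical names, not part of the SDK.

```java
public class PublishGuard {
    static final int MAX_PAYLOAD = 128 * 1024; // AWS IoT Core per-message payload limit

    // Throws instead of failing silently when the payload is too large;
    // a caller would invoke this before MqttClientConnection#publish.
    static void checkPayloadSize(byte[] payload) {
        if (payload.length > MAX_PAYLOAD) {
            throw new IllegalArgumentException(
                "MQTT payload is " + payload.length + " bytes; limit is " + MAX_PAYLOAD);
        }
    }

    public static void main(String[] args) {
        checkPayloadSize(new byte[1024]); // fine
        try {
            checkPayloadSize(new byte[MAX_PAYLOAD + 1]);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected oversized payload");
        }
    }
}
```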

closing HTTP stream after on_stream_complete fires causes segfault

In the final HTTP request callback (from the JNI side), the data structure that contains the reference to the stream is released (freed).

This means that if you call HttpStream.close() from the JVM side AFTER the final callback, you will trigger a segfault in the JNI layer because the data structure was already released:

stream = connection.makeRequest(...)

...

// sometime after the final callback

stream.close()  // segfault

The segfault usually happens in the call to aws_http_stream_release(), although every once in a while the null check on L370 is actually triggered.

The resolution is either "don't do that" plus better documentation, or changes that prevent this possibility, either through additional internal checks or at the API level (e.g. if the stream is closed automatically in the callback, don't expose a close method).
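
The "additional internal checks" option could look like a close guard that makes close() a no-op once the native handle has been released. IllustrativeStream and its members are hypothetical; the real fix would live in the JNI binding, but the idempotence pattern is the same.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class IllustrativeStream {
    private final AtomicBoolean released = new AtomicBoolean(false);

    // Called from the final on_stream_complete callback.
    void onStreamComplete() {
        releaseNative();
    }

    // Safe to call from user code at any time, even after the final callback.
    public void close() {
        releaseNative();
    }

    private void releaseNative() {
        // compareAndSet guarantees the native release runs exactly once,
        // so a late close() cannot double-free the stream.
        if (released.compareAndSet(false, true)) {
            // aws_http_stream_release(...) would happen here in the JNI layer.
        }
    }

    boolean isReleased() { return released.get(); }

    public static void main(String[] args) {
        IllustrativeStream s = new IllustrativeStream();
        s.onStreamComplete();
        s.close(); // second release is a no-op, not a segfault
        System.out.println(s.isReleased());
    }
}
```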

S3TransferManager results in error when running in AWS but works remotely

Describe the bug

We are attempting to use the S3TransferManager to stream data to an S3 bucket. The code works locally when connecting to AWS remotely, but fails when running inside AWS.

Expected Behavior

We expect to see the file created in the S3 bucket.

Current Behavior

The code generates this trace and error:

[TRACE] [2023-07-05T18:11:01Z] [00007fdbc6242700] [event-loop] - id=0x7fdbe42c5a80: detected more scheduled tasks with the next occurring at 0, using timeout of 0.
[TRACE] [2023-07-05T18:11:01Z] [00007fdbc6242700] [event-loop] - id=0x7fdbe42c5a80: waiting for a maximum of 0 ms
[TRACE] [2023-07-05T18:11:01Z] [00007fdbc6242700] [event-loop] - id=0x7fdbe42c5a80: wake up with 0 events to process.
[TRACE] [2023-07-05T18:11:01Z] [00007fdbc6242700] [event-loop] - id=0x7fdbe42c5a80: running scheduled tasks.
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [task-scheduler] - id=0x7fdc0419e908: Running aws_exponential_backoff_retry_task task with <Running> status
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [exp-backoff-strategy] - id=0x7fdbe4509160: Vending retry_token 0x7fdc0419e8a0
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [standard-retry-strategy] - id=0x7fdbe4508fd0: token acquired callback invoked with error Success. with token 0x7fdbfc00c8f0 and nested token 0x7fdc0419e8a0
[TRACE] [2023-07-05T18:11:01Z] [00007fdbc6242700] [standard-retry-strategy] - id=0x7fdbe4508fd0: invoking on_retry_token_acquired callback
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [connection-manager] - id=0x7fdbe45a2950: Acquire connection
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [connection-manager] - id=0x7fdbe45a2950: snapshot - state=1, idle_connection_count=0, pending_acquire_count=1, pending_settings_count=0, pending_connect_count=1, vended_connection_count=0, open_connection_count=0, ref_count=1
[INFO] [2023-07-05T18:11:01Z] [00007fdbc6242700] [connection-manager] - id=0x7fdbe45a2950: Requesting 1 new connections from http
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [http-connection] - https_proxy environment found
[ERROR] [2023-07-05T18:11:01Z] [00007fdbc6242700] [http-connection] - Could not parse found proxy URI.
[ERROR] [2023-07-05T18:11:01Z] [00007fdbc6242700] [connection-manager] - id=0x7fdbe45a2950: http connection creation failed with error code 36(An input string was passed to a parser and the string was incorrectly formatted.)
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [connection-manager] - id=0x7fdbe45a2950: Failing excess connection acquisition with error code 36
[WARN] [2023-07-05T18:11:01Z] [00007fdbc6242700] [connection-manager] - id=0x7fdbe45a2950: Failed to complete connection acquisition with error_code 36(An input string was passed to a parser and the string was incorrectly formatted.)
[ERROR] [2023-07-05T18:11:01Z] [00007fdbc6242700] [S3Endpoint] - id=0x7fdbe4592e80: Could not acquire connection due to error code 36 (An input string was passed to a parser and the string was incorrectly formatted.)
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [S3Client] - id=0x7fdbe4401fa0 Client scheduling retry of request 0x7fdc0419e130 for meta request 0x7fdbe45779f0 with token 0x7fdbfc00c8f0 with error code 36 (An input string was passed to a parser and the string was incorrectly formatted.).
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [standard-retry-strategy] - token_id=0x7fdbfc00c8f0: reducing retry capacity by 10 from 500 and scheduling retry.
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [exp-backoff-strategy] - id=0x7fdbe4509160: Attempting retry on token 0x7fdc0419e8a0 with error type 0
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [exp-backoff-strategy] - id=0x7fdbe4509160: Computed backoff value of 8310907ns on token 0x7fdc0419e8a0
[TRACE] [2023-07-05T18:11:01Z] [00007fdbc6242700] [event-loop] - id=0x7fdbe42c5a80: scheduling task 0x7fdc0419e908 in-thread for timestamp 977992596685
[DEBUG] [2023-07-05T18:11:01Z] [00007fdbc6242700] [task-scheduler] - id=0x7fdc0419e908: Scheduling aws_exponential_backoff_retry_task task for future execution at time 977992596685
[TRACE] [2023-07-05T18:11:01Z] [00007fdbc6242700] [standard-retry-strategy] - id=0x7fdbe4508fd0: on_retry_token_acquired callback completed
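
The trace shows the CRT picked up an https_proxy value from the environment ("https_proxy environment found") and failed to parse it (error code 36). A JDK-only sanity check of the variable's shape can confirm whether the environment in the AWS-hosted container is the culprit; note the CRT reads the process environment (System.getenv), not the JVM system properties logged in the reproduction below. ProxyEnvCheck and looksLikeProxyUri are hypothetical names for this sketch.

```java
import java.net.URI;

public class ProxyEnvCheck {
    // Returns true when the value parses as an absolute URI with a host,
    // roughly what a proxy-URI parser needs (e.g. "http://proxy.example:3128").
    static boolean looksLikeProxyUri(String value) {
        if (value == null || value.isEmpty()) {
            return false;
        }
        try {
            URI uri = URI.create(value);
            return uri.getScheme() != null && uri.getHost() != null;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        for (String name : new String[] {"https_proxy", "HTTPS_PROXY", "http_proxy", "HTTP_PROXY"}) {
            String value = System.getenv(name);
            System.out.println(name + " = " + value + " parseable=" + looksLikeProxyUri(value));
        }
    }
}
```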

Reproduction Steps

public class AWSTransferManager {
  private static final Logger log = LogManager.getLogger(AWSTransferManager.class.getName());

  static final String BUCKET="MYBUCKET";
  static final String FILE ="flux.bin";
  static final String testString = "This is some test data that will get sent to S3 over and over and " +
    "over again and is about 128 chars long, no seriously... really";
  static final int numLinesToSend=1_000;
  static final byte[] testBytes = testString.getBytes(StandardCharsets.UTF_8);

  public static void main (String[] args) throws IOException, InterruptedException {
    doIt();
  }

  public static void doIt() throws InterruptedException {

    log.info("http_proxy: " + System.getProperty("http_proxy")); // returns null
    log.info("https_proxy: " + System.getProperty("https_proxy")); // returns null
    log.info("http.proxyHost: " + System.getProperty("http.proxyHost")); // returns null
    log.info("https.proxyHost: " + System.getProperty("https.proxyHost")); // returns null

    String logFile = System.getProperty("user.dir")+"/s3.log";
    Utils.initCrtLogging(logFile);
    //Use default Creds:
    AwsCredentialsProvider credentialsProvider = DefaultCredentialsProvider.create();
    //or Select an Env Profile
    //AwsCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create(PROFILE);
    //Or use SSO
    //AwsCredentialsProvider credentialsProvider = Utils.getSsoCredentials(PROFILE);

    S3AsyncClient s3AsyncClient = Utils.getS3AsyncClient(credentialsProvider);
    S3TransferManager transferManager = Utils.getTransferManager(s3AsyncClient);

    //Create a feed we can 'stream' messages to:
    ConsumerFeed<byte[]> consumerFeed = new ConsumerFeed<>();
    Flux<ByteBuffer> flux = consumerFeed.flux
      .map(ByteBuffer::wrap)
      .doOnComplete(() -> log.info("Done"));

    UploadRequest.Builder req = UploadRequest.builder()
      .requestBody(AsyncRequestBody.fromPublisher(flux))
      .addTransferListener(new S3TransferListener(FILE))
      //Fix for Error we get without contentLength:
      //https://github.com/awslabs/aws-c-s3/pull/285/files#diff-5ee13dd0ee005b828bde6d60ff89582070983e3cb7e5d92389b629a9850c27f8
      .putObjectRequest(b -> b.bucket(BUCKET).key(FILE)); //.contentLength(maxSize));

    Upload upload = transferManager.upload(req.build());
    //async stuff - wait for it.
    consumerFeed.subscription.await();

    //Send some stuff:
    Consumer<byte[]> dataFeed = consumerFeed.getDataFeed();
    for (int i=0; i < numLinesToSend; i++) {
      dataFeed.accept(testBytes);
    }
    long completionSentTime= System.currentTimeMillis();
    consumerFeed.complete();

    CompletedUpload completedUpload = upload.completionFuture().join();
    double responseTime = (System.currentTimeMillis() - completionSentTime)/1000D;
    log.info("CompleteUploadResponse: {}\n\tCloseDuration: {}",
      completedUpload.response(), String.format("%.3f seconds", responseTime));

    // Validation:
    Thread.sleep(2000L);
    log.info("Getting File");
    Path filePath = Paths.get(System.getProperty("user.dir")+"/"+FILE);

    try {
      if (Files.exists(filePath)) Files.delete(filePath);
      GetObjectResponse response = s3AsyncClient.getObject(o -> o.bucket(BUCKET).key(FILE), AsyncResponseTransformer.toFile(filePath)).join();
      log.info("Resulting File Size: {}", Files.size(filePath));
    }
    catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}

public class ConsumerFeed<T> {
  Consumer<T> dataFeed;
  FluxSink<T> sink;
  final CountDownLatch subscription = new CountDownLatch(1);
  Flux<T> flux = Flux.create(sink -> {
    System.out.println("Create");
    this.setSink(sink);
    setDataFeed(sink::next);
    subscription.countDown();
  });
  void complete() {
    getSink().complete();
  }

  public Consumer<T> getDataFeed() {
    return dataFeed;
  }

  public void setDataFeed(Consumer<T> dataFeed) {
    this.dataFeed = dataFeed;
  }

  public FluxSink<T> getSink() {
    return sink;
  }

  public void setSink(FluxSink<T> sink) {
    this.sink = sink;
  }
}

public class S3TransferListener implements TransferListener {

  private static final Logger log = LogManager.getLogger(S3TransferListener.class.getName());

  public S3TransferListener(String resource) {
    this.resource = resource;
  }

  final String resource;
  long startTime;
  private int step=0;
  @Override
  public void transferInitiated (Context.TransferInitiated context) {
    log.info("Transfer initiated: {}, {}", resource, context.progressSnapshot().ratioTransferred());
    startTime = System.currentTimeMillis();
    status(context.progressSnapshot().transferredBytes());
  }

  private void status(long l) {
    if (l > step * 1_000_000) {
      log.info("Bytes transferred {}", l);
      step++;
    }
  }

  @Override
  public void bytesTransferred (Context.BytesTransferred context) {
    status(context.progressSnapshot().transferredBytes());
  }

  @Override
  public void transferComplete (Context.TransferComplete context) {
    long seconds = (System.currentTimeMillis() - startTime) / 1000;
    double bytes = (double)context.progressSnapshot().transferredBytes();
    double megabytes = bytes / 1_048_576;
    double throughput = megabytes / seconds;
    log.info("Transfer complete for resource: {}\n\t Bytes: {}\n\t MBs: {}\n\tThroughput: {} MB/s",
      resource, String.format("%10f", bytes), String.format("%.3f", megabytes),
      String.format("%.2f", throughput));
  }

  @Override
  public void transferFailed (Context.TransferFailed context) {
    log.error("Transfer failed on resource "+resource, context.exception());
  }
}


public class Utils {

  private static final Logger log = LogManager.getLogger(Utils.class.getName());


  static Path getPath (String resource) {
    Path path = Paths.get(resource);
    if (!path.getParent().toFile().exists() && !path.getParent().toFile().mkdirs()) {
      throw new RuntimeException("Failed to create path location: " + path);
    }
    return path;
  }

  static void initCrtLogging(String path) {
    //Since this method uses native aws libs (https://github.com/awslabs/aws-c-s3)
    // Only decent logging can be found by enabling this.
    /*Path logFilePath = getPath(path);
    if (Files.exists(logFilePath)) {
      try {
        Files.delete(logFilePath);
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }*/
    Log.initLoggingToStdout(Log.LogLevel.Trace);
    log.info("CRT Logging initialized: "+path);
  }

  static S3TransferManager getTransferManager(S3AsyncClient s3AsyncClient) {
    return S3TransferManager.builder()
      .s3Client(s3AsyncClient)
      .build();
  }

  public static S3AsyncClient getS3AsyncClient(AwsCredentialsProvider credentialsProvider) {
    S3AsyncClient s3AsyncClient =
      S3AsyncClient.crtBuilder()
        //.credentialsProvider(credentialsProvider)
        .region(Region.US_EAST_1)
        .targetThroughputInGbps(20.0)
        .minimumPartSizeInBytes(1000000L)
        .build();
    return s3AsyncClient;
  }
}

Possible Solution

No response

Additional Information/Context

Using AWS SDK version 2.20.94

aws-crt-java version used

0.22.2

Java version used

1.8

Operating System and version

Windows 10 Enterprise

Java 11 - Fatal error condition occurred in /work/src/native/crt.c:112

The application is crashing with the following error in the log. Any idea what is causing this issue and why it's crashing the application?
Version - software.amazon.awssdk:aws-crt-client:2.14.13-PREVIEW
Java version - OpenJDK Runtime Environment Corretto-11.0.11.9.1 (build 11.0.11+9-LTS)

We notice this error when load testing our application. Not all requests fail, but when one fails with this error the application crashes.

2021-05-04 23:24:00.753 ERROR 4923 --- [      Thread-23] s.a.a.h.c.i.AwsCrtAsyncHttpStreamAdapter : Response Encountered an Error.
software.amazon.awssdk.crt.http.HttpException: A callback has reported failure.
        at software.amazon.awssdk.http.crt.internal.AwsCrtAsyncHttpStreamAdapter.onResponseComplete(AwsCrtAsyncHttpStreamAdapter.java:116) ~[aws-crt-client-2.14.13-PREVIEW.jar!/:na]
        at software.amazon.awssdk.crt.http.HttpStreamResponseHandlerNativeAdapter.onResponseComplete(HttpStreamResponseHandlerNativeAdapter.java:31) ~[aws-crt-0.8.2.jar!/:0.8.2]

2021-05-04 23:24:00.753 ERROR 4923 --- [      Thread-23] s.a.a.h.c.i.AwsCrtResponseBodyPublisher  : Error processing Response Body

software.amazon.awssdk.crt.http.HttpException: A callback has reported failure.
        at software.amazon.awssdk.http.crt.internal.AwsCrtAsyncHttpStreamAdapter.onResponseComplete(AwsCrtAsyncHttpStreamAdapter.java:116) ~[aws-crt-client-2.14.13-PREVIEW.jar!/:na]
        at software.amazon.awssdk.crt.http.HttpStreamResponseHandlerNativeAdapter.onResponseComplete(HttpStreamResponseHandlerNativeAdapter.java:31) ~[aws-crt-0.8.2.jar!/:0.8.2]

2021-05-04 23:24:00.753 ERROR 4923 --- [      Thread-23] s.a.a.h.c.i.AwsCrtResponseBodyPublisher  : Error before ResponseBodyPublisher could complete: A callback has reported failure.
Fatal error condition occurred in /work/src/native/crt.c:112: !(*env)->ExceptionCheck(env)
Exiting Application
################################################################################
Resolved stacktrace:
################################################################################
################################################################################
Raw stacktrace:
################################################################################

Starting multiple putObject requests hangs the client

Describe the bug

Starting multiple putObject requests to be used concurrently hangs the client.

Expected Behavior

I expected it not to hang.

Current Behavior

It actually hung, and the JVM did not exit after the main thread stopped.

Reproduction Steps

Log.initLoggingToStderr(Log.LogLevel.Trace);
String bucket = "joomdev-recom";
var outputStreamToFuture = new LinkedHashMap<CancellableOutputStream, CompletableFuture<?>>();
try (
        var s3Client = S3AsyncClient.crtBuilder().region(Region.EU_CENTRAL_1).build();
        var streams = new AutoCloseable() {
            boolean cancelled = true;

            @Override
            public void close() {
                outputStreamToFuture.forEach((outputStream, completableFuture) -> {
                    if (cancelled) {
                        outputStream.cancel();
                    }
                    try {
                        outputStream.close();
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                });
                outputStreamToFuture.forEach((__, future) -> {
                    future.join();
                });
            }
        }
) {
    for (int i = 0; i < 4; i++) {
        var body = software.amazon.awssdk.core.async.AsyncRequestBody.forBlockingOutputStream(1L);
        var response = s3Client.putObject(
                PutObjectRequest.builder().bucket(bucket).key("faucct-" + i).build(), body
        );
        try {
            outputStreamToFuture.put(body.outputStream(), response);
        } catch (Exception e) {
            try (var ignored2 = new AutoCloseable() {
                @Override
                public void close() {
                    response.getNow(null);
                }
            }) {
                throw new RuntimeException("" + i, e);
            }
        }
    }
    for (var cancellableOutputStream : outputStreamToFuture.keySet()) {
        cancellableOutputStream.write(1);
    }
    streams.cancelled = false;
}

Possible Solution

No response

Additional Information/Context

No response

aws-crt-java version used

0.23.2

Java version used

openjdk version "11.0.14.1" 2022-02-08 LTS OpenJDK Runtime Environment Zulu11.54+25-CA (build 11.0.14.1+1-LTS) OpenJDK 64-Bit Server VM Zulu11.54+25-CA (build 11.0.14.1+1-LTS, mixed mode)

Operating System and version

macOS 13.4.1
Ubuntu 18.04
