
gradle-s3-build-cache's People

Contributors

antillean, chemarodriguez-jt, cristiangm, double16, durbon, egor-n, erdi, myniva, yogurtearl


gradle-s3-build-cache's Issues

Use Kotlin DSL for build scripts and implementation?

Kotlin adds type safety, and it is included by default in modern Gradle distributions (I guess it has been available since 4.10 or so).
Kotlin also helps with writing tests, as it has multi-line string literals.

What do you think of using Kotlin?

Upgrade to Gradle 4.0

The build cache APIs changed slightly in version 4.0 (compared to 3.5). This plugin needs to be updated in order to remain usable in projects on Gradle 4.0+.

SSE Algorithm

Hi, thanks for this project.
Our infrastructure requires encryption when we use permissions like PutObject.
Would it be possible to add configuration properties to accept at least AES256? For example:

meta.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
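
For context, a minimal sketch of where this could hook in on the plugin's store path, assuming the v1 AWS SDK (variable names are illustrative):

import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

// Illustrative store path: attach SSE-S3 (AES256) metadata to every upload.
ObjectMetadata meta = new ObjectMetadata();
meta.setContentLength(contentLength);
meta.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
s3.putObject(new PutObjectRequest(bucketName, key.getHashCode(), inputStream, meta));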

Is the project open to PRs? I can create one.

Thanks

Add an option to set custom HTTP headers

For other, non-AWS implementations of S3, you sometimes need to add custom headers to each request.

This can be done by adding the headers to a ClientConfiguration and setting that on the AmazonS3ClientBuilder (see the sketch after the proposed option below).

Something like this could be added to the config options:

headers = [ "name": "value" ]
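
A sketch of how the plugin might apply such a map when building the client, assuming the v1 SDK (the headers map is the proposed option above; the rest is illustrative):

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.util.Map;

// Illustrative: copy each configured header onto the ClientConfiguration,
// then hand it to the builder so every request carries the headers.
ClientConfiguration clientConfig = new ClientConfiguration();
for (Map.Entry<String, String> header : headers.entrySet()) {
    clientConfig.withHeader(header.getKey(), header.getValue());
}
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
    .withClientConfiguration(clientConfig)
    .build();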

Alternative S3-compatible targets

Hi!

Could you please provide an option to configure a custom endpoint?

We would like to use this plugin with DigitalOcean's Spaces (fully S3-compatible storage); a sketch of how this might be wired up follows below.

Thanks!
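
A minimal sketch of how a configurable endpoint might be wired up with the v1 SDK (the Spaces endpoint is only an example value):

import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Illustrative: when a custom endpoint is configured, use it instead of a region
// (the builder does not allow setting both at once).
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
        "https://nyc3.digitaloceanspaces.com", "us-east-1"))
    .build();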

Add a configuration option to limit the maximum size of the cached entry

I've seen a case where the cached object was 35 MiB, caused by a single unit test with an extremely long test name.
The test in question included a multi-megabyte String in its name (it was a parameterized test, and the string was part of the parameter).

A 35 MiB retrieval over the internet does not sound right, so I would like an option to cap the maximum size of cached entries.
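
A sketch of how such a cap might look in the service's store() method, assuming a new configuration property (maximumCachedObjectLength is a hypothetical name) and Gradle's BuildCacheEntryWriter.getSize():

// Hypothetical guard in AwsS3BuildCacheService.store(): skip oversized entries
// instead of uploading them; Gradle simply treats the entry as not stored.
@Override
public void store(BuildCacheKey key, BuildCacheEntryWriter writer) {
    if (maximumCachedObjectLength > 0 && writer.getSize() > maximumCachedObjectLength) {
        logger.info("Skipping cache entry {} ({} bytes): exceeds the configured cap",
            key.getHashCode(), writer.getSize());
        return;
    }
    // ... existing upload logic ...
}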

Provide a way to configure the client so that it does not try to resolve credentials via the network

When using s3-build-cache locally, I get a "connection exception" printed to the console because the S3 client tries to connect to the EC2 metadata service.

Even though the error can be ignored, the lookup runs on the main thread, so it does impact build performance.

I think it would be nice to have an option to configure the AWSCredentialsProviderChain.
For instance:
a) an option to switch between "default" and "local-only" (easier to configure, targets this use case only; see the sketch after the stack trace below)
b) an option that configures the list of providers (more flexible, but harder to configure)

The stack trace in question:

com.amazonaws.SdkClientException: Failed to connect to service endpoint:
        at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:100)
        at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.getToken(InstanceMetadataServiceResourceFetcher.java:91)
        at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:69)
        at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66)
        at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:58)
        at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsResponse(InstanceMetadataServiceCredentialsFetcher.java:46)
        at com.amazonaws.auth.BaseCredentialsFetcher.fetchCredentials(BaseCredentialsFetcher.java:112)
        at com.amazonaws.auth.BaseCredentialsFetcher.getCredentials(BaseCredentialsFetcher.java:68)
        at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:166)
        at com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper.getCredentials(EC2ContainerCredentialsProviderWrapper.java:75)
        at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
        at com.amazonaws.services.s3.S3CredentialsProviderChain.getCredentials(S3CredentialsProviderChain.java:36)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1251)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:827)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:777)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5054)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5000)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1335)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1309)
        at com.amazonaws.services.s3.AmazonS3Client.doesObjectExist(AmazonS3Client.java:1390)
        at ch.myniva.gradle.caching.s3.internal.AwsS3BuildCacheService.load(AwsS3BuildCacheService.java:64)
        at org.gradle.caching.internal.controller.service.BaseBuildCacheServiceHandle.loadInner(BaseBuildCacheServiceHandle.java:73)
        at org.gradle.caching.internal.controller.service.OpFiringBuildCacheServiceHandle$1.run(OpFiringBuildCacheServiceHandle.java:49)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:402)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:394)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:92)
Caused by: java.net.ConnectException: Host is down (connect failed)
        at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
        at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
        at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
        at java.base/java.net.Socket.connect(Socket.java:609)
        at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
        at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
        at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
        at java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
        at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:341)
        at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:362)
        at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1253)
        at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1232)
        at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)
        at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015)
        at com.amazonaws.internal.ConnectionUtils.connectToEndpoint(ConnectionUtils.java:52)
        at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:80)
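
A minimal sketch of option (a), building the client with a chain that never queries the EC2 metadata endpoint (all classes are from the v1 SDK; how the plugin would expose the switch is open for discussion):

import com.amazonaws.auth.AWSCredentialsProviderChain;
import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;
import com.amazonaws.auth.SystemPropertiesCredentialsProvider;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Illustrative "local-only" chain: environment variables, system properties and
// ~/.aws/credentials only; no EC2 metadata lookup, hence no network call.
AWSCredentialsProviderChain localOnly = new AWSCredentialsProviderChain(
    new EnvironmentVariableCredentialsProvider(),
    new SystemPropertiesCredentialsProvider(),
    new ProfileCredentialsProvider());
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
    .withCredentials(localOnly)
    .build();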

Treat 403 (forbidden) response like 404 (not found)

If an S3 bucket is configured to "allow only GetObject for public access", it returns 403 when the requested object is not found.

That is done for security reasons, so that no one can tell whether an object is missing or they simply lack permission to access it.

However, s3-build-cache does not really need the ListObjects permission to retrieve cache values, so it would be nice if the cache treated 403 responses like "cache entry not found".

WDYT?
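
For reference, a sketch of what this might look like in the service's load() method (the surrounding structure is assumed; AmazonS3Exception.getStatusCode() is real v1 SDK API):

// Hypothetical change in AwsS3BuildCacheService.load(): treat 403 like 404,
// i.e. report a cache miss instead of failing the build.
@Override
public boolean load(BuildCacheKey key, BuildCacheEntryReader reader) throws BuildCacheException {
    try (S3Object object = s3.getObject(bucketName, key.getHashCode())) {
        reader.readFrom(object.getObjectContent());
        return true;
    } catch (AmazonS3Exception e) {
        if (e.getStatusCode() == 403 || e.getStatusCode() == 404) {
            return false; // missing, or hidden by the bucket policy: a plain cache miss
        }
        throw new BuildCacheException("Error while loading cache object from S3", e);
    } catch (IOException e) {
        throw new BuildCacheException("Error while reading cache object from S3", e);
    }
}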

Is the plugin production ready?

This S3 build cache implementation for Gradle started as a proof of concept right when the build cache feature was first announced in Gradle 3.5. To warn potential users of issues that might exist, I've added a note to the readme about the plugin's beta state ("this plugin is not yet ready for production").

Unfortunately, I've never had the chance to use the plugin in a real-world scenario, so I am still lacking experience regarding its robustness. It is also hard to tell how widely the plugin has been adopted in the wild.

Let's use this issue to discuss the current state of the plugin and the steps needed to release a first stable, production-ready version 1.0.

Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use

We get this warning when loading caches from S3:

Task :blabba:blabla:compileJava FROM-CACHE Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.

Is the cache working?
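
For what it's worth, the AWS SDK logs this warning when an S3ObjectInputStream is closed before all of its bytes have been consumed; cache loads still succeed, but the SDK aborts the HTTP connection instead of returning it to the pool. A sketch of the usual remedy on the load path (assuming the service reads entries via Gradle's BuildCacheEntryReader; names are illustrative):

import java.io.InputStream;
import com.amazonaws.services.s3.model.S3Object;

// Illustrative fix: drain whatever the reader left unconsumed before the stream
// is closed, so the SDK can reuse the underlying HTTP connection.
try (S3Object object = s3.getObject(bucketName, key.getHashCode())) {
    InputStream is = object.getObjectContent();
    reader.readFrom(is);
    byte[] scratch = new byte[4096];
    while (is.read(scratch) != -1) {
        // discard any trailing bytes
    }
}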

Support for other methods of providing credentials

Are there any plans to support other methods of retrieving credentials for the plugin? For instance:

a) Loading a credentials file and specifying a profile. Currently the only way to support this is to export AWS_PROFILE=<profile_name>.

b) AWS SSO credentials. AWS CLI v2 supports SSO to retrieve short-lived credentials, using a profile name to access resources. The cache plugin does not have any mechanism to support this either.

Proposed additional configuration to specify credentials:

Default Chain

{
    credentials: {
        strategy: default
    }
}

Profile

{
    credentials: {
        strategy: profile
        profile: default or <profile_name>
    }
}

AWS SSO does not write the credentials to the ~/.aws/credentials file; instead it relies on configuration specified in the ~/.aws/config file and stores the credentials in a separate file under the ~/.aws/sso directory.
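
For the profile strategy, a minimal sketch using the v1 SDK (ProfileCredentialsProvider is real SDK API; note that it does not resolve SSO profiles by itself, so SSO support would need extra work):

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Illustrative: strategy "profile" maps to a named profile from ~/.aws/credentials.
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
    .withCredentials(new ProfileCredentialsProvider("my-profile"))
    .build();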

Happy to work with you on this and hear your thoughts.

Exception when applying plugin to Gradle 5.5

I'm trying to use the plugin in a Gradle project with wrapper version 5.5.

I applied the plugin using the Gradle plugins DSL:

plugins {
  id 'java'
  id 'groovy'
  id 'checkstyle'
  id 'jacoco'
  id 'pmd'
   ...
  id "ch.myniva.s3-build-cache" version "0.9.0"
}

I did not add any buildCache configuration.

After applying the plugin, I get the following error:

An exception occurred applying plugin request [id: 'ch.myniva.s3-build-cache', version: '0.9.0']
> Failed to apply plugin [id 'ch.myniva.s3-build-cache']
   > org.gradle.api.internal.project.DefaultProject_Decorated cannot be cast to org.gradle.api.initialization.Settings
* Exception is:
org.gradle.api.plugins.InvalidPluginException: An exception occurred applying plugin request [id: 'ch.myniva.s3-build-cache', version: '0.9.0']
        at org.gradle.plugin.use.internal.DefaultPluginRequestApplicator.exceptionOccurred(DefaultPluginRequestApplicator.java:247)
Caused by: java.lang.ClassCastException: org.gradle.api.internal.project.DefaultProject_Decorated cannot be cast to org.gradle.api.initialization.Settings
        at ch.myniva.gradle.caching.s3.AwsS3Plugin.apply(AwsS3Plugin.java:26)
        at org.gradle.api.internal.plugins.ImperativeOnlyPluginTarget.applyImperative(ImperativeOnlyPluginTarget.java:42)
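
The ClassCastException gives the cause away: AwsS3Plugin casts its target to org.gradle.api.initialization.Settings, i.e. it is a Settings plugin, while the plugins block above applies it to the project. A sketch of the likely fix, applying it from settings.gradle instead (coordinates and cache settings are illustrative; check the plugin's README):

// settings.gradle (Gradle 5.x does not support the plugins {} DSL here yet)
buildscript {
  repositories {
    gradlePluginPortal()
  }
  dependencies {
    classpath 'ch.myniva.gradle:s3-build-cache:0.9.0'  // assumed coordinates
  }
}

apply plugin: 'ch.myniva.s3-build-cache'

buildCache {
  remote(ch.myniva.gradle.caching.s3.AwsS3BuildCache) {
    region = 'eu-west-1'              // illustrative values
    bucket = 'my-build-cache-bucket'
  }
}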

AWS STS dependency is required to allow WebToken / Kubernetes AWS IRSA support

Solved my problem from #34: even though the Gradle job should have had access to S3 (GET/PUT etc.), access was denied because the job couldn't use its AWS web identity token effectively, simply because the STS jar wasn't on the classpath.

So if you add it as a dependency, it works:

classpath 'com.amazonaws:aws-java-sdk-sts:1.11.751'

The warning is:
Unable to load credentials from WebIdentityTokenCredentialsProvider: To use assume role profiles the aws-java-sdk-sts module must be on the class path.

IRSA is a growing pattern in AWS Kubernetes deployments; it uses an OIDC flow to authenticate:

https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/

Do you know of an installation guide?

Thanks for the plugin.

I'm configuring s3-build-cache for PostgreSQL JDBC (see pgjdbc/pgjdbc#1828), and it was not that obvious.

This was my first S3 configuration; it would be great if there were a configuration guide or a script that sets up S3 (Terraform? Pulumi?). Do you know of such a guide?

`store()` method does not stream data to S3, uses byte[].

I just encountered an OOM / Java heap space error in a Gradle build and thought I should check the implementation. In the store() method of the service, the output to be cached is buffered in a ByteArrayOutputStream before being passed to the SDK:

public void store(BuildCacheKey key, BuildCacheEntryWriter writer) {

For tasks with large outputs (mine is 100 MB+) this means the entire entry is held in memory. I can't see any reason why you couldn't pass an InputStream to the SDK instead 👍 (see the sketch below).
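
A hedged sketch of a streaming alternative: spool the entry to a temporary file so the upload can stream with a known Content-Length instead of buffering everything in memory (names mirror the existing service but are assumptions):

// Hypothetical streaming variant of AwsS3BuildCacheService.store().
@Override
public void store(BuildCacheKey key, BuildCacheEntryWriter writer) {
    try {
        File tmp = File.createTempFile("s3-build-cache-", ".entry");
        try {
            try (OutputStream out = new FileOutputStream(tmp)) {
                writer.writeTo(out);                 // stream Gradle's output to disk
            }
            ObjectMetadata meta = new ObjectMetadata();
            meta.setContentLength(tmp.length());     // the SDK streams when it knows the length
            try (InputStream in = new FileInputStream(tmp)) {
                s3.putObject(new PutObjectRequest(bucketName, key.getHashCode(), in, meta));
            }
        } finally {
            tmp.delete();
        }
    } catch (IOException e) {
        throw new BuildCacheException("Error while storing cache object in S3 bucket", e);
    }
}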
