
cloudstoragemaven's People

Contributors

ahl, blockmar, daphshez, davemx, ertwroc, gkatzioura, guillaume-mayer, httpants, mark-edf, mungojam, pascalgn, pitxyoki, raphaelhamonnais, sgandon, vavvolo, w0ut0


cloudstoragemaven's Issues

AWS: Assume Roles

I'm trying to use the s3-storage-wagon with ARN profile support, as mandated by my corporate environment, and am struggling to get it to work via mvn.

All the credentials are configured, and I can run an aws s3 ls mybucket or aws s3 cp file.txt mybucket and it works perfectly, but from Maven I consistently get "Unable to find a region via the region provider chain."

I have AWS_PROFILE, AWS_DEFAULT_REGION and the other AWS key variables set in my bash profile and in .aws/credentials. The region is also set in the configuration section of s3-upload.

The only reason I can think of for the upload failing is that I have to use a profile.
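One thing worth checking: the AWS SDK for Java's default region provider chain reads the AWS_REGION environment variable; AWS_DEFAULT_REGION is honored by the CLI but, at least in SDK v1, not by the Java SDK, which would explain why the CLI works while Maven does not. For reference, an assume-role profile with a region set in ~/.aws/config typically looks like the sketch below (all profile, role, and region names are placeholders):

```ini
# ~/.aws/config -- all names below are placeholders
[profile corporate]
role_arn = arn:aws:iam::123456789012:role/my-build-role
source_profile = default
region = eu-west-1
```

Setting AWS_REGION in addition to AWS_DEFAULT_REGION, and putting the region inside the profile that AWS_PROFILE points at, are both low-cost experiments here.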

Plugin asks for bucket even though it's configured

I am using version 2.3 of the plugin, configured as follows.

<build>
    <plugins>
         <plugin>
            <groupId>com.gkatzioura.maven.cloud</groupId>
            <artifactId>s3-storage-wagon</artifactId>
            <version>2.3</version>
            <executions>
                <execution>
                    <id>upload-multiple-files</id>
                    <phase>package</phase>
                    <goals>
                        <goal>s3-upload</goal>
                    </goals>
                    <configuration>
                        <bucket>my-bucket</bucket>
                        <region>xxxx</region>
                        <path>target/mydoc</path>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

When I execute the goal s3-storage-wagon:s3-upload I receive the following error message:

[ERROR] Failed to execute goal com.gkatzioura.maven.cloud:s3-storage-wagon:2.3:s3-upload (default-cli) on project s3: You need to specify a bucket for the s3-upload goal configuration -> [Help 1]

Could you clarify why this configuration isn't picked up? I tried to align my code with the documentation on the README file.
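One known Maven behavior that matches this symptom: configuration bound to an &lt;execution&gt; is not applied to direct command-line invocations unless the execution id is default-cli, or the execution is addressed explicitly with the goal@executionId syntax available since Maven 3.3.1. Untested against this plugin, but worth trying:

```
mvn com.gkatzioura.maven.cloud:s3-storage-wagon:2.3:s3-upload@upload-multiple-files
```

Running the bound lifecycle phase (mvn package here) instead of invoking the goal directly would also pick up the execution's configuration.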

Public artifacts not downloading without valid credentials

I'm working on a project that has open/public artifacts that I can download without any credentials.
But when I run a Maven build of the project, I get an error saying that I don't have any credentials.

Right now I've "fixed" it by putting valid values in ~/.aws/credentials (but these credentials are from a completely different project).
So it seems CloudStorageMaven needs ANY valid credentials to work, even with open repositories.

Use different build system

I have an Android library that I compile with Bazel to generate an AAR.
I'd like to host the AAR on S3 as a Maven repository.

Would it be possible to use CloudStorageMaven without Maven building the AAR for me? I'd like to just specify the location of my AAR and have Maven take it from there.

Is that possible at all?
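Since the wagon is only a transport, one plausible route (untested here) is to build the AAR with Bazel and then push it with deploy:deploy-file from a pom that registers the s3-storage-wagon extension, similar to what other issues in this tracker do for jars. All coordinates and paths below are placeholders:

```
mvn deploy:deploy-file \
  -Dfile=bazel-bin/mylib.aar \
  -Dpackaging=aar \
  -DgroupId=com.example \
  -DartifactId=mylib \
  -Dversion=1.0.0 \
  -DrepositoryId=my-s3-repo \
  -Durl=s3://my-bucket/release
```

The repositoryId must match a &lt;server&gt; entry in settings.xml that carries the credentials.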

Anonymous caller does not have storage.objects.list access to XXX.

I followed the google-storage-wagon tutorial.

Tried with

gcloud auth application-default login

Tried setting the environment variable

GOOGLE_APPLICATION_CREDENTIALS=/path/xxx.json

Tried different plugin versions, from 1.0 through 1.8 and 2.0.
Nothing works.
I'm still getting the following exception:

Caused by: com.vorstella.shade.com.google.api.client.googleapis.json.GoogleJsonResponseException: 401 Unauthorized
{
  "code" : 401,
  "errors" : [ {
    "domain" : "global",
    "location" : "Authorization",
    "locationType" : "header",
    "message" : "Anonymous caller does not have storage.objects.list access to XXX.",
    "reason" : "required"
  } ],
  "message" : "Anonymous caller does not have storage.objects.list access to XXX."
}
	at com.vorstella.shade.com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:150)
	at com.vorstella.shade.com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
	at com.vorstella.shade.com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
	at com.vorstella.shade.com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:401)
	at com.vorstella.shade.com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1097)
	at com.vorstella.shade.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:499)
	at com.vorstella.shade.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432)
	at com.vorstella.shade.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:549)
	at com.vorstella.shade.com.google.cloud.storage.spi.v1.HttpStorageRpc.list(HttpStorageRpc.java:356)
	... 76 more

gcloud and gsutil both work properly.

Why does the plugin keep trying to connect anonymously?
Please, help!

Add terraform scripts for aws

Implement infrastructure as code to set up a repository.
Include key creation, service role creation, and instances with the service role for testing.
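
A minimal Terraform sketch of what that might cover; all resource and bucket names are placeholders and this is untested:

```hcl
# Bucket that will hold the Maven repository.
resource "aws_s3_bucket" "maven_repo" {
  bucket = "my-maven-repo-bucket"
}

# Trust policy allowing EC2 instances to assume the role.
data "aws_iam_policy_document" "ec2_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "repo_role" {
  name               = "maven-repo-role"
  assume_role_policy = data.aws_iam_policy_document.ec2_assume.json
}

# Instance profile so a test instance can use the role.
resource "aws_iam_instance_profile" "repo_profile" {
  name = "maven-repo-profile"
  role = aws_iam_role.repo_role.name
}
```

Key creation and the test instances themselves would layer on top of this.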

Checksum failures on version <= 1.7 replacing existing artifacts with google-storage-wagon

Perhaps this is already fixed, but if I am replacing an existing object, like the index files for the package, I get the following:

[WARNING] Checksum validation failed, expected bb29bd78d7ea635bf96003a85e3294bd3c27578a but is da39a3ee5e6b4b0d3255bfef95601890afd80709 for gs://mvn.company.com/release/com/company/artifact/maven-metadata.xml

The da39a3ee... checksum (the SHA-1 of an empty input) is the same for every file it tries to replace.

This might already be fixed in >= 1.8 but I cannot verify since uploads always fail with those versions, see #44

Multi-region HA

Any recommendations on how this might extend to be a multi-region HA solution for Maven?

Ideally, a push to either region would become visible in both (eventually consistent is OK).

S3 supports cross-region replication, but not bi-directionally, which makes a simple setup harder.

Provide the possibility to work with few buckets from different Google Cloud projects


E.g.:
I have project A which has dependencies B and C.
B is deployed to b-packages bucket of b-project.
C is deployed to c-packages bucket of c-project.
I have a service account with the necessary rights in both projects, but some packages are not available:

Caused by: com.vorstella.shade.com.google.api.client.googleapis.json.GoogleJsonResponseException: 404 Not Found
{
  "code" : 404,
  "errors" : [ {
    "domain" : "global",
    "message" : "Not Found",
    "reason" : "notFound"
  } ],
  "message" : "Not Found"
}
    at com.vorstella.shade.com.google.api.client.googleapis.json.GoogleJsonResponseException.from (GoogleJsonResponseException.java:146)
    at com.vorstella.shade.com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError (AbstractGoogleJsonClientRequest.java:113)
    at com.vorstella.shade.com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError (AbstractGoogleJsonClientRequest.java:40)
    at com.vorstella.shade.com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse (AbstractGoogleClientRequest.java:321)
    at com.vorstella.shade.com.google.api.client.http.HttpRequest.execute (HttpRequest.java:1065)
    at com.vorstella.shade.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed (AbstractGoogleClientRequest.java:419)
    at com.vorstella.shade.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed (AbstractGoogleClientRequest.java:352)
    at com.vorstella.shade.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute (AbstractGoogleClientRequest.java:469)
    at com.vorstella.shade.com.google.cloud.storage.spi.v1.HttpStorageRpc.list (HttpStorageRpc.java:336)
    at com.vorstella.shade.com.google.cloud.storage.StorageImpl$8.call (StorageImpl.java:299)
    at com.vorstella.shade.com.google.cloud.storage.StorageImpl$8.call (StorageImpl.java:296)
    at com.vorstella.shade.com.google.api.gax.retrying.DirectRetryingExecutor.submit (DirectRetryingExecutor.java:89)
    at com.vorstella.shade.com.google.cloud.RetryHelper.run (RetryHelper.java:74)
    at com.vorstella.shade.com.google.cloud.RetryHelper.runWithRetries (RetryHelper.java:51)
    at com.vorstella.shade.com.google.cloud.storage.StorageImpl.listBlobs (StorageImpl.java:295)
    at com.vorstella.shade.com.google.cloud.storage.StorageImpl.list (StorageImpl.java:262)
    at com.gkatzioura.maven.cloud.gcs.GoogleStorageRepository.connect (GoogleStorageRepository.java:55)

I suppose this means it looks for the b-packages bucket inside a-project and therefore can't find the package.

Add Other CannedAccessControlList Server Config Options

As an extension of #25, it would be really useful to include the other levels of CannedAccessControlList.

The main benefit would be cross-account S3 private bucket sharing: currently, if you push an artifact to someone else's bucket, they cannot access the artifact by default. One would need to set the CannedAcl to BucketOwnerRead or BucketOwnerFullControl.

Thanks for maintaining this plugin btw, it's incredibly useful! Pleased to find this after realising the spring version hasn't been touched in ~7 years.

I'd be happy to put together an implementation if you're accepting merge requests.

Thanks,
Dan

Facilitate debugging on connection failure

When setting up the module, I had a lot of trouble with the GOOGLE_APPLICATION_CREDENTIALS environment variable. The reason was some CI/CD details, but it would have saved me a lot of time to have some sort of diagnostic when the connection fails.

I am thinking:

  • Display the GOOGLE_APPLICATION_CREDENTIALS value.
    • If a value is present, open the file and report I/O errors.
      • Try parsing the file's content and report errors (without displaying the value, though).
      • If there is no error, display a hash of the content at debug level.
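
The steps above can be sketched as a small helper; this is a hypothetical illustration (the class and method names are not part of the plugin), showing one way to report each failure mode without ever printing the key material itself:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical diagnostic helper for connection failures.
public class CredentialsDiagnostic {

    public static String describe(String credentialsPath) {
        if (credentialsPath == null || credentialsPath.isEmpty()) {
            return "GOOGLE_APPLICATION_CREDENTIALS is not set";
        }
        Path path = Paths.get(credentialsPath);
        byte[] content;
        try {
            content = Files.readAllBytes(path);
        } catch (IOException e) {
            return "cannot read " + credentialsPath + ": " + e.getMessage();
        }
        // Very loose sanity check instead of full JSON parsing;
        // the file content itself is never printed.
        String text = new String(content, StandardCharsets.UTF_8).trim();
        if (!text.startsWith("{") || !text.endsWith("}")) {
            return "file does not look like a JSON key file";
        }
        return "credentials file looks valid, SHA-256=" + sha256Hex(content);
    }

    private static String sha256Hex(byte[] content) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(content);
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(System.getenv("GOOGLE_APPLICATION_CREDENTIALS")));
    }
}
```

Logging the hash (rather than the content) lets two environments be compared safely.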

google-storage-wagon shades an old version of jackson, breaking git-commit-id-plugin

An old version of Jackson is in the dependency tree:

[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ google-storage-wagon ---
[INFO] com.gkatzioura.maven.cloud:google-storage-wagon:jar:1.4
[INFO] +- org.slf4j:jcl-over-slf4j:jar:1.7.25:compile
[INFO] |  \- org.slf4j:slf4j-api:jar:1.7.25:compile
[INFO] +- org.apache.maven.wagon:wagon-provider-api:jar:3.0.0:provided
[INFO] |  \- org.codehaus.plexus:plexus-utils:jar:3.0.24:provided
[INFO] +- com.google.cloud:google-cloud-storage:jar:1.20.0:compile
[INFO] |  +- com.google.cloud:google-cloud-core:jar:1.20.0:compile
[INFO] |  |  +- com.google.guava:guava:jar:20.0:compile
[INFO] |  |  +- joda-time:joda-time:jar:2.9.2:compile
[INFO] |  |  +- com.google.http-client:google-http-client:jar:1.23.0:compile
[INFO] |  |  |  \- org.apache.httpcomponents:httpclient:jar:4.0.1:compile
[INFO] |  |  |     +- org.apache.httpcomponents:httpcore:jar:4.0.1:compile
[INFO] |  |  |     +- commons-logging:commons-logging:jar:1.1.1:compile
[INFO] |  |  |     \- commons-codec:commons-codec:jar:1.3:compile
[INFO] |  |  +- com.google.code.findbugs:jsr305:jar:3.0.1:compile
[INFO] |  |  +- com.google.api:api-common:jar:1.4.0:compile
[INFO] |  |  +- com.google.api:gax:jar:1.19.0:compile
[INFO] |  |  |  \- org.threeten:threetenbp:jar:1.3.3:compile
[INFO] |  |  +- com.google.protobuf:protobuf-java-util:jar:3.5.1:compile
[INFO] |  |  |  +- com.google.protobuf:protobuf-java:jar:3.5.1:compile
[INFO] |  |  |  \- com.google.code.gson:gson:jar:2.7:compile
[INFO] |  |  +- com.google.api.grpc:proto-google-common-protos:jar:1.2.0:compile
[INFO] |  |  \- com.google.api.grpc:proto-google-iam-v1:jar:0.3.0:compile
[INFO] |  +- com.google.cloud:google-cloud-core-http:jar:1.20.0:compile
[INFO] |  |  +- com.google.auth:google-auth-library-credentials:jar:0.9.0:compile
[INFO] |  |  +- com.google.auth:google-auth-library-oauth2-http:jar:0.9.0:compile
[INFO] |  |  +- com.google.oauth-client:google-oauth-client:jar:1.23.0:compile
[INFO] |  |  +- com.google.api-client:google-api-client:jar:1.23.0:compile
[INFO] |  |  +- com.google.http-client:google-http-client-appengine:jar:1.23.0:compile
[INFO] |  |  +- com.google.http-client:google-http-client-jackson:jar:1.23.0:compile
[INFO] |  |  |  \- org.codehaus.jackson:jackson-core-asl:jar:1.9.11:compile
[INFO] |  |  +- com.google.http-client:google-http-client-jackson2:jar:1.23.0:compile
[INFO] |  |  |  \- com.fasterxml.jackson.core:jackson-core:jar:2.1.3:compile
[INFO] |  |  +- com.google.api:gax-httpjson:jar:0.36.0:compile
[INFO] |  |  +- io.opencensus:opencensus-api:jar:0.11.1:compile
[INFO] |  |  |  +- com.google.errorprone:error_prone_annotations:jar:2.2.0:compile
[INFO] |  |  |  \- io.grpc:grpc-context:jar:1.9.0:compile
[INFO] |  |  \- io.opencensus:opencensus-contrib-http-util:jar:0.11.1:compile
[INFO] |  \- com.google.apis:google-api-services-storage:jar:v1-rev114-1.23.0:compile
[INFO] +- com.gkatzioura.maven.cloud:cloud-storage-core:jar:1.4:compile
[INFO] +- commons-io:commons-io:jar:2.6:compile
[INFO] \- junit:junit:jar:4.12:test
[INFO]    \- org.hamcrest:hamcrest-core:jar:1.3:test

The version of Jackson is quite old, which breaks git-commit-id-plugin:

[INFO] Caused by: java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonFactory.requiresPropertyOrdering()Z
[INFO]     at com.fasterxml.jackson.databind.ObjectMapper.<init> (ObjectMapper.java:571)
[INFO]     at com.fasterxml.jackson.databind.ObjectMapper.<init> (ObjectMapper.java:480)
[INFO]     at pl.project13.maven.git.GitCommitIdMojo.maybeGeneratePropertiesFile (GitCommitIdMojo.java:596)

jackson-core is indeed shaded but not relocated:

$ javap -s -cp ~/.m2/repository/com/gkatzioura/maven/cloud/google-storage-wagon/1.3/google-storage-wagon-1.3.jar com.fasterxml.jackson.core.JsonFactory
Compiled from "JsonFactory.java"
public class com.fasterxml.jackson.core.JsonFactory implements com.fasterxml.jackson.core.Versioned,java.io.Serializable {
[โ€ฆ]

Error resolving artifact

Hi guys,

I'm using the maven local repository for sharing client libraries (it's being generated by the maven assembly plugin) among Spring Boot apps.

Currently, I'm using the mvn install:install-file command like this:

mvn install:install-file -Dfile=lib1-client.jar -DartifactId=lib1-client -Dversion=0.0.1 -DgroupId=com.company.lib1 -Dpackaging=jar

My app pom.xml was like this:

    <dependency>
        <groupId>com.company.lib1</groupId>
        <artifactId>lib1-client</artifactId>
        <version>0.0.1</version>
    </dependency>

And it was working fine.

But now I'm moving to a continuous integration process (based on pipelines), and of course those dependencies are breaking. So I'm deploying lib1-client to an S3 bucket, using this command:

mvn deploy:deploy-file -Dfile=lib1-client.jar -DartifactId=lib1-client -Dversion=0.0.1 -DgroupId=com.company.lib1 -Dpackaging=jar -DrepositoryId=company-bucket-repo-release -Durl=s3://mvn-repo-company-dev/release -DAWS_DEFAULT_REGION=ap-southeast-2

I'm using a minimal temporary pom.xml to configure the s3 wagon; I just placed it in the same directory as the client jar.

<groupId>com.company.lib1</groupId>
<artifactId>lib1-client</artifactId>
<version>0.0.1</version>
<name>company lib1 Client Module</name>

<distributionManagement>

    <repository>
        <id>company-bucket-repo-release</id>
        <url>s3://mvn-repo-company-dev/release</url>
    </repository>

    <snapshotRepository>
        <id>company-bucket-repo-snapshot</id>
        <url>s3://mvn-repo-company-dev/snapshot</url>
    </snapshotRepository>

</distributionManagement>

<build>
    <extensions>
        <extension>
            <groupId>com.gkatzioura.maven.cloud</groupId>
            <artifactId>s3-storage-wagon</artifactId>
            <version>1.6</version>
        </extension>
    </extensions>
</build>

And it works! I can see the client jar and the generated pom, both deployed to the S3 bucket.

My app pom.xml now is like this:

    <dependency>
        <groupId>com.company.lib1</groupId>
        <artifactId>lib1-client</artifactId>
        <version>0.0.1</version>
    </dependency>

<repositories>
    <repository>
        <id>company-bucket-repo-release</id>
        <url>s3://mvn-repo-company-dev/release</url>
        <releases>
            <enabled>true</enabled>
        </releases>
        <layout>default</layout>
    </repository>

    <repository>
        <id>company-bucket-repo-snapshot</id>
        <url>s3://mvn-repo-company-dev/snapshot</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
        <layout>default</layout>
    </repository>
</repositories>

<build>
    <extensions>
        <extension>
            <groupId>com.gkatzioura.maven.cloud</groupId>
            <artifactId>s3-storage-wagon</artifactId>
            <version>1.6</version>
        </extension>
    </extensions>

</build>

My settings.xml is like this:

<server>
  <id>company-bucket-repo-release</id>
  <username>access_key</username>
  <password>access_secret</password>
</server>

<server>
  <id>company-bucket-repo-snapshot</id>
  <username>access_key</username>
  <password>access_secret</password>
</server>

But I'm getting the following error when I try to build the app (using mvn clean install):

Downloading: s3://mvn-repo-company-dev/release/com/company/lib1/lib1-client/0.0.1/lib1-client-0.0.1.pom
[WARNING] The POM for com.company.lib1:lib1-client:jar:0.0.1 is missing, no dependency information available
Downloading: s3://mvn-repo-company-dev/release/com/company/lib1/lib1-client/0.0.1/lib1-client-0.0.1.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.684 s
[INFO] Finished at: 2019-01-29T14:37:12+11:00
[INFO] Final Memory: 46M/394M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project company: Could not resolve dependencies for project com.company:company:war:0_0_1: Could not find artifact com.company.lib1:lib1-client:jar:0.0.1 in company-bucket-repo-release (s3://mvn-repo-company-dev/release) -> [Help 1]

I can't see what I'm doing wrong.

Any ideas?

Thanks,
Marcelo

com.gkatzioura.maven.cloud.listener.TransferListenerContainerImpl.lambda$fireTransferProgress produces java.lang.IndexOutOfBoundsException

Hi, great idea for a Maven enhancement. However, at the moment the following configuration:

    <distributionManagement>
        <snapshotRepository>
            <id>ave-repo-bucket-snapshot</id>
            <url>gs://bucket-ave-build-artifact/snapshot</url>
        </snapshotRepository>
        <repository>
            <id>ave-repo-bucket-release</id>
            <url>gs://bucket-ave-build-artifact/release</url>
        </repository>
    </distributionManagement>
...
        <extensions>
            <extension>
                <groupId>com.gkatzioura.maven.cloud</groupId>
                <artifactId>google-storage-wagon</artifactId>
                <version>1.8</version>
            </extension>
        </extensions>
    </build>

produces

Caused by: java.lang.IndexOutOfBoundsException
    at java.nio.ByteBuffer.wrap (ByteBuffer.java:395)
    at org.eclipse.aether.transport.wagon.WagonTransferListener.transferProgress (WagonTransferListener.java:64)
    at com.gkatzioura.maven.cloud.listener.TransferListenerContainerImpl.lambda$fireTransferProgress$2 (TransferListenerContainerImpl.java:75)
    at java.util.Vector.forEach (Vector.java:1387)
    at com.gkatzioura.maven.cloud.listener.TransferListenerContainerImpl.fireTransferProgress (TransferListenerContainerImpl.java:75)
    at com.gkatzioura.maven.cloud.transfer.TransferProgressImpl.progress (TransferProgressImpl.java:36)
    at com.gkatzioura.maven.cloud.transfer.TransferProgressFileInputStream.read (TransferProgressFileInputStream.java:65)
    at com.vorstella.shade.com.google.api.client.util.ByteStreams.copy (ByteStreams.java:51)
    at com.vorstella.shade.com.google.api.client.util.IOUtils.copy (IOUtils.java:94)
    at com.vorstella.shade.com.google.api.client.http.AbstractInputStreamContent.writeTo (AbstractInputStreamContent.java:72)
    at com.vorstella.shade.com.google.api.client.http.MultipartContent.writeTo (MultipartContent.java:107)
    at com.vorstella.shade.com.google.api.client.http.GZipEncoding.encode (GZipEncoding.java:49)
    at com.vorstella.shade.com.google.api.client.http.HttpEncodingStreamingContent.writeTo (HttpEncodingStreamingContent.java:51)
    at com.vorstella.shade.com.google.api.client.http.javanet.NetHttpRequest$DefaultOutputWriter.write (NetHttpRequest.java:76)
    at com.vorstella.shade.com.google.api.client.http.javanet.NetHttpRequest.writeContentToOutputStream (NetHttpRequest.java:156)
    at com.vorstella.shade.com.google.api.client.http.javanet.NetHttpRequest.execute (NetHttpRequest.java:117)
    at com.vorstella.shade.com.google.api.client.http.javanet.NetHttpRequest.execute (NetHttpRequest.java:84)
    at com.vorstella.shade.com.google.api.client.http.HttpRequest.execute (HttpRequest.java:1011)
    at com.vorstella.shade.com.google.api.client.googleapis.media.MediaHttpUploader.executeCurrentRequestWithoutGZip (MediaHttpUploader.java:548)
    at com.vorstella.shade.com.google.api.client.googleapis.media.MediaHttpUploader.executeCurrentRequest (MediaHttpUploader.java:565)
    at com.vorstella.shade.com.google.api.client.googleapis.media.MediaHttpUploader.directUpload (MediaHttpUploader.java:360)
    at com.vorstella.shade.com.google.api.client.googleapis.media.MediaHttpUploader.upload (MediaHttpUploader.java:334)
    at com.vorstella.shade.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed (AbstractGoogleClientRequest.java:508)
    at com.vorstella.shade.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed (AbstractGoogleClientRequest.java:432)
    at com.vorstella.shade.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute (AbstractGoogleClientRequest.java:549)
    at com.vorstella.shade.com.google.cloud.storage.spi.v1.HttpStorageRpc.create (HttpStorageRpc.java:303)
    at com.vorstella.shade.com.google.cloud.storage.StorageImpl.create (StorageImpl.java:161)
    at com.gkatzioura.maven.cloud.gcs.GoogleStorageRepository.put (GoogleStorageRepository.java:101)
    at com.gkatzioura.maven.cloud.gcs.GoogleStorageWagon.put (GoogleStorageWagon.java:87)
    at org.eclipse.aether.transport.wagon.WagonTransporter$PutTaskRunner.run (WagonTransporter.java:686)
    at org.eclipse.aether.transport.wagon.WagonTransporter.execute (WagonTransporter.java:435)
    at org.eclipse.aether.transport.wagon.WagonTransporter.put (WagonTransporter.java:418)
    at org.eclipse.aether.connector.basic.BasicRepositoryConnector$PutTaskRunner.runTask (BasicRepositoryConnector.java:554)
    at org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run (BasicRepositoryConnector.java:363)
    at org.eclipse.aether.connector.basic.BasicRepositoryConnector.put (BasicRepositoryConnector.java:287)
    at org.eclipse.aether.internal.impl.DefaultDeployer.deploy (DefaultDeployer.java:295)
    at org.eclipse.aether.internal.impl.DefaultDeployer.deploy (DefaultDeployer.java:211)
    at org.eclipse.aether.internal.impl.DefaultRepositorySystem.deploy (DefaultRepositorySystem.java:381)
    at org.apache.maven.artifact.deployer.DefaultArtifactDeployer.deploy (DefaultArtifactDeployer.java:142)
    at org.apache.maven.plugin.deploy.AbstractDeployMojo.deploy (AbstractDeployMojo.java:167)
    at org.apache.maven.plugin.deploy.DeployMojo.execute (DeployMojo.java:149)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:566)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
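
The top frames (ByteBuffer.wrap inside WagonTransferListener.transferProgress) suggest the listener is being handed a length larger than the buffer it receives. That is an assumption about the root cause, not a confirmed diagnosis; the sketch below (safeWrap is a hypothetical helper, not plugin code) reproduces the failure and shows the kind of clamping guard that would avoid it:

```java
import java.nio.ByteBuffer;

public class TransferProgressGuard {

    // Hypothetical guard: clamp the reported length to the buffer's
    // actual size before wrapping, so wrap() can never go out of bounds.
    public static ByteBuffer safeWrap(byte[] buffer, int reportedLength) {
        int length = Math.max(0, Math.min(reportedLength, buffer.length));
        return ByteBuffer.wrap(buffer, 0, length);
    }

    public static void main(String[] args) {
        byte[] buffer = new byte[16];
        try {
            // Reported length exceeds the buffer size, as in the trace above.
            ByteBuffer.wrap(buffer, 0, 32);
        } catch (IndexOutOfBoundsException e) {
            System.out.println("wrap failed as in the stack trace");
        }
        System.out.println("clamped remaining=" + safeWrap(buffer, 32).remaining());
    }
}
```

If this is indeed the bug, the fix would belong where fireTransferProgress reports the byte count to the wagon listeners.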

Cannot resolve Parent artifact from Azure

Hi there,

thank you very much for your plugin, it looks quite promising and is exactly what I need right now!

There's one problem I have encountered, though. I have a Maven POM, which in turn points to another Maven artefact that is also hosted on the Azure Maven repo. In that case, resolution does not work.

Steps to reproduce:

  • Create and deploy a Maven project A (type=POM)
  • Create a Maven project B, and reference A as parent
  • Register the azure-storage-wagon extension in extensions.xml as per http://maven.apache.org/docs/3.3.1/release-notes.html (I simply put the file into the .m2 directory of my user account)
    • Otherwise the resolution of the parent cannot work anyway, because the extension is loaded too late!
  • Execute mvn clean package on project B

The following error message appears:
[ERROR] Non-resolvable parent POM for xyz:yadda:0.0.1-SNAPSHOT: Could not transfer artifact xyz:abc-parent:pom:0.1.0 from/to az-releases (bs://az-mvnrepository/releases): Cannot access bs://az-mvnrepository/releases with type default using the available connector factories: BasicRepositoryConnectorFactory and 'parent.relativePath' points at wrong local POM @ line 6, column 10: Cannot access bs://az-mvnrepository/releases using the registered transporter factories: WagonTransporterFactory: java.util.NoSuchElementException

Please let me know if you need any more infos to reproduce this. Thanks in advance and keep up the good work!

Kind regards,
Philipp

No content length specified for stream data.

Thank you for this nice project.

The warning below was shown while running the mvn deploy command.

Feb 16, 2019 12:49:17 PM com.amazonaws.services.s3.AmazonS3Client putObject
WARNING: No content length specified for stream data.  Stream contents will be buffered in memory and could result in out of memory errors.

Could we fix this warning?

No snapshot versions listed in maven-metadata.xml

Thanks for producing this extension!

I'm unable to download snapshot versions I've published to an S3 bucket using this extension. The maven-metadata.xml file from my published snapshot contains:

<?xml version="1.0" encoding="UTF-8"?>
<metadata>
  <groupId>foo.bar</groupId>
  <artifactId>my-artifact</artifactId>
  <versioning>
    <versions>
      <version>0.0.1-SNAPSHOT</version>
    </versions>
    <lastUpdated>20190311191628</lastUpdated>
  </versioning>
</metadata>

Based on details in https://blog.packagecloud.io/eng/2017/03/09/how-does-a-maven-repository-work/ and https://maven.apache.org/ref/3.5.3/maven-repository-metadata/repository-metadata.html it seems there should be more information here about the latest snapshot.

Am I doing something wrong when I'm deploying the artifact? If there's any more diagnostics I can do on this end, please let me know!
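
For comparison, the version-level maven-metadata.xml that a standard deploy writes under the 0.0.1-SNAPSHOT directory normally carries a &lt;snapshot&gt; section and a &lt;snapshotVersions&gt; list; the values below are illustrative, not taken from the bucket:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- illustrative shape only; timestamp/buildNumber values are made up -->
<metadata modelVersion="1.1.0">
  <groupId>foo.bar</groupId>
  <artifactId>my-artifact</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <versioning>
    <snapshot>
      <timestamp>20190311.191628</timestamp>
      <buildNumber>1</buildNumber>
    </snapshot>
    <lastUpdated>20190311191628</lastUpdated>
    <snapshotVersions>
      <snapshotVersion>
        <extension>jar</extension>
        <value>0.0.1-20190311.191628-1</value>
        <updated>20190311191628</updated>
      </snapshotVersion>
    </snapshotVersions>
  </versioning>
</metadata>
```

If the extension never writes this file, clients have no way to resolve the timestamped snapshot artifacts, which would explain the failed downloads.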

Specify Bucket Region in build extension XML

I'd like to specify the bucket region in my pom.xml so that all the config is together and easy to understand. I can't think of a reason why that would be a bad idea, something like this:

<extensions>
  <extension>
    <groupId>com.gkatzioura.maven.cloud</groupId>
    <artifactId>s3-storage-wagon</artifactId>
    <version>1.6</version>
    <configuration>
      <s3-default-region>eu-west-1</s3-default-region>
    </configuration>
  </extension>
</extensions>

I'm not sure whether it's possible to specify configuration like this in Maven extensions. We could also make it possible to specify a region per repository id, but I suspect this would be sufficient for most use cases.

Good/bad idea?

Cannot fetch an artifact without s3:ListAllMyBuckets policy action

Hi,

Thanks for this cool extension! โค
I am experiencing one little issue though: I am not able to fetch an artifact without this specific permission:

s3:ListAllMyBuckets

I tested with the same user: connected to the console UI I am able to browse and download, but if I remove this specific permission, s3-storage-wagon can no longer fetch my artifacts.

It's a permission I would prefer not to grant to all my users, as they should not be able to see my other buckets.
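
For what it's worth, a policy scoped like the sketch below (the bucket name is a placeholder) is what one would normally expect to be sufficient for downloads; if the wagon still fails with only this, that would suggest it issues an account-wide bucket-listing call during connection:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-maven-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-maven-bucket/*"
    }
  ]
}
```

Note that ListBucket applies to the bucket ARN while GetObject applies to the object ARNs; mixing them up is a common cause of surprise denials.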

Did anyone notice that?

Thanks,

Charlie

NullPointerException originating from GoogleStorageRepository.createStorage()

Error thrown by the plugin:
Please configure you google cloud account by logging using gcloud and specify a default project

Reason: keyPath is null

Is there a definite way to pass the value of keyPath?
I am getting this issue in a project that is trying to download artifacts from Google buckets.

I am authenticated with GCP.
I have set the value of GOOGLE_APPLICATION_CREDENTIALS correctly. Since keyPath is null, the code never falls through to the default.

I have tried providing the value of keyPath in my pom.xml like this:
<plugin>
    <groupId>com.gkatzioura.maven.cloud</groupId>
    <artifactId>google-storage-wagon</artifactId>
    <version>2.3</version>
    <executions>
        <execution>
            <id>download</id>
            <phase>install</phase>
            <configuration>
                <keyPath>../src/test/resources/serviceacc.json</keyPath>
            </configuration>
            <goals>
                <goal>gcs-download</goal>
            </goals>
        </execution>
    </executions>
</plugin>
Plugin Version: 2.3

S3StorageWagon: Make API credentials optional

Hi,

AWS allows an EC2 machine to access resources even when there aren't any API credentials present in the system, if the machine's IAM role contains permissions to access those resources.

It seems that S3StorageWagon makes providing API keys mandatory.
Is there currently a way I can force it to ignore authenticating if the credentials are not provided?

The API requests will still go through successfully because this wagon will be running in an EC2 machine that has IAM permissions to access the bucket, so API keys are not required.

If currently not possible, I'd like to submit a fix if you're open to it.

I'm using S3StorageWagon v1.0.

include aws-java-sdk-sts as a dependency to support assume role profiles

We make heavy use of roles in our AWS profiles. When I specify a profile with a role I get this error:

Could not authenticate: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), com.amazonaws.auth.profile.ProfileCredentialsProvider@662713b6: To use assume role profiles the aws-java-sdk-sts module must be on the class path., com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@558127d2: Unable to load credentials from service endpoint] -> [Help 1]

Note in particular: To use assume role profiles the aws-java-sdk-sts module must be on the class path.

This error comes from here: https://github.com/aws/aws-sdk-java/blob/7bf0fbec42de8a7d54875510f27fb363ca2d19f5/aws-java-sdk-core/src/main/java/com/amazonaws/auth/profile/internal/securitytoken/STSProfileCredentialsServiceProvider.java#L57

The code that does the real work is here: https://github.com/aws/aws-sdk-java/blob/7bf0fbec42de8a7d54875510f27fb363ca2d19f5/aws-java-sdk-sts/src/main/java/com/amazonaws/services/securitytoken/internal/STSProfileCredentialsService.java

Would you be amenable to adding aws-java-sdk-sts as a dependency? I'd be happy to submit a PR if you'd look at it.
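The requested fix would be a small addition to the wagon's own pom.xml, roughly like this (the version property name is illustrative; it should match the aws-java-sdk-s3 version already in use):

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-sts</artifactId>
    <version>${aws-sdk.version}</version>
</dependency>
```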

Shade dependencies

First, thank you for this project. I'm trying to use it in Gradle to upload artifacts to GCS, but I'm getting conflicts between the Guava version pulled in by your project and transitive dependencies, I think from the Maven plugin. Can we shade the dependencies for this project?

Caused by: java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
        at com.google.cloud.storage.StorageImpl.optionMap(StorageImpl.java:1028)
        at com.google.cloud.storage.StorageImpl.optionMap(StorageImpl.java:1020)
        at com.google.cloud.storage.StorageImpl.optionMap(StorageImpl.java:1050)
        at com.google.cloud.storage.StorageImpl.list(StorageImpl.java:257)
        at com.gkatzioura.maven.cloud.gcs.GoogleStorageRepository.connect(GoogleStorageRepository.java:55)
        at com.gkatzioura.maven.cloud.gcs.GoogleStorageWagon.connect(GoogleStorageWagon.java:135)
        at org.sonatype.aether.connector.wagon.WagonRepositoryConnector.connectWagon(WagonRepositoryConnector.java:345)
        at org.sonatype.aether.connector.wagon.WagonRepositoryConnector.pollWagon(WagonRepositoryConnector.java:385)
        at org.sonatype.aether.connector.wagon.WagonRepositoryConnector$PutTask.run(WagonRepositoryConnector.java:803)
        at org.sonatype.aether.connector.wagon.WagonRepositoryConnector.put(WagonRepositoryConnector.java:467)
...
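Shading would mean building the wagon with something like the maven-shade-plugin and relocating the clashing packages. A sketch (the relocation list would need to cover every conflicting dependency, not just Guava):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <relocations>
                    <relocation>
                        <pattern>com.google.common</pattern>
                        <shadedPattern>com.gkatzioura.maven.cloud.shaded.com.google.common</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>
```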

s3-storage-wagon store artifacts for public read

Is it possible to add an option to publish artifacts to S3 with the ACL set to allow public read? This would allow users to download my artifacts without depending on the s3 storage wagon or having an access policy to my AWS account -- their projects could be simply set up with an HTTP link to the repository.

This would be as simple as adding .withCannedAcl(CannedAccessControlList.PublicRead) to the PutObjectRequest in the S3StorageRepository class. I'm not too familiar with maven extensions, so I don't know how/if it is possible to configure an extension with parameters to make this an option.

Hard-coded "/" as path separator

Uploading to a bucket on Windows results in keys like "\a\b\c.txt". S3 uses / as its prefix separator, so the upload should be made platform-independent by converting java.io.File.separator to / when building keys.
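A minimal sketch of the normalization (S3KeyUtil and toS3Key are hypothetical names, not the wagon's actual API):

```java
import java.io.File;

public class S3KeyUtil {

    // Normalize a platform-specific relative path into an S3 key.
    // S3 always uses "/" as its prefix separator, so the platform's
    // File.separatorChar (and any literal backslashes) must be replaced.
    static String toS3Key(String relativePath) {
        return relativePath.replace(File.separatorChar, '/').replace('\\', '/');
    }

    public static void main(String[] args) {
        // Produces "a/b/c.txt" on any platform.
        System.out.println(toS3Key("a" + File.separator + "b" + File.separator + "c.txt"));
    }
}
```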

The Maven build fails with version 2.1 of the wagon, but not with 1.8 (1.9 and 2.0 also do not cause the issue).

Here's the Maven error:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-clean-plugin:2.6.1:clean (default-clean) on project xxxx: Execution default-clean of goal org.apache.maven.plugins:maven-clean-plugin:2.6.1:clean failed: Unable to load the mojo 'clean' (or one of its required components) from the plugin 'org.apache.maven.plugins:maven-clean-plugin:2.6.1': java.util.NoSuchElementException
[ERROR]       role: org.apache.maven.plugin.Mojo
[ERROR]   roleHint: org.apache.maven.plugins:maven-clean-plugin:2.6.1:clean
[ERROR] -> [Help 1]
[ERROR] 

AWS profile definition on server level

Issue for pull request: #69

I need to use two repositories from different accounts. I don't want to put secrets in settings.xml because I have them in .aws/credentials. So the idea was to put awsProfile into the configuration. :)
It is working now.

LinkageError when using google-storage-wagon with .mvn/extensions.xml

I have a project that requires a parent POM from a Google Cloud Maven repo.

When a "parent" must be downloaded using a custom "wagon", the "wagon extension" has to be defined in ${maven.projectBasedir}/.mvn/extensions.xml.

My ${maven.projectBasedir}/.mvn/extensions.xml looks like :

<extensions xmlns="http://maven.apache.org/EXTENSIONS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/EXTENSIONS/1.0.0 http://maven.apache.org/xsd/core-extensions-1.0.0.xsd">
    <extension>
        <groupId>com.gkatzioura.maven.cloud</groupId>
        <artifactId>google-storage-wagon</artifactId>
        <version>1.5</version>
    </extension>
</extensions>

And I get this error :

...
[INFO] Scanning for projects...
[WARNING] Error injecting: com.gkatzioura.maven.cloud.gcs.GoogleStorageWagon
java.lang.LinkageError: loader constraint violation: when resolving method "org.slf4j.impl.StaticLoggerBinder.getLoggerFactory()Lorg/slf4j/ILoggerFactory;" the class loader (instance of org/codehaus/plexus/classworlds/realm/ClassRealm) of the current class, org/slf4j/LoggerFactory, and the class loader (instance of org/codehaus/plexus/classworlds/realm/ClassRealm) for the method's defining class, org/slf4j/impl/StaticLoggerBinder, have different Class objects for the type org/slf4j/ILoggerFactory used in the signature
    at org.slf4j.LoggerFactory.getILoggerFactory (LoggerFactory.java:418)
    at org.slf4j.LoggerFactory.getLogger (LoggerFactory.java:357)
    at org.slf4j.LoggerFactory.getLogger (LoggerFactory.java:383)
    at com.gkatzioura.maven.cloud.wagon.AbstractStorageWagon.<clinit> (AbstractStorageWagon.java:55)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0 (Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance (NativeConstructorAccessorImpl.java:62)
...

I'm using Java 8 and Maven 3.5.4.

Content length not specified for S3 PutObject

Using this wagon with S3 results in multiple warnings:

WARNING: No content length specified for stream data.  Stream contents will be buffered in memory and could result in out of memory errors.

From what I can tell by looking at the code, the put operations are always performed on File objects, so the size should be known and we should be able to set it before uploading.
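Since File-based puts know their size before the transfer starts, the fix would be to hand that size to the SDK's request metadata. A stdlib-only sketch, with the relevant SDK calls (ObjectMetadata#setContentLength and PutObjectRequest from aws-java-sdk-s3) shown only in comments:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class ContentLengthSketch {

    // The length of a File is known before the upload begins. This is the
    // value the wagon could pass to the SDK to avoid the warning, e.g.:
    //   ObjectMetadata metadata = new ObjectMetadata();
    //   metadata.setContentLength(contentLengthOf(file));
    //   s3.putObject(new PutObjectRequest(bucket, key, stream, metadata));
    static long contentLengthOf(File file) {
        return file.length();
    }

    public static void main(String[] args) throws IOException {
        File artifact = Files.createTempFile("artifact", ".jar").toFile();
        Files.write(artifact.toPath(), new byte[1024]);
        System.out.println(contentLengthOf(artifact)); // prints 1024
        artifact.delete();
    }
}
```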

Using a corporate proxy

Is it possible to use a corporate proxy? Currently the wagon does not pick up my HTTPS_PROXY environment setting.
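As far as I know, the AWS SDK for Java does not read the HTTPS_PROXY environment variable, but recent v1 SDK versions appear to fall back to the JVM's https.proxyHost/https.proxyPort system properties, so forwarding them via MAVEN_OPTS may work (a sketch; proxy.example.com:3128 is a placeholder for your corporate proxy):

```
export MAVEN_OPTS="-Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128"
mvn deploy
```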

Error: "The specified key does not exist" when passing S3 prefix as the only value of <keys> field.

When I pass the S3 prefix of the objects that I want downloaded before packaging, I get the following error:

Failed to execute goal com.gkatzioura.maven.cloud:s3-storage-wagon:2.3:s3-download (download-multiple-files-to-one-directory) on project TestProject: Execution download-multiple-files-to-one-directory of goal com.gkatzioura.maven.cloud:s3-storage-wagon:2.3:s3-download failed: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: REDACTED; S3 Extended Request ID: REDACTED) -> [Help 1]

The execution config is as follows:

<execution>
    <id>download-multiple-files-to-one-directory</id>
    <phase>prepare-package</phase>
    <goals>
        <goal>s3-download</goal>
    </goals>
    <configuration>
        <bucket>plugins-bucket</bucket>
        <downloadPath>WebContent\WEB-INF</downloadPath>
        <keys>Plugins</keys>
    </configuration>
</execution>

Going through the code in S3DownloadMojo.java I have come across the following lines:

if (keys.size()==1) {
    downloadSingleFile(amazonS3,keys.get(0));
    return;
}

Here downloadSingleFile() downloads only a single object when exactly one key is given. Could this be enhanced to recursively fetch all objects under the given prefix, perhaps via another configuration parameter?

GCP error on version 2.3

I followed the guide posted here https://egkatzioura.com/2018/04/09/host-your-maven-artifacts-using-google-cloud-storage/ to set up deployment of my project to Google Cloud Storage.

It worked well, but then I updated the version of this plugin to 2.3 and got this error:

Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project commons: Failed to deploy artifacts: Could not transfer artifact net.agroop:commons:jar:1.0.1 from/to my-repo-bucket-release (gs://agroop-mvn/release): Please configure you google cloud account by logging using gcloud and specify a default project

I don't know if something needs to change in the config or it's a bug.

For now I'll keep using 1.0.

Errors deploying with version 2.3

Using version 2.3 as a build extension, i.e.:

<extension>
    <groupId>com.gkatzioura.maven.cloud</groupId>
    <artifactId>google-storage-wagon</artifactId>
    <version>2.3</version>
</extension>

generates an error, following a call to mvn -e deploy:

org.apache.maven.lifecycle.LifecycleExecutionException: Internal error in the plugin manager getting plugin 'org.apache.maven.plugins:maven-resources-plugin': Plugin 'org.apache.maven.plugins:maven-resources-plugin:2.3' has an invalid descriptor:
1) Plugin's descriptor contains the wrong group ID: com.gkatzioura.maven.cloud
2) Plugin's descriptor contains the wrong artifact ID: google-storage-wagon
	at org.apache.maven.lifecycle.DefaultLifecycleExecutor.verifyPlugin(DefaultLifecycleExecutor.java:1544)
	at org.apache.maven.lifecycle.DefaultLifecycleExecutor.getMojoDescriptor(DefaultLifecycleExecutor.java:1851)
	at org.apache.maven.lifecycle.DefaultLifecycleExecutor.bindLifecycleForPackaging(DefaultLifecycleExecutor.java:1311)
	at org.apache.maven.lifecycle.DefaultLifecycleExecutor.constructLifecycleMappings(DefaultLifecycleExecutor.java:1275)
	at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:534)
	at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
	at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
	at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
	at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
	at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
	at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
	at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
	at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
	at org.codehaus.classworlds.Launcher.main(Launcher.java:375)

Note: deleting folder ~/.m2/repository/com/gkatzioura/ does not fix the issue.

S3StorageWagon vs MinIO

I was trying to use S3StorageWagon against my own MinIO instance, but I didn't find a way to configure a custom endpoint, even though it seems to be supported. Is there any doc/example about that?
