
aws / aws-sdk-php


Official repository of the AWS SDK for PHP (@awsforphp)

Home Page: http://aws.amazon.com/sdkforphp

License: Apache License 2.0

Makefile 0.01% PHP 98.89% Gherkin 0.04% Roff 1.06%

aws-sdk-php's Introduction

AWS SDK for PHP - Version 3


The AWS SDK for PHP makes it easy for developers to access Amazon Web Services in their PHP code, and build robust applications and software using services like Amazon S3, Amazon DynamoDB, Amazon Glacier, etc. You can get started in minutes by installing the SDK through Composer or by downloading a single zip or phar file from our latest release.


Getting Started

  1. Sign up for AWS – Before you begin, you need to sign up for an AWS account and retrieve your AWS credentials.
  2. Minimum requirements – To run the SDK, your system will need to meet the minimum requirements, including having PHP >= 7.2.5. We highly recommend having PHP compiled with the cURL extension and cURL 7.16.2+ compiled with a TLS backend (e.g., NSS or OpenSSL).
  3. Install the SDK – Using Composer is the recommended way to install the AWS SDK for PHP. The SDK is available via Packagist under the aws/aws-sdk-php package. If Composer is installed globally on your system, you can run the following in the base directory of your project to add the SDK as a dependency:
    composer require aws/aws-sdk-php
    
    Please see the Installation section of the User Guide for more detailed information about installing the SDK through Composer and other means.
  4. Using the SDK – The best way to become familiar with how to use the SDK is to read the User Guide. The Getting Started Guide will help you become familiar with the basic concepts.
  5. Beta: Removing unused services — To date, there are over 300 AWS services available for use with this SDK, and you will likely not need them all. If you use Composer and would like to learn more about this feature, please read the linked documentation; a sketch of the Composer wiring is shown below.
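
For reference, a minimal sketch of the composer.json wiring this Beta feature uses, assuming the documented pre-autoload-dump hook; the service names listed under extra are illustrative and name the services you want to keep:

    {
        "require": {
            "aws/aws-sdk-php": "^3.0"
        },
        "scripts": {
            "pre-autoload-dump": "Aws\\Script\\Composer\\Composer::removeUnusedServices"
        },
        "extra": {
            "aws/aws-sdk-php": ["Ec2", "CloudWatch"]
        }
    }

Running composer update or composer install after adding this should then strip the unlisted services from the installed package, shrinking the vendor footprint.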

Quick Examples

Create an Amazon S3 client

<?php
// Require the Composer autoloader.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Instantiate an Amazon S3 client.
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-west-2'
]);

Upload a file to Amazon S3

<?php
// Upload a publicly accessible file. The file size and type are determined by the SDK.
try {
    $s3->putObject([
        'Bucket' => 'my-bucket',
        'Key'    => 'my-object',
        'Body'   => fopen('/path/to/file', 'r'),
        'ACL'    => 'public-read',
    ]);
} catch (Aws\S3\Exception\S3Exception $e) {
    echo "There was an error uploading the file.\n";
}

Getting Help

Please use these community resources for getting help. We use the GitHub issues for tracking bugs and feature requests and have limited bandwidth to address them.

This SDK implements AWS service APIs. For general issues regarding the AWS services and their limitations, you may also take a look at the Amazon Web Services Discussion Forums.

Maintenance and support for SDK major versions

For information about maintenance and support for SDK major versions and their underlying dependencies, see the AWS SDKs and Tools Shared Configuration and Credentials Reference Guide.

Opening Issues

If you encounter a bug with aws-sdk-php we would like to hear about it. Search the existing issues and try to make sure your problem doesn’t already exist before opening a new issue. It’s helpful if you include the version of aws-sdk-php, PHP version and OS you’re using. Please include a stack trace and a simple workflow to reproduce the case when appropriate, too.

The GitHub issues are intended for bug reports and feature requests. For help and questions with using aws-sdk-php please make use of the resources listed in the Getting Help section. There are limited resources available for handling issues and by keeping the list of open issues lean we can respond in a timely manner.


Contributing

We work hard to provide a high-quality and useful SDK for our AWS services, and we greatly value feedback and contributions from our community. Please review our contributing guidelines before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your bug report or contribution.

Resources

  • User Guide – For both getting started and in-depth SDK usage information
  • API Docs – For details about operations, parameters, and responses
  • Blog – Tips & tricks, articles, and announcements
  • Sample Project - A quick sample project to help get you started
  • Forum – Ask questions, get help, and give feedback
  • Issues – Report issues, submit pull requests, and get involved (see Apache 2.0 License)



aws-sdk-php's Issues

ec2 Client does not honor parameters

I briefly ran through the code, but it looks like params are not being passed by the EC2 client (the only PostField for curl on the stack was the 'describeInstances' method name). For example:

$response = $ec2->describeInstances(array(
    'Filter' => array(
        // So AWS isn't exactly honoring this filter, will need to double check below
        array(
            'Name' => 'instance-state-name',
            'Value' => $filterStopped ? 'running' : '*'
        ),
        array(
            'Name' => 'tag-key',
            'Value' => $tag
        )
    )
));

This returns the entire corpus of our EC2 account and does not do any filtering.
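
For reference, the shape the EC2 API validates against uses the plural Filters and Values keys; a minimal sketch under that assumption, reusing the variables above:

    // Sketch: 'Filters' and 'Values' (both plural); each filter carries a
    // list of acceptable values rather than a single 'Value' string.
    $response = $ec2->describeInstances(array(
        'Filters' => array(
            array('Name' => 'instance-state-name', 'Values' => array('running')),
            array('Name' => 'tag-key', 'Values' => array($tag)),
        ),
    ));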

SNS: How to verify incoming notification's authenticity

I've set up an HTTP endpoint and confirmed it. I'm wondering if the PHP 2 SDK has any means of checking whatever comes through the endpoint and verifying the notification's signature so that we know it's legit.

(Note that at the time of this writing the SDK's SNS docs are still under construction, which is why I'm asking.)
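
For what it's worth, signature verification later shipped as the separate aws/aws-php-sns-message-validator package rather than in the SDK core; a minimal sketch of its documented usage:

    <?php
    // Assumes the aws/aws-php-sns-message-validator package is installed.
    require 'vendor/autoload.php';

    use Aws\Sns\Message;
    use Aws\Sns\MessageValidator;

    // Parse the raw POST body that SNS delivered to the endpoint.
    $message = Message::fromRawPostData();

    // Check the message signature against Amazon's signing certificate.
    $validator = new MessageValidator();
    if ($validator->isValid($message)) {
        // The notification is authentic; safe to process.
    }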

Uploading an Empty File

If you try to upload an empty file via the UploadBuilder, you currently get a

The XML you provided was not well-formed or did not validate against our published schema

exception. I'm not sure if this is generally not supported, but some sort of better exception message would be cool.

Class 'Aws\Common\Signature\SignatureV4' not found

When trying to send an email using AmazonSES, I get this error

Fatal error: Class 'Aws\Common\Signature\SignatureV4' not found in phar:///var/www/xxx/xxx/aws-sdk-v2/aws.phar/src/Aws/Common/Client/ClientBuilder.php on line 393

The phar file was downloaded from the AWS site yesterday.

UploadBuilder example does not work for me

Hello,

When I use the code given in the Readme to upload big files, I encounter the same error every time:

require 'vendor/autoload.php';

use Aws\Common\Aws;
use Aws\Common\Enum\Size;
use Aws\S3\Enum\CannedAcl;
use Aws\Common\Enum\Region;
use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\Model\MultipartUpload\UploadBuilder;

$s3 = Aws::factory('config.php')->get('s3');

$transfer = UploadBuilder::newInstance()
    ->setClient($s3)
    ->setSource('funny.jpg')
    ->setBucket('sweet-kittens')
    ->setKey('my-object-key-name')
    ->setMinPartSize(1 * Size::MB)
    ->build();

try {
    $transfer->upload();
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . "\n";
    $transfer->abort();
}

results in:

An error was encountered while performing a multipart upload: One or more of the specified parts could not be found.  The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.

Whereas this code works perfectly fine:

require 'vendor/autoload.php';

use Aws\Common\Aws;
use Aws\Common\Enum\Size;
use Aws\S3\Enum\CannedAcl;
use Aws\Common\Enum\Region;
use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\Model\MultipartUpload\UploadBuilder;

$s3 = Aws::factory('config.php')->get('s3');

$s3->putObject(array(
    'Bucket' => 'sweet-kittens',
    'Key'    => 'my-object-key-name',
    'Body'   => file_get_contents('./funny.jpg'),
    'ACL'    => CannedAcl::PRIVATE_ACCESS
));

What am I doing wrong?

S3 Inconsistent handling of filenames with spaces

You can try this at home.

Put a file on any S3 bucket with spaces in the name - call it "file with spaces.png" for argument's sake. We sort them by userid.

$userID = "todd";
$file = "file with spaces.png";
$file2 = urlencode($file); // "file+with+spaces.png"
$bucket = 'your-bucket'; // I have to think of everything? Use your own bucket.

$s3->doesObjectExist($bucket, "{$userID}/{$file}"); // returns true
$s3->doesObjectExist($bucket, "{$userID}/{$file2}"); // returns false

so when asking S3 about a file - do NOT urlencode the file name....
however (here it comes)

$key = "{$bucket}/{$userID}/{$file}"; // S3 says this exists so grab a url for it
$url = $s3->createPresignedUrl($s3->get($key), '+5 minutes');

which returns something like

https://aws.amazon.com/bucket/todd/file with spaces.png?bunch-of-credentials

which is not a valid url because it has spaces in it. The highlighting makes this obvious. Fortunately your browser will fix it for you and you will see

https://aws.amazon.com/bucket/todd/file+with+spaces.png?bunch-of-credentials
which is nice, but this isn't the string the credentials match, so you will get a bunch of ugly XML saying something to the effect of "SignatureDoesNotMatch".

The trick, the workaround, the hack is that you have to url encode the components in the url (but not the whole url or it will encode the slashes and you're screwed) before asking for a signed version of the url.

$key2 = urlencode($bucket) . '/' . urlencode($userID) . '/' . urlencode($file);

which when passed to
$url = $s3->createPresignedUrl($s3->get($key2), '+5 minutes');
returns something like
https://aws.amazon.com/bucket/todd/file+with+spaces.png?bunch-of-credentials

which matches the credentials and ultimately works.

But the handling of the functions seems odd - seems like I should be able to use the same kind of paths for both calls.

S3 PutObjectCopy not functional

Attempting to use PutObjectCopy ($s3->copyObject) throws the following error:

PHP Warning:  require(Includes/Aws/S3/Command/PutObjectCopy.php) file not found

require aws.phar fails silently

I am having trouble using the phar file in a trivial script. I have a php script like the one below and when I try to run it from the command line (php script.php) any code after the require 'aws.phar' line does not get executed. It acts like the require aws.phar line was an exit line.

Note: the aws.phar is in the same directory as the script.

<?php
require "aws.phar";
echo "foo\n"; //doesn't execute

WriteRequestBatch example assumes that all attributes are of StringSet type

Hey guys. The current docs branch has a great example of using WriteRequestBatch to add an item. The problem is that the method used in the example - Item's fromArray - will assume each attribute to be a StringSet. This will leave you with an exception when the type of, e.g., your hash key is not a StringSet (which is exactly the case in your PutItem example).

You should probably include a heads-up with this specific example and mention that the constructor of Item will take attributes in the (old) array format which has type definitions.
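
A hedged sketch of the distinction the reporter describes, against the v2-era Aws\DynamoDb\Model\Item API (attribute names are illustrative, and the constructor behavior is as claimed above, not independently verified):

    use Aws\DynamoDb\Model\Item;

    // fromArray() infers attribute types, which is what mis-types the hash key:
    $inferred = Item::fromArray(array('id' => '123'));

    // The constructor takes the explicit (old) format with type definitions:
    $explicit = new Item(array(
        'id'   => array('S' => '123'),
        'tags' => array('SS' => array('a', 'b')),
    ));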

Curl Error

I'm using the PHP SDK2 for glacier API access; yesterday my script was working fine, today it's not.

I'm receiving the following error:

PHP Fatal error: Uncaught exception 'Guzzle\Http\Exception\CurlException' with message 'curl 6: name lookup timed out [url] https://glacier.us-west-1.amazonaws.com/-/vaults/********** info array (
'url' => 'https://glacier.us-west-1.amazonaws.com/-/vaults/',
'content_type' => NULL,
'http_code' => 0,
'header_size' => 0,
'request_size' => 0,
'filetime' => -1,
'ssl_verify_result' => 0,
'redirect_count' => 0,
'total_time' => 0,
'namelookup_time' => 0,
'connect_time' => 0,
'pretransfer_time' => 0,
'size_upload' => 0,
'size_download' => 0,
'speed_download' => 0,
'speed_upload' => 0,
'download_content_length' => -1,
'upload_content_length' => -1,
'starttransfer_time' => 0,
'redirect_time' => 0,
'certinfo' =>
array (
),
) debug ' in phar:///var/www/clients/
/aws.phar/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php:561

My server is up to date and connected online (I'm able to ping).

sendRawEmail headers not recognized

I'm not able to get sendRawEmail to recognize a simple email. I used the SDK before and the missing header values (date, ID, etc.) were filled in. The error I get back is "SesException: Missing required header 'From'."

    $sesSettings = Config::get('aws');
    $sesSettings['region'] = 'us-east-1';
    $ses = Aws::factory($sesSettings)->get('ses');


    $headers = "From: [email protected]\r\n";
    $headers .= "Reply-To: [email protected]\r\n";
    $headers .= "Subject: Hello\r\n";
    $headers .= "MIME-Version: 1.0\r\n";
    $headers .= "Content-Type: text/html; charset=ISO-8859-1\r\n";
    $headers .= "Content-Transfer-Encoding: 7bit\r\n";
    $headers .= "\r\n";
    $headers .= "PLAIN TEXT MSG";
    $headers .= "\r\n";

    return $ses->sendRawEmail(array('RawMessage' => array('Data' => $headers)));

Use of undefined constant CURLE_COULDNT_RESOLVE_HOST

I have been working with Amazon S3 for media storage for an e-commerce site, but I ran into this error and have no idea how to fix it:

Use of undefined constant CURLE_COULDNT_RESOLVE_HOST - assumed 'CURLE_COULDNT_RESOLVE_HOST'

NEW CurlBackoffStrategy

File: /var/api/vendor/aws/aws-sdk-php/src/Aws/S3/S3Client.php
Line: 188

I'm assuming this is a software incompatibility somewhere, but I'm not sure where to look.

This code does work on my local test environment, but when the server gets it, it just fails. I have updated PHP and curl to the same versions as my test environment, but to no avail.

curl 7.22.0 php 5.4.14-1 are the versions I'm using.

Migrating session handler to v2

I can't get it to work; here is my code.

BTW, I had to add doctrine/common to Composer; that was not in any tutorial anywhere.

use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Session;
use Aws\Common\Enum\Region;
use Guzzle\Cache\DoctrineCacheAdapter;
use Aws\DynamoDb\Exception\DynamoDbException;

// Instantiate the DynamoDB client with your AWS credentials
$client = DynamoDbClient::factory(array(
    'key'    => 'xxx',
    'secret' => 'xxx',
    'region' => Region::US_EAST_1,
    'credentials.cache' => true,
));

$result = $client->registerSessionHandler(array(
    'table_name' => 'my_sessions',
    'hash_key'             => 'userid',
    'session_lifetime'     => 0,
    'consistent_reads'     => true,
    'session_locking'      => false,
    'max_lock_wait_time'   => 15,
    'min_lock_retry_utime' => 5000,
    'max_lock_retry_utime' => 50000
));

echo $result->register(); // this echoes 1, so I assume it's OK

session_start();

...yet this doesn't work. Any clues as to what I am doing wrong?

Error: "Your socket connection to the server was not read from or written to within the timeout period."

On some systems, I get the above error when trying to upload a file to Amazon S3.

Triggering Code:

UploadBuilder::newInstance()
    ->setClient($this->s3)
    ->setSource($url)
    ->setBucket($this->bucketName)
    ->setKey($key)
    ->build()
    ->upload()
;

The error can be prevented by switching the source to be in memory:

UploadBuilder::newInstance()
    // ...
    // Pass the content instead of just the URL
    ->setSource(EntityBody::factory($content))
    // ...
;

Maybe you have seen this before, and have an idea how to fix this automatically?

Complete Stack Trace:

Aws\Common\Exception\MultipartUploadException: An error was encountered while performing a multipart upload: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.

vendor/aws/aws-sdk-php/src/Aws/Common/Model/MultipartUpload/AbstractTransfer.php:177
vendor/aws/aws-sdk-php/src/Aws/Common/Exception/NamespaceExceptionFactory.php:75
vendor/aws/aws-sdk-php/src/Aws/Common/Exception/ExceptionListener.php:55
vendor/symfony/event-dispatcher/Symfony/Component/EventDispatcher/EventDispatcher.php:164
vendor/symfony/event-dispatcher/Symfony/Component/EventDispatcher/EventDispatcher.php:53
vendor/guzzle/guzzle/src/Guzzle/Http/Message/Request.php:757
vendor/guzzle/guzzle/src/Guzzle/Http/Message/Request.php:466
vendor/guzzle/guzzle/src/Guzzle/Http/Message/EntityEnclosingRequest.php:66
vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php:499
vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php:426
vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php:387
vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php:278
vendor/guzzle/guzzle/src/Guzzle/Http/Client.php:363
vendor/guzzle/guzzle/src/Guzzle/Service/Client.php:223
vendor/guzzle/guzzle/src/Guzzle/Service/Command/AbstractCommand.php:167
vendor/guzzle/guzzle/src/Guzzle/Service/Command/AbstractCommand.php:206
vendor/aws/aws-sdk-php/src/Aws/S3/Model/MultipartUpload/SerialTransfer.php:73
vendor/aws/aws-sdk-php/src/Aws/Common/Model/MultipartUpload/AbstractTransfer.php:167

Caused by
Aws\S3\Exception\RequestTimeoutException: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.

vendor/aws/aws-sdk-php/src/Aws/Common/Exception/NamespaceExceptionFactory.php:89
vendor/aws/aws-sdk-php/src/Aws/Common/Exception/NamespaceExceptionFactory.php:75
vendor/aws/aws-sdk-php/src/Aws/Common/Exception/ExceptionListener.php:55
vendor/symfony/event-dispatcher/Symfony/Component/EventDispatcher/EventDispatcher.php:164
vendor/symfony/event-dispatcher/Symfony/Component/EventDispatcher/EventDispatcher.php:53
vendor/guzzle/guzzle/src/Guzzle/Http/Message/Request.php:757
vendor/guzzle/guzzle/src/Guzzle/Http/Message/Request.php:466
vendor/guzzle/guzzle/src/Guzzle/Http/Message/EntityEnclosingRequest.php:66
vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php:499
vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php:426
vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php:387
vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php:278
vendor/guzzle/guzzle/src/Guzzle/Http/Client.php:363
vendor/guzzle/guzzle/src/Guzzle/Service/Client.php:223
vendor/guzzle/guzzle/src/Guzzle/Service/Command/AbstractCommand.php:167
vendor/guzzle/guzzle/src/Guzzle/Service/Command/AbstractCommand.php:206
vendor/aws/aws-sdk-php/src/Aws/S3/Model/MultipartUpload/SerialTransfer.php:73
vendor/aws/aws-sdk-php/src/Aws/Common/Model/MultipartUpload/AbstractTransfer.php:167

Using multiple versions of AWS SDK for PHP in single application

Hi,

I am working in an environment which requires using multiple versions of the AWS SDK for PHP, from v1.4.4 onwards. Basically, we have a large code-base, and it's impractical for us to move everything over to the new API version all at once.

Now, as versions before v2.0 don't have namespaces, I am not sure how I would handle this for them; from v2.0 onwards there are namespaces, but they are all the same (e.g. v2.0 > Aws\Common\Aws and v2.0.2 > Aws\Common\Aws). Is there any way I can use both at the same time? Could Amazon make the namespace structure something like Aws\v2_0\Common\Aws and Aws\v2_0_2\Common\Aws, or provide some other mechanism that allows multiple versions of the same Aws classes to be used at the same time?

https://forums.aws.amazon.com/thread.jspa?threadID=109145&tstart=0

Thanks in advance.
Ashish

Async Plugin support

It would be handy to have clients that easily wired up to Guzzle's Async plugin. In particular I'd like to be able to use the SimpleDbClient asynchronously.

Waiter for EC2 Instance Status Checks

Is there a waiter for this? Currently, I am using 'InstanceRunning'; however, even though the instance is running, I cannot ssh into it. Once the status checks have passed, the instance is ready for me to connect. I currently have to wait until the instance is running, and then sleep(60).

A waiter for the status checks would be more accurate when needing to use new instances.
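
For reference, later SDK releases do expose exactly this; a hedged sketch against the v3-era waiter name (the instance ID is a placeholder):

    // Blocks until the instance passes both EC2 status checks.
    $ec2->waitUntil('InstanceStatusOk', array(
        'InstanceIds' => array('i-1234567890abcdef0'),
    ));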

Mocking Aws\Common\InstanceMetadata\InstanceMetadataClient

Is there a way to mock/stub this in order to develop locally? This calls a URL only available from within an EC2 instance so I'm trying to create a mock but I'm failing miserably.

I'm attempting to do this with PHPUnit but for some reason it is expecting that I send a parameter to send() which is strange since the real object is not expecting this.

Warning: Missing argument 1 for Mock_InstanceMetadataClient_1341b3cc::send(), called in ...IndexController.php line 38
// IndexController.php (lines 38-39)
...
$response = $client->get('user-data')->send();
var_dump($response->json());
...
// StubInstanceMetadata.php
namespace Stub\Amazon;

use PHPUnit_Framework_TestCase;

class StubInstanceMetadata extends PHPUnit_Framework_TestCase
{
    private $_stub;

    public function __construct()
    {

        // Create a stub for the SomeClass class.
        $stub = $this->getMockBuilder('Aws\Common\InstanceMetadata\InstanceMetadataClient')
                     ->disableOriginalConstructor()
                     ->setMethods(array('get', 'send'))
                     ->getMock();

        $stub->expects($this->any())
             ->method('get')
             ->will($this->returnSelf());

        $stub->expects($this->any())
             ->method('send')
             ->will($this->returnValue(new StubInstanceMetadataResponse())); // I have a similar object for the response with expects a ->json() call and returns an array

        $this->_stub = $stub;
    }

    public function getStub()
    {
        return $this->_stub;
    }
}

Bug PHP Late Static Binding on 5.3.2

Hey,

I always get a PHP fatal error in my cloud hosting (PHP 5.3.2) environment (@see https://phpinfo.cloudcontrolled.com/):

PHP Fatal error: Cannot access self:: when no class scope is active in /srv/www/library/vendor/aws/aws-sdk-php/src/Aws/Common/Enum.php on line 50.

After changing code in \Aws\Common\Enum.php from

public static function values()
{
    $class = get_called_class();
    if (!isset(self::$cache[$class])) {
        $reflected = new \ReflectionClass($class);
        self::$cache[$class] = $reflected->getConstants();
    }

    return self::$cache[$class];
}

to (Bugfix)

public static function values()
{
    $class = get_called_class();
    if (!isset(static::$cache[$class])) {
        $reflected = new \ReflectionClass($class);
        static::$cache[$class] = $reflected->getConstants();
    }

    return static::$cache[$class];
}

it also works on PHP 5.3.2

More information at http://php.net/manual/en/language.oop5.late-static-bindings.php

AWS Autoloader fails when using APC with apc.stat = 0

When using APC with directive apc.stat = 0, you'll get PHP Warnings similar to:

PHP Warning: require(): realpath failed to canonicalize phar://aws.phar/vendor/guzzle/guzzle/src/Guzzle/Common/Exception/RuntimeException.php - bailing in phar:///opt/nihongomaster/lib/aws.phar/vendor/symfony/class-loader/Symfony/Component/ClassLoader/UniversalClassLoader.php on line 249

More examples please

The 1.x API docs were awesome in that many of the APIs had inline examples. I know ApiGen does not lend itself to that as easily, but are there any plans for more examples in the 2.x docs themselves?

It would really get people up and going faster and would drive a faster adoption rate (which means less time you have to spend supporting your old 1.x code :)

I use ApiGen as well, and I know it supports multi-line code in docblocks with <code> and the

/---code
<?php echo "hi";
\---

syntax. They also plan on supporting Markdown in a future release...

Best way to upload larger files (i.e. podcasts)?

I’m having difficulties coming up with the best way to do the following. I have a website, where users should be able to upload podcasts. I’m wanting to use Amazon S3 to store the actual MP3 file, but need to also create a corresponding database record.

Normally, I’d just have a HTML form with the relevant fields that submits to a controller, that then creates the record in the database but also moves the file. But as podcasts are larger than your typical image file, I’m going to get either max file size or timeout issues.

What would the normal flow be in this instance? I can’t be the first person with this problem. I’d also like to use version 2 of the PHP SDK, as I’m already using this in my project.

Originally posted at https://forums.aws.amazon.com/thread.jspa?threadID=110093. Posting here for exposure. Apologies for any frustration caused.

DynamoDB Session Handler exceptions

  1. Occasionally I get some exceptions:
    PHP Fatal error: Undefined class constant 'Aws\\Common\\Client\\UserAgentListener::OPTION' in /var/www/dynamodb_session_handler/vendor/aws/aws-sdk-php/src/Aws/DynamoDb/Session/LockingStrategy/AbstractLockingStrategy.php on line 82
    PHP Fatal error: Cannot declare self-referencing constant 'Aws\\Common\\Client\\UserAgentListener::OPTION' in /var/www/dynamodb_session_handler/vendor/aws/aws-sdk-php/src/Aws/DynamoDb/Session/LockingStrategy/AbstractLockingStrategy.php on line 82
  2. Also, more rarely, I've been getting this:

    PHP Fatal error: Uncaught exception 'Guzzle\Http\Exception\CurlException' with message '[curl] 65: necessary data rewind wasn't possible [url] https://dynamodb.us-east-1.amazonaws.com/' in /var/www/dynamodb_session_handler/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php:595
    Stack trace:
    #0 /var/www/dynamodb_session_handler/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php(526): Guzzle\Http\Curl\CurlMulti->isCurlException(Object(Guzzle\Http\Message\EntityEnclosingRequest), Object(Guzzle\Http\Curl\CurlHandle), Array)
    #1 /var/www/dynamodb_session_handler/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php(458): Guzzle\Http\Curl\CurlMulti->processResponse(Object(Guzzle\Http\Message\EntityEnclosingRequest), Object(Guzzle\Http\Curl\CurlHandle), Array)
    #2 /var/www/dynamodb_session_handler/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php(418): Guzzle\Http\Curl\CurlMulti->processMessages()
    #3 /var/www/dynamodb_session_handler/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php(278): Guzzle\Http\Curl\CurlMu in /var/www/dynamodb_session_handler/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php on line 595

What would cause this? I'm on PHP 5.3.19...

Uploading an object which has %20 in its key fails with SignatureDoesNotMatch error

Code to reproduce the issue:

<?php

require 'lib/aws/vendor/autoload.php';
use Aws\Common\Aws;
use Aws\Common\Enum\Region;
use Aws\S3\Enum\CannedAcl;
use Aws\S3\Exception\S3Exception;

error_reporting(E_ALL);
ini_set('display_errors', 'on');

$aws = Aws::factory(array(
    'key'    => '...',
    'secret' => '...',
    'ssl.certificate_authority' => 'lib/cacert.pem',
));
$client = $aws->get('s3');

try 
{
    $client->putObject(array(
        'Bucket' => 'mybucket',
        'Key'    => 'myfolder/te st%20test.txt',
        'Body'   => '2',
        'ACL'    => CannedAcl::PUBLIC_READ,
    ));
} 
catch (S3Exception $e) 
{
    echo $e;
}

?>

Result:
Aws\S3\Exception\SignatureDoesNotMatchException: AWS Error Code: SignatureDoesNotMatch, Status Code: 403, AWS Request ID: 598A15D6B6FAF312, AWS Error Type: client, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method.

(I'll be committing a fix in a bit)

Using PHP SDK2 for SQS

Is it possible to use the AWS PHP SDK2 for Amazon SQS? If so, is there any sample code or documentation on getting started with such?
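
For context, SQS is supported through the same client factory pattern as the other services; a hedged minimal sketch against the v2-era API (the credentials and queue URL are placeholders):

    <?php
    require 'vendor/autoload.php';

    use Aws\Sqs\SqsClient;

    // Instantiate an SQS client (placeholder credentials).
    $sqs = SqsClient::factory(array(
        'key'    => 'YOUR_KEY',
        'secret' => 'YOUR_SECRET',
        'region' => 'us-east-1',
    ));

    // Send a message to an existing queue (placeholder queue URL).
    $sqs->sendMessage(array(
        'QueueUrl'    => 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue',
        'MessageBody' => 'Hello, queue!',
    ));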

Example of config

In example of file upload:
$s3 = Aws::factory('/path/to/config.php')->get('s3');
And where is that config.php? Any example - how to write it?
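
A hedged sketch of what that config.php can contain, following the v2 service-builder format from the SDK's documentation (the credentials and region are placeholders):

    <?php
    // config.php - consumed by Aws\Common\Aws::factory('/path/to/config.php').
    return array(
        // Pull in the SDK's bundled service definitions.
        'includes' => array('_aws'),
        'services' => array(
            // Settings applied to every client built by the service builder.
            'default_settings' => array(
                'params' => array(
                    'key'    => 'YOUR_AWS_ACCESS_KEY_ID',
                    'secret' => 'YOUR_AWS_SECRET_ACCESS_KEY',
                    'region' => 'us-west-2',
                ),
            ),
        ),
    );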

What parameters does S3Client::putObject() expect?

I’m having trouble with the putObject() method of the S3Client class. One of the arguments to be passed is Body. The API documentation says this is to be an array, but doesn’t say or give an example of parameters/keys that should be contained in this array.

I have an upload working, but the file being placed on my Amazon S3 set-up is empty because I’m obviously passing some incorrect data to it, and don‘t know what I should be passing. Below is my current code:

$file = 'public/test.jpg';

try {
    $result = $container['amazon_s3']->putObject(array(
        'Bucket' => (string) $container['misc_config']->images->amazon->bucket,
        'Key' => 'images/test_martin/test.jpg',
        'ContentType' => mime_content_type($file),
        'Body' => array(
            'file' => $file,
            'size' => filesize($file)
        )
    ));
    print '<pre>';
    print_r($result);
    exit;
}
catch (\Exception $e) {
    echo $e->getMessage();
    exit;
}

Any help would be much appreciated.
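
For reference, Body accepts a string or a stream resource rather than an array, and SourceFile accepts a file path; a minimal sketch reusing the variables above:

    // Either stream the contents yourself, or hand the SDK a path.
    $result = $container['amazon_s3']->putObject(array(
        'Bucket'      => (string) $container['misc_config']->images->amazon->bucket,
        'Key'         => 'images/test_martin/test.jpg',
        'ContentType' => mime_content_type($file),
        'Body'        => fopen($file, 'r'),
        // or instead of Body: 'SourceFile' => $file,
    ));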

Expire Pre-Signed URL after # Access Attempts

It doesn't look like there is any way to pass a "download attempts" expiration param to getPresignedUrl, but this feature would be superRad.

Any chance this might be implemented?

S3 putObject from http stream not working

Hey, not sure if I am barking up the wrong tree here. I am trying to pass a stream to putObject, like so:

$s3->putObject(array(
    'Bucket' => 'my-bucket',
    'Key'    => 'my-object',
    'Body'   => fopen($url, 'r'),
    'ACL'    => Aws\S3\Enum\CannedAcl::PUBLIC_READ
));

But I get the error:

[17-Apr-2013 22:23:34 UTC] PHP Fatal error:  Uncaught Aws\S3\Exception\NotImplementedException: AWS Error Code: NotImplemented, Status Code: 501, AWS Request ID: 2E703D55A572B6F9, AWS Error Type: server, AWS Error Message: A header you provided implies functionality that is not implemented, User-Agent: aws-sdk-php2/2.2.1 Guzzle/3.3.1 curl/7.24.0 PHP/5.4.12
  thrown in phar:.../aws.phar/src/Aws/Common/Exception/NamespaceExceptionFactory.php on line 89

Is it not possible to stream a file from HTTP > S3 directly like this?

Thanks!

Latest version (2.0.2) from PEAR breaks .phar package + location

Installed latest package with

pear -D auto_discover=1 install pear.amazonwebservices.com/sdk

My code, which uses

require 'AWSSDKforPHP/aws.phar';

is now broken. The error returned is:

hadoop@domU-12-31-39-09-28-24:~$ php mapper.php 
PHP Warning:  require(AWSSDKforPHP/aws.phar): failed to open stream: No such file or directory in /home/hadoop/mapper.php on line 5
PHP Fatal error:  require(): Failed opening required 'AWSSDKforPHP/aws.phar' (include_path='.:/usr/share/php:/usr/share/pear') in /home/hadoop/mapper.php on line 5`

This worked previously with 2.0.1

If I look in /usr/share/php/AWSSDKforPHP, I see that aws.phar is now in the src/ folder. Okay, let's try:

require 'AWSSDKforPHP/src/aws.phar';

Now the error is:

PHP Fatal error:  Class 'Guzzle\Service\Builder\ServiceBuilderLoader' not found in phar:///usr/share/php/AWSSDKforPHP/src/aws.phar/src/Aws/Common/Aws.php on line 26

Bummer guys. Manually downloading the aws.phar package works as expected.

S3: Signature does not match when metadata has null

I'm getting:

The request signature we calculated does not match the signature you provided. Check your key and signing method.

with the following code:

$client->putObject(array(
    'Bucket' => 'C5Backups',
    'Key' => $this->getArchiveName(),
    'SourceFile' => $this->getArchivePath(),
    'Metadata' => array(
        'archiveStartDate' => date('c', $this->archiveStart),
        'archiveFileContents' => 'FILES',
        'archiveType' => 'FULL',
        'archiveBaseJobID' => null             
    )
));

But when I change the archiveBaseJobID to int or string, everything works fine. On the other hand, it breaks with empty string, or a space (' ').

S3 Multipart Upload

I'm having trouble migrating a multipart upload from v1 to v2. The code using v1 looks basically like the second example from http://docs.amazonwebservices.com/AmazonS3/latest/dev/LLuploadFilePHP.html

I'm stuck at $s3->uploadPart(); my code looks like this:

$response = $s3->createMultipartUpload(array(
    'Bucket' => $bucketName,
    'Key' => $s3File
));

$uploadId = $response['UploadId'];

$file = fopen($localFile, 'r');

$part_counts = getMultipartCounts(filesize($localFile), $chunkSize);

$total_parts = count($part_counts);
$current_part = 1;

foreach ($part_counts as $i => $part) {
    if ($part['seekTo'] !== ftell($file)) {
        fseek($file, $part['seekTo']);
    }

    $body = fread($file, $part['length']);
    $response = $s3->uploadPart(array(
        'Bucket' => $config['s3.bucket'],
        'Key' => $s3File,
        'Body' => $body,
        'UploadId' => $uploadId,
        'PartNumber' => $i
    ));

    if ($response) {
        echo "Part number $current_part of $total_parts uploaded.";
        $current_part++;
    } else {
        $s3->AbortMultipartUpload(array(
            'Bucket' => $bucketName,
            'Key' => $s3File,
            'UploadId' => $uploadId,
        ));
        echo "Upload of file $localFile failed, aborting upload.";
    }
}

$parts = $s3->listParts(array(
    'Bucket' => $config['s3.bucket'],
    'Key' => $s3File,
    'UploadId' => $uploadId,
));

$s3->completeMultipartUpload(array(
    'Bucket' => $bucketName,
    'Key' => $s3File,
    'UploadId' => $uploadId,
    'Parts' => $parts['Parts']
));
  • getMultipartCounts() is basically $s3->get_multipart_counts() from v1
  • The PartNumber option for $s3->uploadPart() was just a guess, couldn't find anything in the service description

The $s3->listParts() call returns an empty parts list. The file is uploaded but contains only the last uploaded part.

Any help appreciated.
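
One thing worth noting, offered as a likely culprit rather than a confirmed diagnosis: S3 part numbers run from 1 to 10000, so a zero-based loop index has to be offset before being used as PartNumber:

    // $i from the zero-based foreach must be shifted; PartNumber starts at 1.
    $response = $s3->uploadPart(array(
        'Bucket'     => $config['s3.bucket'],
        'Key'        => $s3File,
        'Body'       => $body,
        'UploadId'   => $uploadId,
        'PartNumber' => $i + 1,
    ));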

S3Client::isValidBucketName is not validating correctly on US Standard Region

Using PostData I came across an issue validating a bucket name that was previously created in the AWS console - i.e. my_private_bucket.

According to the documentation:

The rules for bucket names in the US Standard region are similar but less restrictive:
Bucket names can be as long as 255 characters.
Bucket names can contain any combination of uppercase letters, lowercase letters, numbers, periods (.), dashes (-) and underscores (_).

Nevertheless, when using $post = new Aws\S3\Model\PostObject($s3->getClient(), 'my_private_bucket'); an error was thrown.

When looking at the function throwing the error (S3Client::isValidBucketName) I saw the following regular expression:

!preg_match('/^[a-z0-9]([a-z0-9\\-.]*[a-z0-9])?$/', $bucket)

I tried it on http://rubular.com/ and found out it was actually failing, but after removing the double backslash and replacing it with a single backslash, everything was working as expected.

!preg_match('/^[a-z0-9]([a-z0-9\-.\_]*[a-z0-9])?$/', $bucket)

Note: underscore does not apply to other regions so I am not sure whether it should be included or not.

No example for batchGet.

Trying to use batchGet with no luck. There are no examples in the documentation.

http://docs.aws.amazon.com/aws-sdk-php-2/latest/class-Aws.DynamoDb.DynamoDbClient.html#_batchGetItem

These docs would have me believe the call should look something like this...

        $results = $client->batchGetItem([
            'RequestItems' => [
                ['Keys' => ['HashKeyElement' => ['S' => 'element1']]],
                ['Keys' => ['HashKeyElement' => ['S' => 'element2']]]
            ]
        ]);

But when I execute the command I get the error:

Guzzle\Service\Exception\ValidationException: Validation errors: [RequestItems] must be an array of properties. Got a numerically indexed array.

Any help getting this call right would be appreciated. This probably warrants a code example added to the guides - I'd be happy to submit a pull request if I can get the example above working.

Thanks
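
For reference (and as a later issue in this list confirms), the validation error goes away once RequestItems is keyed by table name; a sketch with a placeholder table name:

        $results = $client->batchGetItem([
            'RequestItems' => [
                'my_table' => [
                    'Keys' => [
                        ['HashKeyElement' => ['S' => 'element1']],
                        ['HashKeyElement' => ['S' => 'element2']]
                    ]
                ]
            ]
        ]);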

Problem with buckets in US Standard region

I created a bucket in the US Standard region and initialized the S3Client using Region::US_EAST_1. When trying to upload a file, I received this error: "The request signature we calculated does not match the signature you provided. Check your key and signing method."

Changing the region to Region::US_WEST_1, I received this error: "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."

Then I created a bucket in the Ireland region and initialized the S3Client using Region::EU_WEST_1, and everything works fine.

I created a bucket in the North California region and initialized the S3Client using Region::US_WEST_1, and everything works fine.

I created a bucket in the Oregon region and initialized the S3Client using Region::US_WEST_2, and everything works fine.

So basically, I have a problem with buckets created in the US Standard region.

Route53 returned ResourceSets are not those it accepts to write

If you have an Alias Failover ResourceSet, there is no ResourceRecords attribute. Sending one results in an Exception. (This is expected behaviour.)
However, the same ResourceSet returned by ListResourceRecordSets (when already present in Route53) contains an empty ResourceRecords attribute. This causes the following Exception to be thrown when trying to delete the fetched ResourceRecordSet:

[Guzzle\Service\Exception\ValidationException]                                                                            
  Validation errors: [ChangeBatch][Changes][1][Change][ResourceRecordSet][ResourceRecords] must contain 1 or more elements  

S3 createPresignedUrl() Method Doesn't Work When AWS Object Key Has Spaces

I was racking my head over an issue and it was resolved the instant I took spaces out of my object name in S3.

public function GetFileObjectDownloadLink($bucketName, $fileName, $fileNameForDownload)
{
    $s3 = $this->aws->get('s3');
    $extra = urlencode("attachment; filename=\"$fileNameForDownload\"");
    $request = $s3->get("{$bucketName}/{$fileName}?response-content-disposition={$extra}");
    $url = $s3->createPresignedUrl($request, '+1 minute');
    return $url;
}

When attempting to fetch the file from the generated URL, S3 would produce the XML error that the Signature was invalid because it did not match. The instant I removed the spaces from the object key in S3, the errors went away.

Docs for DynamoDbClient::batchGetItem incorrect

The docs here are incorrect for batchGetItem:
http://docs.amazonwebservices.com/aws-sdk-php-2/latest/class-Aws.DynamoDb.DynamoDbClient.html

It fails to mention that the table name is needed before the keys, e.g. it should be something like:

$requests = array(
    'RequestItems' => array(
        'table_name_here' => array(
            'Keys' => array(
                array('HashKeyElement' => array('S' => $someKey)),
                array('HashKeyElement' => array('S' => $anotherKey))
            )
        )
    )
);

Happy to submit a PR if someone can point me in the right direction to where I can correct it

Unable to set S3Client Credentials after object has been instantiated

I'm in a situation where I need to loop over hundreds of items. Each item comes with its own set of S3 credentials so I need to configure and reconfigure the same S3Client over and over. I can see there is a getter for the credentials but not a setter. As far as I can see they can only be set using the constructor and you are stuck with them until the object is destroyed.

My concern is that I would like to be able to inject a mock S3Client while testing to prevent real calls from happening but if I have to destroy the client and recreate it for every iteration then passing in a mock won't help.

The one thing that I can think of is to wrap the S3 factory and mock that, but this solution seems a little clunky to me.

Any help is greatly appreciated!

uploadPart response appears to be double quoting ETag

$response = $s3->uploadPart(array(
    'Bucket'     => $awsConfig['bucket_name'],
    'Key'        => 'multipart_' . $Item->getFilename(),
    'Body'       => $body,
    'ContentMD5' => hash_file('md5', $fullPath),
));
$eTag = $response->getPath('ETag');
$requestId = $response->getPath('RequestId');
$this->log('Part Received with Etag: ' . $etag . ' and RequestId: ' . $requestId);
var_dump($response);

Using that code, my $eTag comes out empty. If you look at the response, it has:

Part Received with Etag:  and RequestId: 4C8F395A88108EB7
object(Guzzle\Service\Resource\Model)#791 (2) {
  ["structure":protected]=>
  object(Guzzle\Service\Description\Parameter)#780 (25) {
// LEFT OUT
  }
  ["data":protected]=>
  array(3) {
    ["ServerSideEncryption"]=>
    string(0) ""
    ["ETag"]=>
    string(34) ""ad17daed6574002fcd5affc03226aece""
    ["RequestId"]=>
    string(16) "7D39349F6BC00BF3"
  }
}

Notice my RequestId was found, but the ETag is empty. I believe it's because it's double-quoted for some reason.

Inconsistent CloudFrontClient->getSignedUrl() Results

Hello, I am receiving inconsistent results when I call the getSignedUrl method from a CloudFrontClient object, similar to the following code:

$this->aws = Aws::factory(array(
    'key'    => '...',
    'secret' => '...',
    'region' => Region::CONSTANT
));
$cf_client = $this->aws->get('cloudfront');
$result = $cf_client->getSignedUrl(array(
    'url'     => 'http://test.test.com/test.jpg',
    'expires' => (time() + 600)
));

It seems that the resulting URL has about a 50% chance of successfully accessing the query-string-protected S3 bucket. An 'Access Denied' page is returned by the CloudFront endpoint when the URL is unsuccessful. Furthermore, I looked into the RFC 2045 rules for base64 encoding, and I noticed that the resulting URL is never successful when the SSL signature ends with '==' or '__' after being URL-encoded. This has something to do with how the octets are represented, but I am not too clear on the specifics. Any solution to this issue would be greatly appreciated!

CloudWatchClient listMetrics() failed "The security token included in the request is invalid"

I've executed the following code, but got "The security token included in the request is invalid". I applied a patch to Aws/Common/Signature/SignatureV2.php, and then it ran successfully. Is this right?

<?php
date_default_timezone_set('Asia/Tokyo');

require 'AWSSDKforPHP/aws.phar';

use Aws\CloudWatch\CloudWatchClient;
use Aws\Common\Enum\Region;
use Aws\Common\Credentials\RefreshableInstanceProfileCredentials;

#$credentials = new RefreshableInstanceProfileCredentials(Credentials::factory());
$client = CloudWatchClient::factory(array(
  #'credentials' => $credentials,
  'region' => Region::TOKYO,
));

try {
  $metrics = $client->listMetrics(array('Namespace'  => 'AWS/EC2'));
  foreach($metrics['Metrics'] as $metric) {
    foreach($metric['Dimensions'] as $dimension) {
      printf("%s %s %s=%s\n",
              $metric['Namespace'], $metric['MetricName'],
              $dimension['Name'],   $dimension['Value']);
    }
  }
} catch(Aws\CloudWatch\Exception\CloudWatchException $e) {
  die($e->getMessage());
}
?>
$ diff -u src/Aws/Common/Signature/SignatureV2.php.org src/Aws/Common/Signature/SignatureV2.php
--- src/Aws/Common/Signature/SignatureV2.php.org        2013-02-15 22:29:29.000000000 +0000
+++ src/Aws/Common/Signature/SignatureV2.php    2013-03-05 15:03:33.563810976 +0000
@@ -38,6 +38,7 @@
         $this->addParameter($request, 'SignatureVersion', '2');
         $this->addParameter($request, 'SignatureMethod', 'HmacSHA256');
         $this->addParameter($request, 'AWSAccessKeyId', $credentials->getAccessKeyId());
+        $this->addParameter($request, 'SecurityToken', $credentials->getSecurityToken());

         // Get the path and ensure it's absolute
         $path = '/' . ltrim($request->getUrl(true)->normalizePath()->getPath(), '/');

fatal error when calling S3Client->createPresignedUrl()

$s3 = S3Client::factory(array(
    'key' => <key>,
    'secret' => <secret>,
    'region' => Region::US_EAST_1
));
$rf = RequestFactory::getInstance();
echo $s3->createPresignedUrl(
    $rf->create(
        'GET',
        $s3->getBaseUrl() . '/' . BUCKET . '/testing123.jpg'
    ),
    time() + 300
) . "\n";

generates
PHP Fatal error: Call to a member function getBaseUrl() on a non-object in phar:///usr/share/php/AWSSDKforPHP/aws.phar/src/Aws/S3/S3Signature.php on line 170
