
dyerc / craft-flux


Craft CMS plugin which integrates with AWS Lambda and CloudFront to process, cache and serve images

Home Page: https://cdyer.co.uk/plugins/flux

License: Other

JavaScript 3.32% TypeScript 24.20% PHP 53.18% Makefile 0.21% Twig 18.16% CSS 0.92%
aws aws-lambda aws-s3 cloudfront craft-plugin craftcms image-processing performance plugin

craft-flux's People

Contributors: dyerc, markdrzy

Stargazers: 4

Watchers: 1

Forkers: markdrzy, bfopma

craft-flux's Issues

Purge Transformed Assets fails if there are more than 1000 transformed files

The deleteObjects method in the S3 API has a hard limit of 1000 maximum objects:

The [deleteObjects] request contains a list of up to 1000 keys that you want to delete.
(Source: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html)

If more than 1,000 transformed files exist, the purgeAllTransformedVersions method in the Flux S3 service builds a single deleteObjects request containing more than 1,000 objects. When submitted to the S3 API, that request fails with a 400 Bad Request response.

When the 1,000-object limit would be exceeded, the S3 service should split the work into batches of at most 1,000 keys per request.
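The batching can be sketched in Python; the `delete_keys_in_batches` helper and client wiring below are illustrative, not the plugin's actual PHP code in the Flux S3 service:

```python
def chunk(items, size=1000):
    """Split a list into batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def delete_keys_in_batches(s3_client, bucket, keys):
    """Issue one DeleteObjects call per batch of at most 1000 keys,
    since the S3 API rejects requests containing more than 1000 keys."""
    for batch in chunk(keys, 1000):
        s3_client.delete_objects(
            Bucket=bucket,
            Delete={"Objects": [{"Key": k} for k in batch]},
        )
```

With 2,500 keys this issues three requests of 1,000, 1,000, and 500 keys, each within the API limit.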

CloudFront invalidations are limited to 3000 concurrent invalidation paths

Per the CloudFront API documentation,

you can have invalidation requests for up to 3,000 files per distribution in progress at one time.

If this number is exceeded, the invalidation requests will fail. The invalidateCache method accepts an array of paths, but does not currently batch the requests when more than 3,000 paths are provided. To stay within the limit, invalidation requests should be sent in batches of 3,000 paths.

Note also that an invalidation can take a non-trivial amount of time to complete; if multiple 3,000-path batches are submitted simultaneously, only the first will succeed and the rest will fail. I believe the only way to avoid this is to submit the batches sequentially, waiting for each invalidation to finish before sending the next.
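One way to sketch the sequential approach, using boto3-style CloudFront calls as a stand-in for the plugin's PHP (the client and distribution ID here are assumptions, not Flux's actual API):

```python
import uuid

def chunk(paths, size=3000):
    """Split a list of invalidation paths into batches of at most `size`."""
    return [paths[i:i + size] for i in range(0, len(paths), size)]

def invalidate_in_batches(cloudfront, distribution_id, paths):
    """Submit invalidations 3000 paths at a time, waiting for each
    batch to complete so the distribution never has more than 3000
    paths in progress at once."""
    for batch in chunk(paths):
        result = cloudfront.create_invalidation(
            DistributionId=distribution_id,
            InvalidationBatch={
                "Paths": {"Quantity": len(batch), "Items": batch},
                "CallerReference": str(uuid.uuid4()),
            },
        )
        # Block until this invalidation finishes before sending the next.
        waiter = cloudfront.get_waiter("invalidation_completed")
        waiter.wait(DistributionId=distribution_id,
                    Id=result["Invalidation"]["Id"])
```

Waiting between batches trades speed for correctness: a purge of 10,000 paths becomes four sequential invalidations rather than one failing burst.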

Setup failing on multi-site setup

Hello,

After installing and filling in the basic setup details (S3 bucket, CloudFront distribution ID, etc.), I proceeded to test the connection.

This resulted in the following error:

Unfortunately I cannot view the plugin's settings anymore.
Please do let me know if you need more details.

Twig\Error\RuntimeError: Variable "siteOptions" does not exist. in /var/www/html/vendor/dyerc/craft-flux/src/templates/_settings.twig:23
1. in /var/www/html/vendor/dyerc/craft-flux/src/templates/_settings.twig at line 23
  general: {label: 'General Settings'|t('flux'), url: '#general'},
  aws: {label: 'AWS Settings'|t('flux'), url: '#aws'},
  advanced: {label: 'Advanced Settings'|t('flux'), url: '#advanced'},
} %}
 
{% set fullPageForm = true %}
 
{% set siteColumn = [] %}
{% if craft.app.getIsMultiSite() %}
  {% set allSiteOptions = [{value: '', label: 'All Sites'|t('flux')}]|merge(siteOptions) %}
  {% set siteColumn = {
    siteId: {
      type: 'select',
      heading: 'Site'|t('flux'),
      options: allSiteOptions,
      thin: true,
    }
  } %}
{% endif %}


Malformed XML error when attempting to "Purge Transformed Assets"

If an empty Objects array is sent in a deleteObjects request, AWS will respond with a malformed request error:

Error executing "DeleteObjects" on "https://REDACTED.s3.amazonaws.com/?delete";

AWS HTTP error: Client error: `POST https://REDACTED.s3.amazonaws.com/?delete` resulted in a `400 Bad Request` response:

MalformedXML (client): The XML you provided was not well-formed or did not validate against our published schema - 

<?xml version="1.0" encoding="UTF-8"?>
<Error>
    <Code>MalformedXML</Code>
    <Message>The XML you provided was not well-formed or did not validate against our published schema</Message>
    <RequestId>REDACTED</RequestId>
    <HostId>REDACTED</HostId>
</Error>
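A minimal guard, sketched in Python rather than the plugin's PHP (`safe_delete_objects` is a hypothetical helper name): the point is simply to skip the DeleteObjects call when there is nothing to delete, since S3 answers an empty Objects list with MalformedXML.

```python
def safe_delete_objects(s3_client, bucket, keys):
    """Skip the DeleteObjects call entirely when the key list is empty;
    S3 responds with a MalformedXML error if Objects is an empty list."""
    if not keys:
        return None  # nothing to purge
    return s3_client.delete_objects(
        Bucket=bucket,
        Delete={"Objects": [{"Key": k} for k in keys]},
    )
```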

Thanks!

🙏🏼 Finally a useful plugin with tests in the Craft plugin store.

Thanks! That's all.

Flux doesn't correctly handle updated assets when the filename remains the same

We have a site that imports real estate listings daily (using the Feed Me plugin). Each imported listing has a list of associated images, which we save as assets in Craft. The API we pull these images from always names them in the pattern {locationId}-{index}.jpg. If images are added, removed, or modified, the filenames change accordingly. For example, if we had the following list of images:

20220316-1.jpg
20220316-2.jpg
20220316-3.jpg

and the second image (20220316-2.jpg) was removed, the resulting list would look like this:

20220316-1.jpg
20220316-2.jpg

The image that was 20220316-3.jpg is now named 20220316-2.jpg. When the import takes place, Craft removes the 20220316-3.jpg asset, and replaces the file on the 20220316-2.jpg asset with the file that was previously 20220316-3.jpg.

When the asset is modified, Flux correctly purges the transforms for it. However, Flux does not remove the copy it makes of the original asset. Since Flux bases its transforms on that copy, all transforms recreated after the purge use the original 20220316-2.jpg image, not the updated version.

We believe that this is due to the following check in Flux's S3 service (line 243):

if (!is_a($asset->volume->fs, "craft\\awss3\\Fs") && $originalPath != $path) {
    $deleteObjects[] = $path;
}

Since we store our Flux transforms in an S3 volume, the is_a($asset->volume->fs, "craft\\awss3\\Fs") check returns true, preventing Flux from purging its copy of the original asset file. Double-checking that this is an S3 volume appears unnecessary; the $originalPath != $path check is sufficient to prevent Flux from inadvertently removing the original asset.

Perhaps we are wrong in this assessment for other potential Flux configurations, but in our case, removing the S3 volume check like so:

if ($originalPath != $path) {
    $deleteObjects[] = $path;
}

resolved our issue. Now the Flux copy of the asset file is being purged, and the new asset file is used for all new transforms.

Install Failing

I'm sure it's something I've done wrong, but I've tried to follow the docs. First, when I try to test the setup, I eventually get a time-out error. So instead I go into manual mode and enter all my settings. Then I go to Utilities -> Flux -> Install/Update AWS and it starts the process, only to eventually fail. The error I get back is:

Error executing "UpdateDistribution" on "https://cloudfront.amazonaws.com/2020-05-31/distribution/xxxxxxxxxxxxxxxxx/config"; AWS HTTP error: Client error: `PUT https://cloudfront.amazonaws.com/2020-05-31/distribution/xxxxxxxxxxxxxxxxx/config` resulted in a `403 Forbidden` response: AccessDenied (client): The user is not authorized to create or assume a service linked role. (Request ID: cd99e80e-ac54-49c9-b7c2-ba0f7f3e52ce)

I'm assuming I did something wrong in user creation or perhaps there is something different with the way our AWS account is set up? I'm at a loss.

Settings validation fails

Settings validation fails for AWS Resource Prefix (awsResourcePrefix) and S3 Root Prefix (rootPrefix) when using environment variables.

Fixed in PR #4

Multiple Flux-enabled sites on one AWS account eventually exceeds Cache Policy and Origin Request Policy quotas

AWS limits each account to 20 Cache Policies and 20 Origin Request Policies, and does not provide a means to raise the quota for these resources. If you manage multiple Flux-enabled projects under one AWS account, you run the risk of exceeding it.

My agency ran into this issue recently upon the creation of our 10th Flux-enabled project. We maintain two environments per project: staging and production. Since we'd already had other custom Cache and Origin Request Policies on our AWS account, our 10th project put us over the 20 policy limit.

All that to say: thank you for this plugin; it was a lifesaver for us, and we are including it in all of our larger projects going forward. Keep up the good work!
