
arc's Introduction

This Project is Deprecated

NOTE: This project is being deprecated in favor of Waffle. The new project is based on this project, meaning all of the same configuration and APIs continue to work as expected (at the time of writing).

Migrating to Waffle

Update your dependency

If you are using Mix, simply change your dependency from arc to waffle. Waffle uses the same version tags, so there is no need to modify the version in your mix file.
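
A minimal before/after sketch of the dependency change in mix.exs (the version shown is illustrative):

defp deps do
  [
    # Before:
    # {:arc, "~> 0.11.0"},

    # After:
    {:waffle, "~> 0.11.0"}
  ]
end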

Because Waffle is still Arc under the hood, any Arc-based storage providers, such as arc_gcs, will continue to work without any additional changes.

Update your configs

Replace any instances of config :arc with config :waffle. All configuration options remain the same (as of version 0.11.0) and do not need to be updated.
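
For example, using the bucket option from the configuration section below (any other keys are renamed the same way):

# Before:
config :arc,
  bucket: {:system, "AWS_S3_BUCKET"}

# After:
config :waffle,
  bucket: {:system, "AWS_S3_BUCKET"}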

Update your code

Change any instances of Arc to Waffle except where it concerns the third-party storage providers. For example, Arc.Storage.GCS would continue to use Arc as the root namespace because that's defined outside the core library. If you wish to standardize the libraries, you can use an alias in your files to prepare for an eventual namespace conversion, e.g.

# "Renames" the Arc namespace to Waffle
alias Arc, as: Waffle

# Use the new library name as you normally would
Waffle.Storage.GCS

Arc

Arc is a flexible file upload library for Elixir with straightforward integrations for Amazon S3 and ImageMagick.

Browse the readme below, or jump to a full example.

Installation

Add the latest stable release to your mix.exs file, along with the required dependencies for ExAws if appropriate:

defp deps do
  [
    arc: "~> 0.11.0",

    # If using Amazon S3:
    ex_aws: "~> 2.0",
    ex_aws_s3: "~> 2.0",
    hackney: "~> 1.6",
    poison: "~> 3.1",
    sweet_xml: "~> 0.6"
  ]
end

Then run mix deps.get in your shell to fetch the dependencies.

Configuration

Arc expects certain properties to be configured at the application level:

config :arc,
  storage: Arc.Storage.S3, # or Arc.Storage.Local
  bucket: {:system, "AWS_S3_BUCKET"} # if using Amazon S3

Along with any configuration necessary for ExAws.

Storage Providers

Arc ships with integrations for local storage and Amazon S3. Alternative storage providers, such as arc_gcs for Google Cloud Storage, are supported by the community.

Usage with Ecto

Arc comes with a companion package for use with Ecto. If you intend to use Arc with Ecto, it is highly recommended you also add the arc_ecto dependency (a short usage sketch follows the list below). Benefits include:

  • Changeset integration
  • Versioned urls for cache busting (.../thumb.png?v=63601457477)
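
A minimal sketch of that integration; the MyApp.User schema, :name field, and MyApp.Avatar definition module are illustrative:

defmodule MyApp.User do
  use Ecto.Schema
  use Arc.Ecto.Schema # from arc_ecto; provides cast_attachments/3

  import Ecto.Changeset

  schema "users" do
    field :name, :string
    # MyApp.Avatar is an Arc definition module that also calls
    # `use Arc.Ecto.Definition`, which exposes the .Type
    field :avatar, MyApp.Avatar.Type
  end

  def changeset(user, params \\ %{}) do
    user
    |> cast(params, [:name])
    |> cast_attachments(params, [:avatar])
  end
end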

Getting Started: Defining your Upload

Arc requires a definition module which contains the relevant configuration to store and retrieve your files.

This definition module contains relevant functions to determine:

  • Optional transformations of the uploaded file
  • Where to put your files (the storage directory)
  • What to name your files
  • How to secure your files (private, or publicly accessible?)
  • Default placeholders

To start off, generate an attachment definition:

mix arc.g avatar

This should give you a basic file in:

web/uploaders/avatar.ex

Check this file for descriptions of configurable options.
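
The generated module looks roughly like the following (treat this as a sketch; the exact contents depend on your arc version):

defmodule Avatar do
  use Arc.Definition

  # To add a thumbnail version:
  # @versions [:original, :thumb]
  @versions [:original]

  # The generated file also contains commented examples of validate/1,
  # transform/2, filename/2, storage_dir/2, and default_url/2.
end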

Basics

There are currently two supported use cases for Arc:

  1. As a general file store, or
  2. As an attachment to another model (the attached model is referred to as a scope)

The upload definition module responds to Avatar.store/1, which accepts any of the following:

  • A path to a local file
  • A path to a remote http or https file
  • A map with filename and path keys (eg, a %Plug.Upload{})
  • A map with filename and binary keys (eg, %{filename: "image.png", binary: <<255,255,255,...>>})
  • A two-tuple consisting of one of the above file formats as well as a scope object.

Example usage as general file store:

# Store any locally accessible file
Avatar.store("/path/to/my/file.png") #=> {:ok, "file.png"}

# Store any remotely accessible file
Avatar.store("http://example.com/file.png") #=> {:ok, "file.png"}

# Store a file directly from a `%Plug.Upload{}`
Avatar.store(%Plug.Upload{filename: "file.png", path: "/a/b/c"}) #=> {:ok, "file.png"}

# Store a file from a connection body
{:ok, data, _conn} = Plug.Conn.read_body(conn)
Avatar.store(%{filename: "file.png", binary: data})

Example usage as a file attached to a scope:

scope = Repo.get(User, 1)
Avatar.store({%Plug.Upload{}, scope}) #=> {:ok, "file.png"}

This scope will be available throughout the definition module to be used as an input to the storage parameters (eg, store files in /uploads/#{scope.id}).

Transformations

Arc can be used to facilitate transformations of uploaded files via any system executable. Some common operations you may want to take on uploaded files include resizing an uploaded avatar with ImageMagick or extracting a still image from a video with FFmpeg.

To transform an image, the definition module must define a transform/2 function which accepts a version atom and a tuple consisting of the uploaded file and corresponding scope.

This transform handler accepts the version atom, as well as the file/scope argument, and is responsible for returning one of the following:

  • :noaction - The original file will be stored as-is.
  • :skip - Nothing will be stored for the provided version.
  • {executable, args} - The executable will be called with System.cmd with the format #{original_file_path} #{args} #{transformed_file_path}.
  • {executable, fn(input, output) -> args end} - If your executable expects arguments in a format other than the above, you may supply a function to the conversion tuple which will be invoked to generate the arguments. The arguments can be returned as a string (e.g. – " #{input} -strip -thumbnail 10x10 #{output}") or a list (e.g. – [input, "-strip", "-thumbnail", "10x10", output]) for even more control.
  • {executable, args, output_extension} - If your transformation changes the file extension (eg, converting to png), then the new file extension must be explicit.

ImageMagick transformations

As images are one of the most commonly uploaded filetypes, Arc has a recommended integration with ImageMagick's convert tool for manipulation of images. Each upload definition may specify as many versions as desired, along with the corresponding transformation for each version.

The expected return value of a transform function call must either be :noaction, in which case the original file will be stored as-is, :skip, in which case nothing will be stored, or {:convert, transformation} in which the original file will be processed via ImageMagick's convert tool with the corresponding transformation parameters.

The following example stores the original file, as well as a squared 100x100 thumbnail version which is stripped of comments (eg, GPS coordinates):

defmodule Avatar do
  use Arc.Definition

  @versions [:original, :thumb]

  def transform(:thumb, _) do
    {:convert, "-strip -thumbnail 100x100^ -gravity center -extent 100x100"}
  end
end

Other examples:

# Change the file extension through ImageMagick's `format` parameter:
{:convert, "-strip -thumbnail 100x100^ -gravity center -extent 100x100 -format png", :png}

# Take the first frame of a gif and process it into a square jpg:
{:convert, fn(input, output) -> "#{input}[0] -strip -thumbnail 100x100^ -gravity center -extent 100x100 -format jpg #{output}" end, :jpg}

For more information on defining your transformation, please consult ImageMagick's convert documentation.

Note: Keep this transformation function simple and deterministic based on the version, file name, and scope object. The transform function is subsequently called during URL generation, and the transformation is scanned for the output file format. As such, if you conditionally format the image as a png or jpg depending on the time of day, you will be displeased with the result of Arc's URL generation.

System Resources: If you are accepting arbitrary uploads on a public site, it may be prudent to add system resource limits to prevent overloading your system resources from malicious or nefarious files. Since all processing is done directly in ImageMagick, you may pass in system resource restrictions through the -limit flag. One such example might be: -limit area 10MB -limit disk 100MB.
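
For example, a thumbnail transform with resource limits applied (a sketch; tune the limits to your environment):

def transform(:thumb, _) do
  {:convert, "-limit area 10MB -limit disk 100MB -strip -thumbnail 100x100^ -gravity center -extent 100x100"}
end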

FFmpeg transformations

Common transformations of uploaded videos can be also defined through your definition module:

# To take a thumbnail from a video:
{:ffmpeg, fn(input, output) -> "-i #{input} -f jpg #{output}" end, :jpg}

# To convert a video to an animated gif
{:ffmpeg, fn(input, output) -> "-i #{input} -f gif #{output}" end, :gif}

Complex Transformations

Arc requires the output of your transformation to be located at a predetermined path. However, the transformation may be done completely outside of Arc. For fine-grained transformations, you should create an executable wrapper in your $PATH (eg, a bash script) which accepts the expected arguments, runs your transformation, and then moves the file into the correct location.

For example, to use soffice to convert a doc to an html file, you should place the following bash script in your $PATH:

#!/usr/bin/env bash

# `soffice` doesn't allow for output file path option, and arc can't find the
# temporary file to process and copy. This script has a similar argument list as
# what arc expects. See https://github.com/stavro/arc/issues/77.

set -e
set -o pipefail

function convert {
    soffice \
        --headless \
        --convert-to html \
        --outdir "$TMPDIR" \
        "$1"
}

function filter_new_file_name {
    awk -F$TMPDIR '{print $2}' \
    | awk -F" " '{print $1}' \
    | awk -F/ '{print $2}'
}

converted_file_name=$(convert "$1" | filter_new_file_name)

cp "$TMPDIR/$converted_file_name" "$2"
rm "$TMPDIR/$converted_file_name"

And perform the transformation as such:

def transform(:html, _) do
  {:soffice_wrapper, fn(input, output) -> [input, output] end, :html}
end

Asynchronous File Uploading

If you specify multiple versions in your definition module, each version is processed and stored concurrently as an independent Task. To prevent overconsumption of system resources, each Task is given a specified timeout to wait, after which the process fails. By default this is 15 seconds.

If you wish to change the time allocated to version transformation and storage, you may add a configuration parameter:

config :arc,
  version_timeout: 15_000 # milliseconds

To disable asynchronous processing, add @async false to your upload definition.
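
For example (a minimal sketch):

defmodule Document do
  use Arc.Definition

  @async false # versions are processed and stored synchronously
end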

Storage of files

Arc currently supports Amazon S3 and local destinations for file uploads.

Local Configuration

To store your attachments locally, override the __storage function in your definition module to return Arc.Storage.Local. You may also wish to override the storage directory, as outlined below.

defmodule Avatar do
  use Arc.Definition
  def __storage, do: Arc.Storage.Local # Add this
end

S3 Configuration

ExAws is used to support Amazon S3.

To store your attachments in Amazon S3, you'll need to provide a bucket destination in your application config:

config :arc,
  bucket: "uploads"

You may also set the bucket from an environment variable:

config :arc,
  bucket: {:system, "S3_BUCKET"}

In addition, ExAws must be configured with the appropriate Amazon S3 credentials.

ExAws has by default the following configuration (which you may override if you wish):

config :ex_aws,
  access_key_id: [{:system, "AWS_ACCESS_KEY_ID"}, :instance_role],
  secret_access_key: [{:system, "AWS_SECRET_ACCESS_KEY"}, :instance_role]

This means it will first look for the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, and fall back to instance metadata if those don't exist. You should either set those environment variables to your credentials, or give the instance this library runs on an IAM role.

Storage Directory

Configuration Option

  • arc[:storage_dir] - The storage directory in which to place files. Defaults to uploads, but can be overridden via the :storage_dir configuration option:

config :arc,
  storage_dir: "my/dir"

The storage dir can also be overwritten on an individual basis, in each separate definition. A common pattern for user profile pictures is to store each user's uploaded images in a separate subdirectory based on their primary key:

def storage_dir(version, {file, scope}) do
  "uploads/users/avatars/#{scope.id}"
end

Note: If you are "attaching" a file to a record on creation (eg, while inserting the record at the same time), then you cannot use the model's id as a path component. You must either (1) use a different storage path format, such as UUIDs, or (2) attach and update the model after an id has been given.
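
A sketch of the UUID approach; the :uuid field is an assumption and must be populated (eg, with Ecto.UUID.generate/0) before the record is inserted:

def storage_dir(_version, {_file, scope}) do
  # scope.uuid is assigned in the changeset before insert, so it is
  # already available when the file is stored.
  "uploads/users/avatars/#{scope.uuid}"
end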

Note: The storage directory is used both for local file storage (as the relative or absolute directory) and for S3 storage (as the path name, not including the bucket).

Specify multiple buckets

Arc lets you specify a bucket on a per definition basis. In case you want to use multiple buckets, you can specify a bucket in the uploader definition file like this:

def bucket, do: :some_custom_bucket_name

Specify multiple asset hosts

Arc lets you specify an asset host on a per definition basis. In case you want to use multiple hosts, you can specify an asset_host in the uploader definition file like this:

def asset_host, do: "https://example.com"

Access Control Permissions

Arc defaults all uploads to private. In cases where it is desired to have your uploads public, you may set the ACL at the module level (which applies to all versions):

@acl :public_read

Or you may have more granular control over each version. As an example, you may wish to explicitly only make public a thumbnail version of the file:

def acl(:thumb, _), do: :public_read

Supported access control lists for Amazon S3 are:

ACL                          Permissions added to ACL
:private                     Owner gets FULL_CONTROL. No one else has access rights (default).
:public_read                 Owner gets FULL_CONTROL. The AllUsers group gets READ access.
:public_read_write           Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. Granting this on a bucket is generally not recommended.
:authenticated_read          Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
:bucket_owner_read           Object owner gets FULL_CONTROL. Bucket owner gets READ access.
:bucket_owner_full_control   Both the object owner and the bucket owner get FULL_CONTROL over the object.

For more information on the behavior of each of these, please consult Amazon's documentation for Access Control List (ACL) Overview.

S3 Object Headers

The definition module may specify custom headers to pass through to S3 during object creation. The available custom headers include:

  • :cache_control
  • :content_disposition
  • :content_encoding
  • :content_length
  • :content_type
  • :expect
  • :expires
  • :storage_class
  • :website_redirect_location
  • :encryption (set to "AES256" for encryption at rest)

As an example, to explicitly specify the content-type of an object, you may define an s3_object_headers/2 function in your definition, which returns a keyword list or map of the desired headers.

def s3_object_headers(version, {file, scope}) do
  [content_type: MIME.from_path(file.file_name)] # for "image.png", would produce: "image/png"
end

File Validation

While storing files on S3 (rather than your hard drive) eliminates some malicious attack vectors, you are strongly encouraged to validate the extensions of uploaded files as well.

Arc delegates validation to a validate/1 function with a tuple of the file and scope. As an example, to validate that an uploaded file conforms to popular image formats, you may use:

defmodule Avatar do
  use Arc.Definition
  @extension_whitelist ~w(.jpg .jpeg .gif .png)

  def validate({file, _}) do
    file_extension = file.file_name |> Path.extname() |> String.downcase()
    Enum.member?(@extension_whitelist, file_extension)
  end
end

Any uploaded file failing validation will return {:error, :invalid_file} when passed through to Avatar.store.
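
For example, in a controller action (a sketch; upload_params and the error message are illustrative):

case Avatar.store({upload_params, user}) do
  {:ok, file_name} ->
    # persist file_name on the record, render a success response, etc.
    {:ok, file_name}

  {:error, :invalid_file} ->
    # reject the upload with a helpful message
    {:error, "only .jpg, .jpeg, .gif and .png files are accepted"}
end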

File Names

It may be undesirable to retain original filenames (eg, they may contain personally identifiable information, vulgarity, or vulnerabilities with Unicode characters).

You may specify the destination filename for uploaded versions through your definition module.

A common pattern is to combine directories scoped to a particular model's primary key, along with static filenames. (eg: user_avatars/1/thumb.png)

Examples:

# To retain the original filename, but prefix the version and user id:
def filename(version, {file, scope}) do
  file_name = Path.basename(file.file_name, Path.extname(file.file_name))
  "#{scope.id}_#{version}_#{file_name}"
end

# To make the destination file the same as the version:
def filename(version, _), do: version

Object Deletion

After an object is stored through Arc, you may optionally remove it. To remove a stored object, pass the same path identifier and scope from which you stored the object.

Example:

# Without a scope:
{:ok, original_filename} = Avatar.store("/Images/me.png")
:ok = Avatar.delete(original_filename)

# With a scope:
user = Repo.get! User, 1
{:ok, original_filename} = Avatar.store({"/Images/me.png", user})
:ok = Avatar.delete({original_filename, user})
# or
user = Repo.get!(User, 1)
{:ok, original_filename} = Avatar.store({"/Images/me.png", user})
user = Repo.get!(User, 1)
:ok = Avatar.delete({user.avatar, user})

Url Generation

Saving your files is only half of any decent storage solution; straightforward access to your uploaded files is just as important as storing them in the first place.

Oftentimes you will want to regain access to the stored files. As such, Arc facilitates the generation of urls.

# Given some user record
user = %{id: 1}

Avatar.store({%Plug.Upload{}, user}) #=> {:ok, "selfie.png"}

# To generate a regular, unsigned url (defaults to the first version):
Avatar.url({"selfie.png", user}) #=> "https://bucket.s3.amazonaws.com/uploads/1/original.png"

# To specify the version of the upload:
Avatar.url({"selfie.png", user}, :thumb) #=> "https://bucket.s3.amazonaws.com/uploads/1/thumb.png"

# To generate a signed url:
Avatar.url({"selfie.png", user}, :thumb, signed: true) #=> "https://bucket.s3.amazonaws.com/uploads/1/thumb.png?AWSAccessKeyId=AKAAIPDF14AAX7XQ&Signature=5PzIbSgD1V2vPLj%2B4WLRSFQ5M%3D&Expires=1434395458"

# To generate urls for all versions:
Avatar.urls({"selfie.png", user}) #=> %{original: "https://.../original.png", thumb: "https://.../thumb.png"}

Default url

In cases where a placeholder image is desired when an uploaded file is not present, Arc allows the definition of a default image to be returned gracefully when requested with a nil file.

def default_url(version) do
  MyApp.Endpoint.url <> "/images/placeholders/profile_image.png"
end

Avatar.url(nil) #=> "http://example.com/images/placeholders/profile_image.png"
Avatar.url({nil, scope}) #=> "http://example.com/images/placeholders/profile_image.png"

Virtual Host

To support AWS regions other than US Standard, it may be required to generate urls in the virtual_host style. This will generate urls in the style: https://#{bucket}.s3.amazonaws.com instead of https://s3.amazonaws.com/#{bucket}.

To use this style of url generation, your bucket name must be DNS compliant.

This can be enabled with:

config :arc,
  virtual_host: true

When using virtual hosted–style buckets with SSL, the SSL wild card certificate only matches buckets that do not contain periods. To work around this, use HTTP or write your own certificate verification logic.

Asset Host

You may optionally specify an asset host rather than using the default bucket.s3.amazonaws.com format.

In your application configuration, you'll need to provide an asset_host value:

config :arc,
  asset_host: "https://d3gav2egqolk5.cloudfront.net", # For a value known during compilation
  asset_host: {:system, "ASSET_HOST"} # For a value not known until runtime

Alternate S3 configuration example

If you are using a region other than US-Standard, it is necessary to specify the correct configuration for ex_aws. A full example configuration for both arc and ex_aws is as follows:

config :arc,
  bucket: "my-frankfurt-bucket"

config :ex_aws,
  access_key_id: "my_access_key_id",
  secret_access_key: "my_secret_access_key",
  region: "eu-central-1",
  s3: [
    scheme: "https://",
    host: "s3.eu-central-1.amazonaws.com",
    region: "eu-central-1"
  ]

For your host configuration, please examine the approved AWS Hostnames. There are often multiple hostname formats for AWS regions, and it will not work unless you specify the correct one.

Full Example

defmodule Avatar do
  use Arc.Definition

  @versions [:original, :thumb]
  @extension_whitelist ~w(.jpg .jpeg .gif .png)

  def acl(:thumb, _), do: :public_read

  def validate({file, _}) do
    file_extension = file.file_name |> Path.extname |> String.downcase
    Enum.member?(@extension_whitelist, file_extension)
  end

  def transform(:thumb, _) do
    {:convert, "-thumbnail 100x100^ -gravity center -extent 100x100 -format png", :png}
  end

  def filename(version, _) do
    version
  end

  def storage_dir(_, {file, user}) do
    "uploads/avatars/#{user.id}"
  end

  def default_url(:thumb) do
    "https://placehold.it/100x100"
  end
end

# Given some current_user record
current_user = %{id: 1}

# Store any accessible file
Avatar.store({"/path/to/my/selfie.png", current_user}) #=> {:ok, "selfie.png"}

# ..or store directly from the `params` of a file upload within your controller
Avatar.store({%Plug.Upload{}, current_user}) #=> {:ok, "selfie.png"}

# and retrieve the url later
Avatar.url({"selfie.png", current_user}, :thumb) #=> "https://s3.amazonaws.com/bucket/uploads/avatars/1/thumb.png"

Roadmap

Contributions are welcome. Here is my current roadmap:

  • Ease migration for version (or acl) changes
  • Alternative storage destinations (eg, Filesystem)
  • Solidify public API

Contribution

Open source contributions are welcome. All pull requests must have corresponding unit tests.

To execute all tests locally, make sure the following system environment variables are set prior to running tests (if you wish to exercise s3_test.exs):

  • ARC_TEST_BUCKET
  • ARC_TEST_S3_KEY
  • ARC_TEST_S3_SECRET

Then execute mix test.

License

Copyright 2015 Sean Stavropoulos

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

arc's Issues

SignatureDoesNotMatch error on Arc.Storage.S3

I'm just starting with Elixir and Phoenix and trying to use Arc (arc_ecto, actually) to upload files to S3.

I have the access keys set up in the config file, but when trying to upload I get this:

[error] Ranch protocol #PID<0.341.0> (:cowboy_protocol) of 
                listener X.Endpoint.HTTP terminated
** (exit) an exception was raised:
    ** (ErlangError) erlang error: {:aws_error, 
              {:http_error, 403, 'Forbidden', 
              "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n
              <Error><Code>SignatureDoesNotMatch</Code>
              <Message>The request signature we calculated does not match 
              the signature you provided. Check your key and signing method.
              </Message>
              <AWSAccessKeyId>XXXX</AWSAccessKeyId>
              <StringToSign>PUT\n+NFGtaSc2qJy2E/IjvTHHA==\n\n
              Tue, 15 Sep 2015 17:45:58 GMT\nx-amz-acl:private\n
              /lukla-dev/uploads/profile.jpg</StringToSign>
              <SignatureProvided>VqTEbMI37SMKvavENE0eP+mQF2I=</SignatureProvided>
              <StringToSignBytes>50 55 54 0a 2b 4e 46 47 74 61 53 63 32 71 4a 79 32 45 
              2f 49 6a 76 54 48 48 41 3d 3d 0a 0a 54 75 65 2c 20 31 35 20 53 65 70 20 32 
              30 31 35 20 31 37 3a 34 35 3a 35 38 20 47 4d 54 0a 78 2d 61 6d 7a 2d 61 63 
              6c 3a 70 72 69 76 61 74 65 0a 2f 6c 75 6b 6c 61 2d 64 65 76 2f 75 70 6c 
              6f 61 64 73 2f 70 72 6f 66 69 6c 65 2e 6a 70 67</StringToSignBytes>
              <RequestId>A8A1CB9362F3DB59</RequestId>
              <HostId>46tQHK/X1Vl26jl+HDQr6e00/
              QqTqHxKtEy4Aq2FlFq686KYTw+0xPXzCgp3npKyVLljaufO3uE=</HostId>
              </Error>"
        }}

        (erlcloud) src/erlcloud_s3.erl:911: :erlcloud_s3.s3_request/8
        (erlcloud) src/erlcloud_s3.erl:611: :erlcloud_s3.put_object/6
        lib/arc/storage/s3.ex:9: Arc.Storage.S3.put/3
        (elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
        (elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
        (stdlib) proc_lib.erl:239: :proc_lib.init_p_do_apply/3

I tried to further investigate, but I'm stuck.

The strange thing to me is that Arc.Storage.S3.put/3 calls:

:erlcloud_s3.put_object(bucket, s3_key, binary, [acl: acl], erlcloud_config)

which looks to me like a put_object/5 call,

yet the stack trace says :erlcloud_s3.put_object/6 was called, which expects a list of HTTPHeaders as the 5th parameter.

I'm confused 😕

Not sure if it's a bug, or a misconfiguration on my part.

Also, my mix.lock is

%{"arc": {:hex, :arc, "0.1.2"},
  "arc_ecto": {:hex, :arc_ecto, "0.2.0"},
  "cowboy": {:hex, :cowboy, "1.0.3"},
  "cowlib": {:hex, :cowlib, "1.0.1"},
  "decimal": {:hex, :decimal, "1.1.0"},
  "ecto": {:hex, :ecto, "1.0.2"},
  "erlcloud": {:hex, :erlcloud, "0.9.2"},
  "fs": {:hex, :fs, "0.9.2"},
  "jsx": {:hex, :jsx, "2.1.1"},
  "lhttpc": {:hex, :lhttpc, "1.3.0"},
  "meck": {:hex, :meck, "0.8.3"},
  "mock": {:hex, :mock, "0.1.1"},
  "phoenix": {:hex, :phoenix, "1.0.2"},
  "phoenix_ecto": {:hex, :phoenix_ecto, "1.2.0"},
  "phoenix_html": {:hex, :phoenix_html, "2.2.0"},
  "phoenix_live_reload": {:hex, :phoenix_live_reload, "1.0.0"},
  "plug": {:hex, :plug, "1.0.0"},
  "poison": {:hex, :poison, "1.5.0"},
  "poolboy": {:hex, :poolboy, "1.5.1"},
  "postgrex": {:hex, :postgrex, "0.9.1"},
  "ranch": {:hex, :ranch, "1.1.0"}}

Authentication issue.

I get this error:

 {:aws_error, {:http_error, 400, 'Bad Request', "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error>
<Code>InvalidRequest</Code><Message>The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.</Message>
<RequestId>C27DE9ADAC91C66F</RequestId>

As you can see there is a problem with the authentication. erlcloud is migrating now to what they refer to as sign_v4. You can see the issue here. alertlogic/erlcloud#11

Some of the modules have been migrated, including s3.(alertlogic/erlcloud@12ca9a9)

The problem might be because my bucket is in the wrong region (Frankfurt).
http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html

Uploading of big files to S3 storage causes out of memory

Hi,

when I want to upload some big (ca. 3-6 GB) video files in S3 I get an out of memory exception:

2016-05-31 12:21:02.694 [error] Task #PID<0.761.0> started from #PID<0.759.0> terminating ** (File.Error) could not read file /tmp/plug-1464/multipart-697199-932634-1: not enough memory (elixir) lib/file.ex:244: File.read!/1 (arc) lib/arc/storage/s3.ex:7: Arc.Storage.S3.put/3

What do you think about changing the implementation in a way that we could define a kind of threshold and if a file is bigger than that we're uploading that file in chunks as a multipart-upload instead of a simple 'put_object'?

See https://aws.amazon.com/blogs/aws/amazon-s3-multipart-upload/

Cheers
florian

Attaching multiple images to Arc.Ecto model

Arc allows me to upload multiple images; however, I need to access those images using the arc_ecto model. As explained in the tutorial, I added one field ("avatar") to my user model, as shown below:

schema "users" do
    field :name, :string
    field :age, :integer
    field :gender, :string
    field :user_name, :string
    field :email, :string
    field :crypted_password, :string
    field :password, :string, virtual: true
    field :avatar, LoginService.Avatar.Type
    field :token, :string, virtual: true
    timestamps
  end

Field "avatar" allows me to access image associated with the user model and I can access the avatar path by with following method:
Avatar.url({user.avatar, user})

but I can't find a way to associate multiple images with the user so that I can access them with the Avatar.url method the same way I accessed the avatar image.

Erlang list_to_integer error while uploading

Hi, I am using arc and arc_ecto.

I have included the bucket name like this:

config :arc,
  bucket: "arn:aws:s3:::bucket-name/"

And I have this ex_aws configuration:

config :ex_aws,
  debug_requests: true,
  access_key_id: [{:system, "AWS_ACCESS_KEY_ID"}],
  secret_access_key: [{:system, "AWS_SECRET_ACCESS_KEY"}],
  region: "us-west-2"

config :ex_aws, :httpoison_opts,
    recv_timeout: 60_000,
    hackney: [recv_timeout: 60_000, pool: false]

Now when I try to upload a file with Phoenix, I get an error:

[debug] Processing by Expense.TransactionController.create/2
  Parameters: %{"_csrf_token" => "MWtWVml4EAADHE0mB2V+AEoQEQJz7oN3K68xg9QQ==", "_utf8" => "✓", "account_id" => "1", "transaction" => %{"amount" => "1234", "date" => %{"day" => "1", "month" => "6", "year" => "2016"}, "description" => "", "receipt" => %Plug.Upload{content_type: "image/png", filename: "Screenshot from 2016-05-26 16:17:21.png", path: "/tmp/plug-1465/multipart-534432-578580-2"}}}
  Pipelines: [:browser]
[debug] SELECT u0.`id`, u0.`name`, u0.`email`, u0.`encrypted_password`, u0.`verified_at`, u0.`verified`, u0.`role`, u0.`inserted_at`, u0.`updated_at` FROM `users` AS u0 WHERE (u0.`id` = ?) [1] OK query=0.7ms
[debug] SELECT a0.`id`, a0.`name`, a0.`activated`, a0.`user_id`, a0.`inserted_at`, a0.`updated_at` FROM `accounts` AS a0 WHERE (a0.`id` = ?) [1] OK query=0.9ms queue=0.1ms
[debug] SELECT t0.`id`, t0.`amount`, t0.`date`, t0.`description`, t0.`approved_at`, t0.`deleted_at`, t0.`receipt`, t0.`approved_by`, t0.`deleted_by`, t0.`user_id`, t0.`account_id`, t0.`inserted_at`, t0.`updated_at` FROM `transactions` AS t0 WHERE (t0.`account_id` = ?) [1] OK query=1.9ms queue=0.1ms
[debug] SELECT a0.`id`, a0.`name`, a0.`activated`, a0.`user_id`, a0.`inserted_at`, a0.`updated_at` FROM `accounts` AS a0 WHERE (a0.`id` = ?) [1] OK query=1.8ms queue=0.1ms
[debug] Request URL: "https://arn:aws:s3:::memento-is-dev/.s3-us-west-2.amazonaws.com/uploads/Screenshot from 2016-05-26 16:17:21.png"
[debug] Request HEADERS: [{"Authorization", "AWS4-HMAC Credential=accesskey/20160610/us-west-2/s3/aws4_request,SignedHeaders=content-length;host;x-amz-acl;x-amz-content-sha256;x-amz-date,Signature=9e9ced01075"}, {"host", "arn"}, {"x-amz-date", "20160610T045352Z"}, {"content-length", 75189}, {"x-amz-acl", "private"}, {"x-amz-content-sha256", "2d969d852e270a4c0aa2a73"}]
[debug] Request BODY: <<137, 80, 78, 71, 13, 10, 26, 10, 0, 0, 0, 13, 73, 72, 68, 82, 0, 0, 5, 86, 0, 0, 3, 0, 8, 6, 0, 0, 0, 207, 62, 60, 194, 0, 0, 0, 4, 115, 66, 73, 84, 8, 8, 8, 8, 124, 8, 100, 136, 0, ...>>
[error] Task #PID<0.734.0> started from #PID<0.721.0> terminating
** (ArgumentError) argument error
    :erlang.list_to_integer('aws:s3:::test-bucket/')
    (hackney) src/hackney_url.erl:196: :hackney_url.parse_netloc/2
    (hackney) src/hackney.erl:341: :hackney.request/5
    (httpoison) lib/httpoison/base.ex:396: HTTPoison.Base.request/9
    (ex_aws) lib/ex_aws/request/httpoison.ex:35: ExAws.Request.HTTPoison.request/4
    (ex_aws) lib/ex_aws/request.ex:38: ExAws.Request.request_and_retry/7
    lib/arc/storage/s3.ex:14: Arc.Storage.S3.put/3
    (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
    (elixir) lib/task/supervised.ex:40: Task.Supervised.reply/5
    (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Function: #Function<4.37961548/0 in Arc.Actions.Store.async_put_version/3>
    Args: []
[error] Ranch protocol #PID<0.721.0> (:cowboy_protocol) of listener Expense.Endpoint.HTTP terminated
** (exit) an exception was raised:
    ** (ArgumentError) argument error
        :erlang.list_to_integer('aws:s3:::test-bucket')
        (hackney) src/hackney_url.erl:196: :hackney_url.parse_netloc/2
        (hackney) src/hackney.erl:341: :hackney.request/5
        (httpoison) lib/httpoison/base.ex:396: HTTPoison.Base.request/9
        (ex_aws) lib/ex_aws/request/httpoison.ex:35: ExAws.Request.HTTPoison.request/4
        (ex_aws) lib/ex_aws/request.ex:38: ExAws.Request.request_and_retry/7
        lib/arc/storage/s3.ex:14: Arc.Storage.S3.put/3
        (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
        (elixir) lib/task/supervised.ex:40: Task.Supervised.reply/5
        (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3

Transformation not working (file not found)

I'm trying to do a basic transformation but I'm getting the following error:

[error] Task #PID<0.797.0> started from #PID<0.793.0> terminating
** (stop) :enoent
    (elixir) lib/system.ex:435: System.cmd("convert", ["/var/folders/2g/b9yxmq5d4bzc1yrxpwpgtdth0000gn/T//plug-1451/multipart-778711-631234-1", "-strip", "-thumbnail", "200x200", "-format", "png", "/var/folders/2g/b9yxmq5d4bzc1yrxpwpgtdth0000gn/T/CXJPJE5N3S7NFJZCIVL67NTHAXACTWRD"], [stderr_to_stdout: true])
    lib/arc/transformations/convert.ex:5: Arc.Transformations.Convert.apply/2
    lib/arc/actions/store.ex:50: Arc.Actions.Store.put_version/3
    (elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
    (elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
    (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Function: #Function<2.27415194/0 in Arc.Actions.Store.async_put_version/3>
    Args: []
[error] Ranch protocol #PID<0.793.0> (:cowboy_protocol) of listener Bookroo.Endpoint.HTTP terminated
** (exit) an exception was raised:
    ** (ErlangError) erlang error: :enoent
        (elixir) lib/system.ex:435: System.cmd("convert", ["/var/folders/2g/b9yxmq5d4bzc1yrxpwpgtdth0000gn/T//plug-1451/multipart-778711-631234-1", "-strip", "-thumbnail", "200x200", "-format", "png", "/var/folders/2g/b9yxmq5d4bzc1yrxpwpgtdth0000gn/T/CXJPJE5N3S7NFJZCIVL67NTHAXACTWRD"], [stderr_to_stdout: true])
        lib/arc/transformations/convert.ex:5: Arc.Transformations.Convert.apply/2
        lib/arc/actions/store.ex:50: Arc.Actions.Store.put_version/3
        (elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
        (elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
        (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3

It seems like this has to do with a missing file. This is my definition file.

defmodule Bookroo.BookImage do
  use Arc.Definition
  use Arc.Ecto.Definition

  # To add a thumbnail version:
  @versions [:original, :thumb]

  @extension_whitelist ~w(.jpg .jpeg .gif .png)

  def validate({file, _}) do
    file_extension = file.file_name |> Path.extname |> String.downcase
    Enum.member?(@extension_whitelist, file_extension)
  end

  def transform(:thumb, _) do
    {:convert, "-strip -thumbnail 200x200 -format png"}
  end

  # Override the persisted filenames:
  def filename(version, _) do
    version
  end

  # Override the storage directory:
  def storage_dir(version, {_, scope}) do
    "uploads/books/#{scope.uuid}/"
  end

  # Provide a default URL if there hasn't been a file uploaded
  def default_url(:thumb, _) do
    "http://placehold.it/200x200"
  end
end

Any thoughts? Could this have something to do with permissions?

Proposal: add `get` or `fetch` to storage

Currently the storage expects a url function to generate the URL which points to the uploaded file. However, there is no way to fetch an asset from the uploaded location. Once the asset has been uploaded, there is no way to get the asset from the storage location again.

An example of when this would be convenient is proxying requests. For example if you want to do some sort of permission check before allowing a user to download a file. You can do this on S3 with signed urls, but sometimes that is not desirable. Perhaps you wish to serve the content over a different protocol (such as a [web]socket.) In this case it would be nice if there was a convenient function to read the asset into memory from the storage location.

If this is something that you think is a good fit for the project then I can take a look at implementing it.
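
For illustration, such a fetch mostly reduces to operations the underlying backends already expose; a hypothetical sketch (not part of Arc's API):

# Local storage: read the resolved path directly.
{:ok, binary} = File.read("uploads/users/avatars/1/thumb.png")

# S3: ExAws already provides a GET operation.
ExAws.S3.get_object("my-bucket", "uploads/users/avatars/1/thumb.png")
|> ExAws.request()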

scope.id nil, upload not nested.

When creating a new user, I can upload a profile image, but since the user is being created at the same time as the image is being uploaded, my scope.id is nil. Do I need to use a UUID as the user.id? Is there something else I'm missing?

  def storage_dir(version, {file, scope}) do
    "uploads/user/profile_photo/#{scope.id}"
  end

Because scope.id is nil, all my uploads are being placed in uploads/user/profile_photo/ instead of uploads/user/profile_photo/#{user.id}.

connection error with version 0.2.x

In the new version with ex_aws instead of erlcloud, I get a connection error.
If the S3 bucket is in a region other than US East, do you need to specify that in the configuration?

Absolute links for local urls

What do you think about having absolute links when generating urls for the local storage?

Today the urls are relative (without a forward slash at the start); this means that if I try to use one on a page that is not at the website root, it will not be found.

The change is very simple, just need to add a / at the start of the url.

I can't think of any problem it may cause; it could also be made a configuration option if you don't want to change the current behaviour.

Getting a "SignatureDoesNotMatch" when trying to access files.

Uploading works fine, but I get a "SignatureDoesNotMatch" error when trying to access signed files.

Here is my config:

mix file:

     applications: [:phoenix, :phoenix_html, :cowboy, :logger, :gettext,
                    :phoenix_ecto, :postgrex, :ex_aws, :httpoison]]

deps:

    [{:phoenix, "~> 1.1"},
     {:phoenix_ecto, "~> 2.0"},
     {:postgrex, ">= 0.0.0"},
     {:phoenix_html, "~> 2.3"},
     {:phoenix_live_reload, "~> 1.0", only: :dev},
     {:cowboy, "~> 1.0"},
     {:gettext, "~> 0.9"},
     {:arc,  "~> 0.2.2"},
     {:arc_ecto, github: "stavro/arc_ecto"},
     {:ex_aws, "~> 0.4.10"},
     {:httpoison, "~> 0.7"}]

dev config file:

config :arc,
  bucket: "verktyget-development"

import_config "dev.secret.exs"

In dev.secret.exs:

config :ex_aws,
  access_key_id: "KEY",
  secret_access_key: "SECRET"

My attachment module:

defmodule MyApp.Image do
  use Arc.Definition

  # Include ecto support (requires package arc_ecto installed):
  use Arc.Ecto.Definition

  @versions [:original]

  # To add a thumbnail version:
  # @versions [:original, :thumb]

  # Whitelist file extensions:
  def validate({file, _}) do
    ~w(.jpg .jpeg .gif .png) |> Enum.member?(Path.extname(file.file_name))
  end

  # Define a thumbnail transformation:
  # def transform(:thumb, _) do
  #   {:convert, "-strip -thumbnail 250x250^ -gravity center -extent 250x250 -format png"}
  # end

  # Override the persisted filenames:
  def filename(version, _) do
    version
  end

  # Override the storage directory:
  def storage_dir(version, {file, scope}) do
    "uploads/media/#{scope.id}"
  end

  # Provide a default URL if there hasn't been a file uploaded
  # def default_url(version, scope) do
  #   "/images/avatars/default_#{version}.png"
  # end
end

To get the file I use:

MyApp.Image.url({modell.image, modell}, :original, signed: true)

Does arc need to be added to the applications list?

Hi!

When I compile for production I get this:

[...]
Building release with MIX_ENV=prod.

You have dependencies (direct/transitive) which are not in :applications!

The following apps should be added to :applications in mix.exs:

        arc => arc is missing from my_app

Continue anyway? Your release may not work as expected if these dependencies are required! [Yn]: 

Is it necessary to add arc to the applications list as well? If so, we should update the readme :)

If not, can I ignore this warning somehow?

Cannot upload file bigger than 700KB to s3

Every time I upload a file bigger than 700KB, I get this error:

[error] #PID<0.550.0> running AppscoastFm.Endpoint terminated
Server: localhost:4000 (http)
Request: POST /episodes
** (exit) exited in: Task.await(%Task{owner: #PID<0.550.0>, pid: #PID<0.553.0>, ref: #Reference<0.0.1.14482>}, 10000)
    ** (EXIT) time out

I did this: config :arc, version_timeout: 100_000_000 # milliseconds and

  plug Plug.Parsers,
    parsers: [:urlencoded, :multipart, :json],
    pass: ["*/*"],
    json_decoder: Poison,
    length: 100_000_000

Please help

Arc.Storage.S3 changeset is always invalid

Hi,

I'm trying to upload avatars. My setup is pretty normal given the examples; when I try to upload an image with Storage.S3 I always get is_invalid, but if I change it to Arc.Storage.S3 it works OK. I created a gist with all the relevant files/snippets (the controller action, the user model, and the uploader). I'm also using ecto 2.0.2:

https://gist.github.com/kainlite/58202fbffbe948ad240cfd3967e75c71

config.exs
config :arc,
  bucket: "wauploads",
  virtual_host: true

config :ex_aws, :httpoison_opts,
  recv_timeout: 60_000,
  hackney: [recv_timeout: 60_000, pool: false]

config :myapp, :ex_aws,
  debug_requests: true,
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY")
  # region: ["us-west-2"]
  # s3: [
  #   scheme: "https://",
  #   host: "s3-us-west-2.amazonaws.com",
  #   region: "us-west-2"
  # ]

mix.exs

  def application do
    [mod: {MyApp, []},
     applications: [:phoenix, :phoenix_html, :cowboy, :logger, :gettext,
                    :phoenix_ecto, :postgrex, :comeonin, :canary, :canada,
                    :hound, :ex_machina, :mailgun, :guardian,
                    :ex_aws, :httpoison, :arc, :arc_ecto
                  ]]
  end

  defp deps do
    [{:phoenix, "~> 1.1.4"},
     {:postgrex, "~> 0.11.2", [hex: :postgrex, optional: true]},
     {:phoenix_ecto, "~> 3.0.0", override: true},
     {:ecto, "~> 2.0.2", override: true},
     {:phoenix_html, "~> 2.6"},
     {:phoenix_live_reload, "~> 1.0", only: :dev},
     {:comeonin, "~> 2.0"},
     {:guardian, "~> 0.10.1"},
     {:gettext, "~> 0.9"},
     {:mailgun, "~> 0.1.2"},
     {:canary, "~> 0.14.1", override: true},
     {:canada, github: "jarednorman/canada", override: true},
     {:credo, "~> 0.1.6", only: [:dev, :test]},
     {:hound, "~> 1.0"},
     {:mix_test_watch, "~> 0.2", only: :dev},
     {:ex_machina, "~> 0.6.1"},
     {:exrm, "~> 1.0"},
     {:cowboy, "~> 1.0"},
     {:arc, "~> 0.5.3"},
     {:arc_ecto, "~> 0.4.2"},
     {:ex_aws, "~> 0.4.10"},
     {:httpoison, "~> 0.7"},
     {:poison, "~> 1.2"}
    ]
  end

Thanks.

Different public path for uploads

Path for the application to access the file:
priv/static/system/trainers/avatars/17/thumb-ms_big_yellow.png.png?v=63620674899

Public path to the file:
http://localhost:4000/system/trainers/avatars/17/thumb-ms_big_yellow.png.?v=63620674899

I tried it with:

MyApp.Avatar.url({@trainer.avatar, @trainer}, :thumb)

but it returns priv/static/system/trainers/avatars/17/thumb-ms_big_yellow.png.png?v=63620674899 which of course can't be accessed by the user since the domain is rooted to the priv/static directory.

How do I get this to work? Couldn't find anything in the docs.

Removing not used assets

I noticed that assets I uploaded to S3 persist when I update my 'avatar'.

I don't know if it's an Amazon thing or something I should/can handle.

:http_error, 307, 'Temporary Redirect'

I am getting the below error when trying to store files to S3.

iex(1)> Myapp.SiteImageUploader.store("path/to/image.jpg")
** (EXIT from #PID<0.703.0>) an exception was raised:
    ** (ErlangError) erlang error: {:aws_error, {:http_error, 307, 'Temporary Redirect', "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>TemporaryRedirect</Code><Message>Please re-send this request to the specified temporary endpoint. Continue to use the original request endpoint for future requests.</Message><Bucket>mybucket-dev</Bucket><Endpoint>mybucket-dev.s3-us-west-2.amazonaws.com</Endpoint><RequestId>801142CBF4589FAC</RequestId><HostId>RJuvEnRxWxmQMBMCLA7Bn1ie+znHQeWhunRFkjk1sjMsARTDeu92N0EmeU8xTOAP2gMR5ydLP1Q=</HostId></Error>"}}
        (erlcloud) src/erlcloud_s3.erl:1022: :erlcloud_s3.s3_request/8
        (erlcloud) src/erlcloud_s3.erl:682: :erlcloud_s3.put_object/6
        lib/arc/storage/s3.ex:9: Arc.Storage.S3.put/3
        (elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
        (elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
        (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3

My hunch is that it's related to not being able to specify the region. My bucket is in the us-west-2 region.

s3 host is incorrect for other regions

Hello. Thanks for your app.

I have faced an issue with the S3 storage. I have registered an s3-bucket in the eu-west-1 region and have these lines in my configuration:

config :arc,
  bucket: "bucketname"

config :ex_aws,
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY"),
  s3: [
    scheme: "https://",
    host: "s3-eu-west-1.amazonaws.com",
    region: "eu-west-1"
  ]

But when trying to build the url, it gives back https://s3.amazonaws.com/bucketname/filename.jpg, which causes an error when trying to access that path:

<Error>
  <Code>PermanentRedirect</Code>
  <Message>
    The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
  </Message>
  <Bucket>bucketname</Bucket>
  <Endpoint>bucketname.s3.amazonaws.com</Endpoint>
  <RequestId>some-request-id</RequestId>
  <HostId>
    some-host-id
  </HostId>
</Error>

The reason is these lines in s3.ex:

defp default_host do
  case virtual_host do
    true -> "https://#{bucket}.s3.amazonaws.com"
    _    -> "https://s3.amazonaws.com/#{bucket}"
  end
end

So I have changed the configuration to be:

config :arc,
  asset_host: "https://s3-eu-west-1.amazonaws.com/bucketname"

That solved the issue. Adding virtual_host: true solves it too. But maybe it is possible to reuse the configuration from ex_aws, since it already has everything needed? If so, I can make a PR for this.

Support for additional S3 upload options and/or built-in support for MIME types

S3 defaults to application/octet-stream for any files uploaded without a Content-Type header. As a result, linking to any file uploaded via Arc causes the browser to download the file instead of viewing it in the browser (PDFs, JPGs, etc.). It appears that other upload libraries (CarrierWave for example) set the MIME type based on the filename before uploading. Plug.MIME has path/1 which will give you the mime type for the given filename (though that would introduce a dependency on Plug).

As an alternative, or maybe in addition, it would be great to add another overridden function, similar to storage_dir/2, where you could return additional S3 PUT options (right now, Arc.Storage.S3.put/3 hardcodes the options list to [acl: acl], but it could be something along the lines of [acl: acl] ++ definition.upload_options(version, {file, scope}) or something like that). This way, if MIME support wasn't built in, the end user could still examine the filename and provide a content_type option if necessary.

Thoughts? I'm happy to submit a pull request for either or both of these but wanted to check in first. Thanks!

Scope is nil in nested models

Thanks for a great uploader.
I ran into an issue when attaching files to a nested model.

I have modified the uploader so I can upload an image on creation:

def storage_dir(version, {file, scope}) do
  "uploads/banner/images/#{scope.storage_dir}" # storage_id is a UUID
end

This works great :-)

But if I use the same uploader on a nested model (eg, a user has many party_images), then scope is nil when I try to create/update a party_image; version and file are OK.

File fingerprint

Is there a way to get the fingerprint of the uploaded file? I want to append the fingerprint to the filename.

Doesn't work with most recent version of Phoenix?

Hello, I'm trying to use your app and I set up everything like in the README but it keeps giving this error:

== Compilation error on file web/uploaders/attachment.ex ==
** (CompileError) web/uploaders/attachment.ex:2: module Arc.Definition is not loaded and could not be found
(elixir) expanding macro: Kernel.use/1
web/uploaders/attachment.ex:2: MyApp.Attachment (module)
(elixir) lib/kernel/parallel_compiler.ex:100: anonymous fn/4 in Kernel.ParallelCompiler.spawn_compilers/8

I've got the 'use Arc.Definition' macro and have the mix.exs file set up correctly, as well as the config.exs (I think).

Upload progress

Hi,

It would be very nice if the async process could return a progress status, either in the form of 15324/2123456 bytes or as a percentage.

I've been looking at both hackney and httpotion but could not find any hooks regarding this myself.
I hope you might have more insight into it :)

Gerard

Transformations can't handle complex command args

I'm building up an ffmpeg command that sets metadata and I can't pass this transform in its current state:

fn(input, output) -> ~s(-f mp3 -i #{input} -metadata title="The Title" -f mp3 #{output}) end

I believe the problem arises in Arc.Transformations.Convert where it always takes the args variable and sends it to ~w() before passing it to System.cmd. The string in this case is:

"-f mp3 -i in -metadata title=\"The Title\" -f mp3 out"

Calling ~w() on that produces:

["-f", "mp3", "-i", "in", "-metadata", "title=\"The", "Title\"", "-f", "mp3", "out"]

ffmpeg chokes on this because it improperly splits the title in the metadata.

A possible solution: check the type of args before sending it to System.cmd. If it's a string, pass it to ~w(). If it's already a list, leave it alone. This would allow me to build up the argument list inside my function and return it instead of a string.
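
Roughly, the proposed check could look like this (a hypothetical sketch, not Arc's actual code):

args =
  cond do
    is_binary(args) -> String.split(args) # current ~w()-style behavior
    is_list(args) -> args # pass lists through untouched
  end

System.cmd(program, args, stderr_to_stdout: true)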

If that solution works for you, I'm happy to PR. Please let me know, thanks.

Namespaced Uploads

I'm building out a server that supports multiple sites, where each site is identified by its domain. Therefore, I'd like to store files for each site in their own folder. Is there a way to do that with Arc?

Thanks,
Lee

How to use a transformer which doesn't accept an output file name as a parameter?

Arc.Transformations.Convert expects the program to leave a file with a temporary name in the file system, but soffice (LibreOffice's converter), for example, doesn't take an output path option (only --outdir).

With the following configuration

  def transform(:html, _) do
    {
      :soffice,
      &args/2,
      :html
    }
  end

  def args(input, _output) do
    " --headless --convert-to html #{input} "
  end

My application throws:

** (File.CopyError) could not copy from /var/folders/g7/dtxf2tc57z71whsmx20slp0h0000gn/T/YC7IUQLEUHTWUPUF77L67YFIE6N22MZA to uploads/documents/html-609W56th.xlsx.html: no such file or directory

If I set --outdir #{System.tmp_dir} it still doesn't know what file name to look for.

How can we work around this issue?

Thanks for your work on this library! :)

No function clause - transform :noaction and ffmpeg

I'm attempting to use Arc's transformation functionality to reformat audio using FFmpeg before I store it. Currently I'm running into an issue with both the :noaction transformation and the "action" transformation.

When a user uploads a file in the desired format (where I don't need to reformat it) and I attempt the :noaction method, I get the following error:

[error] Task #PID<0.553.0> started from #PID<0.550.0> terminating
** (FunctionClauseError) no function clause matching in Arc.Processor.apply_transformation/2
    (arc) lib/arc/processor.ex:6: Arc.Processor.apply_transformation(%Arc.File{file_name: "audio.wav", path: "/tmp/plug-1458/multipart-862094-378542-2"}, :noaction)
    (arc) lib/arc/actions/store.ex:50: Arc.Actions.Store.put_version/3
    (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
    (elixir) lib/task/supervised.ex:40: Task.Supervised.reply/5
    (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Function: #Function<2.66792300/0 in Arc.Actions.Store.async_put_version/3>
    Args: []

audio_file.ex

defmodule AudioFile do
  use Arc.Definition

  @extension_whitelist ~w(.wav)

  def acl(_, _), do: :private

  def filename(_, {_, scope}) do
    "#{scope.storage_key}"
  end

  def storage_dir(_, {_, scope}) do
    "#{scope.name}/"
  end

  def transform(_version_atom, {file, _scope}) do
    file_extension = get_file_extension(file)

    if Enum.member?(@extension_whitelist, file_extension) do
      :noaction
    else
      {:ffmpeg, fn(input, output) -> "#{input} -format wav #{output}" end, :wav}
    end
  end

  defp get_file_extension(file) do
    file.file_name |> Path.extname |> String.downcase
  end
end

When I attempt the deprecated method listed in lib/arc/processor.ex:7, by wrapping the :noaction in a tuple like {:noaction},

I instead get what looks like the transform function running multiple times: twice without error, and a third time without including the parameters at all.

Based on the lib/arc/processor.ex file it looks like that function should be defined, and have an appropriately matching pattern.

When attempting to actually run the transformation {:ffmpeg, fn(input, output) -> "#{input} -format png #{output}" end, :wav} I run into a similar issue:

[error] Task #PID<0.552.0> started from #PID<0.549.0> terminating
** (FunctionClauseError) no function clause matching in Arc.Processor.apply_transformation/2
    (arc) lib/arc/processor.ex:6: Arc.Processor.apply_transformation(%Arc.File{file_name: "mp3_test.mp3", path: "/tmp/plug-1458/multipart-864421-249072-1"}, {:ffmpeg, #Function<1.113334142/2 in AudioFile.transform/2>})
    (arc) lib/arc/actions/store.ex:50: Arc.Actions.Store.put_version/3
    (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
    (elixir) lib/task/supervised.ex:40: Task.Supervised.reply/5
    (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Function: #Function<2.66792300/0 in Arc.Actions.Store.async_put_version/3>
    Args: []

This error doesn't seem to make sense to me, given that the function clauses on lines 8 and 12 of lib/arc/processor.ex should match.

I'm wondering if I'm misunderstanding how Arc should be used, or if there are other issues at play here.

Crashing when I submit my form if I use a scope. Maybe it's something I'm doing wrong

# web/uploaders/avatar.ex
defmodule Digiramp.Avatar do
  # ...

  def filename(version, {file, scope}) do
    "#{scope.id}_#{version}_#{file.file_name}"
  end
end

from the console

[error] Task #PID<0.951.0> started from #PID<0.949.0> terminating
Function: #Function<0.47797856/0 in Arc.Actions.Store.async_put_version/3>
Args: []
** (exit) an exception was raised:
** (UndefinedFunctionError) undefined function: nil.id/0
nil.id()
(digiramp) web/uploaders/avatar.ex:31: Digiramp.Avatar.filename/2
(arc) lib/arc/definition/versioning.ex:10: Arc.Definition.Versioning.resolve_file_name/3
(arc) lib/arc/actions/store.ex:49: Arc.Actions.Store.put_version/3
(elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
(elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
(stdlib) proc_lib.erl:237: :proc_lib.init_p_do_apply/3
[error] Ranch protocol #PID<0.949.0> (:cowboy_protocol) of listener Digiramp.Endpoint.HTTP terminated
** (exit) an exception was raised:
** (UndefinedFunctionError) undefined function: nil.id/0
nil.id()
(digiramp) web/uploaders/avatar.ex:31: Digiramp.Avatar.filename/2
(arc) lib/arc/definition/versioning.ex:10: Arc.Definition.Versioning.resolve_file_name/3
(arc) lib/arc/actions/store.ex:49: Arc.Actions.Store.put_version/3
(elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
(elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
(stdlib) proc_lib.erl:237: :proc_lib.init_p_do_apply/3

The same thing happens when overriding the storage directory:

  def storage_dir(version, {file, scope}) do
    "uploads/users/avatars/#{scope.id}"
  end
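The nil.id/0 error indicates that filename/2 received {file, nil}, i.e. no scope reached the definition, which happens when store is called without the {file, scope} tuple. A minimal sketch that makes that failure mode explicit rather than crashing on nil.id:

# Matches when no scope is attached (e.g. store was called with a bare file)
def filename(version, {file, nil}), do: "#{version}_#{file.file_name}"
def filename(version, {file, scope}), do: "#{scope.id}_#{version}_#{file.file_name}"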

Scope in storage_dir on create

Would it be easy/straightforward to have the scope available in storage_dir after it has passed through some callbacks? My use case: I have a model with a slug column and I want that column to be part of the upload path. The column's content is generated in a before_insert callback. The goal is to have a unique path for each record, with an identifier persisted in the database. As it stands in arc, the scope contains no id and none of the other fields set in callbacks.

Could this be achieved with arc? My hack right now is to inject the generated slug value in the controller before it goes into the changeset, as sketched below.

Thank you for this package, by the way, it’s really fun and easy to use 😄
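For illustration, the controller hack described above might look like the following sketch; the Post names and slug rule are assumptions, not arc API:

# Hypothetical Phoenix controller action: generate the slug before the
# params reach the changeset, so it is on the scope when storage_dir/2 runs.
def create(conn, %{"post" => post_params}) do
  slug =
    post_params["title"]
    |> String.downcase()
    |> String.replace(~r/[^a-z0-9]+/, "-")

  changeset = Post.changeset(%Post{}, Map.put(post_params, "slug", slug))
  # ... Repo.insert(changeset) and the usual render/redirect
end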

Upload size 413

Hi

When uploading larger images on my production app, I receive the following error:

413 Request Entity Too Large

How can I raise the maximum upload size, and is this done at the application layer or in NGINX?
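For what it's worth, a sketch of the application-layer side: Plug's multipart parser caps request bodies at 8MB by default, and :length raises that cap (the value below is illustrative). If NGINX fronts the app, its client_max_body_size must be raised as well, since a 413 is typically emitted by the proxy before the request ever reaches Plug.

# endpoint.ex
plug Plug.Parsers,
  parsers: [:urlencoded, :multipart, :json],
  pass: ["*/*"],
  json_decoder: Poison,
  length: 100_000_000  # max body size in bytes; default is 8_000_000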

Generate random file name

Hi!

My model can have multiple images, so I need the filenames to be unique. Is there any way to do this? I have tried to do it in my changeset function, but I can't make it work since cast_attachments uploads the image directly.

Any ideas?
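One approach, as a sketch: derive a stable unique name from the scope instead of randomizing inside filename/2, since filename/2 is also called later when building URLs and must return the same value each time. This assumes the schema carries a :uuid field (a hypothetical name) populated before the attachment is cast.

def filename(version, {_file, scope}) do
  # Stable per-record identifier keeps store and url generation in sync
  "#{scope.uuid}_#{version}"
end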

Moving from erlcloud to ex_aws ?

I think it may be worth moving from erlcloud to ex_aws for accessing S3 services: the latter gives far more useful responses, is easier to configure, and has a few other advantages. You can find it here. What do you think?

Noobs here.....

I'd like to thank you for such an amazing library....

  1. Arc_ecto doesn't seem to work with phoenix_ecto (current version), but I managed by using arc_ecto 0.3.2.
  2. While using arc_ecto 0.3.2, I realized that scope.id doesn't show up in the storage dir... how do I handle this?

Local file is processed and uploaded to S3 but nothing is stored in database

I'm following the examples in the Readme closely but I cannot get it to work. I think something is wrong here.

I'm using Elixir's Arc with Ecto and Amazon S3 to store files that I have previously downloaded. Everything seems to work and the files end up on S3, but nothing is stored in my database. So if I try to generate a URL, I always get the default image back.

This is how I store a file:

iex > user = Repo.get(User, 3)
iex > Avatar.store({"/tmp/my_file.png", user})
{:ok, "my_file.png"}
iex > user.avatar
nil

But the user.avatar field is still nil.

My user module:


defmodule MyApp.User do
  use MyApp.Web, :model
  use Arc.Ecto.Schema

  alias MyApp.Repo

  schema "users" do
    field :name, :string
    field :email, :string
    field :avatar, MyApp.Avatar.Type    
    embeds_many :billing_emails, MyApp.BillingEmail
    embeds_many :addresses, MyApp.Address
    timestamps
  end

  @required_fields ~w(name email)
  @optional_fields ~w(avatar)

  def changeset(model, params \\ :empty) do
    model
    |> cast(params, @required_fields, @optional_fields)
    |> cast_embed(:billing_emails)
    |> cast_embed(:addresses)
    |> validate_required([:name, :email])
    |> validate_format(:email, ~r/@/)
    |> unique_constraint(:email)
    |> cast_attachments(params, [:avatar])
  end

end

The Avatar uploader:

defmodule MyApp.Avatar do
  use Arc.Definition

  # Include ecto support (requires package arc_ecto installed):
  use Arc.Ecto.Definition

  @acl :public_read

  # To add a thumbnail version:
  @versions [:original, :thumb]

  # Whitelist file extensions:
  def validate({file, _}) do
    ~w(.jpg .jpeg .gif .png) |> Enum.member?(Path.extname(file.file_name))
  end

  # Define a thumbnail transformation:
  def transform(:thumb, _) do
    {:convert, "-strip -thumbnail 250x250^ -gravity center -extent 250x250 -format png", :png}
  end

  def transform(:original, _) do
    {:convert, "-format png", :png}
  end

  def filename(version,  {file, scope}), do: "#{version}-#{file.file_name}"

  # Override the storage directory:
  def storage_dir(version, {file, scope}) do
    "uploads/user/avatars/#{scope.id}"
  end

  # Provide a default URL if there hasn't been a file uploaded
  def default_url(version, scope) do
    "/images/avatars/default_#{version}.png"
  end

end
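For what it's worth, a sketch of the flow that does touch the database: Avatar.store/1 on its own only uploads the file, while the avatar column is written by cast_attachments/3 inside the changeset, so the record has to go through a Repo.update. The filename and path below are illustrative.

upload = %Plug.Upload{filename: "my_file.png", path: "/tmp/my_file.png"}

Repo.get(User, 3)
|> User.changeset(%{"avatar" => upload})
|> Repo.update()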

Resize animated gif

I tried resizing an animated gif but it messes up the animation after resizing. I found out that there are two steps required to resize an animated gif.

Example

convert do.gif -coalesce temporary.gif
convert -size <original size> temporary.gif -resize 24x24 smaller.gif

Is there a way to get a resized version of an animated gif with arc?
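One way to fold both steps into a single convert invocation via a transform, as a sketch (assuming ImageMagick applies -coalesce before the resize when the options are given in that order):

def transform(:thumb, _) do
  # -coalesce rebuilds full frames so -resize doesn't break the animation
  {:convert, "-coalesce -resize 24x24", :gif}
end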

error in lib/arc/storage/s3.ex:69: Arc.Storage.S3.bucket/0

I get this error:

[error] Task #PID<0.487.0> started from #PID<0.463.0> terminating
** (MatchError) no match of right hand side value: :error
    lib/arc/storage/s3.ex:69: Arc.Storage.S3.bucket/0
    lib/arc/storage/s3.ex:14: Arc.Storage.S3.put/3
    (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
    (elixir) lib/task/supervised.ex:40: Task.Supervised.reply/5
    (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Function: #Function<4.60738616/0 in Arc.Actions.Store.async_put_version/3>
    Args: []

but I'm not using S3 storage.

avatar params:

%{avatar: %Plug.Upload{content_type: "image/png", filename: "Klogo副本.png", 
   path: "/var/folders/pn/jhz7bfx944b0ftdxrxtlsv7h0000gn/T//plug-1459/multipart-849582-337479-2"},
  id: 12}

user params:

%ChinaPhoneix.User{__meta__: #Ecto.Schema.Metadata<:loaded>, admin: false,
 age: nil, avatar: nil, comments: [], email: "[email protected]", id: 12,
 inserted_at: #Ecto.DateTime<2016-04-05T08:54:52Z>, name: "ssj4429108",
 password: nil, password_confirmation: nil,
 password_digest: "$2b$12$FOGrltWbsFXEMu24RXf1VOriDQaB3fXPS3WfWi5NzHflvDOYma/kC",
 posts: [], score: 1, updated_at: #Ecto.DateTime<2016-04-05T09:00:50Z>}

The error occurs in:

          changeset = User.update_changeset(user, params)
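A sketch of the likely fix: Arc falls back to the S3 backend when no storage is configured, which is why S3 code runs even though S3 isn't being used. Selecting the Local backend explicitly avoids the bucket lookup that raises the MatchError:

# config/config.exs
config :arc,
  storage: Arc.Storage.Local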

Delay transformations

Would it be possible to store the original upload, but delay transformations until later?

For instance, video encoding takes forever, especially if the clip is of any significant length. It would make sense to offload this task to a pool of workers, which would handle encoding later on, once the upload (to S3, etc) has completed.

As I understand it, Arc currently runs transformations automatically. For images, that's often fine. But larger files could benefit from manual control.

More secure file validation?

Hi

How can I safely validate avatars uploaded by users?

It seems to me that a user could easily just rename a virus.exe to avatar.png and pass the default validation.

Any ideas?
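One option, sketched below (not part of Arc), is to validate the file's content rather than its name, e.g. by checking magic bytes inside validate/1; the PNG signature is shown:

def validate({file, _scope}) do
  case File.read(file.path) do
    # PNG files start with the 8-byte signature 89 50 4E 47 0D 0A 1A 0A
    {:ok, <<0x89, "PNG", 0x0D, 0x0A, 0x1A, 0x0A, _rest::binary>>} -> true
    _ -> false
  end
end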

Signed URLs

Has anyone done signed URLs in any region other than us-east-1?

I get SignatureDoesNotMatch from amazonaws

I tried

config :arc,
  virtual_host: true,
  bucket: "bucketname",
  asset_host: "https://bucketname.s3-eu-west-1.amazonaws.com/"

but still get the same error.
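One thing worth checking, as a sketch and assuming a version of Arc backed by ex_aws: v4 signatures are region-specific, so the client must sign for the bucket's actual region rather than the us-east-1 default.

# config/config.exs
config :ex_aws,
  region: "eu-west-1"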
