elastocloud / elasto

Lightweight cloud storage library and client

Home Page: http://elastocloud.org

License: GNU Lesser General Public License v2.1

C 97.31% Python 2.68% Makefile 0.01%

elasto's Introduction

Elasto

Warning: This project is no longer actively maintained and should not be used.

Summary

Elasto is a lightweight library and client utility for managing and manipulating cloud storage objects via REST protocols. Microsoft Azure and Amazon S3 cloud storage protocols are supported.

Installation

Prebuilt packages for GNU/Linux distributions are available for download from the Elasto project website: http://elastocloud.org/. Alternatively, Elasto can be built from source using the Waf build framework.

The libevent, libexpat and OpenSSL (libcrypto) development libraries are required for building.
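
On Debian or Ubuntu based systems, for example, the corresponding development packages are usually named libevent-dev, libexpat1-dev and libssl-dev (package names vary between distributions):

> sudo apt install libevent-dev libexpat1-dev libssl-dev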

To compile the library and client, run the following from the top of the source tree:

> ./waf configure
> ./waf build

After building, Elasto can be installed by running:

> ./waf install
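
waf's standard --prefix and --destdir options should also apply here, assuming Elasto's build scripts don't override them. For example, to target /usr while staging the install into a temporary root:

> ./waf configure --prefix=/usr
> ./waf build
> ./waf install --destdir=/tmp/elasto-stage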

Running (short version)

Azure

  1. Create an Azure account

  2. Download the PublishSettings file for the account:
    https://manage.windowsazure.com/publishsettings/index

  3. Start the Elasto client, connecting to the Azure Blob Service:

> elasto -s Azure_PublishSettings_File

Amazon S3

  1. Create an Amazon S3 account

  2. Create an IAM group with S3 access, assign a new user to the group:
    https://console.aws.amazon.com/iam/home#home

  3. Download the user’s access key (credentials.csv) file

  4. Start the Elasto client:

> elasto -k iam_creds_file

Running (not so short version)

Azure

The first step is to create an Azure account at https://azure.microsoft.com.

Authentication

The Elasto client can authenticate with Azure using one of two credential parameters:

Access Key

An Access Key is associated with a specific storage account, and only allows access to Blob and File Service objects nested within the corresponding account. E.g.

> elasto -K access_key

PublishSettings

A PublishSettings file can be used to create and manipulate storage accounts, as well as the underlying Blob and File Service objects. It can be downloaded from https://manage.windowsazure.com/publishsettings/index.
The file contains security-sensitive information, so it should be kept private.

> elasto -s PublishSettings_path

Service URI

Azure Blob Service (Block Blobs - default)

> elasto -u abb:// ...

Azure Blob Service (Page Blobs)

> elasto -u apb:// ...

Azure File Service

> elasto -u afs:// ...

Amazon S3

Create an Amazon S3 account at https://aws.amazon.com/s3/.

Authentication

Create an IAM group with S3 access, assign a new user to the group:
https://console.aws.amazon.com/iam/home#home → Create a New Group of Users

The IAM user creation wizard allows for the download of access credentials. Select "Generate an access key for each User", and subsequently "Download Credentials".
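
For reference, the downloaded credentials.csv is a small comma-separated file along the lines of the following (the values below are placeholders taken from AWS documentation examples, not real credentials):

User Name,Access Key Id,Secret Access Key
elasto-user,AKIAIOSFODNN7EXAMPLE,wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY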

Commands can then be issued by running the elasto client binary with the -k argument. E.g.

> elasto -k iam_creds_file <command>

Alternatives

Ceph

Open-source, distributed storage system. https://ceph.com

Azure SDK for Rust

https://github.com/Azure/azure-sdk-for-rust

AWS SDK for Rust

https://github.com/awslabs/aws-sdk-rust

elasto's People

Contributors

ddiss

Forkers

ddiss zhangjinde

elasto's Issues

signature checks failing against premium Azure accounts created via web portal

Procedure to recreate this:

  • create a new storage account using the Azure web portal via the (classic) or standard account window
  • flag the account as premium storage
  • attempt to access the account using elasto_cli

In such a case I see:
../lib/op.c:391:op_rsp_error_process(): got error msg: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
../lib/op.c:393:op_rsp_error_process(): signature source was: GET

x-ms-date:Tue, 23 Aug 2016 22:33:06 GMT
x-ms-version:2015-02-21
/tcmm/?comp=list

(I have a change to dump the signature source string on a 403 response.)
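
For context, the signature being checked is an HMAC over that dumped string. Below is a minimal sketch of how an Azure Shared Key style signature is derived, using OpenSSL's libcrypto (already a build dependency); this is an illustration only, not elasto's signing code, and the key and string-to-sign values are placeholders.

/*
 * Illustration only, not elasto's signing code. An Azure Shared Key style
 * signature is Base64(HMAC-SHA256(string-to-sign, account-key)), where the
 * account key is the base64-decoded storage access key.
 * Build with: cc -o sign_sketch sign_sketch.c -lcrypto
 */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* @raw_key: already base64-decoded account key
 * @sts: the string-to-sign, as dumped by op_rsp_error_process() */
static int sign_sts(const unsigned char *raw_key, int raw_key_len,
                    const char *sts, char *b64_sig, size_t b64_sig_len)
{
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int md_len = 0;

        if (HMAC(EVP_sha256(), raw_key, raw_key_len,
                 (const unsigned char *)sts, strlen(sts),
                 md, &md_len) == NULL)
                return -1;

        /* base64 output needs ((md_len + 2) / 3) * 4 chars plus a NUL */
        if (b64_sig_len < (((size_t)md_len + 2) / 3) * 4 + 1)
                return -1;
        EVP_EncodeBlock((unsigned char *)b64_sig, md, md_len);
        return 0;
}

int main(void)
{
        /* placeholder values: substitute the real (decoded) account key and
         * the dumped signature source string to compare signatures */
        const unsigned char raw_key[] = "not-a-real-account-key";
        const char *sts = "placeholder-string-to-sign";
        char sig[128];

        if (sign_sts(raw_key, sizeof(raw_key) - 1, sts, sig, sizeof(sig)) < 0)
                return 1;
        printf("signature: %s\n", sig);
        return 0;
}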

Uploads of long-named Azure block blobs fail with EINVAL

Multi-part block blob uploads currently use the blob name as a prefix for the block ID. This is no good for a couple of reasons (a possible alternative is sketched after the list below):

  • block IDs are limited to 64 bytes, whereas blob names can be up to 1024 bytes
  • the block ID namespace is already nested under the blob
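
One possible direction, sketched below under the assumption that a block sequence number is available at upload time (this is not elasto's actual code), is to derive fixed-width block IDs from the block number alone, so the encoded ID length is independent of the blob name:

/*
 * Sketch only, not elasto's code: derive a short, fixed-width block ID from
 * the block sequence number, so the encoded ID length is independent of the
 * blob name and the same for every block of a blob.
 * Build with: cc -o block_id block_id.c -lcrypto
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <openssl/evp.h>

/* @b64_id must have room for 16 base64 characters plus a terminating NUL */
static void block_id_encode(uint64_t blk_num, char *b64_id)
{
        char raw[13];

        /* fixed-width decimal, so all block IDs encode to the same length */
        snprintf(raw, sizeof(raw), "%012" PRIu64, blk_num);
        EVP_EncodeBlock((unsigned char *)b64_id,
                        (const unsigned char *)raw, 12);
}

int main(void)
{
        char b64_id[17];
        uint64_t i;

        for (i = 0; i < 3; i++) {
                block_id_encode(i, b64_id);
                printf("block %" PRIu64 ": %s\n", i, b64_id);
        }
        return 0;
}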

EOF on send should trigger TCP reconnect

tcmu_elasto handles are long-lived, in that they are used for all I/O for the life of the tcmu device. In some cases, the session may close, with subsequent I/Os triggering a connection layer EOF error:

../lib/conn.c:778:ev_conn_close_cb(): Connection to XXX.blob.core.windows.net closed
../lib/conn.c:458:ev_err_cb(): got client error: EOF
../lib/conn.c:598:ev_done_cb(): NULL request on completion, an error occurred!
../lib/op.c:392:op_rsp_error_process(): got error msg: An HTTP header that
Could not write: Input/output error
io error 0x9eea80 2 2a -5 32768 37961728
[  520.353660] sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[  520.353663] sd 0:0:0:0: [sda] tag#0 Sense Key : 0x2 [current]
[  520.353664] sd 0:0:0:0: [sda] tag#0 ASC=0x8 ASCQ=0x0
[  520.353666] sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x2a 2a 00 00 01 21 a0 00 00 40 00
[  520.353667] blk_update_request: I/O error, dev sda, sector 74144
[  520.353669] BTRFS error (device sda): bdev /dev/sda errs: wr 1, rd 0, flush 0, corrupt 0, gen 0

The EOF error above should trigger a reconnect and retry, to ensure that the disconnect doesn't surface as an I/O error.

client commands missing

Commit c8c46e0 caused a regression, in that the command constructors (which register each command with the client binary) are no longer called.
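
For background, such self-registration is commonly implemented in C with constructor functions that run before main(); whether elasto uses exactly this mechanism is not stated here, but the generic pattern, and the way it can silently break, looks roughly like the sketch below (not elasto's code):

/*
 * Generic self-registration sketch (not elasto's code): each command
 * registers itself at program start via a constructor function. If the
 * registering object ends up in a static archive that nothing references,
 * the linker drops it, the constructor never runs, and the command is
 * silently missing from the client.
 * Build with: cc -o client_sketch client_sketch.c
 */
#include <stdio.h>
#include <string.h>

#define CMD_MAX 32

struct cmd {
        const char *name;
        int (*run)(int argc, char **argv);
};

static struct cmd cmd_tbl[CMD_MAX];
static int cmd_count;

static void cmd_register(const char *name, int (*run)(int, char **))
{
        if (cmd_count < CMD_MAX)
                cmd_tbl[cmd_count++] = (struct cmd){ .name = name, .run = run };
}

static int cmd_ls_run(int argc, char **argv)
{
        (void)argc; (void)argv;
        printf("ls: would list cloud objects here\n");
        return 0;
}

/* runs before main(), registering the hypothetical "ls" command */
__attribute__((constructor))
static void cmd_ls_init(void)
{
        cmd_register("ls", cmd_ls_run);
}

int main(int argc, char **argv)
{
        int i;

        for (i = 0; i < cmd_count; i++) {
                if (argc > 1 && strcmp(argv[1], cmd_tbl[i].name) == 0)
                        return cmd_tbl[i].run(argc - 1, argv + 1);
        }
        fprintf(stderr, "unknown or unregistered command\n");
        return 1;
}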

AWS redirects on object elasto_fopen() aren't handled

With the libelasto AWS backend, elasto_fopen() checks for the existence of an object using the HEAD Object request (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html). On redirect (e.g. while the bucket is being created), the server sends a 307 Temporary Redirect response without providing XML data to specify the redirect target endpoint; this XML data is only provided with GET/PUT/etc. requests, not HEAD. I expect this is in line with the HTTP standard, as the libevent client is also implemented to skip any data segments for HEAD requests.

To successfully handle redirects on open, the HEAD request issued on open could be changed to a (very small data) GET request, but that still leaves a few other HEAD requests, used for write (check existing object length before I/O) and stat.
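
A minimal-range GET along those lines could look like the following sketch, written against libevent's evhttp API; the endpoint, object path and the use of evhttp (rather than libelasto's own connection layer) are assumptions for illustration only.

/*
 * Sketch of the "very small data GET" idea using libevent's evhttp API.
 * The endpoint, object path and use of evhttp here are illustrative
 * assumptions, not how libelasto dispatches requests.
 * Build with: cc -o ranged_get ranged_get.c -levent
 */
#include <stdio.h>
#include <event2/event.h>
#include <event2/http.h>

static void rsp_cb(struct evhttp_request *req, void *arg)
{
        struct event_base *base = arg;

        if (req != NULL)
                printf("response code: %d\n",
                       evhttp_request_get_response_code(req));
        event_base_loopexit(base, NULL);
}

int main(void)
{
        struct event_base *base = event_base_new();
        struct evhttp_connection *conn;
        struct evhttp_request *req;

        /* placeholder endpoint and object path */
        conn = evhttp_connection_base_new(base, NULL,
                                          "example-bucket.s3.amazonaws.com", 80);
        req = evhttp_request_new(rsp_cb, base);

        evhttp_add_header(evhttp_request_get_output_headers(req),
                          "Host", "example-bucket.s3.amazonaws.com");
        /* ask for the first byte only; unlike HEAD, an error or redirect
         * response to GET carries the XML body describing the new endpoint */
        evhttp_add_header(evhttp_request_get_output_headers(req),
                          "Range", "bytes=0-0");

        evhttp_make_request(conn, req, EVHTTP_REQ_GET, "/example-object");
        event_base_dispatch(base);

        evhttp_connection_free(conn);
        event_base_free(base);
        return 0;
}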

Large uploads stall when HTTPS is used

When uploading a large Azure file, blob or S3 object over an HTTPS connection, the transfer often stalls indefinitely.
This appears to be due to the current request dispatch behaviour, which sees Elasto disconnect and reconnect for every request.
This bug will be fixed with "Connection: Keep-Alive" support, and corresponding persistent server hostname changes.
