
backblaze-b2's Introduction

Backblaze B2 Node.js Library


A customizable B2 client for Node.js:

  • Uses axios. You can control the axios instance at the request level (see axios and axiosOverride config arguments) and at the global level (see axios config argument at instantiation) so you can use any axios feature.
  • Automatically retries on request failure. You can control retry behaviour using the retries argument at instantiation.

Usage

This library uses promises, so all actions on a B2 instance return a promise in the following pattern:

b2.instanceFunction(arg1, arg2).then(
    (response) => { /* handle success */ },
    (err) => { /* handle error */ }
);

Basic Example

const B2 = require('backblaze-b2');

const b2 = new B2({
  applicationKeyId: 'applicationKeyId', // or accountId: 'accountId'
  applicationKey: 'applicationKey' // or masterApplicationKey
});

async function getBucketExample() {
  try {
    await b2.authorize(); // must authorize first (authorization lasts 24 hrs)
    let response = await b2.getBucket({ bucketName: 'my-bucket' });
    console.log(response.data);
  } catch (err) {
    console.log('Error getting bucket:', err);
  }
}

Response Object

Each request resolves with the axios response object: the B2 API's JSON payload is in response.data, and the standard axios fields (status, statusText, headers) are also available.

How it works

Each action (see reference below) takes arguments and constructs an axios request. You can add additional axios options at the request level using:

  • The axios argument (object): each property in this object is added to the axios request object only if it does not conflict with an existing property.
  • The axiosOverride argument (object): each property in this object is added to the axios request object by overriding conflicting properties, if any. Don't use this unless you know what you're doing!
  • Both axios and axiosOverride work by recursively merging properties, so if you pass axios: { headers: { 'your-custom-header': 'header-value' } }, the entire headers object will not be overridden - each header property (your-custom-header) will be compared.
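To make the merge behaviour concrete, here is a minimal sketch of recursive merging (illustrative only, not the library's internal code):

```javascript
// Illustrative sketch of the recursive merge semantics: nested objects are
// merged key by key instead of being replaced wholesale.
function deepMerge(target, source) {
  for (const key of Object.keys(source)) {
    const bothObjects =
      source[key] && typeof source[key] === 'object' && !Array.isArray(source[key]) &&
      target[key] && typeof target[key] === 'object' && !Array.isArray(target[key]);
    if (bothObjects) {
      deepMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// The existing Authorization header survives; the custom header is added.
const requestConfig = { headers: { Authorization: 'token' } };
deepMerge(requestConfig, { headers: { 'your-custom-header': 'header-value' } });
```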

Reference

const B2 = require('backblaze-b2');

// All functions on the b2 instance return the response from the B2 API in the success callback
// i.e. b2.foo(...).then((b2JsonResponse) => {})

// create B2 object instance
const b2 = new B2({
    applicationKeyId: 'applicationKeyId', // or accountId: 'accountId'
    applicationKey: 'applicationKey', // or masterApplicationKey
    // optional:
    axios: {
        // overrides the axios instance default config, see https://github.com/axios/axios
    },
    retry: {
        retries: 3 // this is the default
        // for additional options, see https://github.com/softonic/axios-retry
    }
});

// common arguments - you can use these in any of the functions below
const common_args = {
    // axios request level config, see https://github.com/axios/axios#request-config
    axios: {
        timeout: 30000 // (example)
    },
    axiosOverride: {
        /* Don't use me unless you know what you're doing! */
    }
}

// authorize with provided credentials (authorization expires after 24 hours)
b2.authorize({
    // ...common arguments (optional)
});  // returns promise

// create bucket
b2.createBucket({
    bucketName: 'bucketName',
    bucketType: 'bucketType' // one of `allPublic`, `allPrivate`
    // ...common arguments (optional)
});  // returns promise

// delete bucket
b2.deleteBucket({
    bucketId: 'bucketId'
    // ...common arguments (optional)
});  // returns promise

// list buckets
b2.listBuckets({
    // ...common arguments (optional)
});  // returns promise

// get the bucket
b2.getBucket({
    bucketName: 'bucketName',
    bucketId: 'bucketId' // optional
    // ...common arguments (optional)
});  // returns promise

// update bucket
b2.updateBucket({
    bucketId: 'bucketId',
    bucketType: 'bucketType'
    // ...common arguments (optional)
});  // returns promise

// get upload url
b2.getUploadUrl({
    bucketId: 'bucketId'
    // ...common arguments (optional)
});  // returns promise

// upload file
b2.uploadFile({
    uploadUrl: 'uploadUrl',
    uploadAuthToken: 'uploadAuthToken',
    fileName: 'fileName',
    contentLength: 0, // optional data length, will default to data.byteLength or data.length if not provided
    mime: '', // optional mime type, will default to 'b2/x-auto' if not provided
    data: 'data', // this is expecting a Buffer, not an encoded string
    hash: 'sha1-hash', // optional data hash, will use sha1(data) if not provided
    info: {
        // optional info headers, prepended with X-Bz-Info- when sent, throws error if more than 10 keys set
        // valid characters should be a-z, A-Z and '-', all other characters will cause an error to be thrown
        key1: 'value',
        key2: 'value'
    },
    onUploadProgress: (event) => {} // optional progress callback (or null)
    // ...common arguments (optional)
});  // returns promise

// list file names
b2.listFileNames({
    bucketId: 'bucketId',
    startFileName: 'startFileName',
    maxFileCount: 100,
    delimiter: '',
    prefix: ''
    // ...common arguments (optional)
});  // returns promise

// list file versions
b2.listFileVersions({
    bucketId: 'bucketId',
    startFileName: 'startFileName',
    startFileId: 'startFileId',
    maxFileCount: 100
    // ...common arguments (optional)
});  // returns promise

// list uploaded parts for a large file
b2.listParts({
    fileId: 'fileId',
    startPartNumber: 0, // optional
    maxPartCount: 100, // optional (max: 1000)
    // ...common arguments (optional)
});  // returns promise

// hide file
b2.hideFile({
    bucketId: 'bucketId',
    fileName: 'fileName'
    // ...common arguments (optional)
});  // returns promise

// get file info
b2.getFileInfo({
    fileId: 'fileId'
    // ...common arguments (optional)
});  // returns promise

// get download authorization
b2.getDownloadAuthorization({
    bucketId: 'bucketId',
    fileNamePrefix: 'fileNamePrefix',
    validDurationInSeconds: 'validDurationInSeconds', // a number from 0 to 604800
    b2ContentDisposition: 'b2ContentDisposition'
    // ...common arguments (optional)
});  // returns promise

// download file by name
b2.downloadFileByName({
    bucketName: 'bucketName',
    fileName: 'fileName',
    responseType: 'arraybuffer', // options are as in axios: 'arraybuffer', 'blob', 'document', 'json', 'text', 'stream'
    onDownloadProgress: (event) => {} // optional progress callback (or null)
    // ...common arguments (optional)
});  // returns promise

// download file by fileId
b2.downloadFileById({
    fileId: 'fileId',
    responseType: 'arraybuffer', // options are as in axios: 'arraybuffer', 'blob', 'document', 'json', 'text', 'stream'
    onDownloadProgress: (event) => {} // optional progress callback (or null)
    // ...common arguments (optional)
});  // returns promise

// delete file version
b2.deleteFileVersion({
    fileId: 'fileId',
    fileName: 'fileName'
    // ...common arguments (optional)
});  // returns promise

// start large file
b2.startLargeFile({
    bucketId: 'bucketId',
    fileName: 'fileName'
    // ...common arguments (optional)
}); // returns promise

// get upload part url
b2.getUploadPartUrl({
    fileId: 'fileId'
    // ...common arguments (optional)
}); // returns promise

// upload part
b2.uploadPart({
    partNumber: 'partNumber', // A number from 1 to 10000
    uploadUrl: 'uploadUrl',
    uploadAuthToken: 'uploadAuthToken', // comes from getUploadPartUrl();
    data: 'data', // this is expecting a Buffer, not an encoded string
    hash: 'sha1-hash', // optional data hash, will use sha1(data) if not provided
    onUploadProgress: (event) => {}, // optional progress callback (or null)
    contentLength: 0, // optional data length, will default to data.byteLength or data.length if not provided
    // ...common arguments (optional)
}); // returns promise

// finish large file
b2.finishLargeFile({
    fileId: 'fileId',
    partSha1Array: [partSha1Array] // array of sha1 for each part
    // ...common arguments (optional)
}); // returns promise

// cancel large file
b2.cancelLargeFile({
    fileId: 'fileId'
    // ...common arguments (optional)
}); // returns promise

// create key
b2.createKey({
    capabilities: [
        'readFiles',                    // option 1
        b2.KEY_CAPABILITIES.READ_FILES, // option 2
        // see https://www.backblaze.com/b2/docs/b2_create_key.html for full list
    ],
    keyName: 'my-key-1', // letters, numbers, and '-' only, <=100 chars
    validDurationInSeconds: 3600, // expire after duration (optional)
    bucketId: 'bucketId', // restrict access to bucket (optional)
    namePrefix: 'prefix_', // restrict access to file prefix (optional)
    // ...common arguments (optional)
});  // returns promise

// delete key
b2.deleteKey({
    applicationKeyId: 'applicationKeyId',
    // ...common arguments (optional)
});  // returns promise

// list keys
b2.listKeys({
    maxKeyCount: 10, // limit number of keys returned (optional)
    startApplicationKeyId: '...', // use `nextApplicationKeyId` from previous response when `maxKeyCount` is set (optional)
    // ...common arguments (optional)
});  // returns promise

Uploading Large Files Example

To upload large files, you should split the file into parts (between 5MB and 5GB) and upload each part separately.

First, you initiate the large file upload to get the fileId:

let response = await b2.startLargeFile({ bucketId, fileName });
let fileId = response.data.fileId;

Then, to upload parts, you request at least one uploadUrl and use the response to upload the part with uploadPart. The url and token returned by getUploadPartUrl() are valid for 24 hours or until uploadPart() fails, in which case you should request another uploadUrl to continue. You may utilize multiple uploadUrls in parallel to achieve greater upload throughput.

If you are unsure whether you should use multipart upload, refer to the recommendedPartSize value returned by a call to authorize().
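If you do use multipart upload, a buffer can be split into chunks of recommendedPartSize bytes with a small helper (a sketch; splitIntoParts is a hypothetical name, and recommendedPartSize is the field from the authorize() response):

```javascript
// Sketch: split a Buffer into parts no larger than partSize bytes,
// e.g. partSize = response.data.recommendedPartSize from authorize().
function splitIntoParts(buffer, partSize) {
  const parts = [];
  for (let start = 0; start < buffer.length; start += partSize) {
    parts.push(buffer.subarray(start, start + partSize)); // subarray clamps at the end
  }
  return parts;
}

// A 12-byte buffer split at 5 bytes yields parts of 5, 5 and 2 bytes.
const parts = splitIntoParts(Buffer.from('abcdefghijkl'), 5);
```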

let response = await b2.getUploadPartUrl({ fileId });

let uploadURL = response.data.uploadUrl;
let authToken = response.data.authorizationToken;

response = await b2.uploadPart({
    partNumber: partNum,
    uploadUrl: uploadURL,
    uploadAuthToken: authToken,
    data: buf
});
// status checks etc.

Then finish the large file:

let response = await b2.finishLargeFile({
    fileId,
    partSha1Array: parts.map(buf => sha1(buf))
})

If an upload is interrupted, the fileId can be used to get a list of parts which have already been transmitted. You can then send the remaining parts before finally calling b2.finishLargeFile().

let response = await b2.listParts({
    fileId,
    startPartNumber: 0,
    maxPartCount: 1000
})
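A small helper can turn that parts list into the part numbers still missing (a sketch; remainingPartNumbers is a hypothetical name, partNumber is the field returned by listParts, and B2 part numbers start at 1):

```javascript
// Sketch: given the parts already uploaded (from listParts) and the total
// number of parts, return the part numbers still to be sent.
function remainingPartNumbers(uploadedParts, totalParts) {
  const done = new Set(uploadedParts.map(p => p.partNumber));
  const remaining = [];
  for (let n = 1; n <= totalParts; n++) {
    if (!done.has(n)) remaining.push(n);
  }
  return remaining;
}

// Example: parts 1 and 3 of 4 already uploaded, so parts 2 and 4 remain.
const remaining = remainingPartNumbers([{ partNumber: 1 }, { partNumber: 3 }], 4);
```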

Changes

See the CHANGELOG for a history of updates.

Upgrading from 0.9.x to 1.0.x

For this update, we've switched the backend HTTP request library from request to axios, as it has better Promise and progress support built in. However, there are a couple of changes that will break your code and ruin your day. Here they are:

  • The Promise resolution has a different data structure. Where previously, the request response data was the root object in the promise resolution (res), this data now resides in res.data.
  • In v0.9.12, we added request progress reporting via the third parameter to then(). Because we are no longer using the same promise library, this functionality has been removed. However, progress reporting is still available by passing a callback function into the b2.method() that you're calling. See the documentation below for details.
  • In v0.9.x, b2.downloadFileById() accepted a fileId parameter as a String or Number. As of 1.0.0, the first parameter is now expected to be a plain Object of arguments.
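A minimal before/after of the downloadFileById change, using a stub in place of a real B2 instance so the call shapes are visible:

```javascript
// Stub standing in for a real B2 instance; it just records the arguments.
const calls = [];
const b2 = { downloadFileById: (args) => { calls.push(args); return Promise.resolve(); } };

// 0.9.x style (no longer supported):
//   b2.downloadFileById('fileId');
// 1.0.x style: a plain object of arguments.
b2.downloadFileById({ fileId: 'fileId', responseType: 'arraybuffer' });
```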

Contributing

Contributions, suggestions, and questions are welcome. Please review the contributing guidelines for details.

Authors and Contributors

  • Yakov Khalinsky (@yakovkhalinsky)
  • Ivan Kalinin (@IvanKalinin) at Isolary
  • Brandon Patton (@crazyscience) at Isolary
  • C. Bess (@cbess)
  • Amit (@Amit-A)
  • Zsombor Paróczi (@realhidden)
  • Oden (@odensc)


backblaze-b2's Issues

download binary

Can't get a valid file (tested with zip and mp3):

b2.downloadFileById(fileid).then(function(file) {
    // tried: var bitmap = new Buffer(file.data, 'base64');
    //        fs.writeFile('testload.txt', bitmap)
    fs.writeFile('testload.mp3', file.data, function(err) {
        console.log(err);
    });
});

Any idea? Thanks.

filename vs fileName inconsistency

The method b2.uploadFile() seems to use filename as an arg, while most (if not all) others like b2.deleteFileVersion() use fileName.

Perhaps we could change b2.uploadFile() to use fileName by default, but fall back to filename so we don't break the library for everyone?

Example for uploading an image from a canvas

I need help uploading an image from a canvas. Right now this is what I have.


   var imageDataURL = canvas.toDataURL();


    var b2 = new B2({
      accountId: B2_ACCOUNT_ID,
      applicationKey: B2_APPKEY
    });

    b2.authorize().then(function() {
      console.log(".authorize-DONE");
      return b2.getUploadUrl(B2_BUCKET_ID);
    }).then(function(response) {
      console.log(".uploadUrl:" + JSON.stringify(response));
      return b2.uploadFile({
        uploadUrl: response.uploadUrl,
        uploadAuthToken: response.authorizationToken,
        mime: 'image/png',
        filename: 'sample.png',
        data: imageDataURL    // You mentioned to use a Buffer but I am clueless how to do it. Example please.
      });
    }).then(function(response) {
      console.log(JSON.stringify(response));
    }).catch(function(error) {
      console.log("A problem has occurred:" + error.stack);
    });

Byte length not the same as Content-Length

Hello,
When I am downloading arbitrary data stored in B2 bucket, the content-length header has the correct byte length of 208374 but when I call:

Buffer.byteLength(resp.data)

where resp is the await return value from b2.downloadFileById(), I get 378415. Even the backblaze UI shows that the size is 208.4 kB. What is the correct way to decode the data sent from the downloadFileById() method to get the correct-size Buffer or Uint8 array?
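The mismatch is consistent with the binary body being decoded as a UTF-8 string by default: invalid byte sequences are replaced during decoding, which inflates the byte count. Passing responseType: 'arraybuffer' in the downloadFileById arguments avoids the lossy decode. A minimal illustration of the inflation, with no B2 involved:

```javascript
// Binary bytes that are not valid UTF-8 get replaced during string decoding,
// so the round-tripped byte length grows.
const raw = Buffer.from([0xff, 0xfe, 0x01, 0x02]); // 4 bytes of arbitrary binary
const decoded = raw.toString('utf8');              // lossy: invalid sequences replaced
const roundTripped = Buffer.byteLength(decoded);   // larger than 4
```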

Blocked by CORS policy

I put the keys in the app, configured CORS on the Backblaze bucket with all origins, and in the browser console I get:

Access to XMLHttpRequest at 'https://api.backblazeb2.com/b2api/v2/b2_authorize_account' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.

accountId returned from b2_authorize_account shouldn't overwrite the one provided on init

context.accountId = authResponse.accountId;

When authorising the account, this line overwrites the accountId that was provided as an argument on init with the accountId returned from the authorize_account API.

This doesn't work when using other application keys, because the API returns the accountId of the master key and not the one provided.

This means that you cannot renew the authorizationToken when using keys other than the master key.

I'm now using a workaround (patched the authorize action function with my own).

Why is saving the accountId from the API actually needed if we need to provide it on init?

Are retry options built in anywhere?

Hello,

Thanks for this project, have used it already with great satisfaction thus far.

I'm wondering though: is there any way to pass in an option to retry on failure? I specifically want to retry once or maybe twice if I get a 503 service_unavailable.

Automatically call authorize_account on 401

Perhaps we should have backblaze-b2 automatically call authorize_account when it encounters a 401 error, or have it as an option at least.

I encountered an issue in a long-running application where after 24 hours all calls stopped working.

https://www.backblaze.com/b2/docs/application_keys.html

Authorization tokens are only good for 24 hours. You can use the application key to make new authorization tokens as they expire.

It would be nice if the module automatically did this rather than the user having to: 1. Call authorize before every method (incurs a Class C transaction cost), 2. Add an error handler to every call to check for 401s, or 3. Add a setInterval every 24 hours to authorize (a bit hacky).

Thoughts?
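As a stopgap, a caller-side wrapper can re-authorize and retry once on a 401 (a sketch, not a library feature; withReauth is a hypothetical name, and err.response.status is axios's error shape):

```javascript
// Sketch: run a B2 call; if it fails with 401, re-authorize and retry once.
async function withReauth(b2, fn) {
  try {
    return await fn();
  } catch (err) {
    const status = err.response && err.response.status;
    if (status === 401) {
      await b2.authorize(); // refresh the 24-hour token
      return fn();
    }
    throw err;
  }
}

// Usage: withReauth(b2, () => b2.getBucket({ bucketName: 'my-bucket' }));
```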

Filename should be URIEncoded

Per https://www.backblaze.com/b2/docs/string_encoding.html any strings should be URIEncoded.

If you do b2.uploadFile(...'test file.txt'...); you receive the error:

{ code: 'bad_request',
  message: 'Bad character in percent-encoded string: 32',
  status: 400 }

This is because the filename does not get URIEncoded.

(There's probably other areas in this implementation where this happens, too)
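Until the library encodes names itself, a caller-side workaround is to percent-encode the file name before upload, keeping '/' separators intact as Backblaze's string-encoding rules require (a sketch; encodeB2FileName is a hypothetical helper name):

```javascript
// Sketch: percent-encode each path segment of a B2 file name,
// leaving the '/' separators unencoded.
function encodeB2FileName(fileName) {
  return fileName.split('/').map(encodeURIComponent).join('/');
}

// 'test file.txt' becomes 'test%20file.txt'.
const encoded = encodeB2FileName('test file.txt');
```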

Retag Pull Requests for 1.1.0 release

@cbess @odensc @realhidden

Just putting this in so we can clean up for an npm release.

I think since we've tagged some PR's already for 1.1.0, we should move those that aren't in 1.0.4 into the 1.1.0 release in preparation for making a changelog and doing the final version bump.

This would go a long way to refresh the package.

Options missing from the listFileVersions function

Hello,

I've noticed the prefix and delimiter options are missing from the listFileVersions function.
This raises an issue when trying to filter specific files from backblaze.

Proposed change:

exports.listFileVersions = function(b2, args) {
    const bucketId = args.bucketId;
    const startFileName = args.startFileName;
    const startFileId = args.startFileId;
    const maxFileCount = args.maxFileCount;
    const prefix = args.prefix;
    const delimiter = args.delimiter;

    const options = {
        url: endpoints(b2).listFileVersionsUrl,
        method: 'POST',
        headers: utils.getAuthHeaderObjectWithToken(b2),
        data: {
            bucketId: bucketId,
            startFileName: startFileName || '',
            prefix: prefix || '',
            delimiter: delimiter || null,
            startFileId: startFileId,
            maxFileCount: maxFileCount || 100
        }
    };

    // merge order matters here: later objects override earlier objects
    return request.sendRequest(_.merge({},
        _.get(args, 'axios', {}),
        options,
        _.get(args, 'axiosOverride', {})
    ));
};

b2.authorizationToken is null

Hi, I am trying to use the backblaze-b2 package in my application, but when I call b2.authorize() I get this error: Error: Invalid authorizationToken
I debugged it and found that b2.authorizationToken is null in utils.js.

Could somebody tell me what could have gone wrong?

Code :

var B2 = require('backblaze-b2');

// create b2 object instance
var b2 = new B2({
    accountId: 'accId',
    applicationKey: 'appkey'
});

b2.authorize();

Binary file issues - Axios passthrough options not documented

Related to #1

We ran into an issue where binary downloads were not working. This can be addressed by setting the responseType attribute to 'arraybuffer'. There is no documentation on passing options through to the underlying axios library.

e.g. this is the correct way to download binary files.

b2.downloadFileById({
    fileId: fileId,
    responseType: 'arraybuffer'
});

See a stackoverflow answer related to this issue here:
https://stackoverflow.com/questions/41846669/download-an-image-using-axios-and-convert-it-to-base64

Get download url

Is it possible to get a file download url, or do I have to download on the server and handle the downloadUrl myself?
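There is no library call that returns a URL, but B2 file URLs follow a documented pattern: <downloadUrl>/file/<bucketName>/<fileName>, where downloadUrl comes from the authorize() response (response.data.downloadUrl). For private buckets, a token from getDownloadAuthorization() can be appended as a query parameter. A sketch (buildDownloadUrl is a hypothetical helper):

```javascript
// Sketch: build a B2 "friendly" download URL, percent-encoding the file name
// segment by segment so '/' path separators are preserved.
function buildDownloadUrl(downloadUrl, bucketName, fileName, authToken) {
  const encodedName = fileName.split('/').map(encodeURIComponent).join('/');
  const base = `${downloadUrl}/file/${bucketName}/${encodedName}`;
  return authToken ? `${base}?Authorization=${authToken}` : base;
}

// Example with a hypothetical download host from the authorize() response:
const url = buildDownloadUrl('https://f002.backblazeb2.com', 'my-bucket', 'a b.txt');
```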

Add "smart" upload function

The AWS S3 SDK for JavaScript has an upload function that does not correspond to any particular API request. You can give it a buffer or a stream, and it will automatically perform either a single PutObject call or a multi-part upload.

It would be a great benefit to this library to provide something similar. Right now, large file uploads are unnecessarily cumbersome, especially when the input is a stream. Authorization token management is a giant pain.

I am working on such a function right now for our own internal use. I'm writing it as a module that exposes a single function that can be attached to the prototype of the B2 class provided by this library (B2.prototype.uploadAny = require('backblaze-b2-upload-any');).

This issue is intended to convey my intent to integrate this function into this library and submit a PR. Therefore, I would very much appreciate any feedback on my proposal so that I can accommodate any necessary design changes as early as possible.

The current planned features of this function (many of which are already done) are:

  • Performs the upload using a single upload_file call or switches to a large-file upload as appropriate.
  • In large-file mode, uploads multiple parts with configurable concurrency.
  • Automatic management of upload tokens. An upload token (URL + authorization token) can be reused by future part uploads. Expired tokens (where the server returns 503 or 400) are discarded.
  • Automatic re-authorization if the server returns 401 in the middle of an upload.
  • Retry with exponential backoff.
  • Support for uploading:
    • Buffers
    • Streams
    • Local files (specified as a string path)
  • If the operation is aborted for whatever reason, any outstanding large-file upload is canceled with cancel_large_file.
  • The caller need not (and cannot) supply a hash. When uploading in large-file mode, a hash of the entire content is not provided to B2 -- a hash is provided for each part. A caller-supplied hash of the content is therefore useless in large-file mode anyway.

There is a difference between the local file and stream cases. When uploading a local file, no content is buffered in memory. Rather, multiple read streams are created (and re-created as necessary if a part upload must be retried).

Stream support necessarily requires some buffering in memory to facilitate retries since node streams cannot be seeked (and not all stream types would be seekable, anyway).

Note that I currently introduce two new dependencies:

  • @hapi/joi which is used to validate the options object.
  • memoizee which is used during re-authorization. If multiple part uploads are trying to re-authorize at the same time, this prevents multiple authorize calls to B2.

finishLargeFile() behavior

Hey, I'm having troubles with finishLargeFile() and I was wondering if my structure for partSha1Array is wrong. At the moment it's basic array of [hash1, hash2, ...] but I can't get any response from finishLargeFile() and it doesn't seem to throw errors either.

Any ideas?

Set maxRedirects:0 on all axios calls

B2 does not return any redirect codes under any circumstances, so there is no need to handle the redirect case. When uploading a stream, axios will buffer the entire request body if maxRedirects is not 0 (the default is 5) so it can correctly handle HTTP codes 307 and 308. When uploading files, this causes axios to hold the entire stream contents in memory to handle a situation that will never occur. This is wasteful, particularly for very large uploads.

uploadFile() and uploadPart() should set maxRedirects:0 on the axios request config. (It should be safe to set this everywhere, as a global default, but these two functions are the most critical.)

Is there a way to use a download url to download without auth headers?

I'm working on a 3D model viewer feature for my app. The client side, written in React, has a component that renders the .obj model, and the only way to load a model is to pass a url parameter, which can be either a locally stored file or a download url (this is how I want to tackle it). The thing is, to download files from the Backblaze cloud you need to add the authorization headers to get permission. Is there a way around this, since I need a url that is usable by itself?

Error getting bucket: Error: Request failed with status code 401

Hi,
I'm trying to set up the library. However, I am receiving
Error getting bucket: Error: Request failed with status code 401

Steps on what I did

  1. Get my keyId and keyName from:

[screenshot]



const b2 = new B2({
    applicationKeyId: "sadfasdffad", // or accountId: 'accountId'
    applicationKey: "masterApplicationKey", // or masterApplicationKey
});
  2. Create the API method, with the bucket name taken from the B2 console:
     [screenshot]
 export const uploadCreationImage = async (
) => {
    try {
        await b2.authorize(); // must authorize first (authorization lasts 24 hrs)
        let response = await b2.getBucket({
            bucketName: "bobbyhill",
        });
        console.log(response.data);
    } catch (err) {
        console.log("Error getting bucket:", err);
    }
};

What is causing this issue?

Allow underscores in upload info headers

If you try to send an info key that contains an underscore, you get back this error: Info header keys contain invalid characters: <header>

Backblaze suggests using X-Bz-Info-src_last_modified_millis to send the file's last modification time (and in fact the b2 sync command in their Python CLI application does this). I had to use axios.headers to manually set the header as a workaround.

Looking for contributors

I am open to anyone who would like to contribute, I can add any interested parties to the repo and the npm library.

onUploadProgress is never called

My code looks like this:

const uploadPartObj = {
          partNumber: orderedChunks[i + 1].id,
          uploadUrl: tokenInfo.data.uploadUrl,
          uploadAuthToken: tokenInfo.data.authorizationToken,
          data: orderedChunks[i + 1].buffer,
          hash: orderedChunks[i + 1].sha1,
          onUploadProgress: (event) => console.log(event),
        };
await this.#b2.uploadPart(uploadPartObj);

but I never see the console.log; onUploadProgress is never triggered.

Auto-compute SHA1 sum for streams

Related to #32. Applies to uploadPart and uploadFile.

If hash is not passed and data is a stream, the hash can be computed on the fly and appended to the output, while providing the header X-Bz-Content-Sha1: hex_digits_at_end. It would be nice if the client would wrap up this logic itself.

This change is simpler than it seems at first. I wrote the following transform stream that hashes the content as it passes through, then emits the hash before the stream ends. We are using this in production successfully.

const crypto = require('crypto');
const stream = require('stream');

function makeSha1AppendingStream() {
    const d = crypto.createHash('sha1');

    return new stream.Transform({
        transform(chunk, encoding, cb) {
            d.update(chunk, encoding);
            this.push(chunk, encoding);
            cb();
        },

        flush(cb) {
            this.push(d.digest('hex'));
            cb();
        },
    });
}

Used simply like (adjust variable names as needed):

if (hash === undefined && typeof data.pipe === 'function') {
  const hashStream = makeSha1AppendingStream();
  data.on('error', err => { hashStream.emit('error', err); });
  data = data.pipe(hashStream);

  hash = 'hex_digits_at_end';
  contentLength += 40;
}

Side note: if streams are used, all retrying/redirect-following should be disabled. This is either unsafe since the stream has been consumed, or will likely consume a large amount of memory as the entire request body is buffered in memory in case the request needs to be replayed. We had to pass maxRedirects: 0 to axios or process memory would balloon (we're uploading several-hundred-MB files and this was killing us).

Upload progress

Is there any way to know how much data has been uploaded?
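Yes: uploadFile and uploadPart accept an onUploadProgress callback that is forwarded to axios, whose progress events carry loaded and total byte counts. A sketch (percentComplete is a hypothetical helper):

```javascript
// Sketch: derive a percentage from an axios progress event ({ loaded, total }).
// Returns null when the total size is unknown.
function percentComplete(event) {
  return event.total ? Math.round((event.loaded / event.total) * 100) : null;
}

// Usage (hypothetical):
// b2.uploadFile({ ..., onUploadProgress: (e) => console.log(percentComplete(e)) });
```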

[question] Delete all file in folder or starting with /folderName

Hi,

Sorry to ask this here, but b2 is new so not a lot of people are responding on stackoverflow.

Do you know any solution to delete all files in a folder (so with name starting with "folder/") ?

I guess I have to do a listFile starting with "folderUrl" and then for each file a delete file version call.
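One way is exactly that: page through listFileNames with the prefix and delete each returned file version (a sketch with a hypothetical deleteFolder helper; it deletes only the visible versions, so hidden or older versions would need listFileVersions instead):

```javascript
// Sketch: delete every visible file under a "folder" prefix, paging
// through listFileNames via nextFileName until it is exhausted.
async function deleteFolder(b2, bucketId, prefix) {
  let startFileName;
  do {
    const res = await b2.listFileNames({ bucketId, startFileName, prefix, maxFileCount: 1000 });
    for (const f of res.data.files) {
      await b2.deleteFileVersion({ fileId: f.fileId, fileName: f.fileName });
    }
    startFileName = res.data.nextFileName; // null when there are no more pages
  } while (startFileName);
}

// Usage (hypothetical): await deleteFolder(b2, 'bucketId', 'folder/');
```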

SHA1 checksum

Hi.
I was trying to use the module for uploading files within an Electron app, but when uploading a file, I constantly encountered an issue with the SHA1 checksum:

{code: "bad_request", message: "sha1 is wrong length", status: 400}

Any ideas how to solve that?
Thanks a lot,
Heiko

Help needed with use with Typescript

I've gotten this to work with node and typescript with the associated @types/backblaze-b2 package but I have not been able to figure out how to use the interfaces defined in the index.d.ts file in my node js program. For example, I am trying to define a BucketInfo interface that can be used to extract data from a getBucket() call. In the BucketInfo interface, I would like to define a bucketType: BucketType property, but I cannot figure out how to get access to the BucketType type definition in the index.d.ts file in @types/backblaze-b2. Can anyone suggest a solution?

Currently, I am importing the b2 definition like this:

import B2 = require("./node_modules/@types/backblaze-b2");

This allows me to call B2 as a constructor, but I don't seem to be able to use it as a namespace.

I'm not sure if this is appropriate to post as an issue, but it would be helpful to add this to the examples in the doc, so I suppose it is valid.

More operations with the filename without id

It would be useful if more operations could be performed without the file ID, using just the filename, for example:

  • List files by name;
  • Delete a file by name;
  • Get the ID of a file by name;

Socket hang up on getUploadPartUrl

When uploading large files, I need to get an upload part URL for each part/chunk first.

Sending my Chunks like this:

b2.getUploadPartUrl({fileId:bbFileId}).then(
    function(){
        b2.uploadPart({
            partNumber: chunk.cnt,
            data: chunk.data,
            partSha1: chunk.partSha1,
            uploadUrl: uploadUrl,
            uploadAuthToken: token                
        });
    },
    reject);

results in getting

Error: socket hang up
    at createHangUpError (_http_client.js:331:15)
    at TLSSocket.socketOnEnd (_http_client.js:423:23)
    at emitNone (events.js:111:20)
    at TLSSocket.emit (events.js:208:7)
    at endReadableNT (_stream_readable.js:1064:12)
    at _combinedTickCallback (internal/process/next_tick.js:139:11)
    at process._tickCallback (internal/process/next_tick.js:181:9)`

for some parts / chunks.

Seems like calling getUploadPartUrl many times in a row will cause this error.

Having some trouble with posting data

Hey, I am trying to upload a file but am confused about how to go about it. So far I have:

b2.uploadFile({
     uploadUrl: 'my url is here',
     uploadAuthToken: 'my token is here',
});

Now, how do I actually upload a file? The only other param I see is fileName; there is nothing like a specific file or path pointing at the image to upload. I have an API and want to post an uploaded user's image to Backblaze.

The example for uploading an image from a canvas (#17) kind of helped me, but it just uploads a buffer, and I am unable to view the result; it just shows a black box.

Make providing bucket id optional if auth returns bucket id

B2 returns the bucket id on authorisation if the auth key is for a particular bucket. Thus, one can store the bucket id in the instance too and use it if bucket id is not passed in a function call.

I am open to implementing this myself. Looking for feedback initially.
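The proposed fallback could look roughly like this. `allowed.bucketId` follows the shape of the b2_authorize_account response, but `instance.authorizationData` is an assumed name for wherever the library stores that response:

```javascript
// Sketch of the proposed fallback: prefer an explicitly passed bucketId,
// otherwise fall back to the bucket id the auth response reported for a
// bucket-restricted key. `instance.authorizationData` is an assumed name.
function resolveBucketId(instance, args) {
  if (args && args.bucketId) {
    return args.bucketId;
  }
  const allowed = instance.authorizationData && instance.authorizationData.allowed;
  return (allowed && allowed.bucketId) || null;
}
```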

400 Missing header: Content-Length when trying to upload a file

b2.authorize().then(() => {
    console.log('auth success');
    return b2.getUploadUrl('b19aa53a2f0f1c4256da021f').then(response => {
        console.log('get upload url success');
        this.auth = response.data;
        // Get the file Content-Length
        return fp.stat(filePath).then(stat => {
            this.stat = stat;
            return b2.uploadFile({
                uploadUrl: this.auth.uploadUrl,
                uploadAuthToken: this.auth.authorizationToken,
                filename: fileID,
                mime: file.type,
                data: fs.createReadStream(filePath),
                info: {
                    'Content-Length': this.stat.size,
                }
            }).then(response => {
                console.log('upload success')
                console.log(response)
            })
        })
    })
})
.catch(error => {
    console.log(error);
})

internal error 'bz_sha1 did not match data received' on b2.uploadFile

I'm not sure if this error is the fault of this package, or the b2 API. I'm just trying to upload a file to a bucket, but it seems that it's failing because the sha1 didn't match up with what the server received.

I'm unsure if I am able to provide a sha1, or if the library is doing this for me.

Here is the code:

b2.authorize().then( /* authorized */
    (response) => {
        b2.getUploadUrl(bucket).then( /* get url to upload to */
                (response) => {
                    fs.readFile('test.mp3', 'utf8', (err, data) => { /* get data object from fs */
                        b2.uploadFile( { /* upload data to bucket */
                            uploadUrl: response.uploadUrl,
                            uploadAuthToken: response.authorizationToken,
                            filename: 'test.mp3',
                            data: data /* maybe a problem? */
                        }).then(
                            (response) => {console.log(response)},
                            (err) => {console.log(err)} /* error gets logged here )': */
                        );
                    });
                },
                (error) => { console.log(error); }
        );
    },
    (error) => { console.log(error); }
);

Question on integration with GraphQL

How can I integrate this API with GraphQL? Since Node will reside on EC2, if I use getDownloadAuthorization it will be valid only for that ID and won't work for the client device. Similarly, the upload URL might not work for the client, or I may be wrong.

Please do advise on how to go about this.

delete issue?

I store files using "folders". So as an example, my structure looks like this:

{folder}/{folder}/{file.ext}

When using deleteFileVersion, I get back the following:
{ code: 'file_not_present',
  message: 'file not present: -K82QoL4BCVSRqv3u3r4%2Fcollages%2Fd3f81d06-973d-4478-d19e-0677d8bdc681.jpg 4_zd55dcf6fe56cedf45d130010_f11275d83e9982f7a_d20160115_m035841_c001_v0001012_t0043',
  status: 400 }

The actual filename is:
-K82QoL4BCVSRqv3u3r4/collages/d3f81d06-973d-4478-d19e-0677d8bdc681.jpg

So, it appears that the filename is being URL-encoded when perhaps it should not be? Or perhaps something else is broken.

Thanks!
-Matt
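Until this is fixed in the library, one defensive workaround is to make sure the name handed to deleteFileVersion is the raw (decoded) name rather than an already-encoded one. rawFileName below is a hypothetical helper and only papers over the underlying issue:

```javascript
// Sketch: decode a name that has already been percent-encoded so the
// library does not end up sending a double-encoded path. Hypothetical
// helper; a workaround, not a fix for the underlying encoding bug.
function rawFileName(name) {
  return name.includes('%') ? decodeURIComponent(name) : name;
}
```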
