mindflavor / AzureSDKForRust

Microsoft Azure SDK for Rust

Home Page: http://mindflavor.github.io/AzureSDKForRust

License: Apache License 2.0

Languages: Rust 99.96%, Shell 0.04%
Topics: microsoft-azure-sdk, rust, azure, blob-storage, azure-table-storage, azure-event-hub, azure-blob, cosmosdb

azuresdkforrust's People

Contributors

atul9, bownairo, brycefisher, christophebiocca, ctaggart, damienpontifex, djc, guywaldman, gzp79, jeramyrr, jibbow, jmlx42, karataliu, kichristensen, letubert, mindflavor, nayato, nbigaouette, neshanthan, pataruco, rauchg, rjkerrison, skeet70, timbotambo, wayeast, zimingwu


azuresdkforrust's Issues

called `Option::unwrap()` on a `None` value

First off: thank you for this nice crate 👍

I ran into a small issue today. Given a response body for a QueryDocumentResponse (SELECT VALUE COUNT(*) ...) like this:

[32]
let body = "[32]".as_bytes();
let mut d: serde_json::Value = serde_json::from_slice(body).unwrap();

There is an extract_result_json function in azure_sdk_for_rust::cosmos::QueryDocumentRequest which iterates over the documents and at some point uses serde_json::value::Value::as_object_mut to check whether the document is an object.

let map = doc.as_object_mut().unwrap();

However, in my example the document is a Number, so the returned Option is None, causing the unwrap to panic:
called `Option::unwrap()` on a `None` value
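
For what it's worth, a minimal self-contained sketch (serde_json only) of a non-panicking alternative, matching on the value instead of unwrapping:

fn main() {
    let body = "[32]".as_bytes();
    let v: serde_json::Value = serde_json::from_slice(body).unwrap();
    for doc in v.as_array().unwrap() {
        // as_object() returns None for a bare number such as 32,
        // so matching avoids the panic.
        match doc.as_object() {
            Some(map) => println!("object with {} keys", map.len()),
            None => println!("non-object document: {}", doc),
        }
    }
}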

What do you mean by account?

Following the Azure Cosmos DB tutorial, there is no account involved. There is a master key (primary key), but I'm lost as to what this account is.

I'm getting the error:

Err(HyperError(Error { kind: Connect, cause: Some(Os { code: 11001, kind: Other, message: "Este host não é conhecido." }) }))

(The OS message is Portuguese: "This host is not known.")

You are awesome.

Thank you for making a futures-based API for object storage (as well as the other interfaces, but especially object storage). The Google and AWS SDKs for Rust, as of now, do not have futures-based interfaces.

Also, sorry for spamming your issues list ... I just really felt compelled to put a good word in for you. Feel free to close whenever :).

copy_table example not preserving some int64 types

The copy_table example is not preserving some of the types: some are getting converted to strings. I'll be hunting this down. Something is getting converted somewhere in here, I'm guessing:

let inserts = table_service
    .stream_query_entities(&table_name, None)
    .for_each(|entity: serde_json::Value| {
        count += 1;
        // Re-insert each streamed entity into the destination table service.
        to_table_service.insert_entity(&table_name, &entity)
    });
core.run(inserts)?;

Cosmos add a GlobalEndpointManager

In the JS SDK, and I presume in the other official Cosmos SDK implementations, a cache of available regions is used to pick a URL to write to, based on a list of preferred regions provided to the client. Without this I can set a region URL manually, but without something like the endpoint manager that region might not be available when I make my next request.

From what I can ascertain the JS SDK refreshes the endpoint list every 5 minutes by default.

The region URLs are stored in the Database Account resource, which doesn't appear to be exposed by this SDK at the moment.
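
A purely hypothetical sketch of the cache described above; none of these names exist in the SDK today:

use std::time::{Duration, Instant};

struct GlobalEndpointManager {
    preferred_regions: Vec<String>,
    // (region name, region endpoint URL), as read from the Database Account resource.
    available_endpoints: Vec<(String, String)>,
    last_refresh: Instant,
    refresh_interval: Duration, // the JS SDK appears to default to 5 minutes
}

impl GlobalEndpointManager {
    /// Pick the endpoint of the first preferred region that is currently available.
    fn resolve_write_endpoint(&self) -> Option<&str> {
        self.preferred_regions.iter().find_map(|preferred| {
            self.available_endpoints
                .iter()
                .find(|(region, _)| region == preferred)
                .map(|(_, url)| url.as_str())
        })
    }
}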

list_blobs always returns an error

(On current git HEAD.)

list_blobs always returns an error:
ResponseParsingError(PathNotFound("AccessTier"))
Sample:

client.list_blobs()
    .with_container_name(container)
    .with_include_metadata()
    .finalize()
    .map(|iv| {
        trace!("List containers returned {} containers.", iv.incomplete_vector.len());
        unimplemented!() // never called
    })
    .map_err(|err| {
        println!("{:?}", err);
    })

According to the logs (just removed :) ), it gets the XML result from the server, but it never reaches the map function.

Add stream_query_entities

I think support for next_marker was added in #5 for list_blobs. It is needed for query_entities in table storage as well. In testing #142 to copy a table, only the first 1000 entities were returned and copied.

Support SAS URL in addition to SAS Token.

In the Azure portal and Storage Explorer, when you generate a SAS token you are given the option to use the generated SAS URL, which can include many of the details needed for the rest of the operations.

These are in the form:

https://a.blob.core.windows.net/b/c?d

This would simplify the use of Client::azure_sas() as well as provide some of the needed information when the SAS token is limited to a specific object, rather than container or storage account.

As an example, providing the SAS URL looks like this now:

let client = Client::azure_sas("a", "https://a.blob.core.windows.net/b/c?d");
let future = client.get_blob().with_container_name("b").with_blob_name("c").finalize();

Whereas if that information were parsed out (Url::Options is already being used to parse the token), it could look like this:

let client = Client::azure_sas_url("https://a.blob.core.windows.net/b/c?d");
let future = client.get_blob().finalize();

Supporting SAS URLs would significantly improve the ergonomics of user-supplied storage locations; the JavaScript library, for example, supports them.
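
A minimal sketch of pulling those pieces out of a SAS URL with the url crate (an azure_sas_url() constructor could be built on something like this; the splitting rules here are assumptions):

use url::Url;

fn main() {
    let sas = Url::parse("https://a.blob.core.windows.net/b/c?d").unwrap();
    // The account is the first label of the host name.
    let account = sas.host_str().unwrap().split('.').next().unwrap(); // "a"
    let mut path = sas.path_segments().unwrap();
    let container = path.next().unwrap(); // "b"
    let blob = path.next().unwrap();      // "c"
    let token = sas.query().unwrap();     // "d"
    println!("{} {} {} {}", account, container, blob, token);
}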

Get rid of tuples in return values

Tuples do not allow you to specify the type in the let binding, forcing you to do so in the method call.
This leads to non-ergonomic methods when coupled with traits. For example, the method list_documents currently returns a tuple:

fn list_documents<'b, S1, S2, T>(
    &self, 
    database: S1, 
    collection: S2, 
    ldo: &ListDocumentsOptions
) -> impl Future<Item = (ListDocumentsResponse<T>, ListDocumentsResponseAdditionalHeaders), Error = AzureError> 
where
    S1: AsRef<str>,
    S2: AsRef<str>,
    T: DeserializeOwned, 

Since there are two AsRef parameters, it would be better if the compiler were able to infer the type of T. It cannot when a tuple is returned. This is an example of what happens:

let (entries, ldah) = core.run(
        client.list_documents::<_, _, MySampleStructOwned>(&database_name, &collection_name, &ldo),
    ).unwrap();

If it were a single return binding, something like this could work instead:

let response : Response<MySampleStructOwned> = core.run(
        client.list_documents(&database_name, &collection_name, &ldo),
    ).unwrap();

That is three fewer useless type annotations.
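
A sketch of what a single return type might look like; AdditionalHeaders here stands in for the crate's ListDocumentsResponseAdditionalHeaders, and the field names are illustrative:

struct AdditionalHeaders {
    continuation_token: Option<String>,
}

struct Response<T> {
    documents: Vec<T>,
    additional_headers: AdditionalHeaders,
}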

Collect all document IDs for all documents of a specific type

Hi!

I'm struggling to figure out how to perform the below equivalent Python code in Rust.
Having tested both pieces of code, they work as-is (barring missing set-up code for CLI arguments and such) and should sufficiently demonstrate what I want to achieve.

A thing to note: since DocumentAttributes in the AzureSDKForRust library makes id private, I need to split the partitionKey to get the ID of each document.

In all of our documents, we've got a key partitionKey which is composed as follows:
"partitionKey": "<document type>.<document id>"

Python code
collection_link = f"dbs/{database_id}/colls/{collection_id}"
kwargs = dict(
    query=dict(
        query='select * from r where r.type=@type',
        parameters=[
            dict(
                name='@type',
                value=document_type
            )
        ]
    ),
    options=dict(
        enableCrossPartitionQuery=True
    ),
)

echo(
    f"\nCreating query to be executed towards '{collection_link}':"
    f"\n{dumps(kwargs, indent=2)}"
)
all_documents = [
    order
    for order
    in client.QueryItems(
        database_or_Container_link=collection_link,
        **kwargs,
    )
]

document_ids = set(
    [
        # Ensure no new_field values overlap/match existing IDs
        doc['id']
        for doc
        in all_documents
    ]
)
assert(len(all_documents) == len(document_ids))

This is as close as I've come in Rust so far:

Rust code
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cli = Cli::from_args();
    // println!("{:?}", cli);
    println!("\nStarting program...");

    // Create Azure connection client
    let auth_token = AuthorizationToken::new(
        cli.database_account.clone(),
        TokenType::Master,
        &cli.cosmosdb_key,
    )?;
    let client = ClientBuilder::new(auth_token)?;
    println!("HTTP client created:\n{:#?}", &client);

    let sql_str = "SELECT * FROM document WHERE document.type=@type";
    let query = Query::with_params(
        &sql_str,
        vec![ParamDef::new("@type").value(cli.document_type.clone())],
    );
    println!(
        "\nRequesting documents with the following query:\n{}",
        to_string_pretty(&query)?
    );

    // Build HTTP REST API query
    let future = client
        .query_documents(&cli.database_id, &cli.collection_id, query)
        .enable_cross_partition(true)
        .execute_json();

    // Create async http client
    let mut core = Core::new()?;

    // Execute Query
    let mut response = core.run(future)?;
    let mut documents = response.results.clone();
    while response.additional_headers.continuation_token.is_some() {
        // While there are more results not fetched, fetch more and add to documents vector
        response = core.run(
            client
                .query_documents(
                    &cli.database_id,
                    &cli.collection_id,
                    Query::with_params(
                        &sql_str,
                        vec![ParamDef::new("@type").value(cli.document_type.clone())],
                    ),
                )
                .enable_cross_partition(true)
                .continuation_token(response.additional_headers.continuation_token.unwrap())
                .execute_json(),
        )?;
        documents.extend(response.results);
    }
    println!("{} document(s) returned from Azure.", documents.len());

    println!(
        "\nFirst document of type {}:\n{}",
        &cli.document_type,
        to_string_pretty(&documents[0])?
    );

    let document_ids: HashSet<String> = documents
        .iter()
        .map(|doc| doc.result["partitionKey"].as_str().unwrap().to_string())
        //.map(|s| s.rsplitn(2, ".").collect::<Vec<_>>().take(1))
        //.map(|(id, doc_type)| id.to_string())
        .collect();
    println!("{}", to_string_pretty(&document_ids)?);
    assert_eq!(documents.len(), document_ids.len());

    Ok(())
}

Any ideas/suggestions/criticisms? All are welcome!
(I'll be the first to admit that I have little to no idea what I'm doing).
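
Regarding the commented-out rsplitn lines, a minimal self-contained sketch of splitting the partitionKey on its last '.' (assuming the document id itself never contains a '.'):

/// Extract "<document id>" from "<document type>.<document id>".
fn id_from_partition_key(partition_key: &str) -> Option<&str> {
    // rsplitn(2, '.') yields the id first, then the rest of the key.
    partition_key.rsplitn(2, '.').next()
}

fn main() {
    assert_eq!(id_from_partition_key("order.1234"), Some("1234"));
}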

Support for key-vault?

Many thanks for creating this library! What are your thoughts/plans around supporting key-vault?

I have not looked at the implementation yet; I'm not sure if there is a REST API or something else.

Marius

Update README.md to reflect the new crate distribution

The README shows only one version of the crate because, originally, that was the case. Right now there are multiple, semi-independent crates that can have different versions. There are two strategies here:

  1. Keep the same version across all crates (what I have done so far).
  2. Give each crate its own version (what I am doing from now on).

Option 1 is easier, but it means that any non-backwards-compatible change in any crate will propagate to the others, possibly confusing users.

Anyway, since I have switched to option 2, we need to update the README to show the version of each crate separately.

Use builder pattern for Storage operations

  • Create container
  • List containers
  • Delete container
  • Get container ACLs
  • Set container ACLs
  • Get container properties
  • Acquire container lease
  • Break container lease
  • Release container lease
  • Renew container lease
  • List Blobs
  • Get Blob
  • Put block blob
  • Put page blob
  • Put append blob
  • Update page
  • Clear page
  • Put block
  • Put block list
  • Get block list
  • Delete blob
  • Delete blob snapshot
  • Acquire blob lease
  • Renew blob lease
  • Change blob lease. Useless IMHO; skipped for now.
  • Release blob lease
  • Break blob lease
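
For reference, a hedged sketch of the intended builder shape for one of these operations ("Create container"); the exact method names are assumptions and may differ from what the SDK ends up exposing:

let future = client
    .create_container()
    .with_container_name("my-container")
    .finalize();
core.run(future)?;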

Cosmos DB: url encode database names

For requests made with the Cosmos DB module, the database name is not properly encoded, and requests will fail if the name contains, e.g., whitespace.

When creating a database, the name is sent in the body as JSON, so arbitrary names are possible. When using other methods (like get_database) the name is part of the URL, and the request may fail with "invalid URI bytes".

A solution would be to properly percent-encode the database name.
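
A minimal sketch using the percent-encoding crate (v2 API); whether NON_ALPHANUMERIC is the right encode set for Cosmos DB resource names is an assumption:

use percent_encoding::{utf8_percent_encode, NON_ALPHANUMERIC};

fn main() {
    let database_name = "my database";
    let encoded = utf8_percent_encode(database_name, NON_ALPHANUMERIC).to_string();
    assert_eq!(encoded, "my%20database");
    println!("dbs/{}", encoded);
}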

And btw: Really nice project! 😁 👍

Support Custom Cosmos DB Base URI

The base URI is hard-coded as "documents.azure.com", but the URI of Azure Cosmos DB China is "documents.azure.cn:10255". Developing against the Cosmos DB Local Emulator also requires a custom base URI, like "my-local-emulator:port".

Maybe add the base URI as an option in AuthorizationToken?
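
A hypothetical shape for such an option; all names here are illustrative only:

enum CosmosBaseUri {
    Default,        // https://{account}.documents.azure.com
    China,          // https://{account}.documents.azure.cn:10255
    Custom(String), // e.g. a local emulator: "https://my-local-emulator:8081"
}

fn base_uri(account: &str, variant: &CosmosBaseUri) -> String {
    match variant {
        CosmosBaseUri::Default => format!("https://{}.documents.azure.com", account),
        CosmosBaseUri::China => format!("https://{}.documents.azure.cn:10255", account),
        CosmosBaseUri::Custom(uri) => uri.clone(),
    }
}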

Cosmos DB: fix replace_collection()

replace_collection() takes the name of the collection, serializes that name (which just yields the name with quotes), sends it to Azure, and tries to create a collection.
This looks pretty much like a copy-paste bug :D
And thinking about it, this behaves more like create_collection() (also see #152).

Docs for create collection: https://docs.microsoft.com/de-de/rest/api/cosmos-db/replace-a-collection

Sorry for all those issues 🙈
I am currently working on #21 and #22 and these are issues I came across.

Logs during process

It seems the SDK logs to the console and I see no option to turn it off.
(I'm using the git version, not the released one.)
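
If the SDK emits its records through the standard log facade (I have not verified this), the consuming binary controls the output; a minimal sketch silencing the SDK with env_logger, where the module name is an assumption:

use env_logger::Builder;
use log::LevelFilter;

fn main() {
    Builder::new()
        .filter_level(LevelFilter::Info) // default for everything else
        .filter_module("azure_sdk_for_rust", LevelFilter::Off) // silence the SDK
        .init();
}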

Optimization workstream

  • QueryDocumentRequest::extract_result_json -- get rid of the clone and parse DocumentAttributes ad hoc while building the new value.
  • Rework responses to be thin wrappers around the HTTP response, delaying actual parsing/processing until it is actually requested by the user. Move to a ref model instead of pre-emptively converting everything to owned.

Authentication failed for storage methods

There is something amiss in the authentication method.
Repro:

✘-101 frcognoarch:~/src/rust/AzureSDKForRust [master {origin/master}|✔]> cargo run --example container00 pippo
    Finished dev [unoptimized + debuginfo] target(s) in 0.10s
     Running `target/debug/examples/container00 pippo`
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: UnexpectedHTTPResult(UnexpectedHTTPResult { expected: 200, received: 403, body: "\u{feff}<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:983dccaa-901e-014f-27a8-04b49b000000\nTime:2018-06-15T12:55:45.8895570Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request \'xwfkhbM6p95SgtaAr1shmrDU8HFOfzpSxqqgKOgbsNo=\' is not the same as any computed signature. Server used following string to sign: \'GET\n\n\n\n\n\n\n\n\n\n\n\nx-ms-date:Fri, 15 Jun 2018 12:55:45 GMT\nx-ms-version:2017-11-09\n/azureskdforrust/\ncomp:list\ninclude:metadata\nmaxresults:5000\'.</AuthenticationErrorDetail></Error>" })', libcore/result.rs:945:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.

Cannot compile on macOS

Azure auth aad not compiling

Commit

commit cb306c2a6cc65047d8d2752d62a8ddc953ac3859 (tag: 0.24.0)

Compiler

stable-x86_64-apple-darwin toolchain
rustc --version rustc 1.38.0 (625451e37 2019-09-23)

Output of cargo build --package azure_sdk_auth_aad

cargo build --package azure_sdk_auth_aad
   Compiling azure_sdk_auth_aad v0.21.0 (/repo/AzureSDKForRust/azure_sdk_auth_aad)
error[E0432]: unresolved import `oauth2::curl`
  --> azure_sdk_auth_aad/src/lib.rs:10:13
   |
10 | use oauth2::curl::http_client;
   |             ^^^^ could not find `curl` in `oauth2`

error[E0433]: failed to resolve: could not find `curl` in `oauth2`
  --> azure_sdk_auth_aad/src/lib.rs:75:39
   |
75 |     oauth2::RequestTokenError<oauth2::curl::Error, oauth2::StandardErrorResponse<oauth2::basic::BasicErrorResponseType>>,
   |                                       ^^^^ could not find `curl` in `oauth2`

error[E0308]: mismatched types
  --> azure_sdk_auth_aad/src/lib.rs:37:9
   |
37 | /         Url::parse(&format!("https://login.microsoftonline.com/{}/oauth2/authorize", tenant_id))
38 | |             .expect("Invalid authorization endpoint URL"),
   | |_________________________________________________________^ expected struct `url::Url`, found a different struct `url::Url`
   |
   = note: expected type `url::Url` (struct `url::Url`)
              found type `url::Url` (struct `url::Url`)
note: Perhaps two different versions of crate `url` are being used?
  --> azure_sdk_auth_aad/src/lib.rs:37:9
   |
37 | /         Url::parse(&format!("https://login.microsoftonline.com/{}/oauth2/authorize", tenant_id))
38 | |             .expect("Invalid authorization endpoint URL"),
   | |_________________________________________________________^

error[E0308]: mismatched types
  --> azure_sdk_auth_aad/src/lib.rs:41:9
   |
41 |         Url::parse(&format!("https://login.microsoftonline.com/{}/oauth2/v2.0/token", tenant_id)).expect("Invalid token endpoint URL"),
   |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `url::Url`, found a different struct `url::Url`
   |
   = note: expected type `url::Url` (struct `url::Url`)
              found type `url::Url` (struct `url::Url`)
note: Perhaps two different versions of crate `url` are being used?
  --> azure_sdk_auth_aad/src/lib.rs:41:9
   |
41 |         Url::parse(&format!("https://login.microsoftonline.com/{}/oauth2/v2.0/token", tenant_id)).expect("Invalid token endpoint URL"),
   |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error[E0308]: mismatched types
  --> azure_sdk_auth_aad/src/lib.rs:49:44
   |
49 |         .set_redirect_url(RedirectUrl::new(redirect_url));
   |                                            ^^^^^^^^^^^^ expected struct `url::Url`, found a different struct `url::Url`
   |
   = note: expected type `url::Url` (struct `url::Url`)
              found type `url::Url` (struct `url::Url`)
note: Perhaps two different versions of crate `url` are being used?
  --> azure_sdk_auth_aad/src/lib.rs:49:44
   |
49 |         .set_redirect_url(RedirectUrl::new(redirect_url));
   |                                            ^^^^^^^^^^^^

error[E0308]: mismatched types
  --> azure_sdk_auth_aad/src/lib.rs:64:9
   |
64 |         authorize_url,
   |         ^^^^^^^^^^^^^ expected struct `url::Url`, found a different struct `url::Url`
   |
   = note: expected type `url::Url` (struct `url::Url`)
              found type `url::Url` (struct `url::Url`)
note: Perhaps two different versions of crate `url` are being used?
  --> azure_sdk_auth_aad/src/lib.rs:64:9
   |
64 |         authorize_url,
   |         ^^^^^^^^^^^^^

error: aborting due to 6 previous errors

Some errors have detailed explanations: E0308, E0432, E0433.
For more information about an error, try `rustc --explain E0308`.
error: Could not compile `azure_sdk_auth_aad`.

I have no idea why I get these errors while the Travis build seems to work. I suppose it's because Travis doesn't actually test building on macOS, and maybe some crates work differently depending on the platform?

Clippy raising unknown attribute errors

When building with clippy 0.0.212, the following errors are raised:

error: Usage of unknown attribute
  --> /home/max/.cargo/registry/src/github.com-1ecc6299db9ec823/azure_sdk_storage_core-0.20.2/src/rest_client.rs:57:11
   |
57 | #[clippy::needless_pass_by_value]
   |           ^^^^^^^^^^^^^^^^^^^^^^

error: Usage of unknown attribute
   --> /home/max/.cargo/registry/src/github.com-1ecc6299db9ec823/azure_sdk_storage_core-0.20.2/src/rest_client.rs:236:11
    |
236 | #[clippy::too_many_arguments]
    |           ^^^^^^^^^^^^^^^^^^

error: aborting due to 2 previous errors

error: Could not compile `azure_sdk_storage_core`.

Blob "put" has lifetime issues

Right now, Blob::put takes a generic lifetime 'b. This works fine when submitting jobs to a Tokio Core using Core::run. However, trying to use Handle::spawn results in a lifetime conflict. I'm definitely not an expert, but maybe this is due to different lifetime requirements between Core::run and Handle::spawn, which interact poorly with the compiler-inferred lifetime in Blob::put.

Here is a minimum working example demonstrating the problem (all dependency versions match those of AzureSDKForRust's Cargo.toml):

extern crate azure_sdk_for_rust;
extern crate hyper;
extern crate chrono;
extern crate tokio_core;
extern crate futures;

use hyper::mime::Mime;
use tokio_core::reactor::Core;
use futures::future::*;
use azure_sdk_for_rust::azure::storage::client::Client;
use azure_sdk_for_rust::azure::storage::blob::{Blob, BlobType, PUT_OPTIONS_DEFAULT};
use azure_sdk_for_rust::azure::core::lease::{LeaseState, LeaseStatus};

/// Just create a test Blob.
fn create_blob(data: &[u8]) -> Blob {
    Blob {
        name: "nameOfBlob".into(),
        container_name: "containerName".into(),
        snapshot_time: None,
        last_modified: chrono::Utc::now(),
        etag: "".to_owned(),
        content_length: data.len() as u64,
        content_type: Some("text/plain".parse::<Mime>().unwrap()),
        content_encoding: None,
        content_language: None,
        content_md5: None,
        cache_control: None,
        x_ms_blob_sequence_number: None,
        blob_type: BlobType::BlockBlob,
        lease_status: LeaseStatus::Unlocked,
        lease_state: LeaseState::Available,
        lease_duration: None,
        copy_id: None,
        copy_status: None,
        copy_source: None,
        copy_progress: None,
        copy_completion: None,
        copy_status_description: None
    }
}

fn main() {
    let mut core = Core::new().unwrap();
    let client = Client::new(&core.handle(), "storageAccount", "accountKey").unwrap();

    let data = b"someData";
    let blob = create_blob(&data[..]);

    let future = blob.put(
        &client,
        &PUT_OPTIONS_DEFAULT,
        Some(&data[..])
    ).then(move |res| {
        // Handle the result.
        // This is needed because `Handle::spawn` requires that the future it
        // is given be Future<Item=(), Error=()>.
        match res {
            Ok(_) => ok(()),
            Err(_) => err(())
        }
    });

    // THIS WORKS:
    //core.run(future).unwrap();

    // BUT THIS DOES NOT:
    let handle = core.handle();
    handle.spawn(future);
    // Assume the core is running...
}

The compiler error is:

error[E0597]: `client` does not live long enough
  --> src/main.rs:50:4
   |
50 |         &client,
   |          ^^^^^^ borrowed value does not live long enough
...
70 | }
   | - borrowed value only lives until here
   |
   = note: borrowed value must be valid for the static lifetime...

For my use case, I can fix the problem by removing the generic lifetime 'b altogether from Blob::put, like so (line 566 of src/azure/storage/blob/mod.rs):

    pub fn put(
        &self,
        c: &Client,
        po: &PutOptions,
        r: Option<&[u8]>,
    ) -> impl Future<Item = (), Error = AzureError> {

However, I don't know if this will break others' use cases. If you feel that removing the generic is the right way to go, let me know and I can submit a PR.

Implement streaming operations on blobs

Reading and writing blobs in a single operation is cumbersome and really not efficient.
Since we are async anyway, it would make sense to enable data processing as soon as possible, without waiting for the complete blob. There might also be no need to allocate a big buffer for a whole file.

The futures crate exposes the Stream trait, so we should implement it.
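
A purely hypothetical consumer-side sketch (futures 0.1 style, like the rest of the SDK); stream_blob does not exist yet and is an assumed name:

let future = client
    .stream_blob()
    .with_container_name("container")
    .with_blob_name("blob")
    .finalize()
    .for_each(|chunk: Vec<u8>| {
        // Process each chunk as soon as it arrives instead of
        // buffering the whole blob in memory.
        println!("received {} bytes", chunk.len());
        Ok(())
    });
core.run(future)?;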

Cosmos DB: create_collection() should not require an existing collection

Right now, create_collection() in Cosmos DB takes a reference to a Collection as a parameter (and sends a serialized version in the body of the request).
The reason for this is probably that the corresponding REST endpoint requires a partial collection.

The required fields are:

  • Id (which is the name)
  • IndexingPolicy
  • PartitionKey

It would be much easier to understand and use if create_collection() just required a name plus the other two parameters instead of a full Collection.
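
A hypothetical shape for the proposed API; every name below is illustrative:

let future = client
    .create_collection()
    .with_database_name("my_database")
    .with_collection_name("my_collection")
    .with_partition_key("/partitionKey")
    .with_indexing_policy(IndexingPolicy::default())
    .finalize();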

Support for getting Shared Access Signatures

It would be great to be able to get Shared Access Signatures for blobs/containers, maybe something like:

client
    .get_sas()
    .with_blob_name(...) // or .with_container_name(...)
    .with_start(DateTime)
    .with_expiry(DateTime)
    .with_permissions(&[SasPermission::Read, SasPermission::Write, ...])

Also thanks for maintaining this really nice crate!
