mindflavor / AzureSDKForRust
Microsoft Azure SDK for Rust
Home Page: http://mindflavor.github.io/AzureSDKForRust
License: Apache License 2.0
The result of a list blobs call might be returned in multiple "chunks".
I'd like to alter the API to make it a stream and hide the "chunk" behaviour from the client.
(ex see http://xion.io/post/code/rust-unfold-pagination.html)
It would be great to include support for hyper-rustls, such that pure-rust clients could be built.
First off: thank you for this nice crate!
I ran into a small issue today. Given a response body for a QueryDocumentResponse (SELECT VALUE COUNT(*) ...) like this:
[32]
let body = "[32]".as_bytes();
let mut d: serde_json::Value = serde_json::from_slice(body).unwrap();
There is an extract_result_json function in azure_sdk_for_rust::cosmos::QueryDocumentRequest (src) which iterates over the documents and at some point uses serde_json::value::Value::as_object_mut (src) to check if the document is an object.
let map = doc.as_object_mut().unwrap();
However, in my example the document is a Number, so the returned Option value is None, causing the unwrap to panic:
called `Option::unwrap()` on a `None` value
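A possible fix is to branch on the Option instead of unwrapping. A minimal sketch (std-only; the Value enum here is a hypothetical stand-in for serde_json::Value, and extract_result is illustrative, not the crate's actual function):

```rust
use std::collections::HashMap;

// Minimal stand-in for serde_json::Value (hypothetical, std-only), enough
// to show the fix: a SELECT VALUE COUNT(*) query yields a bare Number, not
// an Object, so as_object_mut() must not be unwrapped unconditionally.
#[derive(Debug)]
enum Value {
    Number(i64),
    Object(HashMap<String, Value>),
}

impl Value {
    fn as_object_mut(&mut self) -> Option<&mut HashMap<String, Value>> {
        match self {
            Value::Object(m) => Some(m),
            _ => None,
        }
    }
}

// Instead of doc.as_object_mut().unwrap(), branch on the Option:
fn extract_result(doc: &mut Value) -> &'static str {
    match doc.as_object_mut() {
        Some(_map) => "object: strip system fields here",
        None => "scalar: pass the value through unchanged",
    }
}

fn main() {
    // The [32] response body deserializes to a single Number document.
    let mut scalar = Value::Number(32);
    assert_eq!(extract_result(&mut scalar), "scalar: pass the value through unchanged");

    let mut obj = Value::Object(HashMap::new());
    assert_eq!(extract_result(&mut obj), "object: strip system fields here");
}
```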
Following the Azure Cosmos DB tutorial, there is no account involved. There is a master key (primary key), but I'm lost as to what this account should be.
I'm getting the error
Err(HyperError(Error { kind: Connect, cause: Some(Os { code: 11001, kind: Other, message: "Este host não é conhecido." }) }))
(The OS message is Portuguese for "This host is not known.")
Thank you for making a futures-based API for object storage (as well as the other interfaces, but especially object storage). The Google and AWS SDKs for Rust, as of now, do not have futures-based interfaces.
Also, sorry for spamming your issues list ... I just really felt compelled to put a good word in for you. Feel free to close whenever :)
The copy_table example is not preserving some of the types. Some of them are getting converted to strings. I'll be hunting this down. Something is getting converted somewhere in here, I'm guessing:
AzureSDKForRust/azure_sdk_storage_table/examples/copy_table.rs
Lines 34 to 39 in 3cdf4fb
In the JS SDK, and I presume the other official Cosmos SDK implementations, a cache of available regions is used to pick a URL to write to, based on a list of preferred regions provided to the client. Without this I can easily set a region URL manually, but without something like the endpoint manager the region might not be available when I make my next request.
From what I can ascertain the JS SDK refreshes the endpoint list every 5 minutes by default.
The region URLs are stored in the Database Account resource which doesn't appear to be exposed at this moment in this SDK.
(On the current git HEAD)
List blobs always returns with the error:
ResponseParsingError(PathNotFound("AccessTier"))
Sample:
client.list_blobs()
.with_container_name(container)
.with_include_metadata()
.finalize()
.map(|iv| {
trace!("List containers returned {} containers.", iv.incomplete_vector.len());
unimplemented!() // never called
}).map_err(|err| {
println!("{:?}", err);
})
According to the logs (just removed :) ) it gets the result in XML from the server, but it never gets to the map function.
This will be done after #61:
In the Azure portal and Storage Explorer, when you generate a SAS token, you are given the option to use the generated SAS URL, which can include much of the details needed for the rest of the operations.
These are in the form:
https://a.blob.core.windows.net/b/c?d
This would simplify the use of Client::azure_sas() as well as provide some of the needed information when the SAS token is limited to a specific object, rather than container or storage account.
As an example, providing the SAS URL looks like this now:
let client = Client::azure_sas("a", "https://a.blob.core.windows.net/b/c?d");
let future = client.get_blob().with_container_name("b").with_blob_name("c").finalize();
If that information were parsed out instead (Url::Options is already being used to parse the token), it could look like this:
let client = Client::azure_sas_url("https://a.blob.core.windows.net/b/c?d");
let future = client.get_blob().finalize();
Supporting SAS URLs significantly improves the ergonomics of user-supplied storage locations. As an example, the JavaScript library supports these.
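A sketch of how azure_sas_url could pull those pieces apart (std-only and hypothetical; the real implementation would likely reuse the url crate it already depends on):

```rust
// Hypothetical helper: split a SAS URL like
//   https://a.blob.core.windows.net/b/c?d
// into the pieces azure_sas() and the blob request builders need.
fn parse_sas_url(sas: &str) -> Option<(String, String, String, String)> {
    let rest = sas.strip_prefix("https://")?;
    let (host_and_path, token) = rest.split_once('?')?;
    let mut parts = host_and_path.splitn(3, '/');
    let host = parts.next()?;               // a.blob.core.windows.net
    let account = host.split('.').next()?;  // a
    let container = parts.next()?;          // b
    let blob = parts.next()?;               // c (keeps any further '/' intact)
    Some((account.into(), container.into(), blob.into(), token.into()))
}

fn main() {
    let (account, container, blob, token) =
        parse_sas_url("https://a.blob.core.windows.net/b/c?d").unwrap();
    assert_eq!(
        (account.as_str(), container.as_str(), blob.as_str(), token.as_str()),
        ("a", "b", "c", "d")
    );
}
```

With those four pieces in hand, the get_blob() builder could be pre-populated with the container and blob names, as proposed above.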
Tuples do not allow you to specify the type in the let binding, forcing you to do so in the method call.
This leads to non-ergonomic methods when coupled with traits. For example, the method list_documents now returns a tuple:
fn list_documents<'b, S1, S2, T>(
&self,
database: S1,
collection: S2,
ldo: &ListDocumentsOptions
) -> impl Future<Item = (ListDocumentsResponse<T>, ListDocumentsResponseAdditionalHeaders), Error = AzureError>
where
S1: AsRef<str>,
S2: AsRef<str>,
T: DeserializeOwned,
Since there are two AsRef<str> parameters, it would be better if the compiler were able to infer the type of T. It cannot when using a tuple. This is an example of what happens:
let (entries, ldah) = core.run(
client.list_documents::<_, _, MySampleStructOwned>(&database_name, &collection_name, &ldo),
).unwrap();
If it were a single return binding something like this could work instead:
let response : Response<MySampleStructOwned> = core.run(
client.list_documents(&database_name, &collection_name, &ldo),
).unwrap();
With three fewer useless types specified.
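A sketch of what the proposed single return type might look like (hypothetical names; this Response, AdditionalHeaders, and the toy list_documents are illustrative only, not the crate's API):

```rust
// Sketch: wrap the parsed entries and the additional headers in one struct
// so T can be inferred from the type annotation on the let binding.
struct AdditionalHeaders {
    continuation_token: Option<String>,
}

struct Response<T> {
    entries: Vec<T>,
    additional_headers: AdditionalHeaders,
}

// Toy stand-in for list_documents: T is the only generic the caller
// ever needs to name, and usually not even that.
fn list_documents<T: From<String>>(raw: Vec<String>) -> Response<T> {
    Response {
        entries: raw.into_iter().map(T::from).collect(),
        additional_headers: AdditionalHeaders { continuation_token: None },
    }
}

fn main() {
    // No turbofish needed: T is inferred from the binding's annotation.
    let response: Response<String> = list_documents(vec!["doc".to_string()]);
    assert_eq!(response.entries.len(), 1);
    assert!(response.additional_headers.continuation_token.is_none());
}
```

With one generic position, the annotation on the let binding is enough for inference, which is exactly what the tuple return prevents.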
Hi,
I need to create a Client authenticated with a Shared Access Signature.
AFAIK this is not supported yet.
How can I/where should I add this?
Thank you for your great work,
Hi!
I'm struggling to figure out how to perform the equivalent of the Python code below in Rust.
Having tested, both pieces of code work as-is (barring lacking set-up code for CLI arguments and such), and should sufficiently demonstrate what I want to achieve.
A thing to note: since DocumentAttributes in the AzureSDKForRust library makes id private, I need to split the partitionKey to get the ID of each document.
In all of our documents, we've got a key partitionKey which is composed as follows:
"partitionKey": "<document type>.<document id>"
collection_link = f"dbs/{database_id}/colls/{collection_id}"
kwargs = dict(
query=dict(
query='select * from r where r.type=@type',
parameters=[
dict(
name='@type',
value=document_type
)
]
),
options=dict(
enableCrossPartitionQuery=True
),
)
echo(
f"\nCreating query to be executed towards '{collection_link}':"
f"\n{dumps(kwargs, indent=2)}"
)
all_documents = [
order
for order
in client.QueryItems(
database_or_Container_link=collection_link,
**kwargs,
)
]
document_ids = set(
[
# Ensure no new_field values overlap/match existing IDs
doc['id']
for doc
in all_documents
]
)
assert(len(all_documents) == len(document_ids))
This is as close as I've come in Rust so far:
fn main() -> Result<(), Box<dyn std::error::Error>> {
let cli = Cli::from_args();
// println!("{:?}", cli);
println!("\nStarting program...");
// Create Azure connection client
let auth_token = AuthorizationToken::new(
cli.database_account.clone(),
TokenType::Master,
&cli.cosmosdb_key,
)?;
let client = ClientBuilder::new(auth_token)?;
println!("HTTP client created:\n{:#?}", &client);
let sql_str = "SELECT * FROM document WHERE document.type=@type";
let query = Query::with_params(
&sql_str,
vec![ParamDef::new("@type").value(cli.document_type.clone())],
);
println!(
"\nRequesting documents with the following query:\n{}",
to_string_pretty(&query)?
);
// Build HTTP REST API query
let future = client
.query_documents(&cli.database_id, &cli.collection_id, query)
.enable_cross_partition(true)
.execute_json();
// Create async http client
let mut core = Core::new()?;
// Execute Query
let mut response = core.run(future)?;
let mut documents = response.results.clone();
while response.additional_headers.continuation_token.is_some() {
// While there are more results not fetched, fetch more and add to documents vector
response = core.run(
client
.query_documents(
&cli.database_id,
&cli.collection_id,
Query::with_params(
&sql_str,
vec![ParamDef::new("@type").value(cli.document_type.clone())],
),
)
.enable_cross_partition(true)
.continuation_token(response.additional_headers.continuation_token.unwrap())
.execute_json(),
)?;
documents.extend(response.results);
}
println!("{} document(s) returned from Azure.", documents.len());
println!(
"\nFirst document of type {}:\n{}",
&cli.document_type,
to_string_pretty(&documents[0])?
);
let document_ids: HashSet<String> = documents
.iter()
.map(|doc| doc.result["partitionKey"].as_str().unwrap().to_string())
//.map(|s| s.rsplitn(2, ".").collect::<Vec<_>>().take(1))
//.map(|(id, doc_type)| id.to_string())
.collect();
println!("{}", to_string_pretty(&document_ids)?);
assert_eq!(documents.len(), document_ids.len());
Ok(())
}
Any ideas/suggestions/criticisms? All are welcome!
(I'll be the first to admit that I have little to no idea what I'm doing).
Many thanks for creating this library! What are your thoughts/plans around supporting key-vault?
I have not looked at the implementation yet, not sure if there is a rest api or something else.
Marius
According to the docs (https://docs.microsoft.com/de-de/rest/api/cosmos-db/create-a-collection) the field x-ms-offer-throughput is optional.
When creating a collection with create_collection(), however, it is strictly required.
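A sketch of how the throughput could become optional on the Rust side (hypothetical types; CreateCollectionRequest here is illustrative, not the crate's actual builder):

```rust
// Sketch: model x-ms-offer-throughput as Option so the header is only
// sent when set, matching the REST docs where it is optional.
struct CreateCollectionRequest {
    name: String,
    offer_throughput: Option<u32>,
}

impl CreateCollectionRequest {
    fn headers(&self) -> Vec<(String, String)> {
        let mut h = vec![("x-ms-version".to_string(), "2017-11-09".to_string())];
        if let Some(t) = self.offer_throughput {
            h.push(("x-ms-offer-throughput".to_string(), t.to_string()));
        }
        h
    }
}

fn main() {
    let without = CreateCollectionRequest { name: "coll".into(), offer_throughput: None };
    assert_eq!(without.headers().len(), 1); // throughput header omitted when unset

    let with = CreateCollectionRequest { name: "coll".into(), offer_throughput: Some(400) };
    assert_eq!(with.headers().len(), 2); // header present when set
    println!("{}: {:?}", with.name, with.headers());
}
```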
GetBlob fails for "image - copy(1).png" with Error preparing HTTP request: invalid uri character.
rest_client.rs uses the 2017-11-09 API version, which is deprecated and will be removed on October 1st, 2019 (see the announcement: https://azure.microsoft.com/en-us/updates/azure-api-management-update-oct-18).
It must be upgraded and tested.
The README shows only one version of the crate because, originally, that was the case. Right now there are multiple, semi-independent crates that can have different versions. There are two strategies here:
Strategy 1 is easier, but it means that any non-backwards-compatible change in any crate will propagate to the others, possibly confusing users.
Anyway, since I have switched to strategy 2, we need to update the README page to show the version of each crate separately.
Some elements can and should be reused between calls. Right now the client is very, very dumb.
It has been deprecated: announcement.
Is there any functionality you would require from Serde or another library before this would be possible?
For requests with the Cosmos DB module, the database name is not properly encoded, and requests will fail if it contains e.g. whitespace.
When creating a database, the name is sent in the body as JSON, so arbitrary names are possible. When using other methods (like get_database) the name is in the URL.
The request might fail with "invalid URI bytes".
A solution would be to properly encode the database name.
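A minimal sketch of that fix, percent-encoding the name before it goes into the URL path (assumes the RFC 3986 unreserved set; a real implementation would likely use the url or percent-encoding crate instead of this hand-rolled helper):

```rust
// Sketch: percent-encode a database name for use as a URL path segment,
// so names containing e.g. spaces produce a valid URI. Keeps only the
// RFC 3986 unreserved characters as-is.
fn percent_encode_path_segment(s: &str) -> String {
    let mut out = String::new();
    for b in s.bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'.' | b'_' | b'~' => {
                out.push(b as char)
            }
            _ => out.push_str(&format!("%{:02X}", b)),
        }
    }
    out
}

fn main() {
    let db = "my database";
    // Hypothetical account name, for illustration only.
    let uri = format!(
        "https://account.documents.azure.com/dbs/{}",
        percent_encode_path_segment(db)
    );
    assert_eq!(uri, "https://account.documents.azure.com/dbs/my%20database");
}
```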
And btw: really nice project!
The base URI is hard-coded as "documents.azure.com", but the URI of Azure Cosmos DB China is "documents.azure.cn:10255". Developing against the Cosmos DB Local Emulator also requires a custom base URI, like "my-local-emulator:port".
Maybe add the base URI as an option in AuthorizationToken?
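A sketch of what that option might look like (hypothetical shape; the real AuthorizationToken has different fields, and the field names here are illustrative):

```rust
// Sketch: let the token carry an optional base URI instead of the SDK
// hard-coding "documents.azure.com".
struct AuthorizationToken {
    account: String,
    base_uri: Option<String>, // e.g. "acc.documents.azure.cn:10255" or an emulator host
}

impl AuthorizationToken {
    fn uri(&self) -> String {
        match &self.base_uri {
            // Custom host wins: China cloud, local emulator, etc.
            Some(custom) => format!("https://{}", custom),
            // Default: the public Azure cloud host.
            None => format!("https://{}.documents.azure.com", self.account),
        }
    }
}

fn main() {
    let default = AuthorizationToken { account: "acc".into(), base_uri: None };
    assert_eq!(default.uri(), "https://acc.documents.azure.com");

    let china = AuthorizationToken {
        account: "acc".into(),
        base_uri: Some("acc.documents.azure.cn:10255".into()),
    };
    assert_eq!(china.uri(), "https://acc.documents.azure.cn:10255");
}
```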
replace_collection() takes the name of the collection, then serializes the name (which just yields the name with quotes), sends it to Azure, and tries to create a collection.
This looks pretty much like a copy-paste bug :D
And thinking about it, this behaves more like create_collection() (also see #152).
Docs for replace collection: https://docs.microsoft.com/de-de/rest/api/cosmos-db/replace-a-collection
Sorry for all those issues!
I am currently working on #21 and #22 and these are issues I came across.
Travis CI does not work with the "meta" Cargo.toml. It must be corrected somehow.
It seems that the SDK logs to the console and I see no option to turn it off.
(I'm using the git version and not the released one.)
There is something amiss in the authentication method.
Repro:
frcognoarch:~/src/rust/AzureSDKForRust [master {origin/master}]> cargo run --example container00 pippo
Finished dev [unoptimized + debuginfo] target(s) in 0.10s
Running `target/debug/examples/container00 pippo`
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: UnexpectedHTTPResult(UnexpectedHTTPResult { expected: 200, received: 403, body: "\u{feff}<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:983dccaa-901e-014f-27a8-04b49b000000\nTime:2018-06-15T12:55:45.8895570Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request \'xwfkhbM6p95SgtaAr1shmrDU8HFOfzpSxqqgKOgbsNo=\' is not the same as any computed signature. Server used following string to sign: \'GET\n\n\n\n\n\n\n\n\n\n\n\nx-ms-date:Fri, 15 Jun 2018 12:55:45 GMT\nx-ms-version:2017-11-09\n/azureskdforrust/\ncomp:list\ninclude:metadata\nmaxresults:5000\'.</AuthenticationErrorDetail></Error>" })', libcore/result.rs:945:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
I get a 404 error when I try to access the documentation from crates.io.
commit cb306c2a6cc65047d8d2752d62a8ddc953ac3859 (tag: 0.24.0)
toolchain: stable-x86_64-apple-darwin
rustc --version: rustc 1.38.0 (625451e37 2019-09-23)
cargo build --package azure_sdk_auth_aad
Compiling azure_sdk_auth_aad v0.21.0 (/repo/AzureSDKForRust/azure_sdk_auth_aad)
error[E0432]: unresolved import `oauth2::curl`
--> azure_sdk_auth_aad/src/lib.rs:10:13
|
10 | use oauth2::curl::http_client;
| ^^^^ could not find `curl` in `oauth2`
error[E0433]: failed to resolve: could not find `curl` in `oauth2`
--> azure_sdk_auth_aad/src/lib.rs:75:39
|
75 | oauth2::RequestTokenError<oauth2::curl::Error, oauth2::StandardErrorResponse<oauth2::basic::BasicErrorResponseType>>,
| ^^^^ could not find `curl` in `oauth2`
error[E0308]: mismatched types
--> azure_sdk_auth_aad/src/lib.rs:37:9
|
37 | / Url::parse(&format!("https://login.microsoftonline.com/{}/oauth2/authorize", tenant_id))
38 | | .expect("Invalid authorization endpoint URL"),
| |_________________________________________________________^ expected struct `url::Url`, found a different struct `url::Url`
|
= note: expected type `url::Url` (struct `url::Url`)
found type `url::Url` (struct `url::Url`)
note: Perhaps two different versions of crate `url` are being used?
--> azure_sdk_auth_aad/src/lib.rs:37:9
|
37 | / Url::parse(&format!("https://login.microsoftonline.com/{}/oauth2/authorize", tenant_id))
38 | | .expect("Invalid authorization endpoint URL"),
| |_________________________________________________________^
error[E0308]: mismatched types
--> azure_sdk_auth_aad/src/lib.rs:41:9
|
41 | Url::parse(&format!("https://login.microsoftonline.com/{}/oauth2/v2.0/token", tenant_id)).expect("Invalid token endpoint URL"),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `url::Url`, found a different struct `url::Url`
|
= note: expected type `url::Url` (struct `url::Url`)
found type `url::Url` (struct `url::Url`)
note: Perhaps two different versions of crate `url` are being used?
--> azure_sdk_auth_aad/src/lib.rs:41:9
|
41 | Url::parse(&format!("https://login.microsoftonline.com/{}/oauth2/v2.0/token", tenant_id)).expect("Invalid token endpoint URL"),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error[E0308]: mismatched types
--> azure_sdk_auth_aad/src/lib.rs:49:44
|
49 | .set_redirect_url(RedirectUrl::new(redirect_url));
| ^^^^^^^^^^^^ expected struct `url::Url`, found a different struct `url::Url`
|
= note: expected type `url::Url` (struct `url::Url`)
found type `url::Url` (struct `url::Url`)
note: Perhaps two different versions of crate `url` are being used?
--> azure_sdk_auth_aad/src/lib.rs:49:44
|
49 | .set_redirect_url(RedirectUrl::new(redirect_url));
| ^^^^^^^^^^^^
error[E0308]: mismatched types
--> azure_sdk_auth_aad/src/lib.rs:64:9
|
64 | authorize_url,
| ^^^^^^^^^^^^^ expected struct `url::Url`, found a different struct `url::Url`
|
= note: expected type `url::Url` (struct `url::Url`)
found type `url::Url` (struct `url::Url`)
note: Perhaps two different versions of crate `url` are being used?
--> azure_sdk_auth_aad/src/lib.rs:64:9
|
64 | authorize_url,
| ^^^^^^^^^^^^^
error: aborting due to 6 previous errors
Some errors have detailed explanations: E0308, E0432, E0433.
For more information about an error, try `rustc --explain E0308`.
error: Could not compile `azure_sdk_auth_aad`.
I have no idea why I get these errors while the Travis build seems to work. I suppose it's because Travis doesn't actually test building on macOS, and maybe some crates work differently depending on the platform?
https://seanmonstar.com/post/187493499882/hyper-alpha-supports-asyncawait
hyperium/hyper#1805
I'd love to be able to use the async/await syntax. It will be stabilized in the 1.39 release on November 7th, but I'd like to be able to use it before then. std::future::Future is already available in stable.
When building with clippy 0.0.212 the following errors are raised:
error: Usage of unknown attribute
--> /home/max/.cargo/registry/src/github.com-1ecc6299db9ec823/azure_sdk_storage_core-0.20.2/src/rest_client.rs:57:11
|
57 | #[clippy::needless_pass_by_value]
| ^^^^^^^^^^^^^^^^^^^^^^
error: Usage of unknown attribute
--> /home/max/.cargo/registry/src/github.com-1ecc6299db9ec823/azure_sdk_storage_core-0.20.2/src/rest_client.rs:236:11
|
236 | #[clippy::too_many_arguments]
| ^^^^^^^^^^^^^^^^^^
error: aborting due to 2 previous errors
error: Could not compile `azure_sdk_storage_core`.
On the repo main page, the link https://mindflavor.github.io/AzureSDKForRust/azure_sdk_for_rust is not working.
Update changelog with contribution list
Right now, Blob::put takes a generic lifetime 'b. This works fine when submitting jobs to a Tokio Core using Core::run. However, trying to use Handle::spawn will result in a lifetime conflict. I'm definitely not the expert, but maybe this is due to different lifetime requirements between Core::run and Handle::spawn which interact poorly with the compiler-inferred lifetime in Blob::put.
Here is a minimum working example demonstrating the problem (all dependency versions match those of AzureSDKForRust's Cargo.toml):
extern crate azure_sdk_for_rust;
extern crate hyper;
extern crate chrono;
extern crate tokio_core;
extern crate futures;
use hyper::mime::Mime;
use tokio_core::reactor::Core;
use futures::future::*;
use azure_sdk_for_rust::azure::storage::client::Client;
use azure_sdk_for_rust::azure::storage::blob::{Blob, BlobType, PUT_OPTIONS_DEFAULT};
use azure_sdk_for_rust::azure::core::lease::{LeaseState, LeaseStatus};
/// Just create a test Blob.
fn create_blob(data: &[u8]) -> Blob {
Blob {
name: "nameOfBlob".into(),
container_name: "containerName".into(),
snapshot_time: None,
last_modified: chrono::Utc::now(),
etag: "".to_owned(),
content_length: data.len() as u64,
content_type: Some("text/plain".parse::<Mime>().unwrap()),
content_encoding: None,
content_language: None,
content_md5: None,
cache_control: None,
x_ms_blob_sequence_number: None,
blob_type: BlobType::BlockBlob,
lease_status: LeaseStatus::Unlocked,
lease_state: LeaseState::Available,
lease_duration: None,
copy_id: None,
copy_status: None,
copy_source: None,
copy_progress: None,
copy_completion: None,
copy_status_description: None
}
}
fn main() {
let mut core = Core::new().unwrap();
let client = Client::new(&core.handle(), "storageAccount", "accountKey").unwrap();
let data = b"someData";
let blob = create_blob(&data[..]);
let future = blob.put(
&client,
&PUT_OPTIONS_DEFAULT,
Some(&data[..])
).then(move |res| {
// Handle the result.
// This is needed because `Handle::spawn` requires that the future it
// is given be Future<Item=(), Error=()>.
match res {
Ok(_) => ok(()),
Err(_) => err(())
}
});
// THIS WORKS:
//core.run(future).unwrap();
// BUT THIS DOES NOT:
let handle = core.handle();
handle.spawn(future);
// Assume the core is running...
}
The compiler error is:
error[E0597]: `client` does not live long enough
--> src/main.rs:50:4
|
50 | &client,
| ^^^^^^ borrowed value does not live long enough
...
70 | }
| - borrowed value only lives until here
|
= note: borrowed value must be valid for the static lifetime...
For my use case, I can fix the problem by removing the generic lifetime 'b altogether from Blob::put, like so (line 566 of src/azure/storage/blob/mod.rs):
pub fn put(
&self,
c: &Client,
po: &PutOptions,
r: Option<&[u8]>,
) -> impl Future<Item = (), Error = AzureError> {
However, I don't know if this will break others' use cases. If you feel that removing the generic is the right way to go, let me know and I can submit a PR.
Reading and writing blobs in a single operation is cumbersome and really not efficient.
Since we are async anyway, it would make sense to enable data processing as soon as possible without waiting for the complete blob. Also, there might be no need to allocate a big buffer for a file.
The futures crate exposes the Stream trait, so we should implement it.
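As a first approximation, the chunked behaviour can be sketched synchronously with std::io::Read (illustrative only; the real change would implement the Stream trait so chunks arrive asynchronously):

```rust
use std::io::Read;

// Sketch: a synchronous stand-in for a blob "stream". The blob is yielded
// in fixed-size chunks, so a consumer can start processing before the
// whole blob is in memory and never needs one big buffer.
fn chunks<R: Read>(mut reader: R, chunk_size: usize) -> impl Iterator<Item = Vec<u8>> {
    std::iter::from_fn(move || {
        let mut buf = vec![0u8; chunk_size];
        match reader.read(&mut buf) {
            Ok(0) => None,                         // end of blob
            Ok(n) => { buf.truncate(n); Some(buf) } // partial final chunk allowed
            Err(_) => None,                        // sketch: swallow errors
        }
    })
}

fn main() {
    let blob = b"hello azure blob".to_vec(); // 16 bytes
    let parts: Vec<Vec<u8>> = chunks(&blob[..], 4).collect();
    assert_eq!(parts.len(), 4);
    assert_eq!(parts[0], b"hell");
    // Re-joining the chunks recovers the original blob.
    let rejoined: Vec<u8> = parts.concat();
    assert_eq!(rejoined, blob);
}
```

The Stream version would have the same shape, with each chunk delivered by a future as it arrives from the network instead of from an in-memory reader.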
Right now, create_collection() in Cosmos DB takes a reference to a Collection as a parameter (and sends a serialized version in the body of the request).
The reason for this is probably that the corresponding REST endpoint requires a partial collection.
The required fields are:
It would be much easier to understand and use if create_collection() just required a name and the other two parameters instead of a full collection.
After the futures migration (0.4.0) some methods no longer work (possibly because of wrong request headers, mainly Accept).
It would be great to be able to get Shared Access Signatures for blobs/containers, maybe something like:
client
.get_sas()
.with_blob_name(...) || with_container_name(...)
.with_start(DateTime)
.with_expiry(DateTime)
.with_permissions(&[SasPermission::Read, SasPermission::Write, ...])
Also thanks for maintaining this really nice crate!