awslabs / aws-lambda-rust-runtime
A Rust runtime for AWS Lambda
License: Apache License 2.0
Being very new to Rust, I was wondering how I could share rusoto clients across multiple Lambda invocations. In particular, I am trying to cache an IoT Data client for function invocations that happen several times per minute. Coming from TypeScript, where caching in global scope solves the issue, I can't figure out how to do it in Rust. Apologies if this is out of scope for this project, but I couldn't find an answer anywhere else.
In this code I would like to cache `iot_data_client` for further invocations:
use lambda_runtime::{error::HandlerError, lambda, Context};
use serde_derive::{Serialize, Deserialize};
use rusoto_iot_data::{IotData, IotDataClient, PublishRequest};
use bytes::Bytes;
use rusoto_core::Region;

#[derive(Deserialize, Clone)]
struct CustomEvent {
    first_name: String,
    last_name: String,
}

#[derive(Serialize, Clone)]
struct CustomOutput {
    message: String,
}

fn main() {
    lambda!(my_handler);
}

fn my_handler(e: CustomEvent, _ctx: Context) -> Result<CustomOutput, HandlerError> {
    if e.first_name.is_empty() {
        panic!("Empty first name");
    }
    let request = PublishRequest {
        payload: Some(Bytes::from_static(b"Hello Rust!")),
        topic: String::from("rust"),
        qos: Some(1),
    };
    // How can I cache this?
    let iot_data_client = IotDataClient::new(Region::EuCentral1);
    match iot_data_client.publish(request).sync() {
        Ok(_result) => Ok(CustomOutput {
            message: format!("Hello, {}!", e.first_name),
        }),
        Err(error) => panic!("Error: {:?}", error),
    }
}
I have an expensive task that should not be part of the handler, as that would run the code again and again for each event. According to the AWS documentation, the execution context is where this "global" code needs to go so that its output persists across events. The docs say that "A new instance of the Context object is passed to each handler invocation". Is there a way to create a custom context that saves global state between calls?
Maybe I'm missing the point here, but I am new to Rust and not sure how to do the following, which is the Python code we are trying to port:
def handler(event, context):
    print(event)
    event_type = event.get('resource', 'not_found')
    http_method = event.get('httpMethod', 'not_found')
    try:
        if event_type == 'not_found' or http_method == 'not_found':
            ret['statusCode'] = 404
            ret['body'] = json.dumps({'err': 'err', 'event_type': event_type, 'http_method': http_method})
        elif event_type == '/echo':
            ret['statusCode'] = 200
            ret['body'] = json.dumps(event, default=str)
        elif event_type == '/user/{username}' and http_method == 'GET':
            response = client.get_user(
                UserName=event['pathParameters']['username']
            )
            ret['statusCode'] = response['ResponseMetadata']['HTTPStatusCode']
            ret['body'] = json.dumps(response, default=str)
This code relies on accessing `resource` and `httpMethod`, which are defined in the event. Would it be possible to deserialize selected values into the `CustomEvent` struct that is in the example code?
#[derive(Deserialize)]
struct CustomEvent {
    #[serde(rename = "firstName")]
    first_name: String,
}
Thanks in advance.
When I deploy the lambda-http basic example and call it via API Gateway, I get:
{
"errorMessage": "JsonError: missing field `path` at line 1 column 2",
"errorType": "JsonError",
"stackTrace": null
}
It seems like having a `Request` in the handler function is not supported. How can I make a Lambda that handles both GET and POST invocations via API Gateway?
Is this runtime supported in the CDK? If not, is this a possibility?
Current documentation instructs users of Serde to use the derive feature in Cargo.toml:
serde = { version = "1.0", features = ["derive"] }
This is incompatible with lambda_http which uses separate serde and serde_derive crates.
error[E0252]: the name `Deserialize` is defined multiple times
--> /home/ilianaw/.cargo/registry/src/github.com-1ecc6299db9ec823/lambda_http-0.1.0/src/request.rs:17:5
|
15 | Deserialize, Deserializer,
| ----------- previous import of the macro `Deserialize` here
16 | };
17 | use serde_derive::Deserialize;
| ----^^^^^^^^^^^^^^^^^^^^^^^^^-
| | |
| | `Deserialize` reimported here
| help: remove unnecessary import
|
= note: `Deserialize` must be defined only once in the macro namespace of this module
error: aborting due to previous error
For more information about this error, try `rustc --explain E0252`.
error: Could not compile `lambda_http`.
If a user switches back to the old serde_derive crate, lambda_http successfully builds.
serde = "1.0"
serde_derive = "1.0"
Without using a custom domain mapping in API Gateway, the URI looks like:
https://1234567890.execute-api.us-east-1.amazonaws.com/prod/a/b/c
where `prod` is the deployment stage. Currently lambda_http doesn't include the stage in the URL (but it looks like it's available in the event).
I tested it and it works fine with an empty custom domain mapping, but a non-empty custom domain mapping might also not work; I didn't test that, and I'm not sure if the domain mapping is present in the event.
(I'm running into this because I'm writing request validation for a Twilio endpoint, which involves concatenating key/value pairs onto the URL Twilio knows. So I'm relying on `Request::uri().to_string()`.)
Given that the runtime is built on top of tokio, I think it would be useful to be able to provide async handler functions.
Happy to contribute this if there is interest from your side!
EDIT: Support for tower via the `tower-service` crate could also be interesting.
I tried it out by connecting to a Kinesis stream and encountered the following error:
(Interestingly when I try the Lambda UI Console test cases (using the provided Kinesis template test cases), I don't see any errors.)
START RequestId: 7ea355dc-77ee-4cbc-aa9e-f18d6264fc5a Version: $LATEST
2018-12-29 10:33:36 ERROR [lambda_runtime::runtime] Could not parse event to type: invalid type: floating point `1546079388.123`, expected i64 at line 1 column 10108
2018-12-29 10:33:37 ERROR [lambda_runtime::runtime] Could not parse event to type: invalid type: floating point `1546079388.123`, expected i64 at line 1 column 10108
2018-12-29 10:33:37 ERROR [lambda_runtime::runtime] Could not parse event to type: invalid type: floating point `1546079388.123`, expected i64 at line 1 column 10108
2018-12-29 10:33:37 ERROR [lambda_runtime::runtime] Could not parse event to type: invalid type: floating point `1546079388.123`, expected i64 at line 1 column 10108
2018-12-29 10:33:37 ERROR [lambda_runtime::runtime] Unrecoverable error while fetching next event: JSON error
thread 'main' panicked at 'Could not retrieve next event', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/lambda_runtime-0.1.0/src/runtime.rs:267:17
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:80
2: std::panicking::default_hook::{{closure}}
at src/libstd/sys_common/backtrace.rs:68
at src/libstd/panicking.rs:210
3: std::panicking::default_hook
at src/libstd/panicking.rs:225
4: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:488
5: std::panicking::begin_panic
6: <lambda_runtime::runtime::Runtime<E, O>>::get_next_event
7: <lambda_runtime::runtime::Runtime<E, O>>::get_next_event
8: <lambda_runtime::runtime::Runtime<E, O>>::get_next_event
9: <lambda_runtime::runtime::Runtime<E, O>>::get_next_event
10: <lambda_runtime::runtime::Runtime<E, O>>::get_next_event
11: lambda_runtime::runtime::start
12: bootstrap::main
13: std::rt::lang_start::{{closure}}
14: std::panicking::try::do_call
at src/libstd/rt.rs:59
at src/libstd/panicking.rs:307
15: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:102
16: std::rt::lang_start_internal
at src/libstd/panicking.rs:286
at src/libstd/panic.rs:398
at src/libstd/rt.rs:58
17: main
END RequestId: 7ea355dc-77ee-4cbc-aa9e-f18d6264fc5a
After scanning through the library code, I think it has an issue parsing the `ApproximateArrivalTimestamp` field in the Kinesis payload, which looks like a float.
I currently have a Lambda in Rust using the Python 3 binding, via the excellent ilianaw/rust-crowbar crate.
I also spent a lot of time creating internal serde mappings for various data types, but the majority of this work is on a private repository.
What isn't clear in the documentation is whether it's possible to cover all of the various use cases that Lambdas can serve. My particular use case is using a Lambda for:
My Lambda essentially acts as an event bus allowing me to do just about everything a typical web application would need to do, and it's awesome. I had to do a lot of very strange things to get a build environment that mimicked the official Lambda runtime environment, so it's cool to see that now it's possible to just build an entirely statically linked binary using musl 👍
What is not clear in the existing documentation is how to integrate all of these services inside of this new Rust-native Lambda runtime. Can I use a single Rust Lambda function to address all of the use cases listed above, or is this only for API Gateway endpoints?
I've seen that in other issues, Tower is listed as a potential replacement for Hyper here, and that would be great! Basically I'd like to take advantage of native HTTP server libraries/frameworks for intelligent request routing and yet still be able to handle SNS/SES/CloudWatch/etc. events within the same Rust Lambda function. If all of this is possible, can the documentation be updated to reflect this and provide guidance on how to use it?
Hi all, correct me if I am wrong, but: based on this, if the payload deserialization fails, you return `Ok(None)`, losing the original error message. So the response will always be Ok and never PayloadError?
I thought maybe it was something in my code but I get the same panic with the example here: https://docs.rs/crate/lambda_runtime/0.2.0
START RequestId: 967aad3f-eaaa-4946-bc51-dc717f81aa71 Version: $LATEST
thread 'main' panicked at 'Could not retrieve next event', /Users/redacted/.cargo/registry/src/github.com-1ecc6299db9ec823/lambda_runtime_core-0.1.0/src/runtime.rs:266:17
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:71
2: std::panicking::default_hook::{{closure}}
at src/libstd/sys_common/backtrace.rs:59
at src/libstd/panicking.rs:211
3: std::panicking::default_hook
at src/libstd/panicking.rs:227
4: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:491
5: std::panicking::begin_panic
6: <lambda_runtime_core::runtime::Runtime<Function, EventError>>::get_next_event
7: <lambda_runtime_core::runtime::Runtime<Function, EventError>>::get_next_event
8: <lambda_runtime_core::runtime::Runtime<Function, EventError>>::get_next_event
9: <lambda_runtime_core::runtime::Runtime<Function, EventError>>::get_next_event
10: <lambda_runtime_core::runtime::Runtime<Function, EventError>>::get_next_event
11: lambda_runtime_core::runtime::start_with_config
12: bootstrap::main
13: std::rt::lang_start::{{closure}}
14: std::panicking::try::do_call
at src/libstd/rt.rs:59
at src/libstd/panicking.rs:310
15: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:102
16: std::rt::lang_start_internal
at src/libstd/panicking.rs:289
at src/libstd/panic.rs:398
at src/libstd/rt.rs:58
17: main
END RequestId: 967aad3f-eaaa-4946-bc51-dc717f81aa71
REPORT RequestId: 967aad3f-eaaa-4946-bc51-dc717f81aa71 Duration: 1050.16 ms Billed Duration: 1100 ms Memory Size: 128 MB Max Memory Used: 26 MB
RequestId: 967aad3f-eaaa-4946-bc51-dc717f81aa71 Error: Runtime exited with error: exit status 101
Runtime.ExitError
The examples for using the Rust Lambda runtime require additional error-handling calls that need access to the `Context` object. Unfortunately, that breaks Rust ergonomics for error handling, which would ordinarily use `?` and map custom errors to `HandlerError` through an `impl From` (which doesn't have a reference to the `Context` struct).
Ideally, I could return my own error types and have the runtime take care of supplying context, with perhaps some place to inject logging, etc. into the error-handling flow.
In practice, in the short term, I'm breaking out all actions that can return a `Result` into a separate module, because even trivial activities get complicated quickly if you want to inject error handling with access to `Context`.
In the main function, the Rust runtime for some reason uses the original environment variable value that I set in the AWS Lambda console. Even after 24 hours and several iterations of the function code, it still uses the old value.
I am doing this:
info!("Log level: {}", env!("LOG_LEVEL"));
Is there some way to fix this?
I want to have a Lambda function that responds to an HTTP POST event and invokes another Lambda with the POST parameters. To do that I need the `stage` value. I found that the `stage` value is under `RequestContext::ApiGateway`, and I can get the `RequestContext` using the Request's `request_context()` function, but `RequestContext` is private, so I can't decompose it to get the stage value.
When using `lambda_runtime`, it seems to be hardcoded that one's Lambda handler takes JSON as input and produces JSON as output, serializing the structures with `serde_json`. Is it possible to configure this, and especially to return binary data with a specified MIME type instead of "application/json"?
This would be most useful for Lambdas responding with image/png, image/jpeg, and other binary types.
I also tested using `lambda-runtime-client` directly, since then I have control over deserializing the request and can directly output a binary blob as the response. But the header for the response is fixed to `const API_CONTENT_TYPE: &str = "application/json";`.
Have there been any thoughts on how to support and handle binary responses with other MIME types?
Hi, I'm currently using the `alexa_sdk` crate, which depends on this crate, for developing Alexa Skills. Using the sample utterances I've provided to the Alexa certification process, I've done manual tests with Alexa devices and the Simulator and am receiving working responses for my skill. However, when the Functional Tests section of the Skills Certification is executed, I notice that their tests cause a barrage of similar errors in the CloudWatch Logs. They all have the form:
2019-06-02 19:01:56,661 ERROR [lambda_runtime_core::runtime] Handler returned an error for {Some_Id_here}: JsonError: missing field `token` at line 1 column 497
Again, when I run the equivalent utterance using the Alexa Simulator in the developer console, it generates a valid non-error JSON response. Also, I'm not aware of a token field being required in the response anywhere. Any idea what could be the cause of these errors?
Hey! I was trying to obtain values from the query string of an API Gateway request and I think I found a bug.
I was trying to get the query string values by using `Url::parse` to parse the URI obtained from `request.uri().to_string()`. However, the query string is not added to the URI when converting from `LambdaRequest` to `Request`. I believe this is not intended, but I would like to confirm that.
Note 1: Eventually, I found the request.query_string_parameters()
function, as mentioned in the examples.
Note 2: I also mentioned this problem in this issue.
I tried to get the length of an array value, so I coded it as below.
var unreadCounts = 0;
var unreadMsg = [];
unreadMsg = dtc_data.Items.map(data => {
    if (data.acknowledged == false)
        return data;
})
unreadCounts = unreadMsg.length;
return cb(null, unreadCounts);
I thought it should return the unread count as a number, but I got an error:
Internal server error
Does AWS Lambda not support the length function?
Hello,
When compiling my project that uses:
lambda_runtime = "^0.2"
rusoto_core = "^0.42"
I get two errors when compiling `lambda_runtime_errors` v0.1.1:
error[E0277]: the trait bound `(): LambdaErrorExt` is not satisfied
  --> $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/lambda_runtime_errors-0.1.1/src/error_ext_impl.rs:145:9

error[E0277]: the trait bound `(): std::error::Error` is not satisfied
  --> $HOME/.cargo/registry/src/github.com-1ecc6299db9ec823/lambda_runtime_errors-0.1.1/src/error_ext_impl.rs:145:9
Cargo version: cargo 1.41.0-nightly (8280633db 2019-11-11)
Rustc : rustc 1.41.0-nightly (412f43ac5 2019-11-24)
It seems to be related to the nightly version, since it worked perfectly 3 days ago and stopped working after I ran `rustup update`.
Hello. I'm trying this out, following what's in the README, and getting to the `aws lambda invoke` step for the basic example; it fails with:
RequestId: 22649496-f719-11e8-b4b6-c796ae152b7e Error: Runtime failed to start: fork/exec /var/task/bootstrap: exec format error
Runtime.ExitError
Any ideas?
I would like to create a mock `Context` to be able to test my handler fully. Unfortunately, `deadline` is private, `new` requires a config provider, and the `ConfigProvider` is in `env`, which is private.
Any pointers?
The `lambda-http` crate doesn't seem to be published on crates.io, which means the only way to use it in a project is to reference this git repo directly in `Cargo.toml`, which is not exactly ideal.
i.e., I got it to build with
[dependencies.lambda_http]
git = "https://github.com/awslabs/aws-lambda-rust-runtime"
but it would be much nicer to specify it as a regular crates.io dependency.
Are there any plans to publish the `lambda-http` crate?
(Pulled from #94)
I wonder if we can support that through an optional extension for customers that do care about this. To that end, I've been talking to @carllerche (he might be able to comment on this more) about using Tower in the underlying implementation of the Lambda runtime. My current thinking is that we can provide several valid handler signatures, implemented by specialized Tower services, and a corresponding `Into<Service>` (for context, this is the foundational Service trait) that will be executed by the Lambda runtime itself. Given this definition, we can provide several specialized services along the lines of:
Service<(http::Request<T>, Context)> -> http::Response<U>
Service<(http::Request<T>, Context)> -> Result<http::Response<U>, HandlerError>
Service<(aws_lambda_events::event::sqs::SqsEvent, Context)> -> U
Service<(aws_lambda_events::event::sqs::SqsEvent, Context)> -> Result<U, HandlerError>
Note: each service function signature would be implemented in terms of a sealed trait similar to `HttpService`.
In the absence of a handler error, the `Into<Service>` implementation would supply a default value for `HandlerError`. Handler construction will still be available using something similar to a `ServiceFn`.
To address some potential concerns:
- `HttpService` behind a feature flag.

Currently, the Lambda Rust runtime declares a `HandlerError` type that developers can use to return an error from their handler method. While this approach works, it forces developers to write more verbose code to handle all errors and generate the relevant `HandlerError` objects. Developers would like to return errors automatically using the `?` operator (#23).
For example:
fn my_handler(e: CustomEvent, c: lambda::Context) -> Result<CustomOutput, Error> {
    let i = e.age.parse::<u8>()?;
    ...
}
In an error response, the Lambda runtime API expects two parameters: `errorType` and `errorMessage`. Optionally, we can also attach a stack trace to the error.
{
"errorMessage" : "Error parsing age data.",
"errorType" : "InvalidAgeException"
}
To generate the error message field, we need the `Err` variant of the handler `Result` to support the `Display` trait. This allows us to support any `Display` type in the result; `Error` types inherently support `Display` too. However, we cannot identify a way to automatically generate the error type field from a `Display`-compatible object using only stable features. To address this, we plan to introduce a new trait in the runtime:
pub trait ErrorExt {
    fn error_type(&self) -> &str;
}
We'd like to deprecate this trait in the future and rely on the type name intrinsic (which is currently blocked on specialization). For context, see #1428.
The name ErrorExt comes from the extension trait conventions RFC. Based on feedback, we are open to changing this.
The runtime crate itself will provide implementations of the `ErrorExt` trait for the most common errors in the standard library. Developers will have to implement the trait themselves for their own error types. We may consider adding a procedural macro to the runtime crate to automatically generate the trait implementation.
In summary, the proposed changes are:
- The handler `Result` must support `Display` and `ErrorExt` in its `Err` variant.
- The runtime will use the `Display` trait to extract an error message and use the ISO-8859-1 charset.
- The runtime will use the `error_type()` function to get the value for the `errorType` field.
- If the `RUST_BACKTRACE` environment variable is set to `1`, the runtime will use the backtrace crate to collect a stack trace as soon as the error is received.

The new handler type definition will be:
pub trait Handler<Event, Output, Error>
where
    Event: From<Vec<u8>>,
    Output: Into<Vec<u8>>,
    Error: Display + ErrorExt + Send + Sync,
{
    fn run(&mut self, event: Event, ctx: Context) -> Result<Output, Error>;
}
There is a typo in the AWS CLI deploy instructions in the README:
--zip-file file://./lambda.zip \
should be
--zip-file fileb://./lambda.zip \
In a precursor project there was this question about using Rust with Lambda@Edge (Lambda + CloudFront).
It looks like Lambda@Edge is still limited to nodejs. Any idea if Amazon plans to expand supported runtimes? Or, worst-case, is there any interest in this runtime to work via nodejs?
When trying to run my Lambda function with `sam local start-api`, I get the following error about `_LAMBDA_SERVER_PORT` not being defined:
(sam) PS C:\Users\radix\Projects\pandt> sam local start-api --template .\SAM.yaml
2018-12-09 15:48:26 Mounting pandt at http://127.0.0.1:3000/ [GET]
2018-12-09 15:48:26 You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template
2018-12-09 15:48:27 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
2018-12-09 15:48:35 Invoking lollerskates (provided)
Fetching lambci/lambda:provided Docker container image......
2018-12-09 15:48:36 Mounting C:\Users\radix\Projects\pandt\artifacts as /var/task:ro inside runtime container
thread 'main' panicked at 'failed to start runtime: environment error: the _LAMBDA_SERVER_PORT variable must specify a valid port to listen on', /home/rust/.cargo/git/checkouts/rust-aws-lambda-2a7b20b15eabd9cc/1435a4d/aws_lambda/src/lib.rs:56:31
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::print
at libstd/sys_common/backtrace.rs:71
at libstd/sys_common/backtrace.rs:59
2: std::panicking::default_hook::{{closure}}
at libstd/panicking.rs:211
3: std::panicking::default_hook
at libstd/panicking.rs:227
4: std::panicking::rust_panic_with_hook
at libstd/panicking.rs:476
5: std::panicking::continue_panic_fmt
at libstd/panicking.rs:390
6: std::panicking::begin_panic_fmt
at libstd/panicking.rs:345
7: aws_lambda::start::{{closure}}
8: aws_lambda::start
9: pandt_lambda::main
10: std::rt::lang_start::{{closure}}
11: std::panicking::try::do_call
at libstd/rt.rs:59
at libstd/panicking.rs:310
12: __rust_maybe_catch_panic
at libpanic_unwind/lib.rs:102
13: std::rt::lang_start_internal
at libstd/panicking.rs:289
at libstd/panic.rs:392
at libstd/rt.rs:58
14: main
If I set `_LAMBDA_SERVER_PORT` to some random port value like `5000`, it doesn't immediately crash, but then I get a timeout:
2018-12-09 15:51:06 Mounting C:\Users\radix\Projects\pandt\artifacts as /var/task:ro inside runtime container
2018-12-09 15:51:17 Function 'pandt' timed out after 10 seconds
2018-12-09 15:51:17 Function returned an invalid response (must include one of: body, headers or statusCode in the response object). Response received:
2018-12-09 15:51:17 127.0.0.1 - - [09/Dec/2018 15:51:17] "GET / HTTP/1.1" 502 -
My function isn't doing anything special:
use lambda_runtime::{self, error::HandlerError, start};
use serde::Serialize;
use serde_json::Value;
#[derive(Serialize, Debug)]
struct Response {
    secret: String,
    input_event: Value,
}

#[allow(clippy::needless_pass_by_value)]
fn main() -> Result<(), failure::Error> {
    let handler = move |event: Value, _ctx: lambda_runtime::Context| -> Result<Response, HandlerError> {
        Ok(Response {
            secret: "foo".to_string(),
            input_event: event,
        })
    };
    start(handler, None);
    Ok(())
}
Hey all, would it be possible to cut a release at the current master commit level or similar? As long as it's post #111 that should work.
Coming in as a new user, I was confused by the differing formats in the examples source and in the docs. It didn't help that the examples source (which is the correct way to do things) doesn't compile with the latest released version of the crate. Once the dependency is changed to lambda = { git = "https://github.com/awslabs/aws-lambda-rust-runtime/", branch = "master" }, everything works with the new syntax.
everything works for the new syntax.
I can try and provide documentation and examples improvements PRs but if the maintainers could cut a release I think that'd go a long way to usability.
Receiving this error when building the crate fresh from crates.io:
Compiling lambda_http v0.1.0
error[E0252]: the name `Deserialize` is defined multiple times
--> /home/bruces/.cargo/registry/src/github.com-1ecc6299db9ec823/lambda_http-0.1.0/src/request.rs:17:5
|
15 | Deserialize, Deserializer,
| ----------- previous import of the macro `Deserialize` here
16 | };
17 | use serde_derive::Deserialize;
| ^^^^^^^^^^^^^^^^^^^^^^^^^ `Deserialize` reimported here
|
= note: `Deserialize` must be defined only once in the macro namespace of this module
help: you can use `as` to change the binding name of the import
|
17 | use serde_derive::Deserialize as OtherDeserialize;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error: aborting due to previous error
For more information about this error, try `rustc --explain E0252`.
error: Could not compile `lambda_http`.
Looking at the source here, you can see the following on lines 15 and 17 respectively:
use serde::de::{Deserialize, Deserializer, Error as DeError, MapAccess, Visitor};
use serde_derive::Deserialize;
Deleting line 17 appears to correct the issue.
Looking at PR 62, it may be resolved there, but I'm willing to submit a smaller PR to just fix this if desired.
I wrapped the docker deployment process into a cargo subcommand with this https://crates.io/crates/cargo-aws-lambda, if you're interested.
Updated just a little bit ago to the latest master, after the "cleanup". All Lambda jobs that I have running now error out with runtime errors:
Process exited before completing request
A simple example that causes this:
use lambda::handler_fn;
use serde_json::Value;

type Error = Box<dyn std::error::Error + Send + Sync + 'static>;

#[tokio::main]
async fn main() -> Result<(), Error> {
    let func = handler_fn(|event| handler(event));
    lambda::run(func).await?;
    Ok(())
}

async fn handler(_event: Value) -> Result<(), Error> {
    Ok(())
}
I am not getting any error to look at if I log all errors that I can. I can also confirm that it does run the handler to completion, but then exits right away.
I'm finding that, in practice, the new HandlerError type constraints may have made error handling hard or awkward to satisfy for users.
A common case I run into is trying to implement lambda_runtime_errors::LambdaErrorExt for a foreign crate's error type. Since the Fail crate is becoming more common I think that's okay, but it's still rough when the foreign crate's error type sticks to the standard library error type.
I often find myself cluttering code with map_err(HandlerError::new), like this:
// I don't control the Err type of foo_bar, as it's defined in another crate,
// so I can't impl lambda_runtime_errors::LambdaErrorExt for ThatError { ... }
foo_bar(input).map_err(HandlerError::new)? // fails to compile because there is no impl of lambda_runtime_errors::LambdaErrorExt
This is unfortunate for users because these are often simple "hello world, integrate xxx with Lambda" cases where the majority of the code written satisfies the trait constraints on errors rather than delivering business value.
I understand why this constraint exists, but I want to revisit the value it provides to users versus what it costs to author Lambdas.
I've run into this enough that I wanted to reopen the floor for alternative options.
If a handler panics, the panic gets picked up by the executor, but then the function just hangs.
Example:
type Error = Box<dyn std::error::Error + Send + Sync + 'static>;

#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda::run(lambda::handler_fn(handler)).await?;
    Ok(())
}

#[derive(Debug)]
enum MyError {
    Unknown,
}

impl std::fmt::Display for MyError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(
            f,
            "{}",
            match self {
                MyError::Unknown => "Unknown",
            }
        )
    }
}

impl std::error::Error for MyError {}

async fn handler(_: serde_json::Value) -> Result<serde_json::Value, MyError> {
    panic!("imagine an unwrap or unimplemented! here");
}
It's possible to work around this by introducing a shim:
#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda::run(lambda::handler_fn(wrapped_handler)).await?;
    Ok(())
}

async fn wrapped_handler(event: serde_json::Value) -> Result<serde_json::Value, MyError> {
    // Spawning isolates the panic in a separate task, which surfaces as a JoinError.
    let handle = tokio::spawn(async move { handler(event).await }).await;
    handle.map_err(|_| MyError::Unknown)?
}
Trying to run locally:
cargo run --example=basic
Downloaded simple-error v0.1.13
Compiling simple-error v0.1.13
Compiling simple_logger v1.0.1
Compiling lambda_runtime v0.2.0 (/Users/walther/git/aws-lambda-rust-runtime/lambda-runtime)
Finished dev [unoptimized + debuginfo] target(s) in 3.87s
Running `/Users/walther/git/aws-lambda-rust-runtime/target/debug/examples/basic`
thread 'main' panicked at 'Could not find runtime API env var: environment variable not found', lambda-runtime-core/src/runtime.rs:79:13
note: Run with `RUST_BACKTRACE=1` for a backtrace.
It would be useful to have some documentation on how to run Rust-based Lambda functions locally, for development & testing purposes.
Hey all, first off - thank you for the awesome runtime!
Second, I'm not able to see errors printed properly in my CloudWatch Logs, just a generic 2019-03-29 18:12:23 INFO [lambda_runtime_core::runtime] Error response for ... accepted by Runtime API.
I think I might be missing some kind of impl or something, but I'm not sure. Should I expect errors returned by main to be displayed in CloudWatch?
Here's my basic structure:
use derive_more::{Display, From};
use failure::Fail;
use lambda_runtime::error::{lambda, HandlerError, LambdaErrorExt};
use serde_derive::Deserialize;

#[derive(Debug, Display, From)]
pub enum Error {
    // exhaustive errors here
}

impl Fail for Error {}

impl From<Error> for HandlerError {
    fn from(err: Error) -> HandlerError {
        HandlerError::new(err)
    }
}

impl LambdaErrorExt for Error {
    fn error_type(&self) -> &str {
        match self {
            // ...
        }
    }
}

#[derive(Deserialize, Debug)]
#[serde(untagged)]
pub enum Events {
    // ...
}

pub fn handler(event: Events, _ctx: lambda_runtime::Context) -> Result<(), Error> {
    // ...
    Ok(())
}

fn main() -> Result<(), Error> {
    simple_logger::init_with_level(log::Level::Info)?;
    lambda!(handler);
    Ok(())
}
First, there is some inconsistency about what the deadline values mean; compare:
- aws-lambda-rust-runtime/lambda-runtime/src/context.rs, lines 62 to 64 (at 109a547)
- aws-lambda-rust-runtime/lambda-runtime-client/src/client.rs, lines 101 to 102 (at 109a547)
- aws-lambda-rust-runtime/lambda-runtime/src/context.rs, lines 103 to 108 (at 109a547)
Second, I don't think we need u128 for the deadlines, even if they are in nanoseconds; a u64 would last until the year 2554, and even an i64 until the year 2262. Also, we should probably use a signed int (i64) for the return value of get_time_remaining_millis. What do you think?
I can open a PR for these if we come to any conclusions.
(Sorry for using "we" when I clearly haven't done anything for the project yet, it was the easiest to phrase it this way.)
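To illustrate why a signed return type helps, here is a minimal sketch. The function name mirrors get_time_remaining_millis from the issue, but the signature and the millisecond-based arguments are illustrative assumptions, not the crate's actual API:

```rust
// Hypothetical sketch: deadline math with i64 instead of u128.
// `deadline_ms` stands in for the runtime's context deadline (epoch millis).
fn get_time_remaining_millis(deadline_ms: i64, now_ms: i64) -> i64 {
    // With a signed type, an overrun simply yields a negative value,
    // instead of panicking (debug) or wrapping (release) as u64 subtraction would.
    deadline_ms - now_ms
}

fn main() {
    assert_eq!(get_time_remaining_millis(1_000, 400), 600);
    // Past the deadline: -200, not a panic.
    assert_eq!(get_time_remaining_millis(1_000, 1_200), -200);
}
```

The negative-result case is exactly the situation an unsigned return type cannot represent, which is the argument for i64 above.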
fn name() wasn't added to failure::Fail until 0.1.5. When upgrading from lambda-runtime 0.1 to 0.2 I got the following error because I still had failure 0.1.4 in my Cargo.lock:
/.cargo/git/checkouts/aws-lambda-rust-runtime-7c865cce90132439/05190ef/lambda-runtime-errors/src/lib.rs:40:32 | 40 | self.find_root_cause().name().unwrap_or_else(|| "FailureError") |
We probably need to be more restrictive in the Cargo.toml to force an upgrade. Maybe failure = "~0.1.5"?
I got the error below when building the lambda_http example code:
use lambda_http::{lambda, IntoResponse, Request, RequestExt};
use lambda_runtime::{Context, error::HandlerError};

fn main() {
    lambda!(hello)
}

fn hello(
    request: Request,
    _ctx: Context,
) -> Result<impl IntoResponse, HandlerError> {
    Ok(format!(
        "hello {}",
        request
            .query_string_parameters()
            .get("name")
            .unwrap_or_else(|| "stranger")
    ))
}
Following issue #87, I chose to fetch the lambda_http crate from its git address.
The dependencies in Cargo.toml are:
lambda_runtime = "0.2"
lambda_http = {git = "https://github.com/awslabs/aws-lambda-rust-runtime"}
error message:
type mismatch in function arguments
expected signature of `fn(http::request::Request<lambda_http::body::Body>, lambda_runtime_core::context::Context) -> _`
note: required because of the requirements on the impl of `lambda_http::Handler<_>` for `fn(http::request::Request<lambda_http::body::Body>, lambda_runtime_core::context::Context) -> std::result::Result<impl lambda_http::response::IntoResponse, lambda_runtime_errors::HandlerError> {hello}`
note: required by `lambda_http::start`rustc(E0631)
<::lambda_http::lambda macros>(1, 27): expected signature of `fn(http::request::Request<lambda_http::body::Body>, lambda_runtime_core::context::Context) -> _`
main.rs(8, 1): found signature of `fn(http::request::Request<lambda_http::body::Body>, lambda_runtime_core::context::Context) -> _`
I do not get this error if I change the lambda_http version to 0.1.0. But in a more practical project serde::Deserialize (#87) will be needed, so I cannot use 0.1.0.
Is there any workaround? Or is there an example of returning the HTTP response via the lambda_runtime crate only? I would prefer that, because the lambda macro in the runtime crate allows a custom error type implementing Fail + LambdaErrorExt + Display (though still not convenient, i.e. #94), while lambda in the http crate allows HandlerError only. Thanks in advance.
Hello! I am one of the maintainers of https://github.com/srijs/rust-aws-lambda (Rust interface to the Go runtime). As part of that project, we have lambda event definitions that don't depend on the runtime:
https://github.com/srijs/rust-aws-lambda/tree/master/aws_lambda_events
It would be great to have those event types integrated in this crate, preferably by re-exporting. I'm also fine if you want to take over ownership and make it officially supported, as long as it doesn't take too long for changes to be merged.
Note that right now the generated structs take String and thus may require more allocations. I plan to also have variants that take Cow<'a, str>.
I am using the latest master (ebc5474) and structure my project according to this example: https://github.com/awslabs/aws-lambda-rust-runtime/blob/master/lambda/examples/hello-without-macro.rs
What I ran into is that with e.g. a DynamoDB call like this:
let result = dynamodb_client.update_item(update_item_input).await?;
which returns a Result<UpdateItemOutput, RusotoError<UpdateItemError>> (and RusotoError implements std::error::Error), I wouldn't see the error in the CloudWatch logs. The lambda would still fail, but I wouldn't be able to see what exactly failed.
Is there a way I could have a general match on async fn func(event: Value) -> Result<Value, Error> at the end of the execution of the lambda, and log the error via println?
Coming from Typescript, I would put a try catch around my whole function and log the error. Basically I am looking for something similar here.
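One way to get the try/catch-style behavior is to wrap the fallible body and log the error before propagating it; Lambda forwards stdout/stderr to CloudWatch Logs, so the printed line becomes visible there. A minimal sketch, where do_work stands in for the real DynamoDB call and the names are illustrative:

```rust
use std::error::Error;

// Stand-in for the fallible body (e.g. the update_item call).
fn do_work(fail: bool) -> Result<String, Box<dyn Error>> {
    if fail {
        Err("update_item failed".into())
    } else {
        Ok("done".into())
    }
}

// Wrapper pattern: log the error, then re-propagate it unchanged,
// so the lambda still fails but the cause lands in the logs.
fn handler(fail: bool) -> Result<String, Box<dyn Error>> {
    do_work(fail).map_err(|e| {
        eprintln!("handler error: {:?}", e);
        e
    })
}

fn main() {
    assert!(handler(false).is_ok());
    assert!(handler(true).is_err());
}
```

The same map_err wrapping works inside an async fn; the point is that logging happens once, at the outermost boundary, rather than at every call site.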
docs.rs makes it easy to host your rustdocs for free (https://docs.rs/lambda_runtime/0.1.0/lambda_runtime/), and it also offers a quality-of-life feature with its badge URLs. Since there's one central readme for this project (with a list of modules and descriptions at the top), it may make sense to add a badge for each crate there. I wanted to open an issue for discussion first.
Hi!
It's great to see official lambda support for Rust!
Over at https://github.com/srijs/rust-aws-lambda, we have been building a lambda runtime and various auxiliary crates. We will start to redirect users of the runtime parts over to the official runtime, but similar to #12 I'm keen to hear whether there is interest on your side to move some of the other auxiliary crates.
In particular, there are bindings that translate between the API Gateway proxy events and the types provided by the http crate. I'm happy to continue to provide these myself, but I'd like to avoid any duplication of effort, so if you'd like to incorporate these please let me know!
Hi,
Is there a canonical way to reuse valuable objects between calls to the handler, for example a database client? I have considered:
- lazy_static or a call to Once, but it feels like a hack
- lambda!, but it's not implemented AFAIK
Thank you for this project, it's awesome! :)
Are there any disadvantages to making lambda_http::LambdaRequest public? For testing http handlers, I would like to deserialize JSON to an http::Request. I am essentially after something akin to https://github.com/awslabs/aws-lambda-rust-runtime/blob/master/lambda-http/src/request.rs#L361.
Our lambda fails on certain SNS messages. I found that every message involved has some special characters like 😊, 👌🏻, ☺ (codepoint 9786), ’ (codepoint 8217) and others. In escaped form: 'here \ud83d\ude0a my fianc\u00e9 '.
The error message looks like this:
thread 'main' panicked at 'byte index 32 is not a char boundary; it is inside '☺' (bytes 31..34) of `And perfect, thank you so much!☺️`', src/libcore/str/mod.rs:2102:5
RequestId: 855396e8-00b6-4029-9843-80e67960813e Error: Runtime exited with error: exit status 101
Runtime.ExitError
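The panic comes from slicing a &str at a byte index that falls inside a multi-byte character. A safe truncation has to back off to the nearest char boundary first; a minimal sketch (the helper name is hypothetical, not part of the runtime):

```rust
// Sketch: truncate a &str to at most `max_bytes` without panicking on
// multi-byte characters, by backing off to the nearest char boundary.
fn truncate_safe(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    let mut end = max_bytes;
    // is_char_boundary is O(1); a UTF-8 char is at most 4 bytes,
    // so this loop moves back at most 3 bytes.
    while !s.is_char_boundary(end) {
        end -= 1;
    }
    &s[..end]
}

fn main() {
    let msg = "And perfect, thank you so much!☺️";
    // Byte 32 falls inside '☺' (bytes 31..34): a naive `&msg[..32]` panics,
    // exactly as in the reported error.
    assert_eq!(truncate_safe(msg, 32), "And perfect, thank you so much!");
    assert_eq!(truncate_safe("abc", 10), "abc");
}
```

Wherever the runtime (or the handler) cuts log or payload strings by byte length, guarding the slice like this avoids the Runtime.ExitError above.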
At the moment the AWS Lambda rust runtime is composed of two crates: The Runtime Client and the Runtime. The runtime client implements a client SDK to the Lambda runtime APIs, taking care of all the communication to the runtime endpoint to receive new events, send responses and errors. The runtime implements the “main loop” of a function's lifecycle: call /next
to receive the next event, invoke the handler, and then post responses or errors to the relevant endpoints. The runtime client SDK only deals with byte slices and does not understand types; the runtime crate, on the other hand, expects the event and response types to be serializable/deserializable with Serde.
We have received requests to handle multiple event types in the same Lambda function (#29), as well as abstracted implementations that turn the event into a well-known Rust type, such as the http crate (#18). To make it easier to extend the runtime and support future use-cases, we propose to split the Rust Lambda runtime into three separate crates. The core crate drives the main loop, calling the /next endpoint and posting responses, and expects a handler that only deals with byte slices. The runtime-core crate will declare this handler type:
pub trait Handler<Error>
where
    Error: Display + ErrorExt + Send + Sync,
{
    fn run(&mut self, event: Vec<u8>, ctx: Context) -> Result<Vec<u8>, Error>;
}
The typed runtime will declare the same handler type as now and wrap the run method to perform the serialization/deserialization before passing data to the core crate.
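To make the layering concrete, here is a compilable approximation of the proposal. The ErrorExt and Send + Sync bounds are dropped and Context is a stub, so this is a sketch of the shape, not the crate's real definitions; the blanket impl shows how a plain byte-slice closure satisfies the core trait so that the typed runtime can wrap serde (de)serialization around it:

```rust
use std::fmt::Display;

// Stub standing in for the runtime's Context type.
struct Context;

// Approximation of the proposed core trait (ErrorExt/Send/Sync bounds
// omitted to keep the sketch self-contained).
trait Handler<E: Display> {
    fn run(&mut self, event: Vec<u8>, ctx: Context) -> Result<Vec<u8>, E>;
}

// Any FnMut over byte slices implements the trait, so the typed runtime
// can hand the core crate a closure that serializes/deserializes with serde.
impl<F, E: Display> Handler<E> for F
where
    F: FnMut(Vec<u8>, Context) -> Result<Vec<u8>, E>,
{
    fn run(&mut self, event: Vec<u8>, ctx: Context) -> Result<Vec<u8>, E> {
        (self)(event, ctx)
    }
}

fn main() {
    // A trivial byte-slice handler: echo the event back.
    let mut echo =
        |event: Vec<u8>, _ctx: Context| -> Result<Vec<u8>, String> { Ok(event) };
    let out = echo.run(b"hello".to_vec(), Context).unwrap();
    assert_eq!(out, b"hello".to_vec());
}
```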
I'm building the lambda on Ubuntu with the basic example you've provided in the README.
It builds without any errors, but if I upload and test it on AWS it crashes with:
{
  "errorType": "Runtime.ExitError",
  "errorMessage": "RequestId: c24d34ab-f4a9-11e8-a9b7-d5cbfb363674 Error: Runtime exited with error: exit status 1"
}
The log output is:
START RequestId: c24d34ab-f4a9-11e8-a9b7-d5cbfb363674 Version: $LATEST
/var/task/bootstrap: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by /var/task/bootstrap)
END RequestId: c24d34ab-f4a9-11e8-a9b7-d5cbfb363674
REPORT RequestId: c24d34ab-f4a9-11e8-a9b7-d5cbfb363674 Duration: 61.35 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 12 MB
RequestId: c24d34ab-f4a9-11e8-a9b7-d5cbfb363674 Error: Runtime exited with error: exit status 1
Runtime.ExitError
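The GLIBC_2.18 error means the binary was linked against a newer glibc than the Amazon Linux image running Lambda provides. A common workaround is to link statically against musl instead; a sketch, assuming rustup is installed and the binary is called basic (substitute your own binary name):

```shell
# Build a static binary that does not depend on the host glibc version.
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl

# Lambda's provided runtime expects the executable to be named `bootstrap`.
cp target/x86_64-unknown-linux-musl/release/basic bootstrap
zip lambda.zip bootstrap
```

Building inside an Amazon Linux container (so the glibc versions match) is an alternative if the crate doesn't build against musl.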
When using a lambda with an HTTP API Gateway, it fails to parse the incoming event. I think it's because HTTP APIs don't pass a resource id, so deserialization fails. Maybe something like MarcoPolo@b0091cb would fix it?
You can repro by using something like the following in a serverless.yml config:
...
functions:
  fooFn:
    handler: fooHandler
    events:
      - httpApi:
          path: /
          method: GET