aodn / aws-wps
Components for an OGC Web Processing Service providing NetCDF aggregation and subsetting services on AWS
License: GNU General Public License v3.0
Post a valid execute request without specifying the callback parameter
<ExceptionReport version="1.0.0" xmlns="http://www.opengis.net/ows/1.1">
<Exception exceptionCode="ExecutionError">
<ExceptionText>Unable to substitute value. No parameter found for reference callbackParams (Service: AWSBatch; Status Code: 400; Error Code: ClientException; Request ID: 2811662a-b51c-11e7-8f06-4d2ead61f7de)</ExceptionText>
</Exception>
</ExceptionReport>
The aggregation is performed without sending emails (as required for MARVL)
Request a timeseries
Timeseries is in descending order
Timeseries is in ascending order. This matches the current production ordering; while descending order is also valid, ascending is preferred by @ggalibert
During the last iteration I prototyped some additions to the status page, aimed at providing support information.
The initial prototype adds two new 'format' values that can be requested on the status page to view this additional information:
format=queue : Displays the queue information
format=admin&jobId=<JOB_ID> : shows the job status and the request information for the job
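A sketch of how the two formats could be requested (the endpoint base URL is a placeholder; the jobId value is taken from an example elsewhere in these notes; the jobStatus path matches the status links used in other issues):

```shell
# Placeholder endpoint and job id; the two format values are the new ones above.
WPS_BASE="https://wps.example.org/wps"
JOB_ID="02944aad-dfb5-4b97-8618-958063701559"
queue_url="${WPS_BASE}/jobStatus?format=queue"
admin_url="${WPS_BASE}/jobStatus?format=admin&jobId=${JOB_ID}"
# Fetch with e.g.: curl -s "$queue_url"
echo "$queue_url"
echo "$admin_url"
```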
Submit an execute using curl
Get a status response back that points to the s3 file containing the status document
Get a status response back that points to the xml getstatus service for the job.
Our first production version of AWS-WPS will currently only support gs:GoGoDuck aggregations - not NetcdfOutput aggregations.
It is expected that future releases will support all types of aggregations, and that the aggregation codebase currently implemented in Geoserver WPS will be decommissioned.
In the interim, until AWS-WPS implements all aggregation types, we will need to preserve the current implementations of the aggregation types not supported in AWS-WPS: NetcdfOutput (is that the only one?).
A client (the portal for instance) will need to direct gs:GoGoDuck aggregation execute requests to AWS-WPS & all other aggregation execute requests to Geoserver WPS.
Can we easily cater for this behaviour in a way that is relatively transparent to a client? Will updating URLs in the metadata for the collections that invoke gs:GoGoDuck be sufficient to point those collections to the new AWS-WPS aggregator?
The following policies need to be modified so that the corresponding role only has access to modify the appropriate stack resource. For example, a Lambda PublishVersion permission should only apply to the stack function (by reference), to avoid any risk of code bugs modifying non-stack resources.
I think these are all of the references to '*':
https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L542
https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L647
https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L749
https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L783
https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L882
https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L1055
We should be doing at least some validation of the parameters being passed to the Execute operation in order to sanity-check incoming requests prior to submitting the job to AWS Batch for processing.
These checks could include ensuring that the incoming WPS Execute request does not include outputs or features that we are not supporting.
Validation should provide some meaningful error messages when common parameter/request problems are detected.
Related PR
<ns2:Execute service="WPS" version="1.0.0" xmlns:ns2="http://www.opengis.net/wps/1.0.0" xmlns:ns1="http://www.opengis.net/ows/1.1" xmlns:ns3="http://www.w3.org/1999/xlink">
<ns1:Identifier>gs:GoGoDuck</ns1:Identifier>
<ns2:DataInputs>
<ns2:Input>
<ns1:Identifier>layer</ns1:Identifier>
<ns2:Data>
<ns2:LiteralData>imos:acorn_hourly_avg_rot_qc_timeseries_url</ns2:LiteralData>
</ns2:Data>
</ns2:Input>
<ns2:Input>
<ns1:Identifier>subset</ns1:Identifier>
<ns2:Data>
<ns2:LiteralData>TIME,2017-01-01T00:00:00.000Z,2017-01-07T23:04:00.000Z;LATITUDE,-33.18,-31.45;LONGITUDE,114.82,115.39</ns2:LiteralData>
</ns2:Data>
</ns2:Input>
<ns2:Input>
<ns1:Identifier>callbackParams</ns1:Identifier>
<ns2:Data>
<ns2:LiteralData>[email protected]</ns2:LiteralData>
</ns2:Data>
</ns2:Input>
</ns2:DataInputs>
<ns2:ResponseForm>
<ns2:ResponseDocument storeExecuteResponse="true" status="true">
<ns2:Output asReference="true" mimeType="text/xml">
<ns1:Identifier>provenance</ns1:Identifier>
</ns2:Output>
</ns2:ResponseDocument>
</ns2:ResponseForm>
</ns2:Execute>
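A sketch of submitting the Execute document above with curl (the endpoint URL is a placeholder, and the document is assumed to be saved as request.xml):

```shell
# Placeholder endpoint; the Execute document above is assumed saved as request.xml.
WPS_ENDPOINT="https://wps.example.org/wps"
cmd="curl -s -X POST -H 'Content-Type: application/xml' --data-binary @request.xml ${WPS_ENDPOINT}"
# Run with: eval "$cmd"  (the ExecuteResponse returned contains the statusLocation)
echo "$cmd"
```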
<gex:westBoundLongitude>
<gco:Decimal>115.39</gco:Decimal>
</gex:westBoundLongitude>
<gex:eastBoundLongitude>
<gco:Decimal>114.82</gco:Decimal>
</gex:eastBoundLongitude>
westBoundLongitude contains the west bounding longitude and eastBoundLongitude contains the east bounding longitude (the values above are swapped)
Builds fine on my laptop, but fails on Jenkins for some reason:
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[3,27] error: cannot find symbol
[ERROR] symbol: class AWSBatchUtil
location: package au.org.aodn.aws.util
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[4,27] error: cannot find symbol
[ERROR] symbol: class JobFileUtil
location: package au.org.aodn.aws.util
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[6,33] error: cannot find symbol
[ERROR] symbol: class JobStatusFormatEnum
location: package au.org.aodn.aws.wps.status
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[10,33] error: cannot find symbol
[ERROR] symbol: class QueuePosition
location: package au.org.aodn.aws.wps.status
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[20,30] error: package net.opengis.wps.v_1_0_0 does not exist
[ERROR] /home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[21,30] error: package net.opengis.wps.v_1_0_0 does not exist
[ERROR] /home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[40,25] error: cannot find symbol
[ERROR] symbol: class JobStatusFormatEnum
location: class JobStatusServiceRequestHandler
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[239,32] error: cannot find symbol
[ERROR] symbol: class ExecuteResponse
location: class JobStatusServiceRequestHandler
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[308,33] error: cannot find symbol
[ERROR] symbol: class StatusType
...
Request handler custom endpoint not working but the API Gateway endpoint works fine. Enable logging for API Gateway to see the error.
Try deploying sandbox but leave environment as dev (or other breaking change)
Stack update fails (unknown VPC in the example above) and ends up in a ROLLBACK_COMPLETE status, leaving the stack in an inconsistent state (broken - can't be used)
I get errors and roll back to a consistent state, or we can't get into this situation in the first place (e.g. have a sandbox deploy job with correct params).
Deploy stack with
requestHandlerFilename=request-handler.zip
requestHandlerCodeURL=https://s3-ap-southeast-2.amazonaws.com/wps-lambda/request-handler-testing-3-lambda-package.zip
jobStatusFilename=job-status.zip
jobStatusCodeURL=https://s3-ap-southeast-2.amazonaws.com/wps-lambda/job-status-service-testing-3-lambda-package.zip
Try to update stack using
requestHandlerFilename=request-handler-4.zip
requestHandlerCodeURL=https://s3-ap-southeast-2.amazonaws.com/wps-lambda/request-handler-testing-4-lambda-package.zip
jobStatusFilename=job-status-4.zip
jobStatusCodeURL=https://s3-ap-southeast-2.amazonaws.com/wps-lambda/job-status-service-testing-4-lambda-package.zip
Get cloud formation errors:
13:30:04 UTC+1100 UPDATE_FAILED AWS::Lambda::Function JobStatusLambdaFunction Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
13:30:03 UTC+1100 UPDATE_FAILED AWS::Lambda::Function RequestHandlerLambdaFunction
Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
Code is updated and update completes
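One plausible explanation for the NoSuchKey failures is that the objects for the new keys don't exist in the bucket at the time CloudFormation updates the Lambda resources. A sketch of an update order that avoids this (the bucket name is taken from the CodeURLs above; the stack name is an assumption; the AWS CLI commands are shown commented):

```shell
# Assumed names; bucket from the CodeURL parameters above, stack name hypothetical.
BUCKET="wps-lambda"
STACK="aws-wps-sandbox"
# 1. Upload the packages under the new keys first:
#    aws s3 cp request-handler-4.zip "s3://${BUCKET}/request-handler-4.zip"
#    aws s3 cp job-status-4.zip "s3://${BUCKET}/job-status-4.zip"
# 2. Then update the stack with the new parameter values:
#    aws cloudformation update-stack --stack-name "$STACK" --use-previous-template \
#      --parameters ParameterKey=requestHandlerFilename,ParameterValue=request-handler-4.zip \
#                   ParameterKey=jobStatusFilename,ParameterValue=job-status-4.zip
echo "upload to s3://${BUCKET} before updating ${STACK}"
```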
Submit valid execute request requesting provenance output
Get provenance document with invalid settings location:
<prov:entity prov:id="outputAggregationSettings">
<prov:location>https://github.com/aodn/geoserver-config/tree/production/https://s3-ap-southeast-2.amazonaws.com/aws-wps-dev-testing/config/templates.xml</prov:location>
<prov:type codeList="codeListLocation#type" codeListValue="outputConfiguration">outputConfiguration</prov:type>
</prov:entity>
Missing source data location
<prov:entity prov:id="sourceData">
<prov:location></prov:location>
<prov:type codeList="codeListLocation#type" codeListValue="inputData">inputData</prov:type>
</prov:entity>
Java documentation link which points to GoGoDuck doco which is now out-of-date
Get correct information included.
Invoke DescribeProcess for the first time after an aws-wps stack is deployed, or after 20 minutes without calling a DescribeProcess, GetCapabilities or Execute request
Takes 12 secs to run
Runs in web time - i.e. less than a few seconds
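The cold-start delay can be measured from the command line along these lines (the endpoint is a placeholder; the identifier is the gs:GoGoDuck process described above):

```shell
# Placeholder endpoint; construct a DescribeProcess request URL.
WPS="https://wps.example.org/wps"
URL="${WPS}?service=WPS&version=1.0.0&request=DescribeProcess&identifier=gs:GoGoDuck"
# First call after deploy (cold):  time curl -s -o /dev/null "$URL"
# A second call within 20 minutes (warm) should return in web time.
echo "$URL"
```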
Parameters (values you pass in when you bring up a new stack):
Created Resources:
Job definition + IAM role
Job Queue
Compute Environment + IAM role
Lambda Handler + IAM role
S3 Bucket for output (if no pre-existing bucket given)
API Gateway
Our sandbox geonetwork is moving records to bad health because WFS checks are being run against wps-sandbox (in addition to being run against geoserver-sandbox). We should fix the online resource monitor to only run WFS checks against the correct endpoints.
Submit
or a smaller subset
Files are aggregated in random order
Adding /tmp/ee9da36d-626d-424e-8748-3ab7fcb3a05720161126032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc to output file
Adding /tmp/fc578f8c-ab70-49ee-b4a0-cf0188b2610c20170308032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc to output file
Adding /tmp/ec7dc58b-33ed-4700-9c2b-cf39f0afb42e20170630032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc to output file
Adding /tmp/b0b24011-e366-461e-8f81-ca26e8ff789920161129032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc to output file
...
Files are added in time order so that the resulting time series is valid.
Should be able to build this using jenkins maven repo. Need to add appropriate repository details.
Submit a subset request selecting no data (I specified an end time before the start time accidentally)
<ns3:ExecuteResponse statusLocation="https://s3-ap-southeast-2.amazonaws.com/aws-wps-dev-testing/jobs/a0c7d186-d89b-4154-bddd-f721c323b6c4/status.xml" service="WPS" version="1.0.0" xml:lang="en-US" xmlns:ns2="http://www.w3.org/1999/xlink" xmlns:ns1="http://www.opengis.net/ows/1.1" xmlns:ns3="http://www.opengis.net/wps/1.0.0">
<ns3:Status creationTime="2017-11-01T22:20:45.621Z">
<ns3:ProcessFailed>
<ns1:ExceptionReport version="1.0.0">
<ns1:Exception exceptionCode="AggregationError">
<ns1:ExceptionText>Exception occurred during aggregation :Could not convert output to csv: 'java.io.EOFException: Reading /tmp/agg2577211535447471107.nc at 0 file length = 0'</ns1:ExceptionText>
</ns1:Exception>
</ns1:ExceptionReport>
</ns3:ProcessFailed>
</ns3:Status>
</ns3:ExecuteResponse>
Get an error message saying end time cannot be before start time - or your subset selected no data or something along those lines.
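This is the sort of pre-submission check suggested in the parameter-validation issue above. A minimal sketch, assuming the subset format used in the example requests (TIME,start,end;LATITUDE,...;LONGITUDE,...) and ISO 8601 timestamps, which sort lexically:

```shell
# Reject a TIME subset whose end precedes its start.
validate_time_subset() {
  subset="$1"
  start=$(printf '%s' "$subset" | tr ';' '\n' | awk -F, '$1=="TIME"{print $2}')
  end=$(printf '%s' "$subset" | tr ';' '\n' | awk -F, '$1=="TIME"{print $3}')
  if [ -n "$start" ] && [ -n "$end" ] && \
     [ "$(printf '%s\n%s\n' "$start" "$end" | sort | head -n 1)" != "$start" ]; then
    echo "InvalidParameterValue: TIME end ($end) precedes start ($start)"
    return 1
  fi
  echo "ok"
}
# End before start - should be rejected before the job reaches AWS Batch:
validate_time_subset "TIME,2017-01-07T23:04:00.000Z,2017-01-01T00:00:00.000Z;LATITUDE,-33.18,-31.45" || true
# Valid ordering passes:
validate_time_subset "TIME,2017-01-01T00:00:00.000Z,2017-01-07T23:04:00.000Z;LATITUDE,-33.18,-31.45"
```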
Modifying aws-wps currently often requires changing the parameters passed to the CloudFormation template. As such, deploying a particular version of aws-wps requires using the CloudFormation template for that version and passing the parameters that template requires.
It's not possible to do this in the current aws-wps deploy process.
I have done an assessment of the logging we are currently doing in the AWS WPS codebase - in order to identify whether or not we are capturing all of the information we need to support our current support and reporting requirements (assessed by looking at the stats we currently gather & the support information we currently use).
The findings are:
We are gathering similar (and sufficient) logging information to what is currently gathered in geoserver. It should be sufficient to harvest out similar metrics and support information using Sumo. Some logging could be organised better so that key events log all key information in single log lines - to make our Sumo queries easier when we update our dashboards. I will implement these changes as part of the current iteration in preparation.
The only piece of information that seems to be missing from the AWS WPS logging which may be handy is the IP address of the caller. Most of our reports exclude requests from our AODN subnet from the counts - assuming that transactions we initiate are for support purposes. It would be possible for the portal to forward that information to the AWS Lambda request handler by adding it to the HTTP call it makes as an HTTP header (X-Forwarded-For for instance). This would be passed through & could be read & logged by the request handler code. This might not be required - as I think a lot of the Sumo queries use the portal logging to identify IP addresses - which should still work.
In the period when we go live with the new gs:GoGoDuck implementation in AWS & the old NetcdfOutput implementation is still active in geoserver : gathering the stats using Sumo may be a bit more complex. Working out total number of WPS requests, number of WPS jobs succeeded & number of WPS requests that have failed will require us to count the geoserver transactions and the AWS transactions in a single Sumo search. This may be possible (or not) - but may be too complex to make it practical. We may need to agree on the best way to present figures for folk to assemble reporting in this interim period. Ultimately all the transactions will be run in AWS in future - so this issue will largely become moot in time.
Submit the following request (a year's worth of SRS data)
Job is killed due to memory usage
The job completes successfully
As part of our switch-over procedure for promoting the new AWS gs:GoGoDuck implementation into production - we will need to update the metadata records for all relevant records to point to the new AWS WPS endpoint.
WPS endpoints for NetcdfOutput will remain as they are - ie: pointing to the Geoserver WPS endpoint.
It has been suggested (by @jonescc ) that updating these metadata records could possibly be scripted in some fashion.
Submit request.txt
Gets stuck aggregating the file
Performs aggregation (Only one file is included!)
Currently AWS-WPS in the dev account only sends emails to registered recipients. If sandbox is to be run in the dev account, then email addresses for all testers will have to be added, or the whitelist removed
Submit a valid execute request and wait for it to complete. Attempt to download status.xml, request.xml, output files
Files are downloaded as content-type application/octet-stream confusing applications that use them (restassured in my instance)
The correct content-type is returned for the downloaded files.
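One way to address this is to set an explicit Content-Type when the job artifacts are uploaded, rather than letting S3 default to application/octet-stream. A sketch only; the extension mapping and the aws CLI invocation are illustrative, not the project's actual upload code:

```shell
# Map an artifact filename to a Content-Type (illustrative mapping).
content_type_for() {
  case "$1" in
    *.xml) echo "application/xml" ;;
    *.nc)  echo "application/x-netcdf" ;;
    *.csv) echo "text/csv" ;;
    *)     echo "application/octet-stream" ;;
  esac
}
# e.g. aws s3 cp status.xml "s3://$BUCKET/jobs/$JOB_ID/status.xml" \
#        --content-type "$(content_type_for status.xml)"
content_type_for status.xml
```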
getCapabilitiesURL, jobStatusHtmlTransformURL, jobQueueHtmlTemplateURL, gogduckURL, netcdfOutputURL, templatesURL, provenanceTemplateGriddedURL, provenanceTemplateNonGriddedURL
parameters have been removed from the cloud formation template and do not need to be supplied anymore.
To be removed after all prior versions of the cloud formation template using it have been deployed.
Try to deploy sandbox with sumologic endpoint specified
Update fails with
},
{
"StackId": "arn:aws:cloudformation:ap-southeast-2:104044260116:stack/aws-wps-sandbox/12594420-d62a-11e7-a9bd-5081eaa90811",
"EventId": "BatchJobLogsSubscriptionFilter-CREATE_FAILED-2017-11-30T23:57:50.167Z",
"ResourceStatus": "CREATE_FAILED",
"ResourceType": "AWS::Logs::SubscriptionFilter",
"Timestamp": "2017-11-30T23:57:50.167Z",
"ResourceStatusReason": "Resource limit exceeded.",
"StackName": "aws-wps-sandbox",
"ResourceProperties": "{\"FilterPattern\":\"\",\"LogGroupName\":\"/aws/batch/job\",\"DestinationArn\":\"arn:aws:lambda:ap-southeast-2:104044260116:function:SumoLogic-WPS-aws-wps-sandbox\"}\n",
"PhysicalResourceId": "",
"LogicalResourceId": "BatchJobLogsSubscriptionFilter"
},
Full log failed-2.log
The update is successful
I was initially having trouble working out which folders in the repo corresponded to which AWS services. The services are located in the US East N. Virginia region (us-east-1).
javaduck-image
contains a Dockerfile which is used with the Elastic Container Service (ECS). The created instances will contain the "aggregation-worker" jar so aggregations can be performed from them.
aggregation-worker
contains the java code from which the "aggregation-worker" jar has been compiled. This is the same code as in the geoserver javaduck extension (probably with some small changes) and performs the grunt work of the aggregation.
request-handler
contains the lambda function code for adding a batch job. The batch details are currently hard coded in:
SubmitJobRequest submitJobRequest = new SubmitJobRequest();
submitJobRequest.setJobQueue("javaduck-small-in"); //TODO: config/jobqueue selection
submitJobRequest.setJobName("javaduck");
submitJobRequest.setJobDefinition(processIdentifier); //TODO: either map to correct job def or set vcpus/memory required appropriately
submitJobRequest.setParameters(parameterMap);
AWSBatchClientBuilder builder = AWSBatchClientBuilder.standard();
builder.setRegion("us-east-1"); // TODO: get from config
requests
contains a demo request to an API gateway, which simply proxies the request to the lambda function. There doesn't seem to be any code for the definition of the gateway yet; it's been set up manually.
Run up a stack specifying hostedZoneName/wpsDomainName (I used dev.aodn.org.au/aps-testing)
Submit a valid Execute request
I get an email with a link to the status page which doesn't work - https://wps-testing.dev.aodn.org.au/wps/jobStatus?jobId=02944aad-dfb5-4b97-8618-958063701559&format=HTML
The actual required link is https://wps-testing.dev.aodn.org.au/LATEST/wps/jobStatus?jobId=02944aad-dfb5-4b97-8618-958063701559&format=HTML
The link should work - i.e. https://wps-testing.dev.aodn.org.au/wps/jobStatus?jobId=02944aad-dfb5-4b97-8618-958063701559&format=HTML (don't see why we would need to specify LATEST in our URLs).
We currently configure a status bucket and an output bucket separately. Requests and status documents use the status bucket; outputs use the output bucket. Not really sure why we would want to have two locations for storing outputs. Perhaps we should consolidate?
One wrinkle is that we are using the status bucket for css/image config as well.
public static String getBootstrapCssS3ExternalURL() {
return getConfigFileS3ExternalURL(BOOTSTRAP_CSS_FILENAME_CONFIG_KEY);
}
public static String getAodnCssS3ExternalURL() {
return getConfigFileS3ExternalURL(AODN_CSS_FILENAME_CONFIG_KEY);
}
public static String getAodnLogoS3ExternalURL() {
return getConfigFileS3ExternalURL(AODN_LOGO_FILENAME_CONFIG_KEY);
}
private static String getConfigFileS3ExternalURL(String filename) {
return getS3ExternalURL(getConfig(STATUS_S3_BUCKET_CONFIG_KEY), getConfig(AWS_BATCH_CONFIG_S3_KEY) + getConfig(filename));
}
That to me could be configured separately!
Perform a DescribeProcess, GetCapabilities, or Execute request
Get ContentType of 'application/json' returned
Get ContentType of 'application/xml' returned
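A quick way to confirm the header actually being returned (endpoint is a placeholder):

```shell
# Placeholder endpoint; inspect the Content-Type header of a GetCapabilities response:
#   curl -s -D - -o /dev/null \
#     "https://wps.example.org/wps?service=WPS&version=1.0.0&request=GetCapabilities" \
#     | grep -i '^content-type:'
expected="application/xml"
echo "expected Content-Type: $expected"
```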
We need to deploy sandbox with appropriate settings for compute environment and job memory usage/cpu limitations so that aggregation jobs will not fail due to running out of storage or memory or cause other jobs to fail.
Submit an execute request
Get an execute response which does not validate against the schema because it is missing the process element:
I get a response which validates against the schema as per geoserver:
We can have different environment variables for the different queues, which will be used by the lambda to decide where to submit the job.
For example
AWS_BATCH_BIG_JOB_QUEUE_NAME
AWS_BATCH_SMALL_JOB_QUEUE_NAME
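An illustration of the suggested selection, using the env var names above. The queue names and the size threshold are assumptions for the sketch, and the real request handler is Java; this just shows the decision logic:

```shell
# Assumed queue names and threshold; env var names are from the suggestion above.
AWS_BATCH_BIG_JOB_QUEUE_NAME="javaduck-big"
AWS_BATCH_SMALL_JOB_QUEUE_NAME="javaduck-small"
ESTIMATED_OUTPUT_MB=2048   # would be estimated from the request's subset size
if [ "$ESTIMATED_OUTPUT_MB" -gt 1024 ]; then
  queue="$AWS_BATCH_BIG_JOB_QUEUE_NAME"
else
  queue="$AWS_BATCH_SMALL_JOB_QUEUE_NAME"
fi
echo "submit to queue: $queue"
```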
Submit an execute request for an unknown process. You get:
<ExceptionReport version="1.0.0" xmlns="http://www.opengis.net/ows/1.1">
<Exception exceptionCode="ExecutionError">
<ExceptionText>Error executing request, Exception : JobDefinition must be provided, RequestId: efd5483a-bf5f-11e7-a61f-794d20532e48 (Service: AWSBatch; Status Code: 400; Error Code: ClientException; Request ID: efd5483a-bf5f-11e7-a61f-794d20532e48)</ExceptionText>
</Exception>
</ExceptionReport>
Get a more informative error message - such as Unknown process ...
Describe gs:GoGoDuck process
Perform aggregation returning provenance output
DescribeProcess specifies that application/xml will be returned.
text/xml is actually returned
They match
Submit
Get the following error during aggregation
2017-11-02 23:30:47.091 INFO 1 --- [ main] au.org.emii.aggregator.NetcdfAggregator : Adding /tmp/1b56d936-178d-436d-a344-55c9fe6667d620170311032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc to output file
2017-11-02 23:30:47.355 INFO 1 --- [pool-1-thread-3] a.o.e.download.ParallelDownloadManager : Downloaded http://data.aodn.org.au/IMOS/SRS/SST/ghrsst/L3S-1d/day/2017/20170727032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc
au.org.emii.aggregator.exception.AggregationException: java.io.IOException: -101: NetCDF: HDF error
at au.org.emii.aggregator.NetcdfAggregator.chunkedCopy(NetcdfAggregator.java:249)
at au.org.emii.aggregator.NetcdfAggregator.appendTimeSlices(NetcdfAggregator.java:221)
at au.org.emii.aggregator.NetcdfAggregator.add(NetcdfAggregator.java:111)
at au.org.emii.aggregationworker.AggregationRunner.run(AggregationRunner.java:181)
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:732)
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:716)
at org.springframework.boot.SpringApplication.afterRefresh(SpringApplication.java:703)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:304)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1118)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1107)
at au.org.emii.aggregationworker.Application.main(Application.java:12)
Suppressed: java.io.IOException: -101: NetCDF: HDF error
at ucar.nc2.jni.netcdf.Nc4Iosp.flush(Nc4Iosp.java:3292)
at ucar.nc2.NetcdfFileWriter.flush(NetcdfFileWriter.java:1013)
at ucar.nc2.NetcdfFileWriter.close(NetcdfFileWriter.java:1024)
at au.org.emii.aggregator.NetcdfAggregator.close(NetcdfAggregator.java:120)
at au.org.emii.aggregationworker.AggregationRunner.run(AggregationRunner.java:245)
... 7 more
Caused by: java.io.IOException: -101: NetCDF: HDF error
at ucar.nc2.jni.netcdf.Nc4Iosp.flush(Nc4Iosp.java:3292)
at ucar.nc2.NetcdfFileWriter.flush(NetcdfFileWriter.java:1013)
at au.org.emii.aggregator.NetcdfAggregator.chunkedCopy(NetcdfAggregator.java:245)
... 10 more
Aggregation finishes successfully.
Submit an Execute request and validate the response against http://www.opengis.net/wps/1.0.0/wpsAll.xsd
Get a validation error
org.xml.sax.SAXParseException; lineNumber: 2; columnNumber: 281; cvc-complex-type.4: Attribute 'serviceInstance' must appear on element 'ns3:ExecuteResponse'.
The response validates successfully
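The validation in this scenario can be reproduced locally with xmllint, assuming wpsAll.xsd (and the schemas it imports) have been downloaded alongside the saved response:

```shell
# Sketch: validate a saved ExecuteResponse against the WPS 1.0.0 schema.
# (Assumes wpsAll.xsd and its imports have been fetched locally, and the
# response was saved as response.xml.)
cmd="xmllint --noout --schema wpsAll.xsd response.xml"
# Run with: $cmd  (a missing required attribute such as serviceInstance
# will be reported as a schema violation)
echo "$cmd"
```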
Describe an unknown process
Get
ExceptionCode: ExecutionError
ExceptionText: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 4A3D78608A541292; S3 Extended Request ID: MDzOx5gKNQ8+k+PQHUII220bXSSHDJR0Zt5edfc1Eq2zzJgMiTnGb+bBc/IthA4eGbFhJ+w8HRE=)
ExceptionCode: InvalidParameterValue
Locator: identifier
ExceptionText: something like "Unknown process"