aodn / aws-wps

Components for an OGC Web Processing Service providing NetCDF aggregation and subsetting services on AWS

License: GNU General Public License v3.0

Java 91.66% Shell 0.45% FreeMarker 5.63% Dockerfile 0.28% JavaScript 1.44% Perl 0.54%

aws-wps's People

Contributors

akashisama, aodn-ci, bhasin85, ccmoloney, craigrose, digorgonzola, gsatimos, jonescc, lwgordonimos, nspool, pmbohm, sachitrajbhandari, sqbaillie, utas-raymondng, vietnguyengit


aws-wps's Issues

Execute operation incorrectly requires a callback parameter

To Reproduce

Post a valid execute request without specifying the callback parameter

What Happens

<ExceptionReport version="1.0.0" xmlns="http://www.opengis.net/ows/1.1">
  <Exception exceptionCode="ExecutionError">
    <ExceptionText>Unable to substitute value. No parameter found for reference callbackParams (Service: AWSBatch; Status Code: 400; Error Code: ClientException; Request ID: 2811662a-b51c-11e7-8f06-4d2ead61f7de)</ExceptionText>
  </Exception>
</ExceptionReport>

What I expect to happen

The aggregation is performed without sending emails (as required for MARVL)

Timeseries in descending rather than ascending order

To reproduce

Request a timeseries

What happens

Timeseries is in descending order

What I expect to happen

Timeseries is in ascending order. This matches the current production ordering; while descending order is also valid, ascending is preferred by @ggalibert

Status Page Improvements

During the last iteration I prototyped some additions to the status page aimed at providing support information. These additions were intended to:

  • Give a view of what was in the AWS Batch queue : to allow us to see how many jobs were queued, what jobs are currently running & what jobs have been completed recently (the job information is only available in AWS Batch for approximately 24 hours after a job has completed processing).
  • Provide additional information about each individual job that may be useful for support. In the first instance I have added the ability to view the request that was submitted : which shows the layer, the subset requested, the email address of the user who requested the aggregation & the outputs requested.

The initial prototype has added two new 'format' values to the status page that can be requested to view this additional information:
format=queue : Displays the queue information
format=admin&jobId=<JOB_ID> : shows the job status and the request information for the job

Routing requests to AWS-WPS & Geoserver WPS

Our first production version of AWS-WPS will currently only support gs:GoGoDuck aggregations - not NetcdfOutput aggregations.

It is expected that future releases will support all types of aggregations - and the aggregation codebase currently implemented in Geoserver WPS will be decommissioned.

In the interim, until AWS-WPS implements all aggregation types, we will need to preserve the current implementations of aggregation types not supported in AWS-WPS: NetcdfOutput (is that the only one?).

A client (the portal for instance) will need to direct gs:GoGoDuck aggregation execute requests to AWS-WPS & all other aggregation execute requests to Geoserver WPS.

Can we easily cater for this behaviour in a way that is relatively transparent to a client? Will updating URLs in the metadata for the collections that invoke gs:GoGoDuck be sufficient to point those collections to the new AWS-WPS aggregator?
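
A hedged sketch of what that client-side routing could look like, assuming the decision can be made purely on the process identifier in the Execute request (the endpoint URLs below are placeholders, not real service addresses):

    // Hypothetical client-side routing: send gs:GoGoDuck Execute requests to AWS-WPS,
    // everything else (e.g. NetcdfOutput) to the existing Geoserver WPS.
    public class WpsEndpointRouter {

        private static final String AWS_WPS_URL = "https://wps.example.org/wps";                       // placeholder
        private static final String GEOSERVER_WPS_URL = "https://geoserver.example.org/geoserver/wps"; // placeholder

        public static String endpointFor(String processIdentifier) {
            if ("gs:GoGoDuck".equals(processIdentifier)) {
                return AWS_WPS_URL;
            }
            return GEOSERVER_WPS_URL;
        }
    }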

IAM policies with overly broad Resource: '*' access

The following policies need to be modified so that the corresponding role only has access to modify the appropriate stack resource. For example, a Lambda PublishVersion permission should only apply to the stack function (by reference), to avoid any risk of code bugs modifying non-stack resources.

I think these are all of the references to '*':

https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L542
https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L647
https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L749
https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L783
https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L882
https://github.com/aodn/aws-wps/blob/master/wps-cloudformation-template.yaml#L1055

Add parameter validation to verify incoming jobs prior to execution

We should be doing at least some validation of the parameters being passed to the Execute operation in order to sanity-check incoming requests before submitting the job to AWS Batch for processing.

These checks could include ensuring that the incoming WPS execute request does not include outputs or features that we do not support.

Validation should provide some meaningful error messages when common parameter/request problems are detected.
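
A minimal sketch of what that validation might look like, assuming the Execute request has already been parsed into a process identifier and a list of requested output identifiers (the supported values below are illustrative assumptions, not the project's actual configuration):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Illustrative pre-submission checks: fail with a meaningful message instead of
    // letting AWS Batch reject the job with an opaque ClientException.
    public class ExecuteRequestValidator {

        private static final Set<String> SUPPORTED_PROCESSES = new HashSet<>(Arrays.asList("gs:GoGoDuck"));
        private static final Set<String> SUPPORTED_OUTPUTS = new HashSet<>(Arrays.asList("result", "provenance"));

        public static void validate(String processIdentifier, List<String> requestedOutputs) {
            if (!SUPPORTED_PROCESSES.contains(processIdentifier)) {
                throw new IllegalArgumentException("InvalidParameterValue: unknown process '" + processIdentifier + "'");
            }
            for (String output : requestedOutputs) {
                if (!SUPPORTED_OUTPUTS.contains(output)) {
                    throw new IllegalArgumentException("InvalidParameterValue: unsupported output '" + output + "'");
                }
            }
        }
    }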

Bounding longitudes reversed in provenance output

To reproduce

<ns2:Execute service="WPS" version="1.0.0" xmlns:ns2="http://www.opengis.net/wps/1.0.0" xmlns:ns1="http://www.opengis.net/ows/1.1" xmlns:ns3="http://www.w3.org/1999/xlink">
  <ns1:Identifier>gs:GoGoDuck</ns1:Identifier>
  <ns2:DataInputs>
    <ns2:Input>
      <ns1:Identifier>layer</ns1:Identifier>
      <ns2:Data>
        <ns2:LiteralData>imos:acorn_hourly_avg_rot_qc_timeseries_url</ns2:LiteralData>
      </ns2:Data>
    </ns2:Input>
    <ns2:Input>
      <ns1:Identifier>subset</ns1:Identifier>
      <ns2:Data>
        <ns2:LiteralData>TIME,2017-01-01T00:00:00.000Z,2017-01-07T23:04:00.000Z;LATITUDE,-33.18,-31.45;LONGITUDE,114.82,115.39</ns2:LiteralData>
      </ns2:Data>
    </ns2:Input>
    <ns2:Input>
      <ns1:Identifier>callbackParams</ns1:Identifier>
      <ns2:Data>
        <ns2:LiteralData>[email protected]</ns2:LiteralData>
      </ns2:Data>
    </ns2:Input>
  </ns2:DataInputs>
  <ns2:ResponseForm>
    <ns2:ResponseDocument storeExecuteResponse="true" status="true">
      <ns2:Output asReference="true" mimeType="text/xml">
        <ns1:Identifier>provenance</ns1:Identifier>
      </ns2:Output>
    </ns2:ResponseDocument>
  </ns2:ResponseForm>
</ns2:Execute>

What happens

            <gex:westBoundLongitude>
              <gco:Decimal>115.39</gco:Decimal>
            </gex:westBoundLongitude>
            <gex:eastBoundLongitude>
              <gco:Decimal>114.82</gco:Decimal>
            </gex:eastBoundLongitude>

What I expect to happen

westBoundLongitude contains the west bounding longitude (114.82) and eastBoundLongitude contains the east bounding longitude (115.39).
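
One possible fix, sketched here assuming these collections never cross the anti-meridian, is to normalise the subset longitudes before writing them into the provenance template. For the request above this would yield westBoundLongitude 114.82 and eastBoundLongitude 115.39.

    // Minimal sketch: make west the smaller longitude and east the larger one,
    // regardless of the order they appear in the LONGITUDE subset.
    public class BoundsNormaliser {
        public static double[] normaliseLongitudes(double lon1, double lon2) {
            return new double[] { Math.min(lon1, lon2), Math.max(lon1, lon2) };
        }
    }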

aws-wps build failing on jenkins

Builds fine on my laptop, but fails on Jenkins for some reason:

[ERROR] COMPILATION ERROR : 
[INFO] -------------------------------------------------------------
[ERROR] /home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[3,27] error: cannot find symbol
[ERROR]   symbol:   class AWSBatchUtil
  location: package au.org.aodn.aws.util
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[4,27] error: cannot find symbol
[ERROR]   symbol:   class JobFileUtil
  location: package au.org.aodn.aws.util
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[6,33] error: cannot find symbol
[ERROR]   symbol:   class JobStatusFormatEnum
  location: package au.org.aodn.aws.wps.status
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[10,33] error: cannot find symbol
[ERROR]   symbol:   class QueuePosition
  location: package au.org.aodn.aws.wps.status
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[20,30] error: package net.opengis.wps.v_1_0_0 does not exist
[ERROR] /home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[21,30] error: package net.opengis.wps.v_1_0_0 does not exist
[ERROR] /home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[40,25] error: cannot find symbol
[ERROR]   symbol:   class JobStatusFormatEnum
  location: class JobStatusServiceRequestHandler
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[239,32] error: cannot find symbol
[ERROR]   symbol:   class ExecuteResponse
  location: class JobStatusServiceRequestHandler
/home/jenkins/jobs/aws_wps_build/workspace/job-status-service/src/main/java/au/org/aodn/aws/wps/lambda/JobStatusServiceRequestHandler.java:[308,33] error: cannot find symbol
[ERROR]   symbol:   class StatusType
...

Stack update failure breaks stack

To reproduce

Try deploying sandbox but leave environment as dev (or other breaking change)

What happens

Stack update fails (unknown VPC in the example above), ends up in ROLLBACK_COMPLETE status, and the stack is left in an inconsistent state (broken - can't be used)

What I expect to happen

I get errors and the stack rolls back to a consistent state, or we can't get into this situation in the first place (e.g. have a sandbox deploy job with correct params).

Cannot update lambda code using cloud formation

To reproduce

Deploy stack with

requestHandlerFilename=request-handler.zip
requestHandlerCodeURL=https://s3-ap-southeast-2.amazonaws.com/wps-lambda/request-handler-testing-3-lambda-package.zip
jobStatusFilename=job-status.zip
jobStatusCodeURL=https://s3-ap-southeast-2.amazonaws.com/wps-lambda/job-status-service-testing-3-lambda-package.zip

Try to update stack using

requestHandlerFilename=request-handler-4.zip
requestHandlerCodeURL=https://s3-ap-southeast-2.amazonaws.com/wps-lambda/request-handler-testing-4-lambda-package.zip
jobStatusFilename=job-status-4.zip
jobStatusCodeURL=https://s3-ap-southeast-2.amazonaws.com/wps-lambda/job-status-service-testing-4-lambda-package.zip

What happens

Get cloud formation errors:

13:30:04 UTC+1100 UPDATE_FAILED AWS::Lambda::Function JobStatusLambdaFunction Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
13:30:03 UTC+1100 UPDATE_FAILED AWS::Lambda::Function RequestHandlerLambdaFunction
Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.

What I expect to happen

Code is updated and update completes

Provenance document issues

To reproduce

Submit a valid execute request requesting provenance output

What happens

Get a provenance document with an invalid settings location:

    <prov:entity prov:id="outputAggregationSettings">
        <prov:location>https://github.com/aodn/geoserver-config/tree/production/https://s3-ap-southeast-2.amazonaws.com/aws-wps-dev-testing/config/templates.xml</prov:location>
        <prov:type codeList="codeListLocation#type" codeListValue="outputConfiguration">outputConfiguration</prov:type>
    </prov:entity>

Missing source data location

     <prov:entity prov:id="sourceData">
        <prov:location></prov:location>
        <prov:type codeList="codeListLocation#type" codeListValue="inputData">inputData</prov:type>
    </prov:entity>

The Java documentation link points to GoGoDuck documentation which is now out of date.

What I expect to happen

Correct information is included.

DescribeProcess takes 12 secs to respond on cold start

To reproduce

Invoke DescribeProcess for the first time after an aws-wps stack is deployed, or after 20 minutes without a DescribeProcess, GetCapabilities or Execute request being made

What Happens

Takes 12 seconds to respond

What I expect to happen

Runs in web time - i.e. less than a few seconds

Cloud Formation Resource List

Parameters (values you pass in when you bring up a new stack):

  • vpc subnets
  • vpc security group
  • geoserver address
  • job vCPUs
  • job memory
  • pre-existing S3 bucket for output (optional)
  • docker image
  • lambda function S3 bucket
  • lambda function S3 key

Created Resources:

  • Job definition + IAM role
  • Job Queue
  • Compute Environment + IAM role
  • Lambda Handler + IAM role
  • S3 Bucket for output (if no pre-existing bucket given)
  • API Gateway

Online Resource Monitor WFS checks run on WPS

Our sandbox geonetwork is marking records as having bad health because WFS checks are being run against wps-sandbox (in addition to being run against geoserver-sandbox). We should fix the online resource monitor to only run WFS checks against the correct endpoints.

Files aggregated out of order

To reproduce

Submit

request.txt

or a smaller subset

What happens

Files are aggregated in random order

Adding /tmp/ee9da36d-626d-424e-8748-3ab7fcb3a05720161126032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc to output file
Adding /tmp/fc578f8c-ab70-49ee-b4a0-cf0188b2610c20170308032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc to output file
Adding /tmp/ec7dc58b-33ed-4700-9c2b-cf39f0afb42e20170630032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc to output file
Adding /tmp/b0b24011-e366-461e-8f81-ca26e8ff789920161129032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc to output file
...

What I expect to happen

Files are added in time order so that the resulting time series is valid.
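
A hedged sketch of one way to achieve this, assuming the order can be fixed by sorting the downloaded files on the timestamp embedded in the original file name (the 36-character UUID prefix on the temp file names is inferred from the log above):

    import java.util.Comparator;
    import java.util.List;

    // Illustrative only: sort the temp files by the yyyyMMddHHmmss timestamp that follows
    // the UUID prefix (e.g. ...a05720161126032000-ABOM-...) so slices are appended in time order.
    public class DownloadOrdering {
        private static final int UUID_LENGTH = 36;

        public static void sortByEmbeddedTimestamp(List<String> tempFilePaths) {
            tempFilePaths.sort(Comparator.comparing(DownloadOrdering::timestampPart));
        }

        private static String timestampPart(String path) {
            String name = path.substring(path.lastIndexOf('/') + 1);
            return name.substring(UUID_LENGTH);
        }
    }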

Poor error message for empty subset

To reproduce

Submit a subset request selecting no data (I specified an end time before the start time accidentally)

What happens

<ns3:ExecuteResponse statusLocation="https://s3-ap-southeast-2.amazonaws.com/aws-wps-dev-testing/jobs/a0c7d186-d89b-4154-bddd-f721c323b6c4/status.xml" service="WPS" version="1.0.0" xml:lang="en-US" xmlns:ns2="http://www.w3.org/1999/xlink" xmlns:ns1="http://www.opengis.net/ows/1.1" xmlns:ns3="http://www.opengis.net/wps/1.0.0">
  <ns3:Status creationTime="2017-11-01T22:20:45.621Z">
    <ns3:ProcessFailed>
      <ns1:ExceptionReport version="1.0.0">
        <ns1:Exception exceptionCode="AggregationError">
          <ns1:ExceptionText>Exception occurred during aggregation :Could not convert output to csv: 'java.io.EOFException: Reading /tmp/agg2577211535447471107.nc at 0 file length = 0'</ns1:ExceptionText>
        </ns1:Exception>
      </ns1:ExceptionReport>
    </ns3:ProcessFailed>
  </ns3:Status>
</ns3:ExecuteResponse>

What I expect to happen

Get an error message saying the end time cannot be before the start time, or that the subset selected no data, or something along those lines.
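
A minimal sketch of the kind of check that could produce that message, assuming the TIME subset has already been split into ISO-8601 start and end strings:

    import java.time.Instant;

    // Illustrative fail-fast check on the requested time range, run before the aggregation
    // is attempted, so the user gets a meaningful message instead of an EOFException.
    public class TimeRangeCheck {
        public static void check(String startIso, String endIso) {
            Instant start = Instant.parse(startIso);
            Instant end = Instant.parse(endIso);
            if (end.isBefore(start)) {
                throw new IllegalArgumentException(
                        "InvalidParameterValue: subset end time " + endIso + " is before start time " + startIso);
            }
        }
    }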

Cloud formation template deployment params are version specific

Modifying aws-wps currently often requires changes to the parameters passed to the cloud formation template. As such, deploying a particular version of aws-wps requires the cloud formation template for that version to be used and the parameters required for that template to be passed.

It's not possible to do this in the current aws-wps deploy process.

Logging for support/metrics/reporting

I have done an assessment of the logging we are currently doing in the AWS WPS codebase, in order to identify whether or not we are capturing all of the information we need for our current support and reporting requirements (assessed by looking at the stats we currently gather & the support information we currently use).

The findings are:

  • We are gathering similar (and sufficient) logging information to what is currently gathered in geoserver. It should be sufficient to harvest out similar metrics and support information using Sumo. Some logging could be organised better so that key events log all key information in single log lines - to make our Sumo queries easier when we update our dashboards. I will implement these changes as part of the current iteration in preparation.

  • The only piece of information that seems to be missing from the AWS WPS logging which may be handy is the IP address of the caller. Most of our reports exclude requests from our AODN subnet from the counts - assuming that transactions we initiate are for support purposes. It would be possible for the portal to forward that information to the AWS Lambda request handler by adding it to the HTTP call it makes as an HTTP header (X-Forwarded-For for instance). This would be passed through & could be read & logged by the request handler code (see the sketch after this list). This might not be required - as I think a lot of the Sumo queries use the portal logging to identify IP addresses - which should still work.

  • In the period when we go live with the new gs:GoGoDuck implementation in AWS & the old NetcdfOutput implementation is still active in geoserver : gathering the stats using Sumo may be a bit more complex. Working out total number of WPS requests, number of WPS jobs succeeded & number of WPS requests that have failed will require us to count the geoserver transactions and the AWS transactions in a single Sumo search. This may be possible (or not) - but may be too complex to make it practical. We may need to agree on the best way to present figures for folk to assemble reporting in this interim period. Ultimately all the transactions will be run in AWS in future - so this issue will largely become moot in time.
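
A minimal sketch of the header-forwarding idea above, assuming the request handler receives an API Gateway proxy event (aws-lambda-java-events); the logger wiring is illustrative only:

    import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Read and log the caller IP if the portal forwards it as an X-Forwarded-For header.
    public class CallerIpLogging {
        private static final Logger LOGGER = LoggerFactory.getLogger(CallerIpLogging.class);

        public static void logCallerIp(APIGatewayProxyRequestEvent event) {
            if (event.getHeaders() != null) {
                String forwardedFor = event.getHeaders().get("X-Forwarded-For");
                if (forwardedFor != null) {
                    LOGGER.info("Caller IP (X-Forwarded-For): {}", forwardedFor);
                }
            }
        }
    }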

Update GoGoDuck endpoint URLs in Geonetwork for AWS-WPS switchover

As part of our switch-over procedure for promoting the new AWS gs:GoGoDuck implementation into production - we will need to update the metadata records for all relevant records to point to the new AWS WPS endpoint.

WPS endpoints for NetcdfOutput will remain as they are - ie: pointing to the Geoserver WPS endpoint.

It has been suggested (by @jonescc ) that updating these metadata records could possibly be scripted in some fashion.

Need to add tester emails to AWS SES

Currently AWS-WPS in the dev account only sends emails to registered recipients. If sandbox is to be run in the dev account, then email addresses for all testers will have to be added or the whitelist removed.

Documents not stored on S3 with correct mimetype

To reproduce

Submit a valid execute request and wait for it to complete. Attempt to download status.xml, request.xml, output files

What happens

Files are downloaded with content-type application/octet-stream, confusing applications that use them (REST Assured in my instance)

What I expect to happen

The correct content-type is returned for the downloaded files.
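
A hedged sketch of setting the content type explicitly when these documents are written to S3, using the AWS SDK v1 client (bucket, key and document values are placeholders):

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.ObjectMetadata;
    import com.amazonaws.services.s3.model.PutObjectRequest;

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;

    // Upload an XML document with an explicit Content-Type so it is not served as
    // application/octet-stream when downloaded.
    public class StatusDocumentUploader {
        public static void putXml(String bucket, String key, String xml) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            byte[] bytes = xml.getBytes(StandardCharsets.UTF_8);
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setContentType("application/xml");
            metadata.setContentLength(bytes.length);
            s3.putObject(new PutObjectRequest(bucket, key, new ByteArrayInputStream(bytes), metadata));
        }
    }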

Remove deleted inputs from aws_wps_deploy job

getCapabilitiesURL, jobStatusHtmlTransformURL, jobQueueHtmlTemplateURL, gogduckURL, netcdfOutputURL, templatesURL, provenanceTemplateGriddedURL, provenanceTemplateNonGriddedURL

parameters have been removed from the cloud formation template and do not need to be supplied anymore.

These are to be removed after all prior versions of the cloud formation template that use them have been deployed.

Limit reached error adding sumologic endpoint

To reproduce

Try to deploy sandbox with sumologic endpoint specified

What happens

Update fails with

        }, 
        {
            "StackId": "arn:aws:cloudformation:ap-southeast-2:104044260116:stack/aws-wps-sandbox/12594420-d62a-11e7-a9bd-5081eaa90811", 
            "EventId": "BatchJobLogsSubscriptionFilter-CREATE_FAILED-2017-11-30T23:57:50.167Z", 
            "ResourceStatus": "CREATE_FAILED", 
            "ResourceType": "AWS::Logs::SubscriptionFilter", 
            "Timestamp": "2017-11-30T23:57:50.167Z", 
            "ResourceStatusReason": "Resource limit exceeded.", 
            "StackName": "aws-wps-sandbox", 
            "ResourceProperties": "{\"FilterPattern\":\"\",\"LogGroupName\":\"/aws/batch/job\",\"DestinationArn\":\"arn:aws:lambda:ap-southeast-2:104044260116:function:SumoLogic-WPS-aws-wps-sandbox\"}\n", 
            "PhysicalResourceId": "", 
            "LogicalResourceId": "BatchJobLogsSubscriptionFilter"
        }, 

Full log failed-2.log

What I expect to happen

The update is successful

Location of Stuff

I was initially having trouble working out which folders in the repo corresponded with which AWS services. The services are located in the US East N. Virginia region (us-east-1).

  • javaduck-image contains a Dockerfile which is used with the Elastic Container Service (ECS). The created instances will contain the "aggregation-worker" jar so aggregations can be performed from them.

  • aggregation-worker contains the java code from which the "aggregation-worker" jar has been compiled. This is the same code as in the geoserver javaduck extension (probably with some small changes) and performs the grunt work of the aggregation.

  • request-handler contains the lambda function code for adding a batch job. The batch details are currently hard coded in the snippet below (a config-driven variation is sketched after this list):

        SubmitJobRequest submitJobRequest = new SubmitJobRequest();
        submitJobRequest.setJobQueue("javaduck-small-in");  //TODO: config/jobqueue selection
        submitJobRequest.setJobName("javaduck");
        submitJobRequest.setJobDefinition(processIdentifier);  //TODO: either map to correct job def or set vcpus/memory required appropriately
        submitJobRequest.setParameters(parameterMap);

        AWSBatchClientBuilder builder = AWSBatchClientBuilder.standard();
        builder.setRegion("us-east-1");  // TODO: get from config
  • requests contains a demo request to an API gateway, which simply proxies the request to the lambda function. There doesn't seem to be any code for the definition of the gateway yet; it's been set up manually.
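
The config-driven variation mentioned above might look something like this, reading the queue and region from environment variables (the variable names JOB_QUEUE_NAME and AWS_REGION are assumptions, not the project's actual configuration keys):

    import com.amazonaws.services.batch.AWSBatch;
    import com.amazonaws.services.batch.AWSBatchClientBuilder;
    import com.amazonaws.services.batch.model.SubmitJobRequest;

    import java.util.Map;

    // Illustrative variation of the snippet above with the hard-coded values externalised.
    public class ConfigurableJobSubmitter {
        public static String submit(String processIdentifier, Map<String, String> parameterMap) {
            SubmitJobRequest submitJobRequest = new SubmitJobRequest()
                    .withJobQueue(System.getenv("JOB_QUEUE_NAME"))   // assumed environment variable
                    .withJobName("javaduck")
                    .withJobDefinition(processIdentifier)
                    .withParameters(parameterMap);

            AWSBatch batchClient = AWSBatchClientBuilder.standard()
                    .withRegion(System.getenv("AWS_REGION"))         // assumed environment variable
                    .build();

            return batchClient.submitJob(submitJobRequest).getJobId();
        }
    }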

Email link not working when specifying hostedZoneName/wpsDomainName

To reproduce

Run up a stack specifying hostedZoneName/wpsDomainName (I used dev.aodn.org.au/aps-testing)

Submit a valid Execute request

What Happens

I get an email with a link to the status page which doesn't work - https://wps-testing.dev.aodn.org.au/wps/jobStatus?jobId=02944aad-dfb5-4b97-8618-958063701559&format=HTML

The actual required link is https://wps-testing.dev.aodn.org.au/LATEST/wps/jobStatus?jobId=02944aad-dfb5-4b97-8618-958063701559&format=HTML

What I expect to happen

The link should work - i.e. https://wps-testing.dev.aodn.org.au/wps/jobStatus?jobId=02944aad-dfb5-4b97-8618-958063701559&format=HTML (I don't see why we would need to specify LATEST in our URLs).

Consolidate output storage locations?

We currently configure a status bucket and an output bucket separately. Requests and status documents use the status bucket, outputs use the output bucket. Not really sure why we would want to have two locations for storing outputs. Perhaps we should consolidate?

One wrinkle is that we are using the status bucket for CSS/image config as well:

    public static String getBootstrapCssS3ExternalURL() {
        return getConfigFileS3ExternalURL(BOOTSTRAP_CSS_FILENAME_CONFIG_KEY);
    }

    public static String getAodnCssS3ExternalURL() {
        return getConfigFileS3ExternalURL(AODN_CSS_FILENAME_CONFIG_KEY);
    }

    public static String getAodnLogoS3ExternalURL() {
        return getConfigFileS3ExternalURL(AODN_LOGO_FILENAME_CONFIG_KEY);
    }

    private static String getConfigFileS3ExternalURL(String filename) {
        return getS3ExternalURL(getConfig(STATUS_S3_BUCKET_CONFIG_KEY), getConfig(AWS_BATCH_CONFIG_S3_KEY) + getConfig(filename));
    }

That to me could be configured separately!

Incorrect ContentType is returned for WPS requests

To reproduce

Perform a DescribeProcess, GetCapabilities, or Execute request

What happens

Get ContentType of 'application/json' returned

What I expect to happen

Get ContentType of 'application/xml' returned
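
A minimal sketch, assuming the request handler builds an API Gateway proxy response (aws-lambda-java-events); setting the Content-Type header explicitly should stop the responses being reported as application/json:

    import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

    import java.util.Collections;

    // Return WPS XML with an explicit application/xml Content-Type header.
    public class XmlResponseBuilder {
        public static APIGatewayProxyResponseEvent xmlResponse(String body) {
            return new APIGatewayProxyResponseEvent()
                    .withStatusCode(200)
                    .withHeaders(Collections.singletonMap("Content-Type", "application/xml"))
                    .withBody(body);
        }
    }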

Unhelpful error returned when requesting an unsupported process

To reproduce

Submit an execute request for an unknown process

What happens

<ExceptionReport version="1.0.0" xmlns="http://www.opengis.net/ows/1.1">
  <Exception exceptionCode="ExecutionError">
    <ExceptionText>Error executing request, Exception : JobDefinition must be provided, RequestId: efd5483a-bf5f-11e7-a61f-794d20532e48 (Service: AWSBatch; Status Code: 400; Error Code: ClientException; Request ID: efd5483a-bf5f-11e7-a61f-794d20532e48)</ExceptionText>
  </Exception>
</ExceptionReport>

What I expect to happen

Get a more informative error message - such as Unknown process ...

Error aggregating SRS collection

To reproduce

Submit

request.txt

What happens

Get the following error during aggregation

2017-11-02 23:30:47.091 INFO 1 --- [ main] au.org.emii.aggregator.NetcdfAggregator : Adding /tmp/1b56d936-178d-436d-a344-55c9fe6667d620170311032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc to output file
2017-11-02 23:30:47.355 INFO 1 --- [pool-1-thread-3] a.o.e.download.ParallelDownloadManager : Downloaded http://data.aodn.org.au/IMOS/SRS/SST/ghrsst/L3S-1d/day/2017/20170727032000-ABOM-L3S_GHRSST-SSTskin-AVHRR_D-1d_day.nc
au.org.emii.aggregator.exception.AggregationException: java.io.IOException: -101: NetCDF: HDF error
at au.org.emii.aggregator.NetcdfAggregator.chunkedCopy(NetcdfAggregator.java:249)
at au.org.emii.aggregator.NetcdfAggregator.appendTimeSlices(NetcdfAggregator.java:221)
at au.org.emii.aggregator.NetcdfAggregator.add(NetcdfAggregator.java:111)
at au.org.emii.aggregationworker.AggregationRunner.run(AggregationRunner.java:181)
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:732)
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:716)
at org.springframework.boot.SpringApplication.afterRefresh(SpringApplication.java:703)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:304)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1118)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1107)
at au.org.emii.aggregationworker.Application.main(Application.java:12)
Suppressed: java.io.IOException: -101: NetCDF: HDF error
at ucar.nc2.jni.netcdf.Nc4Iosp.flush(Nc4Iosp.java:3292)
at ucar.nc2.NetcdfFileWriter.flush(NetcdfFileWriter.java:1013)
at ucar.nc2.NetcdfFileWriter.close(NetcdfFileWriter.java:1024)
at au.org.emii.aggregator.NetcdfAggregator.close(NetcdfAggregator.java:120)
at au.org.emii.aggregationworker.AggregationRunner.run(AggregationRunner.java:245)
... 7 more
Caused by: java.io.IOException: -101: NetCDF: HDF error
at ucar.nc2.jni.netcdf.Nc4Iosp.flush(Nc4Iosp.java:3292)
at ucar.nc2.NetcdfFileWriter.flush(NetcdfFileWriter.java:1013)
at au.org.emii.aggregator.NetcdfAggregator.chunkedCopy(NetcdfAggregator.java:245)
... 10 more

What I expect to happen

Aggregation finishes successfully.

ExecuteResponse doesn't validate

To reproduce

Submit an Execute request and validate the response against http://www.opengis.net/wps/1.0.0/wpsAll.xsd

What happens

Get a validation error

org.xml.sax.SAXParseException; lineNumber: 2; columnNumber: 281; cvc-complex-type.4: Attribute 'serviceInstance' must appear on element 'ns3:ExecuteResponse'.

What I expect to happen

The response validates successfully
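
A hedged sketch of populating the missing attribute, assuming the JAXB bindings in net.opengis.wps.v_1_0_0 expose serviceInstance on ExecuteResponse (the GetCapabilities URL below is a placeholder):

    import net.opengis.wps.v_1_0_0.ExecuteResponse;

    // serviceInstance is a required attribute of ExecuteResponse in the WPS 1.0.0 schema;
    // it is normally the GetCapabilities URL of the service instance.
    public class ExecuteResponseFactory {
        public static ExecuteResponse baseResponse() {
            ExecuteResponse response = new ExecuteResponse();
            response.setService("WPS");
            response.setVersion("1.0.0");
            response.setServiceInstance("https://wps.example.org/wps?service=WPS&request=GetCapabilities");  // placeholder
            return response;
        }
    }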

Unclear why DescribeProcess returned an error when describing an unknown process

To reproduce

Describe an unknown process

What happens

Get

ExceptionCode: ExecutionError
ExceptionText: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 4A3D78608A541292; S3 Extended Request ID: MDzOx5gKNQ8+k+PQHUII220bXSSHDJR0Zt5edfc1Eq2zzJgMiTnGb+bBc/IthA4eGbFhJ+w8HRE=)

What I expect to get

ExceptionCode: InvalidParameterValue
Locator: identifier
ExceptionText: something like "Unknown process"
