aodn / geoserver-build
Configures a GeoServer war containing extensions and configuration required by AODN instances
License: Other
Hit the following URL (similar to what Geonetwork Link Monitor will use):
http://geoserver-systest.aodn.org.au/geoserver/ncwms?service=WMS&request=GetMap&version=1.1.1&format=image/png&bbox=-180,-90,180,90&srs=EPSG:4326&width=1&height=1&STYLES=&layers=acorn_hourly_avg_rot_qc_timeseries_url/sea_water_velocity
What happens: returns 500.
What should happen: returns 200.
The GenerationIT tests fail because they require postgres and other environment variables that are not set.
The tests were commented out in #41.
The tests in question are at:
https://github.com/aodn/geoserver-build/blob/master/src/extension/ncdfgenerator/src/test/java/au/org/emii/ncdfgenerator/GenerationIT.java
Javaduck currently reads all time-varying variables into memory (uncompressed) before appending them to the output file. For collections such as CARS_monthly this can consume a large amount of memory inside and outside the Java heap (over 670 MiB).
It is not necessary to read all data into memory before writing it out for this dataset. The dataset can be processed one time slice at a time.
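The slice-at-a-time approach can be sketched in pseudocode against the NetCDF-Java API the project already uses (variable and object names here are illustrative, not taken from the Javaduck code):

```
// Pseudocode sketch: copy a time-varying variable one time slice at a time,
// so only one slice is ever held in memory.
// "input" is an open NetcdfFile; "writer"/"outVar" are the output file and
// the matching output variable (illustrative names).
Variable inVar = input.findVariable("TEMP");    // assumed variable name
int[] sliceShape = inVar.getShape();
sliceShape[0] = 1;                              // one time step per read
int[] origin = new int[inVar.getRank()];        // starts at all zeros

for (int t = 0; t < inVar.getDimension(0).getLength(); t++) {
    origin[0] = t;
    Array slice = inVar.read(origin, sliceShape); // only this slice in memory
    writer.write(outVar, origin, slice);          // append to output file
}
```

Peak memory is then bounded by the size of one time slice rather than the whole variable.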
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /tmp/jna6629583319403885393.tmp which might have disabled stack guard. The VM will try to fix the stack guard now.
SUREFIRE-859: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.148 sec <<< FAILURE! - in au.org.emii.gogoduck.worker.GoGoDuckModuleTest
testUpdateMetadata(au.org.emii.gogoduck.worker.GoGoDuckModuleTest) Time elapsed: 0.146 sec <<< ERROR!
java.lang.UnsatisfiedLinkError: Unable to load library 'netcdf': libnetcdf.so: cannot open shared object file: No such file or directory
at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:145)
at com.sun.jna.NativeLibrary.getInstance(NativeLibrary.java:188)
at com.sun.jna.Library$Handler.<init>(Library.java:123)
at com.sun.jna.Native.loadLibrary(Native.java:255)
at com.sun.jna.Native.loadLibrary(Native.java:241)
at ucar.nc2.jni.netcdf.Nc4Iosp.load(Nc4Iosp.java:155)
at ucar.nc2.jni.netcdf.Nc4Iosp._open(Nc4Iosp.java:234)
at ucar.nc2.jni.netcdf.Nc4Iosp.openForWriting(Nc4Iosp.java:230)
at ucar.nc2.NetcdfFileWriter.<init>(NetcdfFileWriter.java:196)
at ucar.nc2.NetcdfFileWriter.openExisting(NetcdfFileWriter.java:105)
at au.org.emii.gogoduck.worker.GoGoDuckModule.updateMetadata(GoGoDuckModule.java:153)
at au.org.emii.gogoduck.worker.GoGoDuckModuleTest.testUpdateMetadata(GoGoDuckModuleTest.java:61)
The solution is to install libnetcdf-dev (sudo apt-get install libnetcdf-dev), which should probably be documented.
(This might be a bit overly-specific, and I haven't tried to reproduce it outside this one case of ours, sorry)
We have a few layers using the default workspace "cite" with namespace URI "http://www.opengeospatial.net/cite". Downloading these with the csv+metadata plugin causes an NPE:
11 Nov 12:05:57 ERROR [geoserver.ows] -
java.lang.NullPointerException
at au.org.emii.geoserver.wfs.response.CSVWithMetadataHeaderOutputFormat.getDataStoreForFeatureCollection(CSVWithMetadataHeaderOutputFormat.java:121)
at au.org.emii.geoserver.wfs.response.CSVWithMetadataHeaderOutputFormat.getMetadataFeatureName(CSVWithMetadataHeaderOutputFormat.java:134)
at au.org.emii.geoserver.wfs.response.CSVWithMetadataHeaderOutputFormat.write(CSVWithMetadataHeaderOutputFormat.java:79)
I.e., this line.
If I change the namespace to "cite" or "citens" for example, it seems to be fine. Regular outputFormat=CSV downloads are also fine.
This issue relates to the new version of GoGoDuck being tested in portal-sandbox.aodn.org.au:
Request a NetCDF aggregation (GoGoDuck) for an Acorn collection
the date_created attribute is set to the date_created attribute of the first file added to the aggregation
GoGoDuck can be configured to use the aggregation date/time for the date_created attribute
It seems as though we've got two "working copies" of ncdfgenerator:
We need to merge the two forks above to get a working WPS/netcdfgenerator, including bug fixes, within geoserver. The portal integration task depends on this.
Select IMOS - Australian National Mooring Network (ANMN) Facility - Temperature and salinity time-series on step 1
Select deployment code PIL100-1301 on step 2
Perform Netcdf download on step 3
What happens: an out of memory error occurs trying to perform the download.
What should happen: the data is downloaded.
WPS job results (resources) are purged after a certain timeout (resourceExpirationTimeout). Accessing the GetExecutionStatus URL after this time results in a 500 error.
It would be more useful to return a valid XML response with an appropriate status code, although there doesn't seem to be a Purged status, so I'm not sure what's best here.
Failing that, we could just return a 404, but I don't think this would be as good from an end user's point of view (i.e. the error message won't be as descriptive).
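For illustration, a purged-job response could reuse the standard OWS ExceptionReport shape (a hand-written example, not something the code currently produces; the exception code and wording are assumptions):

```xml
<ows:ExceptionReport xmlns:ows="http://www.opengis.net/ows/1.1" version="1.0.0">
  <ows:Exception exceptionCode="NoApplicableCode">
    <ows:ExceptionText>Execution status unavailable: results for this job have expired and been purged</ows:ExceptionText>
  </ows:Exception>
</ows:ExceptionReport>
```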
Hi @danfruehauf,
After GoGoDuck finishes the job, you can query the job status and download the result.
However, the job status query fails after a certain time, as WPS sets the job to expired.
Do you know whether the expiry time can be set?
Thanks,
Ming
At the moment there are two gaps preventing the ncwms abomination from complying with WMS:
- it uses SERVICE=ncwms (instead of SERVICE=wms)
- GetCapabilities is not supported
Generally when using ncwms, the GetCapabilities request is rather useless and supplies you with a ~100MB (or more) XML file with all the time slices ncwms has ever indexed. A more useful GetCapabilities command would include DATASET=, which would return the GetCapabilities response for a single data collection only. The latter is probably what we should implement. Not sure what to do about the former.
Hi All,
The ncwms service in GeoServer is forced to set the WMS version to 1.3.0 when it sends a wms request to the THREDDS server. See the code below from Ncwms.java.
public static String wmsVersion = "1.3.0";
wmsParameters.put("VERSION", new String[] { wmsVersion });
Our THREDDS server requires WMS version 1.1.1 and this fixed version number doesn't work for us.
Since the ncwms service only acts as a proxy between the portal and the THREDDS server, is it OK to remove this line and allow the version set by the portal to pass through? The ncwms version could then be set in the portal configuration file.
I also noticed that the getCapabilitiesXml method has the version fixed at 1.3.0 as well; I am not sure whether it will be affected by removing wmsVersion.
Thanks,
Ming
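One possible shape for this (a sketch only, not the existing Ncwms.java code; the property name is an assumption) is to resolve the version from configuration with 1.3.0 as the default, so a deployment can override it:

```java
public class NcwmsVersionConfig {
    // Hypothetical helper: resolve the WMS version to use when proxying to
    // THREDDS, falling back to the currently hardcoded default of 1.3.0.
    // "ncwms.wms.version" is an assumed property name for illustration.
    static String resolveWmsVersion() {
        return System.getProperty("ncwms.wms.version", "1.3.0");
    }

    public static void main(String[] args) {
        System.out.println(resolveWmsVersion());       // default behaviour
        System.setProperty("ncwms.wms.version", "1.1.1");
        System.out.println(resolveWmsVersion());       // deployment override
    }
}
```

The same idea works with a value read from ncwms.xml in the geoserver data directory, which is how GoGoDuck handles its configuration.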
There is a potential to block waiting for stdout or stderr on completion of processing here (refer to http://www.thecodingforums.com/threads/problems-getting-stderr-and-stdout-from-process-object.699309/ or numerous other posts on this issue). Use a well-tested library to consume stdout/stderr from a system command, such as Apache Commons Exec's PumpStreamHandler; redirect stderr to stdout (e.g. as per http://stackoverflow.com/questions/64000/draining-standard-error-in-java); or, better yet, rewrite the step in Java so this is not required.
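For reference, the blocking can also be avoided without a library by merging stderr into stdout and draining the combined stream before waitFor() (a minimal sketch; the command is illustrative):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class DrainExample {
    // Run a command, merging stderr into stdout and draining the combined
    // stream before waitFor(), so the child can never block on a full pipe.
    static String run(String... cmd) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.redirectErrorStream(true); // stderr -> stdout: one stream to drain
        Process p = pb.start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        p.waitFor(); // safe: output has already been consumed
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.print(run("sh", "-c", "echo out; echo err 1>&2"));
    }
}
```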
Subclasses are used to perform specific processing for different collections, including determining the file location for CARS.
There is no need for these subclasses: the file location should be sourced from the database like every other collection, and information such as the correct variable name for latitude/longitude/time should be sourced from the database (requires a harvester) or probed from the file itself.
Try the following (Geoserver 2.7.1.1):
curl "http://geoserver-123.aodn.org.au/geoserver/ows?typeName=acorn_hourly_avg_rot_qc_timeseries_url&SERVICE=WFS&outputFormat=csv&REQUEST=GetFeature&VERSION=1.0.0&CQL_FILTER=time%3D2014-06-06T00%3A30%3A00Z&PROPERTYNAME=time"
FID,time
acorn_hourly_avg_rot_qc_timeseries_url.fid-5e81ca8e_1506e615083_-194b,2014-06-06T00:30:00
In the geoserver log:
16 Oct 13:03:18 INFO [geoserver.wfs] -
Request: getServiceInfo
16 Oct 13:03:18 INFO [geoserver.wfs] -
Request: getFeature
service = WFS
version = 1.0.0
baseUrl = http://localhost:8080/geoserver/
query[0]:
propertyName[0] = time
filter = [ time = Fri Jun 06 10:30:00 AEST 2014 ]
typeName[0] = {imos.mod}acorn_hourly_avg_rot_qc_timeseries_url
outputFormat = csv
resultType = results
Formats are: {marvl xml=marvl xml}
So far so good.
And against a VM (Geoserver 2.8.0):
curl "http://po.aodn.org.au/geoserver/ows?typeName=acorn_hourly_avg_rot_qc_timeseries_url&SERVICE=WFS&outputFormat=csv&REQUEST=GetFeature&VERSION=1.0.0&CQL_FILTER=time%3D2014-06-06T00%3A30%3A00Z&PROPERTYNAME=time"
FID,file_url
No results. Geoserver log:
16 Oct 13:04:12 INFO [geoserver.wfs] -
Request: getServiceInfo
16 Oct 13:04:13 INFO [geoserver.wfs] -
Request: getFeature
service = WFS
version = 1.0.0
baseUrl = http://po.aodn.org.au:80/geoserver/
query[0]:
propertyName[0] = time
filter = [ time = Fri Jun 06 10:30:00 AEST 2014 ]
typeName[0] = {imos.mod}acorn_hourly_avg_rot_qc_timeseries_url
outputFormat = csv
resultType = results
Formats are: {marvl xml=marvl xml}
Now, it's not that there isn't data; if I ask for a range, I will get results on 06/06/2014 on the po box:
$ curl "http://po.aodn.org.au/geoserver/ows?typeName=acorn_hourly_avg_rot_qc_timeseries_url&SERVICE=WFS&outputFormat=csv&REQUEST=GetFeature&VERSION=1.0.0&CQL_FILTER=time%20%3E%3D%202014-06-05T00%3A00%3A00Z%20AND%20time%20%3C%202014-06-07T00%3A00%3A00Z&PROPERTYNAME=time&sortBy=time"
FID,time
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7fa8,2014-06-05T10:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7fa7,2014-06-05T11:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7fa6,2014-06-05T12:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7fa5,2014-06-05T13:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7fa4,2014-06-05T14:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7fa3,2014-06-05T15:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7fa2,2014-06-05T16:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7fa1,2014-06-05T17:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7fa0,2014-06-05T18:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7f9f,2014-06-05T19:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7f9e,2014-06-05T20:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7f9d,2014-06-05T21:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7f9c,2014-06-05T22:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7f9b,2014-06-05T23:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7f9a,2014-06-06T00:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7f99,2014-06-06T01:30:00
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7f98,2014-06-06T02:30:00
If I ask the VM (Geoserver 2.8.0) for the same, just without the trailing Z, it seems to work:
curl "http://po.aodn.org.au/geoserver/ows?typeName=acorn_hourly_avg_rot_qc_timeseries_url&SERVICE=WFS&outputFormat=csv&REQUEST=GetFeature&VERSION=1.0.0&CQL_FILTER=time%3D2014-06-06T00%3A30%3A00&PROPERTYNAME=time"
FID,time
acorn_hourly_avg_rot_qc_timeseries_url.fid-7905037_1506d9509ec_-7f96,2014-06-06T00:30:00
I don't quite understand where or what time conversion is being done. However, this needs to be dealt with, as it will affect both GoGoDuck and the ncwms controller.
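The effect of the trailing Z can be demonstrated with java.time (a standalone sketch; it assumes the zoneless value ends up interpreted in the server's local zone, which would match the AEST timestamps in the logs above):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class TimeZoneParsing {
    // "2014-06-06T00:30:00Z" pins the instant to UTC; without the Z the
    // string is only a LocalDateTime and becomes an instant once a zone
    // is chosen - which is where the two servers can diverge.
    static Instant parseUtc(String s) {
        return Instant.parse(s);
    }

    static Instant parseInZone(String s, ZoneId zone) {
        return LocalDateTime.parse(s, DateTimeFormatter.ISO_LOCAL_DATE_TIME)
                .atZone(zone).toInstant();
    }

    public static void main(String[] args) {
        Instant utc = parseUtc("2014-06-06T00:30:00Z");
        Instant aest = parseInZone("2014-06-06T00:30:00",
                ZoneId.of("Australia/Sydney"));
        // Same wall-clock text, 10 hours apart as instants (AEST = UTC+10).
        System.out.println(utc);
        System.out.println(aest);
    }
}
```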
Open http://auv.aodn.org.au/auv/
(Use Chrome Browser as there are fatal errors in Firefox currently)
Open developer tools
Select a track from the drop down
Click on the track on the map until a 500 error is produced. I am able to see an error produced within about 10 tries.
If this is a genuine error, and this might be the case, the app will pop up an error as expected. End of this particular little bug report. (The cause of the error, or at least the output from an error log item relating to this layer, is in a separate bug report in the geoserver repo.)
Open the link that caused a 500 error in a new tab, and you can see the response returns without error.
There is a corresponding AUV issue in aodn/auv#7 (comment)
Affects AODN contributors.
Steps to reproduce:
As an example, go to http://reeflifesurvey.com/reef-life-survey/survey-data/ with console open.
Check out the request http://geoserver-rls.imas.utas.edu.au/geoserver/RLS/ows?service=WFS&version=1.1.0&request=GetFeature&typeName=RLS:SiteListQ&outputFormat=text%2Fjavascript&format_options=callback:loadFeatures&srsname=EPSG:3857&_=1470012709613.
Observed output:
500 error, no WFS response returned in that format.
Expected output:
Correct WFS response.
Steps to reproduce:
Run up Geoserver in vagrant with a broken DB connection (i.e. the default one).
Try to edit a layer using Layer Filters Editor.
Observed behaviour:
Stacktrace complaining about wicket hierarchy issues.
Expected behaviour:
Decent error handling.
Specify an aggregation override like the following in an aggregation template:
<variable name="sea_surface_temperature" type="Float">
<attribute name="_FillValue" value="9.9692099683868690e+36"/>
<attribute name="valid_min" value="0.0"/>
<attribute name="valid_max" value="350.0"/>
</variable>
java.lang.NullPointerException
at au.org.emii.aggregator.datatype.NumericTypes.get(NumericTypes.java:10)
at au.org.emii.aggregator.datatype.NumericTypes.parse(NumericTypes.java:51)
at au.org.emii.aggregator.overrides.VariableAttributeOverride.getAttributeNumericValue(VariableAttributeOverride.java:46)
at au.org.emii.aggregator.overrides.VariableOverrides.getAttributeNumericValue(VariableOverrides.java:88)
at au.org.emii.aggregator.overrides.VariableOverrides.getFillerValue(VariableOverrides.java:45)
at au.org.emii.aggregator.NetcdfAggregator.getUnpackerOverrides(NetcdfAggregator.java:249)
The type of the _FillValue, valid_min and valid_max attributes should be set to the variable type.
Refer comments on https://github.com/aodn/geoserver-build/pull/271/files
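A defensive version of the attribute-value parsing can be sketched as follows (hypothetical stand-in code, not the actual NumericTypes API; the set of type names handled is an assumption):

```java
public class AttributeValueParser {
    // Hypothetical stand-in for NumericTypes.parse: convert an override
    // attribute value to the variable's declared numeric type, failing
    // with a clear message instead of an NPE when the type is unknown.
    static Number parse(String typeName, String value) {
        if (typeName == null) {
            throw new IllegalArgumentException(
                "No variable type available for attribute value " + value);
        }
        switch (typeName) {
            case "Float":  return Float.parseFloat(value);
            case "Double": return Double.parseDouble(value);
            case "Integer":
            case "Int":    return Integer.parseInt(value);
            case "Short":  return Short.parseShort(value);
            case "Byte":   return Byte.parseByte(value);
            default:
                throw new IllegalArgumentException(
                    "Unsupported type: " + typeName);
        }
    }

    public static void main(String[] args) {
        // Values from the override in the report above.
        System.out.println(parse("Float", "9.9692099683868690e+36"));
        System.out.println(parse("Float", "0.0"));
    }
}
```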
We currently assume vertical axis values are integers when processing vertical subsets. This works for CARS but will not work for datasets where that isn't the case.
$ unzip -l geoserver-1.0.0-imos.war | grep netcdf
25453334 2016-03-15 11:00 WEB-INF/lib/netcdfAll-4.6.4.jar
18360 2016-09-30 11:25 WEB-INF/lib/netcdf-iterator-1.0.0-SNAPSHOT.jar
59295 2016-10-19 10:44 WEB-INF/lib/netcdf4-4.6.6.jar
What happens: the job returns an error (not a netcdf file).
What should happen: the job finishes processing and a subsetted file is available.
Code should be moved to its own plugin.
[provenance.ProvenanceWriter] - No template provenance_template_gridded.ftl found for provenance document
To reproduce
Select the following dataset collection "2017 Victorian coastal DEM - Continuous seamless 10m DEM" on the RC AODN Portal
Select a bounding box around port Phillip Bay
Download as NetCDF
What happens
Get the following error message
Process failed during execution ucar.ma2.InvalidRangeException: first (-23186) must be >= 0
See JobID: 99b35702-f9da-4419-8519-0500b97fcc8d created on the 14/09/2017.
What I expect to happen
Get the data downloaded
Post request.txt to http://geoserver-123.aodn.org.au/geoserver/ows
What happens: an HTML page is returned -
response.txt
What should happen: an OGC compliant Exception response is returned with some information about the error - which in this case is that time_coverage_start and time_coverage_end aren't valid properties.
An exception occurs when trying to generate a NetCDF under certain conditions.
Steps to reproduce
From the command line:
Note that the same thing happens from the context of the WPS process, which can be triggered equivalently from the portal.
What happens
After a minute or two:
Write exception java.lang.IllegalArgumentException: dimension length must be > 0 :0
at ucar.nc2.NetcdfFileWriteable.addDimension(NetcdfFileWriteable.java:247)
at au.org.emii.ncdfgenerator.DimensionImpl.define(DimensionImpl.java:26)
at au.org.emii.ncdfgenerator.NcdfEncoder.writeNext(NcdfEncoder.java:188)
at au.org.emii.ncdfgenerator.Main.main(Main.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
at java.lang.Thread.run(Thread.java:745)
What should happen
A valid zip of NetCDF files is produced.
It looks like the NetCDF generator is not closing some result sets, resulting in excessive memory consumption, including out of memory errors in production.
Refer https://github.com/aodn/internal-discussions/issues/274
Resource cleanup is not being performed on exception in many places e.g. https://github.com/aodn/geoserver-build/blob/master/src/extension/wps/src/main/java/au/org/emii/gogoduck/worker/GoGoDuck.java#L228
Not being performed at all in other places e.g. https://github.com/aodn/geoserver-build/blob/master/src/extension/wps/src/main/java/au/org/emii/gogoduck/worker/GoGoDuck.java#L178
We need to review resource usage and cleanup in the gogoduck java code and clean up where required.
On RC when downloading a spatial subset of the "Victorian Coastal Digital Elevation Model (VCDEM 2017)" (Deakin uni):
:time_coverage_start = "${TIME_START}" ;
:time_coverage_end = "${TIME_END}" ;
Since this dataset doesn't have a temporal dimension this is not relevant and should not be added.
:geospatial_vertical_min = -118.f ;
:geospatial_vertical_max = 753.9601f ;
Submit an aggregation request for oa_reconstruction e.g request.txt
Get an error returned - "Invalid time format for subset: TIME,1870-07-17T19:38:33.16600Z,1870-07-17T19:38:33.16600Z;LATITUDE,-31.6855,-31.6855;LONGITUDE,114.8291,114.8291 Valid time format example: TIME,2009-01-01T00:00:00.000Z,2009-12-25T23:04:00.000Z"
The aggregation is performed and the result returned.
Use Case:
Our datasets are organised by month, with one netcdf file per month. There are 4 time slices (6 hour steps) per day, so every time slice points to a netcdf file, and all time slices in the same month point to the same netcdf file. Sending a WFS request for one month of data returns 124 matched netcdf files (assuming 31 days in a month), all of which are basically the same netcdf file.
GoGoDuck treats these 124 netcdf files as different files and will fire 124 requests to the THREDDS server to download the file. This is not desired; is it possible to remove the duplicates before downloading the netcdf files?
Thanks,
Ming
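Removing the duplicates is straightforward if done on the URL list before downloading, e.g. (a sketch; extracting the URL field from each WFS feature is assumed to have happened already):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class DedupeUrls {
    // Collapse the list of file URLs returned by the WFS query to the set
    // of distinct URLs, preserving first-seen order, before any downloads
    // are fired at THREDDS.
    static List<String> unique(List<String> fileUrls) {
        return new ArrayList<>(new LinkedHashSet<>(fileUrls));
    }

    public static void main(String[] args) {
        List<String> fromWfs = Arrays.asList(
                "2016/01.nc", "2016/01.nc", "2016/01.nc", "2016/02.nc");
        System.out.println(unique(fromWfs)); // [2016/01.nc, 2016/02.nc]
    }
}
```

With this, a month's worth of time slices collapses to a single download per distinct file.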
I naively thought that having the parameters externalized via spring would be good enough to later override them via chef. Little did I know.
The parameters in the following file need to be externalized and controlled via CM (chef in our case):
https://github.com/aodn/geoserver-build/blob/master/src/extension/ncwms/src/main/resources/applicationContext.xml
The main parameters we need to be able to control are:
It's possible to also embed them in a geoserver xml (such as ncwms.xml). GoGoDuck does that.
It will also be important for AODN contributors, as they would like to reuse that code, obviously with different endpoints for their infrastructure.
What happens: the job never finishes processing.
What should happen: the job finishes processing and a subsetted file is available.
Original text:
Is it possible to define an HTTP URL prefix, such as a server name, in a config file?
If the WFS service returns relative URLs, this causes errors in the GoGoDuck service, as no server name is defined in the relative URLs.
The GoGoDuck URL mangling should be configurable from the gogoduck.xml configuration file found in the geoserver data directory. At the moment, it is hardcoded for IMOS' use.
Currently, the NetcdfOutputProcess requires a cql filter to be provided in the request. Conversely, the portal enforces no such behaviour, meaning that it's possible for a user to request a "Download Later" download which will error.
There are two options:
Note: even if no filter is provided, the WPS time execution limit will protect against too large requests.
In the current implementation, the notification email is sent directly after the job is completed.
However, for a complete wps process, it still takes time to copy the results and make them downloadable by the end users. If the copying process takes a very long time, end users will get the success email before the download link is available, which leads to a bad user experience.
To fix this, the email needs to be sent from a wps event listener, which only sends the email after the whole wps process is completed.
Please also refer to this issues reported in the AODN Portal:
aodn/aodn-portal#2137
geoserver-layer-filter-extension could have a better name. I couldn't think of one yet, but definitely the geoserver, layer and extension parts are redundant.
o Set the fileSizeLimit to 400000000 (~381MB)
o Select CARS Monthly collection and download as netcdf without subsetting
What happens: an error is returned:
"Process failed during execution Total file size 3 GB for 1 files, exceeds the limit 381 MB"
What should happen: the aggregation is successful, as the size of the CARS NetCDF file is 314MB, which is less than 381MB (3GB is right out).
This issue relates to the new version of GoGoDuck being tested in portal-sandbox.aodn.org.au:
Request a netcdf download for ACORN Bonney Coast real time.
What happens: time_coverage_end = "${TIME_END}" is added to the aggregated dataset.
What should happen: a time_coverage_end attribute is added, set to the end of the requested time range in ISO8601 format, e.g.
time_coverage_end = "2017-03-15T00:00:00Z"
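Formatting the requested end time as an ISO8601 UTC string is a one-liner with java.time (a sketch; how the end instant is obtained from the request parameters is assumed):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class CoverageEnd {
    // Format the requested end of the time range as an ISO 8601 UTC string
    // for the time_coverage_end attribute, matching the example above.
    static String formatCoverageEnd(Instant end) {
        return DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss'Z'")
                .withZone(ZoneOffset.UTC)
                .format(end);
    }

    public static void main(String[] args) {
        Instant end = Instant.parse("2017-03-15T00:00:00Z");
        System.out.println(formatCoverageEnd(end)); // 2017-03-15T00:00:00Z
    }
}
```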
This is something I've observed on my VM, connecting to the production DB. Geoserver takes minutes to start. After loading all features, stores and layers, it sort of hangs and runs a lot of geometry queries on the DB, such as:
SELECT SRID FROM GEOGRAPHY_COLUMNS WHERE F_TABLE_SCHEMA = 'srs_altimetry' AND F_TABLE_NAME = 'srs_altimetry_timeseries_data' AND F_GEOGRAPHY_COLUMN = 'geom'
Before it hangs, those are the log lines:
16 Oct 09:11:36 INFO [gwc.layer] - GeoServer TileLayer store base directory is: /vagrant/src/geoserver/gwc-layers
16 Oct 09:11:36 INFO [gwc.layer] - Loading tile layers from /vagrant/src/geoserver/gwc-layers
16 Oct 09:11:36 INFO [gwc.layer] - GWC configuration based on GeoServer's Catalog loaded successfuly
16 Oct 09:11:36 INFO [layer.TileLayerDispatcher] - Configuration GeoServer Catalog Configuration contained no layers.
16 Oct 09:11:36 INFO [config.XMLConfiguration] - Initializing GridSets from /vagrant/src/geoserver/gwc
16 Oct 09:11:36 INFO [config.XMLConfiguration] - Initializing layers from /vagrant/src/geoserver/gwc
16 Oct 09:11:36 INFO [layer.TileLayerDispatcher] - Configuration /vagrant/src/geoserver/gwc contained no layers.
16 Oct 09:11:36 INFO [gwc.config] - Initializing GeoServer specific GWC configuration from gwc-gs.xml
Eventually, after a few minutes, it starts and spits out the following log lines:
16 Oct 09:14:19 WARN [geotools.jdbc] - Could not find mapping for 'duration', ignoring the column and setting the feature type read only
And once it's actually up:
16 Oct 09:16:43 INFO [diskquota.ConfigLoader] - DiskQuota configuration not found: /vagrant/src/geoserver/gwc/geowebcache-diskquota.xml
16 Oct 09:16:43 INFO [diskquota.ConfigLoader] - DiskQuota configuration not found: /vagrant/src/geoserver/gwc/geowebcache-diskquota.xml
16 Oct 09:16:43 INFO [diskquota.DiskQuotaMonitor] - Setting up disk quota periodic enforcement task
16 Oct 09:16:43 INFO [diskquota.DiskQuotaMonitor] - 0 layers configured with their own quotas.
16 Oct 09:16:43 INFO [diskquota.DiskQuotaMonitor] - 0 layers attached to global quota 500.0 MB
16 Oct 09:16:43 INFO [diskquota.DiskQuotaMonitor] - Disk quota periodic enforcement task set up every 10 SECONDS
16 Oct 09:16:43 INFO [ows.OWSHandlerMapping] - Mapped URL path [/gwc/service/**] onto handler 'dispatcher'
16 Oct 09:16:43 INFO [geowebcache.GeoWebCacheDispatcher] - Invoked setServletPrefix(gwc)
16 Oct 09:16:43 INFO [georss.GeoRSSPoller] - Initializing GeoRSS poller in a background job...
16 Oct 09:16:43 INFO [georss.GeoRSSPoller] - No enabled GeoRSS feeds found, poller will not run.
16 Oct 09:16:43 INFO [rest.RESTDispatcher] - Created RESTDispatcher with 15 paths
16 Oct 09:16:43 INFO [wms.WMSService] - Will NOT recombine tiles for non-tiling clients.
16 Oct 09:16:43 INFO [wms.WMSService] - Will proxy requests to backend that are not getmap or getcapabilities.
16 Oct 09:16:44 INFO [ows.OWSHandlerMapping] - Mapped URL path [/kml] onto handler 'dispatcher'
16 Oct 09:16:44 INFO [ows.OWSHandlerMapping] - Mapped URL path [/kml/*] onto handler 'dispatcher'
16 Oct 09:16:44 INFO [ows.OWSHandlerMapping] - Mapped URL path [/kml/icon/**/*] onto handler 'kmlIconService'
16 Oct 09:16:44 INFO [ows.OWSHandlerMapping] - Mapped URL path [/ows/**] onto handler 'dispatcher'
16 Oct 09:16:44 INFO [ows.OWSHandlerMapping] - Mapped URL path [/ows] onto handler 'dispatcher'
16 Oct 09:16:44 INFO [org.geoserver] - GeoServer configuration lock is enabled
16 Oct 09:16:44 INFO [ows.OWSHandlerMapping] - Mapped URL path [/wcs] onto handler 'dispatcher'
16 Oct 09:16:44 INFO [ows.OWSHandlerMapping] - Mapped URL path [/wcs/**] onto handler 'dispatcher'
16 Oct 09:16:46 INFO [ows.OWSHandlerMapping] - Mapped URL path [/wfs/*] onto handler 'dispatcher'
16 Oct 09:16:46 INFO [ows.OWSHandlerMapping] - Mapped URL path [/TestWfsPost] onto handler 'wfsTestServlet'
16 Oct 09:16:46 INFO [ows.OWSHandlerMapping] - Mapped URL path [/wfs] onto handler 'dispatcher'
16 Oct 09:16:53 INFO [ows.OWSHandlerMapping] - Mapped URL path [/wms/*] onto handler 'dispatcher'
16 Oct 09:16:53 INFO [ows.OWSHandlerMapping] - Mapped URL path [/wms] onto handler 'dispatcher'
16 Oct 09:16:53 INFO [ows.OWSHandlerMapping] - Mapped URL path [/animate] onto handler 'dispatcher'
16 Oct 09:16:53 INFO [ows.OWSHandlerMapping] - Mapped URL path [/animate/*] onto handler 'dispatcher'
16 Oct 09:16:53 INFO [ows.OWSHandlerMapping] - Mapped URL path [/wps] onto handler 'dispatcher'
16 Oct 09:16:53 INFO [ows.OWSHandlerMapping] - Mapped URL path [/wps/*] onto handler 'dispatcher'
16 Oct 09:16:53 INFO [ows.OWSHandlerMapping] - Mapped URL path [/temp/**] onto handler 'filePublisher'
16 Oct 09:16:53 INFO [geoserver.wps] - Blacklisting process ras:RasterZonalStatistics2 as the input zones of type class java.lang.Object cannot be handled
16 Oct 09:16:53 INFO [geoserver.wps] - Blacklisting process ras:RasterZonalStatistics2 as the input nodata of type class it.geosolutions.jaiext.range.Range cannot be handled
16 Oct 09:16:53 INFO [geoserver.wps] - Blacklisting process ras:RasterZonalStatistics2 as the input rangeData of type class java.lang.Object cannot be handled
16 Oct 09:16:53 INFO [geoserver.wps] - Blacklisting process ras:RasterZonalStatistics2 as the output zonal statistics of type interface java.util.List cannot be handled
16 Oct 09:16:53 INFO [geoserver.wps] - Found 14 bindable processes in Raster processes
16 Oct 09:16:53 INFO [geoserver.wps] - Found 48 bindable processes in Geometry processes
16 Oct 09:16:53 INFO [geoserver.wps] - Found 7 bindable processes in GeoServer specific processes
16 Oct 09:16:53 INFO [geoserver.wps] - Found 29 bindable processes in Vector processes
16 Oct 09:16:54 INFO [geoserver.wps] - Found 89 bindable processes in Deprecated processes
16 Oct 09:16:54 INFO [wps.NetcdfOutputProcess] - constructor
16 Oct 09:16:54 INFO [geoserver.security] - Start reloading user/groups for service named default
16 Oct 09:16:54 INFO [geoserver.security] - Reloading user/groups successful for service named default
16 Oct 09:16:54 INFO [geoserver.security] - AuthenticationCache Initialized with 1000 Max Entries, 300 seconds idle time, 600 seconds time to live and 3 concurrency level
16 Oct 09:16:54 INFO [geoserver.security] - AuthenticationCache Eviction Task created to run every 600 seconds
More information:
When geoserver finally starts it works alright.
Spatial and temporal search criteria are currently passed to the GoGoDuck process in a single subset parameter.
This makes it difficult for a user to determine what needs to be passed without referring to additional documentation.
It also requires the process to perform parsing which wouldn't otherwise be necessary.
It would be good to define each parameter separately so that it's easier to describe and make use of them.
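For context, this is roughly the parsing the combined parameter currently forces on the process, and exactly what separate parameters would make unnecessary (a sketch; the clause format is taken from the error messages elsewhere in these issues):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SubsetParser {
    // Parse a combined subset parameter such as
    // "TIME,2009-01-01T00:00:00.000Z,2009-12-25T23:04:00.000Z;LATITUDE,-33.4,-32.1"
    // into name -> [min, max].
    static Map<String, String[]> parse(String subset) {
        Map<String, String[]> result = new LinkedHashMap<>();
        for (String clause : subset.split(";")) {
            String[] fields = clause.split(",");
            if (fields.length != 3) {
                throw new IllegalArgumentException("Bad subset clause: " + clause);
            }
            result.put(fields[0], new String[] { fields[1], fields[2] });
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String[]> s = parse(
            "TIME,2014-10-10T00:00:00Z,2014-10-12T00:00:00Z;LATITUDE,-33.4,-32.1");
        System.out.println(s.get("LATITUDE")[0]); // -33.4
    }
}
```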
The current build overrides web.xml. That is to allow overriding of the HTTP 500 error codes (needed for proper squid operation, and very important).
The original solution introduced web.xml and added applicationContextOverrides.xml, which does the overriding of a bean definition. It is generally not recommended to include the whole web.xml file in our repository rather than just overriding the required parts. There must be a better solution for that.
It doesn't quite hurt now, but it'll hurt when we do a version upgrade, as web.xml will have to be re-imported from the newer version and the changes re-applied (something like d159736).
On 123 portal.
Steps to reproduce:
Add any radar, GSLA or CARS data collection and in step 2 define a temporal extent so that it includes 2 or 3 time steps (faster). No need to subset spatially.
Proceed to step 3 and select download as netcdf.
What should happen:
The following attributes:
-geospatial_lat_min
-geospatial_lat_max
-geospatial_lon_min
-geospatial_lon_max
should be updated to reflect what is actually found in the aggregated file, with values of the same "type" as the LATITUDE/LONGITUDE variables (usually Double), or at least numeric (an IMOS checker requirement).
What does happen:
The following attributes:
-geospatial_lat_min
-geospatial_lat_max
-geospatial_lon_min
-geospatial_lon_max
are being updated with the correct values but using the "type" String. Below is the difference between one of the original files and the aggregated file:
< :geospatial_lat_min = -33.433849 ;
< :geospatial_lat_max = -30.150743 ;
---
> :geospatial_lat_min = "-33.433849" ;
> :geospatial_lat_max = "-30.150743" ;
Noticed in the link checker:
$ curl -v 'http://geoserver-123.aodn.org.au/geoserver/ncwms?service=ncwms&request=GetMap&version=1.1.1&format=image/png&bbox=-180,-90,180,90&srs=EPSG:4326&width=1&height=1&STYLES=&LAYERS=csiro_oa_reconstruction/OMEGA_C'
* Trying 10.1.1.12...
* TCP_NODELAY set
* Connected to geoserver-123.aodn.org.au (10.1.1.12) port 80 (#0)
> GET /geoserver/ncwms?service=ncwms&request=GetMap&version=1.1.1&format=image/png&bbox=-180,-90,180,90&srs=EPSG:4326&width=1&height=1&STYLES=&LAYERS=csiro_oa_reconstruction/OMEGA_C HTTP/1.1
> Host: geoserver-123.aodn.org.au
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< Content-Length: 0
< Date: Thu, 11 May 2017 01:57:59 GMT
<
* Curl_http_done: called premature == 0
* Connection #0 to host geoserver-123.aodn.org.au left intact
From the GS node,
meteo@geoserver:/var/lib/tomcat7/conf$ ping thredds.aodn.org.au
ping: unknown host thredds.aodn.org.au
It seems that the jar being built and included in the GeoServer war does not include LayerPage.html and layer_filters.css. These missing files cause a blank page when you click the 'WMS Layer Filters' link and prevent you from editing any filters.
From the geoserver-wps logs:
02 Nov 00:51:20 INFO [worker.GoGoDuck] - Validating subset TIME,2014-10-10T00:00:00,2014-10-12T00:00:00;LATITUDE,-33.433849,-32.150743;LONGITUDE,114.15197,115.741219
02 Nov 00:51:20 INFO [worker.GoGoDuck] - Matched Time Pattern: 2014-10-10T00:00:00
02 Nov 00:51:20 INFO [worker.GoGoDuck] - Matched Time Pattern: 2014-10-12T00:00:00
02 Nov 00:51:20 INFO [worker.GoGoDuck] - Matched Latitude/Longitude Pattern: 00,2014
02 Nov 00:51:20 INFO [worker.GoGoDuck] - Matched Latitude/Longitude Pattern: -33.433849,-32.150743
02 Nov 00:51:20 INFO [worker.GoGoDuck] - Matched Latitude/Longitude Pattern: 114.15197,115.741219
02 Nov 00:51:20 ERROR [worker.GoGoDuck] - Your aggregation failed! Reason for failure is: 'Invalid latitude/longitude format for subset: TIME,2014-10-10T00:00:00,2014-10-12T00:00:00;LATITUDE,-33.433849,-32.150743;LONGITUDE,114.15197,115.741219'
Looks like GoGoduck relies on the time zone (e.g. Z) to be specified on dates to be able to correctly validate latitude/longitude (https://github.com/aodn/geoserver-build/blob/master/src/extension/wps/src/main/java/au/org/emii/gogoduck/worker/GoGoDuck.java#L148).
Not sure how someone would work this out from the error returned. It looks like the requestor spent a bit of time changing the number of decimals on the lat/longitude parameters to get their request to work, without success.
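The behaviour in the log can be illustrated with two time patterns (illustrative patterns, not the exact ones in GoGoDuck.java). With the zone designator required, the zoneless time fails validation, and the latitude/longitude matcher is then let loose on the time clause, producing matches like "00,2014". Making the zone optional fixes that first step:

```java
import java.util.regex.Pattern;

public class TimePatternFix {
    // A time pattern with the zone designator required (roughly the current
    // behaviour) versus one where it is optional.
    static final Pattern ZONE_REQUIRED =
        Pattern.compile("\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}(\\.\\d+)?Z");
    static final Pattern ZONE_OPTIONAL =
        Pattern.compile("\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}(\\.\\d+)?Z?");

    static boolean strict(String s)  { return ZONE_REQUIRED.matcher(s).matches(); }
    static boolean lenient(String s) { return ZONE_OPTIONAL.matcher(s).matches(); }

    public static void main(String[] args) {
        String noZone = "2014-10-10T00:00:00"; // the requestor's value
        System.out.println(strict(noZone));    // false: rejected today
        System.out.println(lenient(noZone));   // true: accepted with Z optional
    }
}
```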
Refer geoserver-123-jstack-2016-01-27.txt
The ncwms plugin performs a wfs call to retrieve indexed file information. As shown in the file above, if all threads are already busy requesting indexed file information then it's not possible for geoserver to respond to the wfs requests, and geoserver deadlocks.
Refer aodn/aodn-portal#2467