
opensearch-project / anomaly-detection-dashboards-plugin


Manage your detectors and identify atypical data in OpenSearch Dashboards

Home Page: https://opensearch.org/docs/latest/monitoring-plugins/ad/index/

License: Apache License 2.0

TypeScript 97.73% JavaScript 1.66% SCSS 0.62%
anomaly-detection dashboards opensearch

anomaly-detection-dashboards-plugin's People

Contributors

aesgithub, alolita, amitgalitz, amoo-miki, bsfishy, dblock, dependabot[bot], gaiksaya, jackiehanyang, kaituo, kavilla, liujoycec, mihirsoni, ohltyler, opensearch-trigger-bot[bot], peternied, peterzhuamazon, riysaxen-amzn, ruanyl, ryanbogan, saratvemulapalli, sean-zheng-amazon, smortex, vachashah, vijayanb, wnbts, yizheliu-amazon, ylwu-amzn, zelinh, zhyuanqi


anomaly-detection-dashboards-plugin's Issues

[BUG] Errors and failure to show detectors if .opendistro-anomaly-detection-state doesn't exist yet

Describe the bug
When a detector is created via the create detector API (for example, using curl), the .opendistro-anomaly-detection-state index isn't immediately created. At that point, opening the Anomaly Detection Dashboards plugin throws errors and the detectors are not shown.

See the forum post where I initially reported this.

To Reproduce
Steps to reproduce the behavior:

  1. Create a detector using the create detector API, not via the plugin UI
  2. Validate that the detector was created correctly from the API output and with the search detector API
  3. Open Dashboards and navigate to the anomaly detection dashboards plugin UI
  4. Errors are shown (complaining that .opendistro-anomaly-detection-state is missing) and the detectors are not listed:
org.opensearch.index.IndexNotFoundException: no such index [.opendistro-anomaly-detection-state]
 at org.opensearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:959) ~[opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:896) ~[opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:852) ~[opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:231) ~[opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:207) ~[opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.search.TransportSearchAction.resolveLocalIndices(TransportSearchAction.java:634) ~[opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.search.TransportSearchAction.executeSearch(TransportSearchAction.java:662) ~[opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.search.TransportSearchAction.executeLocalSearch(TransportSearchAction.java:541) ~[opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.search.TransportSearchAction.lambda$executeRequest$3(TransportSearchAction.java:333) ~[opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.ActionListener$1.onResponse(ActionListener.java:76) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:127) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:100) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.search.TransportSearchAction.executeRequest(TransportSearchAction.java:376) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:259) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:120) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:192) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.indexmanagement.rollup.actionfilter.FieldCapsFilter.apply(FieldCapsFilter.kt:141) [opensearch-index-management-1.1.0.0.jar:1.1.0.0]
 at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:190) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.performanceanalyzer.action.PerformanceAnalyzerActionFilter.apply(PerformanceAnalyzerActionFilter.java:99) [opensearch-performance-analyzer-1.1.0.0.jar:1.1.0.0]
 at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:190) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.support.TransportAction.execute(TransportAction.java:168) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.support.TransportAction.execute(TransportAction.java:96) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.client.node.NodeClient.executeLocally(NodeClient.java:99) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.client.node.NodeClient.doExecute(NodeClient.java:88) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.client.support.AbstractClient.execute(AbstractClient.java:428) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.client.support.AbstractClient.search(AbstractClient.java:561) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.ad.transport.handler.ADSearchHandler.validateRole(ADSearchHandler.java:84) [opensearch-anomaly-detection-1.1.0.0.jar:1.1.0.0]
 at org.opensearch.ad.transport.handler.ADSearchHandler.search(ADSearchHandler.java:72) [opensearch-anomaly-detection-1.1.0.0.jar:1.1.0.0]
 at org.opensearch.ad.transport.SearchADTasksTransportAction.doExecute(SearchADTasksTransportAction.java:50) [opensearch-anomaly-detection-1.1.0.0.jar:1.1.0.0]
 at org.opensearch.ad.transport.SearchADTasksTransportAction.doExecute(SearchADTasksTransportAction.java:39) [opensearch-anomaly-detection-1.1.0.0.jar:1.1.0.0]
 at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:192) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.indexmanagement.rollup.actionfilter.FieldCapsFilter.apply(FieldCapsFilter.kt:141) [opensearch-index-management-1.1.0.0.jar:1.1.0.0]
 at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:190) [opensearch-1.1.0.jar:1.1.0]
Anomaly detector - Unable to search detectors { Error: [index_not_found_exception] no such index [.opendistro-anomaly-detection-state], with { index=".opendistro-anomaly-detection-state" & resource.id=".opendistro-anomaly-detection-state" & resource.type="index_or_alias" & index_uuid="_na_" }
    at respond (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:349:15)
    at checkRespForFailure (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:306:7)
    at HttpConnector.<anonymous> (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)
    at IncomingMessage.wrapper (/usr/share/opensearch-dashboards/node_modules/lodash/lodash.js:4991:19)
    at IncomingMessage.emit (events.js:203:15)
    at endReadableNT (_stream_readable.js:1145:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  status: 404,
  displayName: 'NotFound',
  message:
   '[index_not_found_exception] no such index [.opendistro-anomaly-detection-state], with { index=".opendistro-anomaly-detection-state" & resource.id=".opendistro-anomaly-detection-state" & resource.type="index_or_alias" & index_uuid="_na_" }',
  path: '/_plugins/_anomaly_detection/detectors/tasks/_search',
  query: {},
  body:
   { error:
      { root_cause: [Array],
        type: 'index_not_found_exception',
        reason: 'no such index [.opendistro-anomaly-detection-state]',
        index: '.opendistro-anomaly-detection-state',
        'resource.id': '.opendistro-anomaly-detection-state',
        'resource.type': 'index_or_alias',
        index_uuid: '_na_' },
     status: 404 },
  statusCode: 404,
  response:
   '{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [.opendistro-anomaly-detection-state]","index":".opendistro-anomaly-detection-state","resource.id":".opendistro-anomaly-detection-state","resource.type":"index_or_alias","index_uuid":"_na_"}],"type":"index_not_found_exception","reason":"no such index [.opendistro-anomaly-detection-state]","index":".opendistro-anomaly-detection-state","resource.id":".opendistro-anomaly-detection-state","resource.type":"index_or_alias","index_uuid":"_na_"},"status":404}',
  toString: [Function],
  toJSON: [Function] }
 at org.opensearch.performanceanalyzer.action.PerformanceAnalyzerActionFilter.apply(PerformanceAnalyzerActionFilter.java:99) [opensearch-performance-analyzer-1.1.0.0.jar:1.1.0.0]
 at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:190) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.support.TransportAction.execute(TransportAction.java:168) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.action.support.TransportAction.execute(TransportAction.java:96) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.client.node.NodeClient.executeLocally(NodeClient.java:99) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.client.node.NodeClient.doExecute(NodeClient.java:88) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.client.support.AbstractClient.execute(AbstractClient.java:428) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.ad.rest.AbstractSearchAction.lambda$prepareRequest$0(AbstractSearchAction.java:101) [opensearch-anomaly-detection-1.1.0.0.jar:1.1.0.0]
 at org.opensearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:123) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.rest.RestController.dispatchRequest(RestController.java:271) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.rest.RestController.tryAllHandlers(RestController.java:353) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.rest.RestController.dispatchRequest(RestController.java:204) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:332) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:397) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:322) [opensearch-1.1.0.jar:1.1.0]
 at org.opensearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:55) [transport-netty4-client-1.1.0.jar:1.1.0]
 at org.opensearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:41) [transport-netty4-client-1.1.0.jar:1.1.0]
 at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at org.opensearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:71) [transport-netty4-client-1.1.0.jar:1.1.0]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.59.Final.jar:4.1.59.Final]
 at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.59.Final.jar:4.1.59.Final]
 at java.lang.Thread.run(Thread.java:832) [?:?]
172.22.8.1 - tlacuache [22/Nov/2021:21:25:51 +0000] "GET /dashboards/api/anomaly_detectors/_indices?index=*sample-* HTTP/1.1" 200 37 "https://localhost/dashboards/app/anomaly-detection-dashboards" "Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0"
172.22.8.1 - tlacuache [22/Nov/2021:21:25:51 +0000] "GET /dashboards/api/anomaly_detectors/detectors?from=0&search=sample&indices=&size=1000&sortDirection=asc&sortField=name HTTP/1.1" 200 61 "https://localhost/dashboards/app/anomaly-detection-dashboards" "Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0"
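For context, "create a detector using the create detector API" in step 1 means a direct REST call to the AD plugin rather than going through the UI. The sketch below shows the shape of such a request in TypeScript; the endpoint is the documented create-detector API, but the detector name, fields, and index are made up for illustration.

```typescript
// Hypothetical request builder for the AD create-detector API.
// The endpoint path is real; all field values here are illustrative.
function buildCreateDetectorRequest(indexName: string) {
  return {
    method: "POST",
    path: "/_plugins/_anomaly_detection/detectors",
    body: {
      name: "sample-detector",
      description: "created via the API, not the plugin UI",
      time_field: "timestamp",
      indices: [indexName],
      // run the detector every 10 minutes
      detection_interval: { period: { interval: 10, unit: "Minutes" } },
      feature_attributes: [
        {
          feature_name: "avg_value",
          feature_enabled: true,
          aggregation_query: { avg_value: { avg: { field: "value" } } },
        },
      ],
    },
  };
}
```

Creating a detector this way succeeds, but none of the `.opendistro-anomaly-*` system indices are created as a side effect, which is what the Dashboards plugin later trips over.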

Expected behavior
Either the create detector API should create the necessary indices (in which case this bug should be moved to that repo), or the anomaly detection dashboards plugin should handle the missing index gracefully.

Plugins

The security plugin has been removed from the Docker images for both my OpenSearch and Dashboards containers. One custom visualization plugin is installed in Dashboards.

Dashboards:

OpenSearch:

$ /usr/share/opensearch/bin/opensearch-plugin list
opensearch-alerting
opensearch-anomaly-detection
opensearch-asynchronous-search
opensearch-cross-cluster-replication
opensearch-index-management
opensearch-job-scheduler
opensearch-knn
opensearch-notebooks
opensearch-performance-analyzer
opensearch-reports-scheduler
opensearch-sql

Host/Environment (please complete the following information):

  • OS: Docker based on official opensearchproject/opensearch:1.1.0 and opensearchproject/opensearch-dashboards:1.1.0 images, running on top of Debian 11 x86_64.
  • Version 1.1.0

Additional context

A workaround is to issue a start detector job, which causes the appropriate indices to be created:

[2021-11-23T17:45:32,188][INFO ][o.o.j.JobSchedulerPlugin ] [opensearch] JobSweeper started listening to operations on index .opendistro-anomaly-detector-jobs
[2021-11-23T17:45:32,194][INFO ][o.o.c.m.MetadataCreateIndexService] [opensearch] [.opendistro-anomaly-detector-jobs] creating index, cause [api], templates [], shards [1]/[1]
[2021-11-23T17:45:32,228][INFO ][o.o.j.JobSchedulerPlugin ] [opensearch] JobSweeper started listening to operations on index .opendistro-anomaly-detector-jobs
[2021-11-23T17:45:32,317][INFO ][o.o.a.r.h.IndexAnomalyDetectorJobActionHandler] [opensearch] Created .opendistro-anomaly-detectors with mappings.
[2021-11-23T17:45:32,334][INFO ][o.o.c.m.MetadataCreateIndexService] [opensearch] [.opendistro-anomaly-detection-state] creating index, cause [api], templates [], shards [1]/[1]
[2021-11-23T17:45:32,434][INFO ][o.o.a.t.ADTaskManager    ] [opensearch] Created .opendistro-anomaly-detection-state with mappings.
[2021-11-23T17:45:32,484][INFO ][o.o.j.s.JobScheduler     ] [opensearch] Scheduling job id x-PkTX0Bj6KasTa5qLAc for index .opendistro-anomaly-detector-jobs .
[2021-11-23T17:45:49,106][INFO ][o.o.j.s.JobScheduler     ] [opensearch] Descheduling jobId: x-PkTX0Bj6KasTa5qLAc
[2021-11-23T17:45:49,130][INFO ][o.o.a.t.DeleteModelTransportAction] [opensearch] Delete model for x-PkTX0Bj6KasTa5qLAc
[2021-11-23T17:45:49,131][INFO ][o.o.a.t.DeleteModelTransportAction] [opensearch] Deleted model for [x-PkTX0Bj6KasTa5qLAc] with response [null] 
[2021-11-23T17:45:49,131][INFO ][o.o.a.m.CheckpointDao    ] [opensearch] Delete checkpoints of detector x-PkTX0Bj6KasTa5qLAc
[2021-11-23T17:45:49,133][INFO ][o.o.a.t.DeleteModelTransportAction] [opensearch] Finished deleting x-PkTX0Bj6KasTa5qLAc
[2021-11-23T17:45:49,134][INFO ][o.o.a.t.StopDetectorTransportAction] [opensearch] models of detector x-PkTX0Bj6KasTa5qLAc get deleted
[2021-11-23T17:45:49,134][INFO ][o.o.a.r.h.IndexAnomalyDetectorJobActionHandler] [opensearch] AD model deleted successfully for detector x-PkTX0Bj6KasTa5qLAc
[2021-11-23T17:45:49,133][INFO ][o.o.a.m.CheckpointDao    ] [opensearch] 0 checkpoints docs get deleted
[2021-11-23T17:46:18,603][ERROR][o.o.a.u.AlertingException] [opensearch] Alerting error: [.opendistro-alerting-config] IndexNotFoundException[no such index [.opendistro-alerting-config]]
[2021-11-23T17:46:44,629][DEPRECATION][o.o.d.c.m.MetadataCreateIndexService] [opensearch] index name [.opendistro-job-scheduler-lock] starts with a dot '.', in the next major version, index names starting with a dot are reserved for hidden indices and system indices
[2021-11-23T17:46:44,633][INFO ][o.o.c.m.MetadataCreateIndexService] [opensearch] [.opendistro-job-scheduler-lock] creating index, cause [api], templates [], shards [1]/[1]
[2021-11-23T17:46:45,398][INFO ][o.o.c.m.MetadataCreateIndexService] [opensearch] [.opendistro-ism-managed-index-history-2021.11.23-1] creating index, cause [api], templates [], shards [1]/[1]

But then (if I don't want the detector started immediately) I have to turn around and issue a stop detector job. This seems hacky and inelegant.
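The start/stop round-trip described above can be scripted. This is a rough sketch (TypeScript, assuming a Node 18+ `fetch` and a local unauthenticated test cluster); the `_start` and `_stop` endpoints are the documented AD job APIs, while the helper names and wiring are illustrative only.

```typescript
// Hypothetical helper: build the AD job endpoint for a detector.
// "_start" creates the job (and the .opendistro-* indices as a side effect);
// "_stop" removes it again if you don't want the detector running yet.
function detectorJobPath(detectorId: string, action: "start" | "stop"): string {
  return `/_plugins/_anomaly_detection/detectors/${encodeURIComponent(detectorId)}/_${action}`;
}

// Sketch of the workaround: start, then immediately stop, the detector job.
// baseUrl is an assumption for a local test cluster (e.g. http://localhost:9200).
async function bounceDetectorJob(baseUrl: string, detectorId: string): Promise<void> {
  for (const action of ["start", "stop"] as const) {
    const res = await fetch(baseUrl + detectorJobPath(detectorId, action), { method: "POST" });
    if (!res.ok) throw new Error(`${action} failed with HTTP ${res.status}`);
  }
}
```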

"indices not found"-style errors when detectors were created via API

Describe the bug

With the v1.2.0 release I was closely watching the fix for #109 (some more details are in #155, which I logged and then closed as a duplicate). I am still unable to view the anomaly detection dashboard after creating detectors using the create detector API. The dashboard UI shows Unable to get all detectors errors, and the console output shows various exceptions about indices not being found (opensearch-ad-plugin-result-* and/or .opendistro-alerting-config).

To Reproduce
Steps to reproduce the behavior:

  1. Create a detector using the create detector API via curl or some similar tool, not via the plugin UI
  2. Validate that the detector was created correctly from the API output and with the search detector API
  3. Open Dashboards and navigate to the anomaly detection dashboards plugin UI
  4. Unable to get all detectors errors are displayed in the UI, and the following exceptions are written to the Dashboards console output:

Anomaly Detection Dashboard UI

Anomaly detector - Unable to search detectors { Error: [illegal_argument_exception] No indices found
    at respond (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:349:15)
    at checkRespForFailure (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:306:7)
    at HttpConnector.<anonymous> (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)
    at IncomingMessage.wrapper (/usr/share/opensearch-dashboards/node_modules/lodash/lodash.js:4991:19)
    at IncomingMessage.emit (events.js:203:15)
    at endReadableNT (_stream_readable.js:1145:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  status: 400,
  displayName: 'BadRequest',
  message: '[illegal_argument_exception] No indices found',
  path:
   '/_plugins/_anomaly_detection/detectors/results/_search/opensearch-ad-plugin-result-*?only_query_custom_result_index=false',
  query: {},
  body:
   { error:
      { root_cause: [Array],
        type: 'illegal_argument_exception',
        reason: 'No indices found' },
     status: 400 },
  statusCode: 400,
  response:
   '{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"No indices found"}],"type":"illegal_argument_exception","reason":"No indices found"},"status":400}',
  toString: [Function],
  toJSON: [Function] }

Anomaly Detection Detectors UI

[2021-11-29T17:31:08,810][ERROR][o.o.a.u.AlertingException] [opensearch] Alerting error: [.opendistro-alerting-config] IndexNotFoundException[no such index [.opendistro-alerting-config]]
Unable to get monitor on top of detector { Error: [alerting_exception] Configured indices are not found: [.opendistro-alerting-config]
    at respond (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:349:15)
    at checkRespForFailure (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:306:7)
    at HttpConnector.<anonymous> (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)
    at IncomingMessage.wrapper (/usr/share/opensearch-dashboards/node_modules/lodash/lodash.js:4991:19)
    at IncomingMessage.emit (events.js:203:15)
    at endReadableNT (_stream_readable.js:1145:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  status: 404,
  displayName: 'NotFound',
  message:
   '[alerting_exception] Configured indices are not found: [.opendistro-alerting-config]',
  path: '/_plugins/_alerting/monitors/_search',
  query: {},
  body:
   { error:
      { root_cause: [Array],
        type: 'alerting_exception',
        reason:
         'Configured indices are not found: [.opendistro-alerting-config]',
        caused_by: [Object] },
     status: 404 },
  statusCode: 404,
  response:
   '{"error":{"root_cause":[{"type":"alerting_exception","reason":"Configured indices are not found: [.opendistro-alerting-config]"}],"type":"alerting_exception","reason":"Configured indices are not found: [.opendistro-alerting-config]","caused_by":{"type":"exception","reason":"org.opensearch.index.IndexNotFoundException: no such index [.opendistro-alerting-config]"}},"status":404}',
  toString: [Function],
  toJSON: [Function] }
Anomaly detector - Unable to search detectors { Error: [illegal_argument_exception] No indices found
    at respond (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:349:15)
    at checkRespForFailure (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:306:7)
    at HttpConnector.<anonymous> (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)
    at IncomingMessage.wrapper (/usr/share/opensearch-dashboards/node_modules/lodash/lodash.js:4991:19)
    at IncomingMessage.emit (events.js:203:15)
    at endReadableNT (_stream_readable.js:1145:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  status: 400,
  displayName: 'BadRequest',
  message: '[illegal_argument_exception] No indices found',
  path:
   '/_plugins/_anomaly_detection/detectors/results/_search/opensearch-ad-plugin-result-*?only_query_custom_result_index=false',
  query: {},
  body:
   { error:
      { root_cause: [Array],
        type: 'illegal_argument_exception',
        reason: 'No indices found' },
     status: 400 },
  statusCode: 400,
  response:
   '{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"No indices found"}],"type":"illegal_argument_exception","reason":"No indices found"},"status":400}',
  toString: [Function],
  toJSON: [Function] }

Expected behavior
Detectors that were successfully created and exist should be displayed in the UI. Either the create detector API should create the necessary indices (in which case this bug should be moved to that repo), or the anomaly detection dashboards plugin should handle the missing indices gracefully.

Plugins
Please list all plugins currently enabled.

For dashboards:

For opensearch:

$ /usr/share/opensearch/bin/opensearch-plugin list
opensearch-alerting
opensearch-anomaly-detection
opensearch-asynchronous-search
opensearch-cross-cluster-replication
opensearch-index-management
opensearch-job-scheduler
opensearch-knn
opensearch-observability
opensearch-performance-analyzer
opensearch-reports-scheduler
opensearch-sql

Host/Environment (please complete the following information):

  • OS: Docker based on official opensearchproject/opensearch:1.2.0 and opensearchproject/opensearch-dashboards:1.2.0 images, running on top of Debian 11 x86_64.
  • Version 1.2.0

Additional context

This is actually worse than what I was experiencing in v1.1.0. There, at least, there was a workaround: starting and then stopping a detector job caused the necessary indices to be created. I have no workaround at the moment.

Catch all possible exceptions when making calls to the cluster from client-side

Currently, many of the client-side cluster calls made from this plugin are of the form:

await dispatch(<some-function-that-will-call-an-API>)

which creates a Promise that the await waits on. However, if an exception is thrown while making the call and there is no .catch() block, the exception may go unhandled in a way that leaves the frontend plugin in a broken state. An example: if a user is viewing a detector with a custom result index, then deletes that index and refreshes the detector detail page, the page gets stuck in a loading state forever. The page enters the loading state when it starts fetching results, but the call throws an unhandled exception (since the results index no longer exists), so the loading state is never cleared.

Ideally, every such call should be followed by a catch block that handles any client-side failure appropriately, as well as a then block for code that should run after the Promise resolves successfully. This is already done in many places in the plugin; see here for a good example. There are examples of then blocks here.
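The pattern can be sketched as follows (names are illustrative stand-ins, not the plugin's actual code): without a catch, a rejected dispatch skips the line that clears the loading flag; with catch/finally, the UI state is always resolved.

```typescript
// Illustrative stand-ins for the plugin's Redux-style dispatch and page state.
type Dispatch = (action: Promise<unknown>) => Promise<unknown>;

interface PageState {
  isLoading: boolean;
  errorMessage?: string;
}

// Anti-pattern: if the API call rejects, isLoading is never cleared.
async function fetchResultsUnsafe(dispatch: Dispatch, call: Promise<unknown>, state: PageState) {
  state.isLoading = true;
  await dispatch(call); // a rejection here skips the next line
  state.isLoading = false;
}

// Fixed: catch/finally (equivalently, .then().catch()) guarantees the flag is reset.
async function fetchResultsSafe(dispatch: Dispatch, call: Promise<unknown>, state: PageState) {
  state.isLoading = true;
  try {
    await dispatch(call);
  } catch (e) {
    state.errorMessage = e instanceof Error ? e.message : String(e);
  } finally {
    state.isLoading = false; // runs on both success and failure
  }
}
```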

Heatmap chart's hover template shows an undefined entity list when the chart is empty

When the heatmap chart is empty (no anomalies in the given date range), hovering over a heatmap cell shows 'undefined' in the entity list. This should be changed to show a more appropriate message, such as an empty string or text indicating that there are no entities, since the cell is empty and contains no data.

Screenshot:
Screen Shot 2021-11-19 at 9 23 05 AM

Generate and upload code coverage report through CI workflow

Is your feature request related to a problem? Please describe.
Code coverage information for all the OpenSearch plugins should be shown in the Codecov dashboard (https://app.codecov.io/gh/opensearch-project/).
Related issue: opensearch-project/opensearch-plugins#73

Describe the solution you'd like
Please update the CI workflow to generate code coverage and upload it to Codecov.

Describe alternatives you've considered
n/a

Additional context
The guide for reporting the code coverage for OpenSearch plugins is located here. (https://github.com/opensearch-project/opensearch-plugins/blob/main/TESTING.md#code-coverage-reporting)

Allow filter on _source fields

Is your feature request related to a problem? Please describe.
If you create a detector, you can add a custom data filter, but there is no option to check whether the data filter is correct or not.
I was able to save a completely nonsensical filter and proceed to the next page. Only when I wanted to save the anomaly detector did it realize it could not query with the data filter, and the detector could not be saved.

Describe the solution you'd like

  • I would like to have, at a minimum, a default example for a custom filter. There is only a blank window, so the customer does not even know what is expected. This is not really intuitive.
  • If the customer is a little more experienced, it would be great to have a check button that validates the filter, the same as in the development tools.
  • To support the customer further, it would also be great to have auto-completion, the same as in the development tools but limited to query DSL.
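One possible building block for such a check button: OpenSearch exposes a `_validate/query` API that reports whether a query parses, without executing it. The index and field names below are made up for illustration:

```typescript
// Hypothetical request a "check filter" button could send; _validate/query
// returns { valid: true/false } (plus an explanation with explain=true)
// without running the query against the data.
const validateRequest = {
  method: 'GET',
  path: '/server-metrics/_validate/query?explain=true',
  body: {
    query: {
      bool: {
        filter: [{ range: { cpu_usage: { gt: 0.8 } } }],
      },
    },
  },
};
```

A malformed filter would come back with valid set to false and a parse explanation, which could be surfaced next to the filter input.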

[BUG] Feature breakdown chart doesn't show anomalous interval

On the single-entity detector historical analysis page, the anomaly result line chart doesn't match the feature breakdown chart below.

Screen Shot 2021-10-30 at 1 10 05 PM

We can see 5 anomalies on the anomaly line chart above, but the feature breakdown shows only two spikes around 1400. If we zoom in on the second anomaly, we can see the correct feature breakdown around 1400.
Screen Shot 2021-10-30 at 1 13 18 PM

I suspect the reason is that we always query intervals where anomaly_grade > 0 for the anomaly history line chart, but for the feature breakdown we may sample intervals evenly when there are too many, which can skip the intervals that contain anomalies.

Suggestion: always query and show the intervals that contain anomalies on the feature breakdown chart as well.
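A sketch of what that could look like, assuming the result-index field names anomaly_grade and data_start_time: the feature breakdown would first fetch the anomalous intervals with a query like this, and then sample the remaining intervals evenly.

```typescript
// Hypothetical query: fetch only the intervals with a positive anomaly grade
// inside the displayed time range, so the breakdown chart cannot skip them.
const displayedRange = { gte: 1635570000000, lte: 1635656400000 }; // example epoch millis

const anomalousIntervalsQuery = {
  query: {
    bool: {
      filter: [
        { range: { anomaly_grade: { gt: 0 } } },
        { range: { data_start_time: displayedRange } },
      ],
    },
  },
  sort: [{ data_start_time: 'asc' }],
};
```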

Refactor and simplify validation callout logic on CreateAndReview page

Currently, a lot of the logic for displaying the different validation callouts is handled by state variables in the CreateAndReview component. These state variables should be moved down to the child components that they affect, instead of re-rendering the whole parent component every time they change. More details can be found in this comment.

[BUG] The expected time of HCAD initialization is not correct

Describe the bug
HCAD requires at least 2 intervals to initialize an entity, since the 1st occurrence of the entity won't trigger cold start. I did this to filter out sparse entities that appear only once. We should update the initialization message so the expected time reflects that.

To Reproduce
Steps to reproduce the behavior:

  1. Start a HCAD detector
  2. Check the init message in the real time tab.

Screenshots
Screen Shot 2021-09-29 at 3 43 37 PM

Support zooming on selected heatmap cell anomaly charts

Currently, we support zooming and daily/weekly/monthly aggregations for historical detectors, but only for single-entity detectors. For high-cardinality detectors, when a user selects a heatmap cell, the displayed chart doesn't support any zooming or aggregation functionality. Even with only 1 year of historical data, the charts can get difficult to read, and any sort of zooming capability would be helpful to the user (see screenshot below).

Suggest supporting the same zooming (and maybe daily/weekly/monthly aggregations) for the displayed anomaly charts when the user selects a heatmap cell. This adds consistency across the detector types and easier readability for high-cardinality historical detectors.

Screen Shot 2021-08-25 at 9 13 49 PM

Investigate heatmap chart excessive remounting

The heatmap chart component is fully remounted by any change that triggers a parent function to run (e.g., changing the filter runs a parent function that issues a new query for results). Preferably, it would be mounted only once, with state updated through custom hooks rather than a full remount of the component.

There are a few different ways React may trigger a remount, including a different key prop being passed (which isn't the case here, since no key prop is specified), the parent element being remounted (confirmed this doesn't happen in the parent AnomaliesChart component), or the root elements having different types (see details here).

I suspect this third way is the issue, but I don't see anything obvious that could be causing it. The JSX used to generate the elements is static with regard to element types, and I don't see how any parent element that encapsulates the <AnomalyHeatmapChart> component could be changing when AnomaliesChart is re-rendered.

More details on the bug this has caused is described in the workflows in #91

Occasional missing data even though heatmap cell indicates anomalies

Occasionally, a heatmap cell summary will indicate an anomaly present, but when clicked on, shows 0 available anomalies.

The anomaly summaries and the anomaly data are fetched in 2 different calls, so likely the issue has to do with the time bounds being different between the two queries. If an anomaly is on the edge, it may be getting included in the summary, but not included in the raw results, leading to the discrepancy.

Screenshot of the error:
Screen Shot 2021-11-07 at 5 29 15 PM
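One way to rule out the boundary mismatch (a sketch, not the plugin's actual code) is to compute a single half-open [startTime, endTime) window per cell and pass that same window to both the summary query and the raw-results query:

```typescript
interface TimeRange {
  startTime: number; // inclusive, epoch millis
  endTime: number; // exclusive, epoch millis
}

// Derive the window for one heatmap cell. Because the end of cell i equals
// the start of cell i + 1, an edge anomaly falls into exactly one cell, and
// reusing this range in both queries keeps the summary and raw results aligned.
function cellTimeRange(
  chartStart: number,
  chartEnd: number,
  cellIndex: number,
  numCells: number
): TimeRange {
  const width = Math.floor((chartEnd - chartStart) / numCells);
  return {
    startTime: chartStart + cellIndex * width,
    endTime: chartStart + (cellIndex + 1) * width,
  };
}
```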

Clean up formatting of anomaly grade in charts

Currently, some charts don't perform any formatting on the raw anomaly grade results, leading to a very long decimal.

Change & apply formatting to show only the first three decimal places, to improve readability.
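A minimal formatter for this (assuming three decimal places is the desired precision):

```typescript
// Round the raw anomaly grade to three decimal places for chart display.
function formatAnomalyGrade(grade: number): string {
  return grade.toFixed(3);
}
```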

Fix ESLint so it can be used

Currently, running yarn lint results in some undefined failures. Investigate and fix these so we can utilize ESLint for static analysis, adding header rules, etc.

anomaly-detection-dashboards-plugin-1 ohltyler$ yarn lint
yarn run v1.22.4
$ node ../../scripts/eslint .
Browserslist: caniuse-lite is outdated. Please run:
npx browserslist@latest --update-db
TypeError [ERR_INVALID_ARG_TYPE]: The "to" argument must be of type string. Received type undefined

[BUG] creating Anomaly Detector using the API does not work

Describe the bug
The anomaly detection panel in OpenSearch Dashboards fails after creating a detector via the API ("Unable to get all detectors").
It is also impossible to get/update/delete a detector created using the API ("Unable to get user information from detector").

To Reproduce
Steps to reproduce the behavior:

  1. Create a detector using the create detector API, not via the plugin UI
  2. Open Dashboards and navigate to the anomaly detection dashboards plugin UI

OpenSearch Dashboards logs:

Anomaly detector - Unable to search detectors { Error: [illegal_state_exception] Can't get text on a VALUE_NULL at 1:51
    at respond (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:349:15)
    at checkRespForFailure (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/transport.js:306:7)
    at HttpConnector.<anonymous> (/usr/share/opensearch-dashboards/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)
    at IncomingMessage.wrapper (/usr/share/opensearch-dashboards/node_modules/lodash/lodash.js:4991:19)
    at IncomingMessage.emit (events.js:203:15)
    at endReadableNT (_stream_readable.js:1145:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  status: 500,
  displayName: 'InternalServerError',
  message:
   '[illegal_state_exception] Can\'t get text on a VALUE_NULL at 1:51',
  path: '/_plugins/_anomaly_detection/detectors/_search',
  query: {},
  body:
   { error:
      { root_cause: [Array],
        type: 'illegal_state_exception',
        reason: 'Can\'t get text on a VALUE_NULL at 1:51' },
     status: 500 },
  statusCode: 500,
  response:
   '{"error":{"root_cause":[{"type":"illegal_state_exception","reason":"Can\'t get text on a VALUE_NULL at 1:51"}],"type":"illegal_state_exception","reason":"Can\'t get text on a VALUE_NULL at 1:51"},"status":500}',
  toString: [Function],
  toJSON: [Function] }

OpenSearch logs(when trying to delete the detector):

[2022-01-17T16:14:43,040][ERROR][o.o.a.u.RestHandlerUtils ] [opensearch-node] Wrap exception before sending back to user
org.opensearch.ad.common.exception.AnomalyDetectionException: Unable to get user information from detector QXycaH4BI3aocvnW0ms2
	at org.opensearch.ad.util.ParseUtils.onGetAdResponse(ParseUtils.java:561) [opensearch-anomaly-detection-1.2.0.0-SNAPSHOT.jar:1.2.0.0-SNAPSHOT]
	at org.opensearch.ad.util.ParseUtils.lambda$getDetector$0(ParseUtils.java:517) [opensearch-anomaly-detection-1.2.0.0-SNAPSHOT.jar:1.2.0.0-SNAPSHOT]
	at org.opensearch.action.ActionListener$1.onResponse(ActionListener.java:78) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.TransportAction$1.onResponse(TransportAction.java:103) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.TransportAction$1.onResponse(TransportAction.java:97) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$2.handleResponse(TransportSingleShardAction.java:287) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$2.handleResponse(TransportSingleShardAction.java:273) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.security.transport.SecurityInterceptor$RestoringTransportResponseHandler.handleResponse(SecurityInterceptor.java:302) [opensearch-security-1.2.0.0.jar:1.2.0.0]
	at org.opensearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1347) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.transport.TransportService$DirectResponseChannel.processResponse(TransportService.java:1425) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1405) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.transport.TaskTransportChannel.sendResponse(TaskTransportChannel.java:65) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.ChannelActionListener.onResponse(ChannelActionListener.java:57) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.ChannelActionListener.onResponse(ChannelActionListener.java:40) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.ActionRunnable.lambda$supply$0(ActionRunnable.java:71) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.ActionRunnable$2.doRun(ActionRunnable.java:86) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:792) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:50) [opensearch-1.2.0.jar:1.2.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
	at java.lang.Thread.run(Thread.java:832) [?:?]
[2022-01-17T16:14:43,042][WARN ][r.suppressed             ] [opensearch-node] path: /_plugins/_anomaly_detection/detectors/QXycaH4BI3aocvnW0ms2, params: {pretty=true, detectorID=QXycaH4BI3aocvnW0ms2}
org.opensearch.OpenSearchStatusException: Unable to get user information from detector QXycaH4BI3aocvnW0ms2
	at org.opensearch.ad.util.RestHandlerUtils.lambda$wrapRestActionListener$2(RestHandlerUtils.java:205) [opensearch-anomaly-detection-1.2.0.0-SNAPSHOT.jar:1.2.0.0-SNAPSHOT]
	at org.opensearch.action.ActionListener$1.onFailure(ActionListener.java:86) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.ad.util.ParseUtils.onGetAdResponse(ParseUtils.java:561) [opensearch-anomaly-detection-1.2.0.0-SNAPSHOT.jar:1.2.0.0-SNAPSHOT]
	at org.opensearch.ad.util.ParseUtils.lambda$getDetector$0(ParseUtils.java:517) [opensearch-anomaly-detection-1.2.0.0-SNAPSHOT.jar:1.2.0.0-SNAPSHOT]
	at org.opensearch.action.ActionListener$1.onResponse(ActionListener.java:78) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.TransportAction$1.onResponse(TransportAction.java:103) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.TransportAction$1.onResponse(TransportAction.java:97) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$2.handleResponse(TransportSingleShardAction.java:287) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction$2.handleResponse(TransportSingleShardAction.java:273) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.security.transport.SecurityInterceptor$RestoringTransportResponseHandler.handleResponse(SecurityInterceptor.java:302) [opensearch-security-1.2.0.0.jar:1.2.0.0]
	at org.opensearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1347) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.transport.TransportService$DirectResponseChannel.processResponse(TransportService.java:1425) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1405) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.transport.TaskTransportChannel.sendResponse(TaskTransportChannel.java:65) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.ChannelActionListener.onResponse(ChannelActionListener.java:57) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.support.ChannelActionListener.onResponse(ChannelActionListener.java:40) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.ActionRunnable.lambda$supply$0(ActionRunnable.java:71) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.action.ActionRunnable$2.doRun(ActionRunnable.java:86) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:792) [opensearch-1.2.0.jar:1.2.0]
	at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:50) [opensearch-1.2.0.jar:1.2.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
	at java.lang.Thread.run(Thread.java:832) [?:?]

Expected behavior
Detectors that were successfully created and exist should be displayed in the UI.
Detectors are removed successfully using the API.

Plugins
OpenSearch Dashboards:

OpenSearch:

$ /usr/share/opensearch/bin/opensearch-plugin list
opensearch-alerting
opensearch-anomaly-detection
opensearch-asynchronous-search
opensearch-cross-cluster-replication
opensearch-index-management
opensearch-job-scheduler
opensearch-knn
opensearch-observability
opensearch-performance-analyzer
opensearch-reports-scheduler
opensearch-security
opensearch-sql

Host/Environment (please complete the following information):

  • OS: Docker based on official opensearchproject/opensearch:1.2.0 and opensearchproject/opensearch-dashboards:1.2.0 images, Centos7
  • Version 1.2.0

Additional context
Installing the AD plugin 1.2.0.1 did not help

Ensure plugins work with Dashboards Node v14.18.2

Dashboards has upgraded Node to v14.18.2 and this is for plugin teams to verify that their plugins work with the new version upgrade. The Dashboards branch running the upgraded node is feature/node14.

A list of things to check is:

  • Your plugin's node version is bumped to 14.18.2
  • Your plugin's @types/node package is bumped to ^14.17.32 if it's used
  • Plugin builds and runs on Dashboards
  • All of your plugin's tests pass
  • Your GitHub workflows are not broken by the node version change, and are updated if they are
  • Any sanity/smoke testing is successful

HC historical analysis result heat map doesn't auto load result

Describe the bug

Create a HC detector. Then go to the historical analysis page, start a historical job, and stay on the page. The heat map doesn't show new anomaly result data automatically. The same happens for detectors using either the default or a custom result index.

After refreshing the whole historical analysis page, the anomaly result data does show.

Change heatmap chart selections from per-cell to per-row

Currently, heatmap charts allow users to select and view results on a per-cell granularity, which means results can only be viewed per-entity, per-time-chunk.

This should be changed to being able to select and view results on a per-row granularity, which means results can be viewed per-entity, and not limited by a time chunk. By clicking on a row, the entire entity's results (based on the time range for the entire heatmap chart) should be populated in the anomaly chart / anomaly results table / feature chart.

The view of the heatmap chart can remain the same; only the selection and corresponding populated results should be changed.

Some advantages of this:

  • closes the gap on the UI differences between non-HC and HC by removing the celled approach for HC, and showing the entire results per-entity
  • a more holistic view of an entity's results, rather than having to click on different cells to view different anomalies for the same entity
  • because zooming is supported in the individual entity charts, users can still zoom in to a particular time range chunk, removing the need for the automatic time chunking done by the heatmap cells

Add documentation on formatting in README

Auto-formatting can be done in a few different ways; these should be added to the README somewhere:

  1. Using VS code - there is a "Prettier - Code formatter" extension, and coupled with "Format on save" set to true, allows for auto-formatting whenever saving files.
  2. Using the CLI - there is a prettier yarn dependency that can format files via running yarn prettier <file-to-format> --write in the base plugin repo.

Release version 1.2

Coming from opensearch-build#567, release version 1.2. Please follow the checklist below.

Preparation

  • Assign this issue to a release owner.
  • Finalize scope and feature set and update the Public Roadmap.
  • Create, update, triage and label all features and issues targeted for this release with v1.2.0.

CI/CD

  • Increment version on main to 1.2.0.0.
  • Ensure working and passing CI.
  • Re(add) this repo to the manifest.

Pre-Release

  • Branch and build from a 1.2 branch.
  • Update your branch in the manifest.
  • Feature complete, pencils down.
  • Fix bugs that target this release.

Release

  • Complete documentation.
  • Gather, review and publish release notes.
  • Verify all issues labeled for this release are closed or labeled for the next release.

Post Release

Extract component titles into variables

Currently, a lot of the components (specifically, anomaly chart components) have hard-coded names in them, repeated in several places throughout the codebase. Example: here

Prefer to extract these names into a standard set of constants that can be referenced by the components, to ease code maintainability and readability.
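A sketch of what such a constants module could look like (the title strings below are placeholders, not the plugin's actual titles); components would import from it instead of repeating the literals:

```typescript
// Hypothetical shared title constants; `as const` keeps the values readonly
// and narrowly typed, so consumers cannot drift from the canonical strings.
const CHART_TITLES = {
  ANOMALY_OCCURRENCES: 'Anomaly occurrences',
  FEATURE_BREAKDOWN: 'Feature breakdown',
  LIVE_ANOMALIES: 'Live anomalies',
} as const;
```

Renaming a chart then becomes a one-line change instead of a search-and-replace across components and tests.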

Add an alerting integration test suite to Cypress

Add integration tests to test the integration with alerting plugin (links work properly, plugins recognize when the other one is installed, etc.). Can add a cypress environment variable (e.g., ALERTING) to run this specific set of tests, and set to false by default locally (as it may be cumbersome for developers to set up a full dev env to test this scenario).

In the GitHub workflows, we may set this flag to true by default, since the workflow should have all plugins installed in its environment, as it uses the most recent officially released docker images of OpenSearch and OpenSearch Dashboards.
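The gating logic could be as simple as the hypothetical helper below (in a Cypress spec the flag would likely be read via Cypress.env('ALERTING') instead of a plain env object):

```typescript
// Decide whether the alerting integration specs should run, defaulting to
// false when the ALERTING flag is unset (the expected local-dev default).
function shouldRunAlertingTests(env: Record<string, string | undefined>): boolean {
  return env.ALERTING === 'true';
}
```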

Update plugin description

Coming from opensearch-project/opensearch-plugins#92.

Each plugin has a short descriptive text blurb that says what it does: this appears in plugin-descriptor.properties (OpenSearch plugins) or package.json (OpenSearch Dashboards plugins), as well as on the project website's "source" page and in the "About" section of every repo. These were created one-by-one over the years as plugins were created, so looking at them all together now it would be hard for somebody new to the project to use these to understand what the plugins do.

Update sample detectors names & indices

Currently, sample detectors & their corresponding indices used to detect over are prefixed with opendistro-. This prefix will need to be removed.

Note this will involve updating logic used to determine the existence of created sample detectors as well.

Will need to consider backwards compatibility to accommodate users who have already created detectors/indices with the opendistro- prefix.

Add error handling to server-side functions making multiple API calls

Currently, some server-side functions (e.g., in server/routes/ad.ts) make multiple API calls (e.g., getDetectors()). If a call fails partway through, only an exception may be caught and returned, even if partial data was already retrieved. The code should be more robust to errors by formatting and returning a partial response where applicable, instead of an error response.

A good example of where this can be updated:
in getDetectors(), a call is made to fetch basic detector info, then subsequent calls fetch detector results, real-time tasks, and historical tasks. If the detector-results call throws an exception, currently only an error is returned. Ideally, we should store that error but continue through the function to retrieve the remaining detector info. When returning, we can send back whatever info was retrieved along with an error message, so the frontend can show partial detector results plus an error toast describing any exceptions from the API calls. For missing data, we can use default empty/unknown values (e.g., if the call to get detector state fails, set it to some UNKNOWN state).
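A sketch of that shape (hypothetical signatures, not the actual server code): per-detector call failures are recorded but don't abort the loop, so the route can return partial data plus an error string:

```typescript
interface PartialResponse<T> {
  data: T;
  error?: string;
}

// Fetch results for every detector; a failed per-detector call leaves that
// entry as undefined ("unknown") and records an error instead of throwing.
async function getDetectorsPartial(
  fetchDetectorIds: () => Promise<string[]>,
  fetchResults: (id: string) => Promise<number>
): Promise<PartialResponse<Record<string, number | undefined>>> {
  const data: Record<string, number | undefined> = {};
  let error: string | undefined;
  const ids = await fetchDetectorIds(); // if this fails, there is nothing to return
  for (const id of ids) {
    try {
      data[id] = await fetchResults(id);
    } catch {
      data[id] = undefined; // default "unknown" data for this detector
      error = `Failed to fetch results for detector ${id}`;
    }
  }
  return { data, error };
}
```

The frontend can then render whatever arrived and show the error string as a toast, instead of an empty page.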

Unable to get all detectors on detector list page

Describe the bug
If AD backend-role filtering is enabled, an "Unable to get all detectors" error message pops up when opening the detector list page on a security-enabled cluster.
The response shows

{"ok":false,"error":"Error retrieving all detector states: Fail to fetch profile for QffaQHkBOdTQCy_4f5bT Exceptions: [Cannot find hosted model or related checkpoint]"}

If some detector has no model running and has no checkpoint, we should handle this exception when getting the profile and show its state as stopped.
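As a sketch only (matching on the message string is brittle and purely illustrative), the profile failure could be mapped to a state instead of propagated:

```typescript
// Hypothetical mapping: a missing model/checkpoint means the detector is not
// running, so show it as STOPPED rather than failing the whole list call.
function detectorStateFromProfileError(message: string): 'STOPPED' | 'UNKNOWN' {
  return message.includes('Cannot find hosted model or related checkpoint')
    ? 'STOPPED'
    : 'UNKNOWN';
}
```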

Showing "Detector not found" when deleting detector and clicking on new one in detector list

This is a flaky bug seen only when the cluster is under heavy load. When deleting a detector, the user is redirected to the detector list page. If the user quickly clicks on another detector, it will occasionally fail with the message "Detector not found" even though the detector exists. Retrying the click usually works.

Investigate the state update error that is causing this issue

Update detector data model

The current detector data model doesn't perfectly correspond with the backend implementation. It should be updated for consistency with the backend plugin, and for easier code readability and logic.

For example, all task-prefixed fields in the Detector correspond to the historical task, even though real-time detection also uses tasks. These should be refactored and grouped into real-time and historical task info. Backwards compatibility will need to be considered as well.

Release version 1.1.1

This is a component issue for release 1.1.1.
Coming from release issue 1.1.1, release version 1.1.1. Please follow the checklist below.

How to use this component issue

This Component Issue

This component issue captures the state of the OpenSearch release, on component/plugin level, its assignee is responsible for driving the release of the component. Please contact them or @mention them on this issue for help.

Release Steps

There are several steps to the release process, components that are behind present risk to the release. Component owners resolve tasks on this ticket to communicate with the overall release owner.

Steps have completion dates for coordinating efforts between the components of a release; components can start as soon as they are ready far in advance of a future release.

You can find all the corresponding dates of each step in the release issue above.

What should I do if my plugin isn't making any changes?

If including changes in this release, increment the version on 1.1 branch to 1.1.1 for Min/Core, and 1.1.1.0 for components. Otherwise, keep the version number unchanged for both.

Preparation

  • Assign this issue to a release owner.
  • All the tasks in this issue have been reviewed by the release owner.
  • Create, update, triage and label all features and issues targeted for this release with v1.1.1.

CI/CD

  • All code changes for 1.1.1 are complete.
  • Ensure working and passing CI.

Pre-Release

  • Confirm that all changes for 1.1.1 have been merged.
  • Complete integration and sanity tests, and update results in the comment, example.
  • Find/fix bugs using latest tarball and docker image provided in meta issue.
  • Completed release candidate testing build #TBD.
  • All intermittent test failures have issues filed.

Release

  • Complete documentation.
  • Gather, review and publish release notes.
  • Verify all issues labeled for this release are closed or labeled for the next release.

Post Release

Live anomaly chart for HC chart shows stacked results

For high-cardinality / category field detectors, the live anomalies chart on the real-time results page may show stacked bars if multiple entities found anomalies during the same interval.

There are a few possible ways to mitigate this issue:

  1. Have some criteria for which anomaly should be shown in the chart, if there are multiple per interval
  2. Remove live anomalies chart and replace with a different way to represent the most recent anomalies for high-cardinality detectors

Update GitHub workflows

Once everything is public, there are 2 changes we can make to the GitHub workflows:

  1. Update integration test & CD workflows to use the names of the published artifacts to run successfully.
  2. Update unit & integration tests to run on PRs rather than on merge into main

Anomaly by index and detector overview page should link to the anomaly detector that detected the anomaly

Is your feature request related to a problem? Please describe.
If an anomaly detector detected an anomaly, it is shown under Anomaly Detection dashboards as a big yellow circle with the index name in the middle and the detector name at the bottom. If you click on it, nothing happens.

Describe the solution you'd like
If you hover over the yellow circle it changes its color, but it is not linked to the detector. The customer is interested in the detector's results; the next thing they would do is click on the detector links in the upper right corner. Wouldn't it be useful to click directly on the yellow circle and be redirected to the anomaly detector overview?

Describe alternatives you've considered
Click on the detector on the right hand side.


Can't show detector list if no detector started

On a new cluster, create a detector but don't start real-time or historical detection. Go to the detector list page; it won't show the detector. Checking the backend OpenSearch log shows this error:

[2021-10-28T22:51:29,301][ERROR][o.o.a.u.RestHandlerUtils ] [integTest-0] Wrap exception before sending back to user
org.opensearch.index.IndexNotFoundException: no such index [.opendistro-anomaly-detection-state]
        at org.opensearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:959) ~[opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:896) ~[opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:852) ~[opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:231) ~[opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:207) ~[opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.search.TransportSearchAction.resolveLocalIndices(TransportSearchAction.java:634) ~[opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.search.TransportSearchAction.executeSearch(TransportSearchAction.java:662) ~[opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.search.TransportSearchAction.executeLocalSearch(TransportSearchAction.java:541) ~[opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.search.TransportSearchAction.lambda$executeRequest$3(TransportSearchAction.java:333) ~[opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.ActionListener$1.onResponse(ActionListener.java:76) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:127) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:100) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.search.TransportSearchAction.executeRequest(TransportSearchAction.java:376) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:259) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:120) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:192) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.support.TransportAction.execute(TransportAction.java:168) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.support.TransportAction.execute(TransportAction.java:96) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.client.node.NodeClient.executeLocally(NodeClient.java:99) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.client.node.NodeClient.doExecute(NodeClient.java:88) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.client.support.AbstractClient.execute(AbstractClient.java:428) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.client.support.AbstractClient.search(AbstractClient.java:561) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.ad.transport.handler.ADSearchHandler.validateRole(ADSearchHandler.java:69) [opensearch-anomaly-detection-1.1.0.0-SNAPSHOT.jar:1.1.0.0-SNAPSHOT]
        at org.opensearch.ad.transport.handler.ADSearchHandler.search(ADSearchHandler.java:57) [opensearch-anomaly-detection-1.1.0.0-SNAPSHOT.jar:1.1.0.0-SNAPSHOT]
        at org.opensearch.ad.transport.SearchADTasksTransportAction.doExecute(SearchADTasksTransportAction.java:35) [opensearch-anomaly-detection-1.1.0.0-SNAPSHOT.jar:1.1.0.0-SNAPSHOT]
        at org.opensearch.ad.transport.SearchADTasksTransportAction.doExecute(SearchADTasksTransportAction.java:24) [opensearch-anomaly-detection-1.1.0.0-SNAPSHOT.jar:1.1.0.0-SNAPSHOT]
        at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:192) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.support.TransportAction.execute(TransportAction.java:168) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.action.support.TransportAction.execute(TransportAction.java:96) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.client.node.NodeClient.executeLocally(NodeClient.java:99) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.client.node.NodeClient.doExecute(NodeClient.java:88) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.client.support.AbstractClient.execute(AbstractClient.java:428) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.ad.rest.AbstractSearchAction.lambda$prepareRequest$0(AbstractSearchAction.java:88) [opensearch-anomaly-detection-1.1.0.0-SNAPSHOT.jar:1.1.0.0-SNAPSHOT]
        at org.opensearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:123) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.rest.RestController.dispatchRequest(RestController.java:271) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.rest.RestController.tryAllHandlers(RestController.java:353) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.rest.RestController.dispatchRequest(RestController.java:204) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:332) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:397) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:322) [opensearch-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:55) [transport-netty4-client-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.opensearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:41) [transport-netty4-client-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at org.opensearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:71) [transport-netty4-client-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.59.Final.jar:4.1.59.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.59.Final.jar:4.1.59.Final]
        at java.lang.Thread.run(Thread.java:832) [?:?]

It looks like this error happens when the detector list page calls the search task API. The search task API returns the error below if no detector has been started yet, because the `.opendistro-anomaly-detection-state` index is only created once a detector runs.

{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [.opendistro-anomaly-detection-state]",
        "index" : ".opendistro-anomaly-detection-state",
        "resource.id" : ".opendistro-anomaly-detection-state",
        "resource.type" : "index_or_alias",
        "index_uuid" : "_na_"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [.opendistro-anomaly-detection-state]",
    "index" : ".opendistro-anomaly-detection-state",
    "resource.id" : ".opendistro-anomaly-detection-state",
    "resource.type" : "index_or_alias",
    "index_uuid" : "_na_"
  },
  "status" : 404
}
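One possible fix on the plugin server side: treat an `index_not_found_exception` from the task search as "no tasks yet" rather than surfacing an error to the detector list page. The sketch below is a hypothetical helper (the names `isIndexNotFound` and `tasksFromResponse` are not from the plugin codebase), assuming the error response has the shape shown above.

```typescript
// Hypothetical helper: swallow "index_not_found_exception" from the task
// search and report an empty task list, since the state index does not
// exist until the first detector is started.
interface OpenSearchErrorResponse {
  status?: number;
  error?: { type?: string; index?: string };
  hits?: { hits: unknown[] };
}

function isIndexNotFound(resp: OpenSearchErrorResponse): boolean {
  // Matches the 404 response body shown above
  return resp.status === 404 && resp.error?.type === 'index_not_found_exception';
}

function tasksFromResponse(resp: OpenSearchErrorResponse): unknown[] {
  if (isIndexNotFound(resp)) {
    // State index not created yet: "no tasks" rather than an error
    return [];
  }
  if (resp.error) {
    throw new Error(`search failed: ${resp.error.type}`);
  }
  return resp.hits?.hits ?? [];
}
```

With this guard, the detector list page would render an empty task list for detectors created via the REST API instead of failing until the state index exists.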
