cryostat3's People

Contributors

aali309, andrewazores, dependabot[bot], maxcao13, mwangggg, tthvo

cryostat3's Issues

[Story] JMC Agent

The server should be able to communicate with and configure JMC Agents. Probe templates should be stored in a specific S3 bucket.

[Story] cryostat repo integration test suite port

Some integration tests have been ported over directly from the cryostat repo but are currently disabled and need fixing, and some have not been ported over at all. Here is the current status of all of them.

| Name | Status | Fixed in PR |
| --- | --- | --- |
| ApiListingIT.java | passes | - |
| ArchivedRecordingJwtDownloadIT.java | not ported | - |
| ArchivedReportJwtDownloadIT.java | not ported | - |
| AutoRulesCleanupIT.java | not ported | - |
| AutoRulesIT.java | not ported | - |
| ClientAssetsIT.java | passes | - |
| CredentialsIT.java | not ported | - |
| CredentialsV2_2IT.java | not ported | - |
| CryostatTemplateIT.java | passes | - |
| CustomTargetsIT.java | broken | - |
| DiscoveryIT.java | not ported | - |
| DiscoveryPluginIT.java | broken | - |
| FileSystemArchivedRequestsIT.java | not ported | - |
| GrafanaSetupIT.java | broken | - |
| GraphQLIT.java | not ported | - |
| HealthIT.java | passes | ? |
| InterleavedExternalTargetRequestsIT.java | not ported | - |
| JmxAuthIT.java | not ported | - |
| JvmIdIT.java | not ported | - |
| MessagingServerIT.java | broken | - |
| NonExistentTargetIT.java | passes | - |
| NoopAuthIT.java | broken | - |
| NoopAuthV2IT.java | broken | - |
| NotificationsUrlIT.java | broken | - |
| RecordingJwtDownloadIT.java | not ported | - |
| RecordingMetadataIT.java | not ported | - |
| RecordingWorkflowIT.java | broken | - |
| ReportIT.java | broken | - |
| ReportJwtDownloadIT.java | not ported | - |
| RulesPostFormIT.java | passes | #34 |
| RulesPostJsonIT.java | passes | #34 |
| RulesPostMultipartIT.java | not ported | - |
| SnapshotIT.java | broken | - |
| TargetEventsGetIT.java | passes | ? |
| TargetPostDeleteIT.java | broken | - |
| TargetRecordingOptionsIT.java | broken | - |
| TargetRecordingPatchIT.java | broken | - |
| TargetRecordingsClientErrorIT.java | broken | - |
| TargetReportIT.java | broken | - |
| TemplateJwtDownloadIT.java | not ported | - |
| TemplatePostDeleteIT.java | broken | - |
| UploadCertificateIT.java | broken | - |
| UploadRecordingIT.java | broken | - |
| WrongServiceListeningOnPortIT.java | not ported | - |

[Task] Update k8s smoketest

./smoketest/k8s/ contains some Kubernetes YAML files defining a smoketest setup meant to run in kind rather than docker-compose. These are out of date as of #3, #4, and #29: the compose file structure has changed, and the CRYOSTAT_JDP_ENABLED env var referenced in the outdated YAMLs is no longer valid.

[Story] Active recording downloads from S3

The API should expose an endpoint that allows clients to download the current contents of JFR files from target applications. In Cryostat 2.4 and prior this is available directly via /api/v1/targets/:targetId/recordings/:recordingName as well as via the JWT-auth'd flow used by graphical clients. The JWT flow replaced the original method due to auth difficulties with graphical clients (web browsers), but it is cumbersome for headless client access. I am in favour of re-implementing neither of these endpoints directly, and instead providing a single v3 endpoint that can serve any kind of client.
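As a sketch of the shape such a single v3 endpoint could take, the following uses only the JDK's built-in HttpServer rather than Cryostat's actual Quarkus stack; the /api/v3/download path, its query parameters, and the currentRecordingBytes helper are all hypothetical:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class V3DownloadSketch {
    // Hypothetical stand-in for streaming the current JFR contents out of a target JVM.
    static byte[] currentRecordingBytes(String targetId, String recordingName) {
        return ("JFR-bytes-for:" + targetId + "/" + recordingName)
                .getBytes(StandardCharsets.UTF_8);
    }

    // Single endpoint for all clients: ordinary HTTP auth applies, so neither the
    // old direct v1 route nor the JWT one-time-URL flow needs to be reimplemented.
    public static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/v3/download", exchange -> {
            // expects a query like ?target=<id>&recording=<name>
            String[] params = exchange.getRequestURI().getQuery().split("&");
            String target = params[0].substring("target=".length());
            String recording = params[1].substring("recording=".length());
            byte[] body = currentRecordingBytes(target, recording);
            exchange.getResponseHeaders().add("Content-Type", "application/octet-stream");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

Because it is a plain authenticated HTTP GET, the same route can back a browser download link and curl or other headless clients.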

[Epic] Provide local filesystem-backed mock S3 service

For users who do not have an external S3 provider available, Cryostat 3 should include a barebones implementation that simply writes through to the local filesystem (e.g. a k8s PVC, as in prior Cryostat versions). Ideally this would be done in-process, but it might be a separate process running in the same container, or perhaps a separate sidecar container in the same pod.
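A minimal sketch of the in-process, filesystem-backed option, assuming a hypothetical FileBlobStore class (not Cryostat's actual API) that maps S3-style bucket/key pairs onto a directory tree:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: a filesystem-backed blob store standing in for S3.
// Bucket names become top-level directories; slashes in keys become subdirectories.
public class FileBlobStore {
    private final Path root;

    public FileBlobStore(Path root) {
        this.root = root;
    }

    public void put(String bucket, String key, byte[] data) throws IOException {
        Path path = root.resolve(bucket).resolve(key);
        Files.createDirectories(path.getParent());
        Files.write(path, data);
    }

    public byte[] get(String bucket, String key) throws IOException {
        return Files.readAllBytes(root.resolve(bucket).resolve(key));
    }
}
```

Running this in-process avoids the extra container, at the cost of reimplementing whatever subset of the S3 semantics (tagging, presigned URLs, etc.) the rest of the server relies on.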

Potential leads:

[Story] Automated Rules

Currently the basic API of Automated Rules is implemented, but the rule definitions do not actually do anything: they never trigger recording creation, archival, etc. This needs to be re-implemented.
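The missing wiring could look roughly like the following sketch, where Rule, Target, and triggeredRules are hypothetical simplifications of the real classes and match expressions are reduced to plain predicates:

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch: when a target appears, evaluate each rule's match expression against it
// and report which rules should trigger recording creation for that target.
public class RuleTriggerSketch {
    record Target(String alias) {}

    record Rule(String name, Predicate<Target> matchExpression) {}

    static List<String> triggeredRules(Target target, List<Rule> rules) {
        return rules.stream()
                .filter(rule -> rule.matchExpression().test(target))
                .map(Rule::name)
                .toList();
    }
}
```

In the real implementation the trigger would start (and later archive) a recording on the matched target rather than just returning the rule names, and would also need to fire on rule creation for already-discovered targets.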

[Task] Change BOM or override S3 versions

The pom.xml definitions for the dependency versions pulled in for AWS SDK/S3 client support etc. need to be either switched to a different BOM, or the BOM removed and each dependency enumerated with a specific version. This is needed so that we can synchronize these dependency versions with what is available for our downstream builds.
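For illustration, the two options might look like this in pom.xml; the software.amazon.awssdk coordinates are the AWS SDK v2 ones, but the exact artifacts and version property are placeholders to be confirmed against the actual build:

```xml
<!-- Option A: import a BOM and let it pin the SDK artifact versions -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>software.amazon.awssdk</groupId>
      <artifactId>bom</artifactId>
      <version>${aws.sdk.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- Option B: drop the BOM and enumerate each artifact with an explicit version,
     so versions can be synchronized with what downstream builds provide -->
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>s3</artifactId>
  <version>${aws.sdk.version}</version>
</dependency>
```

Option B is more verbose but makes each version visible and overridable, which is what the downstream synchronization requires.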

[Task] Clean up Spotbugs deficiencies or suppress

#112 turned on Spotbugs, which reveals several potential problem areas already:

https://github.com/cryostatio/cryostat3/actions/runs/6579676143/job/17876067218#step:15:1595

[INFO] --- spotbugs:4.7.3.6:check (spotbugs) @ cryostat3 ---
[INFO] BugInstance size is 34
[INFO] Error size is 0
[INFO] Total bugs: 34
Error:  Medium: Dead store to pod in io.cryostat.discovery.ContainerDiscovery.handleContainerEvent(ContainerDiscovery$ContainerSpec, Target$EventKind) [io.cryostat.discovery.ContainerDiscovery] At ContainerDiscovery.java:[line 341] DLS_DEAD_LOCAL_STORE
Error:  High: io.cryostat.discovery.DiscoveryNode.NODE_TYPE isn't final but should be [io.cryostat.discovery.DiscoveryNode] At DiscoveryNode.java:[line 58] MS_SHOULD_BE_FINAL
Error:  High: io.cryostat.discovery.DiscoveryNode.POD isn't final but should be [io.cryostat.discovery.DiscoveryNode] At DiscoveryNode.java:[line 61] MS_SHOULD_BE_FINAL
Error:  High: io.cryostat.discovery.DiscoveryNode.REALM isn't final but should be [io.cryostat.discovery.DiscoveryNode] At DiscoveryNode.java:[line 60] MS_SHOULD_BE_FINAL
Error:  High: io.cryostat.discovery.DiscoveryNode.UNIVERSE isn't final but should be [io.cryostat.discovery.DiscoveryNode] At DiscoveryNode.java:[line 59] MS_SHOULD_BE_FINAL
Error:  High: Method io.cryostat.discovery.DiscoveryNode$Listener.onMessage(Target$TargetDiscovery) seems to be useless [io.cryostat.discovery.DiscoveryNode$Listener] At DiscoveryNode.java:[line 167] UC_USELESS_VOID_METHOD
Error:  High: Found reliance on default encoding in io.cryostat.discovery.DiscoveryPlugin$PluginCallback$AuthorizationFilter.filter(ClientRequestContext): String.getBytes() [io.cryostat.discovery.DiscoveryPlugin$PluginCallback$AuthorizationFilter] At DiscoveryPlugin.java:[line 145] DM_DEFAULT_ENCODING
Error:  High: Null pointer dereference of credential in io.cryostat.discovery.DiscoveryPlugin$PluginCallback$AuthorizationFilter.filter(ClientRequestContext) [io.cryostat.discovery.DiscoveryPlugin$PluginCallback$AuthorizationFilter] Dereferenced at DiscoveryPlugin.java:[line 140] NP_ALWAYS_NULL
Error:  Medium: io.cryostat.events.SerializableEventTypeInfo.category() may expose internal representation by returning SerializableEventTypeInfo.category [io.cryostat.events.SerializableEventTypeInfo] At SerializableEventTypeInfo.java:[line 26] EI_EXPOSE_REP
Error:  Medium: io.cryostat.events.SerializableEventTypeInfo.options() may expose internal representation by returning SerializableEventTypeInfo.options [io.cryostat.events.SerializableEventTypeInfo] At SerializableEventTypeInfo.java:[line 26] EI_EXPOSE_REP
Error:  Medium: io.cryostat.expressions.MatchExpression$ExpressionEvent.expression() may expose internal representation by returning MatchExpression$ExpressionEvent.expression [io.cryostat.expressions.MatchExpression$ExpressionEvent] At MatchExpression.java:[line 148] EI_EXPOSE_REP
Error:  Medium: new io.cryostat.expressions.MatchExpression$ExpressionEvent(MatchExpression$ExpressionEventCategory, MatchExpression) may expose internal representation by storing an externally mutable object into MatchExpression$ExpressionEvent.expression [io.cryostat.expressions.MatchExpression$ExpressionEvent] At MatchExpression.java:[line 149] EI_EXPOSE_REP2
Error:  Medium: io.cryostat.expressions.MatchExpression$MatchedExpression.targets() may expose internal representation by returning MatchExpression$MatchedExpression.targets [io.cryostat.expressions.MatchExpression$MatchedExpression] At MatchExpression.java:[line 92] EI_EXPOSE_REP
Error:  Medium: new io.cryostat.expressions.MatchExpression$MatchedExpression(Long, String, Collection) may expose internal representation by storing an externally mutable object into MatchExpression$MatchedExpression.targets [io.cryostat.expressions.MatchExpression$MatchedExpression] At MatchExpression.java:[line 94] EI_EXPOSE_REP2
Error:  Medium: Unread field: io.cryostat.expressions.MatchExpressionEvaluator$MatchExpressionAppliesEvent.matchExpression [io.cryostat.expressions.MatchExpressionEvaluator$MatchExpressionAppliesEvent] At MatchExpressionEvaluator.java:[line 183] URF_UNREAD_FIELD
Error:  Medium: io.cryostat.recordings.EventOptionsBuilder.build() may expose internal representation by returning EventOptionsBuilder.map [io.cryostat.recordings.EventOptionsBuilder] At EventOptionsBuilder.java:[line 90] EI_EXPOSE_REP
Error:  High: Found reliance on default encoding in io.cryostat.recordings.RecordingHelper.createMetadataTagging(Recordings$Metadata): String.getBytes() [io.cryostat.recordings.RecordingHelper] At RecordingHelper.java:[line 626] DM_DEFAULT_ENCODING
Error:  High: Found reliance on default encoding in io.cryostat.recordings.RecordingHelper.lambda$createMetadataTagging$9(Map$Entry): String.getBytes() [io.cryostat.recordings.RecordingHelper, io.cryostat.recordings.RecordingHelper] At RecordingHelper.java:[line 611]Another occurrence at RecordingHelper.java:[line 614] DM_DEFAULT_ENCODING
Error:  Medium: Unread field: io.cryostat.recordings.RecordingHelper.httpTimeoutSeconds; should this field be static? [io.cryostat.recordings.RecordingHelper] At RecordingHelper.java:[line 112] SS_SHOULD_BE_STATIC
Error:  High: Found reliance on default encoding in io.cryostat.recordings.Recordings.lambda$createMetadataTagging$33(Map$Entry): String.getBytes() [io.cryostat.recordings.Recordings, io.cryostat.recordings.Recordings] At Recordings.java:[line 852]Another occurrence at Recordings.java:[line 855] DM_DEFAULT_ENCODING
Error:  Medium: io.cryostat.recordings.Recordings$ArchivedRecordingDirectory.recordings() may expose internal representation by returning Recordings$ArchivedRecordingDirectory.recordings [io.cryostat.recordings.Recordings$ArchivedRecordingDirectory] At Recordings.java:[line 957] EI_EXPOSE_REP
Error:  Medium: io.cryostat.recordings.Recordings$Metadata.labels() may expose internal representation by returning Recordings$Metadata.labels [io.cryostat.recordings.Recordings$Metadata] At Recordings.java:[line 968] EI_EXPOSE_REP
Error:  Medium: new io.cryostat.recordings.Recordings$Metadata(Map, Instant) may expose internal representation by storing an externally mutable object into Recordings$Metadata.labels [io.cryostat.recordings.Recordings$Metadata] At Recordings.java:[line 969] EI_EXPOSE_REP2
Error:  Medium: Unread public/protected field: io.cryostat.rules.Rule.description [io.cryostat.rules.Rules] At Rules.java:[line 120] URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD
Error:  Medium: io.cryostat.rules.Rule$RuleEvent.rule() may expose internal representation by returning Rule$RuleEvent.rule [io.cryostat.rules.Rule$RuleEvent] At Rule.java:[line 124] EI_EXPOSE_REP
Error:  Medium: new io.cryostat.rules.Rule$RuleEvent(Rule$RuleEventCategory, Rule) may expose internal representation by storing an externally mutable object into Rule$RuleEvent.rule [io.cryostat.rules.Rule$RuleEvent] At Rule.java:[line 125] EI_EXPOSE_REP2
Error:  Medium: io.cryostat.rules.RuleService$RuleRecording.recording() may expose internal representation by returning RuleService$RuleRecording.recording [io.cryostat.rules.RuleService$RuleRecording] At RuleService.java:[line 284] EI_EXPOSE_REP
Error:  Medium: io.cryostat.rules.RuleService$RuleRecording.rule() may expose internal representation by returning RuleService$RuleRecording.rule [io.cryostat.rules.RuleService$RuleRecording] At RuleService.java:[line 284] EI_EXPOSE_REP
Error:  Medium: new io.cryostat.rules.RuleService$RuleRecording(Rule, ActiveRecording) may expose internal representation by storing an externally mutable object into RuleService$RuleRecording.recording [io.cryostat.rules.RuleService$RuleRecording] At RuleService.java:[line 285] EI_EXPOSE_REP2
Error:  Medium: new io.cryostat.rules.RuleService$RuleRecording(Rule, ActiveRecording) may expose internal representation by storing an externally mutable object into RuleService$RuleRecording.rule [io.cryostat.rules.RuleService$RuleRecording] At RuleService.java:[line 285] EI_EXPOSE_REP2
Error:  Medium: io.cryostat.targets.Target$Annotations.cryostat() may expose internal representation by returning Target$Annotations.cryostat [io.cryostat.targets.Target$Annotations] At Target.java:[line 128] EI_EXPOSE_REP
Error:  Medium: io.cryostat.targets.Target$Annotations.platform() may expose internal representation by returning Target$Annotations.platform [io.cryostat.targets.Target$Annotations] At Target.java:[line 128] EI_EXPOSE_REP
Error:  Medium: io.cryostat.targets.Target$TargetDiscovery.serviceRef() may expose internal representation by returning Target$TargetDiscovery.serviceRef [io.cryostat.targets.Target$TargetDiscovery] At Target.java:[line 174] EI_EXPOSE_REP
Error:  Medium: new io.cryostat.targets.Target$TargetDiscovery(Target$EventKind, Target) may expose internal representation by storing an externally mutable object into Target$TargetDiscovery.serviceRef [io.cryostat.targets.Target$TargetDiscovery] At Target.java:[line 175] EI_EXPOSE_REP2
[INFO]

To see bug detail using the Spotbugs GUI, use the following command "mvn spotbugs:gui"
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  04:45 min
[INFO] Finished at: 2023-10-19T20:13:21Z
[INFO] ------------------------------------------------------------------------
Error:  Failed to execute goal com.github.spotbugs:spotbugs-maven-plugin:4.7.3.6:check (spotbugs) on project cryostat3: failed with 34 bugs and 0 errors -> [Help 1]

Each of these should be checked and either corrected (the Spotbugs error code can be referenced against the bug descriptions list for potential resolutions), or, if it is a false positive, @SuppressFBWarnings can be added to suppress the warning and allow the CI to pass.
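As an example of the "correct" path, the common EI_EXPOSE_REP/EI_EXPOSE_REP2 findings on record accessors can often be resolved with a defensive copy in a compact constructor; the Metadata record below is a simplified stand-in, not the actual Cryostat class. A confirmed false positive would instead get @SuppressFBWarnings (from the spotbugs-annotations artifact, not shown here) on the member in question.

```java
import java.util.Map;

public class SpotbugsFixSketch {
    // Before: record Metadata(Map<String, String> labels) {} hands back its mutable
    // map directly, so callers can mutate the record's internal state (EI_EXPOSE_REP).
    // After: the compact constructor takes an immutable defensive copy on the way in,
    // so neither the stored map nor the accessor's return value can be mutated.
    record Metadata(Map<String, String> labels) {
        Metadata {
            labels = Map.copyOf(labels);
        }
    }
}
```

Map.copyOf also rejects null keys and values, which is usually the desired contract for label maps anyway.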

Support native image builds

quarkusio/quarkus#11923

[INFO] [io.quarkus.deployment.pkg.steps.NativeImageBuildStep] Running Quarkus native-image plugin on native-image 22.3.0.1-Final Mandrel Distribution (Java Version 17.0.5+8)
[INFO] [io.quarkus.deployment.pkg.steps.NativeImageBuildRunner] podman run --env LANG=C --rm --user 1000:1000 --userns=keep-id -v /home/andrew/workspace/cryostat3/target/cryostat3-3.0.0-SNAPSHOT-native-image-source-jar:/project:z --name build-native-ZVsqN quay.io/quarkus/ubi-quarkus-mandrel-builder-image:22.3-java17 -J-DCoordinatorEnvironmentBean.transactionStatusManagerEnable=false -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dlogging.initial-configurator.min-level=500 -J-Dsun.nio.ch.maxUpdateArraySize=100 -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=3 -J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory -J-Dvertx.disableDnsResolver=true -J-Duser.language=en -J-Duser.country=CA -J-Dfile.encoding=UTF-8 --features=org.hibernate.graalvm.internal.GraalVMStaticFeature,org.hibernate.graalvm.internal.QueryParsingSupport,io.quarkus.websockets.client.runtime.DisableLoggingFeature,io.quarkus.runner.Feature,io.quarkus.runtime.graal.ResourcesFeature,io.quarkus.runtime.graal.DisableLoggingFeature,io.quarkus.hibernate.orm.runtime.graal.DisableLoggingFeature,io.quarkus.jdbc.postgresql.runtime.graal.SQLXMLFeature,io.quarkus.caffeine.runtime.graal.CacheConstructorsFeature -J--add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED -J--add-opens=java.base/java.text=ALL-UNNAMED -J--add-opens=java.base/java.io=ALL-UNNAMED -J--add-opens=java.base/java.lang.invoke=ALL-UNNAMED -J--add-opens=java.base/java.util=ALL-UNNAMED -H:+CollectImageBuildStatistics -H:ImageBuildStatisticsFile=cryostat3-3.0.0-SNAPSHOT-runner-timing-stats.json -H:BuildOutputJSONFile=cryostat3-3.0.0-SNAPSHOT-runner-build-output-stats.json -H:+AllowFoldMethods -J-Djava.awt.headless=true --no-fallback --link-at-build-time -H:+ReportExceptionStackTraces -H:-AddAllCharsets --enable-url-protocols=http,https -H:NativeLinkerOption=-no-pie -H:-UseServiceLoaderFeature -H:+StackTrace 
-J--add-exports=org.graalvm.sdk/org.graalvm.nativeimage.impl=ALL-UNNAMED -J--add-exports=org.graalvm.nativeimage.builder/com.oracle.svm.core.jdk=ALL-UNNAMED cryostat3-3.0.0-SNAPSHOT-runner -jar cryostat3-3.0.0-SNAPSHOT-runner.jar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/project/lib/org.slf4j.slf4j-jdk14-1.7.35.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/project/lib/org.jboss.slf4j.slf4j-jboss-logmanager-1.2.0.Final.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
========================================================================================================================
GraalVM Native Image: Generating 'cryostat3-3.0.0-SNAPSHOT-runner' (executable)...
========================================================================================================================
Warning: Could not register io.netty.handler.codec.compression.Lz4FrameDecoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: net/jpountz/lz4/LZ4Exception.
Warning: Could not register io.netty.handler.codec.compression.Lz4FrameEncoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: net/jpountz/lz4/LZ4Exception.
Warning: Could not register io.netty.handler.codec.marshalling.CompatibleMarshallingDecoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: org/jboss/marshalling/ByteInput.
Warning: Could not register io.netty.handler.codec.marshalling.CompatibleMarshallingEncoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: org/jboss/marshalling/ByteOutput.
Warning: Could not register io.netty.handler.codec.marshalling.MarshallingDecoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: org/jboss/marshalling/ByteInput.
Warning: Could not register io.netty.handler.codec.marshalling.MarshallingEncoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: org/jboss/marshalling/ByteOutput.
Warning: Could not register io.netty.handler.codec.protobuf.ProtobufDecoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: com/google/protobuf/ExtensionRegistryLite.
Warning: Could not register io.netty.handler.codec.compression.Lz4FrameDecoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: net/jpountz/lz4/LZ4Exception.
Warning: Could not register io.netty.handler.codec.compression.Lz4FrameEncoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: net/jpountz/lz4/LZ4Exception.
Warning: Could not register io.netty.handler.codec.marshalling.CompatibleMarshallingDecoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: org/jboss/marshalling/ByteInput.
Warning: Could not register io.netty.handler.codec.marshalling.CompatibleMarshallingEncoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: org/jboss/marshalling/ByteOutput.
Warning: Could not register io.netty.handler.codec.marshalling.MarshallingDecoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: org/jboss/marshalling/ByteInput.
Warning: Could not register io.netty.handler.codec.marshalling.MarshallingEncoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: org/jboss/marshalling/ByteOutput.
Warning: Could not register io.netty.handler.codec.protobuf.ProtobufDecoder: queryAllPublicMethods for reflection. Reason: java.lang.NoClassDefFoundError: com/google/protobuf/ExtensionRegistryLite.
[1/7] Initializing...                                                                                   (10.3s @ 0.20GB)
 Version info: 'GraalVM 22.3.0.1-Final Java 17 Mandrel Distribution'
 Java version info: '17.0.5+8'
 C compiler: gcc (redhat, x86_64, 8.5.0)
 Garbage collector: Serial GC
 9 user-specific feature(s)
 - io.quarkus.caffeine.runtime.graal.CacheConstructorsFeature
 - io.quarkus.hibernate.orm.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase for the [org.hibernate.Version, org.hibernate.annotations.common.Version, org.hibernate.dialect.Dialect] categories
 - io.quarkus.jdbc.postgresql.runtime.graal.SQLXMLFeature
 - io.quarkus.runner.Feature: Auto-generated class by Quarkus from the existing extensions
 - io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase for the [org.jboss.threads] categories
 - io.quarkus.runtime.graal.ResourcesFeature: Register each line in META-INF/quarkus-native-resources.txt as a resource on Substrate VM
 - io.quarkus.websockets.client.runtime.DisableLoggingFeature: Disables INFO logging during the analysis phase for the [io.undertow.websockets] categories
 - org.hibernate.graalvm.internal.GraalVMStaticFeature: Hibernate ORM's static reflection registrations for GraalVM
 - org.hibernate.graalvm.internal.QueryParsingSupport: Hibernate ORM's support for HQL Parser in GraalVM
[2/7] Performing analysis...  []                                                                        (18.6s @ 1.09GB)
   7,835 (87.39%) of  8,966 classes reachable
  10,194 (57.30%) of 17,791 fields reachable
  32,642 (79.83%) of 40,891 methods reachable
     672 classes,     0 fields, and     0 methods registered for reflection

Fatal error: com.oracle.graal.pointsto.util.AnalysisError$ParsingError: Error encountered while parsing io.cryostat.core.net.discovery.JvmDiscoveryClient.<init>(io.cryostat.core.log.Logger) 
Parsing context:
   at io.cryostat.core.net.discovery.JvmDiscoveryClient.<init>(JvmDiscoveryClient.java:101)
   at io.cryostat.discovery.JDPDiscovery.produceJvmDiscoveryClient(JDPDiscovery.java:69)
   at io.cryostat.discovery.JDPDiscovery_ProducerMethod_produceJvmDiscoveryClient_dfd224c6dd1d1d697378a82d35d8cf2c51345c65_Bean.create(Unknown Source)
   at io.cryostat.discovery.JDPDiscovery_ProducerMethod_produceJvmDiscoveryClient_dfd224c6dd1d1d697378a82d35d8cf2c51345c65_Bean.create(Unknown Source)
   at io.quarkus.arc.impl.AbstractSharedContext.createInstanceHandle(AbstractSharedContext.java:113)
   at io.quarkus.arc.impl.AbstractSharedContext$1.get(AbstractSharedContext.java:37)
   at io.quarkus.arc.impl.AbstractSharedContext$1.get(AbstractSharedContext.java:34)
   at com.oracle.svm.core.jdk.SystemPropertiesSupport.initializeLazyValue(SystemPropertiesSupport.java:217)
   at com.oracle.svm.core.jdk.SystemPropertiesSupport.getProperty(SystemPropertiesSupport.java:170)
   at com.oracle.svm.core.jdk.Target_java_lang_System.getProperty(JavaLangSubstitutions.java:305)
   at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2573)
   at com.oracle.svm.core.jdk.ForkJoinPoolCommonAccessor.initializeCommonPool(RecomputedFields.java:385)
   at com.oracle.svm.core.jdk.ForkJoinPoolCommonAccessor.get(RecomputedFields.java:377)
   at java.util.concurrent.ForkJoinTask.awaitDone(ForkJoinTask.java:410)
   at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:687)
   at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:159)
   at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:173)
   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
   at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
   at java.lang.reflect.Executable.sharedToString(Executable.java:122)
   at java.lang.reflect.Method.toString(Method.java:412)
   at java.lang.StringConcatHelper.stringOf(StringConcatHelper.java:453)
   at java.lang.StringConcatHelper.simpleConcat(StringConcatHelper.java:408)
   at com.oracle.svm.core.classinitialization.ClassInitializationInfo.invokeClassInitializerError(ClassInitializationInfo.java:366)
   at com.oracle.svm.core.classinitialization.ClassInitializationInfo.invokeClassInitializer(ClassInitializationInfo.java:359)
   at com.oracle.svm.core.classinitialization.ClassInitializationInfo.initialize(ClassInitializationInfo.java:277)

	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.util.AnalysisError.parsingError(AnalysisError.java:153)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.flow.MethodTypeFlow.createFlowsGraph(MethodTypeFlow.java:104)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.flow.MethodTypeFlow.ensureFlowsGraphCreated(MethodTypeFlow.java:83)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.flow.MethodTypeFlow.getOrCreateMethodFlowsGraph(MethodTypeFlow.java:65)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.typestate.DefaultSpecialInvokeTypeFlow.onObservedUpdate(DefaultSpecialInvokeTypeFlow.java:61)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.flow.TypeFlow.update(TypeFlow.java:562)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.PointsToAnalysis$1.run(PointsToAnalysis.java:488)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.util.CompletionExecutor.executeCommand(CompletionExecutor.java:193)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.util.CompletionExecutor.lambda$executeService$0(CompletionExecutor.java:177)
	at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1395)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
Caused by: org.graalvm.compiler.java.BytecodeParser$BytecodeParserError: com.oracle.graal.pointsto.constraints.UnsupportedFeatureException: No instances of java.net.Inet4Address are allowed in the image heap as this class should be initialized at image runtime. To see how this object got instantiated use --trace-object-instantiation=java.net.Inet4Address.
	at parsing org.openjdk.jmc.jdp.client.JDPClient.<init>(JDPClient.java:62)
	at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.throwParserError(BytecodeParser.java:2518)
	at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.phases.SharedGraphBuilderPhase$SharedBytecodeParser.throwParserError(SharedGraphBuilderPhase.java:110)
	at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.iterateBytecodesForBlock(BytecodeParser.java:3393)
	at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.handleBytecodeBlock(BytecodeParser.java:3345)
	at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.processBlock(BytecodeParser.java:3190)
	at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.build(BytecodeParser.java:1138)
	at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.buildRootMethod(BytecodeParser.java:1030)
	at jdk.internal.vm.compiler/org.graalvm.compiler.java.GraphBuilderPhase$Instance.run(GraphBuilderPhase.java:97)
	at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.phases.SharedGraphBuilderPhase.run(SharedGraphBuilderPhase.java:84)
	at jdk.internal.vm.compiler/org.graalvm.compiler.phases.Phase.run(Phase.java:49)
	at jdk.internal.vm.compiler/org.graalvm.compiler.phases.BasePhase.apply(BasePhase.java:446)
	at jdk.internal.vm.compiler/org.graalvm.compiler.phases.Phase.apply(Phase.java:42)
	at jdk.internal.vm.compiler/org.graalvm.compiler.phases.Phase.apply(Phase.java:38)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.flow.AnalysisParsedGraph.parseBytecode(AnalysisParsedGraph.java:135)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.meta.AnalysisMethod.ensureGraphParsed(AnalysisMethod.java:685)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.phases.InlineBeforeAnalysisGraphDecoder.lookupEncodedGraph(InlineBeforeAnalysis.java:180)
	at jdk.internal.vm.compiler/org.graalvm.compiler.replacements.PEGraphDecoder.doInline(PEGraphDecoder.java:1162)
	at jdk.internal.vm.compiler/org.graalvm.compiler.replacements.PEGraphDecoder.tryInline(PEGraphDecoder.java:1145)
	at jdk.internal.vm.compiler/org.graalvm.compiler.replacements.PEGraphDecoder.trySimplifyInvoke(PEGraphDecoder.java:1003)
	at jdk.internal.vm.compiler/org.graalvm.compiler.replacements.PEGraphDecoder.handleInvoke(PEGraphDecoder.java:957)
	at jdk.internal.vm.compiler/org.graalvm.compiler.nodes.GraphDecoder.processNextNode(GraphDecoder.java:817)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.phases.InlineBeforeAnalysisGraphDecoder.processNextNode(InlineBeforeAnalysis.java:240)
	at jdk.internal.vm.compiler/org.graalvm.compiler.nodes.GraphDecoder.decode(GraphDecoder.java:548)
	at jdk.internal.vm.compiler/org.graalvm.compiler.replacements.PEGraphDecoder.decode(PEGraphDecoder.java:833)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.phases.InlineBeforeAnalysis.decodeGraph(InlineBeforeAnalysis.java:98)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.flow.MethodTypeFlowBuilder.parse(MethodTypeFlowBuilder.java:179)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.flow.MethodTypeFlowBuilder.apply(MethodTypeFlowBuilder.java:349)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.flow.MethodTypeFlow.createFlowsGraph(MethodTypeFlow.java:93)
	... 13 more
Caused by: com.oracle.graal.pointsto.constraints.UnsupportedFeatureException: No instances of java.net.Inet4Address are allowed in the image heap as this class should be initialized at image runtime. To see how this object got instantiated use --trace-object-instantiation=java.net.Inet4Address.
	at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.classinitialization.ClassInitializationFeature.checkImageHeapInstance(ClassInitializationFeature.java:132)
	at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.meta.AnalysisUniverse.replaceObject(AnalysisUniverse.java:595)
	at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.ameta.AnalysisConstantReflectionProvider.replaceObject(AnalysisConstantReflectionProvider.java:177)
	at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.ameta.AnalysisConstantReflectionProvider.interceptValue(AnalysisConstantReflectionProvider.java:148)
	at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.ameta.AnalysisConstantReflectionProvider.readValue(AnalysisConstantReflectionProvider.java:100)
	at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.ameta.AnalysisConstantReflectionProvider.readFieldValue(AnalysisConstantReflectionProvider.java:79)
	at jdk.internal.vm.compiler/org.graalvm.compiler.nodes.util.ConstantFoldUtil$1.readValue(ConstantFoldUtil.java:55)
	at jdk.internal.vm.compiler/org.graalvm.compiler.core.common.spi.JavaConstantFieldProvider.readConstantField(JavaConstantFieldProvider.java:78)
	at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.ameta.AnalysisConstantFieldProvider.readConstantField(AnalysisConstantFieldProvider.java:72)
	at jdk.internal.vm.compiler/org.graalvm.compiler.nodes.util.ConstantFoldUtil.tryConstantFold(ConstantFoldUtil.java:51)
	at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.phases.ConstantFoldLoadFieldPlugin.tryConstantFold(ConstantFoldLoadFieldPlugin.java:53)
	at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.phases.ConstantFoldLoadFieldPlugin.handleLoadStaticField(ConstantFoldLoadFieldPlugin.java:49)
	at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.genGetStatic(BytecodeParser.java:4801)
	at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.genGetStatic(BytecodeParser.java:4772)
	at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.processBytecode(BytecodeParser.java:5282)
	at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.iterateBytecodesForBlock(BytecodeParser.java:3385)
	... 38 more
------------------------------------------------------------------------------------------------------------------------
                        1.0s (3.2% of total time) in 21 GCs | Peak RSS: 3.20GB | CPU load: 5.90
========================================================================================================================
Failed generating 'cryostat3-3.0.0-SNAPSHOT-runner' after 29.3s.
Error: Image build request failed with exit status 1
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  54.269 s
[INFO] Finished at: 2023-01-12T19:46:23-05:00
[INFO] ------------------------------------------------------------------------

[Bug] `smoketest.sh` does not pull latest images

smoketest.sh will cause the docker command to load container images when they are missing (e.g. on first run), but will not attempt to pull published updates. With docker-compose v1.x (which is what Fedora ships, for example), the pull_policy yaml property is not implemented, and docker-compose pull has a bug that prevents it from updating any of the images if even one image cannot be pulled from the registry. That is the case here, since this project is currently still private and the images are not published.

Potential solutions:

  1. just docker pull/podman pull the images manually as needed. This is error-prone and tedious.
  2. make the repo public and start pushing upstream images. This will happen eventually but it would be nice for this to work well while this project is still private.
  3. add some scripting to smoketest.sh to automate option 1
  4. update to a later Docker version that includes the newer compose-plugin v2+, which supposedly fixes the docker compose pull behaviour as well as implements pull_policy
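
Option 3 could be a small helper in smoketest.sh that pulls each image individually, so that one unpullable (still-private) image does not block the rest. A minimal sketch, assuming a CONTAINER_ENGINE variable; the function name and warning text are illustrative, not taken from the actual script:

```shell
#!/usr/bin/env bash
# Pull each image on its own so that a single failure (for example, an image
# that is not yet published) does not block the rest, working around the
# docker-compose v1 "pull" bug described above.
CONTAINER_ENGINE="${CONTAINER_ENGINE:-docker}"

pull_images() {
    local image
    for image in "$@"; do
        if ! "$CONTAINER_ENGINE" pull "$image"; then
            echo "WARNING: could not pull $image; using local copy if present" >&2
        fi
    done
}
```

It would be invoked with the image list from the compose files, e.g. `pull_images quay.io/cryostat/cryostat3:dev quay.io/cryostat/cryostat-reports:latest` (image names illustrative).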

[Task] CI builds for JDK 21 should be optional

The first release of 3.0 will most likely use JDK 17. An upgrade to JDK 21 is an eventual goal that will come later. It would be good to test against JDK 21, but failures there should not fail the whole build.
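
In GitHub Actions this can be expressed with a build matrix where the JDK 21 leg is marked continue-on-error. A sketch, with job names, action versions, and the build command all assumed rather than taken from this repo's workflows:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        java: ['17', '21']
    # Failures on JDK 21 are still reported, but do not fail the workflow.
    continue-on-error: ${{ matrix.java == '21' }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v3
        with:
          java-version: ${{ matrix.java }}
          distribution: 'temurin'
      - run: ./mvnw -B verify
```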

[Task] Embed details about target in recording metadata file

In the longer term I'm tempted to say let's do something much more radical. We can safely embed those details about the source target (alias, connectUrl, JVM ID, labels/annotations) into a .metadata.json file that accompanies the recording now that we have somewhere better to store those details than the recording filename. The file itself could just be named something simple like ${target_alias}.${sequence}.jfr - the timestamp can/should simply be in the created/modified timestamp of the file but might also be embedded in the JSON metadata, too. We can model this by creating something like an ArchivedRecording table in Postgres that has columns for:

  1. the recording primary key, just an int ID
  2. the source recording ID foreign key
  3. the filename as it was saved in S3

If we redo the tables for targets and active recordings so that we don't delete records when targets go offline or recordings are deleted, and instead add a status flag column, then we can always go back and look up the details of the source recording for an archived recording, and from there also look up the details of the source target.

Originally posted by @andrewazores in #34 (comment)
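
A hypothetical `.metadata.json` accompanying a `${target_alias}.${sequence}.jfr` file, as described above, might look like the following. All field names and values are illustrative:

```json
{
  "sourceTarget": {
    "alias": "myapp",
    "connectUrl": "service:jmx:rmi:///jndi/rmi://myapp:9091/jmxrmi",
    "jvmId": "abc123",
    "labels": { "env": "prod" },
    "annotations": { "cryostat": { "PORT": "9091" } }
  },
  "created": "2023-10-13T14:00:00Z"
}
```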

[Story] Custom event templates

Custom event templates should work the same way they have since the early days of Cryostat. These should also be stored in a specific S3 bucket.

[Task] Store reports in S3

See #9, #51, #59. Reports are currently generated and held in-memory in a Caffeine cache. Reports are generally small, so this is OK for the most part, but over time the number of reports may exert noticeable memory pressure, so some cache tuning needs to be applied. Also, reports for archived recordings are likely to be accessed infrequently, but it is still worth saving the cost of regenerating a report (generation requires intensive CPU and memory, while storage takes only a very small amount of disk), so reports should be tiered: in-memory for rapid access, and in S3 alongside the source recording for later reference or for access after the Cryostat server is restarted.
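
The tiered lookup order can be sketched as follows. S3_FETCH and GENERATE are placeholders, not real commands; the real server would use its object storage client and report generator here, and the cache directory is illustrative:

```shell
# Tiered report lookup: local cache first, then S3, then regenerate.
get_report() {
    local key="$1" cache="${REPORT_CACHE_DIR:-/tmp/reports}/$1"
    mkdir -p "$(dirname "$cache")"
    if [ -f "$cache" ] && [ -s "$cache" ]; then
        cat "$cache"                       # fast tier: cached locally
    elif "$S3_FETCH" "$key" > "$cache" 2>/dev/null && [ -s "$cache" ]; then
        cat "$cache"                       # warm tier: previously stored in S3
    else
        "$GENERATE" "$key" | tee "$cache"  # cold path: regenerate and persist
    fi
}
```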

[Story] Discovery JWT

Currently the stubbed-out discovery implementation simply provides registering plugins with a UUID for a token, and accepts any token on refresh/ping/update. This should be replaced with proper JWT flow logic like what 2.4.0 does.

[Task] Port Docker label-based discovery

After #23/#26, it may actually be possible to reuse the same implementation with different configuration. The Podman/libpod HTTP API over a Unix socket intentionally mimics the Docker daemon HTTP API, and the Podman discovery implementation understands that containers may optionally belong to Pods. Podman containers not belonging to Pods generally look the same in the libpod API as Docker containers do in the Docker API, so the same discovery implementation should be usable for Docker by simply changing the path to the Unix socket and perhaps the HTTP URL used to talk to it. These two parameters could be controlled by @ConfigProperty, or could assume values based on whether cryostat.discovery.podman.enabled or cryostat.discovery.docker.enabled is set. It seems unlikely that a Cryostat instance would be deployed in a way that requires it to talk to both the Podman and Docker APIs at the same time, so simply switching the implementation to talk to one or the other should be fine.
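
A sketch of the selection logic: the environment variable names mirror the config properties mentioned above (their exact spelling is assumed), and the socket paths are the conventional defaults, which may differ per system:

```shell
# Pick the Unix socket to talk to based on which discovery flavour is enabled.
discovery_socket() {
    if [ "${CRYOSTAT_DISCOVERY_DOCKER_ENABLED:-false}" = "true" ]; then
        echo "/var/run/docker.sock"                       # Docker daemon default
    elif [ "${CRYOSTAT_DISCOVERY_PODMAN_ENABLED:-false}" = "true" ]; then
        echo "${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
    else
        return 1
    fi
}
```

The same discovery code can then query whichever API the socket exposes, e.g. `curl --unix-socket "$(discovery_socket)" http://d/containers/json`.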

[Bug] CI fails to load image for testing

See #81

https://github.com/cryostatio/cryostat3/actions/runs/6509195110/job/17680185925

[INFO] Connected to docker: 
  Server Version: 4.6.2
  API Version: 1.41
  Operating System: ubuntu
  Total Memory: 6921 MB
Warning:  
********************************************************************************
Ryuk has been disabled. This can cause unexpected behavior in your environment.
********************************************************************************
[INFO] Checking the system...
[INFO] ✔︎ Docker server version should be at least 1.6.0
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] Starting (local) container image build for jar using docker.
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] Executing the following command to build docker image: 'docker build -f /home/runner/work/cryostat3/cryostat3/src/main/docker/Dockerfile.jvm -t quay.io/cryostat/cryostat3:3.0.0-SNAPSHOT /home/runner/work/cryostat3/cryostat3'
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #0 building with "default" instance using docker-container driver
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #1 [internal] booting buildkit
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #1 pulling image moby/buildkit:buildx-stable-1
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #1 pulling image moby/buildkit:buildx-stable-1 2.8s done
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #1 creating container buildx_buildkit_default
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #1 creating container buildx_buildkit_default 2.2s done
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #1 DONE 5.0s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #2 [internal] load build definition from Dockerfile.jvm
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #2 transferring dockerfile: 5.56kB done
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #2 DONE 0.0s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #3 [internal] load metadata for registry.access.redhat.com/ubi8/openjdk-17-runtime:1.16-2
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #3 DONE 0.5s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #4 [internal] load .dockerignore
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #4 transferring context: 143B done
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #4 DONE 0.0s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 [1/6] FROM registry.access.redhat.com/ubi8/openjdk-17-runtime:1.16-2@sha256:de58a898c3a8cbaba1732b96e3ab67994160d3ea0f649c6904e7f3dcb8fca16b
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 resolve registry.access.redhat.com/ubi8/openjdk-17-runtime:1.16-2@sha256:de58a898c3a8cbaba1732b96e3ab67994160d3ea0f649c6904e7f3dcb8fca16b done
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:5329d7039f252afc1c5d69521ef7e674f71c36b50db99b369cbb52aa9e0a6782 9.44MB / 39.33MB 0.2s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 0B / 91.38MB 0.2s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:5329d7039f252afc1c5d69521ef7e674f71c36b50db99b369cbb52aa9e0a6782 28.31MB / 39.33MB 0.3s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:5329d7039f252afc1c5d69521ef7e674f71c36b50db99b369cbb52aa9e0a6782 39.33MB / 39.33MB 0.4s done
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 11.53MB / 91.38MB 0.6s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 extracting sha256:5329d7039f252afc1c5d69521ef7e674f71c36b50db99b369cbb52aa9e0a6782
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 16.78MB / 91.38MB 0.8s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 26.21MB / 91.38MB 1.2s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 31.46MB / 91.38MB 1.8s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 36.70MB / 91.38MB 2.0s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 extracting sha256:5329d7039f252afc1c5d69521ef7e674f71c36b50db99b369cbb52aa9e0a6782 1.6s done
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 44.04MB / 91.38MB 2.3s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 52.43MB / 91.38MB 2.6s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 ...
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #6 [internal] load build context
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #6 transferring context: 75.40MB 2.9s done
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #6 DONE 2.9s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 [1/6] FROM registry.access.redhat.com/ubi8/openjdk-17-runtime:1.16-2@sha256:de58a898c3a8cbaba1732b96e3ab67994160d3ea0f649c6904e7f3dcb8fca16b
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 59.77MB / 91.38MB 2.9s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 70.25MB / 91.38MB 3.0s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 75.50MB / 91.38MB 3.3s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 88.08MB / 91.38MB 3.6s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 91.38MB / 91.38MB 4.1s done
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 extracting sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 extracting sha256:d8b515f03629826cbe0e8d4333e6c211a736172af3d5a42f4837356bc5cccc68 4.2s done
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #5 DONE 8.3s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #7 [2/6] COPY --chown=185 src/main/docker/include/cryostat.jfc /usr/lib/jvm/jre/lib/jfr/
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #7 DONE 0.0s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #8 [3/6] COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #8 DONE 0.3s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #9 [4/6] COPY --chown=185 target/quarkus-app/*.jar /deployments/
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #9 DONE 0.0s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #10 [5/6] COPY --chown=185 target/quarkus-app/app/ /deployments/app/
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #10 DONE 0.0s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] 
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #11 [6/6] COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] #11 DONE 0.0s
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[INFO] [io.quarkus.container.image.docker.deployment.DockerProcessor] Built container image quay.io/cryostat/cryostat3:3.0.0-SNAPSHOT (null)

Error response from daemon: failed to find image quay.io/cryostat/cryostat3:3.0.0-SNAPSHOT: quay.io/cryostat/cryostat3:3.0.0-SNAPSHOT: image not known
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  05:27 min
[INFO] Finished at: 2023-10-13T14:08:25Z
[INFO] ------------------------------------------------------------------------
Error:  Failed to execute goal io.quarkus.platform:quarkus-maven-plugin:3.2.4.Final:build (build) on project cryostat3: Failed to build quarkus application: io.quarkus.builder.BuildException: Build failure: Build failed due to errors
Error: [ERROR] 	[error]: Build step io.quarkus.container.image.docker.deployment.DockerProcessor#dockerBuildFromJar threw an exception: java.lang.RuntimeException: Execution of 'docker tag quay.io/cryostat/cryostat3:3.0.0-SNAPSHOT quay.io/cryostat/cryostat3:dev' failed. See docker output for more details
Error:  	at io.quarkus.container.image.docker.deployment.DockerProcessor.dockerException(DockerProcessor.java:349)
Error:  	at io.quarkus.container.image.docker.deployment.DockerProcessor.createAdditionalTags(DockerProcessor.java:332)
Error:  	at io.quarkus.container.image.docker.deployment.DockerProcessor.createContainerImage(DockerProcessor.java:223)
Error:  	at io.quarkus.container.image.docker.deployment.DockerProcessor.dockerBuildFromJar(DockerProcessor.java:110)
Error:  	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Error:  	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
Error:  	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Error:  	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
Error:  	at io.quarkus.deployment.ExtensionLoader$3.execute(ExtensionLoader.java:864)
Error:  	at io.quarkus.builder.BuildContext.run(BuildContext.java:282)
Error:  	at org.jboss.threads.ContextHandler$1.runWith(ContextHandler.java:18)
Error:  	at org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2513)
Error:  	at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1538)
Error:  	at java.base/java.lang.Thread.run(Thread.java:833)
Error:  	at org.jboss.threads.JBossThread.run(JBossThread.java:501)
Error:  -> [Help 1]
Error:  
Error:  To see the full stack trace of the errors, re-run Maven with the -e switch.
Error:  Re-run Maven using the -X switch to enable full debug logging.
Error:  
Error:  For more information about the errors and possible solutions, please read the following articles:
Error:  [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[INFO] Recoverable I/O exception (java.io.IOException) caught when processing request to {}->unix://localhost:2375
Error: Process completed with exit code 1.

[Bug] Smoketest is not working after system package upgrades

Trying to run the smoke test with bash smoketest.bash runs into this issue:

[thvo@fedora]$ PULL_IMAGES=false bash smoketest.bash 
...output_omitted...
Creating network "compose_default" with the default driver
Creating volume "compose_postgresql" with local driver
Creating volume "compose_minio_data" with local driver
Creating volume "compose_minio_certs" with local driver
Creating compose_db_1 ... 
Creating compose_s3_1 ... error

Creating compose_db_1 ... error
 `/proc/self/oom_score_adj`: Permission denied: OCI permission denied

ERROR: for compose_db_1  preparing container 64577087dce3a2526b31536467393d9b5baa07c73f4218ad75bd0345ada09983 for attach: crun: write to `/proc/self/oom_score_adj`: Permission denied: OCI permission denied

ERROR: for s3  preparing container b29cf2db059647dd245eb469f623bf1ec356685b68f8c378a972500d9c5ffbf0 for attach: crun: write to `/proc/self/oom_score_adj`: Permission denied: OCI permission denied

ERROR: for db  preparing container 64577087dce3a2526b31536467393d9b5baa07c73f4218ad75bd0345ada09983 for attach: crun: write to `/proc/self/oom_score_adj`: Permission denied: OCI permission denied
ERROR: Encountered errors while bringing up the project.
+ cleanup
+ DOWN_FLAGS=('--remove-orphans')
+ '[' false '!=' true ']'
+ DOWN_FLAGS+=('--volumes')
+ docker-compose -f ./smoketest/compose/db.yml -f ./smoketest/compose/cryostat.yml -f ./smoketest/compose/s3-minio.yml down --remove-orphans --volumes
Removing compose_db_1 ... done
Removing compose_s3_1 ... done
Removing network compose_default
Removing volume compose_postgresql
Removing volume compose_minio_data
Removing volume compose_minio_certs

Other information

Looks like crun was upgraded to 1.9.0 and is causing the issue, as noted here: containers/podman#19930 (comment)

I downgraded crun to 1.8.3 and it worked just fine. The next podman patch release, 4.6.3 (currently 4.6.2 on Fedora 38), might resolve this issue. For now, we should document the versions of the tools we use.
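
Documenting, and optionally sanity-checking, tool versions could use a small helper like this. It relies on GNU `sort -V` ordering; the 1.8.3 pin comes from the workaround above:

```shell
# Succeeds when version $1 is at least version $2, using sort -V ordering.
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}
```

For example, assuming `crun --version` prints `crun version X` on its first line, a check could look like `version_ge "$(crun --version | head -n1 | awk '{print $3}')" 1.8.3`.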

References

containers/podman#19930

[Story] Report generation

Reports should reside in a specific S3 bucket. If an existing report is found in the bucket then that report should either be downloaded temporarily by Cryostat and returned to the user as JSON/HTML through the API (old API compatibility), or returned to the client as a presigned GET request link directly through to S3 (new API for downloads). If no report is found in the S3 bucket then a new report should be generated, stored in S3, and a response sent to the client as before.

[Epic] Redefine Target data model to allow multiple Connection URLs

Currently, Targets are defined expecting unique Connection URLs. However, it can be the case that there are multiple known Connection URLs that point to the same JVM instance - either because there are overlapping built-in discovery plugins or because the target is discoverable by one of those mechanisms and also has the Cryostat Agent attached. The JVM ID hash is therefore intended to help identify which Target instances are actually duplicate references.

A better data model would be that Targets can be defined which have 0 or 1 JVM IDs (i.e. a null String for Target definitions that have not yet been connected to, or a non-null String afterward; this ID should still be simply a column on the Target table). There would be a separate model for Connection URLs. A Connection URL would belong to exactly one Target, but a Target would have a list of zero or more Connection URLs. When a new Connection URL is to be added, the Cryostat server should attempt to open a connection there and compute (JMX) or retrieve (HTTP) the JVM ID. If the JVM ID cannot be determined then this Connection URL is rejected. If the JVM ID is determined then it should be linked to an existing Target record, if any, or a new corresponding Target should be created. This way, a Target record has a one-to-one relationship with an actual JVM instance.

Any background tasks, such as Automated Rule processing, that operate over Targets, can then choose any of the known Connection URLs for that Target to perform their work. Probably HTTP URLs should be preferred over JMX URLs, and some other ranking preference heuristics could be applied as well, for example localhost should be high priority, raw IP addresses should be next priority, hostnames (or hostname.local) should be next, and finally other URLs using full resolvable domains.
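
The ranking heuristic described above could be sketched as a scoring function where a lower score is preferred. This is a coarse illustration only; the URL patterns and tier values are assumptions, and the real implementation would live in Java server code:

```shell
# Coarse sketch of the preference heuristic: HTTP over JMX, then
# localhost > raw IPv4 > bare hostname or .local > fully-qualified domain.
url_rank() {
    local url="$1" base host
    case "$url" in
        http://*|https://*) base=0 ;;   # HTTP connections preferred
        service:jmx:*)      base=10 ;;  # JMX as a fallback
        *)                  base=20 ;;
    esac
    host="${url##*://}"        # crude: strip the scheme prefix...
    host="${host%%[:/]*}"      # ...and port/path, leaving the host part
    case "$host" in
        localhost)                     echo "$base" ;;
        [0-9]*.[0-9]*.[0-9]*.[0-9]*)   echo $((base + 1)) ;;
        *.local)                       echo $((base + 2)) ;;
        *.*)                           echo $((base + 3)) ;;
        *)                             echo $((base + 2)) ;;  # bare hostname
    esac
}
```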

[Bug] CI failures due to `podman-docker` package installation

ex. https://github.com/cryostatio/cryostat3/actions/runs/6497083628/job/17645406592?pr=80

Run sudo apt-get -y purge moby-cli
Reading package lists...
Building dependency tree...
Reading state information...
Package 'moby-cli' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 78 not upgraded.

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Reading package lists...
Building dependency tree...
Reading state information...
The following packages will be REMOVED:
  docker-ce
The following NEW packages will be installed:
  podman-docker
0 upgraded, 1 newly installed, 1 to remove and 78 not upgraded.
Need to get 5800 B of archives.
After this operation, 96.6 MB disk space will be freed.
Get:1 https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_22.04  podman-docker 4:4.6.2-0ubuntu22.04+obs81.12 [5800 B]
Fetched 5800 B in 1s (5112 B/s)
(Reading database ... 
(Reading database ... 226122 files and directories currently installed.)
Removing docker-ce (5:24.0.6-1~ubuntu.22.04~jammy) ...
Selecting previously unselected package podman-docker.
(Reading database ... 
(Reading database ... 226115 files and directories currently installed.)
Preparing to unpack .../podman-docker_4%3a4.6.2-0ubuntu22.04+obs81.12_all.deb ...
Unpacking podman-docker (4:4.6.2-0ubuntu22.04+obs81.12) ...
dpkg: error processing archive /var/cache/apt/archives/podman-docker_4%3a4.6.2-0ubuntu22.04+obs81.12_all.deb (--unpack):
 trying to overwrite '/usr/bin/docker', which is also in package docker-ce-cli 5:24.0.6-1~ubuntu.22.04~jammy
Errors were encountered while processing:
 /var/cache/apt/archives/podman-docker_4%3a4.6.2-0ubuntu22.04+obs81.12_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
Error: Process completed with exit code 100.

[Task] Smoketest configuration switches

smoketest.bash should have flags to configure the following behaviours:

  1. sample app deployment: type (quarkus-test, quarkus-test-agent, vertx-fib-demo, etc.) and possibly the number of replicas
  2. cryostat-reports deployment: caching tiers
  3. S3 implementation (done)
  4. discovery plugins enabled
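
One possible shape for the flag parsing in smoketest.bash. All flag letters, variable names, and defaults here are hypothetical, not taken from the actual script:

```shell
# Hypothetical option parsing for smoketest.bash covering the four behaviours.
parse_smoketest_flags() {
    local OPTIND=1 opt
    SAMPLE_APP="vertx-fib-demo"   # 1. sample app type
    APP_REPLICAS=1                #    ...and replica count
    REPORTS=false                 # 2. cryostat-reports sidecar
    S3_IMPL="minio"               # 3. S3 implementation
    DISCOVERY="jdp"               # 4. enabled discovery plugins
    while getopts "a:n:rs:d:" opt; do
        case "$opt" in
            a) SAMPLE_APP="$OPTARG" ;;
            n) APP_REPLICAS="$OPTARG" ;;
            r) REPORTS=true ;;
            s) S3_IMPL="$OPTARG" ;;
            d) DISCOVERY="$OPTARG" ;;
            *) return 1 ;;
        esac
    done
}
```

Invocation would look like `parse_smoketest_flags -a quarkus-test-agent -n 2 -r -s seaweed`.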

[Task] Remove gson in favour of jackson

Some code carried forward from the old repository included Gson as a dependency. Elsewhere we also depend on Jackson via Quarkus, and more recently via projectnessie CEL as well. Gson should be dropped and all usages converted to Jackson.

[Bug] itest failure due to JDP discovery failure

$ ./mvnw clean verify ; podman image prune -f
...
[ERROR] Errors: 
[ERROR]   ClientAssetsIT>StandardSelfTest.waitForJdp:104 » Runtime
[ERROR]   CryostatTemplateIT>StandardSelfTest.waitForJdp:104 » Runtime
[ERROR]   GrafanaSetupIT>StandardSelfTest.waitForJdp:104 » Runtime
[ERROR]   HealthIT>StandardSelfTest.waitForJdp:104 » Runtime
[ERROR]   RulesPostFormIT>StandardSelfTest.waitForJdp:104 » Runtime
[ERROR]   RulesPostJsonIT>StandardSelfTest.waitForJdp:104 » Runtime
[ERROR]   TargetEventsGetIT>StandardSelfTest.waitForJdp:104 » Runtime
[INFO] 
[ERROR] Tests run: 89, Failures: 0, Errors: 7, Skipped: 82
[INFO] 
[INFO] 
[INFO] --- failsafe:3.1.2:verify (default) @ cryostat3 ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  04:42 min
[INFO] Finished at: 2023-10-13T14:34:11-04:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-failsafe-plugin:3.1.2:verify (default) on project cryostat3: 
[ERROR] 
[ERROR] Please refer to /home/work/workspace/cryostat3/target/failsafe-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

All of the failures look like this:

[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.11 s <<< FAILURE! -- in itest.TargetEventsGetIT
[ERROR] itest.TargetEventsGetIT -- Time elapsed: 30.11 s <<< ERROR!
java.lang.RuntimeException
	at itest.bases.StandardSelfTest.waitForJdp(StandardSelfTest.java:104)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeAllMethod(TimeoutExtension.java:70)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllMethods$13(ClassBasedTestDescriptor.java:411)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllMethods(ClassBasedTestDescriptor.java:409)
	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:215)
	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:84)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.apache.maven.surefire.junitplatform.LazyLauncher.execute(LazyLauncher.java:56)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:184)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:148)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:122)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
	at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)

[Epic] V3 API design

I'm thinking the v3 API should actually have a Target model that has a jvmId, alias, labels, and annotations, and:
  • a 1-to-1 relation to a DiscoveryNode (this places the Target in the discovery tree)
  • a 1-to-many relation to ActiveRecordings that this Target "owns"
  • a 1-to-many relation to ConnectionUrl, which may be JMX or Agent HTTP. When the server learns about a connection, it is added to this table; when a connection is opened and the JVM ID is computed, the relation between the Target with that JVM ID and this new URL is modelled.

This way, stored credentials can also be associated to a particular ConnectionUrl or set of ConnectionUrls rather than a whole Target, since if the Target actually exposes both JMX and Agent HTTP then it needs two different sets of credentials.

Then the API should require clients to specify Targets by ID, since a Target may have a null jvmId until a working ConnectionUrl is added for that Target.

Right now that doesn't make much sense for dealing with archived recordings: Target records are deleted when a Target disappears, so any data we persisted using the Target's ID, jvmId, or anything else becomes "orphaned", and we lose the other context we had about that Target. So I think there should also be a field in the Target record that reflects whether the Target is online/reachable, and this flag is simply updated when a target disappears rather than deleting the record. Then we can always look the record up again and keep that full context associated with its archived recordings.

Originally posted by @andrewazores in #62 (comment)
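The relations described in that comment can be sketched as a plain Java model. This is an illustrative sketch only: the class and field names are hypothetical, not the actual Cryostat 3 entities, and a real implementation would use JPA mappings rather than in-memory lists.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed v3 Target data model.
// Names are illustrative, not the actual Cryostat 3 entities.
public class TargetModelSketch {

    enum UrlKind { JMX, AGENT_HTTP }

    record ConnectionUrl(String url, UrlKind kind) {}

    record ActiveRecording(String name, long remoteId) {}

    static class Target {
        String jvmId; // may be null until a working ConnectionUrl is added
        String alias;
        boolean reachable = true; // flipped instead of deleting the record
        final List<ConnectionUrl> connectionUrls = new ArrayList<>(); // 1-to-many
        final List<ActiveRecording> recordings = new ArrayList<>();   // 1-to-many
    }

    public static void main(String[] args) {
        Target target = new Target();
        target.alias = "sample-app";
        // the same JVM may expose both JMX and Agent HTTP endpoints, each
        // potentially needing its own stored credentials
        target.connectionUrls.add(new ConnectionUrl(
                "service:jmx:rmi:///jndi/rmi://sample-app:9091/jmxrmi", UrlKind.JMX));
        target.connectionUrls.add(new ConnectionUrl(
                "http://sample-app:9977/", UrlKind.AGENT_HTTP));
        // once a connection succeeds, the computed JVM ID is stored
        target.jvmId = "computed-jvm-id";
        // when the target disappears, mark it offline rather than deleting it,
        // so archived recordings keep their context
        target.reachable = false;
        System.out.println(target.alias + " reachable=" + target.reachable
                + " urls=" + target.connectionUrls.size());
    }
}
```

The key design point the sketch captures is that credentials and connectivity belong to ConnectionUrl rather than Target, and that the reachable flag replaces record deletion so archival context survives a target going away.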

[Story] GraphQL

A GraphQL endpoint should be re-introduced, and it should implement a schema identical to the one available in 2.4.0.

[Task] Set up CI

This project should have a similar set of CI configurations as the previous cryostat project.

[Task] Add JFR events to Cryostat itself

Same as cryostatio/cryostat#461, but for Cryostat 3.

Since #27, the same cryostat.jfc event template has been added to the container image, but it still references JFR events that exist in the old codebase and not in the new one. Each of the event types in the Cryostat event template should be evaluated for either removal or for having an equivalent JFR event added in the new codebase.
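As a sketch of what adding such an event looks like, here is a custom JFR event using the JDK's built-in jdk.jfr API. The event name and field are hypothetical examples, not actual Cryostat event types; a .jfc template like cryostat.jfc would then reference the event by its @Name value.

```java
import jdk.jfr.Category;
import jdk.jfr.Event;
import jdk.jfr.EventType;
import jdk.jfr.Label;
import jdk.jfr.Name;

// Hypothetical custom JFR event; the name and field are illustrative,
// not an actual Cryostat 3 event type.
@Name("io.example.TargetConnectionOpened")
@Label("Target Connection Opened")
@Category("Example")
public class TargetConnectionOpened extends Event {
    @Label("Connection URL")
    String connectUrl;

    public static void main(String[] args) {
        TargetConnectionOpened event = new TargetConnectionOpened();
        event.begin();
        // ... perform the work being instrumented ...
        event.connectUrl = "service:jmx:rmi:///jndi/rmi://localhost:9091/jmxrmi";
        event.end();
        // the event is only recorded if a JFR recording is active
        // and this event type is enabled in the template
        event.commit();
        System.out.println(EventType.getEventType(TargetConnectionOpened.class).getName());
    }
}
```

Committing is cheap when no recording is active, so instrumentation like this can stay in place permanently and only pay its cost while a recording is running.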
