elastic / package-spec

EPR package specifications

License: Other


Introduction

This repository contains:

  • Specifications for Elastic Packages, as served up by the Elastic Package Registry (EPR). There may be multiple versions of the specifications; these are resolved when loading the spec depending on the format_version of the package. Read more in the Specification Versioning section below.
  • Code libraries for validating said specifications; these can be found under the code top-level folder.

Please use this repository to discuss any changes to the specification, either by opening issues or by making PRs against the specification.

What is an Elastic Package?

An Elastic Package is a collection of assets for the Elastic Stack. It also contains manifest files that provide additional information about the package. The exact content and structure of a package are described by the Package Spec.

A package with all its assets is downloaded as a .zip file from the package-registry by Fleet inside Kibana. The assets are then unpacked, each asset is installed through the related API, and the package can be configured.

The following is a high-level overview of a package.

Asset organisation

In general, assets within a package are organised by {stack-component}/{asset-type}. For example, assets for Elasticsearch ingest pipelines are in the folder elasticsearch/ingest-pipeline. The same logic applies to all Elasticsearch, Kibana and Elastic Agent assets.

There is a special folder, data_stream. All assets inside the data_stream folder must follow the Data Stream naming scheme. data_stream can contain multiple folders, each with a name that describes its content. Inside each of these folders, the same {stack-component}/{asset-type} structure applies. The difference is that, during installation, Fleet enforces naming rules related to the Data Stream naming scheme for all of these assets. All assets in this folder belong directly or indirectly to data streams.

In contrast, any asset added at the top level is picked up as a JSON document, pushed as-is to the corresponding Elasticsearch / Kibana APIs. In most scenarios, only data stream assets are needed. There are exceptions where global assets are required for more flexibility. This could be, for example, an ILM policy that applies to all data streams.
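
To make the layout concrete, here is a sketch of what a package could look like (the package and data stream names are made up; the exact folder names and required files are defined by the Package Spec):

my_package/
  manifest.yml
  elasticsearch/
    ingest-pipeline/
      shared-pipeline.yml
  data_stream/
    access/
      manifest.yml
      elasticsearch/
        ingest-pipeline/
          default.yml
      fields/
        fields.yml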

Supported assets

For a quick overview, these are the assets typically found in an Elastic Package. The Package Spec will always contain the fully up-to-date list.

  • Elasticsearch
    • Ingest Pipeline
    • Index Template
    • Transform
    • Index template settings
  • Kibana
    • Dashboards
    • Visualization
    • Index patterns
    • ML Modules
    • Map
    • Search
    • Security rules
    • CSP (cloud security posture) rule templates
  • Other
    • fields.yml

The special asset fields.yml is used to generate both Elasticsearch index templates and Kibana index patterns out of a single definition. The exact definition can be found in the Package Spec.
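
For illustration, a minimal entry in a fields.yml file could look like the following (the field shown is just an example):

- name: http.request.method
  type: keyword
  description: HTTP request method.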

Specification Format

An Elastic Package specification describes:

  1. the folder structure of packages and expected files within these folders; and
  2. the structure of the expected files' contents.

In the spec folder there is a spec.yml file. This file is the entry point for the specification of a package's contents. It describes the folder structure of packages and the expected files within these folders (point 1 above). The specification is expressed using a schema similar to JSON Schema, but with a couple of differences:

  • The type field can be either folder or file,
  • A new field, contents, is introduced to (recursively) describe the contents of folders (i.e. when type == folder), and
  • The specification is written as YAML for readability.
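
As a rough sketch, an entry in spec.yml can look like this (simplified and abbreviated, not the literal current spec):

spec:
  contents:
    - description: The main package manifest file
      type: file
      contentMediaType: "application/x-yaml"
      name: "manifest.yml"
      required: true
    - description: Folder containing data stream definitions
      type: folder
      name: "data_stream"
      required: false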

Expected package files, e.g. manifest.yml, themselves have a structure to their contents. This structure is described in specification files using JSON Schema (point 2 above). These specification files are also written as YAML for readability.
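
For example, the schema for a manifest file could look roughly like this (a simplified sketch, not the authoritative specification):

spec:
  type: object
  additionalProperties: false
  properties:
    format_version:
      description: The version of the package specification format used by this package.
      type: string
    name:
      description: The name of the package.
      type: string
  required:
    - format_version
    - name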

Note that the specification files primarily define the structure (syntax) of a package's contents. To a limited extent they may also define some semantics, e.g. enumeration values for certain fields. Richer semantics, however, will need to be expressed as validation code.

Specification Versioning

The Package Spec version follows semantic versioning with respect to compatibility with the Stack, and only partially with respect to the package format. That means that patch versions may include stricter validations for packages, but they should not include support for new features. Major versions are reserved for significant changes in the format of the files, the structure of packages or the interpretation of the Package Spec.

Packages must specify the specification version they are using. This is done via the format_version property in the package's root manifest.yml file. The value of format_version must conform to the semantic versioning scheme.
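
For illustration, the relevant part of a package's root manifest.yml could look like this (the package name and versions are hypothetical):

format_version: 1.0.0
name: my_package
title: My Package
version: 0.1.0
description: An example package.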

Specifications are defined by schema files and semantic rules; some attributes or files are only available starting from, or until, a given version.

Note that some versions may include a pre-release suffix, e.g. 1.4.0-alpha1. This indicates that these versions are still under development and may be changed multiple times. These versions in development can be used in pre-release versions of packages, but breaking changes can still occur. Once the pre-release suffix is removed, however, the specification at that version becomes immutable. Further changes must follow the process outlined below in Changing a Specification.

Changing a Specification

Consider a proposal to change the specification in some way, where x.y.z is the current specification version. The version number of the changed specification must be determined as follows:

  • If the proposed change modifies the format of the files in a way that requires manual adjustments in packages, the new version number will be X.0.0, where X = x + 1. That is, we bump up the major version. There are some exceptions for changes that can be done in patch versions:
    • When the proposed change is intended to address existing issues in packages like ambiguous mappings or security risks.
    • When the proposed change affects a feature marked as technical preview.
  • If the proposed change introduces support for a new feature that requires explicit support in the Stack, the new version will be x.Y.0, where Y = y + 1. That is, we bump up the minor version. See note below about compatibility between packages and the Stack.
  • Any other change would be included in the next patch version, x.y.Z where Z = z + 1. This includes any change in validation that doesn't necessarily lead to a change in the behaviour of the installed package.
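
For example, assuming a hypothetical current version of 2.3.1: a change to the file format that requires manual adjustments in packages would be released as 3.0.0, a new feature that requires explicit Stack support as 2.4.0, and a stricter validation as 2.3.2.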

If the change is in a schema file, add a JSON patch in the versions section to continue supporting the previous format.
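
As a sketch, such an entry in a schema file could look like this (the patched property is illustrative):

versions:
  - before: 2.0.0
    patch:
      # Remove a property introduced in 2.0.0 when validating packages
      # that declare an older format_version.
      - op: remove
        path: "/properties/some_new_property"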

If the change is in semantic rules, add a constraint to the rule so that it only applies to the indicated version range and package types.

Remember to add a changelog entry in spec/changelog.yml for any change in the spec. If no section exists for the version determined by the above rules, please add a new section. Multiple upcoming versions may exist at the same time if several versions are in development.
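
An entry in spec/changelog.yml could look like the following (the description and link are placeholders):

- version: 1.1.0
  changes:
    - description: Description of the change.
      type: enhancement
      link: https://github.com/elastic/package-spec/pull/123  # placeholder PR link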

Version Compatibility between Packages and Specifications

A package specifying its format_version as x.y.z must be valid against specifications in the semantic version range [x.y.z, X.0.0), where X = x + 1.
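
For example, a package declaring format_version: 1.2.3 must be valid against any specification version from 1.2.3 up to, but not including, 2.0.0.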

Version Compatibility between Packages and the Stack

Starting with Package Spec v3, and for some Elastic offerings, compatibility between packages and the Stack is based on the major and minor Package Spec version.

Eventually all Elastic Stack offerings will have ranges of compatible versions, in which the patch number is ignored. For example, a Stack version could be declared compatible with a minimum spec version of 2.0 and a maximum of 3.0. This would mean that it is compatible with packages using any spec version >= 2.0.0 and < 3.1.0.

Contributing

Please check out our contributing documentation for guidelines about how to contribute to the specification for Elastic Packages.


package-spec's Issues

[Feature Request] Dynamic variables in data stream names

As per the current APM Index Strategy proposal, we need the ability to use service.name as a data stream name, or part of a data stream name.

service.name is a top level indexed field in APM documents (example, spec), and having it in the data_stream would give us the guarantee that each APM service will have its own indices. This would improve performance, allow users to have more granular security and retention policies, and simplify the APM experience in general.

So for metrics data, for instance, we would have indices looking like:
metrics-backend_service-production
metrics-backend_service.profiles-production
metrics-frontend_service-staging

Same thing for other types (traces, logs).

Right now, I believe folders in the package need to match exactly the data stream name set in manifest.yml, so we would need to circumvent that limitation as well.

Add validation for supported inputs in a package

It would be nice to validate the supported inputs in the package registry. New inputs will be added over time, which means the registry has to be updated, but it will make it easier for package developers to catch typos or deprecated inputs.

Add "data" asset type

Add support for including data in a package.

The format in which this data is stored in the package needs to be defined.

[Discuss] Impact of renaming dataset to data_stream

It was decided recently to rename the dataset.* fields to data_stream.* fields and quite a few changes were already made for this in 7.9. No changes have happened in the package spec so far. This issue is to discuss what impact this rename has on the package spec.

Validate that Kibana asset IDs start with lowercase package names

For packages that install Kibana assets (saved searches, visualizations, dashboards, etc.) we should ensure that the asset IDs start with the package name, all lowercased. This will help avoid conflicts with the same assets being installed by corresponding Beats modules (for users not using Agent).

This issue will be resolved via multiple PRs (please keep the list below updated with links to the various PRs)

  • Validate that Kibana object filenames conform to the ID rules laid out above: #66
  • Validate that the id fields inside Kibana object files have the same value as the object filename (minus the .json extension, of course): #160
  • Validate that there are no dangling references to Kibana objects in any Kibana object files within the scope of a package. That is, if a Kibana object file in package P references another Kibana object with ID i, then there must exist a Kibana object file for id i in package P:

Add support for icons in dark mode

Some icons like AWS, Kafka, IBM cloud, Github, etc. will not render well if a dark theme is used. In EUI, there is a mechanism for dealing with this, but since we're technically not using EUI icons (rather, we're defining our own custom ones which happen to be identical), we don't get this magic out of the box.

Since the majority of the icons we're using are already defined in EUI, it might be better if we just reference those icons by name instead, and only define custom icons when we need an icon that is not specified in EUI.

So in our manifest:

icons:
- src: /img/logo_postgres.svg
  title: logo postgres
  size: 32x32
  type: image/svg+xml

could become something much simpler like

icons:
- name: logoPostgres

I'm not sure why we need to define a title, size, and type. I think one caveat of referencing by name is that it places a dependency on EUI, meaning any consuming app that wants to show a package icon from the registry will need to have EUI set up.


Add validation that owner is set

The owner of a package is becoming more important. I think every package "must" specify an owner. The owner can be a group or a single person. The validation script should be able to validate that an owner exists.

An addition to that would be to even validate that the group / user exists on GitHub, but that might become excessive and cause permissions problems.

META Issue: Packages should be cryptographically signed

This may be covered elsewhere, but I'll add it just in case.

The packages contain instructions that could significantly modify the structure of a customer's indices: ILM policies, transforms, mappings, etc. We should not rely solely on HTTPS DNS name validation to prove the provenance of a package. A determined state-level attacker could forge a certificate and play man-in-the-middle on any of the requests, serving up packages intended to disrupt the target.

This will become a larger issue as we add more content to the packages. I am thinking, for example, of SIEM rulesets for Elastic Security. In that case, an attacker could undermine a target's security posture by replacing or modifying effective detection rules.

The packages should be signed much like the code we deliver to customers. The recipient, in this case Kibana, should validate the signature before installation. To support non-elastic or dev packages, the UI should warn if a package is not signed, but allow the customer to override.

Do not ignore vars when there are no inputs

Consider this snippet of a 0.1.0/manifest.yml example:

policy_templates:
  - name: apm
    inputs:
      - type: traces
        vars:
          - name: host
            type: int
            title: APM Server port
            required: true
            show_user: true
            default: 8200

If my data stream(s) don't have a streams key, the generated policy will just have an input with streams: [], and the variable host will not be there even if I had filled it in the UI.

For apm-server we don't have streams, so we need the vars defined at the policy level to be picked up and rendered in the generated policy.

Not sure which repo this issue belongs in; let me know if it should be transferred to Kibana.

Update: The ideal way forward is that the top-level config shows up right under inputs, and not under a streams entry. For this we need hbs templates at the top level too.

Manifest files: add support for corresponding flat and object properties

Due to different implementations, some properties may be considered faulty:

kibana.version: 1

vs.

kibana:
   version: 1

Same for: elasticsearch.index_template.mappings

The goal of this task is to research all places that are required to be adjusted in elastic-package and in package-spec.

Setup CI

Add a Jenkinsfile to set up CI for this repository. CI should run the following Makefile targets in the repository's root folder:

  • check: to check if language libraries have latest schemas bundled in them
  • test: to test language libraries' code

Validate input templates

To make sure we don't ship a broken combination of input templates + manifest for a dataset, it would be nice to validate this already during packaging.

[Discuss] Impact of renaming config to policy

It was decided to rename Agent Config to Agent Policy. This also applies to all config blocks inside the config, which are now also policies.

The main usage of config inside the package spec seems to be config_template which could become policy_template instead.

Allow dataset to not load a template

There are some datasets which allow selecting and configuring an input directly on the Agent side and, with it, configuring the data_stream.dataset field. Because of this, loading a template does not make sense in this context.

Our first example of this is the log package. It allows tailing any log file, and the user can configure where the log should be shipped to. It currently loads a template into logs-log.log-*, but the template that actually applies to the data is the logs-*-* template, unless by chance log.log was set as the dataset name.

We currently validate that a dataset MUST contain the data_stream.* fields. But in this case the creator of the package should be able to set a flag in the dataset/manifest.yml file like template: false (better name suggestions welcome) which disables this validation. And then on the Kibana side, if no fields.yml exists, no template is created.

Multiple modules, only one config set

I received this question from @fearful-symmetry today. Let's discuss it here -

Is it possible to have an integration with multiple modules but only one config set? Like, the only way I know to link the integration to a given module is to use config templates:

config_templates:
  - name: system
    title: System logs and metrics
    description: Collect logs and metrics from System instances
    inputs:
      - type: logfile
        title: Collect logs from System instances
        description: Collecting System auth and syslog logs
      - type: system/metrics
        title: Collect metrics from System instances
        description: Collecting System core, CPU, diskio

But this will produce two config sets, one for Collect logs from System instances in the logfile module, and one for Collect metrics from System instances from the system/metrics module. Is there a way to just get one config set for an arbitrary number of modules?

/cc @ruflin @ycombinator

Add package validator in typescript

To be able to use the package validation from this repository in EPM in Kibana, we need to publish an npm package to the @elastic namespace on https://www.npmjs.com/. From there, we can import the package in Kibana like any other npm package.

Publishing to npmjs.com can be done manually whenever we have a release.

  • Create a subdirectory code/typescript
  • create a standard npm package there
  • implement a stub validator
  • add tests & integrate into CI of this repository
  • document how to use a local checkout of the validator in a local checkout of kibana for development
  • publish to npm
  • automate publishing to npm?
  • implement the real validator

Add manifest version to dataset manifest

Today the dataset manifest does not contain a format_version; only the package manifest does. I think it could come in handy in the future if the dataset manifest also contained a version, to handle small differences.

Package CHANGELOGs

Forked off from discussion about package CHANGELOGs in #14.

We need to decide how best to capture CHANGELOGs for packages. Some ideas that have been proposed so far:

  • Writing a file at the root of the package's folder,
  • Generating them from commit messages,
  • Generating them from PR descriptions

[UPDATE] After discussion below and consensus on where and how package CHANGELOGs should be stored, I'm adding a few checklist items below for next steps to implement the agreed-upon changes:

  • Create PR to package spec to make CHANGELOG files required: #131
  • Once package spec PR is approved but not yet merged:
    • Update integrations repo with skeleton CHANGELOG files for each integration package: elastic/integrations#675
    • Inform non-integrations repo package owners about adding CHANGELOG files: email sent
  • Once existing packages have been updated with skeleton CHANGELOG files:

Add Facility for deploying Elasticsearch Transform

Background

Add the ability for a defined Elasticsearch transform to be deployed when a package is applied or upgraded.

Acceptance Criteria

  • As a user I should be able to define an Elasticsearch transform as part of a package.
  • The Elasticsearch transform should be started after it is added to the search database.
  • As a user I should be able to update the attributes of the transform, possibly including the name. This should not result in two transforms running.
  • As a user I should be able to delete a transform through an update to the package.
  • As a user I should be able to view statistics and information about a transform after deployment, using the Elasticsearch API or Kibana if available.

Sample Transform Creation Statements Captured From Kibana Devtools

PUT _transform/endpoint_host_metadata_transform
{
  "source": {
    "index": "metrics-endpoint.metadata-default"
  },
  "dest": {
    "index": "metrics-endpoint.metadata_current-default"
  },
  "pivot": {
    "group_by": {
      "agent.id": {
        "terms": {
          "field": "agent.id"
        }
      }
    },
    "aggregations": {
      "HostDetails": {
        "scripted_metric": {
          "init_script": "state.timestamp_latest = 0L; state.last_doc=''",
          "map_script": "def current_date = doc['@timestamp'].getValue().toInstant().toEpochMilli(); if (current_date > state.timestamp_latest) {state.timestamp_latest = current_date;state.last_doc = new HashMap(params['_source']);}",
          "combine_script": "return state",
          "reduce_script": "def last_doc = '';def timestamp_latest = 0L; for (s in states) {if (s.timestamp_latest > (timestamp_latest)) {timestamp_latest = s.timestamp_latest; last_doc = s.last_doc;}} return last_doc"
        }
      }
    }
  },
  "description": "collapse and update the latest document for each host",
  "frequency": "1m",
  "sync": {
    "time": {
      "field": "event.created",
      "delay": "60s"
    }
  }
}

POST _transform/endpoint_host_metadata_transform/_start
DELETE _transform/endpoint_host_metadata_transform

[Proposal] _dev folder

We are starting to implement test runners for various types of package tests in the elastic-package tool: elastic/elastic-package#15, elastic/elastic-package#16, etc. This is bringing up the need to keep files that are needed only at development time somewhere under the package root folder. Examples of such files could be test golden files, docker compose files, etc. As such files would be needed at package development time only, we would not want them to be bundled into the package when it is published to the package registry.

So I'm proposing one change to the package spec and one change to the spec format itself:

  • Enhancing the package spec to allow for a new _dev folder that could exist under the package root folder or under a dataset's folder, so essentially:

    <packageRootFolder>/
      _dev/
      dataset/
        <dataset>/
          _dev/
    

    Conceptually, _dev is similar to the _meta folders we see in Beats. However, I went with _dev to make it more explicit that the files under this folder are intended for development-time only. That said, I'm not married to the name _dev; happy to consider other names or other ideas that achieve the goal of creating a space for development-time-only files.

  • Adding an optional field in the package spec format called visibility. This field is applicable to file or folder elements in the specification format. It can have values such as private or public, defaulting to public. It indicates whether the file or folder being defined in the spec should exist in the public version of the package, i.e. the one available from the package registry (visibility: public) or not (visibility: private).

    Accordingly, the aforementioned _dev folder specification would be set to visibility: private in the spec.

[Idea] Add validation check against ECS fields

At the moment, a package can contain any fields. One important thing about ECS is that it can be extended with arbitrary fields, but those fields should not conflict with ECS, for example host vs host.name (keyword vs object). On the registry side we validate packages, and it would also be possible to validate that the fields defined in fields.yml do not conflict with ECS.

This is mainly to document the idea for now.

Make validation error messages more meaningful

We just had a validation failure in this PR: elastic/integrations#272. The failure looks like this to the package author:


[2020-11-02T09:28:35.769Z] packages/azure:

[2020-11-02T09:28:35.769Z] elastic-package lint

[2020-11-02T09:28:35.769Z] Lint the package

[2020-11-02T09:28:35.769Z] Error: linting package failed: found 3 validation errors:

[2020-11-02T09:28:35.769Z]    1. item [0f559cc0-f0d5-11e9-90ec-112a988266d5.json] is not allowed in folder [/var/lib/jenkins/workspace/Beats_integrations_PR-272/src/github.com/elastic/integrations/packages/azure/kibana/dashboard]

[2020-11-02T09:28:35.769Z]    2. item [41e84340-ec20-11e9-90ec-112a988266d5.json] is not allowed in folder [/var/lib/jenkins/workspace/Beats_integrations_PR-272/src/github.com/elastic/integrations/packages/azure/kibana/dashboard]

[2020-11-02T09:28:35.769Z]    3. item [87095750-f05a-11e9-90ec-112a988266d5.json] is not allowed in folder [/var/lib/jenkins/workspace/Beats_integrations_PR-272/src/github.com/elastic/integrations/packages/azure/kibana/dashboard]

This error message is not super helpful, as it doesn't give the package author any idea of how to make it go away. In this case I believe the fix is to prefix the asset IDs with the package name (e.g. renaming 0f559cc0-f0d5-11e9-90ec-112a988266d5.json to azure-0f559cc0-f0d5-11e9-90ec-112a988266d5.json). We need to make our error messages more actionable. In general, here are some guidelines on writing good error messages:

[Discuss] Avoiding duplication of ECS field definitions

Currently each dataset duplicates the definitions of the ECS fields that it uses (data type, descriptions, examples, etc.). This puts a burden on the package maintainers to copy the ECS definitions into the dataset and keep them in sync with ECS.

It would be simpler to develop and maintain a package if the dataset only required listing the names of the ECS fields that the module uses (and the ECS version). When the package is built, the full field definitions for the specified fields can be imported from ECS.

The dataset would declare:

  • ECS version
  • List of ECS fields (maybe allowing for patterns like host.*, but that's part of the discussion)

The build step would create field declarations in YAML format that conform to the package spec.
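
A hypothetical declaration along the lines of this proposal could look like the following (the key names are illustrative, not an agreed format):

ecs:
  version: 1.5.0
  fields:
    - host.name
    - source.ip
    - host.*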

Create initial specification files

Currently all packages are using format_version: 1.0.0. Create a set of specification files for this version. The files should define the structure of a package, including its folder structure, the expected files within folders, and the structure of the expected files as well.

Try to use an existing schema standard like JSON schema as much as possible.

Write tests for the spec spec (how meta!)

The goal is to catch any mistakes when a package spec is updated or a new version of a package spec is added.

Also, run these tests via the test target in the top-level Makefile so they may automatically be run by CI once it's set up (#6).

Add Canvas Asset

Canvas stores different objects as saved objects today. These assets should also be supported as part of the integration packages. The current assets found:

  • Canvas workpad
  • Canvas element
  • Canvas assets (images stored in Canvas)

Canvas is already used today in the example data. Having it as part of the packages would allow shipping these directly from the registry as well.

Add support for runtime fields in packages

Elasticsearch is soon planned to support runtime fields: elastic/elasticsearch#59332. I assume these cannot currently be specified in our fields.yml. We should add support for these fields in fields.yml, and on the Kibana side be able to create the correct Elasticsearch templates and Kibana index patterns.

Validate YAML files

All YAML files in the packages should be validated to make sure packages only contain valid YAML.

An example where this would have been useful can be found here: elastic/package-registry#421

Enforce data streams

As of now, data streams and inputs can always be enabled/disabled via a Kibana toggle switch.

This makes sense for existing integrations, but not so much for APM. E.g., APM records and ingests traces, so if you disable a traces data stream then APM wouldn't work.

We need a way in the spec to define that a data stream is always enabled, so that Kibana doesn't even show a toggle for it.

I suggest a simple boolean attribute force_enabled in the data stream manifest.yml, defaulting to false.
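
A sketch of how this could look in a data stream's manifest.yml (attribute name as suggested above, not final):

title: Application traces
type: traces
force_enabled: true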

[META] Semantic validations

The package spec allows for syntactic validations of packages. These validate the structure of package contents. They have been implemented via #14, #41, and several minor follow up PRs.

There is also a need for semantic validations. These validate that values found in package contents conform to further rules that may not be expressible in the spec itself. For example:

  • Validate that icon and screenshot sources exist: #31
  • Validate that dashboard IDs start with package names: #65
  • Ensure consistency between CHANGELOG and manifest: #167

Create checklist for adding new assets to packages

Today packages support a basic list of assets for the Elastic Stack. Moving forward, more assets like transforms (#23) need to be supported. Any addition of an asset needs changes in multiple places, and similar questions need to be answered. This issue is to discuss and create a checklist for adding new assets.

Here is a first attempt at what needs to be checked / discussed for each asset:

  • Package spec
    • What is the format of the assets in the package
    • What is the path the assets is stored under
    • Does it support any special configuration
    • Naming of the files and supported formats (for example json vs yaml)
  • Elasticsearch Asset
    • How is the asset created?
    • Does the asset to be created require any previous assets?
    • What permissions are needed to create and manage the asset?
  • Kibana Assets
    • Is it a saved object and if yes, what is its visibility? (space agnostic, shared, ...)
    • Is there anything related to the indexing strategy?
    • What permissions are needed to create and manage the asset?
  • Kibana Ingest Manager
    • What are the CRUD operations for the asset?
    • Any special behaviour on upgrade?
    • Under which name is the asset created / stored?

I filed this in the package-spec repo as it seems to be the central place where all the parts come together.

[discuss] Sorting of stream configs

Today when creating an integration config, the stream configs inside an input are sorted alphabetically. Unfortunately, in the case of the system package, this means configs like core or entropy, which are not used that often, show up at the top. Instead, memory and others should be at the top.

To solve this issue I'm proposing we add support for an optional "priority/order" config option in the stream definition.

This would then look similar to:

streams:
  - input: logs
    priority: 1

Kibana would then sort first by priority and then alphabetically. The ones without a priority would go at the end.


Validation: Kibana asset IDs should not contain `-ecs` suffix

Per the discussion starting from elastic/integrations#269 (comment), it was agreed that Kibana asset IDs in Integrations packages should not contain the -ecs suffix. Also, any asset titles/labels/markdown links/etc. should not contain the ECS suffix either (per @andrewkroh's suggestion here: elastic/integrations#339 (comment)). We should add validation to the spec for this.

This issue will be resolved via multiple PRs (please keep the list below updated with links to the various PRs)

  • Validate that Kibana object filenames do not contain the -ecs suffix:
  • Validate that titles/labels/markdown links/etc. within Kibana object files should not contain the ECS suffix:

Add documentation and handlebars validation for helpers

We just added the first handlebars helper for use in packages in elastic/kibana#72698, in order to support converting some Beats Filebeat modules to packages. We should have:

  1. documentation for the available helpers here for anyone writing a new package, and
  2. code for validating the handlebars templates prior to being pushed down to Kibana.
