
[DEPRECATED] IQ Success Metrics

Old instructions kept below for reference.

Overview

Nexus IQ Server has a number of REST APIs that let you automate certain tasks and quickly retrieve IQ Server data. One of those APIs is the Success Metrics Data API, which collects all the violations and other measurements and returns them as counters inside a JSON dictionary. To better capture the results, we have developed a Python script to collect, aggregate and process the counters into outcome-based metrics. We can use these outcome-based metrics to measure progression toward your Primary Desired Outcomes (PDOs).
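For context, the sketch below shows roughly what a raw call to the Success Metrics Data API looks like using Python's requests library. The endpoint path and payload fields here are assumptions based on the public IQ REST API documentation and may vary by IQ version; in practice, success_metrics.py makes these calls for you.

import requests

# Minimal sketch of a raw Success Metrics Data API call (illustrative only).
# Replace the URL and credentials with your own.
resp = requests.post(
    "http://iq.server.com:8070/api/v2/reports/metrics",
    auth=("user", "password"),
    headers={"Accept": "application/json", "Content-Type": "application/json"},
    json={
        "timePeriod": "WEEK",            # weekly aggregation
        "firstTimePeriod": "2020-W01",   # ISO week format
        "lastTimePeriod": "2020-W10",
        "applicationIds": [],            # empty lists = all apps / all orgs
        "organizationIds": [],
    },
)
resp.raise_for_status()
counters = resp.json()  # per-application entries containing the raw counters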

Understanding the Python script

Though the source code can be modified to suit your particular needs if necessary, the following is an explanation of the script in its current form.

The script actually consists of two files: success_metrics.py and reports.py.

success_metrics.py makes the API calls according to the command-line parameters and processes the counters to generate the more relevant outcome-based Success Metrics, saving them to a JSON file called successmetrics.json.

reports.py consumes the JSON file generated by success_metrics.py and produces different types of reports and graphs depending on the chosen PDO. The main output is one or more PDF reports containing graphs and data relevant to that PDO. Additionally, all graphs are saved as individual .png files for re-use in presentations.

Usage

[CSE demo animation (Gyazo)]

First we create a temporary folder to store the outputs:

mkdir ~/Documents/output

Then we pull the latest docker image from Docker Hub:

docker pull sonatypecommunity/iq-success-metrics:latest

Then we use the following docker command to generate the JSON file and the desired PDF reports. The -s switch stands for scope and is the number of weeks we want data from (here, the past 50 weeks); the -r switch stands for reports and generates an executive report and an app-level detailed table report. Do not forget to replace the URL with your IQ server's:

docker run --name iq-success-metrics --rm -it -v ~/Documents/output:/usr/src/app/output sonatypecommunity/iq-success-metrics:latest success_metrics.py -u http://iq.server.com:8070 -a user:password -s 50 -r

If you want to generate the executive and table reports just for security violations, use the -rs switch instead of -r. If you are only interested in licence obligations, use the -rl switch instead.
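For example, the security-only equivalent of the docker command above just swaps the final switch:

docker run --name iq-success-metrics --rm -it -v ~/Documents/output:/usr/src/app/output sonatypecommunity/iq-success-metrics:latest success_metrics.py -u http://iq.server.com:8070 -a user:password -s 50 -rs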

Troubleshooting

90% of issues lie with the URL format. Here is a quick guide to troubleshooting them:

  1. All URLs must NOT end in /. For example, -u http://iq.server.com:8070 is a valid URL, but -u http://iq.server.com:8070/ is invalid and will throw an error.

  2. Most of the time it is best to enter the URL without quotes (single ' or double "). If everything else fails, you can try quoting the URL, but only as a last resort.

  3. If you are using a URL with HTTP, then you must include the port, 8070. For example: -u http://iq.server.com:8070. Forgetting to add the port will throw an error.

  4. However, if you are using a reverse proxy, the proxy takes care of the port number, so you should not include it. For example: -u http://iq.server.com or -u https://iq.server.com.

  5. If you are using HTTPS, then you must use the -k switch to enable insecure mode, which disables SSL certificate validation so the IQ server does not need a verified SSL certificate.

For example, for HTTPS not using reverse proxy:

docker run --name iq-success-metrics --rm -it -v ~/Documents/output:/usr/src/app/output sonatypecommunity/iq-success-metrics:latest success_metrics.py -u https://iq.server.com:8070 -a user:password -s 50 -r -k

An equivalent example for HTTPS using reverse proxy would be:

docker run --name iq-success-metrics --rm -it -v ~/Documents/output:/usr/src/app/output sonatypecommunity/iq-success-metrics:latest success_metrics.py -u https://iq.server.com -a user:password -s 50 -r -k

If after trying all of this, you still encounter problems, contact your CSE.

Any warnings printed while the script runs can be ignored.
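If in doubt, the URL rules above can be checked mechanically. Below is a minimal Python sketch of those rules; it is illustrative only and not part of the tool.

from urllib.parse import urlparse

def check_iq_url(url, behind_reverse_proxy=False):
    # Rule 1: the URL must not end in /
    assert not url.endswith("/"), "URL must not end in /"
    parsed = urlparse(url)
    assert parsed.scheme in ("http", "https"), "URL must start with http:// or https://"
    # Rules 3 and 4: include the port unless a reverse proxy fronts the server
    if not behind_reverse_proxy:
        assert parsed.port is not None, "include the port, e.g. :8070"
    return url

check_iq_url("http://iq.server.com:8070")                        # valid
check_iq_url("https://iq.server.com", behind_reverse_proxy=True) # valid behind a proxy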

The Docker Hub repository for the docker image is here: https://hub.docker.com/r/sonatypecommunity/iq-success-metrics

For Windows users

The main difference for Windows users is in the path to your local folder to store the outputs. Below are the equivalent commands:

mkdir c:\temp

docker pull sonatypecommunity/iq-success-metrics:latest

docker run --name iq-success-metrics --rm -it -v c:\temp\:/usr/src/app/output sonatypecommunity/iq-success-metrics:latest success_metrics.py -u http://iq.server.com:8070 -a user:password -s 50 -r

The Troubleshooting section above also applies to Windows users. Just remember to adapt the path to your local folder to the Windows format, as shown above.

Advanced Usage

If you have thousands of apps, or you would like to produce a customised report just for a specific set of apps and/or orgs, then you will have to use different switches to achieve this.

If you are using the python script directly without Docker, you can get started by running the following command to display all the available options. The Docker equivalent is explained right after:

python3 success_metrics.py -h

Usage: python3 success_metrics.py [-h] [-a AUTH] [-s SCOPE] [-u URL] [-k] [-i APPID] [-o ORGID] [-p] [-r] [-rs] [-rl] [-d DATERANGE] [-snap SNAPSHOT]

The optional arguments are:
-h, --help (shows this help message and exits)
-a AUTH, --auth AUTH (in the format user:password, by default admin:admin123)
-s SCOPE, --scope SCOPE (number of weeks from current one to gather data from. Default value is six weeks)
-u URL, --url URL (URL for IQ server, by default http://localhost:8070)
-k, --insecure (Disable SSL Certificate Validation)
-i APPID, --appId APPID (list of application IDs, application Names, application Public IDs or combination thereof to filter from all available data. Default is all available data)
-o ORGID, --orgId ORGID (list of organization IDs, organization Names or combination thereof to filter from all available data. Default is all available data)
-p, --pretty (indents the JSON printout 4 spaces. Default is no indentation)
-r, --reports (generates the executive report and the table report for all violations)
-rs, --reportsSec (same as -r but only for Security violations)
-rl, --reportsLic (same as -r but only for Licensing violations)
-d DATERANGE, --dateRange DATERANGE (creates JSON for a specified date range [yyyy-mm-dd:yyyy-mm-dd]. Do not use in conjunction with -s option)
-snap SNAPSHOT, --snapshot SNAPSHOT (runs the script just for the apps present in the specified snapshot date yyyy-mm-dd)

A valid example would be:

python3 success_metrics.py -a admin:admin123 -s 50 -u http://localhost:8070 -i 'd8f63854f4ea4405a9600e34f4d4514e','Test App1','MyApp3' -o 'c6f2775a45d44d43a32621536e638a8e','The A Team' -p -r

This collects the past fifty weeks of data for the three applications listed ('d8f63854f4ea4405a9600e34f4d4514e', 'Test App1', 'MyApp3'), irrespective of them belonging to any particular organization. In addition, it collects the past fifty weeks of data for all the applications under the organizations 'c6f2775a45d44d43a32621536e638a8e' and 'The A Team'. The filters are combined with OR, so the collected data is the union of the three apps and the two organizations. The script then processes the data, indents the results in "pretty" format (indented 4 spaces), saves them to the JSON file successmetrics.json and uses it to generate the executive report and the table report for those apps and orgs.

A similar example but using date range would be:

python3 success_metrics.py -a admin:admin123 -d 2019-06-01:2020-05-05 -u http://localhost:8070 -i 'd8f63854f4ea4405a9600e34f4d4514e','Test App1','MyApp3' -o 'c6f2775a45d44d43a32621536e638a8e','The A Team' -p -r

The Docker equivalent for advanced usage

docker run --name iq-success-metrics --rm -it -v ~/Documents/output:/usr/src/app/output sonatypecommunity/iq-success-metrics:latest success_metrics.py -a admin:admin123 -s 50 -u http://host.docker.internal:8070 -i 'd8f63854f4ea4405a9600e34f4d4514e','Test App1','MyApp3' -o 'c6f2775a45d44d43a32621536e638a8e','The A Team' -p -r

Do not forget to replace http://host.docker.internal:8070 with the URL of your IQ server and use your own user:password instead of admin:admin123.

The Snapshot feature

Since release v4.13, the snapshot feature is available by using the switch -snap. Every time you run the script (from v4.13 onwards), the current list of unique IDs for all of the apps in the IQ server will be downloaded and added to the snapshot.json file inside the output folder with a timestamp. For example, if you run the script on 4th May 2020 and then again a month later on the 4th June 2020, you would get a snapshot.json like this:

{
"2020-05-04": ["64d87188e83f443ab219e05796884826", "c55c6016614047ca95d39a31320a60f3"], 

"2020-06-04": ["64d87188e83f443ab219e05796884826", "c55c6016614047ca95d39a31320a60f3", "3dc04244158f4b06baef2864ae5b8bfe", "1fa77b9cd9164862a7b062c28f70c882", "ff474edbdb04491c8bdc10d1ed43b76c", "1c08d2d27e874c0494b098238087941d", "d8f63854f4ea4405a9600e34f4d4514e", "091b5ae36c0144eb87df172bd338d834", "37f20d1fa8804f88b8e41b860c31b2be"]
}

You can see that in May, only two apps were onboarded and in June there are nine apps. If you want to check the progress only for those two original apps from 1st January 2020 till 4th June 2020, you just need to run the usual command but with the -snap switch, selecting the 4th May 2020 snapshot:

python3 success_metrics.py -a admin:admin123 -u http://localhost:8070 -d 2020-01-01:2020-06-04 -snap 2020-05-04 -r

Or using docker:

docker run --name iq-success-metrics --rm -it -v ~/Documents/output:/usr/src/app/output sonatypecommunity/iq-success-metrics:latest success_metrics.py -a admin:admin123 -u http://host.docker.internal:8070 -d 2020-01-01:2020-06-04 -snap 2020-05-04 -r
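To see which apps were onboarded between two snapshots, snapshot.json can also be inspected directly. A minimal sketch, assuming the file format shown above:

import json

with open("output/snapshot.json") as f:
    snapshots = json.load(f)

before = set(snapshots["2020-05-04"])
after = set(snapshots["2020-06-04"])

new_apps = after - before
print(len(new_apps), "apps onboarded between the two snapshots:")
for app_id in sorted(new_apps):
    print(" ", app_id)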

The Insights feature

Since release v5.1, the insights feature is available by calling the insights.py script. This script compares two json files and generates a pdf report with a detailed analysis providing insights into what happened between the two time periods covered.

Every time you run the success_metrics.py script (from v5.1 onwards), a time-stamped json file will be generated, with the filename format yyyy-mm-dd_successmetrics.json. If you run the success_metrics.py script multiple times on the same day, the time-stamped json file is overwritten, so only the most recent run of that day is kept.

Let's say that you ran the success_metrics.py script on 4th May 2020 and a month later on 4th June 2020. If you were on version v5.1 at those dates, you would have the files 2020-05-04_successmetrics.json and 2020-06-04_successmetrics.json in your output folder.

If you wanted to compare what has happened between those dates, you would simply run the following docker command to generate the pdf report for all violations (use -s for security violations only and -l for licensing violations only):

docker run --name iq-success-metrics --rm -it -v ~/Documents/output:/usr/src/app/output sonatypecommunity/iq-success-metrics:latest insights.py -before ./output/2020-05-04_successmetrics.json -after ./output/2020-06-04_successmetrics.json -all

It is important to select the correct json files with the correct date ranges. Choose the json file with the latest or most recent data for the -after switch and the json with the older data for the -before switch. Ideally, there should be one or more weeks of data overlapping between the two json files (the script is intelligent enough to pick that up and select the correct data).

Valid examples would be:

  • json1 with data from 1st Jan 2020 until 4th May 2020 (before) combined with json2 with data from 1st Jan 2020 until 8th June 2020 (after). In this case, the insights report would display data from the first ISO week containing or after 4th May 2020 until the last fully completed ISO week containing or before the 8th June 2020.

  • json1 with data from 1st Jan 2020 until 4th May 2020 (before) combined with json2 with data from 4th May 2020 until 8th June 2020 (after). In this case, the insights report would also display data from the first ISO week containing or after 4th May 2020 until the last fully completed ISO week containing or before the 8th June 2020 (both examples are equivalent).

It is not possible to compare two json files that have a gap in data between them (if they don't overlap). The best way of ensuring success is to have the same date for both the end date of the "before" file and the start date of the "after" file. e.g. "before" json file from 1st January 2020 until 30th April 2020 and "after" json file from 30th April 2020 until 30th June 2020. In this case, the 30th April 2020 was shared by both json files, so there is overlap and the insights report will run.
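A quick way to verify the overlap before running insights.py is to compare the weeks covered by the two files. A minimal sketch, assuming the summary structure described later in this document:

import json

def weeks_in_scope(path):
    # "weeks" inside "summary" lists the ISO week numbers covered by the file
    with open(path) as f:
        return set(json.load(f)["summary"]["weeks"])

before = weeks_in_scope("output/2020-05-04_successmetrics.json")
after = weeks_in_scope("output/2020-06-04_successmetrics.json")

overlap = before & after
if overlap:
    print("overlap OK:", sorted(overlap))
else:
    print("no overlapping weeks: insights.py cannot compare these files")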

Explaining the Success Metrics Data API

The Success Metrics Data API returns policy evaluation, violation and remediation data, aggregated monthly or weekly. The API uses the following common language in its return values:

API Legend:

  • Threat Level Low - Policy Threat Level 1
  • Threat Level Moderate - Policy Threat Level 2 - 3
  • Threat Level Severe - Policy Threat Level 4 - 7
  • Threat Level Critical - Policy Threat Level 8 - 10
  • Security Violation - Violation for which the policy constraint was on the Security Vulnerability Severity Score
  • License Violation - Violation for which the policy constraint was on the License or License Threat Group
  • Quality Violation - Violation for which the policy constraint was on the Age or Relative Popularity of a component
  • Other Violation - Violation for which the policy constraint was something other than a Security, License, or Quality constraint, such as a label

Here are the actual values returned from the REST call:

Dimensional Data

  • applicationId - Unique ID per application, assigned by IQ server
  • applicationPublicId - ID, assigned by customer
  • applicationName - Name, assigned by customer
  • organizationId - Unique Organization ID, assigned by IQ server
  • organizationName - Organization name, assigned by customer
  • timePeriodStart - Start of the time period of aggregation of the data (usually weekly or monthly from this date), in ISO 8601 date format

Scan Data

  • evaluationCount - Number of evaluations or scans for a particular application

Violation Data

Mean Time To Resolution (MTTR) in milliseconds for Low (Threat Level 1), Moderate (Threat Levels 2-3), Severe (Threat Levels 4-7) or Critical (Threat Levels 8-10) violations:
  • mttrLowThreat
  • mttrModerateThreat
  • mttrSevereThreat
  • mttrCriticalThreat
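Note that the API reports MTTR in milliseconds, while the generated reports and the successmetrics.json summaries use days. The conversion is simply:

# convert an MTTR value from the API (milliseconds) into days
MS_PER_DAY = 1000 * 60 * 60 * 24   # 86,400,000 ms in a day
mttr_ms = 864_000_000              # example value from the API
mttr_days = mttr_ms / MS_PER_DAY   # 10.0 days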

Number of newly discovered Security/License/Quality/Other violations during the time period for Low/Moderate/Severe/Critical threat levels (Note: does not include violations that existed in previous time periods)

  • discoveredCountSecurityLow
  • discoveredCountSecurityModerate
  • discoveredCountSecuritySevere
  • discoveredCountSecurityCritical
  • discoveredCountLicenseLow
  • discoveredCountLicenseModerate
  • discoveredCountLicenseSevere
  • discoveredCountLicenseCritical
  • discoveredCountQualityLow
  • discoveredCountQualityModerate
  • discoveredCountQualitySevere
  • discoveredCountQualityCritical
  • discoveredCountOtherLow
  • discoveredCountOtherModerate
  • discoveredCountOtherSevere
  • discoveredCountOtherCritical

Number of "fixed" Security/License/Quality/Other violations during the time period for Low/Moderate/Severe/Critical threat levels (Note: fixed is defined as a specific violation that existed in the immediately prior scan and now no longer appears in the subsequent scan)

  • fixedCountSecurityLow
  • fixedCountSecurityModerate
  • fixedCountSecuritySevere
  • fixedCountSecurityCritical
  • fixedCountLicenseLow
  • fixedCountLicenseModerate
  • fixedCountLicenseSevere
  • fixedCountLicenseCritical
  • fixedCountQualityLow
  • fixedCountQualityModerate
  • fixedCountQualitySevere
  • fixedCountQualityCritical
  • fixedCountOtherLow
  • fixedCountOtherModerate
  • fixedCountOtherSevere
  • fixedCountOtherCritical

Number of waived Security/License/Quality/Other violations during the time period for Low/Moderate/Severe/Critical threat levels.

  • waivedCountSecurityLow
  • waivedCountSecurityModerate
  • waivedCountSecuritySevere
  • waivedCountSecurityCritical
  • waivedCountLicenseLow
  • waivedCountLicenseModerate
  • waivedCountLicenseSevere
  • waivedCountLicenseCritical
  • waivedCountQualityLow
  • waivedCountQualityModerate
  • waivedCountQualitySevere
  • waivedCountQualityCritical
  • waivedCountOtherLow
  • waivedCountOtherModerate
  • waivedCountOtherSevere
  • waivedCountOtherCritical

Number of "open" Security/License/Quality/Other violations at the end of the time period for Low/Moderate/Severe/Critical threat levels.

Open counts accumulate from previous time periods (weeks/months) and constitute the technical debt backlog to fix/remediate. For example, if you discovered 10 Security Critical violations each week for 3 weeks (total of 30 violations) and you fixed and/or waived a total of 10 Security Critical violations at the end of those 3 weeks, the openCountAtTimePeriodEndSecurityCritical counter would show 20 (Security Critical open violations).

  • openCountAtTimePeriodEndSecurityLow
  • openCountAtTimePeriodEndSecurityModerate
  • openCountAtTimePeriodEndSecuritySevere
  • openCountAtTimePeriodEndSecurityCritical
  • openCountAtTimePeriodEndLicenseLow
  • openCountAtTimePeriodEndLicenseModerate
  • openCountAtTimePeriodEndLicenseSevere
  • openCountAtTimePeriodEndLicenseCritical
  • openCountAtTimePeriodEndQualityLow
  • openCountAtTimePeriodEndQualityModerate
  • openCountAtTimePeriodEndQualitySevere
  • openCountAtTimePeriodEndQualityCritical
  • openCountAtTimePeriodEndOtherLow
  • openCountAtTimePeriodEndOtherModerate
  • openCountAtTimePeriodEndOtherSevere
  • openCountAtTimePeriodEndOtherCritical
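The worked example above, expressed as a quick sanity check (illustrative arithmetic only):

discovered_per_week = [10, 10, 10]  # Security Critical violations found each week
fixed_or_waived = 10                # total dealt with over the same three weeks
open_at_period_end = sum(discovered_per_week) - fixed_or_waived
print(open_at_period_end)           # 20 open Security Critical violations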

Understanding the successmetrics.json file

The successmetrics.json file is currently composed of four dictionaries:

  • summary: this is the overall summary that collates and aggregates all the data together, giving the global view. This dictionary is the main one used for generating the global reports.
  • apps: this is a list of all the applications within scope. It contains the raw data coming from the API call (aggregations) and also a summary view for that app, a licences view and a security view.
  • licences: this is the same as summary but exclusively for licence violations.
  • security: this is the same as summary but exclusively for security violations.

NOTE: adding the licences and security data together will not produce the overall summary data because there are also quality and other types of violations that are included in summary but not in licences or security.
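A minimal sketch for inspecting the structure of the file (assuming it sits in the output folder):

import json

with open("output/successmetrics.json") as f:
    data = json.load(f)

print(list(data))                                  # ['summary', 'apps', 'licences', 'security']
print(len(data["apps"]), "applications in scope")  # apps is a list, one entry per app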

Understanding summary

If we go inside summary we can see the following:

  • appNames: this is a list of all the application names within scope.
  • orgNames: this is a list of all the organization names within scope. The entries match one-for-one each one of the applications, so there will be duplicate organization names.
  • weeks: this is the range of weeks in scope, in ISO format (week number). This was selected when running the success_metrics.py script and was set by default to 6 weeks, so if we were in the middle of week 38, we would request the IQ server for weeks 32, 33, 34, 35, 36 and 37 (the past six fully completed weeks).
  • timePeriodStart: this is a list of the weeks in scope in normal date format instead of ISO format.
  • appNumberScan: this is a list of the number of applications that have been scanned in each of the weeks in scope.
  • appOnboard: this is a list of the number of applications onboarded in the IQ server in each of the weeks in scope.
  • weeklyScans: this is a list of the total number of scans per week in scope.
  • mttrLowThreat: this is a list of the overall MTTR (Mean Time To Resolution) measured in days for all Low Threat vulnerabilities per week.
  • mttrModerateThreat: this is a list of the overall MTTR (Mean Time To Resolution) measured in days for all Moderate Threat vulnerabilities per week.
  • mttrSevereThreat: this is a list of the overall MTTR (Mean Time To Resolution) measured in days for all Severe Threat vulnerabilities per week.
  • mttrCriticalThreat: this is a list of the overall MTTR (Mean Time To Resolution) measured in days for all Critical Threat vulnerabilities per week.
  • discoveredCounts: this is a dictionary containing all the combined (Security, License, Quality & Other) discovered vulnerabilities for each threat level. LIST is the aggregation of all threat level violations where each element of the list is one of the applications in scope. TOTAL is a list aggregating all threat level violations for all applications in scope combined where each element of the list is one of the weeks in scope.
  • fixedCounts: this is a dictionary containing all the combined (Security, License, Quality & Other) fixed vulnerabilities for each threat level. LIST is the aggregation of all threat level violations where each element of the list is one of the applications in scope. TOTAL is a list aggregating all threat level violations for all applications in scope combined where each element of the list is one of the weeks in scope.
  • waivedCounts: this is a dictionary containing all the combined (Security, License, Quality & Other) waived vulnerabilities for each threat level. LIST is the aggregation of all threat level violations where each element of the list is one of the applications in scope. TOTAL is a list aggregating all threat level violations for all applications in scope combined where each element of the list is one of the weeks in scope.
  • openCountsAtTimePeriodEnd: this is a dictionary containing all the combined (Security, License, Quality & Other) vulnerabilities for each threat level that have not yet been fixed or waived (this is the current backlog or risk exposure). LIST is the aggregation of all threat level violations where each element of the list is one of the applications in scope. TOTAL is a list aggregating all threat level violations for all applications in scope combined where each element of the list is one of the weeks in scope.
  • riskRatioCritical: this is a list calculating the Critical risk ratio (number of Critical vulnerabilities divided by the total number of applications onboarded) for each week in scope.
  • riskRatioSevere: this is a list calculating the Severe risk ratio (number of Severe vulnerabilities divided by the total number of applications onboarded) for each week in scope.
  • riskRatioModerate: this is a list calculating the Moderate risk ratio (number of Moderate vulnerabilities divided by the total number of applications onboarded) for each week in scope.
  • riskRatioLow: this is a list calculating the Low risk ratio (number of Low vulnerabilities divided by the total number of applications onboarded) for each week in scope.
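The risk ratios can be read back alongside the fields they are derived from. A minimal sketch, assuming the summary structure described above:

import json

with open("output/successmetrics.json") as f:
    summary = json.load(f)["summary"]

# riskRatioCritical[i] = Critical violations in week i / apps onboarded in week i
for week, onboarded, ratio in zip(summary["weeks"], summary["appOnboard"], summary["riskRatioCritical"]):
    print(f"week {week}: {onboarded} apps onboarded, Critical risk ratio {ratio}")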

Understanding apps

If we go inside apps, we can see that the first element in the list (element 0) has an applicationId, applicationPublicId, applicationName, organizationId and organizationName, which identify this particular application within a specific organization.

Then we can see the following:

  • aggregations: this is the raw data collected by the API call. All the values inside aggregations are explained in the section "Explaining the Success Metrics Data API" above.
  • summary: this is the summary of all the outcome-based success metrics resulting from processing the raw data from the API call. More on this below.
  • licences: this is the same as summary but exclusively for licence violations (for this particular app)
  • security: this is the same as summary but exclusively for security violations (for this particular app)

Now it is time to explore the summary dictionary in more detail. Each of its fields is explained below:

  • weeks: this is a list of all the weeks in ISO format that contain data. It is possible that a particular app was not scanned during one or more of the weeks in scope
  • fixedRate: this is the YTD weekly rolling average (in percentage) of the Fixed Rate for Security/License/Quality/Other vulnerabilities combined, for all Low/Moderate/Severe/Critical threat levels combined for that particular app. fixedRate is calculated as fixedCounts / openCountsAtTimePeriodEnd (for the previous week) in percentage. For example, if you fixed 5 Security Critical vulnerabilities in week 2 and at the end of week 1 you had 50 open, the Fixed Rate would be 10%.
  • waivedRate: this is the YTD weekly rolling average (in percentage) of the Waived Rate for Security/License/Quality/Other vulnerabilities combined, for all Low/Moderate/Severe/Critical threat levels combined for that particular app. waivedRate is calculated as waivedCounts / openCountsAtTimePeriodEnd (for the previous week) in percentage. For example, if you waived 5 Security Critical vulnerabilities in week 2 and at the end of week 1 you had 50 open, the Waived Rate would be 10%.
  • dealtRate: this is the YTD weekly rolling average (in percentage) of the Dealt-with Rate for Security/License/Quality/Other vulnerabilities combined, for all Low/Moderate/Severe/Critical threat levels combined for that particular app. dealtRate is calculated as (fixedCounts + waivedCounts) / openCountsAtTimePeriodEnd (for the previous week) in percentage. For example, if you fixed 5 and waived 15 Security Critical vulnerabilities in week 2 and at the end of week 1 you had 100 open, the Dealt-with Rate would be 20% for Security Critical vulnerabilities (see the sketch after this list).
  • FixRate: this is the overall combined Fix rate over all the weeks in scope.
  • WaiveRate: this is the overall combined Waive rate over all the weeks in scope.
  • DealtRate: this is the overall combined Dealt rate over all the weeks in scope.
  • FixPercent: this is the unitary percentage (0.5 = 50%) of all dealt-with vulnerabilities that were fixed for that particular app.
  • WaiPercent: this is the unitary percentage (0.5 = 50%) of all dealt-with vulnerabilities that were waived for that particular app. Please note that FixPercent + WaiPercent = 1.
  • evaluationCount: this is the number of evaluations or scans that were performed on that particular app. avg provides the overall average number of scans over the weeks in scope and rng provides the isolated scans/week.
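To make the rate definitions concrete, here is the arithmetic from the bullets above as a short sketch (illustrative values, not the script's code):

# rates are computed against the open count at the end of the *previous* week
open_prev_week = 100   # open violations at the end of week 1
fixed_this_week = 5    # violations fixed during week 2
waived_this_week = 15  # violations waived during week 2

fixed_rate = 100 * fixed_this_week / open_prev_week                       # 5.0 %
waived_rate = 100 * waived_this_week / open_prev_week                     # 15.0 %
dealt_rate = 100 * (fixed_this_week + waived_this_week) / open_prev_week  # 20.0 %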

The following metrics are dictionaries and inside them, they have the avg (average value) and rng (range, or isolated values per week of data) parameters. Some of them go into more detail, with a selection of TOTAL, SECURITY, LICENSE, QUALITY and OTHER violation types and LOW, MODERATE, SEVERE and CRITICAL threat levels.

  • mttrLowThreat: this is the Mean Time To Resolution (measured in days instead of milliseconds) for Low threat level violations for that particular app. avg provides the overall average of the weeks in scope and rng provides the isolated MTTR values.
  • mttrModerateThreat: this is the Mean Time To Resolution (measured in days instead of milliseconds) for Moderate threat level violations for that particular app. avg provides the overall average of the weeks in scope and rng provides the isolated MTTR values.
  • mttrSevereThreat: this is the Mean Time To Resolution (measured in days instead of milliseconds) for Severe threat level violations for that particular app. avg provides the overall average of the weeks in scope and rng provides the isolated MTTR values.
  • mttrCriticalThreat: this is the Mean Time To Resolution (measured in days instead of milliseconds) for Critical threat level violations for that particular app. avg provides the overall average of the weeks in scope and rng provides the isolated MTTR values.
  • discoveredCounts: this is the number of discovered vulnerabilities of a particular type (TOTAL, SECURITY, LICENSE, QUALITY, OTHER), of a particular threat level (TOTAL, LOW, MODERATE, SEVERE, CRITICAL) for that particular app. avg provides the overall average number of vulnerabilities of that type and threat level and rng provides the isolated number per week.
  • fixedCounts: this is the number of fixed vulnerabilities of a particular type (TOTAL, SECURITY, LICENSE, QUALITY, OTHER), of a particular threat level (TOTAL, LOW, MODERATE, SEVERE, CRITICAL) for that particular app. avg provides the overall average number of vulnerabilities of that type and threat level and rng provides the isolated number per week.
  • waivedCounts: this is the number of waived vulnerabilities of a particular type (TOTAL, SECURITY, LICENSE, QUALITY, OTHER), of a particular threat level (TOTAL, LOW, MODERATE, SEVERE, CRITICAL) for that particular app. avg provides the overall average number of vulnerabilities of that type and threat level and rng provides the isolated number per week.
  • openCountsAtTimePeriodEnd: this is the number of open vulnerabilities of a particular type (TOTAL, SECURITY, LICENSE, QUALITY, OTHER), of a particular threat level (TOTAL, LOW, MODERATE, SEVERE, CRITICAL) for that particular app. avg provides the overall average number of vulnerabilities of that type and threat level and rng provides the isolated number per week. Open counts accumulate from previous time periods (weeks/months) and constitute the technical debt backlog to fix/remediate. For example, if you discovered 10 Security Critical violations each week for 3 weeks (total of 30 violations) and you fixed and/or waived a total of 10 Security Critical violations at the end of those 3 weeks, the openCountAtTimePeriodEndSecurityCritical counter would show 20 (Security Critical open violations).

Understanding licences

The structure is identical to summary with data being exclusive to licence violations.

Understanding security

The structure is identical to summary with data being exclusive to security violations.

Contributing

If you too want to speed up the pace of software development by working on this project, jump on in! Before you start work, create a new issue, or comment on an existing issue, to let others know you are working on it!

The Fine Print

It is worth noting that this is NOT SUPPORTED by Sonatype, and is a contribution of ours to the open source community (read: you!)

Don't worry, using this community item does not "void your warranty". In a worst case scenario, you may be asked by the Sonatype Support team to remove the community item in order to determine the root cause of any issues.

Remember:

  • Use this contribution at the risk tolerance that you have
  • Do NOT file Sonatype support tickets related to iq-success-metrics
  • DO file issues here on GitHub, so that the community can pitch in

Phew, that was easier than I thought. Last but not least of all:

Have fun creating and using this plugin and the Nexus platform; we are glad to have you here!

Getting help

Looking to contribute to our code but need some help? There are a few ways to get information:

