angle-grinder's Introduction

angle-grinder

Slice and dice log files on the command line.

Angle-grinder allows you to parse, aggregate, sum, average, min/max, percentile, and sort your data. You can see it, live-updating, in your terminal. Angle grinder is designed for when, for whatever reason, you don't have your data in graphite/honeycomb/kibana/sumologic/splunk/etc. but still want to be able to do sophisticated analytics.

Angle grinder can process well above 1M rows per second (simple pipelines as high as 5M), so it's usable for fairly meaty aggregation. The results will live update in your terminal as data is processed. Angle grinder is a bare bones functional programming language coupled with a pretty terminal UI.

overview gif


Installation

Binaries are available for Linux and OSX. Many more platforms (including Windows) are available if you compile from source. In all of the commands below, the resulting binary will be called agrind. Starting with v0.9.0, agrind can self-update via the --self-update flag. Thanks to the many volunteers who maintain angle-grinder on different package managers & environments!

macOS

Brew

brew install angle-grinder

Macports

sudo port selfupdate
sudo port install angle-grinder

FreeBSD

pkg install angle-grinder

Linux (any MUSL compatible variant)

curl -L https://github.com/rcoh/angle-grinder/releases/download/v0.18.0/agrind-x86_64-unknown-linux-musl.tar.gz \
  | tar Ozxf - \
  | sudo tee /usr/local/bin/agrind > /dev/null && sudo chmod +x /usr/local/bin/agrind
  
agrind --self-update  

Cargo (most platforms)

If you have Cargo installed, you can compile and install from source (works with stable Rust >= 1.26):

cargo install ag

Query Syntax

An angle grinder query is composed of filters followed by a series of operators. The filters select the lines from the input stream to be transformed by the operators. Typically, the initial operators will transform the data in some way by parsing fields or JSON from the log line. The subsequent operators can then aggregate or group the data via operators like sum, average, percentile, etc.

agrind '<filter1> [... <filterN>] | operator1 | operator2 | operator3 | ...'

A simple query that operates on JSON logs and counts the number of logs per level could be:

agrind '* | json | count by log_level'
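
A slightly longer pipeline in the same spirit, combining a filter, parsing, and aggregation (the [user_id=*] pattern and field names are illustrative assumptions, not a real log format):

agrind '"ERROR" | parse "[user_id=*]" as user_id | count by user_id | sort by _count desc'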

Escaping Field Names

Field names containing spaces, periods, or quotes must be escaped using ["<FIELD>"]:

agrind '* | json | count by ["date received"], ["grpc.method"]

Filters

There are three basic filters:

  • *: Match all logs
  • filter-me* (with no quotes) is a case-insensitive match that can include wildcards
  • "filter-me" (in quotes) is a case-sensitive match (no wildcards, * matches literal * , filter-me, or "filter me!".

Filters can be combined with AND, OR and NOT

("ERROR" OR WARN*) AND NOT staging | count

Sub-expressions must be grouped in parentheses. Only lines that match all filters will be passed to the subsequent operators.

filter.gif

Aliases

Starting with v0.12.0, angle grinder supports aliases: pre-built pipelines to simplify common tasks or formats. The only alias currently defined is apache, which parses Apache logs. Adding more aliases is one of the easiest ways to contribute!

Examples:

* | apache | count by status

Operators

Non Aggregate Operators

These operators map each input row to at most one output row: 1 row in, 0 or 1 rows out.

JSON

json [from other_field]: Extract json-serialized rows into fields for later use. If the row is not valid JSON, then it is dropped. Optionally, from other_field can be specified. Nested JSON structures are supported out of the box. Simply access nested values with .key[index], for example, .servers[6]. Negative indexing is also supported.

Examples:

* | json
* | parse "INFO *" as js | json from js

Given input like:

{"key": "blah", "nested_key": {"this": "that"}}
* | json | count_distinct(nested_key.this)
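
Array elements can be addressed the same way, and negative indices count from the end. A hedged sketch, assuming input with a servers array:

{"servers": ["web-001", "web-002", "web-003"]}
* | json | count by servers[0]
* | json | count by servers[-1]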

json.gif

Logfmt

logfmt [from other_field]: Extract logfmt-serialized rows into fields for later use. If the row is not valid logfmt, then it is dropped. Optionally, from other_field can be specified. Logfmt is an output format commonly used by Heroku and Splunk, described at https://www.brandur.org/logfmt.

Examples:

* | logfmt

Given input like:

{"key": "blah", "nested_key": "some=logfmt data=more"}
* | json | logfmt from nested_key | fields some
Split

split[(input_field)] [on separator] [as new_field]: Split the input on the separator (default is ,). The output is an array type. If neither input_field nor new_field is specified, the result is placed in the key _split.

Examples:

* | split on " "

Given input like:

INFO web-001 influxd[188053]: 127.0.0.1 "POST /write HTTP/1.0" 204

Output:

[_split=[INFO, web-001, influxd[188053]:, 127.0.0.1, POST /write HTTP/1.0, 204]]

If input_field is used, and there is no new_field specified, then the input_field will be overridden with the split data-structure. For example:

* | parse "* *" as level, csv | split(csv)

Given input like:

INFO darren,hello,50
WARN jonathon,good-bye,100

Will output:

[csv=[darren, hello, 50]]        [level=INFO]
[csv=[jonathon, good-bye, 100]]        [level=WARN]

Other examples:

* | logfmt | split(raw) on "blah" as tokens | sum(tokens[1])
Parse

parse "* pattern * otherpattern *" [from field] as a,b,c [nodrop]: Parse text that matches the pattern into variables. Lines that don't match the pattern will be dropped unless nodrop is specified. * is equivalent to regular expression .* and is greedy. By default, parse operates on the raw text of the message. With from field_name, parse will instead process input from a specific column. Any whitespace in the parse expression will match any whitespace character in the input text (eg. a literal tab).

Examples:

* | parse "[status_code=*]" as status_code

parse.gif

Parse Regex

parse regex "<regex-with-named-captures>" [from field] [nodrop]: Match the input text against a regular expression and populate the record with the named captures. Lines that don't match the pattern will be dropped unless nodrop is specified. By default, parse operates on the raw text of the message. With from field_name, parse will instead process input from a specific column.

Notes:

  • Only named captures are supported. If the regular expression includes any unnamed captures, an error will be raised.
  • The Rust regular expression syntax is used.
  • Escape sequences do not require an extra backslash (i.e. \w works as-is).

Examples: To parse the phrase "Hello, ...!" and capture the value of the "..." in the name field:

* | parse regex "Hello, (?P<name>\w+)"
Fields

fields [only|except|-|+] a, b: Drop fields a, b or include only a, b depending on specified mode.

Examples: Drop all fields except event and timestamp

* | json | fields + event, timestamp

Drop only the event field

* | fields except event
Where

where <bool-expr>: Drop rows where the condition is not met. The condition must be an expression that returns a boolean value. The expression can be as simple as a field name or a comparison (i.e. ==, !=, <=, >=, <, >) between fields and literal values (i.e. numbers, strings). The '!' operator can be used to negate the result of a sub-expression. Note that None == None, so a row where both the left and right sides match a non-existent key will match.

Examples

* | json | where status_code >= 400
* | json | where user_id_a == user_id_b
* | json | where url != "/hostname"
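
The '!' operator mentioned above negates a sub-expression; a small sketch (field name assumed):

* | json | where !(status_code >= 500)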
Limit

limit #: Limit the number of rows to the given amount. If the number is positive, only the first N rows are returned. If the number is negative, the last N rows are returned.

Examples

* | limit 10
* | limit -10
Field Expression

<expr> as <name>: The given expression is evaluated and the result is stored in a field with the given name for the current row. The expression can be made up of the following:

  • +, -, *, /: Mathematical operators with the normal precedence rules. The operators work on numeric values and strings that can automatically be converted to a number. In addition, these operators work for date-time and duration values when appropriate. For example, you can take the difference between two date-times, but cannot add them together.
  • ==, != (or <>), <=, >=, <, >: Boolean operators work on most data types.
  • and, &&, or, ||: Short-circuiting logical operators.
  • <field>: The name of a field in the current row. If the row does not contain the given field, an error will be reported.
  • Parentheses to group operations

The following functions are supported within expressions:

  • Mathematical functions: abs(), acos(), asin(), atan(), atan2(), cbrt(), ceil(), cos(), cosh(), exp(), expm1(), floor(), hypot(), log(), log10(), log1p(), round(), sin(), sinh(), sqrt(), tan(), tanh(), toDegrees(), toRadians()
  • concat(arg0, ..., argN) - Concatenate the arguments into a string
  • contains(haystack, needle) - Return true if the haystack contains the needle.
  • length(str) - Returns the number of characters in "str".
  • now() - Returns the current date and time.
  • num(value) - Returns the given value as a number.
  • parseDate(str) - Attempt to parse a date from the given string.
  • parseHex(str) - Attempt to convert a hexadecimal string into an integer.
  • substring(str, startOffset, [endOffset]) - Returns the part of the string specified by the given starting offset up to the end offset (if specified).
  • toLowerCase(str) - Returns the lowercase version of the string.
  • toUpperCase(str) - Returns the uppercase version of the string.
  • isNull(value) - Returns true if value is null, false otherwise.
  • isEmpty(value) - Returns true if value is null or an empty string, false otherwise.
  • isBlank(value) - Returns true if value is null, an empty string, or a whitespace-only string, false otherwise.
  • isNumeric(str) - Returns true if the given string is a number.

Examples: Multiply value by 100 to get the percentage:

* | json | value * 100 as percentage
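
A few more sketches of the expression features described above; all field names here are assumptions:

* | json | concat(first_name, " ", last_name) as full_name
* | json | toLowerCase(method) as method
* | json | parseDate(end_ts) - parseDate(start_ts) as duration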
if Operator

if(<condition>, <value-if-true>, <value-if-false>): Choose between two values based on the provided condition.

Examples

To get byte counts for successful requests:

* | json | if(status == 200, sc_bytes, 0) as ok_bytes

Aggregate Operators

Aggregate operators group and combine your data by 0 or more key fields. The same query can include multiple aggregates. The general syntax is:

(operator [as renamed_column])+ [by key_col1, key_col2]

In the simplest form, key fields refer to columns, but they can also be generalized expressions (see examples below).

Examples:

* | count
* | json | count by status_code
* | json | count, p50(response_ms), p90(response_ms) by status_code
* | json | count as num_requests, p50(response_ms), p90(response_ms) by status_code
* | json | count, p50(response_ms), p90(response_ms), count by status_code >= 400, url

There are several aggregate operators available.

Count

count[(condition)] [as count_column]: Counts the number of input rows. Output column defaults to _count. Optionally, you can provide a condition -- this will count all rows for which the condition evaluates to true.

Examples:

Count number of rows by source_host:

* | count by source_host

Count number of source_hosts:

* | count by source_host | count

Count the number of info vs. error logs:

* | json | count(level == "info") as info_logs, count(level == "error") as error_logs
Sum

sum(column) [as sum_column]: Sum values in column. If the value in column is non-numeric, the row will be ignored.

Examples:

* | json | sum(num_records) by action
Min

min(column) [as min_column] [by a, b] : Compute the min of values in column. If the value in column is non-numeric, the row will be ignored.

Examples:

* | json | min(response_time)
Average

average(column) [as average_column] [by a, b] : Average values in column. If the value in column is non-numeric, the row will be ignored.

Examples:

* | json | average(response_time)
Max

max(column) [as max_column] [by a, b] : Compute the max of values in column. If the value in column is non-numeric, the row will be ignored.

Examples:

* | json | max(response_time)
Percentile

pXX(column): calculate the XXth percentile of column

Examples:

* | json | p50(response_time), p90(response_time) by endpoint_url, status_code
Sort

sort by a, [b, c] [asc|desc]: Sort aggregate data by a collection of columns. Defaults to ascending.

Examples:

* | json | count by endpoint_url, status_code | sort by endpoint_url desc

In addition to columns, sort can also sort by arbitrary expressions.

* | json | sort by num_requests / num_responses
* | json | sort by length(endpoint_url)
Timeslice

timeslice(<timestamp>) <duration> [as <field>]: Truncates a timestamp to the given duration to allow for partitioning messages into slices of time. The timestamp parameter must be a date value, such as that returned by the parseDate() function. The duration is an amount followed by one of the following units:

  • ns - nanoseconds
  • us - microseconds
  • ms - milliseconds
  • s - seconds
  • m - minutes
  • h - hours
  • d - days
  • w - weeks

The resulting timestamp is placed in the _timeslice field by default, or in the field specified after the as keyword.

Examples:

* | json | timeslice(parseDate(ts)) 5m
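
A sketch that buckets, counts, and orders by the slice, using the as form described above (the ts field name is an assumption):

* | json | timeslice(parseDate(ts)) 1h as hour | count by hour | sort by hour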
Total

total(a) [as renamed_total]: Compute the running total of a given field. Total does not currently support grouping!

Examples:

* | json | total(num_requests) as tot_requests
Count Distinct

count_distinct(a): Count distinct values of column a. Warning: this is not fixed memory. Be careful about processing too many groups.

Examples:

* | json | count_distinct(ip_address)

Example Queries

  • Count the number of downloads of angle-grinder by release (with special guest jq)
curl  https://api.github.com/repos/rcoh/angle-grinder/releases  | \
   jq '.[] | .assets | .[]' -c | \
   agrind '* | json
         | parse "download/*/" from browser_download_url as version
         | sum(download_count) by version | sort by version desc'

Output:

version       _sum
-----------------------
v0.6.2        0
v0.6.1        4
v0.6.0        5
v0.5.1        0
v0.5.0        4
v0.4.0        0
v0.3.3        0
v0.3.2        2
v0.3.1        9
v0.3.0        7
v0.2.1        0
v0.2.0        1
  • Take the 50th percentile of response time by url:
tail -F my_json_logs | agrind '* | json | p50(response_time) by url'
  • Count the number of distinct status codes by url:
tail -F my_json_logs | agrind '* | json | count_distinct(status_code) by url'

More example queries can be found in the tests folder

Rendering

Non-aggregate data is simply written row-by-row to the terminal as it is received:

tail -f live_pcap | agrind '* | parse "* > *:" as src, dest | parse "length *" as length'
[dest=111.221.29.254.https]        [length=0]        [src=21:50:18.458331 IP 10.0.2.243.47152]
[dest=111.221.29.254.https]        [length=310]      [src=21:50:18.458527 IP 10.0.2.243.47152]

Alternate rendering formats can be provided with the --output flag. Options:

  • --output json: JSON output
  • --output logfmt: logfmt style output (k=v)
  • --output format=<rust formatter>: This flag uses rust string formatting syntax. For example:
    tail -f live_pcap | agrind --format '{src} => {dest} | length={length}' '* | parse "* > *:" as src, dest | parse "length *" as length'
    21:50:18.458331 IP 10.0.2.243.47152 => 111.221.29.254.https | length=0
    21:50:18.458527 IP 10.0.2.243.47152 => 111.221.29.254.https | length=310
    

Aggregate data is written to the terminal and will live-update until the stream ends:

k2                  avg
--------------------------------
test longer test    500.50
test test           375.38
alternate input     4.00
hello               3.00
hello thanks        2.00

The renderer will do its best to keep the data nicely formatted as it changes, and the number of output rows is limited to the height of your terminal. Currently, it refreshes at about 20 Hz.

The renderer can detect whether or not the output is a tty -- if you write to a file, it will print once when the pipeline completes.

Contributing

angle-grinder builds with Rust >= 1.26. rustfmt is required when submitting PRs (rustup component add rustfmt).

There are a number of ways you can contribute:

  • Defining new aliases for common log formats or actions
  • Adding new special purpose operators
  • Improve documentation of existing operators + providing more usage examples
  • Provide more test cases of real queries on real world data
  • Tell more people about angle grinder!
A typical development loop:

cargo build
cargo test
cargo install --path .
agrind --help
... write some code!

cargo fmt

git commit ... etc.

When submitting PRs, please run cargo fmt -- this is necessary for the CI suite to pass. You can install rustfmt with rustup component add rustfmt if it's not already in your toolchain.

See the following projects and open issues for specific potential improvements/bugs.

Project: Improving Error Reporting

Usability can be greatly improved by accurate and helpful error messages for query-related issues. If you have struggled to figure out why a query is not working correctly and had a hard time fixing the issue, that would be a good place to jump in and start making changes!

First, you need to determine where the problem is occurring. If the parser is rejecting a query, the grammar may need some tweaking to be more accepting of some syntax. For example, if the field names are not provided for the parse operator, the query can still be parsed to produce a syntax tree and the error can be raised in the next phase. If the query passes the parsing phase, the problem may lie in the semantic analysis phase where the values in the parse tree are verified for correctness. Continuing with the parse example, if the number of captures in the pattern string does not match the number of field names, the error would be raised here. Finally, if the query has been valid up to this point, you might want to raise an error at execution time. For example, if a field name being accessed does not exist in the records being passed to an operator, an error could be raised to tell the user that they might have mistyped the name.

Once you have an idea of where the problem might lie, you can start to dig into the code. The grammar is written using nom and is contained in the lang.rs module. The enums/structs that make up the parse tree are also in the lang.rs module. To make error reporting easier, values in the parse tree are wrapped with a Positioned object that records where the value came from in the query string. The Positioned objects are produced by the with_pos!() parser combinator. These objects can then be passed to the SnippetBuilder in the errors.rs module to highlight portions of the query string in error messages.

The semantic phase is contained in the typecheck.rs module and is probably where most of the work will need to be done. The semantic_analysis() methods in that module are passed an ErrorBuilder that can be used to build and send error reports to the user.

After adjusting the grammar and adding a check for the problem, it will be time to figure out how to inform the user. Ideally, any errors should explain the problem, point the user to the relevant part of the query string, and lead the user to a solution. Using the ErrorBuilder, you can call the new_error_report_for() method to construct a SnippetBuilder for a given error. To highlight a portion of the query string, use the with_code_pointer() method with the Positioned object that refers to the relevant segment of the query string. Finally, additional help/examples can be added by calling the with_resolution() method.

Once you're all done, you should see a nicely formatted error message like the following:

error: Expecting an expression to count
  |
1 | * | count_distinct
  |     ^^^^^^^^^^^^^^ No field argument given
  |
  = help: example: count_distinct(field_to_count)

Similar Projects

  • Angle Grinder is a rewrite of Sumoshell, written to be easier to use, more testable, and a better platform for new features.
  • lnav is a full featured log analysis platform in your terminal (with many more features than angle-grinder). It includes support for common log file formats out-of-the-box, generalized SQL queries on your logs, auto-coloring and a whole host of other features.
  • visidata is a spreadsheet app in your terminal

angle-grinder's People

Contributors

akdor1154, asutherland, byron, darrentsung, dch, dependabot[bot], franciosi, herbygillot, joelparkerhenderson, killercup, kyegupov, mikaelsmith, ntfwc, paralax, philipdexter, piggynl, rcoh, slaunay, tstack, wfchandler, ziul


angle-grinder's Issues

Get PR builds to finish in under 5 minutes

It's now taking ~30 minutes per platform. Goal: PR builds finish in under 5 minutes.

Investigate if we're doing something significantly different than other Rust CLI apps.

  • Maybe it's the cache? I don't think we're really using it because we use cross, but it takes a significant amount of time to save and load
  • We don't really need to build with and without --release on branches, but it would be good to keep doing it on master
  • Do we need to use cross for PR builds?
  • Is it worth testing OSX for PR builds? (multi platform doesn't really add much time since they run in parallel though)

Things that matter:

  • master builds & runs tests on windows, osx and linux + nightly rust on linux
  • Tags produce statically linked binaries

Add a 'follow' feature, à la 'tail -f'

Although it would only be a built-in implementation of tail -F -n +0 file.log | agrind ..., so if you don't want this complexity it would be understandable.

Truncated output when sorting

Weird thing happened when I tried to parse this file (https://gist.github.com/DarrenTsung/be52ede5c7deb46e24297ce75c10b704) with the query:

* | parse '*Oct*' as num, date | sort date desc

The sort seems to be missing rows (specifically lines with dates on the minute 09:49). However, when I remove the sort date desc, all the lines appear as normal.

Other weird thing is I put it in an integration test and it works fine, but running this locally:

cargo run -- -f weird.txt "* | parse '*Oct*' as num, date | sort date desc"

still has the truncated behavior. I also tried:

cat weird.txt | cargo run -- "* | parse '*Oct*' as num, date | sort date desc"

Any idea what could be going on here? I think I've seen this behavior before (truncation on sort).

Maybe there's something weirdly configured with my terminal?

  • Just tried it in default terminal, same behavior.

Feature Suggestion: 'split'

Hi, thanks for the great tool!

One common case that I run into quite often is logs with the format like:

Oct 09 20:22:21 web-001 influxd[188053]: [httpd] 127.0.0.1 - - [09/Oct/2019:20:22:21 +0000] "POST /write HTTP/1.0" 204 0 "-" "reqwest/0.9.13" 7g5f9126-ead2-11e

And wanting to analyze the log based on one of the columns like: "reqwest/0.9.13". To do so I usually do something like:

agrind '* | parse "* * * * *: * * - - [*] \"*\" * * \"*\" \"*\" * *" as a, b, c, d, e, f, g, h, i, j, k, l, request_header, n | count by request_header'

This is kinda painful, so I was going to suggest a feature like:

// -2 being the second last field
agrind '* | split " " | count by -2'

Now that I'm writing this out, I realize that a split operator would have to dynamically change the parsing / column mappings based on the lines coming in.. which I don't think is supported with the current functionality.

Actually, the json operator dynamically parses the line, so maybe this can be supported in a similar way.

It might be difficult to support -index column naming, my thoughts were that the same data could map to both column1 and column-8 or something. But not sure how realistic that is.

Additional thoughts: it would be nice if the split operator worked with quoting so that split "," grouped abc,def,"g,h" as abc, def, g,h.


Thoughts on:

  1. Is this feature a good idea to add?
  2. How feasible is each part of the feature with the current architecture?
  3. Would you be open to a PR with this functionality? (no promises)

Split out `Expressions` section of `README`

There are currently several places in the language where we accept arbitrary expressions:

  1. In the clause of where
  2. In the argument of aggregates (sum(xyz)),
  3. In the group-by clause of aggregates (* | count by status_code > 500, response_ms < 300, url)

The set of arbitrary expressions that can be constructed is growing, and it'd be great to have those documented separately where they could be linked to within the README so people could have a full understanding of what they could use.

Structured output formats

agrind should be able to output structured, computer readable data in addition to the current human readable output format

  • JSON
  • YAML
  • logfmt

I think this would fit best currently with a --output/-o flag in addition to the query but I could also imagine something like a format(<format>) operator. I'm curious what power users of agrind think.

Many thanks !

Hey there!

This tool is really awesome! Fast, easy to use, and works neatly!
I see there have been fewer updates in the last few months; don't give up on it :-)

Maybe add to the README how people who don't know Rust (yet) can contribute.

Thanks again!

Boolean logic for keywords

Parentheses and not, and, and or (or !, &&, and ||; or ¬, ∧, and ∨; etc.) would be really helpful when combining filters.

A naïve implementation of this would be backwards incompatible with the current ANDed list of filters.

Upgrade to Nom 5

Nom 5.0 removes macros and should make the parser much easier to work on for people unfamiliar with nom. This is currently blocked by nom_locate supporting nom 5: fflorent/nom_locate#23

Support parsing tabs

It would be useful to be able to parse tabs.

For example, I have logs like:

 [WARNING]2020-01-01T00:00:00.000Z        1e4cd606-a550-4d41-b013-f61a4a3d959d      init_attendance no students user='kennimrod'
/aws/lambda/function-4TPZE69CJ8X7 2020/01/01/[$LATEST]e75d9ab689c4457da0a79316cf624c19 [WARNING]	2020-01-01T00:00:00.000Z	1e4cd106-a551-4d41-b313-f61a4a3d059d	init_attendance no students user='user'

Note that there are some tabs in between some of the parts.

I tried ag '* | parse "* * [*]\t.*" as lambda_name, run_instance, log_level, datetime' < log but it does not work; it seems like it does not recognize \t.

One solution is to process it with sed or sd to substitute the tabs first. For example, sd \t , < log | ag '* | parse "* * [*],*,*,*" as lambda_name, run_instance, log_level, datetime, request_id, message'.

More control over human-formatted output

I'd like to use agrind to transform structured log output into something that's easy to read while tailing some logs

Currently agrind's human-formatted output looks something like

[id=ConsumerFetcherManager-1382721708341][level=info][module=kafka.consumer.ConsumerFetcherManager][msg=Stopping all fetchers][tag=stopping_fetchers]

I'd like more control over the output formatting so I could turn it into something like

info | Stopping all fetchers          tag=stopping_fetchers

That seems possible by exposing a printf-style format operator. The above would be something like

* | logfmt | printf "%s | %30s tag=%s" level, msg, tag

Timeslice operator

See 274d342 for a template
A non-trivial undertaking. My suggested path:

  • Add a date subtype to data::Value -- this will include the date rendering logic
  • Add a parseDate expression that produces a date value from a string
  • Create the timeslice operator that accepts an Expression<Date> as an argument and does the bucketing (nontrivial)
  • Automatically sort by _timeslice after grouping by _timeslice

Proposed query structure:
* | parse "* * *" as level, date, message | timeslice(parseDate(level)) 5m | count by _timeslice

Allow stdout redirect for further processing

The default live update is amazing, but I tried to pipe stdout of a query to do some further processing with awk, as I was hoping to get a cumulative sum (which would be a nice aggregator to have like SumoLogic has).

On OSX at least, attempting to redirect stdout seems to hang the process, and nothing comes out.

use coveralls for test coverage

I tried to add test coverage on my fork, but the support for rust is still a bit lacking. I tried to use https://github.com/roblabla/cargo-travis, but there is an issue with more recent versions of cargo. I tried to use an unreleased branch of cargo-travis, but the build failed with some linking errors. So, I switched to https://github.com/xd009642/tarpaulin, but that doesn't work for CLI integration tests. I think I'm going to give up at this point and just wait for tarpaulin's support to improve. Just wanted to file this issue to log what was done so far.

In-process editing of pipeline

I am imagining something like the workflow Ultimate Plumber (https://github.com/akavel/up) provides.

n.b. None of these issues are demands or even requests! It's more I am wondering about things I might want to hack on, and soliciting feedback on whether you would welcome them or not before I get started.

Support for multiline patterns

Support for multiline logfiles (i.e. slow query logs) would be nice. I did not find anything in the documentation or existing issues.

Thanks for this awesome tool.

Parsing embedded JSON

Hi,

I have the following JSON format:

{"dst_host": "159.65.224.130", "dst_port": 23, "honeycred": false, "local_time": "2019-01-23 11:57:11.834296", "logdata": {"PASSWORD": "1111", "USERNAME": "root"}, "logtype": 6001, "node_id": "opencanary-1", "src_host": "41.139.253.2", "src_port": 36653}

How would I get agrind to also parse the logdata field?

TIA!

nit: lnav is not 100k lines of C++...

Howdy! Angle-grinder is a great looking tool and I was excited to see it when I first stumbled onto it awhile ago! I just noticed today that you mention lnav in the README, which is very nice of you. But, I wanted to at least correct the line count as mentioned. There is some generated code and header-only libraries that are checked in to lnav's repo to make it a bit easier to compile. So, the quoted number of 100k lines is not really accurate, it's closer to 50k. (That 50k count is a surprise to me, actually. I haven't checked the line count in a while, thought it was closer to 20k...)

Respectfully, though, I'd ask you to remove the comparison altogether. All that extra code does provide a lot of additional value. It's not just a more complex version of angle-grinder.

support multi-aggregates

e.g. | count, pct50(response_time), count_distinct(status_code) by server

server   _count   _p50   _countDistinct
...      5        95     40

Name of angle-grinder binary ag is the same as Silver Searcher binary ag

angle-grinder looks interesting, thanks for releasing it. Note that the binary for the Silver Searcher grep tool is also named ag. I don't have a good recommendation to mitigate: ag is the sensible name for angle-grinder, and Silver Searcher is likely to remain popular, although Rust developers may be early adopters of Ripgrep and be less likely to also use Silver Searcher, avoiding any name conflict.

Support YAML input

Just discovered this project (which is an awesome idea, btw); I wonder if it supports | yaml -- and if not, how hard would it be to add that?

support expressions

eg. sum(response_ms*2). Expressions can be used either inline or as an assignment:
let response_ms = response_ms * 2

logfmt is not a valid operator

A simple query like:

python -u test_files/gen_logs.py | agrind '* | json | logfmt'

...results in...

Error: Expected an operator
  |
1 | * | json | logfmt
  |            ^^^^^^
  |
  = help: logfmt is not a valid operator
Error: Failed to parse query
...

Support Apache Combined Log format

Hey, this tool is really neat! The addition of the logfmt operator made me think of another type of log file in common use that people would likely use agrind against: Apache's Combined Log format. Many other web servers, including nginx and caddy, are able to output in Combined Log format, so adding a parser for this format would immediately make this program useful for a variety of web server logs. Since the fields are well-defined and structured, it shouldn't be too difficult to break them up into a format that's useful for angle-grinder.

fields after count causes panic

sudo lsof -i -P | grep '(LISTEN)' | agrind '* | parse ":* (" as port | count by port | fields port'

leads to a panic. Probably a bug in the pre-aggregate adapter.

Automate release process

Or at least validate that:

  • version in cargo.toml matches the tag
  • new version is published to crates.io

I seem to forget to do this 90% of the time

NaN on functions like sum, avg, p90 ... treats numbers as strings?

Sample input:

{"time_local":"18/Jun/2020:09:37:45 -0400","remote_addr":"1.2.3.4","remote_user":"","request":"POST /my_url.php HTTP/2.0","status":200,"request_time":0.063,"http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0"}

I am new to ag, but it seems to me from my testing that the request_time above does not work when it's output as an integer. If I quote it, ie "0.063" (literal) then when I run a function against it, I still get NaN.

cat /var/log/nginx/site/access.json.log | agrind '* | json | count by status | avg(request_time)'
_average

NaN

cat /var/log/nginx/site/access.json.log | agrind '* | json | count by status'
status _count

200 1746
301 161

Am I missing something obvious here? I even tried casting inside the agrind command, but that doesn't seem to work. Like I said, I've tried escaping the numerical output in quotes, makes no difference.

This tool looks amazing, but so far I cannot input my logs...
