
rtic-scope / cargo-rtic-scope

Non-intrusive ITM tracing/replay toolset for RTIC programs with nanosecond timestamp accuracy.

Rust 97.33% Shell 1.37% Nix 1.31%
cargo-plugin cortex-m embedded-rust rtic

cargo-rtic-scope's People

Contributors

tmplt, yatekii


cargo-rtic-scope's Issues

Register trace clock frequency from target DWT watch address write

Trace packets are associated with a trace-clock timestamp. This is simply a tuple of two register values:

  • the base timestamp: corresponds to global timestamp packets and denotes the number of cycles since the target was reset;
  • the delta timestamp: the sum of all local timestamps received since the last global timestamp. Local timestamps denote the time since the previous local timestamp. This delta is reset to zero when a global timestamp is received.

These register values are not very useful by themselves. To make sense of them one must know the frequency of the trace clock they sample. What we ultimately want is to find a host-side chrono::DateTime for each set of trace packets we receive. For this we need to know the frequency of the trace clock and the time of reset. We currently timestamp the reset time with sufficient accuracy (see https://github.com/tmplt/cargo-rtic-trace/blob/master/cargo-rtic-trace/src/trace.rs#L102).
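
As a rough sketch of that mapping (assuming the reset instant is already known host-side and that freq_hz is the trace clock frequency in Hz; the function and parameter names below are hypothetical):

    // Map a (base, delta) trace-clock timestamp to a host-side wall-clock time.
    use chrono::{DateTime, Duration, Utc};

    fn wall_clock(reset: DateTime<Utc>, base: u64, delta: u64, freq_hz: u64) -> DateTime<Utc> {
        // Total cycles since reset, converted to nanoseconds at the trace clock frequency.
        let cycles = base as i128 + delta as i128;
        let nanos = cycles * 1_000_000_000 / freq_hz as i128;
        reset + Duration::nanoseconds(nanos as i64)
    }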


I am not so sure that having a constant frequency is a good choice. The ITM clock is not fixed, so we want to change this at runtime depending on the chip.

Originally posted by @Yatekii in rtic-scope/itm-decode#13 (comment)

To figure out the trace clock frequency we either leave it up to the user to provide this information in some manner, or the target must communicate it to cargo-rtic-trace before tracing starts. The latter can be realized in the init task by writing the frequency to a DWT watch address, thereby sending it over ITM. It is then up to https://github.com/tmplt/rtic-trace to find this frequency.
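
A minimal sketch of the target-side write, assuming one DWT comparator is configured to match writes to a watch variable; the variable name and setup below are hypothetical, not the project's actual API:

    use core::ptr;

    #[no_mangle]
    static mut TRACE_CLOCK_FREQ: u32 = 0;

    /// Report the trace clock frequency (in Hz) to the host: the write hits the
    /// watched address and the DWT emits a data trace packet carrying the value.
    fn report_trace_clock_freq(freq_hz: u32) {
        unsafe { ptr::write_volatile(ptr::addr_of_mut!(TRACE_CLOCK_FREQ), freq_hz) };
    }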

Forward decode errors and non-mappable packets to sinks

We get decode errors when the target sends something we cannot interpret. This will happen when something is incorrectly configured, but it may also happen if the buffer of the trace source overflows or corrupts. Saving the decode errors for post-mortem analysis is then useful.

This will require a change to the API.

Don't drop trace packets received before the trace clock frequency payload

As of cd73692, packets received before the trace clock frequency payload are dropped. While these packets have thus far been noise, they could be useful in some contexts.

We should consider saving them temporarily in some vector and then chaining it together with the source again. Effectively, we would just peek forward into the source until we hit the payload, which is then consumed.
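
A sketch of that buffer-and-peek approach, assuming a generic packet iterator and a hypothetical is_freq_payload predicate; these are not the project's actual types:

    use std::iter::Peekable;

    /// Buffer every packet that precedes the frequency payload instead of dropping it.
    fn split_off_prelude<P, I, F>(source: I, is_freq_payload: F) -> (Vec<P>, Peekable<I>)
    where
        I: Iterator<Item = P>,
        F: Fn(&P) -> bool,
    {
        let mut source = source.peekable();
        let mut prelude = Vec::new();

        while let Some(packet) = source.peek() {
            if is_freq_payload(packet) {
                break;
            }
            prelude.push(source.next().unwrap());
        }

        // The caller consumes the frequency payload from `source` and can then
        // chain `prelude` back in front of the remaining packets.
        (prelude, source)
    }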

frontend termination not caught during drain

If the frontend fails while packets are drained we want to warn the user about it but continue to flush the trace to file. Currently, we will only know that the frontend failed after all packets have been drained.

Full error stack and diagnostics are not printed

In RTICScopeError::render the error must be printed via {:?}, but this breaks the layout style. Additionally, diagnostics for some errors (at least RecoveryError) are not propagated, and thus not printed.

Support multiple RTIC versions

Our dependence on the latest RTIC syntax is small, and rtic-syntax handles most of the work for us. It may be worth supporting multiple versions.

stderr race condition between back- and frontend

The spawned frontend inherits stderr from cargo-rtic-scope, which means that their output becomes garbled when both write simultaneously. Can we somehow multiplex stderr via some mpsc channel? Further, should stderr from frontends be prefixed with their names?
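
One possible approach, sketched below under the assumption that each frontend's stderr is captured as a pipe; the function names are hypothetical:

    use std::io::{BufRead, BufReader, Read, Write};
    use std::sync::mpsc;
    use std::thread;

    /// Forward a frontend's stderr line by line into the shared channel,
    /// prefixed with the frontend's name so interleaved output stays attributable.
    fn forward_stderr(name: &'static str, stderr: impl Read + Send + 'static, tx: mpsc::Sender<String>) {
        thread::spawn(move || {
            for line in BufReader::new(stderr).lines().flatten() {
                let _ = tx.send(format!("[{name}] {line}"));
            }
        });
    }

    /// A single writer owns stderr, so lines are never garbled mid-write.
    fn drain_to_stderr(rx: mpsc::Receiver<String>) {
        let stderr = std::io::stderr();
        let mut out = stderr.lock();
        for line in rx {
            let _ = writeln!(out, "{line}");
        }
    }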

Compilation errors are not propagated

Upon a cargo build failure, the following is printed:

   Compiling trace-examples v0.1.0 (/home/tmplt/exjobb/trace-examples)
error: could not compile `trace-examples`

To learn more, run the command again with --verbose.
       Error `cargo build --bin blinky --bin blinky` failed with exit status exit code: 101

instead of the expected error.

Additionally, the final error message is incorrect. --bin blinky is not passed twice here.

Replay functionality

We want to be able to replay trace files for post-mortem analysis. Traces are already stored in <trace-dir>/rtic-traces/*.trace. A --list-traces flag should be added that enumerates all previous traces, and a --replay <trace> flag should replay the given trace to the frontend.
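
A sketch of the enumeration part, using the <trace-dir>/rtic-traces/*.trace layout from above; the function name is hypothetical:

    use std::fs;
    use std::path::{Path, PathBuf};

    /// Enumerate previously recorded traces under <trace-dir>/rtic-traces/.
    fn list_traces(trace_dir: &Path) -> std::io::Result<Vec<PathBuf>> {
        let mut traces: Vec<_> = fs::read_dir(trace_dir.join("rtic-traces"))?
            .filter_map(Result::ok)
            .map(|entry| entry.path())
            .filter(|path| path.extension().map_or(false, |ext| ext == "trace"))
            .collect();
        traces.sort();
        Ok(traces)
    }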

Propagate cargo-flash hints

The hints of cargo-flash are hidden inside the executable and are not available via probe_rs_cli_util yet. Move diag::DiagnosableError there and refactor a bit. Then, for the error types in cargo-rtic-scope that wrap an OperationError, their diagnose should chain the hints of the CustomError to its own.

Enable frontend to ask for packets within a time frame

So, the recv component of the frontend will simply copy what the backend sends it and redraw the UI according to what the user asks. This approach will not scale. A better solution would be for the frontend to query the backend for what happens within a range of time. The backend already streams the trace to disk; we might as well revamp the implementation so it can dig up what the frontend asks for.

Originally posted by @tmplt in #3 (comment)


The backend should do all the heavy work. The less state the frontend needs to keep tabs on, the better.

Will this require a database of some kind?
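
As a rough sketch of what such a query interface could look like; all type and field names below are hypothetical, not an existing API:

    use std::ops::Range;

    /// Nanoseconds since target reset.
    type TraceInstant = u64;

    /// Requests sent from the frontend to the backend.
    enum FrontendRequest {
        /// Ask for all task events whose timestamps fall within the given window.
        EventsWithin(Range<TraceInstant>),
    }

    /// Responses streamed back from the backend.
    enum BackendResponse {
        Events(Vec<TaskEvent>),
    }

    /// A single resolved task event.
    struct TaskEvent {
        task: String,
        entered: bool,
        timestamp: TraceInstant,
    }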

Utilize --comment option

This option is currently ignored, but it should be used to add a comment to the trace metadata.

DAPSource does not properly configure exception tracing

After a target power-cycle:

$ cargo rtic-scope trace --bin blinky-noconf --chip stm32f401re --tpiu-freq 16000000 --clear-traces --tpiu-baud 115200
# Observe that stream does not contain any app::toggle events
$ cargo rtic-scope trace --bin blinky --chip stm32f401re --tpiu-freq 16000000 --clear-traces --tpiu-baud 115200
# Observe app::toggle events
$ cargo rtic-scope trace --bin blinky-noconf --chip stm32f401re --tpiu-freq 16000000 --clear-traces --tpiu-baud 115200
# Again observe app::toggle events because configuration carried over

How (and if) probe-rs configures exception tracing should be compared with cortex-m-rtic-trace.

Read --tpiu-{baud,freq} from crate manifest

QoL: TPIU freq and baud are unlikely to change once set. It would be ergonomic to just state them in the RTIC Scope metadata block of the application crate manifest. The flags should of course override the manifest.

mod pacp should be generalized for this purpose.
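
A sketch of reading such a block host-side, assuming the cargo_metadata crate; the table and key names ("rtic-scope", "tpiu-freq", "tpiu-baud") are hypothetical and not confirmed by the project:

    use cargo_metadata::MetadataCommand;

    /// Read (tpiu-freq, tpiu-baud) from [package.metadata.rtic-scope], if present.
    fn manifest_tpiu_options() -> Option<(u64, u64)> {
        let metadata = MetadataCommand::new().exec().ok()?;
        let package = metadata.root_package()?;
        // `package.metadata` is the raw [package.metadata] table as JSON.
        let scope = package.metadata.get("rtic-scope")?;
        let freq = scope.get("tpiu-freq")?.as_u64()?;
        let baud = scope.get("tpiu-baud")?.as_u64()?;
        Some((freq, baud))
    }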

Add functionality to export raw ITM trace

A lot of malformed packets are observed on my end when using an STLink probe (connected to an stm32f401retx). I recall there being fewer problems when a TTY source was used in conjunction with a SWO pin, thereby bypassing the probe entirely when it comes to tracing. I need to dump the raw trace data so that it may be compared and eventually sent to the vendor for debugging.
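
A sketch of teeing the raw byte stream to a file before decoding, assuming the trace source is an io::Read; RawDump is a hypothetical wrapper, not an existing type in the project:

    use std::fs::File;
    use std::io::{self, Read, Write};

    /// Wraps a trace source and copies every byte read into a dump file.
    struct RawDump<R> {
        inner: R,
        dump: File,
    }

    impl<R: Read> Read for RawDump<R> {
        fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
            let n = self.inner.read(buf)?;
            self.dump.write_all(&buf[..n])?;
            Ok(n)
        }
    }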

Add CMSIS-DAP source

probe-rs offers flashing, tracing, etc. via CMSIS-DAP, which is more portable than serial devices and where the hs-probe buffer can be queried for its remaining space.

[...] [probe-rs] will not only simplify cargo-rtic-trace but also minimize required boilerplate in user firmware. Obviously, we want to exploit the chip support of probe-rs.

Originally posted by @tmplt in #11 (comment)

Create a test bench

Some unexpected bugs in v0.2.0 appeared today. A test bench should be created now that cargo-rtic-scope is moving towards actually being useful. Some --dont-trace or --resolve-only flag should be added so that we can build example trace applications and ensure we get the expected output. Similarly, a check should be added for applications that cannot be built (#44).

Support multiple frontends

One may want to forward a trace to multiple frontends. It should be possible to specify this via --frontend dummy,web, for example.
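
Parsing the flag could be as simple as splitting on commas; the function name below is hypothetical:

    /// Split `--frontend dummy,web` into ["dummy", "web"].
    fn parse_frontends(arg: &str) -> Vec<String> {
        arg.split(',')
            .map(str::trim)
            .filter(|name| !name.is_empty())
            .map(String::from)
            .collect()
    }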

Send decoded and resolved ITM data to a frontend

The main reason for cargo-rtic-trace is the ability to get a human-readable description of what the traced target is doing and when. Of main interest is a graphical frontend that displays the run-time of RTIC tasks, when they are preempted by other tasks, and auxiliary data such as task queue sizes, etc. The possibilities with a frontend are virtually endless:

  • Do we want statistics about task run-time?
  • Do we only want to trace some specific task and/or change the trace configuration from the frontend?
  • Do we want to add arbitrary hooks to task events?

The main idea now is a web frontend that displays the tasks and when they are running. Consider an oscilloscope or a logic analyzer, but instead of signals we have tasks and their state (running or not). This frontend is a completely separate project so this issue will only consider how the communication should be managed. For now, data will only flow from the backend (cargo-rtic-trace) to the frontend.

Some initial questions/considerations:

  • cargo-rtic-trace will not be very useful on its own without a frontend. Should it perhaps handle the startup of a default frontend (feature gated, of course)? It could detach the process and reuse the instance after the initial run.

CC @Yatekii @perlindgren

Test on other targets

cargo-rtic-trace has only been developed against a Nucleo STM32F401RE. We'll want to test it on some other targets to ensure we handle any edge cases and don't assume target-specific behavior to be general for all targets. Below is an enumeration of targets I have on hand to test with (to be completed):

Add hints

cargo-flash prints a lot of useful hints when an error occurs. cargo-rtic-scope should mirror this behavior.

Software tasks are not properly mapped

    #[trace]
    fn some_other_task() {
        let _x = 42;
    }

    #[task(binds = SysTick, resources = [GPIOA])]
    fn toggle(mut ctx: toggle::Context) {
        static mut TOGGLE: bool = false;
        if *TOGGLE {
            ctx.resources
                .GPIOA
                .lock(|gpioa| gpioa.bsrr.write(|w| w.bs5().set_bit()));
        } else {
            ctx.resources
                .GPIOA
                .lock(|gpioa| gpioa.bsrr.write(|w| w.br5().set_bit()));
        }
        *TOGGLE = !*TOGGLE;

        some_other_task();
    }

yields Unknown(DataTraceValue { comparator: 1, access_type: Write, value: [0, 0, 0, 0] }) for calls to some_other_task.

Support replay of raw SWO files?

From the probe-rs matrix room, @adamgreig writes:

i think in this case i'll need to do something special for the swo anyway because some of my interrupts are entered and exited too fast for the buffer to keep up at any reasonable speed
does the file source just need a raw swo uart bytes file? i could probably arrange that more easily (i'll probably use a logic analyser to capture like 80MBd uart data)

Ensure TTY source is POSIX-compliant

We want cargo-rtic-scope to be POSIX-compliant, but this only makes sense if probe-rs itself is. That should be checked for first.

This is a portability issue and can be left for later.

Clean up on SIGINT

In the interest of reproducibility and hardware-in-the-loop testing, it would be a good idea to stop tracing when some condition is met. There are multiple viable approaches:

  • Some --stop-after <time> option that also halts the target after <time> milliseconds.
  • On SIGINT or another signal, which is also propagated as a target halt.
  • On specific DWT payload.

At the moment the only way to stop tracing is to send SIGINT, which terminates cargo-rtic-trace, but no cleanup is attempted and the target continues to run.
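
A minimal cleanup sketch, assuming the ctrlc crate; halt_target_and_flush is a hypothetical placeholder, not the project's actual code:

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;

    fn main() {
        let running = Arc::new(AtomicBool::new(true));
        let flag = running.clone();
        ctrlc::set_handler(move || flag.store(false, Ordering::SeqCst))
            .expect("failed to install SIGINT handler");

        // Drain and sink trace packets until SIGINT is received.
        while running.load(Ordering::SeqCst) {
            // ... receive, decode, and sink a packet ...
        }

        // Cleanup: halt the target and flush any buffered trace to file.
        halt_target_and_flush();
    }

    fn halt_target_and_flush() {
        // Hypothetical placeholder for halting the probe-attached target and
        // flushing the sinks.
    }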

build: handle .cargo/config above temporary directory?

When cargo builds something, it traverses directories upwards and looks for .cargo/config{,.toml} files. If such a file is found when the intermediate library is built, something is likely to go wrong. Can we handle this file, or is it up to the user to ensure that the environment does not need to be modified?

Add continuous status message when tracing/replaying

When everything goes well, cargo-rtic-scope doesn't print anything. This is all well and good when the program is expected to end on its own, which is not the case when tracing (nor when replaying, if the file happens to be infinite in practice). Some status message that is continuously updated as packets are processed should be added. It should count how many packets have been handled and for how many translation/decoding failed.

Merge functionality with cargo-flash

cargo-rtic-trace shares some functionality with https://github.com/probe-rs/cargo-flash which we want to merge: --chip, --protocol, etc. We might as well also copy the flashing progress bar. cargo-rtic-trace's build.rs should also be compared with how cargo-flash works with cargo as a sub-process.

Is it perhaps better if we just fork cargo and add tracing features to it?

cargo-embed should also be reviewed for features we need.

Use two DWT channels for software tasks instead

At present (read: when implemented correctly; #43) we rely on a single DWT channel to communicate that a task has entered/exited. If we have

#[trace]
fn sw_task1() { ... }

#[trace]
fn sw_task2() { ... }

then we will receive the value 0 on the used DWT comparator when sw_task1 enters/exits, and 1 for sw_task2. This is not very robust. Instead, two channels should be used: the first for tasks that are entered, and the other for tasks that exit. This approach does not require us to record the state of the software tasks, because we only need to forward the events, just like for hardware tasks.
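
A sketch of the two-channel scheme, assuming two watch variables that DWT comparators are configured to match on writes; the names and the ID assignment are hypothetical, not the actual #[trace] expansion:

    use core::ptr;

    #[no_mangle]
    static mut WATCH_TASK_ENTER: u8 = 0;
    #[no_mangle]
    static mut WATCH_TASK_EXIT: u8 = 0;

    const SW_TASK1_ID: u8 = 0;

    fn sw_task1() {
        // Entering: write the task ID to the "enter" watch address; the DWT
        // emits a data trace packet on the enter comparator.
        unsafe { ptr::write_volatile(ptr::addr_of_mut!(WATCH_TASK_ENTER), SW_TASK1_ID) };

        // ... task body ...

        // Exiting: write the same ID to the "exit" watch address.
        unsafe { ptr::write_volatile(ptr::addr_of_mut!(WATCH_TASK_EXIT), SW_TASK1_ID) };
    }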

Save task resolve data to trace file

When #1 is closed, we do not necessarily want to require the RTIC application source file in order to resolve tasks that were already resolved during the initial run. It's best if the trace file contains everything we need to replay. Either we prepend the resolve maps to the trace file and reuse them during replay, or we instead save the yet-to-be-added structs that we send to the frontend (which will contain the resolved data).

A likely use-case is sharing a trace for debugging purposes.
