
brendangregg / perf-tools


Performance analysis tools based on Linux perf_events (aka perf) and ftrace

License: GNU General Public License v2.0

Perl 9.77% Shell 88.35% Roff 1.88%

perf-tools's People

Contributors

acyberexpert, bastianbeischer, brendangregg, chandranshu12, csfrancis, diegopomares, edwardbetts, g2p, goldshtn, lwindolf, pykun, ronin13, scotte, yangoliver


perf-tools's Issues

How to use perf_event_open() to monitor events that are not in the enum perf_hw_id?

I want to use perf_event_open() to monitor events such as mem_load_l3_miss_retired.remote_pmm and mem_load_retired.local_pmm. The document linux\perf\design.txt says the following, but how do I get the event_id of a specific event such as these?

The 'config' field specifies what the counter should count. It is divided into 3 bit-fields:
raw_type: 1 bit (most significant bit) 0x8000_0000_0000_0000
type: 7 bits (next most significant) 0x7f00_0000_0000_0000
event_id: 56 bits (least significant) 0x00ff_ffff_ffff_ffff
If 'raw_type' is 1, then the counter will count a hardware event specified by the remaining 63 bits of event_config. The encoding is machine-specific.

Thanks; looking forward to your reply.
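
For reference, recent versions of perf can resolve such symbolic event names and print the raw encoding that perf_event_open() expects; libpfm4's check_events example does the same. A hedged sketch (neither tool is part of perf-tools):

    perf list --details 2>/dev/null | grep -A2 mem_load_retired.local_pmm
    # or, with libpfm4 built from source:
    ./examples/check_events mem_load_retired.local_pmm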

Unknown error using opensnoop

I'm using Kali 1.0.9a and trying to use opensnoop. I get an error stating "events/syscalls/sys_exit_open/enable". I'm wondering which of the prerequisites I'm missing.

iosnoop doesn't clear the ftrace lock file while dying

chandranshu@chandranshu-laptop:~$ ./iosnoop 
Tracing block I/O. Ctrl-C to end.
./iosnoop: line 122: cd: /sys/kernel/debug/tracing: Permission denied
ERROR: accessing tracing. Root user? Kernel has FTRACE?
chandranshu@chandranshu-laptop:~$ sudo ./iosnoop 
Tracing block I/O. Ctrl-C to end.
ERROR: ftrace may be in use by PID 7690 /var/tmp/.ftrace-lock

Looking at the code, this problem is present everywhere die is used. We could improve the die function to clean up the lock and any other resources before exiting.
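
A minimal sketch of the suggested cleanup, reusing the $flock and $wroteflock variables the script already maintains:

    function die {
        echo >&2 "$@"
        # remove our lock file (and any other state) before exiting
        (( wroteflock )) && rm -f "$flock"
        exit 1
    }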

syscount rewrite for eBPF/hist

A reminder that for 4.x kernels, syscount should switch to using either eBPF (Alexei Starovoitov) or hist triggers (Tom Zanussi), either of which can do these aggregations in-kernel.
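
For the hist-trigger route, kernels built with CONFIG_HIST_TRIGGERS can already do this aggregation in-kernel; the kernel's histogram documentation gives essentially this syscall count:

    echo 'hist:key=id.syscall:val=hitcount' > \
        /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
    cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist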

perf script doesn't print the entire stack trace

After collecting a trace with perf record -e 'raw_syscalls:sys_enter' --call-graph dwarf,4096, I use perf script to look at the collected data. For the most part it is as expected. However, my application binary has its symbols stripped, and I noticed that whenever perf script cannot resolve the symbol of an item on the stack, it stops printing the rest of the stack. I don't see an option to keep printing the stack trace even when a symbol cannot be resolved. Did I miss something, or is this by design?

CentOS 7 - Big Slab size

Hi, all. I am having a memory problem on one server, and I don't understand how to fix it.
Here is /proc/meminfo:
MemTotal: 24592560 kB
MemFree: 532324 kB
Buffers: 236832 kB
Cached: 459252 kB
SwapCached: 60448 kB
Active: 7312736 kB
Inactive: 1607264 kB
Active(anon): 6851356 kB
Inactive(anon): 1372752 kB
Active(file): 461380 kB
Inactive(file): 234512 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 2097148 kB
SwapFree: 1424980 kB
Dirty: 136 kB
Writeback: 24 kB
AnonPages: 8199460 kB
Mapped: 69492 kB
Shmem: 192 kB
Slab: 14876100 kB
SReclaimable: 14810428 kB
SUnreclaim: 65672 kB
KernelStack: 23496 kB
PageTables: 38716 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 14393428 kB
Committed_AS: 8999040 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 213232 kB
VmallocChunk: 34359392960 kB
HardwareCorrupted: 0 kB
AnonHugePages: 2250752 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 10240 kB
DirectMap2M: 25155584 kB

I see that slab is using 14 GB of memory, which is not normal.
I have a second node, where the slab is only 1.6 GB.
Why can the slab get this big?
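
Since almost all of it is SReclaimable (14810428 kB), this looks like dentry/inode cache growth rather than a leak; a hedged way to inspect and reclaim it (as root):

    slabtop -o -s c                            # one-shot listing, sorted by cache size
    sync; echo 2 > /proc/sys/vm/drop_caches    # frees reclaimable dentries and inodes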

iosnoop latency errors in the face of block merges

When requests are merged, the resulting latency can appear very high simply because the completion that gets matched comes from a much later I/O.

See the following snippet:

  nb_truck_1-5402  [002]  8018.233160: block_rq_insert: 65,192 WS 0 () 184067232 + 16 [nb_truck_1]
  nb_truck_1-5402  [002]  8018.233161: block_rq_issue: 65,192 WS 0 () 184067232 + 16 [nb_truck_1]
  nb_truck_3-5404  [004]  8108.333833: block_rq_complete: 65,192 R () 184067232 + 16 [0]

The I/O appears to have taken 90 seconds to complete, which is completely unlikely; moreover, the completion is for a read, whereas the original I/O was a write.
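
The merges themselves are visible via the block layer's merge tracepoints, which iosnoop currently ignores; a hedged way to confirm merging on a given workload (tracepoint names can vary by kernel version):

    cd /sys/kernel/debug/tracing
    echo 1 > events/block/block_bio_backmerge/enable
    echo 1 > events/block/block_bio_frontmerge/enable
    cat trace_pipe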

Not able to generate Flame graphs for Java Apps

Hi Brendan,

We encountered the errors below while trying to generate flame graphs from the host.
After a successful perf record, we invoked perf script on perf.data:
perf script | ./stackcollapse-perf.pl > out.perf-folded gave the errors below. How can we overcome them? I believe a few are because the files referenced in perf.data are not available on the host, since they reside in containers. This is a multi-container environment.

Failed to open /tmp/perf-12964.map, continuing without symbols
Failed to open /tmp/perf-20444.map, continuing without symbols
Failed to open /lib/x86_64-linux-gnu/libpthread-2.23.so, continuing without symbols
Failed to open /x/web/LIVE/keymakeragent/keymakeragent/infra/lib/linux_x86_py27/_faststat.so, continuing without symbols
Failed to open /x/opt/pp/bin/python2.7, continuing without symbols
Failed to open /x/web/LIVE/keymakeragent/keymakeragent/infra/lib/linux_x86_py27/greenlet.so, continuing without symbols
Failed to open /applicationpackages/manifests/active/JDK/cronus/scripts/jdk1.8.0_60/jre/lib/amd64/server/libjvm.so, continuing without symbols
Failed to open /tmp/perf-29726.map, continuing without symbols
Failed to open /applicationpackages/manifests/active/JDK/cronus/scripts/jdk1.8.0_60/jre/lib/amd64/libnet.so, continuing without symbols
Failed to open /lib/x86_64-linux-gnu/libc-2.23.so, continuing without symbols
Failed to open /tmp/perf-5808.map, continuing without symbols
Failed to open /tmp/perf-28806.map, continuing without symbols
Failed to open /lib/ld-musl-x86_64.so.1, continuing without symbols
Failed to open /tmp/perf-25990.map, continuing without symbols
Failed to open /lib/libpthread-2.5.so, continuing without symbols
Failed to open /lib/libc-2.5.so, continuing without symbols
Failed to open /tmp/perf-14795.map, continuing without symbols
Failed to open /applicationpackages/manifests/active/JSW/cronus/scripts/JSW/bin/wrapper, continuing without symbols
Failed to open /tmp/perf-22375.map, continuing without symbols
Failed to open /tmp/perf-14695.map, continuing without symbols
Failed to open /usr/bin/ppregistrator, continuing without symbols
Failed to open /x/web/LIVE/keymakeragent/keymakeragent/infra/lib/linux_x86_py27/gevent/core.so, continuing without symbols
Failed to open /tmp/perf-14639.map, continuing without symbols
Failed to open /tmp/perf-23526.map, continuing without symbols
Failed to open /tmp/perf-8628.map, continuing without symbols
Failed to open /applicationpackages/manifests/active/JDK/cronus/scripts/jdk1.8.0_60/jre/lib/amd64/libjava.so, continuing without symbols
Failed to open /usr/lib/libstdc++.so.6.0.8, continuing without symbols
Failed to open /x/web/LIVE/caldaemon/caldaemon, continuing without symbols
Failed to open /tmp/perf-10390.map, continuing without symbols
Failed to open /lib/librt-2.5.so, continuing without symbols
Failed to open /tmp/perf-3844.map, continuing without symbols
no symbols found in /bin/dash, maybe install a debug package?
Failed to open /applicationpackages/manifests/active/JDK/cronus/scripts/jdk1.8.0_60/jre/lib/amd64/libnio.so, continuing without symbols
Failed to open /x/opt/pp/lib/python2.7/lib-dynload/select.so, continuing without symbols
Failed to open /x/opt/pp/lib/python2.7/lib-dynload/time.so, continuing without symbols
Failed to open /applicationpackages/manifests/active/JDK/cronus/scripts/jdk1.8.0_60/jre/lib/amd64/libmanagement.so, continuing without symbols
Failed to open /usr/bin/socat, continuing without symbols
Failed to open /x/opt/pp/lib/python2.7/lib-dynload/_socket.so, continuing without symbols
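
The /tmp/perf-PID.map files are JIT symbol maps, which the JVM does not write on its own; the separate perf-map-agent project generates them. A hedged sketch (<java_pid> is a placeholder, and in a multi-container setup the map must end up in the host's /tmp under the JVM's host PID before perf script runs):

    git clone https://github.com/jvm-profiling-tools/perf-map-agent
    cd perf-map-agent && cmake . && make
    bin/create-java-perf-map.sh <java_pid>

Java stacks also generally need the JVM started with -XX:+PreserveFramePointer to unwind correctly.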

iosnoop on Linux and heredoc getting confused

I grabbed the iosnoop script, but unfortunately when I run it I end up with:

/usr/local/bin/iosnoop: line 231: warning: here-document at line 62 delimited by end-of-file (wanted `END')
/usr/local/bin/iosnoop: line 232: syntax error: unexpected end of file

I'm slightly confused about why this is popping up, as the heredoc opened on line 62 is closed on line 74 with the END, or at least it should be.
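
A common cause is the script having been saved with DOS line endings, or as an HTML page rather than the raw file; either breaks the END heredoc terminator. A hedged check:

    file /usr/local/bin/iosnoop              # should say 'shell script', not 'HTML'
    grep -c $'\r' /usr/local/bin/iosnoop     # non-zero means CRLF line endings
    sed -i 's/\r$//' /usr/local/bin/iosnoop  # or: dos2unix iosnoop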

perf-stat-hist: line 125: buckets: bad array subscript

root@thinkpad ~ #perf-stat-hist net:net_dev_xmit len 30
/usr/bin/perf-stat-hist: line 125: buckets: bad array subscript
/usr/bin/perf-stat-hist: line 125: i && 0 <=  : syntax error: operand expected (error token is " ")
Tracing net:net_dev_xmit, power-of-4, max 1048576, for 30 seconds...

            Range          : Count    Distribution
              -> -1        : 0        |                                      |
            0 -> 0         : 0        |                                      |
            1 -> 3         : 0        |                                      |
            4 -> 15        : 0        |                                      |
           16 -> 63        : 254      |######################################|
           64 -> 255       : 222      |##################################    |
          256 -> 1023      : 18       |###                                   |
         1024 -> 4095      : 2        |#                                     |
         4096 -> 16383     : 0        |                                      |
        16384 -> 65535     : 0        |                                      |
        65536 -> 262143    : 0        |                                      |
       262144 -> 1048575   : 0        |                                      |
      1048576 ->           : 0        |                                      |
root@thinkpad ~ #head -n 1 /usr/bin/perf-stat-hist     
#!/bin/bash      
root@thinkpad ~ #rpm -qf /bin/bash                
bash-3.2.54-alt1 

perf events shows unsupported for cache statistics

Hi team,

When we try to collect cache statistics using perf, we get the message below stating they are not supported. perf top works fine, but not perf stat with these events. Could you please help us?
We use the Ubuntu distro, kernel 4.4.0-97-generic #120~14.04.1-Ubuntu SMP.

perf stat -e cycles,instructions,cache-references,cache-misses,bus-cycles -a sleep 10

Performance counter stats for 'system wide':

   <not supported>      cycles
   <not supported>      instructions
   <not supported>      cache-references
   <not supported>      cache-misses
   <not supported>      bus-cycles

  10.000910410 seconds time elapsed

Thanks
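
<not supported> usually means the PMU does not expose these hardware events to this environment (common inside VMs, where the hypervisor hides the PMU); what the machine actually supports can be listed:

    perf list hw cache    # hardware and cache events known to this system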

kprobe error when naming args

When not using a probe name while naming args, kprobe parses the probe incorrectly. E.g.:

# ./kprobe 'p:udp_send_skb si=%si'
Tracing kprobe udp_send_skb. Ctrl-C to end.
./kprobe: line 179: echo: write error: No such file or directory
ERROR: adding kprobe "p:udp_send_skb si=%si".
Last 2 dmesg entries (might contain reason):
    [1680629.084807] Event kprobes/udp_send_skb doesn't exist.
    [1685530.939679] Could not insert probe at si=%si+0: -2
Exiting.

Workaround:

# ./kprobe 'p:myprobe udp_send_skb si=%si'
Tracing kprobe myprobe. Ctrl-C to end.
        postgres-16392 [000] 2844852.131332: myprobe: (udp_send_skb+0x0/0x2a0) si=ffff8800e6f349e0
        postgres-16392 [000] 2844852.131349: myprobe: (udp_send_skb+0x0/0x2a0) si=ffff8800e6f349e0
        postgres-16392 [000] 2844852.131355: myprobe: (udp_send_skb+0x0/0x2a0) si=ffff8800e6f349e0
        postgres-16392 [000] 2844852.131362: myprobe: (udp_send_skb+0x0/0x2a0) si=ffff8800e6f349e0
[...]

But this should be fixed.

No CONFIG_PERF_EVENTS=y kernel support configured?

@brendangregg When I run perf record or perf top -p pid:
Error:
The sys_perf_event_open() syscall returned with 3 (No such process) for event (cycles:ppp).
/bin/dmesg may provide additional information.
No CONFIG_PERF_EVENTS=y kernel support configured?

But not for all processes.
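
Since it fails only for some processes, the kernel config is probably fine, and the "No such process" (ESRCH) suggests the target exited before perf attached; a hedged check of both:

    grep CONFIG_PERF_EVENTS /boot/config-$(uname -r)   # should print ...=y
    kill -0 <pid> && echo "pid is alive"               # <pid> is a placeholder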

mawk: line 19: syntax error at or near ,

jibanes@wopr:~$ sudo ./opensnoop -d 1 omfg
Tracing open()s for filenames containing "omfg" for 1 seconds (buffered)...
COMM PID FD FILE
mawk: line 19: syntax error at or near ,

Ending tracing...
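
The awk programs embedded in these tools appear to rely on gawk behavior that mawk lacks; installing gawk (and making sure the script selects it over mawk) typically resolves this:

    sudo apt-get install gawk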

opensnoop: Handle paths with blanks correctly

Using opensnoop from Ubuntu 16.04 on a WINE installation.

sudo opensnoop 2>&1 | grep "^wine"
(...)
wineserver       6666   0x72 /home/me/wineprefix/dosdevices/c:/
wineserver       6666   0x72 Files/
wineserver       6666   0x72 Files/Notepad++/
(...)

The last two lines are wrong. The files are in

/home/me/wineprefix/dosdevices/c\:/Program\ Files/Notepad++/
# a.k.a.
"/home/me/wineprefix/dosdevices/c:/Program Files/Notepad++/"

Seemingly opensnoop does not handle paths with blanks correctly?


To reproduce:

sudo dpkg --add-architecture i386 
wget https://dl.winehq.org/wine-builds/Release.key
sudo apt-key add Release.key
sudo apt-add-repository 'https://dl.winehq.org/wine-builds/ubuntu/'
sudo apt-get update
sudo apt-get -y install winehq-stable 

mkdir -p wineprefix
export WINEPREFIX=$(readlink -f wineprefix)
export WINEARCH=win32

wget  https://raw.githubusercontent.com/Winetricks/winetricks/master/src/winetricks
chmod +x winetricks 

bash winetricks npp

sudo apt -y install perf-tools-unstable
sudo opensnoop

iosnoop doesn't work on 4.2.3-200.fc22.x86_64

[root@ipa ssv]# ./iosnoop.gregg
Tracing block I/O. Ctrl-C to end.
COMM         PID    TYPE DEV      BLOCK        BYTES     LATms
^C
Ending tracing...
[root@ipa ssv]# ./iosnoop.my 
Tracing block I/O. Ctrl-C to end.
COMM         PID    TYPE DEV      BLOCK        BYTES     LATms
jbd2/dm-8-44 446    WS   8,0      34898224     28672      0.26
jbd2/dm-8-44 446    FWS  8,0      18446744073709551615 0          2.90
kworker/0:0  26852  WS   8,0      34898280     4096       0.10
<idle>       0      FWS  8,0      18446744073709551615 0          0.86
^C
Ending tracing...
[root@ipa ssv]# diff -u ./iosnoop.gregg ./iosnoop.my 
--- ./iosnoop.gregg 2015-12-18 13:57:28.330833757 +0100
+++ ./iosnoop.my    2015-12-18 17:00:34.662192353 +0100
@@ -105,6 +105,7 @@
        warn "echo 0 > events/block/$b_start/filter"
        warn "echo 0 > events/block/block_rq_complete/filter"
    fi
+   warn "echo 0 > tracing_on"
    warn "echo > trace"
    (( wroteflock )) && warn "rm $flock"
 }
@@ -204,6 +205,9 @@
     ! echo 1 > events/block/block_rq_complete/enable; then
    edie "ERROR: enabling block I/O tracepoints. Exiting."
 fi
+if ! echo 1 > tracing_on; then
+       edie "ERROR: enabling tracing. Exiting"
+fi
 (( opt_start )) && printf "%-15s " "STARTs"
 (( opt_end )) && printf "%-15s " "ENDs"
 printf "%-12.12s %-6s %-4s %-8s %-12s %-6s %8s\n" \
[root@ipa ssv]# uname -r
4.2.3-200.fc22.x86_64
[root@ipa ssv]# cat /etc/redhat-release
Fedora release 22 (Twenty Two)

Execsnoop not working on Mac OS X

Not able to get execsnoop to work on OS X even after disabling SIP.

$ sw_vers
ProductName:	Mac OS X
ProductVersion: 10.15.7

$ csrutil status
System Integrity Protection status: disabled.

bash-3.2# ./execsnoop
Tracing exec()s. Ctrl-C to end.
./execsnoop: line 160: cd: /sys/kernel/debug/tracing: No such file or directory
ERROR: accessing tracing. Root user? Kernel has FTRACE?
    debugfs mounted? (mount -t debugfs debugfs /sys/kernel/debug)

Is there a workaround or config that enables tracing on Mac?

Thanks!

little output from './execsnoop' with do_execve()

I found that running execsnoop gives quite different results with do_execve() vs. stub_execve().

Here is an example on Fedora with kernel 3.11 and stub_execve():
'bash -x' results: http://paste.ubuntu.com/8431228/
'cat /sys/kernel/debug/tracing/trace_pipe' http://paste.ubuntu.com/8431209/.

And on Fedora with 3.16 and do_execve():
'bash -x' http://paste.ubuntu.com/8431251/
'cat /sys/kernel/debug/tracing/trace_pipe' http://paste.ubuntu.com/8431253/

We can see that a process whose name starts with 'neutron-openvsw...' has execsnoop_stub_execve output in 'trace_pipe' in one test but not the other. In both tests, that process has 'sched_process_fork' output. I don't know the development history of stub_execve and do_execve, so I have to guess: are there function-call format changes between them, different limitations, or something else?

functrace and funcgraph only support a single function

Currently, functrace and funcgraph only support a single function.
There are cases when we need to monitor multiple functions in a single tracing session.

I had to manually edit the script to set multiple functions, using:

function1 > current_tracer
function2 >> current_tracer
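
For what it's worth, the underlying ftrace interface does accept multiple functions and wildcards in set_ftrace_filter, so the tools could expose this directly; a hedged sketch of the raw interface:

    cd /sys/kernel/debug/tracing
    echo 'vfs_read vfs_write' > set_ftrace_filter   # space-separated list
    echo 'tcp_*' >> set_ftrace_filter               # >> appends; globs work too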

using killsnoop.bt from inside a docker container?

Hi,
I'd like to invoke killsnoop from inside a docker container. I've tried adding:

  --privileged \
  --cap-add=ALL \

to its launch commands, but I still see:

usr/sbin/killsnoop.bt -p 3519776 
ERROR: tracepoint not found: syscalls:sys_enter_kill

I can run it from outside the container, and launching the container with --pid=host might make sense, but that's just one of my 3 use cases where I need to trap things.
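
bpftrace tracepoints need tracefs/debugfs visible inside the container, not just extra capabilities; a hedged launch sketch (<image> is a placeholder):

    docker run --rm -it --privileged \
        -v /sys/kernel/debug:/sys/kernel/debug:rw \
        -v /lib/modules:/lib/modules:ro \
        -v /usr/src:/usr/src:ro \
        <image> /usr/sbin/killsnoop.bt -p 3519776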

Confusing error message

This command works when I run it with sudo, but if I don't have permissions I get:

posix4e@posix4e-P27GV2:~$ perf stat -e 'ext4:' -a
invalid or unsupported event: 'ext4:
'
Run 'perf list' for a list of valid events

usage: perf stat [] []

-e, --event <event>   event selector. use 'perf list' to list available events

Is there any way to make this give the user a sign that they should enable permissions?
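
The underlying restriction is usually kernel.perf_event_paranoid, which perf surfaces here as a generic "invalid or unsupported event"; checking it makes the real cause visible:

    sysctl kernel.perf_event_paranoid   # non-root tracepoint access generally needs -1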

execsnoop - not all cat implementations have -v

busybox and other cat implementations do not have -v; this adds a check for whether the option is present.

--- bin/execsnoop
+++ bin/execsnoop
@@ -58,6 +58,7 @@ tracing=/sys/kernel/debug/tracing
 flock=/var/tmp/.ftrace-lock; wroteflock=0
 opt_duration=0; duration=; opt_name=0; name=; opt_time=0; opt_reexec=0
 opt_argc=0; argc=8; max_argc=16; ftext=
+VOPT=-v
 trap ':' INT QUIT TERM PIPE HUP        # sends execution to end tracing section

 function usage {
@@ -156,6 +157,9 @@ else
        fi
 fi

+# not all cat have -v, e.g busybox
+echo test | cat -v 2> /dev/null | grep -q test || VOPT=
+
 ### check permissions
 cd $tracing || die "ERROR: accessing tracing. Root user? Kernel has FTRACE?
     debugfs mounted? (mount -t debugfs debugfs /sys/kernel/debug)"
@@ -224,10 +228,10 @@ warn "echo > trace"
 ( if (( opt_duration )); then
        # wait then dump buffer
        sleep $duration
-       cat -v trace
+       cat $VOPT trace
 else
        # print buffer live
-       cat -v trace_pipe
+       cat $VOPT trace_pipe
 fi ) | $awk -v o=$offset -v opt_name=$opt_name -v name=$name \
     -v opt_duration=$opt_duration -v opt_time=$opt_time -v kname=$kname \
     -v opt_reexec=$opt_reexec '

perf record -f is crashing during heavy load

Hi,
When I use "perf record -F 99 -g -p -- sleep 300" during heavy load, it crashes. Only when I reduce the frequency to around 30 do I get results. Ideally, the frequency should be high to get better results (around 997 samples/sec). Is there any solution for this?
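
One knob worth checking: the kernel caps and auto-throttles the sampling rate via perf_event_max_sample_rate, and dmesg usually notes when throttling kicks in. A hedged look:

    cat /proc/sys/kernel/perf_event_max_sample_rate
    dmesg | grep -i perf   # look for 'lowering kernel.perf_event_max_sample_rate'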

How to get device

How do we get the device 202,1? I can't figure it out. Can you share a few examples?
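
The device string is the block device's major,minor number pair; it can be matched to a device name with standard tools (202 is, for example, the Xen virtual block major, so 202,1 would be xvda1 there):

    lsblk -o NAME,MAJ:MIN   # list devices with their major:minor numbers
    ls -l /dev/             # major, minor appear where the file size would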

problem using cachestat

By modifying cachestat, I was able to get the L1 cache miss rate. I would like to use that result in my scheduler, where tasks will be allocated based on L1 cache miss rate. How can that be done? How can I use the results from cachestat in my scheduler on the LITMUS-RT testbed?

uprobe incorrectly finds library file

Hi!

I understand that these tools are not really meant for amateur use, but I figured I'd mention this if only because you may run into it yourself at some point.

I'm trying to figure out how/why/where/etc a process is resolving a hostname and this is what I tried:

uprobe -d 1 p:libc:gethostbyname2

That works if I hack uprobe's set_path to take the first file; the little awk script returns 3 lines, which of course doesn't work. To be clear:

ERROR: resolved "libc" to "/lib/x86_64-linux-gnu/libc-2.21.so
/lib32/libc-2.21.so
/libx32/libc-2.21.so", but file missing

Additionally, and this may be me not understanding something, the check for a library's existence uses -x, and on my system (Ubuntu 15.10) almost none of the .so files in /lib/x86_64-linux-gnu are actually executable.

cachestat shows negative numbers

Hi,
I don't think this is normal:

# sh cachestat -D 5
Counting cache functions... Output every 5 seconds.
    HITS   MISSES  DIRTIES    RATIO   BUFFERS_MB   CACHE_MB  DEBUG
   -3075        0     5748   100.0%          299     119830  (2673 5748 2023 5741)
   -3798        0     7365   100.0%          299     119841  (3567 7365 3069 7383)
   -2333        0     5245   100.0%          299     119850  (2912 5245 1929 5240)
   -2469        0     4877   100.0%          299     119858  (2408 4877 2068 4832)
   -1782        0     4761   100.0%          299     119867  (2979 4761 2506 4742)
   -1981        0     4393   100.0%          299     119876  (2412 4393 2194 4379)
    2811        0     3719   100.0%          299     119883  (6530 3719 1832 3709)
       7        0     2145   100.0%          299     119886  (2152 2145 873 2140)
   53008        0    10504   100.0%          299     119902  (63512 10504 4412 10420)
   -5710        0    15480   100.0%          299     119902  (9770 15480 7456 15471)

This is on RHEL.

Does it wrap around or something?

execsnoop: enable tracing_on for good measure

Lately I was trying to run execsnoop on my workstation and saw nothing being printed out. Confused, I read the script and tried catting all sorts of values in /sys/kernel/debug/tracing. To my surprise, tracing_on contained 0. Doing echo 1 > tracing_on thus fixed the problem.

I see that execsnoop doesn't deal with tracing_on, so maybe it would be good to just put an echo 1 > tracing_on in there?

I find it strange that I only encounter this now, as a cursory Google search tells me that tracing_on has been present in Linux kernels since forever.

syscount sort by time instead of count

First, thank you for this tool.
I'm profiling an application that spends 20% of its time in system calls. With the -c option I get the list of the most frequent syscalls. It would also be helpful to have them sorted by the time spent in them.

funcslower: -p filters on thread id and not process id

As always with the kernel-colored glasses, the -p switch in some tools uses set_ftrace_pid, which is the kernel pid notion -- user thread id. So tracing a process with multiple threads using the -p switch only traces the first thread. I've run into this with funcslower but I assume it affects additional tools as well.

I can submit a PR that fixes this by parsing the thread ids from /proc/PID/task, or by parsing ps output. Thoughts?
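
Until the tools handle this, a hedged workaround is to feed every task of the process into set_ftrace_pid, which accepts appended entries ($PID is the target process):

    for tid in /proc/$PID/task/*; do
        echo ${tid##*/} >> /sys/kernel/debug/tracing/set_ftrace_pid
    done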

Feature request: tagged git releases for packagers

I would like to package tagged, numbered releases of perf-tools for Arch; we have a -git package, but that requires continual manual updating by the end user. Would you be willing to git tag releases so that we can grab github.com generated tarballs from the releases/ page? I'm sure this would assist other distributions in packaging "stable" as well...

iosnoop questions

The iosnoop output includes a BYTES column. Where does the byte size in the output come from? Can it be adjusted?

Invalid printf format in iolatency/cachestat

[root@thinkpad ~]# iolatency 1 2
Tracing block I/O. Output every 1 seconds.

  >=(ms) .. <(ms)   : I/O      |Distribution                          |
       0 -> 1       : 0        |                                      |

  >=(ms) .. <(ms)   : I/O      |Distribution                          |
       0 -> 1       : 0        |                                      |

Ending tracing...
[root@thinkpad ~]# iolatency -T 1 2
Tracing block I/O. Output every 1 seconds.
/usr/bin/iolatency: line 204: printf: `(': invalid format character
/usr/bin/iolatency: line 204: printf: `(': invalid format character

Ending tracing...
[root@thinkpad ~]# which printf
/usr/bin/printf

Unrecognized line: Was the 'perf record' command properly terminated? at FlameGraph/stackcollapse-perf.pl line 339, <> line 2.

@brendangregg Hi, I have admired your work for a long time, but now I have a question. When I run 'perf record', it finishes before the sleep time is reached and does not show any information, and the flame graph step shows: Unrecognized line: Was the 'perf record' command properly terminated? at FlameGraph/stackcollapse-perf.pl line 339, <> line 2.
Can you give me some advice?

“perf script -F +insn” doesn't print insn field

After collecting perf data using perf record, I use perf script -i perf.data -F +insn to view the collected data, but there is nothing in the insn field:

        perf 32224 15889777.194652:          1 cycles:ppp:  ffffffff8826e448 native_write_msr ([kernel.kallsyms]) insn:
        perf 32224 15889777.194654:          1 cycles:ppp:  ffffffff8826e448 native_write_msr ([kernel.kallsyms]) insn:
        perf 32224 15889777.194655:         11 cycles:ppp:  ffffffff8826e448 native_write_msr ([kernel.kallsyms]) insn:
        perf 32224 15889777.194657:        285 cycles:ppp:  ffffffff8826e44a native_write_msr ([kernel.kallsyms]) insn:
        perf 32224 15889777.194658:       7550 cycles:ppp:  ffffffff88237a58 native_sched_clock ([kernel.kallsyms]) insn:
          sh 32224 15889777.194660:     191221 cycles:ppp:  ffffffff886219bd apparmor_bprm_committing_creds ([kernel.kallsyms]) insn:
          sh 32224 15889777.194707:    2700450 cycles:ppp:      7fc66bd036d7 [unknown] (/lib/x86_64-linux-gnu/ld-2.27.so) insn:
          sh 32225 15889777.195051:          1 cycles:ppp:  ffffffff8826e448 native_write_msr ([kernel.kallsyms]) insn:
          sh 32225 15889777.195054:          1 cycles:ppp:  ffffffff8826e448 native_write_msr ([kernel.kallsyms]) insn:
          sh 32225 15889777.195055:         13 cycles:ppp:  ffffffff8826e448 native_write_msr ([kernel.kallsyms]) insn:
          sh 32225 15889777.195056:        302 cycles:ppp:  ffffffff8826e44a native_write_msr ([kernel.kallsyms]) insn:
          sh 32225 15889777.195057:       7713 cycles:ppp:  ffffffff88237a58 native_sched_clock ([kernel.kallsyms]) insn:
          sh 32225 15889777.195060:     192158 cycles:ppp:  ffffffff88418cdb __handle_mm_fault ([kernel.kallsyms]) insn:
          sh 32225 15889777.195119:    2580729 cycles:ppp:  ffffffff88ba9d97 clear_page_erms ([kernel.kallsyms]) insn:

Is this a bug, or am I just using it the wrong way?

opensnoop outputs unicode gibberish for file names on centos6

# ./opensnoop -d 2
Tracing open()s for 2 seconds (buffered)...
COMM             PID      FD FILE
<...>            22794   0x3 
<...>            22799   0x3 P[���������7
<...>            22799   0x3 P[�����t�kڠ�
<...>            22798   0x3 ��%��������7
<...>            22798   0x3 ��%����ƜO;��
<...>            22798   0x3 ��%����ӅO;��
<...>            22798   0x3 ��%����t�O;��
<...>            22799   0x3 P[�����p���7
<...>            22798   0x3 ��%����p���7
<...>            22798   0x3 ��%��������7
���.>            22798   0x3 ��%����@y
<...>            22798    -1 ��%����
<...>            22798    -1 ��%�����@��
<...>            22798    -1 ��%�����*��
<...>            22798    -1 ��%�����+��
<...>            22798    -1 ��%����0A��
<...>            22798    -1 ��%�����+��
cat              22800   0x3 ���������7
cat              22800   0x3 �����tdT\h�
cat              22800   0x3 �����p���7

Ending tracing...

Tried with CentOS 6 kernels 2.6.32-504.el6.x86_64 and 2.6.32-431.17.1.el6.x86_64, but of course I can't really comment on what's in them.

execsnoop doesn't work with kernel 4.17+

hi

This implementation is designed to work on older kernel versions, and without kernel debuginfo.

It would be nice if it also worked on more recent kernel versions. I have tried changing the makeprobe call to use __x64_sys_execve, but then half the output is gibberish. I'm unsure how to adjust the output format of the probe.
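
The gibberish is expected: on 4.17+, x86-64 syscall wrappers like __x64_sys_execve() take a single struct pt_regs *, so the real arguments are one dereference away. A hedged sketch with the companion kprobe tool (0x70 is the offset of di within pt_regs on x86-64, and the probe name works around the arg-naming bug reported above):

    ./kprobe 'p:myexec __x64_sys_execve filename=+0x70(%di):string'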

uprobe fails to work when a library used by a binary is present in different locations.

uprobe fails to work when a library used by a binary is present in multiple locations. In my case it is libc, which was present in 3 different locations. The fix is to choose one of them and proceed; I am pasting the code here.

    function set_path {
        name=$1

        path=$(which $name)
        if [[ "$path" == "" ]]; then
            path=$(ldconfig -v 2>/dev/null | awk -v lib=$name '
                $1 ~ /:/ { sub(/:/, "", $1); path = $1 }
                { sub(/\..*/, "", $1); }
                $1 == lib { print path "/" $3 }')
            if [[ "$path" == "" ]]; then
                die "ERROR: segment "$name" ambiguous." \
                    "Program or library? Try a full path."
            fi
        fi
        path=$(echo $path | cut -f2 -d" ")    # the fix: keep a single match
        if [[ ! -x $path ]]; then
            die "ERROR: resolved "$name" to "$path", but file missing"
        fi
    }

opensnoop - lastfile[pid] is no longer valid when we get lost events

When the kernel informs us that we lost events "CPU:%d [LOST %lu EVENTS]\n", we cannot assume that a filename we saved from a do_sys_open line corresponds to the rval we get from the following sys_open line. With the existing logic, we do, and under certain conditions it's possible to get a spurious "valid" open of a nonexistent file in the opensnoop output.

(I'll have a PR for this shortly.)

Failed to open .map files, continuing without symbols

When I run:

perf script | ./stackcollapse-perf.pl > out.perf-folded

Failed to open /tmp/perf-21607.map, continuing without symbols
Failed to open /tmp/perf-12967.map, continuing without symbols
Failed to open /tmp/perf-28472.map, continuing without symbols
Failed to open /tmp/perf-24981.map, continuing without symbols
Failed to open /tmp/perf-28048.map, continuing without symbols
Failed to open /tmp/perf-7905.map, continuing without symbols
Failed to open /tmp/perf-28432.map, continuing without symbols
Failed to open /tmp/perf-28669.map, continuing without symbols
Failed to open /tmp/perf-8011.map, continuing without symbols

Setup

- Debian 10 
- Perf version 4.19.152

Thanks
Use of uninitialized value in tcpretrans

root@thinkpad ~ #tcpretrans 
TIME     PID    LADDR:LPORT          -- RADDR:RPORT          STATE       
...
19:31:12 0      192.168.1.5:39331    R> 185.24.92.218:443    SYN_SENT    
19:31:15 0      192.168.1.5:39326    R> 185.24.92.218:443    SYN_SENT    
19:31:16 0      192.168.1.5:39331    R> 185.24.92.218:443    SYN_SENT    
Use of uninitialized value $pid in printf at /usr/bin/tcpretrans line 272.
19:31:17        192.168.1.5:39209    R> 185.24.92.218:443    SYN_SENT    
19:31:17 0      192.168.1.5:39217    R> 185.24.92.218:443    SYN_SENT    
19:31:23 0      192.168.1.5:39326    R> 185.24.92.218:443    SYN_SENT    
...
Use of uninitialized value $pid in printf at /usr/bin/tcpretrans line 272.
...

cachestat can't work on kernel 3.16

I am trying to run cachestat with ./cachestat 5, but it fails with the following output:

Counting cache functions... Output every 5 seconds.
./cachestat: line 115: function_profile_enabled: Permission denied
ERROR: enabling function profiling. Have CONFIG_FUNCTION_PROFILER? Exiting.

The kernel version is

Linux version 3.16.0-7-amd64 #1 SMP Debian 3.16.59-1 (2018-10-03)

And there isn't any file named function_profile_enabled in the /sys/kernel/debug/tracing directory.

Is there any solution to this problem?
Thanks a lot!
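
The missing function_profile_enabled file indicates the kernel was built without the profiler; a hedged check:

    grep CONFIG_FUNCTION_PROFILER /boot/config-$(uname -r)   # cachestat needs =y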
