awslabs / damo

DAMON user-space tool

Home Page: https://damonitor.github.io/

License: GNU General Public License v2.0

Python 94.68% Shell 5.26% C 0.06%

damo's Introduction

DAMO: Data Access Monitoring Operator

damo is a user space tool for DAMON. Using this, you can monitor the data access patterns of your system or workloads and make data access-aware memory management optimizations.

damo monitor demo for water_nsquared

Demo Video

Click the thumbnail below to watch the short demo video.

DAMON: a demo for the Kernel Summit 2020

Getting Started

Packaging status

Follow the instructions and commands below to monitor and visualize the access pattern of your workload.

$ # ensure DAMON is enabled on your kernel
$ # install damo using packaging systems listed above,
$ # or cloning the source repo and updating $PATH.
$ sudo damo show $(pidof <your workload>)
$ sudo damo record $(pidof <your workload>)
$ sudo damo report heats --heatmap stdout --stdout_heatmap_color emotion

The damo show and damo report commands will show the access pattern of your workload, like below:

masim_stairs_snapshot masim stairs heatmap in ascii

FAQs

How can I ensure DAMON is enabled on my kernel?

Please refer to the 'Install' section of the project webpage.

Where can I get more detailed usage?

The sections below provide quick introductions to damo's major features. For more detailed usage, please refer to the USAGE.md file.

What do the version numbers mean?

Nothing, except which release is more recent: a higher version number simply means a more recently released version.

Will pip3 install damo install the latest version of damo?

It will install the latest stable version of damo. If you want, you can also install a less stable but fresher damo from the source code. For that, fetch the next branch of the source tree and use the damo executable file in the tree.

$ git clone https://github.com/awslabs/damo -b next

How can I participate in the development of damo?

Please refer to CONTRIBUTING file.

Why are some features not documented in the USAGE.md file?

Because those are not yet stabilized. In other words, such features are in their experimental stages, and could therefore be deprecated and removed without notice or a grace period. The documented features could also be deprecated, but with some notification and a grace period. If you rely on a feature that is not documented, please report your use case to the community.

How can I learn the deprecation schedule and alternatives for specific features?

The features that will be, or already have been, deprecated, their deprecation schedule, and their alternatives are summarized in the FEATURES_DEPRECATION_SCHEDULE.md file.

Quick Intro for Major Features

Below are quick introductions to damo's major features. For more detailed usage, please refer to the USAGE.md file.

Snapshot Data Access Pattern

The commands below repeatedly take a snapshot of a program's access pattern every second.

$ git clone https://github.com/sjp38/masim
$ cd masim; make; ./masim ./configs/zigzag.cfg --silent &
$ sudo damo start --target_pid $(pidof masim)
$ while :; do sudo damo show; sleep 1; done

The first two lines fetch an artificial memory access generator program and run it in the background. It repeatedly accesses two 100 MiB memory regions, one at a time. You can substitute it with your real workload.

The third line asks damo to start monitoring the access pattern of the process. Finally, the last line retrieves a snapshot of the monitoring results every second and shows it on the terminal.
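The shell loop above can also be scripted; below is a minimal Python sketch of the same polling idea (the command list and timing are illustrative, not a damo API):

```python
import subprocess
import time

def poll_snapshots(cmd, interval_s=1.0, count=3):
    """Run cmd repeatedly and collect its stdout each time -- the same
    idea as `while :; do sudo damo show; sleep 1; done`."""
    outputs = []
    for i in range(count):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        outputs.append(proc.stdout)
        if i < count - 1:
            time.sleep(interval_s)
    return outputs

# For DAMON snapshots, one would call:
# poll_snapshots(['sudo', 'damo', 'show'])
```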

Recording Data Access Patterns

The commands below record the memory access pattern of a program and save the monitoring results in the damon.data file.

$ git clone https://github.com/sjp38/masim
$ cd masim; make; ./masim ./configs/zigzag.cfg --silent &
$ sudo damo record -o damon.data $(pidof masim)

The first two lines fetch an artificial memory access generator program and run it in the background. It repeatedly accesses two 100 MiB memory regions, one at a time. You can substitute it with your real workload. The last line asks damo to record the access pattern in the damon.data file.

Visualizing Recorded Patterns

The three commands below visualize the recorded access patterns as three image files.

$ damo report heats --heatmap stdout
$ damo report wss --range 0 101 1
$ damo report wss --range 0 101 1 --sortby time --plot
  • access_pattern_heatmap.png will show the data access pattern in a heatmap, which shows when (x-axis) what memory region (y-axis) is how frequently accessed (color).
  • wss_dist.png will show the distribution of the working set size.
  • wss_chron_change.png will show how the working set size has chronologically changed.
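The stdout heatmap used earlier (damo report heats --heatmap stdout) is conceptually a character rendering of a time-versus-space frequency matrix. A toy sketch of that idea (not damo's actual renderer):

```python
def ascii_heatmap(matrix, charset='0123456789'):
    """Render a 2D access-frequency matrix as text, mapping the
    min..max frequency range onto the given character set."""
    flat = [v for row in matrix for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1  # avoid division by zero for flat data
    pick = lambda v: charset[(v - lo) * (len(charset) - 1) // span]
    return '\n'.join(''.join(pick(v) for v in row) for row in matrix)
```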

You can see such images on a web page [1]. Images made with other, more realistic workloads are also available [2,3,4].

[1] https://damonitor.github.io/doc/html/latest/admin-guide/mm/damon/start.html#visualizing-recorded-patterns
[2] https://damonitor.github.io/test/result/visual/latest/rec.heatmap.1.png.html
[3] https://damonitor.github.io/test/result/visual/latest/rec.wss_sz.png.html
[4] https://damonitor.github.io/test/result/visual/latest/rec.wss_time.png.html

Data Access Pattern Aware Memory Management

The command below makes every memory region of size >=4K that hasn't been accessed for >=60 seconds in your workload be swapped out. By doing this, you can make your workload more memory efficient with only modest performance overhead.

$ sudo damo start --damos_access_rate 0 0 --damos_sz_region 4K max \
                    --damos_age 60s max --damos_action pageout \
                    <pid of your workload>
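The scheme above selects regions by three conditions. Expressed as a plain predicate (a sketch of the selection logic only, not DAMON's implementation):

```python
def qualifies_for_pageout(size_bytes, access_rate_pct, age_s):
    """Mirror the scheme above: regions of >=4 KiB with a zero access
    rate that have stayed that cold for >=60 seconds get paged out."""
    return (size_bytes >= 4 * 1024
            and access_rate_pct == 0
            and age_s >= 60)
```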

damo's People

Contributors

dongdongju, fdu, honggyukim, michel-slm, onlyoneofme, paniakin-aws, piyushthange, puranjaymohan, sj-aws, sjp38, velurimithun


damo's Issues

Question about damos_quota at reclaim.c

Hi SeongJae,

I use quotas feature for my implementation but I feel that I don't fully understand the meaning of each field.

Could you please explain about the damos_quota setting in mm/damon/reclaim.c?

static struct damos_quota damon_reclaim_quota = {
        /* use up to 10 ms time, reclaim up to 128 MiB per 1 sec by default */
        .ms = 10,
        .sz = 128 * 1024 * 1024,
        .reset_interval = 1000,
        /* Within the quota, page out older regions first. */
        .weight_sz = 0,
        .weight_nr_accesses = 0,
        .weight_age = 1
};
DEFINE_DAMON_MODULES_DAMOS_QUOTAS(damon_reclaim_quota);

Since I use 100ms of sampling interval and 2 seconds of aggregation interval, which are 20 times of the default setting, I have multiplied above each value by 20 to match the proportion, but I'm not quite sure if those numbers are sensible.
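The proportional scaling described here can be written down explicitly. Below is a sketch, where the baseline is the reclaim.c defaults quoted above and the uniform x20 factor is this issue's assumption (whether every field should really scale with the intervals is exactly the open question):

```python
# Baseline damos_quota defaults from mm/damon/reclaim.c (quoted above).
BASE_QUOTA = {
    'ms': 10,                 # up to 10 ms of CPU time
    'sz': 128 * 1024 * 1024,  # reclaim up to 128 MiB
    'reset_interval': 1000,   # per 1000 ms window
}

def scale_quota(quota, factor):
    """Scale every quota field uniformly by the interval ratio."""
    return {field: value * factor for field, value in quota.items()}

# Sampling interval 5 ms -> 100 ms and aggregation 100 ms -> 2 s
# are both a factor of 20:
scaled = scale_quota(BASE_QUOTA, 20)
```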

It would be helpful for me to tune my damos_quota if I can better understand other use cases such as reclaim.c.

Thanks!

Support a single config file that describes all the options

Hi SeongJae,

We've been heavily testing DAMOS based on our custom actions and found a possible use case so I've created this issue.

We currently use damo schemes -c action.json with more custom options such as --sample, --aggr, and --monitoring_nr_regions_range together. We keep each json config file for each execution, but extra options must be maintained together.

I feel that it will be really useful if there is a single config file that describes all the damo options including damo options and DAMOS config options together.

I think one single json file can be provided for this purpose so it'd be helpful if you could consider this use case. Thanks!

Avoid `commit` in damo show and status

Hi SeongJae,

It looks like damo show and damo status both write commit to status, which causes a conflict when the two are run together.

I find it quite useful to run those commands repeatedly every second under the watch command, as follows.

# In terminal 1
$ watch -n 1 ./damo show

# In terminal 2
$ watch -n 1 ./damo status

I think writing commit to status is not required for these commands, since they don't change DAMON's status, so I'm wondering if there is a way to skip the commit. If not, an option to avoid the commit would be useful.

Thanks.

arm

I ran this program in an aarch64 GNU/Linux environment, but it failed with the following logs:

[root@localhost damo-next]# sudo ./damo record -o damon.data $(pidof masim)
Traceback (most recent call last):
  File "./damo", line 91, in <module>
    main()
  File "./damo", line 70, in main
    record.main(args)
  File "/home/damo-next/record.py", line 158, in main
    _damon.chk_update_debugfs(args.debugfs)
  File "/home/damo-next/_damon.py", line 238, in chk_update_debugfs
    update_supported_features()
  File "/home/damo-next/_damon.py", line 168, in update_supported_features
    if test_debugfs_file(debugfs_target_ids, 'paddr\n', '42\n'):
  File "/home/damo-next/_damon.py", line 146, in test_debugfs_file
    f.write(input_str)
IOError: [Errno 16] Device or resource busy

My Python version is 2.7.5. Or do I need some other library?

How to use filter in damo

Hello,

Is there any way I can use filters with damo? I found some examples here, but I'm not really used to using JSON with damo.

Or, can I use -c option along with normal usage like:

$ sudo damo schemes --damos_access_rate 0 0 --damos_sz_region 4K max \
                    --damos_age 60s max --damos_action pageout \
                    -c ./my_filter.json
                    <pid of your workload>

Also, any documentation about how to use the JSON file would be useful to me.

Thanks!

Printing WSS at runtime (maybe with stat command)

Hi SeongJae,

It was great to speak with you on Google Meet. I asked a question about printing WSS at runtime, and I remember you mentioned that the interface isn't fixed, but that the stat command could perhaps be used for it.

I don't have a clear idea of how to use the stat command for WSS printing, so could you please give me some pointers?

I also find it hard to locate the usage of each command in the help messages or documents. It would be really helpful if there were some example usages for each command.

Thanks.

Record rbuf size has no effect on damon built with kernel 5.15.0-rc3

While trying to use the -l (--rbuf, length of record result buffer) argument, I see no effect from it.

Usage example:
./damo record -l 4096 $(pidof masim)

The debugfs does not have the record file, which was previously present when running a 5.14 release candidate prior to mainline integration:

ls -la /sys/kernel/debug/damon/
total 0
drwxr-xr-x. 2 root root 0 Oct 13 14:09 .
drwx------. 39 root root 0 Oct 13 14:09 ..
-rw-------. 1 root root 0 Oct 14 12:42 attrs
-rw-------. 1 root root 0 Oct 13 14:09 kdamond_pid
-rw-------. 1 root root 0 Oct 13 14:09 mk_contexts
-rw-------. 1 root root 0 Oct 14 12:41 monitor_on
-rw-------. 1 root root 0 Oct 13 14:09 rm_contexts
-rw-------. 1 root root 0 Oct 14 12:41 target_ids

Has this feature stopped being supported? Or is a configuration flag missing from my kernel build?

No such file or directory: '/sys/devices/system/memory/block_size_bytes'

Hi SeongJae,

I have a numa system on qemu as follows.

$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 1972 MB
node 0 free: 1779 MB
node 1 cpus:
node 1 size: 5983 MB
node 1 free: 5969 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 

In this machine, I see an error when using --numa_node option as follows.

$ ./damo fmt_json --numa_node 0
Traceback (most recent call last):
  File "/home/root/damo/./damo", line 116, in <module>
    main()
  File "/home/root/damo/./damo", line 113, in main
    subcmd.execute(args)
  File "/home/root/damo/_damo_subcmds.py", line 34, in execute
    self.module.main(args)
  File "/home/root/damo/damo_fmt_json.py", line 27, in main
    kdamonds, err = _damon_args.kdamonds_for(args)
  File "/home/root/damo/_damon_args.py", line 335, in kdamonds_for
    ctx, err = damon_ctx_for(args)
  File "/home/root/damo/_damon_args.py", line 230, in damon_ctx_for
    init_regions, err = init_regions_for(args)
  File "/home/root/damo/_damon_args.py", line 36, in init_regions_for
    init_regions = _damo_paddr_layout.paddr_region_of(args.numa_node)
  File "/home/root/damo/_damo_paddr_layout.py", line 163, in paddr_region_of
    paddr_ranges_ = paddr_ranges()
  File "/home/root/damo/_damo_paddr_layout.py", line 130, in paddr_ranges
    return integrate(memblock_ranges(), iomem_ranges())
  File "/home/root/damo/_damo_paddr_layout.py", line 70, in memblock_ranges
    sz_block = int(readfile('/sys/devices/system/memory/block_size_bytes'), 16)
  File "/home/root/damo/_damo_paddr_layout.py", line 43, in readfile
    with open(file_path, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/sys/devices/system/memory/block_size_bytes'

There is no memory directory inside /sys/devices/system/ in the system.

$ ls /sys/devices/system/
clockevents  clocksource  container  cpu  machinecheck  node

So this edge case should also be handled properly.
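A defensive variant of the readfile() call in the traceback could fall back gracefully when the sysfs file is absent. A hypothetical sketch (the fallback value and function name are illustrative, not damo's actual code):

```python
# Hypothetical fallback when /sys/devices/system/memory is absent
# (e.g. a kernel built without memory hotplug support).
DEFAULT_BLOCK_SIZE = 128 * 1024 * 1024

def read_block_size(path='/sys/devices/system/memory/block_size_bytes'):
    """Return the memory block size in bytes (the sysfs file stores it
    as hex), or a fallback when the file does not exist."""
    try:
        with open(path, 'r') as f:
            return int(f.read(), 16)
    except FileNotFoundError:
        return DEFAULT_BLOCK_SIZE
```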

The tested damo version is as follows.

$ git log --oneline --decorate=no -1
01474c2 Update the version

The kernel version is as follows.

$ uname -r
6.6.0-rc4

Does damo support multiple "damo record"?

Hi SeongJae,

I have tried to run damo record in two different terminals at the same time, and the second damo record fails as follows.

$ sudo ./damo record -o damon.stairs3.data "./masim/masim ./masim/configs/stairs.cfg"
could not turn DAMON on (writing on to /sys/kernel/mm/damon/admin/kdamonds/0/state failed ([Errno 16] Device or resource busy))

As I understand it, /sys/kernel/mm/damon/admin/kdamonds/nr_kdamonds can be larger than 1, allowing multiple kdamonds to be created, so I'm wondering why damo shows this failure.

I guess it might be a problem only in damo, not in the DAMON framework inside the kernel.

Could you please confirm if my assumption is correct or not?

Thanks.

damo recording for multi process/thread program

Hi @sjp38,

It's been a while, but I would like to ask you a question about damo recording for multi-process/thread programs.

I would like to record the memory access pattern of a target program, but it is a shell script that invokes a Python script, which in turn creates multiple threads internally.

In this case, which pid should I put as a target of damo?

I have tried the pid of the Python script inside the shell script, but the following command shows nothing in its output.

$ ./damo report heats --heatmap stdout --stdout_heatmap_color emotion
(... prints multiple empty lines ...)
# access_frequency:  0  1  2  3  4  5  6  7  8  9
# x-axis: space (94647110262784-94738634076144: 85.238 GiB)
# y-axis: time (24214084546000-25696152599000: 24 m 42.068 s)
# resolution: 80x40 (1.065 GiB and 37.052 s for each character)

I have tested the heatmap output of masim with zigzag.cfg and stairs.cfg and they showed the expected output.

Could you please help? Thanks!

Too many regions are shown with --numa_node 0

Hi SeongJae,

On my personal machine, I see too many separate regions with --numa_node 0, as follows.

$ sudo ./damo fmt_json --numa_node 0
{
    "kdamonds": [
        {
            "state": null,
            "pid": null,
            "contexts": [
                {
                    "ops": "paddr",
                    "targets": [
                        {
                            "pid": null,
                            "regions": [
                                {
                                    "start": "4,096",
                                    "end": "651,264"
                                },
                                {
                                    "start": "1,048,576",
                                    "end": "2,259,902,488"
                                },
                                {
                                    "start": "2,259,902,488",
                                    "end": "2,259,959,896"
                                },
                                {
                                    "start": "2,259,959,896",
                                    "end": "2,259,963,928"
                                },
                                {
                                    "start": "2,259,963,928",
                                    "end": "2,260,093,528"
                                },
                                {
                                    "start": "2,260,093,528",
                                    "end": "2,293,805,056"
                                },
                                {
                                    "start": "2,293,813,248",
                                    "end": "2,318,991,360"
                                },
                                {
                                    "start": "2,318,995,456",
                                    "end": "2,324,541,440"
                                },
                                {
                                    "start": "2,324,824,064",
                                    "end": "2,382,479,360"
                                },
                                {
                                    "start": "2,398,089,216",
                                    "end": "2,398,093,312"
                                },
                                {
                                    "start": "4,294,967,296",
                                    "end": "19,025,362,944"
                                }
                            ]
                        }
                    ],
                    "intervals": {
                        "sample_us": "5 ms",
                        "aggr_us": "100 ms",
                        "ops_update_us": "1 s"
                    },
                    "nr_regions": {
                        "min": "10",
                        "max": "1,000"
                    },
                    "schemes": []
                }
            ]
        }
    ]
}

My personal machine has a single NUMA node with only 16 GB of DRAM, so having this many regions in the configuration feels wasteful.

Instead, it might be better to keep only the two largest holes and show only 3 regions in this case, just as damo already does in vaddr mode.
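The vaddr-style reduction suggested here (keep only the largest holes as separators and merge everything else) can be sketched as follows; this is an illustration of the idea, not damo's code:

```python
def merge_keeping_biggest_holes(ranges, nr_holes=2):
    """Collapse sorted (start, end) ranges into nr_holes + 1 regions,
    preserving only the nr_holes largest gaps between them."""
    # Indices of the gaps to keep, largest first.
    gaps = sorted(range(len(ranges) - 1),
                  key=lambda i: ranges[i + 1][0] - ranges[i][1],
                  reverse=True)[:nr_holes]
    merged, start = [], ranges[0][0]
    for i, (s, e) in enumerate(ranges):
        if i in gaps:           # a kept hole ends the current region
            merged.append((start, e))
            start = ranges[i + 1][0]
    merged.append((start, ranges[-1][1]))
    return merged
```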

On the other hand, if you think the regions will eventually be split anyway, so it's better to keep the regions separated by physical layout rather than merging them, then I can close this issue without changes.

Thanks.

NameError: name '_dmo_fmt_str' is not defined. Did you mean: '_damo_fmt_str'?

Hi SeongJae,

I see an error while running damo show as follows.

$ sudo ./damo show
Traceback (most recent call last):
  File "/home/honggyu/work/damo/./damo", line 112, in <module>
    main()
  File "/home/honggyu/work/damo/./damo", line 109, in main
    subcmd.execute(args)
  File "/home/honggyu/work/damo/_damo_subcmds.py", line 34, in execute
    self.module.main(args)
  File "/home/honggyu/work/damo/damo_show.py", line 704, in main
    pr_records(args, records)
  File "/home/honggyu/work/damo/damo_show.py", line 450, in pr_records
    format_pr(args.format_record_head, record_formatters,
  File "/home/honggyu/work/damo/damo_show.py", line 369, in format_pr
    txt = formatter.format_fn(record, raw)
  File "/home/honggyu/work/damo/damo_show.py", line 46, in <lambda>
    _dmo_fmt_str.format_ns(record.snapshots[0].start_time, raw),
NameError: name '_dmo_fmt_str' is not defined. Did you mean: '_damo_fmt_str'?

Could you please have a look? Thanks.

paddr is not supported

Hi,

I intend to sample each NUMA node individually, so I have chosen to sample using physical addresses.

I am examining the address range of System RAM using 'cat /proc/iomem' and converting the hexadecimal values to decimal. My input is as follows:
sudo damo record --numa_node 0 --ops paddr -r "4294967296-139586437119" --monitoring_nr_regions 10 100 -s 5000

However, I have encountered errors for the addresses I have tried:
could not turn DAMON on (cannot apply kdamonds from args (staging kdamond failed (paddr is not supported)))

How can I make these addresses supported? Or, how can I determine which addresses are supported?

Thanks!
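Converting the 'System RAM' lines of /proc/iomem to decimal by hand, as described above, is error-prone; below is a small parser sketch (note that /proc/iomem shows real addresses only to root):

```python
def system_ram_ranges(iomem_text):
    """Extract (start, end) address pairs, as decimal integers, from
    lines of /proc/iomem-style text labelled 'System RAM'."""
    ranges = []
    for line in iomem_text.splitlines():
        span, _, name = line.partition(' : ')
        if name.strip() == 'System RAM':
            start, _, end = span.strip().partition('-')
            ranges.append((int(start, 16), int(end, 16)))
    return ranges
```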

Support loadable module for DAMON with a separate build config

Hi SeongJae,

We had discussion previously at #46 (comment) as follows.

I saw that there is no EXPORT_SYMBOL in the files under mm/damon/* now. Could you please confirm that the live demo with insmodable kernel module in the above youtube link is no longer available?

You're correct; the DAMON API does not support loadable modules at the moment. I had to export some symbols only for the demo. That exporting change has never been merged into the mainline. Note that this is mainly because we have no use case, not because we don't want to let loadable modules use the DAMON API.

These days, I need to keep modifying my DAMON-based kernel implementation, but it's hard to modify and replace the kernel frequently on a real machine for testing.

For frequent testing and evaluation, it would be useful to have EXPORT_SYMBOL under mm/damon/*. But I know this is not acceptable to the kernel community, so I'm wondering whether those EXPORT_SYMBOLs could be added only when a specific build config, for example CONFIG_DAMON_EXPORT_SYMBOL, is enabled as a debugging feature.

If this were supported, I think I could write and test more DAMON features during development.

I would like to hear whether this approach is acceptable or not.

Thanks!

Using damo with an Nvidia TX2 board

Hello, I am a new user of this amazing tool!

I am grateful to the author for back-porting DAMON to kernel version 4.9.140.
I used this tool on an Nvidia TX2 board with JetPack version 32.2.3.
Running damo with 'ls' or 'sleep 5' works, though I don't know whether the output is accurate.

But when I tried it with ./masim/masim ./masim/configs/zigzag.cfg, I hit the following kernel BUG.

[ 278.233762] damon: kdamond (7386) starts
[ 278.243855] ------------[ cut here ]------------
[ 278.243861] kernel BUG at /home/ubuntu/public_sources/kernel/kernel-4.9/mm/damon/prmtv-common.c:29!
[ 278.252905] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[ 278.258390] Modules linked in: fuse zram bcmdhd overlay cfg80211 nvgpu bluedroid_pm ip_tables x_tables
[ 278.267848] CPU: 4 PID: 7386 Comm: kdamond.0 Not tainted 4.9.140 #2
[ 278.274118] Hardware name: quill (DT)
[ 278.277793] task: ffffffc1dbcef000 task.stack: ffffffc1a3354000
[ 278.283742] PC is at damon_follow_pte.isra.0.constprop.2+0x118/0x130
[ 278.290106] LR is at damon_va_young+0x3c/0x250
[ 278.294563] pc : [] lr : [] pstate: 00400045
[ 278.301961] sp : ffffffc1a3357cb0
[ 278.305286] x29: ffffffc1a3357cb0 x28: ffffffc1c7e749b0
[ 278.310647] x27: 0000000000017000 x26: ffffffc1c7e749d0
[ 278.316004] x25: ffffff8009e7d920 x24: ffffff8009524000
[ 278.321362] x23: ffffff8009e7d920 x22: ffffffc1a3357d30
[ 278.326719] x21: ffffffc1a3357d38 x20: 0000005577f055d3
[ 278.332070] x19: ffffffc1a3296df8 x18: 0000000000000014
[ 278.337424] x17: 0000000000000001 x16: 0000000000000000
[ 278.342780] x15: 0000000000000000 x14: 000000559af38f40
[ 278.348135] x13: 0000000000000a0a x12: 071c71c71c71c71c
[ 278.353489] x11: 000000000000000b x10: 0000000000000a20
[ 278.358841] x9 : ffffffc1a3357bf0 x8 : ffffffc1dbcefa80
[ 278.364196] x7 : fefefeff646c606d x6 : ffffffbf00000000
[ 278.369550] x5 : 00000000070dcc40 x4 : 00000000002f3385
[ 278.374904] x3 : ffffffc1a3357d38 x2 : ffffffc1a3357d30
[ 278.380255] x1 : 0000000080000000 x0 : 00e800021f800f51

[ 278.387118] Process kdamond.0 (pid: 7386, stack limit = 0xffffffc1a3354000)
[ 278.394086] Call trace:
[ 278.396558] [] damon_follow_pte.isra.0.constprop.2+0x118/0x130
[ 278.403965] [] damon_va_young+0x3c/0x250
[ 278.409464] [] damon_va_check_accesses+0x7c/0x160
[ 278.415742] [] kdamond_fn+0x12c/0x740
[ 278.420980] [] kthread+0xec/0xf0
[ 278.425783] [] ret_from_fork+0x10/0x40
[ 278.431113] ---[ end trace cfa9801b9d191659 ]---
[ 345.458024] tegradc 15210000.nvdisplay: blank - powerdown
[ 345.513299] extcon-disp-state external-connection:disp-state: cable 47 state 0 already set.
[ 345.513324] Extcon AUX1(HDMI) disable
[ 345.535790] tegra_nvdisp_handle_pd_disable: Powergated Head2 pd
[ 345.535891] tegra_nvdisp_handle_pd_disable: Powergated Head1 pd
[ 345.536504] tegra_nvdisp_handle_pd_disable: Powergated Head0 pd

Do you guys have any clue for solving this?

converting format from perf_data to json_compressed failed (parsing damon.data failed)

Hi,
I've read question #66, but it seems like our situation is a little different.

Below are the machine details:

uname -r 
6.1.0-rc1

perf --version
perf version 6.1.0-rc1

grep CONFIG_DAMON /boot/config-$(uname -r)
CONFIG_DAMON=y
CONFIG_DAMON_VADDR=y
CONFIG_DAMON_PADDR=y
CONFIG_DAMON_SYSFS=y
CONFIG_DAMON_DBGFS=y
CONFIG_DAMON_RECLAIM=y
CONFIG_DAMON_LRU_SORT=y

sudo pip3 install damo
Collecting damo
  Using cached damo-2.0.9-py3-none-any.whl (75 kB)
Installing collected packages: damo
Successfully installed damo-2.0.9

I'm going to sample a section of physical memory:

sudo damo record --numa_node 0 --ops paddr -r "4294967296-139586437119" --monitoring_nr_regions 400 5000 -s 5000
Press Ctrl+C to stop
^C
signal 2 received
[ perf record: Woken up 19 times to write data ]
[ perf record: Captured and wrote 6.410 MB damon.data (78409 samples) ]
converting format from perf_data to json_compressed failed (parsing damon.data failed)

Then I followed the advice in question 66. I'm wondering whether the data was collected correctly and only the formatting is problematic:

# perf record -e damon:damon_aggregated & damo start "sleep 5"
[1] 23982
# sleep 5
# killall perf
# [ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 1.499 MB perf.data (534 samples) ]
^C
[1]+  Terminated              perf record -e damon:damon_aggregated
# perf script -f
Segmentation fault (core dumped)

Does this mean I should install a different version of perf?

Can I use multiple schemes with damo?

Hello,

I tried to apply two schemes with damo and found an error message like below.

$ sudo ./damo schemes --damos_access_rate 0 0 --damos_sz_region 4K max \
                    --damos_age 60s max --damos_action lru_deprio \
                    --ops paddr &
Press Ctrl+C to stop

$ sudo ./damo schemes --damos_access_rate 5 max --damos_sz_region 4K max \
                    --damos_age 0 1 --damos_action lru_prio \
                    --ops paddr &

could not turn DAMON on (writing on to /sys/kernel/mm/damon/admin/kdamonds/0/state failed ([Errno 16] Device or resource busy))

Is there any way I can apply multiple schemes (one kdamond, multiple schemes) with damo?

Thank you for your answer in advance.

Why did you return -EPERM?

'hello' and 'world' are the other two ctxs, which I don't use; they obviously have no kdamonds! If you want to free a ctx, you need to create it and keep it.

//dmesg
Aug 24 09:11:51 debian kernel: damon: kdamond (11274) starts
Aug 24 09:11:51 debian kernel: damon: kdamond (11275) starts
Aug 24 09:11:51 debian kernel: damon: kdamond (11275) finishes
Aug 24 09:11:51 debian kernel: damon: kdamond (11276) starts
Aug 24 09:11:51 debian kernel: damon: kdamond (11276) finishes

//user
root@debian:~/user-damon/damo# strace -o strace.txt -f -s 128 ./damo record 10370

damon attrs: 5000 100000 1000000 10 1000 1048576 /root/user-damon/damo/damon.data

Press Ctrl+C to stop
^C
signal 2 received
/bin/bash: line 1: echo: write error: Operation not permitted
failed to turn damon off!
root@debian:~/user-damon/damo#

//debugfs
root@debian:/sys/kernel/debug/damon# ls
attrs hello kdamond_pid monitor_on rm_contexts target_ids world
direct_scheme init_regions mk_contexts record schemes version

//kernel

int damon_start(struct damon_ctx **ctxs, int nr_ctxs)
{
        int i;
        int err = 0;

        mutex_lock(&damon_lock);
        if (nr_running_ctxs) {
                mutex_unlock(&damon_lock);
                return -EBUSY;
        }

        for (i = 0; i < nr_ctxs; i++) {
                err = __damon_start(ctxs[i]);
                if (err)
                        break;
                nr_running_ctxs++;
        }
        mutex_unlock(&damon_lock);

        return err;
}

struct.error: unpack requires a buffer of 8 bytes

Hi @sjp38,

Thanks for building this useful tool. I wanted to try damo but ran into the following errors.

$ sudo ./damo record 3257
Press Ctrl+C to stop
^C
signal 2 received
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 1.752 MB damon.data ]
Traceback (most recent call last):
  File "/home/honggyu/work/damo/./damo", line 104, in <module>
    main()
  File "/home/honggyu/work/damo/./damo", line 101, in main
    subcmd.execute(args)
  File "/home/honggyu/work/damo/_damo_subcmds.py", line 31, in execute
    self.module.main(args)
  File "/home/honggyu/work/damo/damo_record.py", line 145, in main
    _damon.wait_current_kdamonds_turned_off()
  File "/home/honggyu/work/damo/_damon.py", line 918, in wait_current_kdamonds_turned_off
    time.sleep(1)
  File "/home/honggyu/work/damo/damo_record.py", line 57, in sighandler
    cleanup_exit(signum)
  File "/home/honggyu/work/damo/damo_record.py", line 46, in cleanup_exit
    err = _damon_result.update_result_file(data_for_cleanup.rfile_path,
  File "/home/honggyu/work/damo/_damon_result.py", line 293, in update_result_file
    result, err = parse_damon_result(file_path)
  File "/home/honggyu/work/damo/_damon_result.py", line 216, in parse_damon_result
    result, err = record_to_damon_result(result_file)
  File "/home/honggyu/work/damo/_damon_result.py", line 133, in record_to_damon_result
    end_addr = struct.unpack('L', f.read(8))[0]
struct.error: unpack requires a buffer of 8 bytes

This error appears in different places when I run it again; it happens for start_addr or nr_accesses as well. Could you please let me know if I did something wrong?
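The struct.error above means f.read(8) returned fewer than 8 bytes, i.e. the record file ends mid-record (plausible when recording is interrupted). A guarded-read sketch of the idea (the original code uses the native 'L' format; '<Q' is used here for a fixed 8-byte width):

```python
import struct

def read_u64(f):
    """Read one 8-byte little-endian unsigned value from a binary
    stream, returning None at a truncated end of file instead of
    letting struct.unpack raise struct.error."""
    buf = f.read(8)
    if len(buf) < 8:
        return None  # truncated record; caller should stop parsing
    return struct.unpack('<Q', buf)[0]
```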

I have manually enabled DAMON config in 6.3.0-rc1 as follows.

$ cat /boot/config-$(uname -r) | grep DAMON
CONFIG_DAMON=y
CONFIG_DAMON_VADDR=y
CONFIG_DAMON_PADDR=y
CONFIG_DAMON_SYSFS=y
CONFIG_DAMON_DBGFS=y
CONFIG_DAMON_RECLAIM=y
CONFIG_DAMON_LRU_SORT=y

No brk region information

Hi,

I observed a side problem when profiling HPC workloads using damo. Specifically, I intercept both mmap() and brk() syscalls and compare them with DAMON's heatmap. I found that DAMON does not capture brk-ed address regions. For example, I printed the abs_addr values of damon.data, which span from 140737310404608 to 140737488261635 and match the heatmap figure. However, I also observed frequent accesses to brk-ed regions, e.g. 10125312 to 18046976, which are not captured by DAMON. Is there any specific reason or overhead behind that? Thanks!
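One way to cross-check the observation above is to confirm where the brk-managed heap actually sits, since it appears in /proc/&lt;pid&gt;/maps as the [heap] mapping; a sketch:

```python
def heap_range(maps_text):
    """Return the (start, end) addresses of the [heap] mapping from
    /proc/<pid>/maps-style text, or None if no heap mapping exists."""
    for line in maps_text.splitlines():
        fields = line.split()
        if fields and fields[-1] == '[heap]':
            start, _, end = fields[0].partition('-')
            return int(start, 16), int(end, 16)
    return None
```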

damo record for storing DAMOS actions applied history

Hi SeongJae,

Sorry for throwing out many draft ideas, but I'm wondering if it's possible to keep a history of applied DAMOS actions.

The current damo schemes is very useful for operating our custom DAMOS actions, but we would like to keep a history of when and how regions are affected by the registered actions.

I don't have a strong idea how to display such data as of now, but I'm just leaving the idea for the future record.

Thanks!

When I run damo record, some errors happen

root@debian:~/user-damon# ./damo/damo record 1
Traceback (most recent call last):
  File "./damo/damo", line 59, in <module>
    record.main(args)
  File "/root/user-damon/damo/record.py", line 140, in main
    new_attrs = _damon.cmd_args_to_attrs(args)
  File "/root/user-damon/damo/_damon.py", line 251, in cmd_args_to_attrs
    schemes = args.schemes
AttributeError: 'Namespace' object has no attribute 'schemes'

damo show enhancement

Hi SeongJae,

It'd be useful if damo show supported the following.

  1. Print the total number of regions at the top.
    Running damo show under the watch command is useful, but it shows only the top part of the damo show result and keeps updating it. If there are hundreds of regions, there is no way to know how many regions exist. For example, the output could be printed as follows.
$ ./damo show
total number of regions: 13
  0 addr [4.000 KiB, 640.000 KiB) (636.000 KiB) access 0 % age 100 ms
  1 addr [1024.000 KiB, 3.000 GiB) (2.999 GiB) access 0 % age 100 ms
  2 addr [4.000 GiB, 19.579 GiB) (15.579 GiB) access 0 % age 200 ms
        ...
total sz: 160.000 GiB
  2. Provide a way to sort the regions (based on access rate or age).
    The current damo show output prints the regions in address order. That can be useful, but users may also be interested in the most accessed or the oldest regions.

I would like to hear what you think about it. Thanks.
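
For the second item, a post-processing sketch of the sort order I have in mind (the region dict layout here is made up for illustration, not damo's internal format):

```python
# Hypothetical region records, as a post-processing script might parse
# them out of `damo show` output.
regions = [
    {'idx': 0, 'access_pct': 0,  'age_ms': 100},
    {'idx': 1, 'access_pct': 40, 'age_ms': 300},
    {'idx': 2, 'access_pct': 40, 'age_ms': 700},
]

# Hottest regions first; among equally hot regions, oldest first.
by_hotness = sorted(regions,
                    key=lambda r: (r['access_pct'], r['age_ms']),
                    reverse=True)
```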

Update documentation for newly added commands

We implemented more damo commands for online usage, including start, tune, stop, show, and status. The old commands also gained new options. Those were intentionally left undocumented since their interfaces were not yet fixed. It has been a while since their implementation, and they might now be stable enough to document. Some of the commands and options have also been discovered by recent damo users, and the lack of documentation confused them. We promised to provide the documentation, hopefully before OSSEU, which is only six days away. Let's document those.

Can we make "damo status" faster?

I sometimes want to monitor the output of damo status, but I feel it is quite slow.

From my experience, it takes around 10 seconds; I'm wondering if it's possible to make it faster.

$ time sudo ./damo status
      ...
real    0m10.172s
user    0m0.001s
sys     0m0.007s

The heatmap is too dark

Hi,
I have monitored approximately 20 gigabytes of physical address space. I have attempted to modify the sampling range and the time intervals, but the resulting heatmaps are mostly black, with the highest access count being 1. Are there any techniques or recommendations for tuning the combination of these parameters? Your expertise on this matter would be greatly appreciated. Thank you!

provide a way to turn on/off damon from userspace

Hi SeongJae,

The current way of enabling DAMON requires root permission, but would it be possible to support controlling it from unprivileged userspace, gated by a runtime kernel parameter?

For example, if kernel.damon is 1, then we could allow userspace to run damo record.

$ sudo sysctl -a | grep kernel.damon
kernel.damon = 1

The kernel.damon kernel parameter can only be changed with root permission of course.

I've tested that the access permission of files in sysfs can be changed with root permission.

This might not be realistic, but I feel it would be much more useful if damo record could be executed without sudo.

Is there a reason to choose the largest System RAM region for paddr monitoring?

Hi SeongJae,

I have set the qemu memory configuration with NUMA node0=4G, node1=4G, node2=8G.

The more details about memory are as follows.

# cat /proc/iomem | grep System.RAM  
00001000-0009fbff : System RAM  
00100000-bffdffff : System RAM  
100000000-43fffffff : System RAM 

In the above log, there are 3 System RAM sections, but default_paddr_region in _damo_paddr_layout.py and damon_find_biggest_system_ram in mm/damon/core.c only find the largest System RAM region and set the start and end at

  • /sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/targets/0/regions/0/start
  • /sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/targets/0/regions/0/end

In this case, it looks like the area outside of the largest System RAM region is not monitored. As a result, the first two ranges below are not monitored; only the last one is.

# cat /proc/iomem | grep System.RAM  
00001000-0009fbff : System RAM (NOT monitored)  
00100000-bffdffff : System RAM (NOT monitored)  
100000000-43fffffff : System RAM (MONITORED)

However, the missing scope is included in NUMA node 0 as follows.

# dmesg | grep NUMA 
[    0.008144] NUMA: Initialized distance table, cnt=3 
[    0.008146] NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] 
[    0.008148] NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x13fffffff] -> [mem 0x00000000-0x13fffffff] 
 
# dmesg | grep "node  " 
[    0.009121]   node   0: [mem 0x0000000000001000-0x000000000009efff] 
[    0.009124]   node   0: [mem 0x0000000000100000-0x00000000bffdffff] 
[    0.009125]   node   0: [mem 0x0000000100000000-0x000000013fffffff] 
[    0.009126]   node   1: [mem 0x0000000140000000-0x000000023fffffff] 
[    0.009127]   node   2: [mem 0x0000000240000000-0x000000043fffffff]

Could you please explain why the start and end of regions are set in this way?
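
For reference, the selection can be reproduced in a few lines. This is a rough sketch of the "pick the biggest System RAM range" logic described above, parsing /proc/iomem text (the function name is mine, not damo's):

```python
def biggest_system_ram(iomem_text):
    # Return (start, end) of the largest 'System RAM' range in the given
    # /proc/iomem content, mirroring the behavior described above.
    best = None
    for line in iomem_text.splitlines():
        addrs, sep, name = line.partition(' : ')
        if not sep or name.strip() != 'System RAM':
            continue
        start, end = (int(x, 16) for x in addrs.strip().split('-'))
        if best is None or end - start > best[1] - best[0]:
            best = (start, end)
    return best
```

Against the three ranges quoted above, this picks 100000000-43fffffff, matching the observed behavior.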

`--output_type` parameter not working

Environment

I am running damo on a virtual machine. Below are the details
Host:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 23.04
Release: 23.04
Codename: lunar

$ uname -r
6.2.0-24-generic

Virtual machine:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 23.04
Release: 23.04
Codename: lunar

$ uname -r
6.1.6+

$ grep DAMON linux/.config
CONFIG_DAMON=y
CONFIG_DAMON_VADDR=y
CONFIG_DAMON_PADDR=y
CONFIG_DAMON_SYSFS=y
CONFIG_DAMON_DBGFS=y
# CONFIG_DAMON_RECLAIM is not set
# CONFIG_DAMON_LRU_SORT is not set

Issue

So, when I try to run damo as in the example from USAGE I get the following error:

$ damo record "sleep 5"
Press Ctrl+C to stop
[ perf record: Woken up 1 times to write data ]
failed to mmap file
[ perf record: Captured and wrote 0.085 MB damon.data ]

WARNING: 'record' file type support will be deprecated by 2023-Q3.
use json_compressed type instead.
Please report your usecase to Github issues[1], [email protected],
[email protected] and/or [email protected] if you depend on those.

[1] https://github.com/awslabs/damo/issues

converting format from perf_data to json_compressed failed (snapshot reading failead: unpack requires a buffer of 8 bytes)

So I tried specifying the output type, but it still doesn't work:

$ damo record --output_type json "sleep 5"
Press Ctrl+C to stop
[ perf record: Woken up 1 times to write data ]
failed to mmap file
[ perf record: Captured and wrote 0.084 MB damon.data ]

WARNING: 'record' file type support will be deprecated by 2023-Q3.
use json_compressed type instead.
Please report your usecase to Github issues[1], [email protected],
[email protected] and/or [email protected] if you depend on those.

[1] https://github.com/awslabs/damo/issues

converting format from perf_data to json failed (snapshot reading failead: unpack requires a buffer of 8 bytes)

Am I doing something wrong?

A question about the meaning of heatmap values

Hi SeongJae,

I've recorded a target program to investigate the memory access patterns of https://github.com/facebookresearch/dlrm.

I've got a heatmap, and it looks like most of the memory regions are cold and linearly accessed, as shown below.

I'm wondering what the heat values on the right-hand side of the heatmap mean.
[heatmap screenshot: access_pattern_heatmap]

I found that the value is calculated at [1] and printed at [2], but it doesn't look straightforward to me.

Could you please explain about it? Thanks!

[1] https://github.com/awslabs/damo/blob/v1.8.2/damo_heats.py#L50-L55
[2] https://github.com/awslabs/damo/blob/v1.8.2/damo_heats.py#L171

Proof-read documents

There seem to be many typos and grammatical errors in the documents. Let's proofread them at least once.

no file named init_regions in /sys/kernel/debug/damon

Hi, I'm testing on 5.15.0 with CONFIG_DAMON enabled, but I'm experiencing the following error when running damo record:

Traceback (most recent call last):
  File "./damo", line 98, in <module>
    main()
  File "./damo", line 95, in main
    subcmd.execute(args)
  File "/home/cc/damo/_damo_subcmds.py", line 31, in execute
    self.module.main(args)
  File "/home/cc/damo/damo_record.py", line 129, in main
    set_data_for_cleanup(data_for_cleanup, args, output_permission)
  File "/home/cc/damo/damo_record.py", line 59, in set_data_for_cleanup
    data_for_cleanup.orig_kdamonds = _damon.current_kdamonds()
  File "/home/cc/damo/_damon.py", line 870, in current_kdamonds
    return _damon_fs.current_kdamonds()
  File "/home/cc/damo/_damon_dbgfs.py", line 315, in current_kdamonds
    return files_content_to_kdamonds(
  File "/home/cc/damo/_damon_dbgfs.py", line 277, in files_content_to_kdamonds
    fields = [int(x) for x in files_content['init_regions'].strip().split()]
KeyError: 'init_regions'

After tracing the error back, I found it's because my server doesn't have a file named init_regions in the /sys/kernel/debug/damon directory.
System version and configuration:

cc@uc-inst-1:~/damo$ if grep CONFIG_DAMON /boot/config-$(uname -r); then echo "installed"; fi
CONFIG_DAMON=y
CONFIG_DAMON_VADDR=y
CONFIG_DAMON_DBGFS=y
installed
cc@uc-inst-1:~/damo$ cat /etc/os-release 
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
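
A defensive pre-flight check might help here: if I recall correctly, init_regions was only added to DAMON's debugfs interface in Linux 5.16, so on 5.15 the file is legitimately absent. A sketch that probes which files exist instead of assuming them (the function name and file list are illustrative, not damo's code):

```python
import os

def available_dbgfs_files(base='/sys/kernel/debug/damon',
                          names=('attrs', 'target_ids', 'init_regions',
                                 'schemes', 'monitor_on')):
    # Map each expected DAMON debugfs file name to whether this kernel
    # actually exposes it, so callers can skip absent optional files
    # instead of raising KeyError.
    return {name: os.path.exists(os.path.join(base, name))
            for name in names}
```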

Inconsistent absolute time

Hi,

I found that the absolute time generated by damo is inconsistent with the time output from the program itself.
Specifically, I was playing with masim and call this GetTime() right after entering main() and just before main() returns.

void GetTime(char *identifier)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    printf("%s at: %ld.%09ld\n", identifier, ts.tv_sec, ts.tv_nsec);
}

The output shows

in! at: 1016680.957450677
initial phase:	               22963 accesses/msec, 10000 msecs run
low phase 0:	               24710 accesses/msec, 5002 msecs run
high phase 0:	               24076 accesses/msec, 5003 msecs run
low phase 1:	               24500 accesses/msec, 5002 msecs run
high phase 1:	               23933 accesses/msec, 5000 msecs run
low phase 2:	               24348 accesses/msec, 5001 msecs run
high phase 2:	               23897 accesses/msec, 5002 msecs run
low phase 3:	               24417 accesses/msec, 5003 msecs run
high phase 3:	               23709 accesses/msec, 5003 msecs run
low phase 4:	               24469 accesses/msec, 5003 msecs run
high phase 4:	               23216 accesses/msec, 5002 msecs run
low phase 5:	               24526 accesses/msec, 5002 msecs run
high phase 5:	               23959 accesses/msec, 5000 msecs run
out! at: 1016751.151737409

As you can see, the time spans from 1016680 to 1016751.
However, when I use damo to get absolute time (sudo damo report heats --abs_time > abs_time.txt), the first column spans from 1016621500242000 to 1016686696412352. As this is in nanoseconds, in seconds it is 1016621 to 1016686, which is around 60 s earlier than the program's output.

Is this a potential bug or is there something I missed? Thanks!
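
For concreteness, converting the quoted --abs_time values to seconds (plain arithmetic on the numbers above):

```python
# damo's --abs_time column is in nanoseconds; divide by 1e9 to compare
# with the program's CLOCK_MONOTONIC seconds.
damo_start_s = 1016621500242000 / 1e9   # ~1016621.5 s
damo_end_s = 1016686696412352 / 1e9     # ~1016686.7 s
prog_start_s = 1016680.957450677        # the "in!" timestamp from the program

# damo's window starts roughly a minute before the program's own output.
offset_s = prog_start_s - damo_start_s  # ~59.5 s
```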

Is there a way to set `--numa_node` for each scheme in `json` file?

Hi SeongJae,

I would like to create two different schemes, but also want to limit the monitoring scope separately.

For example, let's say there are action A and B. Then I want to apply action A to NUMA node 0 and also want to apply action B to NUMA node 1.

So, is there a way to set two different --numa_node options for each scheme in a .json file? (I saw it's possible to provide two schemes only via a .json file.)

Thanks.

It seems "damo schemes" doesn't use physical address option (--ops paddr)

Hi SeongJae,

I have executed the damo schemes with --ops paddr as follows.

$ sudo ./damo schemes --damos_access_rate 0 0 --damos_sz_region 4K max \
               --damos_age 1s max --ops paddr --damos_action pageout \
               "./masim/masim ./masim/configs/stairs.cfg"

However, it didn't go through damon_pa_apply_scheme; instead, it went to damon_va_apply_scheme in the kernel implementation.

I have checked the operations file in sysfs during the above masim execution, but it shows vaddr.

# cat /sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/operations
vaddr

I'm just wondering if my usage for --ops paddr is incorrect.

Thanks.

could not turn DAMON with damo schemes

I have updated my damo repo based on the current next branch, but I see an error when running damo schemes as follows.

$ cat pageout.json
[
    {
        "action": "pageout",
        "access_pattern": {
            "sz_bytes": {"min": "4.000 KiB", "max": "max"},
            "nr_accesses": {"min": "0 %", "max": "0 %"},
            "age": {"min": "1 s", "max": "max"}
        }
    }
]

$ ./damo schemes -c pageout.json
could not turn DAMON on (cannot apply kdamonds from args (filepath (/sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/schemes/0/apply_interval_us) is neither dir nor file))

It works fine on the previous version at 0c48a1f.

My kernel version is based on v6.4.

thinking about maximum region size

Hi SeongJae,

I've been testing paddr mode DAMON in a system, which has a large system memory so emulated the system in my qemu setting with 160GB of memory.

I have internally implemented promote actions, but found that DAMON sometimes can't capture hot regions for a very long time. I have a simple program that creates a 200 MB block of frequently accessed memory, but the action doesn't get triggered.

I have checked the status of the regions, but it shows only 13 regions, as follows, and they all show a 0 % access rate. The weird part is that they mostly show very young ages.

$ ./damo show
total number of regions: 13
  0 addr [4.000 KiB, 640.000 KiB) (636.000 KiB) access 0 % age 100 ms
  1 addr [1024.000 KiB, 3.000 GiB) (2.999 GiB) access 0 % age 100 ms
  2 addr [4.000 GiB, 19.579 GiB) (15.579 GiB) access 0 % age 200 ms
  3 addr [19.579 GiB, 35.272 GiB) (15.693 GiB) access 0 % age 400 ms
  4 addr [35.272 GiB, 50.906 GiB) (15.634 GiB) access 0 % age 300 ms
  5 addr [50.906 GiB, 66.601 GiB) (15.695 GiB) access 0 % age 200 ms
  6 addr [66.601 GiB, 82.237 GiB) (15.636 GiB) access 0 % age 500 ms
  7 addr [82.237 GiB, 97.809 GiB) (15.572 GiB) access 0 % age 500 ms
  8 addr [97.809 GiB, 113.378 GiB) (15.569 GiB) access 0 % age 500 ms
  9 addr [113.378 GiB, 128.978 GiB) (15.600 GiB) access 0 % age 500 ms
 10 addr [128.978 GiB, 144.558 GiB) (15.580 GiB) access 0 % age 400 ms
 11 addr [144.558 GiB, 160.256 GiB) (15.698 GiB) access 0 % age 400 ms
 12 addr [160.256 GiB, 161.000 GiB) (762.141 MiB) access 0 % age 400 ms
total sz: 160.000 GiB

Each region is about 16 GB, and I feel it might be very difficult for random sampling to capture the accessed pages within such large regions.

So I think we may need to consider a maximum region size to prevent this case. There is no size limit as of now, so a region can grow arbitrarily large.

I would like to hear what you think about this.
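
As a back-of-the-envelope illustration of why a 200 MB hot block can hide inside a 16 GB region (assuming, as I understand the sampling, one randomly picked page check per region per sampling interval):

```python
region_sz = 16 << 30   # ~16 GiB region, as in the damo show output above
hot_sz = 200 << 20     # 200 MiB hot block

# Chance that a single random sample in the region lands inside the hot
# block, and the expected number of sampling intervals until the first hit.
p_hit = hot_sz / region_sz        # ~1.2 %
expected_intervals = 1 / p_hit    # ~82 sampling intervals
```

With the default 100 ms sampling interval, that would be roughly 8 seconds on average just to register a single access, which matches the "can't capture hot regions for a very long time" symptom.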

Damo fails to start

Hello!

I attempted to run sudo damo start, but it raises an error, as shown below:
[screenshot of the error]

Upon searching through the source code, I found that this function is indeed defined in _damon.py, so this error seems quite weird!
[screenshot of the definition in _damon.py]

Thank you very much.
Kind regards.

something wrong when running "./damo report heats --heatmap stdout --stdout_heatmap_color emotion"?

Hi
Thanks for your great tool! I am studying and testing how to use it, but I'm having trouble with the usage.
I cannot get output styled like "https://github.com/awslabs/damo/blob/next/images/masim_zigzag_heatmap_ascii.png" when I test on my system. However, ./damo/damo report raw works; it prints the following log:

400036705000-400036718000( 76.000 KiB): 0
400036718000-400036719000( 4.000 KiB): 15
400036719000-40003671d000( 16.000 KiB): 0
40003671d000-40003671f000( 8.000 KiB): 12
40003671f000-400036722000( 12.000 KiB): 0
400036722000-400036723000( 4.000 KiB): 12
400036723000-400036739000( 88.000 KiB): 0
400036739000-40003673a000( 4.000 KiB): 14
40003673a000-40003673e000( 16.000 KiB): 0
40003673e000-400036745000( 28.000 KiB): 0
400036745000-400036746000( 4.000 KiB): 20
400036746000-400036756000( 64.000 KiB): 0
ffffe1a18000-ffffe1a39000( 132.000 KiB): 0

I am not sure whether I missed something.

Thanks!

"failed to turn damon off" after running damo schemes

Hi SeongJae,

It might be a trivial issue but I see an error at the end of execution for damo schemes as follows.

$ sudo ./damo schemes --damos_access_rate 0 0 --damos_sz_region 4K max --damos_age 1s max --damos_action pageout "./masim/masim ./masim/configs/stairs.cfg"
Press Ctrl+C to stop
initial phase:                88,565 accesses/msec, 10000 msecs run
phase 0:                      97,596 accesses/msec, 5000 msecs run
phase 1:                      97,412 accesses/msec, 5000 msecs run
phase 2:                      97,884 accesses/msec, 5000 msecs run
phase 3:                      97,760 accesses/msec, 5001 msecs run
phase 4:                      98,100 accesses/msec, 5001 msecs run
phase 5:                      97,229 accesses/msec, 5000 msecs run
phase 6:                      97,229 accesses/msec, 5000 msecs run
phase 7:                      86,166 accesses/msec, 5000 msecs run
phase 8:                      93,952 accesses/msec, 5000 msecs run
phase 9:                      95,866 accesses/msec, 5000 msecs run
failed to turn damon off (writing off to /sys/kernel/mm/damon/admin/kdamonds/0/state failed ([Errno 1] Operation not permitted))

It isn't actually a problem for me right now, but it might be better to suppress this error message.

perf record not working with "perf"

Hi,

I just setup DAMON on a new server but somehow experienced this issue.

lyuze@wukong0:~/masim$ ./masim ./configs/zigzag.cfg &
lyuze@wukong0:~$ sudo damo record -o damon.data $(pidof masim)
perf record not working with "perf"

I see the damo python script is calling this:

subprocess.check_output(
    [PERF, 'record', '-e', PERF_EVENT, '--', 'sleep', '0'],
    stderr=subprocess.PIPE)

And when I try this command in shell:

lyuze@wukong0:~$ sudo perf record -e damon:damon_aggregated -- sleep 0
event syntax error: 'damon:damon_aggregated'
                     \___ unsupported tracepoint

libtraceevent is necessary for tracepoint support
Run 'perf list' for a list of valid events

 Usage: perf record [<options>] [<command>]
    or: perf record [<options>] -- <command> [<options>]

    -e, --event <event>   event selector. use 'perf list' to list available events

Although I have installed libtraceevent-dev and libtraceevent1, and damon:damon_aggregated is in my perf list:

lyuze@wukong0:~$ sudo perf list | grep damon
  damon:damon_aggregated                             [Tracepoint event]

Server environment:

lyuze@wukong0:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.3 LTS
Release:        22.04
Codename:       jammy
lyuze@wukong0:~$ uname -r
6.2.0
lyuze@wukong0:~$ if grep CONFIG_DAMON /boot/config-$(uname -r); then echo "installed"; fi
CONFIG_DAMON=y
CONFIG_DAMON_VADDR=y
CONFIG_DAMON_PADDR=y
CONFIG_DAMON_SYSFS=y
CONFIG_DAMON_DBGFS=y
CONFIG_DAMON_RECLAIM=y
CONFIG_DAMON_LRU_SORT=y
installed

[RFC] support damo update command

Hi SeongJae,

I've been testing with damo schemes, but I often need to change configuration options such as -s, -a, -u, --monitoring_nr_regions_range, and -c.

I normally stop the damo schemes execution, then relaunch the scheme after modifying a few fields. But I feel this could be simplified by a new command, damo update, that takes the same options but only the few I want to change.

For example, say damo schemes begins with a simple config.json file as follows.

$ sudo ./damo schemes -c config.json
Press Ctrl+C to stop

While running this command, I want to rerun the schemes with --monitoring_nr_regions_range 100 10000. The current usage is as follows.

^C        # Stop the previous damo schemes execution

# Re-run `damo schemes` with an additional option; the same options from before have to be provided again.
$ sudo ./damo schemes --monitoring_nr_regions_range 100 10000 -c config.json
Press Ctrl+C to stop

I think it'd be useful if the following imaginary use case is supported.

# Run the initial damo schemes
$ sudo ./damo schemes -c config.json
Press Ctrl+C to stop

# Change the number of monitoring regions only, but keep the other configuration same.
$ sudo ./damo update --monitoring_nr_regions_range 100 10000

The above damo update command applies only the given change, then writes commit to kdamonds/0/state. The points are as follows.

  • don't have to stop running the current kdamond.
  • don't have to repeat the same options from the previous execution.

This might be useful only for me, but I would like to hear what you think about this.
