powerapi-ng / powerapi

PowerAPI is a Python framework for building software-defined power meters.

Home Page: https://powerapi.org

License: BSD 3-Clause "New" or "Revised" License

power-meter python inria green-computing energy energy-monitoring

powerapi's Introduction

PowerAPI


PowerAPI is a middleware toolkit for building software-defined power meters. Software-defined power meters are configurable software libraries that can estimate the power consumption of software in real-time. PowerAPI supports the acquisition of raw metrics from a wide diversity of sensors (e.g., physical meters, processor interfaces, hardware counters, OS counters) and the delivery of power consumption via different channels (including file system, network, web, graphical). As a middleware toolkit, PowerAPI offers the capability of assembling power meters «à la carte» to accommodate user requirements.

About

PowerAPI is an open-source project developed by the Spirals project-team, a joint research group between the University of Lille and Inria.

The documentation of the project is available here.

Mailing list

You can follow the latest news and ask questions by subscribing to our mailing list.

Contributing

If you would like to contribute code you can do so through GitHub by forking the repository and sending a pull request.

When submitting code, please make every effort to follow existing conventions and style in order to keep the code as readable as possible.

Publications

Use Cases

PowerAPI is used in a variety of projects to address key challenges of GreenIT:

  • SmartWatts is a self-adaptive power meter that can estimate the energy consumption of software containers in real-time.
  • GenPack provides a container scheduling strategy to minimize the energy footprint of cloud infrastructures.
  • VirtualWatts provides process-level power estimation of applications running in virtual machines.
  • Web Energy Archive ranks popular websites based on the energy footprint they impose on browsers.
  • Greenspector optimises the power consumption of software by identifying potential energy leaks in the source code.

Research Projects

Currently, PowerAPI is used in two research projects:

License

PowerAPI is licensed under the BSD-3-Clause License. See the LICENSE file for details.


powerapi's People

Contributors

altor, danglotb, dependabot[bot], dromeroac, dsaingre, gfieni, kayoku, larsschellhas, ldesauw, pierrerustorange, roda82, rouvoy, tomemd


powerapi's Issues

Error Management

  • Normalize exception names
  • make all exceptions inherit from a PowerAPIException

Test

  • configure setuptools to launch tests

Integration

Puller

  • CsvDB
  • refactor MongoDB test

Pusher

  • MongoDB test

Generic management of Report and Database for better data usage

Hey,

This issue is mainly a note for myself, to keep track of the problem.

This issue is a thought about our data usage in PowerAPI and the gateway between Report and Database.
For the moment, we manage the bridge between Report class and Database as follows

gateway_between_report_and_database

... which is pretty ugly and can quickly become heavy later. It is a temporary solution that cannot scale to a larger model: each time we add a new kind of Report or Database to PowerAPI, we need to rewrite the from_xxxDB/to_xxxDB code to manage every combination. Imagine we have ~20 Databases and you want to add a Report: you have to write the 40 from/to functions, have fun ;-)
A more intuitive way would be to do as follows.

wanted_report_and_database

This problem is usually solved by using an ORM, but with the kind of database that we are using (document-oriented like MongoDB, text-file like csv, time-series like OpenTSDB...), it becomes difficult to homogenize the whole thing.
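A sketch of a more homogeneous bridge (all names here are invented for illustration, not PowerAPI's actual code): every Report flattens itself to a plain dict, so each Database backend only ever handles dicts and no from_xxxDB/to_xxxDB pairs are needed.

```python
# Hypothetical sketch: decouple Report types from Database backends by
# making every Report (de)serializable to a plain dict. A backend then
# stores dicts, and adding a new Report or Database costs one class,
# not N x M conversion functions.

class Report:
    def to_dict(self) -> dict:
        raise NotImplementedError

    @classmethod
    def from_dict(cls, data: dict) -> "Report":
        raise NotImplementedError


class PowerReport(Report):
    def __init__(self, timestamp, sensor, target, power):
        self.timestamp = timestamp
        self.sensor = sensor
        self.target = target
        self.power = power

    def to_dict(self) -> dict:
        return {"timestamp": self.timestamp, "sensor": self.sensor,
                "target": self.target, "power": self.power}

    @classmethod
    def from_dict(cls, data: dict) -> "PowerReport":
        return cls(data["timestamp"], data["sensor"],
                   data["target"], data["power"])


class InMemoryDB:
    """Stand-in for any backend: it only ever sees dicts."""

    def __init__(self, report_type):
        self.report_type = report_type
        self.rows = []

    def save(self, report: Report):
        self.rows.append(report.to_dict())

    def load_all(self):
        return [self.report_type.from_dict(row) for row in self.rows]
```

The row-oriented `to_dict` maps naturally onto a CSV line or a Mongo document; the splitting question discussed below is orthogonal to this interface.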


Comments

Report isn't always saved as a single output

e.g., if you want to save an HWPCReport in a CSV file, you will have to split the HWPCReport by CPU/socket, whereas a PowerReport is pretty simple and can be saved in one single row. It becomes easier if we want to store it in a Mongo database: we can save both in a single output. So for a specific BridgeData, how do we know that data must be split to be saved, and how do we split it?

Anyway, a BridgeData must know the kind of Report that it manages, basically with a report_type attribute containing the associated Report.

This problem can be solved by defining the following property: a BridgeData is a piece of data that can be saved in a single row, no matter what kind of database we are using. This means that, e.g., an HWPCReport is a set of BridgeData gathered together, whereas a PowerReport is a single BridgeData. Now, this answers the CSV problem, but what if you want to save a "set" of BridgeData from an HWPCReport in a Mongo database?

printable string Management

  • put all printable strings used in the application (logs, error messages, ...) in a global file (for multi-language usage)
  • when a test has to check whether a string is printed/returned, use this file to check the validity of the string

Automatic release creation

  • build documentation with CI when publishing a new release
  • build a PyPI package and upload it to the PyPI repository
  • automatically upload the Docker image to Docker Hub

Minor Refactoring

  • delete imports from the global __init__.py
  • in the Puller, add a check during the deserialize() call to verify the integrity of the received JSON. Raise an exception if the JSON is not valid.

Make supervisor supervise its actor

For the moment, supervisors are only used to handle a pool of actors.
Supervisors need to actually supervise their actors, i.e., have rules to handle actor crashes (restart the actor, kill the other actors).

A first behavior may be to check periodically whether the supervised actors are alive (with the Process.is_alive method) and, if an actor is not, send a PoisonPillMessage to the other supervised actors.

Current strategies :

In stream mode ON:
When killed by CTRL+C or anything else, stop the actors in this order: Puller - Dispatcher/Formula - Pusher:

  • Send SIGTERM
  • Join X seconds
  • If still alive after X seconds, send SIGKILL

In stream mode OFF:

  • Supervisor waits for the Puller's death
  • Supervisor waits for the Dispatcher's death (the Puller spreads a PoisonPill when it dies)
  • Supervisor sends a PoisonPill (by_data) to the Pusher
  • Supervisor waits for the Pusher's death
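The SIGTERM / join / SIGKILL escalation described above can be sketched with the standard multiprocessing API (an illustration only; stop_actor and ignore_sigterm_loop are invented names, not PowerAPI code):

```python
# Sketch of the stream-mode-ON strategy: ask the actor process to stop,
# wait X seconds, and force-kill it if it is still alive.
import multiprocessing
import os
import signal
import time


def stop_actor(process: multiprocessing.Process, timeout: float = 5.0):
    """Soft-stop an actor process, escalating to SIGKILL if needed."""
    if not process.is_alive():
        return
    os.kill(process.pid, signal.SIGTERM)  # ask the actor to stop
    process.join(timeout)                 # wait up to `timeout` seconds
    if process.is_alive():                # still running: force it
        os.kill(process.pid, signal.SIGKILL)
        process.join()


def ignore_sigterm_loop():
    """Simulate a stuck actor that never honors SIGTERM."""
    signal.signal(signal.SIGTERM, signal.SIG_IGN)
    while True:
        time.sleep(0.1)
```

The periodic liveness check would call `process.is_alive()` on each supervised actor and trigger this routine for the others when one has died.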

Error in configuration leaves zombie processes

When using the Prometheus exporter, if some error happens in SmartWatts after the HTTP server has been initialized, SmartWatts exits but leaves the HTTP server running as a zombie process. This means that when restarting SmartWatts afterwards, it blocks when trying to reopen the same port, which is still open (this can be checked with netstat -tulpn).

For example, if there is an error in the output section, PowerAPI calls sys.exit() (e.g. in generator.py l.231), leaving the exporter's HTTP server open.

I think we should avoid using sys.exit() and instead raise an exception which would be caught in main, closing the actor system properly.
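A sketch of that pattern (all names here — PowerAPIException, BadOutputConfiguration, generate_pusher — are illustrative, not necessarily PowerAPI's actual identifiers): deep code raises a dedicated exception instead of calling sys.exit(), and main() catches it so cleanup always runs.

```python
# Hypothetical exception-based exit: the error propagates up to main(),
# which turns it into an exit code after cleanup, so no zombie HTTP
# server is left behind.

class PowerAPIException(Exception):
    """Base class for all PowerAPI errors."""


class BadOutputConfiguration(PowerAPIException):
    pass


def generate_pusher(config: dict):
    if "type" not in config:
        # instead of: sys.exit(-1)
        raise BadOutputConfiguration("output section has no 'type' key")
    return config["type"]


def main(config: dict) -> int:
    try:
        generate_pusher(config)
        return 0
    except PowerAPIException:
        # the actor system and the exporter's HTTP server would be
        # closed properly here before returning the error code
        return 1
```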

Restart power-api-formula docker container

Hello,

I'm running both the sensor and the formula as Docker containers. On the command line, I use the --restart flag to make a container restart if it crashes.

However, sometimes I have the following error:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/opt/powerapi/.local/lib/python3.7/site-packages/powerapi/puller/handlers.py", line 78, in run
    report = self._pull_database()
  File "/opt/powerapi/.local/lib/python3.7/site-packages/powerapi/puller/handlers.py", line 59, in _pull_database
    return next(self.state.database_it)
  File "/opt/powerapi/.local/lib/python3.7/site-packages/powerapi/database/mongodb.py", line 82, in __next__
    json = self.db.collection.find_one_and_delete({})
  File "/opt/powerapi/.local/lib/python3.7/site-packages/pymongo/collection.py", line 2950, in find_one_and_delete
    session=session, **kwargs)
  File "/opt/powerapi/.local/lib/python3.7/site-packages/pymongo/collection.py", line 2885, in __find_and_modify
    write_concern.acknowledged, _find_and_modify, session)
  File "/opt/powerapi/.local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1491, in _retryable_write
    return self._retry_with_session(retryable, func, s, None)
  File "/opt/powerapi/.local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1384, in _retry_with_session
    return func(session, sock_info, retryable)
  File "/opt/powerapi/.local/lib/python3.7/site-packages/pymongo/collection.py", line 2879, in _find_and_modify
    user_fields=_FIND_AND_MODIFY_DOC_FIELDS)
  File "/opt/powerapi/.local/lib/python3.7/site-packages/pymongo/collection.py", line 250, in _command
    user_fields=user_fields)
  File "/opt/powerapi/.local/lib/python3.7/site-packages/pymongo/pool.py", line 618, in command
    self._raise_connection_failure(error)
  File "/opt/powerapi/.local/lib/python3.7/site-packages/pymongo/pool.py", line 613, in command
    user_fields=user_fields)
  File "/opt/powerapi/.local/lib/python3.7/site-packages/pymongo/network.py", line 157, in command
    reply = receive_message(sock, request_id)
  File "/opt/powerapi/.local/lib/python3.7/site-packages/pymongo/network.py", line 196, in receive_message
    _receive_data_on_socket(sock, 16))
  File "/opt/powerapi/.local/lib/python3.7/site-packages/pymongo/network.py", line 261, in _receive_data_on_socket
    raise AutoReconnect("connection closed")
pymongo.errors.AutoReconnect: connection closed

And then the container is left pending, without crashing "properly" in order to restart. It just hangs there.

Is there any way to be alerted of such behavior? Could we prevent it?

Thank you!

Best.

Import in __init__ file breaks modular database install

Currently all supported databases are imported in powerapi/database/__init__.py.
But the user has the choice of which database(s) to install via the setup.py script.
If a user chooses not to install a database, the required dependencies will not be installed and the import of the module will fail.
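One possible fix, shown as a sketch (not the project's actual code; the backend table and the `pip install powerapi[...]` extras hint are assumptions): import each backend lazily, so the package can be imported even when optional drivers like pymongo are missing.

```python
# Hypothetical lazy-import registry: powerapi.database would expose
# load_backend() instead of importing every driver at import time.
import importlib

_BACKENDS = {
    "mongodb": ("pymongo", "MongoDB driver"),
    "influxdb": ("influxdb", "InfluxDB driver"),
    "csv": ("csv", "stdlib, always available"),
}


def load_backend(name: str):
    """Import a database backend on demand, with a clear error message."""
    module_name, description = _BACKENDS[name]
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"backend '{name}' needs the optional dependency "
            f"'{module_name}' ({description}); install it with "
            f"pip install powerapi[{name}]"
        ) from exc
```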

Socket puller binds to localhost only

The socket puller binds to 127.0.0.1 and therefore cannot be reached from another host.
See in socket_db.py:

self.server = await asyncio.start_server(self.gen_server_callback(), host='127.0.0.1', port=self.port)

With the PR powerapi-ng/hwpc-sensor#17 the sensor can connect to a formula running on another host, but this can only work if the formula also listens for connections from other hosts.

We could

  • start the server on 0.0.0.0
  • start the server on a host name given in the configuration

I would prefer the first option; I don't see a use case for the second approach.

Multiple Formula bug in limited CPU resource environment

When using an architecture with two or more formulas in a limited CPU resource environment, one formula could finish its work before the others. In this case, that formula sends a PoisonPillMessage to the pusher to kill it. Because the other formulas have not finished their work, the pusher is killed before receiving all the messages it is supposed to receive.

How to reproduce: execute the test tests/acceptation/test_simple_architecture_mongo_to_influx.py while stressing all host cores with the stress command

socket on formula

Hi,
When I use this config file:

{
  "verbose": true,
  "stream": true,
  "input": {
    "puller": {
      "model": "HWPCReport",
      "type": "socket",
      "uri": "127.0.0.1",
      "port": 8080
    }
  },
  "output": {
    "pusher_power": {
      "model": "PowerReport",
      "type": "socket",
      "uri": "127.0.0.1",
      "port": 8081
    }
  },
  "cpu-frequency-base": 1900,
  "cpu-frequency-min": 800,
  "cpu-frequency-max": 1900,
  "cpu-error-threshold": 2.0,
  "disable-dram-formula": true,
  "sensor-report-sampling-interval": 1000
}

I get this error message:

smartwatts-formula_1  | Traceback (most recent call last):
smartwatts-formula_1  |   File "/usr/local/lib/python3.8/runpy.py", line 194, in _run_module_as_main
smartwatts-formula_1  |     return _run_code(code, main_globals, None,
smartwatts-formula_1  |   File "/usr/local/lib/python3.8/runpy.py", line 87, in _run_code
smartwatts-formula_1  |     exec(code, run_globals)
smartwatts-formula_1  |   File "/opt/powerapi/.local/lib/python3.8/site-packages/smartwatts/__main__.py", line 261, in <module>
smartwatts-formula_1  |     conf = get_config()
smartwatts-formula_1  |   File "/opt/powerapi/.local/lib/python3.8/site-packages/smartwatts/__main__.py", line 256, in get_config
smartwatts-formula_1  |     return parser.parse()
smartwatts-formula_1  |   File "/opt/powerapi/.local/lib/python3.8/site-packages/powerapi/cli/config_parser.py", line 231, in parse
smartwatts-formula_1  |     conf = self._validate(conf)
smartwatts-formula_1  |   File "/opt/powerapi/.local/lib/python3.8/site-packages/powerapi/cli/config_parser.py", line 183, in _validate
smartwatts-formula_1  |     self.subparser[args][dic_value["type"]].validate(dic_value)
smartwatts-formula_1  | KeyError: 'socket'

Does pusher_power not support socket output?

Bug in csv file reading

I'm using CSV files as input to PowerAPI (with the SmartWatts formula in my case, but it does not depend on the formula).
These CSV files are written by the sensor, which generates 3 files: core, msr and rapl.

When reading these files, PowerAPI does not take all targets into account: in CsvIterDB.__next__() we merge all lines that have the same timestamp and generate a single report with them.
I believe we should generate one report for each (timestamp, target) pair
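The proposed (timestamp, target) grouping could look like this sketch (the field names mirror the sensor output; group_rows is an invented helper, not CsvIterDB's actual code):

```python
# Group raw CSV rows by the (timestamp, target) pair instead of by
# timestamp alone, so every target gets its own report.
from collections import defaultdict


def group_rows(rows: list) -> dict:
    """Return one bucket of rows per (timestamp, target) pair."""
    groups = defaultdict(list)
    for row in rows:
        groups[(row["timestamp"], row["target"])].append(row)
    return dict(groups)
```

Each bucket would then be turned into a single HWPCReport, rather than merging every target seen at the same timestamp into one report.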

Refactor Tests

Write Clean test

  • delete "buzzer" tests
  • test actor features and protocols:
    • abstract test for all actors
    • puller
    • dispatcher
    • pusher
  • if the dispatcher rules are not rewritten, put the dispatcher handler code into external functions and test them

Test must be launchable in local context

  • remove unit test dependencies from pymongo
  • clearly separate unit, integration and acceptance tests

Test Formula

  • write abstract formula test
  • test smartwatts_formula
  • test rapl_formula

Minors

  • test error returned by actors to supervisors :
    • NoPrimaryDispatchRuleRuleException (dispatcher)

Use Python type hints

Since Python 3.5 it is possible to add type hints.
This can help when refactoring and is very useful with linters to catch bugs earlier.
The type hints can also be reused by Sphinx to complete the documentation.
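A small illustration of such annotations (the class and its fields are invented for the example, not PowerAPI's actual PowerReport API):

```python
# Type-hinted sketch: annotations on the constructor and methods let
# linters and IDEs catch mismatched types before runtime.
from datetime import datetime
from typing import Optional


class PowerReport:
    def __init__(self, timestamp: datetime, target: str,
                 power: float, socket_id: Optional[int] = None) -> None:
        self.timestamp = timestamp
        self.target = target
        self.power = power
        self.socket_id = socket_id

    def scaled(self, factor: float) -> "PowerReport":
        """Return a copy with the power value scaled by `factor`."""
        return PowerReport(self.timestamp, self.target,
                           self.power * factor, self.socket_id)
```

Running a checker such as mypy over annotated code like this is where the refactoring benefit shows up.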

Database schemas

Hello, I'm trying to analyze the data collected in MongoDB by PowerAPI and I would like to build some scripts to extract, analyze, and build some graphs (offline, without the Grafana/InfluxDB combo).

What I understand is that when we run the SmartWatts formula, we give the following command line options:

           --output mongodb --name power --model PowerReport \
                            -u mongodb://ADDR -d $OUTPUT_DB -c $OUTPUT_COL \
           --output mongodb --name formula --model FormulaReport \
                            -u mongodb://ADDR -d $OUTPUT_DB -c frep \

when I observe the collection $OUTPUT_COL, I have the following schema:

{
	"_id" : ObjectId("5f7c6eed3478c6e7ef7f3a21"),
	"timestamp" : ISODate("2020-10-06T13:19:35.372Z"),
	"sensor" : "powerapi_sensor",
	"target" : "powerapi-example",
	"metadata" : {
		"scope" : "cpu",
		"socket" : "0",
		"formula" : "605a25f30c554781aba6176eb61c6c03aaaebbc6",
		"ratio" : 0.9838963126087563,
		"predict" : 11.768422518703293
	},
	"power" : 11.626175767236608
}

and when I observe the collection frep, I have the following schema:

{
	"_id" : ObjectId("5f7c7019b1479d9c537f2c91"),
	"timestamp" : ISODate("2020-10-06T13:24:36.914Z"),
	"sensor" : "powerapi_sensor",
	"target" : "4e166cad020b2dcc03aa1eafcbf24c6e4ac23d9c",
	"metadata" : {
		"scope" : "cpu",
		"socket" : "0",
		"layer_frequency" : 600,
		"pkg_frequency" : 616.5078632435011,
		"samples" : 10,
		"id" : 1,
		"error" : 0.32354736328125,
		"intercept" : 4.18572998046875,
		"coef" : "[0. 0. 0. 0.]"
	}
}

I could not find any documentation about the schemas of these entries. Would you mind providing me some information about them, or a link?

Thank you very much

Refactor TimeoutHandler

The actor timeout handler has no real use case; we can refactor the Actor class to no longer use it.

New supervision strategy

Main changes:

  • Only supervisors can send control messages to actors. These messages will be sent only through the control socket.

  • There will be two ways to kill an actor:

    • soft kill: the actor handles all the messages in its mailbox, runs its termination behavior and then terminates
    • hard kill: the actor runs its termination behavior and then terminates (the Ctrl-C signal will send a hard-kill message to each actor)
  • PoisonPillHandler will centralize all the termination behavior. The internal teardown method of the Actor class will be deleted.

Lost messages for slow formulas

When stream mode is disabled, messages are lost when the formula is slow to process them.
Only the messages at the end of the dataset are missing.

The problem happens because the formula does not check whether there are still reports to process in its data socket before shutting down. When a no-op handler is set for the PoisonPill message in the formula, no messages are lost.

To follow the root of the problem:
When stream mode is disabled, the BackendSupervisor waits for the Puller to shut down after it has processed the reports, then sends a soft kill to all Dispatchers and waits for them to shut down. The Dispatchers forward the soft kill to their Formulas and wait for them before shutting down. Then, each Formula receives the PoisonPill message and shuts down without checking whether there are still messages to process. All actors are shut down and the messages remaining in the sockets are destroyed with them.

RAPL_ENERGY_PKG is invalid or unsupported by this machine

Hi Power API team,

I've tried to run a couple of tests, but apparently the sensor fails to start due to an unsupported event group.
The processor architecture on which I'm running the tests is Willow Cove, which is from 2020 (i.e. Sandy Bridge +).

Machine characteristics:

  • Processor architecture: Willow Cove
  • Processor collection: 11th Generation Intel® Core™ i7 Processors
  • Cores: 4
  • Threads: 8
$ sudo lshw -C CPU            
 *-cpu                     
      description: CPU
      product: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
      vendor: Intel Corp.
      physical id: 400
      bus info: cpu@0
      version: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
      slot: U3E1
      size: 2704MHz
      capacity: 4800MHz
      width: 64 bits
      clock: 100MHz
      capabilities: lm fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp x86-64 constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities cpufreq
      configuration: cores=4 enabledcores=4 threads=8

Steps I took

  1. Install MongoDB (as a docker container)
  2. Install powerapi-sensor (as a docker container)
docker run --net=host --privileged --name powerapi-sensor -d \
           -v /tmp/sys:/sys -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
           -v /tmp/powerapi-sensor-reporting:/reporting \
		   powerapi/hwpc-sensor:latest \
		   -n "powerapi" \
		   -r "mongodb" -U "mongodb://127.0.0.1:27888" -D "db1" -C "col1" \
		   -s "rapl" -o -e "RAPL_ENERGY_PKG" \
		   -s "msr" -e "TSC" -e "APERF" -e "MPERF" \
		   -c "core" -e "CPU_CLK_THREAD_UNHALTED:REF_P" -e "CPU_CLK_THREAD_UNHALTED:THREAD_P" \
                     -e "LLC_MISSES" -e "INSTRUCTIONS_RETIRED"

Container logs:

I: 21-12-07 11:14:26 build: version undefined (rev: undefined) (Sep 28 2021 - 14:40:24)
I: 21-12-07 11:14:26 uname: Linux 5.13.0-1020-oem #24-Ubuntu SMP Thu Nov 11 14:28:56 UTC 2021 x86_64
E: 21-12-07 11:14:26 config: event 'RAPL_ENERGY_PKG' is invalid or unsupported by this machine
E: 21-12-07 11:14:26 config: failed to parse the provided command-line arguments

I would appreciate it if you could help me move forward; do you think there is something I'm missing?
Thanks.

(Question) Badges & general API to measure Project Impact ?

Hello,

I have a question not directly related to powerapi (GitHub Discussion is not enabled on this project so I'm using Issues for this, sorry !)

I was thinking GitHub badges, connected to a general API, could be a powerful tool to rank a project based on its (global) energy consumption, similarly to how we use badges for code coverage, for instance. Defining the metrics is obviously hard (as is anything related to GreenIT), but the badge itself is an interesting tool to make results visible and share awareness. And some tips could be more widely shared, for instance on setting up a low-impact continuous integration pipeline.

I did a quick search, found this project, pyJoules and the work of Spirals: do you know of any project that has already implemented this badge mechanism?

Major Refactoring

Major

  • change State class to Actor attributes
  • remove possibility to change behavior
  • don't use the InitHandler behavior; no interaction is allowed as long as the actor has not been started correctly
  • don't use Handlers, use Actor methods instead
  • if pyzmq sockets are used, hide the connection in a launch function that also wraps the Actor.start method
  • wrap the send method into typed methods, for example FormulaActor.submit_report(report: Report) (the API is clearer and typed)

Minor

  • Sending a message to an actor that is dead or not started raises an ActorNotStartedException
  • sending a StartMessage to a puller with a Filter without route rules answers with an ErrorMessage

Formula with Map of Pusher

  •  Formula with a Map of Pushers instead of a List
  •  Pusher handles all Reports, and not just "PowerReport" :-)

Fix setuptools package detection

With the new setuptools configuration, setuptools crashes while loading the __init__.py files that contain import ... statements. This is caused by importing dependencies that have not been installed yet.

change the names of the parameters

  • change the name of the frequency parameter to sampling_step, or let the user give the frequency directly (example: 2 instead of 500 ms)
  • change the frequency unit from 100 MHz to either GHz or MHz, to be more intuitive (right now we have to introduce the CPU frequencies - base, min and max - as a multiple of 100 MHz)

Add CI features

Use pytest plugins to check:

  • Code coverage
  • code style (linter)

Questions about measurement

Hello.
I read this in the official documentation of pyJoules:

Here are some basic usages of pyRAPL. Please note that the reported energy consumption is not only the energy consumption of the code you are running. This includes the global energy consumption of all the process running on the machine during this period, thus including the operating system and other applications.

That is why we recommend to eliminate any extra programs that may alter the energy consumption of the machine hosting experiments and to keep only the code under measurement (i.e., no extra applications, such as graphical interface, background running task…). This will give the closest measure to the real energy consumption of the measured code.

Is it also true for the other measurement tools such as PowerAPI and Jjoules?

Thank you very much.

Minor style fix

  • change the Python version to 3.8 in the Dockerfile
  • move the report creation function to a test utils module

Better Formula API without actor

Currently, a formula cannot be used without a PusherActor and without managing some internal State.

Create some methods that encapsulate the formula's Handlers and the State/Pusher management.

For example, a compute method that takes a report and returns the power estimation computed from it.
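Such an actor-free API might look like this sketch (SimpleFormula and its linear model are purely illustrative, not an actual PowerAPI formula):

```python
# Hypothetical actor-free formula: compute() maps an input report
# straight to a power estimation, with no Pusher or State involved.

class SimpleFormula:
    def __init__(self, coefficient: float, intercept: float):
        self.coefficient = coefficient
        self.intercept = intercept

    def compute(self, report: dict) -> float:
        """Return the power estimation computed from the given report."""
        return self.coefficient * report["cpu_cycles"] + self.intercept
```

The actor wrapper would then become a thin shell that feeds incoming reports to compute() and forwards the result, which also makes the formula trivially unit-testable.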

Benchmark Python IPC:

Find an alternative to pyzmq sockets.
The goal is to find an alternative that is easier to use and that can handle 100 000 messages per second.

general serialization

For the moment, power reports are serialized in order to be saved in MongoDB.

We need a general way to serialize power reports in order to support multiple database types.

Documentation needed - How to select only a given target for Formula filedb output report ?

My needs are the following: I would like to use VirtualWatts to monitor the energy consumption of a process executed on a virtual machine. Following the documentation, I use the output of SmartWatts through a filedb to get the energy consumption of my VM. As expected, the filedb contains the last report made by SmartWatts. However, when monitoring several components, the target of the last report (stored in the filedb) is not always my VM. Hence, I can't use this file as an input for VirtualWatts.

Is there any way to ask SmartWatts to output the last report for a given target to its filedb output? The documentation does not seem to explain this point.

I can update the documentation later if you want :)

Documentation

  • draw an architecture example of PowerAPI
  • link to formulas
  • general update
  • update the PowerAPI README using this one as an inspiration

center the documentation on use cases (possibly using asciinema), with simple command lines to copy-paste for each use case:

  • Docker container monitoring
  • Kubernetes pod monitoring

Minor fix :

  • Rename "client/server" processes to "parent/child" processes
