
Comments (6)

nelsonje avatar nelsonje commented on August 16, 2024

On some systems, srun is not sufficient to run an MPI job properly; you need to use salloc and mpirun or mpiexec to run multi-node jobs properly. Does the command

make demo-hello_world && salloc -N2 -n4 mpirun applications/demos/hello_world.exe

behave differently than

make demo-hello_world && srun -N2 -n4 applications/demos/hello_world.exe

In general, Grappa CPU usage will be high whether or not a task is idle---it busy-waits for new work to minimize startup latency.

from grappa.

buaasun avatar buaasun commented on August 16, 2024

Can grappa process reach CPU usage of 200%, 300% or even higher ... ?
I write a simple multi thread program, here is test_thread.cpp

#include <thread>
#include <vector>
int main(int argc, char *argv[]) {
  std::vector<std::thread> threads(2);
  for (auto& thr : threads) {
    thr = std::thread([](){
      while(1){
      }
    });
  }
  for (auto& thr : threads) {
    thr.join();
  }
}

then I ran the program with srun:
srun -N1 -n2 test_thread
This produced two test_thread processes, and each used 200% CPU.

I also wrote a simple Grappa app, test_grappa:

#include <Grappa.hpp>
void app_main(int argc, char *argv[]) {
  Grappa::CompletionEvent ce(2);
  for(int i=0;i<2;i++){
    Grappa::spawn([&ce]{
      while(1){
      }
      ce.complete();
    });
  }
  ce.wait();
}
int main(int argc, char *argv[]) {
  Grappa::init(&argc,&argv);
  Grappa::run([=]{
    app_main(argc,argv);
  });
  Grappa::finalize();
}

then I ran the program with srun:
srun -N1 -n2 test_grappa
This also produced two test_grappa processes, but each used only 100% CPU.
I think each Grappa process is bound to a CPU core and cannot use more than one.
Am I right? @nelsonje


bmyerz avatar bmyerz commented on August 16, 2024

First, the program test_grappa probably is not what you intended to write. When you call Grappa::run, it runs the lambda on core0 as a task. Grappa::spawn spawns a task that is private to the calling core, so once you call ce.wait, your program has 3 tasks, all running on core0. If you want the program to actually run 2 tasks on each of the two cores, then try this:

void app_main(int argc, char *argv[]) {
  Grappa::on_all_cores([] {
    Grappa::CompletionEvent ce(2);
    for (int i = 0; i < 2; i++) {
      Grappa::spawn([&ce] {
        while (1) {
        }
        ce.complete();
      });
    }
    ce.wait();
  });
}

When you call ce.wait() on both cores, you will have 7 tasks. The main task on core0, 2 tasks spawned by on_all_cores (1 on core0 and 1 on core1), 2 tasks by for loop on core 0, and 2 tasks by for loop on core 1.

To answer your question directly, each Grappa process is bound to a CPU core. If you want to use more cores, you run more processes.
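
For instance, following the srun invocations earlier in the thread, launching more processes is how a Grappa job uses more cores (the process count here is illustrative):

```shell
# One node, four processes: the job can now use four cores, one per process.
srun -N1 -n4 test_grappa
```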


buaasun avatar buaasun commented on August 16, 2024

Do you have plans to improve Grappa to support multiple CPU cores per process? @nelsonje @bmyerz

I think multithreading and multiprocessing are quite different: threads can share memory while processes cannot. So I would like Grappa to use not only multiple processes but also multiple threads.

If you agree with me but do not plan to do it, I'd like to have a try. Do you have any ideas?


simonkahan avatar simonkahan commented on August 16, 2024

My concern would be that doing so would complicate the programming model. In Grappa today, when tasks executed by distinct cores access the same global variable, the accesses are guaranteed to be serialized through the core to which the process owning the variable is bound. No atomics are needed. In what you propose, when tasks executed by distinct cores access the same global variable, they would need to use atomic operations to maintain consistency: otherwise, for example, the operation performed by one core might be overwritten by the other.

The difficulty of mixing access to global variables by having some tasks use delegates and others use atomics would, I believe, be a programmer's nightmare: diagnosing races would be insufferable.

So, rather than corrupt the existing memory semantics, I suggest you consider adding an additional abstraction: a task "bundle". The bundle consists of a set of tasks all executed as a single process that can span multiple cores. Tasks within the bundle share memory within their own address space only with other tasks in the bundle. Access to a global variable uses the Grappa delegate mechanism.


bmyerz avatar bmyerz commented on August 16, 2024

I'm curious: what is your specific motivation for implementing multi-threaded Grappa processes? Grappa's programming model already provides its own shared-memory abstraction.

Is it that you want to implement core-to-core communication with intra-process shared memory because you think it will be faster? If so, you might change the implementation of Grappa to be multithreaded without changing its user API.

Or, do you just want to be able to use existing multithreaded applications within a Grappa program? If so, it should be possible to tweak the code that pins Grappa processes to cores (I think it is in this function: https://github.com/uwsampa/grappa/blob/master/system/Grappa.cpp#L359) so that when you spawn a pthread in your Grappa process, the pthread is allowed to run on a core other than the one the main pthread is running on. Be warned: if you spawn more pthreads, they should not call Grappa APIs. Grappa's APIs assume they are being called only by the main pthread.

