PSCuX (Parallel Simple Classes Under MosiX)


Intro

    PSCuX is a parallel programming library for C++ that helps in the development of parallel applications that can be transparently distributed over network-connected machines running Linux+OpenMosix.
    The architecture implemented in PSCuX allows any kind of function to run in parallel over any kind of data, and these processes have a well-structured way of communicating with each other.
    This project, together with OpenMosix, aims to be (in fact, it already is) a replacement for the PVM and MPI libraries in a Linux environment.





Overview


    OpenMosix is a Linux kernel extension that provides an SSI-like (Single System Image) cluster, where the nodes work as a single machine with several processors.
    There is no need to use any kind of library (like PVM or MPI) to take advantage of the nodes' power: you can just fork() some children and they are quickly distributed over the machines, in an adaptive and transparent way (just as an SMP OS does with a machine's CPUs).
    It is true that this method of development ("fork and forget") is pretty good, but it is still very important for the programmer to have a clean and elegant way of developing, which makes the use of a library highly recommended.
    Starting from this principle, I decided to write my own parallel programming library for the OpenMosix platform, since I believe I can make it more efficient and easier to use than the existing ones, besides relying on an excellent external load-balancing mechanism (OpenMosix).
    A basic use of PSCuX to run 4 functions in parallel and store their return values in result variables:

Sequential code:

result1 = func1( 20 );
result2 = func2( 32 );
result3 = func3( 21 );
result4 = func4( 45 );

PSCuX parallel code:

PscuxJob job1( func1 , 20 );
PscuxJob job2( func2 , 32 );
PscuxJob job3( func3 , 21 );
PscuxJob job4( func4 , 45 );

job1.exec();
job2.exec();
job3.exec();
job4.exec();

result1 = job1.receive();
result2 = job2.receive();
result3 = job3.receive();
result4 = job4.receive();
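
    For reference, the same pattern can be written as a complete program. The sketch below is only an illustration: the header name "pscux.h", the worker function bodies and the main() wrapper are assumptions, while the PscuxJob constructor, exec() and receive() calls follow the usage shown above.

#include <iostream>
#include "pscux.h"            // assumed header name for the library

// Placeholder workloads; any function taking and returning int would do.
int func1( int n ){ return n * 2;  }
int func2( int n ){ return n + 10; }
int func3( int n ){ return n * n;  }
int func4( int n ){ return n - 5;  }

int main(){
  PscuxJob job1( func1 , 20 );
  PscuxJob job2( func2 , 32 );
  PscuxJob job3( func3 , 21 );
  PscuxJob job4( func4 , 45 );

  // exec() starts each job as a child process, which OpenMosix can
  // then migrate to any node of the cluster.
  job1.exec(); job2.exec(); job3.exec(); job4.exec();

  // receive() waits for the corresponding job and returns its result.
  int result1 = job1.receive();
  int result2 = job2.receive();
  int result3 = job3.receive();
  int result4 = job4.receive();

  std::cout << result1 << " " << result2 << " "
            << result3 << " " << result4 << std::endl;
  return 0;
}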

Inter-Process Communication

    A paradigm of distributed computing that cannot be forgotten is communication between processes.
    PSCuX implements this scheme in a fairly clean form in the "PscuxPasser" class. PscuxPasser allows the data contained in a variable of any type to be transmitted from one process to another at any time.
    The declaration of a PscuxPasser object takes the following form:

PscuxPasser mypasser;

    A function that is being executed in parallel can then send the content of a local or global variable to another function like this:

int send_func( int ){
  char buff[100];
  strcpy( buff , "Hello World!!!" );
  mypasser.send( buff );   // blocks until another process receives the data
  return 1;
}

    The call "mypasser.send(buff)" is in "sleep mode" until one another process receives what he is being sent. For example:

int receive_func( int ){
  char buff[100];
  mypasser.receive( buff );   // receives the buffer sent by send_func
  cout << "received: " << buff << endl;
  return 1;
}
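
    To make the picture complete, the sketch below shows how the two sides might be wired together: mypasser is the global PscuxPasser declared above, send_func and receive_func are the functions just shown, and the dummy integer argument 0 plus the main() wrapper are assumptions for illustration only.

int main(){
  // Launch both functions as parallel jobs; the global mypasser
  // connects them once they are running.
  PscuxJob sender( send_func , 0 );
  PscuxJob receiver( receive_func , 0 );

  sender.exec();     // send() will sleep inside send_func ...
  receiver.exec();   // ... until receive_func picks the buffer up

  // Wait for both jobs to finish.
  sender.receive();
  receiver.receive();
  return 0;
}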

Distributed Shared Memory

    PSCuX implements a class called PscuxSharedVar that gives the programmer an extremely simple way to share memory between distributed parallel processes. The basic idea of this class is to store a value of some variable type and to provide methods for altering and retrieving that data to every process that needs it; this way, any process can change the stored value or check what is currently stored, and the changes it makes become available to the other processes.
    The example below illustrates 2 processes running in parallel accessing the same global PscuxSharedVar variable, which keeps the smallest value found across 2 processed vectors:

PscuxSharedVar lesser_value;

int find_lesser_value( vector_t v ){
  int i, temp;
  for(i=0 ; i<PART ; i++){
    lesser_value >> temp;            // read the current shared minimum
    if( v.vector[i] < temp )
      lesser_value << v.vector[i];   // publish a new, smaller minimum
  }
  return 0;
}

PscuxJob job1( find_lesser_value , vector1 );
PscuxJob job2( find_lesser_value , vector2 );

job1.exec();
job2.exec();
job1.receive();
job2.receive();

cout << "lesser value between the vectors: "
     << lesser_value << endl;
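
    The snippet above leaves a few pieces implicit: the vector_t type, the PART constant and the initial content of lesser_value. A plausible setup, given purely as an illustrative assumption, is sketched below; init_shared_minimum is a hypothetical helper, not part of PSCuX.

#include <climits>            // for INT_MAX

#define PART 1000             // assumed number of elements per vector

typedef struct {
  int vector[PART];           // the data each job scans for its minimum
} vector_t;

// Seed the shared variable before launching the jobs, so that the
// first comparison inside find_lesser_value always replaces it.
void init_shared_minimum(){
  lesser_value << INT_MAX;
}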