starting with mpirun but using only 1 proc.
paulfg
Dec 7th, 2006 at 9:10am
Hello,

I have a working cluster; other MPI software runs correctly, and I use LAM. I compiled with "build.sh MPI" and tried to run "mpirun -np 60 /...path.../HYPHYMPI filename.cfg". It seems to run correctly, but it uses only 1 process, on the master node.

Do you have any ideas? I've probably done something wrong during compilation.

Thanks, Paul.
Sergei
Re: starting with mpirun but using only 1 proc.
Reply #1 - Dec 7th, 2006 at 9:16am
Dear Paul,

HyPhy does not, in general, automatically distribute the code across nodes (especially if you have a custom file). Many standard analyses do, however, contain the code needed to work in an MPI environment, and there are command-line flags that will distribute certain analyses across MPI nodes. Try

mpirun -np xx ./HYPHYMPI BatchFiles/MPITest.bf

for a very simple diagnostic to make sure HyPhy runs on your cluster properly.
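
For illustration, here is a minimal sketch of the kind of MPI-aware logic a batch file needs, assuming the stock HBL primitives MPI_NODE_COUNT, MPISend and MPIReceive, and assuming each node sends back a result string once the dispatched snippet finishes. The dispatched "2+2;" snippet and the variable names are hypothetical placeholders, not code from any shipped analysis:

if (MPI_NODE_COUNT > 1)
{
    /* master process: dispatch one HBL snippet to each slave node */
    for (nodeID = 1; nodeID < MPI_NODE_COUNT; nodeID = nodeID + 1)
    {
        MPISend (nodeID, "2+2;");
    }
    /* collect a result string back from each slave; -1 = receive from any node */
    for (received = 1; received < MPI_NODE_COUNT; received = received + 1)
    {
        MPIReceive (-1, fromNode, resultString);
        fprintf (stdout, "Node ", fromNode, " returned: ", resultString, "\n");
    }
}
else
{
    /* launched as a single process: nothing to distribute */
    fprintf (stdout, "No MPI nodes detected.\n");
}

A batch file with no such logic simply runs everything on the master node, which is why an mpirun launch with a custom file can still end up using only 1 process.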

If you can tell me what you are trying to do, I can help you out.

Cheers,
Sergei
 
paulfg
Re: starting with mpirun but using only 1 proc.
Reply #2 - Dec 7th, 2006 at 9:38am
Hi,

Thanks, the MPI test works! I think I have to go back to the script author to ask how they usually run it with MPI.

-------
Running a HYPHY-MPI test

Detected 10 computational nodes
Polling slave nodes...
Polling node 2...
OK
Polling node 3...
OK
Polling node 4...
OK
Polling node 5...
OK
Polling node 6...
OK
Polling node 7...
OK
Polling node 8...
OK
Polling node 9...
OK
Polling node 10...
OK

Measuring simple job send/receive throughput...
Node     2 sent/received 4194 batch jobs per second
Node     3 sent/received 5151.8 batch jobs per second
Node     4 sent/received 5245 batch jobs per second
Node     5 sent/received 5103.2 batch jobs per second
Node     6 sent/received 5079.6 batch jobs per second
Node     7 sent/received 5056 batch jobs per second
Node     8 sent/received 5121.2 batch jobs per second
Node     9 sent/received 5122 batch jobs per second
Node    10 sent/received 5148.2 batch jobs per second

Measuring relative computational performance...
Master node reference index:    1744601
Slave node   1 index:    1687580.      96.73% relative to the master
Slave node   2 index:    1671200.      95.79% relative to the master
Slave node   3 index:    1633820.      93.65% relative to the master
Slave node   4 index:    1693860.      97.09% relative to the master
Slave node   5 index:    1648430.      94.49% relative to the master
Slave node   6 index:    1688510.      96.78% relative to the master
Slave node   7 index:    1698390.      97.35% relative to the master
Slave node   8 index:    1691400.      96.95% relative to the master
Slave node   9 index:    1692130.      96.99% relative to the master
