Commit 9c59ffc

Author: Thomas Weise
Commit message: Improved Documentation

1 parent e94b93c, commit 9c59ffc

2 files changed: 115 additions & 17 deletions


README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -82,7 +82,7 @@ Several of the C examples come for Windows or Linux. GCC allows you to cross-com
 
 ### 2.8. MPICH
 
-In order to build and compile our [examples](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/) for using the Message Passing Interface (MPI), we need an MPI implementation. We choose MPICH.
+In order to build and compile our [examples](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/) for using the Message Passing Interface (MPI), we need an [MPI implementation](https://en.wikipedia.org/wiki/Message_Passing_Interface#Implementations). We choose [MPICH](https://en.wikipedia.org/wiki/MPICH).
 
 Under Linux, you can install the required files via `sudo apt-get install mpich libmpich-dev`.
 
```

mpi/README.md

Lines changed: 114 additions & 16 deletions
```diff
@@ -6,92 +6,190 @@ The Message Passing Interface ([MPI](https://en.wikipedia.org/wiki/Message_Passi
 
 The following examples are included in this folder.
 
-## 1.1. Bare Bones
+### 1.1. Bare Bones
 
 A simple MPI test program which does nothing except initializing and disposing of the MPI subsystem. Launch 1 instance.
 
 1. [bareBones.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/bareBones.c)
 
-## 1.2. Basic Info
+Build: `mpicc bareBones.c -o bareBones`
+
```
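
For orientation, such a bare-bones program needs little more than `MPI_Init` and `MPI_Finalize`; the following is a minimal sketch for illustration, not necessarily identical to the repository's `bareBones.c`:

```c
/* A minimal MPI program: it only starts up and shuts down the MPI
   subsystem. Hypothetical sketch, not necessarily the repository's
   bareBones.c. */
#include <mpi.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv); /* initialize the MPI subsystem */
  MPI_Finalize();         /* dispose of it again */
  return 0;
}
```
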
```diff
+### 1.2. Basic Info
 
 A simple MPI test program which does nothing except initializing and disposing of the MPI subsystem and printing the size of the current communicator and the rank of the current process in it. Launch any number of instances.
 
 1. [basicInfo.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/basicInfo.c)
 
-## 1.3. Simple Point-to-Point Communication
+Build: `mpicc basicInfo.c -o basicInfo`
+
```
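
A sketch of this kind of program, using `MPI_Comm_size` and `MPI_Comm_rank` (again hypothetical, not necessarily the repository's `basicInfo.c`):

```c
/* Hypothetical sketch: print communicator size and own rank. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  int size, rank;
  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &size); /* processes in the communicator */
  MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process in it */
  printf("process %d of %d\n", rank, size);
  MPI_Finalize();
  return 0;
}
```
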
```diff
+### 1.3. Simple Point-to-Point Communication
 
 A simple MPI program which performs some simple point-to-point communication: Each process with an even rank sends a message to the process with the next-higher rank and wants to receive a message from the process with the next-lower rank. For the odd-ranked processes, it is the other way around. Launch 2 instances, or 2n instances.
 
 1. [simplePointToPoint.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/simplePointToPoint.c)
 
-## 1.4. Simple Point-to-Point Communication 2
+Build: `mpicc simplePointToPoint.c -o simplePointToPoint`
+
```
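
One consistent way to realize this pattern is a ring exchange where the even ranks send before they receive and the odd ranks do the opposite; the following sketch is a hypothetical reading of the description, not necessarily the repository's `simplePointToPoint.c`:

```c
/* Hypothetical sketch of the even/odd pattern as a ring exchange:
   even ranks send first and then receive, odd ranks do it the other
   way around, so the blocking calls always find a partner. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  int rank, size, received = -1;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  int next = (rank + 1) % size;        /* next-higher rank, wrapping around */
  int prev = (rank + size - 1) % size; /* next-lower rank, wrapping around */
  if ((rank % 2) == 0) { /* even: send first, then receive */
    MPI_Send(&rank, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
    MPI_Recv(&received, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  } else {               /* odd: receive first, then send */
    MPI_Recv(&received, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Send(&rank, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
  }
  printf("rank %d received a message from rank %d\n", rank, received);
  MPI_Finalize();
  return 0;
}
```
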
```diff
+### 1.4. Simple Point-to-Point Communication 2
 
 A simple MPI program which performs some simple point-to-point communication: The process with rank 0 sends a string to the process with rank 1, which receives it. Launch two instances.
 
 1. [simplePointToPoint2.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/simplePointToPoint2.c)
 
+Build: `mpicc simplePointToPoint2.c -o simplePointToPoint2`
+
```
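
A minimal sketch of such a string transfer (hypothetical, not necessarily the repository's `simplePointToPoint2.c`):

```c
/* Hypothetical sketch: rank 0 sends a string to rank 1. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
  int rank;
  char buffer[64];
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0) {
    strcpy(buffer, "Hello from rank 0!");
    /* send the string including its terminating '\0' */
    MPI_Send(buffer, (int)strlen(buffer) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
  } else if (rank == 1) {
    MPI_Recv(buffer, (int)sizeof(buffer), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    printf("rank 1 received: %s\n", buffer);
  }
  MPI_Finalize();
  return 0;
}
```
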
```diff
 ## 1.5. Estimate Pi with Point-to-Point Communication
 
 This program tries to estimate Pi in the same way as done in our Java [client](http://github.com/thomasWeise/distributedComputingExamples/tree/master/sockets/java/src/PiClient.java)/[server](http://github.com/thomasWeise/distributedComputingExamples/tree/master/sockets/java/src/PiServer.java) example for [sockets](http://github.com/thomasWeise/distributedComputingExamples/tree/master/sockets/java/) - just with MPI. Launch 4 or 5 instances. See also examples 1.8 and 1.12.
 
 1. [piPointToPoint.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/piPointToPoint.c)
 
-## 1.6. Deadlock Error
+Build: `mpicc piPointToPoint.c -o piPointToPoint`
+
```
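
The repository's program follows the Java sockets example; as a self-contained illustration, a Monte-Carlo variant in which rank 0 collects the per-process hit counts via point-to-point messages might look like this (a hypothetical sketch, not the repository's `piPointToPoint.c`):

```c
/* Hypothetical sketch: every process samples random points in the unit
   square; rank 0 collects the hit counts via point-to-point messages. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define SAMPLES 1000000L

int main(int argc, char **argv) {
  int rank, size;
  long inside = 0;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  srand(rank + 1); /* a different random sequence per process */
  for (long i = 0; i < SAMPLES; i++) {
    double x = rand() / (double)RAND_MAX;
    double y = rand() / (double)RAND_MAX;
    if ((x * x + y * y) <= 1.0) inside++; /* inside the quarter circle */
  }
  if (rank == 0) { /* rank 0 receives the count of every other process */
    long total = inside, part;
    for (int src = 1; src < size; src++) {
      MPI_Recv(&part, 1, MPI_LONG, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      total += part;
    }
    printf("pi is approximately %f\n",
           (4.0 * total) / (SAMPLES * (double)size));
  } else {
    MPI_Send(&inside, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD);
  }
  MPI_Finalize();
  return 0;
}
```
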
```diff
+### 1.6. Deadlock Error
 
 This program compiles but will enter a deadlock if you run it. The reason is that the processes wait for each other in a cycle. Launch two instances to see how they hang.
 
 1. [deadlock.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/deadlock.c)
 
-## 1.7. Non-Blocking Point-to-Point Communication
+Build: `mpicc deadlock.c -o deadlock`
+
```
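
Such a circular wait takes only a few lines: if every process calls the blocking `MPI_Recv` before its `MPI_Send`, nobody ever gets to send. A hypothetical sketch (not necessarily the repository's `deadlock.c`):

```c
/* Hypothetical sketch of a circular wait: with two instances, both
   processes block in MPI_Recv and neither ever reaches its MPI_Send. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  int rank, other, received = 0;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  other = 1 - rank; /* assumes exactly two instances */
  MPI_Recv(&received, 1, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  MPI_Send(&rank, 1, MPI_INT, other, 0, MPI_COMM_WORLD); /* never reached */
  printf("rank %d done\n", rank); /* never printed */
  MPI_Finalize();
  return 0;
}
```
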
```diff
+### 1.7. Non-Blocking Point-to-Point Communication
 
 This program is very similar to the previous one, which caused a deadlock. However, we now use non-blocking point-to-point communication: we can initiate a message receive action, then send a message, and then wait for the receive to complete. The deadlock disappears. Launch 2 instances.
 
 1. [nonBlockingPointToPoint.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/nonBlockingPointToPoint.c)
 
-## 1.8. Estimate Pi with Non-Blocking Point-to-Point Communication
+Build: `mpicc nonBlockingPointToPoint.c -o nonBlockingPointToPoint`
+
```
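
A hypothetical sketch of the pattern (not necessarily the repository's `nonBlockingPointToPoint.c`):

```c
/* Hypothetical sketch: MPI_Irecv starts the receive without blocking,
   the send happens while the receive is pending, and MPI_Wait
   completes it, so the circular wait of the previous example is gone. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  int rank, other, received = -1;
  MPI_Request request;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  other = 1 - rank; /* assumes exactly two instances */
  MPI_Irecv(&received, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &request);
  MPI_Send(&rank, 1, MPI_INT, other, 0, MPI_COMM_WORLD);
  MPI_Wait(&request, MPI_STATUS_IGNORE); /* now complete the receive */
  printf("rank %d received %d\n", rank, received);
  MPI_Finalize();
  return 0;
}
```
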
```diff
+### 1.8. Estimate Pi with Non-Blocking Point-to-Point Communication
 
 Like example 1.5, we try to estimate Pi with point-to-point communication. However, now we perform an asynchronous computation and use non-blocking point-to-point communication. Launch 4 or 5 instances. See also examples 1.5 and 1.12.
 
 1. [piNonBlockingPointToPoint.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/piNonBlockingPointToPoint.c)
 
-## 1.9. Broadcast
+Build: `mpicc piNonBlockingPointToPoint.c -o piNonBlockingPointToPoint`
 
-The root node will broadcast a message to everyone. Launch 5 istances.
+### 1.9. Broadcast
+
+The root node will broadcast a message to everyone. Launch 5 instances.
 
 1. [broadcast.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/broadcast.c)
 
-## 1.10. Gather-Scatter: The Bare Bones
+Build: `mpicc broadcast.c -o broadcast`
+
```
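
A broadcast needs just one collective call in every process; a hypothetical sketch (not necessarily the repository's `broadcast.c`):

```c
/* Hypothetical sketch: the root fills a buffer, MPI_Bcast distributes
   it, and every process (root included) makes the same call. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  int rank;
  char message[32] = {0};
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0) { /* only the root prepares the message */
    snprintf(message, sizeof(message), "hello from the root");
  }
  MPI_Bcast(message, sizeof(message), MPI_CHAR, 0, MPI_COMM_WORLD);
  printf("rank %d got: %s\n", rank, message);
  MPI_Finalize();
  return 0;
}
```
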
```diff
+### 1.10. Gather-Scatter: The Bare Bones
 
 This example shows the bare bones of a gather-scatter based communication.
 
 1. [gatherScatterBareBones.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/gatherScatterBareBones.c)
 
-## 1.11. Gather-Scatter: Count Primes
+Build: `mpicc gatherScatterBareBones.c -o gatherScatterBareBones`
+
```
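
In the scatter/gather pattern, the root distributes one slice of an array to each process and later collects the processed slices back; a hypothetical sketch (not necessarily the repository's `gatherScatterBareBones.c`):

```c
/* Hypothetical sketch: the root scatters one int per process, each
   process "works" on its item, and the root gathers the results. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
  int rank, size, item = 0, *data = NULL, *results = NULL;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  if (rank == 0) { /* only the root needs the full arrays */
    data = malloc(size * sizeof(int));
    results = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) data[i] = 10 * i;
  }
  MPI_Scatter(data, 1, MPI_INT, &item, 1, MPI_INT, 0, MPI_COMM_WORLD);
  item += rank; /* the per-process "work" */
  MPI_Gather(&item, 1, MPI_INT, results, 1, MPI_INT, 0, MPI_COMM_WORLD);
  if (rank == 0) {
    for (int i = 0; i < size; i++) printf("result[%d] = %d\n", i, results[i]);
    free(data);
    free(results);
  }
  MPI_Finalize();
  return 0;
}
```
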
```diff
+### 1.11. Gather-Scatter: Count Primes
 
 We use a gather-scatter based communication to count the prime numbers amongst the first 1024 numbers. The number range is divided among all workers. See also example 1.13; launch 4 instances.
 
 1. [gatherScatterPrimes.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/gatherScatterPrimes.c)
 
-## 1.12. Gather-Scatter: Estimate Pi
+Build: `mpicc gatherScatterPrimes.c -o gatherScatterPrimes -lm`
+
+### 1.12. Gather-Scatter: Estimate Pi
 
 This example again tries to estimate Pi, but this time we use gather-scatter based communication. Launch 4 or 5 instances. See also examples 1.5 and 1.8.
 
 1. [piGatherScatter.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/piGatherScatter.c)
 
-## 1.13. Reduce: Count Primes
+Build: `mpicc piGatherScatter.c -o piGatherScatter`
+
+### 1.13. Reduce: Count Primes
 
 Like in example 1.11, we want to count the number of primes amongst the first 1024 natural numbers. This time we use `reduce` in the communication. Launch 4 instances.
 
 1. [reducePrimes.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/reducePrimes.c)
 
-## 1.14. Memory Layout of a Struct
+Build: `mpicc reducePrimes.c -o reducePrimes -lm`
+
```
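
With `MPI_Reduce`, the per-process counts are summed at the root in a single call; a hypothetical sketch (not necessarily the repository's `reducePrimes.c`, and unlike it, this variant needs no `-lm`):

```c
/* Hypothetical sketch: each process tests an interleaved share of the
   numbers 1..1024 and MPI_Reduce sums the counts at rank 0. This
   variant avoids the math library, so it needs no -lm. */
#include <mpi.h>
#include <stdio.h>

static int isPrime(int n) { /* naive test, fine for n <= 1024 */
  if (n < 2) return 0;
  for (int d = 2; d * d <= n; d++)
    if ((n % d) == 0) return 0;
  return 1;
}

int main(int argc, char **argv) {
  int rank, size, local = 0, total = 0;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  for (int n = 1 + rank; n <= 1024; n += size)
    if (isPrime(n)) local++;
  MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
  if (rank == 0) printf("%d primes among the first 1024 numbers\n", total);
  MPI_Finalize();
  return 0;
}
```
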
```diff
+### 1.14. Memory Layout of a Struct
 
 This example does no communication at all, but it prints the memory layout of a `struct`. This shows that the compiler may align fields in many ways: we cannot simply compute where a field of a `struct` is located but need to use proper addressing.
 
 1. [structTest.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/structTest.c)
 
-## 1.15. Struct with Scatter
+Build: `mpicc structTest.c -o structTest`
+
```
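
No MPI is needed to demonstrate this; `sizeof` and `offsetof` from `<stddef.h>` suffice. A hypothetical sketch (not necessarily the repository's `structTest.c`):

```c
/* Hypothetical sketch: sizeof and offsetof reveal the padding that the
   compiler inserts between fields for alignment. */
#include <stdio.h>
#include <stddef.h>

struct example {
  char   c; /* 1 byte */
  double d; /* typically 8 bytes, usually 8-byte aligned */
  int    i; /* typically 4 bytes */
};

int main(void) {
  printf("sizeof(struct example) = %zu\n", sizeof(struct example));
  printf("offset of c = %zu\n", offsetof(struct example, c));
  printf("offset of d = %zu\n", offsetof(struct example, d));
  printf("offset of i = %zu\n", offsetof(struct example, i));
  return 0;
}
```
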
```diff
+### 1.15. Struct with Scatter
 
 We define a `struct` datatype for MPI and then send such `struct`s via scatter. Launch 4 instances.
 
-1. [structScatter.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/structScatter.c)
+1. [structScatter.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/structScatter.c)
+
+Build: `mpicc structScatter.c -o structScatter`
+
```
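
Defining a `struct` datatype for MPI means listing the member types and their byte offsets, committing the resulting type, and then using it like a built-in datatype; a hypothetical sketch (not necessarily the repository's `structScatter.c`):

```c
/* Hypothetical sketch: describe a struct to MPI via member types and
   byte offsets, commit the datatype, and scatter one struct per rank. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct item { int id; double value; };

int main(int argc, char **argv) {
  int rank, size;
  struct item mine;
  struct item *all = NULL;
  MPI_Datatype itemType;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  int blockLengths[2] = {1, 1};
  MPI_Datatype types[2] = {MPI_INT, MPI_DOUBLE};
  MPI_Aint displacements[2] = {offsetof(struct item, id),
                               offsetof(struct item, value)};
  MPI_Type_create_struct(2, blockLengths, displacements, types, &itemType);
  MPI_Type_commit(&itemType);

  if (rank == 0) { /* the root fills one struct per process */
    all = malloc(size * sizeof(struct item));
    for (int i = 0; i < size; i++) { all[i].id = i; all[i].value = 0.5 * i; }
  }
  MPI_Scatter(all, 1, itemType, &mine, 1, itemType, 0, MPI_COMM_WORLD);
  printf("rank %d got item {id=%d, value=%f}\n", rank, mine.id, mine.value);

  MPI_Type_free(&itemType);
  free(all); /* free(NULL) is a no-op on non-root ranks */
  MPI_Finalize();
  return 0;
}
```
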
```diff
+
+## 2. Building and Execution
+
+In order to build and run our examples, you need to have a [C compiler](https://en.wikipedia.org/wiki/List_of_compilers#C_compilers) and an [MPI implementation](https://en.wikipedia.org/wiki/Message_Passing_Interface#Implementations). Here we use [GCC](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) and [MPICH](https://en.wikipedia.org/wiki/MPICH).
+
+### 2.1. Installation
+
+#### 2.1.1. Linux
+
+Under Linux, GCC is usually already installed; otherwise you can do `sudo apt-get install gcc`.
+MPICH can be installed using `sudo apt-get install mpich libmpich-dev`.
+
+In some environments, such as [Travis CI](https://github.com/travis-ci/apt-package-whitelist/issues/406), it is not possible to install MPICH directly. Here you can do it manually, by executing the following lines in your terminal:
+
+    currentDir=`pwd`
+    mpichVersion=3.2
+    cd /tmp/
+    wget --no-check-certificate -q http://www.mpich.org/static/downloads/$mpichVersion/mpich-$mpichVersion.tar.gz
+    tar -xzf mpich-$mpichVersion.tar.gz
+    cd mpich-$mpichVersion
+    mkdir build && cd build
+    sudo ../configure CC=$CC CXX=$CXX --disable-fortran --disable-romio
+    sudo make -j2
+    sudo make install
+    cd "$currentDir"
+
+#### 2.1.2. Windows
+
+Until two or so years ago (at the time of this writing), you could use [MPICH](https://en.wikipedia.org/wiki/MPICH) under Windows. Unfortunately, it seems that MPICH supported Windows only [until version `1.4.1p`](http://stackoverflow.com/questions/21153750) and that there is no MPICH for Windows anymore at the time of this writing. We could use `MS-MPI` instead, but getting this to work seems to be a hassle, see
+
+1. [https://github.com/scisoft/autocmake/issues/85](https://github.com/scisoft/autocmake/issues/85) (for FORTRAN)
+2. [http://stackoverflow.com/questions/32200131](http://stackoverflow.com/questions/32200131) (for FORTRAN)
+3. [http://stackoverflow.com/questions/19755272](http://stackoverflow.com/questions/19755272) (for FORTRAN)
+4. [http://stackoverflow.com/questions/21153750](http://stackoverflow.com/questions/21153750)
+5. [https://social.microsoft.com/Forums/en-US/245dcda4-7699-494f-bbe1-b76eb19e53da/linking-msmpi-with-mingw-gfortran?forum=windowshpcmpi](https://social.microsoft.com/Forums/en-US/245dcda4-7699-494f-bbe1-b76eb19e53da/linking-msmpi-with-mingw-gfortran?forum=windowshpcmpi) (FORTRAN again).
+
+For now, I list how you can get to MS-MPI from the MPICH site. *I have not yet found out how to get it to run with MinGW (let alone for cross-compilation from Linux to Windows).* _This summary below is thus, basically, useless._
+
+1. To download MPICH, visit [http://www.mpich.org/downloads/](http://www.mpich.org/downloads/).
+2. Naturally, you would select a Windows distribution of MPICH - scroll down the page to the `Microsoft Windows` entry at the very bottom.
+3. Click the link. At the time of this writing, this is version [1.0.3](http://msdn.microsoft.com/en-us/library/bb524831%28v=vs.85%29.aspx).
+4. Interestingly, this link will lead you to [Microsoft](https://msdn.microsoft.com/en-us/library/bb524831(v=vs.85).aspx): scroll to "MS-MPI downloads" and click. At the time of this writing, this is [MS-MPI v7](http://go.microsoft.com/FWLink/p/?LinkID=389556).
+5. You arrive at (yet another) [download page](https://www.microsoft.com/en-us/download/details.aspx?id=49926) which also provides installation instructions. Oddly enough, on the page, there is neither a button nor any link for doing the download at the time of this writing.
+6. Well, I found [http://www.microsoft.com/en-us/download/details.aspx?id=47259](http://www.microsoft.com/en-us/download/details.aspx?id=47259) from where you can seemingly download version 6 of MS-MPI. Not as good as version 7, but it will do.
+7. When clicking the download button on that page, you get to (arghh!!!) [another download](http://www.microsoft.com/en-us/download/confirmation.aspx?id=47259). There I choose to [download `MSMpiSetup.exe`](https://download.microsoft.com/download/6/4/A/64A7852A-A8C3-476D-908C-30501F761DF3/MSMpiSetup.exe).
+8. After downloading `MSMpiSetup.exe`, right-click it and choose `Run as Administrator`.
+9. In the opening install screen, click `Next`, accept the license agreement by checking the check box, and click `Next` again.
+10. Leave the installation path as is (`C:\Program Files\Microsoft MPI\`) and click `Next` again. Then click `Install` and, after the process completes, click `Finish`.
+
+### 2.2. Building
+
+#### 2.2.1. Linux
+
+Under Linux, you can now compile each example using `mpicc`, which becomes available after the above installation. For instance, you would compile example 1.5 as `mpicc piPointToPoint.c -o piPointToPoint`. For some examples (such as 1.11 and 1.13), you need to add the parameter [`-lm`](http://www.stackoverflow.com/questions/10447791/), as they need to be linked against the math library.
+
+#### 2.2.2. Windows
+
+As said before, currently not supported by this README.md, sorry.
+
+### 2.3. Execution
+
+#### 2.3.1. Linux
+
+After compiling, you can now execute the programs using `mpirun`. For example 1.5 above, you would do `mpirun -n 4 ./piPointToPoint`.
+
+#### 2.3.2. Windows
+
+As said before, currently not supported by this README.md, sorry.
```
