Commit e94b93c

Author: Thomas Weise (committed)

All Remaining MPI Examples have been Added

1 parent 5685fda · commit e94b93c

19 files changed · 668 additions & 6 deletions

.gitignore

Lines changed: 31 additions & 2 deletions
```diff
@@ -158,7 +158,36 @@ hs_err_pid*
 ## MPI
-### Linux Builds of C examples
+## Linux Builds of C examples
 mpi/bareBones
+mpi/basicInfo
+mpi/simplePointToPoint1
+mpi/simplePointToPoint2
+mpi/piPointToPoint
+mpi/deadlock
+mpi/nonBlockingPointToPoint
+mpi/piNonBlockingPointToPoint
+mpi/broadcast
+mpi/gatherScatterBareBones
+mpi/gatherScatterPrimes
+mpi/piGatherScatter
+mpi/reducePrimes
+mpi/structTest
+mpi/structScatter
 ## Windows Builds of C examples
-mpi/bareBones.exe
+mpi/bareBones.exe
+mpi/basicInfo.exe
+mpi/simplePointToPoint1.exe
+mpi/simplePointToPoint2.exe
+mpi/piPointToPoint.exe
+mpi/simplePtP2.exe
+mpi/deadlock.exe
+mpi/nonBlockingPointToPoint.exe
+mpi/piNonBlockingPointToPoint.exe
+mpi/broadcast.exe
+mpi/gatherScatterBareBones.exe
+mpi/gatherScatterPrimes.exe
+mpi/piGatherScatter.exe
+mpi/reducePrimes.exe
+mpi/structTest.exe
+mpi/structScatter.exe
```

mpi/.gitignore

Lines changed: 30 additions & 1 deletion
```diff
@@ -61,5 +61,34 @@ hs_err_pid*
 ## MPI
 ## Linux Builds of C examples
 bareBones
+basicInfo
+simplePointToPoint1
+simplePointToPoint2
+piPointToPoint
+deadlock
+nonBlockingPointToPoint
+piNonBlockingPointToPoint
+broadcast
+gatherScatterBareBones
+gatherScatterPrimes
+piGatherScatter
+reducePrimes
+structTest
+structScatter
 ## Windows Builds of C examples
-bareBones.exe
+bareBones.exe
+basicInfo.exe
+simplePointToPoint1.exe
+simplePointToPoint2.exe
+piPointToPoint.exe
+simplePtP2.exe
+deadlock.exe
+nonBlockingPointToPoint.exe
+piNonBlockingPointToPoint.exe
+broadcast.exe
+gatherScatterBareBones.exe
+gatherScatterPrimes.exe
+piGatherScatter.exe
+reducePrimes.exe
+structTest.exe
+structScatter.exe
```

mpi/README.md

Lines changed: 86 additions & 2 deletions
```diff
@@ -8,6 +8,90 @@ The following examples are included in this folder.
 
 ## 1.1. Bare Bones
 
-A simple MPI test program which does nothing except of initializing and disposing the MPI sub system.
+A simple MPI test program which does nothing except initialize and dispose of the MPI subsystem. Launch 1 instance.
 
-1. [bareBones.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/bareBones.c)
+1. [bareBones.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/bareBones.c)
+
+## 1.2. Basic Info
+
+A simple MPI test program which initializes and disposes of the MPI subsystem and prints the size of the current communicator and the rank of the current process in it. Launch any number of instances.
+
+1. [basicInfo.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/basicInfo.c)
+
+## 1.3. Simple Point-to-Point Communication
+
+A simple MPI program which performs some simple point-to-point communication: each process with an even rank sends a message to the process with the next-higher rank and receives a message from the process with the next-lower rank. For the odd-ranked processes, it is the other way around. Launch 2 instances, or 2n instances.
+
+1. [simplePointToPoint.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/simplePointToPoint.c)
+
+## 1.4. Simple Point-to-Point Communication 2
+
+A simple MPI program which performs some simple point-to-point communication: the process with rank 0 sends a string to the process with rank 1, which receives it. Launch 2 instances.
+
+1. [simplePointToPoint2.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/simplePointToPoint2.c)
+
+## 1.5. Estimate Pi with Point-to-Point Communication
+
+This program tries to estimate Pi in the same way as our Java [client](http://github.com/thomasWeise/distributedComputingExamples/tree/master/sockets/java/src/PiClient.java)/[server](http://github.com/thomasWeise/distributedComputingExamples/tree/master/sockets/java/src/PiServer.java) example for [sockets](http://github.com/thomasWeise/distributedComputingExamples/tree/master/sockets/java/), just with MPI. Launch 4 or 5 instances. See also examples 1.8 and 1.12.
+
+1. [piPointToPoint.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/piPointToPoint.c)
+
+## 1.6. Deadlock Error
+
+This program compiles but will enter a deadlock when you run it, because the processes wait for each other in a cycle. Launch 2 instances to see how they hang.
+
+1. [deadlock.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/deadlock.c)
+
+## 1.7. Non-Blocking Point-to-Point Communication
+
+This program is very similar to the previous one, which caused a deadlock. However, we now use non-blocking point-to-point communication: we can initiate a receive, then send a message, and then wait for the receive to complete. The deadlock disappears. Launch 2 instances.
+
+1. [nonBlockingPointToPoint.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/nonBlockingPointToPoint.c)
+
+## 1.8. Estimate Pi with Non-Blocking Point-to-Point Communication
+
+Like example 1.5, we try to estimate Pi with point-to-point communication. However, now we perform an asynchronous computation and use non-blocking point-to-point communication. Launch 4 or 5 instances. See also examples 1.5 and 1.12.
+
+1. [piNonBlockingPointToPoint.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/piNonBlockingPointToPoint.c)
+
+## 1.9. Broadcast
+
+The root node broadcasts a message to all other nodes. Launch 5 instances.
+
+1. [broadcast.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/broadcast.c)
+
+## 1.10. Gather-Scatter: The Bare Bones
+
+This example shows the bare bones of gather-scatter based communication.
+
+1. [gatherScatterBareBones.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/gatherScatterBareBones.c)
+
+## 1.11. Gather-Scatter: Count Primes
+
+We use gather-scatter based communication to count the prime numbers amongst the first 1024 natural numbers. The number range is divided among all workers. See also example 1.13. Launch 4 instances.
+
+1. [gatherScatterPrimes.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/gatherScatterPrimes.c)
+
+## 1.12. Gather-Scatter: Estimate Pi
+
+This example again tries to estimate Pi, but this time with gather-scatter based communication. Launch 4 or 5 instances. See also examples 1.5 and 1.8.
+
+1. [piGatherScatter.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/piGatherScatter.c)
+
+## 1.13. Reduce: Count Primes
+
+Like in example 1.11, we want to count the number of primes amongst the first 1024 natural numbers. This time we use `reduce` in the communication. Launch 4 instances.
+
+1. [reducePrimes.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/reducePrimes.c)
+
+## 1.14. Memory Layout of a Struct
+
+This example performs no communication at all, but prints the memory layout of a `struct`. It shows that the compiler may align fields in many ways: we cannot simply compute where a field of a `struct` is located but need to use proper addressing.
+
+1. [structTest.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/structTest.c)
+
+## 1.15. Struct with Scatter
+
+We define a `struct` datatype for MPI and then send such `struct`s via scatter. Launch 4 instances.
+
+1. [structScatter.c](http://github.com/thomasWeise/distributedComputingExamples/tree/master/mpi/structScatter.c)
```

mpi/basicInfo.c

Lines changed: 20 additions & 0 deletions
```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  int size, rank;

  MPI_Init(&argc, &argv);

  MPI_Comm_size(MPI_COMM_WORLD, &size);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if(rank == 0) {
    printf("Hi from Master\n");
  } else {
    printf("Just Slave %d out of %d\n", rank, size);
  }

  MPI_Finalize();
  return 0;
}
```

mpi/broadcast.c

Lines changed: 22 additions & 0 deletions
```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
  char message[60];
  int rank, size;
  int root = 0; /* rank of the broadcasting process; was left uninitialized */

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  if (rank == root) {
    sprintf(message, "Message from root (rank %d)", rank);
  }

  /* every process must call MPI_Bcast with the same root */
  MPI_Bcast(message, 60, MPI_CHAR, root, MPI_COMM_WORLD);
  printf("The message sent/received at node %d is \"%s\"\n", rank, message);

  MPI_Finalize();
  return 0;
}
```

mpi/deadlock.c

Lines changed: 25 additions & 0 deletions
```c
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv) {
  int rank, size, prev, next;
  MPI_Status status;
  char message[20];

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  prev = ((size + rank - 1) % size);
  next = ((rank + 1) % size);
  strcpy(message, "Important message!");

  /* Every process blocks in MPI_Recv before any process reaches MPI_Send:
     the processes wait for each other in a cycle, i.e., a deadlock. */
  MPI_Recv(message, 20, MPI_CHAR, prev, 0, MPI_COMM_WORLD, &status);
  printf("Process %d received message %s from process %d.\n", rank, message, prev);
  printf("Process %d is sending message %s to process %d.\n", rank, message, next);
  MPI_Send(message, 20, MPI_CHAR, next, 0, MPI_COMM_WORLD);

  MPI_Finalize();
  return 0;
}
```

mpi/gatherScatterBareBones.c

Lines changed: 31 additions & 0 deletions
```c
#include <mpi.h>
#include <stdio.h>

#define DATA_SIZE 1024

int main(int argc, char *argv[]) {
  int send[DATA_SIZE], recv[DATA_SIZE];
  int rank, size, count, res;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  if(rank == 0) { //If root: generate the data to be distributed.
  }

  //Send data to all nodes, here: an integer array of length "count".
  count = (DATA_SIZE / size);
  MPI_Scatter(send, count, MPI_INT, recv, count, MPI_INT, 0, MPI_COMM_WORLD);

  //Each node now processes its share of the data and sends its result
  //(here: the int "res") to root.
  MPI_Gather(&res, 1, MPI_INT, recv, 1, MPI_INT, 0, MPI_COMM_WORLD);

  if(rank == 0) { //If root: process the received data.
  }

  MPI_Finalize();
  return 0;
}
```

mpi/gatherScatterPrimes.c

Lines changed: 46 additions & 0 deletions
```c
#include <mpi.h>
#include <stdio.h>
#include <math.h>

#define DATA_SIZE 1024

int main(int argc, char *argv[]) {
  int send[DATA_SIZE], recv[DATA_SIZE];
  int rank, size, count, res, i, j;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  if(rank == 0) { //generate the data 1..DATA_SIZE if root
    for(i = DATA_SIZE; (--i) >= 0; ) { send[i] = (i + 1); }
  }

  count = (DATA_SIZE / size);
  MPI_Scatter(send, count, MPI_INT, recv, count, MPI_INT, 0, MPI_COMM_WORLD);

  //each node now processes its share of the data:
  //count how many prime numbers are contained in the array
  res = count;
  for(i = count; (--i) >= 0; ) {
    //trial division downwards from an odd j >= floor(sqrt(recv[i]))
    for(j = (((int)(sqrt(recv[i]))) | 1); j > 1; j--) {
      if((recv[i] % j) == 0) { //found a divisor: not prime
        res--;
        break;
      }
    }
  }
  printf("Process %d discovered %d primes in the numbers from %d to %d.\n",
         rank, res, recv[0], recv[count-1]);

  MPI_Gather(&res, 1, MPI_INT, recv, 1, MPI_INT, 0, MPI_COMM_WORLD);

  if(rank == 0) { //if root: add up the per-process prime counts
    res = 0;
    for(i = size; (--i) >= 0; ) {
      res += recv[i];
    }
    printf("The total number of primes in the first %d natural numbers is %d.\n",
           (count*size), res);
  }

  MPI_Finalize();
  return 0;
}
```

mpi/make_linux.sh

Lines changed: 28 additions & 0 deletions
```diff
@@ -9,7 +9,35 @@ set -o errexit # set -e : exit the script if any statement returns a non-true
 echo "Building MPI examples in C for Linux."
 
 rm -f bareBones
+rm -f basicInfo
+rm -f simplePointToPoint1
+rm -f simplePointToPoint2
+rm -f piPointToPoint
+rm -f deadlock
+rm -f nonBlockingPointToPoint
+rm -f piNonBlockingPointToPoint
+rm -f broadcast
+rm -f gatherScatterBareBones
+rm -f gatherScatterPrimes
+rm -f piGatherScatter
+rm -f reducePrimes
+rm -f structTest
+rm -f structScatter
 
 mpicc bareBones.c -o bareBones
+mpicc basicInfo.c -o basicInfo
+mpicc simplePointToPoint1.c -o simplePointToPoint1
+mpicc simplePointToPoint2.c -o simplePointToPoint2
+mpicc piPointToPoint.c -o piPointToPoint
+mpicc deadlock.c -o deadlock
+mpicc nonBlockingPointToPoint.c -o nonBlockingPointToPoint
+mpicc piNonBlockingPointToPoint.c -o piNonBlockingPointToPoint
+mpicc broadcast.c -o broadcast
+mpicc gatherScatterBareBones.c -o gatherScatterBareBones
+mpicc gatherScatterPrimes.c -o gatherScatterPrimes -lm
+mpicc piGatherScatter.c -o piGatherScatter
+mpicc reducePrimes.c -o reducePrimes -lm
+mpicc structTest.c -o structTest
+mpicc structScatter.c -o structScatter
 
 echo "Finished building MPI examples in C for Linux."
```

mpi/nonBlockingPointToPoint.c

Lines changed: 29 additions & 0 deletions
```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
  int rank, size, prev, next;
  char buffer[30], buffer2[30];
  MPI_Request request, request2;
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  next = ((rank + 1) % size);
  prev = ((rank + size - 1) % size);

  MPI_Irecv(buffer, 30, MPI_CHAR, prev, 42, MPI_COMM_WORLD, &request);

  sprintf(buffer2, "Non-blocking from %d!", rank);
  MPI_Isend(buffer2, 30, MPI_CHAR, next, 42, MPI_COMM_WORLD, &request2);

  MPI_Wait(&request, &status);
  printf("%d received \"%s\"\n", rank, buffer);

  MPI_Wait(&request2, &status);

  MPI_Finalize();
  return 0;
}
```
