Commit 9efad61

Merge branch 'master' of github.com:GPUE-group/GPUE

2 parents df59862 + 9ef03a3

6 files changed: 76 additions & 24 deletions

paper.bib

Lines changed: 4 additions & 1 deletion

@@ -186,14 +186,17 @@ @article{ORiordan2016b
 @misc{documentation,
   title = {{GPUE documentation website}},
   author = {Schloss, J and O'Riordan, L. J.},
+  year = {2018},
   howpublished = {\url{https://gpue-group.github.io/}}
 }


 @misc{WittekGPE2016,
   title = {{Comparing three numerical solvers of the Gross-Pitaevskii equation}},
+  author = {Wittek, P.},
+  year = {2016},
   howpublished = {\url{https://web.archive.org/web/20171120181431/https://peterwittek.com/gpe-comparison.html}},
-  note = {Accessed: 2018-10-04}
+  note = {Updated: 18/01/2017. Accessed: 2018-10-04}
 }

 @phdthesis {ORiordan2017,

paper.md

Lines changed: 10 additions & 9 deletions

@@ -17,39 +17,40 @@ authors:
 affiliations:
   - name: Okinawa Institute of Science and Technology Graduate University, Onna-son, Okinawa 904-0495, Japan.
     index: 1
-date: 21 September 2018
+date: 10 December 2018
 bibliography: paper.bib
 ---

 # Summary

 Bose--Einstein Condensates (BECs) are superfluid systems consisting of bosonic atoms that have been cooled and condensed into a single, macroscopic ground state [@PethickSmith2008; @FetterRMP2009].
-These systems can be created in an experimental laboratory and allow for the exploration of many interesting physical phenomena, such as superfluid turbulence [@Roche2008; @White2014; @Navon2016], chaotic dynamics [@Gardiner2002; @Kyriakopoulos2014; @Zhang2017], and other analogous quantum systems [@DalibardRMP2011].
-Numerical simulations of BECs that directly mimic what can be seen in experiments are valuable for fundamental research in these areas.
+These systems can be created in an experimental laboratory and allow for the exploration of physical phenomena such as superfluid turbulence [@Roche2008; @White2014; @Navon2016], chaotic dynamics [@Gardiner2002; @Kyriakopoulos2014; @Zhang2017], and analogues of other quantum systems [@DalibardRMP2011].
+Numerical simulations of BECs that directly mimic experiments are valuable to fundamental research in these areas and allow for theoretical advances before experimental validation.
 The dynamics of BEC systems can be found by solving the non-linear Schrödinger equation known as the Gross--Pitaevskii Equation (GPE),

 $$
 i\hbar \frac{\partial\Psi(\mathbf{r},t)}{\partial t} = \left( -\frac{\hbar^2}{2m} {\nabla^2} + V(\mathbf{r}) + g|\Psi(\mathbf{r},t)|^2\right)\Psi(\mathbf{r},t),
 $$

 where $\Psi(\mathbf{r},t)$ is the three-dimensional many-body wavefunction of the quantum system, $\mathbf{r} = (x,y,z)$, $m$ is the atomic mass, $V(\mathbf{r})$ is an external potential, $g = \frac{4\pi\hbar^2a_s}{m}$ is a coupling factor, and $a_s$ is the scattering length of the atomic species.
-Here, the GPE is shown in three dimensions, but it can easily be modified for one or two dimensions [@PethickSmith2008].
-The split-operator method is one straightforward technique to solve the GPE and has previously been accelerated with GPU devices [@Ruf2009; @Bauke2011]
-No generalized software packages are available using this method on GPU devices; however, software packages have been designed to simulate BECs with other methods, including GPELab [@Antoine2014] the Massively Parallel Trotter-Suzuki Solver [@Wittek2013], and XMDS [@xmds].
+Here, the GPE is shown in three dimensions, but it can easily be modified to one or two dimensions [@PethickSmith2008].
+One of the most straightforward methods for solving the GPE is the split-operator method, which has previously been accelerated with GPU devices [@Ruf2009; @Bauke2011].
+No generalized software packages are available using this method on GPU devices that allow for user-configurable simulations and a variety of different system types; however,
+several software packages exist to simulate BECs with other methods and on different architectures, including GPELab [@Antoine2014], the Massively Parallel Trotter-Suzuki Solver [@Wittek2013], and XMDS [@xmds].

-GPUE is a GPU-based GPE solver via the split-operator method for superfluid simulations of both linear and non-linear Schrödinger equations, emphasizing Bose--Einstein Condensates with vortex dynamics in 2 and 3 dimensions. GPUE provides a fast, robust, and accessible method to simulate superfluid physics for fundamental research in the area and has been used to simulate and manipulate large vortex lattices in two dimensions [@ORiordan2016; @ORiordan2016b], along with ongoing studies on quantum vortex dynamics in two and three dimensions.
+GPUE is a GPU-based Gross--Pitaevskii Equation solver via the split-operator method for superfluid simulations of both linear and non-linear Schrödinger equations, emphasizing superfluid vortex dynamics in two and three dimensions. GPUE is a fast, robust, and accessible software suite to simulate physics for fundamental research in the area of quantum systems and has been used to manipulate large vortex lattices in two dimensions [@ORiordan2016; @ORiordan2016b], along with ongoing studies of vortex dynamics.

 For these purposes, GPUE provides a number of unique features:
 1. Dynamic field generation for trapping potentials and other variables on the GPU device.
 2. Vortex tracking in 2D and vortex highlighting in 3D.
 3. Configurable gauge fields for the generation of artificial magnetic fields and corresponding vortex distributions [@DalibardRMP2011; @Ghosh2014].
 4. Vortex manipulation via direct control of the wavefunction phase [@Dobrek1999].

-All of these features enable GPUE to simulate a wide variety of linear and non-linear (BEC) dynamics of quantum systems. The above features enable configurable physical system parameters and GPUE's high-performance numerical solver improves over other suites [@WittekGPE2016; @ORiordan2017]. All GPUE features and functionalities have been described in further detail in the documentation [@documentation].
+All of these features enable GPUE to simulate a wide variety of linear and non-linear dynamics of quantum systems. GPUE additionally features a numerical solver with improvements over other suites [@WittekGPE2016; @ORiordan2017]. All of GPUE's features and functionality have been described in further detail in the documentation [@documentation].

 # Acknowledgements
 This work has been supported by the Okinawa Institute of Science and Technology Graduate University and by JSPS KAKENHI Grant Number JP17J01488.
 We would also like to thank Thomas Busch, Rashi Sachdeva, Tiantian Zhang, Albert Benseney, and Angela White for discussions on useful physical systems to simulate with the GPUE codebase, along with Peter Wittek and Tadhg Morgan for contributions to the code itself.
-These acknowledgements can be found in `acknowledgements.md`.
+These acknowledgements can be found in `GPUE/acknowledgements.md`.

 # References
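The split-operator scheme described in paper.md alternates position-space and momentum-space half-steps of the GPE propagator. The following is a minimal single-CPU NumPy sketch of that technique, not GPUE's CUDA implementation; the units (hbar = m = 1), grid size, time step, harmonic trap, and coupling `g` are all placeholder assumptions.

```python
import numpy as np

# Minimal 1D split-step Fourier evolution of the GPE (illustrative only).
# hbar = m = 1; grid, time step, trap, and coupling g are placeholders.
N, L, dt, g = 256, 20.0, 1e-3, 1.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                                     # harmonic trap
psi = np.exp(-x**2).astype(complex)                # initial Gaussian
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))   # normalize

for _ in range(1000):
    # Half step with the position-space operator V + g|psi|^2
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))
    # Full kinetic step, applied diagonally in momentum space
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    # Second position-space half step
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))

norm = np.sum(np.abs(psi)**2) * (L / N)
```

Because every factor in the loop is a pure phase, real-time split-step evolution preserves the wavefunction norm, which makes `norm` a convenient sanity check on the stepper.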

py/plot.py

Lines changed: 45 additions & 0 deletions

@@ -190,6 +190,48 @@ def plot_wfc_phase(xDim, yDim, data_dir, pltval, start, end, incr):
         #fig = plt.figure()
         #fig.savefig('wfc.png')

+# Function to plot wfc cut
+def plot_wfc_cut(xDim, yDim, data_dir, pltval, start, end, incr):
+    if data_dir[0] != "/":
+        data_dir = "../" + data_dir
+    for i in range(start,end,incr):
+        print(i)
+        data_real = data_dir + "/wfc_0_const_%s" % i
+        data_im = data_dir + "/wfc_0_consti_%s" % i
+        if pltval == "wfc_cut_ev":
+            data_real = data_dir + "/wfc_ev_%s" % i
+            data_im = data_dir + "/wfc_evi_%s" % i
+
+        lines_real = np.loadtxt(data_real)
+        lines_im = np.loadtxt(data_im)
+        wfc_real = np.reshape(lines_real, (xDim,yDim))
+        wfc_im = np.reshape(lines_im, (xDim,yDim))
+
+        wfc = abs(wfc_real + 1j * wfc_im)
+        wfc = wfc*wfc
+
+        max = 0
+        for j in range(xDim):
+            for k in range(yDim):
+                if (wfc[j][k] > max):
+                    max = wfc[j][k]
+
+        print("Max value is: ",max)
+        for j in range(xDim):
+            for k in range(yDim):
+                if (wfc[j][k] > max*0.4):
+                    wfc[j][k] = 1.0
+                else:
+                    wfc[j][k] = 0.0
+
+        plt.imshow(wfc, extent=(-6.9804018707623236e-04,6.9804018707623236e-04,
+                                -6.9804018707623236e-04,6.9804018707623236e-04),
+                   interpolation='nearest', cmap = cm.jet)
+        plt.colorbar()
+        plt.show()
+        #fig = plt.figure()
+        #fig.savefig('wfc.png')
+
+

 # Function to parse arguments for plotting
 # Note: We assume that the parameters come in sets

@@ -233,6 +275,9 @@ def plot(par):
     elif (par.item == "GK" or par.item == "GV"):
         plot_complex(par.xDim, par.yDim, par.data_dir, par.item,
                      par.start, par.end, par.incr)
+    elif (par.item == "wfc_cut" or par.item == "wfc_cut_ev"):
+        plot_wfc_cut(par.xDim, par.yDim, par.data_dir, par.item,
+                     par.start, par.end, par.incr)
     elif (par.end != 1):
         plot_var_range(par.xDim, par.yDim, par.data_dir, par.item,
                        par.start, par.end, par.incr)
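The new `plot_wfc_cut` finds the density maximum and applies a 40% cutoff with explicit nested loops. The same cut can be expressed with vectorized NumPy operations; this is a sketch on synthetic data (the file loading and matplotlib calls are omitted, and the array contents are random placeholders):

```python
import numpy as np

# Same density cut as plot_wfc_cut, vectorized (illustrative sketch).
rng = np.random.default_rng(0)
xDim = yDim = 64
wfc_real = rng.random((xDim, yDim))   # placeholder for the loaded real part
wfc_im = rng.random((xDim, yDim))     # placeholder for the loaded imaginary part

density = np.abs(wfc_real + 1j * wfc_im)**2
# 1.0 inside the 40%-of-maximum cut, 0.0 outside
cut = np.where(density > 0.4 * density.max(), 1.0, 0.0)
```

The vectorized form avoids shadowing the built-in `max` and replaces both double loops with a single comparison against `density.max()`.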

py/vort.py

Lines changed: 7 additions & 7 deletions

@@ -245,13 +245,13 @@ def run(start,fin,incr): #Performs the tracking
     v0c = vorts_c.element(index_r[0]).sign #Get the sign of the smallest distance vortex
     v0p = vorts_p.element(i3).sign # Get the sign of the current vortex at index i3
     v1c = vorts_c.element(index_r[0]).uid #Get uid of current vortex
-    #Check if distance is less than 7 grid points, and that the sign is matched between previous and current vortices, and that the current vortex has a negative uid, indicating that a pair has not yet been found. If true, then update the current vortex index to that of the previous vortex index, and turn vortex on --- may be dangerous
-    if (index_r[1] < 30) and (vorts_c.element(index_r[0]).sign == vorts_p.element(i3).sign) and (vorts_c.element(index_r[0]).uid < 0) and (vorts_p.element(i3).isOn == True):
-        vorts_c.element(index_r[0]).update_uid(vorts_p.element(i3).uid)
-        vorts_c.element(index_r[0]).update_on(True)
-    else:
-        print "Failed to find any matching vortex. Entering interactive mode. Exit with Ctrl+D"
-        from IPython import embed; embed()
+    #Check if distance is less than 7 grid points, and that the sign is matched between previous and current vortices, and that the current vortex has a negative uid, indicating that a pair has not yet been found. If true, then update the current vortex index to that of the previous vortex index, and turn vortex on --- may be dangerous
+    if (index_r[1] < 30) and (vorts_c.element(index_r[0]).sign == vorts_p.element(i3).sign) and (vorts_c.element(index_r[0]).uid < 0) and (vorts_p.element(i3).isOn == True):
+        vorts_c.element(index_r[0]).update_uid(vorts_p.element(i3).uid)
+        vorts_c.element(index_r[0]).update_on(True)
+    else:
+        print "Failed to find any matching vortex. Entering interactive mode. Exit with Ctrl+D"
+        from IPython import embed; embed()


     #You will never remember why this works
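The tracking loop in `run` pairs each previous-frame vortex with the nearest current-frame vortex, accepting the match only when the distance is below a threshold, the circulation signs agree, and the candidate has not already been claimed. A simplified sketch of that matching rule, with the `vorts_c`/`vorts_p` classes replaced by plain `(x, y, sign)` tuples and the threshold value an assumption:

```python
# Simplified vortex matching: pair each previous vortex with the nearest
# unclaimed current vortex of the same sign within a distance threshold.
def match_vortices(prev, curr, threshold=30.0):
    """prev/curr: lists of (x, y, sign). Returns {prev_index: curr_index}."""
    pairs = {}
    claimed = set()
    for i, (px, py, psign) in enumerate(prev):
        best, best_d2 = None, threshold**2
        for j, (cx, cy, csign) in enumerate(curr):
            d2 = (px - cx)**2 + (py - cy)**2
            if j not in claimed and csign == psign and d2 < best_d2:
                best, best_d2 = j, d2
        if best is not None:
            pairs[i] = best     # nearest valid candidate wins
            claimed.add(best)
    return pairs

prev = [(10.0, 10.0, 1), (40.0, 40.0, -1)]
curr = [(41.0, 39.0, -1), (11.0, 10.5, 1), (200.0, 200.0, 1)]
print(match_vortices(prev, curr))  # → {0: 1, 1: 0}
```

The real code additionally tracks persistent uids and an `isOn` flag, and drops into IPython when no candidate passes the checks; those details are omitted here.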

src/ds.cu

Lines changed: 3 additions & 2 deletions

@@ -91,8 +91,9 @@ void generate_plan_other3d(cufftHandle *plan_fft1d, Grid &par, int axis){

     if(result != CUFFT_SUCCESS){
         printf("Result:=%d\n",result);
-        printf("Error: Could not execute cufftPlan3d(%s ,%d ,%d ).\n",
-               "plan_1d", (unsigned int)xDim, (unsigned int)yDim);
+        printf("Error: Could not execute cufftPlan3d(%s, %d, %d, %d).\n",
+               "plan_3d", (unsigned int)xDim, (unsigned int)yDim,
+               (unsigned int)zDim);
         exit(1);
     }

src/init.cu

Lines changed: 7 additions & 5 deletions

@@ -309,11 +309,15 @@ int init(Grid &par){
                (unsigned int)xDim, (unsigned int)yDim);
         exit(1);
     }
-    generate_plan_other2d(&plan_other2d, par);

     generate_plan_other3d(&plan_1d, par, 0);
-    generate_plan_other3d(&plan_dim2, par, 1);
-    generate_plan_other3d(&plan_dim3, par, 2);
+    if (dimnum == 2){
+        generate_plan_other2d(&plan_other2d, par);
+    }
+    if (dimnum == 3){
+        generate_plan_other3d(&plan_dim3, par, 2);
+        generate_plan_other3d(&plan_dim2, par, 1);
+    }
     result = cufftPlan3d(&plan_3d, xDim, yDim, zDim, CUFFT_Z2Z);
     if(result != CUFFT_SUCCESS){
         printf("Result:=%d\n",result);

@@ -526,7 +530,6 @@ void set_variables(Grid &par, bool ev_type){

     // Special variables / instructions for 2/3d case
     if (dimnum > 1 && !par.bval("Ay_time")){
-        pAy_gpu = par.cufftDoubleComplexval("pAy_gpu");
         EpAy = par.cufftDoubleComplexval("EpAy");
         err=cudaMemcpy(pAy_gpu, EpAy, sizeof(cufftDoubleComplex)*gsize,
                        cudaMemcpyHostToDevice);

@@ -538,7 +541,6 @@ void set_variables(Grid &par, bool ev_type){
     }

     if (dimnum > 2 && !par.bval("Az_time")){
-        pAz_gpu = par.cufftDoubleComplexval("pAz_gpu");
         EpAz = par.cufftDoubleComplexval("EpAz");
         err=cudaMemcpy(pAz_gpu, EpAz, sizeof(cufftDoubleComplex)*gsize,
                        cudaMemcpyHostToDevice);
