sphere internals
================

The *sphere* executable has the following options:

.. command-output:: ../../sphere --help

The most common way to invoke *sphere* is, however, via the Python API (e.g. :py:func:`sphere.run`, :py:func:`sphere.render`, etc.).

Numerical algorithm
-------------------
The *sphere* binary is launched from the system terminal by passing the simulation ID as an input parameter: ``./sphere_<architecture> <simulation_ID>``. The sequence of events in the program is the following:

#. System check, including search for NVIDIA CUDA compatible devices (``main.cpp``).

#. Initial data import from the binary input file (``main.cpp``).

#. Allocation of memory for all host variables (particles, grid, walls, etc.) (``main.cpp``).

#. Continued import from the binary input file (``main.cpp``).

#. Control handed to the GPU-specific function ``gpuMain(...)`` (``device.cu``).

#. Allocation of device memory (``device.cu``).

#. Transfer of data from host to device variables (``device.cu``).

#. Initialization of the Thrust (https://code.google.com/p/thrust/) radix sort configuration (``device.cu``).

#. Calculation of the GPU workload configuration (thread and block layout) (``device.cu``).

#. Status and data written to ``<simulation_ID>.status.dat`` and ``<simulation_ID>.output0.bin``, both located in the ``output/`` folder (``device.cu``).

#. Main loop, which runs while ``time.current <= time.total``. The functions are called in ``device.cu``, with the function definitions in separate files. Each kernel call is wrapped in profiling and error-handling functions (see the sketch after this list):

   #. CUDA thread synchronization point (start of the main loop).

   #. ``calcParticleCellID<<<,>>>(...)``: particle-grid hash value calculation (``sorting.cuh``).

   #. CUDA thread synchronization point.

   #. ``thrust::sort_by_key(...)``: Thrust radix sort of the particle-grid hash array (``device.cu``).

   #. ``cudaMemset(...)``: writing the value ``0xffffffff`` to the empty grid cells (``device.cu``).

   #. ``reorderArrays<<<,>>>(...)``: reordering of the particle arrays, based on the sorted particle-grid hash values (``sorting.cuh``).

   #. CUDA thread synchronization point.

   #. Optional: ``topology<<<,>>>(...)``: if particle contact history is required by the contact model, particle contacts are identified and stored per particle. Previous, now non-existent contacts are discarded (``contactsearch.cuh``).

   #. CUDA thread synchronization point.

   #. ``interact<<<,>>>(...)``: for each particle, search for contacts in neighbor cells, processing of optional collisions, and update of the resulting forces and torques. Values are written to read/write device memory arrays (``contactsearch.cuh``).

   #. CUDA thread synchronization point.

   #. ``integrate<<<,>>>(...)``: update of the spatial degrees of freedom by a second-order Taylor series expansion integration (``integration.cuh``).

   #. CUDA thread synchronization point.

   #. ``summation<<<,>>>(...)``: particle contributions to the net force on the walls are summed (``integration.cuh``).

   #. CUDA thread synchronization point.

   #. ``integrateWalls<<<,>>>(...)``: update of the spatial degrees of freedom of the walls (``integration.cuh``).

   #. Update of timers and loop-related counters (e.g. ``time.current``) (``device.cu``).

   #. If the file output interval is reached:

      * Optional write of data to an output binary (``<simulation_ID>.output#.bin``) (``file_io.cpp``).
      * Update of ``<simulation_ID>.status.dat`` (``device.cu``).

   #. Return to the first synchronization point of the main loop, unless ``time.current >= time.total``, in which case the program continues with the steps below.

#. Liberation of device memory (``device.cu``).

#. Control returned to ``main(...)``, liberation of host memory (``main.cpp``).

#. End of program; the return status equals zero (0) if no problems were encountered.

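
Each kernel launch in the main loop follows the same pattern of synchronization, profiling, and error checking. The following minimal, self-contained sketch illustrates that pattern; the wrapper name ``checkForCudaErrors``, the dummy kernel, and the launch dimensions are illustrative assumptions rather than the exact code in ``device.cu``.

.. code-block:: cuda

   #include <cstdio>
   #include <cstdlib>
   #include <cuda_runtime.h>

   // Illustrative error-checking wrapper (the real helper in device.cu may differ)
   static void checkForCudaErrors(const char* label)
   {
       cudaError_t err = cudaGetLastError();
       if (err != cudaSuccess) {
           fprintf(stderr, "CUDA error after %s: %s\n", label, cudaGetErrorString(err));
           exit(EXIT_FAILURE);
       }
   }

   // Stand-in for one of the kernels called in the main loop (e.g. interact)
   __global__ void dummyKernel() { }

   int main()
   {
       const dim3 blocksPerGrid(64);      // in sphere these dimensions are
       const dim3 threadsPerBlock(128);   // computed from the particle count

       cudaEvent_t t0, t1;
       cudaEventCreate(&t0);
       cudaEventCreate(&t1);

       cudaDeviceSynchronize();                       // thread synchronization point
       cudaEventRecord(t0);                           // start of kernel profiling
       dummyKernel<<<blocksPerGrid, threadsPerBlock>>>();
       cudaEventRecord(t1);                           // end of kernel profiling
       cudaEventSynchronize(t1);
       checkForCudaErrors("dummyKernel");             // error exception handling

       float ms = 0.0f;
       cudaEventElapsedTime(&ms, t0, t1);             // elapsed kernel time
       printf("kernel time: %.3f ms\n", ms);

       cudaEventDestroy(t0);
       cudaEventDestroy(t1);
       return 0;
   }
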

The length of the computational time step (``time.dt``) is found from the following relationship, evaluated over all particles:

.. math::
   \Delta t = 0.075 \min \left( \sqrt{\frac{m}{\max(k_n,k_t)}} \right)

where :math:`m` is the particle mass, and :math:`k_n` and :math:`k_t` are the normal and tangential elastic stiffnesses. The time step is set by this relationship in :py:func:`initTemporal`. This criterion ensures that the elastic wave (traveling at the speed of sound) is resolved a number of times while it travels through the smallest particle.

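
As a worked illustration of this criterion, the hypothetical helper below evaluates the expression above for a set of particle masses. It is not part of the *sphere* API; the function name and argument layout are assumptions.

.. code-block:: cuda

   #include <algorithm>
   #include <cmath>
   #include <vector>

   // Hypothetical helper evaluating the time-step criterion shown above:
   // dt = 0.075 * sqrt(min(m) / max(k_n, k_t)); not part of the sphere API.
   double criticalTimeStep(const std::vector<double>& mass, // particle masses [kg]
                           const double k_n,                // normal stiffness [N/m]
                           const double k_t)                // tangential stiffness [N/m]
   {
       const double m_min = *std::min_element(mass.begin(), mass.end());
       const double k_max = std::max(k_n, k_t);
       return 0.075 * std::sqrt(m_min / k_max);
   }
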

Host and device memory types
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Three types of memory are employed in the *sphere* source code, with different characteristics and physical placement in the system.

The floating point precision operating internally in *sphere* is defined in ``datatypes.h``, and can be either single (``float``) or double (``double``). Depending on the GPU, the calculations are performed roughly twice as fast in single precision as in double precision. In dense granular configurations, however, double precision results in greatly improved numerical stability, and it is therefore set as the default floating point precision. The floating point precision is stored as the type definitions ``Float``, ``Float3`` and ``Float4``. The floating point values in the input and output data files are *always* written in double precision and, if necessary, automatically converted by *sphere*.

Three-dimensional variables (e.g. spatial vectors in :math:`E^3`) are stored in global memory as ``Float4`` arrays, since these reads and writes can be coalesced, while e.g. ``float3`` accesses cannot. This alone yields a :math:`\sim 20\times` performance boost, even though it involves 25% more (unused) data.

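
A sketch of how such a precision switch is commonly implemented is shown below. The ``DOUBLE_PRECISION`` macro name and the exact layout are assumptions; the authoritative definitions are in ``datatypes.h``.

.. code-block:: cuda

   // Sketch of a single/double precision switch as described above.
   // The DOUBLE_PRECISION macro name is an assumption; the actual
   // definitions live in datatypes.h.
   #include <vector_types.h>   // CUDA's float3/float4, double3/double4

   #define DOUBLE_PRECISION 1  // default: double precision for numerical stability

   #if DOUBLE_PRECISION
   typedef double  Float;
   typedef double3 Float3;
   typedef double4 Float4;   // 3-component vectors padded to 4 for coalesced access
   #else
   typedef float   Float;
   typedef float3  Float3;
   typedef float4  Float4;
   #endif
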

**Host memory** is the main random-access computer memory (RAM), i.e. read and write memory accessible by CPU processes, but inaccessible by CUDA kernels executed on the device.

**Device memory** is the main, global device memory. It resides off-chip on the GPU, often in the form of 1-6 GB of DRAM. Read/write access from the CUDA kernels is relatively slow. The arrays residing in (global) device memory are prefixed by ``dev_`` in the source code.

.. Todo: Expand the section on device memory types

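
As a minimal illustration of the ``dev_`` convention, the sketch below allocates, fills and frees a single global device array. The function and variable names are hypothetical; the actual allocations in ``device.cu`` cover many more arrays.

.. code-block:: cuda

   #include <cuda_runtime.h>

   // Allocate, fill and release one global device array following the
   // dev_ naming convention; np is the number of particles (illustrative).
   void exampleDeviceMemory(const float4* host_x, const unsigned int np)
   {
       float4* dev_x = 0;                       // resides in global device memory
       cudaMalloc((void**)&dev_x, np * sizeof(float4));

       // host -> device transfer (step 7 of the algorithm above)
       cudaMemcpy(dev_x, host_x, np * sizeof(float4), cudaMemcpyHostToDevice);

       // ... kernels read from and write to dev_x here ...

       cudaFree(dev_x);                         // liberation of device memory
   }
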

**Constant memory** values cannot be changed after they are set, and are used for scalars or small vectors. The values are set in the ``transferToConstantMemory(...)`` function, called at the beginning of ``gpuMain(...)`` in ``device.cu``. Constant memory variables have global scope, and are prefixed by ``devC_`` in the source code.

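
The constant-memory pattern can be sketched as follows. Only the ``devC_`` prefix and the role of ``transferToConstantMemory(...)`` come from the description above; the specific variables and the use of ``cudaMemcpyToSymbol`` are illustrative assumptions based on the standard CUDA mechanism.

.. code-block:: cuda

   #include <cuda_runtime.h>

   // Illustrative constant-memory scalars following the devC_ prefix convention.
   // The actual variables set in transferToConstantMemory(...) differ.
   __constant__ float devC_dt;          // time step length
   __constant__ unsigned int devC_np;   // number of particles

   // Host-side transfer, done once at the start of gpuMain(...)
   void transferToConstantMemorySketch(const float dt, const unsigned int np)
   {
       cudaMemcpyToSymbol(devC_dt, &dt, sizeof(float));
       cudaMemcpyToSymbol(devC_np, &np, sizeof(unsigned int));
   }

   __global__ void useConstants(float* out)
   {
       // Constant memory is read directly by any kernel (global scope, read-only)
       out[threadIdx.x] = devC_dt * devC_np;
   }
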

..
   The *sphere* software calculates particle movement and rotation based on the
   forces applied to them, by application of Newton's second law of motion
   (with constant particle mass: F_net = m * a_cm). This is done in the series
   of algorithmic steps listed above, explained with reference to the *sphere*
   source file sphere.cu. The intent of this document is not to give a full
   theoretical background of the methods, but rather to describe how the
   software performs the calculations.

Performance
-----------

.. Todo: insert a graph of performance vs. the number of particles, and of
   performance vs. the time step length

Particles and computational time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Compilation
-----------
An important note is that the C examples of the NVIDIA CUDA SDK should be compiled before *sphere*. Consult the *Getting started guide*, supplied by Nvidia, for details on this step.

*sphere* is supplied with several Makefiles, which automate the compilation process. To compile all components, open a shell, go to the ``src/`` subfolder, and type ``make``. GNU Make will print the parameters passed to the individual CUDA and GNU compilers (``nvcc`` and ``gcc``). The resulting binary file (``sphere``) is placed in the *sphere* root folder. ``src/Makefile`` will also compile the raytracer.


C++ reference
-------------
.. doxygenclass:: DEM
   :members: