Add more content - brcon2020_adc - my presentation for brcon2020
        git clone git://src.adamsgaard.dk/.brcon2020_adc
       ---
        commit da0a4b7ed98cd134c4053d43613ec69832244e74
        parent 11961efe351b5aaa8bad5a292a69abb4b08be295
        Author: Anders Damsgaard <anders@adamsgaard.dk>
        Date:   Tue, 28 Apr 2020 18:36:27 +0200
       
       Add more content
       
       Diffstat:
         M brcon2020_adc.md                    |      44 ++++++++++++++++++++++++-------
       
       1 file changed, 34 insertions(+), 10 deletions(-)
       ---
        diff --git a/brcon2020_adc.md b/brcon2020_adc.md
        @@ -239,12 +239,15 @@ Academic interests:
        
        * Performance through massively parallel deployment (MPI, GPGPU)
        
       -    * NOAA/NCRC Gaea cluster
       +    * NOAA/DOE NCRC Gaea cluster
                * 2x Cray XC40, "Cray Linux Environment"
                * 4160 nodes, each 32 to 36 cores, 64 GB memory
                * infiniband
                * total: 200 TB memory, 32 PB SSD, 5.25 petaflops (peak)
        
       +% National Climate-Computing Research Center
       +% as much power as a small city
       +
        ## Scaling problem
        
        New algorithms hard to implement in HPC codes
        @@ -264,15 +267,35 @@ NO!
        
        %## Measuring computational energy use
        
       +## Example: Ice-sheet flow with sediment/fluid modeling
        
       -## Algorithm matters
       +
       +    --------------------------._____                   ATMOSPHERE
       +                ----->              ```--..
       +        ICE                                 `-._________________      __
       +                ----->                             ------>      |vvvv|  |vvv
       +                                               _________________|    |__|
       +                ----->                      ,'
       +                                          ,'    <><      OCEAN
       +                ---->                    /                        ><>
       +    ____________________________________/___________________________________
       +      SEDIMENT  -->
       +    ________________________________________________________________________
        
        * example: granular dynamics and fluid flow simulation for glacier flow
        
        +* 90% of Antarctic ice-sheet mass transport is driven by ice flow over sediment
       +
       +* need to understand ice-basal sliding in order to project sea-level rise
       +
       +
       +## Algorithm matters
       +
                        sphere: git://src.adamsgaard.dk/sphere
                                C++, Nvidia C, cmake, Python, Paraview
                                massively parallel, GPGPU
                                detailed physics
       +                        20,191 LOC
        #pause
                                3 month computing time on nvidia tesla k40 (2880 cores)
        
        @@ -285,6 +308,7 @@ NO!
                                C99, makefiles, gnuplot
                                single threaded
                                simple physics
       +                        2,348 LOC
        #pause
                                real: 0m00.07 s on potato laptop from 2012
        
        @@ -296,16 +320,16 @@ NO!
        for numerical simulation:
        
        * high-level languages
       -        * easy
       -        * produces results quickly
       -        * does not develop low-level programming skills
       -        * no insight into numerical algorithm
       -        * realistically speaking: no direct way to HPC
       +    * easy
       +    * produces results quickly
       +    * does not develop low-level programming skills
       +    * no insight into numerical algorithm
       +    * realistically speaking: no direct way to HPC
        
        * low-level languages
       -        * require low-level skills
       -        * saves electrical energy
       -        * directly to HPC, just sprinkle some MPI on top
       +    * require low-level skills
       +    * saves electrical energy
       +    * directly to HPC, just sprinkle some MPI on top
        
        
        ## Thanks