[HN Gopher] Spy on Python down to the Linux kernel level
       ___________________________________________________________________
        
       Spy on Python down to the Linux kernel level
        
       Author : p403n1x87
       Score  : 90 points
       Date   : 2021-09-27 11:05 UTC (1 days ago)
        
 (HTM) web link (p403n1x87.github.io)
 (TXT) w3m dump (p403n1x87.github.io)
        
       | faizshah wrote:
       | How do you relate the output of this back to your code?
       | 
        | For example, I look up ddot_kernel_8 from the sample sklearn
        | output and find that it's a function from OpenBLAS, but when I
        | try to find how sklearn uses it, I can't see where it's called.
        | How would you make use of this tool?
       | 
        | It seems like the output would be useful for writing Cython
        | extensions. Is that the main use case?
        
         | p403n1x87 wrote:
          | If you follow the call stack carefully you should be able to
          | get to the point where sklearn calls ddot_kernel_8 (indirectly
          | in this case). Austin(p) reports source files as well, so that
          | shouldn't be a problem (provided all the debug symbols are
          | available). If you're collecting data with austinp, don't
          | forget to resolve symbol names with the resolve.py utility
          | (https://github.com/P403n1x87/austin/blob/devel/utils/resolve...;
          | see the README for more details:
          | https://github.com/P403n1x87/austin/blob/devel/utils/resolve...)
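          | 
          | For illustration, a toy script along these lines (assuming
          | NumPy is linked against OpenBLAS) will show ddot-like native
          | frames under np.dot when profiled with austinp:
          | 
          |     # toy_ddot.py -- profile with austinp to see BLAS frames
          |     import numpy as np
          | 
          |     # np.dot on 1-D float64 arrays dispatches to the BLAS
          |     # ddot routine, so a native profiler should show
          |     # ddot-like frames below this call.
          |     a = np.random.rand(10_000_000)
          |     b = np.random.rand(10_000_000)
          | 
          |     for _ in range(100):
          |         a.dot(b)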
        
       | goffi wrote:
        | Really nice tool to have in the toolbox, thanks for that. For
        | the record, I installed Austin from the AUR on Arch and
        | austin-tui was not working (I've pinged the packager about
        | that), and it wasn't working with the PyPI version either. It
        | does work if I pipx install the git version directly, though.
       | 
       | How does it play with async code?
        
         | p403n1x87 wrote:
          | Thanks for reporting this. Do you mean an async tracee?
          | Austin(p) handles all sorts of Python code, so it should work
          | with async code like any other code.
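          | 
          | For instance, a toy async tracee along these lines is sampled
          | just like synchronous code (coroutine frames simply appear on
          | the sampled stacks):
          | 
          |     # async_tracee.py -- a toy asyncio workload to profile
          |     import asyncio
          | 
          |     async def busy(n):
          |         total = 0
          |         for i in range(n):
          |             total += i * i
          |             if i % 100_000 == 0:
          |                 await asyncio.sleep(0)  # yield to the loop
          |         return total
          | 
          |     async def main():
          |         await asyncio.gather(*(busy(5_000_000)
          |                                for _ in range(4)))
          | 
          |     asyncio.run(main())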
        
       | rntksi wrote:
        | What do you usually look for after a trace? I know how to
        | trace, and I know the general gist of it in terms of looking at
        | the % ratios to see which function calls are hogging resources,
        | but other than that I don't know what insights one could infer
        | from a trace result. Can you share your knowledge on this?
        
         | p403n1x87 wrote:
          | You'd normally look at a more convenient visualisation of this
          | data, usually in the form of a flame graph, which shows you
          | aggregated figures for functions as well as the actual call
          | stacks. Even if you don't care much about the percentages, you
          | can still learn a lot about how a program works (especially if
          | you're not familiar with it) just by looking at the call
          | stacks. The VS Code extension makes this quite easy to do
          | since you can click on the interactive flame graph to hop
          | directly to the source code and peek around. See for example
          | https://p403n1x87.github.io/how-to-bust-python-performance-i...,
          | which in passing shows you how pytest works internally.
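          | 
          | If you want to poke at the raw numbers yourself, a minimal
          | sketch (assuming the usual collapsed-stack format, i.e. lines
          | of the form "frameA;frameB;frameC <count>") that sums own and
          | total samples per frame looks like this:
          | 
          |     # aggregate.py -- per-frame totals from collapsed stacks
          |     import sys
          |     from collections import Counter
          | 
          |     own, total = Counter(), Counter()
          | 
          |     for line in sys.stdin:
          |         stack, _, count = line.rpartition(" ")
          |         if not stack:
          |             continue
          |         n = int(count)
          |         frames = stack.split(";")
          |         own[frames[-1]] += n        # leaf frame: own time
          |         for frame in set(frames):   # any frame: total time
          |             total[frame] += n
          | 
          |     for frame, n in total.most_common(20):
          |         print(f"{n:>12} {own[frame]:>12}  {frame}")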
        
         | omegalulw wrote:
          | Not OP, but that is already a LOT of information. Usually you
          | see which methods are hogging the CPU when they shouldn't be,
          | and optimize those. The nice part is that you can drill in at
          | various levels of granularity. Really bleeding-edge stuff
          | would be things like minimizing copies and sharing memory
          | within kernels, vectorizing, etc.
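          | 
          | A typical example of the kind of fix a trace points you to
          | (illustrative): replacing a Python-level loop with a
          | vectorized NumPy call moves the time from many interpreter
          | frames into a single native frame:
          | 
          |     import numpy as np
          | 
          |     x = np.random.rand(1_000_000)
          | 
          |     # Python-level loop: shows up as hot interpreter frames.
          |     def slow_sum_of_squares(arr):
          |         total = 0.0
          |         for v in arr:
          |             total += v * v
          |         return total
          | 
          |     # Vectorized: the work lands in one native BLAS/ufunc
          |     # frame instead.
          |     def fast_sum_of_squares(arr):
          |         return float(np.dot(arr, arr))
          | 
          |     print(slow_sum_of_squares(x), fast_sum_of_squares(x))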
        
       ___________________________________________________________________
       (page generated 2021-09-28 23:02 UTC)