[HN Gopher] Alice's Adventures in a Differentiable Wonderland
       ___________________________________________________________________
        
       Alice's Adventures in a Differentiable Wonderland
        
       Author : henning
       Score  : 119 points
       Date   : 2025-06-30 18:02 UTC (3 days ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | fossa1 wrote:
        | Glad to see JAX featured alongside PyTorch. JAX still feels like
        | the best-kept secret in deep learning.
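
        A minimal sketch of the kind of workflow that makes JAX appealing
        (illustrative only; the loss function and data below are made up,
        not taken from the book or the thread):

          import jax
          import jax.numpy as jnp

          def loss(w, x, y):
              # squared-error loss for a toy linear model
              pred = jnp.dot(x, w)
              return jnp.mean((pred - y) ** 2)

          # compose transformations: differentiate w.r.t. w, then JIT-compile
          grad_fn = jax.jit(jax.grad(loss))

          w = jnp.zeros(3)
          x = jnp.ones((8, 3))
          y = jnp.ones(8)
          print(grad_fn(w, x, y))  # gradient with the same shape as w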
        
       | ProofHouse wrote:
        | Damn, beefy. I'll need a month at ten pages a day. Thanks, this
        | looks awesome. Diffusion could be appended too, eventually.
        
       | superjose wrote:
        | Wow, kudos to the author. Very easy to digest, beautifully
        | crafted, and it takes the time to explain the concepts that most
        | places take for granted.
        
       | magnio wrote:
       | This looks like a good practical companion for a more theoretical
       | text, such as Deep Learning by Bishop.
        
       | kittikitti wrote:
        | Although I love this, it's not peer-reviewed, and I don't trust
        | arXiv.
        
         | SiempreViernes wrote:
          | Actually, it _is_ peer-reviewed, following the standard
          | practice for books: some other people read it and provided
          | feedback, as evidenced by the Acknowledgments section.
        
         | esafak wrote:
         | People are submitting corrections:
         | https://www.sscardapane.it/assets/alice/errata_list.pdf
        
         | odyssey7 wrote:
         | It's more a book than academic research.
         | 
          | The funny thing about books is that authors in free societies
          | are allowed to self-publish whatever they want. The norms are
          | different and, frankly, more democratic, with less gatekeeping.
        
         | ethan_smith wrote:
          | arXiv is a preprint server the scientific community has
          | trusted for decades. Papers there often undergo peer review
          | later, and many top ML researchers publish their work there
          | first for faster dissemination.
        
       | _giorgio_ wrote:
        | The author's website, with more material and lab sessions:
       | 
       | https://www.sscardapane.it/alice-book/
       | 
       | https://sscardapane.notion.site/Guided-lab-sessions-18c25bd1...
        
       | odyssey7 wrote:
        | It would be nice if arXiv included a small-layout PDF or native
        | EPUB option for e-readers. Now that they serve the TeX files and
        | are experimenting with HTML, it feels like a natural step.
        
       | 0cf8612b2e1e wrote:
       | The corresponding row vector is denoted by x^T when we need to
       | distinguish them. We can also ignore the transpose for
       | readability, if the shape is clear from context.
       | 
       | I am tilting at windmills, but I am continually annoyed at the
       | sloppiness of mathematicians in writing. Fine, you don't like
       | verbosity, but for didactic purposes, please do not assume the
       | reader is equipped to know that variable x actually implies
       | variable y.
       | 
        | All that being said, the writing style in the first chapter is
        | very encouraging as to how approachable this will be.
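
        For readers who want the quoted convention spelled out, here is one
        explicit way to state it (my paraphrase, not the book's wording): x
        is a column vector by default, and x^T is the corresponding row
        vector.

          \[
          \mathbf{x} =
          \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
          \in \mathbb{R}^{n \times 1},
          \qquad
          \mathbf{x}^\top = (x_1, x_2, \dots, x_n) \in \mathbb{R}^{1 \times n}.
          \]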
        
         | JadeNB wrote:
         | > I am tilting at windmills, but I am continually annoyed at
         | the sloppiness of mathematicians in writing. Fine, you don't
         | like verbosity, but for didactic purposes, please do not assume
         | the reader is equipped to know that variable x actually implies
         | variable y.
         | 
          | I am a practicing mathematician who felt the same way you do
          | when I started, and who still writes papers in a way that many
          | of my colleagues find gallingly pedantic. With those
          | credentials, I hope I may say that it can be much worse for a
          | reader when every detail is spelled out, because a bit of
          | syntactic sugar begins to seem as important as the heart of an
          | argument. Where the dividing line between precision and
          | obfuscation lies depends on the reader, so it will inevitably
          | leave some readers on the wrong side, but a trade-off does have
          | to be made somewhere.
        
         | runeblaze wrote:
         | It is weird to be honest. I first learned Coq and then started
         | taking upper level maths classes. My group theory proofs were
         | panned by my TAs as overly verbose, very precise, and I was
         | specializing on H_1 and H_2s everywhere and having IHns flying
         | around like crazy because I _could not fathom_ how one proves
         | things without formally connecting things up.
         | 
         | Then my profs told me I was not "wrong", but proofs or
         | expositions are to most mathematicians not programs (ha! How
         | did I not know. You teach me natural deduction and expect me
         | _not_ to program?), more like convincing arguments /prose. At
         | some point one abstracts.
        
       | bwfan123 wrote:
       | this 3 page classic [1] captures most of the core ideas and
       | explains it in a manner anyone with basic calculus background can
       | understand - "Learning representations by back-propagating
       | errors"
       | 
       | [1] https://gwern.net/doc/ai/nn/1986-rumelhart-2.pdf
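
        As a rough illustration of the idea in that paper, here is a
        minimal NumPy sketch of back-propagation for a tiny two-layer
        network (my own shapes and variable names, not the paper's
        notation):

          import numpy as np

          rng = np.random.default_rng(0)
          X = rng.normal(size=(32, 4))        # inputs
          Y = rng.normal(size=(32, 1))        # targets
          W1 = 0.1 * rng.normal(size=(4, 8))  # first-layer weights
          W2 = 0.1 * rng.normal(size=(8, 1))  # second-layer weights
          lr = 0.1

          for _ in range(100):
              # forward pass
              H = np.tanh(X @ W1)             # hidden activations
              P = H @ W2                      # predictions
              loss = np.mean((P - Y) ** 2)

              # backward pass: apply the chain rule layer by layer
              dP = 2 * (P - Y) / len(X)       # dLoss/dP
              dW2 = H.T @ dP                  # dLoss/dW2
              dH = dP @ W2.T                  # dLoss/dH
              dW1 = X.T @ (dH * (1 - H**2))   # tanh'(z) = 1 - tanh(z)^2

              # gradient-descent update
              W1 -= lr * dW1
              W2 -= lr * dW2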
        
       | dunefox wrote:
       | And I just bought the physical book...
        
       ___________________________________________________________________
       (page generated 2025-07-03 23:01 UTC)