[HN Gopher] Understanding Deep Learning
___________________________________________________________________
Understanding Deep Learning
Author : georgehill
Score : 107 points
Date : 2023-11-26 20:54 UTC (2 hours ago)
(HTM) web link (udlbook.github.io)
(TXT) w3m dump (udlbook.github.io)
| WeMoveOn wrote:
| lit
| msie wrote:
| This book looks impressive. There's a chapter on the unreasonable
| effectiveness of Deep Learning which I love. Any other books I
| should be on the lookout for?
| nootopian wrote:
| https://news.ycombinator.com/item?id=38425368
| ldjkfkdsjnv wrote:
| I spent a decade working on various machine learning platforms at
| well known tech companies. Everything I ever worked on became
| obsolete pretty fast. From the ML algorithm to the compute
| platform, all of it was very transitory. That, coupled with the
| fact that a few elite companies are responsible for all ML
| innovation, makes it seem pointless to me to even learn a lot of
| this material.
| drBonkers wrote:
| What would you recommend someone read instead?
| ldjkfkdsjnv wrote:
| Better to understand the bounds of what's currently possible,
| and then recognize when that changes. Much more economically
| valuable.
| probablynish wrote:
| Do you think there's a better way to do this than spending
| some time playing around with the latest releases of
| different tools?
| reqo wrote:
| Very few things stay the same in technology. You should think
| of technology as another type of evolution! It is driven by the
| same type of forces as evolution, IMO. I think even Linus
| Torvalds once stated that Linux evolved through natural
| selection.
| nabla9 wrote:
| >machine learning platforms
|
| Machine learning platforms become obsolete.
|
| Machine learning algorithms and ideas don't. If learning SVM or
| Naive Bayes did not teach you things that are useful today, you
| didn't learn anything.
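To make the point concrete, here is a minimal from-scratch sketch of Gaussian Naive Bayes (function names like `fit_gaussian_nb` are my own illustration, not from the book or thread). The transferable ideas it exercises: estimating parameters from data, scoring classes by log-likelihood, and picking the argmax, the same pattern that reappears as softmax plus cross-entropy in deep learning.

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Estimate per-class priors, feature means, and feature variances."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),        # prior P(y=c)
                     Xc.mean(axis=0),         # per-feature means
                     Xc.var(axis=0) + 1e-9)   # per-feature variances (smoothed)
    return params

def predict(params, x):
    """Pick the class maximising log P(y) + sum_i log P(x_i | y)."""
    scores = {}
    for c, (prior, mu, var) in params.items():
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        scores[c] = np.log(prior) + log_lik
    return max(scores, key=scores.get)

# Two well-separated 2-D clusters as toy data.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y = np.array([0, 0, 1, 1])
params = fit_gaussian_nb(X, y)
print(predict(params, np.array([0.05, 0.1])))  # prints 0 (nearest cluster)
```

The argmax over per-class log-scores is exactly what a softmax classifier computes at inference time, which is one sense in which learning "obsolete" algorithms still pays off.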
| xcv123 wrote:
| Agreed. Look at the table of contents of this book. Whatever
| fundamental machine learning concepts you learned with SVM or
| other obsolete algorithms are still useful and applicable
| today.
| ldjkfkdsjnv wrote:
| Nobody is building real technology with either of those
| algorithms. Sure, they are theoretically helpful, but they
| aren't valuable anymore. Spending your precious life learning
| them is a waste.
| xcv123 wrote:
| So what? The same fundamental machine learning concepts are
| still relevant to deep learning.
|
| It's almost like arguing that everything you learned as a
| Java developer is completely useless when a new programming
| language replaces it.
| HighFreqAsuka wrote:
| Quite a lot of techniques in deep learning have stood the test
| of time at this point. Also, new techniques are developed either
| depending on or trying to solve deficiencies in old
| techniques. For example, Transformers were developed to solve
| vanishing gradients in LSTMs over long sequences and to improve
| GPU utilization, since LSTMs are inherently sequential in the
| time dimension.
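The structural contrast described above can be sketched in a few lines of NumPy (this is my own illustrative simplification of recurrence and self-attention, not code from any of the discussed systems): the recurrent update must loop over time steps, while attention connects all positions in one batched matrix product.

```python
import numpy as np

T, d = 8, 4                       # sequence length, hidden size
rng = np.random.default_rng(0)
x = rng.standard_normal((T, d))   # toy input sequence

# Recurrent-style update: each hidden state depends on the previous one,
# so the T steps cannot run in parallel, and gradients flow back through
# T repeated multiplications -- the vanishing-gradient path.
W = rng.standard_normal((d, d)) * 0.1
h = np.zeros(d)
for t in range(T):                # inherently sequential in time
    h = np.tanh(x[t] + W @ h)

# Self-attention-style update: every position attends to every other in
# one matrix product, so all T positions compute at once and any two
# positions are connected by a single step.
def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

scores = x @ x.T / np.sqrt(d)     # (T, T) pairwise similarities
out = softmax(scores) @ x         # (T, d), computed in parallel
```

The loop is what GPUs hate and the single matmul is what they love, which is the "hardware lottery" point made elsewhere in this thread about RNNs.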
| ldjkfkdsjnv wrote:
| Sure, but if you were an expert in LSTMs, that's nice: you know
| the lineage of algorithms. But it probably isn't valuable,
| companies don't care, and you can't directly use that
| knowledge. You would never just randomly study LSTMs now.
| HighFreqAsuka wrote:
| Transformers have disadvantages too, and so LSTMs are still
| used in industry. But also it's not that hard to learn a
| couple new things every year.
| water-your-self wrote:
| No chapter on RNNs, but one on transformers is interesting,
| having last read Deep Learning by Ian Goodfellow in 2016.
| nothrowaways wrote:
| Yeah, content looks interesting.
| PeterisP wrote:
| RNNs have "lost the hardware lottery" by being structurally not
| that efficient to train on the cost-effective hardware that's
| available. So they're not really used for much right now,
| though IMHO they are conceptually interesting enough to cover
| in such a course.
| nsxwolf wrote:
| As someone who missed the boat on this, is learning about this
| just for historical purposes now, or is there still relevance to
| future employment? I just imagine that OpenAI eats everyone's
| lunch with regards to anything AI-related. Am I way off base?
| ksherlock wrote:
| Maybe last week's drama should have been a left-pad moment. For
| many things you can train your own NN and be just as good
| without being dependent on internet access, third parties, etc.
| Knowing how things work should give you insight into using them
| better.
| lamroger wrote:
| I wonder if using APIs was more of a first-to-market move.
| mnky9800n wrote:
| Which drama of last week are you referring to? The one about
| the openai guy saying it's all just the data set? Or
| something else?
| adamnemecek wrote:
| All machine learning is Hopf convolution, analogous to
| renormalization. This should come as no surprise: renormalization
| can be modeled via the Ising model, which itself is closely
| related to Hopfield networks, which are recurrent networks.
| dchuk wrote:
| Hopefully not a dumb question: how do I buy a physical copy?
| rossant wrote:
| It'll be published in a few days:
| https://mitpress.mit.edu/9780262048644/understanding-deep-le...
___________________________________________________________________
(page generated 2023-11-26 23:00 UTC)