[HN Gopher] My Favorite Book on AI
___________________________________________________________________
My Favorite Book on AI
Author : f1shy
Score : 100 points
Date : 2025-01-05 16:28 UTC (1 day ago)
(HTM) web link (www.gatesnotes.com)
(TXT) w3m dump (www.gatesnotes.com)
| thorum wrote:
| > The historian Yuval Noah Harari has argued that humans should
| figure out how to work together and establish trust before
| developing advanced AI. In theory, I agree. If I had a magic
| button that could slow this whole thing down for 30 or 40 years
| while humanity figures out trust and common goals, I might press
| it. But that button doesn't exist. These technologies will be
| created regardless of what any individual or company does.
|
| Is that true, though? Training runs for frontier models don't
| happen without enormous resources and support. You don't run one
| in your garage. It doesn't happen unless people make it happen.
|
| Is this really a harder coordination problem than, say, stopping
| climate change, which Gates does believe is worth trying?
| liuliu wrote:
| Genuinely asking, are we still trying to "stop" climate
| change anymore, or are we already in the "easing the incoming
| pain" phase?
| cyberpunk wrote:
| We have a PhD student ecologist in the family and they've
| lost all hope. It's quite depressing. It doesn't look like the
| next round of world governments has it high on the agenda
| either.
| zombiwoof wrote:
| Isn't it sad that our government and companies quickly
| solved AI's energy problems by bootstrapping nuclear again,
| but in 30 years can't make a single unified decision on
| anything else?
| TeMPOraL wrote:
| It'll be ironic if AI solves climate change by just
| existing, i.e. by giving the world a compelling reason to
| stop hiding behind austerity and anti-growth bullshit
| rhetoric, and to double down on energy production
| instead, thus forcing us to tackle the "how do we do it
| without cooking ourselves" problem up front, since it
| cannot be deferred.
| cyberpunk wrote:
| It won't start moving until the boomer generation are
| out, I think. Let's just hope we can do better when it's
| our turn I guess.
| beezlebroxxxxxx wrote:
| Powerful people, and wealthy ones in particular, are more
| interested in the latter than the former. When vast wealth is
| built off of maintaining the status quo, there is very little
| incentive for implementing changes that threaten the status
| quo "organically."
|
| I get hope when I read this essay from Harper's [0]. But I
| actually think it will be more like Paolo Bacigalupi's "The
| Water Knife" [1].
|
| [0]: https://harpers.org/archive/2021/06/prayer-for-a-just-
| war-fi...
|
| [1]: https://en.wikipedia.org/wiki/The_Water_Knife
| basch wrote:
| Do you mean stop as in reverse the trend by limiting bad
| behavior, or stop as in climate engineering? There is always
| the option to use science to modify the climate on purpose.
| add-sub-mul-div wrote:
| Sam Altman convincing the ultra-rich that AI is the next best
| place to put their excess money, after they've run out of
| other places to put it, is not anywhere near the spirit of
| what he meant by people "working together." He's saying that,
| in a dysfunctional society, AI will only magnify the
| dysfunction.
|
| Fighting climate change is a means towards the end of having a
| livable environment; developing AI is a means towards the end
| of having a better society. But, whereas fixing the environment
| would be its own automatic benefit, having AGI would not
| automatically improve the world. Something as seemingly
| innocuous and positive as social networking made a lot of
| things worse.
| zombiwoof wrote:
| This rings true. The internet exposed the dysfunctional
| wealth inequality (look at all the billionaires it created),
| social networks exposed the dysfunctional human
| communications and manipulation (i.e. Jan 6 was a love fest),
| and now AI will just take both of those to the next level:
| extreme wealth, extreme poverty, and mass manipulation of
| people.
|
| If anybody thinks AI will cure cancer or something grandiose
| like that, they are propping up their stock portfolio.
|
| AI will be the harbinger of the last wave of human growth
| before we all end up killing each other over the price of
| eggs or whatever else the AI regurgitation machine decides.
| lanthissa wrote:
| Climate change is the byproduct of the desired outcome, energy.
| Advanced AI, if you buy Yuval's argument, is the threat in and
| of itself.
|
| So Climate Change is a problem that can be 'solved' while the
| main goal is pursued. This is ideologically consistent with
| Gates' investment in TerraPower, whereas AI isn't, because
| the desired outcome is the threat, not a by-product.
|
| So your question is a bit flawed fundamentally.
|
| As for Gates' point, is it true? Almost certainly yes: the
| game theory is to pursue and lie that you aren't, or to pursue
| openly. You can't ever not pursue, because you do not and
| cannot have perfect information.
|
| Imagine how much visibility China would demand from the US to
| trust it was doing nothing, far more than they could give, and
| vice versa.
|
| Do you think the US is going to give its adversaries tracking
| and production information on its most advanced chips? It
| would never, and even if it did, why would other powers trust
| it when there's every reason to lie?
| joe_the_user wrote:
| _Climate change is the byproduct of the desired outcome,
| energy. Advanced AI, if you buy Yuval's argument, is the
| threat in and of itself._
|
| That's just a matter of how you slice your concepts. You
| could say burning oil is a threat in and of itself, for
| example. Or oppositely, "the threat of bad AI" is a byproduct
| of "useful AI".
|
| _So Climate Change is a problem that can be 'solved' while
| the main goal is pursued._
|
| I don't think many people trying to solve climate change are
| trying to end industrial society. They are trying to find an
| energy source that doesn't produce CO2 pollution.
| tim333 wrote:
| AI research is global and of strategic value with both the US
| and China competing. I don't see one stopping research while
| the other cracks ahead. Similar problems exist with curbing
| CO2, which hasn't gone very well to date.
| the_arun wrote:
| With all due respect - I would have hoped to see a list of other
| AI books reviewed with a recommendation. Currently the article
| reads like a preface for the book, "The Coming Wave".
| davideg wrote:
| I remember people on HN having a less than favorable sentiment
| about Mustafa Suleyman:
|
| https://news.ycombinator.com/item?id=39757330
|
| Regardless, he's certainly been in the right places to understand
| AI trends and Gates' write-up makes it sound like an intriguing
| distillation. Thanks for posting!
| Barrin92 wrote:
| It's an okay book but there isn't really anything in it that you
| couldn't infer after you've read the first 10%. A lot of common
| sense warnings about risks from AI, bioweapons, cyberattacks, etc.,
| but it's all very generic. There's no chapter in it that I found
| had any genuine insight. An interesting chapter would have been
| "what if I'm completely wrong and all we get is a bunch of meme
| generators and the next bubble", but that never appears to be a
| possibility.
|
| It's oddly enough the case with a lot of books that end up on
| Gates' recommended lists. I saw someone recently say, maybe a bit
| too meanly, that we might make it to AGI because Yuval Noah Harari
| keeps writing books that more and more look like they're written
| by ChatGPT and it's not entirely untrue for a lot of the stuff
| Gates recommends.
| option wrote:
| the author of that book is a _non-technical_ co-founder of
| DeepMind who currently leads Microsoft AI efforts.
| add-sub-mul-div wrote:
| Apologies if I've missed your point, but if what you're hinting
| at is that we should take his ideas about the social impact of
| AI less seriously because he's not _deep into writing Rust
| code_ all day, that's just laughable.
| asdasdsddd wrote:
| Well yes, we don't really know what this thing is yet, and
| the only ones who kind of understand it are the researchers
| themselves.
| option wrote:
| no, my point is that before reading a book it is often
| helpful to know who the author is and what their track record
| is.
| thangalin wrote:
| An alpha reader for my hard sci-fi novel wrote:
|
| "Hey just wanted to let you know I started autonoma - only
| started - I'm at page 45 now but I'm really digging this - love
| getting into a story about AI and the image of the scene in Japan
| in the game - super great - and the scene with the coy wolves-
| I'm totally in."
|
| The novel has a take on AGI and ASI that diverges from our fear
| of machines that will destroy/control/enslave humanity. I'd be
| grateful for any other alpha readers who'd like to give me their
| thoughts on the story, especially with respect to the economic
| ramifications. See my profile for contact details.
| Insanity wrote:
| The author is the head of Microsoft AI. Gates might not be
| entirely unbiased here :)
|
| edit: as a semi-related question for folks here. How often do you
| 'vet' authors of non-fiction books prior to reading the book?
| NilMostChill wrote:
| Depends on the subject matter.
|
| If it's something I have no grounding in, then understanding
| the author's potential biases is useful.
|
| If it's something I'm relatively familiar with, or close enough
| that I think I'll be able to understand the application of
| potential biases in real time, then I don't usually bother.
|
| This issue is sometimes somewhat alleviated by reading multiple
| sources for the same/similar information.
|
| YMMV
| ladyprestor wrote:
| I always do a little bit of research on the author online
| beforehand. If I'm going to read a non-fiction book I need to
| know if the author is credible.
|
| I do believe Bill Gates might be a little bit biased here. I
| read the book some months ago, and while I can't say it's a bad
| book, I wouldn't call it a favorite either.
| zeofig wrote:
| > The author is the head of Microsoft AI.
|
| I'm disappointed but not surprised.
| ccppurcell wrote:
| Funny you should say that because I just read his Wikipedia
| page and a couple of articles about him. He and Gates are
| successful salesmen and managers who dabbled in coding when
| they were young. I don't expect any insight from them about the
| effect of technologies on society or anything like that. The
| idea that they are intellectuals or scholars is laughable.
| pjmorris wrote:
| > How often do you 'vet' authors of non-fiction books prior to
| reading the book?
|
| I misread the question as 'How' rather than 'How often', but
| I'll repeat Jerry Weinberg's heuristic. He'd wait until three
| people he trusted recommended a book before reading it, as a
| way to filter for quality. He used it as a way to manage his
| limited time ("24 hours, maybe 60 good years" - Jimmy Buffett),
| but it also works to weed out books not worth mentioning.
|
| In terms of 'how often', pretty often.
| senko wrote:
| I read all the negative/neutral GoodReads comments of the book
| (and author's other books if I'm not familiar with the author,
| and maybe Wikipedia if I want to dig deeper).
|
| 99% of books I learn about from recommendations (HN, blogs,
| other books), and the pattern I see is that the
| source/recommender is usually at a similar "popsci" level.
|
| I sometimes get it wrong. In most cases I just waste a few
| hours. The worst mistake was taking Why We Sleep to heart
| before I read the rebuttal. I still think it's fine, but more
| on a Gladwell level.
|
| In Suleyman's case, I recognize the name from the Inflection
| shenanigans, so I already have a bias against the book to
| start with.
| WillAdams wrote:
| Interesting critical review at:
|
| https://www.goodreads.com/book/show/90590134-the-coming-wave
|
| >...
|
| >Given that The coming wave assumes that technology comes in
| waves and these waves are driven by the insiders, the solution it
| proposes is containment--governments should determine (via
| regulation) who gets to develop the technology, and what uses
| they should put the technology to. The assumption seems to be
| that governments can control access to natural choke points in
| the technology. One figure the book offers is how around 80% of
| the sand used in semiconductors comes from a single mine--control
| the mine and you control much of that aspect of the industry.
| This is not true though. Nuclear containment, for example, relies
| more on peer pressure between nation states than on regulation per
| se. It's quite possible to build a reactor or bomb in your
| backyard. The more you scale up these efforts, the more it's
| likely that the international community will notice and press you
| to stop. Squeezing on one of these choke points is more likely
| to move the activity somewhere else than enable you to control
| it.
|
| >...
|
| >At its heart this is a book by an insider arguing that someone
| is going to develop this world-changing technology, and it should
| be them.
| bambax wrote:
| > _I've always been an optimist, and reading The Coming Wave
| hasn't changed that._
|
| I'm not an optimist, but I fail to see the dangers of AI. I think
| it's more likely we will be wiped out by nuclear war, or climate
| change, or the collapse of biodiversity and ecosystems that
| results in worldwide famines, before AI is advanced enough to
| constitute any kind of threat to our existence.
| randcraw wrote:
| The dangers are not in AI, per se. Like nuclear fission and
| fusion, the danger is in how the technology 1) may be misused
| by corporations that are clueless about or indifferent to the
| damage it can inflict, and 2) surely will be deregulated by
| the increasingly stupid and malignant boobs infecting
| Washington.
| Bjorkbat wrote:
| Can't take this seriously knowing that this is the same Mustafa
| Suleyman who...
|
| - Was basically acqui-hired by Microsoft from Inflection AI (seems a
| little biased to recommend a book from one of your own)
|
| - Left DeepMind due to allegations of bullying
| (https://en.wikipedia.org/wiki/Mustafa_Suleyman#DeepMind_and_...)
|
| - Allegedly yelled at OpenAI employees because they weren't
| sharing technologies frequently enough
| (https://www.nytimes.com/2024/10/17/technology/microsoft-open...)
|
| But what do I know, maybe if I read it and regurgitate its
| contents in a not-too-obvious way I can get an AI policy job.
| tim333 wrote:
| Suleyman seems to have got ahead in AI basically by being mates
| with Demis Hassabis and joining him in founding the company. He
| doesn't seem to have achieved much in actual AI, more things
| like setting up a "Muslim Youth Helpline" and being a "policy
| officer on human rights for Ken Livingstone."
| zombiwoof wrote:
| How can anyone take Mustafa Suleyman seriously?
| HPMOR wrote:
| This book is middling at best. From a literary perspective it's
| terrible. It reads like GPT-3.5 wrote it. Now from an
| introduction to a new idea perspective and understanding how AI
| will affect society, it's __fine__. The book is full of
| contradictions. Suleyman regularly points out how we've __never__
| successfully constrained a disruptive technological innovation
| and then says we NEED to here. I mean, absolutely absurd stuff.
| It ostensibly ends on an optimistic note, but is actually much more
| nihilistic.
| mjfl wrote:
| The begging for regulatory capture in the AI business is so
| egregious that I think the government should nationalize all
| the AI companies 'to keep us safe' but really to prevent these
| shysters from making money.
| Sverigevader wrote:
| Hmm, didn't we do this for cloning? I remember hearing about
| this on Lex Fridman when he interviewed Max Tegmark.
|
| If I recall correctly, the entire world is in agreement that
| cloning is illegal, and some people in China (could be just
| one) even went to prison for it.
| LPisGood wrote:
| If you could use cloning to make lots of money, more labs
| would be doing it.
| AlexCoventry wrote:
| You probably _could_ make lots of money from human cloning,
| if it weren't illegal?
| WalterBright wrote:
| If I could clone myself, I'd make a few bug fixes.
| WalterBright wrote:
| > It reads like GPT-3.5 wrote it
|
| These days it would be surprising if an author didn't generate
| at least some of the text with AI, or direct an AI to improve
| the prose.
| amelius wrote:
| From the guy who didn't see the internet coming ...
| thomassmith65 wrote:
| Gates wrote his 'internet tidal wave' email mid 1995
| (https://wired.com/2010/05/0526bill-gates-internet-memo/) which
| was only two years after NCSA publicly released Mosaic.
|
| By the late 1990s, Microsoft's competition (including Netscape
| and Apple) was nearly dead. In fact, the browser that Apple
| originally shipped with OS X was M$ Internet Explorer.
|
| Gates was several months late to the web, but it's not like he
| missed the boat.
| wrs wrote:
| I went from Apple to Microsoft in 1995 partially because
| Microsoft was so far ahead of Apple on the Internet. At the
| time, Apple was entirely concerned with promoting eWorld.
| (Ironic because Apple had a /8 IP allocation and was
| processing something like a quarter of Usenet traffic well
| before this.) They both had to get out of their walled garden
| "compete with AOL" models, but MSFT did it faster.
| thomassmith65 wrote:
| The other problem with the Mac in those years was that
| there was no decent web browser.
|
| Windows MSIE eventually surpassed the usability,
| functionality and popularity of Netscape, but Microsoft's
| Mac version of MSIE did not.
|
| In the late 1990s, many websites did not render or function
| correctly on Macintosh.
| fsckboy wrote:
| Absolutely everybody in software was talking about the
| internet in 1995, and for the most part already on the
| internet, and companies were already pivoting to the internet
| left and right. It made me realize how discretionary all the
| work we do in these large industries is: whatever we were
| working on in 1993 and 1994 no longer mattered; we were now
| working on making whatever assets we had compatible with the
| internet.
| saneshark wrote:
| I just finished listening to it on Audible. It is certainly
| thought-provoking, but full of contradictions, as others have
| mentioned: namely, that this technology cannot be contained and
| yet must be contained. Pretty doom and gloom. The
| prognostications about artificial intelligence are hardly as
| scary as the ones made around genetic sequencing -- that you can
| buy a device for $30k that will print pathogens and viruses for
| you out of your garage. That's some scary stuff.
| downrightmike wrote:
| You've been able to buy plasmids and make whatever bacteria you
| want for a few decades now. AI may help, but it certainly
| doesn't cost $30k to cause mischief. Pretty sure I learned that
| in Bio 102.
| Balgair wrote:
| I just want to echo this here and in a bit different wording:
| AI will provide step-by-step guides on how to make viruses
| that just about any idiot can follow, for very cheap, and
| within a year.
|
| I really _really_ hope I'm missing something big here.
| mdorazio wrote:
| Having a step-by-step guide and _actually being able to
| follow it_ are two very different things. If you follow
| YouTube channels like The Thought Emporium you'll see how
| hard it is just to duplicate existing lab results from
| published sources in biology. To go a step further and
| create new dangerous things without also getting yourself
| killed in the process is a pretty tall order.
| Keyframe wrote:
| _Having a step-by-step guide and actually being able to
| follow it are two very different things._
|
| Exactly. We'll see how far it goes. It might be a more
| elaborate "draw the rest of the owl" guide, like:
|
| 1. obtain uranium-238
|
| 2. fire up the centrifuge for isotope separation
|
| 3. drop yellowcake into it
|
| 4. collect uranium-235
|
| ...
| downrightmike wrote:
| You missed the part where you turn uranium metal into a
| gas for the centrifuge to work in the first place
| energy123 wrote:
| We should be talking about the more abstract problem of
| asymmetric defense and offense.
|
| Imagine that nukes were easy to make with household
| items. That would be a scenario where offense is easy but
| defense is hard. And we would not exist as a species
| anymore.
|
| Once a hypothetical technology like this is discovered
| one time, it's not possible to put the genie back in the
| bottle without extreme levels of surveillance.
|
| We got lucky that nukes were hard to make. We had no idea
| that would be the case before nuclear physics was
| discovered, but we played Russian Roulette and survived.
| root_axis wrote:
| They said the same thing about The Anarchist Cookbook 30
| years ago.
| mjfl wrote:
| I wish he would read 'The Israel lobby' by John Mearsheimer and
| Stephen Walt. Gates has some responsibility as an elite
| billionaire to safeguard our democracy.
| jack_pp wrote:
| https://www.fimfiction.net/story/62074/friendship-is-optimal
| netfortius wrote:
| Some folks really need to read Thibault Prévost's "Les prophètes
| de l'IA" in order to comprehend the motivation behind work like
| Suleyman's.
| PunchTornado wrote:
| this is from Mustafa Suleyman, don't bother opening the link.
|
| If people don't know him, he is the classic impostor who gets
| by contributing nothing to the field but investing big in PR
| and bots.
| mistrial9 wrote:
| news flash - he got a promotion, staff and budgets since then
| randcraw wrote:
| After he was demoted at DeepMind for poor performance as a
| manager, that is. Suleyman is no technologist. He's a
| marketing promoter. He attended one year of college before
| dropping out to help sell DeepMind.
|
| You have to wonder what's going on in Gates' head these days
| to not recognize the lack of substance in such a book, and in
| its author.
|
| Far better books on the possible futures of modern AI are
| Stuart Russell's "Human Compatible" or Brian Christian's "The
| Alignment Problem", both of which predate the boom in LLMs
| but still anticipate their Achilles heel -- the inability to
| control what they learn or how they will use it.
| tim333 wrote:
| TED talk by the author, probably with similar content to the book
| https://youtu.be/KKNCiRWd_j0
| figers wrote:
| In an early chapter he talks about how well LLMs know medical
| and legal information but doesn't mention how they make things
| up... I was hoping he'd discuss the challenges and hurdles
| right away...
| DwnVoteHoneyPot wrote:
| For those negative on the book, do you have a better suggestion
| for me to read? Maybe I should just ask ChatGPT.
| greenthrow wrote:
| Bill Gates is not some great thinker. He was born on third base
| and in the right place at the right time, and was then
| absolutely ruthless once Microsoft had power in the industry.
| As a thought leader he is extremely mediocre.
| pockmarked19 wrote:
| My favorite book on AI is Sutton's "Reinforcement Learning".
| Looking just at the URL I knew this would be some pop-sci tripe,
| but I'm leaving this comment here in case people want something
| other than what they can tout on Twitter/X.
| gerash wrote:
| Is he recommending the book because MS hired Suleyman?
| jasoneckert wrote:
| While I've enjoyed the small bursts of wisdom from many of Bill
| Gates' shorter talks, I haven't found anything noteworthy in his
| written reviews and books. His viewpoints are often bizarre and
| radical. I still chuckle remembering his conviction, in his book
| "The Road Ahead" (1995), that ISDN would become the dominant
| Internet technology before the year 2000. It seemed bizarre to
| me even back then.
| talles wrote:
| This reads like a PR-written blog post recommending a PR-written
| book.
___________________________________________________________________
(page generated 2025-01-06 23:00 UTC)