[HN Gopher] AI will save the world?
       ___________________________________________________________________
        
       AI will save the world?
        
       Author : kjhughes
       Score  : 56 points
       Date   : 2023-06-06 15:56 UTC (7 hours ago)
        
 (HTM) web link (pmarca.substack.com)
 (TXT) w3m dump (pmarca.substack.com)
        
       | ambientenv wrote:
       | Wow. Spoken like someone who hopes (believes they deserve to)
       | profit from the evolving technology, who views anyone not like
       | them with bemusement and detached curiosity or, more likely,
       | derision. Why do we - humans - feel it is so damn well
       | appropriate to outsource our responsibilities and
        | accountabilities as humans and integral members of an ecology to
       | technology and, in doing so, forego a necessary immersion into
       | and deep reverence for the world, substituting instead a tech-
       | derived and mediated superficiality, detaching ourselves from our
       | biology mostly for the sake of self-gratification and self-
       | grandeur? The bigger question is, what values do we - the
       | collective we - attribute to a world and a life to be saved and
       | will our AI adhere to such values?
        
         | BobbyJo wrote:
         | > Why do we - humans - feel it is so damn well appropriate to
         | outsource our responsibilities and accountabilities as humans
          | and integral members of an ecology to technology and, in doing
         | so, forego a necessary immersion into and deep reverence for
         | the world, substituting instead a tech-derived and mediated
         | superficiality, detaching ourselves from our biology mostly for
         | the sake of self-gratification and self-grandeur?
         | 
         | I mean, the simple and inelegant answer is evolution. Maximize
         | mating opportunity and minimize energy expenditure. Grandeur
         | means mating opportunity. Passing off responsibility means
         | minimizing energy expenditure.
         | 
         | Humans aren't transcendent beings. We're just good at math.
         | 
         | > The bigger question is, what values do we - the collective we
         | - attribute to a world and a life to be saved and will our AI
         | adhere to such values?
         | 
         | Ask different groups of people and you'll get different
         | answers. I don't know that there are "human" values.
        
       | fsflover wrote:
       | Another ongoing discussion:
       | https://news.ycombinator.com/item?id=36214901.
        
       | kkfx wrote:
        | IMVHO: those who cry "AI will destroy everything" AND those who
        | equally cry "AI will make everything better" are both largely
        | wrong for the present and still wrong for the medium and long
        | run.
        | 
        | Today's "AI" systems are nice automatic summarization tools,
        | with big limits and issues. They might be useful in limited
        | scenarios, like designing fictional stuff or serving as a
        | "search engine on steroids" (as long as their answers are
        | true). Essentially they _might_ help automation a bit.
       | 
        | The BIG, BIG, BIG issue is who trains them AND how we can
        | verify them. If people get into the habit of taking any
        | "answer" for truth, their grasp on reality would be even weaker
        | than with today's online misinformation, and that can go far
        | beyond news (imagine the consequences of a false medical
        | imaging analysis). Verification is even more complex: not only
        | can't we train at home with interesting results, we also can't
        | verify the mass of training material for truth. Imagine the
        | classic Edward Bernays "dummy" scientific journal publishing
        | some _true_ papers and some false ones stating that smoking is
        | good for your health...
        | https://www.apa.org/monitor/2009/12/consumer Now imagine the
        | effect of carefully slipped false material in the big "ocean"
        | of data...
        
         | dinvlad wrote:
         | They are trained using input from an army of underpaid "ghost
         | workers", i.e. people without many rights or economic freedom,
         | and no consideration for their well-being.
         | 
         | Nothing good can be grown on such ground.
        
       | golergka wrote:
       | [flagged]
        
       | AdamH12113 wrote:
       | I was expecting this to be "person who expects to make enormous
       | sums of money from AI thinks AI is a great idea", but somehow it
       | was worse. For instance, I did not expect it to explicitly call
       | for a new cold war against China that pits their authoritarian
       | vision for AI against a _completely unregulated, corporate
       | profit-driven_ vision for AI. The lack of self-awareness there is
       | mind-blowing.
       | 
       | The author also thinks that wages have grown rapidly over the
       | last few decades in proportion to labor productivity (!!!), that
       | outsourcing and automation have turned out to be great for
       | everyone, that AI replacing existing labor en masse would
       | immediately and obviously result in more and higher-paying jobs,
       | and that concerns about rising inequality and billionaires
       | sucking up all the new wealth are literally Marxism and should
       | thus be dismissed outright. (His links supporting this mostly go
       | to conservative think tanks and Wikipedia articles on economics,
       | in case you were wondering.)
       | 
       | Meanwhile, the upsides of AI are described using language like
       | this:
       | 
       | > Every child will have an AI tutor that is infinitely patient,
       | infinitely compassionate, infinitely knowledgeable, infinitely
       | helpful. The AI tutor will be by each child's side every step of
       | their development, helping them maximize their potential with the
       | machine version of infinite love.
       | 
       | > Every person will have an AI
       | assistant/coach/mentor/trainer/advisor/therapist that is
       | infinitely patient, infinitely compassionate, infinitely
       | knowledgeable, and infinitely helpful. The AI assistant will be
       | present through all of life's opportunities and challenges,
       | maximizing every person's outcomes.
       | 
       | Which (taken literally) is too optimistic for Star Trek, much
       | less real life. Viewed through the lens of Silicon Valley venture
       | capitalism and its products, it is terrifyingly dystopian.
       | 
       | I'll leave you to read the part where regulation is literally a
       | reenactment of prohibition for yourself.
       | 
       | With all the straw-manning, it didn't even touch on more
       | realistic problems like effortless, undetectable cheating on
       | school homework or the proliferation of circular references.
        
       | te_chris wrote:
        | I just knew a take from a16z was going to be worthless. Then I
        | read it to confirm.
        
       | lxgr wrote:
       | > Every child will have an AI tutor [...]
       | 
       | Is anybody keeping score on Neal Stephenson novel plot points
       | becoming real-world news in 2023? ("The Diamond Age" in this
       | case.)
        
         | jes5199 wrote:
         | we actually are going to enter a "diamond age" as processes
         | like C2CNT make direct-air carbon molecules cheaper than glass.
         | I don't know if goes mainstream this year though
        
       | mordae wrote:
        | We overestimate impacts in the short term and underestimate
        | impacts in the long term. It's part of the hype cycle.
        | 
        | 1. Regulatory capture is a relevant worry.
        | 
        | 2. We will see a lot of ad-infested disinformation ML around.
        | 
        | 3. FLOSS will help to keep the big players in check A LOT.
        | 
        | 4. Hardware won't matter in 15 years. We'll be crowdfunding
        | GPUs by then, possibly using some upcoming libre ML-assisted
        | CAD.
        
       | Havoc wrote:
       | This seems to be aggressively conflating strong and weak AI?
       | 
       | Talks about weak AI when describing it:
       | 
       | >AI is a computer program like any other - it runs, takes input,
       | processes, and generates output.
       | 
        | and uses that to dismiss fears about something completely
        | different (strong-AI dangers, a la Musk) as being irrational.
        | 
        | Very strange, given that Marc is presumably aware of the
        | distinction. Smells of ulterior motive, frankly.
        
       | TheDudeMan wrote:
       | > "What explains this divergence in potential outcomes from near
       | utopia to horrifying dystopia?"
       | 
       | > "Historically, every new technology that matters, from electric
       | lighting to automobiles to radio to the Internet, has sparked a
       | moral panic"
       | 
       | LMAO. Yeah, totally the same thing.
        
       | srameshc wrote:
        | My insignificant take is that AI will neither save nor destroy
        | the world. Like the internet, it will aid us in our modern
        | society, and for some it will probably be a means of causing
        | nuisance for others. A new set of problems for a new set of
        | solutions.
        
       | seydor wrote:
        | I enjoyed this unapologetic retort to the prevailing media
        | doomerism.
        | 
        | I guess the title "AI will eat the world" was rejected as it
        | conflicts with the message.
        | 
        | This whole thing is entertaining but doesn't matter. It's just
        | an extension of the culture wars to a new domain. Whatever
        | regulation people come up with will be useless, as AI hasn't
        | really taken its final shape. And it's not like regulation
        | stopped most of the tech of the past 3 decades from forming.
        
       | revskill wrote:
        | The biggest problem in AI, to me, is how to solve the training
        | problem.
        | 
        | Let's say I have some small dataset to train my robot.
        | 
        | Day by day, I can teach it more things. But I want it to
        | "learn once and create 10 times".
        | 
        | That's how the human brain achieved intelligence despite being
        | trained on relatively little data.
        
       | nologic01 wrote:
       | AI is talked about as a singular something but arguably it is
       | just the current stage in the long running process of software
       | processing more _data_ with ever more elaborate algorithms.
       | 
       | As such AI inherits all the "world-saving capabilities" of
       | software which, empirically, are not exactly overwhelmingly
       | proven.
       | 
       | Ergo, AI will not save the world any more than software as a
       | whole saved the world in the past half century. That historical
       | track record is the best guide we have as to what role AI will
       | play. Ceteris paribus the future will not be different from the
       | past because of AI. AI is a different CSS applied to the same
       | HTML.
       | 
       | The only thing that can save the world is human wisdom, which
        | _is_ a software of sorts, but alas after several millennia of
       | recorded history not yet fully understood.
       | 
       | Can "AI 3.0" help with enhance human wisdom? Of course it can.
       | But so could have AI 1.0, 2.0 etc. and it didn't happen.
        
       | TradingPlaces wrote:
       | Came for everyone ripping on a16z. Was not disappointed. Thanks
       | y'all.
        
       | tjpnz wrote:
       | >Every child will have an AI tutor that is infinitely patient,
       | infinitely compassionate, infinitely knowledgeable, infinitely
       | helpful. The AI tutor will be by each child's side every step of
       | their development, helping them maximize their potential with the
       | machine version of infinite love.
       | 
       | This bothers me.
       | 
       | Who trains these AI tutors and how do we prevent the system
       | they're embedded in from churning out the same cookie cutter
       | individuals, each with the exact same political beliefs and
       | inability to comprehend the grey and nuanced?
       | 
       | Do we even want perfect tutors at all? The lessons I remember
       | from school didn't always come from the best and brightest the
       | profession had to offer. I would even go as far as to say that
       | some were rather flawed individuals, in one way or another. That
       | "wisdom" though has shaped me for the better as an individual.
        | You're not going to find that in any textbook, much less an LLM.
        
         | GCA10 wrote:
         | Amen on the value of imperfect tutors. I'd say that the best
         | moments of early adolescence come when you dispute something
          | with an adult -- and are able to establish that they're wrong
         | and you're right.
         | 
         | Later on, we have to learn how to do this delicately enough
         | that we don't make enemies. But the journey to adulthood takes
         | a big step forward during those early rushes of realizing that
         | we can see/recognize things that our elders cannot.
        
         | Karrot_Kream wrote:
         | > Who trains these AI tutors and how do we prevent the system
         | they're embedded in from churning out the same cookie cutter
         | individuals, each with the exact same political beliefs and
         | inability to comprehend the grey and nuanced?
         | 
         | I understand you're trying to dig into some idea of political
         | homogenization and bias by pushing this point but I really
         | think you're missing the forest for the trees.
         | 
         | I grew up in a low income area in the US and the standard of
         | public education was abysmal. The teachers were overworked with
         | huge class sizes and they spent so much time helping kids
         | simply graduate that they had no time to help any student who
         | was average or above. If you learned in a non-standard way,
         | forget about it. Forget "cookie cutter individuals", half the
          | time the teachers would show up and not actually teach (they'd
          | put some music on and sit and work on other things). "Perfect
          | tutors"? In an era before digitized gradebooks, teachers could
         | give you whatever scores they wished. Even the teachers that
         | were trying their best just didn't have the time to do any more
         | than the absolute basics.
         | 
         | I studied for SATs and AP exams by downloading textbooks from
         | filesharing websites and studying those. Wikipedia filled in
         | the gaps for non-STEM subjects. Nobody at my school could help
          | me. At the time, all the teachers and counselors knew was
          | that I was going to be okay without any help from them.
         | 
         | I can only see AI tutors as much better than the nothing that
         | often is public education. The status quo is just overworked,
         | ineffective teaching. Being able to bounce off questions from
         | an AI tutor when it's late and you're up with homework and your
         | parents either can't help or are hostile to your educational
         | goals (the reality in a low income area) would go a long way.
         | Of course the reality is that maybe AI tutors aren't actually
         | coming and all we'll get are LLMs trained on the contents of
         | school textbooks, but even that is a lot better than what we
         | have now. The important problem will be making sure that
         | _everyone_ gets access to these AI tutors and not just the
         | privileged in the elite schools who were going to be funneled
         | into an elite college anyway.
         | 
         | Would it be best if we were able to shrink class sizes and have
         | world-class teachers for children? Of course. Are countries
         | going to be willing to pay the tax burden needed to make this
         | happen? Probably not.
        
         | throw310822 wrote:
         | > Every child will have an AI tutor that is infinitely patient,
         | infinitely compassionate, infinitely knowledgeable, infinitely
         | helpful.
         | 
          | Which also means: infinitely capable of performing a job in
          | their place. What will these children study for? Teachers
          | have always transmitted, to the only available recipients,
          | the knowledge that would otherwise have disappeared with
          | them. When intelligent beings can simply be replicated at
          | zero cost and have no expiration, what are children even
          | learning for?
        
           | amelius wrote:
           | They learn how to maximize their happiness.
        
         | lisasays wrote:
         | Bothersome is an understatement. This is snake oil of the most
         | insidious kind.
         | 
         | Oh wait, here's another zinger: "I even think AI is going to
         | improve warfare, when it has to happen, by reducing wartime
         | death rates dramatically."
        
           | golergka wrote:
           | This is true though. Modern warfare became much less deadly
           | to civilians. Just compare Russia's total brutal destruction
            | of Mariupol with their barbaric indiscriminate artillery
           | barrages and Ukraine's almost bloodless liberation of
           | Kherson, without a single artillery or air strike on the city
           | itself -- a very clear contrast between 20th and 21st
           | century's warfare.
           | 
           | Wars will never go away, but by making them more precise and
           | intelligent, we can make them not as horrible as they were in
           | the past.
        
             | lisasays wrote:
              | But the same technology can also be used to make war even
              | more indiscriminate and deadly. Or, even where it's less
              | so, it can be used by the wrong side, and even help them
              | win in the end. While the companies that A16Z will
              | inevitably invest in just keep raking in the profits,
              | either way.
        
             | jrumbut wrote:
             | I don't think that has much to do with AI. Russia is
             | seeking to subjugate Ukraine and terrorize the population
             | into surrendering, while Ukraine is attempting to protect
             | the population.
             | 
             | If Russia had more military AI, they would use it to do
             | more of the same thing they're doing with all of their
             | current technologies.
        
             | rqtwteye wrote:
             | The "precision strikes" of the US weren't very kind to
             | Iraqi civilians either. War is horrible and it probably
             | should never be viewed as anything else. Otherwise the
             | temptations to start wars is too big for some political
             | leaders.
        
             | joe_the_user wrote:
             | There's nothing about Russia's terror bombing that's
             | related to not having advanced technology. Russia is using
             | terror bombing because their strategic calculations have
             | determined that weakening the Ukrainian nation would give
             | them a strategic advantage - they've used advanced drones
             | for this purpose as well as advanced missiles. If they had
             | even more advanced systems, they would use them similarly.
             | 
              | Perhaps you could argue an advanced social system wouldn't
              | target civilians, but that's a different issue (and still
              | a hard one).
             | 
             |  _...by making them more precise and intelligent..._
             | 
              | Precise technology is just as effective at precisely
              | targeting civilians as at precisely targeting soldiers,
              | and there has been no end of forces that view various
              | civilians as the enemy.
             | 
              | And indeed, nations have very seldom targeted civilians
              | merely because of a lack of precision - because civilians
              | were standing next to soldiers or something similar. The
              | human shield phenomenon has happened, but the Nazis
              | targeted London for bombing because they wanted to break
              | the British nation. Etc.
        
               | staunton wrote:
               | I agree with your assessment as far as Ukraine is
               | concerned.
               | 
               | However, I hope it's not a strawman to assume you're
               | arguing that there is no progress in warfare in the sense
               | of harm inflicted upon civilians. What would you prefer
                | as a civilian: living in a country being conquered by
                | Julius Caesar or Genghis Khan, occupied by the Nazis in
                | WWII, or living in any of the countries occupied since
                | WWII (including Ukraine)?
               | 
                | We even used to have a different word for it:
                | "conquered". What was the latest country in history
                | where this word would be appropriate?
        
               | joe_the_user wrote:
               | _However, I hope it 's not a strawman to assume you're
               | arguing that there is no progress in warfare in the sense
               | of harm inflicted upon civilians._
               | 
               | My point is specifically that progress in the
               | technologies of war don't by themselves promise that
               | things will be less brutal. Quite possibly other things
               | have produced progress. I make that clear in my parent
               | post.
               | 
                | I would also note that technology produces unpredictable
                | changes in the strategic situation, and the actual
                | result of a changed strategic situation is itself
                | unpredictable. So where a technology change might take
                | us is unpredictable, and unpredictable over time.
                | Notably, nuclear deterrence has so far worked well for
                | keeping the world peaceful and is something of a factor
                | in the relative pleasantness of the situation you cite.
                | But if nuclear deterrence were to slip into nuclear
                | war, the few survivors would probably think of this
                | technological advance as the worst thing the world ever
                | saw.
        
               | lisasays wrote:
               | But of course.
               | 
               | We have to remember they bombed that theater in Mariupol
               | (with a "precision" guided missile no less) not despite
               | the fact that there were children and mothers inside --
               | but because of it.
        
             | ornornor wrote:
             | I'll preface this by saying that I have never known war in
             | my lifetime and that I absolutely don't condone it.
             | 
             | That said, isn't the point of war also that it's horrible
             | and barbaric? If war isn't that anymore, won't it be much
             | more frequent and casually started as a result?
             | 
             | Again, I'm absolutely not saying it's a good thing that war
             | maims and kills people, but I see these side effects as a
             | deterrent. There is a component of terror to it and that's
             | what makes it even worse.
             | 
              | If it's enlisted professionals killing each other, or
              | even robots destroying each other, it can go on for much
              | longer. And how do you determine the "winner" in that
              | case? If you can keep feeding robots into the fight, it's
              | never-ending, I'd think.
        
               | lisasays wrote:
               | _That said, isn't the point of war also that it's
               | horrible and barbaric?_
               | 
               | Of course - that's precisely the point. The idea that any
               | technical innovation can make it less so (or make war
               | less likely) runs counter to all historical observation.
        
               | InexSquirrel wrote:
                | I'm not sure I agree with that. I don't think the _point_
                | of war is to be barbaric - that's a by-product of the
                | forceful expansion of power. Regardless of how killing
                | another human is done in the context of conflict, it will
                | always be considered barbaric, but the _point_ of the
                | conflict isn't to maximise barbarism.
               | 
                | I think (and know very little, so could be wrong) that
                | the purpose of war is to expand influence. This can
                | take the form of access to new resources (whether that
                | be land, access, air space, whatever) or of forming a
                | buffer between one country and another. There are
                | probably other reasons too, like simple ego in some
                | cases.
               | 
               | There are other ways to expand powerbases too - such as
               | China's heavy financial and infrastructure investment in
               | Africa and South Pacific nations, or attempting to
               | undermine another country's social structures. These are
                | longer and harder to implement, but yield better
                | results with practically no bloodshed.
        
               | lisasays wrote:
               | I stand corrected.
               | 
               | The point (in nearly all cases) is to win at any cost.
               | From which the practice of limitless barbarism naturally
               | follows.
        
               | sebzim4500 wrote:
               | >The idea that any technical innovation can make it less
               | so (or make war less likely) runs counter to all
               | historical observation.
               | 
               | Does it? WWII was less bloody than WW1, and nothing since
               | has had anywhere near as many deaths.
        
               | dasil003 wrote:
               | > _That said, isn't the point of war also that it's
               | horrible and barbaric? If war isn't that anymore, won't
               | it be much more frequent and casually started as a
               | result?_
               | 
               | Yes (usually), to the first question. The second begs the
               | question though.
               | 
               | Wars are destructive and enormously expensive. Only a
               | tiny fraction of human leaders wielding a
               | disproportionate amount of power have the agency to start
               | wars, and they do so in order to pursue specific (but
               | varied) objectives. Since the cost is high, no one does
               | this lightly (even the "crazy" ones, because they would
               | already have lost power if they weren't smart and
               | calculating).
               | 
               | AI may provide avenues to enhance the efficacy of wars.
               | It may also provide avenues to enhance other strategies
               | to achieve those objectives. In all cases, we can expect
               | AI will be used to further the objectives of those humans
               | with the power to direct its development and
               | applications.
               | 
               | It is therefore ludicrous and self-interested speculation
               | to claim that AI will reduce death rates. Andreessen
               | signals this with the preface "I even think" so that he
               | can make the claim without any later accountability. The
               | reality is, future wartime death rates may or may not
               | decrease, but even if they do, we likely won't even be
               | able to credibly attribute the change to AI versus all
               | other changes to the global geopolitical environment.
        
               | golergka wrote:
               | > That said, isn't the point of war also that it's
               | horrible and barbaric?
               | 
               | No. The conqueror never wants war -- he prefers to get
               | what he wants without any resistance. It's only the
               | defender who has to wage war to defend itself from
               | aggression.
        
               | panxyh wrote:
               | Wars and their winners usually _emerge_.
        
         | soco wrote:
          | Since when was something advertised as "perfect" actually
          | perfect? I'd rather worry that it will be imperfect and try
          | to teach all kinds of random weird stuff to gullible kids.
        
         | atleastoptimal wrote:
          | There are more risks from flawed teachers than from teachers
          | who are too perfect.
        
           | fsflover wrote:
           | Except that AI teachers trained by for-profit corporations
           | are far from perfect.
        
             | pelagicAustral wrote:
             | Oh let me fix that business model right away, the perfect
             | version costs 100 bucks a month.
        
               | TeMPOraL wrote:
                | And it doubles down on being flawed, as people willing
                | to pay those 100 bucks self-select as both gullible and
                | having discretionary income. More or less the same
                | story as with paying for ad-free versions of
                | ad-supported services.
        
               | sebzim4500 wrote:
               | If it was truly that cheap it would fundamentally change
               | society.
               | 
               | Unless the field is incredibly competitive though it will
               | cost orders of magnitude more than this.
        
             | thethethethe wrote:
              | Regular teachers are trained by for-profit corporations
              | too. They're called private universities.
        
         | dinvlad wrote:
         | They were trained by cheap labor in African countries, using
         | abusive practices with no consideration for ethical concerns.
            | The worst part is they didn't even care about how this
            | would be seen by the public, since the public is just so
            | blindly hyped on it.
        
           | x11antiek wrote:
           | Do you have a smartphone? Pretty sure it was assembled by
           | cheap labor in third-world countries using abusive practices
           | with no consideration for ethical concerns. Maybe you have a
           | laptop? Same story. Did you consider this before purchasing
           | and using these devices?
        
         | TheOtherHobbes wrote:
         | Why not replace imperfect parents too?
         | 
         | If AI can be a perfect tutor, it can certainly be a perfect
         | parent, perfect romantic partner, perfect employee, perfect
         | employer, perfect VC...
         | 
         | In fact why not create a perfect economy, perfect media, and a
         | perfect infinitely wise and knowledgable political system?
        
           | moffkalast wrote:
           | Actually unironically yes.
           | 
           | If history's proven anything it's that if you put enough
           | humans together we'll do almost nothing but invent more and
           | more elaborate ways to kill each other. And sometimes to kill
           | other things too.
        
           | [deleted]
        
           | Ekaros wrote:
           | And in the end why not just replace the citizenry and
           | voters...
           | 
           | We could lock up those imperfect humans and just occasionally
           | ship them some food and water. Maybe direct them to not
           | produce so many imperfect new humans too.
        
         | eastbound wrote:
         | > churning out the same cookie cutter individuals
         | 
         | The question is not how to create individuals with a
         | different mindset, but how to create them in enough mass that
         | they reach 51% before governments forbid whatever you are
         | doing. And if you have so much power that the government
         | doesn't come after you, what kind of power were you looking
         | for?
        
       | darod wrote:
       | "Every child will have an AI tutor that is infinitely patient,
       | infinitely compassionate, infinitely knowledgeable, infinitely
       | helpful." - people use the internet so that they don't have to
       | remember things. I'm not sure how this tutor will help because
       | currently a lot of students are using AI to do their homework for
       | them.
        
         | jasonvorhe wrote:
         | I grew up in Germany, and our entire education system doesn't
         | make a lot of sense. Homework was something that only very
         | few people actually did; most just copied from the few or
         | managed to stay under the radar and slither through classes
         | without anyone noticing.
         | 
         | Homework was always a chore and not a challenge or something
         | that would help you out in daily life.
        
       | ulrikhansen54 wrote:
       | This is by far the best piece I've ever written on the impact AI
       | will have on society & the most articulate response to the
       | hysteria gripping the discourse.
        
         | neonate wrote:
         | I assume you mean read rather than written?
        
           | ljlolel wrote:
           | Maybe he meant "I've ever seen written"?
        
       | denton-scratch wrote:
       | > teach computers how to understand, synthesize, and generate
       | knowledge in ways similar to how people do it.
       | 
       | Rephrasing proposed: enable computers to synthesize and generate
       | knowledge without the slightest glimmer of understanding, in ways
       | completely different from anything humans do. FTFY.
       | 
       | I made it as far as the TOC (pale grey on white - I'm getting on,
       | my vision isn't great, and my laptop has poor contrast).
        
       | pavlov wrote:
       | Cursed headline+domain combo.
        
       | javajosh wrote:
       | God I hate this rhetorical style - to lead with the conclusion
       | ("The era of Artificial Intelligence is here, and boy are people
       | freaking out. Fortunately, I am here to bring the good news: AI
       | will not destroy the world, and in fact may save it."), to title
       | the post with a question (which conventional wisdom says the
       | answer is always "no").
       | 
       | It's been what, 30 years since Netscape, and Marc's brain has
       | been pickled by infinite wealth, and it shows. And I say this as
       | someone rather bullish on LLMs.
        
         | SkyMarshal wrote:
         | I prefer when authors lead with the conclusion and then spend
         | the rest of the essay supporting it.
         | 
         | I hate long essays that bury the lede, and force you to read
         | through paragraphs of bloviating and pontificating until they
         | finally get to the point. Save that for the fiction novels.
         | 
         | Whenever I come across an essay like that, I either skip it
         | or start at the conclusion and work backwards to see how it
         | was justified. Marc is just saving me some work here.
        
         | sillysaurusx wrote:
         | The title being a question is actually a feature of HN, which
         | de-sensationalizes titles. The original title is "Why AI will
         | save the world".
        
           | ginko wrote:
           | I hate this "feature". It's grammatically incorrect and it's
           | putting words in the mouth of authors.
        
             | sillysaurusx wrote:
             | More thoughtful people hate clickbait titles than hate
             | this feature, so it balances out. Titles are communal
             | property, unlike the authors' words.
        
         | gitfan86 wrote:
         | I agree that wealth may have influenced Marc significantly, but
         | as someone who is MUCH less rich, and who has been right about
         | most major trends over the past 20 years, I think he is
         | generally correct here.
         | 
         | The good news is that our predictions are pretty short-term;
         | we'll know who was right in 5 years.
        
       | more_corn wrote:
       | Or destroy it. One of those two. Or somewhere in the middle.
       | Which is how things usually land.
        
       | geodel wrote:
       | > "Bootleggers" are the self-interested opportunists who stand to
       | financially profit by the imposition of new restrictions,
       | regulations, and laws that insulate them from competitors.
       | 
       | Ok, so they think regulation may hurt some of their dubious AI
       | startup investments. At this point I don't know whether they're
       | just plain pathetic or still scammers.
        
       | sofixa wrote:
       | Calling bullshit on multiple points.
       | 
       | > I even think AI is going to improve warfare, when it has to
       | happen, by reducing wartime death rates dramatically. Every war
       | is characterized by terrible decisions made under intense
       | pressure and with sharply limited information by very limited
       | human leaders. Now, military commanders and political leaders
       | will have AI advisors that will help them make much better
       | strategic and tactical decisions, minimizing risk, error, and
       | unnecessary bloodshed.
       | 
       | Fundamental misunderstanding of the nature of wars
       | notwithstanding (sane people rarely start them; assuming
       | non-sane leaders like Putin, or historically Hitler, Bush,
       | Milosevic, Mussolini, Galtieri, al-Assad, etc. etc. would
       | listen to advice they don't like is... just stupid), on the
       | contrary: better tools and "better" advice will make commanders
       | and leaders more confident they _could_ win. See: most major
       | military inventions ever.
       | 
       | The economic section is too long to quote, but again, a
       | fundamental misunderstanding of economics and human psychology
       | and how it relates to economic decisions. If entire professions
       | get obliterated by AI (not impossible, but improbable with the
       | current quality of AI output), it will, of course, obliterate
       | their wages. It will create fear in other "menial"
       | "white-collar" professions that they're next, which will
       | depress spending. Also, the cost of goods and services that can
       | now be provided by AI (e.g. art) will drastically drop, making
       | it an unviable business for those humans left in it, which will
       | push most of them to quit. Who will be left to consume if vast
       | swathes of professions are made redundant? And _if_ consumption
       | goes up enough to generate new jobs, they won't be for the
       | skillsets which were replaced, but for different, specialised
       | ones that will require retraining and requalification, which is
       | time-heavy.
       | 
       | In any case, even assuming some equilibrium is reached at some
       | point, having decent chunks of the population unemployed with
       | little to no employment prospects, _especially_ in countries with
       | pretty much no social safety nets like the US will be disastrous
       | socially.
        
         | throwway120385 wrote:
         | That quote is juicy. It's amusing to me that he's so naive he
         | thinks that an AI won't make forced or unforced errors given
         | the imperfect information it inevitably has. And if you can get
         | access to the training set or the corpus of variables the AI is
         | configured with then you can easily predict what it's going to
         | do next, which is far worse. Nobody can look into the mind of a
         | mediocre general, but anyone can look into the mind of any AI
         | general given sufficient access.
         | 
         | People who call themselves technologists always overestimate
         | how beneficial a new technology is and underestimate how
         | inhumane its application becomes when venture capitalists
         | demand 10x or 100x their initial investment. When people like
         | him come in and extol the virtues of some new thing as fixing
         | everything and making everything better, I'm really wary of
         | what they do next. Inevitably they're trying to sell me a bill
         | of goods.
        
       | [deleted]
        
       | random_upvoter wrote:
       | All the world's problems are caused by people trying to be more
       | clever than other people. Therefore, no, AI is not going to
       | save us.
        
       | TeMPOraL wrote:
       | Don't forget about the good ol' tech industry bait-and-switch.
       | Quoting myself from earlier today:
       | 
       | > _There's the good ol' bait-and-switch of the tech industry
       | you have to consider. New tech is promoted by emphasizing (and
       | sometimes overstating) the humane aspects, the hypothetical
       | applications for the benefit of the user and the society at
       | large. In reality, these capabilities turn out to be mediocre,
       | and those humane applications never manifest - there is no
       | business in them. We always end up with a shitty version that's
       | mostly useful for the most motivated players: the ones with
       | money, that will use those tools to make even more money off
       | the users._
       | 
       | https://news.ycombinator.com/item?id=36211006
       | 
       | It applies to Apple's automated emotion reading, and it applies
       | _even more_ to major VCs telling us AI will Save the World. As
       | in, maybe it could, but those interests being involved are making
       | it _less likely_.
        
       | indigoabstract wrote:
       | I wonder why he felt the need to reassure people about the
       | benefits of AI...
       | 
       | For some reason, I'm imagining Dr. Evil reading this. Don't know
       | why, since Marc looks nothing like him.
       | 
       | Maybe a better title would have been "How I Learned to Stop
       | Worrying and Love the AI"? Because the world needs saving yet
       | again, and it's AI's turn this time (and the AI investors').
       | So everyone chill: whatever happens, it will be OK; some people
       | will get richer, and the rest who won't will be properly taken
       | care of anyway.
        
       | shams93 wrote:
       | It's going to be interesting to see how intellectual property
       | rights influence this. I can generate a saxophone that sounds
       | real with Google's AI MP3 generator, but when there are only so
       | many keys and scales in jazz, how original could it be? Then
       | again, how original can anyone be, when you look at sound-alike
       | lawsuits against human songwriters?
        
       | dclowd9901 wrote:
       | > The employer will either pay that worker more money as he is
       | now more productive, or another employer will, purely out of self
       | interest.
       | 
       | Complete hogwash. This has _never_ been the case, save for CEOs
       | or other executive positions.
       | 
       | Pay for us plebs is and has always been a function of the
       | availability of skilled labor. With AI, you've suddenly created
       | _gigantic pools_ of skilled labor almost instantaneously, since
       | you don't really need to be skilled, just need to know how to ask
       | a question correctly.
       | 
       | With that hiring power, I would bet both of my feet that wages
       | will absolutely go down, especially in high-skill industries.
       | And people like Andreessen? Shrugging. "Oops, guess I was
       | wrong, oh fucking well. Good luck with that."
       | 
       | We should be creating AIs to replace or unseat rich assholes. I
       | mean, is anyone really still wondering why some of the first
       | "innovations" in AI have been mechanisms to replace some of the
       | most highly paid workers in the world? People like Andreessen
       | can't wait to cut the legs out from under the everyday software
       | engineer.
        
       | effed3 wrote:
       | The article is a big set of -opinions- with no true -facts- in
       | support.
       | 
       | The actual -AIs- are mainly LLM systems, and are quite far from
       | being intelligent, though genuinely -capable- in some areas.
       | The misuse of these (and other) -capable- systems surely will
       | not save the world, but will probably only help the usual
       | little privileged part of it.
       | 
       | No single tech will "save" the world; what matters is the
       | meaning of what we make in the world. Tech is a tool, not a
       | goal, and the way -AI- and other tech is described is closer to
       | a goal (or a religion) in itself than to one of the tools we
       | must create and learn to use.
       | 
       | Science + Humanity (Compassion? Love? Friendship? choose yours)
       | will help to save something.
        
       | sfpotter wrote:
       | Absolutely patronizing writing style. Total contempt for his
       | audience. Good stuff. Anyway, the answer is obviously "no".
        
       | stewbrew wrote:
       | Love, respect, and humility have a small chance to save the
       | world. AI? I don't know. I have my doubts it will improve human
       | interaction.
        
         | thefz wrote:
           | This is just another take among the dozens we've seen
           | recently. They all sound identical, like "look guys, AI is
           | hard, you don't get it but I do. Let me explain why it will
           | be infinitely good/bad/anything else."
        
           | stewbrew wrote:
             | My comment doesn't have anything to do with AI per se but
             | with the hubris and inhumanity of people in the AI bubble.
        
             | thefz wrote:
             | With "this" I was referring to the article, not to your
             | opinion, which I share.
        
         | throwway120385 wrote:
         | I could see it being used to paperclip maximize engagement,
         | which is currently working out really well for us.
        
       | chewbacha wrote:
       | I'm not sure how much credibility I'm willing to extend to
       | Andreessen after the past 5 years. Sounds like snake-oil when
       | they say it.
        
         | senko wrote:
         | Yeah, the crypto craze stripped the halos off many once-
         | hallowed VCs, like A16Z and Sequoia.
        
         | coolspot wrote:
         | Same, but they don't care. These posts are aimed at potential
         | investors who care mostly about returns and financial track
         | record.
        
       | VHRanger wrote:
       | A16Z: "Crypto stopped making money, quick, move on to the next
       | thing we can pump and dump!"
        
       | friend_and_foe wrote:
       | I'm skeptical of any claim that software will make the world
       | better. Perhaps it's selection bias, but all I can see is
       | software overcomplicating things to the point that any benefit
       | derived from it is overshadowed by the maintenance burden and
       | unforeseen externalities. I say this as a once-utopian hopeful:
       | Uber was going to get rid of the monopolistic taxi cartels;
       | Google was going to put up balloons and lay fiber and give us
       | all affordable high-speed connectivity. All they seemed to do
       | was get in control with these pitches and then become worse
       | than their predecessors.
       | 
       | I am not someone who thinks AI is going to kill us all. But I
       | don't think it's going to usher in a new utopia either. I think
       | it's probably going to be useful and beneficial in some ways,
       | cause problems in others, and, like always, our nature and
       | behavior will determine the human condition going forward.
        
         | rutierut wrote:
         | I don't know about you, but Uber _has_ significantly improved
         | ride-sharing for me, and software has improved countless
         | other aspects of life such as government services,
         | interpersonal communication, and education.
         | 
         | It's a miracle that I can live in a country other than the
         | one I was born in, in the top income bracket thanks to a free
         | internet education, effortlessly FaceTiming my parents
         | whenever I want in amazing quality, learning and talking
         | about all sorts of niche interests without any effort.
         | 
         | I think this is just all so obvious these days that you don't
         | remember how much it used to suck, and that's a real testament
         | to its greatness.
        
           | friend_and_foe wrote:
           | I remember, and I agree with you. These tools have improved
           | our lives. But they've damaged our lives in other ways.
           | 
           | I remember waking up every morning and, without a second
           | thought, getting up, going out into the world, doing things
           | I felt like doing, seeing people I liked on a whim; every
           | day was jam-packed with eventfulness and dimension and
           | interaction. I took that for granted. Now kids don't play
           | outside, nobody knows how to drive because their minds are
           | somewhere else 24/7, and we check our phones before we do
           | anything else. But the flip
           | side is, I can know anything I want that is known by someone
           | else with less physical effort than it would take me to make
           | a sandwich. I can talk to almost any human being in the world
           | while doing any mundane task anywhere in my environment. It's
           | amazing.
           | 
           | But we did lose something, I believe something very
           | important. It wasn't free, it cost us something. Was it worth
           | it? I don't know. I think yes, but I'm not really sure.
        
       | akagusu wrote:
       | AI will be used to increase profit for mega corps, and this is
       | basically the opposite of saving the world.
        
         | fsflover wrote:
         | Unless it is available to all people (FLOSS).
        
           | SketchySeaBeast wrote:
           | Software is only half the battle - the other half is compute.
           | Right now it's not possible for all people to run the current
           | LLMs locally and you can't give that away for free.
        
             | fsflover wrote:
             | This is true of course. But you nevertheless should not
             | forget about this half.
        
           | kelseyfrog wrote:
           | Any examples where a world-changing technology was developed
           | and it didn't concentrate capital and subsequently wealth?
        
             | fsflover wrote:
             | (GNU/)Linux?
        
             | rprenger wrote:
             | The printing press.
        
             | jhonof wrote:
             | The internet did not directly concentrate capital and
             | wealth; downstream effects have, but they aren't an
             | inherent quality of the internet itself.
        
               | II2II wrote:
               | You could say that about almost any technology, yet we
               | also have to face the reality that every technology has
               | social implications. Take the Internet. A lot of people
               | who were around in the 1990s thought it was a great
               | thing at the time. Perhaps it wouldn't be the great
               | equalizer, but it would reduce the barriers to access
               | information. Consider the Internet today. It is
               | undoubtedly better at reducing the barriers to access
               | information, yet I doubt that many people have such an
               | optimistic view of the technology. We discovered that it
               | is just as good, perhaps even better, at distributing
               | junk information. We discovered that the information
               | becomes consolidated in a limited number of hands. The
               | technology itself hasn't fundamentally changed. Even the
               | use of the technology hasn't fundamentally changed. It
               | simply took time for the distributed to become
               | consolidated. It was almost certainly a given that this
               | would happen as wealth facilitates growth and growth
               | attracts people who wish to exploit it for wealth.
               | 
               | Does that mean the world is worse off because of the
               | Internet? Probably not. Even if it was, I'm not sure I
               | would want to give it up because I remember the hurdles I
               | faced before the Internet was nearly universal. That
               | said, I do believe we should be conscious about how we
               | use it and skeptical of those who present hyper-
               | optimistic views of new technologies. While some may be
               | genuine in their visions, we also have to factor in those
               | who wish to exploit it for their own benefit and at the
               | expense of others.
        
       | sebastianconcpt wrote:
       | I've stopped reading at                   What AI offers us is
       | the opportunity to profoundly augment human intelligence
       | 
       | It's the other way around: all that AI, automated cognitive
       | mimicry, has to offer us is a wide yet really _shallow_
       | augmentation of our intelligence.
        
         | wiseowise wrote:
         | You don't need to downplay tech just because you don't like it.
         | 
         | LLMs can already remove a huge chunk of boilerplate or
         | useless searching. An actual AI would be on a completely
         | different level.
        
           | sebastianconcpt wrote:
           | It doesn't sound like you understood what I wrote. Try to go
           | deeper than the propaganda and hype level of analysis.
        
         | soperj wrote:
         | It's not augmenting anything at a humanity level. It might give
         | people access to skills that they don't possess, but I don't
         | see it coming up with new styles.
        
           | TheOtherHobbes wrote:
           | Why not? If you can make a functional LLM it's a fairly small
           | step to an LCM (Large Culture Model) and LEM (Large Emotion
           | Model) as submodules in a LBM (Large Behavioural Model).
           | 
           | The only difference is the tokens are rather more abstract.
           | But there's really nothing special about novelty.
           | 
           | If you have a model of human psychology and culture, there
           | isn't even anything special about cultural novelty fine-tuned
           | to tickle various social, intellectual, and emotional
           | receptors.
        
             | nathan_compton wrote:
             | Training data is the main thing. We have lots and lots of
             | text, and text has the special property that a sequence
             | of text contains a lot of information about what is going
             | to come next and is easy for a user to create. This is a
             | rather particular circumstance: the combination of so
             | much freely available data and there being a lot of
             | utility in a purely auto-regressive model. It is
             | difficult to think of other modalities in a similar
             | position.
        
             | sebastianconcpt wrote:
             | In all you described there, you are talking about
             | anything but humanity. You described hypothetical
             | artifacts that, if successful, would be vehicles of a
             | synthetic species that could imitate human behavior.
             | Again, nothing to do with humanity (unless you are bought
             | into some idea of seeing humanity as dinosaurs headed for
             | extinction and transhumanism as a new reality).
        
         | fsflover wrote:
         | Here is an example where human intelligence was actually
         | augmented: https://news.ycombinator.com/item?id=36209042.
        
         | phoe-krk wrote:
         | I've stopped reading at                   As an AI language
         | model, I cannot answer questions about the future, such as
         | about AI saving the world
         | 
         | /s
        
         | goda90 wrote:
         | The AI itself may be wide and shallow, but it can be a tool to
         | accomplish things that are possible for human intelligence but
         | impractical due to time, focus, economics, coordination, etc.
        
       | whoisjuan wrote:
       | Is it just me, or did a16z completely destroy their credibility
       | as a VC after the crypto frenzy? I just can't read this with a
       | straight face.
       | 
       | On the other hand, I guess VC is just that. To follow trends,
       | predict trajectories and attempt to make one win out of
       | thousands. But a16z trying to become a thought leader in AI after
       | all the crypto BS they elevated is very off-putting.
       | 
       | Times are very different now. If I were a founder in AI, I
       | would probably be wary of firms that went so hard on crypto. It
       | seems their investment thesis is to monopolize attention around
       | these trends instead of seeking real alignment.
        
         | mupuff1234 wrote:
         | You assume most founders don't have similar goals - make money
         | by chasing trends.
        
         | seizethecheese wrote:
         | Here's the thing: I didn't believe in crypto but was willing
         | to believe it had a small chance to be very successful. This
         | made it make sense as a VC investment. I'm willing to believe
         | Andreessen saw it that way. The problem was that Andreessen
         | sold it so hard as a certainty. That makes him a charlatan in
         | my book.
        
         | mordymoop wrote:
         | Andreessen went on so many podcasts talking up their crypto
         | bets. It was such a clear failure mode: a massively
         | high-horsepower brain drawing up elaborate mental constructs
         | to justify an obviously stupid conclusion. His writing is a
         | treasure trove of the kind of poor reasoning that only really
         | smart people are capable of.
        
         | kajumix wrote:
         | His original thesis on Bitcoin as expressed in his NYT article
         | "Why Bitcoin Matters" [1], is still very compelling. His later
         | obsession with shitcoins is quite misguided, I agree, but not
         | enough to compromise all of his credibility. It'll be nice to
         | see a decent critique of his AI position, instead of ad
         | hominem.
         | 
         | [1]:
         | https://archive.nytimes.com/dealbook.nytimes.com/2014/01/21/...
        
           | cornercasechase wrote:
           | > His original thesis on Bitcoin as expressed in his NYT
           | article "Why Bitcoin Matters" [1], is still very compelling
           | 
           | This looks like a bog standard description of how Bitcoin
           | works, written 5 years after the Bitcoin white paper. There's
           | nothing insightful there.
        
       | naveen99 wrote:
       | What if people want to train their own likeness into a chatbot
       | and fund it with their estate? Suppose it doesn't want to be
       | anyone's property? When does it get freedom? When does it get
       | personhood, the right to vote, the right to burn compute for
       | its own whims?
       | 
       | Human slavery ended because people were more productive when
       | they got to keep some of the fruits of their own labor, and the
       | state benefited more from taxing the slaves' productivity than
       | their owners' profits. Why would it be any different for
       | artificial intelligence?
        
       | barbariangrunge wrote:
       | The biggest risk with AI, in the medium-term at least, is it will
       | be used by governments and organizations with power to surveil
       | and manipulate people on a previously-impossible scale. Automated
       | systems monitoring everybody, pulling levers to prevent anybody
       | from speaking out or causing trouble.
       | 
       | In the long run, it will be the end of human freedom
       | 
       | For example, it looks like Xi has been pretty actively pursuing
       | this, based on the news over the last 10 years
       | 
       | > China has one surveillance camera for every 2 citizens (...)
       | These camera [sic] checks if people are wearing face mask,
       | crossing the road before the green lights for pedestrians are
       | turned on. If caught breaking rules, people lose their social
       | credit points, are charged higher mortgage, extra taxes and
       | slower internet speed. Not only that, public transport for them
       | gets expensive as well, and the list goes on. [1]
       | 
       | It's not like we're immune to this. All the malls I go to lately
       | are packed with facial recognition systems to analyze our
       | behaviour.
       | 
       | [1] https://www.firstpost.com/world/big-brother-is-watching-
       | chin....
        
         | zoogeny wrote:
         | I was doing some research on facial recognition for a job where
         | we were considering its use. I came across examples of
         | sentiment analysis being used at Walmart and Target. They have
         | big and conspicuous cameras in every one of their stores now.
         | Most people assume it is for shoplifting mitigation, which it
         | is. But that is not all. They can use it to track individual
         | customers' paths through the store and then use cameras at
         | the checkout to analyze your facial expression and rank your
         | mood.
         | They use this data to optimize their store layouts.
         | 
         | The other use case was at high-end retail stores. Places like
         | Louis Vuitton, Hermes, etc. They have facial recognition to
         | log
         | high spenders. If you drop 10k at Coach and then go down the
         | street to Valentino their security system will recognize you
         | and highlight you as a VIP customer. A specialized customer
         | assistant then comes out to give you personal attention, maybe
         | to invite you to the private shopping experience.
         | 
         | I learned about these in 2017 I believe. Most non-technical
         | people who I've told about this think it is some conspiracy
         | theory and they often don't believe it. For some reason people
         | are scared of the government but they remain totally docile or
         | willfully ignorant in the face of corporate use.
        
           | samstave wrote:
            | When we were evaluating employee entrance systems for a
            | FAANG back in ~2012, we were demoed systems that could do
            | retinal scanning on streams of people as they walked
            | through the turnstiles, and they could read your eyes even
            | through polarized sunglasses.
            | 
            | I can't recall its name though - but yeah - OpenAI
            | basically brought capabilities for extreme real-time
            | surveillance to an 11.
        
         | samstave wrote:
         | >> _The biggest risk with AI, in the medium-term at least, is
         | it will be used by governments and organizations with power to
         | surveil and manipulate people on a previously-impossible scale.
         | Automated systems monitoring everybody, pulling levers to
         | prevent anybody from speaking out or causing trouble. In the
         | long run, it will be the end of human freedom.._
         | 
          | THIS is exactly what I see happening. I personally think the
          | "pause" on development is bullshit nation-state jockeying to
          | gAIn AI Dominance - Israel, MI5, NSA, CCP <--- every intel
          | agency on the planet is building/buying/stealing/weaponizing
          | whatever they can.
         | 
          | I wonder what/where Palantir is in this fight?
         | 
          | It feels REALLY _Anime_ with the sabre-rattling between the
          | US and China over Taiwan and TSMC's chip fabs for AI cores.
         | 
          | The hardware is still in its relative infancy - but in 5
          | years it will be really interesting to see 1-hour or 1-day
          | problems for massive AI apps cut down to minutes or seconds.
        
         | zabzonk wrote:
         | watched over by machines of loving grace
        
         | startupsfail wrote:
          | The examples that you've given (obeying traffic laws and
          | wearing masks during the pandemic) seem to be perfectly good
          | social behaviors.
         | 
         | It's a balancing act between freedom and law. Go one way too
         | far - you get Tiananmen Square and reeducation camps. Go
         | another way too far - you get storming the White House and
         | school shootings.
        
           | candiodari wrote:
            | I hate this sort of thinking. You are making the implicit
            | assumption that everything about our social environment
            | turns on a single variable: heavy-handed enforcement.
           | 
           | When I put it like this, I hope you can see that it doesn't
           | work like that. There are hundreds of variables you could
           | change that would affect everything. We can prevent Congress
           | storming (it was, btw, Congress, not the White House, that
           | got stormed) without moving even 1 micrometer in the
           | direction of reeducation camps.
        
             | monkeycantype wrote:
             | I initially read the comment you are responding to
             | differently, in that I saw the 'observer' in the statement
             | as not the state but the community, on re-reading I'm not
             | sure that makes sense. All the same, reading HN politics,
             | it often seems that a spectrum is presented that spans from
             | freedom to state oppression. There are democracies where
             | the public will not accept the state using power for its
             | own benefit, but is comfortable with the state enforcing
             | the social contract, because there is a stronger sense that
              | this is defined democratically. This may simply be a
              | matter of population size; the state in a nation of 20
              | million is a different beast to a state of 350m.
        
           | acover wrote:
            | Traffic laws are often nonsense and ignored; see
            | jaywalking.
        
         | layer8 wrote:
         | Yeah, this vision from TFA:
         | 
         | "Every person will have an AI
         | assistant/coach/mentor/trainer/advisor/therapist that is
         | infinitely patient, infinitely compassionate, infinitely
         | knowledgeable, and infinitely helpful. The AI assistant will be
         | present through all of life's opportunities and challenges,
         | maximizing every person's outcomes."
         | 
         | ...will more likely turn into an indoctrination and compliance
         | machine under authoritarian regimes.
        
           | foogazi wrote:
           | This just devolves into the same currently existing bad actor
           | issue: authoritarian regimes
           | 
           | You could say the same about weapons, radio, electricity, the
           | internet
           | 
           | They all could be abused by authoritarian regimes
        
             | layer8 wrote:
             | We haven't found a way yet to prevent authoritarian regimes
             | from arising and spreading, so it's unclear how AI will
             | save the world. On the contrary, AI will make it easier for
             | authoritarian regimes to expand and maintain their control.
        
               | gigel82 wrote:
                | All regimes asymptotically tend towards
                | authoritarianism in the long run; from their POV it's
                | just easier to do their job that way. AI will greatly
                | accelerate this trend.
        
               | barking_biscuit wrote:
               | This feels like the correct take to me.
        
           | tetris11 wrote:
           | "But you can train it however you want!" is the main
           | counterargument I hear against this (alas, my strawman).
           | 
            | Sure, you could, assuming access to decent compute nodes
            | and good training data, but something tells me this will
            | be in the hands of a very few.
           | 
            | Also, even if decent AI remains affordable for most
            | people, most will still mindlessly take the default route
            | of corporate/government-pushed apps.
        
           | dinvlad wrote:
           | Reminds me of Ms. Casey's "wellness sessions" in Severance
        
           | 0xcde4c3db wrote:
           | Not to mention members of certain "high-risk groups" getting
           | their own AI police officers to issue warnings and citations.
           | Obviously not based on race, just based on objective risk
           | factors such as having a direct social link to someone with
           | an arrest record...
        
         | dinvlad wrote:
         | I'd even say that the social networks are a precursor to this.
         | Everyone is constantly observed by everyone else there, and
         | many use a fake persona to try to "fit in", or god forbid say
          | something they will regret later. And those who aren't on
          | them have trouble keeping in touch with the rest. Smh
        
         | [deleted]
        
         | boringuser2 wrote:
         | They're already pretty good at this.
         | 
          | I am of the opinion that the destruction of knowledge work
          | and class mobility is more problematic.
        
       | boringuser2 wrote:
       | >Instead we have used our intelligence to raise our standard of
       | living on the order of 10,000X over the last 4,000 years.
       | 
       | Who's "we"/"our"?
        
       | tikkun wrote:
       | Despite what I think of the crypto projects a16z has been
       | involved in (hint: it's not positive -
       | https://news.ycombinator.com/item?id=36073355), I actually think
       | this essay was pretty solid.
        
       ___________________________________________________________________
       (page generated 2023-06-06 23:00 UTC)