[HN Gopher] Evolution of AI and Amara's Law
___________________________________________________________________
Evolution of AI and Amara's Law
Author : nunocoracao
Score : 31 points
Date : 2024-01-22 17:06 UTC (5 hours ago)
(HTM) web link (n9o.xyz)
(TXT) w3m dump (n9o.xyz)
| nunocoracao wrote:
| Amara's law aptly captures the dichotomy in our perception of
| technological advancements like AI. As we navigate the short-term
| challenges and excitement, it's crucial to maintain a balanced
| perspective, acknowledging both the current limitations and the
| vast potential.
| CatWChainsaw wrote:
| Even as meta-commentary, this hand is so overplayed it's
| already cliché, and going up one level in meta doesn't
| improve it.
| nerpderp82 wrote:
| https://en.wikipedia.org/wiki/Roy_Amara
|
| > We tend to overestimate the effect of a technology in the short
| run and underestimate the effect in the long run.
|
| Amara's Law is a good one, especially in these circles, where we
| have many people working on cutting-edge technology. They often
| give up because it doesn't see adoption, and then move on to
| something else. Two or three cycles later, when that technology
| is finally gaining traction, someone takes some off-the-shelf
| components and slaps it together, profiting in ways the original
| creators would have liked to see.
|
| Timing is everything, and the technorati are often _too early_.
| But they also underestimate how big something will get.
|
| I am a huge Python fan, but I never expected back in 2000 that it
| would be as big as it is today. Never. Cloud Computing, NLP, etc.
| Google suffers from this immensely.
| nunocoracao wrote:
| Amen. I wonder what "underestimating" something like AGI
| actually means though...
| nerpderp82 wrote:
| By the time AGI arrives, the world will already have been
| transformed, in ways we cannot fathom, by intelligence far
| below AGI. Look at the impact computing has already had, and
| that is mostly accounting and simulation. Building and factory
| automation is on the order of a couple of if statements. Most
| programmers can't implement a PID loop (see the sketch below).
|
| I am not scared by AGI. But to get to AGI you have to pass
| through a couple of points on the curve where systems that are
| still less than AGI are the most powerful things around in
| terms of agency: an angry toddler with > 1 TW (terawatts) of
| power.
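A minimal sketch of the kind of PID loop mentioned above, assuming a
textbook discrete-time controller driving a toy integrator plant; the
gains, setpoint, and plant are illustrative, not from the thread:

  # A minimal discrete-time PID controller (illustrative values only).
  class PID:
      def __init__(self, kp, ki, kd, setpoint):
          self.kp, self.ki, self.kd = kp, ki, kd
          self.setpoint = setpoint
          self.integral = 0.0
          self.prev_error = 0.0

      def update(self, measurement, dt):
          # Classic three-term control: proportional, integral, derivative.
          error = self.setpoint - measurement
          self.integral += error * dt
          derivative = (error - self.prev_error) / dt
          self.prev_error = error
          return (self.kp * error
                  + self.ki * self.integral
                  + self.kd * derivative)

  # Drive a toy integrator plant toward a setpoint of 1.0.
  pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=1.0)
  value, dt = 0.0, 0.1
  for _ in range(50):
      value += pid.update(value, dt) * dt
  print(f"final value: {value:.3f}")  # settles near 1.0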
| nunocoracao wrote:
| Agree, the path might be quite bumpy, but so might the
| endgame. Whether AGI wants to help or kill the human race
| will depend on how aligned it is with our goals. And since
| "our" goals will depend on whoever has the metaphysical ear
| of such an AI, this could be problematic.
| visarga wrote:
| I think I have read this kind of discussion too many times.
| Imagine what the training set of GPT-5 will look like: all
| sorts of theories about "whether AGI wants to help or kill
| the human race" spread over the internet and analyzed to
| death. It is going to know this topic inside and out. It
| will be able to write a masterful dissertation on it.
| ben_w wrote:
| Assuming it's aligned and we're still around...
|
| AGI? Lots of people are still talking like it will be an
| assistant, perhaps using the "centaur" metaphor, and refusing
| to believe that it could[0] result in everyone's mental
| capacities having all the economic relevance of equine muscle
| power.
|
| ASI? We don't know how far above us IQ can go, but it seems
| reasonable to guess that even the smallest possible margin
| above us is such that asking that question is akin to asking
| chimpanzees to imagine the moon landing, while at worst it's
| like asking your lawn the same question.
|
| [0] I say "could" rather than "will" because there's always
| the question of "how much does it cost to run that software?"
| -- but the cost of running 3430 W continuously at $0.03/kWh
| matches the World Bank's international poverty line,
| inflation-adjusted to 2022:
| https://www.wolframalpha.com/input?i=%283430+watts+*+%240.03.
| ..
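A back-of-the-envelope check of the electricity cost in that footnote,
as a rough sketch; it only verifies the arithmetic, with the
poverty-line comparison taken from the comment itself:

  # Cost of running a 3430 W load around the clock at $0.03/kWh.
  power_kw = 3.430        # 3430 W expressed in kilowatts
  price_per_kwh = 0.03    # USD per kWh, as in the comment above
  hours_per_day = 24

  daily_cost = power_kw * price_per_kwh * hours_per_day
  print(f"~${daily_cost:.2f} per day")     # about $2.47/day
  print(f"~${daily_cost * 365:.0f} per year")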
| darkerside wrote:
| Why do we act like people all have an approximately uniform
| intelligence that will be surpassed by AI intelligence by
| orders of magnitude in all dimensions? In my experience,
| different people are smart about different things,
| sometimes yes by orders of magnitude. Couldn't an AI being
| smarter than some people be no more surprising than some
| people being smarter than some other people?
| __loam wrote:
| Python's popularity is more tragedy than success story.
| lsy wrote:
| This law is true in some cases, but I think it is too often used
| as the tech industry's version of "first they ignore you, then
| they laugh at you, then they fight you, then you win". The
| argument is that, sure, a technology's promise may not have
| panned out yet, but under Amara's law, we can expect a
| commensurately massive impact later.
|
| As Carl Sagan pointed out, "...the fact that some geniuses were
| laughed at does not imply that all who are laughed at are
| geniuses." The same applies to technologies. The fact that a
| technology has shown slower-than-expected progress is not a
| foolproof indication that it will eventually have massive impact.
| It may just not be as useful as it initially seemed, or may even
| be supplanted by a completely different technology.
| nunocoracao wrote:
| 100%... I'm not using it to advocate for anything in particular.
| I just find the phenomenon interesting... you can apply it
| retroactively to several innovations.
| logiduck wrote:
| Ever since I was a little kid, I have heard those inspirational
| stories where someone gets into an accident, then "The doctor
| told me I would never walk again" or "I wouldn't make it to my
| next birthday", and then the person makes a miraculous recovery
| and "proves them wrong", or something along those lines.
|
| It always seemed like an attention bias to me, because surely
| there are many more stories where the doctor was right and the
| person never actually walked again. And we don't really pay
| attention to the stories where the doctor said they would walk
| again. Those aren't interesting.
|
| It can also be a bias of the triumphant person overestimating
| the negativity of the past. Like, did the doctor really say you
| would never walk again, or that it would be tremendously
| difficult and unlikely?
|
| People like an underdog story and will give it more attention
| than stories in which expectations met reality.
| robomc wrote:
| Where does this guy live that most people smoke?
| paulofilip3 wrote:
| It does seem that this current AI summer is very hot, and it is
| starting to look like a new tech revolution. Apart from Amara's
| law, we're also very bad at visualizing exponential growth.
|
| Key ideas from the very early days of computing, plus a few new
| tweaks, have only recently found the compute power to become
| useful.
___________________________________________________________________
(page generated 2024-01-22 23:01 UTC)