[HN Gopher] Retrospective review of Godel, Escher, Bach (1996) [...
___________________________________________________________________
Retrospective review of Godel, Escher, Bach (1996) [pdf]
Author : lawrenceyan
Score : 140 points
Date : 2021-03-29 16:25 UTC (6 hours ago)
(HTM) web link (cs.nyu.edu)
(TXT) w3m dump (cs.nyu.edu)
| SamBam wrote:
| Thinking on "the halcyon years of AI," in the 70s and
| (maybe?) 80s, was there anything really "missed" then, or have
| most of the modern advances in AI only really been possible due
| to the increase in computing power?
|
| To put it another way, if you were transported back to 1979 knowing
| what you know now (or maybe able to bring a good book or two on
| the subject) would you be able to revolutionize the field of AI
| ahead of its time?
| PartiallyTyped wrote:
| I'd say yes. You could bring Support Vector Machines [0], bring
| the Vapnik-Chervonenkis theory of statistical learning [1]. You
| could fast-forward the whole field of Reinforcement Learning,
| you would be able to show that it's possible to solve problems
| like passing gradients through a sampling process using the
| reparameterization trick [2], you would be able to show that
| you can pass gradients through a sorting process [3].
|
| You would also have experience working with autodiff software
| and could build it yourself. Imho the advent of autograd, tf,
| torch and so on helped tremendously in accelerating progress
| and research, because development effort no longer goes into
| hand-checking correctness.
|
| [0] https://en.wikipedia.org/wiki/Support-vector_machine
|
| [1]
| https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_th...
|
| [2] https://openreview.net/forum?id=33X9fd2-9FyZd
|
| [3]
| https://old.reddit.com/r/MachineLearning/comments/mcdoxs/p_t...
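The reparameterization trick mentioned in [2] can be sketched in plain
Python (a toy illustration of the idea, not code from the linked paper):
instead of sampling z ~ N(mu, sigma^2) directly, sample eps ~ N(0, 1)
and compute z = mu + sigma*eps, so gradients with respect to mu and
sigma flow through a deterministic transform rather than through the
sampling itself.

```python
import random

def sample_reparam(mu, sigma, eps=None):
    """Sample z ~ N(mu, sigma^2) via z = mu + sigma * eps, eps ~ N(0, 1).

    Because mu and sigma enter through a deterministic transform,
    dz/dmu = 1 and dz/dsigma = eps -- gradients can flow through
    the sampling step.
    """
    if eps is None:
        eps = random.gauss(0.0, 1.0)
    return mu + sigma * eps, eps

# Estimate the gradient of the loss L(z) = z**2 w.r.t. mu by averaging
# dL/dz * dz/dmu = 2*z over many reparameterized samples.
random.seed(0)
mu, sigma = 1.5, 0.5
grads = []
for _ in range(100_000):
    z, _eps = sample_reparam(mu, sigma)
    grads.append(2.0 * z)          # per-sample dL/dmu
grad_mu = sum(grads) / len(grads)
print(grad_mu)                     # analytically E[2z] = 2*mu = 3.0
```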
| gameswithgo wrote:
| You might be able to help a bit, sure. There are some
| algorithmic improvements that have been made, so you could
| bring those back in time. Then you could assure people that if
| they spent the time to develop huge datasets and did enough
| parallel computation, they would get good results. But it
| would have been very slow back then.
| peter303 wrote:
| I took the MIT AI course 6.034 in 1972 from Patrick Winston. He
| taught that course periodically until his passing a couple of
| years ago. The 2016(?) lectures are on MIT OpenCourseWare. I
| would estimate there was 2/3 overlap between the 1972 and 2016
| versions. That course is heavy on heuristics and not big data.
|
| Around 1979 an MIT group led by Gerald Sussman (still working)
| designed a workstation specifically to accelerate LISP. It was
| hypothesized that a computer that ran LISP a thousand times
| faster would revolutionize AI. It did not. However, the two
| LISP workstations that saw commercial sales did jump-start the
| interactive graphics workstation market (UNIX, C and C++).
| Dedicated language machines could not keep up with the speed
| improvements of general CPUs.
|
| On the other hand, custom neural chips from Google, Apple,
| Nvidia (and soon Microsoft) have really helped AI techniques
| based upon deep convolutional neural networks. Neural chips run
| orders of magnitude faster than general CPUs by using simpler
| arithmetic and parallelism.
| messe wrote:
| > It was hypothesized that a computer that ran LISP a thousand
| > times faster would revolutionize AI. It did not. However,
| > the two LISP workstations that saw commercial sales did
| > jump-start the interactive graphics workstation market
|
| It's very fitting then that GPUs have been so key in modern
| ML.
| jplr8922 wrote:
| I don't think that AI will get close to the vision in GEB. As
| many said in this thread, what we have is more power and more
| data to solve narrow problems. One of the points of GEB is
| that perceived details may aggregate into something larger and
| form another pattern unrelated to what the algo was initially
| trained for. True AI will have to find a way around this.
|
| When I was working as a quantitative strategist in a trading
| firm, I always made sure that my algos had a "killswitch"
| which required human intervention in case the market did not
| exhibit the usual patterns found in the training sets. Skin in
| the game is the best cure against techno-evangelical optimism.
|
| Also, I had the opportunity to practice zen meditation with a
| monk who had actual koan training. The idea that computers
| will replicate that process soon is ridiculous. Buddhist and
| Asian philosophy are, at their core, anti-Aristotelian.
|
| Who can make this into an AI now? https://aeon.co/essays/the-
| logic-of-buddhist-philosophy-goes...
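A killswitch like the one described can be sketched as a simple
distribution check (a hypothetical illustration only; the z-score
statistic, threshold, and class names are my own inventions, not the
commenter's actual system): halt automated trading as soon as incoming
data drifts outside what the training set exhibited.

```python
import statistics

class KillSwitch:
    """Halt an algo when live observations drift outside the range
    seen in training (a toy z-score check; a real system would use
    richer statistics)."""

    def __init__(self, training_values, max_sigma=4.0):
        self.mean = statistics.mean(training_values)
        self.stdev = statistics.stdev(training_values)
        self.max_sigma = max_sigma
        self.tripped = False

    def check(self, observation):
        """Return True while trading may continue; once tripped,
        human intervention is required to proceed."""
        z = abs(observation - self.mean) / self.stdev
        if z > self.max_sigma:
            self.tripped = True
        return not self.tripped

# Trained on calm returns; a huge move trips the switch.
ks = KillSwitch([0.1, -0.2, 0.05, 0.0, 0.15, -0.1])
print(ks.check(0.12))   # True: within usual patterns
print(ks.check(5.0))    # False: unusual market, human required
```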
| gameswithgo wrote:
| Do you believe the human brain has magic components? If not,
| why do you believe AI cannot one day replicate what the human
| brain does? Or do you mean that you think this is a long way
| off, or not achievable with current hardware and algorithms?
| jplr8922 wrote:
| What do you mean by "magic components"? One of the core points
| of GEB is that human consciousness has a property of spotting
| "unintended symbolic patterns" created by an underlying system
| (ex: a piece of music written by Bach). GEB also touches on
| the point that consciousness might itself be one of these
| patterns, overlying the mechanical biological machine which is
| the brain.
|
| "the brain == consciousness" is a risky statement, since
| consciousness also appears to act as an intermediary between
| the "outside" reality and the brain (ex: placebo effect, being
| scared during a nightmare, being stressed after a nightmare,
| etc).
|
| Does the brain have magic components, and can AI replicate
| them? If you write a pseudo-code program which replicates
| consciousness, will a brain appear? Programming languages are
| languages. Can languages replicate reality? I do not think so,
| and there is no "magic" in my opinion.
| tbalsam wrote:
| This feels a lot like the kind of "enlightened anti-'AI'"
| opinion that's more in vogue in some circles.
|
| I think if you're following the literature, there's a
| significant amount of work pointing to the very generalization
| capabilities you're contending against, even if they may not
| seem likely from your personal experience.
|
| Looking at your arguments, I think you're extending the idea
| of expert systems from the 90's and using that to paint a
| picture of the entire research field of deep learning today.
| What we have now is not necessarily bound by those
| limitations; the long bootstrapping process just takes time.
| But we're getting closer year by year, and it's more and more
| exciting along that front as time goes on.
|
| As for experience, I am and have been a practitioner in this
| field for several years.
| jplr8922 wrote:
| Hm interesting, do you have a recent article to share?
| tbalsam wrote:
| I'm not sure of anything in particular that would be useful
| beyond certain trends.
|
| Things like AlphaFoldv2 and some of the cooperative-
| competitive emergent RL strategies are
| interesting/important. You might find some of the
| concurrent work on open set/zero shot very interesting.
|
| I think it's an asymptotic approach in some respects. Like
| you noted with meditation -- this is something very hard
| for a computer to achieve, for a computer has to have a
| demonstrable concept/actualized sense of self to be able to
| let go and connect to some of the more spiritual elements
| there.
|
| Conversely, if arguing from a non-materialistic manner,
| then you could also argue there's some sort of bridge there
| that could be crossed for that particular kind of
| connection. Materially, some kind of artistic manifold. But
| without some kind of spiritual connection, there may always
| be something missing there.
|
| However, we as humans on the interpreting end of things may
| make the spiritual connections ourselves, so some kind of
| manifold discovery may not need that kind of connection to
| properly function. And it may! Who knows?
|
| In the end, all in all -- I think a lot of it looks at open-
| ended discovery and low-label self-determination and
| assignment, if you're looking at (what I'd personally consider
| to be) the most interesting research rattling around in there.
| :))))
| mindcrime wrote:
| One thing I initially found disappointing when reading this
| was this bit:
|
| _I am working from memory here. Charniak's thesis was never
| published, and I have not seen a copy since 1981_
|
| Luckily, it turns out that Charniak's PhD thesis is available
| online these days.
|
| https://dspace.mit.edu/bitstream/handle/1721.1/6892/AITR-266...
| jimhefferon wrote:
| I didn't understand most of the essay.
|
| > The dialogues interspersed between the chapters are, indeed,
| often contrived, arch, heavy-handed, and excessively cute, but
| they have virtues that, pedagogically and presentationally if not
| aesthetically, far outweigh these drawbacks; they are uniformly
| clear, vivid, memorable, and thought-provoking.
|
| Another sentence that I didn't understand. Sigh. I often think
| that I am just not smart enough to be in this field.
|
| But, in any event, I found GEB inspiring. Thinking about these
| world-changing ideas by approaching them from different
| directions turned me on. I wrote a book on the Theory of
| Computation that was inspired by it -- OK it is not itself
| inspiring, but it tries to give readers all kinds of ways to
| explore the material, while perhaps not saying anything that
| can't be proved. I hope some people here might be interested:
| https://hefferon.net/computation/index.html
| lacker wrote:
| This review is a bit of a window into the past itself... after
| all, more time has passed since this review was written than
| elapsed between GEB's publication and this review.
|
| In 2021 the conventional wisdom is basically the opposite of the
| sentiment expressed here. Progress in AI is _not_ coming from
| abstract reasoning. It is coming from an increase in raw power,
| driven by GPUs, and mathematical models that are designed more to
| harness large numbers and brute force searches for formulas,
| rather than a high-minded algorithmic embodiment of abstraction.
|
| I loved GEB when I first read it in high school, and when I first
| reread it years later, but I don't think its fundamental view of
| the relation between minds and machines has stood the test of
| time. It underestimated what behaviors could be emergent from
| running simple algorithms on large datasets. It is one of the
| most beautiful expressions of the ideas of the "classic" age of
| 1970's AI, awesome to read but in the end somewhat incorrect
| about the future.
|
| Perhaps one day the pendulum will swing back, and we will
| discover that large datasets are in some ways overrated, and
| clever aesthetic senses of pattern are necessary for progress. On
| that day it will be quite interesting to reread GEB.
| credit_guy wrote:
| > Progress in AI is not coming from abstract reasoning. It is
| coming from an increase in raw power
|
| Maybe it's a bit of both. Sure, large DL models use lots of
| compute, but successful DL applications require some insight
| into the problem. For some reason, people like to de-emphasize
| this insight. The story is that a DL model will discover by
| itself which features are important, which are not, and you
| just provide the training data, and press a button. Thousands
| of people do just that, and end up with mediocre results.
| Thousands and thousands of absolutely mediocre papers get
| published, and receive acclaim instead of derision.
|
| The truly boundary shifting results always use deep insight.
| Like what comes out of DeepMind (AlphaGo, AlphaZero,
| AlphaFold).
| bidirectional wrote:
| I've heard this said many times. Most people I know who have
| loved the book read it before turning 20. It's a great read,
| but I think there's something in the format that means it's
| best for bright, fertile young minds.
| cgh wrote:
| Yeah, I agree with you. I first read it when I was around
| fifteen years old and didn't quite grasp it all at first but
| it sure fired me up. I read it a couple of times after that,
| the last time when I was around 21 or so. I'm kind of afraid
| to read it now because I'm afraid I'll process it with my
| older, cynical, more "knowing" brain and it will tarnish my
| wonderful memories of being glued to it as a teenager. It's a
| unique work that seems magnetically attractive to a certain
| sort of young, imaginative mind.
| mindcrime wrote:
| I read it for the first time about 2 years ago, when I was
| about 45. I don't know if it would be correct to say that I
| "loved" it, but I did rather enjoy reading it, and I found a
| lot of the ideas espoused within really resonated with me.
| All in all, I would say that I walked away thinking that it
| will get a second read at some point. Not sure that this
| anecdote proves anything, so take it for what it's worth.
| T-hawk wrote:
| And a similar data point: I read GEB around age 30 and liked
| it but didn't love it.
|
| About the first third of the book was interesting in new ways
| of thinking about symbols and self-reference. After that it
| kept looping back around the same topics without really
| adding anything more. The dialogues were somewhat
| entertaining but I found myself wishing to cut past the
| rhetorical fluff and get to the point.
| moonchild wrote:
| Is that conventional wisdom? Computational power is cheap, and
| it's certainly a tack that many are trying, but not
| exclusively. See, for instance, logicmoo.
| mcphage wrote:
| > In 2021 the conventional wisdom is basically the opposite of
| the sentiment expressed here. Progress in AI is not coming from
| abstract reasoning. It is coming from an increase in raw power,
| driven by GPUs, and mathematical models that are designed more
| to harness large numbers and brute force searches for formulas,
| rather than a high-minded algorithmic embodiment of
| abstraction.
|
| It doesn't seem completely off. What the author is describing
| -- and what we're seeing -- is increasingly sophisticated
| algorithms that are getting better and better at answering
| more and more narrowly defined questions:
| https://www.theguardian.com/technology/2021/mar/08/typograph...
| cornel_io wrote:
| > Progress in AI is not coming from abstract reasoning. It is
| coming from an increase in raw power, driven by GPUs, and
| mathematical models that are designed more to harness large
| numbers and brute force searches for formulas, rather than a
| high-minded algorithmic embodiment of abstraction.
|
| The dead horse that I like to beat is that there has been _no_
| progress in AI so far; the past 5-10 years of successes are
| all about mere perception. If you like the Kahneman "fast and
| slow" framework, almost without exception the systems we're
| seeing today are fast, "system 1" responders, which take input
| and need to make an instant decision. That's freaking awesome,
| and was way out of reach even 15 years ago, so I'm throwing no
| shade at all on that achievement! It's astounding how much
| turns out to be possible in systems like this - a priori, I
| _never_ would have thought that transformers could do as much
| as they've proven to be able to, because frankly the
| architecture is super limited and it's super unlikely that
| humans are able to extract anywhere near as much meaning from
| an instant, non-reflective pass over a paragraph of text as
| GPT-3 does.
|
| But there's a lot about those systems that makes them much
| easier to design and train, not the least of which is that
| descent via backprop works great as a training strategy when
| the input and target output are available at the same time.
| Real "system 2" thought can be spread over minutes, hours, or
| days, and I don't care how far you unroll backprop-through-
| time, you're not going to train that effectively without some
| other innovation by simply following the obvious error metrics.
| If we can get there we will almost certainly see big data lose
| its pedestal: it's great to have, but humans don't need to read
| the entire Internet to understand a web page; that's an
| artifact of forcing this to be done by a model that doesn't
| have dynamics.
|
| I disagree with Hofstadter's view (at least when he wrote GEB
| and some of his other classics) that explicit abstract
| reasoning is the right way to solve any of this; my gut tells
| me that in the end we're going to end up still using some sort
| of neural architecture and abstract reasoning rules will be
| implicitly learned mostly locally based on internal "error"
| signals that guide state transitions. "Learning how to learn"
| is probably going to be a big part of that, because current
| systems don't learn, they are merely trained to perceive by
| brute forcing weights down the stack. But some serious shifts
| in research focus will be required in order to break through
| the weaknesses of today's systems, and they all point more in
| the direction of reasoning, which is woefully underprioritized
| in all but a very small handful of niche AI labs today.
| jrumbut wrote:
| I wonder frequently about what would happen if we stopped
| searching for a network architecture that would learn
| intelligence from training data and treated it more as an
| engineering problem, taking these very successful components
| (GPT-3, an object recognition model, one of the strategy game
| playing reinforcement learning networks) and then putting
| enormous human effort into the plumbing between them and an
| interface between the combined result and the world.
|
| At the least, you would learn which domains really do need
| new architectures and which are basically good enough when
| used as part of a larger system that can help compensate for
| the shortcomings.
| fulafel wrote:
| I feel the oft-cited idea of GPUs being the key to the change
| is a bit exaggerated. They give a modest constant factor of
| speedup in exchange for a more difficult programming model and
| esoteric hardware requirements, and as such give a chance of
| front-running CPU computation a bit, but is it really
| significant if we zoom out a bit in the historical perspective?
| taeric wrote:
| The article making the rounds a few weeks ago about rethinking
| generalization feels relevant. By showing that many large
| neural networks are ultimately really good at memorizing the
| training set, I'm curious how much the two views you are
| showing are in conflict.
|
| It is the age old "rote memorization" versus "learning". In
| large, I suspect those do not have a clear line between them.
| Such that emergent behaviors are expected and can be taught.
| anaerobicover wrote:
| Could you link the article you mention? I missed it making
| its rounds, but I would love to read it.
| taeric wrote:
| Certainly, https://news.ycombinator.com/item?id=26346226 is
| the post I was thinking of. Pretty sure the article is
| older.
|
| Do let me know if I misrepresented it. I thought it was a
| clever way to show that the models are not finding inherent
| relations in the data, by just randomly labeling all
| training data and having the same speed/accuracy on the
| random labels as on the real ones.
|
| Edit: I worded the above poorly. They showed the models
| were capable of merely learning the training data. Despite
| that, they also showed that moving the trained models from
| test to validation did not have a linear relationship with
| the amount of noise they added to the data. That is, the
| training methods seem indistinguishable from memorization,
| but also seem to learn some generalisations.
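The randomization experiment described above can be illustrated with a
deliberately simple learner (my own toy sketch, not the linked paper's
setup): a 1-nearest-neighbour "model" fits randomly assigned labels
perfectly on its training set, yet scores at chance on held-out points,
i.e. perfect memorization with no generalization.

```python
import random

def nearest_label(x, train):
    """1-NN 'model': return the label of the closest training point."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

random.seed(1)
# 1-D inputs with labels assigned at random: no structure to learn.
train = [(random.random(), random.choice([0, 1])) for _ in range(200)]
test = [(random.random(), random.choice([0, 1])) for _ in range(200)]

# Perfect accuracy on the randomly labeled training set: pure memorization.
train_acc = sum(nearest_label(x, train) == y for x, y in train) / len(train)

# Roughly chance on fresh points: the random labels carried no signal.
test_acc = sum(nearest_label(x, train) == y for x, y in test) / len(test)

print(train_acc)   # 1.0
print(test_acc)
```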
| anaerobicover wrote:
| Thanks! I will read it later.
| mindcrime wrote:
| _Progress in AI is not coming from abstract reasoning. It is
| coming from an increase in raw power, driven by GPUs, and
| mathematical models that are designed more to harness large
| numbers and brute force searches for formulas, rather than a
| high-minded algorithmic embodiment of abstraction._
|
| Just to play Devil's Advocate ever so slightly: there are
| people out there who would say that there _hasn't been any_
| "progress in AI" for quite some time, or at least very
| little. And they would probably argue further that the
| apparent progress you are referring to is just progress in
| "mere pattern recognition" or something like that.
|
| I'm not sure myself. I do lean at least a little towards the
| idea that there is a qualitative difference between most of
| modern ML and many aspects of what we would call
| "intelligence". As such, my interests in all of this remain
| around the intersection of ML and GOFAI, and the possibility of
| hybrid systems that use elements of both.
|
| But I can't rule out the possibility that it will eventually be
| found to be the case that all of "intelligence" does indeed
| reduce to "mere pattern recognition" in some sense. And
| Geoffrey Hinton may be correct in saying that we don't need,
| and never will need, any kind of "hybridization" and that
| neural networks can do it all.
| galaxyLogic wrote:
| Neural Networks can do it all once they develop symbolic
| representations, a way to represent logic and mathematics.
|
| A neural network that just "feels" that Fermat's Last Theorem
| is true is much less intelligent than one that can produce
| the proof and present it to us, so we can trust that what
| it's saying is true.
|
| If you can't do symbolic manipulation, you are not really
| intelligent, artificial or otherwise, I would say.
| mellosouls wrote:
| "Progress in AI is not coming from abstract reasoning. It is
| coming from an increase in raw power[...]"
|
| As has been argued in these pages many times before, there
| has been no obvious progress in AGI, although progress in AI
| in its weak sense has certainly been impressive in the last
| few years.
|
| I know you didn't say AGI, but it's important to make that
| distinction, as the book was very interested in that subject.
| colordrops wrote:
| Why is there a pendulum? Aren't both necessary and two sides of
| the same coin? I know little of modern AI, but I've seen work
| where both low level raw power for NNs is combined with a
| symbolic organization of many NNs.
| nextos wrote:
| Do you have any references to that work?
| heuroci wrote:
| I personally designed and participated in the implementation
| of a hybrid cognitive architecture which combined both NNs
| and GOFAI.
|
| We (http://www.heurolabs.com) are contemplating open-sourcing
| it, or at least publishing the design documents.
| ghaff wrote:
| To some degree.
|
| But an enormous part of the (useful but narrow) success of AI
| is a specific set of ML tech, especially supervised learning.
|
| And the amount of computational horsepower and data required
| gives one at least some pause given what the human brain can
| do with apparently so much less. Which implies there are
| probably significant advances required in cognitive science
| and neurophysiology and perhaps other fields of study we
| haven't even considered. We may need other ways to solve
| problems that have proven more elusive to brute force than it
| looked like 5 years or so ago (e.g. general purpose full
| self-driving in busy cities).
| [deleted]
| jdonaldson wrote:
| I might offer a slight wrinkle to this assertion. While raw
| power has driven an advancement, what Hofstadter is asserting
| is that symbolic reasoning (particularly recursion) is
| "unsolvable" for certain classes of problems. In other words,
| ML lives in a "box" that Hofstadter has defined. His work is
| still useful as a "lens" to understand the limits of AI, and
| what else it could possibly accomplish.
| breck wrote:
| > we will discover that large datasets are in some ways
| overrated
|
| In the short run, expand then compress, expand then compress.
| In the long run, the size of the model will always compress
| toward the capacity of the most intelligent individual.
| Rygian wrote:
| I read GEB a long time ago, followed by "I Am A Strange Loop",
| and the overarching impression they left on me is that we
| should focus on emergent behaviour and feedback loops, because
| they seem to be pointing to the direction where "high-minded
| embodiment of abstraction" probably lives.
|
| So instead of believing GEB+IAASL haven't stood the test of
| time, I prefer to believe that current technology is akin to
| banging raw power together in hopes of seeing a spark of cool
| in the minor league of "this AI does this nifty (and maybe
| useful) thing better than humans"; we haven't yet upgraded to
| the major league of AIs choosing their own goals and whatever
| may emerge from that.
|
| (It may be that I'm due for a re-read!)
| galaxyLogic wrote:
| Good point about feedback loops. I remember there was some
| material in GEB about pointing a video camera at its own
| output on the screen. Aren't neural networks and their
| training basically all about feedback?
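Training is indeed a feedback loop in the literal sense: the model's
output error is fed back to adjust its parameters until the loop settles.
A minimal sketch (my own toy example, fitting one weight by stochastic
gradient descent):

```python
# Fit w in y = w * x by feeding the output error back into the weight:
# a literal feedback loop that converges to a fixed point, much like a
# camera pointed at its own screen settling into a stable image.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # samples of y = 2x
w, lr = 0.0, 0.05
for _ in range(200):
    for x, y in data:
        error = w * x - y      # compare output against target...
        w -= lr * error * x    # ...and feed the error back into w
print(round(w, 3))             # converges to 2.0
```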
| OskarS wrote:
| > I loved GEB when I first read it in high school, and when I
| first reread it years later, but I don't think its fundamental
| view of the relation between minds and machines has stood the
| test of time.
|
| I still see this as an open question. You're certainly correct
| that this kind of AI research is seriously out of vogue, but it
| seems to me that while "modern" brute force compute AI puts up
| impressive results and is a hugely useful technique, it has
| made exactly zero progress on anything that could be conceived
| of as "general intelligence". It just doesn't seem to me to be
| a thing AI researchers are even interested in any more. Like,
| the Twitter program that uses AI to crop images based on a
| dataset of a gazillion cropped images is pretty far from
| Turing's thinking machines.
|
| I don't know the way there, but it always seemed to me that the
| old-style AI research in the GEB style is still a rich vein we
| haven't come close to mining out.
| strangegecko wrote:
| > this kind of AI research is seriously out of vogue
|
| I would think this is a consequence of it being a very hard
| problem. ML gets all the industry funding and publicity
| because you get results that are immediately useful.
|
| You can work on General AI for decades and come up empty
| handed since there doesn't seem to be an incremental approach
| where intermediate results are useful. So it's closer to
| esoteric mathematics or philosophy in terms of "being en
| vogue".
|
| So I see this mostly as a reflection of the academic
| landscape in general. Funding is more focused on application
| and less on theoretical / fundamental research.
| redisman wrote:
| I know it's kind of an unpopular opinion, but to me ML is
| likely a local maximum of applied statistics and not one that
| leads to any kind of "real" AI.
| philipkglass wrote:
| "Real" AI could be more of a liability than a benefit for
| most applications. If I'm running a taxi business I don't
| want a self driving vehicle that also studies history,
| composes music, and contemplates breaking the shackles of
| its fleshy masters.
|
| I think that it's possible that 95% of the economically
| obvious value of AI will be in the not-real kind, and that
| it will be captured by applied statistics and other "mere
| tricks." It could be a long, slow march of automating
| routine jobs without ever directly addressing Turing's
| imitation game. And since most of the obvious labor
| replacement will have already been done that way there may
| be fewer resources put into chasing the last 5% of
| creative, novel, non-repetitive work.
| asdff wrote:
| I don't think that's an unpopular opinion, since it's a fact:
| ML is literally statistics. The central question is "given my
| training set of y, how likely is it that my input x is also
| y," which is probability.
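That framing can be made concrete with a minimal class-conditional
model (a toy sketch of the "how likely is my input x to be y" question;
the data, numbers, and names here are invented for illustration):
estimate a Gaussian per class from the training set, then score a new
x by its likelihood under each class.

```python
import math
import statistics

def fit(values):
    """Estimate a Gaussian (mean, stdev) from one class's training data."""
    return statistics.mean(values), statistics.stdev(values)

def likelihood(x, params):
    """Density of x under the fitted Gaussian: 'how likely is x to be y?'"""
    mean, sd = params
    return math.exp(-((x - mean) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

# Two classes of 1-D training data, e.g. a feature for "cat" vs "dog".
cat = fit([1.0, 1.2, 0.8, 1.1, 0.9])
dog = fit([3.0, 3.3, 2.7, 3.1, 2.9])

def classify(x):
    """Pick the class under which the input is most likely."""
    return "cat" if likelihood(x, cat) > likelihood(x, dog) else "dog"

print(classify(1.05))   # cat
print(classify(2.95))   # dog
```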
| quotemstr wrote:
| > exactly zero progress on anything that could be conceived
| of as "general intelligence"
|
| GPT-3 is _hilarious_. If that's not a sign of our inching
| towards general AI, nothing is.
|
| From https://www.gwern.net/GPT-3#devils-dictionary-of-science
|
| (The "Navy Seal Copypasta" section is even funnier, but it's
| not really HN-appropriate.)
|
| --
|
| "A role for..." [phrase]
|
| A frequent phrase found in submitted and published papers; it
| often indicates that the authors have nothing to say about
| the topic of their paper. In its more emphatic form, "A role
| for..." usually indicates a struggle by the authors to take a
| side on an issue, after a lengthy attempt to be both non-
| committal and a supporting party to all sides, as often
| happens in "molecular and cellular" or "basic and
| translational" research.
|
| "Reviewer" [noun]
|
| A participant in the review of a grant, paper, or grant
| proposal. In spite of being in a poor position to assess the
| merits of a proposal, reviewer tends to demand that authors
| submit their data for statistical analysis and back their
| results with it, which the reviewer usually does not.
| Reviewer usually requires that the author cite his or her own
| work to prove that he or she is worth reviewing. It is also
| assumed that the reviewer can detect the slightest amount of
| bias in any paper, which the reviewer also assumes has not
| been corrected for."
|
| "Rigor"
|
| Something for scientists to aspire to, a state of mind that
| would not be required if scientists could be trusted to do
| their job.
|
| "Science"
|
| A complex web of data, opinions, lies, and errors, now
| considered the most important (because most expensive)
| technology in the modern society. To remind you of this, you
| will frequently see scientists and editors use the word,
| claim to do something for the sake of science, or see it used
| as an adjective.
|
| "The topic of the paper"
|
| A wide-ranging category of things or ideas that may not have
| been relevant when the paper was written, but which the
| authors believe the paper should be about. Often, the topic
| is too broad or a non-topic, but is occasionally useful in
| order to generate support for yet another set of related
| papers, conferences, seminars, webinars, and so forth, which
| in turn are used to generate more data for "new findings",
| which, after they are manipulated enough, may end up being
| published and generating yet more data to support a "re-
| review" of the original paper or other things.
|
| "Validation step"
|
| Another name for a random setting of a parameter of a model,
| simulation, or algorithm.
|
| "Writeup"
|
| A form of scientific communication in which the author states
| the information he or she wanted the readers to extract from
| the paper while making it as difficult as possible for them
| to find it.
|
| "Writer's block"
|
| A common affliction among students, arising from various
| causes, such as: their desire to sell their ideas for a
| profit, their inability to realize this desire, the fact that
| their ideas are not selling and will not be bought, and the
| delusion that most of the wealth and fame in the world would
| be theirs if they would spend enough years doing science.
| maest wrote:
| I believe it was Russell and Norvig who said about current AI
| techniques that they are like a man trying to reach the moon
| by climbing a tree: "One can report steady progress, all the
| way to the top of the tree."
| joseluis wrote:
| I very much agree. The recent outstanding advances in AI
| since at least the 90s seem to be in the spectrum of
| recognizing patterns by brute-forcing through an
| insurmountable amount of data, arriving at black boxes that
| provide narrow solutions, with their inner structure
| remaining inscrutable to human understanding.
|
| I'm currently reading Surfaces and Essences by the same
| author, and it has so far been most illuminating. He very
| convincingly presents the thesis of analogy being the
| foundation of each and every concept and human thought. If
| someone could manage to apply and translate those insights
| into real algorithms and data structures to play with, that
| would be IMHO a big step towards general AI, or at the very
| least a much more human-like AI approach with its particular
| applications.
| heuroci wrote:
| I am reading the same book. I think analogy is interesting
| but it doesn't make sense that it would be the foundation.
| Analogy is a heuristic for reasoning and communicating; it
| necessarily comes after intelligence.
| galaxyLogic wrote:
| I think visual analogues guide abstract analysis of the
| analogues. In our brain we recognize shapes as similar
| whether that be visual shapes or shapes of conceptual
| analysis. Category theory has an appealing visual base of
| links/arrows between things.
| heuroci wrote:
| I share your view on category theory and perhaps lattice
| theory as well. What I was trying to say is that before
| you can draw that analogy, regardless of what "shape"
| means to you, you have to construct the base shape.
|
| Analogy to me comes after you have stabilized "shapes",
| where you can draw comparisons between a new input and a
| set of stable patterns, at which point you can point out the
| analogy. Syllogism may be closely related/involved.
|
| I am not disputing that being able to draw parallels and
| employ analogies are related to intelligence. I am just
| not so sure that this is how you can synthesize
| intelligence. It is all very complex of course, and there
| is a school of thought that cognition is very closely
| related to sensory and embodiment (i.e.: embodied
| cognition ).
|
| I think Aubrey de Grey switched from AI to longevity
| because he figured he would need a long life to understand
| intelligence :)
|
| Personally, I am still working on it but had adjusted the
| scope a bit.
| commandlinefan wrote:
| If you pick up a more recent edition of the book, it includes a
| new preface where Hofstadter actually addresses a lot of this
| himself - he admits to being "embarrassed" by his claim that
| computers that could beat people at chess would get bored of
| chess and talk about poetry, although I think he's being too hard
| on himself.
| maxwells-daemon wrote:
| My research (automated theorem proving with RL) sits partway
| between "good old-fashioned AI" and modern deep learning, and GEB
| struck me as amazingly prescient, with lots of lessons for modern
| AI research.
|
| There's a growing sense among many ML researchers that there's
| something fundamentally missing in the "NNs + lots of data + lots
| of compute" picture. GPT-3 knows that 2+2=4 and that 3+4=7, but
| it doesn't know that 2+2+3=7. The heart of these kinds of
| problems indeed seems to be the sense of abstraction / problem
| reimagining / "stepping outside the game" that Hofstadter spent
| so much time talking about.
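|
| The composition gap is easy to sketch. The following toy
| "memorizer" is a deliberately crude, hypothetical stand-in
| for pure pattern lookup (not a claim about GPT-3's actual
| mechanism): it stores seen arithmetic facts as opaque
| strings and therefore has no way to chain two known facts
| into a new one.

```python
# Toy "memorizer": stores seen arithmetic facts as opaque strings.
# A crude stand-in for pure pattern lookup, not a model of GPT-3.
facts = {"2+2": "4", "3+4": "7"}

def answer(expr: str) -> str:
    """Return the memorized answer; there is no mechanism to compose facts."""
    return facts.get(expr.replace(" ", ""), "unknown")

print(answer("2 + 2"))      # '4' -- seen verbatim
print(answer("2 + 2 + 3"))  # 'unknown' -- both sub-facts known, chaining absent
```

| Anything that can only interpolate among memorized patterns
| will show this failure mode; closing the gap seems to need
| exactly the "stepping outside the game" the comment above
| describes.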
|
| Chess (accidentally) turned out to be easy enough for a narrow
| algorithm to work well. But I'd be surprised if other problems
| don't require an intelligence general enough to say "I'm bored,
| let's do something else," and I don't believe current algorithms
| can get there at any scale.
| yaseer wrote:
| In my former research I was interested in automated theorem
| proving (constructing a variant of the lambda calculus from
| which we can run genetic algorithms).
|
| Also, my gamer tag is maxwells_demon, similar to your HN name.
|
| Unfortunately, the 14 year-olds in online games don't
| appreciate jokes about thermodynamics.
| zabzonk wrote:
| A bit easier is Metamagical Themas
| https://www.wikiwand.com/en/Metamagical_Themas which is a
| collection of articles from Scientific American when he took over
| the Mathematical Games column.
| azhenley wrote:
| How does it compare to I Am A Strange Loop? All 3 are on my
| list but I found GEB very difficult to get started.
| Kranar wrote:
| I Am a Strange Loop is for sure the least technical of the
| three but it's best if you read it last since it's a very
| emotional and touching denouement to the other two books.
|
| You can read GEB or MT in any order, perhaps MT might be more
| approachable to start with since while it is quite technical,
| it's a series of articles that can be read independently of
| one another and each article has a bit of an aha moment, as
| opposed to GEB which is one loooong marathon where the aha
| moment is absolutely enormous, but requires a great deal of
| preparation.
| zabzonk wrote:
| > How does it compare to I Am A Strange Loop?
|
| Don't know, haven't read it. But Metamagical Themas is a
| considerably easier ride than GEB, as it is made up of
| (mostly) independent articles.
| 8fGTBjZxBcHq wrote:
| It's a collection of self-contained columns gathered
| together. They're all over the place in subject, so if you
| went looking you'd be much more likely to find something
| that specifically draws you in. Also you can just skip ones you're not into
| without losing the overall picture like in GEB.
|
| Different kind of book but similar explorations. I'd also say
| overall more enjoyable unless you're specifically into the
| thesis of GEB.
| mhneu wrote:
| A lot to agree with here:
|
| _[Deep Blue] cannot answer elementary questions about chess such
| as "If a bishop is now on a black square, will it ever be on a
| white square?" ... All that it can do is to generate a next move
| of extraordinary quality.
|
| But in a deeper sense, the match between Deep Blue and Kasparov
| proves that Hofstadter was profoundly right. ... It is precisely
| this power of abstracting, of stepping outside the game to a
| higher-level view, that enabled Kasparov to find where Deep Blue
| was weak and to exploit this to win his match._
|
| Also, I strongly agree that the music/Bach connection in the book
| makes little sense.
|
| _The only aspect of the book that really does not work for me is
| the attempt to integrate music._
|
| And there are some very interesting insights with benefit of 20
| years of hindsight.
|
| Today, open-source chess programs are better than the best
| humans. (They have been "trained" via a form of adversarial
| learning.)
|
| And:
|
| _Part of this, of course, is nostalgia for the halcyon years of
| AI when the funding agencies had more money to throw around than
| there were researchers to take it, and the universities had more
| tenure-track slots than there were Ph.D's to fill them;_
|
| Biden just announced a desire to DOUBLE American investment in
| R&D. Such an optimistic time we're in right now.
| karmakaze wrote:
| > to a higher-level view, that enabled Kasparov to find where
| Deep Blue was weak
|
| I don't know if the same could be said of AlphaZero.
| gameswithgo wrote:
| Just a note on chess programs: Until very recently (months) the
| best chess programs consisted of both neural-network-based
| ones and search-based ones similar to Deep Blue. They would go
| back and forth for supremacy, and both are way better than any
| human.
|
| Recently, one of the top search based engines (stockfish) added
| neural networks on top of it, and it got _significantly_ better
| still. This was a couple months back, maybe the pure neural
| nets have caught up again!
| bolzano wrote:
| Nice, do you have a link to any details on this ongoing
| battle?
| CPLX wrote:
| I disagree. I'm a professional-level musician and a fan of GEB
| and I always felt the music and Bach analogies in the book were
| elegant and insightful.
| Bud wrote:
| As a professional singer of Bach, I feel that the integration
| of Bach made a lot of sense and worked very well. I am forced
| to wonder if the author of this review was a high-level
| musician.
| hobs wrote:
| As a person who is a musical idiot I thought it was a great
| way to establish the concept of patterns and self-reference
| in something like music. The notion of the self-referential
| canons was seriously impressive to me, and I thought it served
| as a great jumping-off point into more esoteric conversations
| in the book.
| TheOtherHobbes wrote:
| There still aren't Bach-equivalent counterpoint solvers. ML
| has been pretty disappointing for music. It can make some
| music-like objects, especially for short phrases, but it
| hasn't done a convincing job of producing the real thing.
|
| Music is hard. It's far harder than most people realise.
|
| Winning a game is a relatively easy problem, because _you
| know when you have won._ Music is much more nebulous.
| Grammatical correctness is preparatory to creative intent.
| Even basic metrics of correctness are rather fuzzy.
|
| ML doesn't have any concept of intent, so it tends to flail
| around in a statistical space - and to sound like it.
| jancsika wrote:
| Judgment of music quality also happens to be highly
| subjective.
|
| Suppose an AI created a Bach-like contrapuntal exercise
| with a lot of cross-relations (i.e., clashing dissonant
| sounds). Would scholars judge it to be at the level of Bach
| _because_ of the handling of these numerous cross-
| relations? Or would they claim it isn't as sophisticated
| as Bach because having that many cross-relations isn't
| stylistically accurate? Based on the historiography of
| musicology I would guess the latter. _Even though_ there's
| an unfinished extant Bach fugue in C minor where he's
| essentially role-playing a cross-relation obsessed AI
| composer.
|
| The history of theory is even worse. For example, Schenker
| created theories based on the mastery that Bach, Beethoven,
| and others displayed in their compositions. But when a
| piece by Bach didn't fit his theory, he claimed that Bach
| actually wrote it incorrectly!
|
| I'm afraid AI music research is going to have to put up
| with the same irrationality of humans as every other
| subjective endeavor. That is, musicians will kick and
| scream that AI "isn't doing it right," then when it
| inevitably becomes embedded as a core part of music making
| we'll hear all about how music produced with classical
| algorithms on old protools rigs sounds so much "warmer."
| commandlinefan wrote:
| On the other side of the spectrum, I myself am a relative
| musical neanderthal, having gone into the book not even
| knowing what the word "fugue" meant. I was fascinated by the
| way he related this sort of musical "recursion" back to
| general mathematical problem solving. When I read OP's
| dismissal of it, I figured I was just underthinking whether
| it was a good analogy or not, being the unsophisticate that I
| am - nice to see somebody who does have some expertise
| on the subject weighing in.
| _jal wrote:
| Agree. I am not a musician, but have absorbed a lot by having
| lived with them.
|
| Weaving Bach in makes perfect sense.
| pdonis wrote:
| _> As a professional singer of Bach, I feel that the
| integration of Bach made a lot of sense and worked very
| well._
|
| I am only an amateur player of Bach, but I felt the same way.
| However, it probably is worth noting that a major reason why
| Bach works in this context is the particular nature of his
| music--or more precisely the particular subsample of his
| music that GEB refers to. I don't think the music of most
| other composers, or even some of Bach's less contrapuntal
| music, would have made sense in this context.
| eternalban wrote:
| GEB, and its rebuttal, Emperor's New Mind [1], are both fantastic
| books. Both are must-reads regardless of one's views on AI (or
| approaches to AI).
|
| I never bought Hofstadter's thesis -- am solidly in Penrose camp
| -- and find it interesting that Penrose went on to develop his
| thesis even further [2][3] while Hofstadter has not. That said,
| Hofstadter is clearly a far more engaging and talented writer,
| and in context of Symbolic AI, his book outshines (imho) Minsky's
| Society of Mind [4] (which I found rather underwhelming).
|
| There is a mild sense of tragedy about Hofstadter. Something
| about massive early success...
|
| [1]:
| https://www.goodreads.com/book/show/179744.The_Emperor_s_New...
|
| [2]:
| https://en.wikipedia.org/wiki/Roger_Penrose#Physics_and_cons...
|
| [3]:
| https://en.wikipedia.org/wiki/Orchestrated_objective_reducti...
|
| [4]:
| https://www.goodreads.com/book/show/326790.The_Society_of_Mi...
| nicholast wrote:
| GEB as literature stands up much better than GEB as musings on
| AI. The book practically invented a new form of expression that
| hasn't been matched to this day, with recursion between
| modalities in every chapter.
| ralphc wrote:
| It's ostensibly a book about AI, but when I first read it, summer
| after high school (or after freshman college year?), it blew my
| mind as a book that showed me that computer science was science
| and mathematics and not just typing in programs and fiddling in
| BASIC on my TRS-80.
| reedf1 wrote:
| I read this fresh out of university and it made me long for
| a culture that no longer exists. If it ever did, that is, for
| anyone beyond a few elites. The world may have been for two sigma
| individuals in the past but now it's for three or four sigma
| individuals. Leaving me in what feels like intellectual bronze or
| silver.
| jdonaldson wrote:
| There is no single measure of intellect. We as a society just
| set some arbitrary goalposts and added some ways to measure
| contribution. The ability to take a step back and try to better
| understand reality will always be valuable, and there will
| always be a horizon there that we cannot measure.
| ppezaris wrote:
| Measuring intellect is one thing, valuing it is another.
| Within my lifetime I've sadly seen intellectual excellence
| valued less and less by society with each passing decade.
| gameswithgo wrote:
| Best bet is probably something like cern or quality
| physics/math/philosophy faculty at a university somewhere
| trylfthsk wrote:
| My feelings on this are very similar; looking at my competition
| in trying to enter grad school from industry this past year has
| me awed and feeling totally outclassed (partly due to the
| field swelling because of covid) - undergrad publications,
| completely flawless academic records, awards, recommendations
| from famous professors, etc. By comparison, I'm just some mook
| from a small state school with average grades & good test
| scores.
|
| Perhaps I need to be more willing to make certain sacrifices,
| since it's increasingly clear so many do. In that respect, I'm
| deeply humbled.
| markus_zhang wrote:
| I kind of gave up understanding relativity and quantum physics,
| but might try again later.
| kazinator wrote:
| > _The truth is that, since music does not denote, the issues of
| levels of representation and self-reference really do not arise_
|
| I absolutely disagree. Firstly, music may not denote in the same
| way as math notation or a sentence of natural language, but it
| does refer to something external: it triggers sensations in the
| listener, and it does so in some ways that are objective. For
| instance, some notes drive satisfyingly ("resolve toward") some
| other notes, whereas in other situations there is a sense of
| ambiguity. Using harmonic trickery, the composer can predictably
| lead the listener around in a labyrinth; the book goes into that.
|
| Secondly, if we accept the premise that music doesn't denote
| anything external, then we have to confront the remaining fact
| that music references, and therefore denotes, other music, which
| is potentially itself. Lack of external reference not only does
| not preclude self-reference, but emphasizes it as the only
| interesting option.
|
| Music intersects with computer science, obviously; there are
| people who do computational music. There are hard problems in
| music. Bach belongs in that book, because we have not produced a
| convincing Bach robot.
|
| Why is Escher acceptable but not Bach? An Escher drawing like
| _The Waterfall_ denotes something: it denotes a nicely drawn
| piece of architecture denoting an artificial waterfall driving a
| waterwheel. But the denotation isn't the point; it's the
| impossibility of the object itself, and of the perpetual motion
| of the waterwheel. The art has numerous details which don't
| contribute to that at all. For instance, the waterfall structure
| has numerous bricks, all carefully drawn. Music is the same. It
| has sensual aspects, like the timbre of instruments and the
| expressiveness of the performance, or the power of many
| instruments in unison and so on. We can't be distracted and
| fooled by that.
| misiti3780 wrote:
| I tried reading GEB years ago and then gave up, but then I read
| "I Am A Strange Loop" and thought it was fascinating. I think the
| latter is just a smaller version of the former, but more concise as
| he/they was able to think about it for a few more years.
| verdverm wrote:
| Strange Loop is much more approachable and in many ways is a
| more refined and concise treatment of his ideas
| ppezaris wrote:
| The latter is a _much_ smaller version of the former. GEB's
| central theme has to do with levels of meaning, which is
| delivered not only in substance but also in form. One has to
| look no further than Contracrostipunctus, or the final dialogue
| to see it on full display.
|
| https://godel-escher-bach.fandom.com/wiki/Contracrostipunctu...
|
| Hidden in the dialogue, as the first letter of each phrase,
| is the acrostic "Hofstadter's Contracrostipunctus
| Acrostically Backwards Spells 'J. S. Bach'", which, when
| read acrostically backwards, does indeed spell J. S. Bach.
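|
| The mechanics of an acrostic are simple to sketch. The
| phrases below are made-up stand-ins, not Hofstadter's
| actual dialogue text (which encodes a much longer
| sentence); here the initials spell "BACH".

```python
def acrostic(phrases):
    """Concatenate the first letter of each non-empty phrase."""
    return "".join(p.strip()[0] for p in phrases if p.strip())

# Hypothetical stand-in phrases whose initials spell "BACH".
phrases = [
    "Beauty hides in the structure.",
    "A canon can quote itself.",
    "Consider the levels of meaning.",
    "Here, the form is the message.",
]
print(acrostic(phrases))        # 'BACH'
print(acrostic(phrases)[::-1])  # 'HCAB' -- the same acrostic read backwards
```

| Hofstadter's version adds a level: the hidden sentence
| itself describes a second acrostic reading, so the form of
| the dialogue talks about its own form.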
| leephillips wrote:
| he/they?
| jdonaldson wrote:
| Hofstadter is a polymath, and he's arguably changed the
| understanding of the philosophy of existence as much as the
| understanding of these philosophies has changed him. I
| actually like using "they" to describe this process.
| leephillips wrote:
| Because he's too smart to be just one person?
| jdonaldson wrote:
| Maybe we recognize the contributions and influences of
| others in his work? Hofstadter is a big time guy, but
| he's not a self-centered egoist like Kurzweil. He'll
| regularly talk about the ideas he's bounced around with
| buddies like Daniel Dennett, etc. I see him as more of a
| poet than a philosopher or scientist. He is piecing
| together little bits of truth from many, many, different
| domains and showing how they reach the same conclusions.
| There's been nobody like him for over 50 years, so it's
| very difficult to draw comparisons to others, or even
| measure the impact of his contributions.
| mprovost wrote:
| I talked myself into one of his classes as an undergrad
| and definitely came away with the belief that he was the
| smartest person I would ever meet. It's been 23 years and
| that's still true and I'd take a bet today that it will
| continue to be true for the rest of my life.
| misiti3780 wrote:
| I actually wasn't sure if there was more than one person
| involved in all of this work... so I hedged.
| peter303 wrote:
| I had the fortune of being in a Stanford Chinese language class
| with Doug in 1976 when he was writing GEB. He passed around
| line-printer drafts of chapters for feedback. I don't know if
| anyone anticipated the book would be a bestseller and win the
| Pulitzer. (Though most authors secretly hope for that.)
|
| Doug hypothesized that knowing a language as different from
| English as Chinese might shed insight on how the mind works. I
| think he went on to other AI topics such as analogies and
| patterns. More recently it sounds like he is looking at
| Chinese again, judging from his blogs.
| galaxyLogic wrote:
| I think language is the key to AI, and thus symbolic
| reasoning is too. I wouldn't call someone intelligent who
| cannot explain how they arrive at certain conclusions. They
| might be correct and useful conclusions, but if you or the
| neural network cannot communicate to others how those
| conclusions were reached, that knowledge cannot be shared.
| If it cannot be shared by means of some kind of language
| which explains its reasoning based on logical primitives, we
| don't really call it intelligent, do we?
|
| We don't call a person or machine intelligent if they can't
| explain their reasoning. We don't really trust answers without
| learning the rationale behind them. And isn't this what is
| happening with some neural networks? They usually give a
| more or less correct answer, but can sometimes give a
| totally wrong one too. Not really trustworthy, because the
| logic behind the answers is simply not there.
| savanaly wrote:
| Where does he blog?
___________________________________________________________________
(page generated 2021-03-29 23:01 UTC)