[HN Gopher] AI Index 2021
___________________________________________________________________
AI Index 2021
Author : T-A
Score : 52 points
Date : 2021-03-03 18:03 UTC (4 hours ago)
(HTM) web link (hai.stanford.edu)
(TXT) w3m dump (hai.stanford.edu)
| crazypython wrote:
| Whenever I explain computer programs that write computer
| programs (compiler theory and programming language theory) to
| someone, they immediately think "That's AI." Yet compiler
| research and programming language research don't get any AI
| funding and aren't considered AI.
| hntrader wrote:
| Perhaps a key distinction is whether the algorithm is mostly
| learned from data or whether the algorithm is mostly hand-
| engineered.
|
| For what most people refer to as "AI", the former is a
| necessary (although not sufficient) condition.
| joe_the_user wrote:
| _Perhaps a key distinction is whether the algorithm is mostly
| learned from data or whether the algorithm is mostly hand-
| engineered._
|
| In a broad view, I don't think there's any meaningful metric
| for the "mostly" you're talking about.
|
| Sure, "mostly" seems to make sense in the context of
| laboriously trained deep networks. But if the training
| process is improved, drawing a line between training and
| "looking and understanding" become hard/purposeless. At the
| limit, suppose some crazy genius created a small, "hand
| crafted" program, from GOFAI or whatever principles and this
| program "knew how to learn". If you fed it Wikipedia or
| whatever, it understood that and by it's content, it would
| then be "mostly trained on data" or would it?
| YeGoblynQueenne wrote:
| Well, that's how neural networks, er, work. They are hand-
| crafted systems with a human-devised training algorithm,
| backprop. Their _output_ is a model trained on data. But
| the algorithms that train the model are themselves coded
| manually.
|
| Same goes for basically all machine learning algorithms.
| They are hand-crafted systems that train models from data.
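|
| To make that concrete, a minimal sketch (assuming Python
| with numpy; a toy example, not anything from the report): a
| one-hidden-layer network where the forward pass, the
| backprop gradients and the update rule are all written by
| hand, and only the weight values come out of the data.
|
|     import numpy as np
|
|     # Everything here is hand-coded: architecture, gradients,
|     # update rule. Only the final weights are "learned".
|     rng = np.random.default_rng(0)
|     X = rng.normal(size=(200, 2))
|     y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)
|
|     W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
|     W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
|
|     for _ in range(2000):
|         h = np.tanh(X @ W1 + b1)               # forward pass
|         p = 1 / (1 + np.exp(-(h @ W2 + b2)))
|         dz2 = (p - y) / len(y)                 # backprop, by hand
|         dW2 = h.T @ dz2; db2 = dz2.sum(0)
|         dh = (dz2 @ W2.T) * (1 - h ** 2)
|         dW1 = X.T @ dh; db1 = dh.sum(0)
|         for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
|             P -= 0.5 * G                       # hand-coded SGD step
|
|     p = 1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))
|     print("train accuracy:", np.mean((p > 0.5) == y))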
| mjburgess wrote:
| Indeed, it is impossible to learn anything "from data".
|
| Data is just measurement of observable variables. A prior
| model of the meaning of those measurements is required
| first to parse them into a coherent "observational
| model"; and then many prior models are required to parse
| that into a representational model.
|
| Few, if any, extant "AI" systems are able to take the
| latter step, not least because it requires more than
| measurements of the target system, which are always
| ambiguous. (In particular, it requires a coordinated
| body.)
| YeGoblynQueenne wrote:
| For most of the history of AI research the vast majority of
| AI applications consisted of hand-crafted programs. For
| example, automated theorem provers, planners, SAT solvers,
| game-playing algorithms, expert systems, search algorithms
| etc. are all hand-crafted rather than learned from data.
|
| What you say, that it's AI if it's learned from data,
| applies to machine learning, but machine learning is only
| one branch of AI. Of course it's the branch that most people
| know today, but go back maybe a few years and look at e.g.
| the classes on AI taught at places like Stanford or MIT,
| and you'll find that they're all about probabilities and
| logic, and machine learning does not feature very
| prominently. You can see the same thing in the staple AI
| textbook, "Artificial Intelligence: A Modern Approach",
| which is pretty much all hand-crafted approaches.
| bluecalm wrote:
| It's strange that it came down to this. AI is a field
| inspired by our notion of human intelligence. Yet a human
| can learn to recognize road signs with a fraction of the
| data that state-of-the-art "AI" algorithms need these days.
| To play chess at the same level as a human master, a state-
| of-the-art neural-network-based engine needs data from tens
| of millions of self-played games, several orders of
| magnitude more than a human master encounters (and then the
| computer has a huge advantage in calculation speed and
| accuracy as well). I don't think deep learning is the end
| of it. It would be really great to see people venturing
| into different approaches. We can do much better than
| pattern recognition on huge amounts of data.
| superbcarrot wrote:
| > several orders of magnitude more than a human master
| encounters
|
| Not if you account for the millions of years of evolution
| that got your nervous system to its current state, and for
| access to books and training materials that add up to
| multiple decades or centuries of precompiled and
| synthesised expertise from other people who learned chess
| in similar ways to you.
|
| This is a minor point though. I agree that deep learning
| isn't it, or that if it is, that would be quite
| underwhelming.
| mjburgess wrote:
| Evolution produced the system, not the chess training.
| Humans aren't born able to play chess.
|
| The relevant comparison is "training time spent on
| chess".
| ad404b8a372f2b9 wrote:
| If you want to compare today's state-of-the-art chess-
| playing model to a human, that's the relevant comparison;
| but if you want to compare the current field of AI to
| humans, the line between the system and the training
| becomes fuzzy. Over the past 10 years of AI we have
| massively reduced the amount of data required to achieve a
| given score with computer vision models by making changes
| to the architecture of the neural networks. In fact that's
| where most of the improvements have come from, rather than
| from increasing the amount of data. I don't see a reason to
| believe it won't keep working this way. I'd say we're being
| damn efficient, and it doesn't feel fair to say "look at
| how much data they need, the approach is fundamentally
| wrong" when we've been at it for such a short amount of
| time compared to the millions of years it took humans to
| evolve.
| Isinlor wrote:
| Humans designed chess; chess is made to be played by
| humans.
|
| If you scramble all the pixels in some game in a random but
| fixed way, you will never learn to play it.
|
| But the training speed of a simple feed-forward network
| will not change at all.
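|
| A minimal sketch of that invariance (assuming Python with
| numpy, and a hand-coded logistic regression standing in for
| any fully connected first layer): permuting the input
| columns with a fixed permutation just relabels the weights,
| so training proceeds identically.
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     X = rng.normal(size=(256, 64))     # toy flattened "images"
|     y = (X.sum(axis=1) > 0).astype(float)
|     perm = rng.permutation(64)         # fixed pixel scrambling
|
|     def train_acc(X, y, steps=500, lr=0.1):
|         # full-batch gradient descent on logistic regression
|         w, b = np.zeros(X.shape[1]), 0.0
|         for _ in range(steps):
|             p = 1 / (1 + np.exp(-(X @ w + b)))
|             w -= lr * X.T @ (p - y) / len(y)
|             b -= lr * np.mean(p - y)
|         p = 1 / (1 + np.exp(-(X @ w + b)))
|         return np.mean((p > 0.5) == y)
|
|     print(train_acc(X, y))             # original pixels
|     print(train_acc(X[:, perm], y))    # scrambled: same number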
| currymj wrote:
| it's true, to some extent AI research is "whatever AI
| researchers do", whereas if work tackling similar problems
| is done by PL theory, operations research, or whoever, it's
| no longer AI.
|
| of course this goes both ways; during the AI winter everyone
| doing AI research was scrambling to rebrand themselves as doing
| OR, optimization, computer vision, etc.
|
| and there is still a difference in the terminology and
| conceptual tools. look at the problem of ensuring that a system
| will output values that satisfy some constraints.
|
| classically there was a lot of work on constraint solving
| by people who called themselves AI researchers. meanwhile
| operations research people worked on integer programming
| approaches to the same problem. and PL theory people work
| with abstract interpretation.
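|
| for concreteness, a minimal sketch of the classic AI-styled
| take on that problem (assuming Python; the OR take would be
| an integer program, the PL take an abstract domain):
| backtracking search over finite domains, pruning partial
| assignments that already violate a constraint.
|
|     # toy CSP: x, y, z in {0..9}, x + y == z, x < y
|     domains = {"x": range(10), "y": range(10), "z": range(10)}
|     constraints = [
|         (("x", "y", "z"), lambda a: a["x"] + a["y"] == a["z"]),
|         (("x", "y"), lambda a: a["x"] < a["y"]),
|     ]
|
|     def consistent(a):
|         # only check constraints whose variables are all bound
|         return all(f(a) for scope, f in constraints
|                    if all(v in a for v in scope))
|
|     def solve(a, unbound):
|         if not unbound:
|             return a
|         var, rest = unbound[0], unbound[1:]
|         for value in domains[var]:
|             trial = {**a, var: value}
|             if consistent(trial):          # prune early
|                 found = solve(trial, rest)
|                 if found is not None:
|                     return found
|         return None
|
|     print(solve({}, list(domains)))  # {'x': 0, 'y': 1, 'z': 1}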
| dr_dshiv wrote:
| IMO, cybernetics is more conceptually grounded than AI -- at
| least in so far as it is possible to objectively define what
| constitutes a cybernetic system. The term "Artificial
| Intelligence" was literally invented for the purposes of
| attracting grant money. It is still good for that.
| amelius wrote:
| What is their definition of AI?
___________________________________________________________________
(page generated 2021-03-03 23:01 UTC)