[HN Gopher] Good Old Fashioned AI is dead, long live New-Fangled AI
___________________________________________________________________
Good Old Fashioned AI is dead, long live New-Fangled AI
Author : isomorphy
Score : 61 points
Date : 2022-11-15 19:36 UTC (3 hours ago)
(HTM) web link (billwadge.com)
(TXT) w3m dump (billwadge.com)
| shrimpx wrote:
| It could affect commercial artists deeply, like game artists and
| commercial illustrators making logos and icons and whatnot.
|
| But it won't affect studio artists at all. Studio art is not
| about "the image", it's about the practice, physical qualities of
| the artifacts, and an ongoing evolution of the artist.
| tomrod wrote:
| The subreddit for stablediffusion has several examples of high
| quality stitching of SD-generated images. If I am interpreting
| your vernacular of "studio artist" correctly, then yes, studio
| artists will be affected.
|
| Artists that produce a "real" medium like charcoal, sculpting,
| etc. aren't directly affected yet, but could be in the future.
|
| As always, there is a power law distribution when it comes to
| perceived value. It will be interesting to see how this
| evolves.
| time_to_smile wrote:
| People spend an awful lot of time talking about current successes
| in AI without often reflecting on how much (or how little,
| actually) AI impacts their lives. Despite all of the energy put
| into current-gen AI, as far as everyday impact goes, the biggest
| things I can think of are:
|
| - Spam filtering/email sorting
|
| - Web search
|
| - GPS/Wayfinding
|
| - Voice assistants
|
| These are the only practical applications of "AI" that I use more
| or less every day (I'd be happy to be reminded of others). Of
| these 4 I personally have found spam filtering to be getting
| _worse_ recently, as well as web search. The first 3 were all
| more or less solved over a decade ago, and, while I find Siri
| convenient, I wouldn't mind much at all if voice assistants
| completely disappeared tomorrow.
|
| I'm not denying we've had an amazing decade of pushing the
| research needle further. There have been tons of impressive AI
| projects out there. However, the practical, day-to-day
| improvements we've seen with the existence of AI seem to be few
| and far between, and this is even more true when you start asking
| about any AI work done in the last decade. I was happier with the
| state of "AI" in my life in 2006 than I am today.
|
| I just find it a bit fascinating how much energy has gone into
| both generic data science as well as more serious AI research and
| yet how little the reach of AI has grown in the last 10 years.
| All of the cool AI that I use existed _before_ data science was
| declared the "sexiest job".
| flooo wrote:
| Most novel AI serves its user perfectly well. And since this
| type of AI requires lots of labelled data, those users are
| typically large data-harvesting organisations.
|
| Some other applications that you may use daily are translate
| and face unlock/recognition.
|
| It's interesting that you mention search and spam filtering,
| which both include an adversarial component. It seems to me
| that the adversarial AI has become better, in line with
| expectations given the democratisation of AI tools and
| knowledge.
| tomrod wrote:
| You might be surprised where AI shows up.
|
| Use a credit card? Fraud monitoring, KYC, and other financial
| models run in the background (e.g. the Early Warning service).
|
| Log into a website? Application monitoring with anomaly
| detection.
|
| Own a 401k with shares in a financial vehicle like an ETF? AI
| used to predict the market for in-the-money trades.
|
| Gone to the ER? Risk levels of mortality, sepsis, etc. are
| constantly pushed to your medical record (in many top-tech
| hospitals, like Parkland Hospital in Dallas and similar).
| woopwoop wrote:
| Do any of those applications use neural nets in any non-
| trivial way? I'm pretty sure that kind of stuff is all
| classical statistical modeling.
| vagrantJin wrote:
| I agree with your take mostly. But also, some things are
| getting better work-wise, like video/image editing. Something
| that used to take an animator a week or two now takes all of a
| minute or less. Some startups in the motion capture space are
| doing some wild things, and in a few years even small indie game
| studios will have mocap parity with AAAs.
| falcolas wrote:
| As always, they misspelled the acronym for "Machine Learning".
| There's nothing "Artificial" or "Intelligent" here but a
| mathematical algorithm operating on an algorithmically-encoded
| dataset.
|
| If anything, it's closer to an encryption algorithm where the
| keys can decrypt deterministic parts of the plaintext from the
| ciphertext and soften the edges a bit.
| IanCal wrote:
| This is a battle lost long ago, since AI has been used as a
| term to describe far simpler things than that for over 60
| years.
| drdeca wrote:
| While I don't really endorse it, I understand the objection to
| a name with the word "Intelligence" as part of it.
|
| I don't understand the objection to the word "Artificial".
|
| Why do you say that there's nothing "Artificial" about these
| programs? Now, there may be contexts in which you could call a
| program "natural" in the sense of "the natural way to do
| something", but, at the same time, are not all computer
| programs, in a different sense, artificial?
| short_sells_poo wrote:
| I like to equate it to a lossy compression.
| nl wrote:
| Compression _is_ intelligence[1] - which shows exactly how
| misguided the OP's attempted distinction between ML and AI
| is.
|
| [1] http://prize.hutter1.net/#motiv
| mattnewton wrote:
| Autoencoders really drove this home for me: you are trying to
| find an efficient compressed representation of the dataset, and
| the most efficient way to do that hopefully ends up learning
| useful rules about the data.
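|
| A minimal sketch of that idea, assuming PyTorch (an
| illustration, not from the thread; the architecture and sizes
| are arbitrary): the narrow bottleneck forces a compressed
| representation, and reconstruction error is the only training
| signal.
|
|   import torch
|   import torch.nn as nn
|
|   # Bottleneck of 8 units forces a compressed code for the
|   # 784-dimensional input (e.g. flattened 28x28 images).
|   class AutoEncoder(nn.Module):
|       def __init__(self):
|           super().__init__()
|           self.encoder = nn.Sequential(
|               nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 8))
|           self.decoder = nn.Sequential(
|               nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 784))
|
|       def forward(self, x):
|           return self.decoder(self.encoder(x))
|
|   model = AutoEncoder()
|   opt = torch.optim.Adam(model.parameters(), lr=1e-3)
|   x = torch.rand(32, 784)          # stand-in batch of data
|   loss = nn.functional.mse_loss(model(x), x)
|   loss.backward()                  # reconstruction error drives
|   opt.step()                       # the learned compression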
| wwwtyro wrote:
| Why do the eyes in the generated images always look a little off?
| Most facial features usually appear photorealistic to me, but the
| eyes always have a little smudge or something in them that gives
| them away.
| tjnaylor wrote:
| https://blog.oimo.io/2022/10/21/human-or-ai-walkthrough/
|
| This is a blog post that investigates AI-generated eyes in
| particular and how to distinguish them from eyes drawn by
| human artists.
| *However, AI painting will almost certainly vary the coloring
| of the left and right eyes and the way highlights are added.
| Humans understand that "the left and right eyes have the same
| physical shape and sit in the same lighting, so there is
| naturally a consistency there." The AI instead works from
| something like "I don't really know what an eye is, but it's
| something like this, placed around here." The result still
| looks enough like an eye for humans to recognize it, but the
| details contain many defects. The most distinctive is the
| highlight that melts into the pupil and breaks its shape.
| Humans know that "first there is the eyeball, the pupil sits
| within it, and reflected light then forms a gloss on top," so
| it is obvious to us that a highlight may cover part of the
| pupil but the pupil's own shape should not collapse. An AI
| that learns only by looking at finished illustrations
| recognizes no such logical relationship between the whites of
| the eyes, the pupils, and the highlights; it can't, because it
| was never given that as data. Unnatural deformation of the
| pupil is another tell: humans know that the pupil is
| originally a perfect circle, but an AI trained only on
| finished illustrations does not know the pupil's original
| shape, so such errors occur. Another feature of AI drawings is
| that they often subtly change the color of the left and right
| eyes. Of course, some characters are designed with different
| eye colors (heterochromia), but such characters can in most
| cases be clearly recognized as having different colors. The
| tell is colors that look the same at first glance but differ
| on closer inspection. Even then, such a character would not be
| strange, so this is not an important basis for judgment; and
| eye color naturally shifts with the surrounding environment,
| so be careful not to be misled by that.*
| KaoruAoiShiho wrote:
| Maybe because the training data contain a lot of bad photos
| with red-eye.
| triska wrote:
| The "new-fangled" AI, as the article calls it, is often useful
| when the stakes are low, and you can accept mistakes in outcomes.
| Examples of such applications are: trying to determine which of
| your friends appear in a photo, which movies a subscriber _may_ be
| interested in, or which action _could_ lead to victory in a
| computer game. Getting a rough translation of a newspaper entry,
| as mentioned in the article, is also a good example.
|
| As soon as you need reliable outcomes, such as certainty
| _whether_ an erroneous state can arise in a program, _whether_ a
| proof for a mathematical conjecture exists, or _whether_ a
| counterexample exists, exhaustive search is often necessary.
|
| The question then soon becomes: How can we best delegate this
| search to a computer, in such a way that we can focus on a clear
| description of the _relations_ that hold between the concepts we
| are reasoning about? Which symbolic _languages_ let us best
| describe the situation so that we can reliably reason about it?
| How can we be certain that the computed result is itself correct?
|
| The article states: _" The heart of GOFAI is searching - of trees
| and, more generally, graphs."_ I think one could with the same
| conviction state: "The heart of GOFAI is reasoning - about
| relations and, more generally, programs."
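|
| As a toy illustration of that framing (a Python sketch of my
| own, not triska's): we state only the _relation_, here
| primality, and delegate the exhaustive search for a
| counterexample to the machine.
|
|   # Conjecture (false): every odd number n >= 3 is prime.
|   def is_prime(n):
|       return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))
|
|   # Exhaustive search over a finite range: the program encodes
|   # only the relation; the machine does the searching.
|   counterexample = next(n for n in range(3, 10_000, 2)
|                         if not is_prime(n))
|   print(counterexample)  # => 9, so the conjecture is refuted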
| bade wrote:
| Well said
| skissane wrote:
| > As soon as you need reliable outcomes, such as certainty
| whether an erroneous state can arise in a program, whether a
| proof for a mathematical conjecture exists, or whether a
| counterexample exists, exhaustive search is often necessary.
|
| Proof checking requires 100% reliability. But if you are
| searching the space of all possible proofs for a valid one,
| that process does not require 100% reliability. On the
| contrary, automated theorem provers rely on heuristics to guide
| their exploration of that space, none of which work 100% of the
| time. "Exhaustive search" is an infeasible strategy, because
| the search space is just too large. Finding proofs is the
| really hard part (NP-hard), and the part which most stands to
| benefit from "AI" techniques - checking their validity is a lot
| easier (polynomial time).
|
| "New AI" deep-learning techniques can be used to augment
| automated theorem provers, by giving them guidance on which
| areas of the search space to target - see for example
| https://arxiv.org/abs/1701.06972 - that produced a seemingly
| modest improvement (3 percentage points) - but keep in mind how
| hard the problem is, a 3 percentage point improvement on a very
| hard problem can actually be a big deal - plus I don't know if
| any more recent research has improved on that.
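|
| A tiny Python sketch of that asymmetry (my illustration, with
| a made-up 3-variable formula): checking a candidate answer is
| one cheap pass, while finding one means walking an exponential
| space, which is exactly where learned heuristics can help.
|
|   from itertools import product
|
|   # CNF formula: each clause is a list of (variable, polarity).
|   cnf = [[(0, True), (1, False)],
|          [(1, True), (2, True)],
|          [(0, False)]]
|
|   def check(assignment):     # "checking": one pass, polynomial
|       return all(any(assignment[v] == pol for v, pol in clause)
|                  for clause in cnf)
|
|   # "Finding": brute force over 2^n candidates. Heuristics (or
|   # a learned guide) only change which candidates get tried;
|   # the check itself stays cheap either way.
|   solution = next((a for a in product([False, True], repeat=3)
|                    if check(a)), None)
|   print(solution)            # => (False, False, True)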
| _carbyau_ wrote:
| The idea of chaining "AI to guide/choose AI for the next
| step" is where I expect more impressive results in the
| future. It will be important to understand the limitations of
| AI to be sure of proper placement.
| falcolas wrote:
| So, there's an area of research underway called "AI
| Assurance" which seeks to answer many of these questions.
|
| Some things they're attempting:
|
| - Creating explainable outcomes by tracing the inner workings
| of ML models.
|
| - Looking for biases in models using random inputs & looking
| for biased outputs.
|
| - Using training sets with differently weighted models to find
| attacks and biases.
|
| etc.
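|
| A toy sketch of the second item, probing for bias with random
| inputs (the model and feature names here are hypothetical,
| purely for illustration):
|
|   import random
|
|   def model(features):        # hypothetical model under test
|       return 1 if features["income"] > 50_000 else 0
|
|   # Flip a single "sensitive" attribute on random inputs and
|   # count how often the output changes; systematic flips would
|   # indicate the model keys on that attribute.
|   trials, flips = 1_000, 0
|   for _ in range(trials):
|       x = {"income": random.uniform(0, 100_000), "group": "A"}
|       y = dict(x, group="B")   # counterfactual twin
|       flips += model(x) != model(y)
|   print(f"output changed on {flips}/{trials} sensitive flips")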
| nonrandomstring wrote:
| The tragedy is that GOFAI did all these things as built-ins.
| Procedural expert systems have been doing introspection,
| backtracing, declaring confidence intervals etc since the
| 1960s. Layering "assurance" on top of inherently jittery
| statistical/stochastic and neural systems seems to
| misunderstand how these models evolved, where they come from
| and why there are alternatives.
| ShamelessC wrote:
| Another tragedy: GOFAI hasn't made a dent in any of these
| problems for a very long time.
| thwayunion wrote:
| _> As soon as you need reliable outcomes, such as certainty
| whether an erroneous state can arise in a program, whether a
| proof for a mathematical conjecture exists, or whether a
| counterexample exists, exhaustive search is often necessary._
|
| Checking proofs is easier than finding proofs.
|
| _> The question then soon becomes: How can we best delegate
| this search to a computer, in such a way that we can focus on a
| clear description of the relations that hold between the
| concepts we are reasoning about? Which symbolic languages let
| us best describe the situation so that we can reliably reason
| about it?_
|
| These questions are largely answered. Or, at least, the
| methodology for investigating these types of questions is well-
| developed.
|
| I think the more interesting question is co-design. What do
| languages and logics look like when they are designed for
| incorporation into new-fangled AI systems (perhaps also with a
| human), instead of for purely manual use?
| [deleted]
| [deleted]
| shafoshaf wrote:
| The only barrier for higher stakes applications is going to be
| the frequency of errors. Flying an airplane or running a
| factory has a lot less margin for error, but humans don't do
| those things perfectly either (Chernobyl, Three Mile Island,
| Union Carbide-Bhopal disaster). It doesn't have to be perfect,
| just better than humans. And in fact, I'd argue that having
| non-deterministic outcomes prevents systemic failure, like the
| single point of failure for all those drones in the crappy
| episodes of Star Wars.
| MonkeyMalarky wrote:
| There's a weird bit of induced demand, like widening a
| freeway: the AI makes errors less often than a human but is
| more scalable, so the absolute number of errors increases. In
| the case of self-driving cars, I guess it could come from
| hordes of autonomous shipping trucks that outnumber existing
| truck drivers.
| triska wrote:
| In addition to the _frequency_ , it is also about the
| _magnitude_ of unintended consequences. As an example,
| consider the Therac-25 accidents:
|
| https://en.wikipedia.org/wiki/Therac-25
|
| Not only was the software not perfect, it was so erroneous
| that the patients were struck with approximately 100 times
| the intended dose of radiation.
|
| Such extreme outliers should be completely and reliably ruled
| out in safety-critical applications, even if they occur only
| very rarely.
| setr wrote:
| > The only barrier for higher stakes applications is going to
| be the frequency of errors.
|
| Frequency and strength. My issue with e.g. image classifiers
| is that when they're wrong, they're _catastrophically_ wrong
| -- they don't misidentify a housecat as a puma, they
| misidentify a cat as an ostrich.
| nradov wrote:
| But there's the rub. It's impossible to determine through
| testing whether a particular AI system will actually have a
| lower frequency of errors than humans. You can program an AI
| system to handle certain failure modes and test for those in
| simulation. But complex systems tend to have hidden failure
| modes which no one ever anticipated, so by definition it's
| impossible to test how the AI will handle those. Whereas an
| experienced human can often determine the correct course of
| action based on first principles.
|
| For example, see US Airways Flight 1549. Airbus had never
| tested a double engine failure in those exact circumstances
| so the flight crew disregarded some steps in the written
| checklist and improvised a new procedure. Would an AI have
| handled the emergency as well? Doubtful.
| joe_the_user wrote:
| _The only barrier for higher stakes applications is going to
| be the frequency of errors._
|
| IE, "The only barrier to the software working perfectly is
| it's tendency to fail".
|
| Which is to say this sort of argument effectively assumes,
| without proof, that there are no structural barriers to improving
| neural network performance in the real world. The thing is,
| the slow progress on self-driving cars shows that reducing
| the "frequency of errors" can turn from a simple exercise in
| optimizing and pumping in more data to a decades long debug
| process.
| blueyes wrote:
| GOFAI was never more than a rules engine. If-then statements.
|
| Agree with you about probabilistic AI being useful in low-
| stakes situations, at least at first.
| mistrial9 wrote:
| If-then rules can lead to non-deterministic outputs once you
| add some simple feedback loops.
|
| Not disagreeing completely, but both "the questions that are
| reasoned about" and "the code that reasons about questions"
| need more careful classification in order to make use of these
| new data methods.
|
| Personally, I see the hype around DeepLearning for solving
| "find the pattern in varying digital content", which is so
| clearly useful to the FAANG content Feudal Lords, as an
| investor shouting match that paves over the simple use cases
| where DeepLearning is really not appropriate.
| thwayunion wrote:
| _> GOFAI was never more than a rules engine._
|
| GOFAI "aged out" of the AI label and became: compilers,
| databases, search algorithms, planning algorithms,
| programming languages, theorem proving, a million things that
| are still commonly used in NLP/CV/robotics, comp arch, etc.
| Aka, most of what makes computers actually useful.
|
| If something's over 30 years old and is still called AI,
| that's just shorthand for saying it's a failed idea that in
| the best case hasn't had its moment yet.
|
| _> If-then statements._
|
| 99.999% of the software I use is just if-then statements.
|
| (Also, this is like saying that deep learning is Linear
| Algebra.)
| cscurmudgeon wrote:
| Sorry, that's really wrong.
|
| I wouldn't call theorem provers just "if-then" statements. By
| that logic, everything, even large models, is just if-then
| statements.
| mxwsn wrote:
| The author undersells himself: AlphaGo basically searches the
| game tree the same way he describes for GOFAI. Monte Carlo Tree
| Search is old yet essential. The neural network mainly improves
| the game evaluation heuristic, using function approximation,
| which I'm sure the author is familiar with. Modern AI abilities
| are mind-boggling, but they're not that complicated to
| understand, especially with a GOFAI background!
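|
| For context, the selection rule in Monte Carlo Tree Search is
| typically UCT; here is a minimal Python sketch (details assumed
| for illustration; AlphaGo's variant additionally folds in the
| network's priors):
|
|   import math
|
|   # UCT: balance exploitation (win rate) against exploration
|   # (visit counts). AlphaGo's twist is letting a learned
|   # value/policy network steer this instead of random rollouts.
|   def uct_select(children, c=1.4):
|       total = sum(ch["visits"] for ch in children)
|       def score(ch):
|           if ch["visits"] == 0:
|               return float("inf")   # try unvisited moves first
|           exploit = ch["wins"] / ch["visits"]
|           explore = c * math.sqrt(math.log(total) / ch["visits"])
|           return exploit + explore
|       return max(children, key=score)
|
|   children = [{"wins": 6, "visits": 10}, {"wins": 3, "visits": 4}]
|   print(uct_select(children))      # picks the less-tried child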
| jamesgreenleaf wrote:
| > The image generator seems to understand that you can't see
| through opaque objects
|
| I thought this wasn't the case for Stable Diffusion. Wasn't it
| the humans making the source images who understood things like
| that, with their knowledge becoming encoded in the latent space
| of the model? I'm not an expert. Please correct me here.
| _carbyau_ wrote:
| Hmm. Wonder what "astronaut riding a glass horse" would do
| then?
| Barrin92 wrote:
| >The heart of GOFAI is searching - of trees and, more generally,
| graphs.
|
| and because of this, GOFAI, contrary to what the title
| suggests, and algorithmic solutions in general will continue
| to underpin a huge chunk of applications that don't fall into
| any 'natural' or generative domain. When you have an
| algorithmically optimal or closed-form mathematical solution
| to a problem, trying to approximate it from data with some new
| method doesn't make sense just because it's cool.
| lawrenceyan wrote:
| If you can do good old fashioned AI, you most definitely can do
| new fangled AI. In fact, I'm almost certain you'll have a better
| understanding of the fundamentals.
|
| At the heart of AI is mathematics, and that will never change.
| synapticpaint wrote:
| "Finally, a vital question is, how will this affect today's
| working artists? Here the answer is not so optimistic."
|
| I have a different take on this. I think this technology will
| allow more people, not fewer, to make money for a living (so,
| professionally) in a visual-arts-related industry. So I'm
| broadening the field to include not just "artists" but
| "commercial art" as well (designers, commercial illustrators,
| video/film post-production, etc.).
|
| The reason is that it lowers the barrier to entry for these
| fields and automates away a lot of the labor-intensive work,
| thereby lowering the cost of production.
|
| Whenever something becomes cheaper (in this case, labor for art),
| its consumption increases. So in the future, because producing
| commercial art is so much cheaper, it will be consumed a lot
| more.
|
| At the same time, we're not at the point where we can actually
| remove humans entirely from the process. AI generated art is a
| different process and requires a different skillset, but it still
| requires skill and learning to do well.
|
| The analogy would be something like a word processor reducing the
| number of secretaries needed in the workforce, but increasing the
| number of office workers. People no longer need someone to take
| notes / dictation, but all kinds of new workflows emerged on top
| of the technology, and almost all office workers need to know how
| to use something like a word processor.
|
| Therefore, the opportunity here is to build tooling that makes
| it easier and more accessible for more people to work with AI
| image generation.
|
| Disclaimer: I'm doing exactly that (building tooling to make
| content generation easier and more accessible) with
| https://synapticpaint.com/
| dleslie wrote:
| > I think this technology will allow more people, not fewer,
| > to make money for a living (so, professionally) in a
| > visual-arts-related industry.
| > ...
| > Whenever something becomes cheaper (in this case, labor for
| > art), its consumption increases.
|
| But not its price, and definitely not the compensation for the
| labour to produce it.
|
| Making a living as a mediocre-to-good artist is already
| incredibly difficult; increasing the supply of poor-to-good
| artists through AI-assistance isn't going to make it any
| easier.
|
| > The analogy would be something like a word processor reducing
| the number of secretaries needed in the workforce, but
| increasing the number of office workers.
|
| Only if the word processor wrote documents without the
| assistance of a typist, or an author.
| hbn wrote:
| > Whenever something becomes cheaper (in this case, labor for
| art), its consumption increases. So in the future, because
| producing commercial art is so much cheaper, it will be
| consumed a lot more.
|
| I'm not sure how that would apply here. There's never been a
| shortage of art. Art has always had more supply than demand,
| and now we just added even more supply to saturate the market.
| I was previously a more likely client for an artist than I am
| now, when I can get my computer to spit out any image I want in
| like 30 seconds. But I have no more desire for art than I did
| before.
| [deleted]
| [deleted]
___________________________________________________________________
(page generated 2022-11-15 23:00 UTC)