[HN Gopher] HuggingGPT: Solving AI tasks with ChatGPT and its fr...
       ___________________________________________________________________
        
       HuggingGPT: Solving AI tasks with ChatGPT and its friends in
       HuggingFace
        
       Author : r_singh
       Score  : 129 points
       Date   : 2023-03-31 17:22 UTC (5 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | Imnimo wrote:
       | Why are the example outputs in the Figures ungrammatical?
       | 
       | >A text can describe the given image: a herd of giraffes and
       | zebras grazing in a fields. In addition, there are five detected
       | objects as giraffe with score 99.9%, zebra with score 99.7%,
       | zebra with 99.9%, giraffe with score 97.1% and zebra with score
       | 99.8%. I have generated bounding boxes as above image. I
       | performed image classification, object detection and image
       | captain on this image. Combining the predictions of
       | nlpconnet/vit-gpt2-imagecaptioning, facebook/detr-resnet-101 and
       | google/vit models, I get the results for you.
       | 
       | Is it just that the in-context demonstrations are also
       | ungrammatical and ChatGPT is copying them? It feels very unlike
       | the way ChatGPT usually writes.
        
         | nomel wrote:
         | It's probably working from/stuck in some very sparse space,
         | resulting in relatively poor output. It's a neural network
         | after all.
         | 
          | Telling it to fix its grammar in a new thread fixes it.
          | 
          | I also confidently assume that telling it to fix its grammar
          | within the original, topic-specific conversation would
          | noticeably harm the quality of subsequent output for that
          | topic.
        
       | Workaccount2 wrote:
       | I strongly suspect the first AGI will come sooner than expected
       | on the back of a "glue" AI that can intelligently bond together a
       | web of narrow AIs and utilities.
       | 
       | I got access to the wolfram plugin for chatGPT, and it turned it
       | from a math dummy to a math genius overnight. A small step for
       | sure, but a hint of what's to come.
        
         | antibasilisk wrote:
          | Who's to say that AGI isn't already here but relatively slow?
         | It does seem to me that as more and more outputs of AI are
         | pushed through various systems, that will eventually feed back
         | into various AI.
         | 
         | In 2017 an Anon on 4chan proposed that our society, due to
         | content marketing and AI stock trading, is already run by a
         | sort of emergent intelligence, and he proposed that this is why
         | things seem to be 'getting crazy' lately. I'm inclined to agree
         | with that perspective, and if correct more capable AI systems
         | could really kick this into high gear.
        
         | Mizza wrote:
         | I've built a service for building systems like this.
         | 
         | I haven't officially "launched" yet, but it's working and you
         | can play with it here, if anybody is up to giving the alpha a
         | spin:
         | 
         | https://aistudio.fly.dev/
        
           | nomel wrote:
           | I really hope this is the future of coding.
        
           | fnimick wrote:
           | It's oddly fitting that your example workflow appears to be
           | scraping content from elsewhere, rewording it, and generating
           | an image in order to produce automated blogspam.
        
           | 93po wrote:
           | your "go pro" page is white text on white background in
           | firefox
        
           | 93po wrote:
           | other quick feedback: add a "add node" button, it isn't
           | immediately clear you do this with right clicking
        
         | roflyear wrote:
         | Except, whatever model is doing the orchestration will still
         | not "understand" the problem (be able to learn from new
         | information provided).
        
           | marshray wrote:
           | Have you actually tried it?
           | 
           | GPT-4: "The forward mapping M that I created to map positive
           | integers to English sentences is based on a general
           | guideline, where smaller-valued numbers generally correspond
           | to shorter sentences using more common words. However, this
           | mapping is not a perfect bijection, meaning there isn't
           | necessarily a one-to-one correspondence between integers and
           | sentences.
           | 
           | The generation of sentences is influenced by the patterns in
           | the data I was trained on, and there is some degree of
           | creativity involved in generating the sentences. This makes
           | it difficult to establish a precise, invertible mapping. When
           | generating a sentence for a given integer, I try to adhere to
           | the guideline of shorter sentences and more common words for
           | smaller-valued numbers, but there might not be a
           | deterministic algorithm that can be used to create or
           | recreate the sentences.
           | 
           | Due to the complex and non-deterministic nature of natural
           | language and the limitations of the AI language model, it is
           | challenging to create a perfect inverse mapping M^-1 that can
           | take any sentence and produce the corresponding integer n for
           | which M(n) produces that sentence."
           | 
           | More: https://pastebin.com/GcgXhkeP
        
             | roflyear wrote:
             | I am getting super tired of these responses.
             | 
             | Yes, I have tried it. Please converse in good faith.
             | 
             | GPT-4 has never, not once, in dozens of hours of use, asked
             | me a clarifying question.
             | 
             | It cannot understand things.
        
               | Game_Ender wrote:
                | One user said they got it to ask for help by having a
                | prompt asking it to ask some, but not too many,
                | clarifying questions.
                | 
                | With the instruction tuning it feels like the model
                | really wants to single-shot respond vs. do a back and
                | forth. So, like a junior engineer who doesn't ask for
                | help, you have to give it a hand.
        
               | neilellis wrote:
                | ReAct will, for example: https://github.com/neilellis/coding-
               | with-gpt-4/blob/main/lan...
        
               | Dzugaru wrote:
                | It definitely can "understand" things in some way,
               | however I'm pretty sure ReAct or similar will give it
               | just a nudge forward and the underlying problem of it
               | hallucinating and not being "lucid enough" is not so
               | easily solved.
               | 
               | In the original ReAct paper it falls apart almost
               | immediately in ALFWorld (this is a classical test for AI
               | systems - to be able to reason logically - and it still
               | isn't generally solvable due to combinatorial explosion).
               | 
               | For now it requires human correction looped or not, or
                | else it "diverges" (I like Yann LeCun's explanation [0]).
               | 
               | In my own experiments (I haven't played with LangChain or
               | ReAct yet) it diverges irrecoverably pretty quickly. I
               | was trying to explain to it the elementary combinators
               | theory, in the style of Raymond Smullyan and his birds
               | [1] and it can't even prove the first theorem (despite
               | being familiar with the book). A human can prove it
               | knowing almost nothing about math whatsoever, maybe it
               | will take a couple of days thinking, but the correct
               | proof is not that hard - just two steps.
               | 
               | [0] https://www.linkedin.com/posts/yann-lecun_i-have-
               | claimed-tha...
               | 
               | [1] https://en.wikipedia.org/wiki/To_Mock_a_Mockingbird
        
               | roflyear wrote:
               | I don't think that qualifies? Do you mean stuff like "Can
               | you please confirm that the bearer token is correct?" ?
        
           | UniverseHacker wrote:
           | GPT-4 can do this. I realize that a description of how these
            | models work implies that this is _absolutely impossible_, so
           | I have no explanation other than suggesting you go try it out
           | for yourself.
        
             | antibasilisk wrote:
             | There is no description of how these models work, other
             | than mechanically.
        
               | UniverseHacker wrote:
               | exactly... some people are using such descriptions to
               | think about its limitations, but there is no reason to
               | think they are accurate
        
             | roflyear wrote:
             | Bro, GPT-4 can absolutely not understand new information.
             | Try to teach it a grammar. It can't learn a grammar. Anyone
             | can learn a new grammar. Not GPT-4.
             | 
             | GPT-4 does not ask clarifying questions.
             | 
             | It does not understand things.
        
               | Baeocystin wrote:
               | I just tried this small example right now as a test:
               | https://i.imgur.com/p4s7HCU.png
        
               | UniverseHacker wrote:
               | This is a good point, that also works well when managing
               | people. Telling it that it is expected to ask questions
               | really helps, but in my experience it _usually_ will do
               | so anyway, or at least mention what information it is
               | missing and try to reason as well as it can without it.
               | 
               | If it couldn't do this, the whole concept here would be
               | moot... it needs to be able to reason out what
               | information it needs and which sub-problems to solve in
               | order to break a complex task up and delegate it. And it
               | is very good at this!
        
               | roflyear wrote:
               | So you asked to have it ask clarifying questions, and it
               | still didn't?!
               | 
               | Are you people all bots? LOL.
        
               | Baeocystin wrote:
               | ...it literally asked which of the two I meant? How is
               | that not a clarifying question?
        
               | Workaccount2 wrote:
               | I don't know what your bar for clarifying question is,
               | but I have definitely had it ask me for more info when I
               | forget to put it into the prompt.
        
               | roflyear wrote:
               | I would say a clarifying question would be if your
               | request is ambiguous, it asks you to clarify. Not "seems
               | you left something out" type of stuff.
        
               | UniverseHacker wrote:
               | I'm not sure why you think that? When I give it
               | programming tasks, GPT-4 will ask all kinds of clarifying
               | questions. Here is a trivial example:
               | 
               | p: I want you to solve for x, giving me a numeric value,
               | given the formula ax^2 + bx + c = 0.
               | 
               | r: To find the numeric value of x for the quadratic
               | equation ax^2 + bx + c = 0, we need to use the quadratic
               | formula:
               | 
                | x = (-b ± √(b^2 - 4ac)) / (2a)
               | 
               | However, you haven't provided specific values for a, b,
               | and c. Please provide these values so I can calculate the
               | solution(s) for x.
               | 
               | Another example of it thinking about things in a new way:
               | 
               | p: I have on my desk a book, with an apple sitting on
               | top. If I were to suddenly, rapidly remove the book such
               | that the apple remained in place, in which direction
               | would it eventually move?
               | 
               | r: If you suddenly and rapidly remove the book from
               | beneath the apple, the apple would initially stay in
               | place due to its inertia. However, once the book is
               | removed, there would be no support underneath the apple.
               | As a result, the apple would eventually move downwards
               | due to the force of gravity acting on it.
               | 
               | p: Because the earth is a sphere, I prefer to use the
               | terms "in" and "out" to refer to movement relating to
               | that. Can you please rephrase your answer in the terms?
               | 
               | r: Certainly! If you suddenly and rapidly remove the book
               | from beneath the apple, the apple would initially stay in
               | place due to its inertia. However, once the book is
               | removed, there would be no support underneath the apple.
               | As a result, the apple would eventually move "in" towards
               | the center of the Earth, due to the force of gravity
               | acting on it.
               | 
               | (Note I didn't tell it which direction was in vs out.
               | This is the example problem Chomsky gave in his recent
               | article arguing that GPT-3 cannot think - per his ideas
               | no language model would ever be able to answer a question
               | like this. I went a lot deeper with this and got it to
               | add fluid dynamics calculations for wind, etc.)
        
               | roflyear wrote:
               | First, these are trivial examples. I would not classify
               | "give me the values to this math formula" as clarifying
               | questions.
               | 
               | Second, in response to "I'm not sure why you think that?"
               | ask GPT why it often does not ask clarifying questions.
               | It will explain to you why!
               | 
               | Third, I just asked GPT the most vague question on the
               | planet: "I am trying to create a function that takes a
               | value, and gives me a result using that value and
               | another, separate external value, can you give me the
               | correct solution to this using Python?"
               | 
               | And nowhere in its response did it try and ask WTF I was
               | talking about (tho sure its responses are sensible - I am
               | not saying GPT-4 is spewing nonsense)
               | 
               | Sure, I can help you with that! Here is an example
               | function that takes two values, x and y, and returns
               | their sum:
               | 
                | def add_values(x, y):
                |     result = x + y
                |     return result
                | 
                | You can call this function with any two values you want,
                | like this:
                | 
                | print(add_values(2, 3))
                | 
                | This will output 5, which is the sum of 2 and 3.
                | 
                | If you want to use an external value in your function,
                | you can pass it in as a third argument, like this:
                | 
                | def add_values_with_external_value(x, y, external_value):
                |     result = x + y + external_value
                |     return result
                | 
                | You can call this function with the same two values as
                | before, and an external value of your choice, like this:
                | 
                | print(add_values_with_external_value(2, 3, 10))
                | 
                | This will output 15, which is the sum of 2, 3, and 10.
        
         | MichaelRazum wrote:
         | "math genius" not sure about it. I gave it a problem from IMO
         | 2022. In my opinion not a very hard one. It failed even if you
         | give it some hints.
         | 
          | Let R+ denote the set of positive real numbers. Find all
          | functions f : R+ → R+ such that for each x ∈ R+, there is
          | exactly one y ∈ R+ satisfying xf(y) + yf(x) <= 2.
         | 
          | It just struggled to reason. So I would be very surprised if
         | the plugin somehow helps here.
        
           | Donald wrote:
           | It's a language completion model, so it's not surprising at
           | all that it struggles with logical inference. That's probably
           | one of the reasons that instruction fine-tuning has such a
            | dramatic effect on the performance of these models: they're
           | finally given some of the underlying causal priors of the
           | task domain.
        
         | dvt wrote:
         | > it turned it from a math dummy to a math genius overnight
         | 
         | Imo your post fundamentally misunderstands a few things, but
         | mainly how Wolfram works. Wolfram can be seen as a "database"
         | that stores a lot of human mathematical information (along with
         | related algorithms). Wolfram does not _make new math_. A
         | corollary here is that AGI needs to have the ability to _create
          | new math_ to be truly AGI. But unless fed something like, e.g.,
          | the Principia, I don't think we could ever get a stochastic
          | LLM to derive 1+1=2 from first principles (unless
          | specifically trained to do so).
         | 
         | Keep in mind that proving 1+1=2 from first principles isn't
          | even _new math_ (new math would be proving the Poincaré
         | conjecture before Perelman did it, for example).
        
           | Sivart13 wrote:
           | If the litmus test for intelligence is the ability to "create
           | new math" most people on earth wouldn't be considered
           | intelligent
        
             | chasd00 wrote:
              | that reminds me of this scene in I, Robot
             | 
             | spooner> You are a clever imitation of life... Can a robot
             | write a symphony? Can a robot take a blank canvas and turn
             | it into a masterpiece?
             | 
             | sonny> Can you?
             | 
             | edit: i just realized OpenAI can answer both of those
             | questions with "yes." ...
        
               | Baeocystin wrote:
               | It's a fantastic scene. The childlike earnestness of
               | Sonny asking completely deflates Spooner's rant.
        
             | dvt wrote:
             | > If the litmus test for intelligence
             | 
             | Moving goal posts around is unhelpful. I think my comment
             | was pretty clear in the context of AGI and calling ChatGPT
             | a "math genius."
        
               | eternalban wrote:
               | Will you settle for whiz?
               | 
               | Interesting q for us to consider is how do _we_ come up
               | with  "new ideas". I think we can consider _play_ (in the
               | abstract sense) to be a significant element of the
               | process. Play is a _pleasurable self-motivated activity_.
               | 
               | I am certain an AGI (if such a thing exists) will need to
               | be playful.
        
               | dvt wrote:
               | Totally agree. Recently read Finite and Infinite Games[1]
               | and The Grasshopper: Life, Games, and Utopia[2] and I'm
               | more or less convinced motivations are always essentially
               | games.
               | 
               | [1] https://www.amazon.com/Finite-Infinite-Games-James-
               | Carse/dp/...
               | 
               | [2] https://www.amazon.com/Grasshopper-Third-Games-Life-
               | Utopia/d...
        
               | eternalban wrote:
               | Thanks for book refs. Would you recommend any of them for
               | a bright teen? [1] sounded like a good candidate.
        
               | bathMarm0t wrote:
               | [1] is an exceptional book for a bright teen, especially
               | so if you suspect the teen leans into their
               | intelligence/abilities to gauge their self worth (I don't
               | know a single person, let alone teen who doesn't do
               | this). The book's main theme states that being a good
               | player has nothing to do with skill, but rather with the
               | ability to create playful environments that encourage
               | growth, humility, and most importantly, more play.
        
               | dvt wrote:
               | Yep, they're both super approachable, [1] is a great way
               | to get introduced to some deep philosophy in a fun way.
        
               | og_kalu wrote:
               | The fact that people take agi to mean "can invent new
               | math" is a goalpost shift on its own. That was not the
               | original meaning (generally intelligent), that wasn't
               | even the next moved goal (on par with human experts). I
               | guess the next one counts (better than all human experts)
               | but i'm sure we'll move that again too.
        
               | akiselev wrote:
               | The problem was always underspecified. We don't have a
               | quantifiable metric for intelligence except IQ tests,
               | benchmark datasets like those used to evaluate different
               | LLMs, and other similarly myopic bullshit. "We'll know it
               | when we see it" becomes the default.
               | 
               | In reality, the goal posts aren't being moved, we're just
               | finding out how much further we are from them than we
                | thought. ChatGPT is a "stochastic parrot" that seems way
                | "smarter" than anyone thought possible, so we have to
                | reevaluate what we consider evidence of intelligence,
                | perhaps coming to terms with the fact that we _aren't_
                | that smart most of the time.
        
               | og_kalu wrote:
               | Sorry but nope. GPT-4 is plenty intelligent. I've begun
               | to question the intelligence of anyone that reduces it to
               | "stochastic parrot" because they're not even arguing
               | against results but arbitrary lines drawn on sand.
               | 
               | The "We'll see it when it comes" line is just utterly
               | wrong, If there's one thing experts seem to agree on is
               | that not everyone will agree when current definition of
               | agi does arrive.
               | 
                | The philosophical zombie is an excellent example of the
                | extent of goalpost shifting we're capable of. Even when a
               | theoretical system that does every single thing right
               | comes, we're looking for a way to discredit it. To put it
               | below what we of course only have.
               | 
               | lots of researchers now aren't questioning GPT's general
               | intelligence. That's how you end up with papers alluding
               | to this technology with amusing names like General
                | purpose technologies (from the jobs paper) or even funnier
               | - General artificial intelligence (from the creativity
               | paper).
               | 
               | You know what the original title of the microsoft paper
               | was? "First contact with an agi system". and maybe it's
               | just me but reading it, i got the sense they thought it
               | too.
        
               | vintermann wrote:
               | > The philosophical zombie is an excellent example of the
                | extent of goalpost shifting we're capable of.
               | 
               | I was with you until here. That has nothing to do with
               | this. That argument is about _separating_ intelligence
               | from having a subjective experience, not moving goalposts
               | for intelligence.
        
               | og_kalu wrote:
               | It's a tangential relation but it's a relation. I don't
               | think i would say it has nothing to do with it. Goal
               | posts shifting in the field of machine learning isn't
               | just about the posts for defining intelligence. It's
               | broader and deeper than that.
               | 
               | I brought it up because i thought it fit the point i was
               | driving at. Humans/people don't see subjective
               | experience. I don't know that you're actually having some
               | subjective experience. I'm working on what i see and
               | results, same as you.
               | 
               | If you have two unknown equations but one condition -
               | these 2 equations return the same output with the same
               | input. well, then any mathematician would tell you the
               | obvious - the 2 equations are equal or equivalent. it
               | doesn't actually matter what they look like.
               | 
               | This is just an illustration. The point i'm driving at
               | here is that true distinction shows in results. It's a
               | concept that's pretty easy to understand. Yet turn to
               | artificial intelligence and it just seems to break down.
               | People making weird assertions all over the place not
               | because they have been warranted in any empirical,
               | qualitative or quantitative manner but because there
               | seems to be this inability to engage with results...like
               | we do with each other.
               | 
               | when i show the output that clearly demonstrates
                | reasoning and understanding, the arguments quickly shift to
               | "it's not real understanding!" and it's honestly very
               | bizarre. What kind of meaningful distinction can't show
               | itself, can't be tested for ? If it does exist then it's
               | not meaningful.
               | 
               | I think that the same reason people shift posts for
               | intelligence is the same reason people fear the
               | philosophical zombie.
               | 
               | idk maybe i'm rambling at this point but just my
               | thoughts.
        
               | YeGoblynQueenne wrote:
               | >> when i show the output that clearly demonstrates
                | reasoning and understanding, the arguments quickly shift to
               | "it's not real understanding!" and it's honestly very
               | bizarre. What kind of meaningful distinction can't show
               | itself, can't be tested for? If it does exist then it's
               | not meaningful.
               | 
               | I agree with you totally. Here's some output that clearly
               | demonstrates reasoning and understanding:
                |   All men are mortal.
                |   Socrates is a man.
                |   Therefore, Socrates is mortal.
               | 
               | I just copy-pasted that from wikipedia. Copy/paste
               | understands syllogisms!
               | 
               | Explain _that_!
        
             | jadbox wrote:
             | This is a fair response, but in the author's defense, he
             | might be trying to imply that intelligence is something
              | that should be at least "capable" of creating original new
              | ideas and concepts. Of course, we can then debate what it
              | means to be capable of original new ideas (or what it
              | means to be original).
        
           | Workaccount2 wrote:
           | My definition of "mathematical genius" is simply wider than
            | yours. To me it's people who can solve math problems that
           | 99.9% of the population can't without assistance. Which I
           | think is a fair colloquial definition.
           | 
            | ChatGPT went from struggling to provide the answer for 20x20
            | to easily being able to provide the right answer for any math
            | problem Wolfram Alpha can handle.
        
             | dvt wrote:
             | > To me its people who can solve math problems that 99.9%
             | of the population can't without assistance.
             | 
             | You're just kind of re-emphasizing my point: ChatGPT is
             | using Wolfram as its assistance. So really, it's acting
             | more like a "dumb" API call, not a "math genius" at all.
        
               | Workaccount2 wrote:
               | It goes back to my original point, that I suspect AGI
               | will come from an AI that is essentially a master of all
               | APIs.
               | 
                | We can go back and forth splitting hairs about whether
                | inserting a compute module into a neural net (organic or
                | not) grants genius or assistance, but the overall
               | point stands; there will be a single interface that can
               | take any variety of inputs and properly parse and shape
               | them, push them through the correct "APIs", and then take
               | the results to form a clear and correct output. Whether
                | or not it used its neural net or transistor adder
                | circuits to arrive at the answer would be immaterial.
        
               | sebzim4500 wrote:
               | He's not really saying that GPT-4 is a 'math genius',
               | rather the combined system of GPT-4 and Wolfram is.
        
               | Baeocystin wrote:
               | I mean, what is crystallized intelligence, but that which
               | we can call upon without having to focus on it?
               | 
               | I mean this honestly, no snark. When I was a kid,
               | learning how to factor numbers was really hard. It took a
               | lot of time and concentration to do even basic problems,
               | and people who could do it quickly without much perceived
               | effort were a mystery to me.
               | 
               | By the time I reached high school, I had enough practice
               | that I recognized the common patterns without difficulty,
               | and often the answer bubbled up to my conscious mind
               | without thinking about it at all. It sure _feels_ like my
               | brain is making an API call to a subsystem, you know?
        
           | CamelCaseName wrote:
           | Would AlphaZero or AlphaGo meet your requirements then?
           | 
           | They both created "new Chess / Go" strategies and insights
           | that GMs and top engines hadn't seen before.
        
             | dvngnt_ wrote:
              | it would meet mine. not sure if chatgpt does though
        
         | Manjuuu wrote:
         | There will be no AGI during our lifetime.
         | 
         | It seems weird to me that people that should understand how ML
          | works (e.g. sama), instead of educating laypeople about
         | how this actually works and the huge limits the technology has,
         | start talking nonsense about AGIs like some random scifi fan at
         | a convention. Depressing.
        
           | karmasimida wrote:
            | How can you speak with such confidence, when Hinton and Ilya
            | Sutskever cannot? Hinton even said it is possible in the next
            | 5-10 years.
           | 
            | While you can believe whatever you believe, please don't
            | bash people as laypeople or dismiss their views as nonsense.
        
             | YeGoblynQueenne wrote:
             | >> How can you speak with such confidence, when Hinton and
             | Ilya Sutskever can not? Hinton even said it is possible in
             | next 5-10 years.
             | 
             | Is all the discussion in this thread about the abilities of
             | ChatGPT in 5-10 years, or is it about its abilities right
             | now?
        
             | Manjuuu wrote:
              | The same confidence as those already planning for the
              | advent of AGI, which will never come. I see a lot of
              | unmotivated enthusiasm for the "new" thing. The opinions of
              | Hinton should be considered valid if the reasoning makes
              | sense, not just because the source is Hinton. No idea why
              | he said that it's possible in 10 years.
             | 
             | Not trying to bash anyone, I just meant normal people,
             | outside of the field. Enthusiastic nonsense is still
             | nonsense.
        
               | Workaccount2 wrote:
               | We have these guys calling themselves "experts on
               | intelligence" just because they know the structure of the
               | components of the neural net. It's like neurologists
               | saying they are experts in consciousness because they
               | know how neurons function and grow. Thankfully doctors
               | don't have nearly the same levels of hubris as tech bros.
        
               | YeGoblynQueenne wrote:
               | >> No idea why he said that it's possible in 10 years.
               | 
               | It's because he's been saying that for 40 years.
        
               | 93po wrote:
               | To say we won't have AGI in our lifetime as a certainty
               | means that you must be able to say with a certainty how
               | AGI is developed. Otherwise there is no evidence to point
               | to as to why the steps towards developing it aren't
               | possible.
        
           | AbrahamParangi wrote:
           | I'll take the other side of that bet
        
             | brotchie wrote:
             | +1
        
             | Manjuuu wrote:
             | Feel free to do so, we need something to replace crypto
             | after all.
             | 
              | And also, sometimes I wonder how many people hold those
              | kinds of opinions on AGI (imminent, doable, worthy of being
              | discussed) because they sincerely believe in some nonsense
              | like the basilisk thing, for example.
        
               | HDThoreaun wrote:
               | I believe in AGI because I'm a materialist so see no
               | reason why we couldn't create artificial brains.
        
               | bheadmaster wrote:
               | I don't believe in Basilisk because of the time-travel
               | bullshit, but I do believe that AGI will come soon,
               | because I don't believe in divine soul and see human mind
               | as just a very complex statistics machine and all human
               | learning as just detecting correlation between sensory
               | input through time.
        
         | UniverseHacker wrote:
         | I agree, I suspect AGI is possible right now with a similar
         | system only slightly more sophisticated than this one. The
         | right "glue" for existing models, and plugins to existing data
          | sources all coordinated in a system. GPT-4 would do the
         | managing, and handling, and some simple template API and
         | handler script would allow it to call instances of itself or
         | other models, track recursion depth, and automatically remind
          | GPT-4 to stay on track and always use the correct 'templates'
         | for requests. It could also remind it to create a condensed
          | summary of its goals and state that gets repeated back
         | automatically to act as long term memory, and increase the
         | effective context window size (edit: this is single variable
         | external memory).
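          | 
          | Roughly the shape I have in mind, as a sketch (Python;
          | call_llm() is a hypothetical stand-in for the chat API, and
          | the prompts and depth cap are illustrative assumptions, not
          | something I've tuned):
          | 
          |     MEMORY = ""  # the single external memory string
          | 
          |     def call_llm(prompt):
          |         # stand-in for a real chat completion call
          |         raise NotImplementedError("plug in your chat model")
          | 
          |     def run_task(task, depth=0):
          |         global MEMORY
          |         if depth >= 3:  # cap recursion depth
          |             return call_llm("Memory: " + MEMORY +
          |                             "\nDo directly: " + task)
          |         plan = call_llm("Memory: " + MEMORY +
          |                         "\nSplit into subtasks (one per line),"
          |                         " or reply NONE if simple:\n" + task)
          |         if plan.strip() == "NONE":
          |             answer = call_llm("Memory: " + MEMORY +
          |                               "\nDo this: " + task)
          |         else:
          |             parts = [run_task(s, depth + 1)  # delegate
          |                      for s in plan.splitlines() if s.strip()]
          |             answer = call_llm("Combine these results for '" +
          |                               task + "':\n" + "\n".join(parts))
          |         # condensed summary of goals/state, kept as memory
          |         MEMORY = call_llm("Old memory: " + MEMORY +
          |                           "\nNew result: " + answer +
          |                           "\nWrite a short summary to keep.")
          |         return answer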
         | 
         | I am afraid to explain this, because I have tried a preliminary
         | version of it that I supervised step by step, and it seems to
         | work. I think it is obvious enough that I won't have been the
         | only one to think of it, so it would be safer to put the
         | information out there so people can prepare.
         | 
         | I see a big disconnect on here between people saying GPT-4
         | can't do things like this and is just a "stochastic parrot" or
         | "glorified autocomplete," and people posting logs and summaries
         | of it solving unexpectedly hard problems outside of any
          | conceivable training set. My theory is that this disconnect is
          | due to three major factors:
          | 
          | * People confusing GPT-4 and GPT-3, as both are called
          | "ChatGPT" and most people haven't actually used GPT-4 because
          | it requires a paid subscription, and don't realize how much
          | better it is
          | 
          | * Most popular conceptual explanations about how these models
          | work imply that these actually observed capabilities should be
          | fundamentally impossible
          | 
          | * Expectations from movies, etc. about what AGI will be like,
          | e.g. that it will never get confused or make mistakes, or that
          | it won't have major shortcomings in specific areas. In practice
          | this doesn't seem to limit it, because it recognizes and fixes
          | its mistakes automatically when it sees feedback (e.g. in
          | programming)
        
           | basch wrote:
           | It also, in my pay opinion, needs a memory store.
           | 
           | I need to be able to save something to a variable and call
           | back that exact variable. Right now, because it's just pure
           | text input of the whole conversation, it can forget or
            | corrupt its "memory".
           | 
           | Even something like writing a movie script doesn't work if it
           | constantly forgets characters names or settings or plot
           | points.
        
             | UniverseHacker wrote:
             | My description above includes a memory store. It is a
             | single variable that it can save and recover, and will
             | automatically get sent back to it at regular intervals if
             | it forgets it saved anything. Only a single text string
              | variable is needed, as GPT-4 is smart enough to realize it
             | can compress, pickle, etc. as many things as it needs into
             | that.
        
               | alchemist1e9 wrote:
               | I think we might be working something a bit similar and I
               | agree memory is an issue. However it's not really AGI
               | that will emerge from this. It's looking very useful but
               | it's still observable but it's not a "mind" like an AGI.
               | 
                | To add to your comments, a local vector store of
                | embedding vectors of local content is how I'm going with
                | the memory issue, LangChain-like. That way all previous
                | progress is tracked and retrievable, and the system can
                | retrieve all its "memories" itself. The recursive
                | multi-agent pattern is a big deal in my opinion.
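                | 
                | For what it's worth, the store itself is tiny. A toy
                | sketch (Python/numpy; embed() is just a deterministic
                | stand-in for a real embedding model, so treat this as
                | illustrative only):
                | 
                |   import numpy as np
                | 
                |   def embed(text, dim=256):
                |       # toy stand-in for a real embedding model
                |       rng = np.random.default_rng(abs(hash(text)) % 2**32)
                |       v = rng.standard_normal(dim)
                |       return v / np.linalg.norm(v)
                | 
                |   class Memory:
                |       def __init__(self):
                |           self.texts, self.vecs = [], []
                |       def add(self, text):
                |           self.texts.append(text)
                |           self.vecs.append(embed(text))
                |       def recall(self, query, k=3):
                |           # cosine similarity against every stored memory
                |           sims = np.array(self.vecs) @ embed(query)
                |           return [self.texts[i] for i in np.argsort(-sims)[:k]]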
        
               | nomel wrote:
               | > but it's not a "mind" like an AGI.
               | 
               | Please define this. ;)
        
             | basch wrote:
             | lay*
        
           | knome wrote:
           | I'm prone to agree. Your memory would still require being
           | parsed by the language model, and for it to have a sufficient
           | window as to allow this. Alternately, the human mind keeps,
           | what, 7 items 'present' at a time? When the 32k variant comes
           | out, the ability to use a side-along memory module on a
           | running self-summarizing conversation will be even easier
           | than it is now. I expect a more human-like memory will emerge
           | eventually, once we can figure out a good way to project and
           | retrieve via arbitrary weight values into it.
           | 
            | I'd like the "stochastic parrot" crew to explain how
            | interactions like this somehow don't involve the extraction
            | of and manipulation of abstract meaning.
            | 
            | H: translate the following sentence to instead use olympic
            | curling related metaphors (do not explicitly mention
            | curling, and do not use similes in forming the translation):
            | 
            |   man: "I've been spinning my tires all day without getting
            |   anywhere"
            |   woman: "running out of gas?"
            |   man: "nah, I'm gassed up, geared up, oiled and revving. I
            |   just can't seem to get in the right lane, you know?"
            |   woman: "anything knocking?"
            |   man: "nah, I'm tuned just fine. I can't seem to get my GPS
            |   to connect, you know?"
            |   woman: "take a few laps without a destination, see if you
            |   can figure out the lay of the land"
            |   man: "you know, that might help. thanks barbara"
            | 
            | R:
            | 
            |   man: "I've been sweeping the ice all day without hitting
            |   the target."
            |   woman: "Losing momentum?"
            |   man: "Nah, I've got the energy, the technique, and the
            |   focus. I just can't seem to find the right path, you know?"
            |   woman: "Anything coming in your way?"
            |   man: "Nah, I'm on point with my aim. I just can't seem to
            |   get my strategy to align, you know?"
            |   woman: "Maybe glide a few times without aiming
            |   specifically, see if you can figure out the ice
            |   conditions."
            |   man: "You know, that might help. Thanks, Barbara."
           | 
            | It seems obvious to me that it is resolving the metaphors
            | into underlying meanings to be able to translate them like
           | this. I think half the pushback is just ego driven nonsense,
           | similar to the sorts of people that make similar claims on
           | the capabilities of animals.
           | 
           | It's all just an argument that "boats can't swim".
        
             | UniverseHacker wrote:
             | I think this is a good example of deep conceptual
             | understanding, I suspect if you posed this same problem to
             | adult humans (e.g. me) they wouldn't be able to do as well.
        
           | marcosdumay wrote:
           | I believe you will quickly run into a barrier due to the
           | small size of the context window. And increasing the context
           | window gets harder and harder the larger it is.
           | 
           | There's a sibling talking about internal memory. That's how
           | our brains solve the issue. AFAIK, nobody knows how to train
           | something like it.
        
             | sdenton4 wrote:
             | Approaches like RETRO tack vector search over a fixed
             | database onto an attention based model - that's how you get
             | long term memory. People are already working on it for the
             | current crop of LLMs.
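              | 
              | A poor man's version of the same idea works at the prompt
              | level (RETRO proper feeds the retrieved chunks in through
              | cross-attention inside the model, not through the prompt).
              | Rough sketch, with search() and call_llm() as hypothetical
              | callables:
              | 
              |     def answer_with_retrieval(question, search, call_llm, k=3):
              |         # vector search over the fixed database of text chunks
              |         passages = search(question, k)
              |         context = "\n\n".join(passages)
              |         return call_llm("Context:\n" + context +
              |                         "\n\nQuestion: " + question)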
        
             | nomel wrote:
             | I think an attention/focus system might be applicable. I
             | think humans have the same limitations, with tricks to work
              | around it. As Jim Keller once said in a Lex Fridman
              | interview, a good engineer is one that can jump around
              | between the different levels of abstraction. We think at a
              | high level, then "focus" and jump down when needed.
             | 
             | I think something like this could be hacked together, with
             | summarization and parallel "focus threads", containing
             | prompts that focus on details. These could be pruned/merged
             | back together, to add to the "summary" higher level
             | abstractions.
             | 
             | I use this approach already, to some extent, when a
             | conversation gets too long and I need specific details.
             | I'll start a new prompt, include a high level summary, and
             | then "focus" on specific ideas to get more specific answers
             | about a particular detail.
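              | 
              | A hacked-together version of that loop might look like
              | this sketch (Python; call_llm() is a hypothetical chat
              | call and the prompts are made up):
              | 
              |     def focus_thread(summary, question, call_llm):
              |         # fresh conversation seeded only with the summary
              |         detail = call_llm("Summary so far:\n" + summary +
              |                           "\n\nFocus on this detail: " + question)
              |         # fold the result back into the higher-level summary
              |         return call_llm("Old summary:\n" + summary +
              |                         "\n\nNew detail:\n" + detail +
              |                         "\n\nRewrite the summary to include it.")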
        
             | wyager wrote:
             | I wouldn't be surprised if AI research moves towards
             | training models to use bolt-on peripherals like scratchpad
             | memory. Transformers showed us an interface for addressing,
             | so I wouldn't be surprised if someone figures out a way to
             | use a similar addressing scheme and made it read/write.
        
             | UniverseHacker wrote:
             | You are right, but it is unclear to me where this
             | limitation will fall in terms of actual abilities.
             | 
             | GPT-4 can divide large tasks up into logical steps and
             | instructions that can be worked on independently by other
             | instances, and it can create "compressed" condensed
              | explanations of its own state that can be stored
             | externally and repeated back to it, or passed back and
             | forth between instances. With the unreleased 32,000 token
             | context window, that is really a lot of context when you
             | consider that it can heavily compress things by referencing
             | what it was trained on.
        
               | IanCal wrote:
               | > With the unreleased 32,000 token context window, that
               | is really a lot of context when you consider that it can
               | heavily compress things by referencing what it was
               | trained on.
               | 
               | Also that means you can just iterate with different
               | contexts until you can deal with the problem. How many
               | problems need 50 pages of context on top of what gpt4
               | knows for solving the next step?
        
               | Workaccount2 wrote:
               | >How many problems need 50 pages of context on top of
               | what gpt4 knows for solving the next step?
               | 
                | Perhaps the unknown unknowns?
               | 
               | I envision a scenario where a super intelligent AI
               | seemingly runs off the rails and becomes obsessed with
                | overly complex problems that are totally intractable to
               | humans. Where we'd be like ants trying to make sense of
               | cell phones.
        
           | brotchie wrote:
           | Yep, the tipping point for me was seeing a demo of GPT-4
           | writing code, then being shown exceptions where the code
           | failed, and successfully fixing the code given that feedback.
           | 
           | Seems that for any task where you can generate an error
           | signal with some information ("ok, what you just tried to do
           | didn't succeed, here's some information about why it didn't
           | succeed"), GPT-4 can generally handle this information to fix
           | the error, and or seek out more information via tools to move
           | forward on a task.
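            | 
            | The shape of that loop is simple enough to sketch (Python;
            | call_llm() is a hypothetical chat call, and exec() on model
            | output is obviously unsafe outside a sandbox, so this is
            | only an illustration):
            | 
            |     import traceback
            | 
            |     def write_and_fix(task, call_llm, max_tries=5):
            |         code = call_llm("Write Python code to: " + task)
            |         for _ in range(max_tries):
            |             try:
            |                 exec(code, {})   # run the candidate code
            |                 return code      # no exception: done
            |             except Exception:
            |                 err = traceback.format_exc()  # error signal
            |                 code = call_llm("This code:\n" + code +
            |                                 "\nfailed with:\n" + err +
            |                                 "\nReturn a corrected version.")
            |         return code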
           | 
            | The only thing that's really missing is for somebody to
            | crack the back of a memory module (perhaps some trick with
            | embeddings and a vector db); that's all it takes for this to
            | be AGI.
        
             | alchemist1e9 wrote:
             | > Only thing that's really missing is somebody to crack the
             | back of a memory module (perhaps some trick with embeddings
             | and a vector db) is all it takes for this to be AGI.
             | 
              | Here to say I agree with embeddings and a vector db for
              | memory systems, but also to disagree that this can lead to
              | AGI.
        
           | nomel wrote:
           | > I suspect AGI is possible right now with the right "glue"
           | for existing models, and plugins to existing data sources all
           | coordinated in a system.
           | 
           | I think the regulation of the feedback loops required for
           | sustained, deliberate thought, and interaction with the
           | world, will be the most difficult piece of the AGI puzzle,
           | and might not exist today.
           | 
           | New ideas _are_ "hallucinations" of our "existing data", that
           | we eventually get around to proving. An AGI will require
           | these "hallucinations", inhibition and excitation of them. I
           | think it's going to be a tricky balance [1], for sane output.
           | 
           | 1. "Creative minds 'mimic schizophrenia",
           | https://www.bbc.com/news/10154775
        
       | lachlan_gray wrote:
       | It would be really interesting to see how a system like this
       | competes with multimodal transformers. If it's good, systems like
       | GPT-4 and Kosmos-1 won't need to have a built-in vision system.
        
       | roddylindsay wrote:
       | What could go wrong?
        
         | [deleted]
        
       | zoba wrote:
       | These sort of agent-architecture AIs are where I think things
       | will head and also where things will get more dangerous.
       | 
       | When an AI has a goal and an ability to break the goal down, and
       | make progress towards the goal... the only thing stopping the AI
        | from misalignment is whatever its creator has specified as the
       | goal.
       | 
       | Things get even more tricky when the agent can take in new
       | information and deprioritize goals.
       | 
       | I have been curious why OpenAI haven't discussed Agent-based AIs.
        
       | amelius wrote:
       | Can we please stop generating bounding boxes and instead generate
       | a proper segmentation using bitmasks?
        
         | cameronfraser wrote:
         | it depends on the problem?
        
           | amelius wrote:
           | True, but a bitmask is always better than a rectangle. And
           | the computational resources required for this problem are not
           | very large compared to other AI workloads, so it should be a
           | no-brainer from a usefulness viewpoint.
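            | 
            | One way to see it: the box is recoverable from the mask, so
            | the mask is strictly more informative. Toy numpy sketch:
            | 
            |     import numpy as np
            | 
            |     def bbox_from_mask(mask):
            |         ys, xs = np.nonzero(mask)  # pixels of the object
            |         if xs.size == 0:
            |             return None            # empty mask, no box
            |         return xs.min(), ys.min(), xs.max(), ys.max()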
        
       | nemo44x wrote:
       | I always wondered what it was like for people that lived through
       | the discovery of relativity. It must have just been world
       | changing to understand what it meant and how entire perspectives
       | and theories were dead overnight. Just a massive change in the
       | understanding of our existence and the potential it unlocked.
       | 
       | These types of things must feel similar.
        
         | imjonse wrote:
         | that was much more gradual and did not affect everyday life at
         | all. Even today most people don't know/care about the
         | implications of relativity :)
        
       | amelius wrote:
       | Where can I find a good, constantly-up-to-date overview of what
       | is possible with AI?
        
         | meghan_rain wrote:
         | You're already here :-)
        
           | amelius wrote:
           | Are you sure? There's a lot of overhyping here, and of course
           | the reaction to that. Too much noise for me, tbh. Just give
           | me the scientifically established facts.
        
       | maCDzP wrote:
       | Am I the only one with an uneasy feeling about all these
       | advancements?
       | 
       | Are we going fast or is it just because of the buzz? I have a
       | hard time separating the two.
       | 
        | It's very exciting and I want to keep being excited. I don't want
       | to become terrified.
        
         | yieldcrv wrote:
         | Everyone can see the writing on the wall that they need to be a
         | part owner of the production, because the worker will not be
          | needed. So it's a land grab in every direction right now for the
         | thing that is going to co-opt the means of production.
        
           | blibble wrote:
           | what do the owners of production do when the 6 billion
           | workers decide they don't like not being able to eat?
        
             | fnimick wrote:
             | Owning production buys you a lot of weapons with which you
             | can enforce your property ownership on others.
             | 
             | Not if, but when, we get around to AI law enforcement with
             | lethal weapons, it's over. There's no going back from that.
        
             | HDThoreaun wrote:
             | Give them a bit of food and a lot of entertainment...
        
             | foobarbizbuz wrote:
             | What would they care? 6 billion people is probably too many
             | for the planet anyway.
             | 
              | Also they don't need to raise and feed an army if AI-powered
              | drones are able to do the same job. When AI is able to
              | replace jobs en masse it's pretty transparent to expect AI
             | armies to be built en masse as well.
        
             | quonn wrote:
             | I wonder if the physical workers join especially for
             | services. Perhaps not, since they may be needed.
        
               | blibble wrote:
               | so bring it down to 3 billion people who know how to find
               | datacentres on google maps
        
               | quonn wrote:
               | Good point.
        
               | [deleted]
        
             | yieldcrv wrote:
             | the point is to survive a little longer so you don't wind
             | up being the 2 billion that starved to death waiting for an
             | AI-output tax to form universal basic income or the
             | uprising to start.
             | 
              | Bake cookies, just like the nice old grandma does for the
              | local teenage gangs. It's a survival mechanism so that
              | nobody messes with you.
        
           | roflyear wrote:
           | > because the worker will not be needed.
           | 
           | I'm pretty sure this is not obvious.
        
         | JieJie wrote:
         | "I think it'd be crazy not to be a little bit afraid, and I
         | empathize with people who are a lot afraid." --Sam Altman, CEO
         | of OpenAI
        
         | JL-Akrasia wrote:
         | An AI API marketplace is going to be amazing. But each API is
         | going to need parameters that you can run gradient descent on
         | to tune.
         | 
          | This needs to be the foundation.
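          | 
          | One way to read that: treat each API as a black box with
          | continuous knobs and tune them numerically. A rough sketch
          | (api() and loss() are hypothetical callables; finite
          | differences stand in for real gradients):
          | 
          |     def tune(api, loss, params, lr=0.05, eps=1e-3, steps=100):
          |         params = list(params)
          |         for _ in range(steps):
          |             base = loss(api(params))
          |             grads = []
          |             for i in range(len(params)):
          |                 bumped = list(params)
          |                 bumped[i] += eps
          |                 # numerical partial derivative w.r.t. knob i
          |                 grads.append((loss(api(bumped)) - base) / eps)
          |             params = [p - lr * g for p, g in zip(params, grads)]
          |         return params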
        
         | akhosravian wrote:
         | Someone born in 1885 who lived to be 70 was brought into a
         | world where it took weeks to get from the US to Europe, and
         | died in a world where it took hours. TNT was the hottest in
         | explosives at their birth, and we'd seen the spread of city
         | ending explosives by their death. I personally feel nuclear
         | proliferation is still orders of magnitude more frightening
         | than anything AI has done.
         | 
         | Someone born in 1985 who is now 37-38 was brought into a world
         | where the internet barely existed, and was barely an adult when
         | the iPhone launched. There's still a lot more that can happen.
         | 
         | Don't listen to pessimists: the world will look very different,
         | but we apes have a way of adapting.
        
         | roflyear wrote:
         | What things can this tech do which scare you?
        
           | maCDzP wrote:
            | It's not that this tech does things now that scare me. Right
            | now I am mostly excited.
           | 
           | I believe my uneasiness stems from the unknown potential of
           | this technology.
           | 
           | Now, one could argue that that's always the case for
           | technology, so why do I feel uneasy now?
           | 
           | I believe that this particular type of technology has a very
           | large potential. I think most HN readers would agree.
           | 
           | But I am not scared, yet. I'll be scared when I know the
           | technology will do serious damage, until then it's an uneasy
           | feeling. Then I'll probably be terrified.
        
             | roflyear wrote:
              | I think part of it is that you have a lot of idiots who are
              | saying this is AGI. It isn't, and it isn't anywhere close
              | to it.
        
         | fnovd wrote:
         | We've been "moving too fast" since we figured out agriculture.
         | It's fine. The world as you know it will change irreversibly;
         | you'll long for the simplicity of youth; you will bemoan the
         | state of the world we have left for our children... just like
         | every generation before us did. It'll all be fine. Enjoy the
         | ride, it's called life, and it's better than it ever has been.
        
           | dw_arthur wrote:
           | The power to seriously harm nations may be in the hands of
           | tens of thousands of individuals within a few years. Things
           | might just be different this time.
        
             | hanniabu wrote:
             | That's already the case
        
             | fnovd wrote:
             | It will be different this time, just like it was every
             | other time. It still doesn't matter. The epoch of earth's
             | history that happens to overlap with your own lifespan
             | isn't inherently more interesting or important than any
             | other time, except to you. But sure, what good is an
             | exciting new frontier without a gaggle of doomers worried
             | about what's on the other side? Same as it ever was.
        
               | sebzim4500 wrote:
               | >The epoch of earth's history that happens to overlap
               | with your own lifespan isn't inherently more interesting
               | or important than any other time, except to you
               | 
               | Earth's history, sure. Humanity's history though? Living
               | during the Apollo program is clearly more interesting
               | than living in a period of relative stasis. Living during
               | the AGI revolution could be more interesting still, we'll
               | have to see.
        
           | wintermutestwin wrote:
           | Past performance is not indicative of future results.
        
           | acdanger wrote:
           | These rah rah comments aren't illuminating or helpful. There
           | are immense costs - societal and ecological - to progress
           | that a lot of people seem to be blind to, whether willfully
           | or not.
        
             | Baeocystin wrote:
              | One can reasonably argue that the printing press was
              | responsible for the speed and violence of the Reformation.
             | But the alternative of an illiterate world is hardly a
             | panacea. Most large-scale advancements have the same
             | flavor.
             | 
             | What are we to do, then? No snark, honest question.
        
           | Manjuuu wrote:
           | We should have had a moratorium on potatoes, look at us now.
        
             | acdanger wrote:
             | Maybe a moratorium on the Haber process though.
        
         | catchnear4321 wrote:
         | Being terrified doesn't serve much purpose at this point.
         | 
         | We are going too fast. Have been for years. This is just the
         | first clear indication.
         | 
         | Braking is fatal, but some seem pretty hell-bent.
         | 
         | Deceleration is complicated, and it seems highly unlikely that
         | there would be sufficient consensus for true deceleration.
         | Local deceleration is simply waiting for the acceleration
         | occurring somewhere else to overcome your efforts.
         | 
         | The math hasn't really changed for most individuals. At some
         | point something big will happen.
         | 
         | Singularity.
         | 
         | Be excited and put your efforts towards what you value. One way
         | or another, there is very little time left for wasting.
        
           | quonn wrote:
           | Have to reply a second time: It does serve a purpose. There
           | is a purpose in fear and the purpose is to either get us
           | moving if we are complacent or to prevent us from doing
           | things that are not good for us. So the fear may be
           | justified. Maybe.
        
             | catchnear4321 wrote:
             | Time to do something more than respond to cortisol. Before
             | something smarter than the monkey decides to guide the
             | monkey effectively.
        
           | quonn wrote:
           | I disagree. ChatGPT is a clear turning point. We have not
           | been going too fast before, at least not outside LLMs.
           | 
           | Singularity is a belief system, it has very little to do with
           | AI.
           | 
            | edit: I also think that if we ever got to a point where AI
            | came close to getting out of control, as you imply, it would
            | simply be banned in the US/Canada/EU/Australia. Furthermore,
            | Latin America and Africa could and would be pressured to go
            | along if needed. Which
           | leaves some parts of Asia. China, maybe India and Russia.
           | Probably only China. It could be cut off from the Internet if
           | needed. We could build up a wall just like in the Cold War.
           | My point being: This will not happen just because it happens.
           | It will be a choice.
        
             | avereveard wrote:
              | GPT plus langchain agents is quite scary.
              | 
              | It will use every tool you give it to reach the goal you
              | give it. It will try forever if needed.
              | 
              | I bet state actors are already plugging in tooling to
              | register social accounts and automate credible propaganda.
              | Maybe not with GPT itself, but with privately hosted
              | fine-tuned models.
              | 
              | This can win elections.
              | 
              | You can plug in WordPress and build infinite blogs with
              | infinite posts, with the sole aim of building mass around a
              | topic.
              | 
              | This can alter Wikipedia; many people never check sources
              | and take it at face value.
              | 
              | You can not only fake research papers, but fake the whole
              | research team and all of its interactions with the
              | community and investors.
              | 
              | This can defraud millions.
              | 
              | Tools enable this today.
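
       For concreteness, a minimal sketch of the kind of tool-using agent
       loop being described here, assuming the LangChain interfaces as they
       existed around the time of this thread (initialize_agent and
       load_tools) and OpenAI/SerpAPI keys in the environment; the tool
       choices and the prompt are illustrative only.

from langchain.llms import OpenAI
from langchain.agents import initialize_agent, load_tools

llm = OpenAI(temperature=0)

# Give the model a web-search tool and a calculator; the ReAct-style agent
# decides which tool to call, reads the observation, and loops until it
# has an answer.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm,
                         agent="zero-shot-react-description", verbose=True)

agent.run("Who is the current UN Secretary-General, and what is 17% of 2380?")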
        
               | catchnear4321 wrote:
               | Humans have been doing all of this and more, for a very
               | long time.
               | 
               | This simply exposes all of the cracks in the foundations
               | of our society. There are severely exploitable issues,
               | and we may wind up with a planet-level chaos monkey.
               | 
               | Ghostbusters had the stay-puft marshmallow man. Will the
               | form of our destroyer be the woot monkey?
        
               | baq wrote:
                | Yes, it was possible. Steel was also available before the
                | industrial revolution... but guess what, it's called a
                | revolution for a reason: it became cheap enough to upend
                | the preexisting social status quo. That was a social
                | phase transition caused by technology.
                | 
                | We're dealing with a very nascent AI revolution right
                | now. A social phase transition has already started: at
                | the forefront there are graphic artists (midjourney v5 is
                | literally revolutionizing the industry as we speak) and
                | NLP researchers (GPT-4 has reduced the need for applied
                | NLP to basically zero), but it's only a start. The
                | cheapness and availability change everything.
        
             | catchnear4321 wrote:
             | I wasn't limiting my comment to AI development.
             | 
             | Humanity has been going too fast for a long time.
             | 
             | Tell me about this singularity belief system. I simply
             | meant something stronger than an inflection point, closer
             | to the mathematical sense than what you must be assuming,
             | but that word must mean something more for you.
        
             | marshray wrote:
             | Kind of like how we could just ban fossil fuels if they
             | ever start to become a problem.
        
         | nemo44x wrote:
          | I think we are at, or very near, the inflection point in that
          | graph that rises linearly for centuries and then suddenly goes
          | exponential.
         | 
         | https://waitbutwhy.com/wp-content/uploads/2015/01/G1.jpg
        
           | 93po wrote:
            | the problem is that the chart is also accurate if the x axis
            | starts 100,000 years ago. the inflection point might be 2,000
            | years away or it might be 50 years away.
        
       | ed wrote:
       | Here's the (empty, for now) GitHub repo -
       | https://github.com/microsoft/JARVIS
        
       | levesque wrote:
       | This read more like a technical demo than a scientific paper.
       | Wonder why they put it on arXiv.
        
       | tracyhenry wrote:
       | Reminds me of VisualChatGPT (https://github.com/microsoft/visual-
       | chatgpt), which also uses an LLM to decide what vision models to
       | run.
        
       | catchnear4321 wrote:
       | A lot of LLM/AI research feels like a big "duh."
       | 
       | The language model, being trained on essentially the primary way
       | in which humanity communicates, might be a good means of managing
       | integration of less language-focused models.
       | 
       | ...duh?
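
       Read that way, the integration layer can be as thin as asking the
       chat model which specialist to hand a request to. A minimal sketch,
       assuming the pre-1.0 OpenAI Python client that was current when this
       thread was written; the specialist registry and its names are
       hypothetical placeholders.

import openai  # pre-1.0 client; reads OPENAI_API_KEY from the environment

# Hypothetical registry of narrow models the LLM can route requests to.
SPECIALISTS = {
    "image-captioning": "describe what is shown in an image",
    "object-detection": "locate and label objects in an image",
    "speech-to-text": "transcribe spoken audio",
}

def route(request: str) -> str:
    # Ask the chat model to choose a specialist by name.
    menu = "\n".join(f"- {name}: {desc}" for name, desc in SPECIALISTS.items())
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                "Pick the single best tool for this request and answer "
                f"with only its name.\nTools:\n{menu}\nRequest: {request}"
            ),
        }],
    )
    return reply["choices"][0]["message"]["content"].strip()

print(route("What objects are in photo.jpg, and where are they?"))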
        
         | baq wrote:
         | animals which use tools: great apes (gorillas, chimpanzees,
         | orangutans), monkeys, otters, dolphins, some bird species,
         | octopus, and a few crocodilian species.
         | 
         | now LLMs are on this list. a thing which isn't born, doesn't
         | experience, can be copied and instantiated a million times, and
         | a single improvement can be basically immediately propagated to
         | all instances. a very unfamiliar thing with very familiar
         | capabilities.
         | 
         | so, technically, it was obvious. socially and psychologically,
         | we'll be dealing with it for the rest of our civilization's
         | lifetime.
        
           | fatherzine wrote:
           | "We'll be dealing with it for the rest of our civilization's
           | lifetime." Exactly.
        
       | user- wrote:
       | It's only a matter of time until connecting widely different AI
       | tools becomes super seamless; very exciting. The examples in the
       | paper are pretty cool. I predict that within the next year or so
       | we will see a sort of AI assistant that is hooked up to dozens of
       | LLMs and similar tools, and the end user will just ask their
       | assistant to do things for them. That sci-fi moment is almost
       | here.
        
         | chasd00 wrote:
         | connecting "widely different AI tools" reminds me of the good
         | old pipe operator. Maybe an AI can be an "anything command" for
         | a specific domain and then we just pipe them together so they
         | can each weigh in with their own expertise. ...like an AI team
         | or committee more or less.
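
       A toy sketch of that pipe idea in Python: treat each model as a
       text-in/text-out command and compose them left to right. The three
       stage functions are hypothetical stand-ins for calls to different
       specialist models.

from functools import reduce

def transcribe(audio_path: str) -> str:
    return f"transcript of {audio_path}"   # stand-in for a speech-to-text model

def summarize(text: str) -> str:
    return f"summary of ({text})"          # stand-in for an LLM

def translate(text: str) -> str:
    return f"translation of ({text})"      # stand-in for a translation model

def pipe(*stages):
    # Compose stages so each one's output feeds the next, like `a | b | c`.
    return lambda value: reduce(lambda acc, stage: stage(acc), stages, value)

pipeline = pipe(transcribe, summarize, translate)
print(pipeline("meeting.wav"))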
        
       | karmasimida wrote:
       | It is possible to actually revolutionize the glue layer of
       | computation in a lot of organizations.
       | 
       | ChatGPT is the ultimate and last glue layer we will ever need.
        
       ___________________________________________________________________
       (page generated 2023-03-31 23:01 UTC)