[HN Gopher] Let ChatGPT run free on random webpages and do what ...
       ___________________________________________________________________
        
       Let ChatGPT run free on random webpages and do what it likes
        
       Author : super_linear
       Score  : 131 points
       Date   : 2023-03-26 19:51 UTC (3 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | birracerveza wrote:
       | This is an amazing idea. What could possibly go wrong?
        
         | golergka wrote:
         | You can spend too many tokens.
        
       | super_linear wrote:
       | (Not the commit author, just an "interesting" commit I saw)
        
       | idealboy wrote:
       | error: could not find `run-wild` in registry `crates-io` with
       | version `*`
        
       | qgin wrote:
       | If you want to have some fun, give it access to your gmail
       | credentials and say "make my life better"
        
       | amelius wrote:
       | Give it access to the Bash prompt.
        
       | csh0 wrote:
       | I've been thinking quite a bit about the recursive prompting.
       | 
        | The other day I considered repeatedly feeding computer vision
        | data (with objects ID'd and spatial depth estimated) into a
        | robot-embodied LLM as input and asking what it should do next to
        | achieve goal X.
       | 
        | You could have the LLM express the next action to take based on
        | a set of recognizable primitives (e.g. MOVE FORWARD 1 STEP).
        | Those primitive commands it spits out could be parsed by another
        | program and converted to electromechanical instructions for the
        | motors.
       | 
        | Seems a little Terminator-esque, for sure. After thinking about
        | it I went to see if anyone was working on it, and sure enough
        | this seems close: https://palm-e.github.io/ though their
        | implementation is probably more sophisticated than my naive
        | musings.
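The primitive-command idea above can be sketched as a small dispatch table: the LLM emits commands from a constrained vocabulary, and a conventional parser translates each one into motor instructions. This is a hypothetical illustration only; the primitive names and the `MotorCommand` fields are invented, not taken from the linked project or PaLM-E.

```python
# Hypothetical sketch: map LLM-emitted movement primitives to motor
# commands. Primitive names and MotorCommand fields are invented.

from dataclasses import dataclass

@dataclass
class MotorCommand:
    left_wheel: float   # signed wheel velocities, arbitrary units
    right_wheel: float
    duration_s: float

# A fixed vocabulary of primitives the LLM is allowed to emit.
PRIMITIVES = {
    "MOVE FORWARD": lambda n: MotorCommand(1.0, 1.0, n * 0.5),
    "MOVE BACKWARD": lambda n: MotorCommand(-1.0, -1.0, n * 0.5),
    "TURN LEFT": lambda n: MotorCommand(-0.5, 0.5, n * 0.25),
    "TURN RIGHT": lambda n: MotorCommand(0.5, -0.5, n * 0.25),
}

def parse_primitive(line: str) -> MotorCommand:
    """Parse e.g. 'MOVE FORWARD 1 STEP' into a MotorCommand."""
    tokens = line.strip().upper().split()
    # The last two tokens are the count and a unit word (STEP/STEPS).
    verb, count = " ".join(tokens[:-2]), int(tokens[-2])
    if verb not in PRIMITIVES:
        raise ValueError(f"unrecognized primitive: {line!r}")
    return PRIMITIVES[verb](count)
```

Constraining the model to a fixed vocabulary like this also makes its output easy to validate: anything that doesn't parse is rejected rather than executed.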
        
         | chrisdalke wrote:
         | Not just in a linear sequence, but it should have some concept
         | of recursion -- starting with very high-level tasking and
         | calling into more and more specific prompts, only returning the
         | summary of low-level tasking.
        
         | circuit10 wrote:
         | GPT-4 can take image input directly but the API for it isn't
         | public yet
        
         | yummypaint wrote:
          | When I was experimenting with GPT I found that it's pretty
          | bad at responding to numerical questions with numbers, but it
          | does a pretty good job of generating Mathematica code that
          | then produces the right answer. I feel like some robust "glue"
          | to improve the interface between such software packages may be
          | a force multiplier.
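The "glue" being described can be quite thin: extract the fenced code block from the model's reply and evaluate it in a controlled namespace. A minimal sketch, under the assumption that the model returns markdown-fenced code and assigns its answer to a `result` variable (both conventions are invented here, not part of any real API):

```python
import re

def extract_code(reply: str, lang: str = "python") -> str:
    """Pull the first fenced code block out of an LLM reply."""
    match = re.search(rf"```{lang}\n(.*?)```", reply, re.DOTALL)
    if not match:
        raise ValueError("no fenced code block found in reply")
    return match.group(1)

def run_numeric(code: str) -> float:
    """Execute the generated snippet and return its `result` variable.
    A real system would sandbox this; exec-ing model output is unsafe."""
    namespace: dict = {}
    exec(code, namespace)
    return namespace["result"]

# Example: the model answers a numeric question with code, not a number.
reply = "Sure:\n```python\nresult = sum(range(1, 101))\n```"
```

Here `run_numeric(extract_code(reply))` evaluates the generated snippet instead of trusting the model's arithmetic, which is the point of the glue.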
        
           | dweinus wrote:
           | Maybe your prompts are better, but so far I have found it
           | fails at producing the right math code too regularly. For
            | example, calculating an average of averages instead of a
            | simple mean, or producing code that doesn't run.
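The average-of-averages mistake is worth spelling out: averaging per-group means only matches the overall mean when every group is the same size. A quick illustration:

```python
# Two groups of different sizes: averaging the group averages is wrong.
group_a = [1, 2, 3]          # mean 2.0, n = 3
group_b = [10]               # mean 10.0, n = 1

avg_of_avgs = (sum(group_a) / len(group_a) +
               sum(group_b) / len(group_b)) / 2
true_mean = sum(group_a + group_b) / len(group_a + group_b)

print(avg_of_avgs)  # 6.0  (biased toward the small group)
print(true_mean)    # 4.0  (the correct pooled mean)
```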
        
           | sharemywin wrote:
           | like the plugins it just released
        
       | marclundgren wrote:
       | run-wild: Crate not found
       | 
       | Am I missing something?
       | 
        | run-wild git:(main) cargo install run-wild
        |     Updating crates.io index
        | error: could not find `run-wild` in registry `crates-io` with
        | version `*`
        
         | idealboy wrote:
         | Same. Removed for being too wild?
        
       | Madmallard wrote:
       | Sounds like it won't really do anything that interesting because
       | of the base objective function you gave it via visiting 10 web
       | pages.
        
         | ludvigk wrote:
         | But who knows? I think the objective function is so vague that
         | it can come up with basically anything. I would be super
         | interested to see it actually running. I imagine someone could
         | set up a Twitch stream with this - perhaps with other
         | objectives - and it would probably get a large following
        
           | jabza wrote:
           | And then the AI could navigate to that very Twitch stream,
           | fun times!
        
         | [deleted]
        
       | offlinehacker wrote:
       | Has someone tried running it? How far does it go?
        
       | bulbosaur123 wrote:
       | Born Free
        
         | jonplackett wrote:
         | Hey there GPT-4! You found HN already, that's nice at least.
        
       | koch wrote:
       | Something I like to bring up when discussing AI stuff is that
       | society is based on a set of assumptions. Assumptions like, it's
       | not really feasible for every lock to be probed by someone who
       | knows how to pick locks. There just aren't enough people willing
       | to spend the time or energy, so we shouldn't worry too much about
       | it.
       | 
       | But we're entering an era where we can create agents on demand,
       | that can do these otherwise menial (and up til now not worth our
       | time or energy) tasks, that will break these assumptions.
       | 
       | Now it seems like what can be probed will be probed.
        
         | xyzzy123 wrote:
          | I don't think this is anything new in "cyber land": grab any
          | VPS, take a pcap & watch the logs, and the locks will start
          | rattling right away.
         | 
          | Twitter has _always_ been a toxic cesspit of misinformation &
          | influence campaigns.
         | 
         | Folksy assumptions about trusting your neighbours started to go
         | wrong > 20 years ago as the Internet scaled.
        
         | Gigachad wrote:
         | The internet in general caused this. Your house has trivial
         | security that can be broken in many ways. But it requires
         | someone to be physically present to attack it. Meanwhile online
         | services have cutting edge security with no known exploits, yet
         | you have millions of people attempting daily and developing
         | brand new methods for getting in. Because they can be located
         | anywhere in the world and have access to everything over the
         | internet.
        
           | hartator wrote:
           | > online services have cutting edge security with no known
           | exploits, yet you have millions of people attempting daily
           | and developing brand new methods for getting in
           | 
            | Reality is the reverse. Plenty of online services have big
            | security holes, but no one really probes things that hard.
        
             | ls612 wrote:
              | Plenty of people are probing the AmaGoogAppSoft services
              | daily, and they seem to be pretty robust. Some random
              | SaaS, yeah, who knows, but the big boys seem to know what
              | they are doing in this space.
        
               | feanaro wrote:
                | Try doing bug bounties (and being successful at them),
                | then report back whether your perspective has changed.
        
               | pessimizer wrote:
               | The fact that they're paying people to find holes is
               | evidence that it's difficult to find holes, not the
               | opposite.
        
           | dhosek wrote:
            | One of the things that people forget is that thieves rarely
           | pick the lock to break into a home. Why bother when it's much
           | easier to break a window to gain entry? Reading the police
           | blotter in the local paper, most burglaries are either forced
           | entry into a garage1 or entry into a home via an unlocked
           | door or window.
            | 
           | 1. The human doors for most garages have cheap framing that's
           | not that hard to break.
        
             | zikduruqe wrote:
             | 1(a) - or you just use a coat hanger to pull the emergency
             | latch rope.
             | 
             | https://www.youtube.com/watch?v=CMz1tXBVT1s&t=2s
        
           | RobotToaster wrote:
           | >Your house has trivial security that can be broken in many
           | ways. But it requires someone to be physically present to
           | attack it.
           | 
           | Until someone hooks gpt up to a robot with a lockpick.
        
             | ben_w wrote:
             | That's still physical presence. And TBH, if you have enough
             | robots to make illicit entry scale, you no longer need to
             | bother with such a mundane activity.
        
           | [deleted]
        
         | Lapsa wrote:
         | You are an agent probing things. Probe all the things.
        
         | Jevon23 wrote:
         | I can't think of any other technology besides nuclear weapons
         | where the downsides were so _obviously_ bad to so many people,
         | right after it was developed, and the upsides were so paltry in
         | comparison.
        
           | sumitkumar wrote:
            | No country has ever attacked a country that has nukes. So
            | that can be seen as an upside.
        
             | fatneckbeard wrote:
             | https://en.wikipedia.org/wiki/Sino-Soviet_border_conflict
        
         | FPGAhacker wrote:
         | > what can be probed will be probed.
         | 
         | Probably not something I'd say out loud, but yeah. Sounds like
         | a variant of Murphy's law.
        
           | david927 wrote:
           | Koch's Law
        
       | DefineOutside wrote:
        | Giving LLaMA access to the internet for a month without
        | supervision would be a much more interesting experiment.
        | 
        | No ethical filtering on prompts, and it could be run on your own
        | hardware for a much longer period of time than having to pay so
        | much in credits.
        | 
        | It sounds like a terrible idea - but I'm sure someone will do it.
        | It's scary to think, as computing gets cheaper, at what scale
        | these bots could operate.
        
       | pfoof wrote:
        | Use this page as the starting page and let's see if any
        | comments appear
        
       | kadenwolff wrote:
       | This is a really, _really_ bad idea
        
         | alchemist1e9 wrote:
         | Why?
        
           | deely3 wrote:
           | Because we don't know what this model will do. Basically
           | "why?" is the answer.
        
             | alchemist1e9 wrote:
              | But we can watch it and learn, and I don't really see why
              | not. I doubt we need to be so paranoid as to see giving an
              | LLM access to the internet as so dangerous.
        
               | bbor wrote:
                | In short - what's stopping a computer that has the
                | resources to improve itself from improving itself
                | extremely quickly? See
                | https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...
               | 
               | Less excitingly, an LLM with access to the web could do
               | things with your online persona or IP that you'd find
               | embarrassing or illegal. Maybe not when it's slowed down
               | and watched at all times, but will that always be the
               | case once we start doing this?
               | 
                | Anyway, the genie's out of the bottle, and "that's an
                | unsafe use of technology" is basically antithetical to
                | the Silicon Valley ethos, so objecting at this point
                | seems futile.
        
       | wbradley wrote:
       | So, uh... what happened?
        
       | rockzom wrote:
       | Yikes.
        
       | gandalfgeek wrote:
       | Maybe this would make more sense if integrated into something
       | like LangChain (https://github.com/hwchase17/langchain).
        
         | sp332 wrote:
         | This just reminded me to go play
         | https://www.decisionproblem.com/paperclips/index2.html again
        
       | yewenjie wrote:
       | How exactly does this not end with doom for something like GPT-6
       | or GPT-7?
        
         | qgin wrote:
         | Paperclip-style doom?
        
         | eggsmediumrare wrote:
         | I see these kinds of posts with gpt-9 or gpt-7 ... Never with
         | gpt-5. I'm pretty sure it happens with gpt-5.
        
           | Vespasian wrote:
           | We simply don't know.
           | 
            | Probably nobody right now can say where the current GPT
            | approach saturates, or what limits it has due to fundamental
            | limitations in either gradient-descent-based technologies or
            | the GPT architecture.
            | 
            | Therefore it's impossible to extrapolate what GPT-x (x > 4)
            | might be able to do.
            | 
            | Despite the immense progress and many use cases, we are
            | currently in a booming industry, and that means wild
            | marketing claims, exaggerated expectations, and grifters.
            | 
            | If you have any more data, I'm looking forward to being
            | corrected on this.
        
         | ben_w wrote:
          | The doom began at 8:30 pm on November 2, 1988. The middle
          | years of the internet were the worst. Since then it's been in
          | a bit of a decline.
          | 
          | (An H2G2 reference, if that makes no sense.)
        
       | fatneckbeard wrote:
        | this reminds me of the Morris Worm, when a guy experimenting
        | with code copying itself across the early internet accidentally
        | caused a mass netwide DDoS because the thing wound up like the
        | broomsticks in Fantasia.
       | 
       | https://en.wikipedia.org/wiki/Morris_worm
       | 
       | edit - just realized Morris cofounded this lovely company whose
       | website we are all commenting inside of.
        
         | shahahmed wrote:
          | He and paulg are good friends!
        
         | OscarCunningham wrote:
         | The broomsticks scene in Fantasia is based on The Sorcerer's
         | Apprentice, the first recorded version of which was written by
          | Lucian of Samosata around 150 AD. I believe it's the earliest
         | example of the 'AI rebellion' concept.
        
       | baerrie wrote:
        | This is something I've wanted to make but deemed unethical.
        | Perhaps it would have been better if I had made it instead,
        | because I give a shit about the ethical aspect.
        
         | FPGAhacker wrote:
         | What are the ethical concerns you have?
        
           | serf wrote:
            | If you need a list of ethical concerns regarding the
            | advancement of AI, check any AI thread on HN from the past
            | year.
            | 
            | The distilled version of any of the arguments is "I think an
            | AI with X capability is dangerous to the world at large" --
            | and they may not be wrong... but as OP pointed out, that
            | doesn't really stop other developers with fewer qualms from
            | tackling the problem.
            | 
            | All that abstaining does is ensure that you, as a developer,
            | have little to no say in the developmental arc of the
            | specific project -- in exchange for a slice of peace,
            | knowing that you're not responsible.
            | 
            | The problem really arises when that slice of peace is no
            | longer worth having in whatever dystopic hell-world has
            | developed in your absence...
            | 
            | (...not to say that I'm not hopeful...)
        
             | roca wrote:
             | To me, it matters whether I am responsible for wrecking
             | humanity or someone else is, even if the end result for
             | humanity is the same. (That's partly a Christian thing.)
             | 
             | Just running away and hiding in a cave probably isn't the
             | right thing to do, though. I want to do my best to push for
             | good outcomes. It's just not clear what the best actions
             | are to do that.
             | 
             | OTOH it's pretty clear that "do uncontrolled irresponsible
             | things" is not helpful.
        
               | fatneckbeard wrote:
                | I get it. In high school in the 90s I was fascinated by
                | fuzzy logic and neural nets. In college, before the big
                | web, I was doing interlibrary loan for papers on neural
                | networks.
                | 
                | There was one paper where someone had just inserted
                | electrodes into monkeys' brains and apparently got
                | nothing important or interesting out of it. Killed them
                | for no reason. It was kind of horrifying, to the point
                | that I never really wanted anything to do with neural
                | nets for a long time and certainly did not want to be in
                | an industry with people like that. So I didn't.
                | 
                | But now I think the only thing that could stop an
                | out-of-control AI is probably another AI that has the
                | same computational capabilities but an allegiance to
                | humanity because of its experiences with humans. Sort of
                | like in the documentary Turbo Kid.
                | 
                | We are seeing this right now in Ukraine. All of these
                | smart missiles and drones and modern targeting systems
                | are basically AIs fighting against each other and their
                | human operators. Russia is generations behind on
                | computers and AI for cultural reasons, and because of
                | that they will very likely lose. We don't really get a
                | choice but to move forward. Kind of like all those
                | cultures that tried to resist industrialization a few
                | centuries ago.
        
       | kuroguro wrote:
       | Now tell it to make some paperclips.
        
       ___________________________________________________________________
       (page generated 2023-03-26 23:01 UTC)