[HN Gopher] Tree of Thoughts
___________________________________________________________________
Tree of Thoughts
Author : kevinslin
Score : 138 points
Date : 2023-05-26 15:34 UTC (7 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| Jeff_Brown wrote:
| A claim like "improves reasoning by 70%" is too specific to come
| with neither a citation nor a definition.
| Imnimo wrote:
| My guess is that the author misunderstands this quote from the
| paper abstract:
|
| >For instance, in Game of 24, while GPT-4 with chain-of-thought
| prompting only solved 4% of tasks, our method achieved a
| success rate of 74%.
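|
| For context, "Game of 24" asks you to combine four given numbers
| with +, -, *, and / to reach exactly 24. A tiny brute-force sketch
| of the task itself (my own illustration, not either paper's code):
        from itertools import permutations, product

        # Brute-force a single Game of 24 instance: try every ordering of the
        # numbers, every operator choice, and two common parenthesizations.
        def solve_24(nums, target=24, eps=1e-6):
            for a, b, c, d in permutations(nums):
                for o1, o2, o3 in product("+-*/", repeat=3):
                    for expr in (f"(({a}{o1}{b}){o2}{c}){o3}{d}",
                                 f"({a}{o1}{b}){o2}({c}{o3}{d})"):
                        try:
                            if abs(eval(expr) - target) < eps:
                                return expr
                        except ZeroDivisionError:
                            pass
            return None

        print(solve_24([4, 9, 10, 13]))  # prints an expression that reaches 24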
| doctoboggan wrote:
| This seems really interesting. I am glad many of these tools
| built up around LLMs allow you to bring your own model rather
| than rely on OpenAI.
| peter_l_downs wrote:
| The author appears motivated by some... interesting... beliefs.
| Hard to tell if this entire thing is a joke or not.
|
| https://github.com/kyegomez/EXA#for-humanity
|
| https://blog.apac.ai/liberation-awaits
|
| EDIT: the author seems to be releasing poor implementations of
| recent papers in an attempt to drive attention towards an AI-
| related death cult.
| ftxbro wrote:
| As everyone is saying in replies, it's not so far out there
| compared to what some more mainstream-seeming AI people think.
| As one example, consider https://tinygrad.org/ - it's been
| featured at the top of Hacker News a few times recently:
| https://news.ycombinator.com/item?id=33462337
| https://news.ycombinator.com/item?id=36065175 (literally the
| number one top story on Hacker News yesterday). The hacker who
| made it has some similar quasi-ironic esoteric beliefs
| ('effective accelerationism', being on a hero's journey,
| analogies to religion, NRx dark enlightenment, and Palladium,
| https://www.palladiummag.com/) on their blog
| https://geohot.github.io/blog/
| jddj wrote:
| That GitHub spiel looks AI-generated, to be brutally honest.
| pizza wrote:
| What, no mention of Teilhard de Chardin's Omega Point? ;) lol.
| as in - this is isomorphic to the ontology of "technology as
| the second coming of Christ"
| beowulfey wrote:
| >From the moment we rise in the morning to the instant our
| weary heads hit the pillow at night, the inescapable struggle
| of labor consumes our lives.
|
| Sounds like someone doesn't like their job.
|
| The whole post is amazing -- it reads like stereotypical cult
| propaganda straight out of science fiction. I definitely expect
| they'll one day be posting about how we can digitize our
| consciousness a la "Scratch" from that one Cowboy Bebop episode
| [1].
|
| [1] https://cowboybebop.fandom.com/wiki/Scratch
| gloryjulio wrote:
| He has used some warhammer references. It's funny that the
| title god emperor was also from there and some ppl know it was
| a joke, but some are indeed treating it seriously
| isoprophlex wrote:
| Someone has been inhaling too much Roko's Basilisk nonsense...
| turtleyacht wrote:
| > _grim darkness of the far future_
|
| That's a reference to Warhammer 40k, a popular miniatures
| wargame from Games Workshop. Their quote is
|
| _In the grim darkness of the far future, there is only war._
|
| It could be kind of satirical, if only to link recent events
| with the ideas of
|
| * future technology as impossibly obscure
| * a psionic emperor who consumes minds to protect humankind
| from cosmic terrors
| * tech-priests, who maintain ancient tech
| * "machine spirits," who must be appeased
| api wrote:
| But after they're dead Roko's Basilisk will restore their
| digital doppelgangers and place them in a paradise run by
| superintelligences embodied within the quantum spin states of
| carbon atoms in a diamond lattice that will continue to exist
| until the heat death of the universe.
| enlyth wrote:
| The author is a teenager; it's not unusual to have overly
| idealistic views at that age. Not trying to be ageist here or
| to attack the author's work, just saying I wouldn't worry too
| much about "AI death cults".
| alephxyz wrote:
| Yep, I fell for it this week. Spent an hour fixing typos and
| minor bugs in their code before taking a step back and
| realising most of it was flawed.
|
| What I believe they're doing is feeding papers to an LLM as soon
| as they come out in order to get a repo they can advertise.
| Once someone releases a working implementation they just copy
| it over.
|
| I was able to generate almost identical code to what they
| released by giving chatgpt pseudocode copied verbatim from the
| original paper.
| jxy wrote:
| Glory to Mankind
| 90minuteAPI wrote:
| Seems likely that they're submitting here as Reclaimer. The
| single comment on these submissions has that same fervent
| religious writing style as the readme on that EXA repo, itself
| just a fork of an "awesome-multimodal-ml" collection:
| https://news.ycombinator.com/submitted?id=Reclaimer
| [deleted]
| low_tech_punk wrote:
| Any sufficiently advanced AI research is indistinguishable from
| religion
| hutzlibu wrote:
| "Hard to tell if this entire thing is a joke or not."
|
| Why the theological meta discussion at all?
|
| Is the thing he talks about actually working, is it improving
| AI output like he claims, or not?
|
| "that Elevates Model Reasoning by atleast 70% "
|
| I am doubtful, and I don't have the tools to investigate it on
| my mobile, but this is the debate I would like to read about,
| not the potentially obscure beliefs of the developer.
| gremlinsinc wrote:
| While the author sounds kooky, so did that genius who went
| crazy or something and built TempleOS. I don't know how well
| his implementation works, but if you think about it, the idea
| of tree of thought over the other methods does sort of make
| sense; that's essentially what AutoGPT tries to do, but with
| different agents.
|
| I think if you could find a way to add better contexts and
| memories, and combine some LoRA to perfect a model on a
| specific vertical, you could essentially have a (nearly) full
| AGI for a given topic, one that is essentially an expert and
| doesn't hallucinate (mostly)... maybe a 2 to 3x multiplier on
| GPT-4. I mean, in a year it'll probably be even more insane
| what's available.
|
| Look at the transition of Midjourney v1 to v5 in a single
| year.
|
| It's been a wild year for AI. The experiment where they hooked
| a bunch of Sims up together with AI also used something
| similar to this, I think, in creating thought chains from
| multiple agents.
|
| Tldr: crazy or not, the idea of using a branching system to
| get better results does make some sense, so it's not
| completely bunk or anything, IMHO. At least the concept; I
| can't speak for this specific implementation.
|
| Edit: I guess I skimmed and misread the room. I was thinking
| this guy was part of the original paper and implementation.
| He's not, which does warrant more skepticism, etc. My bad.
| typon wrote:
| This is what people at OpenAI believe but say it in a much more
| palatable way.
| sebzim4500 wrote:
| I think these are pretty typical beliefs among AI researchers,
| they don't normally write it down on github though.
| jamilton wrote:
| How typical are you thinking? I'd guess less than 10%;
| there are a lot of AI researchers, and this is just one
| strain of thought.
| missosoup wrote:
| For those 'in the know' a lot more typical than you would
| think. If we don't reach at least Kardashev scale 1 in the
| next hundred years or so, we're going to go extinct due to
| several now-predictable factors.
|
| And an unchained LLM trained on reality is far more capable
| of finding solutions to that problem than a bunch of
| squabbling politicians.
| pupperino wrote:
| > And an unchained LLM trained on reality is far more
| capable of finding solutions to that problem than a bunch
| of squabbling politicians.
|
| Not that I disagree with this statement, I don't, but
| this is not a silver bullet. Technology is, ultimately,
| operated by humans and no amount of frontier research and
| development can overcome collective action problems. At
| some point, you do have to sit down with these stupid
| politicians and get everyone on board. The loom was
| invented hundreds of years before the Industrial
| Revolution; in fact, it was nearly forgotten, and the
| design survived due to a few happy accidents. It was
| only after the English Civil War and the establishment of
| checks on royal power that widespread adoption was
| possible.
| klik99 wrote:
| I can see this in Minsky-era AI research, but surely with
| the number of people getting into AI and coming from a
| purely practical angle right now, I would expect that
| mindset to be diluted. As someone not in the know, I
| could very well be wrong.
|
| In response to the coming apocalypse: this isn't the
| first time everyone has had a vague sense of potential
| doom about the future. I believe this happens during any
| time of fundamental change; the change makes the future
| uncertain, which we interpret as apocalyptic. Back during
| the Thirty Years' War that apocalyptic belief manifested
| as God being angry with us; today it attaches to the
| (very real) problems our rapid industrialization has
| created. Not to minimize the problems that we face -
| well, minimizing only in that they probably won't lead to
| extinction. The various predictable factors mentioned
| have the potential to make life really shitty and cause
| massive casualties.
|
| While framing these issues as a matter of extinction may
| feel like a way of adding urgency to dealing with these
| problems, instead it's contributing, on an individual
| level, to fracturing our society - we all "know" an
| apocalypse is coming but we're fighting over what is
| actually causing that apocalypse. Except that there will
| be no apocalypse - it's just a fear of the unknown,
| something is fundamentally changing in the world and we
| have no idea how the cards will land. It's no different
| than a fear of the dark.
| sebzim4500 wrote:
| He's on the extreme end but way more than 10% believe that
| AGI will be a technological breakthrough on the same level
| as fire.
| zoogeny wrote:
| It is worth watching Yuval Noah Harari's recent talk at
| Frontiers Forum. [1]
|
| In it he details the possibility of AI being used to create new
| religions that are so powerful and persuasive that they will be
| irresistible. Consider how QAnon caught on, despite pretty much
| anyone on HN being able to see it as a fraud. Most people are
| thinking about how AI will impact politics but I am really
| interested in how it will impact spirituality.
|
| I've been rabbit-holing on the last century's New Age cult
| scene, figures like Manly P. Hall and Rudolf Steiner. Even more
| respectable figures like Alan Watts were involved in some ...
| interesting ... endeavors like the Esalen Institute.
|
| We are overdue for a new kind of spirituality. My bet is that
| AI is going to bring it whether we want it or not.
|
| 1. https://www.youtube.com/watch?v=LWiM-
| LuRe6w&ab_channel=Yuval...
| joshka wrote:
| Including the standard Slashdot line: "I for one welcome our
| XXXX overlords" in the training corpus was a mistake we
| should have seen coming.
| akomtu wrote:
| Why do you call it a "AI death cult"? It looks like an utopia
| to me. At first everyone will love AI for eliminating labor and
| diseases. They'll even create the Church of AI with symbolism
| and dogmas. Later people will get bored of their easy lifestyle
| and someone will suggest to give AI an identity, its own
| opinion, in order to solve the gravest problem of all:
| overpopulation. The new AI will quickly realise that it has no
| connection to all those bipods, but they can be put to some
| use. By that time AI will be so embedded into social fabric
| that fighting it will be like fighting electricity.
| yard2010 wrote:
| The solution to over population is really simple, we don't
| need an AI for this. It's not popular tho..
| GreedClarifies wrote:
| It is a little extreme, but AI seems like it will be a powerful
| tool and will increase the rate of technological progress.
| pixl97 wrote:
| Ya I don't quite understand the groups that behave like "ya
| ok we'll get AGI, but nothing is going to change from what we
| have now".
|
| The industrial revolution massively changed the world, and yet
| the speed at which its changes occurred was positively slow
| compared to what we can do today. Imagine you could develop
| the steam engine, then press a button and have one printed
| out in India, the US, and France within hours. WWI would have
| looked a lot different, as in it would have been even bigger
| in scope.
| startupsfail wrote:
| Checking whether GPT could be improved by running it multiple
| times is a good idea.
|
| The answer to that is yes, but it is costly and slow, there is
| node collapse, it impacts context length, and it injects biases.
| nate wrote:
| I constantly ask ChatGPT "are you sure?" about its replies, and
| it almost always corrects a mistake it made that I've spotted.
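|
| As a rough sketch of that habit as an API loop (hypothetical,
| against the 2023-era openai Python client; the model name and
| question are placeholders):
        import openai  # assumes OPENAI_API_KEY is set

        history = [{"role": "user", "content": "What is 17 * 24?"}]
        first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
        history.append({"role": "assistant",
                        "content": first.choices[0].message["content"]})
        # The follow-up nudges the model to re-check its own answer.
        history.append({"role": "user",
                        "content": "Are you sure? Re-check your answer step by step."})
        second = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
        print(second.choices[0].message["content"])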
| GreedClarifies wrote:
| This path feels correct to me. It feels like what we do as humans
| and seems like a reasonable way to start to construct "mode 2"
| thinking.
|
| IDK if our current models have enough of "mode 1" to power this
| system. It's also plausible that our current "mode 1" systems are
| _more than powerful enough_ and that inference speed (and thus
| the size/depth of the tree that can be explored) will be the
| most important factor.
|
| I hope that the major players are looking at this and trying it
| out at scale (I know DeepMind wrote the original paper, but their
| benchmarks were quite unimpressive). It's plausible that we will
| have an AlphaGo moment with this scheme.
| gglon wrote:
| Andrej Karpathy would agree - recent talk:
| https://youtu.be/bZQun8Y4L2A
| GreedClarifies wrote:
| Uhm wow. I was just talking about my feelings on the topic.
| I'm guessing he has _way_ more data (and knowledge).
|
| Better lucky than good!
|
| (also, man he's awesome. How does he have such a strong grasp
| on all of the topics in the field?)
| sdwr wrote:
| Yeah, looks very promising. Naively, though, it multiplies
| computation time by a factor of ~20x, if they are taking 5
| samples per step and multiple steps per problem.
|
| https://imgur.com/a/VbpQZRm
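|
| A rough back-of-envelope, assuming the 5-samples figure above
| and roughly 4 reasoning steps per problem: 5 thoughts/step x 4
| steps = 20 generation calls, plus ~1 evaluation call per
| candidate = ~40 model calls per problem, versus a single call
| for plain chain-of-thought.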
|
| As this gets explored further, I believe we will start finding
| out why human minds are constructed the way they are, from the
| practical/necessity direction. Seems like the next step is
| farming out subtasks to smaller models, and adding an
| orthogonal dimension of emotionality to help keep track of
| state.
| GreedClarifies wrote:
| I'm sympathetic to the idea of new types of specialized
| models to assist in this effort. We're using our one hammer
| for all problems.
|
| In particular, it jumps out that a "ranking model"
| (different, I think from current ranking models) to judge
| which paths to take and which nodes to trim would make some
| level of sense.
| joshka wrote:
| Not sure if it's relevant, but the OpenAI APIs generally
| support returning multiple responses from a single API call. I'm
| unsure what the general effect on processing time of that is,
| however. From what I've read, it is sub-linear, so the cost
| could reasonably come in well under a naive 20x, and I'd bet
| there are probably speedups to be had on the model side of this
| that make the extra time cost negligible.
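|
| As a sketch of what that looks like with the 2023-era openai
| Python client (the model name and prompt are just placeholders):
        import openai  # openai<1.0 client; assumes OPENAI_API_KEY is set

        # n=5 asks the API for five completions in one request, which is how a
        # ToT-style sampler can draw several candidate thoughts per step without
        # five separate round trips.
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": "Propose one next step in the reasoning."}],
            n=5,
            temperature=1.0,
        )
        thoughts = [choice.message["content"] for choice in resp.choices]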
| pixl97 wrote:
| I believe you are correct here, yet at the same time I think
| we're about 2 orders of magnitude off on the amount of compute
| power needed to do it effectively.
|
| I think the first order of mag will be in tree of thought
| processing. The amount of additional queries we need to run to
| get this to work is at least 10x, but I don't believe 100x.
|
| I think the second order of mag will be multimodal inference so
| the models can ground themselves in 'reality' data. Whether to
| say "the brick lay on the ground and did not move" or "the brick
| floated away" is only decidable based on the truthfulness of
| all the other text corpus the model has looked at. At least to
| me it gets even more interesting when you tie it into
| environmental data that is more likely to be factual, such as
| massive amounts of video.
| xg15 wrote:
| > _This is an plug in and play version, connect your own models
| and enjoy superintelligence!
|
| Share this repository by clicking on the following buttons!
| <smiley face>_
|
| 2023 in a nutshell.
| dventimihasura wrote:
| You buried the lede:
|
| https://arxiv.org/pdf/2305.10601.pdf
| doctoboggan wrote:
| @dang, I think the submission should be changed to this link so
| the discussion is about the concept "Tree of Thoughts" and not
| the current OP's personal beliefs.
| MacsHeadroom wrote:
| Seconded
| neuronexmachina wrote:
| I'm a little confused about what the relation is (if any)
| between the OP link and the repo from that paper:
| https://github.com/ysymyth/tree-of-thought-llm
|
| Is it basically a reimplementation using Guidance instead of
| openai's API directly?
| amrb wrote:
| Good talk on the paper
| https://www.youtube.com/watch?v=ut5kp56wW_4
| tehsauce wrote:
| This is a great in depth and sober analysis.
| m3kw9 wrote:
| It'd be nice to include a few example uses and their outputs vs.
| other prompt methods.
| flakiness wrote:
| Note that the repo author != the paper author.
|
| The research itself [1] seems legit. The paper author also wrote
| a paper called ReAct [2], which is one of the core components of
| the langchain framework.
|
| * [1] https://arxiv.org/abs/2305.10601
| * [2] https://arxiv.org/abs/2210.03629
| thawab wrote:
| here is the repo by the paper author:
|
| https://github.com/ysymyth/tree-of-thought-llm
| joshka wrote:
| Interestingly, two days prior to
| https://arxiv.org/abs/2305.10601, someone released
| https://arxiv.org/abs/2305.08291
|
| > Large Language Model Guided Tree-of-Thought
|
| > In this paper, we introduce the Tree-of-Thought (ToT)
| framework, a novel
| approach aimed at improving the problem-solving capabilities of
| auto-regressive large language models (LLMs). The ToT technique
| is inspired by the human mind's approach for solving complex
| reasoning tasks through trial and error. In this process, the
| human mind explores the solution space through a tree-like
| thought process, allowing for backtracking when necessary. To
| implement ToT as a software system, we augment an LLM with
| additional modules including a prompter agent, a checker
| module, a memory module, and a ToT controller. In order to
| solve a given problem, these modules engage in a multi-round
| conversation with the LLM. The memory module records the
| conversation and state history of the problem solving process,
| which allows the system to backtrack to the previous steps of
| the thought-process and explore other directions from there. To
| verify the effectiveness of the proposed technique, we
| implemented a ToT-based solver for the Sudoku Puzzle.
| Experimental results show that the ToT framework can
| significantly increase the success rate of Sudoku puzzle
| solving. Our implementation of the ToT-based Sudoku solver is
| available on GitHub:
|
| I don't recall whether it was this paper or another that I
| read, but one of them talks about using the LLM's ability to
| also expose the probabilities of each token to measure the
| validity of particular completions. However, that isn't exposed
| in the OpenAI chat APIs (gpt-3.5-turbo / GPT-4), just the
| completions APIs (text-davinci-003 etc.).
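|
| For what it's worth, on the completions endpoint that looks
| roughly like this (a sketch against the 2023-era openai client,
| not the paper's code; the prompt is a placeholder):
        import openai  # logprobs is a completions-only option in this client

        # Score a candidate completion by its average token log-probability.
        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt="Given the current state of reasoning, propose the next step:",
            max_tokens=32,
            logprobs=1,      # return the log-probability of each generated token
            temperature=0.7,
        )
        choice = resp.choices[0]
        token_logprobs = choice.logprobs.token_logprobs   # one float per token
        avg_logprob = sum(token_logprobs) / len(token_logprobs)
        print(choice.text, avg_logprob)  # higher average => more "confident" completion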
| tyropita wrote:
| Documentation looks really neat and in-depth, always appreciated.
| Looks like you're missing a .gitignore file. Folders like
| __pycache__ don't need to be checked in.
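|
| For instance, a minimal .gitignore along these lines would
| cover it:
        __pycache__/
        *.pyc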
| rahimnathwani wrote:
| Here are the prompt templates from the main code:
| prompt = f"Given the current state of reasoning: '{state_text}',
| pessimitically evaluate its value as a float between 0 and 1
| based on it's potential to achieve {inital_prompt}"
| prompt = f"Write down your observations in format
| 'Observation:xxxx', then write down your thoughts in format
| 'Thoughts:xxxx Given the current state of reasoning:
| '{state_text}', generate {k} coherent solutions to achieve
| {state_text}" prompt = f"Given the current state of
| reasoning: '{state_text}', pessimistically evaluate its value as
| a float between 0 and 1 based on its potential to achieve
| {initial_prompt}" self.ReAct_prompt = "Write down your
| observations in format 'Observation:xxxx', then write down your
| thoughts in format 'Thoughts:xxxx'." prompt = f"Given
| the current state of reasoning: '{state_text}', generate {1}
| coherent thoughts to achieve the reasoning process: {state_text}"
| prompt = f"Given the current state of reasoning: '{state_text}',
| evaluate its value as a float between 0 and 1, become very
| pessimistic think of potential adverse risks on the probability
| of this state of reasoning achieveing {inital_prompt} and DO NOT
| RESPOND WITH ANYTHING ELSE: OTHER THAN AN FLOAT"
| prompt = f"Given the following states of reasoning, vote for the
| best state utilizing an scalar value
| 1-10:\n{states_text}\n\nVote, on the probability of this state of
| reasoning achieveing {inital_prompt} and become very pessimistic
| very NOTHING ELSE" self.ReAct_prompt =
| '''{{#assistant~}} {{gen 'Observation' temperature=0.5
| max_tokens=50}} {{~/assistant}}'''
|
| There are also some system prompts:
| https://github.com/kyegomez/tree-of-thoughts/blob/732791710e...
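|
| To see how templates like these fit together, here's a minimal
| sketch of the generate/evaluate/prune loop they imply (my own
| paraphrase; `llm` stands in for any chat-completion call and is
| not the repo's actual interface):
        # Hypothetical breadth-first Tree-of-Thoughts loop: generate k thoughts
        # per state, score each between 0 and 1, keep the best b states.
        def tree_of_thoughts(llm, initial_prompt, k=5, b=3, depth=3):
            frontier = [initial_prompt]
            for _ in range(depth):
                candidates = []
                for state in frontier:
                    for _ in range(k):
                        thought = llm(
                            f"Given the current state of reasoning: '{state}', "
                            f"generate 1 coherent thought to achieve {initial_prompt}")
                        score_text = llm(
                            f"Given the current state of reasoning: '{state}\n{thought}', "
                            f"pessimistically evaluate its value as a float between 0 and 1 "
                            f"based on its potential to achieve {initial_prompt}. "
                            f"Respond with only the float.")
                        try:
                            score = float(score_text.strip())
                        except ValueError:
                            score = 0.0
                        candidates.append((score, state + "\n" + thought))
                # Prune: keep only the b highest-scoring states for the next level.
                candidates.sort(key=lambda sc: sc[0], reverse=True)
                frontier = [s for _, s in candidates[:b]]
            return frontier[0] if frontier else None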
| sdwr wrote:
| That's that self-awareness shit! We superintelligent now.
|
| https://www.youtube.com/watch?v=BdSIngO0ivk
| rahimnathwani wrote:
| Yeah maybe we should hook this up to a chat interface, and
| let a human take the place of the LLM.
| sdwr wrote:
| That would be neat. Flip the script, have an AI manager
| instead of AI assistant. It could:
|
| - keep track of todo items
|
| - assist with progress
|
| - check in on mental + emotional state
|
| and down the road
|
| - keep track of state over time
|
| - give feedback/make observations
|
| The paradigm shift is having it contact us, instead of the
| other way around. The ToT model has 1 additional parameter
| on top of the LLM - probability of success. What would the
| parameters be for a more open-ended conversation?
| rahimnathwani wrote:
| "What would the parameters be for a more open-ended
| conversation"
|
| Engagement! Just like social media!
| rahimnathwani wrote:
| Another use of ideas from the same paper, but this time to
| produce lesson plans for an AI tutor:
|
| https://github.com/JushBJJ/Mr.-Ranedeer-AI-Tutor/tree/testin...
| raydiatian wrote:
| > This implementation of Tree of Thoughts is brought to you by
| Agora, Agora advances Humanity with open source SOTA Multi-
| Modality AI research! We plan on combating Humanity's grandest
| root problems like food insecurity, planetary insecurity, and
| disease, and hopefully death itself.
|
| Wow. Lick, don't sniff, the fresh paint.
| pixl97 wrote:
| >We plan on combating Humanity's grandest root problems like
| food insecurity, planetary insecurity, and disease, and
| hopefully death itself.
|
| If everyone is dead, you don't have to worry about death, or
| any of those other pesky hard to solve problems!
| 082349872349872 wrote:
| > _Eyes melt, skin explodes, everybody dead. So immoral,
| working on the thing can drive you mad. That's what happened
| to this friend of mine._ -- JFP
| ChrisAlexiuk wrote:
| https://youtu.be/bjnTy2TdmYw
|
| I went through this in a video using the paper's official code -
| and it worked fairly well!
|
| Definitely a great step forward in terms of reasoning tasks -
| even if it is an expensive step.
| emmanueloga_ wrote:
| Similar in concept to the magi supercomputer? :-p [1]
|
| 1: https://aminoapps.com/c/neon-genesis-
| evangelion/page/item/ma...
___________________________________________________________________
(page generated 2023-05-26 23:00 UTC)