[HN Gopher] Agentarium: Creating social simulations with AI Agents
       ___________________________________________________________________
        
       Agentarium: Creating social simulations with AI Agents
        
       Author : Thytu
       Score  : 71 points
       Date   : 2024-12-27 10:46 UTC (4 days ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | tetris11 wrote:
        | Looks fun, but also very OpenAI-dependent -- similar to AI
        | Town[0].
       | 
       | I really want to play with these, I just don't have the tokens. I
       | do have a decent graphics card though.
       | 
       | 0:https://github.com/a16z-infra/ai-town
        
         | Thytu wrote:
          | Thanks for the interest! I use aisuite (cf. link) to manage
          | which LLMs to use. You should be able to switch from one
          | provider to another quite easily (even Hugging Face if you
          | want). I don't know if aisuite supports local LLMs though --
          | might be a good thing to check.
         | 
         | aisuite : https://github.com/andrewyng/aisuite
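          | 
          | For reference, switching providers with aisuite looks roughly
          | like this (model strings are illustrative, not tested here):
          | 
          |     import aisuite as ai
          | 
          |     client = ai.Client()
          |     messages = [{"role": "user", "content": "Say hello."}]
          | 
          |     # Same call shape for every provider; only the model
          |     # string changes.
          |     for model in ["openai:gpt-4o",
          |                   "anthropic:claude-3-5-sonnet-20240620"]:
          |         response = client.chat.completions.create(
          |             model=model, messages=messages)
          |         print(response.choices[0].message.content)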
        
         | yard2010 wrote:
         | Maybe fork it and add ollama support :)
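          | 
          | In the meantime, since Ollama exposes an OpenAI-compatible
          | endpoint, a stopgap (untested sketch; the model name is
          | whatever you've pulled locally) is to point the OpenAI
          | client at it:
          | 
          |     from openai import OpenAI
          | 
          |     # Ollama serves an OpenAI-compatible API on localhost;
          |     # the api_key is required by the client but ignored.
          |     client = OpenAI(base_url="http://localhost:11434/v1",
          |                     api_key="ollama")
          |     response = client.chat.completions.create(
          |         model="llama3",
          |         messages=[{"role": "user", "content": "Hello"}],
          |     )
          |     print(response.choices[0].message.content)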
        
       | kivihiinlane wrote:
       | Would be fun to have this for Civ AI rulers
        
       | cjonas wrote:
        | What makes these "agents"? From what I can tell, they don't
        | perform any external actions. I would call these chatbots...
        
         | qqqult wrote:
         | "AI agent" -> LLM with function calling
         | 
         | is the new
         | 
         | "AI" -> LLM
        
         | petesergeant wrote:
          | I don't (at very brief glance) see either agentic or workflow
          | code there; I think it's up to the users of the library to
          | bring their own agents?
          | 
          | (this being the only sensible definition I've seen of a term
          | that has very definitely been a "fund my startup" marketing
          | term until now:
          | https://www.anthropic.com/research/building-effective-agents)
        
           | cjonas wrote:
            | Ya, these are the definitions I'd use.
            | 
            | While all "Agentic" systems will likely use some form of
            | function calling, not all function calling is "Agentic".
            | Most implementations are more "Workflows" than "Agents".
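            | 
            | Concretely, the function-calling building block is just
            | this kind of thing (rough OpenAI-style sketch; the tool
            | schema is illustrative):
            | 
            |     from openai import OpenAI
            | 
            |     client = OpenAI()
            |     tools = [{
            |         "type": "function",
            |         "function": {
            |             "name": "get_weather",
            |             "description": "Current weather for a city",
            |             "parameters": {
            |                 "type": "object",
            |                 "properties": {"city": {"type": "string"}},
            |                 "required": ["city"],
            |             },
            |         },
            |     }]
            |     response = client.chat.completions.create(
            |         model="gpt-4o",
            |         messages=[{"role": "user",
            |                    "content": "Weather in Paris?"}],
            |         tools=tools,
            |     )
            |     # The model may answer directly or request a tool
            |     # call; an "agent" loop would execute the call and
            |     # feed the result back.
            |     print(response.choices[0].message.tool_calls)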
        
         | Xen9 wrote:
          | You would need a mathematical theory of embedded agency to
          | dig yourself out of the terminological hellhole.
        
           | ratedgene wrote:
           | can you point me to some materials that are easy to digest?
        
             | sadboi31 wrote:
              | "Easy to digest" to me is a matter of process plus
              | order. Just because you can boof wine/AI and feel a
              | greater high than if you just took a sip doesn't mean
              | you should. I'd start by setting the table and throwing
              | out your last meal. Start over in the 60s and work
              | forward.
              | 
              | Stafford Beer on cybernetics (also worth mentioning:
              | Norbert Wiener):
              | https://www.youtube.com/watch?v=JJ6orMfmorg
              | 
              | Lots of other people start w/ other things, but I'm a
              | mgmt-minded person, so a social engineering + psychology
              | + anthropology oriented lens has always been my anchor.
              | 
              | My first real intro to math where everything clicked was
              | w/ primitive graph theory as it were 2000+ years ago.
              | From there, algebra, geometry, trig, calc, etc. started
              | clicking.
        
         | Thytu wrote:
          | By default the agents are pretty limited in actions (only
          | talking and thinking).
          | 
          | Currently it's up to the user to add their own actions; you
          | can find an example where I gave the agents access to
          | ChatGPT (cf. link below).
          | 
          | Obviously, some default actions will be added in the future :D
         | 
         | link to example:
         | https://github.com/Thytu/Agentarium/blob/main/examples/3_add...
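          | 
          | Roughly the shape of it (sketch only -- the names here are
          | assumptions; see the linked example for the exact API):
          | 
          |     from agentarium import Agent, Action  # path assumed
          | 
          |     def ask_chatgpt(agent, prompt):
          |         # Custom action: forward the prompt to an external
          |         # LLM and return its reply to the agent.
          |         ...  # call your provider of choice here
          | 
          |     alice = Agent.create_agent(name="Alice")  # assumed
          |     alice.add_action(Action(                  # assumed
          |         name="CHATGPT",
          |         description="Ask ChatGPT a question",
          |         function=ask_chatgpt,
          |     ))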
        
         | JohnMakin wrote:
          | > What makes these "agents"?
         | 
          | As most people understand them, a lot of marketing and
          | imagination. There's some fairly dense theory on the
          | underlying concept that is probably inaccessible to someone
          | without a math degree, and much of this research happened
          | well after my academic days, so I cannot comment on a deep
          | level. But I'm extremely skeptical of its attempted
          | implementation in the current market, particularly when
          | language models are at the core of it. From what I
          | understand, an agentic system can exist entirely without
          | LLMs and all the Rube Goldberg machinations behind giving
          | them the appearance of working. However, this is the course
          | that every single tech company has gone all in on, so we'll
          | see how it goes. I suspect that LLMs will be discarded in
          | favor of something much better in the mid to far future, but
          | the fact that no one can currently say what that is, or even
          | what it would look like, is a little concerning when such
          | large bets are being made that it will just happen
          | inevitably on near-future timelines.
        
       | deadbabe wrote:
        | These "AI Agent" type toys seem neat for a little bit, but I
        | quickly find them pointless.
        | 
        | I've found myself more entertained by far simpler AI using
        | complex behavior trees or utility-AI mechanics. Their emergent
        | behaviors are just as good at creating stories in my head for
        | what is happening in their world, even if the characters don't
        | engage in actual conversation. The ability of chatbots to
        | speak naturally with each other seems to be of little value
        | unless you like to eavesdrop on every single conversation in
        | every interaction two characters have. You could accomplish
        | the same results by just passing raw numeric values between
        | two bots that ultimately change their internal mental states.
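        | 
        | Toy illustration of the numeric-exchange idea (the names and
        | update rule are made up; it's just the gist):
        | 
        |     import random
        | 
        |     class Bot:
        |         def __init__(self):
        |             self.mood = 0.0  # internal state in [-1, 1]
        | 
        |         def interact(self, other):
        |             # No dialogue: pass a single number and let it
        |             # nudge the other bot's internal state.
        |             signal = self.mood + random.uniform(-0.2, 0.2)
        |             other.mood = max(-1.0, min(1.0,
        |                 0.8 * other.mood + 0.2 * signal))
        | 
        |     a, b = Bot(), Bot()
        |     for _ in range(10):
        |         a.interact(b)
        |         b.interact(a)
        |     print(a.mood, b.mood)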
        
         | dr_dshiv wrote:
         | Can you give an example?
        
           | deadbabe wrote:
           | The Sims
        
         | Thytu wrote:
         | The goal isn't entertainment but practical simulation. I'm
         | building tools to automate A/B testing, model marketing
         | campaign responses, and optimize content - all by simulating
         | human behavior at scale. Where behavior trees fall short,
         | language models can capture nuanced user reactions that can't
         | be reduced to simple metrics.
         | 
         | (if I can make it realistic enough lol)
        
           | deadbabe wrote:
           | Why do you think modeling a bunch of LLM characters and
           | watching their interactions is somehow going to yield a
           | substantially better result than asking an LLM to output
           | content specifically tailored for a particular audience?
           | 
            | If the answer you seek can be observed by watching LLMs
            | interact at scale, then the answer is already inside the
            | model in the first place.
        
             | Thytu wrote:
              | I simply tested it; the results are quite different,
              | tbh. Now the big question is "why".
              | 
              | My first thought would be that it's kind of how we,
              | humans, behave. It feels a bit like the equivalent of
              | the Mom Test: if you ask someone "do you like this?",
              | they're biased toward saying "yes" -- and the same goes
              | for LLMs (a dummy example, but you get the idea).
              | 
              | Anyway, I could be wrong, I could be right; ATM no one
              | knows, not even me.
        
               | deadbabe wrote:
                | I think what you're doing there is averaging out the
                | output of many LLMs, and it gives somewhat different
                | results, but there's no reason you couldn't arrive at
                | those same results by just complicating the original
                | prompt manually.
        
       | binary132 wrote:
        | All AI slop is nothing but market hype and VC exploitation.
        
       | owenpalmer wrote:
        | How is this different from just using OpenAI's API? Sure, it's
        | a slightly different interface, but what's the advantage? To
        | me, an agential framework would have the agents interacting
        | with the external world.
        
       | vunderba wrote:
        | It sounds very similar to AI Town -- there have been a couple
        | of attempts at building virtual playgrounds for chatbots to
        | interact with each other and vote on solutions if they're
        | trying to come to a consensus. In that respect, it's probably
        | similar to how mixture-of-experts works, just modeled at a
        | higher level.
       | 
       | https://github.com/a16z-infra/ai-town
       | 
        | The big thing that you want, though, is diversity across the
        | AIs, and I'm not convinced that altering temperature / system
        | context prompt / optional backing RAG represents sufficient
        | variety from virtual bot to virtual bot.
       | 
       | Ideally, you would want to throw as many different LLMs (Llama,
       | Mistral, Qwen, etc.) into the mix as possible, but hardware
       | constraints make this borderline impossible.
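        | 
        | (The hardware constraint applies to self-hosting; over hosted
        | APIs the wiring for a heterogeneous mix is cheap, e.g. with a
        | router like aisuite. Sketch below -- the model list and the
        | temperature pass-through are assumptions:)
        | 
        |     import aisuite as ai
        | 
        |     client = ai.Client()
        |     # One persona per (model, temperature, system prompt).
        |     personas = [
        |         ("openai:gpt-4o-mini", 0.3,
        |          "You are a cautious skeptic."),
        |         ("anthropic:claude-3-5-sonnet-20240620", 0.9,
        |          "You are a reckless optimist."),
        |     ]
        |     for model, temp, system in personas:
        |         reply = client.chat.completions.create(
        |             model=model,
        |             temperature=temp,  # pass-through assumed
        |             messages=[
        |                 {"role": "system", "content": system},
        |                 {"role": "user",
        |                  "content": "Vote yes or no on the proposal."},
        |             ],
        |         )
        |         print(model, "->", reply.choices[0].message.content)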
        
       | BrandiATMuhkuh wrote:
        | This is really cool. I think agent-based simulations with LLMs
        | are really promising. Just last week I was talking with a
        | professor of economics about the use of such simulations for
        | their research.
        | 
        | Some background: for my PhD thesis, about 8 years ago, I
        | simulated how voice agents can influence humans. I did that
        | with an agent-based simulation. Before running the simulation,
        | I gathered actual influence values by doing experiments with
        | human participants.
        | 
        | The limitation, however, was that I basically had only one
        | dimension of influence.
        | 
        | With LLMs, we can create all sorts of actors and play out
        | scenarios, like introducing new legislation and seeing how
        | different segments of the population react.
        
       ___________________________________________________________________
       (page generated 2024-12-31 23:01 UTC)