[HN Gopher] Adaptive social networks promote the wisdom of crowds
       ___________________________________________________________________
        
       Adaptive social networks promote the wisdom of crowds
        
       Author : tosh
       Score  : 48 points
       Date   : 2021-07-09 11:05 UTC (11 hours ago)
        
 (HTM) web link (www.pnas.org)
 (TXT) w3m dump (www.pnas.org)
        
       | lonk11 wrote:
       | > In this paper, we test the hypothesis that adaptive influence
       | networks may be central to collective human intelligence with two
       | preconditions: feedback and network plasticity
       | 
        | In the paper, the participants of the experiment manually
        | picked whom they wanted to follow. But what if the system
        | connected them automatically to high signal-to-noise
        | individuals based on feedback alone?
       | 
       | I've been working on something like this with my hobby project
       | https://linklonk.com - an information network where the
       | connections between you and other users are determined by your
       | ratings of content.
       | 
        | When you upvote an article, you connect to the other users who
        | upvoted that article. When you downvote, your connection to
        | those who upvoted it becomes weaker. That way the strength of
        | your connection to other users captures their signal-to-noise
        | ratio for you.
       | 
        | The stronger your connection to someone, the higher their
        | other upvoted items are ranked on the "For you" page.
       | 
       | For example, I upvoted this paper on LinkLonk:
       | https://linklonk.com/item/6534389451373608960 If you also upvote
       | it then you will get connected to me and will see more of my
       | recommendations on the main page. The next user who upvotes it
       | will connect to me and to you, etc.
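
A minimal sketch of the update rule described above, in Python (the unit weights and names are made up for illustration; this is not LinkLonk's actual code):

```python
from collections import defaultdict

# Toy model of the rating-driven graph: connection strengths between
# users, updated by upvotes and downvotes (illustrative unit weights).
weights = defaultdict(float)   # (reader, curator) -> connection strength
upvoters = defaultdict(set)    # item -> users who upvoted it

def upvote(user, item):
    # Connect to the other users who already upvoted this item.
    for other in upvoters[item]:
        weights[(user, other)] += 1.0
    upvoters[item].add(user)

def downvote(user, item):
    # Weaken your connection to everyone who upvoted this item.
    for other in upvoters[item]:
        weights[(user, other)] = max(0.0, weights[(user, other)] - 1.0)

def feed_score(user, item):
    # "For you" ranking: sum of your connection strengths to the
    # users who upvoted the item.
    return sum(weights[(user, other)] for other in upvoters[item])
```

In this sketch, once you upvote the same article as me, my other upvoted items start scoring above zero on your feed, and a later downvote of one of my picks cancels that out.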
       | 
        | Since you know that your content ratings have a direct effect
        | on what content your future self will see, you are incentivized
        | to consider whether each piece of content you just consumed was
        | truly worth your time. This kind of retrospective thinking is
        | missing when we hit upvote/retweet/like in existing social
        | systems.
       | 
       | My project is in a very early stage and suggestions/ideas are
       | welcome.
        
         | high_byte wrote:
          | Sounds polarizing. I wouldn't decrease the connection upon
          | downvoting, but I would increase connections with others who
          | downvoted. I would only decrease a connection over time
          | without similar ratings.
        
           | lonk11 wrote:
           | The purpose of the downvote is for you to say what content
           | wasted your time. Those who brought that content to you
           | deserve to lose your attention so they do not waste your time
           | in the future, do they not? That's why LinkLonk decreases
           | your connection to those who upvoted it. It also displays a
           | popup saying "You will see less content from N users and M
           | feeds that upvoted this" to explain how the downvote button
           | works.
           | 
            | LinkLonk also increases your "downvote connection" to others
            | who downvoted that item, as you are suggesting. A "downvote
            | connection" is how much weight the other person's downvotes
            | have for you. That is, it captures how useful their past
            | downvotes were for you and how much their future ones can be
            | trusted. So there are two kinds of connections:
           | 
           | - Upvote connection - gives others ability to promote/curate
           | good content for you.
           | 
           | - Downvote connection - how much others can bury/moderate bad
           | content.
           | 
            | And as you are also suggesting, the connection is decreased
            | over time without similar ratings. Each time someone you are
            | connected to upvotes something, your connection to them
            | becomes slightly weaker. So if you ignore content from a
            | user/RSS feed, it will have a lower ranking for you over
            | time. So in practice the downvote button should not be used
            | much at all.
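
A toy model of the two connection types and the decay described above (the update rules and the decay constant are hypothetical, not the actual implementation):

```python
DECAY = 0.99  # hypothetical per-upvote decay factor

class Connection:
    """Toy model of the two weights one user holds toward another."""

    def __init__(self):
        self.upvote_weight = 0.0    # how strongly their upvotes promote content for you
        self.downvote_weight = 0.0  # how strongly their downvotes bury content for you

    def they_upvoted_and_you_ignored(self):
        # Every recommendation of theirs that you ignore slightly
        # weakens the connection, so inactive connections fade over time.
        self.upvote_weight *= DECAY

    def you_upvoted_their_pick(self):
        self.upvote_weight += 1.0

    def you_downvoted_their_pick(self):
        self.upvote_weight = max(0.0, self.upvote_weight - 1.0)

    def you_agreed_with_their_downvote(self):
        self.downvote_weight += 1.0
```

With multiplicative decay, a curator you stop agreeing with loses influence gradually even if you never press downvote, which matches the point that the downvote button should rarely be needed.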
        
         | Palmik wrote:
          | Doesn't this create bubbles of like-minded people? Do you
          | believe that to be desirable?
        
           | forgotmypw17 wrote:
           | Not necessarily bubbles, could be just supportive,
           | encouraging communities.
        
           | lonk11 wrote:
           | You are right, LinkLonk is a filter bubble. The difference
           | from other systems (e.g., algorithmic feeds powered by
           | machine learning that optimize for "engagement") that exhibit
           | the filter bubble dynamics is that LinkLonk puts all of the
           | control into the hands of the user. The user is responsible
           | for the content they upvote which directly determines who
           | they get content from in the future.
           | 
           | In a sense this is similar to how users of RSS feed readers
           | control which feeds they subscribe to. They are responsible
           | for the content they consume. What LinkLonk adds is a
           | transparent layer of automation that helps you
           | subscribe/unsubscribe based on your content ratings.
           | 
            | My hope is that LinkLonk will help people get more informed,
            | but I cannot be sure. The project is a live experiment to
            | find out if this is the right system of incentives.
        
         | elevaet wrote:
         | This sounds like a really interesting experiment. I hope you
         | share your results here on HN.
         | 
          | This approach reminds me a bit of the saying in neuroscience:
          | "neurons that fire together, wire together".
        
           | lonk11 wrote:
           | You can think of every user as a neuron where upvotes on the
           | same items strengthen their connections.
           | 
           | Though there is one important bit of asymmetry: you connect
           | to those who upvoted that item *before* you. That way people
           | who recognize useful information earlier earn more trust. In
           | a sense this is a "proof of work" - to recognize valuable
           | content before it becomes popular.
           | 
           | My hope is that this asymmetrical nature of connections will
           | get less informed people to connect to more informed people.
           | Which is the opposite of the echo-chamber effect - when less
           | informed people are connected to similarly less informed
           | peers.
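
The before/after asymmetry can be made concrete with a small sketch (toy weights, for illustration only):

```python
def record_upvote(upvote_order, trust, new_user):
    """Connect new_user to everyone who upvoted the item *before* them.

    Earlier upvoters gain no trust toward new_user, so piling onto
    an already-popular item earns no incoming connections.
    """
    for earlier in upvote_order:
        key = (new_user, earlier)
        trust[key] = trust.get(key, 0.0) + 1.0
    upvote_order.append(new_user)

order, trust = [], {}
for user in ["early_bird", "second", "latecomer"]:
    record_upvote(order, trust, user)
# early_bird now has incoming trust from both later upvoters,
# while the later upvoters earn nothing from early_bird.
```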
           | 
            | And yes, I'm slowly preparing to do a "Show HN" for LinkLonk.
            | But I probably need to grow the number of active users a bit
            | before I do that, otherwise the "Show HN" will bring a lot
            | of clicks that will just bounce.
        
             | elevaet wrote:
             | So let me get this straight - you become connected to those
             | who upvoted the item _before_ you, but not to those who
             | upvoted _after_ , right? So this incentivizes the avant
             | garde, rather than the promoters of stuff that's already
             | going viral. Very interesting.
        
               | lonk11 wrote:
                | Correct. This also provides some protection from gaming
                | of the system. You can't simply upvote a bunch of popular
                | items to get the people who already upvoted them
                | connected to you.
               | 
                | You have to be good at predicting what people will like
                | in order to get them connected to you. And that is what a
                | good curator does.
        
               | elevaet wrote:
                | I wonder how this dynamic would affect virality in
                | relation to truthfulness.
               | 
               | As in, will this dynamic tend to reward the spread of
               | "edgy" fake news more than current network paradigms
               | already do, or will it tend to slow its spread, or will
               | it be neutral on that axis? Seems like a hypothesis that
               | would need to be tested, but maybe you have a hunch or
               | insight on the matter.
        
       | guscost wrote:
       | > Each task consisted of estimating the correlation of a scatter
       | plot
        
       | elevaet wrote:
       | This is great. But how do we build social media networks that
       | reinforce centrality/influence of accurate truth-tellers, and
       | penalize sensationalism, extremism etc.?
       | 
       | This seems like one of the big hard problems of our era. If we
       | can solve it, maybe this is a phase transition humanity is going
       | through where we begin to operate on a higher level of
       | complexity, similar to when life transitioned to multicellular.
        
         | hsndmoose wrote:
         | Would this not require quantifying real, objective truth? How
         | does one compute truth without relying on human input? (Which
         | instead trends towards truthiness/the "feeling" of whether
         | something is true.)
         | 
         | I am not being dismissive, I genuinely would like to know.
        
           | theropost wrote:
            | It's a very big question - we would almost need some level
            | of detail about the commenter, to understand their expertise,
            | background, experience, abilities, etc. - but once again,
            | how would you quantify it? For all topics, not everyone's
            | voice should be considered equal.
        
           | thih9 wrote:
           | I'd guess it depends on the topic and scenario.
           | 
           | > Would this not require quantifying real, objective truth?
           | 
           | Could you give an example?
        
             | hsndmoose wrote:
             | I cannot. It's a philosophical question of what is truth. I
             | don't know if it is possible for a human to obtain 100%
             | objective truth about something.
        
           | prox wrote:
           | It's called a library in my idea of it. The sum of human
           | knowledge, curated by experts in every field. I don't think
           | we can compute that last bit. We may not have to.
        
           | elevaet wrote:
           | I think this is one of "the big questions" right now.
           | 
           | Philosophy tells us that you can't compute truth without
           | relying on axioms. But computer science tells us that if we
           | accept basic axioms, the computation of truth quickly becomes
           | orders-of-magnitude too complex to compute.
           | 
            | I suspect that this all leads us to needing to rely on
            | coarse human input as "axioms"... which of course leads to
            | the question of which humans we rely on as stalwarts of the
            | truth. It's a bit of a chicken-and-egg problem.
           | 
           | My hope is that studies like these will tease out the nuances
           | of networks so that we can engineer networks to nudge the
           | nodes of better truth-telling to more centrality in the
           | networks, and that gradually we'll master the art of building
           | intelligent networks. After all, biology did it with the
           | human brain.
        
             | acituan wrote:
             | > Philosophy tells us that you can't compute truth without
             | relying on axioms.
             | 
              | Philosophy tells us no such thing. It is not the province
              | of philosophy to give us the final word on what is what,
              | and without grounding it in any empirical exploration,
              | asserting such a claim is mere dogmatism.
             | 
              | The computationalist model of "truth" (by which I think
              | you mean reality) is dying. Embodied-embedded cognition
              | offers an alternative in which an intelligent system has
              | to be deeply embedded within all the other networks it
              | interacts with, and its adaptivity and constraints define
              | it more than anything else. There is no making an
              | intelligent network in a test tube (talking about general
              | intelligence).
             | 
             | > After all, biology did it with the human brain.
             | 
              | Biology might have put the required machinery in place,
              | but machinery is by no means a guarantee that the result
              | will be either intelligent or adaptive. You _could_
              | "engineer" your own network that is your body-brain to get
              | better at conforming to reality, which is called self-
              | transcendence and cultivating wisdom, and arguably the
              | same principles would work for our social networks,
              | artificial networks and us alike.
             | 
              | But going back to the notion of embeddedness, can a social
              | network that will ultimately aim to conform to the norm of
              | _making more money_ be wiser? Can a wiser social network
              | really out-survive a dumber one? Aren't both going to be
              | ultimately embedded in the collective intelligence that is
              | our economy? Both will therefore be constrained by the
              | limits of the intelligence/wisdom of the economy, and
              | unless a group of benevolent rich people implements the
              | engineered wiser social network, gifts it to humanity and
              | gets humanity to actually use it, there is no such place,
              | i.e., it is a utopia.
        
           | nosuchthing wrote:
           | No - consider it mostly a design and infrastructure problem.
           | 
            | When looking at social media, it's part public forum that
            | needs some type of discovery/filter mechanism, and part a
            | tool for individual users and communities to communicate and
            | collaborate.
           | 
           | The barrier in current social media networks is largely
           | skewed towards manipulative design that optimizes towards
           | datamining and addictive gamified systems and interfaces.
           | 
            | Sure, you could try to build a social network for open
            | science and peer review of research projects, but the bar is
            | set so low right now that any interface improvements that
            | facilitate a more comprehensive search/discover/filter
            | system on datasets would be a massive step forward.
           | 
           | Information needs to be discoverable, but people need to be
           | free from propaganda.
        
           | meowkit wrote:
            | You can't quantify or even know "objective truth". We can
            | get really close for some things, but knowing objective
            | truth is akin to being a god. At the end of the day,
            | everything is a model relying on some axioms.
           | 
            | Intersubjective truth, however, can be reached and is what
            | we rely on most (a dollar bill is a fancy piece of cloth,
            | but we all agree it's worth a dollar). It is reached through
            | consensus-making.
           | 
            | Gathering a consensus is traditionally done through
            | government or hierarchy, ultimately treating a single
            | human's or a single group's input as "truth". This method
            | has steadily disintegrated as communication tech has
            | improved (printing press -> mobile internet).
           | 
           | So the solution, to me, is to create consensus systems that
           | rely on the input of many - use the law of large numbers,
           | economic incentives, and the kaleidoscope of subjective
           | truths to reach the most accurate objective truth we can.
        
             | simplify wrote:
             | It's true that society uses consensus as a proxy for truth.
             | Even when scientists make a new discovery, it isn't
             | considered "truth" until they convince the community -
             | sometimes even taking decades!
             | 
             | Sadly, this consensus can be manufactured by those in
             | power. Censorship helps to a surprising degree, for
             | example. Social media sock puppets, astroturfing, bribery,
             | the list goes on.
             | 
             | How do we fight against manufactured consent? Is it even
             | possible at this point?
        
               | elevaet wrote:
               | Yeah this is the crux of it isn't it? And it's not just
               | the problem of manufactured consent either, there is also
               | the problem of mistaken consent that grows organically
               | out of human frailties like our cognitive biases and
               | appetites for drama.
        
               | hsndmoose wrote:
               | Yes thank you. Between this and your other comment to me
               | in this thread I think you've really gotten to the heart
               | of it. I appreciate you putting into concise words what
               | was rumbling around in my head when I first asked the
               | question.
        
               | elevaet wrote:
               | It's a joy to hear that someone else has made sense out
               | of what has been rumbling around in my own head a lot
               | lately.
        
           | kradroy wrote:
           | Maybe it's not necessary to define truth for this.
           | Considering metrics that you want to influence might be a
           | better way - hate crime arrests in locales, negative/divisive
           | message content, donations/volunteering for positive causes,
           | etc. But I'm a pessimist and I think moving these metrics in
           | the right direction would adversely affect the $$$ metric
           | that shareholders care about, so it's not going to happen.
        
             | swiley wrote:
              | So you're depending on money and the law to define truth,
              | then.
        
         | war1025 wrote:
         | > where we begin to operate on a higher level of complexity,
         | similar to when life transitioned to multicellular.
         | 
         | I've thought about this exact thing before and something to
         | keep in mind is that there will be a split where part of the
         | group consents to being absorbed into the mega-organism, and
         | part will stay individuals.
         | 
         | Humanity won't move in unison. If it happens, part of the group
         | will stay behind, just the same as we still have single celled
         | organisms.
        
         | gorwell wrote:
          | And a way to mitigate or dissolve Twitter mobs. I imagine
          | pile-ons could be identified?
        
         | justshowpost wrote:
         | > networks that reinforce centrality/influence of accurate
         | truth-tellers, and penalize sensationalism, extremism etc.
         | 
         | I'm pretty sure CCP already built several such networks for us.
        
         | zwaps wrote:
          | The study itself is a bit "PNAS-y". In fact, a lot of work
          | studies information aggregation in different kinds of settings.
         | 
          | As you, I think, suspect, things can also easily go the other
          | way. It is well known that extensive information / interaction
          | networks can decrease diversity (a general property of
          | averaging mechanisms in network settings that holds even in
          | games on networks). A decrease in diversity in information
          | aggregation is often detrimental (for example through
          | correlation of errors, canceling out one mechanism of the
          | wisdom of crowds, or through conformism).
         | 
          | Further, wisdom of crowds only exists (relative to individual
          | guesses) if, in growing networks, the opinions are not
          | controlled by influential groups or echo chambers (Jackson did
          | work on this). This, of course, is what ends up happening a
          | lot, in part due to the aforementioned factors.
         | 
         | Since the proposed mechanism relies on weighting accurate
         | individuals higher, it is even more susceptible to such biases
         | than just wisdom of crowds in general.
        
           | AstralStorm wrote:
           | Funny. The rating mechanism makes this equivalent to a
           | reputation based prediction market.
           | 
           | These can also fail and are gameable.
           | 
            | The key assumption is that people form social networks
            | according to a DeGroot process, which is not actually true.
            | People tend to maximize objective gain, not
            | information/influence gain - usually the objective is not
            | stated and there are various ones, from knowledge,
            | entertainment, desirability, clout/influence and power, to
            | financial ones. Unfortunately, this does result in a
            | divergent scenario.
           | 
            | As for feedback, current social networks have a simplistic
            | "like" system that only weights engagement in a hidden
            | manner, plus generic desirability points, in a partial-
            | feedback scenario. Getting full feedback would require
            | someone to be able to read the likes of every user, which is
            | not feasible. What could be feasible is guessing how
            | "likeable" a post is - i.e., gaming the system.
           | 
            | That said, the paper is of rather high quality considering
            | its limitations. It is interesting that essentially
            | listening to a moderate number of curated but not isolated
            | experts had the best results. (Top 5, interestingly enough,
            | this being ~33% of the group.)
           | 
           | The trick is in identifying these top performers in a real
           | multiobjective scenario, and whether the selection strategy
           | still works.
        
             | sysadm1n wrote:
             | > These can also fail and are gameable
             | 
              | Gameable perhaps, but in proper meritocratic[0] systems,
              | voting rings etc. would be detected and the perpetrators
              | hopefully banned, losing the ability to participate again
              | at a later stage and gaining an incentive to play fairly
              | next time.
             | 
             | [0] https://en.wikipedia.org/wiki/Meritocracy
        
             | forgotmypw17 wrote:
             | >"Trying to get full feedback would require someone to be
             | able to read likes of every user, not feasible."
             | 
             | I think that with a fully transparent system this is not
             | only feasible, but easy.
        
       ___________________________________________________________________
       (page generated 2021-07-09 23:02 UTC)