[HN Gopher] Propaganda or Science: Open-Source AI and Bioterrorism Risk
___________________________________________________________________
Propaganda or Science: Open-Source AI and Bioterrorism Risk
Author : 1a3orn
Score : 61 points
Date : 2023-11-02 18:27 UTC (4 hours ago)
(HTM) web link (1a3orn.com)
(TXT) w3m dump (1a3orn.com)
| artninja1988 wrote:
| An extremely impressive takedown of biorisk papers
| ethanbond wrote:
| Idk, writing off e.g. anthrax risk because the 2001 attacks
| only killed 5 people is pretty silly. There's good reason to
| believe those attacks weren't really _intended_ to kill people,
| and they certainly weren't intended to kill a large number of
| people.
|
| This fact also casts some doubt on the relevance of the Montague
| paper, which says that a bio agent's ability to spread
| is so incredibly important. Yes, the ability to spread does
| amplify the risk _enormously_ for obvious reasons, but there
| are plenty of non-spreading agents that you can do huge, huge
| amounts of damage with, including anthrax when used
| "appropriately."
| j45 wrote:
| The slippery slope is that anything becomes dangerous once
| someone wants it to be, and we slide into an ever-larger nebula
| of safety for the many and control for the few, which is a
| different kind of danger.
|
| It will be interesting to see where this attempt to separate the
| interpretations of fear from the interpretations of fearmongering
| (or not) leads.
|
| Maybe AGI ends up disagreeing with all the manipulators and
| becomes something for the masses.
| slowmovintarget wrote:
| Quoting the article's own tl;dr:
|
| > I examined all the biorisk-relevant citations from a policy
| paper arguing that we should ban powerful open source LLMs.
|
| > None of them provide good evidence for the paper's conclusion.
| The best of the set is evidence from statements from Anthropic --
| which rest upon data that no one outside of Anthropic can even
| see, and on Anthropic's interpretation of that data. The rest of
| the evidence cited in this paper ultimately rests on a single
| extremely questionable "experiment" without a control group.
|
| > In all, citations in the paper provide an illusion of evidence
| ("look at all these citations") rather than actual evidence
| ("these experiments are how we know open source LLMs are
| dangerous and could contribute to biorisk").
|
| > A recent further paper on this topic (published after I had
| started writing this review) continues this pattern of being more
| advocacy than science.
|
| > Almost all the bad papers that I look at are funded by Open
| Philanthropy. If Open Philanthropy cares about truth, then they
| should stop burning the epistemic commons by funding "research"
| that is always going to give the same result no matter the state
| of the world.
|
| The rest of the paper supports this thesis with... wait for it...
| evidence!
| AnimalMuppet wrote:
| "burning the epistemic commons". Beautiful phrase. I may steal
| it.
| artninja1988 wrote:
| > Almost all the bad papers that I look at are funded by Open
| Philanthropy. If Open Philanthropy cares about truth, then they
| should stop burning the epistemic commons by funding "research"
| that is always going to give the same result no matter the
| state of the world.
|
| I see they are in the effective altruism space. Something about
| them seems extremely shady to me...
| matheusmoreira wrote:
| > policy paper arguing that we should ban powerful open source
| LLMs
|
| This is a corporate lobbying effort, right? I simply can't
| believe that anyone else would actually _want_ the power of
| LLMs to be concentrated in the hands of trillionaire
| corporations.
| MeImCounting wrote:
| The problem is that this type of analysis and evidence does not
| appeal to the same primal instinct that made The Terminator such
| a successful franchise at the box office and in pop culture. It
| is way more fun to entertain ideas of paperclip optimizers and
| emotionally manipulative super-intelligent sociopaths than to
| actually reason in good faith. The endless comparisons to nuclear
| bombs or imagined super-viruses are an inevitable result of
| people having lived through the rampant propaganda of the Cold
| War.
|
| The idea of just another technology controlled exclusively by
| mega-corps in a world where a majority of the most powerful
| technologies are controlled by those corps doesn't seem that bad
| in comparison with nuclear annihilation. This is missing the
| fundamental truth of what AI is and therefore missing its most
| direct comparison. At risk of repeating the same arguments seen
| elsewhere on HN and other sites: cryptology is definitely the
| closest comparison.
|
| Both AI and cryptography are incredibly clever applications of
| fundamental mathematics. Both are technologies that are
| absolutely disruptive to certain monopolies or methods of
| societal control. Both can be replicated by anyone who
| understands the basic principles and throws enough resources at
| them.
|
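| To make that concrete, here is a toy sketch (purely illustrative,
| not anyone's production code): textbook RSA with tiny parameters
| and no padding, showing how the core of a public-key scheme is a
| few lines of arithmetic once you know the principles.
|
|     def egcd(a, b):
|         # Extended Euclid: returns (g, x, y) with a*x + b*y == g
|         if b == 0:
|             return a, 1, 0
|         g, x, y = egcd(b, a % b)
|         return g, y, x - (a // b) * y
|
|     p, q = 61, 53            # two (tiny) primes
|     n = p * q                # public modulus
|     phi = (p - 1) * (q - 1)  # totient of n
|     e = 17                   # public exponent, coprime to phi
|     _, d, _ = egcd(e, phi)   # private exponent: e*d == 1 (mod phi)
|     d %= phi                 # normalize d into [0, phi)
|
|     msg = 42                         # any message < n
|     cipher = pow(msg, e, n)          # encrypt with public key (e, n)
|     assert pow(cipher, d, n) == msg  # decrypt with private key d
|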
| I think we are going to watch a similar thing happen: many
| concerted attempts to compromise the technology itself, or its
| availability, until the halls of power end up targeting
| endpoints instead of the protocol (the architecture) itself. The
| users of a given technology are where regulation ought to (and I
| believe will) lie in the end.
___________________________________________________________________
(page generated 2023-11-02 23:00 UTC)