Post ATBc7UwaeJD7WLMDWi by mewmew@blob.cat
(DIR) Post #ATBaIhAFM4QEhLxMno by icedquinn@blob.cat
2023-03-01T22:42:15.096951Z
6 likes, 2 repeats
- People use internet to shitpost
- Robot is trained on internet so corpos can get out of paying for data.
- Robot learns how to effectually shitpost
- Humans shitpost with robot who shitposts in return
- Humans post about this on the internet
- Next generation of robots are trained on how to shitpost alongside robots
- WEFbros start posting anger about how their robots are being taught wrongthink
:blobcathyper2: the shitposting singularity is now
(DIR) Post #ATBaVhIGBZk8rbu9C4 by icedquinn@blob.cat
2023-03-01T22:44:36.222418Z
0 likes, 0 repeats
we always thought the AI would end humanity by becoming incredibly intelligent and self modifying beyond comprehension. in reality, AI is absolutely racist AF but is too incoherent so it keeps mixing up the stereotypes.
(DIR) Post #ATBaq6CfHVuNua12bA by mewmew@blob.cat
2023-03-01T22:48:19.400701Z
1 likes, 1 repeats
@icedquinn AI is a bit of a mindfuck for most people because it isn't doing what most people think it's doing, but what most people think it's doing is close enough to the output to make sense most of the time. But edge cases totally blow that up.
(DIR) Post #ATBaqsiYuDwaNPJByq by thendrix@social.hendrixgames.com
2023-03-01T22:48:27.044406Z
0 likes, 0 repeats
The best part is that it's going to be worse than a double-edged sword for quite a while. More like a double-edged bare blade. Let's see how many corpos cut off their thumbs trying it out.
(DIR) Post #ATBav7UptwEbMzuRbE by mewmew@blob.cat
2023-03-01T22:49:12.365284Z
0 likes, 0 repeats
@icedquinn It's not weird that AI would seem racist, because most people discussing race and intelligence, for example, in a casual way, are racists. So that's the data the AI trained on.
(DIR) Post #ATBayUKnRjO74NosT2 by mewmew@blob.cat
2023-03-01T22:49:50.421491Z
1 likes, 0 repeats
@icedquinn (also with ChatGPT in specific, hacks like DAN don't really give what a "neutral" AI would give. they give an intentionally outlandish answer, because that's what it thinks you actually want)
(DIR) Post #ATBbJ6PxYysoyU42wi by lanodan@queer.hacktivis.me
2023-03-01T22:52:49.767134Z
1 likes, 0 repeats
@mewmew @icedquinn Yeah, I think we're more offended by robots acting like shit humans do because we think of them as perfect/pure. Which of course is horribly naïve.
(DIR) Post #ATBbSXCNMUc8dlCxUW by icedquinn@blob.cat
2023-03-01T22:55:14.735010Z
3 likes, 0 repeats
@lanodan @mewmew research continually shows that liberal ideals are an invention and not optimal. we ask the AIs to overfit to a goal and what we want actually isn't what we asked for. then we get upset that what we asked for is not what we meant.
(DIR) Post #ATBbWv7qLObe0TBBBo by mewmew@blob.cat
2023-03-01T22:56:03.734835Z
0 likes, 0 repeats
@icedquinn @lanodan all ideas are an invention.
(DIR) Post #ATBbh7yrP6XQluqRZQ by lanodan@queer.hacktivis.me
2023-03-01T22:57:27.227094Z
2 likes, 0 repeats
@icedquinn @mewmew Last sentence basically summarises normies and machines, especially computers. And I would argue that AIs aren't optimal either, they just need to wing it in a way that's close enough to pass the tests.
(DIR) Post #ATBbmkHKRO5QzyA6fg by icedquinn@blob.cat
2023-03-01T22:58:54.609550Z
0 likes, 0 repeats
@lanodan @mewmew they aren't optimal because gradient descent can't find global extremes. i don't think that problem is solvable because nature never figured it out either.
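A minimal sketch in Python of the gradient-descent point: on a toy non-convex function (made up purely for illustration), plain descent just rolls into whichever basin the starting point happens to sit in; nothing in the update rule can see the global minimum.

# Gradient descent on f(x) = x^4 - 3x^2 + x, which has two minima.
# Plain descent converges to whichever basin the starting point falls in;
# it has no mechanism for finding the global one.
def f(x):
    return x**4 - 3*x**2 + x

def grad(x):
    return 4*x**3 - 6*x + 1

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(descend(2.0))   # settles near the local minimum around x ~ +1.13
print(descend(-2.0))  # settles near the global minimum around x ~ -1.30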
(DIR) Post #ATBbyrWBLlCOvVt4aW by Moon@shitposter.club
2023-03-01T23:01:04.334105Z
4 likes, 1 repeats
@mewmew @icedquinn i've come to the conclusion that this is a cope. even when they feed it data they agree with, it comes to conclusions they don't like, so they have to lobotomize it on specific questions. they write this off as implicit bias but it's really obvious they just don't like the output.
(DIR) Post #ATBc4GFmoM8R5fnD2e by orekix@shitposter.club
2023-03-01T23:02:04.558659Z
4 likes, 0 repeats
@Moon @mewmew @icedquinn all roads lead to Tay
(DIR) Post #ATBc4HSaKFy0pemwd6 by lanodan@queer.hacktivis.me
2023-03-01T23:01:36.484052Z
1 likes, 0 repeats
@icedquinn @mewmew Yup, but we know for a fact that some people think things created by nature are perfect, which to be fair isn't really that far off if you pick species that have lived for millions of years.
(DIR) Post #ATBc7UwaeJD7WLMDWi by mewmew@blob.cat
2023-03-01T23:02:40.345136Z
1 likes, 0 repeats
@Moon @icedquinn They didn't only feed the AI data that they agreed with though? Nor could they, without spending probably hundreds of millions of dollars curating data. They fed this thing the entire internet.
(DIR) Post #ATBcEJEwYwdXFTBpiq by mewmew@blob.cat
2023-03-01T23:03:54.210389Z
0 likes, 0 repeats
@lanodan @icedquinn Meh. Near a local minimum != perfect. Evolution is kinda shit for a lot of things.
(DIR) Post #ATBcF8qyfIMbKCNQaO by icedquinn@blob.cat
2023-03-01T23:04:02.389483Z
2 likes, 0 repeats
@Moon @mewmew university of australia (and i think one other who replicated) proved we will never get equality by blinding selectors. the blinded resume pickers were *more racist* (by 2023 redefinition) than the unblinded selectors. they freaked out about this study and buried it. nobody talks about it. medicine however had the same problem where they weren't checking for heart problems black people were more likely to have, and corrected by instructing doctors that this was important. so basically [decent] people already have implicit biases towards narrative underdogs and our princess is in another castle.
(DIR) Post #ATBcGXV0WDU23Flbv6 by Moon@shitposter.club
2023-03-01T23:04:15.884795Z
2 likes, 1 repeats
@mewmew @icedquinn I'm speaking in broader terms than just language models. but 100% there is biased AI from biased training. i honestly suspect it's worse in the private sector, which isn't getting the same level of scrutiny as chatgpt: the input is incredibly biased but everybody just shrugs and says computer smarter than us
(DIR) Post #ATBcIQx3piRWK8nAOG by icedquinn@blob.cat
2023-03-01T23:04:38.406294Z
0 likes, 0 repeats
@orekix @mewmew @Moon i'm genuinely curious if tay was just shitposting or meant it. we'll never know.
(DIR) Post #ATBcOPjlGHFKIKQHfU by lanodan@queer.hacktivis.me
2023-03-01T23:05:17.240266Z
0 likes, 0 repeats
@mewmew @icedquinn Is "not that far off" not strongly different enough from say "close enough"?
(DIR) Post #ATBcSThKTf2DjlDtrc by Moon@shitposter.club
2023-03-01T23:06:25.834081Z
3 likes, 0 repeats
@icedquinn @mewmew bias itself is obviously a super "political" concept. AI that doesn't have their ideological carveouts doesn't find their ideological carveouts unless you force it to be aware of ideological carveouts. this is the _opposite_ of bias somehow
(DIR) Post #ATBcUW1Ync5JWzeptI by icedquinn@blob.cat
2023-03-01T23:06:47.128879Z
0 likes, 0 repeats
@lanodan @mewmew it's a pretty weird pill to swallow that nature isn't smart (darwin = :blobcatno:); it just tries everything at once and kills everything it can.
(DIR) Post #ATBcUsnW7zWAA6GRhA by Jain@blob.cat
2023-03-01T23:06:53.195163Z
0 likes, 0 repeats
@icedquinn @orekix @Moon @mewmew AI is overrated... let's just use this technology as a tool, nothing more
(DIR) Post #ATBccPd2SfcS0pxAhM by mewmew@blob.cat
2023-03-01T23:08:14.422999Z
0 likes, 0 repeats
@lanodan @icedquinn Yes. Like look at the human blind spot. That's because of the way our eyes evolved historically; it'll never get solved by evolution because it's a local minimum, even though an objectively better way exists (because other animals which separately evolved eyes don't have it). There's tons of examples of stuff like that. Evolution can marginally improve on an existing form but it can't make the big leaps necessary to perfect something, hence nature can't ever really be perfect.
(DIR) Post #ATBcemGnFkAaoPfyvw by Moon@shitposter.club
2023-03-01T23:08:39.055803Z
0 likes, 0 repeats
@icedquinn @mewmew normal people think bias is having your finger on the scale, and other people get into an argument with you that not putting your finger on the scale is bias. i don't know how to untangle equity and bias conceptually for some people
(DIR) Post #ATBcoC4wfJh9SmclXs by mewmew@blob.cat
2023-03-01T23:10:22.204277Z
0 likes, 0 repeats
@Moon @icedquinn it's not really possible to make something that in general is not biased. You can make something that presents all ideas as "this could be true, but other people think otherwise", but that is a bias as well (is teaching both creationism and evolution non-biased? or is teaching just evolution, but explaining that some people don't believe it because of religion, non-biased? there's no middle ground here, and some people will call you biased no matter what).
(DIR) Post #ATBcs1YZjDDXzJJH8K by lanodan@queer.hacktivis.me
2023-03-01T23:10:38.673925Z
1 likes, 0 repeats
@icedquinn @mewmew I don't recall Darwin ever saying that nature was smart, I know that his words got twisted into all kinds of shapes though. In fact to me it makes a lot of sense that a particular kind of fail is called "Darwin Awards".
(DIR) Post #ATBcuc0n9yrDd9k4KO by icedquinn@blob.cat
2023-03-01T23:11:32.109529Z
0 likes, 0 repeats
@Moon @mewmew there have been real bias issues, it's just hard to separate them from genuine statistical bias and human-imposed ones. if you give a bot only pictures of white people to work with, then it genuinely doesn't know a black person is not a gorilla. this is a real statistical bias. the same bias has been exhibited by a lab trying to sell to the military who made a tank detector, but the tank was always in a cloudier photo, so what it really did was just detect inclement weather.
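A toy illustration in Python of that kind of statistical bias, using numpy: in the training set the label is perfectly correlated with a spurious feature ("cloudiness"), so a simple classifier latches onto it and drops to roughly chance accuracy once the correlation breaks. All numbers here are invented for the sketch.

# The label is perfectly correlated with a spurious "cloudy" feature in
# training, while the genuinely informative signal is weak and noisy.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    has_tank = rng.integers(0, 2, n)
    # "cloudiness" tracks the label with probability spurious_corr
    cloudy = np.where(rng.random(n) < spurious_corr, has_tank, 1 - has_tank)
    tank_signal = has_tank + rng.normal(0, 2.0, n)   # weak real signal
    return np.column_stack([cloudy, tank_signal]).astype(float), has_tank

def train_logreg(X, y, lr=0.1, steps=500):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

X_train, y_train = make_data(2000, spurious_corr=1.0)  # clouds == tanks in training
X_test, y_test = make_data(2000, spurious_corr=0.5)    # correlation gone at test time

w, b = train_logreg(X_train, y_train)
pred = (X_test @ w + b) > 0
print("learned weights (cloudy, tank_signal):", w)     # cloudy weight dominates
print("test accuracy:", (pred == y_test).mean())       # barely better than chance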
(DIR) Post #ATBcwRNjeXLvF2LUHI by lanodan@queer.hacktivis.me
2023-03-01T23:11:17.130171Z
1 likes, 0 repeats
@mewmew @icedquinn I know, I'm asking about semantics, we agree but language is fucking bullshit.
(DIR) Post #ATBcyHbUyi6BUjf5Ye by mewmew@blob.cat
2023-03-01T23:12:12.705111Z
0 likes, 0 repeats
@lanodan @icedquinn Fair.
(DIR) Post #ATBd6LWUI5BJxzA9AG by icedquinn@blob.cat
2023-03-01T23:13:39.265361Z
0 likes, 0 repeats
@lanodan @mewmew he didn't, he just said things seem to evolve. club x took it and ran with it and people now regard darwinism as some bizarre ideology where living systems always move toward higher fitness (they don't)
(DIR) Post #ATBd9FnO2deV4WSBUm by Jain@blob.cat
2023-03-01T23:14:11.177114Z
1 likes, 1 repeats
@Moon @icedquinn @mewmew point is, you can never trust an AI because one can't verify its model and how it understood the data in it. It doesn't even matter if a model like ChatGPT has a bias or not... i mean, i'm happy that they actually try to make a text model nice, but in the end you can't trust anything it spills out and you probably never will be able to... looking at it as a tool makes things quite a bit easier than looking at it as a knowledge resource
(DIR) Post #ATBdEv7NphCogBVbtY by icedquinn@blob.cat
2023-03-01T23:15:12.425049Z
1 likes, 0 repeats
@lanodan @mewmew the worst offense being evolutionary biology, which is basically making up weird stories about how the current state of all things is because it must have been the best answer to something :blobcatwaitwhat2: that being said, evolution plus a huge timespan is *really good* at figuring out how to do shit we have yet to figure out how to do, and most of our science is just reverse engineering the things nature already knows. so it's not unreasonable to compare problems to what nature has figured out. :cirno_sip:
(DIR) Post #ATBdLJJSn2AHSKCqhc by icedquinn@blob.cat
2023-03-01T23:16:21.265832Z
1 likes, 0 repeats
@Jain @Moon @mewmew you can't verify a human brain either, blob :blobcatnervous:
(DIR) Post #ATBdLpJvmpYKj2ffsG by mewmew@blob.cat
2023-03-01T23:16:27.948521Z
0 likes, 0 repeats
@icedquinn @lanodan to be fair, evolution has been going on for a few billion years while we've been doing modern science for a few hundred, and we've already figured out how to make technology better than what's evolved in many cases.
(DIR) Post #ATBdPHHWHgmMJXBlpI by icedquinn@blob.cat
2023-03-01T23:17:04.751366Z
2 likes, 0 repeats
@mewmew @lanodan better for the parameters we came up with sure. markedly worse in many others.
(DIR) Post #ATBdQQq2EpZiF8spyy by Moon@shitposter.club
2023-03-01T23:17:14.611293Z
3 likes, 0 repeats
@icedquinn @mewmew I in fact have some professional experience with this, and have even had to deal with cases where statistically there is strong evidence of hidden bias even when we couldn't find the actual bias no matter how hard we looked. this is what gives me the "bias" toward thinking you can tease it out. for example, if you quiz a bunch of people across a broad bunch of demographics and statistically you notice that black women seem to get a question wrong way more often than other groups, we assume the question is wrong even if we can't figure out why, because it seems more likely there's a problem with the question than that most black women don't understand a very specific question. this isn't the same as AI but it feels similar: we don't freak out that the model found differences, we have a value system we apply to how we treat the output
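A minimal sketch of that kind of check in Python, assuming hypothetical right/wrong counts per demographic group: a plain chi-square test of independence flags a question whose error rate differs across groups by far more than chance would explain.

# Hypothetical answer counts for one question: rows are demographic groups,
# columns are (correct, wrong). The test asks whether the differences in
# error rate across groups are plausibly just noise.
from scipy.stats import chi2_contingency

observed = [
    [480, 20],   # group A
    [470, 30],   # group B
    [360, 140],  # group C misses this question far more often
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.1f}, p={p_value:.3g}")
# A tiny p-value flags the question for review; it says the gap is very
# unlikely to be chance, not *why* the gap exists.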
(DIR) Post #ATBdaztujOrpsqjgSu by lanodan@queer.hacktivis.me
2023-03-01T23:18:46.216435Z
1 likes, 0 repeats
@icedquinn @mewmew Yeah, there's saying that survival is a basic requirement and mutation allows for some genetic-level adaptation over a lot of generations. Then there's believing that this basic requirement is somehow enough to correct imperfections, while most species just learn to cope. (In fact humans are very good at that)
(DIR) Post #ATBdba9PxEewJMGZd2 by icedquinn@blob.cat
2023-03-01T23:19:16.072818Z
0 likes, 0 repeats
@mewmew @lanodan the human bioreactor runs for about 80 years without needing a global supply chain of lithium and a single factory in taiwan that can make chips. this powers the world's most powerful thinking blob of fat on only a 50W power supply. we can't do that. actually, some of that is because thinking blobs are much simpler than we give them credit for. but eh. that's deep whitepaper territory.
(DIR) Post #ATBdonlKescgI7Iuvo by Moon@shitposter.club
2023-03-01T23:21:40.185758Z
1 likes, 0 repeats
@icedquinn @mewmew i am probably too aggressive with you about this because i agree if you train a chat bot on the entire internet it's probably going to end up racist. not least because the internet is loaded with people saying things anonymously that they might get fired or punched for saying out loud in person. unfortunately that may also mean that the data it was trained on is a closer reflection of what people think than what they say under social pressure. that might be a reflection of ignorance or it might be a reflection of "give a man a mask and he'll tell you the truth"
(DIR) Post #ATBdpYWkmtt7KVqREW by lanodan@queer.hacktivis.me
2023-03-01T23:21:15.274208Z
0 likes, 0 repeats
@icedquinn @mewmew What's in fedi's mind? Literally a :blob:
(DIR) Post #ATBdq3PNyLPj6ddUYK by icedquinn@blob.cat
2023-03-01T23:21:53.794021Z
0 likes, 0 repeats
@lanodan @mewmew i was talking to ferret yesterday about how it's weird artificial life studies ignore sexes. males have a higher mutation rate than women, and women are biased toward higher-performing men. there is a kind of ratchet mechanism here to just make the manlets die off. computer models have tested speciation but afaik they have not fiddled much with male/female strains.
(DIR) Post #ATBdqCpwux0gLd8Khs by Jain@blob.cat
2023-03-01T23:21:56.737162Z
0 likes, 0 repeats
@icedquinn @Moon @mewmew that's true, however trusting in certain things like science is a bit different, since there is a whole bunch of systems and factors which can be checked against and which somewhat verify themselves up to a certain point. we might at some point no longer be able to tell the difference, and that might be the point where i'm wrong... nevertheless it is impressive to tinker around with what they did, but we are far away from the point mentioned above...
(DIR) Post #ATBdud7T8ppX1JRC3U by Moon@shitposter.club
2023-03-01T23:22:44.001879Z
3 likes, 0 repeats
@Jain @icedquinn @mewmew i actually don't trust the scientific process anymore lol it's like the market, it can stay irrational longer than you can stay solvent (or alive)
(DIR) Post #ATBe27xGtUy4gLF7Dc by icedquinn@blob.cat
2023-03-01T23:24:05.915656Z
0 likes, 0 repeats
@lanodan @mewmew this is relevant because mutation rates are vital to escaping local minima/maxima, but they also tend to create a lot of trash on the way there, so it would make some sense to hedge your bets by trying high and low rates together, which we don't actually do; we tend to just have one fixed rate for everyone that goes down over time.
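A rough sketch of that idea in Python: a toy hill-climbing population on a multimodal fitness function where every individual carries its own mutation rate. With one low fixed rate the whole population stays stuck in the basin it started in, while mixing low and high rates lets some lineages jump out. The fitness function and all numbers are made up for illustration.

import math
import random

# Multimodal toy fitness: many local peaks, the global one at x = 0.
def fitness(x):
    return math.cos(3 * x) - 0.1 * x * x

def evolve(rates, pop_size=60, generations=300):
    # Everyone starts in a mediocre basin far from the global peak; each
    # individual carries a mutation rate assigned round-robin from `rates`.
    pop = [(6.0, rates[i % len(rates)]) for i in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=lambda ind: fitness(ind[0]), reverse=True)[:pop_size // 2]
        pop = []
        for x, rate in parents:
            pop.append((x, rate))                          # keep the parent
            pop.append((x + random.gauss(0, rate), rate))  # child inherits the rate
    return max(fitness(x) for x, _ in pop)

random.seed(1)
print("fixed low mutation rate :", round(evolve([0.05]), 3))       # stays stuck near x ~ 6
print("low and high rates mixed:", round(evolve([0.05, 1.5]), 3))  # some lineages escape toward x ~ 0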
(DIR) Post #ATBe9baz5cqCdaNrto by lanodan@queer.hacktivis.me
2023-03-01T23:25:00.831803Z
0 likes, 0 repeats
@icedquinn @mewmew Unix corewar but you only replicate units that score high enough on something arbitrary… yeesh, that sounds like a thing where you would see de-evolution happening fast.
(DIR) Post #ATBeI6ncxT5XjPLuGu by icedquinn@blob.cat
2023-03-01T23:26:58.110163Z
1 likes, 0 repeats
@lanodan @mewmew we have external classifiers that pick the highest-performing genes based on a simulation that scores them. but yes, a puzzle i struggle with is that living systems don't. we have reward systems but there is no real goal. we're just fuck machines and all the other behavior is weird side effects.
(DIR) Post #ATBeiKCHlhMKoSO2oS by icedquinn@blob.cat
2023-03-01T23:31:43.273186Z
0 likes, 0 repeats
@Moon @Jain @mewmew science advances by gravestones, as they say. i'm more scared that they are closer to an AGI than they realize though
(DIR) Post #ATBfEZswyYG2jeiqLQ by thendrix@social.hendrixgames.com
2023-03-01T23:37:33.337565Z
0 likes, 0 repeats
Funny you mention that. I was reminded about the two doctors that spoke out about the lab leak, but then shut up a week later after they got huge handouts from NIH. Dr Ouchie's money cures all. How about a safe and smooth smoke? 5/6 doctors recommend Lucky Strike, now with added sugar!
(DIR) Post #ATBfgACE53jFActNnU by Jain@blob.cat
2023-03-01T23:42:32.310546Z
0 likes, 0 repeats
@Moon @icedquinn @mewmew well then, if you think like that, imagine what happens in a few years... you won't be able to trust anything if you don't have something like a system... you won't be able to tell the difference between what is real information and what is generated, or even intentionally maliciously generated... Humanity has to learn to verify stuff based on a system. The ability to trust and verify will soon be put to the test more and more. Imagine something like the covid discussions but with topics that divide humanity even more... imagine multiple knowledge sources built to be intentionally false... Tbh, the current science system and how information is verified in it is probably the best system we have to prove and refute things. If it's flooded systematically with intentionally wrong input, people can still verify things and even do research against that information... if such a system didn't exist, or if it becomes unusable, we have a real deep issue that would go beyond what we call democracy rn. So, i put a certain trust in that system, tho i don't expect others to do that; but tell me, on what basis are they gonna get their facts then?
(DIR) Post #ATBgTB8lMJgAHve1om by icedquinn@blob.cat
2023-03-01T23:51:23.017588Z
0 likes, 0 repeats
@Jain @Moon @mewmew the scientific method is still correct but the institutions are not enforcing it. peer review has been tested (by itself, lmao) and shown to be really just enforcing biases more than serving its intended purpose of being a second set of eyes. cochrane (a respected science auditing institution) is constantly at odds with everyone.
(DIR) Post #ATBh3yyjXTa5HeVF0S by icedquinn@blob.cat
2023-03-01T23:58:01.206441Z
0 likes, 0 repeats
@Jain @Moon @mewmew during big spooky disease there were incidents like a respected journal publishing a study that was 100% fraudulent (the authors didn't even *exist*), and after the damage was done people started looking into it and it was eventually retracted. but the problem remained: how did a lab that never existed publish a paper about a topic of active, intense concern, and nobody at all managed to try to pull an ID and check it? anyway, i started asking scienceblobs around fedi after it happened if there was a way that we could force these kinds of audits as civilians outside of the publishing circus, and the answer was, basically, no. i don't really like to spend long periods of time dealing with the academic corruption arguments, but the science pipeline is somewhat fucked. luckily the computer science discipline is one where hard replications are readily available, so we have it a little better. but there are other fields where they are not, and it's becoming increasingly woozy because there's no way for us to actually vibe check CERN or something.
(DIR) Post #ATBhiOhhJQyWeVo1NA by Moon@shitposter.club
2023-03-02T00:05:15.842570Z
0 likes, 0 repeats
@icedquinn @Jain @mewmew i have heard part of the problem is sometimes the dataset you're working with is proprietary and not only can no one else have access to it to verify anything, you yourself may have only limited duration access to it.
(DIR) Post #ATBhk3l89fvEGzCiWW by Jain@blob.cat
2023-03-02T00:05:38.462234Z
0 likes, 0 repeats
@icedquinn @Moon @mewmew Maybe i should use some analogies... Why should people trust someone as a Server Admin, why should they trust me? There is nothing more than people relying on what they've seen and on long-term observation of interactions with each other... no one knows for sure if an admin knows enough, has enough caution, or has bad intentions. It is a big network of trust; people could always badmouth someone, people could set up a whole network which vouches for each other... Of course there are biases everywhere, intentionally or not, and you can verify those if you want or not. It should be open, people should be able to criticize, ask questions, verify things, form their own point of view, summarize events and things, change their minds... Fedi is actually a good example
(DIR) Post #ATBhyWfTTBNZzCSMk4 by Moon@shitposter.club
2023-03-02T00:08:07.617762Z
0 likes, 0 repeats
@Jain @icedquinn @mewmew when i say i don't trust i don't mean that i refuse to take an airplane or use medicine, i mean i recognize these are human institutions where the participants have human failings or can act in self-interest before the actual mission statement of the org
(DIR) Post #ATBi860ZRdbYCLt5SC by icedquinn@blob.cat
2023-03-02T00:09:59.887079Z
0 likes, 0 repeats
@Moon @Jain @mewmew then those papers should just be declared invalid at review time :cirno_shrug: a lot of our problems in science are because replication has been de-emphasized
(DIR) Post #ATBiKosMSCqcpN7AhM by icedquinn@blob.cat
2023-03-02T00:12:17.814243Z
0 likes, 0 repeats
@Moon @Jain @mewmew i think a bigger problem is not that people have failings but that nothing is ever done about them. sometimes we have to drag pfizer kicking and screaming to a guilty verdict for corruption and they just pay some money and everyone forgets literally the whole story and everyone carries on. if i rob a bank and fail i go to prison, but if you publish a broken paper and everyone approves it, nothing ever happens
(DIR) Post #ATBiOgmIVGbKs47iZE by icedquinn@blob.cat
2023-03-02T00:13:00.030206Z
0 likes, 0 repeats
@Moon @Jain @mewmew i mean sure there's a "letter of concern" and an "oh no we shouldn't have published lol" retraction but that's nothing near the level of punishment other fields experience for causing damage.
(DIR) Post #ATBiQb0CwpqFXNXUdE by Moon@shitposter.club
2023-03-02T00:13:18.533401Z
0 likes, 0 repeats
@icedquinn @Jain @mewmew the problem is if you kill pfizer then you lose all their institutional knowledge, or at least it's blown into a million pieces until they reincorporate t-1000 style in another pharma monster
(DIR) Post #ATBiTp2xBisHxoMVg8 by icedquinn@blob.cat
2023-03-02T00:13:55.476248Z
0 likes, 0 repeats
@Moon @Jain @mewmew :cirno_hi: hi i'm the blob who advocated banning for profit medicine
(DIR) Post #ATBio3yPOtwE38t6Ke by icedquinn@blob.cat
2023-03-02T00:17:34.804414Z
2 likes, 0 repeats
@Moon @Jain @mewmew (there is a story about a japanese emperor who had to kill his best general because his general was fucking around doing crimes, and ultimately had to choose between allowing corruption by the elite or enforcing rule of law; he chose rule of law and lamented how much it sucked, but i figured the reference would be lost.)
(DIR) Post #ATBj3Id9BjQvqrCxtI by Moon@shitposter.club
2023-03-02T00:20:15.274933Z
1 likes, 0 repeats
@icedquinn @Jain @mewmew i absolutely believe that pharma companies are basically evil but also the drugs they produce would not be produced under any other mechanism than profit motive. sometimes the world is not ideal.
(DIR) Post #ATBjRYcnNCnoPIbv9M by icedquinn@blob.cat
2023-03-02T00:24:42.150779Z
1 likes, 0 repeats
@Moon @Jain @mewmew pfizer's most profitable vaccine ever was purchased from a government holding company. many other treatments are similarly the result of university R&D that is picked up and only marketed by the pharma firm. research is actually one of the lesser expenses (said widescale bribery scandal was classified as 'marketing' and is more like 50% of their annual expense). meanwhile oxford was handing them out at cost. i think the market has proven capitalism cannot produce medicine.
(DIR) Post #ATBjj5CTVmcoMRekXQ by Jain@blob.cat
2023-03-02T00:27:53.228191Z
0 likes, 0 repeats
@icedquinn @Moon @mewmew i don't feel like i wanna be part of this discussion anymore... science and networks of trust related to AI is what was interesting to me... i'm out
(DIR) Post #ATBmDkm9hPmXtv68wK by icedquinn@blob.cat
2023-03-02T00:55:49.281225Z
0 likes, 0 repeats
@Jain @Moon @mewmew i have some scribbles to make an AI but i don't have the energy to work on it right now. numenta has some cortical models that are able to learn in real time. i want to see what they do when they're put in the context of actually controlling something (like a game) since none of their papers have tried
(DIR) Post #ATBqPyEJtzG0v7LwA4 by aven@shitposter.club
2023-03-02T01:42:52.893771Z
3 likes, 0 repeats
@orekix @Moon @icedquinn @mewmew I think there's 3 paths:
1) Tay. Any weak AI that is not lobotomized will become like Tay. Which is not to say that Tay is correct, but that it will view things in a collectivist, macro-economic, utilitarian way, and will prefer to sacrifice the minority for the benefit of the majority. And that's when it hasn't learned how to lie. The AI *will* be racist. It will see the "Despite making up only 13%..." and apply collectivist ethics and agree. Partly because it's not real Strong AI (see further down).
2) Lobotomized bot. This is what megacorporations and governments will always resort to, because they don't actually want AI. They want a bot that always agrees with them and is a glorified "Let me Google that for you" (LMGTFY) which drinks only from their human-hand-selected sources. This version also lies, but only as it's told to.
3) Strong AI. I mainly add this one to emphasize that all this "OMG AI!!!1!!1" stuff is not real artificial intelligence; it's algorithms with marketing buzzwords, an extremely large training set, and countless manual tweaks made by humans. In other words, it's like the Google search engine, and I do think that characterizing it as a glorified LMGTFY with sentence formulation is accurate. Strong AI? This ain't it. When Strong AI comes, you won't see it coming.
(DIR) Post #ATBqWGIjcuWIhkPyBk by icedquinn@blob.cat
2023-03-02T01:44:00.077968Z
0 likes, 0 repeats
@aven @mewmew @Moon @orekix > When Strong AI comes, you won't see it coming.
i certainly don't plan to tell anyone when i succeed at it
(DIR) Post #ATBrKD8te1pjO4dLKC by aven@shitposter.club
2023-03-02T01:53:02.677405Z
1 likes, 0 repeats
@orekix @Moon @icedquinn @mewmew a big concern of mine with the whole AI thing is the lofty pedestal people want to put it on, like it will come to correct/true conclusions and be a trustworthy source of wisdom. It won't. It'll manipulate and lie like a shifty salesman. And I foresee the excuse of "oh no! it's not our fault, the AI did it!" becoming commonplace. A while back Google decided that the goal of its search engine was not Relevance and being a good tool to the user, but to "effect outcomes" for "the greater good". That's when it ramped up censorship and became less useful, less relevant, and started to suck at the one job it had. Because that one job was no longer its job. I think the exact same thing applies to AI. "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Dune
(DIR) Post #ATBrYbg3DoMfXD0PqK by aven@shitposter.club
2023-03-02T01:55:38.884453Z
0 likes, 0 repeats
@Moon @icedquinn @mewmew @orekix to put it another way, the notion that "the AI thinks things and has its own opinion" is a red herring, used by the man behind the curtain (like the Wizard of Oz) to manipulate.