Post #AUUBtiEWR4MLSONYSe by bflipp@vmst.io
(DIR) Post #AUU6zPJ7DOamj4QJiy by lauren@mastodon.laurenweinstein.org
2023-04-09T19:04:46Z
0 likes, 0 repeats
**** The “AI Crisis”: Who Is Responsible? ****
https://lauren.vortex.com/2023/04/09/the-ai-crisis-who-is-responsible

There is a sense of gathering crisis revolving around Artificial Intelligence today -- not just AI itself but also the public's and governments' reactions to AI, particularly generative AI.

Personally, I find little blame (not zero, but relatively little) with the software engineers and associated persons who are actually theorizing, building, and training these systems.

I find much more blame -- and the related central problem of the moment -- in some non-engineers (e.g., some executives at key levels of firms) who appear to be pushing AI projects into public view and use prematurely, out of fear of losing a race that suddenly seems highly competitive, in some cases apparently deemphasizing crucial ethical and real-world impact considerations.

While this view is understandable in terms of human nature, that does not justify such actions, and I fear that governments' reactions are heading toward a perfect storm of legislation and regulations that may be even more problematic than the premature release of these AI systems has been for these firms and the public. This may potentially set back for years critical work in AI that has the potential to bring great benefits (and yes, risks as well -- these both come together with any new technology) to the world.

By and large, the Big Tech firms working on AI are doing a negligent and ultimately self-destructive job of communicating the importance -- and limitations! -- of these systems to the public, leaving a vacuum to be filled with misinformation and disinformation to gladden the hearts of political opportunists (both on the Right and the Left) around the planet.

If this doesn't start changing for the better immediately, today's controversies about AI are likely to look like firecrackers compared with nuclear bombs in the future. --Lauren--
(DIR) Post #AUU7Cg3IGlwsQKLnpA by PeoriaBummer@infosec.exchange
2023-04-09T19:07:09Z
0 likes, 0 repeats
@lauren Absolutely agree. The problem is rushing these models into products. ESPECIALLY “search engines”.
(DIR) Post #AUU9d1ngEUQdJOjxho by bflipp@vmst.io
2023-04-09T19:34:40Z
0 likes, 0 repeats
@lauren I do have problems with the engineers. Very few of them have any ethics training to properly understand or push back on any problematic requirements. I have posted before about a large project that I worked on in the same vertical as ChatGPT that we eventually shut down because we were just stealing data to train our models with. Indiscriminate web scraping, doc collection, video transcription, images, etc.
(DIR) Post #AUU9xJJMpvf9PgEW6y by berndporr@mastodon.social
2023-04-09T19:38:19Z
0 likes, 0 repeats
@lauren That’s a PhD student at Stanford who hosts The Gradient and also runs sigmoid.social. The problem is that, at least at Stanford, they seem to have gone all crazy. There is a great long weekend read in the German SZ where they say that these tech people are writing (bad) fiction, while AI is a very good plagiarism tool. Obviously they need to predict superhuman AI, as they have been pushing it since 2014. In the meantime, grifters dismantle our shared reality and democracy.
(DIR) Post #AUU9yzK083xrzlKUXQ by lauren@mastodon.laurenweinstein.org
2023-04-09T19:38:38Z
0 likes, 0 repeats
@bflipp I would put the failure to provide ethics training in the wheelhouse of those same executives. I have written earlier about the need to provide control over the gathering of training data -- that control needs to be in the hands of websites, for example. But ultimately, the rules about training, ethics, when products are released to the public, etc. are in the hands of those executives -- and those engineers are their responsibility.
(DIR) Post #AUUAURWVKGccREB0kq by lauren@mastodon.laurenweinstein.org
2023-04-09T19:44:22Z
0 likes, 0 repeats
@bflipp I'm hearing of many engineers trying to explain the problems to execs, and the execs just brush them aside and say "we gotta get this out now, we can always fix stuff later!" This seems rather endemic.
(DIR) Post #AUUAY1hVH9KRVd1vii by bflipp@vmst.io
2023-04-09T19:44:59Z
0 likes, 0 repeats
@lauren I completely disagree. Absolving engineers of ethical responsibility is the “just following orders” excuse of building intelligent systems. Everyone in the chain has responsibility. I am not absolving the execs either.
(DIR) Post #AUUAh4mXBngnfghMwq by lauren@mastodon.laurenweinstein.org
2023-04-09T19:46:40Z
0 likes, 0 repeats
@bflipp I am not absolving anyone of anything. But like I said, it appears that when engineers complain, the execs brush them aside. Short of quitting -- and being replaced with more "compliant" workers -- eventually they tire of the fight.
(DIR) Post #AUUBtiEWR4MLSONYSe by bflipp@vmst.io
2023-04-09T20:00:06Z
0 likes, 0 repeats
@lauren Yeah, the point when execs insert themselves into the engineering process is generally when I exit. It’s also a good way to build a side contracting business: keeping their lights on and training your replacement(s) until they eventually file for bankruptcy and/or get acquired, because that’s where it’s usually going anyway.
(DIR) Post #AUUCIalpJrEHYDovSK by BartWronski@mastodon.gamedev.place
2023-04-09T20:04:35Z
0 likes, 0 repeats
@lauren @BoredomFestival Great points. I'd add that I've witnessed researchers working on some cool stuff, carefully communicating limitations and caveats, saying it's not ready -- only to be overridden by PMs and directors: "we need this in our product by yesterday."

Plus, OpenAI started an even worse trend of not publishing results or papers, which hinders future research (into both improvements and risks) and slows it down. Other companies now follow suit, feeling foolish for not having done it earlier. :(
(DIR) Post #AUUE3j6KGPrix7RplQ by TruthSandwich@toad.social
2023-04-09T20:24:11Z
0 likes, 0 repeats
@lauren I find many of the fears odd. Yes, ChatGPT can be tricked into telling you how to make explosives, but then again, there's always Wikipedia. Yes, ChatGPT can hallucinate libel about public figures, but then again, there's always Wikipedia. I'm much more concerned about ChatGPT being out in the wild, with access to the Internet and the ability to further its own ends.
(DIR) Post #AUVtXhw7Rt9VJ534aW by rrb@qoto.org
2023-04-10T14:23:26Z
0 likes, 0 repeats
@TruthSandwich @lauren https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction
(DIR) Post #AUVtXik6S38Po6QzUu by lauren@mastodon.laurenweinstein.org
2023-04-10T15:43:52Z
0 likes, 0 repeats
@rrb @TruthSandwich "It's so simple. So very simple. That only a child can do it!"