Post AVap6NkCeKLq9bHTQO by Marvin@fans.sonichu.com
(DIR) Post #AVWIlkD1xKLca52Yym by rodhilton@mastodon.social
2023-05-10T17:49:50Z
6 likes, 8 repeats
#GoogleIO revealed the two weirdest features as a pair.

1. Give a short summary and Google will draft an e-mail for you based on it. You can even click "elaborate" and it will make the e-mail longer.
2. When opening an e-mail, Gmail can summarize the entire thing for you so you don't have to read all of it.

Does everyone realize how fucking bizarre this is? Both people in the conversation want to work with directness and brevity, and Google is doing textual steganography in the middle.
(DIR) Post #AVWIlqFjUC4fKCLo1Y by kuba@toot.kuba-orlik.name
2023-05-10T18:18:57Z
1 likes, 0 repeats
@rodhilton marketoonist predicted this: https://marketoonist.com/2023/03/ai-written-ai-read.html
(DIR) Post #AVWhzBz8UjihLS3C8O by rcz@101010.pl
2023-05-10T23:01:32Z
0 likes, 0 repeats
@rodhilton Unless the recipient is not using Google — then they get penalized for it by getting the useless uglified elaborate version.
(DIR) Post #AVWjAYZCyausWLuDmS by leoncowle@hachyderm.io
2023-05-10T18:35:54Z
1 likes, 0 repeats
@rodhilton From a few weeks ago:
(DIR) Post #AVWjAZM82i32y4nI24 by SocialistStan@kolektiva.social
2023-05-10T23:13:05Z
1 likes, 0 repeats
@leoncowle @rodhilton
(DIR) Post #AVWl84x8JGco0H2i3M by gatesvp@mstdn.ca
2023-05-10T22:20:22Z
0 likes, 0 repeats
@rodhilton just starting the countdown until we get the first employment discrimination settlement that includes a generated email.

The writer, rightfully claiming that they did not, in fact, type the words. The reader, displaying the AI-shortened message based on some admin configuration.
(DIR) Post #AVWl85QYXsA1TWoCum by rodhilton@mastodon.social
2023-05-10T22:22:38Z
0 likes, 0 repeats
@gatesvp yeah I think that the whole AI hype cycle in general is going to really brush up against issues of responsibility in a hurry.

Your AI-powered self-driving car hits someone. Are you responsible? You just enabled a feature of the car. Is the AI responsible? It's an algorithm. Is the company who built it? No, they weren't there.

If your AI chooses who to fire and it turns out to be a bunch of protected class employees, who gets sued? The employer or the builder of LayoffBot?
(DIR) Post #AVWl8600Q4W7FTOWAa by Marvin@fans.sonichu.com
2023-05-10T23:26:28.502077Z
0 likes, 0 repeats
Maybe there'll be responsibility issues, but I don't think those are very good examples.

So with the car, the law regarding malpractice and selling defective products is pretty well established, and I see no reason why it wouldn't continue to work as expected. That is, the manufacturer would be held responsible for selling a defective product. They would be responsible for sourcing reliable components, and I don't see a reason why a software component would be any different from a hardware component in that regard.

With AI-based firings, we've already got pretty clear legal precedents for firing people based on performance metrics. How we apply those metrics isn't very important, so long as a protected characteristic isn't one of those metrics.

Hell, AI-based HR will probably reduce liability, because it'd be easier to objectively prove that the input data didn't include any protected characteristics.
(DIR) Post #AVWshcPiarjlazJ5jE by gatesvp@mstdn.ca
2023-05-11T00:59:12Z
1 likes, 0 repeats
@Marvin @rodhilton I think these are really good examples and that you might not be into the details enough.

For example, a self-driving car will not be involved in zero accidents. Zero is not the metric of success for mechanical things, or even for software. And we don't typically hold companies liable for things that are operating within regulatory specification.

The HR stuff is even messier, because it actually highlights a giant regulatory problem... /1
(DIR) Post #AVWshd8NunSxpWClLk by Marvin@fans.sonichu.com
2023-05-11T00:51:17.408872Z
0 likes, 0 repeats
I wasn't saying zero crashes. Just that a legal standard exists and it's flexible enough to accommodate AI.
(DIR) Post #AVWtmQ7ro6Yst4oJxQ by gatesvp@mstdn.ca
2023-05-11T01:02:01Z
1 likes, 0 repeats
@Marvin @rodhilton our current laws only protect outcomes based on input characteristics. Using AI tools as a shield allows companies to produce terrible outcomes, but claim that they are not responsible by pointing to the inputs.

So we're driving a giant truck through a huge hole in our laws and just letting it run over people for no particularly good reason. Don't you think we can do better? //
(DIR) Post #AVWtmQmdMXAgvVssV6 by Marvin@fans.sonichu.com
2023-05-11T01:03:25.351623Z
0 likes, 0 repeats
I don't want the government to regulate outcomes. There are some very dark consequences when you start heading down that path. Like, there are very good reasons why racial hiring quotas are illegal.

But regardless, they can already do effectively the same thing as AI, just messier and by hand. Like I said, hiring or firing based on performance metrics. The AI would do the same thing, except you can provably eliminate accusations of illegal discrimination. I don't see the problem.
(DIR) Post #AVXCsmhEhZftks6Pj6 by gatesvp@mstdn.ca
2023-05-11T04:43:53Z
0 likes, 0 repeats
@Marvin @rodhilton Legal standards are invented by humans like you and me. If you were tasked with creating such a standard, what would you want to see in it? How would you write this to be "inclusive of AI"?
(DIR) Post #AVXCsnQbyryG1bKeS8 by Marvin@fans.sonichu.com
2023-05-11T04:37:23.579923Z
0 likes, 0 repeats
The legal standard about shipping defective products is exactly what I'd create.
(DIR) Post #AVXH5H401TvlEExdOC by gatesvp@mstdn.ca
2023-05-11T05:00:00Z
0 likes, 0 repeats
@Marvin @rodhilton What legal standard? Basically every standard falls to a regulatory body. Implementation literally falls to the state level in the US. The laws use vague words like "as expected by a reasonable consumer". https://www.findlaw.com/injury/product-liability/what-is-product-liability.html

So if you're responsible for deciding "reasonable consumer" regulations for AI cars, what are you deciding? Remember, this rule doesn't exist today. You get to make it up.
(DIR) Post #AVXH5HfZllzL6mXdxY by Marvin@fans.sonichu.com
2023-05-11T05:24:30.340536Z
0 likes, 0 repeats
No, in the US there's a legal distinction between a regulatory body and a judicial standard. The "as expected" phrasing you're describing is handled by courts, not regulatory bodies. It's a pretty ordinary process by now.
(DIR) Post #AVXH7X2la287MxScu8 by gatesvp@mstdn.ca
2023-05-11T04:51:34Z
0 likes, 0 repeats
@Marvin @rodhilton Except things like LLMs can't in fact fire based on performance metrics. If they did that, it would be an algorithm, not some "AI" or model.And they can't generate performance metrics because those metrics would be literal BS.So you're painting a weird universe of things that don't really exist here.
(DIR) Post #AVXH7Xf3HgkrHhNCa0 by Marvin@fans.sonichu.com
2023-05-11T05:24:57.779164Z
0 likes, 0 repeats
That's a distinction without a difference legally.
(DIR) Post #AVXLnXiFskibQL9dp2 by seanking@kazv.moe
2023-05-11T06:27:36.297208Z
0 likes, 0 repeats
@rodhilton I'd imagine it's like one of those memes where the sentence is simple, but then gets completely deformed as the verbosity increases.
(DIR) Post #AVXLnzud2LFksp9uzo by Dave3307@mstdn.social
2023-05-10T17:56:08Z
1 likes, 0 repeats
@rodhilton Plot twist: the summary of the enhanced email is the original prompt used to create it.
(DIR) Post #AVXLo0tFOxk5upWKTw by Sweetshark@chaos.social
2023-05-10T22:04:50Z
1 likes, 0 repeats
@Dave3307 @rodhilton ... and at some point the intermediate won't be parsable by humans anymore, because that's no longer relevant to success.
(DIR) Post #AVXzAshltcZrP2321I by bsokol@retro.pizza
2023-05-10T17:56:30Z
1 likes, 0 repeats
@rodhilton This is just the opposite of a compression algorithm
(DIR) Post #AVY39fYbh3pi9WRGXw by chrisisgr8@tech.lgbt
2023-05-11T03:49:22Z
1 likes, 1 repeats
@rodhilton i actually just came up with an even bigger innovation, let's just have our email chatbots talk one on one and then summarize how it went
(DIR) Post #AVYty0tnoYBORGgDnU by gatesvp@mstdn.ca
2023-05-11T22:56:20Z
0 likes, 0 repeats
@Marvin @rodhilton So we've gone back and forth discussing weird legal technical details.

What you haven't contributed is an answer to the simplest question: what rules do you want to govern this? I would even accept an example of a bad thing happening that was okay, and a bad thing happening that was bad.

Do you have any of these? Or are you just posturing?
(DIR) Post #AVYty1fIxwBEoau9q4 by Marvin@fans.sonichu.com
2023-05-12T00:14:52.034162Z
0 likes, 0 repeats
Let's roll this back to the original point.

Rod gave examples of AI introducing novel legal issues of responsibility. I disagreed that the examples he gave would test the law in novel ways. My point was that ordinary principles of liability (such as those regarding defective products or employment law) would work more or less exactly how they've always worked. Any time a new technology is introduced, juries and judges are still answering the exact same questions in the same way.

Whether we're talking about a lawsuit over, say, the safety of teflon coatings on cookware or the safety of AI driving cars, the legal questions are the same. The AI element won't be a clever, spooky way for a company to dodge liability that they wouldn't be able to otherwise.

That's my point.

Now if you're asking me specifically what I'd think or what I'd want if I was on one of these juries... idk? I'm sure they'd bring in experts testifying about the standards of safety of AI driving cars, and numbers and stats about their reliability, and then the opposing side would bring in their experts to argue that the products had been rushed to market too quickly.

Hell, we'll probably see some UL standards published on the issue quickly enough. https://en.wikipedia.org/wiki/UL_(safety_organization)
(DIR) Post #AVaAdMd9iNHK8SaowC by gatesvp@mstdn.ca
2023-05-12T02:15:49Z
0 likes, 0 repeats
@Marvin @rodhilton So let's drill into Rod's example.

"If your AI chooses who to fire and it turns out to be a bunch of protected class employees, who gets sued? The employer or the builder of LayoffBot?"

Who's the liable party? It can't be the employer; they outsourced the decision to LayoffBot, who granted them indemnity. It can't be LayoffBot, because "they don't ingest information about protected classes".

We don't measure outcomes, only inputs...
(DIR) Post #AVaAdNdBzitzErcMdM by gatesvp@mstdn.ca
2023-05-12T02:22:32Z
0 likes, 0 repeats
@Marvin @rodhilton In a normal case, we would subpoena the code and go to the algorithm. We would analyze the code to figure out why it is making decisions the way it is, and then we'd make a decision on whether or not the code is defective.

To me, the machine-learning solutions are novel because you can't really analyze the code in that way. You may not even be able to repro the results.

To me this is new territory. Do other companies commonly do this? //
(DIR) Post #AVaAdOGtc6f3E0C4WG by Marvin@fans.sonichu.com
2023-05-12T14:56:11.579220Z
0 likes, 0 repeats
Most jurisdictions in the US use "at will" employment, where you can be fired for any (or no) reason, barring a few discrimination issues. And the discrimination issues can be easily eliminated by merely cleaning the data you feed to LayoffBot.

Faulty products are similarly easy to handle, in that the law is concerned with outcomes more than the process. The process doesn't need to be analyzed if the company demonstrates that the outcomes are good enough. And if the outcomes are bad enough, no amount of excuses about the process will justify the situation. The company is obligated to simply not release the product if it keeps fucking people up beyond what is standard for the industry, process be damned.

It's similar to how courts handle medical malpractice. Courts don't try to be experts on medicine and give opinions on whether a particular treatment is wise. They answer a far less technical question: "did the medical practitioner breach the standard of care?" That is, did they vary from what a typical doctor would do? (Whether or not what typical doctors do is stupid.)
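The data-cleaning step described above can be sketched in a few lines. This is a minimal illustration, not anything from the thread: the field names, the `sanitize` helper, and the `PROTECTED` set are all invented for the example.

```python
# Hypothetical sketch: strip protected characteristics from employee
# records before any automated decision system ("LayoffBot") sees them.
# Field names and the protected-attribute list are invented examples.
PROTECTED = {"race", "sex", "age", "religion", "national_origin", "disability"}

def sanitize(record: dict) -> dict:
    """Return a copy of an employee record with protected fields removed."""
    return {k: v for k, v in record.items() if k.lower() not in PROTECTED}

employee = {"employee_id": 4271, "tps_report_lateness_min": 95, "race": "redacted"}
clean = sanitize(employee)
# clean retains only employee_id and tps_report_lateness_min
```

The point of the sketch is only that the exclusion is mechanical and auditable: the sanitized record is what gets logged and fed downstream, so one can later show what the system did and did not see.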
(DIR) Post #AVaBVimZZRCqTYuvUO by Marvin@fans.sonichu.com
2023-05-12T15:06:05.542079Z
0 likes, 0 repeats
So to answer your question: if LayoffBot is some kind of unabashed racist or something, the company offering LayoffBot is probably breaching its obligations to the company using it, and the company using it is breaching its obligations to its employees.

But if LayoffBot is fed a list of employee numbers and a list of how late their TPS reports are in minutes, and then makes hiring/firing decisions based on that, LayoffBot and anyone using LayoffBot are probably fine legally. Even if TPS reports are useless and a waste of company resources.
(DIR) Post #AVap6MEyExm3URf16W by gatesvp@mstdn.ca
2023-05-12T16:17:22Z
0 likes, 0 repeats
@Marvin @rodhilton so I think I'm starting to see the gap in our discussion.

Neither of the examples you have given is something an AI layoff bot would actually do. If you were going to perform layoffs based purely on TPS reports, that would be an algorithm, not an AI.

The current batch of layoff bots are promising to analyze every digital interaction you've had on a company account, and then produce a "performance management" list without need for TPS reports...
(DIR) Post #AVap6N9gq590KMCJVo by gatesvp@mstdn.ca
2023-05-12T16:22:44Z
0 likes, 0 repeats
@Marvin @rodhilton Because the technology is unregulated and wildly misunderstood, all the "reasonable person" defenses fall down.

Can this all be sorted out in court in the future? Sure. But that court date could be decades in the future, after millions and millions of people are hurt.

In my world, I would like to see some of those damages pre-empted. Doesn't that sound better?
(DIR) Post #AVap6NkCeKLq9bHTQO by Marvin@fans.sonichu.com
2023-05-12T22:29:39.891102Z
0 likes, 0 repeats
I think the distinction you're drawing between AI and algorithm is more appropriate technically, but not legally. Legally, a court will see some kind of process that takes inputs and produces outputs. A more complicated process, even an unauditably complicated process, is still just a process.

Though I agree that the layoff bot process you propose is problematic, in that it could indeed drag in inputs that it legally cannot use in its decisions. So for example, if it's dragging in surnames, a clever civil rights lawyer could definitely make an argument that it might be making illegally discriminatory decisions based on employees' ethnicities. And beyond that just being a possibility, I'm fairly certain it's an inevitability if the companies in question don't actually do their homework and clean their data sources. Big corporations are extremely conservative when it comes to issues like these. The legal department at any big corporation would be shitting itself about putting such a system into place without lots of preparation.

I'm just not convinced that current civil rights law doesn't cover these issues. I also don't trust legislators to write competent law on technical subjects. The laws they write will be clunky, full of stupid corner cases, and will drive away innovation. Even if a company isn't attempting to do something skeevy, just the risk of being caught up in some poorly written regulation would be enough to make entrepreneurs reconsider starting an AI company in a jurisdiction with such regulations.

Like, the EU is considering AI regulations right now, and if they go through, I bet the vast majority of AI developments going forward will be in the US, not the EU. Whether or not that's worth it is for Europeans to decide, but I wouldn't want that in the US.
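The surname concern above is a proxy-variable leak: even after protected fields are dropped, remaining columns (surnames, postal codes) can stand in for ethnicity. As a crude illustration only, a pre-deployment data audit might start by flagging suspect columns by name; the column names and substring list here are hypothetical, and a real audit would also need statistical checks, not just name matching.

```python
# Hypothetical proxy-variable check: flag column names that commonly
# stand in for protected characteristics even when those characteristics
# were never collected directly. Substrings and columns are invented.
def flag_proxy_columns(columns, suspects=("name", "zip", "postcode")):
    """Return the column names that match known proxy patterns."""
    return [c for c in columns if any(s in c.lower() for s in suspects)]

cols = ["employee_id", "surname", "zip_code", "tps_report_lateness_min"]
flagged = flag_proxy_columns(cols)
# flagged == ["surname", "zip_code"]
```

A name-based screen like this is only a first pass; it is the kind of "homework" step the post describes, catching the obvious inputs a civil rights lawyer would point at before the system ever runs.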
(DIR) Post #AVcN0QwFS3KtvHD4qG by raphael_fl@wandering.shop
2023-05-10T17:53:05Z
0 likes, 0 repeats
@rodhilton Wasn't there some kind of cartoon making that point earlier this year?
(DIR) Post #AVcN0RwHjOxZ1gEcXQ by gatesvp@mstdn.ca
2023-05-10T22:15:25Z
0 likes, 2 repeats
@raphael_fl @rodhilton Right here.
(DIR) Post #AW6qdNYgYVleOVM1DM by rose@503junk.house
2023-05-10T22:14:52Z
2 likes, 0 repeats
@rodhilton This is just further mediation and alienation of interactions. We can no longer even ask if "this is a pipe". We only get a summary of an interpretation of a pipe.
(DIR) Post #AW6qeGZe0eDN1PE3qS by tanepiper@tane.codes
2023-05-11T06:39:38Z
1 likes, 0 repeats
@leoncowle @rodhilton Was also going to post this.

The biggest change in the world is the speed at which reality imitates art.
(DIR) Post #AW6qg1AaFjvzpFL5Ps by masukomi@connectified.com
2023-05-10T22:22:48Z
1 likes, 0 repeats
@rose @rodhilton As an autistic person I can assure you that this statement is almost always false:

> Both people in the conversation want to work with directness and brevity, and Google is doing textual steganography in the middle.

Neurotypical people SAY that, but they hate it when you do it. They get upset with you for being "blunt" or "aggressive" when you just tell them the truth plainly and concisely. They want softening words, and to come at things indirectly to protect emotions.
(DIR) Post #AW6qhluNxsk94ZbBnk by Zerglingman@freespeechextremist.com
2023-05-28T09:28:51.226337Z
0 likes, 0 repeats
@rodhilton What did you expect from a toy?
(DIR) Post #AW6qkqdepnqqc0F1kG by iagondiscord@wetdry.world
2023-05-11T04:08:36Z
1 likes, 0 repeats
@raphael_fl @rodhilton What's the difference between satire and reality? About six months.
(DIR) Post #AW6qn9SxnzGVaFL0Qi by jeffcliff@shitposter.club
2023-05-28T09:29:48.805543Z
0 likes, 1 repeats
The difference between satire and reality drops by about half every 18 months.