Post AipLkCs8JM5zJHzpuS by jared@mathstodon.xyz
 (DIR) Post #AipH029XQviuvYrhKa by twylo@mastodon.sdf.org
       2024-06-11T18:15:58Z
       
       0 likes, 0 repeats
       
       I had someone tell me “ChatGPT is very useful even though it gets things wrong sometimes. It’s like an intern that never sleeps. You wouldn’t automatically trust everything an intern said, would you?” But an intern can be held accountable for bad information, and can learn from their mistakes. So, not very similar at all.
       
 (DIR) Post #AipLkCs8JM5zJHzpuS by jared@mathstodon.xyz
       2024-06-11T19:09:05Z
       
       0 likes, 0 repeats
       
       @twylo The "bad intern" argument (let's start calling it a fallacy) annoys me. Would the interlocutor brag about habitually hiring poorly informed interns? Would they be satisfied that their intern requires an entire team, weeks of R&D, and capital expenditures just to improve slightly? Would they be happy that their intern never graduates to become a valued employee and respected colleague?
       
       The "bad intern" fallacy exposes both the disappointing limitations of an LLM (it never improves as readily as a human does) and the callousness of the LLM user (they're not committed to a truly resource-efficient solution to their staffing problem, i.e., education).
       
 (DIR) Post #AipPnFhgdjBqmFom5g by dwineman@xoxo.zone
       2024-06-11T19:54:28Z
       
       0 likes, 0 repeats
       
       @twylo It’s an intern who’s read everything, believes everything they read, and understands none of it