Posts by bhaggart@mastodon.social
 (DIR) Post #AT6ilc0JRnVgw1oH68 by bhaggart@mastodon.social
       2023-02-27T13:44:56Z
       
       0 likes, 0 repeats
       
       I’ve yet to see any discussion regarding the classroom use of #chatgpt even mention the fundamental issues of validity and reliability of LLMs. That LLMs offer neither is a problem that can’t be glossed over just by footnoting when this tech is used.
       
 (DIR) Post #AT6ilcxrsN9HujfpvU by bhaggart@mastodon.social
       2023-02-27T14:21:38Z
       
       0 likes, 1 repeats
       
       If we’re actually serious about education, using #chatgpt needs a better justification than: students are going to use it to cheat anyway. And this dismissal of the essay, backed by the belief that it can be replaced by fact-checking exercises, suggests a lack of understanding of what an essay actually teaches. https://www.theguardian.com/technology/2023/feb/27/chatgpt-allowed-international-baccalaureate-essays-chatbot
       
 (DIR) Post #AUNOy0YKpP5dDdHcn2 by bhaggart@mastodon.social
       2023-04-06T13:23:44Z
       
       0 likes, 0 repeats
       
       @tante That's also the plot summary of Person of Interest's second-season finale.
       
 (DIR) Post #AUoHIM7nHzHm0jUIDo by bhaggart@mastodon.social
       2023-04-19T12:35:06Z
       
       0 likes, 0 repeats
       
       @tarkowski You can only make that Platonic argument by ignoring the human labour involved in shaping the initial computer model, and by ignoring that these programs replicate the average, not the ideal. To carry the analogy further, this tech creates shadows, not forms.
       
 (DIR) Post #AWkloNnPyagQcdz0FM by bhaggart@mastodon.social
       2023-06-16T11:36:11Z
       
       1 likes, 0 repeats
       
       “As AI search takes off, users will need to get into the habit of fact-checking their search results, he said.” Against what? That people are treating a search technology that invariably delivers falsehoods as a revolutionary, positive advance is just bananas. What an amazing statement. There is no universe where it’s a good thing that a search engine company forces people to do a second search to confirm their first, company-provided, result isn’t a straight-up lie. https://www.thestar.com/business/technology/2023/06/15/googles-new-ai-search-function-is-revolutionary-but-dont-believe-everything-it-says-experts-say.html
       
 (DIR) Post #AXJfu8ySfspi2YjM2q by bhaggart@mastodon.social
       2023-07-03T11:50:59Z
       
       0 likes, 0 repeats
       
       @tante Disappointing, but not surprising, as someone else said. The generative AI debate is definitely highlighting some longstanding blind spots held by previously untouchable champions of the internet, namely a lack of concern with private power and a belief that openness cures all. See also: EFF and Facebook/Cambridge Analytica.
       
 (DIR) Post #AXldUUqQdFuGSRZm0e by bhaggart@mastodon.social
       2023-07-16T22:10:38Z
       
       0 likes, 1 repeats
       
       The belief that individual expertise is enough to verify that ChatGPT et al. output is scientifically valid (as in this article) is simultaneously hubristic and anti-science. Written academic output is validated not just (or mainly) by individual expertise but through citations built on citations: a scientific chain of custody. Cites are also a check against individual bias. Absent these, you're left relying on LLM output verified because it sounds good to you. https://slate.com/technology/2023/07/chatgpt-class-prompt-engineering.html
       
 (DIR) Post #AXldUZpC7GtFu42MHQ by bhaggart@mastodon.social
       2023-07-16T22:14:49Z
       
       0 likes, 0 repeats
       
       I mean, the attitude that "I'm an expert in X, so I can confirm that this probabilistically generated text is accurate" just screams "confirmation bias."