Posts by bwyble@neuromatch.social
(DIR) Post #ARP29MjFXOFelh7gTw by bwyble@neuromatch.social
2023-01-07T14:40:35Z
0 likes, 0 repeats
@tiago @networkscience @academicchatter I had wondered if changes in citation practices might be the cause of the trend.
(DIR) Post #ASJ67lXF8v9pmPrGdc by bwyble@neuromatch.social
2023-02-03T15:49:42Z
0 likes, 0 repeats
@TedUnderwood This is why people are right to be ringing the alarm bells. What seems obvious to some of us who understand such models (even vaguely) is not obvious to others.
(DIR) Post #ATsJAmRu3jUfh6Qfey by bwyble@neuromatch.social
2023-03-22T13:22:53Z
0 likes, 0 repeats
@garymarcus I think your video about AI is very well done, but I think you are setting the bar for intelligence too low. Watching a movie and describing what happened, even the motivations of the actors, is a speedbump that AI will cross in the next few years. Movies rely on such common narrative tropes that an AI with the ability to parse video will quickly learn the archetypes present in modern movies and decode them from a new movie just as well as ChatGPT can spin up descriptions of textual narratives. The real signs of intelligence will be in effective metacognition, judgement, morality, and ethics. These are the really hard problems. As I've said before, not only are LLMs unable to differentiate truth from fiction, they cannot discern that they *should* make this distinction. The very notion of right and wrong behavior does not seem to be represented (beyond the trivial sense of trying to predict the next word). Even worse, LLMs don't seem to have a clear notion of "should", since they are trained to respond in a way that is consistent with the central tendency of their training distribution for any given query.
(DIR) Post #ATspICYga3RbEPBTt2 by bwyble@neuromatch.social
2023-03-22T19:22:47Z
0 likes, 0 repeats
@garymarcus We've already got the basics of image->semantics working in AI. The question is whether video->intention is straightforward. I think it may not be in general from random real-world video, but may be easier in movies.
(DIR) Post #AU1QMHgXZplgyLng8W by bwyble@neuromatch.social
2023-03-26T22:29:13Z
0 likes, 0 repeats
@pfessenbecker @TedUnderwood the point of writing the college essays isn't the essays, it's the practice of composing thoughts. LLMs don't change that. All we're learning here is that many people don't understand how education works.
(DIR) Post #AUPNFYVY1fuTrL8gZk by bwyble@neuromatch.social
2023-04-06T02:52:03Z
0 likes, 0 repeats
@pbloem I recently asked GPT-4 to help me understand how to do something with Amazon's Elastic Beanstalk. It took some basic advice on configuring EC2 instances and claimed that it was about Elastic Beanstalk. It took me a few minutes to figure out that it was just stringing things together. I went back to Google and found the answer I needed.
(DIR) Post #AUPNFZmFJ4rRnPxXF2 by bwyble@neuromatch.social
2023-04-06T11:05:56Z
0 likes, 0 repeats
@pbloem isn't that just another way of saying that it's a stochastic parrot? BTW, I run into these situations quite a lot in my efforts to use GPT-4. I wouldn't say it's particularly rare.
(DIR) Post #AUPNFaxysvqHU6SQAi by bwyble@neuromatch.social
2023-04-06T21:02:17Z
0 likes, 0 repeats
@pbloem how do you know it can generalize rather than just interpolate?
(DIR) Post #AUPNFcDGFbevLmc8cy by bwyble@neuromatch.social
2023-04-07T11:35:06Z
0 likes, 0 repeats
@pbloem The difficulty with GPT is that we don't know what is in its training set. We can be comfortably certain that it has digested articles that explain how satire works, and probably even articles that explain the Onion's flavor of satire. I'm not saying that it's impossible that LLMs generalize but rather that it's very hard to demonstrate that this is so given their massive input, the list of which is inaccessible to us.
(DIR) Post #AUPNFdXVJpRhSr5ooq by bwyble@neuromatch.social
2023-04-07T12:05:38Z
0 likes, 0 repeats
@pbloem Creating a new thing is not necessarily generalization. Even the equation y = ax + b can generate novel exemplars along its line. It depends on how well the training distribution covered this space. Whether something is interpolation vs generalization is extremely hard to know for high-dimensional spaces and large, unknown training sets.
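A minimal sketch of the point above, with hypothetical data (not from the thread): a line fitted to a handful of training points will happily emit outputs at inputs it never saw, and every one of them is "novel" yet entirely determined by interpolation along the fitted line.

    # Minimal sketch (hypothetical data): a fitted y = ax + b produces "novel"
    # exemplars it was never trained on, purely by interpolation.
    import numpy as np

    # Tiny training set sampled from y = 2x + 1 with a little noise
    rng = np.random.default_rng(0)
    x_train = np.array([0.0, 1.0, 2.0, 5.0, 6.0])
    y_train = 2.0 * x_train + 1.0 + rng.normal(0.0, 0.05, size=x_train.shape)

    # Least-squares fit of slope a and intercept b
    a, b = np.polyfit(x_train, y_train, deg=1)

    # Query points the model never saw, all inside the training range
    x_new = np.array([0.5, 3.0, 4.2])
    y_new = a * x_new + b   # "novel exemplars along its line"

    print(y_new)  # new outputs, but pure interpolation, not generalization

Whether the same distinction can even be drawn for a model with billions of parameters and an unknown training set is exactly the difficulty raised above.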
(DIR) Post #AUPOfxffBziRNv1enQ by bwyble@neuromatch.social
2023-04-07T12:30:02Z
0 likes, 0 repeats
@TedUnderwood @pbloem that equation does generalize, yes, but it also fills in an infinity of data points between training points.
(DIR) Post #AUn2jYW5dDcbn9P5jU by bwyble@neuromatch.social
2023-04-18T22:17:12Z
0 likes, 0 repeats
@vriska the neuromatch.social server lets you post 10000-character messages on Mastodon. Maybe that would do?