[HN Gopher] Teaching with AI
___________________________________________________________________
Teaching with AI
Author : todsacerdoti
Score : 245 points
Date : 2023-08-31 17:00 UTC (5 hours ago)
(HTM) web link (openai.com)
(TXT) w3m dump (openai.com)
| jjcm wrote:
| I've personally found AI to be a great help whenever I'm diving
| into a topic that I'm less familiar with. Recently I used it to
| help me prep for an interview as well. My partner uses it to help
| explain STEM concepts that she didn't cover in her schooling.
|
| I do wonder how far away we are from an actual Young Lady's
| Illustrated Primer. Three years ago I'd say we were 50 years
| away. Now it feels more like 10.
| shepardrtc wrote:
| The same for me, I love it for this sort of thing. I can bounce
| ideas off of it and it'll give me a solid response without
| getting tired of my questioning. And it'll explain in detail
| why I'm wrong. I really can't express how useful this is for my
| style of learning - I like to take things apart and figure out
| how they go back together.
| jeremyjh wrote:
| > Three years ago I'd say we were 50 years away. Now it feels
| more like 10.
|
| I think those agents could actually reason though. LLMs do not
| do any reasoning. They produce plausibly reasonable text.
| yoyohello13 wrote:
| I just don't know about this. I also find its answers great
| when I'm not familiar with a topic. However, when I am
| familiar with a topic I find all sorts of inconsistencies
| or wrong facts. I'm
| concerned the same inconsistencies are there in the topics I'm
| not familiar with, I just don't know enough about the subject
| to spot them.
| azertykeys wrote:
| Anecdotally, my friend who's just starting out teaching high
| school physics has used ChatGPT to generate worksheet questions
| with mixed results, having to throw out the majority of what it
| generates, but still saving time overall
| pixl97 wrote:
| Just asking it to make up 10 questions isn't a great way of
| doing it most of the time.
|
| It turns out that writing a single good question is really a
| bunch of different questions in itself. You have to ask of
| each question: "How can this be misinterpreted?", "Can the
| question be written better?", "Is this a challenging question
| that actually causes a person to learn?"
|
| A lot of human-generated questions are just confusing hot
| garbage in and of themselves. Quite often we encode cultural
| biases in the questions. Or a person who actually knows the
| topic can get the question wrong, if they were only supposed
| to formulate the answer from the paragraph shown to them.
|
| The AI Explained channel on youtube just had an episode about
| this in relation to the tests we're giving AI. Turns out a lot
| of the questions just suck.
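| The review loop described in that comment can be sketched in a
| few lines. This is only an illustration: `ask_llm` is a
| hypothetical placeholder for whatever chat-completion call you
| use, stubbed out here so the script runs on its own.

```python
# Sketch of the review loop: generate one question at a time and run
# each draft through a checklist of critiques before revising it.

REVIEW_PROMPTS = [
    "How can this question be misinterpreted?",
    "Can the question be written more clearly?",
    "Does answering it actually cause a person to learn?",
]

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call."""
    return f"[model response to: {prompt.splitlines()[0]}]"

def generate_reviewed_question(topic: str) -> dict:
    # One draft at a time, not ten in one shot.
    draft = ask_llm(f"Write one worksheet question about {topic}.")
    # Run every review prompt against the draft.
    critiques = {p: ask_llm(f"{p}\n\nQuestion: {draft}") for p in REVIEW_PROMPTS}
    # Ask for a rewrite that addresses the critiques.
    final = ask_llm(
        "Rewrite the question, addressing every critique.\n"
        f"Question: {draft}\nCritiques: {critiques}"
    )
    return {"draft": draft, "critiques": critiques, "final": final}

worksheet = [generate_reviewed_question("Newton's second law") for _ in range(3)]
```

| The point is structural: each draft question gets its own round
| of critiques before it is rewritten, instead of asking for a
| whole worksheet in one request.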
| MostlyStable wrote:
| Surprised not to see any discussion here of Khanmigo[0], which I
| believe has been using GPT-4 as a tutor for quite a while now (in
| a beta form). It's been long enough that I've actually been
| (idly) trying to find any efficacy data. I'm sure that by now
| Khan academy has it, but I haven't seen them release it anywhere.
|
| The famous tutoring 2-sigma result (referenced elsewhere in
| the comments) only took place over 6 weeks of learning, and
| Khanmigo should have over 6 months (I believe) of data by
| this point.
|
| [0]https://www.khanacademy.org/khan-labs
| rcarr wrote:
| I don't understand why they've made it US only during the beta
| period, seems like a weird decision.
| j_gravestein wrote:
| Finally OpenAI admits AI detectors are useless.
| jheriko wrote:
| [flagged]
| dtnewman wrote:
| I built a chrome plugin called Revision History [1] that I
| released about 2.5 weeks ago, so I've been talking to a good
| number of educators about this recently. I'd say the majority of
| teachers are terrified of AI, because it means that they have to
| completely change how they teach with only a few months' notice.
| It's not easy to change your lesson plans or assignment
| structures that quickly and it'll take time to see where this all
| lands.
|
| Some teachers are looking for ways _not to adapt_, which is why
| there's a surge of interest in AI detection (which doesn't work
| well), but the sharpest educators I talk to are cognizant of the
| fact that there is no going back. So the plan is to incorporate
| AI into their curriculum and try to make assignments more "AI
| proof". This means more in-class work (e.g., the "flipped
| classroom" model [2]). Others are looking for ways to encourage
| students to use AI on assignments, but to revise and annotate
| what AI generates for them (this is what I am marketing my plugin
| for). Either way, it's going to be very rough over the next few
| years as educators scramble to keep up with a monstrous change
| that came about practically overnight.
|
| [1] https://www.revisionhistory.com. The plugin helps teachers
| see the students' process for drafting papers, unlike many
| other plugins that are trying to be "AI detectors".
|
| [2] https://bokcenter.harvard.edu/flipped-
| classrooms#:~:text=A%2....
| nomel wrote:
| > because it means that they have to completely change how they
| teach with only a few months' notice
|
| Why? Doesn't it only mean they need to change how they _test_
| understanding?
| Wowfunhappy wrote:
| I have to say, I think I would have _hated_ this growing up. I
| have a tendency to become emotionally-invested in the quality
| of my writing, and I don't like people seeing it in a state I
| don't consider presentable.
|
| Maybe this tool would have forced me to get over that, I don't
| know.
| WhiteSabbath wrote:
| Never know until you try
| all2 wrote:
| I had a college professor (English 101) tell the class on the
| first day, "You're going to learn how to mutilate and kill
| your babies." He was brutal in draft reviews, but he pushed
| me to learn the process of drafting and writing. I produced
| work in that class that I didn't think I was capable of.
| dtnewman wrote:
| Thank you for the feedback. My wife feels this way as well.
| It's good to hear perspectives on it.
| moffkalast wrote:
| The fact that homework has become so prevalent that it takes
| more than an hour each day (on top of an already full-time
| job's worth of classes) is a crime against childhood anyway;
| good riddance.
| TillE wrote:
| Amen. I did great in school, except I _hated_ homework from a
| very early age onwards. Tedious crap I already understood.
|
| The occasional take-home project is probably fine, but
| otherwise let school be school, and leave it at that.
| wudangmonk wrote:
| I was frustrated with the sheer amount of essays I was helping
| out my nephew with. It felt like every single week he had an
| essay from multiple classes, and it was always the same b.s
| for every single one. I just couldn't see the point of having
| to do so many of the same type of "research" assignment over
| and over again.
|
| When chatgpt beta first became available I was overjoyed, it
| worked wonders. It worked so well that I figured teachers
| would have to let go of the essay crutch they had been
| relying on so much.
| AuryGlenz wrote:
| My high school experience was largely teachers mentally
| assuming that their homework was the only homework we were
| assigned, and not realizing that the time we'd need to
| spend on all of it was 6 or 7 times what they personally
| did.
|
| I had one teacher assign four separate (essentially busy-
| work) assignments over Christmas break. Ridiculous.
| Workaccount2 wrote:
| I can't help but feel that people are completely missing the
| forest for the trees with AI and education. Which isn't
| particularly surprising when you realize most people have
| never made the connection that the primary point of education
| is to produce effective economic contributors for society,
| rather than it being something you do just because it's what
| we do.
|
| We are going to use powerful AI to teach kids to do jobs that
| AI will almost certainly do better in 10-20 years?
|
| Like I get that there is a notion of "What else are we supposed
| to do?", but it still just feels so silly and futile to go
| along with. Like "Lets use AI to teach kids how to
| program!"....uhhh, the writing is on the wall
| spion wrote:
| Its not certain we'll get to that point, and if we do we'll
| probably need to rethink society as a whole. We have a lot of
| training data on human knowledge, discussion and Q&A, but
| very little on humans actually working and going through
| their thought process, which I suspect is why projects like
| AutoGPT aren't really that good [1].
|
| [1]: https://www.reddit.com/r/AutoGPT/comments/13z5z3a/autogp
| t_is...
|
| Relatively high fidelity and public data for some domains
| does exist, however (think all github commits, issues,
| discussions and pull requests as a whole). For those domains,
| it indeed might be only a matter of time.
| earthboundkid wrote:
| It's been so long since calculators hit that I guess we all
| forgot what that was like, but Wolfram Alpha can solve all of
| the problems in a typical math textbook. Now writing based
| classes have the same problems, but the solutions are pretty
| similar:
|
| - Make kids show their work (outlines, revision histories)
|
| - Retool to focus on the things where the tools can't do all
| the work (proofs, diagrams, word problems for math; research,
| note gathering, synthesizing for writing)
|
| - After kids learn the basics, incorporate the tools into the
| class in a semi-realistic way (using a TI-whatever in the
| last years of high school math education)
| loandbehold wrote:
| ChatGPT can also "show work". Requiring students to show
| work doesn't prevent cheating.
| throwuwu wrote:
| Show work in the context of chatgpt means show each
| prompt and response and any edits or collation that you
| performed.
| redox99 wrote:
| > Wolfram Alpha can solve all of the problems in a typical
| math textbook
|
| I mostly disagree (and I used WolframAlpha extensively
| during my engineering education). Wolfram can solve
| well-encapsulated tasks (solve for X, find the integral, etc).
| Even then often it gives you a huge expression, whereas
| doing it by hand you can achieve a much simpler expression.
|
| It doesn't really handle complex problems, at least like
| the ones you'd find in a college level math or physics
| course. It can be a tool for solving certain steps within
| those problems (like a calculator), but you can't plug the
| whole thing into Wolfram and get an answer.
|
| GPT4 is kinda OK at this, like 50%+ success rate probably,
| highly dependent on how common the problem is.
| danenania wrote:
| "We are going to use powerful AI to teach kids to do jobs
| that AI will almost certainly do better in 10-20 years?"
|
| I think understanding how to work well with AI and what its
| limitations are will be helpful regardless of what the
| outcome is.
|
| Even if silicon brains achieve AGI or super-intelligence, I
| think it's highly unlikely that they will supersede
| biological brains along every dimension. Biological brains
| use physical processes that we have very little understanding
| of, and so they will likely not be possible to fully mimic in
| the foreseeable future even with AGI. We don't know exactly
| how we'll fit in and be able to continue being useful in the
| hypothetical AGI/super-intelligence scenario, but I think
| it's almost certain there will be gaps of various kinds that
| will require human brains to be in the loop to get the best
| results.
|
| And even if we _do_ assume that humans get superseded in
| every conceivable way, AGI does not imply infinite capacity,
| and work is not zero sum. Even if AI completely takes over
| for all the most important problems (for some definition of
| important), there will always be problems left over.
|
| Right now, just because you aren't the best gardener in the
| world (or even if you're one of the worst), that doesn't mean
| you couldn't make the area around where you live greener and
| more beautiful if you spent a few months on it. There is
| always some contribution you can make to making life better.
| isaacremuant wrote:
| > make effective economic contributors in your society,
| rather than just being something you do because it's just
| what we do.
|
| That's your utilitarian take that would eliminate many things
| you would consider superfluous. Social sciences, history,
| art, useful personal skills that might not be directly to the
| economy (like home related stuff or questioning authority)
|
| It should be about empowering citizens in many ways
| regardless of how much they'll end up contributing to
| society. As for historical examples: women and wealthy people
| who couldn't work, or didn't need to, still studied.
|
| If you subscribe to the purely resource exploitation view,
| you end up on the road to optimize for the elites in power
| through lobbying and corporate manipulation.
|
| Having a population that thinks for itself may actually
| lead to more unrest and economic uncertainty, hardly the
| KPIs that corporations usually love.
|
| Of course, corps will want education to be a specialized
| training ground for future human resource exploitation and
| govs might want to create voters for their party (or nation
| build through ideology and principles) but just because
| either might get their way to a high degree doesn't mean that
| "the primary purpose" is what they get away with.
| bbor wrote:
| The purpose of education is not labor preparation :)
| Workaccount2 wrote:
| It absolutely, unequivocally is. People can romanticize it
| any way they want, but formal education is very different
| from religion camp, painting class, or spiritual retreat
| training.
|
| I feel for people who don't get this or perhaps never
| contemplated it, but the system is designed to breed good
| workers and sort them into bins. And it's not a bad thing
| either. Sure, there are non-economic self-contained
| benefits, but those are perks, not purposes.
| A4ET8a8uTh0 wrote:
| I think, and this does not directly contradict your post
| because I do think you are not far off, that formal
| education is supposed to help a person find their place
| in society. Not everyone becomes a plumber, electrician,
| lawyer, MBA, or engineer. Some become artists, activists
| or, heavens forfend, politicians.
| sanderjd wrote:
| It absolutely, unequivocally, is _one_ of the reasons
| universal education to a general level is a valuable
| investment for a society.
|
| But it is not the _only_ reason, or (in my view) even the
| most important reason.
|
| Maybe before assuming people haven't contemplated what
| you're saying, you could try to contemplate what else
| general education might be buying us. Maybe by imagining
| how it would look if school was actually just job
| training starting in elementary school, rather than
| covering all these other things.
| lewhoo wrote:
| I share your view on this. I guess it all hinges on
| whether AGI is possible and, if so, how fast it is
| coming. If we don't achieve AGI, then education is still
| necessary to push knowledge further, and since we don't
| know who is going to achieve this, it makes sense (right
| now) to still push education for all as a social
| obligation.
| [deleted]
| saint_fiasco wrote:
| It is one important purpose, but education has lots of
| stakeholders who each have their own purposes.
|
| The students want to learn things and socialize with
| peers.
|
| Teachers want to teach, earn a living, get respect of
| society.
|
| Parents want their children to be taught, but also want
| their kids to be taken care of by other adults so they
| can go to work in peace. Poorer parents in particular
| also need their kids to be fed and sometimes schools have
| to do that too.
|
| Governments want an educated citizenry that is
| productive, pays taxes, knows the basics of law, civics
| and so on. They also want to monitor and protect
| unfortunate children who have bad parents.
|
| If schools only had one purpose you wouldn't see the
| stakeholders fight each other so often. But in reality
| parents fight governments over the curriculum, students
| fight teachers over the amount of work, teachers fight
| government/parents over their wage and so on.
| cheonic7394 wrote:
| > The students want to learn things and socialize with
| peers.
|
| No. Students want to socialize with peers or play
| sports/video games. Not learn.
|
| > Teachers want to teach, earn a living, get respect of
| society.
|
| This is correct
|
| > Parents want their children to be taught, but also want
| their kids to be taken care of by other adults so they
| can go to work in peace. Poorer parents in particular
| also need their kids to be fed and sometimes schools have
| to do that too.
|
| Also correct. Parents want schools to be daycare, or for
| elite families, schools are networking opportunities
|
| > Governments want an educated citizenry that is
| productive, pays taxes, knows the basics of law, civics
| and so on. They also want to monitor and protect
| unfortunate children who have bad parents.
|
| Correct. But a population can be productive while being
| largely uneducated (see China)
|
| But despite the different priorities of the groups, "the
| student learning" is not one of them.
| conception wrote:
| > No. Students want to socialize with peers or play
| sports/video games. Not learn.
|
| This is frightfully incorrect. Students definitely love
| to learn. They do not like to be stuffed in a chair and
| lectured at and forced to do rote activities. But who
| does?
| OkayPhysicist wrote:
| > But a population can be productive while being largely
| uneducated (see China)
|
| China's a terrible example in trying to support your
| point. If the pitch is "education makes better workers"
| then you shouldn't be looking at GDP, you should be
| looking at GDP per capita, aka "Are the workers more
| productive in more educated countries?". And China has a
| terrible GDP per capita. It ranks 64th in the world to
| the US's 7th. Applying a slightly more rigorous comparison
| across the world, there's a clear correlation between
| GDP/capita and average educational attainment.
|
| And you have a very dismal view of students. In my area,
| at least at the honors level, students were pretty well
| engaged in learning. Now, that was mostly in order to get
| into good colleges and appease their parents' desire for
| them to learn, but they definitely were eager to have the
| knowledge that was being taught. By the time you get to
| college, a fair fraction of the students are truly
| engaged with the material for the material's sake. Even
| moreso in degrees that aren't glorified trade school
| programs.
| sanderjd wrote:
| I don't think the population of China is "largely
| uneducated" in the sense that began this thread. It is
| not rare for Chinese kids to go to school, and those
| schools are not only used for job training.
| all2 wrote:
| And most of that fighting goes away when parents assume
| the responsibility of teaching their own children. This
| particular responsibility is presently only available to
| those who build their lives around the idea of a nuclear
| family and home schooling. I used to think that one had
| to achieve some middle/upper class financial status to
| make this viable (and having money does make this
| easier), but I've seen poor families manage home
| schooling quite well. This requires community (a church
| with others who are home schooling, a home school co-op)
| because the kiddos will age out of your ability to teach
| rather quickly (10-12 and suddenly they're doing math you
| haven't touched in 2 decades, or more involved history or
| literature that the average parent may not be equipped to
| teach well, or electives that fall outside the experience
| of the parents).
| sanderjd wrote:
| > _And most of that fighting goes away when parents
| assume the responsibility of teaching their own
| children._
|
| No, because you've forgotten one of the important
| stakeholders here, which is society at large, which has an
| interest in ensuring a general level of shared education.
| Which once again results in fighting, as parents who are
| teaching their own children run up against government
| requirements that they may not agree with.
| tivert wrote:
| >> The purpose of education is not labor preparation :)
|
| > It absolutely, unequivocally is. People can romanticize
| it anyway they want, but formal education is very
| different than religion camp, painting class, or
| spiritual retreat training.
|
| No it isn't "absolutely, unequivocally." What _specific_
| formal education are you talking about?
|
| Especially in the past, but continuing somewhat into the
| present-day, formal education has mainly been about
| _enculturation_, and not "labor preparation." That can be
| seen clearly in the former emphasis on dead classical
| languages and the continued (though lessened) emphasis on
| literature and similar subjects. There's zero value in
| reading Shakespeare or Lord of the Flies from a "labor
| preparation" standpoint.
|
| However, I do see a modern trend where many people are so
| degraded by economics that they have trouble perceiving
| or thinking about anything except through the lens of
| economics or some economics-adjacent subject.
| ifyoubuildit wrote:
| > There's zero value in reading Shakespeare or Lord of
| the Flies from a "labor preparation" standpoint.
|
| There is zero labor prep value in learning to extract
| information from text (that you possibly have no interest
| in reading)?
| Workaccount2 wrote:
| Enculturation is just to make it so people who otherwise
| cannot bear much economic fruit can at least not be
| producing negative value. Studying the classics, if
| nothing else, should at least produce a well-adjusted
| human. That in and of itself has value.
|
| But those are the fringes of the education system. The
| core focus is on producing high value citizens that will
| produce far more than they take. This is abundantly clear
| if you look at the social valuations of high caliber
| students with fruitful degrees.
| lemmsjid wrote:
| One of the initial proponents of public education in
| America, Horace Mann, saw education in a two pronged
| manner.
|
| First, a functional democracy requires that the
| citizenship be well informed and capable of critical
| thinking: "A republican form of government, without
| intelligence in the people, must be, on a vast scale,
| what a mad-house, without superintendent or keepers,
| would be on a small one."
|
| He also saw the economic side, saying that education was
| an equalizer for people in terms of helping them to reach
| their full potential.
|
| I quite agree with his assessment. In a system where
| everyone has a vote, it becomes quite important that
| everyone have a sense of things that extends beyond their
| career vocation. His imagery of an uneducated republic
| being a madhouse makes much sense from this perspective.
|
| Insofar as we have given up any optimism about the
| democratic enterprise, then certainly we could look at
| education as purely to put people into economic bins, but
| at least in my own public school education in the US,
| every student did get significant doses of math, history,
| science, etc., outside of their expected career
| direction.
|
| This to me suggests that there is a tension, not fully
| resolved and HOPEFULLY never fully resolved, between
| education-for-economics and education-for-democracy. I
| think it's quite pessimistic though to give up the ghost
| on the education-for-democracy aspect.
| kaibee wrote:
| > It absolutely, unequivocally is.
|
| This is a category error. You're talking about the
| education system as though it was designed from accurate
| first principles towards a specific intended outcome.
| Like, you can say that the absolute unequivocal purpose
| of a nuclear reactor is to heat water. But when we're
| looking at sociopolitical organizations, that have been
| codified through various political forces over tens of
| generations, through the demands of ever-shifting
| stakeholders, etc this is not a useful framing.
|
| > but the system is designed to breed good workers and
| sort them into bins. And it's not a bad thing either.
| Sure, there are non-economic self-contained benefits, but
| those are perks, not purposes.
|
| I think a more accurate framing is that the system is
| currently evolved into strongly emphasizing this mode of
| behavior.
| borroka wrote:
| This is true, and it is puzzling how people think that
| there are geniuses, evil or otherwise, who have planned
| the educational system so as to achieve some sort of
| results for the society at large that go beyond the
| mundane.
|
| Where the mundane is keeping young people out of the
| streets, maybe teach them arithmetic and some grammar.
| And the leaders, that is, the teachers, want, most of the
| time, just to bring home a salary, not funnel the masses
| from schools to office desks or assembly lines.
| bbor wrote:
| Ok it's hard to say anything "absolute" about the purpose
| of education since it's a philosophical/political stance
| and not a physical phenomenon, but I appreciate your
| cynicism. I see how the rich and powerful have shaped our
| public education institutions, and agree that American
| schools at least often push students into rote labor-
| focused paths.
|
| That said, the discussion is about the purpose of
| classrooms in a world of AI, and I think it's a good time
| to remember the less economic purposes of education that
| have always been there under the surface. I think few
| teachers are more driven by bringing economic benefits to
| their students than enriching/exciting/interesting them,
| and secondary and post secondary education has always had
| a huge variety of non-occupational courses, from ancient
| history to obscure languages to nice math.
|
| Overall, I imagine we agree on the most important thing:
| if education does end up changing immensely as AGI gains
| footing, we should change it to be less economic
| bheadmaster wrote:
| Depends on who you ask.
|
| "Purpose" in an entirely subjective thing.
| albumen wrote:
| Workaccount2 beat me to it. But it's well documented,
| e.g. https://qz.com/1314814/universal-education-was-
| first-promote...
|
| "Much of this education, however, was not technical in
| nature but social and moral. Workers who had always spent
| their working days in a domestic setting, had to be
| taught to follow orders, to respect the space and
| property rights of others, be punctual, docile, and
| sober. The early industrial capitalists spent a great
| deal of effort and time in the social conditioning of
| their labor force, especially in Sunday schools which
| were designed to inculcate middle class values and
| attitudes, so as to make the workers more susceptible to
| the incentives that the factory needed."
| bbor wrote:
| I really take issue with describing it as a well-documented,
| absolute fact that education is about labor. It isn't the
| 1850s! Just because capitalists
| interested in skilled labor were "some of" the biggest
| supporters of English public schools in the 1850s doesn't
| mean we should forever commit our society to their
| designs.
|
| From the Marxist paper backing that article:
| England initiated a sequence of reforms in its education
| system since the 1830s and literacy rates gradually
| increased. The process was _initially motivated by a
| variety of reasons_ such as religion, enlightenment,
| social control, moral conformity, socio-political
| stability, and military efficiency, as was the case in
| other European countries (e.g., Germany, France, Holland,
| Switzerland) that had supported public education much
| earlier. However, in light of the modest demand for
| skills and literacy by the capitalists, the level of
| governmental support was rather small. In the second
| phase of the Industrial Revolution, consistent with the
| proposed hypothesis, the demand for skilled labor in the
| growing industrial sector markedly increased (Cipolla
| 1969 and Kirby 2003) and the proportion of children aged
| 5 to 14 in primary schools increased from 11% in 1855 to
| 25% in 1870 (Flora et al. 1983).
|
| Sorry if I sound challenging or rude - it just hurts my
| soul to imagine people giving in to the capitalists'
| desire for us to interpret our prison as a fact of nature.
| ctoth wrote:
| Nah, POSIWID: the purpose of a system is what it does.
| bheadmaster wrote:
| Depends on how you define "purpose".
|
| To me it means "the reason why <person> does <thing>", so
| the phrase "purpose of a system" doesn't make sense
| without a particular human subject who's interacting with
| the system.
| jimhefferon wrote:
| Certainly a major purpose.
| vlark wrote:
| It is today. It used to not be. Blame Reagan when he was
| governor of California:
| https://www.chronicle.com/article/the-day-the-purpose-of-
| col...
| taneem wrote:
| It's hard to stare in the face of the abyss.
| sanderjd wrote:
| I'm glad I learned how to figure out the shape of a function,
| even though graphing calculators were already a mature
| technology at the time I was learning that (and had been even
| more obsoleted by jupyter notebooks by the time I entered the
| workforce).
|
| It's all very tricky to figure out what foundational knowledge
| will be useful in 15 years (that's why we pay educators the
| big bucks ... oh wait ...), but just because it's hard and
| uncertain doesn't mean it isn't valuable to try to figure it
| out.
| sebzim4500 wrote:
| >the primary point of education is to make effective economic
| contributors in your society
|
| I see zero evidence that this is true. This is not the stated
| nor revealed preference of a significant portion of the
| population.
| [deleted]
| atomicUpdate wrote:
| People don't spend thousands of dollars and years of their
| lives to get a degree because the process is inherently
| fun. The reason to go to college is so you can
| get a good job that pays more than if you stopped at high
| school. Same thing with getting a high school diploma
| (though, less so).
|
| What more evidence than that do you need?
| actionfromafar wrote:
| I think you conflate economic contribution and good job
| too much. They often don't overlap much.
| geek_at wrote:
| Good job!
|
| Btw your site is exposing the .git directory
| https://www.revisionhistory.com/.git/config
|
| Might want to set a filter rule for that
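| One common form of that filter rule, assuming the site sits
| behind nginx (the actual server isn't stated here), is a
| catch-all block for dotfiles:

```nginx
# Deny access to any hidden file or directory (.git, .env, .htaccess, ...)
location ~ /\. {
    deny all;
    return 404;
}
```

| Apache has an equivalent via `RedirectMatch 404 "/\..*$"`, and
| most static hosts let you exclude dotfiles at deploy time
| instead of at serve time.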
| imachine1980_ wrote:
| Is there an extension to do that, or do you do it manually?
| I have ADHD, so this type of tool saves my life. Something
| similar happened to me a few months back; I killed the VM
| host and made a new OS. Maybe overboard, but it works.
| geek_at wrote:
| I'm using DotGit [1] which checks for .git and .env files
| for every site you visit. You wouldn't believe the things I
| randomly found (and reported)
|
| https://chrome.google.com/webstore/detail/dotgit/pampamgoih
| g...
| dtnewman wrote:
| OMG... feedback like this is soooo helpful! Thank you.
| Nothing concerning in the .git directory, but yeah, I
| probably shouldn't be showing that. I will update my sync
| process to exclude that. Thank you!
|
| Edit: should be fixed now :)
| HellsMaddy wrote:
| You were just being true to your name, by offering _your_
| revision history!
|
| On a more serious note, this is a great example of how to
| handle a vulnerability report - fix it, change your
| processes, and say thank you! (geek_at could probably have
| done better by disclosing this in private first, though)
| geek_at wrote:
| I figured there wouldn't be any secrets in the git, and if
| your site is on hacker news (or at the top of a comment
| thread on hn) you are glued to it, so I thought they'd fix
| it fast.
| floydianspiral wrote:
| Just wanted to say I had this _exact_ same idea a week ago and
| was googling around to see if anyone had done this yet. I guess
| I don't have to build it now haha. Hopefully you can sell this
| to universities/the right people and make some headway on it!
| dtnewman wrote:
| There are a bunch that purport to do "AI detection" and a few
| others that are similar to mine (and more coming, I'm sure),
| but I like to think that mine is the most convenient to use
| :)
| vlark wrote:
| How does your extension differ from Draftback?(https://chrome.g
| oogle.com/webstore/detail/draftback/nnajoiem...)
|
| Just curious.
| dtnewman wrote:
| Not entirely different but built more specifically for
| teachers, so that they get relevant information without
| having to watch the video every time. Also, draftback doesn't
| integrate with Google Classroom.
| TechBro8615 wrote:
| If you're not using the latest, best tools available to teach
| your students - and if you're not teaching them about those
| tools - then you are a bad teacher. Period.
|
| Language models should be introduced in classrooms because
| they're a part of society now, and they're here to stay. Kids
| should learn about them - how they work, where they came from,
| how to use them - just like they should learn how to type or
| send an email.
|
| It does remind me of my experience as a middle schooler in
| 2002, when our class took a trip to the library, and the
| librarian gave us a lesson on "how to use search engines
| properly." In retrospect, the societal worry at the time was
| about search engines replacing librarians, so it was perhaps
| notable that this librarian had the humility to teach us how to
| use her "replacement." Surely the same applies to teachers and
| ChatGPT: a good teacher will not be worried about whatever
| impact ChatGPT might have on them personally, but will instead
| take the opportunity to teach their students about the new
| horizons opened up by this technology.
|
| (The funny part of that seminar in the library was that the
| lesson emphasized the need to construct efficient, keyword-
| based queries, rather than asking natural language questions to
| the search engine directly - but twenty years later we've come
| full circle and now you actually can just ask your question to
| the language models.)
| [deleted]
| grozmovoi wrote:
| That's how I've been using it for a week now to clarify certain
| concepts from computer science that I always had little
| confidence in and it has been excellent.
| theprivacydad wrote:
| The main issue that is not addressed is that students need points
| to pass their subjects and get a high school diploma. LLMs are a
| magical shortcut to these points for many students, and therefore
| very tempting to use, for a number of normal reasons (time-
| shortage, laziness, fatigue, not comprehending, insecurity,
| parental pressure, status, etc.). This is the current, urgent
| problem with ChatGPT in schools that is not being addressed well.
|
| Anyone who has spent some time with ChatGPT knows that the 'show
| your work' (plan, outline, draft, etc.) argument is moot, because
| AI can retroactively produce all of these earlier drafts and
| plans.
| phalangion wrote:
| I suspect it's not being addressed well because it's one of the
| fundamental challenges of school in the first place. For many,
| assessment and grades are the end goal, and any learning that
| happens is secondary.
| blueboo wrote:
| The status quo is a miserable mess, but consider that
| "assessment and grades" are the best apparent evidence of the
| ultimate goal, learning. Is that not reasonable for people who
| pay for education to ask for?
|
| If it is reasonable, then the problem is likely the form of
| evidence and not its requirement per se.
| jorgemf wrote:
| I think your argument is similar to the one we had with
| calculators and later with the Internet. I think ChatGPT is
| another tool. For sure there are going to be lazy people who
| use it and won't learn anything, but it's also sure to be a
| boost for so many people. We will adapt.
| bloppe wrote:
| Calculators solve problems that have exactly one correct
| answer. You cannot plagiarize a calculator. They are easy to
| incorporate into a math curriculum while ensuring that it
| stays educationally valuable to the students.
|
| LLM's, the internet, even physical books all tend to deal
| primarily with subjective matters that can be plagiarized.
| They're not fundamentally different from each other; the more
| advanced technologies like search engines or LLM's simply
| make it easier to find relevant content that can be copied.
| They actually remove the need for students to think for
| themselves in a way calculators never did. LLM's just make it
| _so easy_ to commit plagiarism that the system is starting to
| break down. Plagiarism was always a problem, but it used to
| be rare enough that the education system could sort-of
| tolerate it.
| argiopetech wrote:
| I argue that calculators are overtly harmful to arithmetic
| prowess. In summary, they atrophy mental arithmetic ability
| and discourage practice of basic skills.
|
| It pains me (though that's my problem) to see people pull
| out a calculator (worse, a phone) to solve e.g., a
| multiplication of two single digit numbers.
| bloppe wrote:
| Sure, calculators made people worse at mental arithmetic,
| but arithmetic is mechanical. It's helpful sometimes, but
| it's not intellectually stimulating and it doesn't
| require much intelligence. Mathematicians don't give a
| shit about arithmetic. They're busy thinking about much
| more important things.
|
| Synthesizing an original thesis, like what people are
| supposed to do in writing essays, is totally different.
| It's a fundamental life skill people will need in all
| sorts of contexts, and using an LLM to do it for you
| takes away your intellectual agency in a way that using a
| calculator doesn't.
| seydor wrote:
| "Please teach our models how to replace you"
| elashri wrote:
| ChatGPT (and other LLMs) still cannot perform well (and
| probably never will) in any consistent manner in physics. I
| don't think physics departments are worrying much about AI.
| The only thing that can help students in a more reliable way
| is some coding projects, which is okay because in most of
| these classes (computational physics) students are encouraged
| to work together, seek help and even ask on the internet
| (before ChatGPT, etc.). It was always about how to explain and
| describe the thinking. AI (at least in its current form) is
| very weak at the problem-solving aspects and at understanding
| concepts.
|
| On the other hand, as a non-native English speaker, it saves
| me a lot of time paraphrasing my poorly worded thoughts and
| writing that would otherwise take me an hour to express in a
| good formal manner. It can guide you in some aspects of coding
| tasks, introduce you to some APIs, etc. This is actually a
| good tool that I agree a good student (or researcher) would
| use wisely to gain some knowledge and save some time.
|
| It will not help much with solving a cart on an inclined plane
| with some friction and a pendulum hanging from the cart. No, it
| will not be able to give you the normal modes.
|
| This is just a personal experience and opinion, though. It might
| be completely different in other areas.
| witherk wrote:
| Really? Granted, LLMs might be a little weaker in physics than
| other areas, but if someone figures out how to get LLMs to use
| a Mathematica API, and trains them some more, I can imagine
| some rapid progress.
| tipsytoad wrote:
| Really? The only reason that ChatGPT is more adept at coding
| problems is that there is vastly more training data. There's
| nothing fundamentally different between solving a coding
| problem and a physics problem. Like all the others before it,
| I don't think this comment will age well.
| chaxor wrote:
| You should think twice before making statements like "AI will
| probably never do X well". Many formal linguists made very
| strong statements about the impossibility of AI learning
| (__insert feature here__, such as pragmatic implicature), and
| they are now being shown to be wrong.
|
| For instance, Miles Cranmer's work on using GNNs for symbolic
| regression is a start towards useful new discoveries in
| physics. Transformers are just GNNs with a specific message
| passing function and position embeddings. It's not hard to see
| that either by a different architecture, augmentation, or
| potentially even just more of the same, we can get to new
| discoveries in physics with AI. The GNN symbolic regression
| work is evidence that it's already happened.
|
| As for grounding knowledge in the LLMs we have at exactly this
| moment (a rather short-sighted view), there is plenty of
| interest and work in the area, which I expect will be
| addressed in a multitude of ways. Their ability with grounded
| physics knowledge is not perfect, but it's _very good_ w.r.t.
| the common knowledge of a human off the street. External
| sources alone make it much better, and that's just the
| exceedingly short-sighted analysis of what we have today.
| Alupis wrote:
| I can't be the only one thinking, given how much ChatGPT gets
| confidently wrong, that it's _way_ too early to be talking about
| funneling this into classrooms?
|
| The internet is bursting with anecdotes of it getting basics
| wrong. From dates and fictional citations, to basic math
| questions... how on earth can this be a learning tool for those
| who are not wise enough to understand the limitations?
|
| OpenAI's examples include making lesson plans, tutoring, etc.
| Just like with self driving cars - too much too quick, and many
| are not capable of understanding the limits or blindly trust the
| system.
|
| ChatGPT isn't even a year old yet...
| yieldcrv wrote:
| It's probably the perfect time to be talking about it, given
| how fast the advancements occur.
|
| They probably won't be using that model for another year,
| while people will be using that website for many years.
| Zetice wrote:
| It doesn't get the kind of things taught in most classrooms
| wrong in the way it gets business applications wrong, because
| there's a (mostly) correct response that isn't going to vary a
| ton from source to source. The weighting will always push its
| responses towards the right answer, though in moments of
| relative uncertainty I guess if you had the temperature turned
| super high you might get some weird responses.
|
| It'll (mostly) always know about the Sherman Antitrust Act and
| what precipitated its passage, for example.
|
| That said, OpenAI repeatedly suggests verifying responses and
| says, "make it your own" which IMO includes spot checking for
| correctness.
| Alupis wrote:
| > It doesn't get the kind of things taught in most classrooms
| wrong in the way it gets business applications wrong, because
| there's a (mostly) correct response that isn't going to vary
| a ton from source to source.
|
| It's fabricated legal cases and invented citations to back up
| its statements.
|
| The issue is, it can be difficult to know when it's wrong
| without putting in a lot of effort. Students won't put in the
| effort, and that's assuming they're even capable of
| understanding when/where it's wrong in the first place.
|
| Just like self driving cars - we can say "pay attention and
| keep your hands on the wheel at all times"... but that's not
| what everyone does and we've seen the consequences of that
| already.
|
| We need to be careful here. This tech is new. ChatGPT hasn't
| even existed (publicly) for a year. Getting it wrong and
| going too fast has consequences. In the education space in
| particular, those consequences can be profound.
| Zetice wrote:
| This is nothing at all like self driving cars; firstly the
| risks are not even in the same ballpark, and secondly every
| piece of advice given includes, "check the response
| independently." It says nothing about a tool like this if
| people choose to misuse it.
|
| At some point, using LLMs like ChatGPT recklessly is on the
| user, not the tool.
| [deleted]
| spion wrote:
| The internet is sampling the interesting samples, not
| necessarily a realistic picture.
|
| I'd love to see a good research study on this that shows the
| actual error rate as well as a comparison with other non-human
| alternatives (e.g. googling, using textbook only, etc) as well
| as possibly human (personal tutor, group instructor, ...)
| Alupis wrote:
| > The internet is sampling the interesting samples, not
| necessarily a realistic picture.
|
| A tutor is expected to know the subject and guide the
| student. If, say, 10% of the time it guides the student into
| a false understanding, the damages are significant. It's very
| hard to unlearn something, particularly when you have
| confidence you know it well.
|
| My personal adventures with ChatGPT are probably close to a
| 50% success rate. It gets some stuff entirely wrong, a lot of
| stuff non-obviously wrong, and even more stuff subtly wrong -
| and it's up to you to be knowledgeable enough to wade through
| the BS. Students, learning a subject in school are by
| definition not knowledgeable enough to discern confident BS
| from correctness.
|
| Will ChatGPT be useful in the future? Yes, almost certainly.
| But let's not rush this and get it very wrong. The
| consequences can be staggering in the education space -
| children or adults.
| spion wrote:
| I'm getting north of 95% success with GPT4, and while a
| dedicated tutor or a group instructor would definitely be
| better, none of the other non-human alternatives come
| close. Searching the internet can also lead to wrong
| information and false understanding - all self-directed
| methods have this pitfall.
|
| Still - a well designed study will give us a much better
| picture of where we actually are. I think that would be
| extremely valuable.
| Der_Einzige wrote:
| The reality is that effective LLMs, combined with some kind of
| knowledge retrieval, are coming close to becoming the idealized
| individual tutor. This is also a daily reminder that studies show
| that individual tutoring is objectively the best way to educate
| people:
|
| https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem
| swyx wrote:
| [deleted because dont want to be drawn into flamewar]
| maxbond wrote:
| > [deleted because don't want to be drawn into flamewar]
|
| Good on you. I'm not confident I would have the restraint.
| empath-nirvana wrote:
| ChatGPT does tutoring just fine. I've had it draw up a lesson
| plan for me and execute it with hardly any special prompt
| engineering at all, just something like: "Please tutor me on
| French adverbs; please start by asking me a few questions to
| find out what I already know," and it dialed in fairly well
| to my level.
| BoorishBears wrote:
| [flagged]
| gojomo wrote:
| As parent deleted, which tweet was being referenced?
| BoorishBears wrote:
| https://twitter.com/swyx/status/1697121327143150004
|
| There was no need to delete except being so trivially
| shown to be wrong, I didn't chase them to twitter or
| something.
|
| But that's the MO for the tech grifter:
|
| - you herd the few people who are unsure and will listen
| to any confident voice
|
| - the people who know the most about <insert tech> tend
| to not like that, but when the herd is small just defer
| to their confrontations with humility and grace, and use
| that show of virtue to continue herding
|
| - the more people you herd, the easier it is to get
| incrementally smarter people to follow: We're all subject
| to certain blindspots in a large enough crowd
|
| - the more people who follow someone who's clearly wrong,
| the more annoyed people who are knowledgeable about
| <insert tech> will get about the grifter
|
| - This makes each future confrontation more heated, so
| now the heated nature of the confrontation is
| justification to disengage without deferring. Just be
| confident and continue herding
|
| - rinse and repeat until people who don't follow the
| grifter gospel are a minority.
|
| --
|
| The actual VC dollars start chasing whatever story their
| ilk has woven by then. And eventually it all collapses
| because there was no intellectual underpinning: just self-
| enrichment.
|
| That realization from the crowd exhausts any good will
| that was left for <insert tech> and the grifters move on
| to the next bubble.
| gojomo wrote:
| Thanks for ref!
|
| I share your frustration at those who confidently &
| prematurely write off rapidly-changing AI tech based on
| dated examples, cherry-picked anecdotes from the
| unskilled, & zero extrapolation based on momentum. They
| do a double disservice to those who trust them: first, by
| discouraging beneficial work on ripe, solvable
| challenges, and second, by encouraging a complacency
| about rapid new capabilities that may leave vulnerable
| people at the mercy of others who were better prepared.
|
| But, not being familiar with the account in question, I
| don't see those attitudes in that tweet. It seems more an
| assessment "no one has quite nailed this yet" than
| defeatism over whether it's possible.
| BoorishBears wrote:
| The tweet was just a reference in their comment:
|
| > i have yet to see any ai system properly implement
| individual level-adjusting tutoring. i suspect because
| the LLM needs a proper theory of mind
| (https://twitter.com/swyx/status/1697121327143150004)
| before you can put this to practice.
|
| But to be perfectly transparent, I'd _never_ respond so
| harshly to someone for just that tweet, or even that
| comment.
|
| Instead it's the fact they're currently a synecdoche for
| the crypto-ization of AI. This person doesn't usually
| dismiss AI, instead they heavily amplify the least
| helpful interpretations of it.
|
| _
|
| This is one of the largest voices behind the new "the
| rise of the AI engineer" movement in which this author
| specifically claimed researchers were now obsolete to AI
| _due to the tooling they built_ :
| https://news.ycombinator.com/item?id=36538423
|
| Like, I get wanting to make money by capturing value as
| much as the next person... but basing an entire brand on
| declaring that the people who are enabling your value
| proposition are irrelevant _just to create a name for
| yourself_ is pointlessly distasteful.
|
| The only thing he gained by saying researchers don't
| matter and understanding Attention doesn't matter is
| exactly I described above: a wild opinion that attracted
| the unsure, pissed off the knowledgeable, and served as a
| wedge that he could then carve out increasingly large
| slices of the pie for himself with.
|
| Fast forward 2 months and now the process has done its
| thing, the "AI engineer" conference is being sponsored by
| the research driven orgs because they don't want to be on
| the wrong side of the steamroller.
| minimaxir wrote:
| You are replying to swyx.
| BoorishBears wrote:
| Thank you, updated indirect references to direct: I know
| it's un-HN, but _Jesus christ_ am I tired of hearing this
| person's garbage quoted ad nauseam like gospel
| maxbond wrote:
| Jeez bro. This is a pretty intense reaction to a lukewarm
| and reasonable take. Personally I appreciate an "AI
| influencer" being down to earth and being willing to say
| that the technology isn't magic, amidst a huge amount of
| hype. If you think people are parroting swyx uncritically
| - that's hardly a criticism of swyx, is it?
|
| I think you should keep reflecting on your realization
| about how people got swept up in the cryptoasset hype.
| You can believe this technology is promising and will
| improve dramatically without being a fanatic. You can
| disagree without going for the jugular.
| rvz wrote:
| Who?
|
| It has been known that LLMs cannot reason transparently,
| nor can these black boxes explain themselves without
| regurgitating and rewording their sentences to sound
| intelligent; instead they are confident sophists, no
| matter what any random person tells you otherwise.
|
| EDIT: This is the context before it was deleted by the
| grandparent comment:
|
| >> i have yet to see any ai system properly implement
| individual level-adjusting tutoring. i suspect because
| the LLM needs a proper theory of mind
| (https://twitter.com/swyx/status/1697121327143150004)
| before you can put this to practice.
|
| My point still stands.
| gojomo wrote:
| Emphatic assertions like "it has been known" are anti-
| convincing.
| BoorishBears wrote:
| You're showing why I'm so annoyed by this perfectly!
|
| It's malicious to rope theory of mind into justifying
| that point because it's _just wrong enough_.
|
| If the reader doesn't think deeply about why on earth you
| would _ever_ rope theory of mind into this, their brain
| will happily go down the stochastic parrot route:
|
| "How can it have theory of mind, theory of mind is
| understanding emotions outside of your own, the LLM has
| no emotions"
|
| But that's a complete nerdsnipe.
|
| --
|
| If instead you distrust this person's underlying
| motivations to not be genuine intellectual curiosity, but
| rather to present a statement that is easily agreed to
| even at the cost of being wrong... you examine that
| comment at a higher level:
|
| What is theory of mind adding here besides triggering
| your typical engineer's well established "LLMs are over-
| anthropomorphized" response? Even in psychology it's a
| hairy non-universally accepted or agreed upon concept!
|
| Theory of mind gives two things at the highest level:
|
| inward regulation: which is nonsensical for the LLM, you
| can tell it what emotion it's outputting as, it does not
| need theory of mind to act angry
|
| outward recognition: we've let computers do this with
| linear algebra for over 2 decades. It's what 5 of the
| largest companies in technology are built on...
|
| --
|
| Commentary like that accounts is built on being _just
| wrong enough_ :
|
| You calmly state wild opinions. There are people who want
| to agree with any calm voice because they're seeking
| guidance in the storm of <insert hype cycle>. They invent
| a foothold in your wild statement, some sliver of truth
| they can squint and maybe almost make out.
|
| Then you gain a following, which then starts to add a
| social aspect: if I don't get it but this is a
| figurehead, I must be looking at it wrong. Now people
| are squinting harder.
|
| This repeats itself until everyone has their eyes closed
| following someone who has never actually said anything
| with any intention other than advancing their own
| influence.
|
| They don't care how many useful ideas die along the way,
| there's no intellectual curiosity to entice them to even
| stumble upon something more meaningful, it's just
| draining the energy out of what should be a truly
| rewarding time for self-thinking.
| Madmallard wrote:
| Personal tutoring and coaching is basically mandatory for
| mastery; name a professional concert pianist or athlete who
| doesn't have one. I act as a personal tutor for comp sci
| students and I'm envious of them. I didn't have one and I
| think it really limited my growth.
| binarymax wrote:
| My sister (who is a middle school teacher) and I developed a real
| training program for teachers, and this "guide" from OpenAI is
| quite underwhelming. It doesn't address 90% of the problems
| teachers actually face with AI...this is mostly a brochure on how
| to use ChatGPT to get info.
|
| If you are a teacher or know a teacher who is struggling to adapt
| this school year, I'd be honored to speak with them and see if we
| can help.
| miketery wrote:
| Can you share some of the outline or problems your guide
| solves?
| binarymax wrote:
| Sure thing! https://max.io/teacher-training.html
| rmbyrro wrote:
| I thought it gives good guidance.
|
| Of course it's not a 4-hour in-person workshop, like what
| you're proposing. But it already adds positive value.
|
| It covers a good amount of the topics your course covers, I
| think. Introductory-level, perhaps, but it's a start.
|
| Honestly? I don't understand your comment - I read it as
| negative towards OpenAI (am I wrong?)
|
| I'd expect someone like you to praise OpenAI's willingness to
| contribute in this space.
| chankstein38 wrote:
| Yeah I read this and was repeatedly surprised and thankful
| they finally put some of these things in writing. That
| section about whether or not detectors work is going to be
| hugely helpful to students wrongly accused of using AI to
| generate their essays or something. Take that page and show
| it to your teacher "Look! The publisher of the thing says
| those detectors aren't accurate!"
|
| I'm with you; the parent reads more like an ad and
| negativity towards OpenAI.
| A4ET8a8uTh0 wrote:
| << I'd expect someone like you to praise OpenAI's willingness
| to contribute in this space.
|
| Why would you assume the OP's position in this case? There
| are multiple valid, albeit unstated, reasons why the company
| in question may not be the best vessel for those efforts.
| And, just to make sure that is not left unsaid, it is not as
| if OpenAI is doing it for altruistic reasons.
|
| I do agree that it is not bad starting material, but I
| think you will agree that it is clearly not targeted at the
| group that gathers on HN.
| [deleted]
| halflings wrote:
| This looks like a promotional comment to sell some kind of paid
| "AI Training" [1], doesn't address anything in the linked
| article.
|
| [1] https://max.io/teacher-training.html
| qwertox wrote:
| Oh, I think I just fell for it. I was asking them if they
| could share their knowledge...
| josh-sematic wrote:
| Thanks for the detective work! On the one hand, I don't have
| a problem with someone mentioning a helpful resource they
| developed in a relevant thread, even if it's paid. But it
| would be more honest to disclose that's what's being offered
| rather than disguising it as an offer of a free resource.
| fsloth wrote:
| The prompt 4 "AI teacher" is pretty good for learning group
| theory at least. (Just trying it right now on ChatGPT 4.0)
| rmbyrro wrote:
| I found lots of good value in their publication as well.
|
| Especially for teachers, who I believe (most at least) have
| no clue about prompt engineering and how to talk to an LLM.
| fsloth wrote:
| IMO 'prompt engineering' is an indication that LLMs are
| really immature technology. There is no intrinsic value
| in prompt engineering - it's ok to wait a bit until LLMs
| get a proper product shell you don't need to walk on
| eggshells over. I would not promote LLMs as production-
| ready offerings until this aspect becomes better.
|
| Using an LLM is like having a therapy session - where you,
| the user, are the therapist. Humans should not need to learn
| en masse to become AI therapists; that's the inverse of what
| should happen :D
| qwertox wrote:
| I agree, most don't even know they can tell it how to
| behave.
| westurner wrote:
| TIL about "CoderMindz Game for AI Learners! NBC Featured: First
| Ever Board Game for Boys and Girls Age 6+. Teaches Artificial
| Intelligence and Computer Programming Through Fun Robot and
| Neural Adventure!" https://www.codermindz.com/
| https://www.amazon.com/gp/aw/d/B07FTG78C3/
|
| Codermindz AI Curriculum: https://www.codermindz.com/stem-
| school/
|
| https://K12CS.org K12 CS Curriculum (and code.org, and
| Khanmigo,) SHOULD/MUST incorporate AI SAFETY and Ethics
| curricula.
|
| A Jupyter-book of autogradeable notebooks (for AI SAFETY first,
| ML, AutoML, AGI,) would be a great resource.
|
| jupyter-edx-grader-xblock
| https://github.com/ibleducation/jupyter-edx-grader-xblock ,
| Otter-Grader https://otter-grader.readthedocs.io/en/latest/ ,
| JupyterLite because Chromebooks
|
| What are some additional K12 CS/AI and QIS Curricula resources?
| qwertox wrote:
| > If you are a teacher or know a teacher who is struggling to
| adapt this school year, I'd be honored to speak with them and
| see if we can help.
|
| This is a worldwide issue.
|
| I think it's great what you two did, maybe it would be more
| effective if you did a small article or video on it?
|
| Many would be honored to be able to get help from your
| insights, it's needed. I see how teachers are struggling in
| Germany, while they are still open to embrace this technology.
| binarymax wrote:
| Thanks for the kind words and I agree!
|
| I prefer to do the teacher training workshop in person for
| various reasons, but we have considered recording it.
|
| I've also given 2 open lectures at different libraries (and
| have been asked to do more) for the general public. I should
| certainly record that, since it's more general audience.
| [deleted]
| nirmel wrote:
| I made https://anylearn.ai, an education app built on OpenAI.
| If you click the settings icon, then the teach tab, it will
| generate a teaching guide on any topic. Try it!
| burkaman wrote:
| I put in "8th grade french" and it gave me a guide on how to
| develop a teaching guide, not the teaching guide itself. Like
| "Step 4: Prepare instructional materials", "Step 5: sequence
| the lesson", etc., with generic instructions for each. The Test
| Questions tab has questions about my knowledge of lesson
| planning, not questions for French students.
|
| "College-level calculus" was similar, just vague generic high-
| level advice with no lesson plan or specific guide.
| nirmel wrote:
| Good catch. Will modify the prompts to make it produce the
| desired content.
| ineptitude wrote:
| If there were a tab with a code example when the lesson is
| related to programming, it would be perfect, as the chat
| doesn't detect markdown code blocks.
| ajhai wrote:
| > Building quizzes, tests, and lesson plans from curriculum
| materials
|
| Example prompts that OpenAI shared here are a great start.
| However, I think these use cases are better served as micro
| apps built on top of these prompts. For example, a teacher
| will keep coming back to use this prompt with the same or a
| similar set of responses most of the year. On top of that,
| enriching the context with additional information pulled from
| local sources will quickly become a need.
|
| ChatGPT's custom instructions will help with not having to repeat
| prompts but the interface falls short when it comes to repeat
| narrow use cases. This is where imo LLM apps shine. A simple app
| built with langchain or some low-code platforms and providing
| local data from a vector store can be super powerful.
|
| We recently open-sourced LLMStack
| (https://github.com/trypromptly/LLMStack), a platform that allows
| users to build these micro apps to automate their workflows. Our
| goal is to make these workflows sharable so someone can download
| a yaml file for this prompt and chain and start using it in their
| job.
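As a sketch of the micro-app idea above (function and parameter names are hypothetical, and the actual LLM call is omitted), the reusable core is little more than a fixed prompt template that a teacher fills in each time, optionally enriched with locally retrieved material:

```python
# Sketch of a reusable quiz-generation "micro app": the prompt template
# is written once, and each run only supplies the topic and any local
# curriculum material (e.g. retrieved from a vector store). Names and
# parameters are illustrative, not from LLMStack or langchain.

def build_quiz_prompt(topic, num_questions=10,
                      grade_level="high school", context=""):
    """Fill the fixed template with this run's inputs."""
    prompt = (
        f"You are an experienced {grade_level} teacher.\n"
        f"Write {num_questions} quiz questions on: {topic}.\n"
        "Include an answer key at the end.\n"
    )
    if context:
        # Local material enriches the context, as described above.
        prompt += f"Base the questions only on this material:\n{context}\n"
    return prompt

# The resulting string would be sent to the model of your choice.
prompt = build_quiz_prompt("photosynthesis", num_questions=5)
```

Sharing the app then amounts to sharing the template and its defaults, which is the kind of YAML-exported workflow the comment describes.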
| tomlue wrote:
| somebody write a textbook chunker that generates context from
| textbooks for LLMs to build anki cards please.
|
| Extra credit if you build a new anki that dynamically generates
| cards with different text and the same meaning to prevent answer
| memorization.
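A minimal sketch of such a chunker (the function name and window sizes are arbitrary choices, not an existing tool): split the raw textbook text into overlapping word windows that can each be handed to an LLM as context for generating Anki cards.

```python
# Sketch of a textbook chunker as requested above: overlapping windows
# of roughly chunk_words words, so each chunk is small enough for an LLM
# prompt while the overlap preserves continuity across boundaries.

def chunk_textbook(text, chunk_words=200, overlap=40):
    """Split text into overlapping chunks of ~chunk_words words each."""
    words = text.split()
    step = chunk_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_words]))
        if start + chunk_words >= len(words):
            break  # last window already covers the tail
    return chunks
```

Each chunk would then be wrapped in a card-generation prompt; the dynamic-rephrasing idea from the comment could reuse the same chunk as context for paraphrased card fronts.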
| wavesounds wrote:
| The elephant in the room here is that these LLMs still have
| problems with hallucinations. Even if it's only 1% or even
| 0.1% of the time, that's still a huge problem. You could have
| someone go their whole life believing something they were
| confidently taught by an AI which is completely wrong.
|
| Teachers should be very careful using a vanilla LLM for
| education without some kind of extra guardrails or extra
| verification.
| csa wrote:
| > The elephant in the room here is that these LLMs still have
| problems with hallucinations. Even if it's only 1% or even
| 0.1% of the time, that's still a huge problem.
|
| If you heard the bullshit that actual teachers say (both inside
| and outside of class), you would think that "1% hallucinations"
| would be a godsend.
|
| Don't get me wrong, some teachers are amazing and have a
| "hallucination rate" that is 0% or close to it (mainly by being
| willing to say they don't know or they need to look something
| up), but these folks are the exceptions.
|
| Education as a whole attracts a decidedly mediocre group of
| minds who sometimes (often?) develop god complexes.
| cocoto wrote:
| [flagged]
| pixl97 wrote:
| Damn, I'd have loved it if my teachers had only hallucinated 1% of
| the time. Instead we had the southern Baptist football coaches
| attempting to teach us science... poorly.
| jheriko wrote:
| in my experience, its sometimes 100% of the time, even after
| repeated attempts to correct it with more specific prompts.
| Even on simple problems involving divisions or multiples of
| numbers from 1 to 10 with one additional operation.
| chaxor wrote:
| This is also the case if taught by any educator who happens to
| trust the source they looked up as well. The internet, text
| books, and even scientific articles can all be factually
| incorrect.
|
| GNNs (of which LLMs are a subclass) have the potential to be
| optimized in such a way that all the knowledge contained within
| them remains as parsimonious as possible. This is not the case
| for a human reading some internet article for which they have
| not gained extensive context within the field.
|
| There are plenty of people that strongly believe in strange
| ideas that were taught to them by some 4th grade teacher that
| was never corrected over their life.
|
| While your statements are correct in this minuscule snapshot
| of time, it's exceedingly short-sighted to assert that language
| modeling is to be avoided due to some issues that exist this
| month, and to disregard the clear improvements that will
| come very soon.
| vouaobrasil wrote:
| Soon to be: teachers ARE AI.
|
| You're all having fun now. But you'll regret using AI for
| anything because soon humans will become mostly fit for manual
| labour while AI concentrates the wealth of the world into the
| hands of the tech elite.
|
| Then, without a human connection in teaching, children will grow
| up into psychologically damaged adults.
| throwuwu wrote:
| Or teachers focus more on helping kids with the fundamental
| social and organizational skills necessary for learning and
| cooperating while AI handles the individualized lesson plans
| for each of the topics. The kids become much better adjusted
| and much more knowledgeable and go on to use AI in their
| working lives to create unimaginable amounts of wealth and
| productivity.
|
| In other words: you know what beats one elite with an AI? Ten
| thousand well educated people each with their own AI.
| r3trohack3r wrote:
| > Soon to be: teachers ARE AI.
|
| > soon humans will become mostly fit for manual labour
|
| > without a human connection in teaching, children will grow up
| into psychologically damaged adults.
|
| If humans are only going to be doing manual labor, what will
| the AI teacher be teaching? Do you need 16+ years of education
| for manual labor?
|
| Just taking your argument at face value, I don't understand how
| "AI replaces nearly all human knowledge workers" leads to
| "children become psychologically damaged adults."
|
| It seems like it would free them from being strapped into a
| chair for 16 years and denied the opportunity to be children in
| an attempt to prepare them for a life of knowledge work? Unless
| we just keep up the ruse of an entire childhood of classroom
| based education for ... reasons?
|
| To push past your argument, society and knowledge isn't zero
| sum.
|
| I'm not writing software because it's the single most important
| thing in the universe for me to focus on right now. It's
| actually pretty low on the list of important things on the
| grand scale of important things. I'm writing software because
| it's the work that needs to be done right now and there isn't a
| replacement for me doing it.
|
| I feel like you are asserting that plugging numbers into
| spreadsheets as an accountant or doing string transformations
| "at scale" to convert DB queries into HTML and JSON is both: 1)
| A fulfilling life 2) The only thing humans could possibly be
| doing of value right now; if you take this away there is
| nothing left
|
| There are a tonne of fundamental questions/problems about life,
| the universe, interstellar travel, preservation of our species,
| etc. that I _just don't have time for_ right now because I'm
| over here trying to figure out how to take these bytes coming
| over the wire from an SQL query and pack them into a JSON
| object so a browser can hydrate this bit of HTML. And I'm
| sorry, but, this isn't how I'd choose to live my life if there
| was someone else I could put in this seat.
|
| Please AI take my job so I can be free to focus on all of the
| stuff that comes with the next layer of abstraction/automation.
| snek_case wrote:
| Seems like we could head towards a world where people go to
| school from home, learn from AI, work remotely, get food
| delivered, find entertainment in VR. Apartments get smaller and
| smaller, until most people are essentially just renting a room
| in a large dorm, which they almost never leave.
| burkaman wrote:
| Exactly the world described more than a century ago in The
| Machine Stops, which I think should be required reading in
| all CS curriculums. Free to read here:
| https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/....
| MandieD wrote:
| I'm 10 pages in and think it should be required reading not
| only for CS curriculums, and I regret not having been exposed
| to it earlier.
|
| Thanks for sharing.
| throwuwu wrote:
| No. VR is to the WWW what the WWW was to the internet. It
| will bring the rest of the world onto the net, where
| previously only print, video, and audio were. AI will be the
| next UI medium: NLUI (natural language user interface) or SUI
| (spoken user interface); somebody will come up with a better
| name.
| lemmox wrote:
| Interesting prompts! IME the quality of the answers the users
| give to the ChatGPT questions in these prompts will make or break
| the experience.
|
| I played around with this use case in the spring when my teenage
| daughter was looking for extra test prep materials. At first the
| experience was interesting but there was an "AI uncanny valley"
| shaped problem: the material just didn't _seem_ to fit. It _felt_
| wrong.
|
| This uncanny valley was significantly reduced, even eliminated in
| some instances, by including the entirety of our school
| district's online material about the course; information about
| the core competencies (across communication, thinking, personal &
| societal), the big ideas, the curricular competency & content
| about the learning standards. Our district has a pretty good
| website with all of this information laid out for each course and
| grade level.
|
| Including all of this information in the prompt context resulted
| in relevant and harmonious content when asking to generate course
| outlines, student study-prep handouts, and even sample study
| session pre-tests (although ChatGPT wasn't strong at reliably
| creating answer sheets for the pre-tests).
|
| Context is key!
| lemmox wrote:
| An interesting trick I found here was to ask ChatGPT to produce
| tables of concept definitions and include a metaphor for each
| concept to help understanding. It was quite good at coming up
| with metaphors and that actually felt kind of magical.
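|
| The wording below is illustrative rather than the exact prompt,
| but it captures the shape of the trick:

```python
# Build a reusable prompt asking for a concept/definition/metaphor
# table; the concept list and phrasing are just example values.
concepts = ["osmosis", "electric current", "inflation"]
prompt = (
    "Produce a table with columns Concept, Definition, Metaphor. "
    "Keep each definition to one sentence and make each metaphor "
    "concrete and familiar to a teenager.\n"
    "Concepts: " + ", ".join(concepts)
)
print(prompt)
```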
| blibble wrote:
| from their own FAQ linked from this page:
|
|     Is ChatGPT safe for all ages?
|
|     ChatGPT is not meant for children under 13, and we require
|     that children ages 13 to 18 obtain parental consent before
|     using ChatGPT.
|
| so in other words: no
|
| it's grossly irresponsible to be pushing "Teaching with AI" in
| this scenario
| teacpde wrote:
| Teaching doesn't presuppose that students are younger than 18.
| catchnear4321 wrote:
| you act like parental consent wasn't listed as a requirement.
| though it may not be broadly recognized as such, that
| requirement is an admission that it is foolish to hand a child
| access without guidance.
|
| you know, like in the form of a parent. parental guidance.
| which starts with parental consent.
|
| so in other words: it depends.
|
| it's grossly irresponsible to treat a hammer as inherently
| dangerous.
| waffletower wrote:
| I disagree. Ethical teachers audit and examine all content they
| intend to be consumed by students -- it is their responsibility
| regardless of what medium or agents are used to create them. It
| is common for people to disregard that generative AI is
| currently a tool without agency whose use requires a selection
| process. Just as a camera needs to be aimed, AI does as well.
| cdblades wrote:
| can you prove to me in a verifiable way that no matter what
| prompt I put into ChatGPT, it won't give me pornography back?
| sebzim4500 wrote:
| No, but then I can't prove that to you about Google either
| and I don't see schools trying to ban that.
| pixl97 wrote:
| Can you prove in a verifiable way no matter what you prompt
| to your teacher they won't give you pornography back?
| jstarfish wrote:
| > can you prove to me in a verifiable way that no matter
| what prompt I put into ChatGPT, it won't give me
| pornography back?
|
| No, but if it's turning you away even when you're
| explicitly asking for it, it's probably doing _good
| enough_. Nobody held Yahoo, Lycos or Altavista to this
| standard.
|
| If accidental erotica is the worst outcome you can imagine
| for the shortcomings of AI teaching, please leave worrying
| about this to the professionals. Consider flawed chemistry
| lessons, where it tells some kid to mix two things they
| shouldn't. That will _actually_ cause material harm to
| everyone around them.
| waffletower wrote:
| It is the teacher's responsibility to evaluate any
| materials they present to students. If they are given an
| output they interpret to be pornographic, they decide
| whether to provide it or not to students. I imagine it is
| possible that you might determine something to be
| pornographic that a given teacher may not. Pornography is
| an interpretation, which varies culturally and politically.
| Regardless, it is definitely not my responsibility to prove
| what ChatGPT will provide whatsoever, I don't work for
| OpenAI.
| whywhywhywhy wrote:
| Can't imagine I'd have bothered engaging with any subject I
| wasn't interested in if ChatGPT existed back then.
|
| Always remember the glorious few months when I had Encarta at
| home, before too many students had it and before teachers
| clocked on, when homework became just printing off the page on
| the subject after removing identifying bits.
| harry8 wrote:
| You make a strong case about lack of education quality and
| make-work time-wasting foisted upon children.
|
| Education is not a problem the human race has solved despite
| progress made.
| dustincoates wrote:
| I'm ambivalent on LLMs, but I have found one really good use for
| them: helping me with language learning. I'm now at a level (C1)
| with my second language that it's really difficult to find
| resources or even tutors to help refine it.
|
| So what I've been doing is chatting with Claude and asking it to
| correct whatever faults I make or asking it to give me exercises
| on things where I need to focus. For example, "Give me some
| exercises where I need to conjugate the past tense and choose the
| correct form."
|
| It's like a personal language learning treadmill.
| Vinnl wrote:
| Yeah, this Show HN convinced me:
| https://news.ycombinator.com/item?id=36973400
|
| Unfortunately it's no longer free to try, but it worked well.
| PeterisP wrote:
| Underresourced languages also are underresourced in terms of
| training data for LLMs, and so for smaller languages LLMs do
| have _significantly_ more problems with sometimes generating
| something that's completely weird and wrong not only in terms
| of facts but also in terms of language, word choice or grammar.
| isaacremuant wrote:
| Just remember that you have no guarantee it will be
| correct.
|
| Use a combination of external sources to cross verify. Also
| spoken form generation is very important if you plan to
| interact with people.
|
| Combining it with real conversation will definitely help.
|
| But I can see how it can be absolutely awesome to play around,
| as an extra tool.
| Miraste wrote:
| I'm surprised languages aren't more of a focus in the LLM hype.
| They're like if Rosetta Stone ads were true. They translate at
| state of the art levels, but you can also give and ask for
| context, and they're trained on native resources and culture.
| There hasn't been a jump in machine translation this big and
| fast, ever.
| minimaxir wrote:
| I'm surprised OpenAI is encouraging large system-style prompts
| for the main ChatGPT webapp, where they are less effective.
|
| Now that the ChatGPT Playground is the default interface for the
| ChatGPT API with full system prompt customization, they should be
| encouraging more use there, with potential usage credits for
| educational institutions.
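|
| For reference, a sketch of what system-prompt customization looks
| like against the chat completions API (the 2023-era openai Python
| package; the actual call is left commented out since it needs an
| API key, and the persona text is just an example):

```python
# The system message carries the reusable teaching persona so it
# doesn't have to be pasted into every chat by hand.
def build_messages(system_prompt, user_input, history=None):
    """Assemble the messages list the chat completions endpoint expects."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_input})
    return messages

messages = build_messages(
    "You are a patient physics tutor. Ask one Socratic question at a time.",
    "Why do heavier objects not fall faster?",
)

# With an API key configured, the call would look like:
# import openai
# reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(messages[0]["role"])  # system
```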
___________________________________________________________________
(page generated 2023-08-31 23:00 UTC)