Discussion about this post

Priya Mathew Badger

I really appreciate the academic rigor brought to this discussion. This debate feels like an evolution of the "screen time" era. Just as screens aren't simply good or bad for kids, the same is true for AI use: the how matters a lot.

In the popular MIT "Your Brain on ChatGPT" study, the participants who used AI to edit drafts after writing on their own did not show the same effects as those who used it for their first draft.

In your post, I thought one of the cases, where a student asked ChatGPT to explain their professor's lecture line by line, would have been a GOOD example of how to use AI. I essentially did this with my TAs in college, going to every office hour. I was the type of learner who really wanted to understand the details and needed more help. The beauty of AI is that every student can get their detailed questions answered and explained in a mode that helps them learn. Students who take the initiative to find answers with AI are mastering the skill of "learning to learn" anything.

AI writing the first draft and the resulting recall issues are real challenges, and personally that's where I think educators need to experiment with new methods (or return to old ones).

To go back to the screen-time analogy: as iPad use among kids grew, a clearer sense of good and bad emerged. The AAP now has guidelines for different age levels. I feel like this is lacking for AI. We haven't aligned as adults on what counts as "good" or "bad" AI usage, so kids don't know either.

I'm looking forward to when we get there, as both a parent and a supporter of advancing edtech.
