But the emerging consensus - by no means universal yet - is that the history of technology adoption shows authorities have almost always failed to prevent its use. In this case, ChatGPT is perceived to be a skill students should learn to use and master because it will help them learn and will help them in their careers. JL
James Hagerty reports in the Wall Street Journal:
Kevin Lisle, who has been teaching high-school chemistry at Marist School in Atlanta for 23 years, now has an assistant on call 24/7 to help his students. His new assistant sometimes spouts misinformation, but is good at explaining concepts and suggesting possibilities that students may have overlooked.
Lisle’s assistant is better known as ChatGPT—the most prominent of a new breed of artificial-intelligence chatbots.
Nearly a year after the launch of ChatGPT, teachers and professors across the U.S. are realizing they can’t ban or ignore a tool that many of their students are eagerly using. For many, ChatGPT is changing how they teach and evaluate their students. Yes, they’re worried about students using ChatGPT to cheat or cut corners. But that is tempered by hopes that it can help them learn faster. And many teachers say that students should be encouraged to use bots because employers will expect them to have learned that skill.
“We don’t perceive it as a threat to be blocked but as a tool to be leveraged,” says Kevin Mullally, principal of Marist, a Roman Catholic institution serving grades seven through 12.
‘Are we allowed to do this?’
When Lisle first encountered ChatGPT, which is produced by OpenAI, in late 2022, “I was just astounded by what it was able to do,” he says. For the first six months or so, he experimented with the bot but didn’t incorporate it into his chemistry classes. He noticed that upgraded versions were far better than the original.
When classes resumed for the current semester, he encouraged students to open up a ChatGPT account if they hadn’t already. “One student said, ‘Wait a second! Are we allowed to do this?’ ” Lisle recalls. “That student felt like it might be cheating. I said absolutely not, as long as you have my permission to use it for whatever we’re doing. I’m explicit about when they should use it and when they shouldn’t.”
Students can use ChatGPT to help with homework but aren’t allowed to consult it during tests. Because of the risk students might rely too heavily on ChatGPT outside the classroom, Lisle is putting less emphasis on homework in determining students’ grades, and more weight on tests and in-school activities.
Early in the semester, his students designed experiments, without ChatGPT, to investigate why eating spicy food makes people sweat. Several students proposed to measure at intervals the body temperatures of people eating salsa to find out if they were heating up.
Lisle asked the students to feed their experiment plans into ChatGPT and seek feedback. Among other things, the bot suggested measuring temperatures of a control group of people who weren’t eating salsa to see whether other factors might be involved in changing body temperatures. It also suggested a way to prevent differences in water intake from distorting the data.
ChatGPT “had great suggestions for the kids,” Lisle says—and the students were able to recognize that not all of them were helpful. When Lisle’s students have questions, he says, they tend to ask him but also check what ChatGPT has to say.
Unreliable source
Some educators are far more cautious about the technology. Jesse J. Holland, associate director of the School of Media and Public Affairs at George Washington University, tells students in his advanced news-reporting courses not to use artificial intelligence for writing assignments. “You don’t know where it’s getting its information, and you don’t know how reliable that information is,” he says.
David Joyner, who oversees an online master’s degree program in computer science at the Georgia Institute of Technology, compares using AI to seeking help from classmates. He says it is fine to ask a fellow student to explain a concept or describe how a problem can be solved, but asking someone else to do your homework would be cheating. The same applies to working with bots, Joyner says.
That said, many educators say it would be hard to prove that students used ChatGPT for schoolwork outside the classroom. Turnitin, an Oakland, Calif.-based company, provides software designed to detect plagiarism and AI-generated writing, but some universities have rejected that option because of the risk that students will be falsely suspected of cheating. “From our experience, there is a 1% chance that we classify a document as having AI-written content when it doesn’t,” a Turnitin spokesperson says.
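To see why even a 1% false-positive rate gives universities pause, here is a back-of-the-envelope sketch. The 1% figure is Turnitin's; the submission counts are illustrative assumptions, not numbers from the article.

```python
# Back-of-the-envelope illustration of the false-positive concern.
# The 1% rate is from Turnitin's spokesperson; the submission
# counts below are assumptions chosen for illustration.
false_positive_rate = 0.01

for submissions in (100, 1_000, 10_000):
    expected_false_flags = submissions * false_positive_rate
    print(f"{submissions:>6} human-written submissions -> "
          f"~{expected_false_flags:.0f} falsely flagged as AI-written")
```

At the scale of a large university, even a small error rate translates into many students falsely suspected of cheating each term.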
One approach to combat misuse of the technology is to raise the bar for students, now that they have such a powerful tutor and research aide. “Good is no longer good enough,” says Christian Terwiesch, a management professor at the University of Pennsylvania’s Wharton School. “It has to be awesome.” In evaluating students’ work, he says, “my expectations are up.”
Teachers also want students to understand the technology’s limits. James Hendler, who teaches artificial intelligence at Rensselaer Polytechnic Institute, tells students to ask ChatGPT to write a two-page essay about a topic they already know well. Then the students are asked to correct mistakes made by the bot and find references to verify the statements they believe are true.
At Washington University in St. Louis, finance professor Mark Leary requires students to evaluate responses ChatGPT gives when asked to solve typical business problems. In one example, a firm had to decide whether to pay for a $1 million blast furnace early to get a $50,000 discount. ChatGPT framed the problem correctly but then mixed up the timing of the firm’s cash flows and incorrectly concluded that paying early was a good idea.
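The blast-furnace question is a standard time-value-of-money comparison, and a short sketch shows why getting the timing of the cash flows right matters. The article does not give the firm's discount rate or the payment schedule, so the rate and deferral period below are assumptions for illustration only.

```python
# Hedged illustration of the discounting logic behind the exercise.
# The discount rate (r) and deferral period (t) are assumptions;
# the article specifies only the $1 million price and $50,000 discount.
r = 0.08   # assumed annual discount rate
t = 1      # assumed deferral of the full payment, in years

pay_now = 1_000_000 - 50_000             # discounted price if paid early
pv_pay_later = 1_000_000 / (1 + r)**t    # present value of deferred payment

print(f"Pay now: ${pay_now:,.0f}  vs  PV of paying later: ${pv_pay_later:,.0f}")
# With these assumptions, deferring is cheaper in present-value terms:
# the $50,000 discount is worth less than the return on holding the
# cash for a year, so paying early is the wrong call.
```

Mixing up which payment occurs when, as ChatGPT did, flips the comparison and produces the opposite conclusion.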
Because bots can present plausible-sounding answers that turn out to be totally wrong, Leary and many other teachers make students responsible for checking the accuracy of any AI-derived content in their papers.
Boosting the basics
In some cases, teachers hope that students will use bots to shore up basic skills whose remediation would otherwise eat up a lot of time. For instance, teachers of math or science typically dislike providing remedial education in basic writing, so, as an alternative, students can ask bots to read their first drafts and suggest improvements. That would leave more time to explore higher-level ideas.
That is what Andres Zambrano does, with approval from his academic adviser at the University of Pennsylvania. Zambrano, a native of Colombia working on a Ph.D. in learning sciences and technologies, struggled with writing in English when he arrived in the U.S. last year. He regularly asks ChatGPT to revise his writing and often specifies the type of audience he is addressing. Zambrano says that studying the revisions made by ChatGPT has helped him improve his spoken and written English even when he isn’t using the bot. He still wants a human to provide the final editing for his papers.
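For readers curious what a workflow like Zambrano's looks like in practice, here is a minimal sketch of a revision request scripted against the OpenAI Python SDK. The model name, prompts, and sample draft are illustrative assumptions, not details from the article.

```python
# A minimal sketch of asking ChatGPT to revise a draft for a specified
# audience, using the OpenAI Python SDK (v1+). Model choice, prompts,
# and the draft text are assumptions made for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Our study examine how students adapts to new learning tools."
audience = "an academic committee reviewing a Ph.D. proposal"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[
        {"role": "system",
         "content": f"Revise the user's draft for {audience}. "
                    "Fix grammar and briefly explain each change."},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```

Asking the bot to explain each change, rather than just rewrite, is what lets a learner study the revisions the way Zambrano describes.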
Pingkun Yan, who teaches a class in analysis of biological images at Rensselaer, used to devote four or five lectures to teaching students how to write Python computer code. Now that students can get much of that knowledge from ChatGPT, he aims to wrap up the Python training in three lectures and move more quickly into analyzing images.
AI meets C.V.
With or without their teachers’ support, students are finding creative ways to use ChatGPT.
When Cassandra Harris was completing her application for editor of the Globe student newspaper at Point Park University in Pittsburgh earlier this year, she felt too exhausted to write a cover letter. So she fed information about her qualifications into ChatGPT. The artificial-intelligence tool wrote her cover letter.
Harris deemed the letter “pretty good” but edited it to make the language a bit more personal and human before sending her application to the Office of Student Affairs.
She got the job.
Harris later told one of her journalism professors, Andrew Conte, who was on the editor-selection committee. He was shocked: “I felt like I had been duped a little bit.” Still, Conte thinks Harris had demonstrated her writing talent in other ways.
In his newsgathering and reporting class, Conte tells students that ChatGPT shouldn’t be used as a substitute for their reporting and fact-checking. When an outside speaker was scheduled to address the class, he asked ChatGPT to produce a profile of the guest. It was full of errors and so proved to be a lesson for students who might be tempted by shortcuts.
Students are learning the limits of bots on their own. Thomas Tyndall, a screenwriting major at Point Park University, queried ChatGPT when he was writing a short movie with a scene depicting children pretending to be cowboys. “I asked for things cowboys would say before they shoot each other,” he says.
He was disappointed to find the answers predictable and unhelpful. They included a movie cliché: “This town ain’t big enough for the both of us.”
For now, teachers are searching for the right balance between taking advantage of ChatGPT’s virtues and avoiding overreliance. Shannon Juhan, an English teacher at Marist, supports the idea of using bots for brainstorming or help with grammar and clarity. She still wants students to develop their own styles of writing, however. She also wants them to reject some of the writing advice they get from bots: “I tell them you still can be smarter than the computer.”