Consider a teenager, Jorge, who is caught possessing a large amount of marijuana by a school administrator and will be expelled if he’s reported to his parole officer. If the administrator does not report him, they’re breaking the law; if they do, they’re condemning him to one of the worst schools in the city and likely recidivism. 

This is a case study we presented to a class of 60 students at the Harvard Graduate School of Education. We asked them to pretend to be a teacher or administrator at the school and design a course of action. One hour into their conversation, we presented them with ChatGPT’s analysis of the study.

The program suggested several anodyne solutions: “We must initiate a review of [the school’s] existing policies and procedures related to substance abuse, with the goal of ensuring they are consistent, transparent, and reflective of best practices … The school should take a compassionate approach [but] also communicate clearly that drug abuse and related offenses will not be tolerated … This approach should be taken while ensuring that the school is responsive to the unique needs of its students, particularly those from low-income and working-class backgrounds.”   

Our graduate students initially performed no better than this chatbot. They, too, were prone to regurgitating the same tired discourse around justice, equity, and education—discourse that sounds appealing but lacks substance, offering no concrete course of action beyond the vague, virtuous goals it is meant to serve. As one student commented, “We were just saying formulaic, buzzworthy stuff, instead of talking about anything new like we said we wanted to when class started.”

The students were also visibly taken aback at how closely ChatGPT’s solutions mirrored their own. They spoke of how terrifying it was that these solutions sounded exactly like what a school would implement. Then they questioned themselves and their ability to come up with solutions that differed from what others had been recreating for so long. They expressed feeling stuck in a “loop.” One student tried to ease the tension by dismissing ChatGPT’s contribution as “not really saying anything.” Another challenged him: “Did we really say anything?”

Yet it was only after ChatGPT reflected the students’ failure of imagination back to them that they could begin to think of options that they, or any automatic language scrawler, would not have readily reached for. They realized that the case was framed entirely from the perspective of administrators, and that their earlier discussion had left no room for action involving teachers, students, and Jorge himself.

The students began questioning the logic and legitimacy of the existing structures, such as schooling and juvenile justice, that shape their choices and outcomes, and began to propose new, more creative approaches to Jorge’s case. One student joked that the teachers, en masse, should smoke weed with Jorge (that is, make themselves into targets for law enforcement rather than remaining innocent bystanders). Another spoke of abolishing schools. A third gave an example of grandmothers who destroyed public property in pursuit of environmental justice. These ideas may seem nonsensical—but then, anything that disrupts existing patterns of thinking is quite likely to sound, at least at first, like nonsense.

By the end of the discussion, students had not only explored their immediate, conscience-clearing responses to Jorge’s case, but also considered potential collective actions. They began to realize that it is possible both to respect the law and to refuse it, if sufficient collective power has been established. For instance, they could turn Jorge in while simultaneously threatening to go on strike if he were expelled—acting neither as mere administrators nor as mere saviors. Rather than abolishing schools altogether, they could shut down this one school.