Tech companies don’t care that students use their AI agents to cheat

Nov 04, 2025 08:20 PM

AI companies know that children are the future — of their business model. The industry doesn’t hide its attempts to hook young people on its products through well-timed promotional offers, discounts, and referral programs. “Here to help you through finals,” OpenAI said during a giveaway of ChatGPT Plus to college students. Students get free yearlong access to Google’s and Perplexity’s pricey AI products. Perplexity also pays referrers $20 for each US student they get to download its AI browser, Comet.

The popularity of AI tools among teens is astronomical. Once the product makes its way through the education system, it’s the teachers and students who are stuck with the repercussions: teachers struggle to keep up with new ways their students are gaming the system, and their students are at risk of not learning how to study at all, educators warn.

This has gotten even more automated with the newest AI technology, AI agents, which can complete online tasks for you. (Albeit slowly, as The Verge has seen in tests of several agents on the market.) These tools are making things worse by making it easier to cheat. Meanwhile, tech companies play hot potato with the responsibility for how their tools can be used, often just blaming the students they’ve empowered with a seemingly unstoppable cheating machine.

Perplexity actually appears to lean into its reputation as a cheating tool. It released a Facebook ad in early October that showed a “student” discussing how his “peers” use Comet’s AI agent to do their multiple-choice homework. In another ad posted the same day to the company’s Instagram page, an actor tells students that the browser can take quizzes on their behalf. “But I’m not the one telling you this,” she says. When a video of Perplexity’s agent completing someone’s online homework — the exact use case in the company’s ads — appeared on X, Perplexity CEO Aravind Srinivas reposted the video, quipping, “Absolutely don’t do this.”

When The Verge asked for a response to concerns that Perplexity’s AI agents were being used to cheat, spokesperson Beejoli Shah said that “every learning tool since the abacus has been used for cheating. What generations of wise people have known since antiquity is cheaters in school ultimately only cheat themselves.”

This fall, soon after the AI industry’s agentic summer, educators began posting videos of these AI agents seamlessly filing assignments in their online classrooms: OpenAI’s ChatGPT agent generating and submitting an essay on Canvas, one of the popular learning management dashboards; Perplexity’s AI assistant successfully completing a quiz and generating a short essay.

In another video, ChatGPT’s agent pretends to be a student on an assignment meant to help classmates get to know each other. “It actually introduced itself as me … so that kind of blew my mind,” the video’s creator, college instructional designer Yun Moh, told The Verge.

Canvas is the flagship product of parent company Instructure, which claims to have tens of millions of users, including those at “every Ivy League school” and “40% of U.S. K–12 districts.” Moh wanted the company to block AI agents from pretending to be students. He asked Instructure in its community ideas forum and sent an email to a company sales rep, citing concerns of “potential abuse by students.” He included the video of the agent doing Moh’s fake homework for him.

It took about a month for Moh to hear from Instructure’s executive team. On the subject of blocking AI agents from their platform, they seemed to suggest that this was not a problem with a technical solution, but a philosophical one, and in any case, it should not stand in the way of progress:

“We believe that instead of simply blocking AI altogether, we want to create new pedagogically sound ways to use the technology that actually prevent cheating and create greater transparency in how students are using it.

“So, while we will always support work to prevent cheating and protect academic integrity, like that of our partners in browser lockdown, proctoring, and cheating detection, we will not shy away from building powerful, transformative tools that can unlock new ways of teaching and learning. The future of education is too important to be stalled by the fear of misuse.”

Instructure was more direct with The Verge: Though the company has some guardrails verifying certain third-party access, Instructure says it can’t block external AI agents and their unauthorized use. Instructure “will never be able to completely disallow AI agents,” and it cannot control “tools running locally on a student’s device,” spokesperson Brian Watkins said, clarifying that the issue of students cheating is, at least in part, technological.

Moh’s team struggled as well. IT professionals tried to find ways to detect and block agentic behaviors like submitting multiple assignments and quizzes very quickly, but AI agents can change their behavioral patterns, making them “extremely elusive to identify,” Moh told The Verge.

In September, two months after Instructure inked a deal with OpenAI, and one month after Moh’s request, Instructure sided against a different AI tool that educators said helped students cheat, as The Washington Post reported. Google’s “homework help” button in Chrome made it easier to run an image search of any part of whatever is on the browser — such as a quiz question on Canvas, as one math teacher showed — through Google Lens. Educators raised the alarm on Instructure’s community forum. Google listened, according to a response on the forum from Instructure’s community team, in an example of the two companies’ “long-standing partnership” that includes “regular discussions” about education technology, Watkins told The Verge.

When asked, Google maintained that the “homework help” button was just a test of a shortcut to Lens, a preexisting feature. “Students have told us they value tools that help them study and understand things visually, so we have been running tests offering an easier way to access Lens while browsing,” Google spokesperson Craig Ewer told The Verge. The company paused the shortcut test to incorporate early user feedback.

Google leaves open the possibility of future Lens/Chrome shortcuts, which it’s difficult to imagine won’t be marketed to students given the existence of a recent company blog post, written by an intern, declaring: “Google Lens in Chrome is a lifesaver for school.”

Some educators found that agents would occasionally, but inconsistently, refuse to complete academic assignments. But that guardrail was easy to overcome, as college English teacher Anna Mills showed by instructing OpenAI’s Atlas browser to submit assignments without asking for permission. “It’s the wild west,” Mills said to The Verge about AI use in higher education.

This is why educators like Moh and Mills want AI companies to take responsibility for their products, not blame students for using them. The Modern Language Association’s AI task force, which Mills sits on, released a statement in October calling on companies to give educators control over how AI agents and other tools are used in their classrooms.

OpenAI appears to want to distance itself from cheating while maintaining a future of AI-powered education. In July, the company added a study mode to ChatGPT that does not provide answers, and OpenAI’s vice president of education, Leah Belsky, told Business Insider that AI should not be used as an “answer machine.” Belsky told The Verge:

“Education’s role has always been to prepare young people to thrive in the world they’ll inherit. That world now includes powerful AI that will shape how work gets done, what skills matter, and what opportunities are available. Our shared responsibility as an education ecosystem is to help students use these tools well—to enhance learning, not subvert it—and to reimagine how teaching, learning, and assessment work in a world with AI.”

Meanwhile, Instructure leans away from trying to “police the tools,” Watkins emphasized. Instead, the company claims to be working toward a mission to “redefine the learning experience itself.” Presumably, that vision does not include constant cheating, but its proposed solution rings similar to OpenAI’s: “a collaborative effort” between the companies creating the AI tools and the institutions using them, as well as teachers and students, to “define what responsible AI use looks like.” That is a work in progress.

Ultimately, the enforcement of whatever guidelines for ethical AI use they eventually come up with on panels, in think tanks, and in corporate boardrooms will fall on the teachers in their classrooms. Products have been released and deals have been signed before those guidelines have even been established. Apparently, there’s no going back.
