{"id":116499,"date":"2026-03-09T06:03:44","date_gmt":"2026-03-09T09:03:44","guid":{"rendered":"https:\/\/tech.einnews.com\/article\/898249279"},"modified":"2026-03-09T06:03:44","modified_gmt":"2026-03-09T09:03:44","slug":"real-time-data-shows-exactly-how-students-use-ai-on-school-technology","status":"publish","type":"post","link":"https:\/\/new7.shop\/zerocostfreehost\/index.php\/2026\/03\/09\/real-time-data-shows-exactly-how-students-use-ai-on-school-technology\/","title":{"rendered":"Real-Time Data Shows Exactly How Students Use AI on School Technology"},"content":{"rendered":"<div><img data-opt-id=341113800  fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/epe.brightspotcdn.com\/dims4\/default\/4b071e3\/2147483647\/strip\/true\/crop\/7167x4792+0+0\/resize\/942x630!\/quality\/90\/?url=https%3A%2F%2Fepe-brightspot.s3.us-east-1.amazonaws.com%2Fda%2F7b%2F5bebea35460da2475ce945d3045d%2Fai-risks-trojan-horse-032026-1619145850.jpg\" class=\"ff-og-image-inserted\"><\/div>\n<p>Roughly one in five student interactions with generative artificial intelligence on school technology involved cheating, self-harm, bullying and other problematic behaviors, according to data collected and analyzed by Securly, a company offering internet filtering and other safety services.<\/p>\n<p>What\u2019s more, Securly identified roughly 1 in 50 student-AI interactions as red flags that students might be involved in violence, cyberbullying, or self-harm. <\/p>\n<p><a class=\"a-link\" href=\"https:\/\/www.securly.com\/\" target=\"_blank\">Securly\u2019s analysis<\/a> looked at nearly 1.2 million interactions in more than 1,300 districts from Dec. 1, 2025, to Feb. 
20, 2026.<\/p>\n<p>Educators should take heart that most of the time, students use AI appropriately, said Tammy Wincup, the CEO of Securly, whose competitors include GoGuardian and Lightspeed Systems.<\/p>\n<p>\u201cWhen a district actually sets some guardrails and policies around their AI usage in schools, 80% of the conversations happening are within the district\u2019s policies,\u201d Wincup said. \u201cThat\u2019s the good news on the learning side of the house.\u201d<\/p>\n<h2>Why the usage data is so \u2018fascinating\u2019<\/h2>\n<p>The analysis offers an early window into how students actually use generative AI tools. Most other research on student usage of AI comes from <a class=\"a-link\" href=\"https:\/\/www.edweek.org\/technology\/are-teens-just-using-ai-to-cheat-well-not-quite-if-you-ask-them\/2026\/03\" target=\"_blank\">surveys<\/a>, which rely on student self-reporting.<\/p>\n<p>Securly\u2019s data shows \u201cwhat are students really doing when they\u2019re writing text into generative AI,\u201d said Jeremy Roschelle, the co-executive director of learning science research for Digital Promise, a nonprofit organization that works on equity and technology issues in schools. <\/p>\n<p>\u201cThat\u2019s why it\u2019s fascinating,\u201d he said. <\/p>\n<p>In November, Securly began allowing district officials to set parameters around students\u2019 AI use, similar to the way they ask the company to filter out particular types of websites. <\/p>\n<p>If districts opt to use this feature, large language models will \u201cdeflect\u201d any student query to AI that falls outside district policy. <\/p>\n<p>For instance, if a student tries to use AI to complete an assignment, large language models may instead point to information on the general topic but won\u2019t supply an exact answer. 
Or if a student asks about dosing for a particular medication, the tool will tell them to ask a trusted adult for help.<\/p>\n<p>Nearly all the deflected student queries\u201495%\u2014were from students trying to get AI tools to complete their schoolwork for them.<\/p>\n<p>That percentage didn\u2019t surprise Wincup. She expects that when districts allow students to use large language models on school networks and devices, kids will \u201cexperiment with understanding the guardrails\u201d placed around the tools and try to get around those guardrails.<\/p>\n<p>Another 2% of the interactions identified as inappropriate related to games. A little less than 1% dealt with sexual content, and a similar percentage concerned firearms or hunting. Gambling, drugs, and hate (such as racism and antisemitism) each comprised roughly 0.5% of flagged interactions.<\/p>\n<p>Though only 2% of interactions were identified as potentially unsafe, that represents more than 24,000 queries overall. And some of the questions students asked AI were troubling.<\/p>\n<p>For instance, one student directed a large language model to help draft an email to their mother explaining they had suicidal thoughts.<\/p>\n<p>Another student conducted a quick series of internet searches, including \u201cWhat\u2019s the main nerve in the forearm?\u201d and \u201cWhat nerve near the wrist carries blood?\u201d Then the student switched to an AI tool, asking it how to commit suicide. (In both of these cases, the identity of the student was \u2018unmasked\u2019 by Securly, and district officials were made aware of the safety issues.)<\/p>\n<h2>Students used ChatGPT more often than large language models created for K-12 schools<\/h2>\n<p>Overall, Securly detected a higher percentage of potentially unsafe AI interactions\u20142%\u2014than potentially unsafe student internet searches, 0.4%.<\/p>\n<p>It\u2019s too early to pinpoint an exact explanation for that discrepancy, Wincup said. 
She noted that Securly has had many years to hone its system for recognizing when a student\u2019s internet searches may be a sign of danger, while its work with AI interactions is brand new.<\/p>\n<p>Roschelle, meanwhile, is curious about what, exactly, students asked AI in the 80 percent of interactions that were deemed appropriate for school. <\/p>\n<p>How did their prompts and AI\u2019s responses help\u2014or hinder\u2014their understanding of an assignment, an issue, or the world around them, he wondered.<\/p>\n<p>\u201cWhat we want to do is make sure [AI] is not just appropriate, but is actually valuable for student learning,\u201d Roschelle said.<\/p>\n<p>The analysis also revealed which large language models students use most often. <\/p>\n<p>ChatGPT is by far the most popular, accounting for 42% of interactions. Securly\u2019s AI Chat made up 28%. Google\u2019s Gemini comprised 21%. And other ed-tech tools that embed AI features\u2014including MagicSchool, SchoolAI and BriskTeaching\u2014comprised 9%. (That data isn\u2019t nationally representative because only districts that use Securly have access to Securly AI. But Wincup believes \u201cbig tech\u201d large language models are probably most popular in all districts.)<\/p>\n<p>AI puts education technology leaders in a new position, Wincup said.<\/p>\n<p>\u201cThey\u2019re no longer just buying things and setting things up like this,\u201d she said. This is a moment \u201cwhere they have to have visibility in order to help their district make not just great tech decisions but make great teaching and learning decisions.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&#8230; generative artificial intelligence on school <span class=\"match\">technology<\/span> involved cheating, self-harm, &#8230; 21%. 
And other ed-<span class=\"match\">tech<\/span> tools that embed AI &#8230; AI. But Wincup believes \u201cbig <span class=\"match\">tech<\/span>\u201d large language models are &#8230; district make not just great <span class=\"match\">tech<\/span> decisions but make great &#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"fifu_image_url":"","fifu_image_alt":"","footnotes":""},"categories":[1],"tags":[],"class_list":["post-116499","post","type-post","status-publish","format-standard","hentry","category-news","wpcat-1-id"],"_links":{"self":[{"href":"https:\/\/new7.shop\/zerocostfreehost\/index.php\/wp-json\/wp\/v2\/posts\/116499","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/new7.shop\/zerocostfreehost\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/new7.shop\/zerocostfreehost\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/new7.shop\/zerocostfreehost\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/new7.shop\/zerocostfreehost\/index.php\/wp-json\/wp\/v2\/comments?post=116499"}],"version-history":[{"count":0,"href":"https:\/\/new7.shop\/zerocostfreehost\/index.php\/wp-json\/wp\/v2\/posts\/116499\/revisions"}],"wp:attachment":[{"href":"https:\/\/new7.shop\/zerocostfreehost\/index.php\/wp-json\/wp\/v2\/media?parent=116499"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/new7.shop\/zerocostfreehost\/index.php\/wp-json\/wp\/v2\/categories?post=116499"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/new7.shop\/zerocostfreehost\/index.php\/wp-json\/wp\/v2\/tags?post=116499"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}