High-Ability Students Offer Nuanced Responses to the Rise of AI Models in Schools


By Kathryn Fishman-Weaver, Ph.D., Computers & Technology Network

As a school leader, I was proud of our balanced response to the 2022 public release of ChatGPT. Together, we worked to understand the new technology, with many of us signing up for courses, webinars, and discussion groups to learn as much as possible. We kept our focus on supporting learning, and as our policies and practices evolved, we modeled empathy and affirmed high expectations for integrity to address misuse. Today, our assignments include an AI assessment scale, and our teachers encourage conversations about authenticity, voice, bias, and accuracy.

While this work matters, the high-ability students in my programs offer a more complex response to the rise of AI models in the classroom. Their perspective, grounded in long-studied traits of gifted learners (Giudice, 2024) such as entelechy (roughly, self-determination), high expectations, and a sense of justice, is an important addition to conversations about the educational implications of large language models (LLMs). A 2025 phenomenological study by Bedir, Benek, Yuca, and Donmez suggests that gifted learners may bring important ethical considerations, awareness of societal impacts, and questions of responsibility to the field. I first noticed this more nuanced conversation in my classroom in the fall of 2024.

I was working with a group of high-ability students on an action research project. As part of the project, each student was tasked with developing a fictional character. Our primary learning target was character development. Each student was asked to write a short hero’s journey for their character. The class decided it also would be fun to create an artistic illustration of their characters. I used this opportunity to model using AI in a creative way. Drawing on two different AI models, I generated a sample illustration, and cited both the sources and prompts I had used to create a delightful pink dolphin character in a cozy teal sweater and pearls.

My students were floored. Visibly upset, they exchanged sideways looks. A young woman raised her hand, speaking for the class. “Look,” she said, “we don’t like this. This is our project, and it needs to be completed by us, not AI. If you need help with your sketch, I am happy to paint you a watercolor dolphin.”

“Yes,” added another student, “and then we could get the eyes and dorsal fin where they actually belong.”

Entelechy: Self-Actualization in Learning

High-ability students often possess an intense, innate drive for self-actualization (also known as entelechy). This drive includes high expectations for work, especially independent work; ownership over projects; and a need for agency in the learning process. Are AI tools supporting high-ability learners’ needs for entelechy, or limiting them? The answer, it seems, is complicated.

“I am not using AI to support my learning in any way, shape, or form,” says one high school student. “It does not deepen my thinking. It limits my thinking in the fact that in the few times I attempted to use it, I found myself correcting it. I find it is clunky and distrustworthy.”

Read: I will paint the dolphin myself; and unlike AI, I know where the dorsal fin goes.

The high-ability learners I talked with say their annoyance with AI in the classroom is compounded by restrictive use policies that sometimes limit students' access to legitimate sources and digital tools. For example, one student told me, “I don't use AI for school; I'm just inconvenienced by people who do, because then teachers try to make it harder for them, and it makes it harder for everyone else, too.”

Students voiced frustration with locked-down, browser-blocking word-processing tools such as Google Drive. Further, they told me the return to paper assignments limits their narrative creativity, as they can’t easily move sentences or paragraphs around for better organization. For students with a propensity toward perfectionism, the inability to present clean final papers to their teachers can be unnerving. For twice-exceptional (2E) students with dyslexia and dysgraphia, the shift has presented additional challenges.

The rise of AI tools in the classroom has also led to new conversations about pedagogy. As those of us who have worked in gifted education for years know, high-ability students often hold those around them to the same expectations for independent and critical thinking. As one middle schooler told me, “AI should only be used as a very last resort for teachers [and] only for the most tedious tasks.”

To be transparent, I have personally found AI to be a useful brainstorming partner for refining learning objectives, noticing patterns in my lesson materials, and generating ideas for creative activities and performance-based assessments. “AI's transformative potential in gifted education,” writes Kristen Seward (2025), “is marked by its capability to personalize learning, create adaptive curricula, and facilitate teacher development.” Further, as the Davidson Academy of Nevada (2025) notes, the ability to generate differentiated learning materials can offer new entry points for the specialized interests of gifted learners. These materials can even be customized to the unique and asynchronous needs of the learners in your classes.

While teachers may be finding new possibilities to harness AI tools to support entelechy and differentiation, introducing its use into assignments may be met with dissonance from gifted learners. Further, the students in our gifted programs offer an important reminder to evaluate how the policies we make to safeguard learning can undermine those very intentions.

Justice: Opportunities, Risks, and Global Issues

The unique social-emotional needs of high-ability learners often include a sense of justice, moral judgment, or altruism. As these students are acutely aware, AI presents both layered potential and serious risks related to justice. “The environmental impact alone is appalling,” one ninth grader told me, citing research such as a two-part MIT News story (Zewe, 2025) on the increased carbon dioxide emissions, pressure on the electric grid, and the water needed to train and deploy generative AI models.

Beyond these resource costs, our students are skeptical of potential harm from AI outputs. They mention the ways AI can perpetuate bias and misinformation. “It has no place in gifted classes,” a student told me emphatically, “unless,” they paused, “it is a kid interested in creating their own AI for good.”

For example, last school year two high-ability Mizzou Academy juniors developed their own AI model to support recycling and sustainable waste management in their local community. Everyone rightly celebrated this project, which leveraged AI for a clear moral purpose. This wasn’t about playing with image generators; it was about creating meaningful change. Some of the moral lessons AI technologies have brought to our classrooms have been both hard to predict and highly engaging for gifted learners.

A 2025 interview with MIT’s Justin Reich suggests that while many aspects of the development of generative AI are new, others are familiar. As “technology gets integrated into practices,” Reich suggests the impacts are usually “smaller, narrower and more targeted than people expect. People also tend to underestimate the mixed results that come down the line. You think one thing will happen, but often it’s another.”

For all these reasons, it is important that we continue to proceed with caution, critical thinking, and creativity, making sure that our practices support learning, learners, and the greater good.

“AI’s potential to advance medical research is truly exciting,” admitted an otherwise AI-critical student. 

“Yes,” added a classmate, “there are so many factors. It can be great, but it can also be terrible.”

References

Bedir, G., Benek, I., Yuca, E., & Donmez, I. (2025). Gifted students’ perceptions of artificial intelligence through drawings: A perspective from science and art centers. Journal of Education in Science, Environment and Health, 11(2), 126–139.

Boryga, A. (2025, September 11). How to make purposeful decisions about generative AI in your school. Edutopia. https://www.edutopia.org/article/purposeful-decisions-school-ai-policies/

Davidson Academy of Nevada. (2025, August 13). The role of AI in gifted education today: Benefits & boundaries. https://www.davidsonacademy.unr.edu/blog/the-role-of-ai-in-gifted-education-today-benefits-boundaries/

Giudice, A. (2024). The characteristics of giftedness in women. ResearchGate. https://www.researchgate.net/publication/380711683_The_characteristics_of_giftedness_in_women

National Association for Gifted Children. (n.d.). Glossary of terms. https://www.nagc.org/glossary-of-terms

Seward, K. (2025, April 3). Using artificial intelligence to transform curriculum for gifted students and for teachers. National Association for Gifted Children. https://www.nagc.org/news/using-artificial-intelligence-to-transform-curriculum-for-gifted-students-and-for-teachers

Zewe, A. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117