Much has been made of artificial intelligence’s potential to revolutionize education. AI is making it increasingly possible to break down barriers so that no student is ever left behind.
This potential is real, but only if we ensure that all learners benefit.
Far too many students, especially those with special needs, do not progress academically as well as their peers. Meanwhile, digital media, heavily reliant on visuals and text, with audio often secondary, is playing an increasing role in education.
For most users, this is fine. But not for blind or deaf students, whose sensory limitations frequently impede their access to quality education. The stakes are much higher for these students, and digital media often underserves them.
That’s why the development of AI-powered tools that can accommodate all learners must be a priority for policymakers, districts and the education technology industry.
Good instruction is not a one-way street where students simply absorb information passively. For learning content to be most effective, the student must be able to interact with it. But doing so can be especially challenging for students with special needs working with traditional digital interfaces.
A mouse, trackpad, keyboard or even a touch screen may not always be appropriate for a student’s sensory or developmental capabilities. AI-driven tools can enable more students to interact in ways that are natural and accessible for them.
For blind and low-vision students
For blind and low-vision students, digital classroom materials have historically been difficult to use independently. Digital media is visual, and to broaden access, developers usually have to manually code descriptive information, such as the alt text read by screen readers, into every interface.
These technologies also often impose a rigid information hierarchy that the user must tab through with keys or gestures. The result is a landscape of digital experiences that blind and low-vision students either cannot access at all or experience in a form that lacks the richness of the original.
For these students, AI-powered computer vision offers a solution — it can scan documents, scenes and apps and then describe visual elements aloud through speech synthesis. Coupled with speech recognition, this allows seamless conversational navigation without rigid menus or keyboard commands.
Free tools like Ask Envision and Be My Eyes demonstrate this potential. Using just an AI-enabled camera and microphone, these apps can capture and explain anything the user points them toward, and then answer follow-up questions.
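For the technically curious, the scan-and-speak loop is simple enough to sketch. The following is a minimal illustration in Python using the pytesseract OCR library and the pyttsx3 offline text-to-speech engine; it is not how Ask Envision or Be My Eyes actually work, since production tools rely on far richer vision-language models. The "worksheet.png" file name is a hypothetical placeholder.

```python
# A minimal sketch of the "scan, then speak" loop described above,
# using pytesseract (OCR) and pyttsx3 (offline text-to-speech).
# Real assistive tools use richer scene-description models;
# "worksheet.png" is a hypothetical input file.
from PIL import Image
import pytesseract
import pyttsx3

def read_document_aloud(image_path: str) -> str:
    # Extract any printed text the camera captured.
    text = pytesseract.image_to_string(Image.open(image_path))
    # Speak the recovered text through the system's TTS voice.
    engine = pyttsx3.init()
    engine.say(text if text.strip() else "No readable text found.")
    engine.runAndWait()
    return text

if __name__ == "__main__":
    read_document_aloud("worksheet.png")
```

Pairing the same camera input with speech recognition is what turns this one-way readout into the conversational navigation described above.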
These technologies have the potential to allow blind and low-vision students to get the full benefit of the same engaging, personalized ed tech experiences that their peers have been using for years.
For deaf and hard-of-hearing students
In some ways, the visually oriented world of digital media is an ideal fit for deaf and hard-of-hearing students. Audio is often a secondary consideration, particularly once users can read.
When audio is required for comprehension, as with video, the accommodation most digital developers provide is text-based captioning. Unfortunately, captions help only users who are already proficient readers.
For younger learners, or any learner who does not read fluently or quickly, translation into sign language is a preferable solution. AI can be of service here, translating speech and text into animated signs while computer vision reads the user’s gestures and translates them into text or commands.
There are some early developments in this area, but more work is needed to create a fully sign language-enabled solution.
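One building block does exist today: hand tracking. The sketch below, using Google's MediaPipe library, shows how a single webcam frame can be reduced to 21 hand landmarks per hand, the kind of raw input a sign-language recognizer would consume. Turning those landmarks into recognized signs requires a trained classifier, which this sketch deliberately leaves out.

```python
# A sketch of one building block of gesture reading: MediaPipe's hand
# tracker turns a webcam frame into 21 hand landmarks, the raw input a
# sign-language classifier would consume. Mapping landmarks to actual
# signs requires a trained model, which this sketch omits.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def capture_hand_landmarks() -> None:
    cap = cv2.VideoCapture(0)  # default webcam
    with mp_hands.Hands(max_num_hands=2,
                        min_detection_confidence=0.5) as hands:
        ok, frame = cap.read()
        if ok:
            # MediaPipe expects RGB; OpenCV captures BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    # 21 (x, y, z) points per hand, normalized to the frame.
                    print([(lm.x, lm.y, lm.z) for lm in hand.landmark])
    cap.release()
```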
For the youngest learners
For young learners, even those without diagnosed disabilities, developmentally appropriate interaction with conventional desktop and mobile apps remains a challenge. Most young children cannot yet read or write, which makes text-based interfaces unusable for them. And their fine motor control is still developing, which makes a mouse, keyboard or trackpad harder to use.
AI voice controls address these problems by letting students simply speak requests or responses, a more natural interaction for pre-readers and pre-writers. Allowing a child to ask for what they want and answer questions aloud gives them a more active role in their learning.
Voice control may also enable a more reliable assessment of their knowledge, because there are fewer confounding variables when the student does not have to translate what they understand into an input a computer can parse.
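A spoken-answer check can be assembled from off-the-shelf parts. The rough sketch below uses the speech_recognition package, sending the child's audio to Google's free web recognizer; the question and the string-matching "grading" are hypothetical placeholders, and a real classroom tool would need speech models tuned to children's voices and far more careful answer matching.

```python
# A rough sketch of a spoken-answer check using the speech_recognition
# package and Google's free web recognizer. The question and the
# substring "grading" are illustrative placeholders only.
import speech_recognition as sr

def ask_aloud(expected_answer: str) -> bool:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:  # requires PyAudio
        print("How many legs does a spider have? Speak your answer.")
        audio = recognizer.listen(source)
    try:
        heard = recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return False  # the speech was not intelligible
    return expected_answer in heard

if __name__ == "__main__":
    print("Correct!" if ask_aloud("eight") else "Let's try again.")
```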
Computer vision can also replace text-based interactions outright. For example, username/password login forms can be swapped for QR codes; many school-oriented systems have already done so.
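That pattern is simple to sketch with two small Python libraries: qrcode generates a printable badge and pyzbar decodes it from a camera image. The token format below is a hypothetical placeholder; a real system would tie the code to an authenticated account rather than a bare string.

```python
# A sketch of the QR-code login pattern: the school issues each child a
# printed badge encoding a sign-in token, and the app's camera decodes
# it instead of asking for a typed username and password.
# The token format is hypothetical.
import qrcode
from pyzbar.pyzbar import decode
from PIL import Image

def make_badge(student_token: str, path: str) -> None:
    qrcode.make(student_token).save(path)  # printable badge image

def read_badge(path: str) -> str:
    results = decode(Image.open(path))
    return results[0].data.decode("utf-8") if results else ""

make_badge("student:ab12cd34", "badge.png")
print(read_badge("badge.png"))  # -> student:ab12cd34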
Computer vision can also be used to enable interactions between the physical and digital world. A student can complete a task by writing or drawing on paper or constructing something from objects, and a computer can “see” and interpret their work.
Using physical objects can be more developmentally appropriate for teaching certain concepts. For example, having a child count with actual objects is often better than using digital representations. Traditional methods can also be more accurate in some cases, such as practicing handwriting with pencil and paper instead of a mouse or trackpad.
Even without physical objects, computer vision can enable the assessment of kinesthetic learning, like counting on fingers or clapping out the syllables in a word.
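Building on the hand landmarks from the earlier sign-language sketch, a finger-count check reduces to a simple geometric heuristic. This is a toy version of the idea, not a robust assessment tool: it assumes an upright hand and ignores the thumb for simplicity.

```python
# A toy heuristic over MediaPipe hand landmarks (see the earlier
# sketch) that counts extended fingers, so an app could check an
# answer like "hold up three fingers." Indices follow MediaPipe's
# hand model; the upright-hand assumption and the thumb's omission
# keep the illustration simple at the cost of robustness.
FINGER_TIPS = (8, 12, 16, 20)   # index, middle, ring, pinky fingertips
FINGER_PIPS = (6, 10, 14, 18)   # the PIP joint below each fingertip

def count_extended_fingers(landmarks) -> int:
    # Image y-coordinates grow downward, so on an upright hand an
    # extended finger's tip has a smaller y than its PIP joint.
    return sum(
        landmarks[tip].y < landmarks[pip].y
        for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
    )

def is_correct(landmarks, expected_count: int) -> bool:
    return count_extended_fingers(landmarks) == expected_count
```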
A major hurdle in education is that although every student is unique, we have not had the tools or resources to truly tailor their learning to their individualized strengths and needs. AI technology has the potential for transformative change.
The responsibility falls on all of us — districts, policymakers and the ed tech industry — to collaborate and ensure that AI-powered accessibility becomes the norm, not the exception.
We must share knowledge and urgently advocate for policies that prioritize and fund the swift deployment of these game-changing tools to all learners. Accessibility can’t be an afterthought; it must be a top priority baked into every program, policy and initiative.
Only through concerted efforts can we bring the full potential of accessible AI to every classroom.
Diana Hughes is the vice president of Product Innovation and AI at Age of Learning.
This story about AI and special needs students was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.