
4 Steps to Responsible AI Implementation — Campus Technology
Researchers at the University of Kansas Center for Innovation, Design & Digital Learning (CIDDL) have published a new framework for the responsible implementation of artificial intelligence at all levels of education. Developed under a cooperative agreement with the United States Department of Education, the framework “is intended to provide guidance on how schools can incorporate AI into [their] daily operations and curriculum,” the university said in a news announcement.
The document offers four key recommendations:
1) Establish a stable, human-centered foundation.
“Prioritize educator judgment, student relationships, and family input in all AI-enabled processes,” the framework advises. “Avoid overreliance on automation for decisions that affect learning trajectories, behavior responses, or instructional placement.” In addition, the report emphasizes the importance of transparency and compliance with data protection laws.
2) Implement future-focused strategic planning for AI integration.
Here, the report advises establishing an AI integration task force with representation from educators, administrators, families, legal advisors, instructional technology staff, and special education personnel. It also recommends conducting an audit and risk analysis before adopting AI tools, and prioritizing tools that align with strategic goals.
3) Ensure AI educational opportunities for every student.
Schools should require that AI tools offer multiple means of content access, multiple options for student response and expression, and customizable supports for executive function and focus, as well as meet accessibility guidelines. The framework also cautions schools to safeguard against algorithmic misjudgment by prohibiting AI tools from making final decisions on IEP eligibility or other service decisions, disciplinary actions, or student progress decisions.
4) Conduct ongoing evaluation, professional learning, and community development.
The framework recommends that schools establish continuous review and improvement loops, prepare educators as informed users and decision-makers, and build AI readiness and digital judgment across the institution.
“We see this framework as a foundation,” said James Basham, director of CIDDL and professor of special education at the University of Kansas, in a statement. “As schools consider forming an AI task force, for example, they’ll likely have questions on how to do that, or how to conduct an audit and risk analysis. The framework can help guide them through that, and we’ll continue to build on this.”
“The priority at CIDDL is to share transparent resources for educators on topics that are trending and in a way that is easy to digest,” said Angelica Fulchini Scruggs, research associate and operations director for CIDDL. “We want people to join the community and help them know where to start. We also know this will evolve and change, and we want to help educators stay up to date with those changes to use AI responsibly in their schools.”
The full report is available on the CIDDL site.
About the Author
Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].