
Why AI Strategy Matters (and Why Not Having One Is Risky)
More than a passing trend, artificial intelligence is quickly becoming a baseline way of working in higher education. AI usage is evolving rapidly, influencing everything from student success to operational efficiency. If your institution hasn’t started developing an AI strategy, you are likely putting yourself and your stakeholders at risk, particularly around ethical use, responsible pedagogical and data practices, and innovative exploration.
You and your team won’t have all the answers today, and that’s okay. AI is advancing daily, and by establishing a strategic foundation now, your institution can stay agile and aligned with its mission, vision, and goals to serve learners as the education sector continues to evolve its usage of AI globally.
The topic of AI strategy was the focus of a multi-institutional presentation titled “Why AI Strategy Matters (and Why Not Having One Is Risky),” led by Vincent Spezzo of the Georgia Institute of Technology and Dana Scott of Thomas Jefferson University, at 1EdTech Consortium’s 2025 Learning Impact Conference in Indianapolis. Attendance was standing room only, and participation was robust.
The Reality Is: Most Institutions Are Still Figuring It Out
The session opened with a survey of essential questions for participants in the room, and the results were consistent with other reports from 1EdTech working groups, conversations at industry conferences, and recent publications: most institutions either lack a defined AI strategy or have efforts that are disjointed and siloed. Leaders are asking for support, guidance, and tools to move forward with purpose.
The most important takeaway here? Everyone is still learning.
Faculty, students, and staff are experimenting with AI, and pockets of innovation are abundant across institutions. Your role as an institutional leader isn’t to control innovation; it’s to guide it. A well-crafted AI strategy ensures that exploration happens within shared guardrails, reinforcing institutional values and serving long-term goals. The session drew on the advice of Dr. Susan Aldridge, president of Thomas Jefferson University, who framed four strategic objectives around her call to action: “How best can we proactively guide AI’s use in higher education and shape its impact on our students, faculty and institution?” Presenters walked attendees through these objectives and coupled them with additional practical frameworks that capture the importance of innovation and discovery, integral components of an AI strategy that can’t get lost while institutions figure things out.
- Objective 1: Ensuring that across our curriculum, we are preparing today’s students to use AI in their careers, enabling them to succeed in parallel with employers’ expanded use of AI.
- Objective 2: Employing AI-based capacities to enhance the effectiveness (and value) of the education we deliver.
- Objective 3: Leveraging AI to address specific pedagogical and administrative challenges.
- Objective 4: Concretely addressing the already identified pitfalls and shortcomings of using AI in higher education and developing mechanisms for anticipating and responding to emerging challenges.
Source: Aldridge, S.C. “Four objectives to guide artificial intelligence’s impact on higher education.” Times Higher Education. 2025.
Framing Strategy with Data Privacy
Among session attendees, who came from both institutions and ed tech providers, data privacy was the top concern regarding existing and future AI tools. Last year, the 1EdTech community launched the Generative AI Taskforce and developed the TrustEd Generative AI Data Rubric, a framework that promotes transparency and responsible data practices. The rubric enables institutions to vet their applications for data privacy, while providers can use it to self-assess their own AI practices.