1EdTech Highlights Successes in AI to Enhance the Learning Experience

“It’s a calculator for writing.” That was one of the first descriptions of AI I heard as the 2023 Learning Impact conference kicked off. The conference opened with the question of how we’ll use this new tool to enhance the learner experience, and some attendees were already offering answers.

Protecting Data Privacy Will Drive the Near-Term Future of AI

A representative from UC San Diego predicted that AI will shift from large models to much smaller ones as “walled gardens” begin to crop up around data. Users want to know where the output of AI models comes from, and the vendors providing AI tools want to offer higher confidence in the accuracy of that output while maintaining the privacy of their own data and their customers’ data. He also noted that AI performance appears to plateau as model size increases. All of these factors are expected to push AI providers to build their own, smaller models. Along with smaller models, he predicts that we’ll begin to see modular models, or plugins, that can be added to robotic devices or to other models. Finally, he predicts that this cycle will repeat itself as interest again grows around larger, more generalized models. Along the way, he expects these “walled gardens” around data to limit how quickly models can be updated: a model cannot improve itself as you interact with it if your data cannot be immediately fed back in as training input. Ultimately, privacy remains a top priority guiding how AI models will be built.

AI Increases Writing Quality and Decreases Grading Time

Packback showcased their suite of AI tools for helping students with writing assignments. Their first AI tool, built in 2018, acts as a personal TA that rewards students for asking questions, with even bigger rewards for high-quality questions, in an effort to bring the Socratic method back into learning. Their second student-facing tool assesses the quality of written submissions for grammar, plagiarism, content, and adherence to the instructor’s rubric. They report that these tools have had a major impact on ESL students, and that instructors have cut grading time by 30-50% because the quality of submitted assignments is higher.

Scaling Learning by Scaling Question Generation with AI

VitalSource highlighted recent research they did using AI for automatic question generation within textbooks. The research was inspired by Koedinger et al.’s “Doer Effect,” which showed that a student is six times more likely to learn a concept by practicing it than by just reading about it. With results like that, they wanted to scale the “doing” in textbooks, so they built their own AI model trained on millions of textbooks. However, because AI models hallucinate, they ultimately had subject-matter experts guide the question-generation system; they felt it would be an ethical disaster to risk presenting students with inaccurate information in textbook questions. With this question-generation model, they were able to replicate the results of Koedinger et al., and they found no difference in how students interacted with AI-generated questions versus questions written by humans. However, they also found that unless the instructor made answering these questions worth 20% of the students’ grade, most students would stop answering them as the course progressed, despite how helpful they are to the learning process.
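To make the expert-in-the-loop idea concrete, here is a minimal Python sketch of what a review gate like that could look like. The `Question` structure and the `generate_candidate_questions` stub are hypothetical placeholders for illustration, not VitalSource’s actual pipeline; the only point is that no AI-generated item reaches students without explicit expert approval.

```python
from dataclasses import dataclass

@dataclass
class Question:
    stem: str
    answer: str
    approved: bool = False

def generate_candidate_questions(passage: str) -> list[Question]:
    """Hypothetical stand-in for a trained question-generation model.

    A real pipeline would call the model here; this stub returns a canned
    candidate so the review flow below is runnable on its own.
    """
    return [Question(
        stem=f"In your own words, explain the main idea of: {passage[:60]}...",
        answer="(model-proposed answer)",
    )]

def expert_review(questions: list[Question]) -> list[Question]:
    """Gate every AI-generated item behind a human subject-matter expert.

    Only items a reviewer explicitly approves are published, which is how
    hallucinated or inaccurate questions get filtered out.
    """
    published = []
    for q in questions:
        verdict = input(f"Approve this question? [y/n]\n  {q.stem}\n> ").strip().lower()
        if verdict == "y":
            q.approved = True
            published.append(q)
    return published

if __name__ == "__main__":
    passage = "Practicing a concept produces more learning than only reading about it."
    approved = expert_review(generate_candidate_questions(passage))
    print(f"{len(approved)} question(s) approved for students.")
```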

Converting to CBE Using AI Increases Student Retention

eLumen presented their work using AI to help struggling community colleges increase student retention by converting their traditional curriculum into learning outcomes in the Competencies and Academic Standards Exchange (CASE) format, so that those outcomes could easily be converted into Comprehensive Learner Record (CLR) results. CLR is the 1EdTech standard for Competency-Based Education (CBE) that ties what a student learns in school to skills desired in the workforce, so schools and edtech products using it will be highly attractive to the growing number of learners looking for hard proof that their education will benefit their careers. eLumen’s AI product integrates directly into Canvas’s Outcomes feature and acts like Grammarly for writing learning outcomes. Their large language model (LLM) was trained on huge quantities of existing learning outcomes and syllabi. They shared the insight that LLMs are much better at large-scale, coarse-grained queries than at specific ones because of the limits of contextual input: the model could produce a very high-quality syllabus for an entire course but would struggle to identify a single learning outcome within that course, since its specialty is producing statistical approximations over sets of facts. With this in mind, eLumen prompt-engineered the LLM to identify and correct problems with Bloom’s taxonomy in learning outcomes in real time as instructors write them. Like many others, eLumen strongly believes that anyone using AI should be transparent with the user about the queries given to it, so those queries are included in their product’s output. Community colleges that used eLumen’s products for seven years were able to increase student retention from as low as 9% up to 54%.
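To picture what “Grammarly for learning outcomes” might look like under the hood, here is a minimal, hypothetical Python sketch that flags hard-to-measure verbs in an outcome statement and suggests Bloom’s-taxonomy alternatives. The verb lists and suggestions are my own illustrative assumptions, not eLumen’s model, prompts, or Canvas integration.

```python
# Illustrative only: flag learning-outcome verbs that are hard to assess
# and suggest measurable Bloom's-taxonomy alternatives.

VAGUE_VERBS = {"understand", "know", "learn", "appreciate", "be aware of"}

BLOOM_SUGGESTIONS = {
    "understand": ["explain", "summarize", "classify"],
    "know": ["define", "list", "identify"],
    "learn": ["demonstrate", "apply", "solve"],
    "appreciate": ["evaluate", "justify", "critique"],
    "be aware of": ["describe", "recognize"],
}

def review_outcome(outcome: str) -> list[str]:
    """Return feedback messages for a single learning-outcome statement."""
    feedback = []
    lowered = outcome.lower()
    for verb in VAGUE_VERBS:
        if verb in lowered:
            alternatives = ", ".join(BLOOM_SUGGESTIONS[verb])
            feedback.append(
                f"'{verb}' is difficult to assess; consider a measurable "
                f"Bloom's verb such as: {alternatives}."
            )
    return feedback or ["No issues detected."]

if __name__ == "__main__":
    outcome = "Students will understand the causes of the French Revolution."
    for message in review_outcome(outcome):
        print(message)
```

A production tool would of course use an LLM rather than a word list, but the shape of the feedback loop, checking each outcome as it is written and proposing more measurable phrasing, is the same.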

Conclusion

While these examples illustrate how AI is already being used to improve teaching and learning, there’s no shortage of questions for the road ahead. Many were discussed at the Learning Impact conference: To what degree do we want these tools to reflect the world we live in versus imagining a new world? What is real? Will there be a resurgence of oral exams and oral interviews? Will AI ever be able to pick up on unspoken inputs, or will humans always have that advantage? Can you tell what characteristics indicate that something was written by an AI? When you give ChatGPT an exercise to determine its own geographical origin, the results somewhat consistently say that it’s from New Jersey; is this an equitable voice for us to be hearing from? Is AI image generation appropriately diverse? How can we ensure equity of access to AI tools? Let us know on social media what your burning AI questions are and how you think AI can be used to improve teaching and learning!

Mary Gwozdz

Senior Software Developer
Mary Gwozdz is a Senior Software Developer and Integration Specialist who has been with Unicon since 2017. While at Unicon, Ms. Gwozdz has impacted numerous learners by designing and developing software solutions for the California Community Colleges (CCC) Applications, Cisco Networking Academy, Lumen Learning, and others, as well as by assisting with SIS integrations to Instructure products such as Canvas. Ms. Gwozdz specializes in the LTI (Learning Tools Interoperability) specification from 1EdTech and is also knowledgeable in AWS architecture, Spring Boot REST web services, and other 1EdTech specifications such as Common Cartridge and OneRoster.