
Getting Started with Open Learning Analytics - Analysis


In this article we continue our discussion of how to get started with Open Learning Analytics, turning to the analysis of learning events within the context of the Learning Analytics Diamond (see figure 1).

This is the third article in the series. In the first article we reviewed the components of an open analytics environment as represented in the Learning Analytics Diamond diagram and discussed strategies for capturing learning events within your application; if you missed it, that is the place to catch up on the Learning Analytics Diamond. In the second article we focused on options for storing those learning events. In this article we look at the analysis of learning events and other learning data.

At this point you are capturing learning events within your application and storing them in a learning record store. You have likely made a significant investment in technical infrastructure and software development, and it is time to put that investment to work by analyzing the data you have collected and stored. Data analysis can take many forms and use a variety of tools. Analysis could be as simple as calculating aggregates in a spreadsheet or as involved as working with a business intelligence tool such as IBM Cognos. Within the context of Open Learning Analytics, however, analysis is typically performed by an application referred to as a learning analytics processor.
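As an example of the simple end of that spectrum, a few lines of Python with pandas can compute the same kinds of aggregates a spreadsheet would. This is a minimal sketch, assuming a hypothetical events.csv export from the learning record store with actor, verb, and timestamp columns; the file name and columns are illustrative, not part of any particular product.

```python
import pandas as pd

# Hypothetical export from the learning record store; the file name and
# the actor/verb/timestamp columns are illustrative assumptions.
events = pd.read_csv("events.csv", parse_dates=["timestamp"])

# Count events per learner and per verb (viewed, attempted, completed, ...).
activity_counts = (
    events.groupby(["actor", "verb"])
          .size()
          .unstack(fill_value=0)
)

# Weekly activity totals across the whole course.
weekly_totals = events.set_index("timestamp").resample("W").size()

print(activity_counts.head())
print(weekly_totals)
```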

Think of the learning analytics processor as the manager of an analysis workflow. The analysis workflow is generally referred to as a pipeline and consists of three distinct phases: input, model execution, and output. The first phase, input, typically involves extraction, transformation, and loading (ETL) of data from one or more data sources. In a learning analytics context, this would typically mean extracting data from a learning record store (event data), a student information system (student and course data), and possibly a learning management system (grade data). Once all of the data sources have been transformed into the appropriate format and loaded into the processor, the next phase is execution of an analytics model. In a learning context, models could support either academic analytics or predictive analytics. Predictive models are often represented in Predictive Model Markup Language (PMML), an XML-based format, and execution is often handled by a third-party library such as Weka or a separate system such as Apache Spark. The final phase, output, aggregates the results of the model execution and typically persists the results to the filesystem or a data store. Often the learning analytics processor will expose the model output via web service APIs.
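To make the three phases concrete, here is a minimal sketch of a pipeline in Python using pandas and scikit-learn. It is not the Apereo Learning Analytics Processor or any particular product: the file names (events.csv, students.csv, grades.csv), the column names, and the inline logistic regression are all illustrative assumptions, and a production processor would typically execute a pre-trained model (for example, one expressed in PMML) rather than fit one inline.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# --- Phase 1: input (extract, transform, load) ---
# Hypothetical CSV exports standing in for the real source systems.
events = pd.read_csv("events.csv")      # learning record store
students = pd.read_csv("students.csv")  # student information system
grades = pd.read_csv("grades.csv")      # learning management system

features = (
    events.groupby("student_id").size().rename("event_count").reset_index()
          .merge(students, on="student_id")
          .merge(grades, on="student_id")
)

# --- Phase 2: model execution ---
# An inline logistic regression stands in for a pre-trained model;
# "gpa" and "at_risk" are illustrative column names.
X = features[["event_count", "gpa"]]
y = features["at_risk"]
model = LogisticRegression().fit(X, y)
features["risk_score"] = model.predict_proba(X)[:, 1]

# --- Phase 3: output ---
# Persist aggregated results; a real processor would typically also
# expose these scores through a web service API.
features[["student_id", "risk_score"]].to_csv("risk_scores.csv", index=False)
```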

The Apereo Learning Analytics Processor is an open source web application that provides a framework for the execution of analytics pipelines. Currently, the Apereo Learning Analytics Processor supports the Marist OAAI Early Alert and Risk Assessment model, but development of additional models, as well as feature and scalability enhancements, is underway as part of the Jisc Effective Learning Analytics project. Unicon technologists, in partnership with Marist College, have been the primary developers of the Apereo Learning Analytics Processor to date.

If you are interested in learning more about the analysis phase of the Learning Analytics Diamond, or would like some guidance in executing your own learning analytics processor project, Unicon can help. Contact us to set up a discussion with one of our Integration and Analytics technologists, and stay tuned for the next article in the Getting Started with Open Learning Analytics series, which will cover the action component of the Learning Analytics Diamond.
