Originally published February 15, 2019.
Are you looking at your data? Do you know what data you have access to from your Brightspace learning environment? Do you know what decisions you could be (or should be) making based on this data? The answer I often hear to at least one of these questions is ‘No’. So, let’s take a moment to talk about the why and the what of some of your data.
Why should you be looking at your data?
You have lots and lots of data available to you through Brightspace. Sometimes it might seem like there is too much, so I’m going to point out some key pieces to look at so that it doesn’t feel so overwhelming. Beyond that, you should be looking at your data because it is the best way to make sure you are fully informed and making decisions based on facts. This factual data also provides an excellent foundation for usage statistics and other key metrics that senior leaders in your organization will be interested in. Many organizations are shifting to data-driven decision-making models, so now is the right time to get comfortable with the data you have available. Data also drives other key business processes such as continuous program and course improvement, hiring and staffing decisions, resource allocation, and even determining how many course sections to offer.
Student Success and Achievement
One of the highest-profile areas for the use of data is supporting students to be successful in their learning experience and reporting on their success and achievement. This can include a wide variety of data metrics such as course progress and completion, grades, engagement, and learning outcome achievement. This data helps you make informed decisions about learning efficiencies and the effectiveness of content, learning resources, and assessments. Assessment results are a commonly used metric for evaluating student success, and achievement of the learning outcomes associated with those assessments gives a richer view of student learning than a grade-based evaluation alone. All of these data elements help to identify students who are succeeding or excelling, as well as those who may be struggling or falling behind. Identifying those students as candidates for a conversation or other intervention can be a crucial step in helping them remain engaged and moving toward successful completion.
We can help
If all of this seems daunting or a bit overwhelming, D2L provides data-related consulting services to help. We can help you get started, walk through the process, and provide the guidance you need so that you don’t have to figure it out on your own. We also have a number of data-related resources available here in the Brightspace Community.
What is Available?
One of the easiest ways to begin using your data is to look at the data elements in the Learner Usage data set, which is available from the Data Hub as one of the Advanced Data Sets. Below are some commonly referenced performance metrics and keywords. We have provided brief definitions of the keywords to confirm that we are considering them in the same context that you are, followed by a list of performance metrics that can be easily determined from the Learner Usage data set.
Definitions
Adoption
Level of usage of the learning environment within your organization. Adoption is about the number of users accessing the learning environment (number of logins) and about the breadth and depth of usage. Breadth means that a wide variety of tools and features are being used in the learning environment whereas depth means that tools and features are being used to their fullest capacity. Healthy adoption means a significant number of users are logging into the learning environment and that a wide variety of features and tools are being used to their fullest potential in your organization’s courses.
Success criteria
Success criteria are identified goals selected to measure the success of the implementation and/or adoption of the learning environment. For example, perhaps you have an initiative to increase the number of students who are completing a course with a passing grade, or the number of students who are demonstrating mastery of an identified competency, or you have introduced an initiative to increase the number of users logging into the learning environment. Regardless of what your organization identifies as important dimensions of success, these elements become your success criteria and the items upon which you will measure or evaluate the success of your initiative.
Success metrics
Success metrics are selected data points used to measure and monitor the level of performance or activity related to each of your identified success criteria. For example, if one of your success criteria is increasing the number of students who complete a course with a passing grade, then monitoring or measuring that number of students will be one of your metrics. You could simply count the number of students who have a passing grade for a course within a term. You could break that down further to see the number of students at each grade level (or grade value) in the grade scheme. You could also look at this data as a trend: term over term, year over year, etc. In short, the data measurements you define for each of your success criteria are your success metrics.
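As a rough illustration, a passing-count metric like the one described above can be computed from exported data in a few lines of Python. The rows and column names here (including the `Passed` flag) are hypothetical, not actual Brightspace Data Set fields:

```python
from collections import Counter

# Hypothetical per-term course results; "Passed" is an assumed flag,
# not an actual Brightspace Data Set column.
results = [
    {"UserId": 1, "Term": "2018-Fall",   "Passed": True},
    {"UserId": 2, "Term": "2018-Fall",   "Passed": False},
    {"UserId": 3, "Term": "2019-Winter", "Passed": True},
    {"UserId": 4, "Term": "2019-Winter", "Passed": True},
]

# Count passing students per term -- tracking this value term over term
# gives you the trend line for this success criterion.
passes_per_term = Counter(r["Term"] for r in results if r["Passed"])
```

The same pattern extends to counting students per grade level: just replace the boolean filter with the grade scheme value as the counter key.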
Student success
A measure of successful learning behavior for students. Traditionally, student success was only measured by GPA and whether a student completes a program and graduates. Today, with the use of detailed data available from the learning environment, we can dive a little deeper into some other criteria that indicate successful learning behavior, such as:
- Course Grade
- Attendance/absenteeism
- Credit load/term
- Time to complete a degree
- GPA
- Level of engagement
Course completion
A measure by which a course can be deemed as completed. In the learning environment, this could be:
- Viewing all content topics in a course
- Completing all assessment activities
- Earning a competency or learning objective
- Being awarded a badge
- Earning a certificate or credential
- Completing a summative assessment
- Earning a designated grade level
Retention
Retention measures how many students stay enrolled rather than dropping out or transferring to another institution. There are three basic levels of retention that can be measured with data from the learning environment:
- Students who remain enrolled in a course until completion
- Students who continue to return and enroll into the next course in the program sequence after completing prior course(s)
- Students who complete all program requirements and graduate
Matriculation
Students who complete an academic year and are promoted forward into the next academic performance level.
Engagement
Engagement is a slippery term. It is used variously to describe the level of student interest in a course or program, the level at which students or instructors actively participate in learning activities, or the level of interaction between learners and instructors. When all three of these components of engagement are strong, student satisfaction rates and student success rates tend to increase. For example, you could measure the number of students who have submitted an assignment, submitted a quiz, or posted to a discussion forum as a level of student engagement. Likewise, you could evaluate the number of courses configured to use the grades tool or the discussions tool, or the time taken to evaluate student submissions, as measures of instructor engagement.
Metrics
Learner Usage Data Set Alone
The Learner Usage data set on its own can provide a variety of useful data metrics. The aggregate view provides an interesting lens for comparing data over time (e.g., term by term or year by year) to investigate levels of engagement for a student, a group of students, or course by course.
One measure of engagement across a collection of students can be to quickly scan the min, max and avg counts for student completions of the following tool items:
- Checklist completions
- Quiz submissions (total number of submission attempts vs. submitted/not submitted)
- Discussions (number of threads authored, read & replied)
- Assignments submitted
These min, max and average values help to put an individual student’s activity level within the context of their peers. If an individual student in a course has made 5 quiz submission attempts, for example, but most other students complete the quiz in just 1 or 2 attempts, then perhaps this is an opportunity for a conversation with that student. Or, if the class is averaging 4 or 5 attempts to pass a quiz, then perhaps it is an opportunity to re-evaluate the course material where the content covered by the quiz is presented, or a prompt to dig in further and investigate, question by question (or section by section), where students are struggling with the quiz. (Of course, this scenario assumes that your quiz is configured to allow multiple submission attempts; it does not apply if you allow only one attempt.)
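To make this concrete, here is a minimal sketch in Python of the min/max/average scan described above. The row structure and column names are assumptions for illustration; your actual Learner Usage export will have its own schema:

```python
from statistics import mean, median

# Hypothetical quiz-attempt counts per learner in one course.
rows = [
    {"UserId": 101, "QuizAttempts": 1},
    {"UserId": 102, "QuizAttempts": 2},
    {"UserId": 103, "QuizAttempts": 5},
    {"UserId": 104, "QuizAttempts": 1},
]

attempts = [r["QuizAttempts"] for r in rows]
summary = {"min": min(attempts), "max": max(attempts), "avg": mean(attempts)}

# Learners needing well over the typical number of attempts may be
# candidates for a conversation. The median is less distorted by the
# outliers we are trying to find than the mean is.
typical = median(attempts)
flagged = [r["UserId"] for r in rows if r["QuizAttempts"] > 2 * typical]
```

The same scan works for checklist completions, discussion posts, or assignment submissions; only the column you aggregate changes.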
Another easy indication of engagement is to look at the most recent activity date across a group of learners (by course, by section, etc.) and look for outliers or anomalies.
Most Recent of:
- Assignment submission
- Discussion Post (authored and/or read)
- Course Login
- Mail sent
The most recent data elements give you a good sense of how current the student’s engagement in the course is. Perhaps the student was very active at the beginning of the course but has not engaged in the past three or four weeks; this could be a sign of a life circumstance interfering with their learning process. It could also signal an opportunity to reach out and check on the student before they fall too far behind in the course to catch up.
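A simple way to spot those outliers programmatically is to compare each learner’s most recent activity date against a staleness threshold. This is a sketch with made-up dates and an assumed column name:

```python
from datetime import date, timedelta

# Hypothetical most-recent-login dates per learner; the real export's
# column names and date formats may differ.
last_activity = [
    {"UserId": 101, "LastCourseLogin": date(2019, 2, 10)},
    {"UserId": 102, "LastCourseLogin": date(2019, 1, 5)},
    {"UserId": 103, "LastCourseLogin": date(2019, 2, 14)},
]

as_of = date(2019, 2, 15)
stale_after = timedelta(weeks=3)

# Learners with no login in the past three weeks may warrant a check-in.
flagged = [r["UserId"] for r in last_activity
           if as_of - r["LastCourseLogin"] > stale_after]
```

The same comparison applies to any of the “most recent” elements above: assignment submissions, discussion posts, or mail sent.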
Learner Engagement with Other Data Sets
When looking at engagement data, combining the learner engagement data set with some of the other Advanced or Brightspace Data Sets yields additional useful performance and engagement metrics.
If you combine the learner engagement data set with:
- the final grades data set, you can readily determine the grade distribution by course. This will show you the count of the grade levels, according to the grade scheme in the course.
- the all grades data set, you can determine the grade distribution by assessment activity. This will show you the count of grade levels, according to the grade scheme for each activity in the course.
- the checklist data set, you can determine the percent complete of checklists in the course.
- the quiz objects data set, you can determine the percent complete for quiz submissions in the course.
- the discussions data set, you can determine the percent complete for discussion posts in the course.
- the assignment objects data set, you can determine the percent complete for assignment submission in the course.
The all grades data set, for example, provides a natural continuation of the grade distribution data already available in the grades tool. Within their own course, an instructor can see what the distribution of grades looks like for each individual grade item and for the final grade. However, if you need to see this information aggregated across multiple courses that an instructor is teaching, or across multiple course offerings, a department, a program, etc., then the all grades data set is an excellent way to get that broader view of the data.
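As a sketch of the grade-distribution idea, once final-grade rows are available per course you can count grade scheme levels for each course in one pass. The column names here are assumptions for illustration, not the actual data set schema:

```python
from collections import Counter

# Hypothetical final-grade rows; OrgUnitId identifies the course and
# GradeLevel is an assumed stand-in for the grade scheme value.
final_grades = [
    {"UserId": 101, "OrgUnitId": 7001, "GradeLevel": "A"},
    {"UserId": 102, "OrgUnitId": 7001, "GradeLevel": "B"},
    {"UserId": 103, "OrgUnitId": 7001, "GradeLevel": "B"},
    {"UserId": 104, "OrgUnitId": 7002, "GradeLevel": "C"},
]

# Grade distribution: count of each grade scheme level, per course.
distribution = Counter(
    (r["OrgUnitId"], r["GradeLevel"]) for r in final_grades
)
```

Aggregating across a department or program is then just a matter of mapping each OrgUnitId up to its parent org unit before counting.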