Written by Mark Tse, Software Developer, Platform Development D2L
With the launch of Brightspace Data Sets, our data strategy has evolved to provide clients with the raw, user-level data they need to answer nuanced questions that are specific to their institution. In this article, we will explore various factors that shape your data, and how that translates to insights unique to your institution.
Understanding how the data is shaped
The raw, user-level data in Brightspace Data Sets is the product of many different sources and factors. Let’s explore what goes into shaping the data.
Data migrated from other learning products
If you migrated to Brightspace from another learning platform, the data included in your migration becomes part of the data in your data extracts. That data is influenced by how the previous learning platform was configured, how users interacted with it, and how the data was transformed during your migration to Brightspace. Keep these factors in mind when interpreting migrated data in your extracts.
Usage patterns and integrations
Usage patterns differ from institution to institution depending on segment (i.e. K-12, Higher Education, or Corporate), location (e.g. language, cultural differences, compliance requirements), teaching and learning styles (including differences between blended and online environments), and other differences that make your institution unique. In addition, usage patterns are influenced by the tools you enable in Brightspace and the products you integrate with, such as an SIS, LTI tools, and other tools unique to your institution that leverage Brightspace APIs.
Data in your data extracts reflects when particular tools were integrated with Brightspace, and how usage patterns and the tools within Brightspace have changed over time.
Changes from release to release
Each release of Brightspace introduces changes to the code, whether they are small improvements or feature additions. As the system evolves, so does the data, and the footprint of this evolution is unique to a given institution, depending on how many releases the data has gone through and how that intersects with other factors. Each integration you use also goes through its own release cycle, adding to the number of factors that shape your data.
Live updates as the data extract is being generated
When a data extract is generated, we capture data as close as possible to the time of extraction. However, to minimize the impact on the live system, a small subset of the data may be stale, missing, duplicated, or still present even though it has already been deleted. Data may also be inconsistent between two different data sets (e.g. data in Quiz User Answer Responses might not match up with data in Quiz Attempts).
Rest assured, any data gaps and inconsistencies are reconciled when your next data extract is generated. However, as you import the data into your systems, it is important to account for these nuances.
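As a concrete illustration, here is a minimal Python sketch of the kind of check you might run on import, assuming the extracts arrive as CSV files and that both contain an AttemptId column (the file names and column names here are illustrative, not the exact extract schemas):

import pandas as pd

# Illustrative file and column names; your actual extract names and schemas may differ.
attempts = pd.read_csv("QuizAttempts.csv")
responses = pd.read_csv("QuizUserAnswerResponses.csv")

# Flag responses that reference an attempt not present in this extract. These rows
# can be held back and re-checked against the next extract, once any gaps from the
# live system have been reconciled.
orphaned = responses[~responses["AttemptId"].isin(attempts["AttemptId"])]
print(f"{len(orphaned)} response rows have no matching attempt in this extract")
orphaned.to_csv("responses_pending_reconciliation.csv", index=False)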
Other data sources
Finally, Brightspace Data Sets is only one potential data source among many in your institutional ecosystem, and it can always be combined with other data sources to provide a more holistic view of your system.
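For instance, a simple way to combine sources is to join a Brightspace Data Sets extract with an export from another system on a shared identifier. The Python sketch below is a hypothetical example; the SIS file, its columns, and the join keys are assumptions for illustration:

import pandas as pd

users = pd.read_csv("Users.csv")            # a Brightspace Data Sets extract
sis = pd.read_csv("sis_demographics.csv")   # a hypothetical export from your SIS

# Join on a shared identifier to add SIS context to the Brightspace data.
combined = users.merge(sis, left_on="OrgDefinedId", right_on="StudentNumber", how="left")
combined.to_csv("users_with_sis_context.csv", index=False)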
Mining your data
As you can see, many different factors go into shaping the data for each institution, and each of these factors can influence how you approach the data.
For example, if an LTI tool integrated into the content tool last fall generates more data points per user interaction, a student may appear less engaged in the years before the integration launched simply because fewer data points were recorded. By knowing when the integration was launched and accounting for it in your analysis, you can avoid skewing engagement comparisons between the current year and previous years. You can also use that data to demonstrate the effectiveness of an integration or tool.
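One way to account for this, sketched below in Python, is to split your engagement records at the integration’s launch date and compare like with like; the file, the column names, and the launch date are assumptions for illustration:

import pandas as pd

INTEGRATION_LAUNCH = pd.Timestamp("2017-09-01")  # assumed launch date
events = pd.read_csv("engagement_events.csv", parse_dates=["EventDate"])

# Label each event with its era so pre- and post-integration engagement are
# summarized separately rather than compared as raw counts across the boundary.
events["era"] = events["EventDate"].ge(INTEGRATION_LAUNCH).map(
    {True: "post-integration", False: "pre-integration"}
)
per_user = events.groupby(["era", "UserId"]).size().rename("event_count").reset_index()
print(per_user.groupby("era")["event_count"].mean())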
Another example is determining competency or achievement after the fact when moving to a new competency model. By using Brightspace Data Sets to determine which courses, quizzes, assignments, and other activities have been completed, learners can be fast-tracked on the new competency model in bulk instead of on a case-by-case basis.
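The bulk step could look something like the following Python sketch, where the completion file, the mapping from old activities to new competencies, and all column names are assumptions for illustration:

import pandas as pd

completions = pd.read_csv("activity_completions.csv")    # derived from your data extracts
mapping = pd.read_csv("activity_to_new_competency.csv")  # your old-to-new competency mapping

# Credit each learner with the new competencies whose mapped activities they have
# already completed, producing one bulk list instead of case-by-case review.
credited = (
    completions.merge(mapping, on="ActivityId")
    .drop_duplicates(subset=["UserId", "NewCompetencyId"])
    [["UserId", "NewCompetencyId"]]
)
credited.to_csv("bulk_competency_credit.csv", index=False)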
There are many other examples not listed above, as the combination of factors and the questions being asked makes each scenario unique. By providing you with raw, user-level data, we give you access to unopinionated data that contains insights unique to, and only available to, your institution.
Summary
We have explored what goes into shaping the data you see today, and how providing you with raw, user-level data translates into insights unique to your institution. We would love to hear how you are using Brightspace Data Sets and any insights you have discovered, as well as any questions or comments you have below!