Grades for withdrawn students - dropped items in data set do not match D2L UI

I'm attempting to pull grades for 100+ withdrawn students from a course that ran last spring semester, but I'm noticing that the wrong grade items are sometimes logged as dropped in the Grade Results data set. The grading system is points, and “Treat ungraded items as 0” was enabled at the start of the semester. All grade items were also created before any students withdrew from the course.
Since the students withdrew early in the semester, there are many missing grades, which show as an automatic 0 in the gradebook and as nulls in the data set. Is there a reason why the dropped items would differ between what I'm seeing under Classlist → Enrollment Statistics → View Grades for each student and the Grade Results data set? I assumed they would be the same data.
For example, a student has scores for quizzes 1-3 and automatic 0s for quizzes 4-5. The category is set to drop the lowest 2 scores. In D2L, the View Grades screen correctly shows that quizzes 4 & 5 were dropped. However, in the data set, quizzes 3 & 4 are marked as dropped.
Comments
-
I think this discrepancy occurs because “Treat ungraded items as 0” only affects how grades are calculated and displayed in the D2L UI; it doesn't change the stored data.
When the setting is enabled, the Grades tool treats every ungraded (blank) item as a score of zero for display and calculation purposes. The “drop lowest N” rule is then applied against those zeroes, which is why the instructor's View Grades screen correctly drops the missing items.
The Grade Results data set does not store those display-based zeroes; ungraded items remain null (no value). If nulls are skipped when determining which items to drop, the data set can select different items than the UI, often dropping a legitimately low graded score instead of a missing one.
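To make that concrete, here's a toy Python sketch of the two drop calculations. The quiz scores are made up, mirroring the example above, and this only models the suspected mechanism (it doesn't fully explain the quiz 3 & 4 combination you saw):

```python
# Toy illustration (hypothetical scores) of how "drop lowest 2" can diverge
# depending on whether ungraded items count as zeros or are skipped as nulls.
scores = {"Quiz 1": 70.0, "Quiz 2": 85.0, "Quiz 3": 60.0,
          "Quiz 4": None, "Quiz 5": None}  # quizzes 4-5 ungraded

def dropped_in_ui(scores, n=2):
    """'Treat ungraded items as 0' behavior: nulls become zeros first."""
    as_zero = {k: (0.0 if v is None else v) for k, v in scores.items()}
    return sorted(as_zero, key=as_zero.get)[:n]

def dropped_in_dataset(scores, n=2):
    """Suspected data set behavior: nulls are skipped before dropping."""
    graded = {k: v for k, v in scores.items() if v is not None}
    return sorted(graded, key=graded.get)[:n]

print(dropped_in_ui(scores))       # ['Quiz 4', 'Quiz 5'] -- matches View Grades
print(dropped_in_dataset(scores))  # ['Quiz 3', 'Quiz 1'] -- graded items dropped
```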
This is what I think is happening, but I haven't tested it. One way to verify: temporarily turn off “Treat ungraded items as 0”, run the data set again, and check whether the Data Hub data then matches the UI. There may be a delay in data reprocessing in the Data Hub; ours seems to be about a day behind. If they match, you have your answer, and you may need to manipulate the Data Hub data yourself to get an accurate account of what is being dropped.
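If it comes to that, the recalculation is mechanical: fill the nulls with zeros and re-apply the drop rule yourself. Here's a rough, untested pandas sketch; the column names and the drop count are assumptions you'd match to your actual Grade Results export:

```python
# Rough sketch: recompute the dropped flags from a Grade Results export,
# filling nulls with 0 the way the UI does. The column names (UserId,
# CategoryId, PointsNumerator) are assumptions -- match them to the
# actual headers in your Grade Results data set.
import pandas as pd

DROP_LOWEST = 2  # assumption: each category drops the lowest 2 scores

df = pd.read_csv("GradeResults.csv")

# Mimic "Treat ungraded items as 0" before applying the drop rule.
# (You may first need to filter to just the numeric items in categories
# that actually have a drop rule.)
df["PointsNumerator"] = df["PointsNumerator"].fillna(0)

# Flag the N lowest scores within each (student, category) pair as dropped.
df["RecomputedDrop"] = (
    df.groupby(["UserId", "CategoryId"])["PointsNumerator"]
      .rank(method="first")  # 1 = lowest score in the category
      .le(DROP_LOWEST)
)

df.to_csv("GradeResults_recomputed.csv", index=False)
```
-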
Thanks for your input, @Justin.B.253 - I agree it's most likely an issue with the automatic 0s being logged as nulls in the data sets. We're trying to determine why students withdrew from courses at a particular time, so we'd like to see the gradebook as the students saw it when they made that decision. It's odd that the items logged as dropped in the data set seem a little random: sometimes it's two scores, sometimes one score and a null, and sometimes two nulls. Since “Treat ungraded items as 0” was turned on at the beginning of the semester, it should always be two nulls. I'm not seeing a pattern for what causes that discrepancy yet.
-
@Jennifer.W.973 very interesting. This is something we want to look at as well: where is the drop-off happening, and why? The information you are gathering is helpful.
I'm wondering if you could go about this from a different direction that would give you matching results. I agree with you that the data should be accurate to what the student saw, and maybe this is something to bring up with D2L. Currently for us, when a student drops the course, it happens after the instructor has had a chance to record the Last Date of Activity. I built a JavaScript web app in D2L that enrolls the user back into the course with a special role, downloads the student activity report, and then removes the student from the course once complete. I'm wondering if you could do something similar: re-enroll the student, repopulate the gradebook, download the gradebook, and then remove the student.
You could drive it from a .csv file of the userids and courseids you're surveying and run it in the background as a Python script on your desktop or through a web app. I'll play around with the idea and see if it's feasible. If it works well, I'll load the code up on my GitHub for you to grab and try.
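In the meantime, here's roughly what that loop might look like in Python against the Brightspace Valence API. The enrollment and grade-values routes are standard Valence endpoints, but the host, token handling, role ID, and version numbers below are placeholders for your own instance, and I haven't tested this:

```python
# Minimal sketch of the re-enroll / download / unenroll loop, assuming an
# OAuth 2.0 bearer token with the needed scopes. BASE, TOKEN, ROLE_ID, and
# the API version numbers are placeholders for your own instance's values.
import csv
import requests

BASE = "https://yourschool.brightspace.com"  # assumption: your Brightspace host
TOKEN = "..."                                # assumption: OAuth 2.0 bearer token
ROLE_ID = 123                                # assumption: a special read-only role
LP, LE = "1.43", "1.74"                      # assumption: supported API versions
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def pull_grades(user_id: int, org_unit_id: int):
    # Temporarily enroll the user so their grades are visible again.
    requests.post(
        f"{BASE}/d2l/api/lp/{LP}/enrollments/",
        headers=HEADERS,
        json={"OrgUnitId": org_unit_id, "UserId": user_id, "RoleId": ROLE_ID},
    ).raise_for_status()
    try:
        # Retrieve every grade value for the user in the course.
        resp = requests.get(
            f"{BASE}/d2l/api/le/{LE}/{org_unit_id}/grades/values/{user_id}/",
            headers=HEADERS,
        )
        resp.raise_for_status()
        return resp.json()
    finally:
        # Remove the temporary enrollment even if the grade fetch failed.
        requests.delete(
            f"{BASE}/d2l/api/lp/{LP}/enrollments/orgUnits/{org_unit_id}/users/{user_id}",
            headers=HEADERS,
        ).raise_for_status()

with open("withdrawn.csv", newline="") as fh:  # columns: userid, courseid
    for row in csv.DictReader(fh):
        grades = pull_grades(int(row["userid"]), int(row["courseid"]))
        print(row["userid"], row["courseid"], len(grades), "grade values")
```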