
feat: ORA staff grader endpoints upgrade #10

Draft · wants to merge 3 commits into base: FG/ORA_staff_grader_initialize_upgrade
Changes from 1 commit
121 changes: 121 additions & 0 deletions openassessment/data.py
@@ -19,6 +19,7 @@
from django.utils.translation import gettext as _
import requests

from submissions.models import Submission
from submissions import api as sub_api
from submissions.errors import SubmissionNotFoundError
from openassessment.runtime_imports.classes import import_block_structure_transformers, import_external_id
@@ -27,6 +28,7 @@
from openassessment.assessment.models import Assessment, AssessmentFeedback, AssessmentPart
from openassessment.fileupload.api import get_download_url
from openassessment.workflow.models import AssessmentWorkflow, TeamAssessmentWorkflow
from openassessment.assessment.score_type_constants import PEER_TYPE, SELF_TYPE, STAFF_TYPE

logger = logging.getLogger(__name__)

@@ -1577,3 +1579,122 @@ def get_file_uploads(self, missing_blank=False):
            files.append(file_upload)
        self.file_uploads = files
        return self.file_uploads


def score_type_to_string(score_type):
    """
    Converts the given score type into its string representation.
    """
    SCORE_TYPE_MAP = {
        PEER_TYPE: "Peer",
        SELF_TYPE: "Self",
        STAFF_TYPE: "Staff",
    }
    return SCORE_TYPE_MAP.get(score_type, "Unknown")

def parts_summary(assessment_obj):
    """
    Retrieves a summary of the parts from a given assessment object.
    """
    return [
        {
            'type': part.criterion.name,
            'score': part.points_earned,
            'score_type': part.option.name if part.option else "None",
        }
        for part in assessment_obj.parts.all()
    ]
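
For illustration, a sketch of what parts_summary returns for a hypothetical two-criterion rubric (criterion and option names are made up); note that, as written, the 'score_type' key carries the selected option's name rather than a score type:

    [
        {'type': 'Ideas', 'score': 3, 'score_type': 'Good'},
        {'type': 'Content', 'score': 2, 'score_type': 'Fair'},
    ]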

def get_scorer_data(anonymous_scorer_id):
    """
    Retrieves the grader's data (full name, username, and email) based on their anonymous ID.
    """
    scorer_username = map_anonymized_ids_to_usernames([anonymous_scorer_id]).get(anonymous_scorer_id, "Unknown")
    scorer_name = map_anonymized_ids_to_fullname([anonymous_scorer_id]).get(anonymous_scorer_id, "Unknown")
    scorer_email = map_anonymized_ids_to_emails([anonymous_scorer_id]).get(anonymous_scorer_id, "Unknown")
    return scorer_name, scorer_username, scorer_email

def generate_assessment_data(assessment_list):
    """
    Builds a summary dictionary for each assessment in the given list.
    """
    results = []
    for assessment in assessment_list:
        scorer_name, scorer_username, scorer_email = get_scorer_data(assessment.scorer_id)

        assessment_data = {
            "idAssessment": str(assessment.id),
            "grader_name": scorer_name,
            "grader_username": scorer_username,
            "grader_email": scorer_email,
            "assesmentDate": assessment.scored_at.strftime('%d-%m-%Y'),
            "assesmentScores": parts_summary(assessment),
            "problemStep": score_type_to_string(assessment.score_type),
            "feedback": assessment.feedback or ''
        }
        results.append(assessment_data)
    return results
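
For reference, a sketch of the payload generate_assessment_data produces for a single staff assessment (all values are illustrative):

    [
        {
            "idAssessment": "42",
            "grader_name": "Jane Doe",
            "grader_username": "jdoe",
            "grader_email": "jdoe@example.com",
            "assesmentDate": "18-10-2023",
            "assesmentScores": [
                {"type": "Ideas", "score": 3, "score_type": "Good"}
            ],
            "problemStep": "Staff",
            "feedback": "Nice work."
        }
    ]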

def generate_received_assessment_data(submission_uuid=None):
    """
    Generates a list of received assessments data based on the submission UUID.

    Args:
        submission_uuid (str, optional): The UUID of the submission. Defaults to None.

    Returns:
        list[dict]: A list containing assessment data dictionaries.
    """
    results = []

    submission = None
    if submission_uuid:
        submission = sub_api.get_submission_and_student(submission_uuid)

    if not submission:
        return results

    assessments = _use_read_replica(
        Assessment.objects.prefetch_related('parts')
        .prefetch_related('rubric')
        .filter(submission_uuid=submission['uuid'])
    )
    return generate_assessment_data(assessments)


def generate_given_assessment_data(item_id=None, submission_uuid=None):
    """
    Generates a list of assessments given by the owner of the submission UUID as scorer.

    Args:
        item_id (str, optional): The ID of the xblock/item whose submissions are scanned. Defaults to None.
        submission_uuid (str, optional): The UUID of the scorer's own submission. Defaults to None.

    Returns:
        list[dict]: A list containing assessment data dictionaries.
    """
    results = []

    # Get the scorer's student id from their own submission.
    primary_submission = sub_api.get_submission_and_student(submission_uuid)
    if not primary_submission:
        return results

    scorer_student_id = primary_submission['student_item']['student_id']

    # Collect the UUIDs of every submission for this item.
    submission_uuids = []
    if item_id:
        submissions = Submission.objects.filter(student_item__item_id=item_id).values('uuid')
        submission_uuids = [sub['uuid'] for sub in submissions]

    if not submission_uuids:
        return results

    # Now fetch all assessments made by this student for these submissions.
    assessments_made_by_student = _use_read_replica(
        Assessment.objects.prefetch_related('parts')
        .prefetch_related('rubric')
        .filter(scorer_id=scorer_student_id, submission_uuid__in=submission_uuids)
    )
    return generate_assessment_data(assessments_made_by_student)
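
Taken together, the two helpers would be called along these lines (the UUID and block ID below are hypothetical):

    # Assessments received on a learner's submission.
    received = generate_received_assessment_data(
        submission_uuid="6f0f0b01-55b6-47a5-8d6a-c722c4a8b9d2",
    )

    # Assessments the same learner gave on other submissions for this block.
    given = generate_given_assessment_data(
        item_id="block-v1:edX+DemoX+Demo+type@openassessment+block@itemid",
        submission_uuid="6f0f0b01-55b6-47a5-8d6a-c722c4a8b9d2",
    )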
35 changes: 34 additions & 1 deletion openassessment/staffgrader/staff_grader_mixin.py
@@ -16,7 +16,7 @@
from openassessment.assessment.models.base import Assessment, AssessmentPart
from openassessment.assessment.models.staff import StaffWorkflow, TeamStaffWorkflow
from openassessment.data import (OraSubmissionAnswerFactory, VersionNotFoundException, map_anonymized_ids_to_emails,
-                                map_anonymized_ids_to_fullname, map_anonymized_ids_to_usernames)
+                                map_anonymized_ids_to_fullname, map_anonymized_ids_to_usernames, generate_received_assessment_data, generate_given_assessment_data)
from openassessment.staffgrader.errors.submission_lock import SubmissionLockContestedError
from openassessment.staffgrader.models.submission_lock import SubmissionGradingLock
from openassessment.staffgrader.serializers import (
@@ -194,6 +194,39 @@ def list_staff_workflows(self, data, suffix=''):  # pylint: disable=unused-argument
            log.exception("Failed to serialize workflow %d: %s", staff_workflow.id, str(e), exc_info=True)
        return result



    @XBlock.json_handler
    @require_course_staff("STUDENT_GRADE")
    def list_assessments_grades(self, data, suffix=''):  # pylint: disable=unused-argument
        """
        List the assessments' grades based on the type (received or given) for a specific submission.

        Args:
            data (dict): Contains the necessary information to fetch the assessments.
                - 'item_id': The ID of the xblock/item.
                - 'submission_uuid': The UUID of the submission.
                - 'assessment_type': A string, either "received" or any other value,
                  used to determine the type of assessments to retrieve.

        Returns:
            list[dict]: A list of dictionaries, each representing an assessment's data.

        Note:
            - If 'assessment_type' is "received", the function fetches assessments received
              for the given 'submission_uuid'.
            - For any other value of 'assessment_type', the function fetches assessments
              given by the owner of the 'submission_uuid' for other submissions in the same item.
        """
        item_id = data['item_id']
        submission_uuid = data['submission_uuid']

        if data['assessment_type'] == "received":
            return generate_received_assessment_data(submission_uuid)
        else:
            return generate_given_assessment_data(item_id, submission_uuid)
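
A minimal sketch of the JSON body this handler expects from the client; the field names follow the code above, and the values are illustrative:

    {
        "item_id": "block-v1:edX+DemoX+Demo+type@openassessment+block@itemid",
        "submission_uuid": "6f0f0b01-55b6-47a5-8d6a-c722c4a8b9d2",
        "assessment_type": "received"
    }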
mariajgrimaldi (Oct 18, 2023):
Are these given and received assessment types present elsewhere? Can we elaborate on the meaning of each?


I believe the default should be to list all assessments for the submission uuid, so received.


Why is a single handler better than separating concerns? list_assessments_received or list_assessments_given would be more explicit than list_assessments..., but I don't have other strong arguments.

nandodev-net (Author):

In the MFE, it's just a table with a filter button; the data output has the same format. At the backend level, it's just a design decision; compared to having two handlers, it neither adds nor subtracts anything. It's pretty much the same.

mariajgrimaldi (Oct 23, 2023):

It's not the same. When using a single handler with a variable assessment_type, which defaults to given, you are making a decision there: you're saying the default assessments returned are those given by students on the submission. When using two handlers, the client can actively decide which one to call without guessing the system's default. Don't you think?

nandodev-net (Author):

Yeah, that's true... but a request without assessment_type (now called assessment_filter) does not exist; it will always be given or received...

[screenshot omitted]

In fact, it is a required param in the AssessmentFeedbackView of edx-platform, declared in lms/djangoapps/ora_staff_grader/constants.py, so list_assessments() would never be called without assessment_filter...

Hehe, but now that you're saying this... I think that in this form, it will look even better 😃

        filter_value = data['assessment_filter']

        if filter_value == "received":
            return generate_received_assessment_data(submission_uuid)
        elif filter_value == "given":
            return generate_given_assessment_data(item_id, submission_uuid)
        else:
            raise ValueError("Invalid assessment_filter value")



    def _get_list_workflows_serializer_context(self, staff_workflows, is_team_assignment=False):
        """
        Fetch additional required data and models to serialize the response