Grading Setup Guide
Configure grading criteria, rubrics, and customize grading behavior
Overview
Kai’s intelligent grading system combines AI-powered assessment with customizable rubrics and workflows to provide consistent, detailed feedback while saving you time. This guide covers everything from basic rubric setup to advanced grading configurations.
Quick Start
The fastest way to get started with grading:
- Create your first rubric in the Dashboard
- Test with sample submissions to calibrate the AI
- Review and adjust grading thresholds
- Enable auto-grading for appropriate assignments
Start with Kai’s pre-built rubrics and customize them to match your needs. This is faster than building from scratch and ensures you don’t miss important criteria.
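In code, the same quick start can be sketched with the kai Python client used throughout this guide (a sketch, not a complete setup; the rubric.id attribute is an assumption mirroring the client's conventions):

# 1-2. Create a rubric from a template and calibrate against samples
rubric = kai.rubrics.create_from_template(
    template_id="essay_analysis",
    course_id="course_abc",
    customizations={"name": "My First Rubric"}
)
calibration = kai.calibration.create(
    course_id="course_abc",
    assignment_id="assign_essay_001",
    rubric_id=rubric.id,  # assumed attribute, analogous to calibration.id below
    sample_size=5
)

# 3-4. Adjust thresholds, then enable confidence-gated auto-grading
kai.courses.update(
    course_id="course_abc",
    grading_settings={"auto_grade_threshold": 0.85}
)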
Creating Grading Rubrics
Basic Rubric Structure
A rubric in Kai consists of:
- Criteria: Individual aspects being evaluated (e.g., “Thesis Statement”, “Evidence”)
- Point Values: Weight of each criterion
- Performance Levels: Quality tiers (Excellent, Good, Fair, Poor)
- Descriptors: What defines each performance level
Creating via Dashboard
Step-by-step:
- Navigate: Dashboard → Grading → Rubrics → “Create New Rubric”
- Name your rubric: Use clear, descriptive names (e.g., “Research Paper - HIST 101”)
- Set total points: Typically 100 or aligned with your gradebook
- Add criteria: Click “Add Criterion” for each aspect you want to evaluate
Example: Essay Rubric
Rubric Name: "Analytical Essay - ENGL 201"
Total Points: 100
Criterion 1: Thesis & Argument (25 points)
- Excellent (25): Clear, specific, arguable thesis with sophisticated argument
- Good (20): Clear thesis with solid argument, minor improvements possible
- Fair (15): Thesis present but vague; argument needs development
- Poor (10): Weak or missing thesis; argument unclear
Criterion 2: Evidence & Analysis (30 points)
- Excellent (30): Strong textual evidence with deep, original analysis
- Good (24): Good evidence with competent analysis
- Fair (18): Some evidence but superficial analysis
- Poor (12): Minimal evidence; analysis missing or incorrect
Criterion 3: Organization & Structure (20 points)
- Excellent (20): Logical flow, clear transitions, coherent structure
- Good (16): Generally organized with minor flow issues
- Fair (12): Disorganized in places; unclear transitions
- Poor (8): Poor organization; difficult to follow
Criterion 4: Writing Quality (15 points)
- Excellent (15): Clear, precise writing; minimal errors
- Good (12): Clear writing with minor errors
- Fair (9): Unclear at times; multiple errors
- Poor (6): Frequent errors impede understanding
Criterion 5: Citations & Format (10 points)
- Excellent (10): Perfect citation format; all sources cited
- Good (8): Minor citation errors
- Fair (6): Multiple citation issues
- Poor (4): Missing citations or major format errors
Creating via API
For programmatic rubric creation or bulk uploads:
import os

import requests

API_KEY = os.environ["KAI_API_KEY"]  # assumes your key is exported in the environment
rubric = {
"name": "Analytical Essay - ENGL 201",
"total_points": 100,
"criteria": [
{
"name": "Thesis & Argument",
"points": 25,
"description": "Clear, arguable thesis with well-developed argument",
"levels": [
{
"name": "Excellent",
"points": 25,
"description": "Clear, specific, arguable thesis with sophisticated argument"
},
{
"name": "Good",
"points": 20,
"description": "Clear thesis with solid argument, minor improvements possible"
},
{
"name": "Fair",
"points": 15,
"description": "Thesis present but vague; argument needs development"
},
{
"name": "Poor",
"points": 10,
"description": "Weak or missing thesis; argument unclear"
}
]
},
{
"name": "Evidence & Analysis",
"points": 30,
"description": "Use of textual evidence and quality of analysis",
"levels": [
{
"name": "Excellent",
"points": 30,
"description": "Strong textual evidence with deep, original analysis"
},
{
"name": "Good",
"points": 24,
"description": "Good evidence with competent analysis"
},
{
"name": "Fair",
"points": 18,
"description": "Some evidence but superficial analysis"
},
{
"name": "Poor",
"points": 12,
"description": "Minimal evidence; analysis missing or incorrect"
}
]
}
# ... additional criteria
]
}
response = requests.post(
"https://chi2api.com/v1/rubrics",
headers={
"Authorization": f"Bearer {API_KEY}",
"Content-Type": "application/json"
},
json=rubric
)
response.raise_for_status()
rubric_id = response.json()["rubric_id"]
print(f"Created rubric: {rubric_id}")

Subject-Specific Rubric Templates
Kai provides pre-built templates for common subjects:
Available Templates:
| Subject | Template Name | Best For | Criteria Count |
|---|---|---|---|
| Writing | Essay Analysis | Analytical essays, literary analysis | 5 |
| Writing | Research Paper | Research-based writing | 6 |
| Writing | Creative Writing | Fiction, poetry, creative work | 4 |
| Science | Lab Report | Laboratory experiments | 7 |
| Science | Problem Sets | Math/physics problems | 4 |
| Social Science | Case Analysis | Business/policy cases | 5 |
| Programming | Code Quality | Programming assignments | 6 |
| Presentation | Oral Presentation | Speeches, presentations | 5 |
Using a Template:
# Browse templates (assumes an initialized kai client; setup not shown)
templates = kai.rubrics.list_templates(subject="writing")
# Apply template
kai.rubrics.create_from_template(
template_id="essay_analysis",
course_id="course_abc",
customizations={
"name": "Literary Analysis - American Lit",
"total_points": 100,
"adjust_weights": {
"Evidence & Analysis": 35, # Increase from default 30
"Writing Quality": 10 # Decrease from default 15
}
}
)

Grading Configuration
Auto-Grading Thresholds
Configure when Kai can automatically assign grades vs. when human review is required.
Confidence-Based Grading:
grading_policy = {
"auto_grade_threshold": 0.85, # Auto-grade if AI confidence >= 85%
"review_required": {
"low_confidence": True, # Review if confidence < threshold
"high_stakes": True, # Review if assignment weight > 20%
"boundary_scores": True, # Review if near grade boundary (±2%)
"flagged_content": True, # Review if content flagged
"student_request": True # Review if student requests
},
"grade_boundaries": [90, 80, 70, 60], # A, B, C, D cutoffs
"boundary_margin": 2, # ±2 points triggers review
"exceptions": {
"final_exams": {"auto_grade": False}, # Always review finals
"first_assignment": {"auto_grade": False} # Review first to calibrate
}
}
kai.courses.update(
course_id="course_abc",
grading_settings=grading_policy
)

Example Scenarios:
| Submission | AI Score | Confidence | Auto-Grade? | Reason |
|---|---|---|---|---|
| Excellent work, clear criteria match | 95 | 0.96 | ✅ Yes | High confidence, not near boundary |
| Good work, some ambiguity | 82 | 0.78 | ❌ No | Below 0.85 threshold |
| Borderline B/C (78%) | 78 | 0.88 | ❌ No | Within 2 points of 80% boundary |
| Final exam | 92 | 0.94 | ❌ No | Final exam exception |
| Plagiarism detected | 88 | 0.91 | ❌ No | Flagged content |
| Student requested review | 85 | 0.89 | ❌ No | Student request |
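These decisions follow mechanically from the policy fields; a minimal sketch of the logic (a hypothetical helper, not part of the kai SDK) might look like:

def should_auto_grade(score, confidence, policy,
                      is_final=False, flagged=False, review_requested=False):
    """Mirror the scenario table: exceptions and flags first, then confidence, then boundaries."""
    if is_final and not policy["exceptions"]["final_exams"]["auto_grade"]:
        return False, "Final exam exception"
    if flagged:
        return False, "Flagged content"
    if review_requested:
        return False, "Student request"
    if confidence < policy["auto_grade_threshold"]:
        return False, "Below confidence threshold"
    if any(abs(score - b) <= policy["boundary_margin"] for b in policy["grade_boundaries"]):
        return False, "Near a grade boundary"
    return True, "High confidence, not near boundary"

print(should_auto_grade(95, 0.96, grading_policy))  # (True, 'High confidence, not near boundary')
print(should_auto_grade(78, 0.88, grading_policy))  # (False, 'Near a grade boundary')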
Feedback Customization
Configure the style and detail of AI-generated feedback.
Feedback Styles:
| Style | Description | Detail Level | Best For |
|---|---|---|---|
| Detailed | Comprehensive with specific suggestions | High | Major papers, final projects |
| Concise | Brief, actionable feedback | Medium | Regular assignments, quizzes |
| Encouraging | Positive, growth-focused | Medium | Struggling students, formative work |
| Professional | Formal, academic tone | High | Graduate work, professional programs |
| Points Only | Score with minimal commentary | Low | Self-graded work, completion-based |
Configuration:
# Set course-wide default
kai.courses.update(
course_id="course_abc",
grading_settings={
"feedback_style": "detailed",
"feedback_options": {
"show_rubric_scores": True,
"include_suggestions": True,
"highlight_strengths": True,
"provide_examples": True,
"suggest_resources": True
}
}
)
# Override for specific assignment
kai.assignments.update(
assignment_id="assign_final",
grading_settings={
"feedback_style": "professional",
"feedback_options": {
"include_suggestions": False, # Just evaluation
"provide_examples": False
}
}
)
# Override for specific student (accommodation)
kai.students.update_grading_preferences(
student_id="student_123",
course_id="course_abc",
preferences={
"feedback_style": "encouraging",
"additional_support": True
}
)

Example Feedback Comparison:
Assignment: Essay on "The Great Gatsby"
Score: 82/100
--- DETAILED STYLE ---
Thesis & Argument (20/25):
Your thesis in paragraph 2 clearly identifies Fitzgerald's use of symbolism,
which is a strong foundation. However, consider making it more specific -
instead of "symbolism is important," argue WHY the green light specifically
represents Gatsby's impossible dream. See Smith (2018, p. 45) for examples
of strong thesis statements in literary analysis.
Evidence & Analysis (26/30):
Excellent use of textual evidence in paragraphs 3-5. Your analysis of the
green light scene is sophisticated and well-developed. To strengthen further,
connect your analysis back to your thesis more explicitly. For example, after
analyzing the "orgastic future" passage, add a sentence like "This demonstrates
how Fitzgerald uses temporal imagery to underscore the thesis that..."
--- CONCISE STYLE ---
Thesis & Argument (20/25): Strong foundation. Make thesis more specific about
green light symbolism.
Evidence & Analysis (26/30): Good textual evidence. Connect analysis back to
thesis more explicitly.
--- ENCOURAGING STYLE ---
Thesis & Argument (20/25): You're developing strong analytical skills! Your
thesis shows good understanding of symbolism. Next step: make it even more
specific and arguable.
Evidence & Analysis (26/30): Excellent work with textual evidence! Your
analysis shows real insight into Fitzgerald's techniques.
Grading Workflows
Define the process from submission to final grade.
Standard Workflow:
workflow = {
"name": "Standard Essay Grading",
"steps": [
{
"step": 1,
"action": "ai_grade",
"config": {
"use_rubric": True,
"rubric_id": "rubric_essay_001"
}
},
{
"step": 2,
"action": "check_confidence",
"config": {
"threshold": 0.85,
"if_low": "route_to_review"
}
},
{
"step": 3,
"action": "check_policies",
"config": {
"check_plagiarism": True,
"check_boundaries": True,
"check_stakes": True
}
},
{
"step": 4,
"action": "assign_grade",
"config": {
"auto_post": False, # Instructor must approve
"notify_student": True
}
}
]
}
kai.workflows.create(workflow)

Advanced Workflow with Peer Review:
peer_review_workflow = {
"name": "Peer Review + AI Grading",
"steps": [
{
"step": 1,
"action": "peer_review",
"config": {
"reviewers_per_submission": 2,
"review_rubric_id": "peer_review_rubric",
"anonymize": True,
"deadline_offset_hours": 48
}
},
{
"step": 2,
"action": "ai_grade",
"config": {
"consider_peer_feedback": True,
"peer_weight": 0.2, # 20% peer, 80% AI/instructor
"flag_discrepancies": True
}
},
{
"step": 3,
"action": "instructor_review",
"config": {
"required_if": [
"large_peer_ai_discrepancy",
"flagged_content",
"low_confidence"
]
}
},
{
"step": 4,
"action": "finalize_grade"
}
]
}
kai.workflows.create(peer_review_workflow)
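The peer_weight setting implies a weighted blend along these lines (illustrative only; the actual combination happens inside Kai):

def blended_score(ai_score, peer_scores, peer_weight=0.2):
    # 20% average peer score, 80% AI/instructor score
    peer_avg = sum(peer_scores) / len(peer_scores)
    return peer_weight * peer_avg + (1 - peer_weight) * ai_score

blended_score(86, [80, 90])  # 0.2 * 85 + 0.8 * 86 = 85.8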
Calibration and Training

Initial Calibration
Train Kai to match your grading style:
Calibration Process:
- Grade Sample Set: Manually grade 5-10 representative submissions
- AI Comparison: Kai grades the same submissions
- Review Differences: Examine where AI differs from your grades
- Adjust Settings: Modify thresholds and rubric weights
- Re-test: Test on new sample set
Running Calibration:
# Create calibration set
calibration = kai.calibration.create(
course_id="course_abc",
assignment_id="assign_essay_001",
rubric_id="rubric_essay_001",
sample_size=10
)
# Grade manually
for submission in calibration.submissions:
instructor_grade = manual_grading_interface(submission)  # your own grading UI or process
kai.calibration.add_instructor_grade(
calibration_id=calibration.id,
submission_id=submission.id,
grade=instructor_grade
)
# Run AI grading on same submissions
kai.calibration.run_ai_grading(calibration.id)
# Compare and analyze
analysis = kai.calibration.analyze(calibration.id)
print(f"Average difference: {analysis.avg_difference} points")
print(f"Criteria with largest variance: {analysis.variance_by_criterion}")
# View detailed comparison
for comparison in analysis.comparisons:
print(f"\nSubmission {comparison.submission_id}:")
print(f" Your grade: {comparison.instructor_grade}")
print(f" AI grade: {comparison.ai_grade}")
print(f" Difference: {comparison.difference}")
print(f" Largest criterion gap: {comparison.largest_gap}")Interpreting Results:
| Average Difference | Interpretation | Action |
|---|---|---|
| < 3 points | Excellent alignment | Proceed with confidence |
| 3-5 points | Good alignment | Review criteria with largest gaps |
| 5-8 points | Moderate alignment | Adjust rubric weights, re-calibrate |
| > 8 points | Poor alignment | Revise rubric descriptors significantly |
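To act on these ranges programmatically, a hypothetical helper could map the analysis result to an action:

def interpret_calibration(avg_difference):
    if avg_difference < 3:
        return "Excellent alignment: proceed with confidence"
    if avg_difference <= 5:
        return "Good alignment: review criteria with largest gaps"
    if avg_difference <= 8:
        return "Moderate alignment: adjust rubric weights, re-calibrate"
    return "Poor alignment: revise rubric descriptors significantly"

print(interpret_calibration(analysis.avg_difference))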
Ongoing Adjustment
Continuously improve grading accuracy:
Feedback Loop:
# After each grading session, review AI performance
performance = kai.grading.get_performance_metrics(
course_id="course_abc",
timeframe="last_week"
)
print(f"Auto-graded: {performance.auto_graded_count}")
print(f"Required review: {performance.review_count}")
print(f"Instructor agreed with AI: {performance.agreement_rate}%")
print(f"Average adjustment: {performance.avg_adjustment} points")
# Identify patterns in adjustments
patterns = kai.grading.analyze_adjustments(course_id="course_abc")
for pattern in patterns:
print(f"\nPattern: {pattern.description}")
print(f" Frequency: {pattern.frequency}")
print(f" Suggestion: {pattern.suggested_adjustment}")Special Grading Scenarios
STEM Problem Sets
Configure for mathematical and scientific work:
stem_grading = {
"partial_credit": {
"enabled": True,
"method": "step_by_step", # Track work, not just final answer
"steps": [
{"name": "Problem Setup", "weight": 0.2},
{"name": "Method Selection", "weight": 0.2},
{"name": "Calculation", "weight": 0.3},
{"name": "Final Answer", "weight": 0.2},
{"name": "Units & Formatting", "weight": 0.1}
]
},
"answer_checking": {
"numerical_tolerance": 0.01, # 1% tolerance for rounding
"accept_equivalent_forms": True, # √2 = 1.414...
"check_units": True
},
"common_errors": {
"track": True,
"provide_hints": True,
"error_categories": [
"sign_error",
"unit_conversion",
"algebraic_manipulation",
"calculation_error"
]
}
}
kai.assignments.update(
assignment_id="physics_problemset_1",
grading_settings=stem_grading
)
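For reference, a 1% numerical tolerance behaves like a relative-tolerance comparison; a minimal sketch (not the kai implementation):

import math

def answers_match(student_value, expected_value, rel_tol=0.01):
    # Accepts equivalent forms that agree within 1%, e.g. 1.414 vs. sqrt(2)
    return math.isclose(student_value, expected_value, rel_tol=rel_tol)

answers_match(1.414, math.sqrt(2))  # True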
Code Assignments

Special configuration for programming:
code_grading = {
"execution_testing": {
"enabled": True,
"test_cases": [
{"input": "test1.txt", "expected_output": "output1.txt"},
{"input": "test2.txt", "expected_output": "output2.txt"}
],
"timeout_seconds": 5
},
"code_quality": {
"check_style": True,
"style_guide": "pep8", # Or "google", "airbnb", etc.
"check_documentation": True,
"check_efficiency": True,
"max_complexity": 10
},
"plagiarism_detection": {
"enabled": True,
"compare_to": ["current_class", "previous_semesters"],
"code_similarity_threshold": 0.85
},
"rubric_weights": {
"correctness": 0.50,
"code_quality": 0.25,
"documentation": 0.15,
"efficiency": 0.10
}
}
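Conceptually, execution testing runs the submission against each input file and compares stdout to the expected output; a simplified sketch under those assumptions (Python submissions, text I/O; not the kai sandbox):

import subprocess

def run_test_case(script_path, input_path, expected_output_path, timeout=5):
    with open(input_path) as stdin, open(expected_output_path) as f:
        expected = f.read()
        result = subprocess.run(
            ["python", script_path],
            stdin=stdin, capture_output=True, text=True, timeout=timeout
        )
    return result.stdout == expected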
Creative Work

Configure for subjective assessments:
creative_grading = {
"subjective_criteria": {
"enabled": True,
"criteria": [
{
"name": "Originality",
"description": "Uniqueness and creativity of approach",
"weight": 0.3,
"allow_high_variance": True # Subjectivity expected
},
{
"name": "Emotional Impact",
"description": "Effectiveness of emotional resonance",
"weight": 0.3,
"allow_high_variance": True
},
{
"name": "Technical Execution",
"description": "Craft and technical skill",
"weight": 0.4,
"allow_high_variance": False # More objective
}
]
},
"require_human_review": {
"always": True, # Creative work always needs human judgment
"ai_provides": "suggestions_only"
},
"feedback_style": "encouraging" # Foster creativity
}

Grade Management
Posting Grades
Control when and how grades are released:
posting_settings = {
"auto_post": {
"enabled": True,
"conditions": {
"only_if_auto_graded": True,
"only_if_high_confidence": True,
"delay_hours": 24 # Give instructor time to review
}
},
"manual_post": {
"batch_post": True, # Post all at once
"notify_on_post": True
},
"anonymity": {
"hide_from_peers": True,
"show_distribution": True, # Show class distribution
"show_percentile": True # Show student's percentile
}
}

Grade Analytics
Track grading patterns and student performance:
# View grade distribution
distribution = kai.analytics.grade_distribution(
course_id="course_abc",
assignment_id="essay_001"
)
print(f"Mean: {distribution.mean}")
print(f"Median: {distribution.median}")
print(f"Std Dev: {distribution.std_dev}")
print(f"Distribution: {distribution.histogram}")
# Identify outliers
outliers = kai.analytics.identify_outliers(
course_id="course_abc",
assignment_id="essay_001",
threshold=2.0 # 2 standard deviations
)
# Track improvement over time
progress = kai.analytics.student_progress(
student_id="student_123",
course_id="course_abc"
)
print(f"Trend: {progress.trend}") # "improving", "declining", "stable"
print(f"Average improvement: {progress.avg_improvement} points per assignment")Best Practices
Rubric Design
Use specific, observable criteria rather than vague descriptors. Instead of “good writing,” use “clear topic sentences in each paragraph with supporting evidence.”
Good vs. Poor Descriptors:
| ❌ Poor | ✅ Good |
|---|---|
| “Good analysis” | “Analysis connects evidence to thesis with clear reasoning” |
| “Well organized” | “Logical paragraph order with transitions between ideas” |
| “Nice work” | “Correctly applies formulas with work shown for each step” |
Consistency
- Use same rubric for similar assignments
- Calibrate regularly (every 3-4 assignments)
- Document exceptions when you override AI grades
- Share rubrics with students before assignment
Transparency
# Make rubrics visible to students
kai.rubrics.update(
rubric_id="rubric_001",
settings={
"visible_to_students": True,
"show_before_submission": True,
"show_with_feedback": True
}
)
# Provide sample graded work
kai.assignments.add_examples(
assignment_id="essay_001",
examples=[
{
"submission": "sample_excellent.pdf",
"grade": 95,
"feedback": "Example of excellent work",
"rubric_scores": {...}
},
{
"submission": "sample_good.pdf",
"grade": 82,
"feedback": "Example of good work with room for improvement",
"rubric_scores": {...}
}
]
)

Efficiency Tips
- Batch Grade: Grade similar submissions together for consistency
- Use Templates: Start with templates, customize as needed
- Review Strategically: Focus human review on borderline/flagged cases
- Export Reports: Use analytics to identify trends early
Troubleshooting
Common Issues
Problem: AI consistently grades higher or lower than you would
Solutions:
# Adjust baseline
kai.courses.update(
course_id="course_abc",
grading_settings={
"grade_adjustment": +3 # Add 3 points to all AI grades
# or use multiplier: "grade_multiplier": 1.05
}
)
# Or recalibrate with more examples
kai.calibration.run_full_recalibration(
course_id="course_abc",
sample_size=20 # Larger sample
)

Problem: Similar submissions receive different grades
Solutions:
- Ensure rubric descriptors are specific and measurable
- Increase the auto-grade threshold to 0.90 (stricter), as shown below
- Review and revise the rubric after calibration
- Check for ambiguous criteria
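For example, to tighten the threshold:

kai.courses.update(
    course_id="course_abc",
    grading_settings={
        "auto_grade_threshold": 0.90  # Stricter than the 0.85 example above
    }
)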
Problem: Too many submissions flagged for review
Solutions:
# Improve rubric specificity
# Add more performance level descriptors
# Provide more calibration examples
# Or adjust threshold temporarily
kai.courses.update(
course_id="course_abc",
grading_settings={
"auto_grade_threshold": 0.75 # Lower threshold
}
)

Support Resources
- Email: grading-support@chi2labs.com
- Calibration Workshops: Schedule a session
- Video Tutorials: Rubric creation walkthrough
- Community: Share rubrics
Next Steps:
- Create content templates for assignments and quizzes
- Personalize the learning experience for your students
- Review best practices for effective grading