Copy this prompt and paste it into ChatGPT to get started
Act as an IB Computer Science IA examiner. Help me define success criteria and create a testing plan:
**DEFINING SUCCESS CRITERIA:**
1. **SMART Criteria**: Each success criterion must be:
- **S**pecific: Clear, unambiguous description of what the system does
- **M**easurable: Can be objectively verified (pass/fail)
- **A**chievable: Within the scope of your project
- **R**elevant: Addresses the client's actual needs
- **T**estable: You can demonstrate it works
2. **Good vs Bad Success Criteria**:
- Good: "The program shall calculate the mean, median, and mode of a dataset and display results to 2 decimal places"
- Bad: "The program shall be easy to use" (subjective, unmeasurable)
- Good: "The database shall retrieve all records matching a search query within 3 seconds"
- Bad: "The program shall be fast" (vague — no measurable threshold)
3. **Aim for 5-7 Criteria**: Cover core functionality, data handling, and user interaction
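To see why the "good" criterion above is testable, here is a minimal Python sketch (function name `summary_stats` is illustrative, not part of any required design) whose output can be checked objectively against the criterion:

```python
import statistics

def summary_stats(data):
    """Return (mean, median, mode) rounded to 2 decimal places.

    Because the criterion names exact outputs, verifying it is a
    direct pass/fail comparison rather than a judgment call."""
    return (round(statistics.mean(data), 2),
            round(statistics.median(data), 2),
            round(statistics.mode(data), 2))

print(summary_stats([1, 2, 2, 3, 10]))  # (3.6, 2, 2)
```

A criterion like "easy to use" offers no equivalent check, which is exactly what makes it a bad criterion.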
**CREATING A TEST PLAN:**
4. **Test Plan Table Format**:
| Test # | Success Criterion | Test Description | Test Data | Expected Result | Actual Result | Evidence | Pass/Fail |
|--------|-------------------|------------------|-----------|-----------------|---------------|----------|-----------|
5. **Types of Testing**:
- **Alpha Testing**: You test the product yourself
- Normal data: Expected inputs that should work correctly
- Boundary data: Edge cases at the limits of valid input
- Erroneous data: Invalid inputs that should be handled gracefully
- **Beta Testing**: Your client tests the product
- Record their experience and feedback
- Note any bugs or usability issues they encounter
6. **Test Data Categories**:
- **Normal**: Typical, valid input (e.g., age = 25)
- **Boundary**: Edge of valid range (e.g., age = 0, age = 150)
- **Erroneous**: Invalid input (e.g., age = "abc", age = -5)
- Test ALL three categories for robust evaluation
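The three categories above can be exercised against a single validation routine. A sketch in Python, assuming an age field with a valid range of 0–150 (the function name `parse_age` and the error messages are hypothetical):

```python
def parse_age(raw):
    """Validate an age input; valid range assumed to be 0-150 inclusive.

    Returns the age as an int, or raises ValueError with a clear
    message so erroneous data is handled gracefully, not with a crash."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"Age must be a whole number, got {raw!r}")
    if not 0 <= age <= 150:
        raise ValueError(f"Age must be between 0 and 150, got {age}")
    return age

# Normal:    parse_age("25")  -> 25
# Boundary:  parse_age("0")   -> 0,   parse_age("150") -> 150
# Erroneous: parse_age("abc") and parse_age("-5") both raise ValueError
```

Each comment line above maps directly onto one row of the test plan table, with the raised error message serving as the expected result for erroneous data.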
7. **Evidence Collection**:
- Screenshots showing test input and output
- Error messages for erroneous data handling
- Database state before and after operations
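For database evidence, a small helper that dumps a table's contents before and after an operation gives you a reproducible before/after record to screenshot. A sketch using Python's built-in sqlite3 module (the `students` table and `snapshot` helper are illustrative assumptions, not a required schema):

```python
import sqlite3

def snapshot(conn, table):
    """Return every row of a table, for use as before/after test evidence."""
    return conn.execute(f"SELECT * FROM {table}").fetchall()

conn = sqlite3.connect(":memory:")  # in-memory DB for demonstration
conn.execute("CREATE TABLE students (name TEXT, age INTEGER)")

before = snapshot(conn, "students")
conn.execute("INSERT INTO students VALUES ('Ada', 17)")
after = snapshot(conn, "students")

print(before)  # []
print(after)   # [('Ada', 17)]
```

Pasting the printed before/after state into the Evidence column is stronger than a bare "Pass", because the examiner can see exactly what changed.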
**EVALUATION WRITING:**
8. **For Each Criterion**: State whether it was met, partially met, or not met — with EVIDENCE
9. **Client Feedback Section**:
- Summarize client's written response
- Address any concerns or suggestions
- Be honest about limitations
10. **Recommendations for Future Development**:
- What features would you add with more time?
- What would you change about your approach?
- How could the solution be extended or scaled?
**Common Mistakes:**
- Success criteria that cannot be objectively tested
- Only testing with normal data (ignoring boundary and erroneous cases)
- No client feedback included
- Not providing screenshot evidence for test results
- Claiming everything passed without honest evaluation
**IB Tip:** Honest evaluation scores higher than pretending everything is perfect. Acknowledge failures and explain what you would improve.
**My success criteria and project:** [DESCRIBE YOUR CS IA AND PROPOSED CRITERIA]