Pre-Pod = a test run of new tutorials. Pre-Pod participants review all new content in the Jupyter Books and provide feedback to the content creator about what went well and what could be improved.
Pre-Pod Participants = a subset of TAs who apply through the normal TA application process and indicate on the application that they are interested in helping with Pre-Pod.
- Pre-Pod participants are paid an extra $500 (as of 2022).
- This is calculated as a daily rate based on the TA payment rate.
- Participants should have experience with both the content topic and Python.
- Preference is given to returning TAs or former students.
- Participants should be located in a U.S. timezone or share a timezone with the content creator for ease of scheduling.
- Coordinate with Curriculum Team Leads/Exec Chairs to understand how much content is being updated.
- Schedule dates for Pre-Pod sessions.
- Recruit Pre-Pod participants via the Instructions Department Slack channels and email to accepted TAs.
- Nominate a Pre-Pod Lead: one person for each group who’s willing to help record feedback and facilitate the group discussion. The Pre-Pod Lead does not receive additional compensation.
- Coordinate with all Day Leads and content creators so they are available when their content is reviewed. The pod may hold a meeting to discuss a day's content either at the end of that day or at the beginning of the next day; the timing depends on the Day Lead's availability.
- Run Pre-Pod
- Create a Pre-Pod schedule.
- Send the schedule to Day Leads and check their availability for Pre-Pod.
- Send acceptances to Pre-Pod TAs, including the schedule and the information needed for Pre-Pod (see ‘Content of information that will be emailed to participating pre-pod TAs’ below).
- Create Zoom Rooms
- 🌤️Using Our Shared Zoom Account
- Record the pod's discussions with Day Leads on Zoom.
- Create a shared folder for the daily feedback forms the pod fills out together and for the meeting recordings.
- Check with Day Leads and/or the Curriculum team whether the feedback form needs to be updated. In 2023, CMA used this feedback form: https://docs.google.com/forms/d/1HN9CIsCqoCgAM96WMNX3TNPXwHM6RtAFaKbs3blremY/edit
1. 30-minute meeting led by the Pre-Pod Lead to discuss the previous day's content.
2. TAs watch the introductory lecture alone (~30 minutes).
   a. Not all courses have an intro lecture; remove this portion as appropriate.
3. 30-minute transition into ‘pods’.
4. 4-hour chunk for reviewing the tutorial, with a 1-hour break after the first 1.5 hours.
5. 30 minutes watching the outro lecture, ending with a pod Q&A.
6. 1-hour meeting led by the Pre-Pod Lead to communicate with the Day Lead.
- For CMA 2023: May 23 - 27
- Work with the Project Manager and Curriculum Team to schedule.
- 10 TAs (2 teams of 5 each)
Group 1:
Group 2:
Our primary goal with the feedback is to ensure the new content works for students with a wide range of backgrounds. What we want to avoid is students getting discouraged because we expect knowledge they don't have. We have a broad range of students with different levels of education in neuroscience, math, and programming, so we can't assume the average student has any depth of knowledge in these topics. This means the lectures and tutorials must:
a) Have clear instructions that state, in order:
- What the students should do.
- Why the students should do it.
- What we learned after doing it.
b) Stand alone.
Spend some time discussing “discussion” sections
Outline some of the open questions that you would want the lecturers to answer.
- 11:30 AM - 12:30 PM ET for Group 2; 11:00 AM - 12:00 PM ET for Group 1 (in week one)
- First 30 min in breakout rooms: groups of 3-4, with each group assigned to a certain part of the content
- ASSIGN GROUPS DAY OF
- All groups:
- What was the clearest and what was the most difficult thing covered that day?
- Were the learning objectives accomplished?
- 1 group = intro lecture + outro lecture
- Were there any concepts that were too difficult or too easy?
- Was it related sufficiently to the tutorial concepts?
- Was there too much / too little math and/or neuro?
- 1 group = tutorial lectures
- Are they understandable?
- Do the tutorial slides/video match the tutorial content and give a good introduction to it? Or do they need to be related more to the code?
- Was there too much / too little math and/or neuro?
- At the completion of the tutorial, was there intuition built for the central mathematical/statistical/modeling/neuro concepts of the tutorial? What concepts needed further intuition-building?
- At the completion of the tutorial, if you build a reverse outline from the exercises, does it match the stated objectives in the intro? If not, what needs to be changed to alleviate this mismatch?
- 1 group = tutorial code
- At the onset of the tutorial, can you build a mental image of what the tutorial exercises should roughly cover, from the Intro and stated tutorial objectives? If not, what could be improved?
- Were the exercises clearly defined with enough specific instructions for what code needed to be added? Were they an appropriate difficulty? Was the amount of code about right? Did they feel helpful for your understanding?
- Did you learn enough when you ran the code?
- Was more explanation or comments needed for certain parts?
- Was the Python syntax consistent with other days?
- At the completion of the tutorial, was there intuition built for the central mathematical/statistical/modeling/neuro concepts of the tutorial? What concepts needed further intuition-building?
- At the completion of the tutorial, if you build a reverse outline from the exercises, does it match the stated objectives in the intro? If not, what needs to be changed to alleviate this mismatch?
- Next 30 min as a full group with the “day-chief” (the organizer of that day’s material)
- Each group gives a 5-10 minute summary of their discussion
- Data collector takes notes
Suggestions for improving how Pre-Pod functions:
1) For the group feedback, it would be ideal to have a platform where all pod members can type responses together. In the current approach, each person takes a turn filling in the form by collecting input from the other members, which is inefficient.
2) Consider rephrasing questions to elicit a score (1-10) rather than a yes/no response. For example, "Was the course content clear and easy to understand?" would mostly yield a binary answer; an additional text field could let respondents add details.