Status:
Summary of rewrite:
- I was initially confused about the separation between the EMG and M1 data and how the model ties them together. It may have been clear to the students, but just in case I extended the explanation to make it impossible to misunderstand
- Lots of terms need explanations; either add an extra definitions section at the start or add explanations in parentheses afterwards:
  - in silico
  - homeostasis
- Loose terminology (e.g. "properly constrained" is a poor way of saying "under realistic biological conditions") - fixed
- "Chaotic regime" - not defined! - fixed
- "Feedforward drive" - explained as continuing on from an explanation in the Comp Neuro course? Where? (Found & edited)
- Some input variables in the RNN formula are not explained, which is highly confusing (tweaked / amended and added)
  - Specifically the scaling parameters h and c
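To make the scaling-parameter bullet concrete, a minimal sketch of what the update could look like. The placement of h and c here is purely an assumption for illustration (h scaling the recurrent drive, c the external input); the tutorial's actual formula may define them differently:

```python
import numpy as np

# Hypothetical firing-rate RNN update. NOTE: the roles of h and c are assumed
# (h scales recurrent input, c scales external input) for illustration only.
def rnn_step(r, u, W_rec, W_in, h=1.0, c=1.0, dt=0.01, tau=0.1):
    """One Euler step of tau * dr/dt = -r + tanh(h * W_rec @ r + c * W_in @ u)."""
    return r + (dt / tau) * (-r + np.tanh(h * W_rec @ r + c * W_in @ u))

rng = np.random.default_rng(0)
n_units, n_inputs = 5, 2
r = rnn_step(
    np.zeros(n_units),                      # start from silent network
    rng.normal(size=n_inputs),              # one time point of external input
    rng.normal(size=(n_units, n_units)),    # recurrent weights
    rng.normal(size=(n_units, n_inputs)),   # input weights
)
```

Spelling out which term each parameter multiplies, right next to the equation, would resolve the confusion noted above.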
- A diagram of an RNN to refresh students' memory might be useful slightly before Section 2.2
- The coding exercise uses different variable names than the explanation. Not sure what is best here:
  - long names won't fit in the given definition and might be confusing
  - but code with single letters is much less informative
  - use comments to link the code back to the longer variable names and the symbols given in the equation (e.g. # firing_rate_before → r)
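The comment-mapping suggestion could look like the following; the variable names are hypothetical, not the tutorial's actual code:

```python
import numpy as np

# Illustrative only: short equation symbols in code, with comments mapping each
# back to its long descriptive name and its role in the equation.
firing_rate_before = np.array([0.1, 0.4, 0.2])
r = firing_rate_before        # firing_rate_before → r  (rate vector in the equation)
W = 0.5 * np.eye(3)           # recurrent_weights → W
r_next = np.tanh(W @ r)       # one update step, readable against the equation
```

This keeps the code compact while leaving a breadcrumb from every symbol back to the prose definition.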
- The specification of 'trajectory' isn't clear enough that it refers to a temporal sequence; people more familiar with standard RL might be more confused
- Add in 'trajectory' to definition section (done)
- The generate_trajectory function takes an input parameter that isn't clearly defined; I had to dig back and see it came out of the inner loop of a previously run cell. This could be confusing, so I added some explanation of where it comes from in case others try to trace the code input
- Don't keep the 'codebase' terminology, unnecessary (amended)
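A toy reconstruction of the pattern described above; everything here is hypothetical (the tutorial's actual generate_trajectory and the earlier cell's loop will differ), but it shows why the provenance of the input argument needs stating:

```python
# Hypothetical sketch: `inputs` is built in the inner loop of an earlier cell,
# then only later passed to generate_trajectory - easy to lose track of.
def generate_trajectory(inputs):
    """Roll toy dynamics forward over the temporal sequence of inputs."""
    state, states = 0.0, []
    for u in inputs:                # one step per time point → a trajectory
        state = 0.9 * state + u     # toy leaky accumulator, for illustration
        states.append(state)
    return states

inputs = [0.5 * t for t in range(4)]   # stand-in for the earlier cell's inner loop
traj = generate_trajectory(inputs)
```

A one-line comment at the call site ("inputs comes from the loop in cell N") would save readers the digging.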
- "Stochastic Gradient" → SGD (missing words in explanation)
- Rephrase "sanity check" (a criticism that occasionally pops up with using 'sanity' in this way; best to avoid if a change is easy to make)
- PSTH (peri-stimulus time histogram) - used throughout but not explained anywhere; needs adding to the definitions section, making sure the concept is clear (done)
- Some potentially misleading terminology corrected: RNN activity shouldn't "thus look like motor cortex" out of the box, just because its predictions are trained to have the same outcome - incorrect statement (fixed)
- hidden representations / latent activity (pick one term and stick with it) - fixed
- Same with "extrapolate beyond" & "generalise" - stick with generalise
- Limits the size of the weights → magnitude of the weights
- Concept of "expense" in the regularisation section doesn't really fit well, can be amended to sound better (done)
Need to rewrite a paragraph. My proposed solution:
We know natural solutions tend towards simplicity when complexity coincides with biologically expensive operations (e.g. ones that consume a lot of energy intake). If a task can be accomplished while minimising the need for valuable resources, evolution tends to favour such solutions. With this in mind, we want to bias the artificial neural network towards preferring such solutions too. We achieve this by adding extra loss terms that penalise the learning procedure when it chooses overly complex solutions.
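The "extra loss terms" in the proposed paragraph are typically an L2 penalty on the magnitude of the weights. A minimal sketch, assuming a plain NumPy setup rather than the tutorial's actual training code:

```python
import numpy as np

def regularized_loss(prediction, target, weights, lam=1e-3):
    """Task loss plus an L2 penalty on the magnitude of the weights,
    biasing learning towards 'cheaper' (smaller-weight) solutions."""
    task_loss = np.mean((prediction - target) ** 2)
    penalty = lam * np.sum(weights ** 2)    # grows with overly large weights
    return task_loss + penalty

w = np.array([0.5, -1.0])
loss = regularized_loss(np.array([1.0, 2.0]), np.array([1.0, 2.5]), w)
```

The strength lam controls how hard the network is pushed towards simple solutions, mirroring the evolutionary-cost intuition in the paragraph.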
- Clean up the PyTorch code: the tensor movement operations (x.detach().cpu().numpy()) added unnecessary mental clutter to the functions
- "Networks that perform similar tasks to the brain and generalize well sometimes converge to solutions similar to the brain's. We don't fully understand this phenomenon, and we'll introduce tools throughout the course to more quantitatively assess the correspondence." (can be worded better) - Done
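For the tensor-movement clutter noted above, one common fix is a small helper that hides the boilerplate. A sketch (the helper name is hypothetical; it is duck-typed so it also handles plain arrays, and runs here without torch installed):

```python
import numpy as np

def to_numpy(x):
    """Return x as a NumPy array, detaching and moving it off-GPU first
    if it is a torch tensor; otherwise fall back to np.asarray."""
    if hasattr(x, "detach"):            # torch.Tensor path
        return x.detach().cpu().numpy()
    return np.asarray(x)                # lists, NumPy arrays, scalars

arr = to_numpy([1.0, 2.0, 3.0])
```

Plotting and analysis functions can then call to_numpy(x) once at the top, keeping the scientific logic free of device bookkeeping.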
Other considerations:
- a denser side-by-side comparison of the plots would save scrolling up and down a lot in the discussion point