(seed planted — september 2022; watered — december 2024)
The way I learn is to maximize my exposure to a wide variety of resources, building up a concept map of some larger underlying thing I am consciously building, but unconsciously growing.
Because I don’t have formally trained, specialized intuitions about these concepts (that is exactly what I am trying to develop), I begin with the most generalized conversation about something specialized and dig vertically. That vertical digging, though, is only enabled by horizontal scanning, which gives me the footing to jump deeper. Think of a staircase.
For example: as I read a paper trying to explain why large generative models become unpredictable as you scale them (and thus exacerbate the unknown biases simultaneously being introduced into the model), I have to keep a separate tab open for Google searches: YouTube comments that re-explain what partial derivatives mean, going back to understand backpropagation, understanding what compute means for scale and why it matters, and so on.
What ends up happening is that I stumble into reading about yet another topic in the field, such as embeddings, or hyperparameter tuning, or what a sigmoid function does in a neural net, and it continues to build upon itself.
Then I listen to a talk from experts in the field, and they reference their own knowledge in the way they carry the conversation. For example: talking about corrigibility and utility functions to model how we approach safety mechanisms today.
Then I lose sight of the initial objective, of why and how I spun into that deep spiral of learning, and go back to the original paper and continue reading. This is how I learn and form my own mental models and concept maps. While it is a nice way to develop some kind of intuition, it is probably not an optimized approach to maximize my learning, because I jump from one space to another continuously and then backwards to iterate on my understanding. See: the black box method of learning.