A mental model is a general idea that can be used to explain many different phenomena. Supply and demand in economics, natural selection in biology, recursion in computer science, or proof by induction in mathematics: these models are everywhere once you know to look for them.
Just as understanding supply and demand helps you reason about economics problems, understanding mental models of learning will make it easier to think about learning problems.
Unfortunately, learning is rarely taught as a class on its own, meaning most of these mental models are known only to specialists. In this essay, I'd like to share the ten that have influenced me the most, along with references to dig deeper in case you'd like to know more.
1. Problem solving is search.
Herbert Simon and Allen Newell launched the study of problem solving with their landmark book, Human Problem Solving. In it, they argued that people solve problems by searching through a problem space.
A problem space is like a maze: you know where you are now, you'd know if you've reached the exit, but you don't know how to get there. Along the way, you're constrained in your movements by the maze's walls.
Problem spaces can also be abstract. Solving a Rubik's cube, for instance, means moving through a large problem space of configurations: the scrambled cube is your start, the cube with each color confined to a single side is the exit, and the twists and turns define the walls of the problem space.
Real-life problems are typically more expansive than mazes or Rubik's cubes: the start state, end state and exact moves are often not clear-cut. But searching through the space of possibilities is still a good description of what people do when solving unfamiliar problems, meaning when they don't yet have a method or memory that guides them directly to the answer.
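Simon and Newell's framing maps directly onto classic search algorithms. As a minimal sketch (the toy maze and room names below are my own invention, not from their book), breadth-first search explores a problem space one move at a time until it stumbles onto the goal:

```python
from collections import deque

def solve(start, goal, moves):
    """Breadth-first search through a problem space.
    `moves` maps each state to the states reachable in one step."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in moves.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start

# A toy maze: rooms A..E with one-way doors between them.
maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(solve("A", "E", maze))  # → ['A', 'B', 'D', 'E']
```

Notice that without any heuristic knowledge, the search blindly fans out in every direction, which is exactly why large problem spaces are so punishing for novices.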
One implication of this model is that, without prior knowledge, most problems are genuinely difficult to solve. A Rubik's cube has over forty-three quintillion configurations: a big space to search in if you aren't clever about it. Learning is the process of acquiring patterns and methods to cut down on brute-force searching.
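The quintillion-scale figure isn't an exaggeration; the standard counting argument (permutations and orientations of corners and edges, halved because only one parity class is reachable) multiplies out directly:

```python
from math import factorial

# 8 corners can be permuted and each twisted 3 ways (the last twist is
# forced), 12 edges permuted and each flipped 2 ways (the last flip is
# forced); only half of all permutation parities are reachable by legal turns.
configurations = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(configurations)  # → 43252003274489856000, over 43 quintillion
```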
2. Memory strengthens by retrieval.
Retrieving knowledge strengthens memory more than seeing something a second time does. Testing knowledge isn't just a way of measuring what you know: it actively improves your memory. In fact, testing is one of the best study techniques researchers have discovered.
Why is retrieval so helpful? One way to think of it is that the brain economizes effort by remembering only those things that are likely to prove useful. If you always have an answer at hand, there's no need to encode it in memory. In contrast, the difficulty associated with retrieval is a strong signal that you need to remember.
Retrieval only works if there is something to retrieve. This is why we need books, teachers and classes. When memory fails, we fall back on problem-solving search, which, depending on the size of the problem space, may fail utterly to give us a correct answer. However, once we've seen the answer, we'll learn more by retrieving it than by repeatedly viewing it.
3. Knowledge grows exponentially.
How much you're able to learn depends on what you already know. Research finds that the amount of knowledge retained from a text depends on prior knowledge of the topic. This effect can even outweigh general intelligence in some situations.
As you learn new things, you integrate them into what you already know. This integration provides more hooks for you to recall that information later. However, when you know little about a topic, you have fewer hooks to put new information on. This makes the information easier to forget. Like a crystal growing from a seed, future learning is much easier once a foundation is established.
This process has limits, of course, or knowledge would accelerate indefinitely. Still, it's good to keep in mind because the early phases of learning are often the hardest and can give a misleading impression of future difficulty within a field.
4. Creativity is mostly copying.
Few subjects are so misunderstood as creativity. We tend to imbue creative individuals with a near-magical aura, but creativity is much more mundane in practice.
In an impressive review of significant inventions, Matt Ridley argues that innovation results from an evolutionary process. Rather than springing into the world fully formed, new invention is essentially the random mutation of old ideas. When those ideas prove useful, they expand to fill a new niche.
Evidence for this view comes from the phenomenon of near-simultaneous innovation. Numerous times in history, multiple, unconnected people have developed the same innovation, which suggests that these inventions were somehow nearby in the space of possibilities right before their discovery.
Even in fine art, the importance of copying has been neglected. Yes, many revolutions in art were explicit rejections of past trends. But the revolutionaries themselves were, almost without exception, steeped in the tradition they rebelled against. Rebelling against any convention requires awareness of that convention.
5. Skills are specific.
Transfer refers to enhanced skill at one task following practice or training at a different task. In research on transfer, a typical pattern shows up:
- Practice at a task makes you better at it.
- Practice at a task helps with similar tasks (usually ones that overlap in procedures or knowledge).
- Practice at one task helps little with unrelated tasks, even if they seem to require the same broad abilities like memory, critical thinking or intelligence.
It's hard to make exact predictions about transfer because they depend on knowing both exactly how the human mind works and the structure of all knowledge. However, in more restricted domains, John Anderson has found that productions, IF-THEN rules that operate on knowledge, form a fairly good match for the amount of transfer observed in intellectual skills.
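Productions can be sketched as condition-action pairs matched against what you currently know. This toy forward-chaining system (the rules and facts are invented for illustration; Anderson's ACT-R models are far richer) fires any rule whose IF-part is satisfied, adding its THEN-part to memory, and transfer occurs exactly when two tasks share rules:

```python
def run(facts, rules):
    """Forward-chain: fire productions until no rule adds a new fact.
    Each rule is a pair (conditions, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # IF conditions hold THEN conclude
                changed = True
    return facts

# Two toy algebra productions that overlap across tasks.
rules = [
    ({"expression is x + 0"}, "answer is x"),
    ({"expression is x * 1"}, "answer is x"),
]
print(run({"expression is x + 0"}, rules))
```

On this view, practicing `x + 0` problems transfers to `x * 1` problems only to the extent that the tasks invoke shared productions, which matches the narrow transfer seen empirically.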
While skills may be specific, breadth creates generality. For instance, learning a word in a foreign language is only helpful when using or hearing that word. But if you know many words, you can say a lot of different things.
Similarly, knowing one idea may matter little, but mastering many can give enormous power. Every extra year of education improves IQ by 1-5 points, in part because the breadth of knowledge taught in school overlaps with that needed in real life (and on intelligence tests).
If you want to be smarter, there are no shortcuts: you'll have to learn a lot. But the converse is also true. Learning a lot makes you more intelligent than you might predict.
6. Mental bandwidth is extremely limited.
We can only keep a few things in mind at any one time. George Miller initially pegged the number at seven, plus or minus two items. But more recent work has suggested the number is closer to four.
This incredibly narrow space is the bottleneck through which every idea, memory and experience must flow if it is going to become a part of our long-term experience. Subliminal learning doesn't work. If you aren't paying attention, you're not learning.
The primary way we can be more efficient with learning is to ensure the things that flow through the bottleneck are useful. Devoting bandwidth to irrelevant elements may slow us down.
Since the 1980s, cognitive load theory has been used to explain how interventions optimize (or limit) learning based on our limited mental bandwidth. This research finds:
- Problem solving may be counterproductive for beginners. Novices do better when shown worked examples (solutions) instead.
- Materials should be designed to avoid making readers flip between pages or parts of a diagram to understand the material.
- Redundant information impedes learning.
- Complex ideas can be learned more easily when presented first in parts.
7. Success is the best teacher.
We learn more from success than failure. The reason is that problem spaces are typically large, and most solutions are wrong. Knowing what works cuts down the possibilities dramatically, whereas experiencing failure only tells you that one specific strategy doesn't work.
A good rule of thumb is to aim for a roughly 85% success rate when learning. You can do this by calibrating the difficulty of your practice (open vs. closed book, with vs. without a tutor, simple vs. complex problems) or by seeking extra training and assistance when falling below this threshold. If you succeed above this threshold, you're probably not seeking hard enough problems, and are practicing routines instead of learning new skills.
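One way to picture this rule in practice (the function, thresholds and step size below are my own illustration, not a validated training protocol) is a simple feedback loop that nudges practice difficulty toward the 85% target:

```python
def adjust_difficulty(difficulty, recent_results, target=0.85, step=1):
    """Nudge a practice-difficulty level toward a target success rate.
    `recent_results` is a list of booleans (True = problem solved)."""
    if not recent_results:
        return difficulty
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate > target:            # too easy: move up a level
        return difficulty + step
    if success_rate < target:            # too hard: ease off or seek help
        return max(1, difficulty - step)
    return difficulty

# Solving ten out of ten recent problems suggests the material is too easy.
print(adjust_difficulty(3, [True] * 10))  # → 4
```

The point of the sketch is the direction of adjustment, not the numbers: consistent perfection is a signal to raise the stakes, and a long losing streak is a signal to get help.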
8. We reason through examples.
How people can think logically is an age-old puzzle. Since Kant, we've known that logic can't be derived from experience. Somehow, we must already know the rules of logic, or an illogical mind could never have invented them. But if that is so, why do we so often fail at the kinds of problems logicians invent?
In 1983, Philip Johnson-Laird proposed a solution: we reason by constructing a mental model of the situation.
To test a syllogism like "All men are mortal. Socrates is a man. Therefore, Socrates is mortal," we imagine a collection of men, all of whom are mortal, and imagine that Socrates is one of them. We deduce the syllogism is true through this examination.
Johnson-Laird suggested that this mental-model-based reasoning also explains our logical deficits. We struggle most with logical statements that require us to examine multiple models. The more models that need constructing and reviewing, the more likely we are to make mistakes.
Related research by Daniel Kahneman and Amos Tversky shows that this example-based reasoning can lead us to mistake the ease of recalling examples for the actual probability of an event or pattern. For instance, we might think more words fit the pattern K _ _ _ than _ _ K _ because it is easier to think of examples in the first category (e.g., KITE, KALE, KILL) than the second (e.g., TAKE, BIKE, NUKE).
Reasoning through examples has several implications:
- Learning is often faster through examples than abstract descriptions.
- To learn a general pattern, we need many examples.
- We must be careful when making broad inferences based on a few examples. (Are you sure you've considered all the possible cases?)
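The Socrates syllogism can even be checked mechanically in the mental-model style: build one concrete situation and inspect it, rather than manipulating symbolic rules. (The extra names below are my own additions for illustration; this is a cartoon of Johnson-Laird's theory, not his actual model.)

```python
# One concrete "mental model" of the premises:
men = {"Socrates", "Plato", "Aristotle"}           # Socrates is a man
mortal = men | {"Bucephalus"}                      # all men are mortal
                                                   # (a mortal horse, too)
# Inspecting the model confirms the conclusion.
print("Socrates" in mortal)  # → True: therefore, Socrates is mortal
```

The theory's prediction about errors also falls out of this picture: syllogisms whose premises admit several distinct models require building and checking each one, and our four-item mental bandwidth runs out quickly.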
9. Knowledge becomes invisible with experience.
Skills become increasingly automated through practice. This reduces our conscious awareness of the skill, making it require less of our precious working memory capacity to perform. Think of driving a car: at first, using the blinkers and the brakes was painfully deliberate. After years of driving, you barely think about it.
The increased automation of skills has drawbacks, however. One is that it becomes much harder to teach a skill to someone else. When knowledge becomes tacit, it becomes harder to make explicit how you make a decision. Experts chronically underestimate the importance of basic skills because, having long been automated, they don't seem to factor much into their daily decision-making.
Another drawback is that automated skills are less open to conscious control. This can lead to plateaus in progress when you keep doing something the way you've always done it, even when that is no longer appropriate. Seeking more difficult challenges becomes essential because these bump you out of automaticity and force you to try better solutions.
10. Relearning is relatively fast.
After years spent in school, how many of us could still pass the final exams we needed to graduate? Faced with classroom questions, many adults sheepishly admit they remember little.
Forgetting is the unavoidable fate of any skill we don't use regularly. Hermann Ebbinghaus found that knowledge decays at an exponential rate: most quickly at the beginning, slowing down as time elapses.
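Ebbinghaus's curve is easy to picture numerically. The exponential form below is the standard one; the stability constant is an invented number, since the real rate varies by person and material:

```python
import math

def retention(t, stability=5.0):
    """Ebbinghaus-style forgetting curve: fraction retained after t days.
    `stability` (assumed here) sets how quickly the memory fades."""
    return math.exp(-t / stability)

# The drop is steepest right after learning, then flattens out.
drop_first_day = retention(0) - retention(1)
drop_tenth_day = retention(9) - retention(10)
print(round(drop_first_day, 3), round(drop_tenth_day, 3))
```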
Yet there is a silver lining. Relearning is usually much faster than initial learning. Some of this can be understood as a threshold effect. Imagine memory strength ranges between 0 and 100. Below some threshold, say 35, a memory is inaccessible. Thus if a memory dropped from 36 to 34 in strength, you would forget what you had known. But even a little boost from relearning would repair the memory enough to recall it. In contrast, a new memory (starting at zero) would require much more work.
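The threshold story can be made concrete with a toy model. The threshold of 35 and the starting strengths mirror the illustration above; the per-session boost is my own assumption:

```python
THRESHOLD = 35  # below this strength, the memory can't be recalled

def sessions_until_recallable(strength, boost=5):
    """Count study sessions needed to push a memory past the threshold."""
    sessions = 0
    while strength < THRESHOLD:
        strength += boost
        sessions += 1
    return sessions

# A faded memory at strength 34 recovers in one session;
# a brand-new memory starting from zero takes seven.
print(sessions_until_recallable(34))  # → 1
print(sessions_until_recallable(0))   # → 7
```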
Connectionist models, inspired by biological neural networks, offer another account of the speed of relearning. In these models, a computational neural network may take hundreds of iterations to reach an optimal point. If you perturb the connections in this network, it forgets the right answer and responds no better than chance. However, as with the threshold explanation above, the network relearns the optimal response much faster the second time.
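This savings effect appears even in the simplest trainable model. The one-parameter sketch below is my own illustration (not drawn from the connectionist literature): gradient descent relearns a slightly perturbed solution in far fewer steps than it took to learn from scratch:

```python
def train(w, target=2.0, lr=0.1, tol=0.01):
    """Gradient descent on the loss (w - target)**2.
    Returns the number of steps until w is within tol of target."""
    steps = 0
    while abs(w - target) > tol:
        w -= lr * 2 * (w - target)  # step along the negative gradient
        steps += 1
    return steps

from_scratch = train(0.0)   # initial learning, starting far from the answer
after_jiggle = train(1.8)   # relearning after a small perturbation
print(from_scratch, after_jiggle)
```

Each step shrinks the remaining error by a constant factor, so the step count grows with the (log of the) starting distance: a jiggled network starts close to the answer and snaps back quickly.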
Relearning is a nuisance, especially since struggling with previously easy problems can be discouraging. Yet it's no reason not to learn deeply and broadly: even forgotten knowledge can be revived much faster than starting from scratch.
What are the learning challenges you're facing? Can you apply one of these mental models to see it in a new light? What would the implications be for tackling a skill or subject you find difficult? Share your thoughts in the comments!
The post Ten Mental Models for Learning first appeared on Scott H Young.