One way to define an expert is as someone who has developed an intuition (a ‘feeling’) for solutions to complicated problems in some domain. We’re all experts in some laws of physics (do you use a calculator when catching a ball?) and also in reading the emotions of other humans (which computers still can’t do). We aren’t born experts in these things, as my 6-month-old could tell you. We learn by doing.
Expertise can be developed in other domains the same way. Here Hamming relates how he developed a feel for the trajectories of missiles he was simulating for the military:
Again, I developed a feeling for the behavior of the missile – I got to “feel” the forces on it as various programs of trajectory shaping were tried. Hanging over the output plotters as the solution slowly appeared gave me the time to absorb what was happening. I have often wondered what would have happened if I had had a modern, high-speed computer. Would I ever have acquired the feeling for the missile, upon which so much depended in the final design? I often doubt hundreds more trajectories would have taught me as much – I simply do not know. But that is why I am suspicious, to this day, of getting too many solutions and not doing enough very careful thinking about what you have seen. Volume output seems to me to be a poor substitute for acquiring an intimate feeling for the situation being simulated.
With better computers, he would have spent less time pondering the results. That means he would have been less of an expert because that mulling over of information is how our brains train.
Powerful computers are amazing devices, but for some tasks our perception, our ‘feel’ for things, is immensely more powerful still. A pity our meta-understanding of this (how does it work?) is so poor; hence the limits of AI.
Hey, speaking of this, have you heard of the vest that sees? See this Kickstarter campaign:
The idea of sensory substitution is not new: it was pioneered by Paul Bach-y-Rita in 1969 with blind participants. He developed a dental chair with an array of push pins on its back, which was attached to a video-camera feed. Blind participants sat in this chair and felt what was presented in front of the camera. After practice, the participants began to develop a visual intuition for the sensations they felt. The current incarnation of this device is called the Brainport, and blind individuals have been able to use it in complex visual tasks (like obstacle course navigation).
I’m constantly trying to strike the balance between intuition and computation in my day job. When building a model, loading the data into a framework is probably about 85% of the observable ‘work’ after debugging and whatnot. But I spend probably 2x that time just thinking about the project and how I feel about our approach. I do it on my commute, when I walk the dogs and when I sleep. And I do it at my desk with my head in my hands in front of some outputs I’m trying to make sense of.
But you also need to know when to stop yourself. In many ways expertise is a miraculous thing but it has blind spots, particularly when we think about the very complex. Here are some things that we aren’t good at feeling:
- non-linear systems
- multiple interactions of variables and constraints
- very low probability events
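As a tiny illustration of that last bullet, consider the classic birthday problem, where intuition about low-probability, interacting events reliably fails. This sketch (my example, not from the post) computes the probability exactly and checks it with a quick simulation:

```python
import random

def birthday_collision_prob_exact(n, days=365):
    """Exact probability that at least two of n people share a birthday."""
    p_unique = 1.0
    for i in range(n):
        p_unique *= (days - i) / days
    return 1.0 - p_unique

def birthday_collision_prob_sim(n, trials=100_000, days=365, seed=0):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        seen = set()
        for _ in range(n):
            d = rng.randrange(days)
            if d in seen:
                hits += 1
                break
            seen.add(d)
    return hits / trials

# Most people's gut says a shared birthday among just 23 people is
# unlikely; the exact calculation says it's better than a coin flip.
print(round(birthday_collision_prob_exact(23), 3))  # 0.507
```

The interactions compound quadratically (23 people means 253 pairs), which is exactly the kind of non-linearity our ‘feel’ glosses over.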
As a remedy, Hamming advocates for an interesting habit: building toy models.
The reason back-of-the-envelope calculations are widely used by great scientists is clearly revealed – you get a good feeling for the truth or falsity of what was claimed, as well as realize which factors you were inclined not to think about, such as exactly what was meant by the lifetime of a scientist. Having done the calculation you are much more likely to retain the results in your mind. Furthermore, such calculations keep the ability to model situations fresh and ready for more important applications as they arise. Thus I recommend when you hear quantitative remarks such as the above you turn to a quick modeling to see if you believe what is being said, especially when given in the public media like the press and TV. Very often you find what is being said is nonsense, either no definite statement is made which you can model, or if you can set up the model then the results of the model do not agree with what was said. I found it very valuable at the physics table I used to eat with; I sometimes cleared up misconceptions at the time they were being formed, thus advancing matters significantly.
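Here's the kind of quick model Hamming means, applied to a hypothetical headline of the form “the average person spends 5 years of their life waiting in line.” All the numbers below are illustrative assumptions of mine, not data; the point is only to see what the claim would imply:

```python
def implied_minutes_per_day(claimed_years, lifespan_years=75):
    """Minutes per day implied by an 'X years over a lifetime' claim.

    Assumes a rough 75-year lifespan; the claimed time is spread
    evenly across every day of life.
    """
    return claimed_years / lifespan_years * 24 * 60

# 5 years out of 75 means roughly an hour and a half in line,
# every single day, from birth onward. That smells wrong.
print(round(implied_minutes_per_day(5)))  # ≈ 96 minutes per day
```

Thirty seconds of arithmetic turns a vague, impressive-sounding number into a concrete daily quantity you can check against your own life.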
Models force people to be explicit about what they believe and let the results speak for themselves. Modeling can build intuition, but it also works in situations (like the bullets above) where intuition is unreliable. This isn’t trivial. People are often taken advantage of when they don’t have an intuitive feel for a problem and simply trust what others say. For those so trained, it’s easy to find a ton of nonsense statistical claims out there. One defense against this is our ability to tell if someone is lying.
But what if, blinded by bias, they don’t know they’re misleading you? Even the most sophisticated scientists fight viciously over complex questions. Read Paul Krugman’s blog for a few days to see a high-status economist be as nasty as you like toward his intellectual rivals, because nobody can convince anyone else of anything in macroeconomics. It’s too complex for us to understand. Both sides will tell you otherwise, but then why can’t they convince each other?
It’s easy to get down about partisan political debates but to be honest, I’m not sure they matter much. In your life and mine, improving one’s own expertise about problems that matter is totally achievable.
But it doesn’t happen by accident. Training your brain takes work.