I Like Disruption Jargon

So I’m a big fan of one category of business jargon: disruption. Now, to be clear, I don’t think it’s a useful general theory of business. It’s just an excellent metaphor and an interesting collection of stories about innovation. That’s it (and that’s enough!).

One of the movement’s founding insights is that product makers have a bias toward over-engineering. In the best case, this is driven by the requirements of a minority of very vocal, high-end users. In the worst case, it’s driven by the unmarketable whims of engineers. Either way, the product strays from the core value it gives to customers (the “Job to be done” in the jargon), and even company management can lose sight of which features are the killers amid the noise of the bundle.

Fast forward a few years. Technology advances and features get updated. But without a clear idea of which features drive demand, it’s easy to fall off the cutting edge. Along comes a startup that discovers this, focuses on the customer need, unbundles the bloat and, bang: disruption.

So out in the wild, look for bloated bundles or other signs of over-engineering when thinking of business ideas. In reinsurance, I sometimes think of catastrophe models as a candidate. They’re incredibly complex and don’t disclose their inner workings to customers. There’s a real movement afoot to pull the science apart from the platform and people are already working on this.

But is that disruption on its own? Nobody seems interested in radically cheaper products in this business. Maybe that’s just how enterprise software works.

What Credit Cards Did

In the 70s, it wasn’t the banks providing consumer lending, it was the stores, through layaways and store cards.

It took the credit card companies to intermediate between the banks and the consumer to disrupt and displace the stores in consumer credit.

That’s from The Critical Path, one of my favorite podcasts.

Nobody Eats GDP

Here’s Michael Pettis:

An orderly rebalancing will be good for the world on average and a disorderly one bad.

The same is true about the effect of a Chinese slowdown on social conditions. People do not generally care about GDP growth rates. They care about their own income growth relative to their expectations. Rebalancing in China means by definition that Chinese household income growth will outpace GDP growth, after many years of the opposite. A best-case orderly rebalancing should result in little change in the growth of household income, even as GDP growth drops sharply. This for example is what happened in Japan from 1990 to 2010, when GDP growth dropped close to zero but household income grew at nearly 2%.

A disorderly rebalancing, however, could result in negative growth in both GDP and household income, with the former dropping more than the latter. This, for example, is what happened in the US in the 1930-33 period – with GDP dropping by around 35% and household income dropping by around 19%. In the case of China, in other words, while elites will suffer in both scenarios, in the former case there is no reason for popular discontent.

How The Disruptors See Insurance

Mostly as a data aggregation exercise, saying something like: “If we get enough of the right data, we can price any uncertain process.” See, for example, this a16z.

This is only one half of insurance, however, and ignores underwriting, or the detection of moral hazard.

In gambling terms, insurance is a bet that people are making at a table where they are the dealer. In most circumstances, interests are aligned, which is why insurance works most of the time. But underwriting exists for a reason.

Now, I’d be dismissing the Disruptors too easily if I left the argument there. They make their living proving wrong the insiders who lean back in their chairs sounding smart.

Some consumer lines are very data driven, like auto and homeowners. But these are stifled by regulation and incredibly competitive anyway. My understanding is that at least one venture-backed startup in online distribution has foundered on regulatory concerns. A tough industry to disrupt.

The Most Interesting Thing Said In Insurance In Years

Bolt added that his biggest long-term fear for the specialty insurance market was that it too would face challenges from new lower-cost competitors.

“You look at Amazon.com – they’re doing same day delivery. And we don’t get the policy out and it’s just an ephemeral promise to pay and it’s evidenced by an electronic thing and you can’t get it to somebody the same day. And the cost is somewhat expensive to deliver.”

Bolt said that London takes about 30 points of the premium just to deliver the product, including brokerage fees, with around 70 points of benefit baked into the product.

“I’m just really worried that someone will come along – Samsung or someone – and figure it out.”

That’s from The Insurance Insider (gated). My view, of course, is that this fear is probably accurate for insurance in the bigger sense, but specialty lines are the underwriting heartland. Humans will last longest there.

Who Creative Destruction Destroys

“We’ve got fifteen people left but without a replacement carrier, they’ll all be gone soon.”

We all knew this company was toast. We had gotten the referral from a prospective client, the carrier that had cancelled them. Talk about the brush-off. For all of us.

This guy’s results had been terrible and his strategy was a generation old. The market was minimum limit auto business (called nonstandard), one of the most competitive and efficient markets the world has ever seen. Carriers that win there tend to be extremely technologically sophisticated (cutting-edge analytics, highly scaled online distribution), narrowly focused strategically (only auto) and regional (strong relationships in the community and with local producers to limit moral hazard). Everyone else gets creamed, and these guys were getting creamed.

A couple of my colleagues, bless them, were spending time trying to pull together some information to help. As the lead analyst in the office, the pitch on the numbers was inevitably going to be up to me. But we weren’t going to get what we needed, which was simple enough: evidence (data) to support the story that this company could make money (rates are going up!). I knew this guy wasn’t sophisticated enough to have that evidence. He wasn’t even sophisticated enough to ask that question! In the right way, anyway.

This means he not only didn’t make any money, but also didn’t understand how anyone could make money on the business he was doing. Wait until the market turns, he thought, like a sickly old goat wandering onto a Navy SEAL firing range.

So we had the call. He sent us all the data he had, clearly having worked hard at it. But it wasn’t useful and it told a terrible story. So we had one more call and said as much. That’s when we got the sob story, which, to be honest, felt awful.

This was a little while ago and I still think about that guy. On a personal level, I hope he’s found something for himself and the people who relied on him. In the bigger picture though, the world is a better place when outdated business models die.

From Couch Potatoes to Neural Nets

How about we do no less than lay out the levels of modeling sophistication among humans?

Level 1. I don’t understand or care to understand how the world works. People magazine (ESPN for chicks) or ESPN (gossip rag for bros) and I’m done.

Level 2. The world is pretty simple. I don’t bother to even pretend to hide political or cognitive biases. Whatever those are.

Level 3. The world is complicated. If you ignore nuance you lose. Our understanding needs to reflect that complexity so let’s build models that are complicated.

Level 4. The world is complicated but humans can’t handle that. Let’s use heuristics to make good enough decisions but get a lot more done.

Level 5. Hey level 4, you’re more like level 2 than you’d care to admit. Heuristics are fine but use them wisely, know your biases, pass ideological Turing tests. Succumb to your flaws, but consciously.

Balancing the need for complexity with a limited ability to understand it is the biggest challenge in my professional life. And at some frontiers of human knowledge this is the dominant problem (social systems like the business world are different). Some hope that the machines will save us. Computers’ superiority to us in chess is the beginning, they say.

Could be, I suppose, but we’re a long way away. Take neural networks, a very sexy topic of late. These are machine learning techniques modeled on the human brain, the killer innovation being that the analysis is done in nodes that all interact with each other.

The thing to take away from that is that we don’t know what the network is doing to the data in between the input and output. We feed it data, tell it what the output should look like, and repeat millions of times. Eventually the program settles in on the steps that minimize error against the output. Again, remember, we don’t know what the program does to the data inside the network. And one of the candidate theories (that the network transforms the data in meaningful ways on its journey through the net) has now been discredited:

This means that neural networks do not “unscramble” the data by mapping features to individual neurons in say the final layer. The information that the network extracts is just as much distributed across all of the neurons as it is localized in a single neuron.
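To make that feed-it-data-and-repeat loop concrete, here’s a bare-bones sketch in plain numpy. It’s my own toy example, not anything from the article: a tiny two-layer network learns XOR purely by nudging its weights to shrink the output error, and nothing in the code tells you what the hidden layer “means.”

```python
# A toy version of the loop described above: feed data in, compare the output
# to what it should be, nudge the weights to shrink the error, repeat.
# The layer sizes, learning rate and XOR task are my own arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# XOR: a problem a single linear layer can't solve, so the hidden layer has to do something.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    hidden = sigmoid(X @ W1 + b1)   # whatever happens in here is the black box
    out = sigmoid(hidden @ W2 + b2)
    err = out - y                   # how wrong we are against the desired output

    # Backpropagation: push every weight a little in the direction that reduces the error.
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(out.round(2))   # typically ends up near [0, 1, 1, 0]
```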

And here’s another result:

Since the very start of neural network research it has been assumed that networks had the power to generalize. That is, if you train a network to recognize a cat using a particular set of cat photos the network will, as long as it has been trained properly, have the ability to recognize a cat photo it hasn’t seen before.
Within this assumption has been the even more “obvious” assumption that if the network correctly classifies the photo of a cat as a cat then it will correctly classify a slightly perturbed version of the same photo as a cat. To create the slightly perturbed version you would simply modify each pixel value, and as long as the amount was small, then the cat photo would look exactly the same to a human – and presumably to a neural network.
However, this isn’t true.

So if they take a correctly recognized picture of a cat and change a few pixels, the computer no longer recognizes it. Ouch. See the link for pairs of images that are indistinguishable to the human eye but baffling to the net.
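For the curious, the perturbation trick is roughly this. It’s a gradient-sign sketch of my own; the model and “image” below are throwaway stand-ins, not the networks from the linked article, but with a real trained classifier the nudged input often gets a different predicted class even though it looks identical to us.

```python
# A rough sketch of the perturbation idea (the "fast gradient sign" flavor).
# The model and image here are throwaway stand-ins; any trained PyTorch
# classifier would do in their place.
import torch
import torch.nn as nn

def perturb(model, image, true_label, epsilon=0.01):
    """Nudge every pixel of `image` by epsilon in the direction that most
    increases the classification loss, then return the perturbed copy."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()

# Demo with an untrained stand-in model and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])
adversarial = perturb(model, image, label)
print((adversarial - image).abs().max())   # each pixel moved by at most epsilon
```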

One last quote because it is a bit hilariously overcomplicated:

One possible explanation is that this is another manifestation of the curse of dimensionality. As the dimension of a space increases it is well known that the volume of a hypersphere becomes increasingly concentrated at its surface. (The volume that is not near the surface drops exponentially with increasing dimension.) Given that the decision boundaries of a deep neural network are in a very high dimensional space it seems reasonable that most correctly classified examples are going to be close to the decision boundary – hence the ability to find a misclassified example close to the correct one, you simply have to work out the direction to the closest boundary.
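Overcomplicated or not, the geometric claim is easy to check for yourself. The share of a d-dimensional unit ball’s volume sitting within a thin shell of thickness eps of the surface is 1 - (1 - eps)^d, which races toward one as d grows. A quick check of my own, not from the article:

```python
# The fraction of a d-dimensional unit ball's volume lying within eps of its
# surface is 1 - (1 - eps)**d, because volume scales like radius**d.
eps = 0.01
for d in (2, 10, 100, 1000):
    shell = 1 - (1 - eps) ** d
    print(f"d = {d:4d}: {shell:.3f} of the volume is within {eps} of the surface")
```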

I think what all of this means is that neural nets are fragile as predictive devices. The net is capable of generating incredible complexity in its treatment of the data, with each node interacting with so many others. The term of art for this is overfitting: a high ratio of predictive variables to training events. This means that if you give the net a new test it hasn’t seen before (i.e. if you actually use the damn thing), it will fail hopelessly.
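The classic small-scale illustration of that ratio problem is fitting a polynomial with as many parameters as data points. This is my own toy example, nothing to do with any particular net, but the pattern is the same: training error collapses while error on fresh data typically gets worse.

```python
# Ten noisy points on a straight line, fit with 2 parameters and then with 10.
# The data and the polynomial model are mine, just to show the train/test gap.
import numpy as np

rng = np.random.default_rng(1)

def noisy_line(x):
    return 2 * x + rng.normal(scale=0.3, size=x.shape)

x_train = np.linspace(0, 1, 10)
y_train = noisy_line(x_train)
x_test = np.linspace(0, 1, 200)    # fresh data the fit has never seen
y_test = noisy_line(x_test)

for degree in (1, 9):              # 2 parameters vs. 10 parameters for 10 points
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```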

I am also reminded of Google’s driverless car. When I was learning machine learning, the neural networks section featured this project prominently as an example of NNs’ practical use. I’ve since learned that they abandoned this strategy ages ago and now literally program the routes and rules into the software. If it runs into a situation it doesn’t recognize, or a route that hasn’t been programmed, it stops and waits for a human to take over. Driverless, sure, I guess. But not smart.

I think it was Robin Hanson who wrote something a while ago that stuck in my head but that I can’t find on his blog to cite. From memory, he said that the more general the situation a system must adapt to, the more similar independently developed systems that adapt to it will look. Think about how eyes or wings evolved independently many times. Perhaps our brain, as an incredibly general device, really is as good as a system could possibly be at managing complexity? If we want better performance from another kind of mind, we must then necessarily sacrifice something enormous from its capability.

Another way of saying this is that of course neural networks are crap. If they were any good we would be able to understand them!
