The art of the deal and stock picking

Nicholas Cregan

Fairlight Asset Management

Well, that wasn’t supposed to happen. Not after all the hand-wringing, ripped-up models and revamped polling techniques that followed the 2016 election.

As we write, the 2020 US election isn’t fully resolved: a Biden victory has been announced, but Trump is contesting the result and counting is still underway in some states.

It wasn’t meant to be like this; this was supposed to be a big win. Polling had consistently shown Biden with a 7-8% lead, with a margin of error (which can be thought of as “if we’re wrong, how much will it be?”) of about 3%.

Nate Silver, who built a reputation as a savvy forecaster and runs the website FiveThirtyEight, wrote this the day before the election:

Joe Biden is favored to beat President Trump (though Trump still has a 1-in-10 chance); Democrats have a 3-in-4 shot at taking back the Senate; and the House will most likely remain under Democratic control (Democrats might even expand their majority by a few seats).

Nope. The Democrats lost seats in the House, are unlikely to control the Senate, won back no statehouses at the state level and almost lost the election.

So what went wrong? Salvatore Babones in the SMH and Zeynep Tufekci in The New York Times wrote some initial thoughts, which we draw on here.

Polling relies on two strands of mathematics: sampling and modelling.

Since you can’t ask everyone, you have to work out how representative the people you have asked are of the whole group. The more people you ask, the more likely the sample reflects the group, but the more costly and time-consuming the poll becomes. This is the sampling, or statistics, part. One emerging issue is that in the mobile-phone era most people don’t answer unknown numbers; it would seem only around 3% of people answer pollsters, which changes who ends up in the sample and, with it, the polling error.
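
To put rough numbers on that trade-off, here is a small Python sketch (our own, purely illustrative) using the textbook 95% margin-of-error formula for a polled proportion. Note that it captures sampling error only; the 3% answer rate is a non-response problem this formula cannot see.

```python
import math

def margin_of_error(sample_size: int, proportion: float = 0.5) -> float:
    """Approximate 95% margin of error for a polled proportion:
    1.96 * sqrt(p * (1 - p) / n). Pure sampling error only; it says
    nothing about who refuses to pick up the phone."""
    return 1.96 * math.sqrt(proportion * (1 - proportion) / sample_size)

for n in (100, 500, 1000, 2500):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
# n =  100: +/- 9.8%
# n =  500: +/- 4.4%
# n = 1000: +/- 3.1%
# n = 2500: +/- 2.0%
```

Roughly 1,000 respondents gets you the “about 3%” margin quoted above, and quadrupling the sample only halves it, which is why pollsters stop somewhere around there.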

Modelling draws on the polling data, but then adds in “fundamentals”, or things we expect to influence the result. This could be “strong economies favour incumbents”, “Democrats in past elections mail in more votes” or “white males favour X on voting day”. The polling company then runs simulations in which it changes each of these assumptions a little and sees what happens, generating a probability of winning.
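
As a concrete (and heavily simplified) illustration of that simulation step, here is a toy Monte Carlo in Python. Every number in it is invented; it is not any pollster’s actual model, just the shape of the exercise: perturb each assumption a little, re-run the race many times, and report the share of wins as a probability.

```python
import random

def simulate_election(n_sims: int = 10_000) -> float:
    """Toy Monte Carlo: jiggle a few assumptions and count how often
    the challenger wins. All inputs are invented for illustration."""
    wins = 0
    for _ in range(n_sims):
        polled_lead = 7.5                        # headline polling lead, in points
        polling_error = random.gauss(0, 3)       # systematic polling miss
        turnout_shift = random.gauss(0, 1.5)     # differential turnout
        economy_effect = random.uniform(-2, 0)   # strong economy favours the incumbent
        final_margin = polled_lead + polling_error + turnout_shift + economy_effect
        if final_margin > 0:
            wins += 1
    return wins / n_sims

print(f"Challenger win probability: {simulate_election():.0%}")  # ~97% with these inputs
```

The headline “probability of winning” is only as good as the perturbations chosen: widen the polling-error assumption from 3 points to 6 and the same lead produces a noticeably more modest number.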

Up to now, this sounds a lot like weather forecasting: take historical data, add in the current situation and model it out.

But at this point it starts to change. For the weather we have thousands of days of data. For US federal elections held every four years, 40 years of history gives us just 10 data points, so assumptions start to matter much more. And because the USA has voluntary voting, a polled preference doesn’t tell you whether that person will actually vote.

And now it really diverges. You see, my knowing that there is an 80% chance of rain isn’t going to change the chance of rain. I’ll still get wet if I don’t have an umbrella.

Except we’re dealing with humans here, not the weather. And it turns out humans change their behaviour based on what they read about election forecasts. This paper shows how some voters don’t bother voting if they think the result is a foregone conclusion.

Or consider that Facebook may have decided to be more lax on fake news in the lead-up to 2016, to placate Republicans, on the assumption that the Democrats would win easily.

And at the margin, in a voluntary voting democracy, this matters.

George Soros, the renowned macro investor, would recognise this instantly, as he coined an investing phrase for it: “reflexivity”.

The concept is that people’s views of the world change the world itself, as these people act on their views, thus creating a “reinforcing loop”.
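
A loose sketch of that loop (our own toy numbers, not Soros’s framework) might look like this in Python: optimism lifts the price above the fundamentals, the higher price lets the company fund growth cheaply, the better fundamentals justify more optimism, and around it goes.

```python
def reflexive_loop(steps: int = 5) -> None:
    """Toy reinforcing loop: views move the price, the price feeds back
    into the fundamentals, and better fundamentals harden the views."""
    fundamentals = 100.0   # underlying earnings power, arbitrary units
    sentiment = 1.10       # the market pays a 10% premium to fundamentals
    for step in range(1, steps + 1):
        price = fundamentals * sentiment
        # a richly priced company can issue shares cheaply and fund growth,
        # so a slice of the premium flows back into the fundamentals
        fundamentals *= 1 + 0.5 * (sentiment - 1)
        # the improving fundamentals then reinforce the optimism
        sentiment *= 1.02
        print(f"step {step}: price = {price:6.1f}, fundamentals = {fundamentals:6.1f}")

reflexive_loop()
```

Run it with sentiment below 1 and the same mechanics compound downwards, which is the uncomfortable version discussed below.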

So how does this relate to Fairlight and investing? A lot of our day job is building models and forecasting for our stocks. If we come up with outcomes better than the current share price implies, the stock is a buy, and vice versa. To obtain our forecasts we “poll” (surveys of competitors, pricing trends, speaking with the CEO, etc.). We then “model fundamentals” (known seasonality in divisions, etc.).

What the election result reminds us is that: 1) market views matter: if the market loves a stock, it can drive the price up, enabling the company to issue shares cheaply or acquire to grow, changing the future path of that company; 2) on the other side, if we like a stock the market hates, the same reinforcing cycle can work against us, with the views changing the fundamentals; and 3) for a lot of companies we don’t have 50 years of data, so some humility in forecasting with limited data points is needed.

What it doesn’t mean is that we shouldn’t model or try to quantify some outcomes. It just means that the difference between a 20% and a 40% forecast gain for a stock probably matters less than you think.
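
One rough way to see why (a back-of-the-envelope sketch with an invented 15-percentage-point noise figure, not our actual process): once each forecast is allowed a realistic band of assumption error, the outcome distributions of a “20% upside” stock and a “40% upside” stock overlap a great deal.

```python
import random

def upside_outcomes(point_estimate: float, noise: float = 0.15, n: int = 10_000) -> list:
    """Spread a point-estimate upside by the assumption noise that creeps
    into revenue growth and margin forecasts (figures invented)."""
    return [point_estimate + random.gauss(0, noise) for _ in range(n)]

stock_a = upside_outcomes(0.20)   # the stock our model says has 20% upside
stock_b = upside_outcomes(0.40)   # the stock our model says has 40% upside

beats = sum(a > b for a, b in zip(stock_a, stock_b)) / len(stock_a)
print(f"Simulations where the '20% upside' stock does better: {beats:.0%}")  # roughly one in six
```

In roughly one simulation in six the “inferior” idea ends up ahead, which is the statistical case for holding point estimates a little loosely.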

Or, as the statistician George Box said, “All models are wrong, but some are useful”.

Nicholas Cregan
Portfolio Manager & Partner
Fairlight Asset Management

Nicholas is a partner and portfolio manager of the Fairlight Asset Management Global Small and Mid Cap Fund. He has 20 years of investment experience in domestic, US and international markets through roles at Evans and Partners and Schroder IM.
