Thursday, April 17, 2014 6:20:53 AM
When Use Of Pseudo-Maths Adds Up To Fraud
http://www.ft.com/intl/cms/s/0/6a5f21be-c53d-11e3-89a9-00144feabdc0.html

An academic journal called the Notices of the American Mathematical Society may seem an unlikely periodical to have exposed fraud on a massive scale. The investigation, published in the current edition, is certainly not going to sit among the nominees for next year’s Pulitzer prizes. But a quartet of mathematicians have just published a piercing article in the public interest and in the nick of time.

In their paper, entitled Pseudo-Mathematics and Financial Charlatanism, they make the case that the vast majority of claims being made for quantitative investment strategies are false.*

By calling it fraud, the academics command attention, and investors would be wise to beware. With interest rates about to turn, and a stock market bull run ageing fast, there have never been such temptations to eschew traditional bond and equity investing and to follow the siren sales patter of those who claim to see patterns in the historical data.

The (unnamed) targets of the mathematicians’ ire range from individual technical analysts who identify buy and sell signals in a stock chart, all the way up to managed futures funds holding billions of dollars of clients’ assets.

There will be many offenders, too, among investment managers pushing “smart beta” strategies, which aim to construct a portfolio based on signals from history.

There is even a worrying do-it-yourself trend: many electronic trading platforms now have tools encouraging retail investors to back test their own half-baked trading ideas, to see how they would have performed in the past.

Twisting strategy to fit data

The authors’ argument is that, by failing to apply mathematical rigour to their methods, many purveyors of quantitative investment strategies are, deliberately or negligently, misleading clients.

It is reasonable to want to test a promising investment strategy to see how it would have performed in the past. The trap comes when one keeps tweaking the strategy until it neatly fits the historical data. Intuitively, one might think one has finally hit upon the most successful investment strategy; in fact, one is likely to have hit only upon a statistical fluke, a false positive.

This is the problem of “over-fitting”, and even checks against it – such as testing in a second, discrete historical data set – will continue to throw up many false positives, the mathematicians argue.
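The trap is easy to reproduce. The sketch below (an illustration of the mechanism, not the paper’s own method) backtests a large number of purely random strategies, selects the one with the best in-sample Sharpe ratio, and then evaluates that same strategy on a hold-out period, where its apparent edge evaporates:

```python
import numpy as np

rng = np.random.default_rng(42)

N_STRATEGIES = 1000      # configurations "tweaked and tested"
T_IS, T_OOS = 500, 500   # in-sample and hold-out trading days

# Daily returns of pure-noise strategies: no real edge anywhere.
returns = rng.normal(0.0, 0.01, size=(N_STRATEGIES, T_IS + T_OOS))
in_sample, out_sample = returns[:, :T_IS], returns[:, T_IS:]

def sharpe(r):
    """Annualized Sharpe ratio of a daily return series."""
    return r.mean() / r.std(ddof=1) * np.sqrt(252)

# Pick the "best" strategy on the strength of its backtest alone.
is_sharpes = np.array([sharpe(r) for r in in_sample])
best = is_sharpes.argmax()

print(f"best in-sample Sharpe: {is_sharpes[best]:.2f}")
print(f"same strategy out-of-sample: {sharpe(out_sample[best]):.2f}")
```

With 1,000 noise strategies the winner’s in-sample Sharpe ratio typically exceeds 2 — a figure any marketing document would trumpet — while its out-of-sample Sharpe hovers around zero. The "discovery" is a false positive manufactured entirely by selection.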

Do not despair. The paper does not conclude that history is bunk, just that backtesting ought to require more statistical thought than investment managers need to display to make a sale to investors.

The perennial success of Renaissance Technologies, founded by code-breaking maths genius Jim Simons, suggests that some can separate signal from noise in financial markets.

At least the best quantitative hedge funds are attuned to the problem of overfitting. London’s Winton Capital published a paper last year warning that, even if individual researchers are scrupulous about calculating their probabilities, institutions risk “meta-overfitting”, because the tendency is to submit only the best-fitting strategies for approval to the higher-up management committee.

It seems that finance may need the same overhaul as the pharmaceuticals industry did a decade ago.

Statistical flukes

Amid a furore over the safety of its antidepressant Paxil in 2004, it was discovered that GlaxoSmithKline had conducted numerous trials that failed to prove the drug was an effective treatment for children. However, a minority of trials did suggest efficacy, to a statistically significant confidence level, and these were the studies that got published. It wasn’t until scientists added together all the unpublished data that it became clear the drug increased the risk of teen suicides, for no offsetting benefit in treating depression, and it was banned for use by minors.

GSK responded by promising to reveal all its trials and to publish all its data, regardless of their outcome, and other large drug companies followed, more or less reluctantly. As a result, we continue to learn that large claims made for blockbuster medicines tend not to stack up over time, Tamiflu being the latest example.

When it comes to quantitative investment strategies claiming to have performed well historically, it is not good enough for managers to stamp “past performance is no guide to future performance” on to a marketing document. A crucial detail, almost never revealed, is how many discarded tweaks and tests led to the miraculous discovery of the strategy.
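The scale of that omission can be put in numbers. If each tested variant has, say, a 5 per cent chance of producing a spuriously “significant” backtest, then the chance that at least one of N tried-and-discarded variants clears the bar is 1 − (1 − 0.05)^N. A minimal sketch of the arithmetic (assuming, for simplicity, that the tests are independent):

```python
# Probability that at least one of n tested strategy variants
# yields a spuriously "significant" backtest at level alpha,
# assuming independent tests.
def prob_false_positive(n, alpha=0.05):
    return 1 - (1 - alpha) ** n

for n in (1, 10, 20, 100):
    print(f"{n:>3} variants tried -> "
          f"{prob_false_positive(n):.1%} chance of a fluke")
```

Twenty quiet tweaks already give better-than-even odds of a fluke; a hundred make one all but certain. A marketing document that reports only the winning variant conceals exactly this multiplicity.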

The authors of the Notices of the AMS paper are upbeat about the chances of banishing pseudo-mathematics from finance.

One of their number, Marcos Lopez de Prado of Lawrence Berkeley National Laboratory, distributes open source software, at quantresearch.info, which can improve the modelling of mathematical probabilities and limit the risks of overfitting. Another, David Bailey of the University of California, Davis, suggests that a regulatory body such as Finra could step in to promote best practice in the marketing of mathematical claims, just as the Food and Drug Administration monitors drug advertising. Together they have created a blog at financial-math.org to debate their ideas.

Raising the issue is necessary for raising the bar. Too many investment managers and advisers, it is claimed, are purveyors of false positives, getting rich on statistical flukes. If their methodologies do not improve in line with the improvements in academic thinking about backtesting and overfitting, then they really will deserve to be called out as frauds.


*Bailey DH, Borwein, JM, Lopez de Prado M, Zhu Q. Pseudo-Mathematics and Financial Charlatanism: The Effects of Backtest Overfitting on Out-of-Sample Performance. Notices of the American Mathematical Society. 2014;61(5):458-71. http://www.ams.org/notices/201405/rnoti-p458.pdf

We prove that high simulated performance is easily achievable after backtesting a relatively small number of alternative strategy configurations, a practice we denote “backtest overfitting”. The higher the number of configurations tried, the greater is the probability that the backtest is overfit. Because most financial analysts and academics rarely report the number of configurations tried for a given backtest, investors cannot evaluate the degree of overfitting in most investment proposals.

The implication is that investors can be easily misled into allocating capital to strategies that appear to be mathematically sound and empirically supported by an outstanding backtest. Under memory effects, backtest overfitting leads to negative expected returns out-of-sample, rather than zero performance. This may be one of several reasons why so many quantitative funds appear to fail.
