Monte Carlo permutation test of Timothy Masters

1 reply

11 months ago #281476

Hope all is well!

Two questions about the Monte Carlo permutation test:

1. Just wondering whether SQ is looking into adding the Monte Carlo permutation test of Timothy Masters as a robustness test?

2. The Monte Carlo permutation test uses relative price changes instead of absolute price points. This means that the trading signals, rules and technical indicators should be designed to work on relative price changes rather than absolute price points. Just wondering whether the signals that are implemented in SQ are compatible with such an analysis?

The Monte Carlo permutation test of Timothy Masters is a statistical technique used to test the significance and robustness of a trading strategy’s performance. The test involves simulating multiple iterations of the strategy using randomly permuted versions of historical data, allowing the trader to evaluate the strategy’s performance under different scenarios and determine the statistical significance of the results.

To conduct this test, Masters first transforms the absolute prices into relative changes by taking the logarithm of each price and then taking the differences between consecutive logs. This transformation results in a series of log-returns that can be interpreted as relative price changes. The advantage of using log-returns instead of simple returns is that they are additive over time and are often assumed to be approximately normally distributed, making them more convenient for statistical analysis.
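As a quick illustration of this transform, here is a minimal NumPy sketch (the price values are made up for demonstration). It also shows that the transform is invertible, which is why the permuted log-returns can later be turned back into a price series:

```python
import numpy as np

# Hypothetical daily closing prices (illustrative values only).
prices = np.array([100.0, 101.5, 100.8, 102.3, 103.0])

# Log-returns: differences of consecutive log prices.
log_returns = np.diff(np.log(prices))

# The transform is invertible: cumulatively summing the log-returns
# and exponentiating recovers the original series, which is also why
# log-returns are additive over time.
reconstructed = prices[0] * np.exp(np.cumsum(log_returns))
```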

Next, the historical data is randomly permuted to create multiple simulated datasets, each with a different arrangement of historical prices. For each simulated dataset, the trading strategy is run to generate a set of trades and resulting portfolio values. The process is repeated for many iterations, each time using a different random permutation of the historical data.

Finally, the actual performance of the trading strategy is compared against the distribution of results from the permuted runs. The purpose of this comparison is to determine whether the observed performance is statistically significant or could have been obtained by chance. If only a small fraction of the permuted runs match or exceed the real performance, the observed result is unlikely to be luck; if many permuted runs do as well or better, the strategy's performance may not be robust and may simply reflect the specific arrangement of historical prices used in the analysis.
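The comparison step can be condensed into an empirical p-value. The sketch below assumes a hypothetical `run_strategy` callable that backtests the strategy on a price series and returns its total profit; everything else is plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def permutation_p_value(actual_profit, run_strategy, log_returns,
                        start_price, n_iter=1000):
    """Estimate how often a permuted market does as well as the real one.

    `run_strategy` is a hypothetical callable that backtests the
    strategy on a price series and returns its total profit.
    """
    count = 0
    for _ in range(n_iter):
        perm = rng.permutation(log_returns)
        fake = np.concatenate(
            ([start_price], start_price * np.exp(np.cumsum(perm)))
        )
        if run_strategy(fake) >= actual_profit:
            count += 1
    # Add one to numerator and denominator so the original, unpermuted
    # arrangement counts as one trial and the p-value is never exactly 0.
    return (count + 1) / (n_iter + 1)
```

A small p-value (say, below 0.05) would suggest the strategy's real-data performance is unlikely to have arisen by chance alone.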

1 month ago #285220

Hi TP,

This "step" (or something similar) is almost a must when data mining a large number of strategies, since luck will be part of the strategy population. T Masters' approach seems very simple yet robust. I also vote for this to be implemented.

Note that in your step 2 there is no need for any changes to handle the relative price changes. The relative price changes are only needed when building the population of modified historical data; once you have those, you just exponentiate them and they become normal "fake" historical data. The only difference is that randomizing the relative price changes has erased the "signal" in the data, i.e. the pattern that a robust strategy should be picking up. Running the strategy on 500 or 1000 of these series should therefore result in worse performance on well over 90% of them; if not, your strategy does not pick up a "signal" and does not have a real edge. (This assumes I understand T Masters' MC approach to filtering out lucky strategies.)

What is still needed is a way to handle a large number of historical data series in SQX, which I have not seen yet, though I have limited experience with it. Does anyone know of a similar robustness test that is statistically sound and does not use the OOS data?
