
HELP – Third OOS Test Always Fails After 18000 Tries

  • #260536 |
    Customer
    3 Posts

    Hey guys! Hope everyone is well. I’m writing today because I’m having a problem that I’m not sure how to remedy and I’ve wracked my brain trying to think of a solution.

    I recently got back into StrategyQuant for the first time in years; I'd been in a controlled function at a financial firm here in New York that prevented me from trading currencies. I noticed there's a new video course on the site, and I figured it would be a great refresher for getting back up to speed, especially with the new features – I primarily used SQ3 back in the day.

    So, I set up my builder in a configuration very similar to the course and let it run until I generated 3000 strategies so I could go through the robustness testing step by step with the videos. My strategies did not fare NEARLY as well as the packet of strategies in the course did. I repeated this process 5 times (with the exact same builder settings) with 3000 strategies each for a total of 18000 strategies put through the robustness testing. Here are my average results.

    The guideline values below are from the Quastic course; next to each I've listed my own average result:

    Second OOS
    — (should knock out about 75% of strategies) – killed 83% of my strategies on average. A close result, nothing to see here.

    Slippage Test
    — (should knock out about 15% of remaining strategies) – killed 13% of my strategies on average. Nothing to see here.

    Another Market Test
    — (should knock out about 50% of remaining strategies) – killed 90% of my remaining strategies on average. WTF(??) It seems as though my strategies are MUCH more over-fit than the ones that gentleman in the course produced. I have my suspicions about why this is, but I’ll get there.

    Another Timeframe 1 & 2 Test
    — (should knock out about 30% of remaining strategies) – killed 31% of my remaining strategies on average. Bang on.

    Monte Carlo Trades Order Exact Test
    — (should knock out about 25% of remaining strategies) – killed 19% of my remaining strategies on average. A better result than the course!

    Monte Carlo Parameter & Data Test
    — (should knock out about 15%-40% of strategies) – killed 20% of my remaining strategies on average. Fine with this.

    Third OOS Test
    — (Should knock out about 40% of the remaining handful of strategies) – KILLS 100% OF MY STRATEGIES EVERY TIME.

    I have not had a single strategy pass the final test – which, at this point, is pretty frustrating. I usually have between 3 and 12 strategies remaining going into this final test, and it knocks them all out. I wanted to keep everything the same across runs so I could have a higher degree of confidence in my data before presenting it to you guys for analysis.
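
    Just to show the arithmetic behind those percentages, here's a quick sketch in plain Python (the `funnel` helper is my own, and the rates are the averages quoted above – actual per-run counts bounce around):

```python
def funnel(start, kill_rates):
    """Number of strategies left after each test stage.

    Each kill rate is the fraction of the *remaining* pool
    knocked out at that stage.
    """
    survivors = []
    remaining = start
    for rate in kill_rates:
        remaining = round(remaining * (1 - rate))
        survivors.append(remaining)
    return survivors

# My observed average kill rates for one 3000-strategy batch,
# in test order (second OOS ... Monte Carlo parameter & data):
print(funnel(3000, [0.83, 0.13, 0.90, 0.31, 0.19, 0.20]))
```

    Swap in the course's guideline rates instead and you can see how many survivors the guideline numbers imply at each stage before the third OOS even runs.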

    Since then, I’ve done rigorous smaller-sample testing and manual analysis to figure out what’s going wrong, and I think I’ve narrowed down the problem. Aside from the fact that I’m 90% sure the builder OOS peeks at data (I don’t know how), I think I’m having an issue with my building blocks.

    My what-to-build settings are very standard, and deviations there don’t produce different results. My genetic settings produce 0.11% accepted strategies on average, which I feel is fine. The data and trading options are all extremely standard, as are the money management, higher-precision test, and ranking settings. The building blocks section, however, can produce wild variation in the quality of accepted strategies, and when it comes down to the final OOS test, I’m noticing some spurious strategy ideas that have somehow survived the wringer of the other robustness tests. I’m getting things like “Go long when the LWMA[1] > High[1]”, which to my mind should never occur.
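
    To show why that last condition looks so suspicious, here's a toy check (these are my own illustrative helpers, not SQ's actual indicator code, and I'm comparing the LWMA and High on the same bar for simplicity). A linearly weighted MA of recent closes can only exceed a bar's high when price has just dropped hard below its own recent average, so the rule basically only fires during sharp selloffs:

```python
def lwma(values):
    """Linearly weighted moving average: the most recent value
    gets the largest weight (1 .. n, oldest -> newest)."""
    n = len(values)
    weights = range(1, n + 1)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

def condition_fires(closes, highs, period=5):
    """For each bar after warm-up: is LWMA(close) > High on that bar?"""
    out = []
    for i in range(period - 1, len(closes)):
        window = closes[i - period + 1 : i + 1]
        out.append(lwma(window) > highs[i])
    return out

# Steady uptrend: the LWMA trails the last close, which is itself
# below the high, so the condition never fires.
up_closes = list(range(1, 21))
up_highs = [c + 0.5 for c in up_closes]
print(any(condition_fires(up_closes, up_highs)))    # → False

# Steady selloff: the older (higher) closes pull the average above
# the current bar's high, so the condition fires.
down_closes = list(range(20, 0, -1))
down_highs = [c + 0.5 for c in down_closes]
print(any(condition_fires(down_closes, down_highs)))  # → True
```

    In other words, a "go long" block like that is acting as a disguised falling-knife filter, which is exactly the kind of thing I'd expect a third OOS segment to punish.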

    If someone has gone through this and come out the other side actually building robust strategies with the new system, I would very much appreciate your feedback. Additionally, if anyone has a builder config file that consistently brings strategies across the finish line into manual / matrix testing, I would very much appreciate the file, so I can compare the differences.

    I have attached the builder file I am using for your convenience.
