
I am currently building a workflow for successful forex strategies. Join me!

47 replies

AlgotradingDE


1 year ago #277576

I’ve been using StrategyQuant for more than a decade, but believe it or not, I haven’t even used its full capabilities until now.

My process is to create thousands of strategies in the builder and then run them through a very selective robustness check. The (few) surviving strategies are then activated on an MT4 demo account, where they must execute at least 25 trades before I consider them for use on a live account.
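To make that promotion gate concrete, here is a minimal Python sketch of the rule. The 25-trade minimum comes from the post; the extra "not losing overall" condition and the data shapes are my own assumptions:

```python
# Hypothetical sketch of the demo-to-live promotion gate described above.
# The 25-trade threshold is from the post; the net-profit condition is an
# added assumption, not part of the original process.

MIN_DEMO_TRADES = 25

def ready_for_live(demo_trade_pnls, min_trades=MIN_DEMO_TRADES):
    """A strategy graduates from the MT4 demo account to live review only
    after it has closed at least `min_trades` trades and is not losing
    money overall (the latter is my assumption)."""
    return len(demo_trade_pnls) >= min_trades and sum(demo_trade_pnls) >= 0
```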

So far, I am very happy with this process and would like to automate more of the production of successful systems. That’s why I’ve been diving into the custom project features that can be used to create custom workflows.

While I am building some sample workflows for successful forex strategies, I would be happy if anyone on this forum is interested in the same topic and willing to share their experiences.

In particular, I would be interested to know whether anyone has ever started such a project on their own.

Gerhard Frischholz
https://Algotrading.de


AlgotradingDE


1 year ago #278123

Hi Kevin,

that’s another important piece to consider, thanks for your comment. I think we all agree that live performance should be as close as possible to the simulated performance stats from StrategyQuant, although we all know there will never be a perfect match. But they have to match “somehow” (I’m still thinking about how to quantify this “somehow”).
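One naive way to quantify that “somehow” (purely my own sketch, not a StrategyQuant feature) is a relative-deviation tolerance between a simulated statistic and its live counterpart:

```python
def within_tolerance(simulated, live, max_rel_dev=0.30):
    """True if the live metric (e.g. profit factor) deviates from the
    simulated one by at most max_rel_dev. The 30% default is an arbitrary
    placeholder, not a recommendation from the thread."""
    if simulated == 0:
        return live == 0
    return abs(live - simulated) / abs(simulated) <= max_rel_dev
```

For example, a live profit factor of 1.2 against a simulated 1.5 would pass a 30% tolerance, while 0.9 would not.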

Hey, why not build a workflow together and optimize it jointly? We may have different instruments and different timeframes we want to focus on, but the methods for building and robustness-testing systems are something we can work on jointly.

What does everyone think? I’m happy to share my workflow files as a starting point.

Best regards

AlgotradingDE

 

Gerhard Frischholz
https://Algotrading.de


FirestarZA


1 year ago #278126

Firstly, it might seem that I’m “advanced” in my workflow usage, but this is only perception. I’ve thought about this quite a lot, and I’ve built and tested a huge number of workflows, but I still don’t have anything I’m even close to satisfied with.

Then, one thing that I am a very firm believer in is a holdout period (say, the last couple of years). This is a period that’s never been touched by the development process (I’ll usually run 2019/01/01 to the current date for this). This basically simulates the last 3 years as a demo account run. I mark this period as OOS, while IS is the rest of the historic data available. This is my last step before I check portfolio correlation. I then filter on the OOS only (since the rest was already thoroughly tested in one way or another).
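The holdout split described above could be sketched like this in Python; the 2019-01-01 date and the idea follow the post, while the record shape is illustrative:

```python
from datetime import date

HOLDOUT_START = date(2019, 1, 1)  # holdout start date from the post

def split_is_oos(trades, holdout_start=HOLDOUT_START):
    """Split (trade_date, pnl) records: everything before the holdout start
    is in-sample (IS); the untouched holdout period is out-of-sample (OOS)."""
    is_part = [t for t in trades if t[0] < holdout_start]
    oos_part = [t for t in trades if t[0] >= holdout_start]
    return is_part, oos_part
```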

Now, to give you my version of the answers to the questions you asked:
– which robustness tests are other people using?
I like the more advanced robustness tests, but I try to get a little bit of everything. So I’ll do a bunch of MC tests, what-if tests, different-timeframe and different-market tests, and then move on to SPP and WFM specifically, which I’m a big fan of (though I won’t necessarily delete strategies that fail these two tests).

– which robustness tests help to produce strategies that work in a live environment?
I believe the WFM and the holdout are great tests for simulating a live environment. Again, I won’t necessarily be too strict with WFM specifically, as it’s a really tough test, but I want some decent results in there, even if it doesn’t quite give me a fully passed 9×9 square.

– What about optimization of strategies that have successfully passed: is it OK / required to optimize them afterwards, or should we leave them untouched?
I don’t optimise currently. The reason is that I am terribly scared of overfitting, so I avoid, as much as possible, anything that can add to it. I don’t think I’ll avoid it forever; it may become something I do later on, when my strategies start to fail and I need to breathe some life into them before discarding them. At that point, I’ll test whether it is a good or a bad thing.

Finally, I am 100% in if you want to build a workflow together. I will also gladly share the workflow that I built (most of my other workflows… the ones that work, more or less, are other people’s workflows that I modified).

I must just ask that we do it on another platform. The forum is not really conducive to good communication, IMO. But I am happy to join in with whatever you guys decide.


Kevin


1 year ago #278130

Hi Gerhard,

In response to your suggestion, I would very much like to work jointly on building and optimising workflows. Please let me know how you would like to move forward. I find algotrading.de a very interesting project.

Following on from the going-live criteria, I find the ranking criterion a perfectly valid one, and I would at some stage like to analyse the strategy selection criteria as something that can be optimised as a strategy in itself.

I am currently in a process of testing and refining my workflows, but basically what I’m using is the following:

– Builder: basic filtering on initial population for genetic evolution, stricter filtering in the ranking by profit factor, stability and annual % return / DD% on initial capital (cf. https://strategyquant.com/codebase/annual-return-max-drawdown-of-initial-capital/).

– Robustness: I run 1-min precision, MC trade manipulation (order, skipping), runs on different timeframes (if the strategy’s timeframe is below H1, I run on tick data), MC strategy params, MC data history, and MC slippage/spread.

– Optimisation: sequential optimisation and WF Matrix. I apply the optimised parameters and redo all the robustness testing with the new parameters.
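For reference, the custom metric linked in the Builder bullet above (annual % return divided by drawdown %, both on initial capital) can be re-derived roughly as follows; the actual codebase snippet may define the details differently:

```python
def annual_return_over_dd(net_profit, max_drawdown, initial_capital, years):
    """Annual % return on initial capital divided by max drawdown % on
    initial capital. Higher is better. A rough re-derivation of the linked
    codebase metric; the official snippet may differ in detail."""
    annual_return_pct = (net_profit / initial_capital) * 100.0 / years
    drawdown_pct = (max_drawdown / initial_capital) * 100.0
    return annual_return_pct / drawdown_pct
```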

I usually have to run the whole workflow several times up to the optimisation in order to have at least a few strategies to start the optimisation phase with, because that is usually the more “labour-intensive” part, in the sense that it is where I review each strategy one by one to see which parameters are the most stable, etc.

After testing on demo etc., I run correlation tests in QuantAnalyzer to remove correlated strategies from the portfolio and leave them on standby. I adjust position sizes depending on each strategy’s MC 95% drawdown and the account size.
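The position-sizing step could look something like this; the formula is only my guess at what "adjust position sizes depending on DD MC 95%" means in practice, and the 10% risk budget is a made-up example:

```python
def position_scale(account_size, risk_budget_pct, mc_dd_95):
    """Scale a strategy's position size so that its 95th-percentile Monte
    Carlo drawdown consumes at most risk_budget_pct of the account.
    This formula is an assumption, not the poster's actual rule."""
    allowed_dd = account_size * risk_budget_pct / 100.0
    return allowed_dd / mc_dd_95
```

For example, with a 10,000 account, a 10% risk budget, and an MC 95% drawdown of 2,000, the strategy would trade at half its nominal size.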

That’s my current way of working anyway and I’m still in a phase of consolidating, which is why I’d be interested in working jointly.

Regards


Conmariin


1 year ago #278166

Hi all! 🙂

Here is one of my workflows for EURUSD H1

Best regards

Conmariin

Attachments: (workflow file)

Automated trading with Expert Advisors
https://www.rabenesche.de


AlgotradingDE


1 year ago #278237

Hi Conmariin,

thanks a lot for sharing your workflow. This is the right spirit, and it helps us all get better.

I started analyzing your workflow today, and it will be a wonderful opportunity to discuss the pros and cons of doing different tasks.

I will also share a workflow which I have recently finalized. It gives me very good results on the EURUSD 1H timeframe. You can download and analyze it via the link below.

Let’s give ourselves some time to go through the components of the workflows we shared and then jointly work on possible improvements, or simply on learnings from each of the examples.

I very much appreciate your willingness to share your work and invite anyone who is interested to join our discussion. Let’s build a workflow (and not just one) that makes us a lot of money.

Best regards

Gerhard Frischholz

Attachments: (workflow file)

Gerhard Frischholz
https://Algotrading.de


AlgotradingDE


1 year ago #278271

I’m running your workflow on a cloud computer and am just realizing that you require 3000 strategies to be built before the retests kick in. A rough calculation tells me that the Builder would have to run for approx. 2 months to produce that many initial strategies.

My question would be: is this number based on your experience with the subsequent robustness tests, or will a smaller number of initial systems also work?

Gerhard Frischholz
https://Algotrading.de


Conmariin


1 year ago #278273

Hi Gerhard,

I take 3000 built strategies because the robustness tests sort out a lot after that and usually don’t leave much. I also call it the “boot camp” 😉

But since I have a VPS with 8 cores and 32 GB RAM on Linux, it takes on average 3-4 weeks until I have the full 3000 strategies. It depends on the pair I’m building for.

I personally find it very important to set hard criteria when sorting out, as I will not optimize strategies. They are simply replaced by new ones if they run badly.

—–

In your workflow, I noticed that you take extra steps to sort out strategies. Do you do these extra steps because you then sort out the strategies manually? I always have them sorted out automatically at the end of each step. You can set this under ‘Ranking’ -> ‘Delete failed strategies from databank’. This would streamline your workflow and automate it more.

Automated trading with Expert Advisors
https://www.rabenesche.de


AlgotradingDE


1 year ago #278279

Hi Conmariin,

our approaches are very similar, and for me too it turned out that I need a significant number of built strategies before even starting all the robustness tests, because those tests really have to be very strict in order to get realistic live results.

Concerning optimization, I came to the same conclusion as you, meaning I do not optimize the strategies that survived the robustness tests. Initially I thought they should be robust enough to be optimized for the latest market phase, but I now think this is completely wrong. I fully agree with you that it’s all about finding (Builder) and selecting (Retester) strong strategies, and not getting your hands on the optimizer button 🙂

As for the 3000 strategies you build before starting the robustness tests: I use a slight modification which saves some time in the build process.

I let the Builder create strategies on the selected timeframe only (like 1H), which creates strategies a lot faster. Then, as my first retest, I run those strategies on the 1-min timeframe. According to my statistics, about 80% of the strategies survive this 1-min timeframe test. The test itself takes just a few minutes, so in conclusion I can create many more strategies in the same amount of time simply by splitting this into two process steps.

For clarity, what I do is:

1. Run the Builder with the selected timeframe for, say, 3000 strategies

2. Run the Higher Backtest Precision test with 1min data on those 3000 strategies

I very much enjoy this discussion; it has everything needed for us to learn from each other. Thanks again for sharing your experience, I’ll be happy to share mine.

I’ll continue to evaluate your workflow and will get back if I have more things to ask and discuss.

Gerhard Frischholz
https://Algotrading.de


Conmariin


1 year ago #278280

Hi Gerhard,

at first glance your method looks faster, but there is a small error in the reasoning. My method is:

Step 1: Build a strategy on the H1 timeframe with H1 data (fastest) -> if the strategy is okay, directly cross-check whether it is also okay on the H1 timeframe with M1 data (slowest) -> if the strategy is not okay, delete it automatically right away. Build the next strategy.
Step 2: Once 3000 strategies have been built, the retest follows.

Your method is:
Step 1: Build strategies on the H1 timeframe with H1 data (fastest) until 3000 strategies have been built.
Step 2: Cross-check whether the strategies are okay on the H1 timeframe with M1 data (slowest) -> 80% (in your example) are okay = 2400 strategies.
Step 3: The retest is started with 2400 strategies.

Of course you will be faster with fewer strategies 😉
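The counting argument above, with the thread's own numbers, works out like this:

```python
# Worked version of the counting argument, using the numbers from the thread
# (3000 built strategies, Gerhard's observed ~80% survival on M1 data).

built_on_h1 = 3000
m1_pass_rate = 0.80

# Two-step method: build 3000 on H1 first, then cross-check on M1 data.
survivors_two_step = int(built_on_h1 * m1_pass_rate)  # only these enter the retests

# Inline method: keep building until 3000 strategies have already passed
# the M1 cross-check, so the retests start from the full 3000.
survivors_inline = 3000
```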

Automated trading with Expert Advisors
https://www.rabenesche.de


AlgotradingDE


1 year ago #278281

Hi Conmariin,

your comment makes me rethink my statement. I was under the impression that the Higher Backtest Precision test on 1-min data is carried out for every strategy being generated, not just the ones that passed the set Builder criteria. If the latter is the case, then of course you won’t save anything by separating the two tasks; instead you end up with fewer strategies than you were aiming for.

I didn’t find any clear statement about this in the StrategyQuant docs. Did you try it out?

Gerhard Frischholz
https://Algotrading.de


Conmariin


1 year ago #278284

Hi Gerhard,

I don’t think it’s something that needs to be documented; it’s more of a logical thing.
You can definitely do it your way! But then you don’t start from 3000 strategies, but from fewer.

Actually, it doesn’t matter, because both methods have their justification. I avoid manual intervention in the workflow, at least as far as possible. It’s the same as trading with robots: eliminate the human source of error as much as possible. I’ve often caught myself wanting to accept a ‘Failed’ strategy as ‘Passed’. “But it had such a nice drawdown…!” 😉

By the way, I have to say that the Expert Advisors created with StrategyQuant X are the most profitable ones. I started a completely new SQX portfolio on a demo account last year, and the profit, although it was a hard year, was about 60%.
Based on this result, I started this year with a live account. Yes, I know, it’s really not a good year to start! 😀 But since I use the 1% rule, the losses are bearable and the robots are just working their way back up.
The difference between demo and live is also very small. Of course the robots don’t match every trade 1:1 like on demo, but they come acceptably close.

Automated trading with Expert Advisors
https://www.rabenesche.de


AlgotradingDE


1 year ago #278285

Hi Conmariin,

you are absolutely right. What I overlooked was the fact that you put very stringent requirements into your Higher Backtest Precision 1-min test. So even with “my” method, the majority of initial strategies would not pass the 1-min test.

I like that you mentioned not wanting to manually interfere with the robustness tests. That’s so true. We are often tempted to curve-fit things because we like certain strategies and don’t want them to be put aside by the “cruel” robustness test.

And it looks like this way of handling things works for you. That’s good. For me too, the test on a live account is the ultimate test.

Gerhard Frischholz
https://Algotrading.de


Kevin


1 year ago #278301

Hi Gerhard and Conmariin,

Thanks for sharing those workflows. I’ve attached one of mine. As I’ve noted previously, I use a custom metric (Annual % Ret / DD% on initial capital), which I’ve published for everyone in the codebase. However, if you don’t want to bother with that, the built-in Annual % Ret / DD% would be similar, but not as strict.

Note that some tasks after the initial builds are disabled. They are there because I’ve tried working with a full-on building phase where I would do several iterations of building with two computers (one with my pro license and another with my starter license). The last time, I waited until I had around 8000 strategies built before moving on. Otherwise, I’ve noticed that if I start doing robustness tests with only, say, 1000 strategies, and none pass the tests, I’m tempted to relax my criteria in order to let more through the filters. I’ve separated the high-precision (M1 test resolution) step from the building phase, basically because the starter license doesn’t permit doing retests. The other option would be to do the building directly with M1 resolution, but for now I prefer to use the same process for both, even though it does mean having to deal with a lot more strategies that I know will be filtered out in the next phase.

Take note that I’ve customised the genetic evolution options and included filters on the initial population. This way it takes longer to generate the initial population, but I’ve found that the strategies produced from that initial population are of better quality. At the end of each task I’m either filtering in the ranking or filtering in the cross-check task. I copy only the successful strategies into a databank for each phase. That way I can examine which strategies survived which retesting phase, in case I want to adjust the workflow in some way or simply examine the results of a particular phase in more detail, without having to run the tasks one by one to do this.

As I’m using Darwinex as a broker, you’ll see that I’ve got a step where I test on Darwinex tick data (typically only available from 2018 or so).

I am rethinking my workflow with some of the ideas that you’ve shared in this forum. I now want a process where I don’t have to apply optimised parameters and then retest etc. with the new parameters, as I think this makes things a lot more cumbersome. Even so, I think the sequential optimisation and walk-forward optimisation steps are important, but more as robustness tests, even though I don’t apply the “optimised” parameters. Sequential optimisation ensures that the parameters are in a stable range of values and, to make a long story short, reading Robert Pardo has convinced me that WF is useful for robustness as well.
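Reading "parameters are in a stable range" as a concrete test, a crude check might look like this; the shape of the input and the 30% threshold are placeholders of mine, not anything from StrategyQuant:

```python
def parameter_stable(metric_by_value, max_drop=0.30):
    """Crude stability check in the spirit of using sequential optimisation
    as a robustness test: across the scanned range of one parameter, the
    worst backtest metric should not fall more than max_drop below the best.
    metric_by_value maps parameter value -> metric; thresholds are
    placeholder assumptions."""
    best = max(metric_by_value.values())
    worst = min(metric_by_value.values())
    return worst >= best * (1.0 - max_drop)
```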

I like the idea of selecting the strategies to be included in a live-account portfolio based on performance in a demo account or in a real account with minimal position sizing. However, apart from ranking, I think it would be good to have a minimal-requirement cut-off as well, say profit factor > 1.5 or something like that. What do you think?
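The "ranking plus hard cut-off" idea could be sketched as follows; the data shape, the top-5 default, and the function name are illustrative, with only the 1.5 profit-factor floor taken from the post:

```python
def select_for_live(candidates, top_n=5, min_pf=1.5):
    """Rank demo-account candidates by profit factor, but only after a hard
    minimum cut-off, combining the ranking idea with the 'profit factor >
    1.5' suggestion. `candidates` is a list of (name, profit_factor)
    pairs; the shape is illustrative."""
    eligible = [c for c in candidates if c[1] > min_pf]
    return sorted(eligible, key=lambda c: c[1], reverse=True)[:top_n]
```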

Cheers!

Attachments: (workflow file)


AlgotradingDE


1 year ago #278305

Hi Kevin, hi Conmariin,

unfortunately my (long) answer to your posts is still awaiting moderation, although I don’t know why. Hopefully it will be published soon, because it’s important: I wanted to suggest a very concrete collaboration project.

If it isn’t released by tomorrow, I will make the effort and write the whole post again.

Let´s hope for the best!

Gerhard Frischholz
https://Algotrading.de


Conmariin


1 year ago #278310

Hi Kevin,

thanks for your workflow! 🙂

I adopted the useful ProfitFactor (IS) >= 1 filter in the Genetic Options into my workflow. Yes, it then takes longer to reach your number of strategies, but I think you’re right.

I’m running the Walk-Forward Matrix on the winners of the “contest” too, but for me it’s only a statistical element. I take note of the results, but they are not a criterion for me to sort out the EAs.

 

Automated trading with Expert Advisors
https://www.rabenesche.de

