
Parallel Processing For Genetic Training And Generation

#247191
    Customer
    36 Posts

    Hey everyone. From what I understand, the SQ team has made a lot of speed improvements for SQX, including parallel computing for Monte Carlo and WFO simulations.

    However, is it possible to use parallel computation for genetic training and random generation as well? I would presume this would make the process much faster on higher-core-count CPUs.

    Also, I saw that GPU acceleration for computations is on the to-be-done-later task list. From what I understand, though, GPU computing will only help with the speed of building strategies if the building and training are done with parallel processing. I want to invest in premium GPUs to use with my machine for SQX when this feature becomes available; however, I first wanted to make sure that it would actually improve the speed of building and training strategies.

    I’ve seen GPU acceleration improve the speed of training machine learning models by orders of magnitude. Could the same be applied to SQX? Curious to get thoughts from Mark or the development team. It would really be awesome to get the 10-100x speed improvements that have been observed with different models using multi-GPU setups :)

    #247208
    Customer
    48 Posts

    From Wikipedia: "Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously." This has technically been happening since SQ3, and maybe before it. (Otherwise we would not be able to use more than one core at a time.)

    Looking forward to GPU support too; it should be much faster, but it might be a lot of work to get there…

    From this paper: https://pdfs.semanticscholar.org/8fa2/9a317120add5525e61f41f73c5a96e932d60.pdf
    GPGPU programming is writing massively parallel programs for execution on personal computers.
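To make "many calculations carried out simultaneously" concrete for genetic building: each candidate strategy in a generation can be backtested independently of the others, so a generation can simply be mapped across a pool of workers. A minimal sketch in plain Python (not SQX code; the function names are my own invention):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_fitness(candidate):
    # Hypothetical stand-in for a full backtest: score one candidate
    # strategy (here just a list of numeric "genes") independently of
    # all other candidates. This independence is what makes genetic
    # evaluation parallelizable.
    return sum(g * g for g in candidate)

def evaluate_generation(population, workers=4):
    # Map the whole generation across a worker pool. In Java (which
    # SQX is built on) the same pattern gives true multi-core speedup;
    # Python threads are limited by the GIL, so this sketch only shows
    # the structure, not the speedup. Results come back in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate_fitness, population))

population = [[1, 2], [3, 4], [0, 5]]
scores = evaluate_generation(population)
```

The key property is that no candidate's score depends on any other candidate's score, so adding cores scales the evaluation step almost linearly.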


    #247214
    Gianfranco
    Customer
    37 Posts

    Like CUDA cores on NVIDIA GPUs.

    I know of some software optimized with NVIDIA's CUDA libraries that takes advantage of the CUDA cores of the GPU, or of Tesla cards for parallel calculation… but I think it would be a big job to rewrite SQX with the CUDA libraries.

    #247226
    Customer
    36 Posts

    This has technically been happening since SQ3, and maybe before it. (Otherwise we would not be able to use more than one core at a time.)

    I thought it was only WFO and Monte Carlo that took advantage of this, but I have no clue how the inner workings of SQX are handled, so it was only a best guess.

    I know of some software optimized with NVIDIA's CUDA libraries that takes advantage of the CUDA cores of the GPU, or of Tesla cards for parallel calculation… but I think it would be a big job to rewrite SQX with the CUDA libraries.

    If I could only have one Christmas present, it would be this lol


    #247241
    Customer
    268 Posts

    It was written here many times how SQX works and why it is not possible to use the GPU, etc. Just use the search function.

    #247243
    Customer
    36 Posts

    It was written here many times how SQX works and why it is not possible to use the GPU, etc. Just use the search function.

    Mark added GPU capabilities to the to-be-done-later milestone list for SQX: https://roadmap.strategyquant.com/tasks/sq4_2985

    I have read what you are referring to, but after this task was submitted and there was some discussion about how it could be implemented, I was under the impression that this sentiment had changed.

    #247246
    Customer
    36 Posts

    I did a lot of research on the SQ forums and elsewhere to understand the current bottlenecks of GPU computing when it comes to backtesting. When Mark spoke about this in the past, the issue had to do with using GPUs for general-purpose computing, as backtesting is more complex than pure mathematical operations. A similar sentiment is described in this academic abstract: https://pdfs.semanticscholar.org/d087/f5cb4a92f98aaef5008ceb682954f7ffbee2.pdf

    However, it appears that two researchers have been able to use GPUs effectively with genetic algorithms on FX markets. I highly recommend looking at, and potentially implementing, this approach. A full link to their academic research is here: http://people.scs.carleton.ca/~dmckenne/5704/Paper/Final_Paper.pdf

    I also found that NVIDIA was able to use GPUs for financial market backtesting and achieve a 6000x speedup: https://www.nvidia.com/content/dam/en-zz/Solutions/industries/finance/finance-trading-executive-briefing-hr-web.pdf

    How they did it is unknown, so I'll keep looking, but there seems to be a way to incorporate CUDA logic capable of doing the type of backtesting that was previously considered near impossible. So I believe there is still hope for this to be implemented in SQX one day. If it has been done before, I do not consider it "impossible." The researchers incorporating GPUs in their genetic algorithm also used CUDA to achieve their results.
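To make the old objection concrete (this is my own toy illustration, not how SQX works internally): indicator arithmetic is data-parallel, since each output depends only on a fixed window of prices, but the backtest itself carries position and equity state from bar to bar, so each step depends on the previous one and cannot be naively spread across GPU threads.

```python
def sma_signal(prices, period=3):
    # Indicator math: each output depends only on a fixed window of
    # inputs, so every element could be computed in parallel
    # (GPU-friendly, like the papers' kernels).
    return [sum(prices[i - period + 1:i + 1]) / period
            for i in range(period - 1, len(prices))]

def backtest(prices, signals):
    # Backtest loop: equity and position are carried forward bar by
    # bar, so step i depends on step i-1. This sequential dependency
    # is what resists naive GPU parallelization of a single backtest.
    cash, position = 100.0, 0.0
    offset = len(prices) - len(signals)
    for i, sig in enumerate(signals):
        price = prices[offset + i]
        if sig > price and position == 0.0:   # toy entry rule
            position = cash / price
            cash = 0.0
        elif sig < price and position > 0.0:  # toy exit rule
            cash = position * price
            position = 0.0
    return cash + position * prices[-1]

prices = [5, 4, 3, 2, 10]
final_equity = backtest(prices, sma_signal(prices))
```

What the GPU papers exploit is a different axis of parallelism: one backtest per thread, many strategies at once, which works when all strategies share the same fixed evaluation logic.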

    #247279
    Administrator
    2333 Posts

    I have consulted this repeatedly with my PhD friends from university who work with GPUs, and unfortunately the status has not changed: GPUs are not suitable for us.

    It is possible to run genetic evolution and also “some kind” of backtesting on a GPU. The problem is the open nature of SQ that we want to achieve.

    We could make a fixed strategy architecture and a fixed set of indicators that would be tested on the GPU, but then we wouldn’t be able to use things like strategy templates, custom programmed indicators, and other blocks, which are all strong points of SQ X.

    The hundredfold speedups mentioned in papers are possible only under certain conditions and with certain programs; it is not yet possible to speed up the general backtesting process like this.
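A toy sketch of the trade-off (my own illustration, not SQX code, using NumPy on the CPU as a stand-in for a GPU kernel): when every candidate shares one fixed formula and differs only in numeric parameters, the whole population can be scored in a single batched array operation, which is exactly the data-parallel shape a GPU wants. A user-programmed custom indicator breaks this, because its arbitrary code cannot be folded into that one shared kernel.

```python
import numpy as np

# Fixed architecture: every candidate is "score = w1*momentum + w2*reversion",
# differing only in the numeric parameters (w1, w2). Because the formula is
# shared by all candidates, the entire population is evaluated in one batched
# array operation -- the data-parallel shape that maps well to GPU kernels.
momentum, reversion = 0.8, -0.2          # toy per-bar features (invented)
w1 = np.array([0.0, 0.5, 1.0])           # param 1, one entry per candidate
w2 = np.array([1.0, 0.5, 0.0])           # param 2, one entry per candidate

scores = w1 * momentum + w2 * reversion  # one operation, whole population
```

With templates and custom indicator code, each candidate would instead need its own branchy program, and this single shared operation no longer exists.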

    Mark
    StrategyQuant architect

    #247291
    Customer
    36 Posts

    I have consulted this repeatedly with my PhD friends from university who work with GPUs, and unfortunately the status has not changed: GPUs are not suitable for us. It is possible to run genetic evolution and also “some kind” of backtesting on a GPU. The problem is the open nature of SQ that we want to achieve. We could make a fixed strategy architecture and a fixed set of indicators that would be tested on the GPU, but then we wouldn’t be able to use things like strategy templates, custom programmed indicators, and other blocks, which are all strong points of SQ X. The hundredfold speedups mentioned in papers are possible only under certain conditions and with certain programs; it is not yet possible to speed up the general backtesting process like this.

    That’s unfortunate. I agree that keeping the flexibility of custom indicators/templates is much more important than sacrificing it for performance.

    So the true path to higher performance through hardware is using multiple CPUs connected through a grid system. At least I can save money on expensive GPUs, haha.
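The grid route works precisely because the backtests are independent. A hypothetical sketch of the only coordination step needed, splitting a batch of jobs across worker machines (names invented, not SQ's actual grid protocol):

```python
def split_jobs(jobs, n_workers):
    # Round-robin assignment: job i goes to worker i % n_workers, so
    # each machine receives a near-equal share of independent backtests
    # and no coordination is needed while the batch runs.
    return [jobs[w::n_workers] for w in range(n_workers)]

jobs = [f"backtest-{i}" for i in range(7)]
chunks = split_jobs(jobs, 3)  # one list of jobs per worker machine
```

Each worker then runs its chunk locally and reports results back, which is why adding CPU boxes scales this workload without any of the GPU constraints above.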

