Parallel Processing For Genetic Training And Generation

10 replies

kainc301

Customer, bbp_participant, community, 54 replies.

4 years ago #247191

Hey everyone. From what I understand, the SQ team has made a lot of speed improvements to SQX, including parallel computing for Monte Carlo and WFO simulations.

However, is it possible to use parallel computation for genetic training and random generation as well? I presume this would make the process much faster on higher-core-count CPUs.

Also, I saw that GPU acceleration for computations is on the "to be done later" task list. But from what I understand, GPU computing will only help with the speed of building strategies if the building and training are done with parallel processing. I want to invest in premium GPUs for my machine to use with SQX when this feature becomes available; however, I wanted to make sure that it would actually improve the speed of building and training strategies.

I’ve seen GPU acceleration improve the speed of training machine learning models by orders of magnitude. Could the same be applied to SQX? Curious to get thoughts from Mark or the development team. It would really be awesome to get the 10-100x speed improvements that have been observed with different models using multi-GPU setups 🙂

bentra

Customer, bbp_participant, community, sq-ultimate, 22 replies.

4 years ago #247208

From the wiki: Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. This is technically already happening since SQ3 and maybe before it. (Otherwise we would not be able to use more than one core at a time.)
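As a rough illustration of the multi-core parallelism described above (a minimal sketch with hypothetical names, not SQX code): each candidate's backtest in a genetic population is independent of the others, so the whole population can be evaluated across all CPU cores at once.

```python
# Illustrative sketch, NOT SQX internals: fitness evaluations in a genetic
# population are independent, so they parallelize naturally across CPU cores.
from concurrent.futures import ProcessPoolExecutor

def backtest_fitness(candidate):
    # Hypothetical stand-in for a full strategy backtest: here "fitness"
    # is just a toy function of the candidate's parameters.
    return sum(p * p for p in candidate)

def evaluate_population(population):
    # Each candidate is backtested in its own worker process.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(backtest_fitness, population))

if __name__ == "__main__":
    population = [(1, 2), (3, 4), (0, 5)]
    print(evaluate_population(population))  # [5, 25, 25]
```

Since the workers share nothing during evaluation, the speedup scales roughly with the number of physical cores; this is the same reason WFO and Monte Carlo runs parallelize so well.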

Looking forward to GPU support too, should be much faster but might be a lot of work to get there…

From this paper: https://pdfs.semanticscholar.org/8fa2/9a317120add5525e61f41f73c5a96e932d60.pdf
GPGPU programming is writing massively parallel programs for execution on personal computers.

May all your fits be loose.


https://www.darwinex.com/darwin/SUG.4.2/

Gianfranco

Subscriber, bbp_participant, customer, community, 114 replies.

4 years ago #247214

Like CUDA cores on NVIDIA GPUs?

I know of software optimized with the NVIDIA CUDA libraries that takes advantage of the CUDA cores of the GPU, or of Tesla cards for parallel calculation… but I think it would be a big job rewriting SQX with the CUDA libraries.

kainc301

Customer, bbp_participant, community, 54 replies.

4 years ago #247226

This is technically already happening since SQ3 and maybe before it. (Otherwise we would not be able to use more than one core at a time.)

I thought it was only WFO and Monte Carlo that took advantage of this, but I have no clue how the inner workings of SQX are handled, so it was only a best guess.

I know of software optimized with the NVIDIA CUDA libraries that takes advantage of the CUDA cores of the GPU, or of Tesla cards for parallel calculation… but I think it would be a big job rewriting SQX with the CUDA libraries.

If I could only have one Christmas present, it would be this lol

clonex / Ivan Hudec

Customer, bbp_participant, community, sq-ultimate, contributor, author, editor, 271 replies.

4 years ago #247241

It was written here many times how SQX works and why it is not possible to use a GPU, etc. Just use the search function.

kainc301

Customer, bbp_participant, community, 54 replies.

4 years ago #247243

It was written here many times how SQX works and why it is not possible to use a GPU, etc. Just use the search function.

Mark added GPU capabilities to the "to be done later" milestone list for SQX: https://roadmap.strategyquant.com/tasks/sq4_2985

I have read what you were referring to, but after this task was submitted and there was some discussion about how it could be implemented, I was under the impression that this sentiment had changed.

kainc301

Customer, bbp_participant, community, 54 replies.

4 years ago #247246

I did a lot of research on the SQ forums and elsewhere to understand the current bottlenecks of GPU computing when it comes to backtesting. When Mark spoke about this in the past, the issue had to do with using GPUs for general-purpose computing, as backtesting is more complex than pure mathematical operations. A similar sentiment is described in this academic abstract: https://pdfs.semanticscholar.org/d087/f5cb4a92f98aaef5008ceb682954f7ffbee2.pdf

However, it appears that two researchers have been able to use GPUs effectively with genetic algorithms on FX markets. I highly recommend looking at, and potentially implementing, this approach. The full paper is here: http://people.scs.carleton.ca/~dmckenne/5704/Paper/Final_Paper.pdf

I also found that NVIDIA was able to use GPUs for financial market backtesting and achieve a 6000x speedup: https://www.nvidia.com/content/dam/en-zz/Solutions/industries/finance/finance-trading-executive-briefing-hr-web.pdf

How they did it is unknown, so I'll keep looking, but there seems to be a way to incorporate CUDA logic capable of doing the type of backtesting that was previously considered near impossible. So I believe there is still hope for this to be implemented in SQX one day. If it has been done before, I do not consider it "impossible." The researchers incorporating GPUs into their genetic algorithm also used CUDA to achieve their results.
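For context on what those papers parallelize (an illustrative toy, not their actual code): in a generational genetic algorithm, the per-candidate fitness evaluation is the independent, and therefore GPU-friendly, step. A minimal sketch with hypothetical names:

```python
# Minimal generational GA sketch (illustrative only): the fitness
# evaluation inside the loop is the part such papers offload to the GPU.
import random

def evolve(fitness, pop_size=20, genome_len=8, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness evaluation: independent per candidate, hence parallelizable.
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # occasional bit-flip mutation
                i = rng.randrange(genome_len)
                child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ("one-max"); a real run would backtest a strategy.
best = evolve(fitness=sum)
```

The selection, crossover, and mutation steps are cheap; it is the many independent fitness evaluations (in our case, backtests) that dominate the runtime, which is why the hardware question matters so much.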

Mark Fric

Administrator, sq-ultimate, 2 replies.

4 years ago #247279

I have consulted this repeatedly with my PhD friends from university who work with GPUs, and unfortunately the status has not changed – GPUs are not suitable for us.

It is possible to run genetic evolution and also "some kind" of backtesting on a GPU. The problem is the open nature of SQ that we want to achieve.

We could make a fixed strategy architecture and a fixed set of indicators that would be tested on the GPU, but then we wouldn't be able to use things like strategy templates, custom programmed indicators and other blocks, which are all strong points of SQ X.

The hundredfold speedups mentioned in papers are possible only under certain conditions and with certain programs; it is not yet possible to speed up the general backtesting process like this.
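A toy way to see this trade-off (a hypothetical sketch, not SQX code): a fixed template such as a moving-average computation can be evaluated for many parameter sets with one batched array kernel, the style that maps well to a GPU, while an open architecture must dispatch arbitrary user code one candidate at a time.

```python
# Sketch of the trade-off (hypothetical, not SQX code): a FIXED strategy
# template can be evaluated for many parameter sets as batched array math,
# the style that maps well to GPUs, while an OPEN architecture has to call
# arbitrary user code per candidate, with nothing to batch.
import numpy as np

def batched_sma(prices, windows):
    # One fixed kernel: simple moving averages for every window length,
    # computed from a single cumulative sum of the price series.
    csum = np.concatenate(([0.0], np.cumsum(prices)))
    return [(csum[w:] - csum[:-w]) / w for w in windows]

prices = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
fast, slow = batched_sma(prices, [2, 3])
# fast: SMA(2) = [1.5, 2.5, 3.5, 4.5]; slow: SMA(3) = [2.0, 3.0, 4.0]

def run_candidate(custom_rule, prices):
    # The open alternative: each evolved candidate is arbitrary code
    # (custom indicators, templates), so it runs one call at a time.
    return custom_rule(prices)
```

The batched path only works because the kernel is fixed in advance; the moment users can plug in their own indicator code, evaluation falls back to the general, one-at-a-time path.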

Mark
StrategyQuant architect

kainc301

Customer, bbp_participant, community, 54 replies.

Visit profile

4 years ago #247291

I have consulted this repeatedly with my PhD friends from university who work with GPUs, and unfortunately the status has not changed – GPUs are not suitable for us. It is possible to run genetic evolution and also "some kind" of backtesting on a GPU. The problem is the open nature of SQ that we want to achieve. We could make a fixed strategy architecture and a fixed set of indicators that would be tested on the GPU, but then we wouldn't be able to use things like strategy templates, custom programmed indicators and other blocks, which are all strong points of SQ X. The hundredfold speedups mentioned in papers are possible only under certain conditions and with certain programs; it is not yet possible to speed up the general backtesting process like this.

That’s unfortunate. I agree that it is much more important to keep flexibility in using custom indicators/templates than sacrifice that for performance.

So the true path to higher performance through hardware is using multiple CPUs connected in a grid. At least I can save money on expensive GPUs haha.

gusyoan

Customer, bbp_participant, community, 21 replies.

Visit profile

4 years ago #254839

Hi Mark,

Besides the GPU, is it possible to use an FPGA card or an ASIC?

Mark Fric

Administrator, sq-ultimate, 2 replies.

4 years ago #257552

No, these kinds of cards cannot be used. But I don't understand the obsession with the highest possible computing power – it is not the key to profitable trading.

Mark
StrategyQuant architect
