In an old post of mine I asked whether it was possible to use NVIDIA GPU CUDA cores for parallel computing,
and the answer was that this is not possible.
So for those who have money to spend on a Xeon Platinum or Gold (dual socket), an AMD Threadripper or an AMD EPYC: a lot of power without knowing what to do with it is useless. What I mean is that many people focus only on computing power, hoping to pull out super strategies. I know algotraders who work with mid-level computers and make working strategies. I think chasing the latest CPU is of little use.
It's just my observation.
I have changed my computer three times, each one faster, and my strategies have not improved with the speed of the computer.
Regarding using the GPU in SQ, you can read a recent post from the administrator here:
https://strategyquant.com/forum/topic/parallel-processing-for-genetic-training-and-generation/#post-247279
The administrator, Mark Fric, has also explained these principles in older posts, such as this one:
https://strategyquant.com/forum/topic/1774-what-type-of-computer-specifications-do-you-all-have/page/2/#post-126669
Regarding Xeon and performance: at present, the latest AMD Ryzen 9 and Threadripper CPUs are among the best available.
Timisoara, Romania
3900X @ 3.8 GHz, 12 cores, 64 GB DDR4-3000 RAM, Samsung 970 EVO Plus M.2 NVMe
It seems Xeon cannot fully match professional needs. Does the SQ team plan to add more functionality such as GPU support? Even an evolution roadmap like Bitcoin mining's would be better: CPU, then GPU, then FPGA, then ASIC.
I consult this repeatedly with my PhD friends from university who work with GPUs, and unfortunately the status has not changed – GPUs are not suitable for us.
It is possible to run genetic evolution, and also "some kind" of backtesting, on a GPU. The problem is the open nature of SQ that we want to achieve. We could make a fixed strategy architecture with a fixed set of indicators that would be tested on the GPU, but then we wouldn't be able to use things like strategy templates, custom programmed indicators and other blocks, which are all strong points of SQ X.
The hundredfold speedup mentioned in papers is possible only under certain conditions and with certain programs; it is not yet possible to speed up a general backtesting process like this.
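Mark's point about a "fixed strategy architecture with a fixed set of indicators" mapping well onto a GPU can be illustrated with a toy example. This is not SQX code; it is a minimal NumPy sketch (names like `crossover_pnl` are invented here) of the kind of backtest that batches well: every candidate runs exactly the same control flow over the same data, only the parameters differ. That uniform shape is what a GPU kernel could run one-thread-per-candidate; an arbitrary user-coded indicator breaks it.

```python
import numpy as np

def sma(prices, window):
    """Trailing simple moving average via cumulative sums (fully vectorized)."""
    c = np.cumsum(np.insert(prices, 0, 0.0))
    out = np.full(len(prices), np.nan)
    out[window - 1:] = (c[window:] - c[:-window]) / window
    return out

def crossover_pnl(prices, fast, slow):
    """P&L of a toy long-only fast/slow SMA crossover on one price series."""
    f, s = sma(prices, fast), sma(prices, slow)
    pos = np.where(f > s, 1.0, 0.0)   # hold 1 unit while fast SMA > slow SMA
    rets = np.diff(prices)            # next-bar price changes
    return float(np.nansum(pos[:-1] * rets))

# Batch-test a whole parameter grid: every candidate runs identical control
# flow over the same data, which is exactly the shape that maps onto one
# GPU thread per candidate. Arbitrary custom indicators break this uniformity.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.01, 1.0, 2000)) + 100.0
grid = [(f, s) for f in (5, 10, 20) for s in (50, 100, 200)]
results = {(f, s): crossover_pnl(prices, f, s) for f, s in grid}
best = max(results, key=results.get)
print("best (fast, slow):", best, "pnl:", round(results[best], 2))
```

The moment one candidate calls a user-defined indicator with its own branching logic, the candidates no longer share control flow and this batched formulation no longer applies, which is the limitation Mark describes.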
That's unfortunate. I agree that it is much more important to keep the flexibility of custom indicators/templates than to sacrifice that for performance.
So the true path to higher performance through hardware is using multiple CPUs connected through a grid system. At least I can save money on expensive GPUs, haha.
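The multi-CPU route described above works precisely because each backtest is independent of the others, so throughput scales with core count (or with machines in a grid). A hypothetical sketch on a single machine; the `backtest` function below is an invented stand-in, not SQX's API:

```python
from multiprocessing import Pool
import random

def backtest(seed):
    """Stand-in for one independent backtest (CPU-bound, no shared state)."""
    rng = random.Random(seed)
    equity = 100.0
    for _ in range(100_000):          # simulate heavy per-strategy work
        equity += rng.gauss(0.0, 0.1)
    return seed, equity

if __name__ == "__main__":
    # One worker per CPU core by default; because the jobs share nothing,
    # adding cores (or grid machines) scales the work almost linearly.
    with Pool() as pool:
        results = pool.map(backtest, range(8))
    print(sorted(results))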
Mark
StrategyQuant architect
Did a lot of research on the SQ forums and elsewhere to understand the current bottlenecks of GPU computing for backtesting. When Mark spoke about this in the past, the issue was with using GPUs for general-purpose computing, since backtesting is more complex than plain mathematical operations. A similar sentiment is described in this academic abstract: https://pdfs.semanticscholar.org/d087/f5cb4a92f98aaef5008ceb682954f7ffbee2.pdf
However, it appears that two researchers have been able to use GPUs effectively with genetic algorithms on FX markets. I highly recommend looking at, and potentially implementing, this approach. The full paper is here: http://people.scs.carleton.ca/~dmckenne/5704/Paper/Final_Paper.pdf
I also found that NVIDIA was able to use GPUs for financial market backtesting and achieve a 6000x speedup: https://www.nvidia.com/content/dam/en-zz/Solutions/industries/finance/finance-trading-executive-briefing-hr-web.pdf
How they did it is unknown, so I'll keep looking, but there seems to be a way to write CUDA logic capable of the type of backtesting that was previously considered near impossible. So I believe there is still hope for this to be implemented in SQX one day; if it has been done before, I do not consider it "impossible." The researchers who incorporated GPUs into their genetic algorithm also used CUDA to achieve their results.
It has been written here many times how SQX works and why it is not possible to use the GPU, etc. Just use the search function.
Mark added GPU capabilities to the "to be done later" milestone list for SQX: https://roadmap.strategyquant.com/tasks/sq4_2985
I have read what you are referring to, but after this task was submitted and there was some discussion about how it could be implemented, I was under the impression that this sentiment had changed.
This is technically already happening since SQ3 and maybe before it. (Otherwise we would not be able to use more than one core at a time.)
I thought it was only WFO and Monte Carlo that took advantage of this, but I have no clue how the inner workings of SQX are handled, so it was only a best guess.
I know of software optimized with NVIDIA CUDA libraries that takes advantage of the CUDA cores of NVIDIA GPUs, or of Tesla cards, for parallel calculation… but I think it would be a big job to rewrite SQX optimized with CUDA libraries.
If I could only have one Christmas present, it would be this lol
like NVIDIA CUDA cores
From wiki: Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. This is technically already happening since SQ3 and maybe before it. (Otherwise we would not be able to use more than one core at a time.)
Looking forward to GPU support too; it should be much faster, but it might be a lot of work to get there…
From this paper: https://pdfs.semanticscholar.org/8fa2/9a317120add5525e61f41f73c5a96e932d60.pdf
GPGPU programming is writing massively parallel programs for execution on personal computers.
May all your fits be loose.
https://www.darwinex.com/darwin/SUG.4.2/
Hey everyone. From what I understand, the SQ team has made a lot of speed improvements to SQX, including parallel computing for Monte Carlo and WFO simulations.
However, is it possible to use parallel computation for genetic training and random generation as well? I would presume this would make the process much faster on higher-core-count CPUs.
Also, I saw that GPU acceleration for computations is on the "to be done later" task list. From what I understand, though, GPU computing will only help with the speed of building strategies if the building and training are done with parallel processing. I want to invest in premium GPUs for my SQX machine when this feature becomes available, but I wanted to make sure it would actually improve the speed of building and training strategies.
I've seen GPU acceleration improve the speed of training machine-learning models by orders of magnitude. Could the same be applied to SQX? I'm curious to get thoughts from Mark or the development team. It would really be awesome to get the 10-100x speed improvements that have been observed with multi-GPU setups 🙂
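The reason genetic training and random generation parallelize well is that each member of a population can be backtested independently of the others; only selection and breeding need the whole generation's results. A toy sketch of that split, not SQX internals; `fitness` and `evolve_one_generation` are invented stand-ins:

```python
from concurrent.futures import ProcessPoolExecutor
import random

def fitness(genome):
    """Stand-in for backtesting one candidate; depends only on its own genome."""
    rng = random.Random(hash(genome))
    return sum(rng.gauss(g, 1.0) for g in genome)

def evolve_one_generation(population, workers=4):
    # Fitness evaluation is independent per candidate, so this map is
    # "embarrassingly parallel" across cores (and, in principle, GPU threads).
    with ProcessPoolExecutor(max_workers=workers) as ex:
        scores = list(ex.map(fitness, population))
    # Selection and breeding are the cheap, sequential part.
    ranked = [g for _, g in sorted(zip(scores, population), reverse=True)]
    survivors = ranked[: len(ranked) // 2]        # toy truncation selection
    rng = random.Random(42)
    children = [tuple(g + rng.gauss(0, 0.1) for g in s) for s in survivors]
    return survivors + children

if __name__ == "__main__":
    population = [tuple(random.Random(i * 4 + j).uniform(-1, 1) for j in range(4))
                  for i in range(8)]
    print(len(evolve_one_generation(population)))
```

With this shape, doubling the worker count roughly halves the fitness-evaluation time per generation as long as each backtest is heavy compared to the selection step.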
Yeah. First try typing "GPU" into the search, then ask, please. With all due respect.
https://strategyquant.com/forum/search/GPU/
This has been discussed a few times on this forum already. One of the reasons the GPU cannot really be used is that SQX lets users program their own indicators and other blocks and extend it with their own ideas. But it has also been said that some GPU-accelerated parts could be added.
I was surprised to read on the internet that SQ does not yet have the ability to accelerate strategy-processing calculations via GPU and OpenCL, nor computation in the cloud! How is that possible?
I think it is a real shortcoming that SQ, as advanced as it is compared to competing software, does not have this capability yet; platforms like MT5 and others implemented it a long time ago!
I hope to see some of these technologies incorporated in the coming months!
I propose that we join forces so that they incorporate this technology: please submit requests for them to add this as soon as possible.
Thank you