It depends on your strategy and combination of parameters. Here are some things that influence the speed:
1) The location of your data source - fetching data over the internet adds latency; data on your local machine is faster.
2) The indicators that you apply - each indicator is recalculated at the close of every candle, so the lower the timeframe and the larger the number of indicators, the more latency accrues, though each step is small.
3) The parameters of the indicators - e.g. if you want to see how the strategy behaves with both a 50-period and a 200-period moving average, that roughly doubles the runtime (a sketch of this follows the list).
4) The number of instruments you want this to run on - total time is roughly the time for one instrument multiplied by the number of instruments.
5) How far back you want to go - i.e. the length of the backtest window.
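For instance, here is a minimal sketch of factors 1 and 3 using pandas; the file name ohlc.csv and its timestamp/close columns are hypothetical stand-ins for your own data:

```python
import pandas as pd

# Hypothetical local file; re-downloading the same candles over
# the internet on every run would add network latency (factor 1).
candles = pd.read_csv("ohlc.csv", parse_dates=["timestamp"])

# Factor 3: every parameter set is another full pass over the
# candles, so testing two MA lengths costs roughly twice as much
# as testing one.
for window in (50, 200):
    candles[f"ma_{window}"] = candles["close"].rolling(window).mean()
```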
Python is inherently fast enough that it really would not matter which programming language you use - it all comes down to the complexity of the strategy, the indicators used, the location of the data source, the duration of the backtest, and the target instrument list.
1) Create a RAM drive and put all the data files on it, if you keep accessing the same data files on each run.
2) If your machine has multiple cores, make your script re-entrant and run multiple sessions in parallel. Sometimes this is the most effective way of reducing the runtime (see the first sketch after this list).
3) Avoid running SQL queries; just load everything into in-memory arrays/hashes. If you must use queries, create temporary MEMORY table(s), copy the data into them, and access those tables from your script (see the second sketch after this list).
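On point 2, a minimal sketch of a re-entrant worker fanned out over all cores with Python's built-in multiprocessing; load_candles and backtest are hypothetical stubs standing in for your own data loader and strategy:

```python
from multiprocessing import Pool

def load_candles(symbol):
    # Hypothetical loader; in practice, read the symbol's data
    # file from the RAM drive set up in tip 1.
    return [100.0, 101.5, 99.8]  # placeholder close prices

def backtest(closes):
    # Hypothetical strategy stub; replace with your own logic.
    return closes[-1] - closes[0]

def run_backtest(symbol):
    # Re-entrant: each call loads its own data and shares no
    # mutable state, so many copies can run safely in parallel.
    return symbol, backtest(load_candles(symbol))

if __name__ == "__main__":
    symbols = ["RELIANCE", "TCS", "INFY", "HDFCBANK"]
    with Pool() as pool:  # defaults to one worker per CPU core
        results = dict(pool.map(run_backtest, symbols))
    print(results)
```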
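On point 3, the MEMORY-table idea can be sketched with Python's built-in sqlite3, which plays an analogous role here to MySQL's MEMORY engine; quotes.db and the candles table are hypothetical names:

```python
import sqlite3

# One-time copy of the on-disk database into an in-memory one,
# so repeated queries during the run never touch the disk.
disk = sqlite3.connect("quotes.db")   # hypothetical database
mem = sqlite3.connect(":memory:")
disk.backup(mem)  # Connection.backup() is available in Python 3.7+
disk.close()

# All subsequent queries run against RAM.
rows = mem.execute(
    "SELECT close FROM candles WHERE symbol = ?", ("RELIANCE",)
).fetchall()
closes = [r[0] for r in rows]
```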
I am working on a project that involves machine learning and optimising and combining technical strategies.
At present it takes 1 day to test 500 ML models on the NSE 500 stocks.
PyPy vs Python: would switching from CPython to PyPy speed this up?
We can help you with the above query; it is very much possible on our platform, with more customisable options.
Kindly contact: [email protected]
Call: +91 8879647666
Website: www.algobulls.com