Here are my experiences so far with Zerodha on latency and slippage, trading CM (cash market) and Index Options:
1) On latency - trades get executed within 10 seconds of my script making the API call. Sometimes the first attempt fails and the trade goes through on the second attempt (the script handles automatic retry). On some occasions most trades fail (the system tries at most 10 times).
2) On slippage - when there are many trades, it has been OK; it tends to even out over time. But on some days the slippage goes through the roof, greatly changing the P/L figure.
I am yet to analyze why slippage is so high on certain days. I suspect those trades were fired when the index was on a steep slope...
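For anyone automating something similar, here is a minimal sketch of such a retry wrapper, assuming the official kiteconnect Python client; every order parameter shown is a placeholder, not a recommendation:

```python
import time
import logging

from kiteconnect import KiteConnect

log = logging.getLogger(__name__)

def place_order_with_retry(kite, max_attempts=10, wait_seconds=1.0, **order_params):
    """Place an order, retrying on failure up to max_attempts times."""
    for attempt in range(1, max_attempts + 1):
        try:
            # place_order() returns the order_id on success.
            order_id = kite.place_order(variety=kite.VARIETY_REGULAR, **order_params)
            log.info("order placed on attempt %d: %s", attempt, order_id)
            return order_id
        except Exception as exc:  # kiteconnect raises typed exceptions; caught broadly for brevity
            log.warning("attempt %d failed: %s", attempt, exc)
            time.sleep(wait_seconds)
    return None  # all attempts exhausted

# Hypothetical usage:
# kite = KiteConnect(api_key="your_api_key")  # plus access-token setup
# place_order_with_retry(kite,
#                        exchange=kite.EXCHANGE_NFO,
#                        tradingsymbol="NIFTY17DECFUT",  # illustrative symbol
#                        transaction_type=kite.TRANSACTION_TYPE_BUY,
#                        quantity=75,
#                        order_type=kite.ORDER_TYPE_MARKET,
#                        product=kite.PRODUCT_MIS)
```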
@ramatius, nice way to calculate slippage every day.
Can you help me understand how you are handling the orders list? I believe you may be keeping a copy of kite.orders() and joining the estimated value to it, or perhaps keeping your signals list and joining the other details from kite.orders() onto it.
Which approach are you using? I also want to start recording slippage on my trades. Can you please guide me?
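For what it's worth, here is a minimal sketch of the second approach (keeping your own signals list and joining fill details from kite.orders() onto it). The signal-side fields are hypothetical, something you would record yourself at order placement; order_id, status and average_price come from the Kite Connect orders response:

```python
def record_slippage(kite, signals):
    """signals: list of dicts you maintain at order placement, e.g.
    {"order_id": "...", "expected_price": 101.5, "transaction_type": "BUY"}.
    Returns one row per completed order with the observed slippage."""
    orders_by_id = {o["order_id"]: o for o in kite.orders()}  # today's order book
    rows = []
    for sig in signals:
        order = orders_by_id.get(sig["order_id"])
        if order is None or order["status"] != "COMPLETE":
            continue  # rejected, cancelled or still open
        filled = order["average_price"]
        # Positive slippage = paid more (BUY) or received less (SELL) than expected.
        sign = 1 if sig["transaction_type"] == "BUY" else -1
        rows.append({
            "order_id": sig["order_id"],
            "expected": sig["expected_price"],
            "filled": filled,
            "slippage": sign * (filled - sig["expected_price"]),
        })
    return rows
```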
Since the question of slippage came up, here are my two cents. With any discount brokerage there seems to be higher slippage. I do not know the reason behind this, but this is experience talking: when I run the same algo, with orders generated at almost the same time (about 10 ms apart), the non-discount brokerage consistently has better order-fill quality.
On the latency front, I have been routinely placing orders with a latency of about 1.5 seconds. I haven't measured the delay of the subsequent fill, though, as I poll the API every 15 seconds to find new fills rather than using postbacks. That said, I face frequent connection issues.
Further, when placing market orders at entry, I typically fetch the prevailing mid price from the websocket data feed and use that mid price as the limit price on the order. I do this to minimize slippage, and in my experience around 95 percent of such orders get filled. When markets move fast (and that is when one desperately needs fills), I have seen orders remain unfilled. But note that in this case there is double latency: 1. latency in the websocket feed, and 2. latency in order placement. In addition, there is the processing time spent by the algo between receiving data and placing the order.
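A sketch of that entry technique, assuming full-mode ticks from the Kite websocket (which carry five levels of market depth); the tick size and order parameters are illustrative:

```python
def mid_price_from_tick(tick, tick_size=0.05):
    """Best bid/ask mid price from a full-mode Kite websocket tick,
    rounded to the instrument's tick size so it is a valid limit price."""
    best_bid = tick["depth"]["buy"][0]["price"]
    best_ask = tick["depth"]["sell"][0]["price"]
    mid = (best_bid + best_ask) / 2.0
    return round(round(mid / tick_size) * tick_size, 2)

# Entry "at market", expressed as a limit order at the prevailing mid:
# kite.place_order(variety=kite.VARIETY_REGULAR,
#                  exchange=kite.EXCHANGE_NSE,
#                  tradingsymbol="INFRATEL",  # illustrative
#                  transaction_type=kite.TRANSACTION_TYPE_BUY,
#                  quantity=100,
#                  order_type=kite.ORDER_TYPE_LIMIT,
#                  price=mid_price_from_tick(tick),
#                  product=kite.PRODUCT_MIS)
```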
A trader who places market orders with Zerodha by monitoring prices at his end and then firing at certain triggers suffers from both of these latencies plus processing time, and hence will likely encounter more slippage. Such slippage can be greatly reduced by one of these two methods:
1. Replace, wherever possible, such market orders with an SL-M/limit order. With both order types, the task of monitoring the trigger price is pushed to Zerodha's systems co-located at the exchange, where latency is minimal (a sketch follows this list).
2. Rather than placing a market order, place a limit order (at the prevailing price) when the trigger fires. In this case the trader will miss a few trades. So if the algorithm fires a lot of trades and is sensitive to slippage, the total slippage saved by placing limit orders will hopefully, over time, exceed the notional profit that could have been made on the missed trades. For an algorithm which trades infrequently, has high accuracy, or where slippage is not a key criterion, the trader may stick to market orders.
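A sketch of method 1, again assuming kiteconnect: once an SL-M order is placed, the trigger is watched by the broker/exchange systems rather than by your own polling loop. Symbol, quantity and trigger are illustrative:

```python
def place_stop_market(kite, symbol, qty, trigger):
    """Method 1: push the trigger to the broker/exchange as an SL-M order
    instead of watching the price locally and firing a market order."""
    return kite.place_order(
        variety=kite.VARIETY_REGULAR,
        exchange=kite.EXCHANGE_NSE,
        tradingsymbol=symbol,            # e.g. "SBIN" (illustrative)
        transaction_type=kite.TRANSACTION_TYPE_SELL,
        quantity=qty,
        order_type=kite.ORDER_TYPE_SLM,  # stop-loss market
        trigger_price=trigger,
        product=kite.PRODUCT_MIS,
    )
```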
On the historical data API: I once had a broker's terminal running on his LAN, which I viewed through remote desktop. I could clearly see that, on Nifty futures, the websocket feed was permanently trailing behind it; the latency wouldn't have been more than 1 second, though I didn't measure it.
A recent example of slippage: rather than placing market orders, to enter "at market" I place a limit order at the prevailing mid price (price from the Zerodha websocket) to save on slippage. Using this method, yesterday at around 9:25-9:30, out of 5 lots of Infratel which I tried to buy (orders staggered one lot per minute), I got filled on four. Today, with the same approach, I got filled on 2 out of 5. On both days the stock was moving sharply at that time, and this was during an extremely liquid period of the day. The time taken by my algo to process data and place an order is less than 200 ms. This gives a further idea of the latency involved in trading over the internet and/or at Zerodha's end.
@haribabu, there are two issues here. Our discussion here is limited to the first issue: slippage, i.e. the difference between the price at which you placed your entry/exit order and the price at which you actually got filled. If the average slippage is high, it acts like a fixed cost on your trading operations. Further, if the strategy trades frequently and thus tries to make small profits on each of a large number of trades, slippage can turn an otherwise profitable strategy into a loss-making one. On the entry side, one can try to minimize slippage by entering through limit orders only; the cost is that if you got the direction right but the move was fast, you will miss out on a few profitable trades. On the exit side, you would generally avoid placing a limit order, because if it doesn't get filled and the market races against you, you may end up with a big loss on that trade. The management of slippage is important for efficient execution.
However, you seem to be talking about another issue: what is the best way to exit a trade. That is largely in the domain of strategy/algo design, which we are not discussing here. For a fully automatic algo, backtesting (with a lot of precautions) is considered compulsory, and even then it is very difficult to say whether the strategy will perform in the live market. During the design of the algo and its backtesting, one has to experiment with both the entry plan and the exit plan and implement the best-looking combination. You are right: the exit plan is as important as the entry plan.
Unfortunately, there is no universally good entry or exit plan; it all depends on what you want to achieve. If you want to catch one of the few large, sustained moves, you will probably have to keep a wider stop-loss and not book profits quickly. If instead you want to place a large number of trades with small profit targets, you may have to exit the moment a trade starts under-performing. There are plenty of other techniques one can use, for example trailing stop-losses, parabolic exits and Donchian breakouts. But in a nutshell, there is no good or bad exit plan, because there is no consistent pattern in stock prices: they are the result of human actions in the market, and it is best to assume they move completely randomly in the short run. An exit strategy that looks good and works for one month may fail the next month on the same stock. You keep experimenting until you find something that suits you.
One thing is certain, though: a lot of algos fail because they price in far less slippage than is actually experienced in the market. I personally work with a high 0.1% of position value as entry cost and another 0.1% as exit cost while designing my systems; this includes all brokerage, taxes and slippage. Hope it helps.
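To make the 0.1% + 0.1% assumption concrete, a toy calculation (the prices and quantity are purely illustrative):

```python
ENTRY_COST = 0.001  # 0.1% of position value: brokerage + taxes + slippage on entry
EXIT_COST = 0.001   # another 0.1% on exit

def net_pnl(entry_price, exit_price, qty):
    """Gross P/L of a long trade minus the assumed round-trip costs."""
    gross = (exit_price - entry_price) * qty
    costs = entry_price * qty * ENTRY_COST + exit_price * qty * EXIT_COST
    return gross - costs

# Buy 100 @ 250.00, sell @ 250.75: gross = 75.00, costs ~ 50.08, net ~ 24.93.
# A 0.3% gross move barely survives a 0.2% round-trip cost assumption.
print(net_pnl(250.00, 250.75, 100))
```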
@Shaha It will depend upon what scrips you are trading, your trade size, your frequency of trading, and your holding period. The more illiquid the scrips you trade (say you move from Nifty 50 to Nifty 100, then to 200, then to 500), the bigger your trade size, the higher your frequency and the shorter your holding period, the more the slippage will keep increasing. On index futures/options, however, slippage is extremely low.
Currently I trade intraday, and slippage can make or break my system. Hence I personally subscribe to the philosophy of making the worst-case assumption on slippage and working so that I cannot go wrong on it. Unfortunately, I have grown smug about that assumption and have not put in the requisite effort to calculate the slippage I actually encountered. I probably will put in some work on this.
But I would encourage you to run this experiment: take the model which seems viable with 0.05% costs and try running it with 0.10% costs. If the model turns unprofitable, I think you should reconsider it. You need to build in a margin of safety. A backtest has inherent execution assumptions (you trade at the exact second the signal was generated; you don't miss any trades; it includes fast-moving trades which you will most likely miss; some of the profitable trades will have such high profits that you will likely never see those trades in the real market (curve fitting)) which are not feasible in live trading. I would say: increase your slippage, halve your returns in the backtest, double your maximum drawdown, and those are the numbers you should target.
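The margin-of-safety haircut described above, written out; the factors are the poster's rules of thumb, not universal constants:

```python
def stress_backtest_metrics(annual_return, max_drawdown, slippage):
    """Rule-of-thumb haircuts: double the slippage, halve the returns,
    double the maximum drawdown, then judge the strategy on these numbers."""
    return {
        "slippage": slippage * 2,          # e.g. 0.05% -> 0.10%
        "annual_return": annual_return / 2,
        "max_drawdown": max_drawdown * 2,
    }

# stress_backtest_metrics(annual_return=0.40, max_drawdown=0.10, slippage=0.0005)
# -> {'slippage': 0.001, 'annual_return': 0.20, 'max_drawdown': 0.20}
```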
Also, if you use a trailing SL, slippage tends to work against you: by the time your order gets executed, the price has moved further in the wrong direction. But with a fixed take-profit, slippage mostly works in your favour, because the price momentum is in the favourable direction. So if your trade count is high, a fixed TP is the way to go to avoid high slippage.
@sauravkedia I like your posts. I'm fairly new and yet to figure out my strategy. I have some questions for you; feel free to skip any that dig into your system. I'll try to keep them at the system level.
1. Are your trades/backtests event-driven, or do you compute the required data on the fly to arrive at an entry/exit? The latter can create slippage due to the time overhead of the calculations.
2. Do you have any fallback for the data feed? I have seen the Kite websocket fail at times. Also, what do you do to make the system more resilient?
3. Any pointers on your infrastructure?
1. You are right, a vector/array-driven backtest (like those in Amibroker etc.) makes too many assumptions about execution which are difficult to reproduce in live conditions. There is time lost in calculations. Such backtests fill at the desired price (say the closing price, or the crossover price), which is not always possible in the live market, and they make no allowance for the fact that unless you place market orders (which means more slippage), you will also miss trades. However, putting an event-driven backtesting framework in place requires significant effort, which I have avoided. So, to compensate, I put a very high slippage in the model (0.1% on each entry and another 0.1% on each exit), and I enter via limit orders. I believe I will never actually hit these numbers. Indeed, a small change in these slippage costs makes a sea of difference to profitability in backtests. I also try to reject a couple of the best-performing trades in the backtest to see whether the system would have survived without those lucky trades (a small sketch of this check follows at the end of this post).
2. As I mentioned in your other post, because of data-feed issues I started with a setup where the strategy runs on Amibroker (the data feeds there are reliable, since backfill is available) and execution is in Python (using data from the websocket). So far it has worked well for me and I don't intend to change it. A big reason is that I don't enter a trade immediately at signal generation: my entries are stop-loss based, so the time lost in the execution engine picking up a signal doesn't really bother me.
3. On infrastructure, I have taken QSTrader from quantstart.com. It gives a great architecture and I have modified it heavily for my requirements. It strikes a great balance: an architecture that will serve you well for the foreseeable future, yet simple enough to understand, manage and own. However, you will need to devote time to understanding and modifying it; after that you can manage 100% on your own, without depending on them to introduce new features. Make no mistake: before it is useful to you, you will need to spend a good 1-2 months on it. It all depends upon your requirements, the kind of execution you are seeking and how much you want to scale up. For small accounts, it may not be worth the effort.
4. I am very conscious of issues around margin of error and resilience. I work with a simple approach: if it can fail, it will fail. I design the system to avoid errors wherever possible and to have redundancies. The execution architecture helps in this process, as it is designed in an event-driven and modular fashion. I use Amibroker because its data feeds have backfill. On an execution failure at Zerodha, I try to place the order again based on the error codes thrown. I emit a decent amount of logs to have a running update on the system's health, and I persist important pieces of data, for example the orders I have placed (also sketched at the end of this post).
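A sketch of the lucky-trade check from point 1, assuming backtest trades are available as simple dicts with a pnl field:

```python
def pnl_without_best_trades(trades, drop_n=2):
    """Drop the drop_n most profitable trades and recompute total P/L.
    If profitability collapses, the system was riding on a few lucky fills."""
    pnls = sorted((t["pnl"] for t in trades), reverse=True)
    return sum(pnls[drop_n:])
```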
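And a sketch of the persistence idea from point 4: journaling every placed order to an append-only file so that state can be rebuilt after a crash. The file name and record fields are illustrative:

```python
import json
import time

ORDER_JOURNAL = "orders_journal.jsonl"  # hypothetical append-only journal

def journal_order(order_id, params):
    """Append one JSON line per placed order; on restart, replay the
    journal to rebuild the list of orders the system believes are live."""
    record = {"ts": time.time(), "order_id": order_id, "params": params}
    with open(ORDER_JOURNAL, "a") as f:
        f.write(json.dumps(record) + "\n")
```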
@sauravkedia QSTrader is the one I will be moving to. My current custom backtester is slow, and that should solve it. However, I need to see how to adapt my ecosystem to the event-driven approach which QSTrader provides.
Does QSTrader provide an interface to AMI? How do you send signals from AMI to your system: via an API? I stopped using AMI a long time ago, as it was slow and kept hanging with multiple data-feed vendors. Have you seen any missed or duplicated trades between signal trigger and execution (technical slippage, so to speak, not the bid/ask kind)?
From my AMI experience, a backtest may give good results when evaluated on candle closes. In practice, though, how do you judge a trade on a running candle versus a completed one? In my case I use Python for strategies, and a completed 5-minute candle can land at any point in time, like 11:01, 11:02, 11:03, etc., depending on when the process started; AMI handles this for you and aligns bars at 11:05, 11:10, 11:15.
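One way to get Amibroker-style bar alignment in Python is to anchor bars to the clock rather than to whenever the process happened to start; a sketch using pandas (the DataFrame layout is assumed):

```python
import pandas as pd

def to_5min_bars(ticks):
    """ticks: DataFrame with a DatetimeIndex and a 'price' column (assumed).
    resample() anchors bars at :00, :05, :10 ... regardless of when the
    feed was started, matching how AMI buckets 5-minute candles."""
    return ticks["price"].resample("5min").ohlc()

def bar_is_complete(bar_start, now, width=pd.Timedelta(minutes=5)):
    """Only act on a bar once its window has fully elapsed."""
    return now >= bar_start + width
```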
Caveat: the signal-generation part of my live strategies is simple and doesn't watch the market continuously; exits are via SL / TP / end of day, and I seek additional price confirmation before firing the entry order to the broker. So latency at the time of signal generation is not very critical for me.
1. I work with AMI plus the GDFL plugin, and so far it has worked flawlessly.
2. I write Amibroker signals to a log file in JSON format. My Python code continuously reads this log file to extract trade signals. Being JSON, the log lines are easy to parse (a sketch of such a reader follows at the end of this post).
3. I compare the strategy's actual performance in the live market against Amibroker's backtest for the same period, and I find that the two have around 95% of trades in common. On each trade I check whether the profitability roughly matches; there are variations, but it's largely fine. With the kind of algo I run, even though it's intraday, Amibroker backtests work.
I am not clear on your last point, but if you are saying that you may take a position while a candle is still forming, there is a simple hack in Amibroker: backtest on 1-minute candles but upscale them to 5-minute intervals using Amibroker's Multiple Time Frame support. That way, for every 5-minute interval you are not restricted to the OHLCV values of that bar alone; you also see all the individual 1-minute candles which make it up. I work on 10-minute candles, but to get a better estimate of entry and exit prices, and to resolve same-bar entry/exit ambiguity (which came first?), I use this approach.
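A minimal sketch of the Amibroker-to-Python bridge described in point 2: tailing a log file to which AFL appends one JSON signal per line. The path and signal fields are illustrative:

```python
import json
import time

SIGNAL_LOG = "C:/amibroker/signals.log"  # hypothetical path written from AFL

def follow_signals(path=SIGNAL_LOG, poll_seconds=0.5):
    """Generator yielding each new JSON signal appended to the log file."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at end of file; ignore signals from before startup
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll_seconds)
                continue
            yield json.loads(line)  # e.g. {"symbol": "SBIN", "action": "BUY"}

# for signal in follow_signals():
#     execution_engine.handle(signal)  # hypothetical hand-off
```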
I developed a trading strategy using historical equity data for Nifty 100 stocks. The strategy uses several indicators requiring OHLC values, and it had a Sharpe of 3+ in backtests. Once I started implementing it in futures it underperformed, so to double-check I obtained futures data for one year; the Sharpe there was significantly lower, close to 1.8. The model, however, was showing the expected returns on equity. The problem is that in my backtests I assumed the transaction cost I would face in futures, which is lower; the strategy is not effective with the higher transaction costs we face in equities.
Transaction cost assumed one way: brokerage for futures (0.01%, taken from Zerodha's brokerage calculator for 1 lot) + slippage for 1 lot (assumption: 0.025%) = 0.035%. Is this too low? Is it a common phenomenon that strategies developed on equity data underperform in futures? If so, what would be a good rule of thumb for the underperformance?
Data for backtesting: if strategies backtested on equities underperform in futures, it would be better to backtest them on futures directly. Unfortunately, it's very tricky to get historical futures data; Kite historical provides just 1 month of past data at the intraday level.
Look forward to your comments and suggestions. Tagging people who have shared very helpful stuff in the discussion above: @sauravkedia @revendar @ramatius.
Thank you, Apoorv.
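For concreteness, the assumed one-way cost on a single Nifty futures lot (lot size and price below are illustrative):

```python
LOT_SIZE = 75      # illustrative Nifty futures lot size
PRICE = 10000.0    # illustrative futures price

notional = LOT_SIZE * PRICE                  # 750,000
one_way = notional * (0.0001 + 0.00025)      # 0.01% brokerage + 0.025% slippage
round_trip = 2 * one_way

print(one_way, round_trip)  # 262.5 one way, 525.0 round trip (0.07% of notional)
```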
1) Multi-indicator strategies are much less robust than simpler ones; slippage (in both data and price) affects them severely.
2) Observe the slippage in your actual trades and model it into the backtest. This is the best way to get reasonably realistic backtest results.
3) Design your strategies specifically for one market (equity, futures, ...). "Universal" strategies have limited profitability, because as markets become more efficient, that profitability gets absorbed by traders running the same or similar patterns.
4) Do deep analysis of your backtest/live-test results and use the insights to refine the strategy for better profitability. Most people skip this step and lose out on profits; worse, they continue to use a strategy that has no real profitability whatsoever.
@Apoorv If you have developed the strategy on equity (EQ), live-test it on EQ for a week or so to see its effectiveness. Keep transaction charges aside for the moment and try it out. Analyze all the trades and find out what is going wrong. If your strategy aims for >1% moves in EQ, chances are that it will work in futures too, but you cannot enter a futures position using EQ data. (I'm also developing a strategy on EQ due to data constraints.) When your EQ live test is complete (get results from at least 25 trades on various stocks), check whether futures would have given the same returns. You can actually run a live EQ strategy alongside a simulated futures paper-trading run in the same instance; this can give you clear information.
@revendar Thanks for your reply. I did the test with equity and it performed as expected; futures, however, underperformed. Lesson learned. I have modified the strategy to keep it profitable in equity despite the higher transaction cost. For futures, I have got hold of daily data and am working on strategies with it. I am on the lookout for intraday futures data for short-term strategies.
@ramatius Will keep your suggestions in mind.
@krishnanm2006 I used data from 2013 to 2016 from Kite. Out-of-sample testing was done on data from 2017.
http://autonifty.com/trading.aspx