The latency issue from the AWS server (Mumbai) to the Kite API calls has started again. Could you please check with the AWS team? I have sent you the tracert logs in your inbox.
Noticed this as well, especially during the first half hour after market open. Many of my calls failed on the UI side, which has a 3-second timeout per request, because the /positions and /holdings API calls were taking more than 3 seconds to complete. Why are we having connectivity issues lately?
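For reference, here is a minimal Python sketch (illustrative only, using plain requests against the documented Kite Connect v3 REST endpoints, with placeholder credentials) of the kind of 3-second-timeout calls that were failing:

import time
import requests

# Placeholder credentials; replace with your own api_key/access_token.
API_KEY = "your_api_key"
ACCESS_TOKEN = "your_access_token"

HEADERS = {
    "X-Kite-Version": "3",
    "Authorization": f"token {API_KEY}:{ACCESS_TOKEN}",
}

ENDPOINTS = {
    "positions": "https://api.kite.trade/portfolio/positions",
    "holdings": "https://api.kite.trade/portfolio/holdings",
}

def timed_get(name, url, timeout_s=3.0):
    # Time the call and report whether it finished within the UI's 3-second budget.
    start = time.monotonic()
    try:
        resp = requests.get(url, headers=HEADERS, timeout=timeout_s)
        elapsed_ms = (time.monotonic() - start) * 1000
        print(f"{name}: HTTP {resp.status_code} in {elapsed_ms:.0f} ms")
    except requests.Timeout:
        elapsed_ms = (time.monotonic() - start) * 1000
        print(f"{name}: timed out after {elapsed_ms:.0f} ms (limit {timeout_s} s)")

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        timed_get(name, url)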
Hi @krtrader, we've raised this with AWS again. Since you are also on the AWS network, we suggest you raise a ticket with them as well, comparing the ping time and traceroute to api.kite.trade from your local machine and from your AWS instance. Which instance type are you using?
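As an illustration, a quick probe like the sketch below, run from both the local machine and the cloud instance, makes that comparison easy. It times plain TCP connects to api.kite.trade:443, so it works even where ICMP ping is blocked; hop-by-hop detail still needs traceroute/mtr.

import socket
import statistics
import time

HOST, PORT, SAMPLES = "api.kite.trade", 443, 10

def connect_time_ms():
    # Time a single TCP handshake to the API host.
    start = time.monotonic()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass
    return (time.monotonic() - start) * 1000

samples = [connect_time_ms() for _ in range(SAMPLES)]
print(f"{HOST}: median {statistics.median(samples):.1f} ms, "
      f"min {min(samples):.1f} ms, max {max(samples):.1f} ms over {SAMPLES} connects")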
We are experiencing high latency (~53 ms, earlier ~1 ms) connecting to api.kite.trade from AWS (Mumbai data centre). We are using c5 instance types. From our personal laptop, the latency is only ~7 ms. Kindly check this.
Here is the traceroute from AWS
tracert api.kite.trade
Tracing route to api.kite.trade [104.18.91.38] over a maximum of 30 hops:
  1     *        *        *     Request timed out.
  2     *        *        *     Request timed out.
  3     *        *        *     Request timed out.
  4     *        *        *     Request timed out.
  5     *        *        *     Request timed out.
  6    <1 ms    <1 ms    <1 ms  100.65.10.33
  7     1 ms    <1 ms     1 ms  52.95.67.199
  8     6 ms    12 ms     3 ms  52.95.67.150
  9     1 ms     1 ms    <1 ms  52.95.65.236
 10     1 ms     1 ms     1 ms  115.114.89.57.static-Mumbai.vsnl.net.in [115.114.89.57]
 11     *        *        *     Request timed out.
 12    22 ms    22 ms    32 ms  ix-ae-4-2.tcore1.cxr-chennai.as6453.net [180.87.36.9]
 13    59 ms    59 ms    59 ms  if-ae-34-2.tcore1.svq-singapore.as6453.net [180.87.36.41]
 14    56 ms    57 ms    56 ms  120.29.215.101
 15    63 ms    63 ms    63 ms  104.18.91.38
This looks like Cloudflare routing the traffic through the fastest available path for your source network. It's something we have no control over. It can depend on multiple factors: if there's increased latency between your source network and our destination, Cloudflare will automatically route the traffic through a faster path, which could pass through a different geographic region as well. That's why you see two different routes from AWS (AWS's ISP) and your laptop (your ISP). This will correct itself automatically as and when the route becomes faster. Now, it's true that our servers are in AWS, but all our services sit behind Cloudflare for its WAF and DDoS protection, so you are essentially reaching Cloudflare rather than AWS when you connect to api.kite.trade. You can read more about this routing here: https://blog.cloudflare.com/argo/
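If you want to see which Cloudflare edge location you are currently being routed to, Cloudflare's /cdn-cgi/trace endpoint returns key=value lines including a colo field, which is the IATA-style code of the edge data centre (for example BOM for Mumbai versus SIN, NRT or DXB for an overseas edge). A minimal sketch:

import requests

resp = requests.get("https://api.kite.trade/cdn-cgi/trace/", timeout=5)
# The response body is plain text, one key=value pair per line.
trace = dict(line.split("=", 1) for line in resp.text.strip().splitlines())
print("edge colo:", trace.get("colo"), "| client ip:", trace.get("ip"))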
@vishnus @sujith I am not an expert on networks, but can this be reported to Cloudflare and sorted? My VM is in Azure Mumbai, but the traffic goes to Dubai and then to Kite/AWS Mumbai! In fact, it is faster from my home network in Chennai, and it takes about 1 ms from Singapore. I am attaching the tracert log files from these locations.
@gandavadi, as mentioned in the Cloudflare article above, the algorithm finds the best available route. I don't think we can lobby them to route through a specific node.
As I explained in my previous comment, Cloudflare routes traffic based on the latency between two points. There are any number of variables that could cause it to be routed through a different geographic location. When you ran the traceroute, one of the hops between your AWS server and Cloudflare's closest node probably had higher latency, which caused them to reroute through another, faster path (it's mostly an ISP issue, and we have seen it happen with our regular clients as well while they access kite.zerodha.com); these things are out of our control. If the delay continues, Cloudflare retains the same path. We will raise an issue with Cloudflare again and ask for an update. Meanwhile, could you share fresh traceroutes from your AWS servers?
We just had a call with Cloudflare to understand how this client-side routing ends up going through entirely different geographic locations, and we shared the MTR and traceroute outputs you had sent. These cloud providers rely on multiple ISPs, and routing to the external network (the internet) depends on those ISPs. In the MTR and traceroutes @HowUTrade shared, you can see that the second-to-last hop IP (180.87.181.187) belongs to the TATA ISP in Tokyo, and the next hop is a Cloudflare edge server in Tokyo. So what happened here is that traffic went from AWS to AWS's ISP (TATA in this case), which sent it through its Tokyo edge, and it therefore reached the closest Cloudflare edge server in Tokyo. This is something neither they nor we have control over. These issues are common with ISPs, and especially with cloud providers like AWS, GCP and Azure, which use multiple ISPs and can route your external traffic through any of them. Please find the screenshots below:
Tracing route to api.kite.trade [104.18.91.38] over a maximum of 30 hops:
  1     *        *        *     Request timed out.
  2     *        *        *     Request timed out.
  3     *        *        *     Request timed out.
  4     *        *        *     Request timed out.
  5     *        *        *     Request timed out.
  6    27 ms    27 ms    27 ms  be-20-0.ibr01.bom01.ntwk.msn.net [104.44.11.1]
  7    26 ms    26 ms    26 ms  be-8-0.ibr01.bom30.ntwk.msn.net [104.44.7.168]
  8    27 ms    27 ms    27 ms  be-10-0.ibr01.dxb20.ntwk.msn.net [104.44.28.160]
  9    26 ms    26 ms    26 ms  ae102-0.icr02.dxb20.ntwk.msn.net [104.44.20.232]
 10    30 ms    26 ms    26 ms  ae22-0.ier02.dxb20.ntwk.msn.net [104.44.238.228] -- AZURE in United States
 11    30 ms    29 ms    29 ms  185.1.15.41 -- Emirates ISP in Dubai
 12    29 ms    29 ms    29 ms  104.18.91.38 -- Cloudflare
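As an aside, a quick way to check who operates a given traceroute hop is to reverse-resolve the hop IPs, which is how names like vsnl.net.in / as6453.net (TATA) or ntwk.msn.net (Microsoft) show up. A small sketch using a few hop addresses taken from the traceroutes above; not every hop has a reverse DNS entry, hence the fallback:

import socket

HOPS = ["115.114.89.57", "180.87.36.9", "104.44.11.1", "104.18.91.38"]

for ip in HOPS:
    try:
        name = socket.gethostbyaddr(ip)[0]  # PTR lookup
    except OSError:
        name = "(no reverse DNS)"
    print(f"{ip:>16}  {name}")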
Looks like the issue is resolved. Here is the latest traceroute from AWS, back to ~1 ms:
Tracing route to api.kite.trade [104.18.91.38] over a maximum of 30 hops:
  1     *        *        *     Request timed out.
  2     *        *        *     Request timed out.
  3     *        *        *     Request timed out.
  4     *        *        *     Request timed out.
  5     *        *        *     Request timed out.
  6     1 ms    <1 ms     8 ms  100.65.10.65
  7     1 ms    <1 ms    <1 ms  52.95.65.130
  8    31 ms     1 ms    <1 ms  52.95.67.38
  9    <1 ms    <1 ms    <1 ms  52.95.67.23
 10     1 ms     1 ms     1 ms  99.82.179.153
 11    <1 ms    <1 ms    <1 ms  104.18.91.38
My apologies for the delayed response; I was out of action for a couple of weeks. Here are the details you requested. All of these are from the Azure Mumbai location.
@gandavadi I cannot be certain of that. As @HowUTrade shared, even with AWS, traffic sometimes goes overseas before hitting the internet. Unless you are in colocation with the exchanges, I don't think there is a permanent solution for ultra-low latency; the moment the internet is involved, the routing is entirely in the ISPs' control.
The issue started again on 06-May-2019.
Can we get a permanent fix for this issue from AWS? Could you please inform them?
Can we get a solution for this? I have been facing it for three weeks now, and it increases my overall slippage.
I raised a ticket with the AWS team last week, but there has been no reply yet.
Did you get any response from AWS? I am still facing the issue.
The API calls started taking around 500 ms again from 12th Feb 2020.
Earlier they were taking <100 ms.
This seems like the same issue as before. Could you please check with the AWS team? I am running my app in AWS Mumbai.
@HowUTrade are you facing a similar issue?
Can you run a traceroute and private-message the logs?
The issue got resolved. Thanks for the support.
Could you please give an update on this?
I have shared this thread across the team. We will get back to you in a while.
Thanks for the update.
Please report this to Cloudflare and fix it.
"Lobby"??!!....wow....So you mean to say that we have to live with this issue? No solution or suggestions or improvement possible?
You can read more about how this works here: https://blog.cloudflare.com/argo/
Also, could you run the following from your AWS server and share the results (a helper script to collect all three outputs into one file is sketched below the list):
1. curl https://api.kite.trade/cdn-cgi/trace/
2. nslookup api.kite.trade
3. mtr -rwc 25 api.kite.trade
Thanks
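For convenience, here is a small sketch (assuming a Linux instance with curl, nslookup and mtr installed; mtr usually needs to be installed separately) that collects all three outputs into one text file which can then be attached here or to an AWS ticket:

import subprocess
from datetime import datetime

COMMANDS = [
    ["curl", "-s", "https://api.kite.trade/cdn-cgi/trace/"],
    ["nslookup", "api.kite.trade"],
    ["mtr", "-rwc", "25", "api.kite.trade"],
]

with open("kite_latency_report.txt", "w") as out:
    out.write(f"collected at {datetime.utcnow().isoformat()}Z\n")
    for cmd in COMMANDS:
        out.write(f"\n$ {' '.join(cmd)}\n")
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
            out.write(result.stdout or result.stderr)
        except FileNotFoundError:
            out.write(f"{cmd[0]} is not installed on this machine\n")

print("wrote kite_latency_report.txt")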
Please find attached the latest results from AWS Mumbai. To my surprise, it is now taking more than double the time (~140 ms) compared with what was reported earlier.
Source IP: 13.126.85.127
Instance Type: c5.large
Availability Zone: ap-south-1c
@gandavadi could you post the same for Azure Mumbai?
Thanks
The traceroute @HowUTrade shared:
Thanks
Thanks for the update.
Looks fantastic. I will move to AWS if there is no other solution.
@vishnus Let me know if moving to AWS is the only option I have.
Thanks