Scanning limits

itsram90
Hi,

I know that at any point in time we can make only three hits per second to the Kite API. My ML algo scans the index and all the underlying equities, and there is a separate block that handles only order management, so in total it makes 3 hits per second.

Normally it runs perfectly fine, but sometimes it gets a "too many requests" error and exits. Can you suggest something to overcome this challenge? (The occurrence is random: sometimes after 30 minutes, sometimes after an hour, etc.)

Thanks
Ram
  • Kailash
    Are you sure your app isn't sending more than 3 concurrent requests?

    You should connect to the WebSocket API and stream live data rather than polling the quotes API.
  • itsram90
    Do they count differently? I was thinking that all of these count as one. I am using all three: WebSocket, quote, and historical.
  • sujith
    Hi @Ram,
    We have a rate limit only for HTTP requests, not for WebSockets. So at any point in time you can have the WebSocket running and make up to 3 HTTP requests per second.
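The cap above can be enforced client-side with a small sliding-window throttle. A minimal sketch in plain Python; the class and its names are illustrative, not part of the Kite SDK — only the 3-per-second figure comes from this thread:

```python
import threading
import time
from collections import deque

class RateLimiter:
    """Allow at most `max_calls` calls per sliding `period` seconds."""

    def __init__(self, max_calls=3, period=1.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()          # monotonic timestamps of recent calls
        self.lock = threading.Lock()  # scanner and order blocks may share one limiter

    def acquire(self):
        """Block until one more call is allowed, then record it."""
        with self.lock:
            now = time.monotonic()
            # drop timestamps that have fallen out of the window
            while self.calls and now - self.calls[0] >= self.period:
                self.calls.popleft()
            if len(self.calls) >= self.max_calls:
                # sleep until the oldest recorded call exits the window
                time.sleep(self.period - (now - self.calls[0]))
                now = time.monotonic()
                while self.calls and now - self.calls[0] >= self.period:
                    self.calls.popleft()
            self.calls.append(time.monotonic())
```

Calling `limiter.acquire()` before every HTTP request, from both the scanning block and the order-management block, keeps the combined rate under the cap no matter how the work is split across threads.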
  • sameer
    sameer edited January 2017
    Currently, after sending each request I sleep for 500 ms, but I still sometimes get a 429 error.

    I have come up with the following theory:

    I think we might get a 429 error even if we sent only 2 requests in 1 second.

    Let us say I send requests at 13:01:15.160, 13:01:15.860, 13:01:16.160, and 13:01:16.860.

    Let us say the Kite server receives them (not in Kite's control) or processes them (maybe the Kite team can improve something here) at 13:01:16.120, 13:01:16.140, 13:01:16.160, and 13:01:16.880; then the Kite server will throw a 429.
  • sameer
    This is just a guess ... I might be wrong.
    The Kite team can validate the above theory.
  • sujith
    Hi @sameer,
    Let us say I send requests at 13:01:15.160, 13:01:15.860, 13:01:16.160, and 13:01:16.860.
    This scenario is counted as 2 requests per second.
    But for this to work, your clock must exactly match our server time, which is not possible.
    That is the reason why you may get a 429 error.
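Assuming the server counts requests per fixed wall-clock second (which is what the clock-mismatch explanation above implies; the exact counting scheme is an assumption, not confirmed by Kite), a small simulation shows how a correctly paced client can still overflow one server-side window when network delay bunches arrivals together:

```python
from collections import Counter

LIMIT = 3  # HTTP requests per second, the cap discussed in this thread

def count_429s(arrival_times, limit=LIMIT):
    """Fixed-window counter: bucket arrivals by their integer second and
    treat every request beyond `limit` in a bucket as a 429."""
    per_second = Counter(int(t) for t in arrival_times)
    return sum(max(0, n - limit) for n in per_second.values())

# Client sends 3 requests per second, evenly spaced across two seconds...
sends = [15.0, 15.4, 15.8, 16.0, 16.4, 16.8]
assert count_429s(sends) == 0  # clocks aligned, arrivals on time: no 429

# ...but a ~200 ms network delay pushes the last send of second 15 into the
# server's second-16 window, which then holds 4 requests.
arrivals = [15.05, 15.45, 16.01, 16.05, 16.45, 16.85]
print(count_429s(arrivals))  # prints 1: one request rejected with a 429
```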
  • sameer
    sameer edited January 2017
    Hi @sujith,
    My point is:

    How does the Kite server know I sent the requests at 13:01:15.160, 13:01:15.860, 13:01:16.160, and 13:01:16.860?

    While enforcing the limit of 3 requests per second, the Kite server will
    either A) consider the time T1 when the request is received by the Kite server,
    or B) consider the time T2 when the request is processed by the Kite server.

    If (A) is already implemented, then there is not much that can be done (the problem is then due to network delays).
    But if the Kite server uses approach B, then there is still scope for improvement on the server side.

    As I said (and as explained in the example above), I am currently sending only two requests per second (from an AWS data centre) and still sometimes (about one in 100 requests) get a 429 error.
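Until the limits are revised, one pragmatic client-side mitigation for these sporadic 429s is to retry with exponential backoff instead of exiting. A generic sketch; `TooManyRequests` and `call` are illustrative stand-ins for whatever HTTP wrapper and exception the algo actually uses, not Kite SDK names:

```python
import time

class TooManyRequests(Exception):
    """Stand-in for an HTTP 429 response from the API."""

def with_backoff(call, max_retries=5, base_delay=0.5):
    """Run `call()`; on a 429, sleep 0.5 s, 1 s, 2 s, ... and retry.

    Re-raises after `max_retries` failed attempts so the caller can
    decide whether to halt the algo.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except TooManyRequests:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

This turns a random 429 into a short delay rather than a crash; pairing it with a client-side throttle keeps retries rare in the first place.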
  • itsram90
    The randomness of these rejections is a real problem; working on high-intensity algos becomes difficult if I use time.sleep for even 1 second after every step. I am currently deploying all this on an AWS Mumbai server, but if these requests get delayed for any length of time, the whole logic of taking profit from the listed buyers and sellers will fail. I agree with @sameer that improvement is required in this area.
    I was thinking of using the WebSocket and storing the data stream in my Hadoop database, so that I can at least reduce the calls for data; then it will just be a matter of managing positions and placing orders.

    @sameer, see if you can move your image to AWS Mumbai. It will help you shave off some milliseconds of latency.

  • itsram90
    Also, will there be any improvement after the API infra is ramped up?
  • Kailash
    @itsram90 For anything realtime, you should use WebSockets. Polling HTTP APIs is not very scalable.

    @sameer We are considering revising the rate limits soon. They were set in the first place a) because many users were incessantly polling the APIs needlessly (dozens to hundreds of non-stop requests), and b) to avoid malicious activity.
  • itsram90
    Kudos to you guys for working on a Sunday :) Cheers.
  • Kailash
    @itsram90 Cheers :) It's practically a 14x7 affair.
  • Vijaykrsingh
    Great effort guys