I'm trying to download historical data for multiple symbols using the Python kiteconnect library. Below is the error I'm getting intermittently.
Please clarify the following doubts: is there a constraint that we can't access multiple symbols at a time, i.e. should we avoid calling the kite.historical() method in parallel?
Do we need to wait in between two symbols while fetching data?
Traceback (most recent call last):
  File "", line 10, in
  File "/usr/local/lib/python2.7/dist-packages/kiteconnect/__init__.py", line 397, in historical
    "interval": interval})
  File "/usr/local/lib/python2.7/dist-packages/kiteconnect/__init__.py", line 435, in _get
    return self._request(route, "GET", params)
  File "/usr/local/lib/python2.7/dist-packages/kiteconnect/__init__.py", line 521, in _request
    raise(exp(data["message"], code=r.status_code))
kiteconnect.exceptions.TokenException: Invalid API credentials
1) The historical() method returns data for one instrument at a time, but calling it sequentially for various instruments should not be an issue. 2) 'Invalid credentials' should not arise unless the session has expired.
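Sequential fetching with a short pause and a retry on failure can be sketched like this. The `fetch` callable stands in for a `kite.historical(...)` call, and the retry count and delay are illustrative values, not documented kiteconnect rate limits:

```python
import time

def fetch_with_retry(fetch, retries=3, delay=1.0):
    """Call fetch() and retry on failure, pausing between attempts.

    `fetch` is a zero-argument callable standing in for a
    kite.historical(...) call; retries/delay are illustrative,
    not official kiteconnect limits.
    """
    last_exc = None
    for _ in range(retries):
        try:
            return fetch()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

Fetching symbols one by one through a wrapper like this avoids hammering the API from many processes at once.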
Is it possible to share the code snippet that's resulting in the errors?
Please find below the code I'm using. I'm trying to execute the get_rates method in parallel to capture stock prices.
from kiteconnect import KiteConnect
from datetime import date, timedelta as td
import multiprocessing as mp
import pandas as pd
import time
import random

api_key = 'xxxxxxxxx'
kite_Accnt1 = KiteConnect(api_key=api_key)
#kite_Accnt1 = KiteConnect(kite_Accnt1_api_key="xxxxxx")
redirect_url = kite_Accnt1.login_url()
print redirect_url

kite_Accnt1_secret_key = "xxxxxxxxxxxx"
kite_Accnt1_request_token_here = raw_input('Provide the request_token_here')
kite_Accnt1_request_token_here = kite_Accnt1_request_token_here.strip()
data1 = kite_Accnt1.request_access_token(kite_Accnt1_request_token_here, secret=kite_Accnt1_secret_key)
kite_Accnt1.set_access_token(data1["access_token"])
def get_rates(Inst):
    def capture(Inst):
        time.sleep(random.randrange(5))
        his = kite_Accnt1.historical(int(nse_inst[Inst]), day1, day2, 'minute')
        Close = [k['close'] for k in his]
        Open_1 = [k['open'] for k in his]
        Low = [k['low'] for k in his]
        High = [k['high'] for k in his]
        date1 = [k['date'] for k in his]
        Volume = [k['volume'] for k in his]
        df = pd.DataFrame([Close, Open_1, Low, High, date1, Volume]).transpose()
        df.columns = ['Close', 'Open', 'Low', 'High', 'Date', 'Volume']
        df['Inst'] = [nse_inst[Inst] for k in Close]
        df['date_1'] = [day for k in Close]
        return df

    try:
        df_stoc = capture(Inst)
    except:
        try:
            time.sleep(random.randrange(4, 5))
            df_stoc = capture(Inst)
        except:
            df_stoc = pd.DataFrame([['Close_Error'], ['Open_error'], ['Lowe_error'], ['High_error'], [day], [nse_inst[Inst]]]).transpose()
            df_stoc.columns = ['Close', 'Open', 'Low', 'High', 'Date', 'Volume']
            df_stoc['date_1'] = [day for k in ['Close_Error']]
    return df_stoc
for day in days_list[0:2]:
    pool = mp.Pool(10)
    Results = pool.map(get_rates, range(0, 15, 1))
    #Results = pool.map(get_rates, range(0, len(nse_inst), 1))
    pool.close()
    Store_from_days.append(pd.concat(Results))
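As an aside, since the historical call returns a list of dicts, the per-field list comprehensions in capture() can be collapsed into a single DataFrame constructor. A sketch with made-up candle values:

```python
import pandas as pd

# Made-up candles mimicking the list-of-dicts shape of the historical response.
candles = [
    {'date': '2017-01-02 09:15:00', 'open': 100.0, 'high': 101.0,
     'low': 99.5, 'close': 100.5, 'volume': 1200},
    {'date': '2017-01-02 09:16:00', 'open': 100.5, 'high': 100.8,
     'low': 100.1, 'close': 100.2, 'volume': 800},
]

# One column per key, one row per candle -- no per-field comprehensions needed.
df = pd.DataFrame(candles)
```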
@menaveenn I'm not sure, but I have a feeling the multi-processing routine could be using Kite Connect instances that have been initialised without the right access_token. Could you try a different approach without the mp module?
@Kailash I was able to read data with multiprocessing from most cores. For example, on a server with 15 cores, I get results successfully from 12 or 13 cores while the remaining 2 or 3 fail. Even when I run it in a simple for loop, it sometimes fails to fetch the results.
Sure @Kailash, I will try a different approach. Thanks for your time.