Hi, I have a quick question: whenever I want to download tick data, I first have to search for its instrument token. Is there any way to find the instrument token by its name? It would be really helpful.
You have to download the instrument token file at the beginning of the day and set up an instrument-token-to-symbolname lookup table using whatever method suits you. I use Redis hashes or Python dictionaries, where the instrument token is the key and the symbolname is the value. Dictionaries are Python specific. Redis is language agnostic, so it also works in a mixed environment where different programs are written in different languages for any reason.
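As a minimal sketch of building that table, assume the daily instrument dump is a CSV with `instrument_token` and `tradingsymbol` columns (the column names are an assumption; adjust them to whatever your broker's dump actually uses):

```python
import csv
import io

def build_lookup(csv_text):
    # Build a token -> symbolname dict from the instrument dump CSV.
    # Column names are assumptions; match them to your broker's file.
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["instrument_token"]: row["tradingsymbol"] for row in reader}

# Tiny made-up sample standing in for the real downloaded file.
sample = "instrument_token,tradingsymbol\n260105,NIFTY BANK\n60417,ASIANPAINT\n"
instrumentsDict = build_lookup(sample)
print(instrumentsDict["260105"])  # NIFTY BANK
```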
So as you iterate over the individual components of each tick, you would have a line that does the lookup and returns the symbolname. Something like

#python
symbolname = instrumentsDict['260105']  # will return "NIFTY BANK" as value of symbolname

#redis
symbolname = r.hget('instruments', 260105)  # will return "NIFTY BANK" as value of symbolname
You can download the instrument list and set up the lookup table well before the market opens, anytime between 8 and 9 am, and automate the whole thing with a cron scheduler.
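For example, a crontab entry along these lines (the script path here is hypothetical) would rebuild the table every weekday morning at 8:30:

```
# min hour day month weekday  command
30 8 * * 1-5 /usr/bin/python3 /home/user/scripts/build_lookup.py
```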
Also, I would like to add that you can set up two lookup tables going both ways: one mapping instrument token to symbolname, and the other mapping symbolname to instrument token.
So if you have a symbolname and want to look up its instrument token, you can set up another Redis hash or Python dictionary where the symbolname is the key and the instrument token is the value, and then do a lookup like

#python
insttoken = symbolDict['ASIANPAINT']  # will return "60417" as value of insttoken

#redis
insttoken = r.hget('symbols', 'ASIANPAINT')  # will return "60417" as value of insttoken
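One way to keep the two tables from drifting apart is to build both directions from the same list of pairs in one pass; a sketch:

```python
# (token, symbol) pairs as they would come from the instrument dump.
pairs = [("260105", "NIFTY BANK"), ("60417", "ASIANPAINT")]

# Forward table: token -> symbolname.
instrumentsDict = {token: symbol for token, symbol in pairs}
# Reverse table: symbolname -> token.
symbolDict = {symbol: token for token, symbol in pairs}

# Round-tripping a token through both tables gives it back.
assert symbolDict[instrumentsDict["260105"]] == "260105"
```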
Since most of my code is Python, what I have done is create a module which runs via the cron scheduler once a day at about 8:30 am and saves the dictionaries as a pickle file. In any code/script that needs this functionality, I just import my custom module, which rebuilds the mapping if it does not exist or is older than one day; otherwise it loads the mapping from the saved pickle file and the data is available to use immediately. No reprocessing every time a script is run; it is done just once a day.
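A minimal sketch of that load-or-rebuild logic, using the pickle file's modification time as the staleness check (`fetch_instruments` is a placeholder for the real download step):

```python
import os
import pickle
import tempfile
import time

# Cache location and staleness threshold; paths here are illustrative.
CACHE = os.path.join(tempfile.gettempdir(), "instruments.pkl")
MAX_AGE = 24 * 60 * 60  # one day, in seconds

def fetch_instruments():
    # Stand-in for downloading and parsing the daily instrument dump.
    return {"260105": "NIFTY BANK", "60417": "ASIANPAINT"}

def get_lookup():
    # Reuse the saved pickle unless it is missing or older than MAX_AGE.
    if os.path.exists(CACHE) and time.time() - os.path.getmtime(CACHE) < MAX_AGE:
        with open(CACHE, "rb") as f:
            return pickle.load(f)
    table = fetch_instruments()
    with open(CACHE, "wb") as f:
        pickle.dump(table, f)
    return table
```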
If you are using Python, maintain a list or pandas DataFrame (CSV) of the instrument symbols you want to trade in, and run a loop each morning that matches your instrument symbols with their instrument tokens. You can then create a dictionary mapping tokens to symbols.
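Sketched with pandas, filtering the full instrument dump down to a watchlist and zipping tokens to symbols (column names and the sample data are assumptions):

```python
import pandas as pd

# Stand-in for the full instrument dump loaded from CSV.
instruments = pd.DataFrame({
    "instrument_token": [260105, 60417, 12345],
    "tradingsymbol": ["NIFTY BANK", "ASIANPAINT", "SOMETHINGELSE"],
})
# Only the symbols you actually want to trade.
watchlist = ["NIFTY BANK", "ASIANPAINT"]

# Keep watchlist rows, then map token -> symbol.
subset = instruments[instruments["tradingsymbol"].isin(watchlist)]
tokenToSymbol = dict(zip(subset["instrument_token"], subset["tradingsymbol"]))
```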
Also, if you are going for a multi-language system, use Redis. I have a custom NLP model in Python and a web/news scraper which analyzes all the news, does sentiment and fundamental analysis of certain stocks, and pushes the data into Redis, where it can be consumed by a different program written in a different language.