I just want to know whether AWS (Mumbai) or a high-end PC is better for execution, bulk data, or OHLC processing. I see that AWS is more expensive than a PC, and that AWS does not provide a high-end AMD processor or RAM. Is that correct? For working while commuting, AWS does seem more practical than a PC.
Kite Connect is just a REST-like API. From the API's perspective the computation power needed is not much, but everything depends on what you are doing before making an API call and how you do it. A basic PC will do the job. If you are located in a place where internet availability and speed are good and reliable, then a local PC would do. After your setup is complete, you can run it on AWS and check whether it gives you a significant advantage. If your ISP is not good, then it is better to start on AWS itself. PS: Please keep in mind that Kite Connect is not suitable for HFT or latency-based trading.
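For illustration, the API side really is just a couple of calls; a minimal sketch, assuming the official kiteconnect Python client (the key, token and instrument below are placeholders):

from kiteconnect import KiteConnect

kite = KiteConnect(api_key="your_api_key")      # placeholder
kite.set_access_token("your_access_token")      # placeholder from the login flow

# A single LTP/quote request is one HTTPS round trip; the heavy work
# (signals, sizing, risk checks) happens in your own code before this call.
quote = kite.ltp(["NSE:INFY"])
print(quote["NSE:INFY"]["last_price"])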
AWS is any day better. When you run on your local system, the main issue is the broadband internet connection; there will always be network disruptions on home broadband. AWS runs in best-in-class datacenters with multiple redundancies, so you will not get any network issues.
Also, there is a whole lot of difference between server-grade and desktop-grade hardware. I have seen that when I am processing ticks, my AWS server runs significantly faster than my local machine, even though the local machine has a higher clock speed.
Also, AWS does provide AMD servers. The following is the output of cpuinfo on my AWS server.
Yes, AWS is expensive, so you should run live production code on AWS and then back up and download the data to a local machine for backtest/dev/staging etc. Storage costs on AWS can quickly add up to multiples of your compute costs if one is not careful.
It depends on how many instruments you want to subscribe to and what exactly you want to do. I track and process all ticks for NIFTY and BANKNIFTY. That's about 300-350 instruments, and I run it on an m6a.2xlarge (8 CPU, 32 GB, 200 GB SSD) with considerable headroom.
You can't compare apples to oranges (an AMD 7950X/64 GB desktop to AWS), as your desktop might be sitting idle 90% of the time and it doesn't really hurt you because you are not paying for it monthly. On AWS you need to size your instance carefully: if you get an equivalent 32-core machine with 64 GB RAM and its CPU and RAM utilisation is less than 10%, you are throwing away money because you are billed monthly. And on AWS everything adds up: storage, network data transferred, etc. Calculating AWS instance sizing and costs is a different ballgame altogether.
@MAG Except for the advantage of a virtual machine I can reach while commuting, and the lower latency, I believe AWS is not ideal for me. I checked the pricing of the instance; it is actually too much overhead for me to keep the subscription. I am still using the desktop. It will not be idle 90% of the time, as I presently have 12 projects running on it, and I am attempting to grow across numerous initiatives. Thank you for your help with AWS.
@ANL The code needs to be efficient and optimised; memory and CPU come later. My program subscribes to around 350 instruments and stores the past 3 hours of data in RAM, and it does not even take 1 GB.
The execution of my algo is not even 100 µs. So don't think too much about the machine, unless you are going towards deep learning, where you need a good GPU and CPU so that you don't bottleneck.
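As a rough illustration of how little RAM that needs, a sketch of a per-instrument ring buffer (the one-tick-per-second assumption and the field layout are illustrative, not the actual structure); ~350 instruments × ~10,800 ticks × a hundred-odd bytes per tick tuple lands in the few-hundred-MB range, consistent with the under-1-GB figure:

from collections import deque

MAX_TICKS = 3 * 60 * 60     # assumes roughly 1 tick/second for 3 hours

buffers = {}                # instrument_token -> bounded buffer of recent ticks

def on_tick(token, ts, last_price, qty):
    # O(1) append; ticks older than the 3-hour window drop off automatically
    buffers.setdefault(token, deque(maxlen=MAX_TICKS)).append((ts, last_price, qty))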
The main issue that no one is mentioning is that home broadband will never be stable. And if your network is unstable, there is no point in having a server-grade system at home. You can write the best trading strategy and the most performant code, and it will all be useless when you have an open position that you cannot close because your network flaked on you, resulting in probable losses.
Yes, but AWS or any cloud provider is expensive. So:
1. You should go to AWS only if you have a budget of at least 7-10K per month, which ideally should come from your trading profits.
2. Your code has to be optimised to the best possible extent, and you need robust, automated devops systems to back up and download old data, keeping only a few days of data on the cloud server. I think on AWS a 500 GB GP3 SSD costs about 6K a month for storage alone, not accounting for any snapshots or backups.
There is a lot more; can't put everything here.
@kakush30 As you stated, you are storing 3 hours of data in memory. While retrieving from memory, is there any chance of missing data?
What kind of data do you retrieve from the API? If you are retrieving from a websocket, would it be a bottleneck, or not a good idea, to store it in memory?
Can you please explain whether you are dealing with high-frequency data or low-frequency data?
@ANL Yeah, with Python it will definitely bottleneck. Even with async, threading or multiprocessing (I have tried everything), it will start to miss data or bottleneck when getting it from the websocket.
And not just 1 or 2 ticks; sometimes it will miss a whole minute for certain instruments. That's why I had to shift to Go and Rust for certain sections. As for memory, if data gets stored in it, there is almost no chance it will get missed, unless you overload the memory.
The websocket part is in Go. In Go you have greater control over it, with pointers, context, channel buffers and goroutines, so it is very unlikely you miss data (I have never seen it). I have definitely seen problems with NSE data, as I reported here: https://kite.trade/forum/discussion/comment/44431/#Comment_44431
It might be happening because of race conditions at NSE, but BSE data is quite fine. I am thinking of changing the websocket part to Rust, but I don't want to touch it right now, as it is working quite fine.
JFYI, Python is not the bottleneck. It's all in the way you code. I am subscribing to around 500 instruments and am able to process ticks and generate candles for all 500 instruments in under 0.2 seconds using Python. In the last six years I have never missed a tick due to processing bottlenecks. There have sometimes been network issues leading to loss of data, very early on before I moved to AWS, but never due to code performance.
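The shape of that kind of dict-based aggregation, as an illustrative sketch (not the poster's actual code); each tick is one dictionary lookup and a few comparisons, so a few hundred instruments is no strain for Python:

candles = {}   # (instrument_token, minute_start) -> [open, high, low, close, volume]

def update_candle(token, ts, price, qty):
    minute = int(ts // 60) * 60
    c = candles.get((token, minute))
    if c is None:
        candles[(token, minute)] = [price, price, price, price, qty]
    else:
        c[1] = max(c[1], price)   # high
        c[2] = min(c[2], price)   # low
        c[3] = price              # close
        c[4] += qty               # traded quantity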
@ANL Which machine and config are you using or planning to use on AWS? Based on my experience, AWS is more reliable even compared with a high-end PC, as network/electricity in some areas aren't in our control.
Coming to the cost aspects, I am using an m5.xlarge EC2 instance with a 64 GB volume, and it costs me $1.4 a day and roughly $31 a month (roughly 2,500 INR). It all depends on how optimised your code is, how you're managing the data in the backend with sensible TTL (time-to-live) data policies, and starting an instance only when it's needed. For me, storing tick-level data for 617 tickers (stocks + indexes) takes up 450 MB of storage per trading day. I usually store the tick-level data for t+30 days for backtesting and simulation, and then aggregate it to OHLC at 1-min, 5-min and 15-min intervals.
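(Rough sizing from those numbers: ~450 MB per trading day with t+30-day retention is roughly 450 MB × 21-22 trading days ≈ 9-10 GB of raw ticks on disk at any time, which is why a modest 64 GB volume is enough; the trading-day count is an approximation.)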
Don't map your AWS EC2 instance configuration one-to-one to your local machine; we don't really need a 500 GB SSD, more than 8 CPU cores, etc. unless you're doing really fancy stuff.
@kakush30 @MAG I have changed my local ISP to Airtel Xstream Fiber. Now I get:

www.google.com
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 4ms, Maximum = 5ms, Average = 4ms

For kite.zerodha.com:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 4ms, Maximum = 5ms, Average = 4ms

I was getting 18 ms latency on the older connection; 4 ms is far better.
@rohanrg I am using a Surface Pro (4 core/16 GB), but it cannot perform as per my needs, so I am planning to move to a higher configuration. Currently I have 12 different projects running, and some of the code has to run 24 hours a day. I am confused about the subscription plans and choosing the right instance to meet my requirements.
I just need a higher configuration than my current machine; I'm looking for 8 cores with 64 GB (or 32 GB) of RAM if possible.
Please note that, as I stated before, as of now I don't have network issues, latency issues, or electricity issues.
This is what I get on my m5 instance, which is significantly lower than the stats you posted:

--- kite.zerodha.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 0.440/0.483/0.528/0.031 ms
@rohanrg You stated that you are storing tick data for ~650 tokens and then processing OHLC. Which data store are you relying on for this?
I think you may have the OHLC delayed by 1 minute, right? Because in a live scenario you cannot process all the data within a minute for a 1-minute interval while the websocket streams at high frequency, so with 617 instruments, how is it possible? Your insight might be helpful.
@ANL You're absolutely right, handling real-time tick data for 650 instruments is no easy feat. To address this, I use TimescaleDB with continuous aggregates. It allows me to precompute and continuously update data like OHLCV, ensuring minimal processing delay. The aggregation frequency varies with the interval (e.g., every minute for 1-minute data). It's a balance between real-time insights and computational efficiency. Hope this clarifies. For more details, refer to this article: https://www.timescale.com/blog/massive-scale-for-time-series-workloads-introducing-continuous-aggregates-for-distributed-hypertables-in-timescaledb-2-5/
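A sketch of what such a continuous aggregate can look like, assuming a tick table with (ts, instrument_token, price, qty) columns that is already a hypertable; this is an illustrative schema, not the actual one described above:

import psycopg2

conn = psycopg2.connect("dbname=market user=trader")   # placeholder DSN
conn.autocommit = True    # continuous aggregates cannot be created inside a transaction
cur = conn.cursor()

# 1-minute OHLCV view that TimescaleDB keeps up to date as ticks arrive.
cur.execute("""
CREATE MATERIALIZED VIEW candles_1m
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 minute', ts) AS bucket,
       instrument_token,
       first(price, ts) AS open,
       max(price)       AS high,
       min(price)       AS low,
       last(price, ts)  AS close,
       sum(qty)         AS volume
FROM tick_data
GROUP BY bucket, instrument_token;
""")

# Refresh the most recent window roughly every minute.
cur.execute("""
SELECT add_continuous_aggregate_policy('candles_1m',
    start_offset      => INTERVAL '1 hour',
    end_offset        => INTERVAL '1 minute',
    schedule_interval => INTERVAL '1 minute');
""")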
@rohanrg Thanks. Timescale is a PostgreSQL DB, and I have not used or tested it. If you are facing any delay, I would recommend using Redis; it is faster than other DBs at handling high-frequency data. But if you have not tested Redis, it might be a new learning curve.
@ANL Thanks for your input! I've actually been using TimescaleDB, and it has worked quite well for me in terms of storage, computation, and accessibility. It's a powerful choice for managing time-series data within a PostgreSQL environment. While Redis is fast, I've found TimescaleDB to offer a great balance between speed and versatility for my specific use case.
@MAG What's your opinion on using TimescaleDB over Redis? PostgreSQL stores data on disk, not in RAM. TimescaleDB has time series, but Redis also has a time-series module, and Redis works from RAM, so wouldn't Redis keep the efficiency and performance edge?
@rohanrg 1. Could you please give some clarity on the way you store data in TimescaleDB? I am storing OHLC data and some other computations every minute for a 1-minute interval, and the code does multiple operations beyond just storing OHLC. So I would like to know whether your code is also doing multiple data operations, not only storing OHLC in TimescaleDB.
2. Can you please check the cost of m5.xlarge? The cost of that instance is more than the INR 2,500 you mentioned; as per my checks it is nearly INR 12K for 4 cores/16 GB on Linux. Can you clarify exactly which instance you are using?
@ANL Redis will always outperform any DB that uses a physical drive. If you need to store data on a physical drive, first store it in RAM, do the computation, keep, say, X iterations of results in RAM, dump them into the DB in a single go, and then repeat the process.
I have checked this with MongoDB: without indexing, a query takes around 10-15 ms, while through Redis it is around 50-100 µs. That is the difference.
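The batch-and-flush pattern being described, sketched with pymongo's insert_many (collection and field names are illustrative, and it assumes a local mongod):

from pymongo import MongoClient

coll = MongoClient()["market"]["results"]   # assumes mongod on localhost
batch = []

def on_result(doc):
    batch.append(doc)
    if len(batch) >= 1000:        # flush threshold is arbitrary here
        coll.insert_many(batch)   # one round trip instead of a thousand
        batch.clear()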
@kakush30 My project has to handle high-frequency data in a live scenario, so I think Redis is the best choice. I don't store a lot of data on the local drive, as my code logic is different; right now I have to handle high-frequency order books. Handling order books is different from handling OHLC: there are multiple order-book updates per second. While using Redis, I am facing some delay in fetching order books for more than 200 tokens, so I am deciding which approach best fits my needs.
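If each order book sits under its own key, one common way to cut that fetch delay is to collapse the 200+ GET round trips into a single pipelined call; a sketch, assuming books are stored as JSON strings under keys like "book:<token>" (an illustrative key scheme, not the actual one used above):

import json
import redis

r = redis.Redis()
tokens = [256265, 260105]            # placeholder instrument tokens

pipe = r.pipeline(transaction=False)
for t in tokens:
    pipe.get(f"book:{t}")
raw = pipe.execute()                 # one network round trip for all keys

books = {t: json.loads(b) for t, b in zip(tokens, raw) if b is not None}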
1. I store tick data in a hypertable and have 5-min, 15-min and 1-hour materialised views on it with a continuous aggregate policy, meaning these views update as soon as there is a new tick, similar to what you see on trading charts. I do multiple operations on these views through Python code; however, I use a different DB engine to fetch that data, so I don't disturb the parallel data loading into the tick_data table.
It won't match the performance of Redis; however, it is slightly faster than the ticks we see on Kite trading charts. Also, I don't really take trades until there is enough conviction as per SMC, hence it works for my case. I suggest you compare this approach with your Redis version and see whether it works for you.
2. As discussed before, the EC2 instance only runs during trading hours; it doesn't run on weekends or non-trading days. I have configured that in a Lambda function which triggers the EC2 instance to start only on trading days. Hence I save huge costs on the virtual machine.
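A sketch of that kind of start-only-on-trading-days Lambda, using boto3; the instance ID and holiday list are placeholders, and the schedule itself (e.g. an EventBridge cron) is configured separately:

import datetime
import boto3

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder
NSE_HOLIDAYS = set()                  # fill from the exchange holiday calendar

def lambda_handler(event, context):
    # Note: Lambda clocks are UTC; convert to IST before the weekday check in practice.
    today = datetime.date.today()
    if today.weekday() >= 5 or today.isoformat() in NSE_HOLIDAYS:
        return {"started": False}
    boto3.client("ec2").start_instances(InstanceIds=[INSTANCE_ID])
    return {"started": True}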
@ANL I haven't used TimescaleDB or Postgres at all, so I have no idea. I use a combination of MongoDB and Redis with Python, and it works fine for me.
Like I said in another thread, it's not the DB or the solution you use that matters; it's the way you write the code. Initially my candle creation was taking 2 seconds for approximately 500 instruments using MongoDB and Python. I could have looked at a faster DB and a faster language like Go/Rust and spent the next two months learning, rewriting and testing new tech, or I could do a deep dive into my existing system and look for optimisations. I did exactly that, and with just a few minor changes brought the 2 seconds down to 0.15 seconds.
In the same way, I could look at my EC2 instance logs, see it running at 100% load and simply increase the instance size, incurring additional costs, or I could switch from continuous loops with sleep to event handlers, e.g. using BRPOP, to free up CPU cycles and get more done with the same hardware. Doing so brought the CPU usage of one of my programs down from 100% of one core to 3% of one core.
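The two styles contrasted above, sketched with redis-py (not the actual program; the queue name and handler are illustrative):

import redis

r = redis.Redis()

def process(job):
    ...   # candle/tick logic goes here

# Polling style: wakes up constantly even when the queue is empty,
#   while True:
#       job = r.lpop("tick_queue")
#       if job is None:
#           time.sleep(0.01)   # still burns CPU cycles all day
#           continue
#       process(job)

# Event-driven style: BRPOP blocks on the server until an item arrives,
# so the process sits near 0% CPU between ticks.
while True:
    _key, job = r.brpop("tick_queue")
    process(job)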
Whether you use Influx/Mongo/Redis/MySQL/PgSQL/Timescale or any other database, these will be performant enough for a few hundred or even a few thousand instruments. The performance bottleneck will basically be either
1. your compute: the number of cores and the amount of memory, or
2. your code itself. You could write bad logic in a high-performance language like Go/Rust and it could run slower than a well-optimised Python program, even though both have exactly the same input and output.
@ANL From what I have seen, most of the time the bottleneck comes from the database. I never suggest using databases like Postgres or NoSQL when trying to achieve computation in the microsecond range; even M.2 drives bottleneck. In your case, if Redis is also bottlenecking, then you have to work out other solutions, like using a Python dict, and then check the execution time; it's trial and error. If all else fails, then think about moving towards a low-level language.
Mostly, when I face such problems, I write the logic in a simple form in a low-level language and test the execution time. If that succeeds, then I think about moving towards it.
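A trivial harness of that kind, using time.perf_counter (the dict lookup being timed is just an example; numbers will vary with machine and data):

import time

book = {i: {"bid": 100.0, "ask": 100.05} for i in range(350)}

t0 = time.perf_counter()
for _ in range(100_000):
    _ = book[123]["ask"]             # plain in-process dict lookup
elapsed = time.perf_counter() - t0
print(f"{elapsed / 100_000 * 1e6:.2f} µs per lookup")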
@kakush30 Redis is not a bottleneck; it is faster than other DBs, and if learned well it is one of the best choices. I am facing some lag because either my code or my machine has an issue, and I am scrutinising it. What I have learned is that the DB should be chosen as per our needs and requirements; all DBs are good at particular uses. In my case Redis fulfils my requirements, but the integration code is a bit hard for me because I am still learning Redis. As I stated before, I am running 12 different projects where storing data is minimal and not strictly required. If I do need to store data, Redis is a better choice; it has plenty of modules, one of which is Time Series, where we can easily aggregate data as we like.
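For reference, a sketch of that Time Series usage, assuming the RedisTimeSeries module is loaded on the server and redis-py version 4 or newer; the key names, the sample value and the 1-minute bucket are illustrative choices:

import redis

r = redis.Redis()
ts = r.ts()

ts.create("ltp:256265")                       # raw last-traded-price series
ts.create("ltp:256265:1m")                    # 1-minute downsampled series
ts.createrule("ltp:256265", "ltp:256265:1m", "last", 60_000)   # server-side aggregation

ts.add("ltp:256265", "*", 22510.5)            # "*" = use server timestamp
print(ts.range("ltp:256265", "-", "+"))       # full raw range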
I would like to say that whatever we do, our code should first be optimised and efficient enough to handle our logic, even if we have complicated algorithms.
https://zerodha.tech/blog/hello-world/
Zerodha uses Postgres and Redis, and it works at their scale. So I do not understand the discussion and comments about database speed/performance. None of us, alone or together, are operating at Zerodha's scale. So if you think you are having performance issues, your code is the problem, not the database, not the programming language.
That's the end of the road on this thread for me. Good luck.
Nobody is questioning Redis or Postgres reliability, performance or robustness. It's about the use case.
https://zerodha.tech/blog/scaling-with-common-sense/
Here Kailash Nadh explains the bottlenecks they faced with databases. If someone thinks code is always the problem, then you are boxing yourself into a corner. Otherwise why use a stack at all; just build your whole system in a single language.
@kakush30 Database bottlenecks, in my opinion, arise in both scenarios if the code is not optimised for handling larger data operations or concurrency, as at Zerodha's scale. They may have bottlenecks because they deal with millions to billions of requests every day. In any case, retail traders normally do not manage transactions at that scale and just need to retrieve ticks through the REST/websocket API. Since we are not dealing with HFT, it is not a significant difficulty; we can use any efficient database for our small-scale goals.