New Python Library for Technical Indicators

arkochhar
Hello everyone,

I would like to invite all you algo traders to review and contribute to a library of technical indicators I am trying to build. Currently I have added EMA, ATR, SuperTrend and MACD indicators to this library. I seek your review and contributions in the following areas:
  1. Additional technical indicators to the list
  2. Optimisations to the existing algorithms
  3. Help create a pip package
  4. Any other useful activity to improve the project
The library is released under GPL 3.0 on GitHub
  • arkochhar
    This is an excellent resource. However, SuperTrend is missing. Maybe I will add a branch adding the indicator to this library. Thanks for the pointer though.
  • akshay12489
    @arkochhar Thanks for such great help. One query I have is about the SuperTrend indicator: is it non-repainting?
  • arkochhar
    @akshay12489, I am not well versed with the terms repainting and non-repainting. However, I believe SuperTrend should be a repainting algorithm. Anyhow, try it for yourself from GitHub! I have made some updates to the code base.
  • arkochhar
    @abhizerodha, I checked this library; it seems it is meant for Python 2.7. It was giving a lot of errors on Python 3.6, and it also did not have the SuperTrend indicator. Anyhow, I have released some updates to my code on GitHub, which include SuperTrend. It would be nice if a Python algo trader could test it rigorously and give feedback.
  • cisk
    Hey guys, I hope the thread is still alive. I am working on creating my trading strategy and would like to get a few tips. Given that I am making my "enter" decision on every tick update, what is an efficient way of comparing the LTP with technical indicators applied on previous candles? My worry is that I will have to poll historical data for every tick update, which sounds a bit crude and avoidable. Any suggestions? @arkochhar
  • ankur0101
    @arkochhar Great, can you please share a "how to" and sample code?
  • arkochhar
    @cisk, what programming language are you working with? This could be done in Python/Pandas. Please share more details. Thanks
  • cisk
    @arkochhar At present, I am using Python and MySQL. Planning to use pandas on top of this.
  • ankur0101
    @arkochhar, thanks, but I am facing the following error message:



    I am passing the following format of historical data:
    ["2018-01-29T09:15:00+0530", 1747.8, 1750.95, 1741.4, 1746.05, 110800],
    ["2018-01-29T10:15:00+0530", 1746.05, 1750, 1740.6, 1741.2, 64000],
    ["2018-01-29T11:15:00+0530", 1741.2, 1746.75, 1738.5, 1745.85, 57600]
    Need help.
  • arkochhar
    arkochhar edited January 2018
    @ankur0101, it seems that you are not providing the data in pandas DataFrame format. What you get from the Zerodha API is in JSON format; you will need to convert it into a DataFrame, as this library works with the pandas DataFrame format only. Pandas is super efficient when it comes to computing on time series and tabular data. There is an example given in the code file. If possible, please share your usage code snippet, which will help me assist you further.
    """
    Usage :
    data = {
    "data": {
    "candles": [
    ["05-09-2013", 5553.75, 5625.75, 5552.700195, 5592.950195, 274900],
    ["06-09-2013", 5617.450195, 5688.600098, 5566.149902, 5680.399902, 253000],
    ["10-09-2013", 5738.5, 5904.850098, 5738.200195, 5896.75, 275200],
    ["11-09-2013", 5887.25, 5924.350098, 5832.700195, 5913.149902, 265000],
    ["12-09-2013", 5931.149902, 5932, 5815.799805, 5850.700195, 273000],
    ...
    ["27-01-2014", 6186.299805, 6188.549805, 6130.25, 6135.850098, 190400],
    ["28-01-2014", 6131.850098, 6163.600098, 6085.950195, 6126.25, 184100],
    ["29-01-2014", 6161, 6170.450195, 6109.799805, 6120.25, 146700],
    ["30-01-2014", 6067, 6082.850098, 6027.25, 6073.700195, 208100],
    ["31-01-2014", 6082.75, 6097.850098, 6067.350098, 6089.5, 146700]
    ]
    }
    }

    # Data must be present as a Pandas DataFrame with ['Date', 'Open', 'High', 'Low', 'Close', 'Volume'] as columns
    df = pd.DataFrame(data["data"]["candles"], columns=['Date', 'Open', 'High', 'Low', 'Close', 'Volume'])

    # Columns as added by each function specific to their computations
    EMA(df, 'Close', 'ema_5', 5)
    ATR(df, 14)
    SuperTrend(df, 10, 3)
    MACD(df)
    """
  • arkochhar
    @cisk, if and when you start using the pandas library, you will not need to worry about recomputation time for the DataFrame. It is super fast, much faster than any possible manual computation.

    Here are some stats from my i7 machine...
    SuperTrend Test
    Time taken by Pandas computations for SuperTrend 2.4667539596557617
    Time taken by manual computations for SuperTrend 89.90594506263733
    ST Stats
    Total Rows: 5246
    Columns Match: 5246
    Success Rate: 100.0%
    STX Stats
    Total Rows: 5246
    Columns Match: 5246
    Success Rate: 100.0%
    The above times are in seconds, for 5,246 candles.

    For your original question about comparing LTP data with the historical data, here is a possible suggested algorithm (a rough sketch in code follows the steps)...
    1. Read historical data into a DataFrame for T-1 candles.
    2. Continuously read LTP and construct OHLC candle for time period, T as a DataFrame.
    3. Concatenate the two, historical (first) with the LTP (second) DataFrames.
    4. Recompute your preferred indicator from the library.
    5. Go to step 2 after a short time interval.
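    A minimal sketch of that loop, assuming the SuperTrend(df, period, multiplier) signature from the usage example above; fetch_historical_candles() and build_current_candle() are hypothetical placeholders for your own data-feed code:

    import time
    import pandas as pd
    from indicators import SuperTrend  # assuming the library file is importable as 'indicators'

    COLUMNS = ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']

    # Step 1: candles up to period T-1 from the historical API (hypothetical helper)
    hist_df = pd.DataFrame(fetch_historical_candles(), columns=COLUMNS)

    while True:
        # Step 2: one OHLC row for the current, still-forming period T,
        # aggregated from the ticks/LTP received so far (hypothetical helper)
        live_df = pd.DataFrame([build_current_candle()], columns=COLUMNS)

        # Step 3: concatenate historical (first) with the live candle (second)
        combined = pd.concat([hist_df, live_df], ignore_index=True)

        # Step 4: recompute the indicator on the combined DataFrame; the column
        # names below assume the ST_<period>_<multiplier> naming convention
        SuperTrend(combined, 10, 3)
        print(combined[['Close', 'ST_10_3', 'STX_10_3']].iloc[-1])

        # Step 5: wait briefly, then repeat with fresh ticks
        time.sleep(1)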
    Hope this helps!
  • cisk
    @arkochhar Thank you for your elaborate explanation. It is really helpful.
    However, I am not sure what you meant in step 3. Why would I concatenate my historical data, which is for candles, with my LTP data, which is for ticks? Please help me understand this. Thank you.
  • ankur0101
    @arkochhar This is my code

    url = "https://api.kite.trade/instruments/historical/14607362/60minute?from=2018-01-01+09:30:00&to=2018-01-29+15:30:00"
    Authorization = "token abc:pqr"
    headers = {
        'X-Kite-Version': "3",
        'Authorization': Authorization,
        'Cache-Control': "no-cache",
    }
    response = requests.request("GET", url, headers=headers)
    json_response = json.loads(response.text)

    df = pd.DataFrame(json_response["data"]["candles"], columns=['Date', 'Open', 'High', 'Low', 'Close', 'Volume'])
    SuperTrend(df, 144, 3)
    Now it is properly pulling data from the Zerodha historical data API, and I am passing it into a pandas DataFrame. Is this the correct way, or am I missing anything?
  • arkochhar
    @ankur0101, it looks like your code is correct now. I hope you are using the correct parameters for SuperTrend.
  • arkochhar
    @cisk, just to elaborate on point 3:

    - Assume T to be the current period (the candle for the current period)
    - So from the historical API, you will get candles till period T-1 in an OHLC DataFrame
    - You construct an OHLC DataFrame (with one candle) for period T from the LTP data you receive from the live API
    - Then you concatenate the historical API OHLC DataFrame with the live API LTP OHLC DataFrame
    - You run your indicator on the combined DataFrame from the previous step (see the short sketch below)
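    To make step 3 concrete, here is a rough sketch of turning the ticks into the one-candle DataFrame and concatenating it (the tick values and timestamp are hypothetical; hist_df is assumed to already hold the OHLC candles up to period T-1):

    import pandas as pd

    cols = ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']

    # LTP values received so far during the current period T (hypothetical sample)
    ticks = [1745.85, 1746.10, 1744.95, 1746.30]
    volume_so_far = 12000  # hypothetical

    # Aggregate the ticks into a single OHLC candle for period T
    live_df = pd.DataFrame(
        [['2018-01-29T12:15:00+0530', ticks[0], max(ticks), min(ticks), ticks[-1], volume_so_far]],
        columns=cols)

    # hist_df: candles up to T-1 from the historical API
    combined = pd.concat([hist_df, live_df], ignore_index=True)
    SuperTrend(combined, 10, 3)  # or any other indicator from the library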

    Hope this clarifies.

    Thanks
  • ankur0101
    @arkochhar For my above code, if I use columns=['Date', 'Open', 'High', 'Low', 'Close', 'Volume'], I am getting the following error:
    Traceback (most recent call last):
    File "runScan.py", line 43, in <module>
    df = pd.DataFrame(json_response["data"]["candles"], columns=['Date', 'Open', 'High', 'Low', 'Close', 'Volume'])
    TypeError: 'NoneType' object has no attribute '__getitem__'
    and for columns=['date', 'open', 'high', 'low', 'close', 'volume'], I get the following:

    Traceback (most recent call last):
    File "runScan.py", line 44, in <module>
    SuperTrend(df, 144, 3)
    File "/root/VolumeStartegy/indicators/__init__.py", line 194, in SuperTrend
    ATR(df, period, ohlc=ohlc)
    File "/root/VolumeStartegy/indicators/__init__.py", line 173, in ATR
    EMA(df, 'TR', atr, period, alpha=True)
    File "/root/VolumeStartegy/indicators/__init__.py", line 134, in EMA
    con = pd.concat([df[:period][base].rolling(window=period).mean(), df[period:][base]])
    File "/usr/lib/python2.7/dist-packages/pandas/core/generic.py", line 2360, in __getattr__
    (type(self).__name__, name))
    AttributeError: 'Series' object has no attribute 'rolling'
    What is wrong here? I am puzzled.
  • arkochhar
    @ankur0101, which version of Python/pandas are you using? My first guess is that you are using Python 2. Please note that this library has been developed on Python 3. Also, the 'rolling' function was used differently in older versions of pandas. My suggestion is to use it with Python 3 and pandas version 1.17 and above.
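    For reference, the call that breaks here is the method-style rolling mean inside EMA; a minimal standalone illustration (to my knowledge this API only exists from roughly pandas 0.18 onwards, which is why older installs raise the AttributeError above):

    import pandas as pd

    s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

    # Method-style rolling API; very old pandas versions only had pd.rolling_mean(s, 3)
    # and raise "'Series' object has no attribute 'rolling'" instead
    print(s.rolling(window=3).mean())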
  • cisk
    @arkochhar Thanks for the inputs. I'll incorporate and get back to you :smiley:
  • ankur0101
    @arkochhar I will install Python 3 with pandas 1.17+ and check. Thanks
  • ankur0101
    @arkochhar I don't see pandas version 1.17 at https://pypi.python.org/pypi/pandas/

    So with Python 3 and pandas 0.22, I get the following error:

    Traceback (most recent call last):
    File "runScan3.py", line 45, in <module>
    df = pd.DataFrame(json_response["data"]["candles"], columns=['date', 'open', 'high', 'low', 'close', 'volume'])
    TypeError: 'NoneType' object is not subscriptable
  • ankur0101
    @arkochhar Finally I am able to get the output:

    SuperTrend Test
    Time taken by Pandas computations for SuperTrend 0.12024164199829102
    Time taken by manual computations for SuperTrend 0.13067221641540527
    ST Stats
    Total Rows: 125
    Columns Match: 125
    Success Rate: 100.0%
    STX Stats
    Total Rows: 125
    Columns Match: 125
    Success Rate: 100.0%
    But how do I get the up or down trend of every candle?
  • mailboxofrafiq
    @arkochhar Do you know any Java library for EMA-20?
  • arkochhar
    @ankur0101, here is a sample output using DataFrame print options:

    SuperTrend(df, 14, 3)
    print(df.tail().to_string())

    Output:
    Open High Low Close TR ATR_14 ST_14_3 STX_14_3
    Date
    2018-01-24 11069.35 11110.10 11046.15 11086.00 63.95 95.836665 10790.615005 up
    2018-01-25 11095.60 11095.60 11009.20 11069.65 86.40 95.162618 10790.615005 up
    2018-01-29 11079.35 11171.55 11075.95 11130.40 101.90 95.643859 10836.818423 up
    2018-01-31 11018.80 11058.50 10979.30 11027.70 151.10 99.605012 10836.818423 up
    2018-02-02 10938.20 10954.95 10736.10 10760.60 291.60 113.318940 11185.481819 down
    Pandas has a lot of printing options for DataFrames. All computations are stored in the DataFrame itself. For SuperTrend with period 14 and multiplier 3, the function adds four new columns: TR (True Range), ATR_14 (Average True Range for period 14), ST_14_3 (SuperTrend value for period 14 and multiplier 3), and STX_14_3 (SuperTrend direction for period 14 and multiplier 3).
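    To answer the up/down question directly, the trend of each candle is simply the STX column the function adds; a small sketch, assuming the column names above:

    # Trend of the most recent candle: 'up' or 'down'
    print(df['STX_14_3'].iloc[-1])

    # Or inspect the trend of every candle alongside its close
    print(df[['Close', 'STX_14_3']].tail(10).to_string())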

    Hope it helps!
  • arkochhar
    @mailboxofrafiq, I haven't researched any Java library for Technical Indicators.
  • ankur0101
    @arkochhar working perfectly fine.

    Thank you so much for your support. :)
  • RajeshSivadasan
    Thanks @arkochhar. This is great for a good start. I bumped into the TA-Lib library at https://github.com/mrjbq7/ta-lib and wanted your views on its usage with Kite Connect.
  • arkochhar
    @RajeshSivadasan, as per my research of TA-Lib, there was no implementation of SuperTrend in it, which led me to write my own library. Also I believe that TA-Lib was tested on Python 2.7 and not on Python 3.
  • Nikunj5538
    @arkochhar Is there any Node.js library for technical indicators?
  • rishiajmera
    rishiajmera edited April 2018
    @arkochhar When I try to give Heiken Ashi candles as input to calculate the SuperTrend, it gives SuperTrend based on simple OHLC candlesticks, not on Heiken Ashi candles. Can you please suggest a solution?
  • arkochhar
    @rishiajmera, please refer to the SuperTrend API definition:

    def SuperTrend(df, period, multiplier, ohlc=['Open', 'High', 'Low', 'Close'])

    When you create Heiken Ashi candles, the function adds an HA_ prefix to the candle column names. So the returned DataFrame will have the new columns HA_Open, HA_High, HA_Low, HA_Close.

    So when you call SuperTrend, please pass ohlc=['HA_Open', 'HA_High', 'HA_Low', 'HA_Close'] to the function.
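    For example (the period and multiplier values here are just illustrative):

    # df already contains the HA_Open / HA_High / HA_Low / HA_Close columns
    # produced by the library's Heiken Ashi function
    SuperTrend(df, 10, 3, ohlc=['HA_Open', 'HA_High', 'HA_Low', 'HA_Close'])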

    Hope this clarifies.
  • rishiajmera
    @arkochhar Hey, thanks for replying. Yes, I have done that, and I also replaced the 'close' in the code with ohlc[3] inside the function.
    I even removed the ['Open', 'High', 'Low', 'Close'] columns from the DataFrame and passed just the ['HA_Open', 'HA_High', 'HA_Low', 'HA_Close'] Heiken Ashi candles. But still, it is giving the SuperTrend based on simple OHLC candles, which I am not able to figure out.
  • sanampatel
    @ankur0101 What did you do? Because I'm getting the same error as you!
  • ankur0101
    @sanampatel, post the error message; a screenshot is preferred.
  • sanampatel
    sanampatel edited May 2018
    @ankur0101 Please find below.

    [342 rows x 6 columns]
    Traceback (most recent call last):
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
    packages/pandas/core/indexes/base.py", line 2525, in get_loc
    return self._engine.get_loc(key)
    File "pandas/_libs/index.pyx", line 117, in pandas._libs.index.IndexEngine.get_loc
    File "pandas/_libs/index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc
    File "pandas/_libs/hashtable_class_helper.pxi", line 1265, in
    pandas._libs.hashtable.PyObjectHashTable.get_item
    File "pandas/_libs/hashtable_class_helper.pxi", line 1273, in
    pandas._libs.hashtable.PyObjectHashTable.get_item
    KeyError: 'Close'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
    File "simple.py", line 35, in <module>
    RSI(df)
    File "/Users/macpro/code/python/indicators.py", line 336, in RSI
    delta = df[base].diff()
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
    packages/pandas/core/frame.py", line 2139, in __getitem__
    return self._getitem_column(key)
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
    packages/pandas/core/frame.py", line 2146, in _getitem_column
    return self._get_item_cache(key)
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
    packages/pandas/core/generic.py", line 1842, in _get_item_cache
    values = self._data.get(item)
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
    packages/pandas/core/internals.py", line 3843, in get
    loc = self.items.get_loc(item)
    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
    packages/pandas/core/indexes/base.py", line 2527, in get_loc
    return self._engine.get_loc(self._maybe_cast_indexer(key))
    File "pandas/_libs/index.pyx", line 117, in pandas._libs.index.IndexEngine.get_loc
    File "pandas/_libs/index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc
    File "pandas/_libs/hashtable_class_helper.pxi", line 1265, in
    pandas._libs.hashtable.PyObjectHashTable.get_item
    File "pandas/_libs/hashtable_class_helper.pxi", line 1273, in
    pandas._libs.hashtable.PyObjectHashTable.get_item
    KeyError: 'Close'
  • rishiajmera
    rishiajmera edited May 2018
    @sanampatel Your DataFrame might have 'close' (lowercase) instead of 'Close'. In the code, you need to change it in some places manually.
  • sanampatel
    @rishiajmera For some indicators, it shows an error for the Open column as well!
  • rishiajmera
    rishiajmera edited May 2018
    @sanampatel In some places in the functions, Open, High, Low, and Close are hard-coded instead of using the function arguments. So you need to find them and replace them with the arguments.
  • algotrader29
    @arkochhar
    I have finally been able to use the library in my code for SuperTrend; all credit goes to you. In fact, if you google 'Supertrend python', this is the first and only relevant thread that comes up (take a bow).

    I am looking for stochastic code. Is there anything simpler than TA-Lib? I need to implement Stoch(14) as we get in Zerodha Kite, with smoothing (not sure about the setting). Any pointers?
  • arkochhar
    @algotrader29 Thanks for your comments. I am glad that this code is helping so many people. As for Stochastics, I haven't done much research. Maybe, time permitting, I will see if it can be added to the library.
  • sshiremath2000
    I calculated RSI with this lib for a period of 14, but when I compared with the values plotted on the Kite page, the values are different. Is there any calculation difference between this lib and the calculation that Kite follows?
  • sshiremath2000
    I just tried out https://github.com/mrjbq7/ta-lib
    Using the RSI from this library works fine. I could even plot candles and RSI in Plotly and cross-check with the Kite chart.
  • arkochhar
    @sshiremath2000 This library was primarily developed for SuperTrend; I just added the other indicators. RSI may not have been tested well. Please provide the error or deviation in detail, or better still, debug and contribute to the library. Thanks
  • jamipraveenkumar
    jamipraveenkumar edited February 2019
    url = "https://api.kite.trade/instruments/historical/256265/minute?from=2018-01-01+09:30:00&to=2018-01-29+15:30:00"
    Authorization = "token xx:xx"
    headers = {
    'X-Kite-Version': "3",
    'Authorization': Authorization,
    'Cache-Control': "no-cache",
    }
    response = requests.request("GET", url, headers=headers)
    json_response = json.loads(response.text)

    df = pd.DataFrame(json_response["data"]["candles"], columns=['Date', 'Open', 'High', 'Low', 'Close', 'Volume'])
    SuperTrend(df, 7, 3)
    print(SuperTrend(df,7,3))<code class="CodeInline">



    : Date Open ... Supertrend_7_3 STDirection_7_3
    0 2018-09-28T09:30:00+0530 10953.50 ... 0.000000 nan
    1 2018-09-28T09:31:00+0530 10935.95 ... 0.000000 nan
    2 2018-09-28T09:32:00+0530 10940.20 ... 0.000000 nan
    3 2018-09-28T09:33:00+0530 10929.45 ... 0.000000 nan
    4 2018-09-28T09:34:00+0530 10948.45 ... 0.000000 nan
    5 2018-09-28T09:35:00+0530 10945.70 ... 0.000000 nan
    6 2018-09-28T09:36:00+0530 10948.40 ... 0.000000 nan
    7 2018-09-28T09:37:00+0530 10938.90 ... 10986.796939 down
    8 2018-09-28T09:38:00+0530 10938.00 ... 10983.125948 down
    9 2018-09-28T09:39:00+0530 10943.95 ... 10978.922241 down
    10 2018-09-28T09:40:00+0530 10937.75 ... 10978.922241 down
    11 2018-09-28T09:41:00+0530 10949.05 ... 10978.922241 down
    12 2018-09-28T09:42:00+0530 10950.75 ... 10978.922241 down
    13 2018-09-28T09:43:00+0530 10958.35 ... 10978.922241 down
    14 2018-09-28T09:44:00+0530 10949.95 ... 10978.922241 down
    15 2018-09-28T09:45:00+0530 10955.60 ... 10978.922241 down
    16 2018-09-28T09:46:00+0530 10956.70 ... 10978.922241 down
    17 2018-09-28T09:47:00+0530 10956.75 ... 10978.922241 down
    18 2018-09-28T09:48:00+0530 10945.95 ... 10975.757215 down
    19 2018-09-28T09:49:00+0530 10939.75 ... 10967.877613 down
    20 2018-09-28T09:50:00+0530 10939.05 ... 10967.877613 down
    21 2018-09-28T09:51:00+0530 10952.00 ... 10967.877613 down
    22 2018-09-28T09:52:00+0530 10955.40 ... 10967.877613 down
    23 2018-09-28T09:53:00+0530 10960.55 ... 10936.332013 up
    24 2018-09-28T09:54:00+0530 10971.30 ... 10947.320297 up
    25 2018-09-28T09:55:00+0530 10978.15 ... 10948.213826 up
    26 2018-09-28T09:56:00+0530 10976.80 ... 10948.213826 up
    27 2018-09-28T09:57:00+0530 10974.60 ... 10950.814239 up
    28 2018-09-28T09:58:00+0530 10979.50 ... 10951.708634 up
    29 2018-09-28T09:59:00+0530 10975.20 ... 10951.708634 up
    ... ... ... ... ... ...
    36466 2019-02-20T10:01:00+0530 10674.95 ... 10683.945665 down
    36467 2019-02-20T10:02:00+0530 10675.85 ... 10683.945665 down
    36468 2019-02-20T10:03:00+0530 10674.75 ... 10683.945665 down
    36469 2019-02-20T10:04:00+0530 10672.85 ... 10683.905677 down
    36470 2019-02-20T10:05:00+0530 10670.95 ... 10681.029866 down
    36471 2019-02-20T10:06:00+0530 10667.40 ... 10678.172028 down
    36472 2019-02-20T10:07:00+0530 10664.15 ... 10677.476024 down
    36473 2019-02-20T10:08:00+0530 10667.40 ... 10677.476024 down
    36474 2019-02-20T10:09:00+0530 10668.50 ... 10677.476024 down
    36475 2019-02-20T10:10:00+0530 10670.65 ... 10663.852675 up
    36476 2019-02-20T10:11:00+0530 10677.75 ... 10666.602779 up
    36477 2019-02-20T10:12:00+0530 10680.40 ... 10669.973811 up
    36478 2019-02-20T10:13:00+0530 10681.45 ... 10669.973811 up
    36479 2019-02-20T10:14:00+0530 10678.85 ... 10669.973811 up
    36480 2019-02-20T10:15:00+0530 10679.00 ... 10669.973811 up
    36481 2019-02-20T10:16:00+0530 10676.85 ... 10669.973811 up
    36482 2019-02-20T10:17:00+0530 10678.75 ... 10669.973811 up
    36483 2019-02-20T10:18:00+0530 10679.40 ... 10669.973811 up
    36484 2019-02-20T10:19:00+0530 10677.00 ... 10669.973811 up
    36485 2019-02-20T10:20:00+0530 10677.15 ... 10669.973811 up
    36486 2019-02-20T10:21:00+0530 10675.30 ... 10669.973811 up
    36487 2019-02-20T10:22:00+0530 10674.30 ... 10669.973811 up
    36488 2019-02-20T10:23:00+0530 10673.90 ... 10669.973811 up
    36489 2019-02-20T10:24:00+0530 10672.70 ... 10669.973811 up
    36490 2019-02-20T10:25:00+0530 10672.75 ... 10669.973811 up
    36491 2019-02-20T10:26:00+0530 10673.75 ... 10669.973811 up
    36492 2019-02-20T10:27:00+0530 10673.85 ... 10669.973811 up
    36493 2019-02-20T10:28:00+0530 10676.00 ... 10669.973811 up
    36494 2019-02-20T10:29:00+0530 10673.05 ... 10669.973811 up
    36495 2019-02-20T10:30:00+0530 10674.95 ... 10669.973811 up

    [36496 rows x 10 columns]

    Getting signals, but I need to backtest.
  • jamipraveenkumar
    See the documentation here:
    http://pandas.pydata.org/pandas-docs/stable/indexing.html#ix-indexer-is-deprecated
    i, 'final_ub_t'] else \
    C:/Users/INDI/.PyCharm2018.3/config/scratches/scratch.py:197: DeprecationWarning:
    .ix is deprecated. Please use
    .loc for label based indexing or
    .iloc for positional indexing

    See the documentation here:
    http://pandas.pydata.org/pandas-docs/stable/indexing.html#ix-indexer-is-deprecated
    df.ix[i, 'final_lb_t'] if df.ix[i - 1, st_test] == df.ix[i - 1, 'final_lb_t'] and df.ix[i, 'Close'] >= \
    C:/Users/INDI/.PyCharm2018.3/config/scratches/scratch.py:198: DeprecationWarning:
    .ix is deprecated. Please use
    .loc for label based indexing or
    .iloc for positional indexing
  • bhupathituraga
    File "C:\Users\INDI\AppData\Roaming\Python\Python37\site-packages\pandas\core\indexes\base.py", line 2656, in get_loc
    ST Stats
    return self._engine.get_loc(key)
    File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
    File "pandas\_libs\index.pyx", line 132, in pandas._libs.index.IndexEngine.get_loc
    File "pandas\_libs\hashtable_class_helper.pxi", line 1601, in pandas._libs.hashtable.PyObjectHashTable.get_item
    File "pandas\_libs\hashtable_class_helper.pxi", line 1608, in pandas._libs.hashtable.PyObjectHashTable.get_item
    KeyError: 'ST_7_3'

    During handling of the above exception, another exception occurred:


    I am getting the above error.

    I used .loc instead of .ix.
  • balu
    The below code is throwing incorrect values for ADX, +DI, and -DI. Can you please help fix the formula?

    def average_directional_movement_index(df, n, n_ADX):
        i = 0
        UpI = []
        DoI = []
        while i + 1 <= df.index[-1]:
            UpMove = df.loc[i + 1, 'High'] - df.loc[i, 'High']
            DoMove = df.loc[i, 'Low'] - df.loc[i + 1, 'Low']
            if UpMove > DoMove and UpMove > 0:
                UpD = UpMove
            else:
                UpD = 0
            UpI.append(UpD)
            if DoMove > UpMove and DoMove > 0:
                DoD = DoMove
            else:
                DoD = 0
            DoI.append(DoD)
            i = i + 1
        i = 0
        TR_l = [0]
        while i < df.index[-1]:
            TR = max(df.loc[i + 1, 'High'], df.loc[i, 'Close']) - min(df.loc[i + 1, 'Low'], df.loc[i, 'Close'])
            TR_l.append(TR)
            i = i + 1
        TR_s = pd.Series(TR_l)
        ATR = pd.Series(TR_s.ewm(span=n, min_periods=n).mean())
        UpI = pd.Series(UpI)
        DoI = pd.Series(DoI)
        PosDI = pd.Series(UpI.ewm(span=n, min_periods=n).mean() / ATR)
        NegDI = pd.Series(DoI.ewm(span=n, min_periods=n).mean() / ATR)
        ADX = pd.Series((abs(PosDI - NegDI) / (PosDI + NegDI)).ewm(span=n_ADX, min_periods=n_ADX).mean(),
                        name='ADX_' + str(n) + '_' + str(n_ADX))
        df = df.join(ADX)
        return df
  • sachinstlko09
    @arkochhar I am using PHP. I want to build logic where I can place orders according to calculated indicator values, for indicators like SMA, EMA, RSI, etc. So, is there any API or some predefined classes where I can get the calculated value of an indicator for some given attributes?
  • ranjan_barat
    Hey @arkochhar, @sujith, any suggestion on calculating the total volume for 1-minute OHLC candles? I have tried different permutations, but there is still a difference between the actual traded volume provided by Zerodha and the value calculated by subtracting the (high - low) or (open volume - close volume) from the volume ticker value.
  • satishgv1985
    Thanks guys, most of the comments really helped me find my solution. Especially the HAC (Heiken Ashi candle) data looks good.
  • ganeshv02
    Please use the below to calculate the stochastics %K and %D lines so that they match the Kite web interface. This is in Python, but you can convert it to any other programming language. I will update the @arkochhar GitHub indicators.py file via a pull request so everyone can benefit.

    For some reason I am not able to paste the code below in pretty format (not sure why!)

    def calculate_stochastics(df, period=14, smooth_k_period=3, d_period=3):
        highest_high = df["high"].rolling(center=False, window=period).max()
        lowest_low = df["low"].rolling(center=False, window=period).min()
        df['%_k'] = pd.Series(round(((df["close"] - lowest_low) / (highest_high - lowest_low) * 100), 2))
        df['k_line'] = round((df["%_k"].rolling(center=False, window=smooth_k_period).mean()), 2)
        df['d_line'] = round((df["k_line"].rolling(center=False, window=d_period).mean()), 2)
        return df
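    A quick usage sketch, assuming df is an OHLC DataFrame with lowercase 'high', 'low' and 'close' columns as the function above expects:

    df = calculate_stochastics(df, period=14, smooth_k_period=3, d_period=3)
    print(df[['%_k', 'k_line', 'd_line']].tail())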