Preferred gradients : evidence of emergent symmetry in financial markets
- Authors: Beukes, Gideon Jacobus
- Date: 2012-06-19
- Subjects: Preferred Gradient Hypothesis , Random Walk Hypothesis , Symmetry (Mathematics) , Gradients , Financial instruments , Technical analysis (Investment analysis)
- Type: Mini-Dissertation
- Identifier: uj:8769 , http://hdl.handle.net/10210/5120
- Description: M.Comm. , The dependence claim of the Random Walk Hypothesis, formulated by Louis Bachelier in 1900, states that asset price movement is based on a stochastic process (Shafer & Vovk, 2001) in which subsequent prices in a series are random in relation to previous ones (Malkiel, 2003:3). Malkiel (1973) identifies a major implication of the Random Walk Hypothesis, namely that future directions cannot be extrapolated from past actions. In the context of financial markets, the implication is a rejection of the notion that fundamental and technical analyses have value as instruments of investment analysis. The Preferred Gradient Hypothesis, developed by Daan Joubert, an independent technical analyst, rejects the dependence claim of the Random Walk Hypothesis by suggesting the existence of a form of regularity in asset price movement. The regularity, based on the notion that trend reversals tend to occur along preferred gradients, is claimed to manifest as an emergent phenomenon originating in the complex, self-adaptive nature of financial markets, and to do so across time scales and financial instruments. Three objectives are formulated for the study, each relating to a specific claim of the Preferred Gradient Hypothesis. The primary objective is to verify objectively whether the presence of preferred gradients on price charts can be shown to be statistically significant. The secondary objectives are to verify objectively whether the presence of preferred gradients across financial instruments and across time scales can be shown to be statistically significant. The samples employed in the study are drawn from two populations and consist of two components: the exchange rates of ten currencies against the United States Dollar, and the prices of five commodities – Brent crude oil, ethanol, gold, silver and soybeans. The sample data, which consist of daily commodity prices and exchange rates, are converted to datasets spanning daily, weekly and monthly time scales by means of a custom-developed software application, the Preferred Gradient Hypothesis Workbench. A subset of each dataset is selected for analysis on the criterion that it exhibits enough price variation to enable sensible analysis. Two charts are constructed for each dataset using the method prescribed by the Preferred Gradient Hypothesis, namely a control (R) chart and a treated (P) chart. Each chart is analysed by the Preferred Gradient Hypothesis Workbench and the total number of intersections between trend lines and trend reversals is recorded. A trend line is deemed to have intersected a trend reversal if it passes the reversal within a distance of one percent of the total range of the chart's Y-axis while intersecting the chart's X-axis at the same point as the reversal. The numbers of intersections on each pair of control and treated charts are stored as datasets to which the t-Test for Paired Samples is applied (see the sketch after this record). Statistical analysis at an alpha value of 0.05 results in the research hypothesis being rejected for the daily and weekly exchange rate datasets as well as for the daily commodity price dataset, but not for the monthly exchange rate dataset. Evaluation of these results, in terms of the tests formulated for each research objective, indicates that the notion that the presence of preferred gradients on price charts is statistically significant cannot be rejected.
The notions that this presence is statistically significant across different time scales and financial instruments are, however, rejected. The statistical analysis is followed by a discussion in which the results are interpreted within the context of the research objectives; this section also covers problems encountered during the research and software development processes. The study concludes with suggestions for methodological improvements and the identification of topics that may be investigated in future research.
- Full Text:
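The intersection criterion and the paired test described in the abstract lend themselves to a short illustration. The following is a minimal sketch, not the Preferred Gradient Hypothesis Workbench itself: it assumes a trend line is represented by its slope and intercept in chart coordinates, a reversal by an (x, y) point, and the sample counts are hypothetical.

```python
# Sketch of the 1%-of-Y-range intersection criterion and the paired
# t-test at alpha = 0.05. All names and data are illustrative assumptions.
from dataclasses import dataclass
from scipy import stats


@dataclass
class TrendLine:
    slope: float      # gradient of the line in chart units
    intercept: float  # y-value where the line crosses x = 0


def count_intersections(lines, reversals, y_range):
    """Count trend lines passing within 1% of the chart's Y-axis range
    of a reversal, evaluated at the reversal's x-coordinate."""
    tolerance = 0.01 * y_range
    count = 0
    for x, y in reversals:                 # each reversal is an (x, y) pair
        for line in lines:
            if abs(line.slope * x + line.intercept - y) <= tolerance:
                count += 1
    return count


# Paired t-test on intersection counts from control (R) / treated (P)
# chart pairs; the counts below are hypothetical.
r_counts = [12, 9, 15, 11]
p_counts = [17, 14, 19, 13]
t_stat, p_value = stats.ttest_rel(p_counts, r_counts)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, reject H0: {p_value < 0.05}")
```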
Empirical evaluation of existing backtesting techniques for market risk models
- Authors: Sangweni, Xolile Zodwa
- Date: 2019
- Subjects: Technical analysis (Investment analysis) , Financial risk management
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/403022 , uj:33753
- Description: Abstract : This study investigates the performance of different backtesting techniques at evaluating market risk models with different conditional distributions. The study classifies existing backtesting techniques into two groups: traditional backtesting techniques (the Independence, Unconditional Coverage and Conditional Coverage tests), which look only at the likelihood of the occurrence of a violation; and modern backtesting techniques (the Duration-Based, Dynamic Quantile and Dynamic Binary tests), which look either at the time elapsed between two consecutive violations or at the relationship between violations and past violations. To achieve this, the study builds different types of conditional and unconditional market risk models. The unconditional market risk models include the historical simulation and variance-covariance methods; the conditional market risk models are built using eGARCH processes with different types of conditional distribution (asymmetric and extreme value distributions). The empirical analysis is based on daily return series of the following stock market indices obtained from Bloomberg: the S&P 500, FTSE 100, Africa All Share Index and Nikkei 225. The sample period spans from 2006/01/02 to 2018/09/10 and is divided into two overlapping subsamples representing a financial crisis period and a tranquil period, respectively. The results suggest that the traditional backtesting techniques perform better at evaluating the different types of market risk models in both the financial crisis and the tranquil periods, whereas the modern backtesting techniques perform well during a financial crisis but give misleading results during a tranquil period. The findings of this study suggest that, for an out-of-sample window of 250 days, the traditional backtesting techniques are the best choice for evaluating a model (see the sketch after this record). Given these findings, regulators and other decision makers can advise financial institutions operating in their respective jurisdictions to be cautious when using modern backtesting techniques to evaluate market risk models for the computation of the regulatory minimum capital requirement. , M.Com. (Financial Economics)
- Full Text:
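As an illustration of one traditional technique, here is a minimal sketch of the unconditional coverage likelihood-ratio test (usually attributed to Kupiec, 1995). The abstract does not give the study's implementation details, so the function name, the 250-day simulated return series and the static VaR series below are assumptions for demonstration only.

```python
# Unconditional coverage test: does the observed VaR violation rate
# match the nominal coverage level? Data below are simulated assumptions.
import numpy as np
from scipy.stats import chi2


def kupiec_uc_test(returns, var_forecasts, coverage=0.01):
    """Likelihood-ratio test of the violation rate against `coverage`.
    A violation occurs when the loss exceeds the (positive) VaR figure."""
    violations = returns < -var_forecasts
    x = int(violations.sum())        # number of violations
    T = len(returns)                 # out-of-sample length (e.g. 250 days)
    pi = x / T                       # observed violation rate
    if x in (0, T):                  # ratio degenerates; handle separately
        return np.nan, np.nan
    lr = -2 * ((T - x) * np.log((1 - coverage) / (1 - pi))
               + x * np.log(coverage / pi))
    p_value = 1 - chi2.cdf(lr, df=1)  # chi-square with 1 degree of freedom
    return lr, p_value


# Usage with simulated data (illustrative only):
rng = np.random.default_rng(0)
rets = rng.normal(0, 0.01, 250)          # 250 daily returns
var_99 = np.full(250, 2.33 * 0.01)       # static 99% normal VaR
print(kupiec_uc_test(rets, var_99, coverage=0.01))
```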
History never repeats itself, but it rhymes : dot-com bubble indicators and the potential for future speculative bubbles
- Authors: Van Rooyen, Gustaf
- Date: 2021
- Subjects: Finance - Management , Technical analysis (Investment analysis) , Venture capital , Investments - Psychological aspects
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/485582 , uj:44142
- Description: Abstract: Speculative economic bubbles, which have been noted from as early as the 17th century, have a devastating impact on investor wealth and the global economy. One notable example is the Dot-com bubble, which occurred in the United States at the end of the 1990s, in which seemingly irreversible increases in technology share prices, followed by an equally dramatic decline, resulted from investor optimism in new internet and technology (.com) companies. It is well established that speculative bubbles are preceded by an abundance of liquidity and an extraneous event that leads to investor irrationality, such as the development of new technological ideas and services. As a result of ongoing 21st-century technological innovations, similarities between the Dot-com bubble and current conditions and market dynamics have been noted. This study aims to determine whether a second Dot-com bubble or a new speculative bubble is developing within the U.S. technology market. To address the research objectives, and with reference to previous empirical literature, the study is divided into three areas of investigation. The first is technical and fundamental analysis of U.S. technology indices, benchmarked against overall U.S. market indices, to identify a bubble pattern similar to that of the Dot-com bubble. The second is an investigation of IPO underpricing levels, to determine whether current levels are comparable to those noted during the Dot-com bubble period (the standard underpricing measure is sketched after this record). The third is an investigation of whether current conditions within the U.S. technology market, such as federal interest rates and levels of venture capital investment, are comparable to the conditions present during the Dot-com bubble period. The technical and fundamental analysis identified increases and deviations from the overall market similar to those noted during the beginning of the Dot-com bubble; the latest increases and deviations are, however, supported by better fundamental values such as earnings. The IPO underpricing investigation showed that underpricing levels are similar to those noted during the first three years of the Dot-com bubble, but not elevated to the levels noted during its peak. The investigation of the U.S. technology market noted low interest rates and increasing levels of venture capital funding, suggesting increased levels of liquidity within the U.S. economy, a contributing factor in sustaining speculative bubbles. These results suggest that although the public U.S. technology market, as reflected in the technology indices and IPO underpricing levels, resembles the beginning of the Dot-com bubble, the trend is supported by sounder fundamentals and does not constitute a speculative bubble. However, strong indications of a potential speculative bubble exist within the private technology market, where private technology companies carry valuation metrics that exceed Dot-com-period levels. The research objectives of the study were thus achieved, and the study emphasises the importance of identifying potential speculative bubbles early enough to react or intervene, in order to minimise their potential damage to the global economy. , M.Com. (Financial Management)
- Full Text:
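The IPO underpricing measure referred to above is conventionally the first-day return over the offer price. A minimal sketch follows; the abstract does not state the study's exact computation, and the offer prices and first-day closes below are hypothetical values.

```python
# Standard first-day IPO underpricing measure; sample data are
# hypothetical, not drawn from the study.
def underpricing(offer_price, first_day_close):
    """First-day return: (close - offer) / offer."""
    return (first_day_close - offer_price) / offer_price


ipos = [          # (offer price, first-day close), illustrative only
    (14.0, 21.0),
    (18.0, 45.0),
    (20.0, 19.5),
]
avg = sum(underpricing(o, c) for o, c in ipos) / len(ipos)
print(f"average underpricing: {avg:.1%}")
```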