Markov chain Monte Carlo methods for finite element model updating
- Authors: Joubert, Daniel Johannes
- Date: 2015
- Subjects: Finite Element Method , Markov processes , Monte Carlo method , Computer simulation
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/57283 , uj:16375
- Description: Abstract: Finite Element model updating is a computational tool aimed at aligning the dynamic properties computed by the Finite Element (FE) model, i.e. eigenvalues and eigenvectors, with experimental modal data of rigid body structures. Generally, FE models have very many degrees of freedom, often several thousand. The Finite Element Method (FEM) is only able to accurately predict a few of the natural frequencies and mode shapes (eigenvalues and eigenvectors). In order to ensure the validity of the FEM, a chosen number of the natural frequencies and mode shapes are experimentally measured. These are often misaligned or in disagreement with the results from the computed FEM. Finite Element (FE) model updating is a concept wherein a variety of methods are used to compute physically accurate modal frequencies for structures, accounting for the random behavior of material properties under dynamic conditions; this behavior can be termed stochastic. The author applies two methods used in recent years, and one new algorithm, to further investigate the effectiveness of introducing multivariate Gaussian mixture models and Bayesian analysis in the model updating context. The focus is largely on Markov Chain Monte Carlo methods, whereby all inference on uncertainties is based on the posterior probability distribution obtained from Bayes’ theorem. Observations are obtained sequentially, providing on-line inference in approximating the posterior probability. In the coming chapters, detailed descriptions cover all the theory and arithmetic involved in the simulated algorithms. The three algorithms are the standard Metropolis-Hastings (MH), Adaptive Metropolis-Hastings (AMH), and Monte Carlo Dynamically Weighted Importance Sampling (MCDWIS). Metropolis-Hastings (MH) is a well-known Markov Chain Monte Carlo (MCMC) sampling method. The desired result from this algorithm is a good acceptance rate and good correlation between the computed stochastic parameters. The Adaptive Metropolis-Hastings (AMH) algorithm adaptively scales the covariance matrix ‘on the fly’ to achieve convergence to a Gaussian target distribution. From the AMH algorithm we want to observe the adaptation of the scaling factor and the covariance matrix. Monte Carlo Dynamically Weighted Importance Sampling (MCDWIS) is an algorithm which combines Importance Sampling theory with Dynamic Weighting theory in a population control scheme, namely the Adaptive Pruned Enriched Population Control Scheme (APEPCS). The motivation behind applying MCDWIS lies in the complexity of computing normalizing constants in higher-dimensional or multimodal systems. In addition, a dynamic weighting step with the APEPCS allows for further control over weighted samples and population size. The performance of the MCDWIS simulation... , M.Ing. Mechanical Engineering
- Full Text:
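The Metropolis-Hastings sampler named in the preceding abstract can be illustrated with a minimal random-walk sketch. The target density, proposal scale, and two-parameter setup below are illustrative assumptions, not the thesis's actual FE updating posterior.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples=10000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: a minimal illustrative sketch."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_samples, x.size))
    log_p = log_target(x)
    accepted = 0
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal(x.size)  # symmetric Gaussian proposal
        log_p_prop = log_target(proposal)
        if np.log(rng.uniform()) < log_p_prop - log_p:     # accept with prob min(1, ratio)
            x, log_p = proposal, log_p_prop
            accepted += 1
        samples[i] = x
    return samples, accepted / n_samples

# Toy 2-D Gaussian posterior standing in for updated FE parameters (an assumption).
log_target = lambda x: -0.5 * np.sum((x - np.array([1.0, 2.0])) ** 2)
samples, acc_rate = metropolis_hastings(log_target, x0=[0.0, 0.0])
print(f"acceptance rate: {acc_rate:.2f}, posterior mean: {samples[2000:].mean(axis=0)}")
```

An adaptive variant in the spirit of AMH would additionally re-estimate the proposal covariance and scaling factor from past samples during the run.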
The conceptual design and evaluation of research reactors utilizing a Monte Carlo and diffusion based computational modeling tool
- Authors: Govender, Nicolin
- Date: 2012-08-06
- Subjects: Materials testing reactors , Monte Carlo method , Computer simulation
- Type: Thesis
- Identifier: uj:8934 , http://hdl.handle.net/10210/5406
- Description: M.Sc. , Due to the demand for medical isotopes, new Materials Testing Reactors (MTRs) are being considered and built globally. Different countries all have varying design requirements, resulting in a plethora of different designs. South Africa is also considering a new MTR for dedicated medical radio-isotope production. A neutronic analysis of these various designs is used to evaluate the viability of each. Most safety and utilization parameters can be calculated from the neutron flux. The code systems that are used to perform these analyses are either stochastic or deterministic in nature. In performing such an analysis, the tracking of the depletion of isotopes is essential to ensure that the modeled macroscopic cross-sections are as close as possible to those of the actual reactor. Stochastic methods are currently too slow when performing depletion analysis, but are very accurate and flexible. Deterministic methods, on the other hand, are much faster, but are generally not as accurate or flexible due to the approximations made in solving the Boltzmann Transport Equation. The aim of this work is therefore to synergistically use a deterministic (diffusion) code to obtain an equilibrium material distribution for a given design and a stochastic (Monte Carlo) code to evaluate the neutronics of the resulting core model - therefore applying a hybrid approach to conceptual core design. A comparison between the hybrid approach and the diffusion code demonstrates the limitations and strengths of the diffusion-based calculational path for various core designs. In order to facilitate the described process, and implement it in a consistent manner, a computational tool termed COREGEN has been developed. This tool facilitates the creation of neutronics models of conceptual reactor cores for both the Monte Carlo and diffusion codes in order to implement the described hybrid approach. The system uses the Monte Carlo-based MCNP code system developed at Los Alamos National Laboratory as the stochastic solver, and the nodal diffusion-based OSCAR-4 code system developed at Necsa as the deterministic solver. Given basic input for a core design, COREGEN will generate a detailed OSCAR-4 and MCNP input model. An equilibrium core, obtained by running OSCAR-4, is then used in the MCNP model. COREGEN will analyze the most important core parameters with both codes and provide comparisons. In this work, various MTR designs are evaluated to meet the primary requirement of isotope production. A heavy water reflected core with 20 isotope production rigs was found to be the most promising candidate. Based on the comparison of the various parameters between Monte Carlo and diffusion for the various cores, we found that the diffusion-based OSCAR-4 system compares well to Monte Carlo in the neutronic analysis of cores with in-core irradiation positions (average error 4.5% in assembly power). However, for the heavy water reflected cores with ex-core rigs, the diffusion method differs significantly from the Monte Carlo solution in the rig positions (average error 17.0% in assembly power) and parameters obtained from OSCAR must be used with caution in these ex-core regions. The solution of the deterministic approach in in-core regions corresponded to the stochastic approach within 7% (in assembly-averaged power) for all core designs.
- Full Text:
Schedule risk analysis of railway projects using Monte Carlo simulation
- Authors: Mabeba, Motlatso
- Date: 2018
- Subjects: Risk management , Railroads - Management , Monte Carlo method
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/286098 , uj:30951
- Description: M.Ing. (Engineering Management) , Abstract: Railways have been used throughout history for the transportation of goods. Although the inception of rail improved civilization, road transport now dominates the freight and logistics industry because of rail's inefficiencies. Company A, which has the largest market share in rail, has embarked on projects in an attempt to improve rail efficiencies by moving more volumes of freight timeously. Most of the projects of Company A have failed, largely due to poor planning of the projects in the feasibility stages. Most of the planning schedules are overoptimistic and thus unreliable. The scope of this study is to investigate the way in which planning schedules of Company A are developed by undertaking a schedule risk analysis and using Monte Carlo simulation to validate the schedule. If Company A's projects can be planned better using schedule risk analysis, they are more likely to be delivered and executed on time.
- Full Text:
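The schedule risk idea in the record above can be sketched with a toy serial activity network whose durations have three-point (triangular) estimates; the activity names and numbers below are hypothetical, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical critical-path activities: (optimistic, most likely, pessimistic) days.
activities = {
    "earthworks":    (20, 30, 55),
    "track laying":  (40, 50, 90),
    "signalling":    (25, 35, 70),
    "commissioning": (10, 15, 30),
}

n_trials = 100_000
total = np.zeros(n_trials)
for lo, mode, hi in activities.values():
    total += rng.triangular(lo, mode, hi, size=n_trials)   # sample each activity duration

deterministic = sum(mode for _, mode, _ in activities.values())
print(f"deterministic (most-likely) total: {deterministic} days")
print(f"P50 / P80 / P95 completion: {np.percentile(total, [50, 80, 95]).round(1)} days")
print(f"probability of finishing within the deterministic plan: "
      f"{(total <= deterministic).mean():.1%}")
```

The gap between the most-likely plan and the P80 figure is exactly the kind of over-optimism the study attributes to the baseline schedules.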
The applicability of Monte Carlo studies to empirical data of the South African economy (Die toepasbaarheid van die Monte Carlo studies op empiriese data van die Suid-Afrikaanse ekonomie)
- Authors: McClintock, Michael
- Date: 2014-07-29
- Subjects: Econometric models , Monte Carlo method
- Type: Thesis
- Identifier: uj:11917 , http://hdl.handle.net/10210/11645
- Description: M.Com. (Econometrics) , The objective of this study is to evaluate different estimation techniques that can be used to estimate the coefficients of a model. The estimation techniques were applied to empirical data drawn from the South African economy. The Monte Carlo studies are unique in that data was statistically generated for the experiments. This approach was due to the fact that actual observations on economic variables contain several econometric problems, such as autocorrelation and multicollinearity, simultaneously. However, the approach in this study differs in that empirical data is used to evaluate the estimation techniques. The estimation techniques evaluated are: the ordinary least squares method, the two stage least squares method, the limited information maximum likelihood method, the three stage least squares method, and the full information maximum likelihood method. The estimates of the different coefficients are evaluated on the following criteria: the bias of the estimates, the variance of the estimates, the t-values of the estimates, and the root mean square error. The ranking of the estimation techniques on the bias criterion is as follows: 1 full information maximum likelihood method, 2 ordinary least squares method, 3 three stage least squares method, 4 two stage least squares method, 5 limited information maximum likelihood method. The ranking of the estimation techniques on the variance criterion is as follows: 1 full information maximum likelihood method, 2 ordinary least squares method, 3 three stage least squares method, 4 two stage least squares method, 5 limited information maximum likelihood method. All the estimation techniques performed poorly with regard to the statistical significance of the estimates. The ranking of the estimation techniques on the t-values of the estimates is thus as follows: 1 three stage least squares method, 2 ordinary least squares method, 3 two stage least squares method and the limited information maximum likelihood method, 4 full information maximum likelihood method. The ranking of the estimation techniques on the root mean square error criterion is as follows: 1 full information maximum likelihood method and the ordinary least squares method, 2 two stage least squares method, 3 limited information maximum likelihood method and the three stage least squares method. The results achieved in this study are very similar to those of the Monte Carlo studies. The only exception is the ordinary least squares method, which performed better on every criterion dealt with in this study. Though the full information maximum likelihood method performed best on two of the criteria, its performance was extremely poor on the t-value criterion. The ordinary least squares method is shown, in this study, to be the most consistent performer.
- Full Text:
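The kind of Monte Carlo comparison of estimators summarised above can be sketched as follows, assuming a single-equation model with an endogenous regressor and one instrument; only OLS and a simple IV (two-stage least squares with one instrument) estimator are shown, and the data-generating values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true, n, reps = 1.0, 200, 5000
ols_est, iv_est = np.empty(reps), np.empty(reps)

for r in range(reps):
    z = rng.standard_normal(n)                       # instrument
    u = rng.standard_normal(n)                       # structural error
    x = 0.8 * z + 0.6 * u + rng.standard_normal(n)   # endogenous regressor (correlated with u)
    y = beta_true * x + u
    ols_est[r] = (x @ y) / (x @ x)                   # OLS slope
    iv_est[r] = (z @ y) / (z @ x)                    # IV / 2SLS slope with a single instrument

for name, est in [("OLS", ols_est), ("IV (2SLS)", iv_est)]:
    bias = est.mean() - beta_true
    rmse = np.sqrt(np.mean((est - beta_true) ** 2))
    print(f"{name:10s} bias={bias:+.3f}  variance={est.var():.4f}  RMSE={rmse:.3f}")
```

The same bias, variance, and root-mean-square-error criteria used in the study can be read directly off the printed summary for each estimator.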
Radiation shielding analysis and optimisation for the Mineral-PET Kimberlite sorting facility using the Monte Carlo calculation code MCNPX
- Authors: Chinaka, Eric Mwanyisa
- Date: 2014-10-08
- Subjects: Tomography, Emission , Positron annihilation , Nuclear reactors - Shielding (Radiation) , Monte Carlo method
- Type: Thesis
- Identifier: uj:12555 , http://hdl.handle.net/10210/12347
- Description: M.Phil. (Energy Studies) , This dissertation details the radiation shielding analysis and optimisation performed to design a shield for the mineral-PET (Positron Emission Tomography) facility. PET is a nuclear imaging technique currently used in diagnostic medicine. The technique is based on the detection of 511 keV coincident and collinear photons produced from the annihilation of a positron (produced by a positron emitter) and a nearby electron. The technique is now being considered for the detection of diamonds within Kimberlite rock; mineral-PET technology aims to improve diamond mining through the early detection of diamond-bearing rocks. High energy photons are produced via Bremsstrahlung (which occurs when electrons from an accelerator impinge on a high-density target)...
- Full Text:
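The shielding calculations in the record above rely on MCNPX; the toy sketch below only illustrates the underlying Monte Carlo idea of uncollided photon transmission through a slab, using an assumed linear attenuation coefficient for 511 keV photons in lead and ignoring scattering and buildup.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 1.7          # assumed linear attenuation coefficient for 511 keV in lead [1/cm]
thickness = 5.0   # slab thickness [cm]
n_photons = 1_000_000

# Sample each photon's free path; it escapes uncollided if the path exceeds the slab.
free_paths = rng.exponential(scale=1.0 / mu, size=n_photons)
transmitted = (free_paths > thickness).mean()

print(f"Monte Carlo uncollided transmission: {transmitted:.3e}")
print(f"Analytic exp(-mu*t):                 {np.exp(-mu * thickness):.3e}")
```

A production code such as MCNPX tracks scattering, energy loss, and geometry in full detail; this sketch only checks the simplest analytic limit.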
The contour tracking of a rugby ball : an application of particle filtering
- Authors: Janse van Rensburg, Tersia
- Date: 2012-02-06
- Subjects: Electric filters , Automatic tracking , Image processing , Monte Carlo method
- Type: Thesis
- Identifier: uj:1996 , http://hdl.handle.net/10210/4350
- Description: M.Ing. , Object tracking in image sequences, in its general form, is very challenging. Due to the prohibitive complexity thereof, research has led to the idea of tracking a template exposed to low-dimensional deformation such as translation, rotation and scaling. The inherent non-Gaussianity of the data acquired from general tracking problems renders the trusted Kalman filtering methodology futile. For this reason the idea of particle filtering was developed recently. Particle filters are sequential Monte Carlo methods based on multiple point mass (or "particle") representations of probability densities, which can be applied to any dynamical model and which generalize the traditional Kalman filtering methods. To date, particle filtering has already proved to be a successful filtering method in different fields of science such as econometrics, signal processing, fluid mechanics, agriculture and aviation. In this dissertation, we discuss the problem of tracking a rugby ball in an image sequence as the ball is being passed to and fro. First, the problem of non-linear Bayesian tracking is focused upon, followed by a particular instance of particle filtering known as the condensation algorithm. Next, the problem of fitting an elliptical contour to the travelling rugby ball is dealt with in detail, after which the problem of tracking this evolving ellipse (representing the rugby ball's edge) over time along the image sequence by means of the condensation algorithm follows. Experimental results are presented and discussed, and concluding remarks follow at the end.
- Full Text:
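A minimal bootstrap particle filter in the spirit of the condensation algorithm described above, tracking a 1-D position from noisy measurements; the random-walk dynamics, noise levels, and particle count are illustrative assumptions rather than the dissertation's elliptical contour model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_particles = 50, 500
q, r = 0.5, 1.0                        # process and measurement noise standard deviations

# Simulate a "true" trajectory and noisy observations of it.
truth = np.cumsum(rng.normal(0.0, q, n_steps))
obs = truth + rng.normal(0.0, r, n_steps)

particles = rng.normal(0.0, 2.0, n_particles)
estimates = []
for z in obs:
    particles += rng.normal(0.0, q, n_particles)             # predict (random-walk dynamics)
    weights = np.exp(-0.5 * ((z - particles) / r) ** 2)      # weight by measurement likelihood
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))            # posterior-mean estimate
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    particles = particles[idx]                               # resample (the condensation-style step)

rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
print(f"tracking RMSE: {rmse:.2f} (measurement noise std: {r})")
```

In the dissertation the state is an evolving ellipse rather than a scalar position, but the predict-weight-resample cycle is the same.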
Quantile based estimation of treatment effects in censored data
- Authors: Crotty, Nicholas Paul
- Date: 2013-05-27
- Subjects: Monte Carlo method , Estimation theory , Censored observations (Statistics) , Censored data (Statistics) , Regression analysis , Nonparametric statistics - Asymptotic theory
- Type: Thesis
- Identifier: uj:7551 , http://hdl.handle.net/10210/8409
- Description: M.Sc. (Mathematical Statistics) , Comparison of two distributions via the quantile comparison function is carried out, specifically for possibly censored data. A semi-parametric method which assumes linearity of the quantile comparison function is examined thoroughly for non-censored data and then extended to incorporate censored data. A fully nonparametric method to construct confidence bands for the quantile comparison function is set out. The performance of all methods examined is tested using Monte Carlo simulation.
- Full Text:
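One common way to read a quantile-based treatment comparison is as the difference between the treatment and control quantile functions; the sketch below estimates such a curve from two uncensored samples and adds a simple pointwise bootstrap band. It only loosely mirrors the censored, semi-parametric methods the dissertation examines, and the data are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
control = rng.lognormal(mean=1.0, sigma=0.5, size=300)   # hypothetical control outcomes
treated = rng.lognormal(mean=1.2, sigma=0.5, size=300)   # hypothetical treated outcomes

probs = np.linspace(0.1, 0.9, 9)
effect = np.quantile(treated, probs) - np.quantile(control, probs)   # quantile-wise comparison

# Pointwise 95% bootstrap band via Monte Carlo resampling.
boot = np.array([
    np.quantile(rng.choice(treated, treated.size), probs)
    - np.quantile(rng.choice(control, control.size), probs)
    for _ in range(2000)
])
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)

for p, est, lo, hi in zip(probs, effect, lower, upper):
    print(f"p={p:.1f}  effect={est:6.2f}  95% band=({lo:6.2f}, {hi:6.2f})")
```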
The application of frequency domain methods to two statistical problems
- Authors: Potgieter, Gert Diedericks Johannes
- Date: 2012-09-10
- Subjects: Statistical hypothesis testing - Asymptotic theory , CUSUM technique , Monte Carlo method , Statistical astronomy , Variable stars - Observations
- Type: Thesis
- Identifier: uj:9843 , http://hdl.handle.net/10210/7246
- Description: D.Phil. , We propose solutions to two statistical problems using the frequency domain approach to time series analysis. In both problems the data at hand can be described by the well-known signal plus noise model. The first problem addressed is the estimation of the underlying variance of a process for use in a Shewhart or CUSUM control chart when the mean of the process may be changing. We propose an estimator for the underlying variance based on the periodogram of the observed data. Such estimators have properties which make them superior to some estimators currently used in Statistical Quality Control. We also present a CUSUM chart for monitoring the variance which is based upon the periodogram-based estimator for the variance. The second problem, stimulated by a specific problem in Variable Star Astronomy, is to test whether or not the mean of a bivariate time series is constant over the span of observations. We consider two periodogram-based tests for constancy of the mean, derive their asymptotic distributions under the null hypothesis and under local alternatives, and show how consistent estimators for the unknown parameters in the proposed model can be found.
- Full Text:
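The periodogram-based variance idea summarised above can be sketched as follows, assuming a signal-plus-noise series whose mean drifts slowly so that only the low-frequency ordinates are contaminated; the drift, noise level, normalisation, and frequency cut-off are illustrative assumptions rather than the thesis's estimator.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 512
t = np.arange(n)
drift = 0.01 * t                       # slowly changing process mean
noise_sd = 1.0
x = drift + rng.normal(0.0, noise_sd, n)

# Periodogram at the Fourier frequencies, normalised so white-noise ordinates have mean sigma^2.
fft = np.fft.rfft(x - x.mean())
periodogram = (np.abs(fft) ** 2) / n
freqs = np.fft.rfftfreq(n)

# A slowly varying mean mostly contaminates the lowest frequencies, so average the rest.
high = freqs > 0.1
var_periodogram = periodogram[high].mean()

print(f"naive sample variance (drift-inflated): {x.var():.3f}")
print(f"periodogram-based variance estimate:    {var_periodogram:.3f}")
print(f"true noise variance:                    {noise_sd**2:.3f}")
```

The naive sample variance absorbs the drifting mean, while the high-frequency periodogram average recovers something close to the underlying noise variance, which is what makes such estimators attractive for control charting.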
Probabilistic techniques and particle removal in the description of South African potable water treatment plant performance.
- Authors: Ceronio, Anthony Dean
- Date: 2008-05-27T13:19:27Z
- Subjects: water treatment plants , particle counting (water treatment plants) , water purification , particle removal , Monte Carlo method
- Type: Thesis
- Identifier: uj:2229 , http://hdl.handle.net/10210/466
- Description: The use of particle counters in potable water treatment is achieving higher levels of acceptance on an ongoing basis. This is due to their superior sensitivity for water clarity determination in comparison to turbidity meters. However, the ability of the particle counter to distinguish between various particle sizes, arguably its biggest advantage over turbidity measurement, is not being utilised fully, due to the large volumes of data generated and the amount of post-measurement data processing required to unlock some of the information. In many cases it is being used purely as a substitute or parallel measurement for turbidity. Furthermore, in the South African context, where data is being generated, the particle count data holds little value as it cannot be compared to generally available data sets to reveal the entire message contained in the count. No record of counts is available to rate new measurements against. , Prof. J. Haarhoff
- Full Text:
Monte Carlo simulations on a graphics processor unit with applications in inertial navigation
- Authors: Roets, Sarel Frederik
- Date: 2012-03-12
- Subjects: Monte Carlo method , Graphics processing units , Inertial navigation systems , Inertial measurement units
- Type: Thesis
- Identifier: uj:2157 , http://hdl.handle.net/10210/4528
- Description: M.Ing. , The Graphics Processor Unit (GPU) has been in the gaming industry for several years now. Of late, though, programmers and scientists have started to use the parallel processing or stream processing capabilities of the GPU in general numerical applications. The Monte Carlo method is a processing-intensive method, as it evaluates systems with stochastic components. The stochastic components require several iterations of the system to develop an idea of how the system reacts to the stochastic inputs. The stream processing capabilities of GPUs are used for the analysis of such systems. Evaluating low-cost Inertial Measurement Units (IMUs) for utilisation in Inertial Navigation Systems (INS) is a processing-intensive process. The non-deterministic or stochastic error components of an IMU's output signal require multiple simulation runs to properly evaluate the IMU's performance when applied as input to an INS. The GPU makes use of stream processing, which allows simultaneous execution of the same algorithm on multiple data sets. Accordingly, Monte Carlo techniques are applied to create trajectories for multiple possible outputs of the INS based on stochastically varying inputs from the IMU. The processing power of the GPU allows simultaneous Monte Carlo analysis of several IMUs. Each IMU requires a sensor error model, which entails calibration of each IMU to obtain numerical values for the main error sources of low-cost IMUs, namely scale factor, non-orthogonality, bias, random walk and white noise. Three low-cost MEMS IMUs were calibrated to obtain numerical values for their sensor error models. Simultaneous Monte Carlo analysis of each of the IMUs is then done on the GPU, with a resulting circular error probability plot. The circular error probability indicates the accuracy and precision of each IMU relative to a reference trajectory and the other IMUs' trajectories. Results obtained indicate that the GPU is an alternative processing platform to the CPU for large amounts of data. Monte Carlo simulations on the GPU were performed 200% faster than Monte Carlo simulations on the CPU. Results obtained from the Monte Carlo simulations indicated the random walk error to be the main source of error in low-cost IMUs. The CEP results were used to determine the effect of the various error sources on the INS output.
- Full Text:
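A minimal CPU-side sketch of the Monte Carlo / circular error probability idea described above, assuming a toy 2-D dead-reckoning model with accelerometer bias and white noise only; the sensor parameters are invented, and a GPU implementation would run the same trial loop as parallel streams.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_steps, dt = 2000, 600, 0.01     # 2000 Monte Carlo runs of a 6 s trajectory

bias_sd = 0.05      # per-run accelerometer bias std [m/s^2]  (assumed)
noise_sd = 0.02     # white measurement noise std [m/s^2]     (assumed)

# Sample a constant bias per run and white noise per sample, for the x and y axes.
bias = rng.normal(0.0, bias_sd, size=(n_trials, 1, 2))
accel_err = bias + rng.normal(0.0, noise_sd, size=(n_trials, n_steps, 2))

# Double-integrate the acceleration error to get the position error at the final time.
vel_err = np.cumsum(accel_err, axis=1) * dt
pos_err = np.cumsum(vel_err, axis=1) * dt
radial_err = np.linalg.norm(pos_err[:, -1, :], axis=1)

cep50 = np.percentile(radial_err, 50)       # circular error probable (50% radius)
print(f"CEP50 after {n_steps * dt:.0f} s: {cep50:.3f} m "
      f"(95% radius: {np.percentile(radial_err, 95):.3f} m)")
```

Repeating the study for each calibrated sensor error model and overlaying the resulting CEP radii is what allows the IMUs to be ranked against the reference trajectory.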
State estimation for sub-observable non-linear processes (Toestandberaming by sub-waarneembare nie-lineêre prosesse)
- Authors: Wiid, Andries Johannes
- Date: 2014-09-11
- Subjects: Estimation theory , Observers (Control theory) , Monte Carlo method
- Type: Thesis
- Identifier: uj:12251 , http://hdl.handle.net/10210/12016
- Description: M.Ing. (Electrical and Electronic Engineering) , State estimation comprises the estimation of the position and velocity (state) of a target based on the processing of noise-corrupted measurements of its motion. This study considers a class of measurement processes where the states are unobservable and cannot be estimated without placing additional constraints on the system. The bearings-only target motion problem is taken as being representative of this type of problem. The results of this study indicate that practical state estimation for systems with unobservable measurement processes is possible with the application of estimation theories and available estimation techniques. Due to the inherent nonlinear geometrical characteristics, the problem is classified as an unobservable nonlinear estimation problem. A review of state estimation and estimation techniques is presented. The fundamental bearings-only target motion concepts are discussed. A representative selection of bearings-only estimators, made from the published literature, is evaluated. The evaluation consists of a theoretical analysis and a Monte Carlo simulation of the estimators. Two realistic scenarios are considered. A classification framework is presented which may be useful to practising engineers in selecting suitable estimators. Batch estimators are shown to be more stable than recursive estimators and more likely to be used in bearings-only applications. The importance of isolating the unobservable states from the observable states by using a modified polar co-ordinate system is stressed. It is also shown that effective data processing can be achieved by using all available measurements and a maximum likelihood estimator.
- Full Text:
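A toy batch maximum-likelihood (nonlinear least-squares) sketch of the bearings-only problem described above, assuming a stationary target, a manoeuvring observer, and Gaussian bearing noise; the geometry and noise level are invented, and the modified polar formulation of the study is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(9)
target = np.array([800.0, 600.0])                      # true (unknown) target position [m]

# Manoeuvring observer track: a dog-leg, which helps make the stationary-target problem observable.
t = np.arange(60.0)
obs_x = np.where(t < 30, 10.0 * t, 300.0)
obs_y = np.where(t < 30, 0.0 * t, 10.0 * (t - 30))
observer = np.column_stack([obs_x, obs_y])

sigma = np.deg2rad(1.0)                                # 1 degree bearing noise
bearings = np.arctan2(target[1] - observer[:, 1],
                      target[0] - observer[:, 0]) + rng.normal(0.0, sigma, t.size)

def residuals(p):
    pred = np.arctan2(p[1] - observer[:, 1], p[0] - observer[:, 0])
    d = bearings - pred
    return np.arctan2(np.sin(d), np.cos(d))            # wrap angle residuals into (-pi, pi]

fit = least_squares(residuals, x0=np.array([500.0, 500.0]))
print(f"estimated target position: {fit.x.round(1)}  (truth: {target})")
```

Running many such fits over noise realisations is the Monte Carlo evaluation step the study applies to each candidate estimator.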
The quantification of information security risk using fuzzy logic and Monte Carlo simulation.
- Authors: Vorster, Anita
- Date: 2008-06-04T11:27:02Z
- Subjects: Risk assessment , Monte Carlo method , Fuzzy logic , Computer security , Information technology risk assessment
- Type: Thesis
- Identifier: uj:8851 , http://hdl.handle.net/10210/527
- Description: The quantification of information security risks is currently highly subjective. Values for information such as impact and probability, which are estimated during risk analysis, are mostly estimated by people or experts internal or external to the organization. Because the estimation of these values is done by people, all with different backgrounds and personalities, the values are exposed to subjectivity. The chance of any two people estimating the same value for risk analysis information is slim. There will always be a degree of uncertainty and imprecision in the values estimated. It is therefore in the data-gathering phase of risk analysis that the problem of subjectivity lies. To address the problem of subjectivity, techniques that mathematically deal with and present uncertainty and imprecision are used to estimate values for probability and impact. During this research a model for the objective estimation of probability was developed. The model uses input values that are mostly entirely objective, but also a small number of subjective input values. It is in these subjective input values that fuzzy logic and Monte Carlo simulation come into play. Fuzzy logic takes a qualitative subjective value and gives it an objective value, and Monte Carlo simulation complements fuzzy logic by giving a cumulative distribution function to the uncertain, imprecise input variable. In this way subjectivity is dealt with, and the result of the model is a probability value that is estimated objectively. The same model that was used for the objective estimation of probability was used to estimate impact objectively. The end result of the research is the combination of the models to use the objective impact and probability values in a formula that calculates risk. The risk factors are then calculated objectively. A prototype was developed as proof that the process of objective information security risk quantification can be implemented in practice. , Prof. L. Labuschagne
- Full Text:
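A minimal sketch of the combination described above: a qualitative rating is mapped to a triangular fuzzy number, treated here as a triangular distribution for Monte Carlo sampling, and risk is the product of sampled probability and impact. The ratings, membership ranges, and the triangular-sampling shortcut are assumptions for illustration, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(13)

# Hypothetical triangular fuzzy numbers (low, most plausible, high) for qualitative ratings.
fuzzy_probability = {"unlikely": (0.05, 0.15, 0.30), "likely": (0.40, 0.60, 0.80)}
fuzzy_impact_kusd = {"moderate": (50, 120, 250), "severe": (200, 500, 1200)}

p_lo, p_m, p_hi = fuzzy_probability["likely"]
i_lo, i_m, i_hi = fuzzy_impact_kusd["moderate"]

n = 100_000
prob = rng.triangular(p_lo, p_m, p_hi, n)      # sampled annual probability of the incident
impact = rng.triangular(i_lo, i_m, i_hi, n)    # sampled impact in thousand USD
risk = prob * impact                           # annualised loss per trial

print(f"expected annual loss: {risk.mean():.1f} kUSD")
print(f"90th / 99th percentile loss: {np.percentile(risk, [90, 99]).round(1)} kUSD")
```

The output is a full loss distribution rather than a single point score, which is what the cumulative distribution function in the abstract provides.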
Internal balance calibration and uncertainty estimation using Monte Carlo simulation
- Authors: Bidgood, Peter Mark
- Date: 2014-03-18
- Subjects: Wind tunnel balances - Calibration - Simulation methods , Monte Carlo method
- Type: Thesis
- Identifier: uj:4379 , http://hdl.handle.net/10210/9728
- Description: D.Ing. (Mechanical Engineering) , The most common data sought during a wind tunnel test program are the forces and moments acting on an airframe (or any other test article). The most common source of this data is the internal strain gauge balance. Balances are six-degree-of-freedom force transducers that are required to be of small size and of high strength and stiffness. They are required to deliver the highest possible levels of accuracy and reliability. There is a focus in both the USA and in Europe on improving the performance of balances through collaborative research. This effort is aimed at materials, design, sensors, electronics, calibration systems and calibration analysis methods. Recent developments in the use of statistical methods, including modern design of experiments, have resulted in improved balance calibration models. The research focus on the calibration of six-component balances has moved to the determination of the uncertainty of measurements obtained in the wind tunnel. The application of conventional statistically-based approaches to the determination of the uncertainty of a balance measurement is proving problematic, and to some extent an impasse has been reached. The impasse is caused by the rapid expansion of the problem size when standard uncertainty determination approaches are used in a six-degree-of-freedom system that includes multiple least squares regression and iterative matrix solutions. This thesis describes how the uncertainty of loads reported by a six-component balance can be obtained by applying a direct simulation of the end-to-end data flow of a balance, from calibration through to installation, using a Monte Carlo simulation. It is postulated that knowledge of the error propagated into the test environment through the balance will influence the choice of calibration model, and that an improved model, compared to that determined by statistical methods without this knowledge, will be obtained. Statistical approaches to the determination of a balance calibration model are driven by obtaining the best curve-fit statistics possible. This is done by adding as many coefficients to the modelling polynomial as can be statistically defended. This thesis shows that the propagated error will significantly influence the choice of polynomial coefficients. In order to do this, a Performance Weighted Efficiency (PWE) parameter is defined. The PWE is a combination of the curve-fit statistic (the back-calculated error for the chosen polynomial), a value representing the overall prediction interval for the model (CI_rand), and a value representing the overall total propagated uncertainty of loads reported by the installed balance...
- Full Text:
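An end-to-end Monte Carlo sketch of the idea described above for a single-component "balance": noisy calibration data are refitted in every trial, each fitted model is applied to a fixed in-service reading, and the spread of predicted loads gives the propagated uncertainty. The polynomial form, noise level, and load range are invented; a real six-component balance fits a multivariate model with interaction terms and an iterative solution.

```python
import numpy as np

rng = np.random.default_rng(21)

true_coeffs = np.array([0.0, 2.0, 0.015])           # assumed true response: r = 2 L + 0.015 L^2
cal_loads = np.linspace(0.0, 100.0, 21)             # applied calibration loads [N]
noise_sd = 0.05                                     # bridge-output noise (calibration and test)
test_reading = np.polyval(true_coeffs[::-1], 60.0)  # a reading whose load we want to recover

n_trials = 5000
predicted = np.empty(n_trials)
for k in range(n_trials):
    readings = np.polyval(true_coeffs[::-1], cal_loads) + rng.normal(0.0, noise_sd, cal_loads.size)
    # Fit the inverse calibration model load = b0 + b1*r + b2*r^2 for this trial.
    X = np.column_stack([np.ones_like(readings), readings, readings**2])
    b, *_ = np.linalg.lstsq(X, cal_loads, rcond=None)
    r = test_reading + rng.normal(0.0, noise_sd)    # a noisy in-service reading
    predicted[k] = b[0] + b[1] * r + b[2] * r**2

print(f"predicted load: {predicted.mean():.2f} N ± {2 * predicted.std():.2f} N (approx. 95%)")
```

Adding or removing candidate polynomial terms and repeating the simulation shows how the propagated uncertainty, not only the curve-fit statistics, reacts to the chosen calibration model.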
Estimation of discretely sampled continuous diffusion processes with application to short-term interest rate models
- Authors: Van Appel, Vaughan
- Date: 2014-10-13
- Subjects: Estimation theory , Parameter estimation , Stochastic differential equations , Interest rates - Mathematical models , Monte Carlo method , Jackknife (Statistics) , Jump processes
- Type: Thesis
- Identifier: uj:12582 , http://hdl.handle.net/10210/12372
- Description: M.Sc. (Mathematical Statistics) , Stochastic Differential Equations (SDEs) are common in modern finance. In this dissertation we use SDEs to model a random phenomenon known as the short-term interest rate, where the explanatory power of a particular short-term interest rate model depends largely on how well the SDE describes the real data. The challenge we face is that in most cases the transition density functions of these models are unknown, and therefore we need to find reliable and accurate alternative estimation techniques. In this dissertation, we discuss estimation techniques for discretely sampled continuous diffusion processes that do not require the true transition density function to be known. Moreover, the reader is introduced to the following techniques: (i) continuous-time maximum likelihood estimation; (ii) discrete-time maximum likelihood estimation; and (iii) estimating functions. We show through a Monte Carlo simulation study that the parameter estimates obtained from these techniques provide a good approximation to the estimates obtained from the true transition density. We also show that the bias in the mean reversion parameter can be reduced by implementing the jackknife bias reduction technique. Furthermore, the data analysis carried out on South African interest rate data indicates strongly that single-factor models do not explain the variability in the short-term interest rate. This may indicate the possibility of distinct jumps in the South African interest rate market. Therefore, we leave the reader with the notion of incorporating jumps into an SDE framework.
- Full Text:
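A sketch of the discrete-time (Euler pseudo-likelihood) estimation idea for one of the simplest short-rate SDEs, the Vasicek model dr = kappa(theta - r)dt + sigma dW. Under the Euler discretisation the estimation reduces to an AR(1) regression, and the Monte Carlo loop exposes the well-known finite-sample bias in the mean-reversion parameter kappa. Parameter values and sampling frequency are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(17)
kappa, theta, sigma = 0.5, 0.06, 0.02      # assumed true Vasicek parameters
dt, n_obs, n_reps = 1 / 252, 1000, 500     # daily sampling, ~4 years, 500 Monte Carlo paths

def simulate_vasicek():
    r = np.empty(n_obs)
    r[0] = theta
    for t in range(1, n_obs):              # Euler-Maruyama discretisation
        r[t] = r[t-1] + kappa * (theta - r[t-1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return r

kappa_hat = np.empty(n_reps)
for i in range(n_reps):
    r = simulate_vasicek()
    # Euler pseudo-likelihood amounts to OLS of r[t] on r[t-1]:  r[t] = a + b r[t-1] + eps.
    x, y = r[:-1], r[1:]
    b, a = np.polyfit(x, y, 1)
    kappa_hat[i] = (1.0 - b) / dt          # map the AR(1) slope back to the mean-reversion speed

print(f"true kappa: {kappa},  mean estimate: {kappa_hat.mean():.3f}  "
      f"(bias: {kappa_hat.mean() - kappa:+.3f})")
```

The jackknife bias reduction mentioned in the abstract combines estimates from the full sample and from subsamples so that the leading bias term in kappa cancels.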