An evaluation of local two-frame dense stereo matching algorithms

**Authors:** Van der Merwe, Juliaan Werner **Date:** 2012-06-06 **Subjects:** Computer vision , Stereo vision , Depth perception , Image processing , Computer algorithms **Type:** Thesis **Identifier:** uj:2512 , http://hdl.handle.net/10210/4966 **Description:** M. Ing. , The process of extracting depth information from multiple two-dimensional images taken of the same scene is known as stereo vision. It is of central importance to the field of machine vision, as it is a low-level task required for many higher-level applications. The past few decades have witnessed the development of hundreds of different stereo vision algorithms, which has made it difficult to classify and compare the various approaches to the problem. In this research we provide an overview of the types of approaches that exist to solve the problem of stereo vision. We focus on a specific subset of algorithms, known as local stereo algorithms. Our goal is to critically analyse and compare a representative sample of local stereo algorithms in terms of both speed and accuracy. We also divide the algorithms into discrete interchangeable components and experiment to determine the effect that each of the alternative components has on an algorithm's speed and accuracy. We investigate further to quantify and analyse the effect of various design choices within specific algorithm components. Finally, we assemble the knowledge gained through the experimentation to compose and optimise a novel algorithm. The experimentation highlighted the fact that by far the most important component of a local stereo algorithm is the manner in which it aggregates matching costs. All of the top-performing local stereo algorithms dynamically define the shape of the windows over which the matching costs are aggregated, aiming to include only those pixels that are likely to be at the same depth as the centre pixel of the window.
Since the depth is unknown, the cost aggregation techniques use colour and proximity information to estimate whether pixels are at the same depth when defining the shape of the aggregation windows. Local stereo algorithms are usually less accurate than global methods, but they are intended to be faster and more parallelisable. These cost aggregation techniques result in very accurate depth estimates, but unfortunately they are also computationally very expensive. We believe the focus of local stereo algorithm development should be speed. Using the experimental results, we developed an algorithm that achieves accuracies in the same order of magnitude as the state-of-the-art algorithms while reducing the computation time by over 50%. **Full Text:**
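The colour-and-proximity weighting described in this abstract can be sketched in the style of adaptive support-weight aggregation (Yoon and Kweon). This is an illustrative reconstruction, not the thesis's actual implementation; the window size and the `gamma_c`/`gamma_s` falloff parameters are assumed values:

```python
import numpy as np

def adaptive_weights(window, gamma_c=10.0, gamma_s=7.0):
    """Support weights for a square grayscale window: pixels similar in
    intensity to the centre pixel and spatially close to it get weight ~1,
    approximating 'probably at the same depth as the centre'."""
    r = window.shape[0] // 2
    centre = window[r, r]
    dc = np.abs(window - centre)            # intensity (colour) distance
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    ds = np.hypot(ys, xs)                   # spatial distance from centre
    return np.exp(-dc / gamma_c - ds / gamma_s)

def aggregate_cost(left_win, right_win):
    """Weighted sum of absolute-difference matching costs over one window
    pair, normalised by the total weight."""
    w = adaptive_weights(left_win) * adaptive_weights(right_win)
    cost = np.abs(left_win - right_win)
    return float((w * cost).sum() / w.sum())
```

Because the weights must be recomputed for every window position and disparity candidate, this is exactly the accuracy-versus-speed trade-off the dissertation targets.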

An implementation of a branch-and-cut algorithm for the travelling salesman problem

**Authors:** Geldenhuys, Christel Erna **Date:** 2012-09-11 **Subjects:** Traveling-salesman problem , Computer algorithms **Type:** Thesis **Identifier:** uj:9939 , http://hdl.handle.net/10210/7337 **Description:** M.Sc. (Computer Science) , The Travelling Salesman Problem (TSP) comprises the following: a salesman is required, by definition of his job description, to visit a set of cities. He leaves from a base city, visits each of the other cities exactly once and then returns to his base city. In travelling between city pairs, he incurs certain costs, so it is his constant endeavour to find the cheapest route. A combinatorial optimisation problem involves the selection of the best alternative from a finite set of alternatives, and the TSP is one of the best known and most studied combinatorial optimisation problems. At present, the branch-and-cut algorithm of Padberg and Rinaldi is the most successful algorithm for solving large-scale instances of the TSP. Padberg and Rinaldi used a general LP solver to solve the subproblem P(L, F0, F1), where L is a set of side constraints, F0 is the set of variables fixed at 0 and F1 is the set of variables fixed at 1. As noted by Smith and Leenen, however, this subproblem has a special structure, which they exploited to solve the subproblem more efficiently. In this dissertation, we present improvements on Leenen's contribution. For this purpose, we compared our results with those of a commercial LP solver and found that our program on average executed in half the time of the commercial LP solver. The problem P(L, F0, F1) has to be solved many times in the branch-and-cut algorithm before a solution to the TSP is obtained. For large-scale instances of the TSP, a substantial portion of the execution time of the entire branch-and-cut algorithm is spent in the linear-program optimiser.
The branch-and-cut algorithm could, therefore, be potentially more efficient if the special structure were utilised. We constructed a full implementation of the branch-and-cut algorithm, utilising the special structure. We did not, however, implement all the refinements discussed by Padberg and Rinaldi, who used four classes of TSP constraints: subtour elimination, 2-matching, comb and clique-tree inequalities. Owing to time constraints and the complexity of identifying the other classes of constraints, we decided to implement subtour elimination constraints only. We subsequently compared our results with those of Padberg and Rinaldi, and realised that we had underestimated the importance of using more classes of constraints: our search trees became extremely large. We realised, therefore, that more classes of constraints are essential for solving large instances of the TSP. Even though numerical errors still posed a problem, they might disappear if the size of the search tree were reduced to that obtained by Padberg and Rinaldi. **Full Text:**
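The subtour elimination constraints this abstract singles out forbid solutions that decompose into disjoint cycles: for any proper subset S of cities, at most |S| - 1 of the edges inside S may be selected. For integer solutions, violated constraints can be found by taking the connected components of the edges currently set to 1; the sketch below illustrates only that integer case (Padberg and Rinaldi's actual separation routine also handles fractional LP solutions, via minimum cuts):

```python
def find_subtours(n, edges):
    """Connected components of the support graph on n cities, via
    union-find with path halving. Each component that does not span
    all n cities yields a violated subtour elimination constraint:
    sum of x_e over edges inside the component <= |component| - 1."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), []).append(v)
    return [sorted(c) for c in comps.values()]
```

If this returns more than one component, each component is a set S whose constraint is added as a side constraint before re-solving, which is precisely the cut-generation loop the abstract describes.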

Balancing of non-binary sequences using Gray code prefixes

**Authors:** Mambou, Elie Ngomseu **Date:** 2016 **Subjects:** Error-correcting codes (Information theory) , Gray codes , Computer algorithms , Adaptive control systems **Language:** English **Type:** Masters (Thesis) **Identifier:** http://hdl.handle.net/10210/225215 , uj:22741 **Description:** Abstract: Please refer to full text to view abstract , M.Ing. **Full Text:**

Contributions towards an implementation of a branch-and-cut algorithm for the travelling salesman problem

**Authors:** Leenen, Louise **Date:** 2014-09-30 **Subjects:** Computer algorithms , Traveling-salesman problem **Type:** Thesis **Identifier:** http://ujcontent.uj.ac.za8080/10210/374677 , uj:12450 , http://hdl.handle.net/10210/12237 **Description:** M.Sc. (Computer Science) , The STSP (symmetric travelling salesman problem) involves finding the cheapest tour through a number of cities. It is a difficult problem, and until recently algorithms for the TSP could not find the optimal tour in a reasonable time if the number of cities exceeded 100. In 1987, Padberg and Rinaldi published their computational experience with a new branch-and-cut algorithm. They were able to solve problems with up to 2392 cities on a CDC CYBER 205 supercomputer. Padberg and Rinaldi used a standard LP (linear programming) solver in their implementation of the branch-and-cut algorithm. The algorithm first solves the continuous 2-matching problem (RMP2) using the LP solver. It then repeatedly identifies constraints of the TSP which are not satisfied by the current RMP2 solution and solves RMP2 with the identified TSP constraints as side constraints. However, RMP2 is a linear programming problem with a very special structure, which we exploited in an implementation of the primal simplex algorithm for RMP2. Our computational experience with this implementation indicates that it is almost 400 times faster than a commercial LP solver on problems with 200 cities. We developed an implementation of the dual simplex algorithm which exploits the special structure of both RMP2 and the side constraints identified in the branch-and-cut algorithm. An existing set of side constraints for solving a 48-city problem was used to test our implementation of the dual simplex algorithm. We implemented the procedure described by Padberg and Rinaldi to identify subtour elimination side constraints (one type of side constraint) for the 48-city problem.
Our implementation of the identification procedure was then used in conjunction with our implementation of the dual simplex algorithm. The maximum flow problem has to be solved in the algorithm for identification of subtour elimination constraints; we implemented the Sleator-Tarjan algorithm for this purpose. **Full Text:**

Development of effective cuckoo search algorithms for optimisation purposes

**Authors:** Mareli, Mahlaku **Date:** 2018 **Subjects:** Computational intelligence , Computer algorithms , Mathematical optimization - Data processing , Computer networks **Language:** English **Type:** Doctoral (Thesis) **Identifier:** http://hdl.handle.net/10210/295009 , uj:32110 **Description:** Abstract: Optimisation, the process of finding either a maximum or a minimum of the problem at hand, plays a key role in several disciplines, including engineering and science. In this thesis, different Cuckoo Search algorithms are developed for effective optimisation purposes. These algorithms are tested on ten mathematical test functions and then used to optimise a back-propagation neural network for short-term electricity load forecasting on South African data, with the focus on the City of Johannesburg. The original Cuckoo Search algorithm derives its random walk step sizes from the Lévy probability distribution, and its switching parameter between local and global random walks is constant. However, other probability distributions such as the Cauchy, Gaussian and Gamma distributions have also been used, and the switching parameter can be changed dynamically. The first contribution of the thesis is the development of a new Cuckoo Search algorithm whose random step sizes are derived from the Pareto probability distribution function. This new Pareto-based Cuckoo Search algorithm is tested on ten benchmark test functions together with other Cuckoo Search algorithms using step sizes derived from the Gaussian, Cauchy, Gamma and Lévy probability density functions. Under confidence interval analysis, the Lévy-based Cuckoo Search algorithm outperforms the Pareto-based one; this superiority, however, is due to a single test function on which the Lévy-based Cuckoo Search performed particularly well. Moreover, the Pareto-based Cuckoo Search shows superior performance in comparison to the other algorithms, leading in seven of the ten test functions when tested for convergence.
The second contribution is the implementation of Cuckoo Search algorithms with dynamically increasing switching parameters between local and global random walks. The first improvement to the Cuckoo Search algorithm is a linearly increasing switching parameter, the second a power-law increasing switching parameter and the third an exponentially increasing switching parameter. When tested on the benchmark test functions, the exponentially increasing Cuckoo Search algorithm outperforms the other algorithms, obtaining the longest confidence interval of 4.50566, while the next-best algorithm (the original Cuckoo Search) obtains an interval of 3.9699. Moreover, using convergence plots, both... , D.Ing. (Electrical and Electronic Engineering) **Full Text:**
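Two of the ingredients this abstract names can be sketched briefly: Lévy-distributed step sizes (via Mantegna's algorithm, the usual way to sample them in Cuckoo Search) and an exponentially increasing switching parameter. The start and end values `pa0` and `pa_end` are assumed for illustration; the thesis's reported figures come from its own parameterisation:

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed random-walk step, sampled with Mantegna's
    algorithm: the ratio u / |v|^(1/beta) of two Gaussians has
    Lévy-stable heavy tails for 1 < beta <= 2."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def switching_parameter(t, t_max, pa0=0.25, pa_end=0.5):
    """Exponentially increasing switch probability between local and
    global random walks: pa(0) = pa0 and pa(t_max) = pa_end."""
    return pa0 * (pa_end / pa0) ** (t / t_max)
```

At each iteration `t` of the search, a nest is abandoned (global random walk) with probability `switching_parameter(t, t_max)`; otherwise a local Lévy-flight step is taken.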

Improved machine learning algorithms with application to medical diagnosis

**Authors:** Mienye, Ibomoiye Domor **Date:** 2021 **Subjects:** Machine learning , Computer algorithms , Diagnosis , Heart - Diseases **Language:** English **Type:** Doctoral (Thesis) **Identifier:** http://hdl.handle.net/10210/479435 , uj:43367 **Description:** Abstract: Medical data generated in hospitals are an increasing source of information for automatic medical diagnosis. These data contain latent patterns and correlations that can result in better diagnosis when appropriately processed. Most applications of machine learning (ML) to these patient records have used the ML algorithms directly, which usually results in suboptimal performance because most medical datasets are quite imbalanced. Labelling the enormous volumes of medical data is also a challenging and expensive task. To solve these problems, recent research has focused on the development of improved ML methods, mainly preprocessing pipelines and feature learning methods. This thesis presents four machine learning approaches aimed at improving medical diagnosis performance using publicly available datasets... , D.Ing. (Electrical and Electronic Engineering) **Full Text:**
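One standard remedy for the class imbalance this abstract highlights is to reweight training examples inversely to class frequency. The "balanced" heuristic below is a common generic choice, shown as an illustration of the idea rather than as any of the thesis's four specific approaches:

```python
from collections import Counter

def balanced_class_weights(labels):
    """The 'balanced' reweighting heuristic: w_c = n / (k * n_c),
    where n is the number of samples, k the number of classes and
    n_c the count of class c. Rare classes (e.g. positive diagnoses)
    receive proportionally larger weights in the training loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}
```

With a 90/10 split, the minority class gets weight 5.0 against about 0.56 for the majority, so a classifier can no longer minimise its loss by always predicting the majority class.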

Improved steganography through the strategic placement of information and the optimization thereof through the use of evolutionary algorithms

**Authors:** Cilliers, Michael **Date:** 2015 **Subjects:** Cryptography , Computer algorithms , Genetic algorithms , Image processing - Digital techniques , Evolutionary computation **Language:** English **Type:** Masters (Thesis) **Identifier:** http://hdl.handle.net/10210/84521 , uj:19232 **Description:** Abstract: The development of steganography techniques does not occur in isolation; there is an arms race between steganography and steganalysis techniques, so a steganography technique that could adapt when needed would be beneficial. This research begins with a literature study exploring existing methods. Both steganography and steganalysis approaches are covered in order to get an overview of the environment, and optimization methods are examined to find a suitable method of optimizing the developed algorithm. The information gathered in the literature study was then used to develop a steganography algorithm that aims to decrease detectability through the strategic placement of information. The algorithm is designed to allow for optimization: a genetic algorithm is implemented to help optimize the embedding of the information in a specific environment. This should allow the algorithm to be re-optimized when new steganalysis techniques are developed, and thus to remain relevant as steganalysis advances. The developed algorithms show that the placement of information in the image has an effect on its detectability; the developed algorithm even outperforms the random-distribution LSB technique. The optimization that was implemented also produced positive results. The dissertation also includes the development of a novel evolutionary algorithm that drew its inspiration from the aphid life cycle. By switching between reproduction strategies, the algorithm is able to adjust the balance between exploration and exploitation. , M.Sc. (Computer Science)**Full Text:**
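The random-distribution LSB technique that the dissertation uses as its baseline can be sketched as follows: a seeded PRNG picks the pixel positions, and each chosen pixel's least significant bit carries one message bit. The flat-list pixel representation and the shared seed are illustrative assumptions, not the dissertation's implementation:

```python
import random

def embed_lsb(pixels, bits, seed):
    """Random-placement LSB embedding: a PRNG seeded with a shared
    secret selects which pixels carry the message bits."""
    out = list(pixels)
    positions = random.Random(seed).sample(range(len(out)), len(bits))
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | bit  # overwrite the LSB only
    return out

def extract_lsb(pixels, nbits, seed):
    """Recover the bits by regenerating the same positions from the seed."""
    positions = random.Random(seed).sample(range(len(pixels)), nbits)
    return [pixels[pos] & 1 for pos in positions]
```

The dissertation's contribution replaces this uniform-random placement with strategically chosen positions, optimised by a genetic algorithm to lower detectability.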

Motion planning algorithms for autonomous robots in static and dynamic environments

**Authors:** Mkhize, Zanele G. N. **Date:** 2012-08-01 **Subjects:** Computer algorithms , Robots - Dynamics , Robots - Programming , Robots - Design and construction , Robots - Simulation methods **Type:** Thesis **Identifier:** uj:8892 , http://hdl.handle.net/10210/5364 **Description:** M.Ing. , The objective of this research is to present motion planning methods for an autonomous robot. Motion planning is one of the most important issues in robotics: the goal is to find a path from a starting position to a goal position while avoiding obstacles in the environment, which can be static or dynamic. Motion planning problems can be addressed using either classical approaches or obstacle-avoidance approaches. The classical approaches discussed in this work are the Voronoi diagram, visibility graph, cell decomposition and potential field methods. The obstacle-avoidance approaches discussed are neural networks, Bug algorithms, the Dynamic Window Approach, the vector field histogram, the bubble band technique and curvature velocity techniques. In this dissertation, both simulation and experimental results are presented. In the simulation, we address the motion planning issues using points extracted from a map; the algorithms used are the Voronoi algorithm, a Hopfield neural network, the potential field method and the A* search algorithm. The simulation results show that the approaches used are effective and can be applied to real robots to solve motion planning problems. In the experiment, the Dynamic Window Approach (DWA) is used for obstacle avoidance, and a Pioneer robot explores the environment using the open-source ROS (Robot Operating System). The experiment showed that DWA can be used to avoid obstacles in real time. Keywords: motion planning, autonomous robot, optimal path problems, environment, search algorithm, classical approaches, obstacle-avoidance approaches, exploration.**Full Text:**
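Of the planners this abstract lists, A* search is the most compact to illustrate. A minimal grid version might look like the sketch below, which assumes a 4-connected occupancy grid with unit step costs and a Manhattan-distance heuristic (the dissertation's map-based setup will differ in detail):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None if no path."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    frontier = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None
```

Because the heuristic never overestimates the true cost, the first time the goal is popped the path is optimal, which is why A* suits the shortest-path planning problems studied here.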

Optimising the frequency assignment problem utilizing particle swarm optimisation

**Authors:**Bezuidenhout, William**Date:**2014-10-08**Subjects:**Swarm intelligence , Cell phone systems , Computer algorithms , Radio frequency allocation**Type:**Thesis**Identifier:**uj:12550 , http://hdl.handle.net/10210/12342**Description:**M.Sc. (Information Technology) , A new particle swarm optimisation (PSO) algorithm that produces solutions to the fixed spectrum frequency assignment problem (FS-FAP) is presented. Solutions to the FS-FAP are used to allocate frequencies in a mobile telecommunications network and must have low interference. The standard PSO algorithm's velocity method and global selection are ill-suited to the frequency assignment problem (FAP). Therefore, using the standard PSO algorithm as a base, new techniques are developed to allow it to operate on the FAP. The new techniques include two velocity methods and three global selection schemes. This study presents the results of the algorithm operating on the Siemens set of COST 259 problems and shows that applying PSO to the FAP is viable.**Full Text:**
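The thesis takes the standard (continuous) PSO as its base and replaces the velocity and global-selection steps to handle the discrete FAP. As background, here is a minimal sketch of that standard base algorithm minimising a real-valued function; the search bounds, coefficient values, and function names are illustrative assumptions, not the thesis's FAP-specific variants.

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard PSO minimising f over R^dim within assumed bounds [-5, 5].
    Returns (best position, best value)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (personal best) + social pull (global best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The velocity update and the choice of `gbest` are exactly the two components the abstract says were redesigned for the discrete FAP (two new velocity methods, three global selection schemes).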

Performance evaluation of transmission control protocol network congestion control algorithms

**Authors:**Dzivhani, Mulalo**Date:**2018**Subjects:**TCP/IP (Computer network protocol) , Computer algorithms , Telecommunication - Traffic - Management**Language:**English**Type:**Masters (Thesis)**Identifier:**http://hdl.handle.net/10210/480041 , uj:43442**Description:**Abstract: Transmission Control Protocol (TCP) guarantees reliable packet-based data communication from the sender to the receiver, and sequences segments to ensure data is delivered in order to an application. TCP carries most of the Internet traffic, so the Internet's network performance is largely based on the performance of TCP. TCP algorithms are generally implemented end-to-end alongside network algorithms, which can be passive or active queue management (AQM). TCP algorithms differ in the way they use bandwidth and in the way they implement a growth policy for the congestion window. In routers, network algorithms are tasked with queue management. Passive network algorithms, such as Drop Tail, simply drop packets when the queue is full. AQMs, such as CoDel and RED, proactively avoid congestion by dropping packets before the buffer is full, based on defined indicators. The TCP family has not been able to achieve consistently high performance in all scenarios, especially in high-delay networks, and it is affected by network buffer management mechanisms. In this study, we investigate the performance of combinations of TCP end-to-end algorithms and network buffer management algorithms. Although several previous studies have evaluated the performance of TCP congestion control algorithms, there have not been enough studies on the effect of network algorithms on various TCP end-to-end algorithms... , M.Ing. (Electrical and Electronic Engineering Science)**Full Text:**
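The contrast the abstract draws between passive and active queue management can be sketched as two drop decisions. The sketch below shows Drop Tail next to a simplified RED drop-probability curve; the threshold and probability values are illustrative assumptions (RED as actually specified also uses an exponentially weighted moving average of the queue length and per-packet marking counts, omitted here).

```python
def drop_tail(queue_len, capacity):
    """Passive management: drop only once the buffer is already full."""
    return queue_len >= capacity

def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    """Simplified RED: no drops below min_th, forced drop at or above
    max_th, and a linearly increasing drop probability in between --
    shedding load *before* the buffer fills, as the abstract describes."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

The early, probabilistic drops are what let an AQM signal congestion to TCP senders before the queue overflows, which is why the thesis studies TCP algorithms in combination with the queue-management scheme rather than in isolation.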

The extraction of quantitative mineralogical parameters from X-ray micro-tomography data using image processing techniques in three dimensions

**Authors:**Shipman, William John**Date:**2017**Subjects:**Computer algorithms , Computer graphics , Machine learning , Artificial intelligence**Language:**English**Type:**Doctoral (Thesis)**Identifier:**http://hdl.handle.net/10210/263159 , uj:27815**Description:**D.Ing. (Electrical Engineering) , Abstract: Process Mineralogy is the application of mineralogical techniques to the exploration of ore deposits and the design and optimisation of mineral processing flowsheets. Samples can be drill cores, rocks and milled particles, to give a few examples. X-ray micro-tomography has emerged as a complementary technique to the existing two-dimensional imaging modalities and bulk mineralogical methods. The applications of analysing X-ray micro-tomography scans include analysing packed particle beds to determine particle-size distributions, mineral exposure and liberation, as well as analysing the pore network within ores targeted by the oil and gas industry. X-ray micro-tomography suffers from several artefacts, including beam hardening, blurring and streaks, of which beam hardening and streaks are particularly problematic and common when scanning metal-bearing ores. A fundamental step in analysing a tomogram is to segment the different groups of minerals that are present within the sample. This is necessary to measure mineral grain properties and as a precursor to segmenting and analysing particles in a crushed or milled sample. In order for X-ray micro-tomography to provide accurate measurements, this first step of segmenting minerals must be performed accurately. Machine learning has been used in image processing for a variety of applications, including the analysis of optical microscopy images for medical purposes, and recently the analysis of tomograms. The primary focus of this work is the application of machine learning algorithms to the segmentation of minerals, as well as a means for measuring the accuracy of those algorithms.
Four main problem areas were identified in this work. The first is the need for a suitable algorithm for filtering tomograms to reduce the quantity of noise that is present while minimising the additional blurring of the edges of mineral grains. The second problem statement focuses specifically on machine learning, while the third problem statement is directed at the description of voxels by means of several features. The fourth problem area is measuring the accuracy of any measurements made on the segmented tomograms. Without an analysis of the measurement accuracy, X-ray micro-tomography will not be accepted by the industry at large. This work demonstrates a method by which back-scattered electron images from a scanning electron microscope may be aligned to a tomogram, and used to quantify the accuracy of mineral segmentation algorithms...**Full Text:**
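The first problem area, suppressing noise without blurring mineral-grain edges, is the classic trade-off that edge-preserving filters address. As a baseline illustration only (the thesis evaluates its own filtering algorithms, not this one), here is a plain median filter on a 2-D tomogram slice, which removes impulse noise while keeping step edges sharper than a mean filter would:

```python
def median_filter(img, radius=1):
    """Median filter on a 2-D slice (list of equal-length rows of numbers).
    radius=1 gives a 3x3 window; border pixels use the partial window
    that fits inside the image. Returns a new filtered slice."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # gather the neighbourhood, clipped to the image bounds
            window = [img[rr][cc]
                      for rr in range(max(0, r - radius), min(h, r + radius + 1))
                      for cc in range(max(0, c - radius), min(w, c + radius + 1))]
            window.sort()
            out[r][c] = window[len(window) // 2]
    return out
```

A single noisy voxel inside a uniform grain is replaced by the neighbourhood median and vanishes, whereas a genuine grain boundary, where at least half the window lies on one side, survives; that behaviour is why median-style filters are a common reference point for the edge-preservation requirement stated above.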

The parallel solution of some discrete optimization problems

**Authors:**Loots, Wim**Date:**2014-10-07**Subjects:**Mathematical optimization - Data processing , Computer algorithms**Type:**Thesis**Identifier:**uj:12531 , http://hdl.handle.net/10210/12325**Description:**M.Sc. (Computer Science) , Please refer to full text to view abstract**Full Text:**
