A modular agent-based communications framework for autonomous vehicles in a simulated urban environment
- Authors: Chhaya, Meraj Mohamed Anis
- Date: 2015
- Subjects: Autonomous vehicles , Automobiles - Automatic control , Intelligent agents (Computer software) , Multiagent systems , Artificial intelligence
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/84586 , uj:19239
- Description: Abstract: Autonomous vehicles, also known as self-driving cars, are vehicles that can travel on public roads to destinations with minimal to no human interaction. Such cars can respond to traffic-related incidents faster and more precisely than human beings, thus potentially reducing the number of traffic accidents and the resulting pedestrian injuries and even fatalities. Autonomous vehicles have been a major focus of artificial intelligence research over the past few years, with major developments contributed by Google, Bosch and leading companies in the automobile industry. The study presented in the dissertation explores the use of Intelligent Agents as a computer science abstraction that encapsulates the various components of an autonomous vehicle, in order to promote component modularity and to allow the inclusion of newer technologies that could further improve the effectiveness of autonomous vehicles. A recent advancement in the field of autonomous vehicles is the use of inter-vehicle communications, which supplements the vehicle's array of sensors when they fail or cannot produce sufficient data for the vehicle to make a decision. The agent model proposed in the dissertation places paramount importance on the communications mechanism, incorporating it into the agent architecture in order to produce an autonomous vehicle model that is safer and more effective than current solutions. Given research constraints, the autonomous vehicle agent model was deployed in a simulated 3D urban traffic environment, where it was tested in a number of scenarios in which a vehicle's sensors failed or provided insufficient data, endangering the vehicle's passengers, the passengers of other vehicles and pedestrians in the simulated environment. The results of the tests demonstrated that an inter-vehicle communications mechanism, even with limited transmission range, effectively complements the existing modules of an autonomous vehicle, and is especially useful when one of the modules fails. , M.Sc. (Information Technology)
- Full Text:
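To make the fallback behaviour described in this abstract concrete, the following is a minimal Python sketch, not the dissertation's actual agent architecture: a toy vehicle agent that prefers its own sensor reading and, when that sensor fails, falls back to reports broadcast by peers within a limited transmission range. All names, data values and the 50 m range are illustrative assumptions.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class VehicleAgent:
    """Toy autonomous-vehicle agent with a V2V fallback channel."""
    name: str
    position: tuple           # (x, y) in metres
    comms_range: float = 50.0  # assumed limited transmission range

    def sense_obstacle(self) -> Optional[float]:
        """Stand-in for the on-board sensor; None models a sensor failure."""
        return None  # simulate the failure scenarios tested in the dissertation

    def nearest_report(self, broadcasts):
        """Pick the report from the closest peer within communications range."""
        in_range = [
            b for b in broadcasts
            if math.dist(self.position, b["position"]) <= self.comms_range
        ]
        if not in_range:
            return None
        return min(in_range, key=lambda b: math.dist(self.position, b["position"]))

    def perceived_obstacle(self, broadcasts) -> Optional[float]:
        """Prefer local sensing; fall back to V2V reports when the sensor fails."""
        local = self.sense_obstacle()
        if local is not None:
            return local
        report = self.nearest_report(broadcasts)
        return None if report is None else report["obstacle_distance"]

# Example: the ego vehicle's sensor has failed, but a peer 30 m away reports an obstacle.
ego = VehicleAgent("ego", position=(0.0, 0.0))
peers = [{"position": (30.0, 0.0), "obstacle_distance": 12.5}]
print(ego.perceived_obstacle(peers))  # -> 12.5
```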
South African inflation forecasting using genetically optimised neural networks
- Authors: Molwantoa, Lebogang
- Date: 2014-03-03
- Subjects: Neural networks (Computer science) , Artificial intelligence , Inflation (Finance) - Forecasting , Genetic algorithms , Inflation (Finance) - South Africa
- Type: Thesis
- Identifier: uj:4219 , http://hdl.handle.net/10210/9577
- Description: M.Com. (Financial Economics) , Forecasting inflation is an important concern for economists and businesses alike throughout the world. Despite the relative success of macroeconomic forecasting models in forecasting inflation, there is potential to improve these models to account for nonlinear relationships between inflation and the chosen independent variables. Artificial neural networks (ANNs) have found increased applicability as a nonlinear forecasting tool that accounts for nonlinearity found in data. In this study, we investigate the ability of genetically optimised neural networks to forecast South African inflation. The results were compared to economic forecasts obtained from traditional econometric models as well as macroeconomic structural models. The results obtained show that genetically optimised neural networks have some potential as forecasting tools. Their biggest advantage over traditional forecasting techniques is that they do not impose the restriction of linearity on the data to be forecast.
- Full Text:
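As a rough illustration of the genetic optimisation described above, assuming scikit-learn and synthetic data rather than the thesis's South African macroeconomic series, the sketch below evolves a small population of (hidden units, learning rate) genomes scored by validation error. It uses mutation and selection only; a full GA would add crossover.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for lagged macroeconomic inputs and an inflation target.
X = rng.normal(size=(200, 4))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]

def fitness(genome):
    """Validation MSE of an MLP built from a (hidden_units, learning_rate) genome."""
    hidden, lr = genome
    model = MLPRegressor(hidden_layer_sizes=(int(hidden),),
                         learning_rate_init=float(lr),
                         max_iter=500, random_state=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_va, model.predict(X_va))

def mutate(genome):
    """Perturb the architecture and learning rate of a surviving genome."""
    hidden, lr = genome
    return (max(2, int(hidden + rng.integers(-2, 3))),
            float(np.clip(lr * rng.uniform(0.5, 2.0), 1e-4, 1e-1)))

# Plain generational GA: keep the fitter half, refill with mutated copies.
population = [(int(rng.integers(2, 20)), float(rng.uniform(1e-4, 1e-1)))
              for _ in range(8)]
for generation in range(5):
    ranked = sorted(population, key=fitness)
    survivors = ranked[: len(ranked) // 2]
    population = survivors + [mutate(g) for g in survivors]

print("best genome (hidden units, learning rate):", sorted(population, key=fitness)[0])
```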
The application of artificial intelligence within information security.
- Authors: De Ru, Willem Gerhardus
- Date: 2012-08-17
- Subjects: Artificial intelligence , Computer security , Fuzzy logic , Information resources management , Electronic data processing departments - Security measures
- Type: Thesis
- Identifier: uj:2641 , http://hdl.handle.net/10210/6087
- Description: D.Phil. , Computer-based information systems will probably always have to contend with security issues. Much research has already gone into the field of information security. These research results have yielded some very sophisticated and effective security mechanisms and procedures. However, due to the ever-increasing sophistication of criminals, combined with the ever-changing and evolving information technology environment, some limitations still exist within the field of information security. Recent years have seen the proliferation of products embracing so-called artificial intelligence technologies. These products are in fields as diverse as engineering, business and medicine. The successes achieved in these fields pose the question whether artificial intelligence has a role to play within the field of information security. This thesis discusses limitations within information security and proposes ways in which artificial intelligence can be effectively applied to address these limitations. Specifically, the fields of authentication and risk analysis are identified as research fields where artificial intelligence has much to offer. These fields are explored in the context of their limitations and the ways in which artificial intelligence can be applied to address them. This thesis identifies two mainstream approaches in the attainment of artificial intelligence, referred to as the "traditional" approach and the "non-traditional" approach. The traditional approach is based on symbolic processing, as opposed to the non-traditional approach, which is based on an abstraction of human reasoning. A representative technology from each of these mainstream approaches is selected to research its applicability within information security. Actual working prototypes of artificial intelligence techniques were developed to substantiate the results obtained in this research.
- Full Text:
Chaotic neural network swarm optimization
- Authors: Sun, Y-X
- Date: 2007
- Subjects: Artificial intelligence , Chaos theory , Computer simulation , Convergence of numerical methods , Global optimization , Hopfield neural networks
- Language: Chinese
- Type: Article
- Identifier: http://hdl.handle.net/10210/18234 , uj:15975 , ISSN: 1671-5497 , Citation: Sun, Y-X. et al. 2007. Chaotic neural network swarm optimization. Engineering village, 37(9):113-116.
- Description: The single-particle structure of particle swarm optimization was analyzed and found to have some properties of a Chaos-Hopfield neural network. A new model of particle swarm optimization is presented. The model is a deterministic Chaos-Hopfield neural network swarm, in contrast to the existing model with stochastic parameters. Its search orbits show an evolution process of inverse period bifurcation from chaos to periodic orbits and then to a sink. In this evolution process, the initial chaos-like search expands the search scope, and the inverse period bifurcation determines the stability and convergence of the search. Moreover, the convergence is analyzed theoretically. Finally, a numerical simulation shows the basic procedure of the proposed model and verifies its efficiency.
- Full Text:
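The sketch below is a loose, hypothetical rendition of the idea in this abstract, not the authors' exact model: a PSO variant whose coefficients come from a deterministic logistic map whose parameter is annealed from the chaotic regime toward a stable fixed point, mimicking the chaos-to-periodic-orbit-to-sink evolution. The objective function, coefficients and annealing schedule are all illustrative assumptions.

```python
import numpy as np

def sphere(x):
    """Test objective: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

dim, n_particles, n_iter = 2, 10, 200
rng = np.random.default_rng(1)

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

z = np.full(n_particles, 0.37)  # logistic-map state, one per particle
for t in range(n_iter):
    # Anneal the logistic-map parameter from the chaotic regime (mu = 4)
    # back through the period-doubling cascade toward a stable fixed point,
    # loosely mirroring the paper's chaos -> periodic orbits -> sink evolution.
    mu = 4.0 - 1.5 * t / (n_iter - 1)
    z = mu * z * (1.0 - z)
    for i in range(n_particles):
        c = z[i]  # deterministic coefficient in place of PSO's random numbers
        vel[i] = (0.7 * vel[i]
                  + 1.5 * c * (pbest[i] - pos[i])
                  + 1.5 * (1.0 - c) * (gbest - pos[i]))
        pos[i] += vel[i]
        val = sphere(pos[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i].copy(), val
    gbest = pbest[pbest_val.argmin()].copy()

print("best value found:", sphere(gbest))
```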
Reusable component oriented agents: a new architecture
- Authors: Boshoff, Willem Hendrik
- Date: 2008-05-13
- Subjects: Artificial intelligence , Intelligent agents (Computer software) , Plug-ins (Computer programs)
- Type: Thesis
- Identifier: uj:7094 , http://hdl.handle.net/10210/368
- Description: Researchers in artificial intelligence and agent technologies are presented with a massive array of technologies that they might use for their research projects. It is difficult for researchers to test their theories effectively in the field. It takes a great deal of time to develop the platform on which a newly created agent will be tested, with little or no time left for troubleshooting and the investigation of further solutions. Every time a new technique or agent is researched, the agent has to be redeveloped from the ground up. This makes it difficult for researchers to compare their own theories with previously developed components. With the wide range of technologies and techniques available, there is no easy way to make effective use of the various components, as each tool uses different technologies that cannot be combined easily. This dissertation outlines the new plug-in oriented agent architecture (POAA) and describes the agents that use it. POAA agents make extensive use of functional and controller-based plug-ins in order to extend the functionality and behaviour of the agent. The architecture was designed to facilitate machine learning and agent mobility techniques. POAA agents are created by mounting newly created dynamic plug-in components into the static structure of the agent. The static structure of the agent serves as the basis of agent functionality and as the controller for the agent's life cycle. The static and dynamic components of the POAA agent interact with each other in order to perform the agent's required tasks. The use of plug-ins will greatly improve the effectiveness of researchers, as only a single, standard architecture will exist. Researchers need only design and develop the plug-in required for their specific agent to function as desired. This will also facilitate the comparison of various tools and methods, as only the components being reviewed need to be interchanged to measure system performance. The use of different plug-in architectures is also investigated, including deciding whether the plug-in base will be configured at application run-time or at compilation time. This dissertation focuses on techniques that facilitate machine learning and agent mobility. For these purposes, extensive use is made of the machine learning tool WEKA, developed by the University of Waikato in New Zealand [Wi00]. The use of Java in the prototype also facilitates the cross-platform capability of the proposed agents. , Prof. E.M. Ehlers
- Full Text:
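A minimal sketch of the plug-in idea follows, with hypothetical class names (the POAA prototype itself is written in Java and is not reproduced here): a static agent structure that owns the life cycle and delegates behaviour to dynamically mounted plug-ins.

```python
from abc import ABC, abstractmethod

class Plugin(ABC):
    """Dynamic component mounted into the agent's static structure."""
    @abstractmethod
    def execute(self, percept):
        ...

class LoggingPlugin(Plugin):
    """Trivial example plug-in: reports what the agent perceived."""
    def execute(self, percept):
        return f"observed {percept!r}"

class Agent:
    """Static structure: owns the life cycle and delegates work to plug-ins."""
    def __init__(self):
        self._plugins = []

    def mount(self, plugin: Plugin):
        self._plugins.append(plugin)  # configured at run time, not compile time

    def step(self, percept):
        return [p.execute(percept) for p in self._plugins]

agent = Agent()
agent.mount(LoggingPlugin())
print(agent.step("obstacle ahead"))
```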
The extraction of quantitative mineralogical parameters from X-ray micro-tomography data using image processing techniques in three dimensions
- Authors: Shipman, William John
- Date: 2017
- Subjects: Computer algorithms , Computer graphics , Machine learning , Artificial intelligence
- Language: English
- Type: Doctoral (Thesis)
- Identifier: http://hdl.handle.net/10210/263159 , uj:27815
- Description: D.Ing. (Electrical Engineering) , Abstract: Process Mineralogy is the application of mineralogical techniques to the exploration of ore deposits and the design and optimisation of mineral processing flowsheets. Samples can be drill cores, rocks and milled particles, to give a few examples. X-ray micro-tomography has emerged as a complementary technique to the existing two-dimensional imaging modalities and bulk mineralogical methods. The applications of analysing X-ray micro-tomography scans include analysing packed particle beds to determine particle-size distributions, mineral exposure and liberation, as well as analysing the pore network within ores targeted by the oil and gas industry. X-ray micro-tomography suffers from several artefacts, including beam hardening, blurring and streaks, of which beam hardening and streaks are particularly problematic and common when scanning metal-bearing ores. A fundamental step in analysing a tomogram is to segment the different groups of minerals that are present within the sample. This is necessary to measure mineral grain properties and as a precursor to segmenting and analysing particles in a crushed or milled sample. In order for X-ray micro-tomography to provide accurate measurements, this first step of segmenting minerals must be performed accurately. Machine learning has been used in image processing for a variety of applications, including the analysis of optical microscopy images for medical purposes and, recently, the analysis of tomograms. The primary focus of this work is the application of machine learning algorithms to the segmentation of minerals, as well as a means for measuring the accuracy of those algorithms. Four main problem areas were identified in this work. The first is the need for a suitable algorithm for filtering tomograms to reduce the quantity of noise that is present while minimising the additional blurring of the edges of mineral grains. The second problem statement focuses specifically on machine learning, while the third problem statement is directed at the description of voxels by means of several features. The fourth problem area is measuring the accuracy of any measurements made on the segmented tomograms. Without an analysis of the measurement accuracy, X-ray micro-tomography will not be accepted by the industry at large. This work demonstrates a method by which back-scattered electron images from a scanning electron microscope may be aligned to a tomogram and used to quantify the accuracy of mineral segmentation algorithms...
- Full Text:
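As a toy illustration of the voxel-classification step described above, assuming scikit-learn and SciPy and a synthetic two-phase volume in place of a real tomogram, the sketch below builds simple per-voxel features (grey value, local mean, local variance) and trains a random forest. The thesis instead validates against registered back-scattered electron images; here a held-out half of the voxels stands in for that ground truth.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

# Synthetic 3D "tomogram": two mineral phases with overlapping grey values
# plus noise, standing in for a real reconstructed volume.
labels = (rng.random((32, 32, 32)) > 0.5).astype(int)
volume = np.where(labels == 1, 0.7, 0.4) + rng.normal(scale=0.08, size=labels.shape)

# Per-voxel features: raw grey value, local mean and local variance,
# a minimal feature set for voxel classification.
local_mean = uniform_filter(volume, size=3)
local_var = uniform_filter(volume ** 2, size=3) - local_mean ** 2
features = np.stack([volume, local_mean, local_var], axis=-1).reshape(-1, 3)
targets = labels.ravel()

# Train on half the voxels, evaluate on the rest.
split = features.shape[0] // 2
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[:split], targets[:split])
print("voxel accuracy:", accuracy_score(targets[split:], clf.predict(features[split:])))
```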
Effective use of artificial intelligence in predicting energy consumption and underground dam levels in two gold mines in South Africa
- Authors: Hasan, Ali N.
- Date: 2015-02-12
- Subjects: Artificial intelligence , Artificial intelligence - Engineering applications , Expert systems (Computer science) , Electric power consumption
- Type: Thesis
- Identifier: uj:13316 , http://hdl.handle.net/10210/13332
- Description: D.Ing. (Electrical and Electronic Engineering) , The electricity shortage in South Africa has required the implementation of demand side management (DSM) projects. The DSM projects were implemented by installing energy monitoring and control systems to monitor certain mining aspects such as water pumping systems. Certain energy saving procedures and control systems followed by the mining industry are not sustainable and must be updated regularly in order to meet any changes in the water pumping system. In addition, the present water pumping, monitoring and control system does not predict the energy consumption or the underground water dam levels. Hence, there is a need to introduce a new monitoring system that can control and predict the energy consumption of the underground water pumping system and the dam levels based on present and historical data. The work is undertaken to investigate the feasibility of using artificial intelligence in certain aspects of the mining industry. If successful, artificial intelligence systems could lead to improved safety, reduced electrical energy consumption and decreased human error throughout the pump station monitoring and control process ...
- Full Text:
Intelligent system for automated components recognition and handling
- Authors: Findlay, Peter
- Date: 2012-02-06
- Subjects: Computer vision , Artificial intelligence , Pattern recognition systems
- Type: Thesis
- Identifier: uj:2012 , http://hdl.handle.net/10210/4365
- Description: M.Ing. , A machine vision system must, by definition, be intelligent, adaptable and reliable to satisfy the objectives of a system that is highly interactive with its dynamic environment and therefore prone to outside error factors. A machine vision system is described that utilizes a 2D captured web cam image for the purpose of intelligent object recognition, gripping and handling. The system is designed to be generic in its application and adaptable to various gripper and handling configurations. This is achieved by using highly adaptable and intelligent recognition algorithms that gather as much information as possible from a 2D colour web cam image. Numerous error-checking abilities are also built into the system to account for possible anomalies in the working environment. The entire system is designed around four separate but tightly integrated systems, namely the Recognition, Gripping and Handling structures and the Component Database, which acts as the backbone of the system. The Recognition system provides all the input data that is then used by the Gripping and Handling systems. This integrated system functions as a single unit, but a hierarchical structure has been used so that each of the systems can function as a stand-alone unit. The recognition system is generic in its ability to provide information such as recognized object identification, position and other orientation information that could be used by another handling system or gripper configuration. The Gripping system is based on a single custom-designed gripper that provides basic gripping functionality. It is powered by a single motor and is highly functional with respect to the large range of object sizes that it can grip. The Handling sub-system controls gripper positioning and motion, incorporating control of the robot and the execution of both predetermined and online adaptable handling algorithms based on component data received from the Component Database. The database allows the transparent ability to add and remove objects for recognition, as well as other basic abilities. Experimental verification of the system is performed using a fully integrated and automated program and hardware control system developed for this purpose. The integration of the proposed system into a flexible and reconfigurable manufacturing system is explained.
- Full Text:
Application of artificial intelligence techniques in design optimization of a parallel manipulator
- Authors: Modungwa, Dithoto
- Date: 2015-02-12
- Subjects: Parallel processing (Electronic computers) , Electronic data processing , Adaptive computing systems , Artificial intelligence
- Type: Thesis
- Identifier: uj:13311 , http://hdl.handle.net/10210/13328
- Description: D.Phil. (Electrical and Electronic Engineering) , The complexity of the multi-objective functions and diverse variables involved in the design optimization of a parallel manipulator, or parallel kinematic machine, has inspired the research conducted in this thesis to investigate techniques that are suitable to tackle this problem efficiently. Further, the parallel manipulator dimensional synthesis problem is multimodal and has no explicit analytical expressions. This process requires optimization techniques which offer a high level of accuracy and robustness. The goal of this work is to present methods based on Artificial Intelligence (AI) that may be applied in addressing the challenge stated above. The performance criteria considered include stiffness, dexterity and workspace. The case studied in this work is a 6 degrees of freedom (DOF) parallel manipulator, particularly because it is considered much more complicated than the lesser-DOF mechanisms, owing to the number of independent parameters or inputs needed to specify its configuration (i.e. the higher the DOF, the more independent variables to be considered). The first contribution in this thesis is a comparative study of several hybrid Multi-Objective Optimization (MOO) AI algorithms, applied to parallel manipulator dimensional synthesis. Artificial neural networks are utilized to approximate a multiple-objective function for the analytical solution of the 6 DOF parallel manipulator's performance indices, followed by the implementation of a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) as search algorithms. Further, two hybrid techniques are proposed which implement Simulated Annealing and Random Forest in searching for optimum solutions to the Multi-Objective Optimization problem. The final contribution in this thesis is ensemble machine learning algorithms for the approximation of a multiple-objective function for the 6 DOF parallel manipulator analytical solution. The results from the experiments demonstrated that not only neural networks (NN) but also other machine learning algorithms, namely K-Nearest Neighbour (k-NN), M5 Prime (M5'), Zero R (ZR) and Decision Stump (DS), can effectively be implemented for function approximation.
- Full Text:
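The sketch below illustrates the surrogate-plus-search pattern the abstract describes, under heavy simplification: two of the named learners (k-NN and a decision stump) approximate a stand-in performance index, and a cheap random search runs over the averaged surrogate instead of the expensive kinematic analysis. All functions, sample sizes and parameters are hypothetical.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)

def expensive_index(x):
    """Stand-in for a costly performance index (e.g. a stiffness/dexterity blend)."""
    return np.sin(3 * x[:, 0]) + (x[:, 1] - 0.5) ** 2

# Sample a few design points, as if from full kinematic analysis runs.
X = rng.random((60, 2))
y = expensive_index(X)

# Surrogate ensemble: average of k-NN and a depth-1 tree (a decision stump),
# two of the learners compared in the thesis.
models = [KNeighborsRegressor(n_neighbors=5).fit(X, y),
          DecisionTreeRegressor(max_depth=1, random_state=0).fit(X, y)]

def surrogate(x):
    """Ensemble prediction: mean of the individual learners' outputs."""
    return np.mean([m.predict(x) for m in models], axis=0)

# Cheap search over the surrogate instead of the expensive analysis.
candidates = rng.random((5000, 2))
best = candidates[surrogate(candidates).argmin()]
print("surrogate optimum:", best, "true index there:", expensive_index(best[None])[0])
```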
Artificial intelligence and knowledge management principles in secure corporate intranets
- Authors: Barry, Christopher
- Date: 2010-02-23
- Subjects: Artificial intelligence , Knowledge management , Intranets (Computer networks) , Computer networks - Security measures
- Type: Thesis
- Identifier: uj:6634 , http://hdl.handle.net/10210/3035
- Description: M.Sc. (Computer Science) , Corporations throughout the world are facing numerous challenges in today's competitive marketplace and are continuously looking for new and innovative means and methods of gaining competitive advantage. One of the means used to gain this advantage is information technology, with all its associated technologies and principles. These are primarily used to facilitate business processes and procedures that are designed to provide this competitive advantage. Significant attention has been given to each of the individual technologies and principles of Artificial Intelligence, Knowledge Management, Information Security, and Intranets, and to how they can be leveraged in order to improve efficiency and functionality within a corporation. However, in order to truly reap the benefits of these technologies and principles, it is necessary to look at them as a collaborative system, rather than as individual components. This dissertation therefore investigates each of these individual technologies and principles in isolation, as well as in combination with each other, to outline the potential advantages, associated risks and disadvantages of combining them within the corporate world. Based on these, the Intelligently Generated Knowledge (IGK) framework is outlined to implement such a collaborative system. Thereafter, an investigation of a theoretical situation is conducted based on this framework to examine the impact of the implementation of this type of collaborative system. The potential increase in cost savings, efficiency and functionality of corporations that would employ the IGK framework is clearly outlined in the theoretical example, and should this approach be adopted, it would be able to provide significant competitive advantage for any corporation.
- Full Text:
A distributed affective cognitive architecture for cooperative multi-agent learning systems
- Authors: Barnett, Tristan Darrell
- Date: 2012-11-02
- Subjects: Multiagent systems , Intelligent agents (Computer software) , Robotics , Cloud computing , Artificial intelligence , Machine learning
- Type: Thesis
- Identifier: uj:7317 , http://hdl.handle.net/10210/8055
- Description: M.Sc. (Computer Science) , General machine intelligence represents the principal ambition of artificial intelligence research: creating machines that readily adapt to their environment. Machine learning represents the driving force of adaptation in artificial intelligence. However, two pertinent dilemmas emerge from research into machine learning. Firstly, how do intelligent agents learn effectively in real-world environments, in which randomness, perceptual aliasing and dynamics complicate learning algorithms? Secondly, how can intelligent agents exchange knowledge and learn from one another without introducing mathematical anomalies that might impede the effectiveness of the applied learning algorithms? In a robotic search and rescue scenario, for example, the control system of each robot must learn from its surroundings in a fast-changing and unpredictable environment while at the same time sharing its learned information with others. In well-understood problems, an intelligent agent that is capable of solving task-specific problems will suffice. The challenge behind complex environments comes from the fact that agents must solve arbitrary problems (Kaelbling et al. 1996; Ryan 2008). General problem-solving abilities are hence necessary for intelligent agents in complex environments, such as robotic applications. Although specialized machine learning techniques and cognitive hierarchical planning and learning may be a suitable solution for general problem-solving, such techniques have not been extensively explored in the context of cooperative multi-agent learning. In particular, to the knowledge of the author, no cognitive architecture has been designed which can support knowledge-sharing or self-organisation in cooperative multi-agent learning systems. It is therefore social learning in real-world applications that forms the basis of the research presented in this dissertation. This research aims to develop a distributed cognitive architecture for cooperative multi-agent learning in complex environments. The proposed Multi-agent Learning through Distributed Adaptive Contextualization (MALDAC) Architecture comprises a self-organising multi-agent system to address the communication constraints that the physical hardware imposes on the system. The individual agents of the system implement their own cognitive learning architecture. The proposed Context-based Adaptive Empathy-deliberation Agent (CAEDA) Architecture investigates the applicability of emotion, 'consciousness', embodiment and sociability in cognitive architecture design. Cloud computing is proposed as a method of service delivery for the learning system, in which the MALDAC Architecture governs multiple CAEDA-based agents. An implementation of the proposed architecture is applied to a simulated multi-robot system to best emulate real-world complexities. Analyses indicate favourable results for the cooperative learning capabilities of the proposed MALDAC and CAEDA architectures.
- Full Text:
Benchmarking a neural network forecaster against statistical measures
- Authors: Herman, Hilde
- Date: 2014-09-16
- Subjects: Forecasting - Data processing , Neural networks (Computer science) , Artificial intelligence , Benchmarking (Management)
- Type: Thesis
- Identifier: uj:12312 , http://hdl.handle.net/10210/12098
- Description: M.Ing. (Mechanical Engineering) , The combination of non-linear signal processing and financial market forecasting is a relatively new field of research. This dissertation concerns the forecasting of shares quoted on the Johannesburg Stock Exchange using Artificial Neural Networks, and does so by comparing neural network results with established statistical results. The rise or fall of the share price is predicted, as well as buy, sell and hold signals, and the results are compared with those of a Time Series model and the Moving Average Convergence Divergence indicator. The dissertation shows that artificial neural networks predict the share price rise or fall with less error than the statistical models and yield the highest profit when forecasting buy, sell and hold signals for a particular share.
- Full Text:
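For reference, the Moving Average Convergence Divergence benchmark mentioned above can be computed as follows. This is the standard 12/26/9-day MACD crossover rule applied to synthetic prices, not the dissertation's data or its exact trading rules.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
# Synthetic daily closing prices standing in for a JSE share series.
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 300)))

# Standard MACD: difference of 12- and 26-day EMAs, with a 9-day signal line.
ema_fast = prices.ewm(span=12, adjust=False).mean()
ema_slow = prices.ewm(span=26, adjust=False).mean()
macd = ema_fast - ema_slow
signal = macd.ewm(span=9, adjust=False).mean()

# Buy when MACD crosses above its signal line, sell on the downward cross,
# hold otherwise; these labels are what a forecaster would be benchmarked against.
above = macd > signal
cross_up = above & ~above.shift(fill_value=False)
cross_down = ~above & above.shift(fill_value=False)
actions = np.where(cross_up, "buy", np.where(cross_down, "sell", "hold"))
print(pd.Series(actions).value_counts())
```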
A hierarchy of random context grammars and automata
- Authors: Ehlers, Elizabeth Marie
- Date: 2014-04-03
- Subjects: Machine theory , Formal languages , Artificial intelligence
- Type: Thesis
- Identifier: uj:10501 , http://hdl.handle.net/10210/10004
- Description: Ph.D. (Computer Science) , Traditionally a formal language can be characterized in two ways: by a generative device (a grammar) and by an acceptive device (an automaton). The characterization of two- and three-dimensional Random Context Grammars by two- and three-dimensional Random Context Automata is investigated. This thesis is an attempt to progressively extend a certain class of grammars to higher dimensions, where the class of languages generated in each dimension is contained in the class of languages generated in the next higher dimension. Random Context Array Automata, which characterize Random Context Array Grammars (Von Solms [4,5]), are defined. The power of both Random Context Array Grammars and Random Context Array Automata is inherent in the fact that the replacement of symbols in figures is subject to horizontal, vertical and global context. A proof is given for the equivalence of the class of languages generated by Random Context Array Grammars and the class of languages accepted by Random Context Array Automata. The two-dimensional Random Context Array Grammars are then extended to three dimensions. Random Context Structure Grammars generate three-dimensional structures; a characteristic of these grammars is that the replacement of symbols in a structure is subject to seven relevant contexts. Random Context Structure Automata, which characterize Random Context Structure Grammars, are defined. It is shown that the class of languages generated by Random Context Structure Grammars is equivalent to the class of languages accepted by Random Context Structure Automata...
- Full Text:
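The random-context condition itself is easy to state in code. The sketch below shows it for ordinary strings, whereas the thesis works with two-dimensional arrays and three-dimensional structures: a production may be applied only if all of its permitting symbols occur in the sentential form and none of its forbidding symbols do. The toy grammar and the derivation strategy are illustrative assumptions.

```python
import random

# Each production: (lhs, rhs, permitting, forbidding). A nonterminal may be
# rewritten only if every permitting symbol occurs in the sentential form and
# no forbidding symbol does -- the random-context condition.
productions = [
    ("S", "AB", set(), set()),
    ("A", "aA", {"B"}, set()),   # A may grow only while a B is present
    ("A", "a",  set(), set()),
    ("B", "bB", {"A"}, set()),   # B may grow only while an A is present
    ("B", "b",  set(), {"A"}),   # B may terminate only once every A is gone
]

def applicable(form, prod):
    """Check the random-context condition for one production."""
    lhs, _, permitting, forbidding = prod
    symbols = set(form)
    return lhs in symbols and permitting <= symbols and not (forbidding & symbols)

def derive(form, max_steps=30, seed=0):
    """Apply randomly chosen applicable productions until none applies."""
    rng = random.Random(seed)
    for _ in range(max_steps):
        options = [p for p in productions if applicable(form, p)]
        if not options:
            break
        lhs, rhs, _, _ = rng.choice(options)
        form = form.replace(lhs, rhs, 1)  # rewrite the leftmost occurrence
    return form

print(derive("S"))  # e.g. a run of a's followed by b's
```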
An integrated systems approach to risk management within a technology driven industry using the design structure matrix and fuzzy logic
- Authors: Barkhuizen, Willem Frederik
- Date: 2012-08-01
- Subjects: Value analysis (Cost control) , Risk management , Artificial intelligence , Fuzzy logic
- Type: Thesis
- Identifier: uj:8904 , http://hdl.handle.net/10210/5376
- Description: D.Ing. , “Innovation is the act of introducing something new” (Byrd & Brown, 2003). When companies are competing on the technology “playground”, they need to be innovative. According to Byrd & Brown (2003), the “act of introducing” relates to risk taking and the “new” relates to creativity; these concepts, creativity and risk taking, in combination are what innovation is all about. Risk management has become one of the greatest challenges of the 21st century, and one of the main components in innovation and the technology-driven industry, intensifying the need for a systematic approach to managing uncertainties. During the development and design of complex engineering products, the input and teamwork of multiple participants from various backgrounds are required, resulting in complex interactions. Risk interactions exist between the functional and physical elements within such a system and its sub-systems in various dimensions, such as spatial interaction and information interaction. These relationships are of a multi-dimensional complexity that cannot be simplified using the standard task management tools (Yassine, 2004). To find a meaningful starting point for the seemingly boundless subject of risk management, the research steps back to the basic definition of risk management and follows an exploratory research methodology to explore each of the risk management processes (risk assessment, risk identification, risk analysis, risk evaluation, risk treatment, and risk monitoring and review) and how these processes can be enhanced using the design structure matrix (DSM) and fuzzy logic thinking. The approach to risk management within an organisation should be seen as a holistic one, similar to the total quality management process, providing the opportunity to incorporate risk management into the design process as a concurrent task. The risk management model is then developed concurrently (during the design phase) using product development methodologies such as conceptual modeling and prototyping, and ultimately the prototype is tested using a case study. This finally results in a clustered DSM providing a visual representation of the system risk areas, similar to the methodology used in Finite Element Analysis (FEA). The research combines alternative system representation and analysis techniques (Warfield, 2005), in particular the design structure matrix, and fuzzy logic to quantify the risk management effort necessary to deal with uncertain and imprecise interactions between system elements.
- Full Text:
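As a rough illustration of the two techniques named in the abstract, the sketch below builds a small design structure matrix and scores each interaction with a triangular fuzzy membership. The element names, rating scale and membership parameters are hypothetical, chosen only to show how a DSM row can be aggregated into a per-element risk exposure; the thesis's own model is not reproduced here.

```python
# Minimal DSM + fuzzy-risk sketch (all names and numbers are illustrative).
import numpy as np

elements = ["chassis", "power", "control", "software"]   # hypothetical system
# DSM: entry [i, j] rates how strongly element i interacts with element j (0..10).
dsm = np.array([
    [0, 6, 2, 0],
    [4, 0, 7, 1],
    [1, 5, 0, 8],
    [0, 2, 9, 0],
], dtype=float)

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_risk(x: float) -> float:
    """Map an imprecise 0..10 rating to a risk weight via low/medium/high
    fuzzy sets and a simple weighted-centroid defuzzification."""
    mu = {"low": tri(x, -1, 0, 5), "medium": tri(x, 2, 5, 8), "high": tri(x, 5, 10, 11)}
    centers = {"low": 0.2, "medium": 0.5, "high": 0.9}
    total = sum(mu.values())
    return sum(mu[k] * centers[k] for k in mu) / total if total else 0.0

# Aggregate a per-element risk exposure from its row of interactions.
risk = [sum(fuzzy_risk(v) for v in row if v > 0) for row in dsm]
for name, r in zip(elements, risk):
    print(f"{name:10s} risk exposure = {r:.2f}")
```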
Application of Artificial Intelligence (AI) methods for designing and analysis of Reconfigurable Cellular Manufacturing System (RCMS)
- Authors: Marwala, Tshilidzi , Xing, Bo , Nelwamondo, Fulufbelo V. , Battle, Kimberly , Gao, Wenjing
- Date: 2009
- Subjects: Reconfigurable Cellular Manufacturing System , Artificial intelligence , Cellular Manufacturing System , Reconfigurable Manufacturing System
- Type: Article
- Identifier: uj:5305 , ISBN 978-1-4244-3523-4 , http://hdl.handle.net/10210/5266
- Description: This work focuses on the design and control of a novel hybrid manufacturing system, the Reconfigurable Cellular Manufacturing System (RCMS), using an Artificial Intelligence (AI) approach. It is hybrid in that it combines the advantages of the Cellular Manufacturing System (CMS) and the Reconfigurable Manufacturing System (RMS). In addition to inheriting desirable properties from CMS and RMS, RCMS provides additional benefits, including flexibility and the ability to respond to changing products, product mix and market conditions during its useful life, avoiding premature obsolescence of the manufacturing system. The emphasis of this research is the formation of the Reconfigurable Manufacturing Cell (RMC), which is the dynamic and logical clustering of manufacturing resources, driven by specific customer orders and aiming to fulfil customers' orders optimally along with other RMCs in the RCMS.
- Full Text:
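Cell formation is the step this abstract emphasises. As a baseline illustration only, the sketch below applies rank order clustering, a classical non-AI method, to a machine-part incidence matrix to expose block-diagonal cells; the incidence matrix is invented for the example, and the paper's own AI-driven approach to forming Reconfigurable Manufacturing Cells is not reproduced here.

```python
# Rank order clustering on a machine-part incidence matrix (baseline sketch).
import numpy as np

# Rows = machines, columns = parts; 1 means the part visits the machine.
incidence = np.array([
    [1, 0, 0, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
])

def rank_order_clustering(m: np.ndarray, iters: int = 10) -> np.ndarray:
    """Alternately sort rows and columns by their binary-weight values;
    after a few passes the matrix exposes block-diagonal machine-part cells."""
    m = m.copy()
    for _ in range(iters):
        row_keys = m @ (2 ** np.arange(m.shape[1])[::-1])   # read rows as binary numbers
        m = m[np.argsort(-row_keys)]                        # sort rows descending
        col_keys = (2 ** np.arange(m.shape[0])[::-1]) @ m   # read columns as binary numbers
        m = m[:, np.argsort(-col_keys)]                     # sort columns descending
    return m

print(rank_order_clustering(incidence))
```

A real implementation would carry machine and part labels through the sorts so the resulting blocks can be read off as candidate cells.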
The plug-in plantation : a proposal for a resilient infrastructural system achieved via the participation-driven robotic reforestation of residual space along Johannesburg's Main Reef Road
- Authors: Jonker, Pieter Jacobus
- Date: 2015
- Subjects: Main Reef Road (Johannesburg, South Africa) , Plantations , Artificial intelligence , Architecture and biology - South Africa - Johannesburg , Architecture and technology - South Africa - Johannesburg , Architecture - Environmental aspects - South Africa - Johannesburg , Urban renewal - South Africa - Johannesburg
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/57827 , uj:16391
- Description: Abstract: Please refer to full text to view abstract , M.Tech. (Architecture)
- Full Text:
Embedding intelligence in enhanced music mapping agents
- Authors: Gray, Marnitz Cornell
- Date: 2009-05-19
- Subjects: Artificial intelligence , Intelligent agents (Computer software) , Digital jukebox software
- Type: Thesis
- Identifier: uj:8382 , http://hdl.handle.net/10210/2548
- Description: M.Sc. (Computer Science) , Artificial Intelligence has been an increasing focus of study over the past years. Agent technology has emerged as the preferred model for simulating intelligence [Jen00a]. Focus is now turning to inter-agent communication [Jen00b] and agents that can adapt to changes in their environment. Digital music has been gaining in popularity over the past few years. Devices such as Apple’s iPod have sold millions and can hold thousands of songs, so managing such a device and selecting a list of songs to play from so many can be a difficult task. This dissertation expands on agent types by creating a new agent type known as the Modifiable Agent. The Modifiable Agent type defines agents which have the ability to modify their intelligence depending on what data they need to analyse. This allows an agent to, for example, change from being a goal-based to a learning-based agent, or allows an agent to modify the way in which it processes data. Because these devices usually lack input devices such as a mouse or keyboard, they are difficult to navigate, and creating a playlist of songs can therefore be a tiresome process which can lead to the user playing the same songs over and over. The goal of the dissertation is to provide research into methods of automatically creating a playlist from a user-selected song, i.e. once a user selects a song, a list of similar music is automatically generated and added to the user’s playlist. This simplifies the task of selecting music and adds diversity to the songs which the user listens to. The dissertation introduces intelligent music selection, or selecting a playlist of songs based on music classification techniques and past human interaction.
- Full Text:
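The dissertation's playlist idea can be illustrated with a simple similarity ranking. The sketch below, with invented songs and feature names, ranks a library by cosine similarity to a user-selected seed song and takes the top matches as the auto-generated playlist; the dissertation's actual classification and agent machinery is not modelled here.

```python
# Auto-playlist by cosine similarity over song feature vectors (illustrative).
import numpy as np

# Hypothetical feature vectors: [tempo, energy, brightness], pre-normalised.
library = {
    "song_a": np.array([0.8, 0.9, 0.7]),
    "song_b": np.array([0.2, 0.3, 0.4]),
    "song_c": np.array([0.7, 0.8, 0.6]),
    "song_d": np.array([0.1, 0.2, 0.5]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def auto_playlist(seed: str, k: int = 2) -> list[str]:
    """Return the k songs most similar to the seed, excluding the seed itself."""
    ranked = sorted(
        (s for s in library if s != seed),
        key=lambda s: cosine(library[seed], library[s]),
        reverse=True,
    )
    return ranked[:k]

print(auto_playlist("song_a"))  # -> ['song_c', 'song_b'] for these vectors
```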
Knowledge-based automation and new workforce implementation at a financial institution
- Authors: Elsworth, Catherine
- Date: 2018
- Subjects: Industrial revolution , Artificial intelligence , Banks and banking - Technological innovations , Banks and banking - Customer services , Knowledge management , Banks and banking - Information technology
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/402839 , uj:33730
- Description: Abstract : Intelligent Automation (IA) entails advanced knowledge-based technologies associated with the so-called Fourth Industrial Revolution (4IR). In this study, the phrase “IA journey” refers to the processes of knowledge-based automation and new workforce implementation. The study’s unit of analysis is not so much the IA journey itself as an analysis of what constitutes a balanced approach to IA implementation and adoption within an organisation. For example, employees’ feelings of uncertainty during an organisation’s IA journey could cause an imbalance in staff morale and resistance from employees to adapting to the changes. Therefore, the main research question of this study is: What are the components of a balanced approach to knowledge-based automation and new workforce implementation at a financial institution? The research question aligns to a world of service delivery that is changing at an alarming rate, with customers expecting fast, personalised, digital service. The landscape for financial institutions is changing: for example, traditional competitors are taking steps to meet customer demands and non-traditional competitors are entering the marketplace, threatening the existence of traditional financial institutions, commonly referred to as banks. The literature reveals that the evolution of Internet usage and the influence of social media and smartphones have increased the significance of technology and digital service in the financial services industry. Adoption of these technologies is vital if traditional banks want to remain relevant in a market where financial technology companies (Fintechs) and small, digitally nimble start-ups can provide the quick, personalised service that customers expect. Already many financial institutions have started to investigate the opportunities that technologies such as IA and chatbots provide. The potential of chatbot technology to improve customer experience and reduce operational costs makes it an attractive option for organisations to consider. The literature reveals that the cost of implementation of this technology is a fraction of the cost of legacy system re-writes. The ability of this technology to integrate with existing systems and improve turnaround time and service to customers makes the IA journey a favourable choice. The IA journey of one South African Financial Institution (SAFI) formed the focus of this study. Research was conducted within the SAFI into the application of this technology across the organisation to understand the impact that the changes experienced had on the employees of the organisation. Understanding how these changes impact employees helps in determining the best ways to manage the changes in order to develop a balanced approach to implementation and adoption of IA within an organisation. The empirical study followed a qualitative research design, featuring qualitative data collection and analysis techniques. Secondary data were collected and displayed in order to showcase how IA projects were implemented in the organisation. The philosophical paradigm that suited a study of this nature was interpretivism, as the research was socially constructed in its aim to understand the adoption processes of the organisation implementing an IA programme. The research followed an inductive approach, as the study’s conceptual framework was developed based on data collected and conclusions drawn through the analysis of this data. The study involved the collection of data through interviews conducted across junior and senior management levels within the business units impacted by the changes associated with the IA journey. The aim of these interviews was to gain an understanding of employees’ perceptions of the IA journey across the organisation as well as to understand the experiences of those involved in the IA programme. Secondary data were also collected from five SAFI use cases, which provided a rich source of quantitative data. The presentation of results regarding the outcomes of use cases implemented across the organisation is in accordance with the University of Johannesburg Code of Academic and Research Ethics. The research findings informed the development of a conceptual framework, which can be used to encourage a balanced approach towards IA implementation and adoption throughout an organisation that is experiencing major changes. This study reveals that employees’ fears of the changes need to be identified and managed early in order to avoid resistance to the changes and negative perceptions of the technology being created. The conceptual framework identifies the components that a financial institution can use in its balanced approach to increase adoption and reduce fears. Moreover, the study revealed the need for organisations to invest in technologies of the future and the benefits that this technology can have for the organisation. Customer experience and expectations form a vital part of any organisation, and the lessons learnt about the value this technology can provide in creating a great customer experience are invaluable. The study revealed that there is a difference between digitisation and automation and that knowledge-based automation technology plays a key role in enabling a digital customer experience... , M.Phil. (Information Management)
- Full Text:
The commercialisation lifecycle of a knowledge management consulting firm in the fourth industrial revolution
- Authors: De Koker, Lucian Theodoric
- Date: 2018
- Subjects: Knowledge management , Industrial revolution , Artificial intelligence , Internet of things
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/415004 , uj:35021
- Description: Abstract: The current situation in business, economies and the world indicates that artificial intelligence (AI), the Internet of Things (IoT) and robotics are some of the technologies that have, and will continue to have, a tremendous impact on businesses, economies and everyday human life. These technologies, amongst others, are reshaping the global landscape, business ecosystems and the manner in which business is conducted in the fourth industrial revolution (4IR). Generic commercialisation lifecycles and business models require adaptation in the 4IR, which will aid successful business for a knowledge management (KM) consulting firm. The study focussed on conceptualising and developing a commercialisation lifecycle (CLC) for a KM consulting firm in the 4IR. The research objective was to conceptualise a business model canvas (BMC) and develop an IKM framework that can be used specifically by a KM consulting firm, including entrepreneurs, small businesses and professional business consulting firms, in the 4IR. Literature shows that commercialisation lifecycles and business models need to change continuously, especially on the front of the 4IR. To remain competitive and sustain a healthy business, KM consulting firms will need to upskill and improve current business operations. Upskilling, changing and preparing for the 4IR give competitive advantage over competitors. New technologies need to be embraced and harnessed to exploit the innovative capabilities and the value-add that new technologies offer. With an improved, adapted and updated CLC and BMC in place, a KM consulting firm will be able to provide innovative services to clients, ensuring profitability. The research methodology for the study was qualitative in nature, with an inductive and exploratory approach. Grounded in the interpretivist paradigm, the inductive approach allowed the study to explore a specific phenomenon and identify themes in order to explain patterns. A conceptual framework was developed, using existing literature, to conceptualise a CLC for a KM consulting firm in the 4IR. Data was collected through content analysis and in-depth face-to-face interviews, through multi-method qualitative research. Purposive sampling, through critical case sampling, was used to determine the four participants for the interviews, allowing three participants to be interviewed and the fourth participant to be used for testing the findings of the interviews. Interviews and the testing of the interview findings were transcribed, coded and categorised through the Data Analysis Spiral. Research findings, through triangulation, found that the conceptualisation and development of a CLC is crucial; that the conceptualisation of a BMC is crucial; and that new services and the development of an IKM framework are crucial; these will allow a KM consulting firm, including entrepreneurs, small businesses and professional business consulting firms, to be successful in the 4IR. Results showed that the CLC, the BMC, new services and the IKM framework need... , M.Phil. (Information Management)
- Full Text:
Extending labour law and social protection to waste pickers in the Fourth Industrial Age
- Authors: Koen, Louis
- Date: 2019
- Subjects: Refuse and refuse disposal - Social aspects , Informal sector (Economics) - Employees , Technological innovations - Social aspects - South Africa , Artificial intelligence
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/413489 , uj:34836
- Description: Abstract: The world of work has changed significantly and continues to undergo changes brought about by the Fourth Industrial Revolution. These changes have contributed to a significant increase in informal employment, where workers face a lack of adequate labour and social protection. The majority of the world’s workforce is engaged in informal employment, and in South Africa the growth in the number of workers in the informal economy has far outpaced that in the formal economy since the 2008/9 global financial crisis. This dissertation therefore considers these challenges together with new challenges brought about by the Fourth Industrial Revolution. The rise of systems such as autonomous pneumatic waste management systems demonstrates the ability of new technologies to fundamentally alter the face of the waste management industry. It is against the backdrop of these potentially significant changes that this dissertation considers the need to provide waste pickers with adequate labour and social protection. It does so by firstly considering the protection available to these workers in terms of international and regional instruments. An analysis is also undertaken of the potential for new sources of international labour law, in the form of international trade agreements, to enhance compliance with international labour standards. This dissertation also considers the existing legal framework in order to identify deficiencies in regulation for the Fourth Industrial Age. To this end, an analysis is undertaken of the valuable procedural safeguards provided to waste pickers, where new technologies are implemented, within the realm of administrative law. However, the practical ability to enforce these provisions is criticised, given that cases brought in terms of the Promotion of Administrative Justice Act can only be heard by the High Court of South Africa. This dissertation then considers the World Economic Forum’s “human centred approach” to the Fourth Industrial Age. It is acknowledged that this idea by the WEF does not in itself represent a legally binding obligation; however, the internationally binding right to development and the constitutional obligation on municipalities to promote the social and economic development of the community similarly place people at the centre. The WEF human centred approach could accordingly guide interpretation of these obligations in the context of regulation for the Fourth Industrial Age. , LL.M. (Mercantile Law)
- Full Text: