Agent-based crowd simulation using GPU computing
- Authors: O’Reilly, Sean Patrick
- Date: 2014
- Subjects: Crowds - Computer simulation , Virtual computer systems , Graphics processing units , Multiagent systems
- Type: Thesis
- Identifier: http://ujcontent.uj.ac.za:8080/10210/382529 , uj:11707 , http://hdl.handle.net/10210/11428
- Description: M.Sc. (Information Technology) , The purpose of the research is to investigate agent-based approaches to virtual crowd simulation. Crowds are ubiquitous and an increasingly common phenomenon in modern society, particularly in urban settings. As such, crowd simulation systems are becoming increasingly popular in training simulations, pedestrian modelling, emergency simulations, and multimedia. One of the primary challenges in crowd simulation is the ability to model realistic, large-scale crowd behaviours in real time. This is a challenging problem, as the size, visual fidelity, and complex behaviour models of the crowd all have an impact on the available computational resources. In the last few years, the graphics processing unit (GPU) has presented itself as a viable resource for general purpose computation. Traditionally, GPUs were used solely for their ability to efficiently compute operations related to graphics applications. However, the modern GPU is a highly parallel programmable processor, with substantially higher peak arithmetic and memory bandwidth than its central processing unit (CPU) counterpart. The GPU’s architecture makes it a suitable processing resource for computations that are parallel or distributed in nature. One attribute of multi-agent systems (MASs) is that they are inherently decentralised. As such, a MAS that leverages advancements in GPU computing may provide a solution for crowd simulation. The research investigates techniques and methods for general purpose crowd simulation, including topics in agent behavioural models, path planning, collision avoidance, and agent steering. The research also investigates how GPU computing has been utilised to address these computationally intensive problem domains.
Based on the outcomes of the research, an agent-based model, Massively Parallel Crowds (MPCrowds), is proposed to address virtual crowd simulation, using the GPU as an additional resource for agent computation.
- Full Text:
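The abstract's central point, that each agent's steering update reads only the previous global state and can therefore run as an independent GPU thread, can be sketched as follows. This is illustrative Python, not the thesis's MPCrowds implementation; all names and constants are assumptions:

```python
def steer(agents, goal, sep_radius=1.0, dt=0.1):
    """One simulation tick: each agent moves toward the goal while
    steering away from neighbours closer than sep_radius.
    Each agent's update reads only the previous state, so every
    iteration of the outer loop could run as an independent GPU thread."""
    new_positions = []
    for i, (x, y) in enumerate(agents):
        # Attraction toward the shared goal (unit vector).
        gx, gy = goal[0] - x, goal[1] - y
        norm = (gx * gx + gy * gy) ** 0.5 or 1.0
        vx, vy = gx / norm, gy / norm
        # Separation from nearby agents (simple collision avoidance).
        for j, (ox, oy) in enumerate(agents):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            d = (dx * dx + dy * dy) ** 0.5
            if 0 < d < sep_radius:
                vx += dx / d
                vy += dy / d
        new_positions.append((x + vx * dt, y + vy * dt))
    return new_positions

agents = [(0.0, 0.0), (0.2, 0.0), (5.0, 5.0)]
agents = steer(agents, goal=(10.0, 10.0))
```

Replacing the outer loop with one GPU thread per agent is the essence of the data-parallel mapping this kind of system investigates.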
Alignment invariant image comparison implemented on the GPU
- Authors: Roos, Hans , Roodt, Yuko , Clarke, Willem A.
- Date: 2008
- Subjects: Distance transform , Binary image , Graphics processing units , Parallel processing
- Language: English
- Type: Conference proceedings
- Identifier: http://hdl.handle.net/10210/15593 , uj:15681 , Roos, H., Roodt, Y. & Clarke, W.A. 2008. Alignment invariant image comparison implemented on the GPU. Pattern Recognition Association of South Africa (PRASA), 27-28 Nov. 2008.
- Description: Abstract: This paper proposes a GPU-implemented algorithm to determine the differences between two binary images using distance transformations. These differences are invariant to slight rotations and offsets, making the technique ideal for comparisons between images that are not perfectly aligned...
- Full Text:
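The comparison idea can be sketched sequentially: compute a distance transform of one binary image, then score the other image's foreground pixels against it, so small offsets incur only small penalties. The two-pass chamfer transform below is a standard sequential formulation shown for illustration; the paper's contribution is a parallel GPU version:

```python
INF = 10 ** 6

def chamfer_dt(img):
    """City-block distance transform of a binary image (1 = foreground),
    computed with two raster scans: forward then backward."""
    h, w = len(img), len(img[0])
    d = [[0 if img[y][x] else INF for x in range(w)] for y in range(h)]
    for y in range(h):                      # forward pass: north and west
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):          # backward pass: south and east
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

def mismatch(a, b):
    """Sum of distances from each foreground pixel of b to the nearest
    foreground pixel of a; small values tolerate slight misalignment."""
    d = chamfer_dt(a)
    return sum(d[y][x] for y in range(len(b))
               for x in range(len(b[0])) if b[y][x])

a = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]       # vertical bar
b = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]       # same bar shifted right by one
```

A perfectly aligned pair scores zero, while a one-pixel offset scores only the number of shifted foreground pixels, which is the tolerance property the abstract describes.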
A new adaptive colorization filter for video decompression
- Authors: Lee, Vaughan H. , Roodt, Yuko , Clarke, William A.
- Date: 2010
- Subjects: Adaptive colorization , Graphics processing units , Video compression
- Language: English
- Type: Conference proceedings
- Identifier: http://hdl.handle.net/10210/15574 , uj:15676 , Lee, V.H., Roodt, Y. & Clarke, W.A. 2010. A new adaptive colourization filter for video decompression. Pattern Recognition Association of South Africa (PRASA), 2010.
- Description: HD content is increasingly in demand and requires substantial bandwidth. In this paper, a new real-time adaptive colorization filter for HD videos is presented. This approach reduces the required bandwidth by reducing non-key frames in the HD video sequence to grayscale and colorizing these frames at the decompression stage. Additionally, this technique determines the frame status based on the image information.
- Full Text:
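The compression idea can be illustrated in miniature: non-key frames are reduced to their luma, and colour is reinstated at decompression from a key frame. The per-pixel rule below (scaling the key frame's colour so its luma matches the transmitted grayscale value) is a hypothetical stand-in for the paper's adaptive filter, not its actual method:

```python
def to_gray(frame):
    """Rec. 601 luma for each (r, g, b) pixel."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in frame]

def colorize(gray, key_frame):
    """Rebuild an RGB frame: scale the key frame's pixel colour so its
    luma matches the transmitted grayscale value (hue is preserved)."""
    out = []
    for y, row in enumerate(gray):
        out_row = []
        for x, luma in enumerate(row):
            r, g, b = key_frame[y][x]
            key_luma = 0.299 * r + 0.587 * g + 0.114 * b or 1.0
            s = luma / key_luma
            out_row.append((r * s, g * s, b * s))
        out.append(out_row)
    return out

key = [[(200.0, 40.0, 40.0)]]             # reddish key-frame pixel
gray = to_gray([[(190.0, 38.0, 38.0)]])   # similar non-key pixel, luma only
restored = colorize(gray, key)
```

Only the single-channel luma needs to be transmitted for non-key frames, which is where the bandwidth saving comes from; a per-pixel rule like this is also trivially data-parallel, suiting a GPU implementation.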
Monte Carlo simulations on a graphics processor unit with applications in inertial navigation
- Authors: Roets, Sarel Frederik
- Date: 2012-03-12
- Subjects: Monte Carlo method , Graphics processing units , Inertial navigation systems , Inertial measurement units
- Type: Thesis
- Identifier: uj:2157 , http://hdl.handle.net/10210/4528
- Description: M.Ing. , The Graphics Processor Unit (GPU) has been in the gaming industry for several years now. Of late, though, programmers and scientists have started to use the parallel or stream processing capabilities of the GPU in general numerical applications. The Monte Carlo method is a processing-intensive method, as it evaluates systems with stochastic components. The stochastic components require several iterations of the system to develop an idea of how the system reacts to the stochastic inputs. The stream processing capabilities of GPUs are used for the analysis of such systems. Evaluating low-cost Inertial Measurement Units (IMUs) for utilisation in Inertial Navigation Systems (INSs) is a processing-intensive process. The non-deterministic or stochastic error components of the IMU’s output signal require multiple simulation runs to properly evaluate the IMU’s performance when applied as input to an INS. The GPU makes use of stream processing, which allows simultaneous execution of the same algorithm on multiple data sets. Accordingly, Monte Carlo techniques are applied to create trajectories for multiple possible outputs of the INS based on stochastically varying inputs from the IMU. The processing power of the GPU allows simultaneous Monte Carlo analysis of several IMUs. Each IMU requires a sensor error model, which entails calibration of each IMU to obtain numerical values for the main error sources of low-cost IMUs, namely scale factor, non-orthogonality, bias, random walk, and white noise. Three low-cost MEMS IMUs were calibrated to obtain numerical values for their sensor error models. Simultaneous Monte Carlo analysis of each of the IMUs is then done on the GPU, producing a circular error probability (CEP) plot. The circular error probability indicates the accuracy and precision of each IMU relative to a reference trajectory and the other IMUs’ trajectories.
Results obtained indicate that the GPU is a viable alternative to the CPU as a processing platform for large amounts of data. Monte Carlo simulations on the GPU were performed 200% faster than on the CPU. Results from the Monte Carlo simulations indicated that random walk error is the main source of error in low-cost IMUs. The CEP results were used to determine the effect of the various error sources on the INS output.
- Full Text:
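The Monte Carlo evaluation idea can be sketched in miniature: inject a random-walk error into many independent simulated runs and summarise the spread of final position errors as a circular error probable (CEP). All parameters below are illustrative, not the thesis's calibrated sensor error models:

```python
import random

def simulate_run(rng, steps=100, sigma=0.01, dt=0.1):
    """Integrate white noise twice (sensor noise -> velocity error ->
    position error) in x and y; return the final radial position error."""
    vx = vy = x = y = 0.0
    for _ in range(steps):
        vx += rng.gauss(0.0, sigma)   # random-walk velocity error
        vy += rng.gauss(0.0, sigma)
        x += vx * dt
        y += vy * dt
    return (x * x + y * y) ** 0.5

def cep(radial_errors):
    """Median radial error: the circle containing 50% of the runs."""
    s = sorted(radial_errors)
    return s[len(s) // 2]

rng = random.Random(42)               # fixed seed for reproducibility
errors = [simulate_run(rng) for _ in range(1000)]
```

Because every run is independent, each can be assigned to its own GPU thread, which is what makes the simultaneous Monte Carlo analysis of several IMUs feasible.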
Mitigation of atmospheric turbulence distortions in long range video surveillance
- Authors: Robinson, P. E. , Clarke, W. A.
- Date: 2011
- Subjects: Atmospheric turbulence , Scintillation , Heat shimmer , Graphics processing units , Optical flow , Deblurring , Quality metrics
- Type: Article
- Identifier: http://hdl.handle.net/10210/16412 , uj:15771 , Citation: Robinson, P.E. & Clarke, W.A. 2011. Mitigation of atmospheric turbulence distortions in long range video surveillance, SAIEE Africa Research Journal, 102(1) March:16-28.
- Description: Abstract: This paper explores the problem of atmospheric turbulence in long range video surveillance. This turbulence causes a phenomenon called heat scintillation or heat shimmer, which introduces distortions into the video being captured. The nature of these distortions is discussed, and a number of possible solutions are explored.
- Full Text:
Application of stream processing to hydraulic network solvers
- Authors: Crous, Pieter
- Date: 2011-10-24
- Subjects: Graphics processing units , Hydraulic models , Water distribution - Data processing
- Type: Thesis
- Identifier: uj:7254 , http://hdl.handle.net/10210/3907
- Description: M.Ing. , The aim of this research was to investigate the use of stream processing on the graphics processing unit (GPU) and to apply it to the hydraulic modelling of a water distribution system. The stream processing model was programmed and compared to the conventional sequential programming model on the CPU. The GPU has been widely adopted as a parallel processor in many non-graphics applications, and the benefits of parallel processing in these fields have been significant. GPUs have the capacity to perform billions to trillions of floating-point operations per second using programmable shader programs. These advances in GPU architecture have been driven by the gaming industry and its demand for better gaming experiences. The computational performance of the GPU is much greater than that of the CPU. Hydraulic modelling of water distribution systems has become vital to the construction of new water distribution systems, because water distribution networks are complex and nonlinear in nature. Further, modelling makes it possible to anticipate and prevent problems in a system without physically building it. The hydraulic model used was the Gradient Method, the model employed in the EPANET software package. The Gradient Method produces a linear system which is both positive-definite and symmetric. The Cholesky method is currently used in the EPANET algorithm to solve the linear equations produced by the Gradient Method. Thus, a linear solution method had to be selected that is suitable both for parallel processing on the GPU and as a hydraulic network solver. The Conjugate Gradient algorithm was selected as ideal, as it works well with the hydraulic solver and could be converted into a parallel algorithm on the GPU.
The Conjugate Gradient Method is one of the best-known iterative techniques for the solution of sparse symmetric positive-definite linear systems. It was implemented in both the sequential programming model and the stream processing model, using the CPU and the GPU respectively, on two different computer systems. The Cholesky method was also programmed in the sequential model on both computer systems. The Cholesky and Conjugate Gradient Methods were then compared in order to evaluate the two methods relative to each other. The findings show that stream processing on the GPU can be used to perform general-purpose algorithms on a parallel architecture. The results further affirmed that iterative linear solution methods should only be used for large linear systems.
- Full Text:
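A minimal sequential sketch of the Conjugate Gradient method for a symmetric positive-definite system Ax = b, the solver the thesis maps onto the GPU's stream processing model. Every step is built from matrix-vector and dot products, which parallelise naturally; this pure-Python version is for illustration only:

```python
def matvec(A, x):
    """Dense matrix-vector product (the dominant, parallelisable step)."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A, starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x (x is zero initially)
    p = r[:]                      # initial search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small symmetric positive-definite example system.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)      # exact solution: [1/11, 7/11]
```

Unlike a Cholesky factorisation, which is inherently sequential in its elimination order, each CG iteration reduces to independent per-element operations, which is what makes it the natural candidate for the GPU port the abstract describes.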