Comparison of background subtraction techniques under sudden illumination changes
- Nel, A.L., Robinson, P.E., Reyneke, C.J.F.
- Authors: Nel, A.L. , Robinson, P.E. , Reyneke, C.J.F.
- Date: 2014
- Subjects: Background subtraction , Illumination changes
- Type: Article
- Identifier: uj:5043 , ISBN 978-0-620-62617-0 , http://hdl.handle.net/10210/13563
- Description: This paper investigates three background modelling techniques that have the potential to be robust against sudden and gradual illumination changes for a single, stationary camera. The first makes use of a modified local binary pattern that considers both spatial texture and colour information. The second uses a combination of a frame-based Gaussianity Test and a pixel-based Shading Model to handle sudden illumination changes. The third is an extension of a popular kernel density estimation (KDE) technique from the temporal to the spatio-temporal domain, using 9-dimensional data points instead of pixel intensity values and a discrete hyperspherical kernel instead of a Gaussian kernel. A number of experiments were performed to compare these techniques with regard to classification accuracy. (An illustrative sketch of the underlying KDE idea follows this record.)
- Full Text:
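As context for the third technique described in the abstract above, the sketch below shows a conventional per-pixel temporal KDE background model with a Gaussian kernel over grey-level samples. It is not the paper's spatio-temporal extension (9-dimensional data points, discrete hyperspherical kernel); the function names, bandwidth, and threshold are illustrative assumptions only.

```python
import numpy as np

def kde_background_probability(history, frame, bandwidth=15.0):
    """Per-pixel temporal KDE over recent grey-level background samples.

    history: (N, H, W) array of the last N grey-scale frames (background samples).
    frame:   (H, W) current grey-scale frame.
    Returns the estimated background likelihood of each pixel in `frame`.
    """
    # Gaussian kernel evaluated at the difference between the current pixel
    # value and each of its N historical samples, averaged over the samples.
    diff = history.astype(np.float64) - frame.astype(np.float64)[None, :, :]
    kernel = np.exp(-0.5 * (diff / bandwidth) ** 2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return kernel.mean(axis=0)

def segment_foreground(history, frame, threshold=1e-3):
    """Label pixels whose background likelihood falls below a fixed threshold."""
    return kde_background_probability(history, frame) < threshold
```

A foreground mask is then obtained by evaluating each incoming frame against the stored sample history and thresholding the resulting likelihood; the paper's variant replaces the scalar samples and Gaussian kernel with spatio-temporal data points and a hyperspherical kernel.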
Foreground segmentation in atmospheric turbulence degraded video sequences to aid in background stabilization
- Robinson, Philip E., Nel, Andre L.
- Authors: Robinson, Philip E. , Nel, Andre L.
- Date: 2016
- Subjects: Atmospheric turbulence , Video stabilization , Background subtraction
- Language: English
- Type: Article
- Identifier: http://hdl.handle.net/10210/217644 , uj:21665 , Citation: Robinson, P.E. & Nel, A.L. 2016. Foreground segmentation in atmospheric turbulence degraded video sequences to aid in background stabilization.
- Description: Video sequences captured over a long range through the turbulent atmosphere contain some degree of atmospheric turbulence degradation (ATD). Stabilizing the geometric distortions in video sequences that contain both ATD and objects undergoing real motion is a challenging task, because it is difficult to discriminate which visible motion is real and which is caused by ATD warping. As a result, most stabilization techniques applied to ATD sequences distort real motion in the sequence. In this study we propose a new method to classify foreground regions in ATD video sequences. This classification is used to stabilize the background of the scene while preserving objects undergoing real motion by compositing them back into the sequence. A hand-annotated dataset of three ATD sequences is produced with which the performance of this approach can be quantitatively measured and compared against the current state of the art. (An illustrative sketch of the stabilize-and-composite idea follows this record.)
- Full Text:
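The general stabilize-then-composite idea described in the abstract above can be illustrated with the rough sketch below, which uses a temporal-median background estimate and a simple intensity-difference foreground mask. Both are placeholder assumptions for illustration only and are not the classification method proposed by the authors.

```python
import numpy as np

def median_stabilize_with_compositing(frames, fg_threshold=30.0):
    """Stabilize the background of a turbulence-degraded sequence while
    preserving real motion by compositing foreground pixels back in.

    frames: (N, H, W) grey-scale sequence containing ATD.
    Returns an (N, H, W) sequence whose background is replaced by the temporal
    median (a crude turbulence-averaged estimate) with foreground pixels kept
    from the original frames.
    """
    frames = frames.astype(np.float64)
    background = np.median(frames, axis=0)  # turbulence-averaged background estimate
    output = np.empty_like(frames)
    for i, frame in enumerate(frames):
        # Naive foreground classification: a large deviation from the median
        # background is treated as real motion rather than turbulence warping.
        foreground = np.abs(frame - background) > fg_threshold
        composite = background.copy()
        composite[foreground] = frame[foreground]  # composite moving objects back
        output[i] = composite
    return output
```

In the paper, the foreground classification step is the contribution; a dedicated classifier replaces the naive intensity-difference mask used here, and the background stabilization stage would likewise be more sophisticated than a temporal median.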