Abstract
The development of non-intrusive load monitoring (NILM) systems for the smart home (SMH) owes its progress to machine-learning (ML) and deep-learning (DL) algorithms. However, NILM recognition systems still face several challenges, including limited recognition of similar-signature and power appliances (SSPAs) and inadequate power-series (PS) appliance signal features over limited time scales. In addition, max pooling in convolutional neural network (CNN or ConvNet) based NILM recognition discards some useful signal feature information. Furthermore, DL CNNs require large training datasets and suffer from translational invariance in image detection; large datasets increase the dynamic memory, computation time, and data acquisition and storage requirements. To address the first challenge, this thesis develops three independent DL disaggregation algorithms that take multivariate PS parameters as input: multiple parallel-structure CNNs (MP-CNNs); a single-input recurrent or long short-term memory neural network with a number of dense layers connected in parallel (RNN (LSTM)); and a hybrid of the CNN and RNN (CNN+LSTM). PS classification is performed by a transfer-learning (TL) deep multi-layer perceptron (MLP) multi-class network. Transforming the PS to images yields localized PS features for a computer-vision (CV) approach, in which NILM disaggregation is performed by a stacked denoising autoencoder (sdAE) network and classification by the Visual Geometry Group CNN (VGG ConvNet). Furthermore, discrete wavelet transform (DWT) image fusion and the implementation of Hinton's capsule network (CapsNet) achieve equivariant appliance recognition with a relatively reduced dataset. Finally, Siamese and prototypical network (ProtoNet) metric-based few-shot learning (FSL) achieves NILM recognition with very few training samples. Training and test data are drawn from twelve appliances within a laboratory setup.
Recognition performance was 100% for the PS SSPAs, 93.75% for the CapsNet, and 97.83% for the FSL meta-learning model. For the sdAE VGG CNN model, performance was 75% with one test sample per class and 100% with four test samples per class.

Keywords: Capsule network; Few-shot learning; Image-based appliance recognition; Machine learning; Non-intrusive load monitoring; Power-series-based appliance recognition; Similar appliance signature recognition; Supervised machine-learning non-intrusive smart-home appliance status recognition.
Ph.D. (Electrical & Electronic Engineering)