A model for the automated detection of fraudulent healthcare claims using data mining methods
- Authors: Obodoekwe, Nnaemeka Chukwudi Fortune
- Date: 2018
- Subjects: Data mining , Fraud - Prevention , Medicare fraud , Corporations - Corrupt practices
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/401505 , uj:33558
- Description: Abstract: The menace of fraud today cannot be overstated. The healthcare system, put in place to facilitate the rendering of medical services and to improve access to them, has not been spared from fraudulent activity. Traditional healthcare claims fraud detection methods no longer suffice because of the increased complexity of the medical billing process. Machine learning has become a very important technique in computing, and the abundance of computing power has aided its adoption across problem domains, including healthcare claims fraud detection. The study explores the application of different machine learning methods to the detection of possibly fraudulent healthcare claims. We propose a data mining model that incorporates several knowledge discovery processes in its pipeline. The model uses Medicare payment data from the Centers for Medicare & Medicaid Services as well as data from the List of Excluded Individuals/Entities (LEIE) database. The data was passed through pre-processing and transformation stages to bring it to a desirable state, after which several machine learning methods were applied to derive knowledge and to classify claims as fraudulent or non-fraudulent. The results of the comprehensive benchmark run on the implemented version of the model show that machine learning methods can be used to detect possibly fraudulent healthcare claims. The models based on the Gradient Boosted Tree classifier and an Artificial Neural Network performed best, while the Naïve Bayes model could not classify the data.
By applying the correct pre-processing and data transformation methods to the Medicare data, along with the appropriate machine learning methods, the healthcare fraud detection system yields sound results for identifying possibly fraudulent claims in the medical billing process. , M.Sc. (Computer Science)
- Full Text:
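The labelling step the abstract describes, joining Medicare payment data against the LEIE exclusion list, can be sketched as follows. This is an illustrative reconstruction, not the thesis's code; the `label_claims` helper and the field names (`npi`, `payment`) are assumptions.

```python
# Hypothetical sketch: providers whose NPI appears in the LEIE exclusion
# list are marked as possible fraud cases; all other claims are negative.

def label_claims(payment_records, leie_npis):
    """Attach a boolean 'fraud' label to each payment record."""
    excluded = set(leie_npis)
    return [{**record, "fraud": record["npi"] in excluded}
            for record in payment_records]

claims = [
    {"npi": "100", "payment": 250.0},
    {"npi": "200", "payment": 80.0},
]
print(label_claims(claims, {"200"}))
```

The labelled records would then feed the pre-processing, transformation, and classification stages described in the abstract.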
A modified hiding high utility item first algorithm (HHUIF) with item selector (MHIS) for hiding sensitive itemsets
- Authors: Selvaraj, Rajalakshmi , Kuthadi, Venu Madhav
- Date: 2013
- Subjects: Utility mining , Data mining , Algorithms , Hiding high utility item first algorithm
- Type: Article
- Identifier: uj:5419 , ISSN 1349-4198 , http://hdl.handle.net/10210/10971
- Description: In privacy preserving data mining, utility mining plays an important role. In privacy preserving utility mining, some sensitive itemsets are concealed from the database according to certain privacy policies. Hiding sensitive itemsets from adversaries has become an important issue, yet only very few methods are available in the literature for hiding sensitive itemsets in a database. One existing privacy preserving utility mining method utilises two algorithms, HHUIF and MSICF, to conceal the sensitive itemsets so that adversaries cannot mine them from the modified database. To accomplish the hiding process, this method finds the sensitive itemsets and modifies the frequency of the high-valued utility items. However, the method performs poorly when items share the same utility value: such items decrease the hiding performance for the sensitive itemsets, and the frequency modification applied to each item introduces computational complexity. To solve this problem, this paper proposes a modified HHUIF algorithm with Item Selector (MHIS). The proposed MHIS algorithm is a modified version of the existing HHUIF algorithm. MHIS computes the sensitive itemsets using a user-defined utility threshold value. To hide a sensitive itemset, the frequency values of its items are changed; if the utility values of the items are the same, MHIS selects the appropriate items and modifies the frequency values of only the selected items. The proposed MHIS reduces the computational complexity as well as improving the hiding performance for the itemsets. The algorithm is implemented and the resultant itemsets are compared against those obtained from conventional privacy preserving utility mining algorithms.
- Full Text:
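The hiding strategy described, lowering the frequency of the highest-utility item in transactions that support a sensitive itemset until its utility falls below the user-defined threshold, can be sketched as follows. This is an illustrative sketch of the general HHUIF-style idea, not the paper's implementation; `hide_itemset`, the transaction encoding (item to quantity), and the profit table are assumptions.

```python
# Transactions map item -> quantity; utility = quantity * per-unit profit.

def itemset_utility(db, profits, itemset):
    """Total utility of the itemset over all supporting transactions."""
    total = 0
    for tx in db:
        if all(item in tx for item in itemset):
            total += sum(tx[item] * profits[item] for item in itemset)
    return total

def hide_itemset(db, profits, itemset, threshold):
    """Repeatedly decrement the largest utility contribution until the
    itemset's utility drops below the threshold."""
    while itemset_utility(db, profits, itemset) >= threshold:
        best = None  # (transaction, item, utility contribution)
        for tx in db:
            if all(item in tx for item in itemset):
                for item in itemset:
                    u = tx[item] * profits[item]
                    if tx[item] > 0 and (best is None or u > best[2]):
                        best = (tx, item, u)
        if best is None:
            break  # nothing left to decrement
        best[0][best[1]] -= 1
    return db

db = [{"a": 3, "b": 2}, {"a": 1, "b": 4}, {"a": 5}]
profits = {"a": 2, "b": 3}
hide_itemset(db, profits, ("a", "b"), threshold=10)
print(itemset_utility(db, profits, ("a", "b")) < 10)  # prints True
```

The paper's MHIS refinement additionally selects among items of equal utility value before modifying frequencies, which this sketch does not model.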
Anomaly detection in the open financial markets : a case for the bitcoin network
- Authors: Monamo, Mmabusulane Patrick
- Date: 2018
- Subjects: Data mining , Decision trees , Machine learning , Bitcoin , Fraud investigation
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/269227 , uj:28599
- Description: M.Ing. (Electrical Engineering) , Abstract: Please refer to full text to view abstract.
- Full Text:
Causality : exploratory data analysis and knowledge discovery
- Authors: Parida, Pramod Kumar
- Date: 2017
- Subjects: Machine learning , Statistical decision , Data mining
- Language: English
- Type: Doctoral (Thesis)
- Identifier: http://ujcontent.uj.ac.za:8080/10210/377072 , http://hdl.handle.net/10210/243071 , uj:25089
- Description: D.Phil. (Electrical and Electronic Engineering) , Abstract: The phenomenon of cause and effect that rules the natural behaviour of the universe is simple to observe but complicated in its interdependencies. While action and reaction states observed in time and space are comparatively easy to work with, the difficulty lies in the relations between factors. Knowing only the facts or features, without the time frame in which they occurred or were observed, heightens the complexity of information retrieval. The relation of cause and effect is vital for recovering the past information from which the present state is constructed, although the links between features remain debatable in this case. The study of causality deals with these exploratory data analysis problems so as to expose all the vital facts that can be extracted from the feature sets. Many researchers consider causal analysis the gold standard in data mining and analysis. As is frequently the case, causal analysis is represented by directed acyclic graphs to simplify the complexity: directed edges with weighted values describe the flow of information from source (parent) to receiver (child) nodes in the graph. The existing definitions of causal structure for inference analysis are insufficient for many reasons, and the work concluded in this field is inadequate: most of the proposed techniques provide only limited structural analysis, while many others cannot validate the criteria required for causal analysis. The background to all the published work with definite contributions towards causality is thoroughly analysed in the literature review. All the methods proposed so far use a bivariate model for causal analysis; among them, the Linear Non-Gaussian Acyclic Model (LiNGAM) was the first to provide estimation for the largest number of features.
However, LiNGAM is not fully effective in analysing causal models for datasets of mixed distribution types, and constructing a complete causal model from its estimated results is not possible. Building on the fundamental structure of LiNGAM, this work introduces a new estimation process for causal detection in the method Altered-LiNGAM (ALiNGAM). ALiNGAM uses least-squares estimation on d-separable sets to find the probable causal directions in the observed feature set. The proposed...
- Full Text:
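A hedged pairwise sketch in the spirit of LiNGAM-style direction finding: regress each variable on the other by least squares and prefer the direction whose residual looks less Gaussian, since the anti-causal regression mixes two independent sources and its residual tends toward Gaussianity. This illustrates the general idea only; it is not the thesis's ALiNGAM, and all function names and the kurtosis heuristic are assumptions.

```python
import statistics

def residuals(xs, ys):
    """Least-squares residuals of ys regressed on xs (mean-centred fit)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return [(y - my) - b * (x - mx) for x, y in zip(xs, ys)]

def excess_kurtosis(zs):
    """Fourth standardised moment minus 3; zero in expectation for Gaussians."""
    m = statistics.fmean(zs)
    s2 = statistics.fmean([(z - m) ** 2 for z in zs])
    m4 = statistics.fmean([(z - m) ** 4 for z in zs])
    return m4 / (s2 ** 2) - 3.0

def causal_direction(xs, ys):
    """Prefer the direction whose regression residual is more non-Gaussian."""
    k_xy = abs(excess_kurtosis(residuals(xs, ys)))  # residual of y given x
    k_yx = abs(excess_kurtosis(residuals(ys, xs)))  # residual of x given y
    return "x->y" if k_xy > k_yx else "y->x"
```

In practice such pairwise tests are combined over many variable pairs to orient the edges of a directed acyclic graph like those described above.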
Enhancing the detection of financial statement fraud through the use of missing value estimation, multivariate filter feature selection and cost-sensitive classification
- Authors: Moepya, Stephen O.
- Date: 2017
- Subjects: Data mining , Fraud - Statistical methods , Missing observations (Statistics) , Computational intelligence
- Language: English
- Type: Doctoral (Thesis)
- Identifier: http://hdl.handle.net/10210/242812 , uj:25056
- Description: D.Phil. (Electrical and Electronic Engineering) , Abstract: Please refer to full text to view abstract
- Full Text:
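The title refers to cost-sensitive classification; a standard decision rule of that kind flags a claim as fraud whenever the expected cost of a missed fraud exceeds the expected cost of a false alarm. A minimal sketch of that textbook rule (the function name and cost values are illustrative, not taken from the thesis):

```python
def cost_sensitive_label(p_fraud, cost_fn, cost_fp):
    """Minimise expected cost: flag fraud when p * C_fn exceeds (1 - p) * C_fp."""
    return p_fraud * cost_fn > (1.0 - p_fraud) * cost_fp

# With misses 20x as costly as false alarms, even a 10% fraud score is flagged:
print(cost_sensitive_label(0.10, cost_fn=100.0, cost_fp=5.0))  # prints True
```

The effect is to lower the decision threshold well below 0.5 when missed frauds are expensive, which is the usual motivation for cost-sensitive methods on imbalanced fraud data.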
Fraud detection using data mining
- Authors: Pienaar, Abel Jacobus
- Date: 2014-02-10
- Subjects: Computer crimes , Forensic accounting , Data mining
- Type: Thesis
- Identifier: uj:3733 , http://hdl.handle.net/10210/9112
- Description: M.Com. (Computer Auditing) , Fraud is a major problem in South Africa and across the world, and organisations lose millions each year to fraud that goes undetected. Organisations can deal with the fraud that is known to them, but undetected fraud is a problem. Management and external and internal auditors need to detect fraud within an organisation, and there is a further need for an integrated fraud detection model to assist managers and auditors in doing so. A literature study was done of authoritative textbooks and other literature on fraud detection and data mining, including the Knowledge Discovery in Databases process, and a model was developed that assists the manager and auditor to detect fraud in an organisation by using data mining, a technology that makes the process of fraud detection more efficient and effective.
- Full Text:
Hybrid technique for frequent pattern extraction from sequential database
- Authors: Selvaraj, Rajalakshmi , Kuthadi, Venu Madhav , Kuthadi, Tshilidzi
- Date: 2015
- Subjects: Data mining , Sequential analysis , Sequential machine theory
- Type: Article
- Identifier: uj:5086 , http://hdl.handle.net/10210/13657
- Description: Data mining has become a familiar tool for extracting stored value from the large-scale databases known as sequential databases. These databases hold large numbers of itemsets that arrive frequently and sequentially and that can be used to predict user behaviour. The evaluation of user behaviour is done using sequential pattern mining, where frequent patterns are extracted under several constraints. Although previous sequential pattern techniques applied such constraints when extracting frequent patterns, they did not generate sufficiently reliable patterns, which makes the evaluation of user behaviour very complex for decision makers. To solve this problem, this paper uses a hybrid pattern technique that combines a time-based constraint with a space constraint to extract more feasible patterns from a sequential database. Initially, the space constraint is applied to partition the sequential database using maximum and minimum threshold values. The time-based constraint is then applied to extract more feasible patterns, with a bury-time arrival rate computed to extract the reliable patterns.
- Full Text:
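The core idea of keeping only patterns whose support lies between a minimum and a maximum threshold can be sketched on unordered item pairs as follows. This is a deliberate simplification (no sequence order and no time-based constraint), and every name in it is an assumption rather than the paper's code.

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(sequences, min_sup, max_sup):
    """Count unordered item pairs across sequences and keep those whose
    support lies between the minimum and maximum thresholds."""
    counts = Counter()
    for seq in sequences:
        for pair in combinations(sorted(set(seq)), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if min_sup <= c <= max_sup}

sequences = [["a", "b", "c"], ["a", "b"], ["b", "c"]]
print(frequent_patterns(sequences, min_sup=2, max_sup=3))
# {('a', 'b'): 2, ('b', 'c'): 2}
```

The paper's hybrid technique additionally filters candidates by when items arrive, which a support count alone cannot express.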
Intelligent pre-processing for data mining
- Authors: De Bruin, Ludwig
- Date: 2014-06-26
- Subjects: Data mining , Intelligent agents (Computer software) , Computational intelligence
- Type: Thesis
- Identifier: uj:11611 , http://hdl.handle.net/10210/11324
- Description: M.Sc. (Information Technology) , Data is generated at an ever-increasing rate, and it has become difficult to process or analyse it in its raw form. Most data is generated by processes or measuring equipment, resulting in very large volumes of data per time unit. Companies and corporations rely on their Management and Information Systems (MIS) teams to perform Extract, Transform and Load (ETL) operations to data warehouses on a daily basis in order to provide them with reports. Data mining is a Business Intelligence (BI) tool and can be defined as the process of discovering hidden information in existing data repositories. The successful operation of data mining algorithms requires data to be pre-processed so that the algorithms can derive IF-THEN rules. This dissertation presents a data pre-processing model that transforms data in an intelligent manner to enhance its suitability for data mining operations. The Extract, Pre-Process and Save for Data Mining (EPS4DM) model is proposed. This model performs the pre-processing tasks required on a chosen dataset and transforms the dataset into the formats required, which data mining algorithms can access from a data mining mart when needed. The proof-of-concept prototype features agent-based Computational Intelligence (CI) algorithms that allow the pre-processing tasks of classification and clustering to be performed as means of dimensionality reduction. The clustering task requires the denormalisation of relational structures and is automated using a feature vector approach; a Particle Swarm Optimisation (PSO) algorithm is run on the patterns to find cluster centres based on Euclidean distances. The classification task takes a feature vector as input and uses a Genetic Algorithm (GA) to produce a transformation matrix that reduces the number of significant features in the dataset. The results of both the classification and clustering processes are stored in the data mart.
- Full Text:
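The clustering step the abstract describes, PSO searching for cluster centres that minimise squared Euclidean distances, can be illustrated for a single centre as follows. A minimal sketch with conventional inertia and acceleration constants; `pso_centre` and all parameter values are assumptions, not the dissertation's EPS4DM code.

```python
import random

def pso_centre(points, n_particles=20, iters=100, seed=0):
    """Minimal PSO searching for one cluster centre that minimises the
    sum of squared Euclidean distances to the points."""
    rng = random.Random(seed)
    dim = len(points[0])
    def sse(c):
        return sum(sum((p[d] - c[d]) ** 2 for d in range(dim)) for p in points)
    lo = [min(p[d] for p in points) for d in range(dim)]
    hi = [max(p[d] for p in points) for d in range(dim)]
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best position
    pbest_f = [sse(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.4 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.4 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            f = sse(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest
```

For a single centre the optimum is simply the centroid of the points; the sketch only shows how the swarm search operates before generalising to multiple centres.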
Predicting student performance using machine learning analytics
- Authors: Taodzera, Tatenda T.
- Date: 2018
- Subjects: Machine learning , Data mining , Computational intelligence
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/284335 , uj:30702
- Description: Abstract: Please refer to full text to view abstract. , M.Ing. (Electrical Engineering)
- Full Text:
The use of social media big data within South African hotels and lodges
- Authors: Gutfreund, Sebastian
- Date: 2019
- Subjects: Management - Data processing , Hospitality industry - Customer services , Online social networks , Data mining , Big data
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/421222 , uj:35895
- Description: Abstract: Big data is a revolutionary and disruptive technology used to identify behavioural patterns and track customer preferences. It has several advantages for the hospitality industry, where customer loyalty is integral to brand performance. However, big data is greatly underutilised, and a study was therefore conducted on the use of social media big data within South African hotels and lodges. The aim of this study was to examine the general understanding of big data and the link it shares with social media, with a further focus on the analytical tools that hotels and lodges use and on the benefits and challenges that social media big data presents for these sectors. This information provides an overall picture of how South African hotels and lodges are wielding this technology, giving a future viewpoint on the progression and improvements that need to be undertaken. A comparison of the key similarities and differences between the lodge and hotel sectors was also provided. To fully grasp big data, a literature review was conducted on the relationship big data has with social media and its impact within the hospitality industry, paying closer attention to hotels and lodges. The study followed a qualitative research method in which ten participants were interviewed: five marketing managers in hotels and five marketing managers in lodges. The key findings revealed that the South African hospitality industry is presently only at the genesis of its use of social media big data, as shown by the marketing managers' generic understanding of the phenomenon.
Furthermore, the data predominantly illustrated that only basic analytical tools were used, which indicates a shortage of internal specialists capable of handling more advanced tools. The benefits established were primarily related to identifying the behavioural patterns and preferences of both future and current customers, as well as the marketability of promotions placed on various platforms; in summary, the data is essentially used to enhance the guest experience by targeting guests' likes and dislikes. The primary challenges within both sectors of the industry concerned education and training, the lack of advanced technology, and the security and privacy concerns pertaining to guest data. , M.Com. (Tourism and Hospitality Management)
- Full Text:
- Authors: Gutfreund, Sebastian
- Date: 2019
- Subjects: Management - Data processing , Hospitality industry - Customer services , Online social networks , Data mining , Big data
- Language: English
- Type: Masters (Thesis)
- Identifier: http://hdl.handle.net/10210/421222 , uj:35895
- Description: Abstract: Big data is a revolutionary and disruptive technology that is used to identify behavioural patterns and track customer preferences. It has several advantages for the hospitality industry, where customer loyalty is integral for brand performance. However, big data is greatly underutilised. Therefore, a study was conducted to look at the use of social media big data within South African hotels and lodges The aim of this study was to focus on the general understanding of big data and the link it shares with social media. There was a further focus on the analytical tools that hotels and lodges make use of, as well as the benefits and challenges which social media big data elucidates for these sectors. This information provides an overall image of how South African hotels and lodges are wielding this technology, giving a future viewpoint on the progression and improvements that need to be undertaken. A comparison concerning the key similarities and differences between the lodge and hotel sector was also provided. This gave an overall picture on how South African hotels and lodges are using this technology, thus giving a future outlook on the progression and improvements that need to be taken into consideration. In order to fully grasp and appreciate big data, a literature review was provided in order to understand the relationship big data has with social media, and the impact it has within the hospitality industry, playing closer attention to hotels and lodges. The methodological approach of the study focused on the qualitative research method, where ten participants in total were interviewed - five being marketing managers in hotels and five marketing managers in lodges. The key findings of the study revealed that the South African hospitality industry is presently only at the genesis when it comes to the use of social media big data. This was revealed through the marketing manager’s generic understanding of the phenomenon. 
Furthermore, the data predominantly illustrated that only basic analytical tools were used, which indicates a shortage of internal specialists capable of handling more advanced tools to further their findings. The benefits established, however, were primarily related to the identification of behavioural patterns and preferences of both future and current customers, as well as the marketability of certain promotions placed on various platforms. In summary, the data is essentially used to enhance the guests' experience by targeting their likes and dislikes. The primary challenges within both sectors of the industry concerned areas such as education and training, the lack of advanced technology, and the security and privacy concerns pertaining to guest data. , M.Com. (Tourism and Hospitality Management)
- Full Text:
User perspectives on document management efficiency at Eskom
- Authors: Mabitsela, Mamatshetshe
- Date: 2014-05-05
- Subjects: Information resources management , Data mining , Total quality management
- Type: Thesis
- Identifier: uj:10933 , http://hdl.handle.net/10210/10506
- Description: M.Phil. (Information Management) , An efficient document management system is one that considers users' needs for information and the ability of the system to provide valuable information that matches certain characteristics. When users utilise a document management system (DMS), they require a system that they perceive will make their work easier. The efficient and effective use of a DMS depends on how receptive the users are to technology and their intention in using the system. The documents in a document management system constitute corporate knowledge and should therefore be stored in a central repository such as the DMS, so that the company's corporate memory cannot be lost. The DMS has all the capabilities to keep documents safe and accessible for later retrieval. To measure the technology acceptance of end users, research has identified the technology acceptance model (TAM) as the ideal method. TAM is tailored to elaborate on computer usage, perceived ease of use, attitudes toward using, and usage behaviour. The research stream on technology acceptance and use has become one of the most prolific and is claimed to be the most mature in the modern information systems field. The problem identified was to analyse users' behavioural intent towards effectively utilising the Eskom in-house document management system. The purpose was to investigate the use of the document system currently in place at Eskom and determine user perspectives. Employees working at Eskom cannot afford to neglect using the document management system on a regular basis: important documents relevant for everyday work are stored in the system, and all employees are granted access to these documents. Given these considerations, users' perceptions of the in-house document system cannot be taken for granted, and these issues were researched.
The findings from the TAM variables showed that users' perceptions of the DMS were divided: while half of the users were satisfied with the information, the system, its usefulness, and its ease of use, the other half were not. Opinion was also divided over whether the system should be replaced or rather improved. The benefits of both options were weighed, and the study suggested that the system be replaced.
- Full Text: