Appropriate solar spectrum usage : the novel design of a photovoltaic thermal system
- Authors: Elshik, Ebrahim , Bester, Rudolf , Nel, Andre
- Date: 2017
- Subjects: Sustainable buildings , Photovoltaic power systems , Renewable energy sources
- Language: English
- Type: Conference proceedings
- Identifier: http://hdl.handle.net/10210/217739 , uj:21677 , Citation: Elshik, E., Bester, R. & Nel, A. 2017. Appropriate solar spectrum usage : the novel design of a photovoltaic thermal system.
- Description: Abstract: The path towards zero-energy buildings is fraught with many challenges; onsite renewable energy production to drive consumer appliances that are not themselves low- or zero-energy is an important one. Matching the mode of energy production to the mode of usage is therefore the simplest way to improve efficiency. For example, energy consumption for lighting could be significantly reduced by optimizing the building's design to maximize direct daylight usage; similarly, cooking with solar stoves or heating water with solar geysers eliminates the need for PV cells to generate electricity. The largest energy consumer in most buildings is HVAC (accounting for approximately 40% of a building's energy consumption), which can be addressed with a solar-powered absorption chiller. This article introduces the design of a novel solar concentrated photovoltaic thermal (CPVT) system that produces electricity and thermal energy simultaneously from the same surface area. The goal of the proposed system is to provide sufficient heat for an absorption cooling system and for water heating, as well as to produce electricity in a cost-effective way. The CPVT system is designed to operate over a wide spectrum (the band from 400 nm upward contains around 90% of the incident solar radiation spectrum). In the proposed system, solar irradiation is highly concentrated (to the equivalent intensity of approximately 100 suns) onto a single point, using a dual-axis sun-tracking concentrator with a Fresnel lens. A filter then separates the infrared (IR) from the visible light (VL) components using an imaging lens (viz. a hot mirror with approximately 98% filter efficiency). The IR is then utilized for heating while the VL components power the PV cell. The efficiency of electricity generation in the PV cell improves when the IR component is removed from the incident solar irradiance.
High-temperature, high-pressure water at approximately 95–120 °C (203–248 °F) is generated by the IR and serves as a heat source for the absorption cooling system (lithium bromide–water / ammonia–water). The proposed system is expected to deliver electricity at a rate of 0.08 W/cm² (0.516 W/in²) of PV cell area, and around 0.04 W/cm² (0.258 W/in²) of collector area. Given that the ratio of collector area to PV cell area is approximately 9:1, the relative sizes can be chosen to suit the building's requirements.
- Full Text:
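The sizing figures in the abstract can be turned into a back-of-envelope calculation. This is only an illustrative sketch using the abstract's numbers (0.08 W/cm² per unit of PV cell area and a ~9:1 collector-to-cell area ratio); the function name and the example collector area are assumptions, not part of the paper.

```python
def cpvt_electrical_output(collector_area_cm2, ratio=9.0, pv_power_density=0.08):
    """Rough electrical output (W) for a given collector area, assuming the
    abstract's ~9:1 collector-to-PV-cell area ratio and 0.08 W/cm^2
    delivered per unit of PV cell area."""
    pv_cell_area = collector_area_cm2 / ratio   # cm^2 of PV cell behind the optics
    return pv_cell_area * pv_power_density

# A 1 m^2 (10,000 cm^2) collector corresponds to ~1,111 cm^2 of PV cell area:
print(cpvt_electrical_output(10_000))  # ~88.9 W
```

Because the collector-to-cell ratio is a free parameter, the same arithmetic lets the relative sizes be scaled to a particular building's electrical and thermal demand, which is the design freedom the abstract highlights.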
Challenges and proposed solutions towards telecentre sustainability : a Southern Africa case study
- Authors: Sumbwanyambe, Mbuyu , Nel, Andre , Clarke, Willem
- Date: 2011
- Subjects: Telecentres , Universal access
- Type: Article
- Identifier: uj:4727 , ISSN 978-1-905824-26-7 , http://hdl.handle.net/10210/11552
- Description: Access to information through telecentres is essential for social and economic growth in the rural areas of sub-Saharan Africa. While many governments have established telecentres as a means of bridging the increasingly wide digital divide in rural or unserved areas, their sustainability, or continued operation, is in doubt due to various challenges. These challenges to information and communications technology (ICT) access have left much of the rural population unable to exploit the potential of promoting socio-economic development through innovative business solutions and education. In this study we evaluate the sustainability of telecentres in Zambia and South Africa and propose possible solutions to the problems that telecentres face. Specifically, we focus on two telecentres: the Comsol telecentre in KwaZulu-Natal, South Africa, and the Kanyonyo Resource Centre in Mongu, Zambia.
- Full Text:
Gaussian blur identification using scale-space theory
- Authors: Robinson, Philip , Roodt, Yuko , Nel, Andre
- Date: 2012
- Subjects: Blur identification , Blur estimation , Gaussian blur , Image deblurring algorithms , Scale-space theory
- Type: Article
- Identifier: http://ujcontent.uj.ac.za:8080/10210/366248 , uj:6060 , ISBN 978-0-620-54601-0 , http://hdl.handle.net/10210/10475
- Description: Image deblurring algorithms generally assume that the nature of the blurring function that degraded an image is known before an image can be deblurred. In the case of most naturally captured images the strength of the blur present in the image is not known. This paper proposes a method to identify the standard deviation of a Gaussian blur that has been applied to a single image with no a priori information about the conditions under which the image was captured. This simple method makes use of a property of the Gaussian function and the Gaussian scale space representation of an image to identify the amount of blur. This is in contrast to the majority of statistical techniques that require extensive training or complex statistical models of the blur for identification.
- Full Text:
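The "property of the Gaussian function" the abstract alludes to is the semigroup property of Gaussian blurring: blurring by σ₁ and then σ₂ is equivalent to a single blur by √(σ₁² + σ₂²), which is what lets a scale-space representation be probed for the unknown blur strength. The paper's actual estimator is not reproduced here; this sketch (1-D, NumPy only, all names illustrative) merely demonstrates the underlying property.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Sampled, normalised 1-D Gaussian kernel with ~4-sigma support."""
    radius = int(4 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur1d(signal, sigma):
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

rng = np.random.default_rng(0)
sig = rng.random(256)

# Blur twice (sigma = 1.5 then 2.0) versus once with sqrt(1.5^2 + 2^2) = 2.5:
twice = blur1d(blur1d(sig, 1.5), 2.0)
once = blur1d(sig, np.hypot(1.5, 2.0))
err = np.max(np.abs(twice - once)[20:-20])  # crop to avoid boundary effects
print(err)  # small discretisation error only
```

In a blur-identification setting, this property means the unknown σ can be recovered by comparing the observed image against further-blurred versions of itself across the scale space, since the relative blur between scales is known analytically.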
Measurement of digital photographic image quality : survey of psychophysics just noticeable threshold difference method
- Authors: Pierre, Lindeque , Nel, Andre , Robinson, Philip
- Date: 2016
- Subjects: Image quality , Local psychophysics , JND
- Language: English
- Type: Conference proceedings
- Identifier: http://hdl.handle.net/10210/217056 , uj:21592 , Citation: Pierre, L., Nel, A. & Robinson, P. 2016. Measurement of digital photographic image quality : survey of psychophysics just noticeable threshold difference method.
- Description: Abstract: The modeling and quantification of digital photographic image quality has, from a psychophysics perspective, traditionally followed two paths. One of these is the discriminable small or just noticeable difference (local psychophysics) as detected in an image pair, further extended to cover a wide range of artefactual attribute quality variation. This method has its roots in the mathematical and psychological modeling of psychophysics and boasts a long history, starting with the work of researchers such as Bernoulli, Weber and Fechner in the 18th and 19th centuries. The method models human perception of difference as a full-scale logarithmic law and is surveyed here for its value in determining the quantitative quality of digital images.
- Full Text:
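The "full scale logarithmic law" traced to Weber and Fechner says that equal ratios of stimulus intensity produce equal steps of sensation, so perceived magnitude grows logarithmically with intensity. A minimal sketch of that relationship (the constants and function name are illustrative, not from the paper):

```python
import math

def fechner_sensation(intensity, threshold=1.0, k=1.0):
    """Fechner's law: perceived magnitude S = k * ln(I / I0) for I >= I0."""
    return k * math.log(intensity / threshold)

# Doubling the stimulus at any level adds the same increment to sensation:
steps = [fechner_sensation(2**n) for n in range(4)]   # I = 1, 2, 4, 8
diffs = [round(b - a, 6) for a, b in zip(steps, steps[1:])]
print(diffs)  # each step equals ln(2) ≈ 0.693147
```

This is why JND-based quality scales are built from stacked discrimination thresholds: each just noticeable difference corresponds to a fixed ratio of the physical attribute, and summing JNDs yields the logarithmic perceptual scale.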
Oligopolistic competition in heterogeneous access networks under asymmetries of cost and capacity
- Authors: Zhu, Hailing , Nel, Andre
- Date: 2012
- Subjects: Heterogeneous access network , Pricing , Game theory , Oligopolistic competition
- Type: Article
- Identifier: uj:6034 , http://hdl.handle.net/10210/10421
- Description: With the rapid development of broadband wireless access technologies, multiple wireless service providers (WSPs) operating on various wireless access technologies may coexist in one service area to compete for users, leading to a highly competitive environment for the WSPs. In such a competitive heterogeneous wireless access market, different wireless access technologies used by different WSPs have different bandwidth capacities with various costs. In this paper, we set up a noncooperative game model to study how the cost asymmetry and capacity asymmetry among WSPs affect the competition in this market. We first model such a competitive heterogeneous wireless access market as an oligopolistic price competition, in which multiple WSPs compete for a group of price- and delay-sensitive users through their prices, under cost and capacity asymmetries, to maximize their own profits. Then, we develop an analytical framework to investigate whether a Nash equilibrium can be achieved among the WSPs in the presence of the cost and capacity asymmetries, how the asymmetries of cost and capacity affect their equilibrium prices, and what impact a new WSP with a cost and capacity advantage entering the market has on the equilibrium achieved among existing WSPs.
- Full Text:
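The equilibrium concept at the heart of the abstract can be illustrated with best-response iteration on a generic two-provider price game with linear, differentiated demand q_i = a − b·p_i + d·p_j. This is a textbook stand-in, not the paper's model (which adds delay-sensitive users and capacity constraints); a, b, d and the costs below are illustrative.

```python
def best_response(p_other, cost, a=10.0, b=1.0, d=0.5):
    """argmax over p of (p - cost) * (a - b*p + d*p_other),
    from the first-order condition a - 2*b*p + d*p_other + b*cost = 0."""
    return (a + d * p_other + b * cost) / (2 * b)

def nash_prices(c1, c2, iters=200):
    """Iterate simultaneous best responses to a fixed point (Nash equilibrium)."""
    p1 = p2 = 0.0
    for _ in range(iters):
        p1, p2 = best_response(p2, c1), best_response(p1, c2)
    return p1, p2

# A cost advantage lets provider 1 undercut provider 2 in equilibrium:
p1, p2 = nash_prices(c1=1.0, c2=3.0)
print(round(p1, 3), round(p2, 3))  # 7.6 8.4
```

The iteration converges here because each best response dampens the rival's price (slope d/2b = 0.25 < 1); the same fixed-point logic underlies the paper's question of whether an equilibrium exists under cost and capacity asymmetries.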
Optimal price subsidy for universal internet service provision
- Authors: Zhu, Hailing , Ouahada, Khmaies , Nel, Andre
- Date: 2018
- Subjects: Universal service , Subsidy , Social welfare
- Language: English
- Type: Article
- Identifier: http://hdl.handle.net/10210/282314 , uj:30410 , Citation: Zhu, H., Ouahada, K. & Nel, A. 2018. Optimal price subsidy for universal internet service provision. Sustainability 2018, 10, 1576; doi:10.3390/su10051576
- Description: Abstract: Universal service has been adopted by many countries to bridge the digital divide between information and communication technologies (ICT) “haves” and “have-nots”. The key goal of universal service is to provide telecommunications services to “needy persons” at a “reasonable” rate. It is therefore critical for policymakers to decide what a “reasonable” price or subsidy for “needy persons” is, so that the targeted users do utilize ICTs and benefit from them. This paper analyzes universal service subsidies for subsidized Internet access from a pricing point of view, through a hypothetical scenario in which users subsidized through a price subsidy and non-subsidized users share the same network operated by a service provider. We propose a service differentiation system based on priority queuing to accommodate both groups of users, and model such a system as a Stackelberg game from the perspectives of both a revenue-maximizing service provider and a social-welfare-maximizing planner. We then analyze the optimal prices that maximize the service provider's revenue and social welfare respectively, investigate how the revenue-maximizing price and the welfare-maximizing price are affected by users' willingness to pay and by the subsidy ratio, and evaluate the revenue-maximizing solution on welfare grounds using the welfare-maximizing solution as a benchmark. Interestingly, the optimal revenue-maximizing solution corresponds to the socially optimal solution under the optimum subsidy ratio that maximizes social welfare. This suggests that the subsidy ratio can be used as a tool to induce the revenue-maximizing service provider to set a price that leads to social optimality.
- Full Text:
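The service differentiation the abstract proposes rests on priority queuing, where the two user groups see different mean delays on the shared network. A standard two-class non-preemptive priority M/M/1 model makes this concrete; which class maps to the subsidized users, and all rates below, are our assumptions, not the paper's.

```python
def priority_wait_times(lam1, lam2, mu):
    """Mean queueing delays (W1, W2) in a non-preemptive priority M/M/1 queue
    with exponential service rate mu; class 1 is served first.
    W_k = W0 / ((1 - sigma_{k-1}) * (1 - sigma_k)), sigma_k = sum of rho_i, i <= k.
    """
    rho1, rho2 = lam1 / mu, lam2 / mu
    assert rho1 + rho2 < 1, "queue must be stable"
    w0 = (lam1 + lam2) / mu**2                      # mean residual service work
    w1 = w0 / (1 - rho1)                            # high-priority class
    w2 = w0 / ((1 - rho1) * (1 - rho1 - rho2))      # low-priority class
    return w1, w2

w1, w2 = priority_wait_times(lam1=2.0, lam2=3.0, mu=10.0)
print(round(w1, 4), round(w2, 4))  # 0.0625 0.125
```

Because the delay gap between classes depends on the load split, the provider's price (which steers users between classes) and the planner's subsidy ratio both feed back into perceived quality, which is what makes the Stackelberg formulation natural.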
Optimizing capacity assignment in multiservice MPLS networks
- Authors: Rassaki, Abdoul , Nel, Andre
- Date: 2017
- Subjects: MPLS , Network capacity assignment , Optimal routing
- Language: English
- Type: Article
- Identifier: http://hdl.handle.net/10210/238052 , uj:24401 , Citation: Rassaki, A. & Nel, A. 2017. Optimizing capacity assignment in multiservice MPLS networks. South African Computer Journal, 29(1):69–89. DOI: https://doi.org/10.18489/sacj.v29i1.393. , ISSN: 1015-7999 (Print) , ISSN: 2313-7835 (Online)
- Description: Abstract: The general Multiprotocol Label Switching (MPLS) topology optimisation problem is complex; it concerns the optimum selection of links, the assignment of capacities to these links and the routing of requirements over these links. Ideally, all of these are jointly optimised, leading to a minimum-cost network which continually meets given objectives on network delay and throughput. In practice, these problems are often dealt with separately and a solution iterated. In this paper, we propose an algorithm that computes the shortest routes, assigns optimal flows to these routes and simultaneously determines optimal link capacities. We take into account the dynamic adaptation of optimal link capacities by applying the same Quality of Service (QoS) measure used in the flow assignment problem, in combination with a blocking model describing call admission control (CAC) in multiservice broadband telecommunication networks. The main goal is to achieve statistical multiplexing advantages when multiple traffic and QoS classes of connections share a common trunk. We present a mathematical programming model of the problem and efficient solution methods founded on a Lagrangean relaxation of the problem. Experimental findings on 2-class and 6-class models are reported.
- Full Text:
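A classic baseline for the capacity-assignment subproblem is Kleinrock's square-root assignment: minimising average M/M/1 network delay under a capacity budget gives each link its carried flow plus a share of the spare budget proportional to the square root of that flow. The paper's own multiservice/CAC formulation is richer; this sketch uses unit capacity costs and illustrative flows.

```python
import math

def sqrt_capacity_assignment(flows, budget):
    """Link capacities minimising average M/M/1 network delay subject to
    sum(C_i) <= budget (unit cost per unit capacity):
    C_i = f_i + (budget - sum f) * sqrt(f_i) / sum_j sqrt(f_j)."""
    spare = budget - sum(flows)
    assert spare > 0, "budget must exceed total carried flow"
    norm = sum(math.sqrt(f) for f in flows)
    return [f + spare * math.sqrt(f) / norm for f in flows]

caps = sqrt_capacity_assignment(flows=[4.0, 1.0], budget=10.0)
print([round(c, 4) for c in caps])  # the busier link gets more spare capacity
```

The square-root rule deliberately over-provisions lightly loaded links relative to their flow, which is the same delay/throughput trade-off the paper's Lagrangean relaxation navigates with multiple QoS classes and blocking constraints added.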
TB detection using modified Local Binary Pattern features
- Authors: Leibstein, Joshua , Nel, Andre
- Date: 2017
- Subjects: Computer vision , Image processing , Biomedical imaging , Tuberculosis - Diagnosis
- Language: English
- Type: Conference proceedings
- Identifier: http://hdl.handle.net/10210/218346 , uj:21760 , Citation: Leibstein, J. & Nel, A. 2017. TB detection using modified Local Binary Pattern features.
- Description: Abstract: This paper explores a computer-aided detection scheme to help radiologists make a higher percentage of correct diagnoses when analysing chest radiographs. The approach taken in the detection process is to use several proprietary image processing algorithms to adjust, segment and classify a radiograph. Firstly, a Difference of Gaussian (DoG) energy normalisation method is applied to the image; this normalises the effect of differing equipment and calibrations. Thereafter, the lung area is detected using Active Shape Models (ASMs). Once identified, the lungs are analysed using Local Binary Patterns (LBPs). This technique is combined with a probability measure that makes use of the locations of known abnormalities in the training dataset. The segmentation results, compared against ground-truth masks, achieve an overlap segmentation accuracy of 87.598 ± 3.986%. The challenges faced during classification are also discussed.
- Full Text:
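The basic 3×3 Local Binary Pattern operator that the paper's modified descriptor starts from thresholds each of the eight neighbours against the centre pixel and packs the resulting bits into an 8-bit code; histograms of these codes then serve as texture features. The paper's specific modification is not reproduced here; this is the textbook operator with an illustrative patch.

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code of the centre pixel of a 3x3 patch: each neighbour
    (taken clockwise from the top-left) contributes a 1-bit if it is >= the
    centre value; the bits are packed least-significant first."""
    centre = patch[1, 1]
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]      # clockwise neighbour order
    bits = [1 if patch[r, c] >= centre else 0 for r, c in offsets]
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 8, 3]])
print(lbp_code(patch))  # 42: bits set for neighbours 9, 7 and 8
```

Because the code depends only on sign comparisons with the centre, it is invariant to monotonic grey-level changes, which is what makes LBP histograms robust texture features for radiographs acquired under differing exposure conditions.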