Replica server placement is a crucial concern in content delivery networks (CDNs), given the geographic diversity inherent in the placement problem. A review of the existing literature shows that most studies address placement in conventional CDNs rather than cloud-based CDN architectures, and the few reported studies on replica selection are still at a nascent stage of development. Moreover, such models are rarely benchmarked or practically assessed to prove their effectiveness. Hence, the proposed study introduces a novel computational framework for cloud-based CDNs that facilitates cost-effective replica server management for enhanced service delivery. Implemented using an analytical research methodology, the simulated outcomes show that the proposed scheme offers reduced cost, fewer resource dependencies, lower latency, and faster processing time than existing models of replica server placement.
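The cost-driven placement described above can be illustrated with a greedy k-median-style heuristic. This is a sketch of the general technique, not the paper's actual algorithm; the function name and the latency-matrix input are assumptions made for illustration: repeatedly open the candidate site that most reduces total client latency.

```python
def greedy_placement(latency, k):
    """Pick k replica sites minimizing total client latency.

    latency[c][s] = latency from client c to candidate site s.
    Greedy: repeatedly open the site with the largest marginal gain.
    """
    n_clients = len(latency)
    n_sites = len(latency[0])
    chosen = []
    best = [float("inf")] * n_clients  # best latency seen so far per client
    for _ in range(k):
        def gain(s):
            # total latency reduction if site s were opened now
            return sum(max(0, best[c] - latency[c][s]) for c in range(n_clients))
        s_star = max((s for s in range(n_sites) if s not in chosen), key=gain)
        chosen.append(s_star)
        for c in range(n_clients):
            best[c] = min(best[c], latency[c][s_star])
    return chosen, sum(best)
```

Greedy placement of this kind carries a classic (1 - 1/e) approximation guarantee for the coverage-style objective, which is one reason it is a common baseline in placement studies.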
This document proposes an efficient and reliable resource management framework for public cloud computing. It consists of a gossip protocol that ensures fair resource allocation among sites by calculating memory and CPU load factors. It also includes a routing table for dynamically managing tasks. A request partitioning approach is proposed based on the gossip protocol to facilitate cost-efficient splitting of user requests among cloud service providers. Following request partitioning, resource embedding is performed to map virtual to physical resources efficiently and balance resource allocation. The framework is evaluated on a simulated cloud environment and is shown to provide reliable and dynamic resource management.
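The gossip protocol summarized above exchanges CPU and memory load factors between sites so that allocation converges to a fair state. A minimal push-pull averaging round might look like the following sketch; the site representation and pairing scheme are illustrative assumptions, not the framework's exact protocol:

```python
import random

def gossip_round(loads, rng):
    """One push-pull gossip round: random site pairs average their load factors.

    loads: dict site -> (cpu_load, mem_load). Pairwise averaging preserves
    the global sum and drives every site toward the global mean, which is
    what makes the resulting allocation 'fair'.
    """
    sites = list(loads)
    rng.shuffle(sites)
    for a, b in zip(sites[::2], sites[1::2]):
        cpu = (loads[a][0] + loads[b][0]) / 2
        mem = (loads[a][1] + loads[b][1]) / 2
        loads[a] = loads[b] = (cpu, mem)
    return loads
```

Repeating the round drives the spread of load factors down geometrically, so a handful of rounds suffices even for large site counts.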
On the Optimal Allocation of Virtual Resources in Cloud Compu.docx - hopeaustin33688
On the Optimal Allocation of Virtual
Resources in Cloud Computing Networks
Chrysa Papagianni, Aris Leivadeas, Symeon Papavassiliou,
Vasilis Maglaris, Cristina Cervelló-Pastor, and Álvaro Monje
Abstract—Cloud computing builds upon advances on virtualization and distributed computing to support cost-efficient usage of
computing resources, emphasizing on resource scalability and on demand services. Moving away from traditional data-center oriented
models, distributed clouds extend over a loosely coupled federated substrate, offering enhanced communication and computational
services to target end-users with quality of service (QoS) requirements, as dictated by the future Internet vision. Toward facilitating the
efficient realization of such networked computing environments, computing and networking resources need to be jointly treated and
optimized. This requires delivery of user-driven sets of virtual resources, dynamically allocated to actual substrate resources within
networked clouds, creating the need to revisit resource mapping algorithms and tailor them to a composite virtual resource mapping
problem. In this paper, toward providing a unified resource allocation framework for networked clouds, we first formulate the optimal
networked cloud mapping problem as a mixed integer programming (MIP) problem, indicating objectives related to cost efficiency of
the resource mapping procedure, while abiding by user requests for QoS-aware virtual resources. We subsequently propose a method
for the efficient mapping of resource requests onto a shared substrate interconnecting various islands of computing resources, and
adopt a heuristic methodology to address the problem. The efficiency of the proposed approach is illustrated in a simulation/emulation
environment, that allows for a flexible, structured, and comparative performance evaluation. We conclude by outlining a proof-of-
concept realization of our proposed schema, mounted over the European future Internet test-bed FEDERICA, a resource virtualization
platform augmented with network and computing facilities.
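The heuristic stage of the abstract above (mapping QoS-constrained virtual nodes onto substrate nodes with sufficient residual capacity) can be sketched as a first-fit-decreasing greedy pass. The function, the single-dimensional capacity model, and the load-balancing tie-break are assumptions for illustration, not the paper's MIP formulation or its actual heuristic:

```python
def map_virtual_nodes(requests, capacity):
    """Greedily map virtual nodes (largest demand first) onto substrate
    nodes, preferring the node with the most residual capacity so that
    load stays balanced. Returns {virtual_node: substrate_node}, or None
    if some demand cannot be embedded."""
    residual = dict(capacity)
    mapping = {}
    for vnode, demand in sorted(requests.items(), key=lambda kv: -kv[1]):
        host = max(residual, key=residual.get)  # least-loaded substrate node
        if residual[host] < demand:
            return None  # request cannot be embedded on this substrate
        residual[host] -= demand
        mapping[vnode] = host
    return mapping
```

A full embedding algorithm would additionally map virtual links onto substrate paths and check bandwidth, which is what makes the joint problem NP-hard and motivates the MIP formulation.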
Index Terms—Federated infrastructures, resource allocation, resource mapping, virtualization, cloud computing, quality of service
1 INTRODUCTION
Cloud computing promises reliable services delivered through next generation data centers that are built on
compute and storage virtualization technologies. According
to Buyya et al., [1] “a cloud is a type of parallel and distributed
system consisting of a collection of interconnected and virtualized
computers that are dynamically provisioned and presented as one
or more unified computing resources based on service-level
agreements established through negotiation between the service
provider and the consumers” and accessible as a composable
service via web 2.0 technologies.
Therefore, with respect to cloud computing there exist
the “as a service” definitions, which include software as a
service (SaaS) and infrastructure as a service (IaaS), among others.
FDMC: Framework for Decision Making in Cloud for Efficient Resource Management - IJECEIAES
Effective resource management is one of the critical success factors for precise virtualization in cloud computing under dynamic user demands. A review of existing research on cloud resource management shows that there is still large scope for enhancement: existing techniques do not fully exploit the capabilities of virtual machines (VMs) when performing resource allocation. This paper presents FDMC, a Framework for Decision Making in Cloud, which gives VMs better capability to perform resource allocation. The contribution of FDMC is a joint operation of VMs that ensures faster task processing and thereby withstands increasing traffic. The study outcome was compared with existing systems, showing that FDMC performs better in terms of task allocation time, cores wasted, storage wasted, and communication cost.
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
NEURO-FUZZY SYSTEM BASED DYNAMIC RESOURCE ALLOCATION IN COLLABORATIVE CLOUD C... - ijccsa
Cloud collaboration is an emerging technology that enables sharing of computer files using cloud computing: cloud resources are pooled, cloud services are provided on top of them, and users can share documents. Resource allocation in the cloud is challenging because resources offer different Quality of Service (QoS), and services running on these resources may fail to meet user demands. We propose a resource allocation solution based on multi-attribute QoS scoring, considering parameters such as the distance from the user site to the resource, the resource's reputation, task completion time, task completion ratio, and the load at the resource. The proposed algorithm, Multi Attribute QoS Scoring (MAQS), uses a neuro-fuzzy system; a speculative manager is also included to handle fault tolerance. The paper shows that the proposed algorithm performs better than alternatives, including power-trust reputation-based algorithms and the harmony method, which use a single attribute to compute the reputation score of each allocated resource.
Neuro-Fuzzy System Based Dynamic Resource Allocation in Collaborative Cloud C... - neirew J
This paper proposes a neuro-fuzzy system called Multi Attribute QoS scoring (MAQS) for dynamic resource allocation in collaborative cloud computing. MAQS uses a 3-layer neural network trained on 5 quality of service attributes - distance, reputation, task completion time, completion ratio, and load - to provide a QoS score for each resource. Resources are then allocated based on this score. The algorithm collects data periodically from nodes and calculates QoS scores for incoming tasks to select the highest scoring node for task allocation. The paper argues this approach considers multiple attributes and heterogeneity of resources better than previous single-attribute methods.
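Stripped of the neural network, the selection step described above reduces to scoring each node on the five attributes and allocating the task to the maximum. The following sketch uses fixed hand-picked weights as stand-ins for what MAQS learns; the function names and attribute keys are illustrative assumptions:

```python
def qos_score(node, weights):
    """Weighted multi-attribute QoS score. Higher-is-better attributes
    (reputation, completion ratio) enter positively; lower-is-better
    attributes (distance, completion time, load) enter as penalties."""
    return (weights["reputation"] * node["reputation"]
            + weights["completion_ratio"] * node["completion_ratio"]
            - weights["distance"] * node["distance"]
            - weights["completion_time"] * node["completion_time"]
            - weights["load"] * node["load"])

def select_node(nodes, weights):
    """Allocate the incoming task to the highest-scoring node."""
    return max(nodes, key=lambda name: qos_score(nodes[name], weights))
```

The point of the neuro-fuzzy layer in MAQS is precisely to replace these fixed weights with a learned, nonlinear combination of the same five attributes.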
Management of context aware software resources deployed in a cloud environmen... - ijdpsjournal
This document discusses a new scheduling algorithm proposed for managing requests for context-aware software deployed in a cloud computing environment. The algorithm aims to improve the performance of servers hosting high-demand context-aware applications while reducing cloud providers' costs. It does this by classifying similar context requests and dynamically scoring requests, with the goal of processing requests for similar context data in parallel to reduce response times. The algorithm is evaluated through simulation and found to improve efficiency compared to the gi-FIFO scheduling algorithm.
A combined computing framework for load balancing in multi-tenant cloud eco-... - IJECEIAES
As the world becomes digitalized, cloud computing has become a core part of it: massive volumes of data are processed, stored, and transferred over the internet daily. Cloud computing is popular for its quality and its enhanced capability to improve data management, offering better computing resources and data to its user bases (UBs). However, existing cloud traffic management approaches leave many open issues, including how data is managed during service execution. The study introduces two distinct research models with analytical research modeling: a data center virtualization framework under a multi-tenant cloud ecosystem (DCVF-MT) and a collaborative workflow of multi-tenant load balancing (CW-MTLB). The execution flow considers a set of algorithms for both models that address the core problems of load balancing and resource allocation in the cloud computing (CC) ecosystem. The research outcome shows that DCVF-MT outperforms the one-to-one approach by approximately 24.778% in traffic scheduling and yields a 40.33% improvement in cloudlet handling time. Moreover, it attains an overall 8.5133% improvement in resource cost optimization, which is significant for the adaptability of the frameworks to future cloud applications where adequate virtualization and resource mapping will be required.
This document summarizes a paper that presents a novel method for passive resource discovery in cluster grid environments. The method monitors network packet frequency from nodes' network interface cards to identify nodes with available CPU cycles (<70% utilization) by detecting latency signatures from frequent context switching. Experiments on a 50-node testbed showed the method can consistently and accurately discover available resources by analyzing existing network traffic, including traffic passed through a switch. The paper also proposes algorithms for distributed two-level resource discovery, replication and utilization to optimize resource allocation and access costs in distributed computing environments.
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug... - IJECEIAES
Broadcasting is a well known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is a prominent challenge in the cloud scheduling process, as there are fixed limits the system must meet. This paper focuses on cloud server maintenance and scheduling, applying an interactive broadcasting energy-efficient computing technique together with the cloud computing server. The remote host machines used for cloud services dissipate more power and consequently consume more and more energy, and power consumption is one of the main factors determining the cost of computing resources. The paper therefore uses an avoidance technique for assigning data center resources that depends dynamically on application demands and supports cloud computing by optimizing the number of servers in use.
Technical analysis of content placement algorithms for content delivery netwo... - IJECEIAES
Content placement algorithms are an integral part of cloud-based content delivery networks. They are responsible for selecting the precise content to be stored on surrogate servers distributed over a geographical region. Although various works have already been carried out in this sector, most of them have loopholes that have received little disclosure. Quality of service, quality of experience, and cost are well known objectives targeted for improvement in existing work, yet various other aspects and underlying design considerations are equally important. Therefore, this paper reviews the existing approaches to content placement over cloud-based content delivery networks, with the aim of exposing open-ended research issues.
A latency-aware max-min algorithm for resource allocation in cloud - IJECEIAES
Cloud computing is an emerging distributed computing paradigm. However, it requires initiatives tailored for the cloud environment, such as an on-the-fly mechanism for providing resource availability based on the rapidly changing demands of customers. Although resource allocation is an important and widely studied problem, certain criteria still need to be considered, including meeting users' quality of service (QoS) requirements: high QoS can be guaranteed only if resources are allocated in an optimal manner. This paper proposes a latency-aware max-min algorithm (LAM) for allocating resources in cloud infrastructures. The algorithm was designed to address challenges such as variations in user demand and on-demand access to unlimited resources. It is capable of allocating resources in a cloud-based environment with the target of enhancing infrastructure-level performance and maximizing profit through optimal allocation. A priority value is also associated with each user, calculated by the analytic hierarchy process (AHP). The results validate the superiority of LAM, which performs better than other state-of-the-art algorithms and allocates resources flexibly under fluctuating demand patterns.
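The max-min component itself (before the latency and AHP-priority terms are added) is the classic water-filling allocation: satisfy the smallest demands in full and re-split the leftover capacity among the rest. A minimal sketch, with function name and data shapes assumed for illustration:

```python
def max_min_share(demands, capacity):
    """Classic max-min fair allocation.

    demands: dict user -> requested amount; capacity: total available.
    Users demanding no more than the current fair share get their full
    demand; the freed capacity is re-divided among the remaining users.
    """
    alloc = {}
    remaining = dict(demands)
    while remaining:
        fair = capacity / len(remaining)
        satisfied = {u: d for u, d in remaining.items() if d <= fair}
        if not satisfied:
            # nobody fits under the fair share: split capacity evenly
            for u in remaining:
                alloc[u] = fair
            return alloc
        for u, d in satisfied.items():
            alloc[u] = d
            capacity -= d
            del remaining[u]
    return alloc
```

LAM, as described, would additionally weight each user's share by an AHP-derived priority and by observed latency, but the fixed point being computed is this max-min share.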
The growth of the internet of things and wireless technology has led to enormous data generation for uses such as healthcare, scientific, and data-intensive applications. Cloud-based Storage Area Networks (SANs) have been widely used in recent times for storing and processing these data. Providing fault-tolerant, continuous access to data with minimal latency and cost is challenging, and an efficient fault-tolerance mechanism is required. Data replication is an efficient fault-tolerance mechanism that has been considered by existing methodologies. However, data replica placement is challenging, and existing methods are not efficient with respect to the dynamic application requirements of cloud-based SANs, incurring latency and thereby higher data transmission cost. This work presents an efficient replica placement and transmission technique, Bipartite Graph based Data Replica Placement (BGDRP), that helps minimize latency and computing cost. The performance of BGDRP is evaluated using a real-time scientific application workflow; the outcome shows that BGDRP minimizes data access latency, computation time, and cost over state-of-the-art techniques.
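The bipartite-graph idea above (data blocks on one side, storage sites on the other, edges weighted by access cost) can be sketched as a greedy minimum-cost assignment over the edge list. This is an illustrative stand-in for BGDRP, not its actual algorithm; the cost structure and capacity model are assumptions:

```python
def place_replicas(cost, site_cap):
    """Greedy min-cost placement on a bipartite block/site graph.

    cost[(block, site)] = combined latency/storage cost of placing the
    block's replica at that site. site_cap[site] = replica slots available.
    Cheapest edges are taken first, respecting site capacity."""
    cap = dict(site_cap)
    placed = {}
    for (block, site), c in sorted(cost.items(), key=lambda kv: kv[1]):
        if block not in placed and cap[site] > 0:
            placed[block] = site
            cap[site] -= 1
    return placed
```

An exact formulation would solve this as min-cost bipartite matching (e.g. the Hungarian algorithm); the greedy pass trades optimality for simplicity and near-linear cost in the number of edges.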
Cost-Minimizing Dynamic Migration of Content Distribution Services into Hybri... - 1crore projects
Resource optimization-based network selection model for heterogeneous wireles... - IAESIJAI
The internet of things (IoT) environment requires seamless connectivity to meet real-time application requirements, and thus efficient resource management techniques. Heterogeneous wireless networks (HWNs) have been emphasized for providing seamless connectivity with high quality of service (QoS) to provision IoT applications. However, existing resource allocation schemes suffer from interference and fail to provide a quality experience for low-priority users, inducing bandwidth wastage and increasing handover failure. To address these issues, this paper presents the resource-optimized network selection (RONS) method for HWNs. RONS employs better load balancing to reduce handover failure and maximizes resource utilization through dynamic slot optimization, assuring a tradeoff between high performance for high-priority users and quality of experience (QoE) for low-priority users. The experimental outcome shows that RONS achieves very good performance in terms of throughput, packet loss, and handover failures in comparison with existing resource selection methods.
Social-sine cosine algorithm-based cross layer resource allocation in wireles... - IJECEIAES
Cross-layer resource allocation in wireless networks is traditionally approached either through communication networks or through information theory. A major issue in networking is the allocation of limited resources among the network's users. In a traditional layered network, resources are allocated at the medium access control (MAC) and network layers, which use the communication links as bit pipes delivering data at a fixed rate with occasional random errors. Hence, this paper presents cross-layer resource allocation in wireless networks based on the proposed social-sine cosine algorithm (SSCA), designed by integrating the social ski driver (SSD) and sine cosine algorithm (SCA). To further refine the allocation scheme, SSCA uses a fitness function based on energy and fairness, in which max-min, hard fairness, proportional fairness, mixed-bias, and maximum throughput are considered. Based on energy and fairness, the cross-layer optimization entity makes resource allocation decisions to improve the sum rate of the network. The resource allocation performance of the proposed model is evaluated in terms of energy, throughput, and fairness; the developed model achieves a maximal energy of 258213, a maximal throughput of 3.703, and a maximal fairness of 0.868.
Demand-driven Gaussian window optimization for executing preferred population... - IJECEIAES
Scheduling is an essential enabling technique for cloud computing, facilitating efficient resource utilization among the jobs queued for processing. However, it experiences performance overheads due to inappropriate provisioning of resources to requesting jobs, so cloud performance must be accomplished through intelligent scheduling and allocation of resources. In this paper, we propose the application of a Gaussian window in which jobs of a heterogeneous nature are scheduled in round-robin fashion on different cloud clusters. The clusters are themselves heterogeneous, with datacenters of varying server capacity. Performance evaluation results show that the proposed algorithm enhances the QoS of the computing model: allocating jobs to specific clusters improves system throughput and reduces latency.
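The round-robin dispatch across clusters described above can be sketched in a few lines; the function name is an assumption, and cluster heterogeneity is modeled here simply by repeating a cluster in the list in proportion to its server capacity (the paper's Gaussian window would shape this weighting instead):

```python
from itertools import cycle

def round_robin_dispatch(jobs, clusters):
    """Assign jobs to clusters in strict round-robin order.

    clusters: list of cluster names. A higher-capacity cluster can be
    listed multiple times to receive a proportionally larger job share."""
    assignment = {}
    ring = cycle(clusters)
    for job in jobs:
        assignment[job] = next(ring)
    return assignment
```

For example, `round_robin_dispatch(jobs, ['A', 'A', 'B'])` gives cluster A twice the share of cluster B, approximating a 2:1 capacity ratio.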
An efficient resource sharing technique for multi-tenant databases - IJECEIAES
Multi-tenancy is a key component of the Software as a Service (SaaS) paradigm. Multi-tenant software has gained much attention in academia, research, and business. It provides scalability and economic benefits for both cloud service providers and tenants by sharing the same resources and infrastructure, with isolation of shared databases, network, and computing resources under service level agreement (SLA) compliance. In a multi-tenant scenario, active tenants compete for resources in order to access the database: if one tenant blocks the resources, the performance of all other tenants may be restricted and fair sharing compromised. Tenant performance must not be affected by the resource-intensive activities and volatile workloads of other tenants, while the provider's prime goal is low operating cost that still satisfies each tenant's specific schemas and SLAs. Consequently, effective and dynamic resource sharing algorithms are needed to handle these issues. This work presents a model referred to as the Multi-Tenant Dynamic Resource Scheduling Model (MTDRSM), embracing a query classification and worker sorting technique that enables efficient and dynamic resource sharing among tenants. The experiments show significant performance improvement over the existing model.
IRJET- Improving Data Availability by using VPC Strategy in Cloud Environ...IRJET Journal
This document discusses improving data availability in cloud environments using virtual private cloud (VPC) strategies and data replication strategies (DRS). It proposes using VPC to define private networks in public clouds and deploying cloud resources into those private networks for improved security and control. It also proposes using DRS to store multiple copies of data across different nodes to increase data availability, reduce bandwidth usage, and provide fault tolerance. The proposed approach identifies popular data files for replication, selects the best storage sites based on factors like request frequency, failure probability, and storage usage, and decides when to replace replicas to optimize resource usage. A simulation showed this hybrid VPC and DRS approach improved performance metrics like response time, network usage, and load balancing compared to existing approaches.
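The site-selection step combines request frequency, failure probability, and storage usage into a ranking; a minimal sketch of such a scoring rule could look like the following (the weights and all names are assumptions for illustration, not values from the paper):

```python
def site_score(request_freq, failure_prob, storage_used, w=(0.5, 0.3, 0.2)):
    """Higher is better: favour frequently requested, reliable, lightly
    loaded sites. All inputs are normalised to [0, 1]; the weights are
    an assumption, not taken from the paper."""
    wf, wp, ws = w
    return wf * request_freq + wp * (1 - failure_prob) + ws * (1 - storage_used)

def best_sites(sites, k=2):
    """Rank candidate sites and keep the k best for a new replica."""
    ranked = sorted(sites, key=lambda s: site_score(*s[1:]), reverse=True)
    return [name for name, *_ in ranked[:k]]

# (site, request frequency, failure probability, storage usage)
sites = [("s1", 0.9, 0.05, 0.4), ("s2", 0.2, 0.30, 0.9), ("s3", 0.7, 0.10, 0.2)]
chosen = best_sites(sites)
```

Re-running the ranking periodically as frequencies and usage drift would also cover the "when to replace replicas" decision the abstract mentions.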
A Study on Replication and Failover Cluster to Maximize System UptimeYogeshIJTSRD
This document summarizes a study on using replication and failover clusters to maximize system uptime for cloud services. It discusses challenges in ensuring high availability of cloud services from a provider perspective. The study aims to present a high availability solution using load balancing, elasticity, replication, and disaster recovery configuration. It reviews related literature on digital media distribution platforms, content delivery networks, auto-scaling strategies, and database replication impact. It also covers methodologies like CloudFront, state machine replication, neural networks, Markov decision processes, and sliding window protocols. The scope is to build a scalable, fault-tolerant environment with disaster recovery and ensure continuous availability. The conclusion is that data replication and failover clusters are necessary to plan for data availability and recovery.
A simplified optimization for resource management in cognitive radio network-...IJECEIAES
With the increasing evolution of applications and services in the internet-of-things (IoT), there is growing concern about offering superior quality of service to its ever-increasing user base. This demand can be fulfilled by harnessing the potential of the cognitive radio network (CRN), through which better accessibility of services and resources can be achieved. However, the existing literature shows that there are still open-ended issues in this regard, and hence the proposed system offers a solution to this problem. This paper presents a model capable of optimizing resources when CRN is integrated into IoT over a fifth-generation (5G) network. The implementation uses analytical modeling to frame the topology construction for IoT and optimizes resources by introducing a simplified data transmission mechanism in the IoT environment. The study outcome shows that the proposed system performs better with respect to throughput and response time in comparison to existing schemes.
Performance and Cost Evaluation of an Adaptive Encryption Architecture for Cl...Editor IJLRES
The cloud database as a service is a novel paradigm that can support several Internet-based applications, but its adoption requires the solution of information confidentiality problems. We propose a novel architecture for adaptive encryption of public cloud databases that offers an interesting alternative to the tradeoff between the required data confidentiality level and the flexibility of the cloud database structures at design time. We demonstrate the feasibility and performance of the proposed solution through a software prototype. Moreover, we propose an original cost model that is oriented to the evaluation of cloud database services in plain and encrypted instances and that takes into account the variability of cloud prices and tenant workloads during a medium-term period.
Multi-objective load balancing in cloud infrastructure through fuzzy based de...IAESIJAI
Cloud computing became a popular technology which influence not only
product development but also made technology business easy. The services
like infrastructure, platform and software can reduce the complexity of
technology requirement for any ecosystem. As the users of cloud-based
services increases the complexity of back-end technologies also increased.
The heterogeneous requirement of users in terms for various configurations
creates different unbalancing issues related to load. Hence effective load
balancing in a cloud system with reference to time and space become crucial
as it adversely affect system performance. Since the user requirement and
expected performance is multi-objective use of decision-making tools like
fuzzy logic will yield good results as it uses human procedure knowledge in
decision making. The overall system performance can be further improved by
dynamic resource scheduling using optimization technique like genetic
algorithm.
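The abstract leaves the fuzzy decision rule unspecified; a toy sketch of the idea — fuzzifying CPU load into linguistic levels and routing work to the host whose "low" membership is strongest — could look like this (membership shapes, breakpoints, and all names are hypothetical):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def load_level(cpu):
    """Fuzzify CPU utilisation (0-100%) into linguistic load levels."""
    return {
        "low":    tri(cpu, -1, 0, 50),
        "medium": tri(cpu, 25, 50, 75),
        "high":   tri(cpu, 50, 100, 101),
    }

def pick_host(hosts):
    """Send the next request to the host with the strongest 'low' load."""
    return max(hosts, key=lambda h: load_level(hosts[h])["low"])

hosts = {"h1": 80, "h2": 30, "h3": 55}  # host -> current CPU %
least_loaded = pick_host(hosts)
```

A genetic algorithm, as the abstract suggests, could then evolve the membership breakpoints themselves rather than leaving them hand-tuned.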
This document summarizes a proposed enhancement to the OpenStack Nova scheduler to incorporate network factors into virtual machine scheduling decisions. The current Nova scheduler only considers CPU, memory, and storage utilization when placing VMs, but not network utilization or connectivity. The proposed enhancement adds a network filter and weighting to Nova's filtering scheduler. It would check network interface status and bandwidth when initially placing VMs to ensure connectivity. It would also enable dynamic VM migration if a host's network card fails. This aims to optimize VM placement and improve performance by considering network factors.
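The two-stage filter-then-weigh decision described above can be sketched as follows. This is a standalone toy loosely modeled on Nova's filter/weigher plugin idea, not OpenStack's actual API; all class names and host fields are hypothetical:

```python
class NetworkFilter:
    """Pass only hosts whose NIC is up and has enough spare bandwidth
    (the proposed network filter's admission check)."""
    def host_passes(self, host, required_bw):
        return host["nic_up"] and host["free_bw"] >= required_bw

class NetworkWeigher:
    """Order surviving hosts by free bandwidth, best first."""
    def weigh(self, hosts):
        return sorted(hosts, key=lambda h: h["free_bw"], reverse=True)

hosts = [
    {"name": "h1", "nic_up": True,  "free_bw": 200},
    {"name": "h2", "nic_up": False, "free_bw": 900},  # NIC down: filtered out
    {"name": "h3", "nic_up": True,  "free_bw": 500},
]
f, w = NetworkFilter(), NetworkWeigher()
candidates = w.weigh([h for h in hosts if f.host_passes(h, required_bw=100)])
```

The migration trigger in the proposal would amount to re-running the same check on already-placed VMs when a host's NIC status changes.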
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
More related content
Similar to A novel cost-based replica server placement for optimal service quality in cloud-based content delivery network
A combined computing framework for load balancing in multi-tenant cloud eco-...IJECEIAES
Since the world is becoming digitalized, cloud computing has become a core part of it. Massive amounts of data are processed, stored, and transferred over the internet daily. Cloud computing has become quite popular because of its superlative quality and enhanced capability to improve data management, offering better computing resources and data to its user bases (UBs). However, there are many issues in existing cloud traffic management approaches and in how data is managed during service execution. The study introduces two distinct research models: the data center virtualization framework under multi-tenant cloud-ecosystem (DCVF-MT) and the collaborative workflow of multi-tenant load balancing (CW-MTLB), with analytical research modeling. The execution flow considers a set of algorithms for both models that address the core problems of load balancing and resource allocation in the cloud computing (CC) ecosystem. The research outcome illustrates that DCVF-MT outperforms the one-to-one approach with approximately 24.778% performance improvement in traffic scheduling. It also yields a 40.33% performance improvement in managing cloudlet handling time. Moreover, it attains an overall 8.5133% performance improvement in resource cost optimization, which is significant for ensuring the adaptability of the frameworks in futuristic cloud applications where adequate virtualization and resource mapping will be required.
This document summarizes a paper that presents a novel method for passive resource discovery in cluster grid environments. The method monitors network packet frequency from nodes' network interface cards to identify nodes with available CPU cycles (<70% utilization) by detecting latency signatures from frequent context switching. Experiments on a 50-node testbed showed the method can consistently and accurately discover available resources by analyzing existing network traffic, including traffic passed through a switch. The paper also proposes algorithms for distributed two-level resource discovery, replication and utilization to optimize resource allocation and access costs in distributed computing environments.
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug...IJECEIAES
Broadcasting is a well-known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is a prominent challenge in the cloud scheduling process, as there are fixed limits the system must meet. In this research paper we focus on cloud server maintenance and scheduling, using an interactive broadcasting energy-efficient computing technique together with the cloud computing server. Additionally, the remote host machines used for cloud services dissipate more power and thereby consume ever more energy; power consumption is one of the main factors determining the cost of computing resources. The proposed approach uses avoidance technology to assign data center resources dynamically depending on application demands, supporting cloud computing through optimization of the servers in use.
Technical analysis of content placement algorithms for content delivery netwo...IJECEIAES
A content placement algorithm is an integral part of the cloud-based content delivery network. It is responsible for selecting precise content to be reposited on the surrogate servers distributed over a geographical region. Although various works have already been carried out in this sector, there are loopholes connected to most of them that have not received much disclosure. It is already known that quality of service, quality of experience, and cost are essential objectives targeted for improvement in existing work. Still, there are various other aspects and underlying reasons that are equally important from the design perspective. Therefore, this paper reviews the existing approaches to content placement algorithms over the cloud-based content delivery network, aiming to expose open-ended research issues.
A latency-aware max-min algorithm for resource allocation in cloud IJECEIAES
Cloud computing is an emerging distributed computing paradigm. However, it requires certain initiatives tailored for the cloud environment, such as an on-the-fly mechanism for providing resource availability based on the rapidly changing demands of customers. Although resource allocation is an important and widely studied problem, certain criteria still need to be considered, including meeting users' quality of service (QoS) requirements: high QoS can be guaranteed only if resources are allocated optimally. This paper proposes a latency-aware max-min algorithm (LAM) for allocation of resources in cloud infrastructures. The proposed algorithm was designed to address challenges associated with resource allocation, such as variations in user demands and on-demand access to unlimited resources. It is capable of allocating resources in a cloud-based environment with the target of enhancing infrastructure-level performance and maximizing profits through optimal allocation of resources. A priority value is also associated with each user, calculated by the analytic hierarchy process (AHP). The results validate the superiority of LAM, showing better performance than other state-of-the-art algorithms and flexibility in resource allocation for fluctuating resource demand patterns.
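A plain max-min heuristic with a per-machine latency term added to the completion-time estimate gives the flavour of a latency-aware variant; the sketch below is a generic illustration under assumed inputs (work units, machine speeds, fixed per-machine latencies), not the paper's LAM with its AHP priorities:

```python
def max_min(tasks, machines, latency):
    """Latency-aware max-min: repeatedly take the unscheduled task whose
    best achievable completion time is largest, and assign it to the
    machine that completes it earliest (compute time + network latency)."""
    ready = {m: 0.0 for m in machines}   # when each machine becomes free
    plan, pending = {}, dict(tasks)      # task -> remaining work units
    while pending:
        def ct(t, m):                    # completion time of t on m
            return ready[m] + pending[t] / machines[m] + latency[m]
        # the task whose *minimum* completion time is the *maximum*
        t = max(pending, key=lambda t: min(ct(t, m) for m in machines))
        m = min(machines, key=lambda m: ct(t, m))
        plan[t], ready[m] = m, ct(t, m)
        del pending[t]
    return plan

# Two tasks (work units), two machines (speed), per-machine latency.
plan = max_min({"t1": 10, "t2": 40},
               {"m1": 1.0, "m2": 2.0},
               {"m1": 0.0, "m2": 5.0})
```

Scheduling the largest task first keeps it off the critical path later, which is the usual argument for max-min over min-min.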
The growth of the internet of things and wireless technology has led to enormous data generation for various applications such as healthcare, scientific, and data-intensive applications. Cloud-based storage area networks (SAN) have been widely used in recent times for storing and processing these data. Providing fault-tolerant, continuous access to data with minimal latency and cost is challenging and requires an efficient fault-tolerance mechanism. Data replication is an efficient fault-tolerance mechanism that has been considered by existing methodologies. However, data replica placement is challenging, and existing methods are not efficient with respect to the dynamic application requirements of a cloud-based storage area network; this incurs latency, which in turn induces a higher cost of data transmission. This work presents an efficient replica placement and transmission technique using Bipartite Graph based Data Replica Placement (BGDRP) that helps minimize latency and computing cost. The performance of BGDRP is evaluated using a real-time scientific application workflow. The outcome shows that the BGDRP technique minimizes data access latency, computation time, and cost over state-of-the-art techniques.
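Modelling replica placement as a bipartite graph means edges connect replicas to candidate nodes, weighted by access cost. The paper's BGDRP solves this as a matching problem; the cheapest-edge-first greedy pass below only illustrates the modelling, with all costs and names invented for the example:

```python
def place_replicas(cost):
    """Greedy sketch: cost[r][n] is the access cost of hosting replica r
    on node n; each node holds at most one replica. Edges are taken
    cheapest-first (an approximation of min-cost bipartite matching)."""
    edges = sorted((c, r, n) for r, row in cost.items() for n, c in row.items())
    placed, used = {}, set()
    for c, r, n in edges:
        if r not in placed and n not in used:
            placed[r] = n
            used.add(n)
    return placed

cost = {
    "r1": {"n1": 3, "n2": 1},
    "r2": {"n1": 2, "n2": 4},
}
placement = place_replicas(cost)
```

An exact solver (e.g. the Hungarian algorithm) would guarantee the minimum total cost where the greedy pass can miss it.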
Cost-Minimizing Dynamic Migration of Content Distribution Services into Hybri...1crore projects
Resource optimization-based network selection model for heterogeneous wireles...IAESIJAI
The internet of things (IoT) environment requires seamless connectivity to meet real-time application requirements, and thus needs efficient resource management techniques. Heterogeneous wireless networks (HWNs) have been emphasized for providing seamless connectivity with high quality of service (QoS) to provision IoT applications. However, the existing resource allocation scheme suffers from interference and fails to provide a quality experience for low-priority users; as a result, it induces bandwidth wastage and increases handover failure. Addressing these issues, this paper presents the resource-optimized network selection (RONS) method for HWNs. RONS employs better load balancing to reduce handover failure and maximizes resource utilization through dynamic slot optimization. The method assures a tradeoff between high performance for high-priority users and quality of experience (QoE) for low-priority users. The experimental outcome shows that RONS achieves very good performance in terms of throughput, packet loss, and handover failures in comparison with existing resource selection methods.
Social-sine cosine algorithm-based cross layer resource allocation in wireles...IJECEIAES
Cross-layer resource allocation in wireless networks is traditionally approached either through communications networks or through information theory. A major issue in networking is the allocation of limited resources among the users of the network. In a traditional layered network, resources are allocated at the medium access control (MAC) layer, and the network layer uses the communication links as bit pipes delivering data at a fixed rate with occasional random errors. Hence, this paper presents cross-layer resource allocation in wireless networks based on the proposed social-sine cosine algorithm (SSCA), designed by integrating the social ski driver (SSD) and sine cosine algorithm (SCA). To further refine the resource allocation scheme, SSCA uses a fitness function based on energy and fairness, in which max-min, hard fairness, proportional fairness, mixed-bias, and maximum throughput are considered. Based on energy and fairness, the cross-layer optimization entity makes resource allocation decisions to maximize the sum rate of the network. The performance of the proposed resource allocation model is evaluated in terms of energy, throughput, and fairness; the developed model achieves a maximal energy of 258213, a maximal throughput of 3.703, and a maximal fairness of 0.868.
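A fitness function combining fairness and energy, as the abstract describes, is often built on Jain's fairness index; the sketch below is a toy version of such a trade-off (the weighting `alpha` and the linear form are assumptions, not the paper's exact fitness):

```python
def jain_fairness(rates):
    """Jain's fairness index: ranges from 1/n (one user gets everything)
    to 1 (perfectly equal rates)."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

def fitness(alloc, energy_cost, alpha=0.5):
    """Toy fitness trading throughput fairness against energy cost;
    a metaheuristic such as SSCA would search allocations maximizing it."""
    return alpha * jain_fairness(alloc) - (1 - alpha) * energy_cost(alloc)

# Hypothetical allocation and a linear energy-cost model for illustration.
score = fitness([1.0, 2.0, 3.0], lambda a: sum(a) / 10.0)
```

Swapping `jain_fairness` for a max-min or proportional-fairness term reproduces the other fairness notions the abstract lists.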
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par...IJECEIAES
Wide application of proportional-integral-differential (PID)-regulator in industry requires constant improvement of methods of its parameters adjustment. The paper deals with the issues of optimization of PID-regulator parameters with the use of neural network technology methods. A methodology for choosing the architecture (structure) of neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, as well as the form and type of activation function. Algorithms of neural network training based on the application of the method of minimizing the mismatch between the regulated value and the target value are developed. The method of back propagation of gradients is proposed to select the optimal training rate of neurons of the neural network. The neural network optimizer, which is a superstructure of the linear PID controller, allows increasing the regulation accuracy from 0.23 to 0.09, thus reducing the power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
An improved modulation technique suitable for a three level flying capacitor ... (IJECEIAES)
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process compared with conventional methods. The proposed simplified modulation technique paves the way for more straightforward and efficient control of multilevel inverters, enabling their widespread adoption and integration into modern power electronic systems. By combining sinusoidal pulse width modulation (SPWM) with a high-frequency square wave pulse, the technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a decreased count of voltage references, thereby simplifying the control algorithm.
A review on features and methods of potential fishing zone (IJECEIAES)
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It examines features such as sea surface temperature (SST) and sea surface height (SSH), along with the classification methods applied to such data. The study underscores the importance of examining potential fishing zones using advanced analytical techniques and thoroughly explores the methodologies employed by researchers, covering both past and current approaches, with a focus on data characteristics and the application of classification algorithms. The prediction of potential fishing zones relies significantly on the effectiveness of these algorithms. Previous research has assessed the performance of models such as support vector machines (SVM), naïve Bayes, and artificial neural networks (ANN); in one reported result, SVM classified fisheries test data with 97.6% accuracy against 94.2% for naïve Bayes. Considering recent work in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
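One of the classifiers compared above can be illustrated with a minimal Gaussian naive Bayes on synthetic (SST, SSH) features; the class means and spreads below are invented for illustration, not oceanographic values:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic (SST degC, SSH m) samples: class 0 = poor zone, class 1 = potential zone
X0 = rng.normal([26.0, 0.2], 0.5, (100, 2))
X1 = rng.normal([29.0, 0.6], 0.5, (100, 2))
X, y = np.vstack([X0, X1]), np.array([0] * 100 + [1] * 100)

def fit_gnb(X, y):
    """Store per-class feature means and variances."""
    return {c: (X[y == c].mean(axis=0), X[y == c].var(axis=0)) for c in np.unique(y)}

def predict_gnb(model, x):
    """Pick the class maximizing the Gaussian log-likelihood (equal priors)."""
    def log_lik(mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(model, key=lambda c: log_lik(*model[c]))

model = fit_gnb(X, y)
print(predict_gnb(model, np.array([29.1, 0.55])))  # classified as potential zone (1)
```

An SVM instead learns a maximum-margin boundary between the two clouds; on well-separated synthetic data like this both methods agree, and the accuracy gap the review reports only appears on harder, overlapping data.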
Electrical signal interference minimization using appropriate core material f... (IJECEIAES)
As demand rises for smaller, quicker, and more powerful devices, Moore's law continues to be followed closely. The industry has worked hard to make small devices that boost productivity, with the goal of optimizing device density. Scientists are reducing interconnect delays to improve circuit performance, which led to three-dimensional integrated circuit (3D IC) concepts that stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical noise coupling is a major concern in 3D integrated circuits. Researchers have developed and tested through-silicon via (TSV) and substrate techniques to decrease this coupling. This study illustrates a novel noise-coupling reduction method using several electrical coupling models. A 22% drop in coupling from wave-carrying (aggressor) to victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... (IJECEIAES)
Climate change's impact on the planet has forced the United Nations and governments to promote green energy and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems has gained stronger momentum due to their numerous advantages over fossil-fuel alternatives, advantages that go beyond sustainability to include financial support and stability. This paper introduces a hybrid PV-EV system to support industrial and commercial plants. It covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present, and presents the proposed design diagram that sets the system's priorities and requirements. The proposed approach allows a setup to improve its power stability, especially during power outages. The presented information helps researchers and plant owners complete the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy milk farm support the theoretical work and highlight its benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty for sustainable electrical systems. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Bibliometric analysis highlighting the role of women in addressing climate ch... (IJECEIAES)
Fossil fuel consumption has increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have played a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. The results highlight how women have influenced climate-related policies and actions, point out areas of research deficiency, and offer recommendations on how to increase women's role in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... (IJECEIAES)
Active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), a diesel generator, and a micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy for the matrix converter is used to convert the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, with this switching strategy, low-order harmonics are not produced in the output current and voltage, so the size of the output filter can be reduced. The suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates excellent performance and a favorable response across various load alteration scenarios. The strategy is examined in several scenarios on MG test systems, and the simulation results are discussed.
Enhancing battery system identification: nonlinear autoregressive modeling fo... (IJECEIAES)
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a survey (IJECEIAES)
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and have drawn significant interest from the research community. Assessing the field's evolution is essential in order to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to these research trends by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current state of research on smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions is presented. To that effect, we searched the Web of Science (WoS) and Scopus databases, obtaining 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. Given extraction limitations in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets were analyzed using a bibliometric tool called bibliometrix. The main outputs are presented along with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ... (IJECEIAES)
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences for the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, which are divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Additionally, each approach is compared and assessed on several factors, including implementation cost, non-detection zones, power quality degradation, and response time, using the analytical hierarchy process (AHP). Comparing all criteria together, the multi-criteria decision-making analysis yields overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods. Thus, by total weight, hybrid approaches are the least suitable choice, while signal processing-based methods are the most appropriate islanding detection method to select and implement in a power system with respect to the aforementioned factors. The proposed hierarchy model is studied and examined using Expert Choice software.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi... (IJECEIAES)
The power generated by photovoltaic (PV) systems is influenced by environmental factors, and this variability hampers the control and utilization of solar cells' peak output. In this study, a single-stage grid-connected PV system is designed to enhance power quality. Our approach employs fuzzy logic in the direct power control (DPC) of a three-phase voltage source inverter (VSI), enabling seamless integration of the PV system with the grid. Additionally, a fuzzy logic-based maximum power point tracking (MPPT) controller is adopted, which outperforms traditional methods such as incremental conductance (INC) in enhancing solar cell efficiency and minimizing response time. Moreover, the inverter's real-time active and reactive power is directly managed to achieve a unity power factor (UPF). The system's performance is assessed through a MATLAB/Simulink implementation, showing marked improvement over conventional methods, particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m², the results show that the proposed method reduces the total harmonic distortion (THD) of the current injected into the grid by approximately 46% and 38%, respectively, compared to conventional methods. Furthermore, we compare the simulation results with IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b... (IJECEIAES)
Photovoltaic systems have emerged as a promising energy resource that caters to the future needs of society, owing to their renewable, inexhaustible, and cost-free nature. The power output of these systems relies on solar cell radiation and temperature. In order to mitigate the dependence on atmospheric conditions and enhance power tracking, a conventional approach has been improved by integrating various methods. To optimize the generation of electricity from solar systems, the maximum power point tracking (MPPT) technique is employed. To overcome limitations such as steady-state voltage oscillations and improve transient response, two traditional MPPT methods, namely the fuzzy logic controller (FLC) and perturb and observe (P&O), have been modified. This research paper aims to simulate and validate the step size of the proposed modified P&O and FLC techniques within the MPPT algorithm using MATLAB/Simulink for efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ... (IJECEIAES)
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed; the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results, which show that the proposed controller achieves small synchronous tracking errors with significantly reduced chattering.
Remote field-programmable gate array laboratory for signal acquisition and de... (IJECEIAES)
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback offered by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA designs. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples during experiments by adaptively compressing the signal prior to transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m... (IJECEIAES)
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba... (IJECEIAES)
Rapidly and remotely monitoring solar cell system status parameters, namely solar irradiance, temperature, and humidity, is critical to enhancing system efficiency. Hence, in the present article an improved smart internet of things (IoT) prototype, based on an embedded system built around the NodeMCU ESP8266 (ESP-12E), was implemented experimentally. Three different regions in Egypt, the cities of Luxor, Cairo, and El-Beheira, were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live in Ubidots over the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m², respectively, during the solar day. The accuracy and speed of the monitoring results make the proposed IoT system a strong candidate for monitoring solar cell systems. Moreover, the solar power radiation results for the three regions strongly suggest Luxor and Cairo, rather than El-Beheira, as suitable places to build a solar cell power station.
An efficient security framework for intrusion detection and prevention in int... (IJECEIAES)
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies and malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. IoT security systems must therefore monitor and restrict unwanted events in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived for identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention rely on supervised learning, which requires a large amount of labeled training data; such datasets are very difficult to source in a large network like IoT. To address this problem, the proposed study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
Time division multiplexing technique for communication system (HODECEDSIET)
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing transmission time into many segments, each having a very short duration. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM allows multiple signals to share a single transmission medium, making efficient use of the available channel bandwidth.
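The synchronous multiplexer/demultiplexer described above can be sketched in a few lines; the three streams and their slot labels are purely illustrative:

```python
def tdm_multiplex(streams):
    """Synchronous TDM: one frame = one sample from each stream, in fixed slot order."""
    frames = []
    for samples in zip(*streams):    # each tuple from zip is one frame of time slots
        frames.extend(samples)
    return frames

def tdm_demultiplex(frames, n_streams):
    """Recover stream i by taking every n_streams-th slot, offset by i."""
    return [frames[i::n_streams] for i in range(n_streams)]

voice = ["v0", "v1", "v2"]
data  = ["d0", "d1", "d2"]
video = ["x0", "x1", "x2"]

line = tdm_multiplex([voice, data, video])
# line == ['v0', 'd0', 'x0', 'v1', 'd1', 'x1', 'v2', 'd2', 'x2']
assert tdm_demultiplex(line, 3) == [voice, data, video]
```

Note how the demultiplexer relies purely on slot position, which is why transmitter and receiver must stay frame-synchronized; statistical TDM instead tags each slot with its stream, trading overhead for the ability to skip empty slots.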
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA in new pavement can minimize the carbon footprint, conserve natural resources, reduce harmful emissions, and lower life-cycle costs. Compared with natural aggregate (NA), however, RCA pavement has received fewer comprehensive studies and sustainability assessments.
Understanding Inductive Bias in Machine Learning (SUTEJAS)
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Introduction - e-waste - definition - sources of e-waste - hazardous substances in e-waste - effects of e-waste on environment and human health - need for e-waste management - e-waste handling rules - waste minimization techniques for managing e-waste - recycling of e-waste - disposal and treatment methods of e-waste - mechanism of extraction of precious metals from leaching solution - global scenario of e-waste - e-waste in India - case studies.
ACEP Magazine edition 4th launched on 05.06.2024 (Rahul)
This document provides information about the third edition of the magazine "Sthapatya", published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on lifetime achievement awards given by ACEP, and a technical article on concrete maintenance, repair, and strengthening. The document highlights ACEP's activities and provides a technical educational article for members.
A review on techniques and modelling methodologies used for checking electrom... (nooriasukmaningtyas)
The proper functioning of the integrated circuit (IC) in an inhibiting electromagnetic environment has been a serious concern throughout the decades of revolution in the world of electronics, from discrete devices to today's integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry, and smart vehicles in particular, confronts design issues such as proneness to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI, and sensors give misleading values, which can prove fatal in the case of automotives. In this paper, the authors present a non-exhaustive review of research work on the investigation of EMI in ICs and the prediction of this EMI using various modelling methodologies and measurement setups.
A novel cost-based replica server placement for optimal service quality in cloud-based content delivery network
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 13, No. 5, October 2023, pp. 5588~5598
ISSN: 2088-8708, DOI: 10.11591/ijece.v13i5.pp5588-5598
Journal homepage: http://ijece.iaescore.com
A novel cost-based replica server placement for optimal service quality in cloud-based content delivery network
Priyanka Dharmapal¹, Channakrishnaraju², Chethan Bommalingaiahanapalya Krishnamurthy³
¹Department of Computer Science and Engineering, Sri Siddhartha Academy of Higher Education University, Karnataka, India
²Department of Computer Science and Engineering, Sri Siddhartha Institute of Technology, Karnataka, India
³Department of Artificial Intelligence and Machine Learning, Vidyavardhaka College of Engineering, Visveswaraiah Technological University, Karnataka, India
Article Info

Article history: Received Dec 10, 2022; Revised Apr 4, 2023; Accepted Apr 7, 2023

ABSTRACT
Replica server placement is one of the crucial concerns in the geographic-diversity placement problems of a content delivery network (CDN). A review of the existing literature shows that most studies address the placement problem in conventional CDNs rather than in cloud-based CDN architectures, and the few studies reported on replica selection are still in a nascent stage of development. Moreover, such models are not benchmarked or practically assessed to prove their effectiveness. Hence, the proposed study introduces a novel computational framework for cloud-based CDNs that facilitates cost-effective replica server management for enhanced service delivery. Implemented using an analytical research methodology, the simulated study outcome shows that the proposed scheme offers reduced cost, reduced resource dependencies, reduced latency, and faster processing time in contrast to existing models of replica server placement.
Keywords: Cloud; Content delivery network; Cost; Replica server; Resource
This is an open access article under the CC BY-SA license.
Corresponding Author:
Priyanka Dharmapal
Research Scholar, Department of Computer Science and Engineering, Sri Siddhartha Academy of Higher Education University, Tumkur, Karnataka-572017, India
Email: prsh2019@gmail.com
1. INTRODUCTION
With the rising demand for data and service availability, there is a need for an effective networking technology that offers seamless services in a dedicated manner irrespective of traffic conditions. A content delivery network (CDN) is one such technology: it delivers content efficiently and quickly using surrogate servers and content placement [1]-[5]. Although there are several reported advantages of adopting a CDN system to improve the rate of data delivery, it is characterized by various challenging conditions too [6]. The primary challenge in deploying a CDN is that it involves a large sum of money and a highly sophisticated deployment process [7]. On the basis of multiple studies, it has been noted that the enabling technologies of present-day CDNs increasingly make use of cloud computing [8]. This adoption offers the beneficial characteristics of service provisioning, resource allocation, pay-per-use billing, and hosting service support [9].
It should be noted that in the complete operation of a CDN, the role of the replica server is highly important: it is responsible for replicating the original content to the content replica servers. Despite the various approaches to content placement algorithms, there is still a large trade-off between the demands of the users and the present operating state of the replica server. The primary issue in the placement of replica servers is an optimization problem,
which could be either constrained or unconstrained. The deployment cost of the replica server, as well as
its associated delivery and update costs, is not much emphasized in existing systems with respect to
network topology, network performance metrics, latency, and hop count. Apart from this, various
theoretical models in existing studies discuss the facility-location problem, in both capacitated and
uncapacitated variants, with server capacity as a constraint.
Existing studies have also used K-median, minimum k-center, and k-cache location formulations for
localizing replica servers [10]. A thorough review of existing schemes also shows that the majority of
replica server placement approaches are carried out on conventional CDNs rather than
cloud-based CDNs. The prime difference between these two strategies is that a conventional CDN is
somewhat centralized, while a cloud-based CDN is highly distributed yet can still be centralized to formulate a
large network chain. This means various inevitable challenges are encountered when a
conventional CDN is exposed to heavier traffic conditions as well as to the uncertain behavior of the
various services and applications running over it. In this entire scenario, it is also noted that cloud-based CDN
systems place little emphasis on harnessing the virtualized environment to increase the coverage of
replica servers. Although it is nearly impossible to position content servers in multiple geographical places,
the cost can be significantly minimized when the existing virtualized environment (or machines)
is classified into local and global forms in order to offer connectivity at both smaller and larger scales. Hence,
it is essential to carry out an investigation in this direction to understand the impact of using local
virtual machines as well as proxies to facilitate better networking in cloud-based CDN
systems. Further, it is also essential to understand the impact of spatial attributes on the placement of the
replica server in cloud-based CDNs and their possible impact on large-scale networks. To explore
these possibilities, the proposed study also carries out a deeper investigation of the current literature, where it is
found that studies focus more on content placement and less on replica server placement within
the virtualized environment.
The relevant literature is discussed as follows: our prior review discussed different
approaches and techniques required for improving the performance of CDNs [11]. At present, there are
various further attempts that directly or indirectly contribute towards leveraging CDN performance.
Al-Abbasi et al. [12] presented a probability-based model for controlling
stalling events during streaming services in CDNs. The model is based on restricted cache spaces
along with CDN allocation for better streaming delivery. Further,
Chuan et al. [13] presented an optimized model for content placement considering belief propagation
over caches in a distributed network. The complete model is developed in a sequential flow where
content-helper selection is carried out, followed by caching the contents and delivering them to the
proper destination. Based on communication-channel states and popularity, the modelling is carried
out iteratively, emphasizing control of energy consumption in wireless networks. Fan et al. [14]
presented an allocation model for replica servers that is claimed to be
more energy efficient as well as reliable. The model supports concurrent task processing over the
main server mapped to a dedicated storage point. Guerrero et al. [15]
developed a data replication model with the core intention of leveraging data availability over a
sophisticated weighted network along with centrality factors. The model also evaluates a graph-partitioning
attribute while the data replicas are stored in fog devices. According to this model, one unit
stores fog information and a single data replica, while the other stores all file replicas using a greedy
approach.
Yovita and Syambas [16] discussed the importance of the caching mechanism,
which is highly essential for large distributed networks. According to their findings,
there are still unsolved problems in caching. Kusuma et al. [17] emphasized
vertex markers in order to discuss possible amendments to grid methods.
Liu et al. [18] presented a unique replica placement scheme that uses the existing Hadoop software
framework to address the problem of large-scale voluminous data aggregated during data exchange.
The study considers three-dimensional raster data to better determine the location for
replica server placement, and the outcomes show efficient performance with reduced
network overhead. Machine learning has also been adopted in existing schemes to solve the
replica selection problem. Such work is reported by Mostafa et al. [19], where a large-scale
environment is considered by deploying an artificial neural network (ANN) to profile the behavior of the
locations involved in the process. The outcome exhibits satisfactory predictive accuracy with sufficient
optimization of channel capacity. A study on cost estimation was carried out by Nazir et al. [20] while
performing job scheduling over grid systems. The work introduces a unique dynamic scheduling policy
centered on replica placement. The prime intention of this model is to reduce the cost of replica
placement by scheduling data according to the computing capacity of the nodes in order to determine the tasks to
be processed. A case study of a telco CDN is reported in the work of Safavi et al. [21], where an online learning
3. ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 13, No. 5, October 2023: 5588-5598
5590
scheme for replica placement is discussed considering the popularity of the contents. The spatial
patterns associated with content requests are computed in order to perform predictive
modelling on the basis of content popularity. As discussed in various work models, content
caching is one of the essential operations for leveraging CDN performance. A study in this direction was
carried out by Saino et al. [22], focusing on both conventional and futuristic forms of CDN, where the authors
introduce a unique content placement with a higher degree of flexibility
in the placement operation. The outcome is shown to offer improved load-balancing performance
while being less sensitive to any form of traffic fluctuation.
Content placement has also been studied over 5G network services, as noted in the work of
Santos et al. [23], where edge sites are targeted for placing the virtualized CDN environment. The core
objective of the model is to split heavier multimedia files and then localize the split
files over dynamic, virtualized resources. A nearly similar approach is taken by Shankar
and Chitra [24] using an integrated machine learning process. According to this model, latency is addressed
by distributed data storage followed by learning the placement coordinates using a support vector machine
(SVM) and radial basis function (RBF). This learning scheme can classify data centers and
predict possible traffic load using either cloud or edge resources. Further, the optimization is
carried out using dynamic graph partitioning in order to cater to the latency demands at
reduced placement cost. Shao et al. [25] investigated various processes of
replica selection, followed by placement techniques in edge as well as
internet of things (IoT) environments. Their review stated the importance of data provenance for
improving accuracy. The authors also stated that the ongoing challenge of replica placement is quite
complex to deal with, especially in the presence of a dynamic CDN environment. Teng et al. [26] presented a
content placement scheme that addresses the problem of delay minimization by harnessing biconvexity.
Tran et al. [27] presented a framework using software-defined
networking (SDN) on CDNs, which primarily emphasizes server selection followed by constructing an intelligent
controller system primarily meant to resist all possible overloading conditions in the SDN.
Xiong et al. [28] presented a replication management strategy to mitigate access
delay for data with spatial and temporal characteristics on the basis of a caching system.
According to the implementation model, the maximized popularity score is mined using a correlational factor
associated with user access, adopting the beneficial aspect of location. It also uses all connected files
obtained from user-access information. This process finally yields replicas, followed by selection of
cache nodes in order to fulfil the purpose of placement. Xu et al. [29] presented
an optimization framework using mixed integer linear programming, where the
complete model starts with optimal replica server selection, followed by decisions on
caching the content items within the replica servers. Finally, the various servers are allocated the loads
of content requests, followed by construction of a heuristic method. Another unique work is presented by
Yu and Pan [30], addressing the problems of the conventional hashing method for distributed data
storage in Hadoop. According to this study model, a hypergraph-based methodology for solving
the data placement problem is discussed, one that cumulatively takes into account various performance metrics
connected with the location information of the target data. The model uses this method
to partition the selected dataset items and relocate them to different locations in the distributed storage units. The
study is also claimed to be capable of detecting the presence of redundant data.
After reviewing the above-mentioned studies, the conclusive problem statement is as follows: it
is computationally challenging to develop a generalized scheme of replica node placement over the
distributed environment of a cloud-based CDN. The prime reasons behind this statement are: i) the majority of
the existing studies emphasize quality of service (QoS) attributes, which are essential;
however, consideration of the user's experience is equally important and is not much emphasized; ii)
existing approaches deploy mechanisms considering varied forms of files; however, such schemes are
not very effective when deployed under non-uniform distances between the content server and the
replica server; and iii) further, utilization of the virtualized environment is rarely seen in existing schemes.
The proposed solution presented in this paper discusses a unique replica server placement that
harnesses the data availability potential of local virtualized machines and proxies connected to a centralized
content server, which is further maintained in a distributed form. The core idea of this work is to
offer consistent user experience quality as well as service quality, in the presence of dynamic traffic conditions, in a
cost-effective manner.
The value additions of the proposed scheme are: i) the proposed scheme presents a cost-based optimal
replica server placement strategy that, unlike existing schemes, equally contributes towards task allocation;
ii) adoption of a non-iterative flow-processing scheme considering the proximity importance attribute
among all the node connections in a large network; and iii) faster processing time for queries
irrespective of any form of traffic. The next section discusses the adopted method.
The organization of the manuscript is as follows: section 1 presents the introduction,
section 2 discusses the methodology, section 3 discusses the accomplished results,
and section 4 concludes the paper.
2. METHOD
The prime aim of this stage of the study is to develop a novel model that can provision
resources to ensure better replica server placement in a dynamic, distributed cloud-based CDN system.
Adopting an analytical research methodology, the proposed system initiates its design using a communication
model considering a multi-cloud system, as shown in Figure 1. To simplify the topology design, the
proposed system implements a highly clustered tree structure. According to this tree structure, each
user is served by a dedicated cloud system using a surrogate server, further incorporated with
operations associated with virtual machines, storage, and bandwidth. The next part of the implementation
develops a cloud-based CDN resource management system, further classified into
a planning stage and a user-reallocation stage. The proposed system uses a cost model to assess the
cost incurred for resource allocation in the present scenario. In the user-reallocation stage, three sets of
operations are carried out for resource provisioning, formulating practical constraints and
achieving the highest optimality in replica server placement. Finally, an objective function is designed
that balances the resource provisioning demands with the dynamic content delivery system for effective
replica server placement in the cloud-based CDN system.
This part of the implementation is anticipated to achieve optimal performance in the cloud-based CDN.
A test environment is constructed to assess the influence of various network attributes, e.g., channel
capacity, cost, and uncertain traffic situations, on the model. The model is also anticipated to exhibit lower
computational complexity. The next subsections describe the system design and algorithm implementation.
[Figure 1 depicts the communication model (a clustered tree structure with surrogate server, virtual machine, storage, and bandwidth) feeding the cloud-based CDN resource management block, which is split into a planning stage and a user-reallocation stage (local/global, optimal) leading to the objective function.]
Figure 1. Architecture of the 2nd module of implementation
2.1. System design
The core system design of the proposed scheme is based on the location and proximity of the replica
server and the proxy cache, in order to offer better service availability via cloud networks. According to
this scheme, a centralized content server (CCS) is positioned at uniform proximity to
multiple local virtual machines (LVMs), which retain information about the replica server and proxy cache.
Figure 2 highlights the placement strategy of the replica server, where the ith of multiple
observation zones 𝑂𝑧 exists, i.e., 𝑂𝑧𝑖 = [𝑂𝑧1, 𝑂𝑧2, … 𝑂𝑧𝑖], in which the placement of the LVMs is carried out. This
means each 𝑂𝑧 consists of a specific number of LVMs (depending upon the geographic spread of 𝑂𝑧).
Mathematically, it can be expressed as (1).
𝐿𝑉𝑀𝑖 = [𝑃𝑖, 𝑅𝑖]𝛼𝛽 (1)
Equation (1) shows that the ith LVM is positioned in 𝑂𝑧𝑖, and each LVMi consists of proxy cache data Pi
and replica server Ri, considering 𝛼 as the total proxy servers and 𝛽 as the specific bandwidth associated with the replica
server. Each LVM is connected to the others by a one-to-many relation in a unique fashion. To
simplify this process, the proposed scheme allocates a link coefficient 𝛾 to each of the possible connections
from each LVM, as shown in (2).
𝛾 = ∑_{𝑗=1}^{𝑘} 𝐿𝑉𝑀𝑗 ⁄ 𝑙 (2)
In the mathematical expression (2), the variables 𝑗 and 𝑙 represent the specific LVM index and the
remaining number of individual LVMs to be connected with the current LVM, respectively. It should be noted that this is only
possible when the proposed scheme is designed using a tree structure. The contribution of the link coefficient is
that it assists in formulating LVM connectivity with respect to the reduced cost of data transmission
over the assigned tree structure. One of the contributions of the proposed scheme is its placement strategy,
which is circular in orientation and targets uniform performance of the data delivery services
from the CCS. It should be noted, however, that multiple CCSs join the tree structure to form a highly
distributed yet well-connected network targeting persistent quality of service delivery. Another
contribution of this scheme is uniform bandwidth utilization. This means each replica server node
in the tree structure is characterized by a unique orientation in the form of in-degree and out-degree
links, where each node bears CDN service-provider information discretely defined for each
direction in a distributed manner. A further strength is its capability to identify the presence of any
redundant data that may reside in the proxy cache.
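The link coefficient in (2) can be sketched as a short computation. The following is a hypothetical Python rendering (the paper's implementation is in MATLAB), where the list of LVM scores and the count of remaining LVMs to connect are assumed example inputs.

```python
# Hypothetical sketch of the link coefficient in Eq. (2):
# gamma = (sum of LVM_j for j = 1..k) / l, where l is the number of
# remaining individual LVMs still to be connected to the current LVM.
def link_coefficient(lvm_scores, remaining):
    if remaining <= 0:
        raise ValueError("no remaining LVMs to connect")
    return sum(lvm_scores) / remaining

# Example with k = 4 LVM scores and l = 3 LVMs left to connect
gamma = link_coefficient([2.0, 1.5, 3.0, 1.5], remaining=3)
```

Under these assumed inputs, gamma evaluates to 8/3; the coefficient is recomputed per LVM as the tree is extended.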
Figure 2. Scheme of replica server placement
2.2. Algorithm implementation
The proposed system introduces a very simple yet novel algorithm that uses cost estimation
in order to perform replica server placement. It is to be noted that while performing the placement,
more importance is given to the LVM, as it is the bridge between the replica server and the
actual content server connected in a distributed manner. The steps of the proposed algorithm are:
Algorithm 1. For cost-based replica server placement
Input: no, ne, LVM, j, S
Output: Φ
Start
1. init net[no, ne(LVMj), S]
2. G→(no, ne)S
3. θ=μ(G)
4. formulate matrix G=[θ, IL, OL]
5. Extract new tree=ω(G)
6. Lt=(G, cond(prob))
7. Φ=ρ(θ)
End
Algorithm 1 takes as input the origin node no, end node ne, local virtual machines LVM, number
of LVMs j, and the service S hosted by the CDN, and yields the resultant cost Φ. The preliminary
task of the proposed algorithm is to initialize a specific number (i.e., 𝑗) of LVMs, considering the
structure of their placement and connectivity with the origin and end nodes (Line 1). The next part of the
implementation constructs a graphical tree 𝐺 mapping all the nodes and links
into a network topology in distributed, large-scale order (Line 2). It is to be noted that the proposed
scheme constructs a distributed replica server placement by connecting the LVMs to the
centrally located CCS (Line 3). The central location is computed by obtaining the score
of each LVM, divided by the least available quantity of nodes that carry the essential
edge information linking to the respective LVM. The outcome of this operation gives the
importance attribute score θ via a specific function μ performing the above task with input
argument 𝐺 (Line 3). After computing the importance attribute, the algorithm formulates a two-dimensional
matrix G that retains the importance attribute 𝜃, incoming links 𝐼𝐿, and
outgoing links 𝑂𝐿 (Line 4). This mechanism is deployed in order to construct distributed network links
connecting all the LVMs in different locations. This assessment is performed to check the
consistency of the proposed algorithm in acquiring reduced delay across the multiple positions of the
LVMs associated with the replica server in a specific 𝑂𝑧. After this task is accomplished, the algorithm deploys
another function ω that constructs a digraph tree structure G over the existing tree (Line 5). The system then
generates a local tree Lt from the newly constructed tree G obtained in the prior step, followed by selection
of the LVM under an embedded statistical condition cond (Line 6). According to this condition cond, the
tree Lt is constructed only when its probability value prob is found to be statistically significant. In
the last step, an explicit function 𝜌 performs the final estimation of the cost 𝛷 associated with the
placement of the replica server at the time of data transmission (Line 7). During this mechanism, the algorithm
considers the importance attribute 𝜃 of the LVMs as well as their location-specific data.
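The flow of Algorithm 1 can be sketched in code. The Python sketch below is illustrative only: the paper does not specify the internals of μ, ω, or ρ, so the bodies here (degree-based importance, re-rooting the tree at the most important node, summing scores for the cost) are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of Algorithm 1 over an adjacency-list tree G.
def mu(adj):
    """theta = mu(G): importance score per node (assumed: node degree)."""
    return {n: len(nbrs) for n, nbrs in adj.items()}

def omega(adj, theta):
    """Rebuild a directed tree rooted at the most important node."""
    root = max(theta, key=theta.get)
    tree, seen, stack = {}, {root}, [root]
    while stack:
        n = stack.pop()
        tree[n] = [m for m in adj[n] if m not in seen]
        seen.update(tree[n])
        stack.extend(tree[n])
    return tree

def rho(theta):
    """Phi = rho(theta): aggregate placement cost (assumed: score sum)."""
    return sum(theta.values())

# Example topology: a CCS linked to three LVMs in a tree
adj = {"CCS": ["LVM1", "LVM2"], "LVM1": ["CCS", "LVM3"],
       "LVM2": ["CCS"], "LVM3": ["LVM1"]}
theta = mu(adj)          # Line 3: importance attribute
tree = omega(adj, theta) # Line 5: new digraph tree
phi = rho(theta)         # Line 7: final cost estimate
```

The local-tree selection under the statistical condition cond (Line 6) is omitted, since the paper does not define the significance test used.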
The complete operation of the cost function 𝜌 is as follows:
a) The cost function 𝜌 constructs a two-dimensional (d1×d2) matrix where each element
represents a cost attribute, with d1 being the number of LVMs and d2 storing all the
discrete jobs to be processed. The matrix is constructed to retain
the least number of d1 rows and d2 columns, where the bandwidth allocation 𝛽 is defined as
𝛽 = 𝑚𝑖𝑛𝑖𝑚𝑢𝑚(𝑑1, 𝑑2).
b) The consecutive process is to find the least element in each of the d1 rows and
subtract it from the remaining elements residing in that row.
c) All the elements in the matrix are then checked for zero values. The system
then flags each zero element found in the matrix, leaving the non-zero elements unflagged. A similar process is
repeated for all the elements residing within the matrix.
d) The algorithm encapsulates all the flagged zero elements residing within the matrix in
columnar position. If the system finds βtot columns containing flagged zero
elements, this represents a complete bandwidth allocation in unique form. This is the usual
termination scenario of the algorithm; otherwise it resumes with the next level of
processing.
e) The system identifies all the non-encapsulated zero elements and primes them. If there
is no flagged zero element in the corresponding row of the matrix, the function proceeds to the
next step; otherwise it encapsulates that specific row while leaving un-encapsulated
any column containing elements flagged with zero. This process is continued until the function
has encapsulated all the zero elements, after which it saves the minimal value among the
non-encapsulated elements of the matrix.
f) The function then treats all the primed elements as zeros, together with the elements flagged with zero.
g) The outcome of this process is the minimal value yielded in the 5th step, i.e., step e). This value is
added to all the elements maintained in encapsulated rows and subtracted from all the
non-encapsulated elements in columnar form. The function then repeats the operation highlighted in the 5th
step above without any need to change flags or primes.
h) Therefore, the function yields a set of scores to be assigned to the nodes, taking into
account the locations of all zero-valued and flagged elements over the defined cost matrix. This
means that if an element of the d1×d2 matrix is flagged as zero, the score
associated with its d1 row is assigned to the score associated with its d2 column.
Therefore, the resultant outcome of the last function ρ is considered an optimal result with the
significantly least cost of placing the replica server. The next section discusses the accomplished
outcome.
3. RESULTS AND DISCUSSION
From the prior section, it was noted that the proposed scheme introduces a novel mechanism for
replica server placement in order to ascertain better delivery performance in a cloud-based distributed CDN
system. Hence, it is essential to chalk out a definitive assessment strategy to ensure better data
transmission performance with respect to a variable CDN test environment. The algorithm of the proposed
system is scripted in MATLAB, emphasizing the tree-structure deployment, where the cost
of replica server placement is the prime factor in the result analysis. Table 1 lists the simulation
parameters, while Table 2 presents the evaluated cost outcomes linked with individual LVMs residing over multiple
orientations considering the central location of the CCS. The assessment of the proposed scheme is carried out by
constructing a bipartite graph.
Table 1. Simulation parameters
Simulation parameter Values assigned
Total deployment areas of LVM 9
Total replica server 10
Bandwidth assigned for replica server 5,000 Mbps
Total proxy (cache) 25
Bandwidth assigned for proxy 3,000 Mbps
Deploying the simulation parameters and respective values exhibited in Table 1, the proposed
system carries out cost estimation for individual LVMs in 9 different circular positions with the CCS at the
center. The complete analysis is carried out over variable-size random data packets of 2,500 bytes over a
simulation area of 1,000×1,000 m², considering the presence of 500 nodes. The assessment is carried out
considering a randomly selected origin node and end node. The individual cost outcomes at the 9 different
locations, assessed over 50 iterations, are shown in Table 2.
Table 2. Accomplished cost (𝛷)
Origin Estimated cost for nodal location of 9 LVMs
1 2 3 4 5 6 7 8 9
LVM2 71 73 28 65 23 93 5 25 56
LVM4 65 98 14 97 68 1 19 93 44
LVM1 4 54 29 25 85 47 73 28 65
LVM3 8 34 45 69 35 43 48 78 66
LVM2→LVM1 33 12 54 30 79 47 16 20 69
LVM2→LVM3 54 62 47 68 69 78 35 30 65
LVM4→LVM1 66 79 89 71 2 33 62 10 96
LVM4→LVM3 42 43 53 8 61 79 20 59 22
CCS 83 10 95 26 40 48 75 69 72
In order to assess the sustainability of the network, the analysis is carried out by further increasing
the packet size to 3,000 bytes, with observations carried over 500 iterations in order to arrive at the
final averaged estimated cost of replica server placement, as exhibited in Table 3. Comparing the
cost trends in Tables 2 and 3, it can be seen that the cost values are significantly reduced over a
period of time with the inclusion of more traffic and more iterations, compared to less
traffic in the CDN. The prime reason behind this is that the topology becomes more branched with increasing traffic, while
resource allocation is also carried out distributively, splitting the computational burden of each LVM
using the tree structure. Hence, adoption of the tree structure can be seen as one contributory point towards
minimizing the cost of replica server placement in a cloud-based CDN system.
Table 3. Estimated cost
Location LVM No 𝛷
LVM2 7 4
LVM4 6 0
LVM1 3 28
LVM3 1 7
LVM2→LVM1 2 11
LVM2→LVM3 8 29
LVM4→LVM1 5 1
LVM4→LVM3 9 21
CCS 4 25
Figure 3 highlights the outcome for the estimated cost of replica server placement,
where a gain of approximately 45.2% is accomplished in contrast to the existing QoS-aware [31] and
consistency-aware [31] algorithms frequently adopted for replica server placement. The
existing QoS-aware approach carries out pre-defined computation, where the system
ascertains its network topology in order to reach a known target QoS via the LVM for processing the generated
query. This static characteristic of the QoS-aware approach therefore exhibits discrepancies when
exposed to increasing traffic, which further increases the cost of replica server placement allocation. The
consistency-aware algorithm offers slightly more efficient performance than the QoS-aware
approach.
Figure 3. Comparative evaluation of cost of replica server placement
However, it was noted that such an approach increasingly demands more computational effort
to find an optimal condition, which the nature of the consistency-aware algorithm does not permit. It has to
settle for the defined network attributes and hence fails to address dynamic traffic demands, incurring
extra cost for replica server placement. On the other hand, the proposed scheme offers a better
capability to reduce operational cost in the presence of increasing traffic, as it can handle multiple tasks
using its symmetrical placement of distributed CCSs aligned with multiple LVMs. Hence,
nearly uniform computational effort is used without introducing any staticness into the
topology-building process. Further, the computed cost is updated with every
topological alteration, which directly assists in allocating the cost for other communicating nodes. Hence, the
proposed scheme offers reduced cost for replica server placement in a cloud CDN environment.
Figure 4 showcases approximately 37.4% improvement in resource allocation compared to
existing schemes. The prime reason behind this is the operation carried out by the proposed scheme on each
incoming and outgoing traffic flow, maintaining an equal balance between normal and priority tasks for
every hosted service analyzed. This also contributes to the reduction of latency as well as
processing time, as exhibited in Figures 5 and 6. The reduction in latency is approximately 67.5%,
while the reduction in processing time is approximately 37.2%. The conclusive remark on these outcomes is
that the existing systems show nearly similar patterns while emphasizing
the problem space of local networks, whereas the proposed scheme starts from a large
interconnected distributed network where each link is characterized by a weight and the nodes are allocated
cost factors. This assists in better decision making while performing data delivery in a
cloud-based CDN.
Figure 4. Comparative evaluation of probability of resource allocation
Figure 5. Comparative evaluation of latency incurred
Figure 6. Comparative evaluation of processing time
4. CONCLUSION
The prime contribution of the proposed study is a unique model for cost-effective
positioning of replica servers in the use case of a cloud-based CDN system. The contributions
of the proposed study are: i) a unique mechanism of computed-cost-based construction of the
network record structure that reduces the dependency on a large number of replica servers, indirectly
reducing the cost of data transmission; ii) the proposed scheme implements a unique concept of local virtual
machines to replace the conventional content server for cost reduction; this adaptation increases
virtualization effectiveness, creating a large chain of highly indexed structure resulting in higher availability
of data and services; iii) the proposed scheme of replica server placement offers equal importance to user
experience and service quality; and iv) the study outcome shows that the proposed system offers much better
performance than existing schemes.
REFERENCES
[1] I. Kilanioti et al., “Towards efficient and scalable data-intensive content delivery: state-of-the-art, issues and challenges,” in
Lecture Notes in Computer Science, Springer International Publishing, 2019, pp. 88–137.
[2] B. Zolfaghari et al., “Content delivery networks,” ACM Computing Surveys, vol. 53, no. 2, pp. 1–34, Mar. 2021,
doi: 10.1145/3380613.
[3] T. Bilen and B. Canberk, “Deliver the content over multiple surrogates: A request routing model for high bandwidth requests,”
Computer Communications, vol. 146, pp. 39–47, Oct. 2019, doi: 10.1016/j.comcom.2019.07.009.
[4] S. Ul Islam et al., “Leveraging utilization as performance metric for CDN enabled energy efficient internet of things,”
Measurement, vol. 147, Dec. 2019, doi: 10.1016/j.measurement.2019.07.042.
[5] G. Eslami and A. T. Haghighat, “A new surrogate placement algorithm for cloud-based content delivery networks,” The Journal
of Supercomputing, vol. 73, no. 12, pp. 5310–5331, Dec. 2017, doi: 10.1007/s11227-017-2088-5.
[6] Q. Fan et al., “Video delivery networks: Challenges, solutions and future directions,” Computers and Electrical Engineering,
vol. 66, pp. 332–341, Feb. 2018, doi: 10.1016/j.compeleceng.2017.04.011.
[7] F. Ayankoya, O. Otushile, and B. Ohwo, “A review on content delivery networks and emerging paradigms,” International
Journal of Scientific and Engineering Research, vol. 9, no. 12, pp. 211–217, 2018.
[8] L. Yala, “Content delivery networks as a service (CDNaaS),” Universite Rennes 1, 2018.
[9] M. A. Salahuddin, J. Sahoo, R. Glitho, H. Elbiaze, and W. Ajib, “A survey on content placement algorithms for cloud-based
content delivery networks,” IEEE Access, vol. 6, pp. 91–114, 2018, doi: 10.1109/ACCESS.2017.2754419.
[10] D. Priyanka, Channakrishnaraju, and B. K. Chethan, “Insights on effectiveness towards research approaches deployed in content
delivery network,” in Software Engineering Perspectives in Systems, Springer International Publishing, 2022, pp. 224–243.
[11] V. Cohen-Addad, A. Gupta, L. Hu, H. Oh, and D. Saulpic, “An improved local search algorithm for k-median,” in Proceedings
of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Philadelphia, PA: Society for Industrial and Applied
Mathematics, 2022, pp. 1556–1612.
[12] A. O. Al-Abbasi, V. Aggarwal, T. Lan, Y. Xiang, M.-R. Ra, and Y.-F. Chen, “FastTrack: minimizing stalls for CDN-based over-
the-top video streaming systems,” IEEE Transactions on Cloud Computing, vol. 9, no. 4, pp. 1453–1466, Oct. 2021,
doi: 10.1109/TCC.2019.2920979.
[13] J. Chuan, B. Bai, X. Wu, and H. Zhang, “Optimizing content placement and delivery in wireless distributed cache systems
through belief propagation,” IEEE Access, vol. 8, pp. 100684–100701, 2020, doi: 10.1109/ACCESS.2020.2996222.
[14] Y. Fan, C. Wang, W. Wu, T. Znati, and D. Du, “Slow replica and shared protection: energy-efficient and reliable task assignment
in cloud data centers,” IEEE Transactions on Reliability, vol. 70, no. 3, pp. 931–943, Sep. 2021, doi: 10.1109/TR.2019.2923770.
[15] C. Guerrero, I. Lera, and C. Juiz, “Optimization policy for file replica placement in fog domains,” Concurrency and Computation:
Practice and Experience, vol. 32, no. 21, Nov. 2020, doi: 10.1002/cpe.5343.
[16] L. V. Yovita and N. R. Syambas, “Caching on named data network: a survey and future research,” International Journal of
Electrical and Computer Engineering (IJECE), vol. 8, no. 6, pp. 4456–4466, Dec. 2018, doi: 10.11591/ijece.v8i6.pp4456-4466.
[17] W. T. Kusuma, A. A. Supianto, and H. Tolle, “Vertex markers: Modification of grid methods as markers to reproduce large size
augmented reality objects to afford hands,” International Journal of Electrical and Computer Engineering (IJECE), vol. 10, no. 1,
pp. 1063–1069, Feb. 2020, doi: 10.11591/ijece.v10i1.pp1063-1069.
[18] Z. Liu, W. Hua, X. Liu, D. Liang, Y. Zhao, and M. Shi, “An efficient group-based replica placement policy for large-scale
geospatial 3D raster data on Hadoop,” Sensors, vol. 21, no. 23, Dec. 2021, doi: 10.3390/s21238132.
[19] N. Mostafa, W. H. F. Aly, S. Alabed, and Z. Al-Arnaout, “Intelligent replica selection in edge and IoT environments using
artificial neural networks,” Electronics, vol. 11, no. 16, Aug. 2022, doi: 10.3390/electronics11162531.
[20] B. Nazir, F. Ishaq, S. Shamshirband, and A. Chronopoulos, “The impact of the implementation cost of replication in data grid job
scheduling,” Mathematical and Computational Applications, vol. 23, no. 2, May 2018, doi: 10.3390/mca23020028.
[21] M. Safavi, S. Bastani, and B. Landfeldt, “Online learning and placement algorithms for efficient delivery of user generated
contents in telco-CDNs,” IEEE Transactions on Network and Service Management, vol. 17, no. 1, pp. 637–651, Mar. 2020,
doi: 10.1109/TNSM.2019.2961560.
[22] L. Saino, I. Psaras, and G. Pavlou, “Framework and algorithms for operator-managed content caching,” IEEE Transactions on
Network and Service Management, vol. 17, no. 1, pp. 562–576, Mar. 2020, doi: 10.1109/TNSM.2019.2956525.
[23] D. Santos, R. Silva, D. Corujo, R. L. Aguiar, and B. Parreira, “Follow the user: a framework for dynamically placing content
using 5G-enablers,” IEEE Access, vol. 9, pp. 14688–14709, 2021, doi: 10.1109/ACCESS.2021.3051570.
[24] B. P. Shankar and S. Chitra, “Optimal data placement and replication approach for SIoT with edge,” Computer Systems Science
and Engineering, vol. 41, no. 2, pp. 661–676, 2022, doi: 10.32604/csse.2022.019507.
[25] Z.-L. Shao, C. Huang, and H. Li, “Replica selection and placement techniques on the IoT and edge computing: a deep study,”
Wireless Networks, vol. 27, no. 7, pp. 5039–5055, Oct. 2021, doi: 10.1007/s11276-021-02793-x.
[26] W. Teng, M. Sheng, K. Guo, and Z. Qiu, “Content placement and user association for delay minimization in small cell networks,”
IEEE Transactions on Vehicular Technology, vol. 68, no. 10, pp. 10201–10215, Oct. 2019, doi: 10.1109/TVT.2019.2936182.
[27] H.-A. Tran, S. Souihi, D. Tran, and A. Mellouk, “MABRESE: a new server selection method for smart SDN-based CDN
architecture,” IEEE Communications Letters, vol. 23, no. 6, pp. 1012–1015, Jun. 2019, doi: 10.1109/LCOMM.2019.2907948.
[28] L. Xiong, L. Yang, Y. Tao, J. Xu, and L. Zhao, “Replication strategy for spatiotemporal data based on distributed caching
system,” Sensors, vol. 18, no. 1, Jan. 2018, doi: 10.3390/s18010222.
[29] K. Xu, X. Li, S. K. Bose, and G. Shen, “Joint replica server placement, content caching, and request load assignment in content
delivery networks,” IEEE Access, vol. 6, pp. 17968–17981, 2018, doi: 10.1109/ACCESS.2018.2817646.
[30] B. Yu and J. Pan, “A framework of hypergraph-based data placement among geo-distributed datacenters,” IEEE Transactions on
Services Computing, vol. 13, no. 3, pp. 395–409, May 2020, doi: 10.1109/TSC.2017.2712773.
[31] J. Sahoo, M. A. Salahuddin, R. Glitho, H. Elbiaze, and W. Ajib, “A survey on replica server placement algorithms for content
delivery networks,” IEEE Communications Surveys & Tutorials, vol. 19, no. 2, pp. 1002–1026, 2017,
doi: 10.1109/COMST.2016.2626384.
Int J Elec & Comp Eng, Vol. 13, No. 5, October 2023: 5588-5598
BIOGRAPHIES OF AUTHORS
Priyanka Dharmapal has 10 years of teaching experience in UG and PG courses in Computer Science and Engineering and is presently working as an Assistant Professor in the Department of Computer Science and Engineering at Sri Siddhartha Institute of Technology, Tumkur. She obtained her B.E. degree from Visveswaraiah Technological University in 2009 and her PG degree in Computer Science and Engineering from the same university in 2012. Her research interests are in the areas of content delivery networks, network security, mobile computing, and cloud computing. She is currently pursuing a doctoral degree at Sri Siddhartha Academy of Higher Education, Tumkur, under the guidance of Dr. Channakrishnaraju, Professor, Department of Computer Science and Engineering, Sri Siddhartha Institute of Technology, Tumkur. She can be contacted at email: prsh2019@gmail.com.
Channakrishnaraju has 26 years of teaching experience in UG and PG courses in Computer Science and Engineering and is presently working as a Professor in the Department of Computer Science and Engineering at Sri Siddhartha Institute of Technology, Tumkur. He obtained his B.E. degree from Bangalore University in 1995, his PG degree in software systems from BITS Pilani in 2000, and his doctoral degree from Sri Siddhartha Academy of Higher Education, Tumkur. His research interests are in the areas of wireless sensor networks, network security, and artificial intelligence. He can be contacted at email: rajuck@ssit.edu.in.
Chethan Bommalingaiahanapalya Krishnamurthy is working as an Associate Professor in the Department of CSE (AI and ML) at Vidyavardhaka College of Engineering, Mysuru, India. He obtained his B.E. degree from Visveswaraiah Technological University in 2005, his PG degree in Computer Science and Engineering in 2007, and his doctoral degree in 2020, all from Visveswaraiah Technological University. He has around 16 years of teaching experience. His research domains are computer networks, network security, and software engineering. He can be contacted at email: chethan08@gmail.com.