Journal of Grid Computing - From Grids to Cloud Federations [IF 3.288 (2018)]
Special Issue on Orchestration of computing resources in the Cloud-to-Things continuum
The objective of this SI is to collect the latest research findings on major and emerging topics related to the orchestration of resources in a wide ecosystem where IoT, Edge/Fog and Cloud converge to form a computing continuum, also known as the Cloud-to-Things continuum.
Cloud computing can provide flexible and scalable resources to meet virtually any computing need, and Big Data has revolutionized the approach to data computation. As the volume of data produced by IoT devices grows, there is increasing demand for applications capable of processing such data flows close to their sources, not only on the Cloud but anywhere along the IoT-to-Cloud path (Edge/Fog). Where computation should occur depends on the specific needs of each application: strict real-time constraints require computation to run as close to the data origin as possible (e.g., on an IoT gateway), whereas batch-oriented tasks (e.g., Big Data analytics) are best run on the Cloud, where computing resources are abundant. Edge/Fog may be a good compromise when both computing power and timeliness of processing are required.

Application designers would greatly benefit from support for flexible and dynamic provisioning of computing resources along the Cloud-to-Things path, that is, a provisioning system capable of orchestrating (activating, deactivating, integrating, etc.) computing resources offered by heterogeneous computing infrastructures. Such a system must also take into account, and cope with, the heterogeneity of the providers owning those infrastructures in terms of service APIs, guaranteed service levels, data management policies, etc. Moreover, typical data-intensive workloads consisting of data-analytics tasks, such as Machine Learning (ML)/AI and descriptive analysis, are ideal candidates for the Cloud-to-Things continuum: data is typically generated at the edge (by IoT devices, etc.), for instance via a serverless pipeline, whereas the analysis (whether model training or execution of descriptive tasks) traditionally takes place at centralized locations in the Cloud using distributed processing frameworks.
This SI encourages submissions that address resource orchestration issues in the Cloud-to-Things landscape and propose experimental solutions, case studies, deployed systems and best practices in the field.