Context-aware workload optimisation in cloud ecosystems: a framework for sustainable and low-impact AI processing
About the Project
The increasing demand for cloud computing to support artificial intelligence (AI), particularly its applications in big data processing, has driven significant advancements but has also introduced notable environmental challenges because these operations are energy-intensive. By focusing on dynamic workflow management and context-aware resource allocation, this research aims to balance workload demand against energy consumption, reducing environmental impact while maintaining high performance.
Unlike traditional approaches that aim to enhance computational speed or rely primarily on renewable energy sources, this project will focus on optimising task scheduling and resource allocation by integrating diverse contextual factors such as task priority, workload characteristics, and node availability. Leveraging federated computing and modular architectural approaches from software engineering, workloads can be intelligently distributed across nodes or server farms. This prevents the overloading of any single component, reduces idle or wasted energy, and ensures more efficient use of resources. These paradigms are particularly well-suited to decentralised, scalable systems, offering an opportunity to explore their roles in building sustainable cloud ecosystems.
Modular architectures play a central role in this research by enabling flexibility, adaptability, and efficiency in workload distribution and resource utilisation. Modular approaches allow systems to be divided into smaller, independently manageable components, which can be activated or scaled based on the specific requirements of individual tasks. This reduces energy waste by activating only the components a task requires. Modular systems also enhance scalability by allowing individual components to scale independently with demand, avoiding the inefficiencies of monolithic architectures. By integrating these modular principles with federated computing, which distributes computational tasks across multiple, often geographically dispersed nodes, the project will explore how to optimise workload distribution. Federated computing allows tasks to be routed to locations where resources are underutilised or where energy costs are lower, enhancing overall system efficiency.
Together, modular architectural approaches and federated computing provide a cohesive framework for optimising resource scheduling by incorporating contextual factors such as task priority, workload characteristics, network latency, node energy efficiency, and carbon footprint. This integration will allow the research to balance system performance and sustainability, creating scalable and energy-aware cloud ecosystems.
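To make the idea of context-aware routing concrete, the sketch below scores candidate nodes by a weighted sum of contextual costs (latency, current load, per-task energy, and grid carbon intensity) and routes a task to the cheapest node. This is a minimal illustration only: the node names, fields, weights, and normalisation constants are all hypothetical placeholders, not part of the project's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float          # network latency to the node
    utilisation: float         # current load: 0.0 (idle) to 1.0 (saturated)
    energy_per_task_wh: float  # estimated energy cost per task
    carbon_intensity: float    # gCO2/kWh of the node's grid region

def score(node: Node, weights: dict) -> float:
    """Lower is better: a weighted sum of (crudely normalised) contextual costs."""
    return (weights["latency"] * node.latency_ms
            + weights["load"] * node.utilisation * 100
            + weights["energy"] * node.energy_per_task_wh
            + weights["carbon"] * node.carbon_intensity / 10)

def choose_node(nodes: list, weights: dict) -> Node:
    """Route a task to the node with the lowest contextual cost."""
    return min(nodes, key=lambda n: score(n, weights))

# Hypothetical node inventory and weighting for illustration only.
nodes = [
    Node("eu-west", latency_ms=20, utilisation=0.9, energy_per_task_wh=1.2, carbon_intensity=300),
    Node("us-east", latency_ms=80, utilisation=0.3, energy_per_task_wh=1.0, carbon_intensity=450),
    Node("nordics", latency_ms=45, utilisation=0.4, energy_per_task_wh=0.9, carbon_intensity=50),
]
weights = {"latency": 1.0, "load": 1.0, "energy": 5.0, "carbon": 1.0}
best = choose_node(nodes, weights)
print(best.name)  # the low-carbon, lightly loaded node wins under these weights
```

In practice the weights themselves would be set dynamically from task priority and workload characteristics, which is where the context-sensitive scheduling research lies.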
Beyond modular and federated approaches, other software engineering paradigms will also contribute to this research. Event-driven architectures enable systems to respond to specific triggers, eliminating the need for continuous polling and reducing idle resource consumption. By incorporating event-driven mechanisms, resource usage can be adjusted dynamically to meet real-time demands, minimising energy waste. Serverless computing, where cloud providers dynamically manage resources, further enhances efficiency. Serverless approaches allocate resources only as needed, reducing energy consumption while ensuring the effective execution of AI and big data workloads. Together, these paradigms form a comprehensive framework for designing energy-aware systems.
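The contrast between polling and event-driven resource management can be sketched as follows: handlers run only when a demand signal fires, so no capacity is held warm while idle. The event names, the `EventBus` class, and the scaling rule are hypothetical simplifications for illustration, not the project's design.

```python
from collections import defaultdict

class EventBus:
    """Minimal event-driven dispatcher: handlers run only when an event
    fires, instead of a loop repeatedly polling for state changes."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event: str, handler):
        self._handlers[event].append(handler)

    def publish(self, event: str, payload):
        for handler in self._handlers[event]:
            handler(payload)

class ResourceManager:
    """Hypothetical manager that scales workers from demand signals."""
    def __init__(self):
        self.workers = 0

    def on_queue_growth(self, depth):
        # Scale up only when a queue-depth event arrives.
        self.workers += max(1, depth // 10)

    def on_queue_drained(self, _):
        # Release idle capacity instead of keeping it warm.
        self.workers = 0

bus = EventBus()
mgr = ResourceManager()
bus.subscribe("queue_growth", mgr.on_queue_growth)
bus.subscribe("queue_drained", mgr.on_queue_drained)

bus.publish("queue_growth", 25)    # burst of work arrives -> workers scale up
bus.publish("queue_drained", None) # demand gone -> capacity released
```

Serverless platforms take the same principle further by making the provider, rather than the application, responsible for this allocate-on-event, release-on-idle cycle.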
The research methodology will employ a combination of theoretical and practical approaches. The expected outcomes include context-sensitive algorithms for resource allocation and workload scheduling that balance energy efficiency and computational performance. These solutions will contribute to the broader goal of reducing the carbon footprint of cloud-based AI and big data processing. This research aligns with current advancements in sustainable computing, highlighting the importance of innovative strategies for managing workload distribution and resource allocation in energy-efficient ways.
By integrating established paradigms such as modular architectures and federated computing with innovative scheduling algorithms, this research will provide a robust framework for balancing resource demand and energy consumption in a context-sensitive manner. This approach not only addresses the environmental challenges of cloud computing but also establishes best practices for designing scalable, energy-aware systems capable of supporting the growing demands of AI and big data workloads.
To excel in this project, the ideal candidate should possess strong programming skills in languages such as Python, Java, or C++. A solid understanding of software engineering principles, including modular architectures and event-driven systems, will also be essential. Familiarity with cloud computing platforms and federated computing frameworks, as well as strong analytical skills to evaluate energy efficiency and performance metrics using simulation tools or real-world data, will be highly beneficial.
Funding Notes
There is no funding for this project.


