Official PyTorch and River implementation of *Adaptive Machine Learning for Resource-Constrained Environments*, presented at the DELTA 2024 workshop at ACM SIGKDD KDD 2024, Barcelona, Spain.
- 📄 Paper: View on Lecture Notes in Computer Science (LNCS)
- 📄 Conference paper: Adaptive Machine Learning for Resource-Constrained Environments
- 🤗 Dataset on HuggingFace: adaptive_cpu_utilisation_dataset
- 🤗 Models on HuggingFace: adaptive_cpu_utilisation_prediction_models
- 📊 Poster: View Poster
- 📄 Paper (Online): View Paper
- GitHub Repository: AML4CPU
- Hold-out script (Experiment 1): run_holdout.py
- Pre-sequential script (Experiment 2): run_pre_sequential.py
- Zero-shot and fine-tuning with Lag-Llama: run_finetune.py
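The CPU-utilisation dataset linked above can be pulled directly from the HuggingFace Hub. A minimal sketch follows; the repository id (including the namespace) and the split name are assumptions, so check the HuggingFace link above for the exact path:

```python
# Minimal sketch: load the CPU-utilisation dataset from the HuggingFace Hub.
# NOTE: the repository id and split name below are assumptions; use the exact
# id from the HuggingFace link in this README.
from datasets import load_dataset

dataset = load_dataset("sebasmos/adaptive_cpu_utilisation_dataset")
print(dataset)              # available splits and features
print(dataset["train"][0])  # first record of the (assumed) train split
```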
🇪🇺 This work has received funding from the European Union's HORIZON research and innovation programme under grant agreement No. 101070177.
Let's start by setting up your environment:
- Create a Conda environment:

  ```bash
  conda create -n AML4CPU python=3.10.12 -y
  conda activate AML4CPU
  ```

- Clone the repository and install the requirements:

  ```bash
  git clone https://github.com/sebasmos/AML4CPU.git
  cd AML4CPU
  pip install -r requirements.txt
  ```

- Install PyTorch and the remaining dependencies:

  ```bash
  pip install clean-fid numba numpy torch==2.0.0+cu118 torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118
  ```
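After installation, a quick sanity check can confirm the pinned PyTorch build and GPU visibility; this assumes `river` is listed in requirements.txt:

```python
# Post-install sanity check: confirm the pinned torch build and CUDA visibility.
import torch
import river  # assumed to be installed via requirements.txt

print("torch:", torch.__version__)               # expected: 2.0.0+cu118
print("CUDA available:", torch.cuda.is_available())
print("river:", river.__version__)
```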
Run the hold-out evaluation script:

```bash
python run_holdout.py --output_file 'exp1' --output_folder Exp1 --num_seeds 20
```
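For context, a hold-out evaluation of a streaming regressor trains on the first part of the series and then scores a frozen model on the held-out tail. The River sketch below is only illustrative of that protocol, not of run_holdout.py itself; the toy `cpu_load` series, the lag features, and the 70/30 split are assumptions:

```python
# Illustrative hold-out evaluation for a streaming regressor with River:
# learn on the first portion of the series, then score the frozen model
# on the held-out tail without further updates.
from river import linear_model, metrics, preprocessing

cpu_load = [0.42, 0.45, 0.50, 0.48, 0.55, 0.60, 0.58, 0.62, 0.64, 0.61]  # toy series
window = 3  # number of lagged values used as features
samples = [
    ({f"lag_{j}": cpu_load[i - j] for j in range(1, window + 1)}, cpu_load[i])
    for i in range(window, len(cpu_load))
]

split = int(0.7 * len(samples))  # assumed 70/30 train/hold-out split
model = preprocessing.StandardScaler() | linear_model.LinearRegression()
mae = metrics.MAE()

for x, y in samples[:split]:       # training portion
    model.learn_one(x, y)
for x, y in samples[split:]:       # frozen model on the hold-out tail
    mae.update(y, model.predict_one(x))

print(mae)
```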
Run the pre-sequential evaluation script:

```bash
python run_pre_sequential.py --output_file 'exp2' --eval --output_folder Exp2 --num_seeds 20
```
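In a prequential (test-then-train) protocol, every incoming sample is first scored and only then used to update the model, so the metric reflects purely out-of-sample predictions. A minimal River sketch of that loop is below; the model choice, toy stream, and lag features are assumptions, and the full experiment lives in run_pre_sequential.py:

```python
# Illustrative prequential (test-then-train) loop with River: predict, score,
# then incrementally learn from each sample of the stream.
from river import forest, metrics

model = forest.ARFRegressor(seed=42)  # assumed model: adaptive random forest regressor
mae = metrics.MAE()

cpu_load = [0.42, 0.45, 0.50, 0.48, 0.55, 0.60, 0.58, 0.62]  # toy series
stream = (
    ({"lag_1": cpu_load[i - 1], "lag_2": cpu_load[i - 2]}, cpu_load[i])
    for i in range(2, len(cpu_load))
)

for x, y in stream:
    y_pred = model.predict_one(x)   # 1. predict on the unseen sample
    mae.update(y, y_pred)           # 2. update the running metric
    model.learn_one(x, y)           # 3. only then learn from the sample

print(mae)
```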
Evaluate Lag-Llama zero-shot over different context lengths (32, 64, 128, 256), with and without RoPE scaling:

```bash
python run_finetune.py --output_file zs --output_folder zs --model_path ./models/lag_llama_models/lag-llama.ckpt --eval_multiple_zero_shot --max_epochs 50 --num_seeds 20
```
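Under the hood, a single zero-shot run amounts to wrapping the pretrained checkpoint in a GluonTS predictor at a given context length. The sketch below is adapted from the public Lag-Llama examples rather than from run_finetune.py itself; the constructor arguments and the `test_dataset` object are assumptions that may differ from the version pinned in this repository:

```python
# Hedged sketch of one zero-shot Lag-Llama evaluation (adapted from the public
# lag-llama examples); run_finetune.py sweeps context lengths {32, 64, 128, 256}
# with and without RoPE scaling.
import torch
from gluonts.evaluation import make_evaluation_predictions
from lag_llama.gluon.estimator import LagLlamaEstimator

ckpt_path = "./models/lag_llama_models/lag-llama.ckpt"
context_length, prediction_length, use_rope = 64, 1, True

ckpt = torch.load(ckpt_path, map_location="cpu")
args = ckpt["hyper_parameters"]["model_kwargs"]  # architecture settings stored in the checkpoint

estimator = LagLlamaEstimator(
    ckpt_path=ckpt_path,
    prediction_length=prediction_length,
    context_length=context_length,
    input_size=args["input_size"],
    n_layer=args["n_layer"],
    n_embd_per_head=args["n_embd_per_head"],
    n_head=args["n_head"],
    scaling=args["scaling"],
    time_feat=args["time_feat"],
    # Linear RoPE scaling lets the model run at contexts longer than pre-training.
    rope_scaling={"type": "linear",
                  "factor": max(1.0, (context_length + prediction_length) / args["context_length"])}
    if use_rope else None,
)

predictor = estimator.create_predictor(
    estimator.create_transformation(),
    estimator.create_lightning_module(),
)

# `test_dataset` is a GluonTS-style dataset built from the CPU-utilisation series (assumed).
# forecasts, targets = make_evaluation_predictions(dataset=test_dataset, predictor=predictor, num_samples=100)
```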
Fine-tune and evaluate Lag-Llama over the same context lengths (32, 64, 128, 256), with and without RoPE scaling:

```bash
python run_finetune.py --output_file exp3_REAL_parallel --output_folder Exp3 --model_path ./models/lag_llama_models/lag-llama.ckpt --max_epochs 50 --num_seeds 20 --eval_multiple
```
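Continuing the zero-shot sketch above, fine-tuning swaps the direct predictor construction for a training call before evaluation. The `train_dataset` object, the trainer options, and the argument names again follow the public Lag-Llama fine-tuning example rather than this repository's code:

```python
# Hedged sketch: fine-tune the estimator before evaluation, mirroring the
# --max_epochs 50 setting above. Reuses ckpt_path, args, context_length and
# prediction_length from the zero-shot sketch.
finetune_estimator = LagLlamaEstimator(
    ckpt_path=ckpt_path,
    prediction_length=prediction_length,
    context_length=context_length,
    input_size=args["input_size"],
    n_layer=args["n_layer"],
    n_embd_per_head=args["n_embd_per_head"],
    n_head=args["n_head"],
    scaling=args["scaling"],
    time_feat=args["time_feat"],
    trainer_kwargs={"max_epochs": 50},  # PyTorch Lightning trainer options
)

# .train() fits the Lightning module on the training split and returns a predictor
# that can be scored exactly like the zero-shot one above.
# `train_dataset` is a GluonTS-style training dataset built from the CPU series (assumed).
# predictor = finetune_estimator.train(train_dataset, cache_data=True)
```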
This project is licensed under the MIT License. See LICENSE for details.
We are grateful to our colleagues at the EU Horizon project ICOS and Ireland’s Centre for Applied AI for helping to start and shape this research effort. This work has been made possible by funding from the European Union’s HORIZON research and innovation programme (Grant No. 101070177).
Please cite as:
```bibtex
@InProceedings{10.1007/978-3-031-82346-6_1,
  author="Ord{\'o}{\~{n}}ez, Sebasti{\'a}n A. Cajas
  and Samanta, Jaydeep
  and Su{\'a}rez-Cetrulo, Andr{\'e}s L.
  and Carbajo, Ricardo Sim{\'o}n",
  editor="Piangerelli, Marco
  and Prenkaj, Bardh
  and Rotalinti, Ylenia
  and Joshi, Ananya
  and Stilo, Giovanni",
  title="Adaptive Machine Learning for Resource-Constrained Environments",
  booktitle="Discovering Drift Phenomena in Evolving Landscapes",
  year="2025",
  publisher="Springer Nature Switzerland",
  address="Cham",
  pages="3--19",
  isbn="978-3-031-82346-6"
}
```