Decode-Length-Predictor is a machine learning model that predicts the number of tokens a specific Large Language Model (LLM) will generate (the decode length) for a given input prompt. This project is inspired by the research presented in "Power-aware Deep Learning Model Serving with μ-Serve".
To set up the environment, install the required dependencies by running:
pip install -r requirements.txt
Download the dataset (e.g., ShareGPT) and store it in the expected directory:
mkdir -p data/shareGPT
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json -O data/shareGPT/ShareGPT_V3_unfiltered_cleaned_split.json
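Note that the `resolve/main/` path (rather than `blob/main/`) is needed so that wget fetches the raw JSON instead of the Hugging Face HTML page. Once downloaded, prompt/response pairs can be pulled out of the ShareGPT records; the sketch below assumes the ShareGPT_V3 layout (a list of records, each with a `conversations` list of alternating `"human"`/`"gpt"` turns) and uses a tiny inline sample instead of the real file:

```python
import json

# Illustrative sample mirroring the ShareGPT_V3 record layout
# (each record holds alternating "human"/"gpt" turns).
sample = json.loads("""
[{"id": "ex1", "conversations": [
    {"from": "human", "value": "What is 2+2?"},
    {"from": "gpt", "value": "2+2 equals 4."}
]}]
""")

def extract_pairs(records):
    """Yield (prompt, response) pairs from ShareGPT-style records."""
    for rec in records:
        turns = rec["conversations"]
        # Pair each human turn with the gpt turn that follows it.
        for a, b in zip(turns, turns[1:]):
            if a["from"] == "human" and b["from"] == "gpt":
                yield a["value"], b["value"]

pairs = list(extract_pairs(sample))
# pairs → [("What is 2+2?", "2+2 equals 4.")]
```

For the full dataset, replace the inline sample with `json.load(open("data/shareGPT/ShareGPT_V3_unfiltered_cleaned_split.json"))`.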
Generate output sequences from the LLM for each prompt in the dataset, then preprocess the data into training, validation, and test splits:
./run_preprocess.sh
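Conceptually, preprocessing turns each (prompt, response) pair into a (prompt, label) training example, where the label is the number of decoded tokens in the response. The sketch below is a minimal illustration of that labeling step; a plain whitespace split stands in for the target LLM's actual tokenizer, which the real script would use:

```python
# Hypothetical labeling step: the label for each prompt is the token
# count of the LLM's response. A whitespace split is a stand-in for
# the target model's real tokenizer.

def tokenize(text):
    return text.split()

def build_examples(pairs):
    """Map (prompt, response) pairs to (prompt, decode_length) examples."""
    return [(prompt, len(tokenize(response))) for prompt, response in pairs]

examples = build_examples(
    [("Explain transformers.", "They use attention over tokens.")]
)
# examples → [("Explain transformers.", 5)]
```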
Train the model using the preprocessed dataset:
./run_train.sh
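The training objective is regression: map features of the prompt to the expected decode length. As a minimal stand-in for the actual model (the repository's predictor presumably learns from richer text features), the sketch below fits a one-feature least-squares line from prompt token count to decode token count; all names and the toy data are illustrative:

```python
# Toy regression: predict decode length from prompt length with a
# closed-form one-feature least-squares fit. This is a stand-in for
# the repository's learned predictor, not its actual architecture.

def fit_linear(xs, ys):
    """Return slope w and intercept b minimizing squared error."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

xs = [2, 4, 6, 8]      # prompt token counts (toy data)
ys = [5, 9, 13, 17]    # decode token counts (exactly linear: y = 2x + 1)
w, b = fit_linear(xs, ys)
# w → 2.0, b → 1.0
```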
Evaluate the trained model to obtain results and performance metrics:
./run_test.sh
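For length prediction, two natural metrics are mean absolute error in tokens and the fraction of predictions within k tokens of the true length. The metric names below are assumptions for illustration, not necessarily what `run_test.sh` reports:

```python
# Illustrative evaluation metrics for length prediction; these are
# assumed metrics, not the repository's confirmed outputs.

def mae(preds, labels):
    """Mean absolute error in tokens."""
    return sum(abs(p - l) for p, l in zip(preds, labels)) / len(preds)

def within_k(preds, labels, k=10):
    """Fraction of predictions within k tokens of the true length."""
    return sum(abs(p - l) <= k for p, l in zip(preds, labels)) / len(preds)

preds  = [100, 50, 80]
labels = [90, 55, 120]
err = mae(preds, labels)        # (10 + 5 + 40) / 3
acc = within_k(preds, labels)   # 2 of 3 predictions within 10 tokens
```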
Evaluation results will be available soon.