Modifying files of BigCode Evaluation Harness
Paid on delivery
Currently, when generating and evaluating on a task with the BigCode Evaluation Harness, it produces a .json file like the one attached below. The task is to try appending a single hard-coded prompt to the prompts in TokenizedDataset in lm_eval/utils.py.
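As a sketch of where such a change could go (assuming TokenizedDataset assembles a prompt string per sample before tokenizing — the class below is a simplified, illustrative stand-in, not the harness's exact code):

```python
# Hard-coded in-context example prepended to every task prompt
FEW_SHOT_PREFIX = (
    "Prompt: print hello world\n\n"
    "Answer: print('hello world')\n\n"
)

class TokenizedDatasetSketch:
    """Simplified stand-in for TokenizedDataset in lm_eval/utils.py:
    yields prompt strings instead of token tensors, to show where
    the hard-coded prefix would be inserted."""

    def __init__(self, prompts, prefix=FEW_SHOT_PREFIX):
        self.prompts = prompts
        self.prefix = prefix

    def __iter__(self):
        for prompt in self.prompts:
            # The prefix is attached here, just before tokenization
            # would happen in the real class
            yield self.prefix + prompt

for full_prompt in TokenizedDatasetSketch(["Prompt: read a file\n\nAnswer: "]):
    print(full_prompt)
```

In the real harness the same idea applies inside TokenizedDataset's iteration logic: concatenate the fixed prefix onto each task prompt before the tokenizer call.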
First, work out the exact structure the resulting prompt should have.
E.g. "Prompt: print hello world \n\n Answer: print('hello world') \n\n Prompt: read a file \n\n Answer: " vs. "Intent: print hello world \n\n Code: print('hello world') \n\n Intent: read a file \n\n Code: ".
a. Decide what separator to use between the prompt and the answer (e.g. the prompt written as a comment: "# prints hello world").
b. Check the model docs on Hugging Face to see if they have examples of ICL (in-context learning).
c. Check the papers of the models (e.g. CodeGen) for any examples of how to use ICL with the model.
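To make the label/separator decision concrete, here is a small helper (hypothetical, not part of the harness) that assembles the in-context prefix in either of the two styles above, with the separator as a parameter:

```python
def build_few_shot_prefix(examples, prompt_label="Prompt",
                          answer_label="Answer", sep="\n\n"):
    """Join (intent, code) example pairs into an in-context-learning
    prefix. The labels and separator are exactly the knobs being
    decided above."""
    parts = [
        f"{prompt_label}: {intent}{sep}{answer_label}: {code}"
        for intent, code in examples
    ]
    return sep.join(parts) + sep

examples = [("print hello world", "print('hello world')")]

# "Prompt:/Answer:" style
print(build_few_shot_prefix(examples))
# "Intent:/Code:" style
print(build_few_shot_prefix(examples, prompt_label="Intent", answer_label="Code"))
```

Once a format is chosen, the returned string is what would be hard-coded (or generated once) and prepended to each prompt in TokenizedDataset.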
Project ID: #36717006