Modifying files of the BigCode Evaluation Harness

Closed · Posted 3 months ago · Paid on delivery

Currently, when generating and evaluating on a task with the BigCode Evaluation Harness, it produces a .json file like the one attached below. The task is to append a single hard-coded prompt to the prompts in TokenizedDataset in lm_eval/utils.py.
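As a rough starting point, here is a minimal, self-contained sketch of the kind of change involved. It is not the harness's actual class: the toy PrefixedPromptDataset below only mimics the prompt-assembly step, and the names HARD_CODED_PREFIX, self.prefix and prompt_contents are assumptions modelled on lm_eval/utils.py, so confirm where TokenizedDataset.__iter__ builds its prompt string in your checkout before applying the same one-line change there.

# Minimal sketch (assumed names, not the harness's real class): every prompt
# gets the same hard-coded in-context example prepended before tokenization.
from torch.utils.data import IterableDataset

# Hypothetical hard-coded in-context example; the real string is whatever
# format is settled on below (Prompt/Answer vs Intent/Code, etc.).
HARD_CODED_PREFIX = "Prompt: print hello world\n\nAnswer: print('hello world')\n\n"


class PrefixedPromptDataset(IterableDataset):
    """Toy stand-in for TokenizedDataset, showing only the prompt assembly."""

    def __init__(self, raw_prompts, prefix=""):
        self.raw_prompts = raw_prompts
        self.prefix = prefix  # mirrors the existing per-run prefix argument

    def __iter__(self):
        for prompt_contents in self.raw_prompts:
            # The one-line change: prepend the hard-coded example(s) to each prompt.
            yield HARD_CODED_PREFIX + self.prefix + prompt_contents


if __name__ == "__main__":
    for p in PrefixedPromptDataset(["Prompt: read a file\n\nAnswer: "]):
        print(p)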

First, work out what the exact structure of the resulting prompt should look like (a small script for comparing candidate formats follows the checklist below).

E.g. "Prompt: print hello world \n\n Answer: print('hello world') \n\n Prompt: read a file \n\n Answer: " vs. "Intent: print hello world \n\n Code: print('hello world') \n\n Intent: read a file \n\n Code: ".

a. Consider phrasing the prompt as a code comment (e.g. "# prints hello world"), and decide what separator to use between the prompt and the answer.

b. Check the model documentation on Hugging Face for any examples of ICL (in-context learning).

c. Check the models' papers (e.g. CodeGen) for any examples of how to use ICL with the model.
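To compare the candidate formats before hard-coding one, a small throwaway script like the following can print the resulting prefix in each style. The helper few_shot_prefix and the example pair are hypothetical, purely for eyeballing the field labels and separators discussed above; they are not part of the harness.

# Hypothetical helper for eyeballing prefix formats before hard-coding one.
EXAMPLES = [("print hello world", "print('hello world')")]

def few_shot_prefix(examples, task_label="Prompt", answer_label="Answer", sep="\n\n"):
    """Join (task, solution) pairs into one in-context-learning block."""
    blocks = [f"{task_label}: {task}{sep}{answer_label}: {solution}" for task, solution in examples]
    return sep.join(blocks) + sep

# "Prompt:/Answer:" style
print(few_shot_prefix(EXAMPLES) + "Prompt: read a file\n\nAnswer: ")

# "Intent:/Code:" style
print(few_shot_prefix(EXAMPLES, task_label="Intent", answer_label="Code") + "Intent: read a file\n\nCode: ")

# Comment-style variant from (a): the natural-language prompt as a code comment
print("# prints hello world\nprint('hello world')\n\n# reads a file\n")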

Python · Natural Language · GPT-3 · Deep Learning · NLP

Project ID: #36717006

About the project

2 bids · Remote project · Last activity 2 months ago

2 freelancers are bidding an average of ₹1300 for this job

dipalika199

Objective: Highly energetic, motivated and enthusiastic teenager, able to quickly learn and handle varied responsibilities, seeking employment opportunities to further enhance learning abilities and to gain experience.

₹1300 INR in 7 days
(0 reviews)
0.0