TensorFlow data pipeline improvements

I currently have a Google Colab notebook with a data generator and a model, and I train the model using the generator. Training is currently very slow, and I would like to speed it up using the [login to view URL] API or any other approach. Let me know if you are interested in helping. Relevant files/documents are attached below.

COLAB: https://colab.research.google.com/drive/1oOnjeCP0VuGJOljlJt9fW7EUCCjcmrNp#scrollTo=Ie8-s2T9I7H1


[login to view URL]

The JSON data we read is too large to fit into memory all at once, so our preprocessor splits it into these smaller JSON files. I need the dataset to load them progressively rather than all at once.
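One common way to load shards progressively is to stream records from one file at a time through a Python generator, which `tf.data.Dataset.from_generator` can then wrap. The sketch below is a minimal, hypothetical illustration: the shard file names (`shard_000.json`, …) and the record schema (`{"x": [...], "y": ...}`) are assumptions, not the project's actual format, and the TensorFlow wrapping is shown as a comment since only the generator logic matters here.

```python
import json
import os
import tempfile

# Hypothetical setup: pretend the preprocessor wrote 3 small shards,
# each a JSON list of {"x": [...], "y": ...} records (assumed schema).
tmpdir = tempfile.mkdtemp()
for i in range(3):
    records = [{"x": [i, i + 1], "y": i} for _ in range(2)]
    with open(os.path.join(tmpdir, f"shard_{i:03d}.json"), "w") as f:
        json.dump(records, f)

def stream_examples(shard_dir):
    """Yield one (x, y) example at a time, keeping only one shard in memory."""
    for name in sorted(os.listdir(shard_dir)):
        if not name.endswith(".json"):
            continue
        with open(os.path.join(shard_dir, name)) as f:
            for rec in json.load(f):  # only this shard is resident in memory
                yield rec["x"], rec["y"]

# Hand-off to tf.data (sketch; requires TensorFlow and the real schema):
# ds = tf.data.Dataset.from_generator(
#     lambda: stream_examples(tmpdir),
#     output_signature=(tf.TensorSpec([2], tf.float32),
#                       tf.TensorSpec([], tf.int32)),
# ).batch(32).prefetch(tf.data.AUTOTUNE)

examples = list(stream_examples(tmpdir))
print(len(examples))  # 6 examples streamed across 3 shards
```

Because the generator opens each shard lazily, peak memory is bounded by the largest single shard; `.prefetch(tf.data.AUTOTUNE)` then overlaps loading with training to address the slowness.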

Skills: TensorFlow, Python, Keras, Machine Learning (ML), Neural Networks


About the employer:
( 0 reviews ) Gates Mills Blvd, United States

Project ID: #29911837

2 freelancers are bidding on this job at an average of $50/hour
