The dataset for text detection is very large (2-3 GB). How should I handle such a large dataset for training? Should I upload it to Google Drive and connect my Colab notebook to it, or are there other good options?
Text detection in capstone project
One option is uploading it to Drive; alternatively, you can use your own Kaggle kernel, where datasets can be attached directly.
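If you go the Drive route, one common pattern is to keep the dataset as a single zip on Drive, copy that one file onto Colab's local disk, and extract it there, since reading thousands of small image files through the Drive mount is slow. Below is a minimal sketch of that staging step; the zip filename and paths are hypothetical, and the Colab-only mount call is shown only as a comment. The demo at the bottom uses a dummy archive standing in for the real dataset.

```python
import os
import shutil
import tempfile
import zipfile

# In Colab you would first mount Drive (Colab-only API), then point
# src_zip at your upload, e.g. '/content/drive/MyDrive/text_detection.zip'
# (hypothetical path):
#   from google.colab import drive
#   drive.mount('/content/drive')

def stage_dataset(src_zip: str, dest_dir: str) -> str:
    """Copy the archive to the local disk, then extract it there.

    One large sequential copy plus a local unzip is much faster than
    reading many small files individually through the Drive mount.
    """
    os.makedirs(dest_dir, exist_ok=True)
    local_zip = os.path.join(dest_dir, os.path.basename(src_zip))
    shutil.copy(src_zip, local_zip)      # single large sequential read
    with zipfile.ZipFile(local_zip) as zf:
        zf.extractall(dest_dir)          # fast extraction on local disk
    os.remove(local_zip)                 # reclaim the disk space
    return dest_dir

# Self-contained demo: a dummy zip stands in for the real dataset.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "dataset.zip")
with zipfile.ZipFile(src, "w") as zf:
    zf.writestr("images/img_0001.jpg", b"fake image bytes")
out = stage_dataset(src, os.path.join(tmp, "local"))
print(os.path.exists(os.path.join(out, "images", "img_0001.jpg")))  # True
```

Note that Colab's local disk is wiped when the runtime resets, so keep the zip on Drive and re-run the staging step each session.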