Why do we train a model using batches of data?

While training this model we make batches and then train the model on each batch in turn.
Does this reduce the amount of data we are training the model on?
Are we training the model on only 200 data points, which corresponds to n_points in train?
Aren't we reducing the model's ability to learn from the full data set (X_train, y_train), which would help it train more effectively?

@purushartha, I gather you are confused about the use of batches while training. Gradient-based learning comes in several forms: stochastic gradient descent, mini-batch gradient descent, and (full) batch gradient descent (Reference to gradient descent). Using batches in no way reduces the size of the data set; it only changes the number of inputs the model considers before making an update to its weights. The model still iterates through the entire training set, taking one batch of inputs at a time. Additionally, click here to learn more about gradient descent. Hope this helps!
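To make this concrete, here is a minimal sketch in plain Python (with hypothetical numbers: a 1000-sample dataset and a batch size of 200) showing that splitting the training set into mini-batches still visits every sample once per epoch; only the number of weight updates changes:

```python
# Hypothetical stand-in for X_train: 1000 training samples.
dataset = list(range(1000))
batch_size = 200

# Split the dataset into consecutive mini-batches.
batches = [dataset[i:i + batch_size] for i in range(0, len(dataset), batch_size)]

samples_seen = sum(len(b) for b in batches)  # total samples visited in one epoch
updates_per_epoch = len(batches)             # one weight update per batch

print(samples_seen)       # 1000 -> the full training set is still used
print(updates_per_epoch)  # 5    -> batching only changes how often we update
```

So with a batch size of 200 we are not training on 200 points total; we simply update the weights after every 200 points, five times per epoch here.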

So when we use a trainloader in a CNN, will it train the model on several batches in parallel?

As far as I know, the trainloader only loads the images; the batches are not run in parallel. We load a set of images, train on them (do backpropagation and update the weights and biases according to the labels, using an algorithm such as Adam or AdaGrad), and then load the next set of images and continue training the model from where we left off. We also use batches in a CNN because GPU memory cannot hold that much data all at once. The more batches per epoch, the more weight updates, and hence the more accurate the model may get; but too small a batch size also leads to long training times.
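The sequential flow described above can be sketched in plain Python (a toy 1-D "model" and dataset, all numbers hypothetical): batches are drawn one after another, and each update starts from the weights left by the previous batch, rather than batches being trained in parallel.

```python
def batch_loader(data, batch_size):
    """Yield consecutive batches one at a time (no parallel training)."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

weight = 0.0                          # toy single-parameter "model"
data = [1.0, 2.0, 3.0, 4.0]           # toy dataset

for batch in batch_loader(data, batch_size=2):
    grad = sum(batch) / len(batch)    # stand-in for backprop's gradient
    weight += 0.1 * grad              # update carries over to the next batch

print(round(weight, 2))  # -> 0.5
```

A real DataLoader can prefetch the *next* batch in background worker processes while the GPU trains on the current one, but the weight updates themselves are still applied strictly one batch at a time.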

Then how is vectorization performed in a CNN?