Reading a huge .csv file

I am trying to read data from a .csv file with up to 2 lakh (200,000) rows and create a dataframe from it.
But pd.read_csv() is creating the dataframe with only 35,000 rows.
How do I create a dataframe with all 2 lakh rows?

Is there any error message, or does it just silently drop the rest of the rows from the dataframe?

Would it be possible to share the csv file here (via a Google Drive link, for example)? Or any helpful screenshots showing the dataframe info / csv statistics for the number of rows.
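
In case it helps, here is a minimal sketch of how those numbers can be pulled, assuming the file was saved locally as sanfran_crime_dataset.csv (adjust the path to wherever your download landed):

```python
import pandas as pd

# How many rows did pandas actually parse?
df = pd.read_csv("sanfran_crime_dataset.csv")
print(len(df))   # row count of the dataframe
df.info()        # column dtypes and non-null counts

# Count raw lines in the file, independently of pandas
# (minus 1 for the header; quoted fields containing
# embedded newlines would make this differ slightly)
with open("sanfran_crime_dataset.csv") as f:
    print(sum(1 for _ in f) - 1)
```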

It just drops the rest of the rows from the dataframe.
This is the link for the csv file: https://cocl.us/sanfran_crime_dataset

This is what I’m getting after reading the csv file:

[screenshot of the dataframe output]

I tried it on my local machine and on Google Colab, and it worked fine in both places.
There are 150,500 rows of data (excluding the header), all of which are successfully read by pandas (I cross-checked by counting on the shell as well).
Maybe you can double-check whether, by any chance, a different file is being read.
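
A quick sketch of that double-check (the filename sanfran_crime_dataset.csv here is just a placeholder for wherever your copy is saved):

```python
import os

path = "sanfran_crime_dataset.csv"   # placeholder local filename
print(os.path.abspath(path))   # which file is actually being opened?
print(os.path.getsize(path))   # a truncated download will be unusually small
```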

The PdId data type in my screenshot is int64, while it’s float64 in your case. An integer column coming back as float64 usually means pandas found missing values in it, which would fit a truncated or corrupted local copy. As a first step, can you try with a fresh copy of the file?
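
One way to do both checks at once is to read a fresh copy straight from the link and inspect that column, roughly like this (this assumes the short link serves the raw csv and that the column is named PdId as in the screenshots):

```python
import pandas as pd

# Fresh copy straight from the source, bypassing any saved local file
df = pd.read_csv("https://cocl.us/sanfran_crime_dataset")

print(df.shape)                  # expect (150500, n_columns)
print(df["PdId"].dtype)          # int64 for a complete file
print(df["PdId"].isna().sum())   # NaNs here would explain the float64 dtype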

ok, let me check again.
Thanks!