Your session crashed after using all available RAM - Google Colab
Why is Google Colab showing this error?
The main reason behind this error is that you are either loading a very large file into RAM or training a model on a very large dataset with a very high batch size.
Example: think about what happens when you overload a lift and exceed its weight limit. It will not move.
Something similar happens in Google Colab: the RAM fills up with data, there is no space left to process anything, and that is why the session crashes.
What can you do to solve this error?
The simple solution is to avoid loading large files into RAM all at once.
If it is a text file, you can read it line by line; if it is a CSV file, you can load it in chunks.
If you are training a machine learning model on an image dataset, you can decrease the batch size and that will usually solve the issue.
I cannot cover every case that triggers this error, but the general solution is the same: process the data in batches.
In the next section I will discuss a few common situations where you will get this error and show the solution with code.
How can you solve this error?
In this section I will cover the most common situations where you can get this error and provide an implemented solution for each.
If you are getting this error while reading a very large txt file
You might be reading the file like this. The code below loads the whole txt file into RAM, and if the file is bigger than your RAM it will crash Google Colab.
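A minimal sketch of that pattern (the filename here is just a placeholder):

# this reads the ENTIRE file into RAM in one go
with open("big_file.txt") as f:   # "big_file.txt" is a placeholder path
    data = f.read()
print(len(data))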

Solution: read the file line by line
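A minimal sketch of the line-by-line approach, again with a placeholder filename:

with open("big_file.txt") as f:   # placeholder path
    line = f.readline()           # reads only one line into RAM
    while line:
        # process the current line here
        line = f.readline()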

In the above code I am using the readline function, which reads only one line from the file at a time. You can add your own logic to collect a few lines in a bucket and process them together.
If you are getting this error while training a CNN image model
To solve this you can adjust the batch size in your code.
Example: in the code below the batch size is 1000, so it loads 1000 images into RAM at a time while training the model.
The right value depends on how much RAM your system has,
and you can find the best batch size by trial and error.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# rescale pixel values to the 0-1 range
train_datagen = ImageDataGenerator(rescale=1. / 255)

# batch_size=1000 means 1000 images are loaded into RAM per batch
train_generator = train_datagen.flow_from_directory(
    '/gdrive/MyDrive/shot/training',
    target_size=(1280, 720),
    batch_size=1000,
    class_mode='categorical')
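For completeness, the generator is then passed to training roughly like this, assuming you already have a compiled Keras model named model (the model itself is not shown in this post):

# `model` is assumed to be a compiled Keras model (hypothetical, not defined above)
model.fit(train_generator, epochs=5)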
You can try a few different values for batch_size: start with a high value and keep decreasing it until the session stops crashing.
For example, try these in order:
- batch_size=1024
- batch_size=512
- batch_size=256
- batch_size=128
- batch_size=64
Keep decreasing the value until the session no longer crashes.
The right batch size will be different for everyone because it depends on your data (especially the image size) and how much RAM you have.
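To get a feel for why batch_size=1000 crashes with images this large, here is a rough back-of-the-envelope sketch (illustrative numbers only; a free Colab session typically has on the order of 12 GB of RAM):

# rough RAM estimate for one batch of decoded float32 images
batch_size = 1000
height, width, channels = 1280, 720, 3   # target_size from the example above
bytes_per_value = 4                      # float32

batch_bytes = batch_size * height * width * channels * bytes_per_value
print(batch_bytes / 1e9, "GB")           # roughly 11 GB for a single batch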
If you are getting this error while reading a very large csv file using pandas
In pandas you can load a CSV in batches; check the implementation below. It reads batch_size rows from the file into a dataframe at a time,
and you can use all the usual dataframe functions on each chunk, just like when you read the whole file at once.
In the example below I print the shape of each chunk using df.shape.
import pandas as pd

filename = "test.csv"
batch_size = 1000

# chunksize makes read_csv return an iterator of dataframes with batch_size rows each
for df in pd.read_csv(filename, chunksize=batch_size):
    print(df.shape)
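Since each chunk is a normal dataframe, you can also build up a result across chunks. Here is a small sketch that counts the total number of rows, using the same placeholder filename:

import pandas as pd

total_rows = 0
for df in pd.read_csv("test.csv", chunksize=1000):
    total_rows += len(df)          # accumulate a result chunk by chunk
print("total rows:", total_rows)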
Paid solution:
The solutions above are free. If you can afford Google Colab's monthly charge, you can purchase their Pro plan (which comes with more RAM) and your existing code will work as expected.
