Your session crashed after using all available RAM - Google Colab

🤓
If you are using Google Colab, a cloud-based platform for running and executing Python code, your session may crash if the code you are running uses up all of the available memory on the system. This can happen if your code contains a large number of variables, or if you are running complex operations that require a lot of memory to execute. To avoid this issue, you can try to optimize your code to use less memory, or you can increase the amount of memory available to your Colab session by using a larger instance type. You can also monitor your code's memory usage to ensure that it is not exceeding the available memory on your system.
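If you want to check how much RAM your session actually has before loading anything, a minimal sketch using the psutil package (available on Colab) looks like this:

import psutil

# check total and available RAM of the current session (values in GB)
mem = psutil.virtual_memory()
print(f"Total RAM: {mem.total / 1e9:.2f} GB")
print(f"Available RAM: {mem.available / 1e9:.2f} GB")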
👉
You can skip to the How section if you are just looking for the solution.

Why is Google Colab showing this error?

The main reason behind this error is that you are either loading a large file into memory all at once or training a model on a very large dataset with a very high batch size.

Example: think about what happens when you overload a lift and exceed its weight limit. It will not work.

Something similar happens in Google Colab: the RAM is filled with data, there is no space left to process anything, and that is why the session crashes.

What can you do to solve this error?

The simple solution is to avoid loading large files into RAM all at once.

If it's a text file, you can read it line by line; if it's a CSV file, you can load it in chunks.

If you are training a machine learning model on an image dataset, you can decrease the batch size and that will usually solve the issue.

I cannot cover every case in which you might get this error, but the common fix is to process the data in batches.

In the next section, I will discuss a few common situations where you will get this error and provide the solutions with code.

How can you solve this error?

In this section, I will cover the most common situations where you can get this error and provide working solutions.

If you are getting this error while reading a very large txt file

You might be reading the file like this. The code below loads the whole txt file into RAM, and if the file is bigger than your RAM, it will crash Google Colab.

Wrong way:
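A minimal sketch of that pattern (large_file.txt is just a placeholder name):

# Wrong way: read() pulls the entire file into RAM at once
with open("large_file.txt") as f:
    data = f.read()

print(len(data))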

Solution: Read the file line by line

Right way:
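A minimal sketch, using the same placeholder file name:

# Right way: readline() reads only one line at a time, so RAM usage stays low
with open("large_file.txt") as f:
    line = f.readline()
    while line:
        # add your processing logic for the current line here
        line = f.readline()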

In the above code, I am using the readline function, which reads only one line from the file at a time. You can add your own logic to store a few lines in a bucket and process them, as shown below.
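A small sketch of that bucket idea; the bucket size of 1000 and the process() helper are assumptions you would replace with your own values and logic:

bucket = []
bucket_size = 1000  # assumed group size, tune it for your RAM

with open("large_file.txt") as f:
    for line in f:
        bucket.append(line)
        if len(bucket) == bucket_size:
            process(bucket)  # process() is a hypothetical function with your own logic
            bucket = []

if bucket:  # handle the leftover lines
    process(bucket)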

If you are getting this error while training a CNN image model

To solve this, you can adjust the batch size in your code.

Example: in the code below, the batch size is 1000, so it loads 1000 images into RAM at a time while training the model.

The right value depends on how much RAM your system has, and you can find the best batch size by trial and error.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# rescale pixel values and stream the images from disk in batches
train_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    '/gdrive/MyDrive/shot/training',
    target_size=(1280, 720),
    batch_size=1000,
    class_mode='categorical')
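For reference, a minimal sketch of how such a generator is typically passed to training; the model variable (a compiled Keras model) and the epoch count are assumptions, not part of the original code:

# each training step pulls one batch of batch_size images from disk,
# so only that many images are held in RAM at a time
model.fit(train_generator, epochs=10)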

You can try a few different values for batch_size. Start with a higher value and keep decreasing it until the session stops crashing.

For example, try these values in order:

  1. batch_size=1024
  2. batch_size=512
  3. batch_size=256
  4. batch_size=128
  5. batch_size=64

Keep decreasing until it no longer crashes.

🚀
Remember, the batch size also affects training speed, which is why I suggest choosing a high batch size first and decreasing it until the crashes stop.

This batch size will be different for everyone because it depends on the dataset size and your RAM.

If you are getting this error while reading a very large csv file using pandas

In pandas, you can load a CSV in batches; check the implementation below. The code loads batch_size rows at a time from the file into a DataFrame,

and you can use all the usual DataFrame functions on it, just as you normally would after reading the whole file.

In the example below, I print the shape of each chunk using df.shape.

import pandas as pd

filename = "test.csv"
batch_size = 1000

for df in pd.read_csv(filename, chunksize=batch_size):
    print(df.shape)
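If you need a single result over the whole file, you can aggregate chunk by chunk instead of keeping everything in memory; a minimal sketch, assuming you just want the total row count of test.csv:

import pandas as pd

total_rows = 0
for df in pd.read_csv("test.csv", chunksize=1000):
    # each df holds at most 1000 rows, so RAM usage stays small
    total_rows += len(df)

print(total_rows)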

👽
I cannot cover every scenario in which this error occurs, so if your scenario is not covered by the solutions above, feel free to write it down in the comment section. I review all comments once a day, so you will get a solution within 24 hours of commenting.

The solutions above are free. If you can afford Google Colab's monthly charge, you can simply purchase their Pro plan and your existing code will work as expected.

Google Colab Pro Plans