•   about 1 year ago

How to utilize GPU RAM instead of System RAM within Colab when running code on large datasets

I'm having trouble figuring out how to use the GPU RAM of Colab's provided GPUs instead of the system RAM so that my code runs more efficiently. I'm currently trying to visualize the graph of the dataset I converted (fairly large: 200k+ nodes and 1M+ edges), and I noticed that this operation does not use the GPU RAM at all; it falls back to system RAM, which makes it very slow for a dataset of this size. I have set my runtime type to the T4 GPU and followed all the steps in the provided Jupyter Notebook (including verifying that I'm using an NVIDIA GPU and installing nx-cugraph via pip), but nothing seems to have changed. For further context, I am running my code from an M1 MacBook Pro. Is there something I'm missing or doing wrong?
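For anyone reproducing this, the two setup checks mentioned above can be sketched in code. This is a rough sketch, not taken from the notebook itself; `nvidia-smi` and the `nx_cugraph` module name are assumed to be the standard ones for NVIDIA runtimes and the nx-cugraph pip package:

```python
import importlib.util
import shutil

# Check 1: is an NVIDIA driver visible in this runtime?
# (`nvidia-smi` is present on Colab GPU runtimes; if this is False,
# the runtime type is likely still CPU-only)
has_gpu_tool = shutil.which("nvidia-smi") is not None

# Check 2: did `pip install nx-cugraph` actually land in this runtime?
# (module name assumed; if False, rerun the install cell after switching runtimes,
# since changing runtime type resets installed packages)
has_nx_cugraph = importlib.util.find_spec("nx_cugraph") is not None

print(f"nvidia-smi found: {has_gpu_tool}, nx_cugraph importable: {has_nx_cugraph}")
```

Both checks printing `True` only confirms the setup; as the reply below the original post notes, it does not mean every operation will use the GPU.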

  • 2 comments

  • Manager   •   about 1 year ago

    Hi Ryan,
    You are following the correct instructions for selecting a GPU on your Colab notebook.
    The issue is that the tools used to visualize your graph (i.e. matplotlib, networkx) cannot take advantage of a T4 GPU to render the entire graph.
    With the current packages installed, the GPU is only useful when you want to run algorithms on your graph.
    To solve this, you have 3 options:
    1. Visualize a sample of your graph in your notebook
    2. Visualize a sample of your graph in your ArangoDB instance (once it has been loaded)
    3. Consider using an alternative graph visualization package that supports GPUs (e.g. PyGraphistry) in your notebook
    I would recommend sticking with Option 1 or 2 for now, as the main focus of this hackathon is the Agentic App. Visualization is a nice-to-have, and there is no need to visualize the full graph.

  •   •   about 1 year ago

    While running the Jupyter template in a Colab notebook, I see that the GPU is not used even for graph execution, even though I've selected a GPU runtime. Any suggestions on what might be wrong with the sample template running on a Colab instance?

Comments are closed.