How to run an ML/DL model on a remote GPU server in a Jupyter notebook


If your AWS, Azure, or other remote server has a GPU, you will probably want to run your models on it.

Perhaps with TensorFlow, Keras, or scikit-learn.

Just follow these steps:

Step 1

Install Docker and the NVIDIA CUDA driver on your server.
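On Ubuntu, the installation might look like the following sketch (the driver package version is an assumption; check what your GPU needs, and adjust for your distribution):

```shell
# Install Docker via the official convenience script (assumes Ubuntu/Debian)
curl -fsSL https://get.docker.com | sh

# Install an NVIDIA driver (the version 535 here is an assumption;
# run `ubuntu-drivers devices` to see what your GPU actually needs)
sudo apt-get update
sudo apt-get install -y nvidia-driver-535

# Install the NVIDIA Container Toolkit so Docker containers can use the GPU
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Verify: nvidia-smi should list your GPU
nvidia-smi
```

A reboot may be required after installing the driver before `nvidia-smi` works.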

Step 2

Run a Jupyter container on your server.

docker run -p 8888:8888 quay.io/jupyter/scipy-notebook:2023-10-31

For more details on this Docker image, refer to the docker-stacks documentation.
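Note that the scipy-notebook image above does not bundle TensorFlow or GPU support. If you want TensorFlow with the GPU visible inside the container, one alternative (assuming the NVIDIA Container Toolkit from Step 1 is installed) is the official TensorFlow Jupyter image, with the GPUs passed through via `--gpus all`:

```shell
# GPU-enabled alternative: TensorFlow's own Jupyter image,
# with all host GPUs passed through to the container
docker run --gpus all -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter
```

Without `--gpus all`, the container runs but TensorFlow falls back to CPU.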

Step 3

From the previous step, the output of docker run contains a tokenized URL:

http://127.0.0.1:8888/lab?token=aa821e54884537ec41eb845c0cfaa5369332dc02c59f2b59

Replace 127.0.0.1 with the public IP address of the remote server.

Open that URL in a browser and you will see the remote Jupyter interface.
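The substitution can be done by hand, or with a one-liner like this (the IP address 203.0.113.10 is a documentation placeholder, not a real server):

```shell
# Token URL printed by `docker run` (example token from above)
TOKEN_URL="http://127.0.0.1:8888/lab?token=aa821e54884537ec41eb845c0cfaa5369332dc02c59f2b59"

# Public IP of the remote server (placeholder for illustration)
SERVER_IP="203.0.113.10"

# Swap the loopback address for the public IP
echo "$TOKEN_URL" | sed "s/127.0.0.1/$SERVER_IP/"
# Prints: http://203.0.113.10:8888/lab?token=aa821e54884537ec41eb845c0cfaa5369332dc02c59f2b59
```

Remember that port 8888 must also be open in the server's firewall or cloud security group for the browser to reach it.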

Step 4

Upload your .ipynb notebook file through the browser interface and run it.

If you need to install dependencies, just use !pip in a notebook cell.

!pip install --upgrade pip
!pip install numpy tensorflow
!pip install imageio scikit-image
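After the dependencies are installed, a quick sanity check in a cell confirms whether TensorFlow can actually see the GPU. The helper below is just an illustration; it returns an empty list if TensorFlow is missing or no GPU is visible:

```python
def visible_gpus():
    """Return the GPU devices TensorFlow can see, or [] if TensorFlow is absent."""
    try:
        import tensorflow as tf
    except ImportError:
        return []
    return tf.config.list_physical_devices('GPU')

print(visible_gpus())
```

On a correctly configured GPU container this prints something like `[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]`; an empty list means TensorFlow is running on CPU only.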
