
Hosting a deep learning model using Python on a VPS (server)

Deployment is an important step in the life cycle of a machine learning or deep learning project. Python is a popular choice among web developers as well, so there are plenty of tools and frameworks that make deployment easy. The two most popular Python web frameworks are Django and Flask.

Django is a full-fledged web framework, while Flask is a lightweight one. Flask is often called a microframework and lets us build APIs quickly. We will use Flask in this article, together with Apache and mod_wsgi for deployment.

We will deploy the model in three steps. 

  1. Setting up the server
  2. Installing the libraries
  3. Hosting the model

There are plenty of hosting services available, and any of them will work. Here we use a Linux server running Ubuntu 20.04 LTS. When it comes to servers, Linux is the industry's choice: it is open source, free to use, and has a very active community, which makes it well suited for our task. Servers usually do not provide a graphical user interface (GUI), so we will do everything from the command line interface (CLI).

Let's start with the first step.

To connect to the remote server from Windows, we can use PuTTY, an SSH client.
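On Linux or macOS, the built-in ssh client works just as well (the username and IP address below are placeholders for your own server's credentials):

ssh username@server_ip_address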

First, we install Python on the server using

sudo apt-get install python3

Next, we install the Python package installer (pip) using

sudo apt-get install python3-pip

Now we can install the Apache web server and mod_wsgi:

sudo apt-get install apache2

sudo apt-get install libapache2-mod-wsgi-py3

Apache is now installed and its default page is live on our server. We will configure it later to host our model.
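To confirm that Apache is running, you can check its status (or simply open the server's IP address in a browser, which should show the default Apache page):

sudo systemctl status apache2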

But first we need to install all the libraries that our machine learning code will use:

sudo pip3 install Flask scikit-learn scikit-image imageio numpy pandas 

Now we have everything needed to run our code on the server. Let's build the API using Flask.

Make a new file using 

sudo nano model.py

This will be our main Python file. First we import all the necessary libraries we installed earlier, then we define our Flask app. (The trained model itself is assumed to live in a separate module, so that importing it does not clash with this file's name.) The app.route() decorator routes traffic to specific URLs. Here we only use a slash (/), which maps the function to the domain root. We can route to other URLs by passing them in the parentheses, for example /welcome or /home.
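For example, a hypothetical extra route (not part of our deployment, just to illustrate routing) could look like this:

@app.route('/welcome')
def welcome():
    return 'Welcome!'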

We map the predict function using the app.route() decorator, so whenever a request is made to that URL, the predict function is called.

In the predict function, we first check whether we need to send data to or receive data from the user, based on the type of request. For a GET request, we show a form that lets the user upload an image for prediction. When the user uploads an image, the 'if' condition for a POST request becomes true and we check whether the received file is valid. The image is then passed to our model, which makes its prediction, and the prediction is sent back to the user.

from flask import Flask, jsonify, send_file, request, render_template
import numpy as np

# import our trained ML model; keep it in a separate module (here assumed to be
# ml_model.py) so the import does not clash with this file, which is saved as model.py
from ml_model import model

# import other necessary modules

app = Flask(__name__)

@app.route('/', methods=['POST', 'GET'])
def predict():
    if request.method == 'POST':
        # the form field is named 'file1'
        file = request.files.get('file1')
        if file is None or file.filename == '':
            return jsonify({'error': 'no file'})
        image_bytes = file.read()
        prediction = model.predict(image_bytes)
        return send_file(prediction)
    # GET request: show the upload form
    return render_template('index.html')

if __name__ == '__main__':
    app.run()

Here index.html contains the HTML for a basic form that lets the user submit an image. When the form is submitted, the image is sent to the model, the prediction is made, and the result is eventually sent back to the user. Save the Python file above as model.py. Since render_template() looks for templates in a templates folder by default, place index.html in a templates directory next to model.py.
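A minimal index.html could look like the following (a sketch; the only requirement is that the file input is named file1 so that it matches the Flask code above):

<!DOCTYPE html>
<html>
  <body>
    <h1>Upload an image for prediction</h1>
    <form method="POST" enctype="multipart/form-data">
      <input type="file" name="file1">
      <input type="submit" value="Predict">
    </form>
  </body>
</html>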

Now we need to create a WSGI file using

sudo nano model.wsgi

This opens the file in a terminal text editor, where we add the following lines and save the file.

import logging
import sys

logging.basicConfig(stream=sys.stderr)
sys.path.insert(0, '/var/www/html/file_directory')

from model import app as application

We will create a symbolic link to our original directory inside the Apache server directory:

sudo ln -sT ~/model /var/www/html/model

Replace file_directory in model.wsgi with the path to the directory containing model.wsgi (here that is /var/www/html/model, since we linked ~/model there). Now, if you look inside /var/www/html, your directory appears with all the files from the original folder.

Now we are at the last step: we need to modify the default config file (or add a new one) in Apache to make the changes live. Go to the /etc/apache2/sites-enabled directory and open 000-default.conf using

sudo nano 000-default.conf

Add the following lines inside the <VirtualHost> block, just below the DocumentRoot line (around line 12 of the default file):

        WSGIDaemonProcess model
        WSGIScriptAlias / /var/www/html/model/model.wsgi
        WSGIApplicationGroup %{GLOBAL}

        <Directory /var/www/html/model>
             WSGIProcessGroup model
             WSGIApplicationGroup %{GLOBAL}
             Order deny,allow
             Allow from all
        </Directory>

Here, replace model with your own directory name and adjust the paths accordingly. That is everything needed to host our model; we just have to restart Apache so the changes take effect. Use

sudo service apache2 restart

This restarts the server and makes all the changes live. You can now access the model using your domain URL or the server's IP address.
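As a quick check, we can call the API from any machine that has Python and the requests library installed (the URL, the test image name and the output filename below are placeholders, not values from this guide):

import requests

# send a test image to the deployed model and save the returned prediction
with open('test.jpg', 'rb') as f:
    response = requests.post('http://your_server_ip/', files={'file1': f})

with open('prediction_output', 'wb') as out:
    out.write(response.content)

print(response.status_code)

If the page does not load, the Apache error log (by default /var/log/apache2/error.log on Ubuntu) is the first place to look.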

Following these steps, you can host your machine learning or deep learning model on any remote Linux server.


Written by HackerVibes
