This document shows how you can deploy a fitted model as a web service using Azure Container Registry (ACR), Azure Container Instances (ACI) and Azure Kubernetes Service (AKS). The framework used is Plumber, a package that exposes your R code as a service via a REST API.
We’ll fit a simple model for illustrative purposes, using the Boston housing dataset which ships with R (in the MASS package). To make the deployment process more interesting, the model we fit will be a random forest, using the randomForest package.
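A minimal sketch of fitting and saving such a model might look like the following; the formula and number of trees are illustrative choices rather than prescribed values.

```r
library(MASS)           # provides the Boston dataset
library(randomForest)

# fit a random forest predicting median house value from the other variables
bos_rf <- randomForest(medv ~ ., data=Boston, ntree=100)

# save the fitted model so it can be copied into the Docker image later
saveRDS(bos_rf, "bos_rf.rds")
```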
Now that we have the model, we also need a script to obtain predicted values from it given a set of inputs:
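A sketch of such a scoring script, saved here as bos_rf_score.R (the filename matches the Dockerfile below, and the annotations are written to match the behaviour described next):

```r
# bos_rf_score.R: load the fitted model once when the server starts
bos_rf <- readRDS("bos_rf.rds")
library(randomForest)

#* @param df data frame of variables to score
#* @post /score
function(req, df)
{
    # convert the supplied data to a data frame and return the predictions
    df <- as.data.frame(df)
    predict(bos_rf, df)
}
```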
This is fairly straightforward, but the comments may require some explanation. They are plumber annotations that tell it to call the function if the server receives an HTTP POST request with the path `/score` and query parameter `df`. The value of the `df` parameter is then converted to a data frame and passed to the randomForest `predict` method.
Let’s package up the model and the scoring script into a Docker image. A Dockerfile to do this would look like the following. It uses the base image supplied by Plumber (`trestletech/plumber`), installs randomForest, and then adds the model and the above scoring script. Finally, it runs the code that will start the server and listen on port 8000.
```dockerfile
# example Dockerfile to expose a plumber service
FROM trestletech/plumber

# install the randomForest package
RUN R -e 'install.packages(c("randomForest"))'

# copy model and scoring script
RUN mkdir /data
COPY bos_rf.rds /data
COPY bos_rf_score.R /data
WORKDIR /data

# plumb and run server
EXPOSE 8000
ENTRYPOINT ["R", "-e", \
    "pr <- plumber::plumb('/data/bos_rf_score.R'); pr$run(host='0.0.0.0', port=8000)"]
```
The code to store our image on Azure Container Registry is as follows. If you are running this code, you should substitute the appropriate values, such as the secret, from your own Azure service principal. Similarly, if you are using the public Azure cloud, note that all ACR instances share a common DNS namespace, as do all ACI and AKS instances, so the resource names used here may need to be changed to names that are globally unique.
For more information on how to create a service principal, see the AzureRMR readme.
```r
library(AzureContainers)

# create a resource group for our deployments
deployresgrp <- AzureRMR::get_azure_login()$
    get_subscription("sub_id")$
    create_resource_group("deployresgrp", location="australiaeast")

# create container registry
deployreg_svc <- deployresgrp$create_acr("deployreg")

# build image 'bos_rf'
call_docker("build -t bos_rf .")

# upload the image to Azure
deployreg <- deployreg_svc$get_docker_registry(as_admin=TRUE)
deployreg$push("bos_rf")
```
If you run this code, you should see a lot of output indicating that R is downloading, compiling and installing randomForest, and finally that the image is being pushed to Azure. (You will see this output even if your machine already has the randomForest package installed. This is because the package is being installed to the R session inside the container, which is distinct from the one running the code shown here.)
All Docker calls in AzureContainers, like the one to build the image, return the actual docker command line as the `cmdline` attribute of the (invisible) returned value. In this case, the command line is `docker build -t bos_rf .`. Similarly, the `push()` method actually involves two Docker calls, one to retag the image and the second to do the actual pushing; the returned value in this case will be a 2-component list with the command lines being `docker tag bos_rf deployreg.azurecr.io/bos_rf` and `docker push deployreg.azurecr.io/bos_rf`.
The simplest way to deploy a service is via a Container Instance. The following code creates a single running container which contains our model, listening on port 8000.
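A sketch of such a call is below; the instance name and the core and memory sizes are illustrative assumptions.

```r
# create a container instance running the bos_rf image from our registry
deployaci <- deployresgrp$create_aci("deployaci",
    image="deployreg.azurecr.io/bos_rf",
    registry_creds=deployreg,
    cores=2, memory=8,
    ports=aci_ports(8000))
```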
Once the instance is running, let’s call the prediction API with some sample data. By default, AzureContainers will assign the container a domain name with a prefix taken from the instance name. The port is 8000 as specified in the Dockerfile, and the URI path is `/score`, indicating that we want to call the scoring function defined earlier.
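A sketch of the request using httr is below; the fully qualified hostname is an assumption, derived from the instance name and the australiaeast region used earlier.

```r
# send the first 10 rows of Boston as JSON and parse the JSON response
response <- httr::POST("http://deployaci.australiaeast.azurecontainer.io:8000/score",
    body=list(df=MASS::Boston[1:10, ]), encode="json")
httr::content(response, simplifyVector=TRUE)
```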
The data to be scored (the first 10 rows of the Boston dataset) is passed in the body of the request as a named list, encoded as JSON. A feature of Plumber is that, when the body of the request is in this format, it will extract the elements of the list and pass them to the scoring function as named arguments. This makes it easy to pass around relatively large amounts of data, e.g. if the data is wide, or for scoring multiple rows at a time. For more information on how to create and interact with Plumber APIs, consult the Plumber documentation.
Deploying a service to a container instance is simple, but lacks many features that are important in a production setting. A better alternative for production purposes is to deploy to a Kubernetes cluster. Such a cluster can be created using Azure Kubernetes Service (AKS).
Unlike an ACI resource, creating a Kubernetes cluster can take several minutes. By default, the `create_aks()` method will wait until the cluster provisioning is complete before it returns.
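A sketch of creating the cluster is below; the pool name and node count are illustrative, and the exact arguments for specifying node pools may vary between versions of AzureContainers.

```r
# create a Kubernetes cluster with a single pool of 2 nodes
deployclus_svc <- deployresgrp$create_aks("deployclus",
    agent_pools=agent_pool("pool1", 2))
```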
Having created the cluster, we can deploy our model and create a service. We’ll use a YAML configuration file to specify the details for the deployment and service API. The image to be deployed is the same as before.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bos-rf
spec:
  selector:
    matchLabels:
      app: bos-rf
  replicas: 1
  template:
    metadata:
      labels:
        app: bos-rf
    spec:
      containers:
      - name: bos-rf
        image: deployreg.azurecr.io/bos_rf
        ports:
        - containerPort: 8000
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
      imagePullSecrets:
      - name: deployreg.azurecr.io
---
apiVersion: v1
kind: Service
metadata:
  name: bos-rf-svc
spec:
  selector:
    app: bos-rf
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8000
```
The following code will obtain the cluster endpoint from the AKS resource and then deploy the image and service to the cluster. The configuration details for the `deployclus` cluster are stored in a file located in the R temporary directory; all of the cluster’s methods will use this file. Unless told otherwise, AzureContainers does not touch your default Kubernetes configuration (`~/.kube/config`).
To check on the progress of the deployment, call the cluster’s `get()` method, specifying the type and name of the resource to get information on. As with Docker, these correspond to calls to the `kubectl` command-line tool, and again, the actual command line is stored as the `cmdline` attribute of the returned value.
```r
deployclus$get("deployment bos-rf")
#> Kubernetes operation: get deployment bos-rf  --kubeconfig=".../kubeconfigxxxx"
#> NAME     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
#> bos-rf   1         1         1            1           5m

svc <- read.table(text=deployclus$get("service bos-rf-svc")$stdout, header=TRUE)
#> Kubernetes operation: get service bos-rf-svc  --kubeconfig=".../kubeconfigxxxx"
#> NAME         TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)          AGE
#> bos-rf-svc   LoadBalancer   10.0.8.189   188.8.131.52   8000:32276/TCP   5m
```
Once the service is up and running, as indicated by the presence of an external IP in the service details, let’s test it with an HTTP request. The response should be the same as it was with the container instance. Notice how we extract the IP address from the service details above.
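A sketch of the request, using the svc data frame obtained above (the column name `EXTERNAL.IP` is how read.table renders the EXTERNAL-IP header):

```r
# call the /score endpoint on the service's external IP address
response <- httr::POST(paste0("http://", svc$EXTERNAL.IP[1], ":8000/score"),
    body=list(df=MASS::Boston[1:10, ]), encode="json")
httr::content(response, simplifyVector=TRUE)
```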
Finally, once we are done, we can tear down the service and deployment. Depending on the version of Kubernetes the cluster is running, deleting the service may take a few minutes.
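A sketch of the teardown, assuming the resource names from the YAML file above:

```r
# delete the service and deployment from the cluster
deployclus$delete("service", "bos-rf-svc")
deployclus$delete("deployment", "bos-rf")
```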
And if required, we can also delete all the resources created here, by simply deleting the resource group (AzureContainers will prompt you for confirmation):
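Since the resource group object was created at the start, this is a single method call:

```r
# deleting the resource group removes the ACR, ACI and AKS resources inside it
deployresgrp$delete()
```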
One important thing to note about the above example is that it is insecure. The Plumber service is exposed over HTTP, and there is no authentication layer: anyone on the Internet can contact the service and interact with it. Therefore, it’s highly recommended that you provide at least some level of authentication, as well as restrict the service to HTTPS only (this will require deploying an ingress controller to the Kubernetes cluster). You can also create the AKS resource as a private cluster; however, be aware that if you do this, you can only interact with the cluster endpoint from a host which is on the cluster’s own subnet.
Plumber is a relatively simple framework for creating and deploying services. There are some alternatives for those looking for more features:
- The RestRserve package is a more comprehensive framework, built on top of functionality provided by Rserve. It includes features such as automatic parallelisation, support for HTTPS, and support for basic and bearer authentication schemes.
- The model operationalisation framework found in Microsoft Machine Learning Server also provides a way to deploy models. A notable feature of MMLS is model management, which allows you to deploy different generations of a model and distinguish between them. Note that MMLS is proprietary software, unlike Plumber and RestRserve.