Docker
Deploying your app, pipeline or model is an important part of your development process, and containerization helps you get a highly reproducible and sandboxed environment.
There isn't much of a secret to writing a Dockerfile for your Docker/OCI container for a Rust application. In general terms, I start by installing the necessary system packages and then Rust itself through Rustup, as expected:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y \
curl clang gcc g++ zlib1g-dev libmpc-dev \
libmpfr-dev libgmp-dev git cmake pkg-config \
libssl-dev build-essential
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH=/root/.cargo/bin:${PATH}
This is followed by a simple, neat trick: first build only the dependencies, and only then copy the contents of src and call cargo build again. This ensures that docker build can reuse all the previous build stages (including the compiled dependencies, if they haven't changed).
WORKDIR /opt
COPY Cargo.toml Cargo.toml
COPY Cargo.lock Cargo.lock
RUN mkdir src && echo "fn main() {}" > src/main.rs
RUN cargo build --release --locked
RUN rm -rf src
COPY src src
# touch main.rs so cargo sees the real sources as newer than the dummy build
RUN touch src/main.rs && cargo build --release --locked
CMD cargo run --release --locked
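With this layout, a rebuild after changing only the files under src reuses the cached dependency layer, so only your own crate is recompiled. A quick way to try it locally (the image tag is just a placeholder):

docker build -t rust-pipeline .
docker run --rm rust-pipeline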
Note that the --locked flag used on cargo build is important: it makes Cargo stick to the exact versions pinned in Cargo.lock and fail if the lockfile is missing or out of date, which keeps the image build reproducible.
In case you are building your images with GitHub Actions, you can enable reuse of previous builds with the cache-from and cache-to options; see an example here.
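As a rough sketch of that setup (action versions and the workflow layout are my assumptions, not taken from the linked example), a docker/build-push-action step can use the GitHub Actions cache backend:

name: docker-build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Buildx is needed for the cache backends below
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: false
          # pull cached layers from previous runs, and save the new ones
          cache-from: type=gha
          cache-to: type=gha,mode=max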
For examples of Docker Compose deploys, see here and here.
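As a minimal sketch (service name, logging variable and restart policy are assumptions of mine, not taken from the linked examples), a Compose file that builds and runs the Dockerfile above could look like:

services:
  pipeline:
    build: .                 # builds the Dockerfile above
    environment:
      RUST_LOG: info         # assumes env_logger/tracing-style logging
    restart: unless-stopped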
Kubernetes and Argo Workflows
There also isn't anything super special about deploying Rust on a Kubernetes cluster (it bears mentioning, however, that Rust was found to be well suited for Linkerd's Kubernetes operator, given its performance and reliability).
However, as an interesting aspect for Data Science and Engineering, I will briefly mention Argo Workflows as a Kubernetes-native option for deploying workflows in which you can run your containerized Rust data pipeline.
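As a hedged illustration (the image reference is a placeholder for wherever you push the image built above), a minimal Workflow that runs the containerized pipeline as a single step could look like:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: rust-pipeline-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: registry.example.com/rust-pipeline:latest    # placeholder image
        command: ["cargo", "run", "--release", "--locked"]   # mirrors the image's CMD

Submitting it with argo submit --watch runs the pipeline as a pod on the cluster, and the same container template can be chained into DAGs or cron workflows.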
If you found this project helpful, please consider making a donation.