Creating Docker images from a .NET solution with project references is easy once you understand the basics of Docker commands, but writing a proper Dockerfile can be tricky for beginners.
Most examples show how to dockerize a .NET project, assuming it has no local dependencies.
So let's analyse what we can do when our project references other projects from the solution.
We will start by diving into a simple example without dependencies first, to understand what changes we introduce and why.
If you just want to skip to the solution and copy-paste it, of course you can, but it's not recommended:
sooner or later you will be blocked by another obstacle because you don't understand what is happening, and as a result you will waste more time.
This article is a good place to start learning Docker instructions and commands, because everything is explained in short, plain language.
We will use .NET Core 2.2, as it's the current version at the moment of writing.
An example dockerized .NET Core application is available on GitHub; feel free to use it for your needs.
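For reference, here is the basic Dockerfile that the sections below walk through instruction by instruction. It is a sketch based on the official Microsoft example; the final stage's runtime image name is my assumption, and PROJECT_NAME is a placeholder for your own project name:

```dockerfile
# Build stage: based on the official .NET Core SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
WORKDIR /app

# Copy the csproj and restore NuGet dependencies as a separate layer
COPY *.csproj ./
RUN dotnet restore

# Copy the rest of the sources and publish a Release build
COPY . ./
RUN dotnet publish -c Release -o out

# Runtime stage: based on the much smaller production runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "PROJECT_NAME.dll"]
```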
We are told to run these two commands from the project folder where the Dockerfile is located:
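These commands (explained in detail in the sections below) are:

```shell
# Build the image, tagging it aspnetapp, with the current directory as build context
docker build -t aspnetapp .

# Create and start a detached container named myapp,
# mapping host port 8080 to container port 80
docker run -d -p 8080:80 --name myapp aspnetapp
```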
Dockerfile FROM instruction
Our Dockerfile starts with the FROM instruction: FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
which means that we base our image on the official Microsoft .NET Core SDK image, version 2.2.
We use the SDK at this point, not the production runtime, because we will compile our application inside Docker while building the image.
So you don't even need the .NET Core SDK installed on your host machine; this Dockerfile is prepared in such a way that you won't compile
your app for the Docker image yourself - Docker will compile it. You could use binaries built on your host machine, but that's not safe - it may not work due to compatibility troubles.
Why there is an AS build-env instruction - we will come to this later (in the Docker multi-stage build section).
Dockerfile WORKDIR instruction
In the second line we see the WORKDIR /app instruction, which means that the following RUN, CMD, ENTRYPOINT, COPY and ADD instructions in our Dockerfile
will be executed in the /app directory. If it doesn't exist, it will be created (even if it is never used).
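To illustrate (a hypothetical comparison, not part of the original Dockerfile):

```dockerfile
WORKDIR /app
# roughly the same effect as: RUN mkdir -p /app && cd /app
# except that a cd inside RUN would NOT persist to later instructions,
# because each RUN executes in a fresh shell - while WORKDIR does persist
```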
Dockerfile COPY instruction
Next we see the COPY *.csproj ./ instruction, which means that all csproj files from the Docker build context will be copied to the workdir (/app) directory
inside the Docker image. The docker build command will be explained later, but in short: the build context is the directory on your host
machine pointed to in the docker build command. If you pass the . path, the directory where you execute the command is taken.
So in our case we copy only one csproj file, because we run the build command with the project directory set as the build context.
Dockerfile RUN instruction
Next is the RUN dotnet restore instruction, which simply runs the dotnet restore command in our workdir (/app) directory.
At this moment the /app directory in our image contains nothing but the .csproj file of our project, because that is all we copied
in the previous step - but it's enough for restoring NuGet dependencies.
Copy and compile app source
Again we see a COPY instruction - COPY . ./ - to copy everything from our build context; in our case
that means the project files (.cs files etc.), because we run the docker build command with the project directory set as the build context.
Then with a RUN instruction - RUN dotnet publish -c Release -o out - we simply run dotnet publish in our workdir (/app)
directory inside the image, with the -c Release -o out parameters. This dotnet command
compiles our app with the Release configuration and publishes the results to the out directory (in our case /app/out).
We can compile the source because we based this image on the developer SDK.
Docker multi-stage build
Once again we see a FROM instruction, which sets the image we base our image on…
How is it possible to specify it again, with a different base? It's a fairly fresh Docker feature (since Docker 17.05) called multi-stage builds.
When we use the FROM keyword again, we mean that the previous image specified above is temporary, and was used only to serve some purpose.
In our case it was made only to compile our application - that's why we used the SDK as the base image. Now we specify the base image again, and this time
we are preparing our real image - the one which will be deployed to production. This one is not based on the SDK, only on the production runtime,
which results in a smaller size. We will just copy our compiled app from the temporary image. So again we set the working directory to the /app
directory, and then we copy our binaries - COPY --from=build-env /app/out . - which means: copy files from /app/out in the build-env image
(that's why we gave it a name in the first line) to the current working directory (/app).
Dockerfile ENTRYPOINT instruction
The very last instruction in this Dockerfile is ENTRYPOINT, which (in simple terms) specifies a command that will be executed when the container starts.
So in our case - ENTRYPOINT ["dotnet", "PROJECT_NAME.dll"] - Docker will run dotnet with the PROJECT_NAME.dll parameter (which should of course be
replaced with our project name) to start our app.
Docker build command
With such a Dockerfile, we are told to run the docker build -t aspnetapp . command in the project directory (where the Dockerfile is stored).
The -t name (--tag name) option is not mandatory - it tags the image (names it and optionally gives it a tag in 'name:tag' format), so don't focus on it,
and look at this command as docker build ., because the important thing comes after the options - the build context parameter.
The build context is a path on the host machine which will be accessible to Dockerfile instructions during the image build.
In our case it's the . path, which means that the directory where we run this command is passed as the build context.
Because we are told to run this command in the project directory (where the .csproj file is stored), our project files are passed as the build context.
Docker run command
The docker run command creates a container from an image.
An image is a read-only manual that Docker uses to create a container, and a container is a working virtual machine where our app lives.
We can think about it this way: an image is like a class in object-oriented programming, and a container is like an instance created from this class.
So we can create as many containers (instances) as we want, and it doesn't affect the image (class) - the image is only necessary to let Docker know how to create the container.
We are told to run it this way: docker run -d -p 8080:80 --name myapp aspnetapp
Without the --detach (-d) option we would see the app's console output from the container.
With the --publish (-p) option we bind the container's port(s) to the host (by default with TCP, but you can specify UDP and SCTP as well).
With the --name option we assign a name to the container (without this option Docker will choose some funny name for us).
At the end we pass the image name, which Docker will read to create the container. Because we named our image aspnetapp, we use this name here.
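Once the container is running, a quick sanity check might look like this (a hypothetical verification, assuming the app listens on port 80 inside the container as above):

```shell
docker ps                     # the container named myapp should be listed as running
curl http://localhost:8080    # requests to host port 8080 reach container port 80
```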
Proper Docker commands
Once we understand what happens in the basic example, let's see how to change it to make it work when our project has references to other
projects in the solution.
The problem is, of course, that we run the docker build command from the project directory, passing the . path as the build context.
This means that only files from this directory will be accessible while building the image, and the dependent projects are of course in other
directories. We have several options to fix this. We can move the Dockerfile one level up (to the solution directory) and run docker build from there.
But it's recommended to keep the Dockerfile in the project directory, so that the solution can contain more than one Dockerfile (for different projects).
You could also run docker build as before (from the project directory), but change the build context path to one level up (..).
In my view the most elegant is the third solution: run docker build from the solution directory, pass . as the build context, and specify which Dockerfile we want
to read with the --file (-f) option, like this: docker build -f PROJECT_DIRECTORY/Dockerfile -t IMAGE_NAME .
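The three options can be summarised as follows (PROJECT_DIRECTORY and IMAGE_NAME are placeholders):

```shell
# Option 1: Dockerfile moved up to the solution directory, run from there
docker build -t IMAGE_NAME .

# Option 2: run from the project directory, but point the build context one level up
docker build -t IMAGE_NAME ..

# Option 3 (used here): run from the solution directory,
# selecting the project's Dockerfile explicitly
docker build -f PROJECT_DIRECTORY/Dockerfile -t IMAGE_NAME .
```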
How to adjust Dockerfile
Next we need to adjust the Dockerfile, because the one from the official example assumes that the build context is the project directory.
My version looks like this:
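The following is a reconstruction based on the description below; the runtime image in the final stage is my assumption, and PROJECT_NAME stands for the directory containing the .csproj file:

```dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
WORKDIR /app

# Build context is now the solution directory, so this copies all projects
COPY . ./

# Publish the selected project; restore runs implicitly as part of publish
RUN dotnet publish PROJECT_NAME -c Release -o out

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
# The relative -o path is resolved against the project directory,
# so the output lands in /app/PROJECT_NAME/out
COPY --from=build-env /app/PROJECT_NAME/out .
ENTRYPOINT ["dotnet", "PROJECT_NAME.dll"]
```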
I have skipped restoring NuGet packages as a separate step to simplify things; restore is included in dotnet publish, and if it fails due to a NuGet failure,
the error message is legible. But if you have many NuGet dependencies you may want a separate step, because then Docker treats it as a distinct layer and reuses it
if no csproj file has changed, which gives a shorter build time. In my case restoring NuGet packages is fast enough to skip it, but remember about this if you have a long build time (a big Restore completed in… time).
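If you do want that separate restore layer, one common pattern is to copy only the solution and project files before restoring (the project names here are hypothetical; list your own):

```dockerfile
# Copy only the files needed for restore, so this layer is reused
# as long as no .sln or .csproj file changes
COPY *.sln ./
COPY WebApp/*.csproj WebApp/
COPY SharedLib/*.csproj SharedLib/
RUN dotnet restore

# Now copy the rest of the sources and publish
COPY . ./
RUN dotnet publish WebApp -c Release -o out
```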
So we copy all projects (because the build context is now the solution directory) to the /app directory inside the container.
Next, from the /app workdir we run the dotnet publish command, specifying which project to compile -
RUN dotnet publish PROJECT_NAME -c Release -o out - where PROJECT_NAME is the name of the directory with the .csproj file inside.
The other instructions stay untouched, with one small change: when copying the compiled app from the temporary image, this time we need to include the project name in the path: COPY --from=build-env /app/PROJECT_NAME/out .
Where to keep .dockerignore file
The official article says to add a .dockerignore file to the project directory, to make the build context as small as possible, which of course is
reasonable. But the Docker CLI looks for the .dockerignore file in the root directory of the build context, so now we need to move it to the solution
directory. In my view that's even better, because we don't need to create and maintain separate .dockerignore files for each project -
we keep one for all of them. Mine currently looks like this:
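A typical .dockerignore for a .NET solution might look something like this (a sketch, not the author's exact file; adjust it to your repository):

```
**/bin/
**/obj/
**/out/
.git/
.vs/
Dockerfile*
.dockerignore
```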
When I had to dockerize a .NET Core app for the first time, I just took the Dockerfile from the mentioned article, copy-pasted the Docker commands,
and when I faced an obstacle I tried to solve it without analysing how Docker works. After wasting some time this way, I wasted time again - trying
to just copy-paste a solution from the internet, again without analysing what I was doing, and again without success. Then I learned (once again in my life…) that haste doesn't save
time, but does the opposite - wastes it.
Because I didn't find a proper article or tutorial to start with, I went through the official documentation and manuals, which are written nicely but have too many
details for beginners.
This article shows the essentials of the analysis I did and explains the basics in plain language. I hope it is a good place to start writing a proper Dockerfile,
without getting stuck on a situation like the one presented here - a .NET project with references to other projects in the solution - or any other obstacle.
What can you do when you have already containerized your app, but need to use some dependent system, for example a database?
You can compose them together with docker-compose, which I have described in the next article, with a simple .NET Core app and a MySQL database as an example composed system.
Don't hesitate to write a comment if this was helpful for you, or to share it on Facebook or Twitter :)