By design, Docker containers can connect to the outside world, but the outside world cannot connect to a container directly.
There may be many cases where you need to reach a container from outside, such as when testing an application or serving a website.
To make that possible, you have to expose the container's ports. Docker ports are exposed by mapping a container port to an external port on the host.
Today we’ll see how to expose docker ports to make them accessible from the internet.
How to expose docker ports to access your containers
Suppose you want to run an Nginx web server in a Docker container and access it from an outside network.
You can pull the nginx image and start a container, but you cannot reach the container directly from outside; only the Docker host's IP is reachable.
This is because Docker containers sit on an internal network, with IP addresses that are reachable only from the Docker host itself.
So, the solution is to bind the container's internal port 80 to a port on the host machine, say 9090.
This can be done using the '-p' option with the 'docker run' command to bind the port when launching the container:
docker run -d -p 9090:80 -t nginx
This will create a container from the 'nginx' image and bind the container's port 80 to the host machine's port 9090. You can verify this using the 'docker ps' command:

[Image: Create docker container with port binding]
To see the port bindings of a specific container, use the ‘docker inspect [container-id]’ command:

[Image: Port bindings of container]
Once the port is bound and the container is up and running, the service inside the container can be accessed using the host machine's IP, as http://host-ip-address:9090/ .
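With the container from the earlier command running, the mapping can be checked from the host itself (this assumes 'curl' is installed and that host port 9090 is the one published):

```shell
# Request the default Nginx page through the published host port; the
# same URL works remotely with the host's IP in place of localhost.
curl -I http://localhost:9090/
```

If the mapping works, Nginx replies with an HTTP status line and its response headers.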
How to expose multiple docker ports
We saw how to map one container port to one host port. But this one-to-one mapping may not be efficient in a multi-container setup.
In such instances, it is possible to map a range of ports in the docker host to a container port, using the option:
docker run -d -p 7000-8000:4000 web-app
This would bind port 4000 in the container to one available port in the 7000-8000 range on the host.
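Since the host port is picked from the range at launch, you can ask Docker which one was chosen ('web-app' above is a placeholder image name; substitute your own container's ID or name below):

```shell
# Show the host address and port mapped to container port 4000;
# the output has the form host-ip:host-port, e.g. 0.0.0.0:7003
docker port <container-id> 4000
```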
To record the ports that the container listens on, the 'EXPOSE' instruction can be set in the Dockerfile. Note that 'EXPOSE' only documents the port; it does not publish it by itself:
EXPOSE <CONTAINERPORT>
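As a minimal sketch, a Dockerfile for an Nginx-based image could document its listening port as follows (the contents here are illustrative):

```dockerfile
FROM nginx
# Document that the server listens on port 80. This is metadata only;
# publishing the port still requires -p or -P at 'docker run' time.
EXPOSE 80
```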
Restrict the port binding to single interface
By default, the '-p' flag binds the specified port on all interfaces of the host machine. This may not be desirable in every scenario.
In some cases, you may want to restrict the port binding to a single interface, say localhost only.
This can be done by mapping the container port to the host port as follows:
docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT -t image
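For example, to publish the Nginx container's port 80 only on the loopback interface (host port 8080 is an arbitrary choice here):

```shell
# Bind container port 80 to 127.0.0.1:8080 on the host; the service
# will be reachable from the host itself but not from other machines.
docker run -d -p 127.0.0.1:8080:80 nginx
```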
The command ‘docker port [container-id]’ can be used to see the port mapping of a particular container.

[Image: Verify container port mapping]
Expose docker ports randomly at run time
To map each port exposed in the image (via 'EXPOSE') to a random available port on the Docker host, the '-P' option can be used with 'docker run':
docker run -d -P webapp
To see the port mapping for this container, you can use ‘docker ps’ command after creating the container.

[Image: Docker container ports]
Summary
Today we saw how to make your Docker containers accessible from outside by exposing and publishing their ports.
But selecting the port mapping is not a trivial task and should be done with care to avoid conflicts. The firewall rules on the host would also have to be configured.
The ports assigned to containers would vary depending on the applications running in them and the business purpose served.
If you’d like to know how to manage your Docker system and to get the best out of them for your business, we’d be happy to talk to you.
The post How to expose docker ports to make your containers externally accessible appeared first on Bobcares.