Preface:

Recommended free video on Docker basics: 【狂神说Java】Docker最新超详细版教程通俗易懂_哔哩哔哩_bilibili

Docker Networking in Detail

Understanding docker0

# Remove all existing containers and images
docker rm -f $(docker ps -aq)
docker rmi -f $(docker images -aq)

Test

Three networks

How does Docker handle network access for containers?

# Run a Tomcat container
docker run -d -P --name tomcat01 tomcat:8.0

# Check the IP Docker assigned to the container
[root@jokerdig ~]# docker exec -it tomcat01 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
54: eth0@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever

# Can the Linux host ping inside the container?
[root@jokerdig ~]# ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
# Yes, the container is reachable from the host
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.017 ms
64 bytes from 172.18.0.2: icmp_seq=3 ttl=64 time=0.017 ms

How it works

Every time a container starts, Docker assigns it an IP address. Once Docker is installed, the host gains a docker0 interface working in bridge mode, implemented with veth-pair technology.

Check the interfaces again

Running ip addr on the host now shows a new interface numbered 55 (ending in @if54), the peer of interface 54 (eth0@if55) that Docker assigned inside the tomcat container above;

Conclusion

These interfaces come in veth-pairs: pairs of virtual device endpoints that always appear in twos, with one end attached to the protocol stack and the two ends connected to each other;
Because of this property, a veth-pair acts as a bridge between virtual network devices;
Connections between OpenStack components, between Docker containers, and within OVS all use veth-pair technology;
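The pairing can be read straight off the ip addr output: in 54: eth0@if55, the number before the colon is the interface's own ifindex and the number after @if is its peer's ifindex on the other side. A small Python sketch of that parsing (the helper name is my own, for illustration):

```python
import re

# Parse one interface heading from `ip addr`, e.g. "54: eth0@if55: <...>".
# For a veth endpoint, the leading number is its own ifindex and the
# number after "@if" is the ifindex of its peer on the other side.
def veth_peer(line):
    m = re.match(r"(\d+): ([^@:]+)@if(\d+):", line)
    if m is None:
        return None  # not a paired interface (e.g. lo)
    return {"ifindex": int(m.group(1)),
            "name": m.group(2),
            "peer_ifindex": int(m.group(3))}

info = veth_peer("54: eth0@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500")
print(info)  # {'ifindex': 54, 'name': 'eth0', 'peer_ifindex': 55}
```

So the container's eth0 (index 54) and the host's veth interface (index 55) point at each other, which is exactly the pairing observed above.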

Test whether two containers can reach each other

# Start a second Tomcat container
[root@jokerdig ~]# docker run -d -P --name tomcat02 tomcat:8.0
7f1045542896c51fd31d5024ca777f41f5c92155a122e9db30c0c2a3f1fc710c
# Ping tomcat01 from tomcat02
[root@jokerdig ~]# docker exec -it tomcat02 ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
# Still reachable
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.080 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 172.18.0.2: icmp_seq=3 ttl=64 time=0.041 ms


Summary

  • tomcat01 and tomcat02 share the same virtual bridge, docker0. Unless a network is specified, every container is routed through docker0, and Docker assigns it an available IP by default;

  • All of Docker's network interfaces are virtual, and virtual interfaces forward traffic efficiently (e.g., transferring files over the internal network);

  • When a container stops, its veth pair is removed (docker0 itself remains);
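The default address assignment described above can be illustrated with Python's ipaddress module. A sketch: the 172.18.0.0/16 subnet is taken from the ip addr output earlier (fresh installs commonly get 172.17.0.0/16 instead), and Docker hands out the first free addresses in order:

```python
import ipaddress

# docker0's subnet, as seen in the earlier `ip addr` output
net = ipaddress.ip_network("172.18.0.0/16")

hosts = net.hosts()          # usable addresses, in order
gateway = next(hosts)        # the first one belongs to docker0 itself
tomcat01 = next(hosts)       # next free address -> first container
tomcat02 = next(hosts)       # and so on for later containers

print(gateway, tomcat01, tomcat02)  # 172.18.0.1 172.18.0.2 172.18.0.3
```

This matches the observed addresses: docker0 at 172.18.0.1, tomcat01 at 172.18.0.2, tomcat02 at 172.18.0.3.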

Linking containers

Can containers ping each other by container name?

# They cannot
[root@jokerdig ~]# docker exec -it tomcat02 ping tomcat01
ping: unknown host tomcat01

Solution

# Start tomcat03, linked to tomcat02
[root@jokerdig ~]# docker run -d -P --name tomcat03 --link tomcat02 tomcat:8.0
4efdd62046a1de705292581f12cadbf8802144994a914b75ea481711990d3ee7

# Ping tomcat02 from tomcat03
[root@jokerdig ~]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.18.0.3) 56(84) bytes of data.
# It works
64 bytes from tomcat02 (172.18.0.3): icmp_seq=1 ttl=64 time=0.082 ms
64 bytes from tomcat02 (172.18.0.3): icmp_seq=2 ttl=64 time=0.035 ms
64 bytes from tomcat02 (172.18.0.3): icmp_seq=3 ttl=64 time=0.028 ms

# The reverse direction fails; it would need the same configuration
[root@jokerdig ~]# docker exec -it tomcat02 ping tomcat03
ping: unknown host tomcat03

In fact, tomcat03 simply has tomcat02 written into its local configuration

# Inspect the hosts file
docker exec -it tomcat03 cat /etc/hosts

Essence: --link just adds the entry 172.18.0.3 tomcat02 7f1045542896 to the container's hosts file;
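Why the link only works one way falls out of that hosts entry. A minimal sketch of hosts-file name resolution, assuming file content like what --link writes into tomcat03 (the helper name is mine):

```python
# Roughly what tomcat03's /etc/hosts contains after --link tomcat02
HOSTS = """\
127.0.0.1\tlocalhost
172.18.0.3\ttomcat02 7f1045542896
"""

def resolve(name, hosts_text):
    """Return the first IP whose host names include `name`, else None."""
    for line in hosts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and name in fields[1:]:
            return fields[0]
    return None

# tomcat03 resolves tomcat02 via its own hosts file...
print(resolve("tomcat02", HOSTS))  # 172.18.0.3
# ...but tomcat02's hosts file has no tomcat03 entry, so the reverse fails
print(resolve("tomcat03", HOSTS))  # None
```

This is exactly "unknown host tomcat03" seen above: the mapping was written into tomcat03 only, not into tomcat02.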

The docker0 limitation: it does not support access by container name!

Using --link is no longer recommended;

Use custom networks instead!

Custom networks

List all Docker networks

# List all Docker networks
[root@jokerdig ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
2b71546dab0c bridge bridge local
40685a830626 host host local
f439a7c39635 none null local

Network drivers

  • bridge: bridged via docker0 (the default);
  • none: no networking configured;
  • host: share the host's network stack;
  • container: share another container's network namespace (rarely used, very limited)

Test a custom network

# Stop all containers
[root@jokerdig ~]# docker stop $(docker ps -q)
# Containers started without a --net option use docker0 by default

# Create a custom network
# --driver  network driver (bridge here)
# --subnet  subnet
# --gateway gateway
[root@jokerdig ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
cbff82527e3adc9b373d6cd20da32f881d614a8cd041f4a4e4b7d4c59919ec6a
[root@jokerdig ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
2b71546dab0c bridge bridge local
40685a830626 host host local
cbff82527e3a mynet bridge local
f439a7c39635 none null local

# Inspect the network we just created
[root@jokerdig ~]# docker network inspect mynet
...
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
...

# Start tomcat-net-01 and tomcat-net-02 on the custom network
[root@jokerdig ~]# docker run -d -P --name tomcat-net-01 --net mynet tomcat:8.0
3dde8c5ea1428ea822a5abb09f3af23f14e4623362135ec3e1f98db474985585
[root@jokerdig ~]# docker run -d -P --name tomcat-net-02 --net mynet tomcat:8.0
0640d75202c6b01aa52baabec084c5887556a2653eb7df700fe967edc9c076e6

# Inspect the network again; tomcat-net-01 and tomcat-net-02 now appear
"Containers": {
"0640d75202c6b01aa52baabec084c5887556a2653eb7df700fe967edc9c076e6": {
# tomcat-net-02
"Name": "tomcat-net-02",
"EndpointID": "7aff7b42878fa8feaf098407fa21f80c3df93322c882c56a597ac3fdc83a3356",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
},
"3dde8c5ea1428ea822a5abb09f3af23f14e4623362135ec3e1f98db474985585": {
# tomcat-net-01
"Name": "tomcat-net-01",
"EndpointID": "1d42528dc20f4f436a0eb436030a6e71bb6666367edf98faac1102ff373e4a22",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
}
},

# Ping by IP address
[root@jokerdig ~]# docker exec -it tomcat-net-01 ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.085 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 192.168.0.3: icmp_seq=3 ttl=64 time=0.033 ms
# Ping by container name
[root@jokerdig ~]# docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.044 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=3 ttl=64 time=0.045 ms

With a custom network, Docker maintains the name-to-IP mapping for us; this is the recommended approach.

Connecting networks

# Start tomcat01 and tomcat02 (on the default docker0 network)
[root@jokerdig ~]# docker run -d -P --name tomcat01 tomcat:8.0
47bfa9fbde912971d77a90fe942858cd6bdab19042d437b61617c62c94f9305a
[root@jokerdig ~]# docker run -d -P --name tomcat02 tomcat:8.0
c42ee965158c0b11ab25f9b45fef287574cecce1162e576d88ba7ca16cff7de9

# List running containers
[root@jokerdig ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c42ee965158c tomcat:8.0 "catalina.sh run" About a minute ago Up About a minute 0.0.0.0:49164->8080/tcp, :::49164->8080/tcp tomcat02
47bfa9fbde91 tomcat:8.0 "catalina.sh run" About a minute ago Up About a minute 0.0.0.0:49163->8080/tcp, :::49163->8080/tcp tomcat01
0640d75202c6 tomcat:8.0 "catalina.sh run" 4 hours ago Up 4 hours 0.0.0.0:49162->8080/tcp, :::49162->8080/tcp tomcat-net-02
3dde8c5ea142 tomcat:8.0 "catalina.sh run" 4 hours ago Up 4 hours 0.0.0.0:49161->8080/tcp, :::49161->8080/tcp tomcat-net-01

# Ping tomcat-net-01 from tomcat01
[root@jokerdig ~]# docker exec -it tomcat01 ping tomcat-net-01
ping: unknown host tomcat-net-01

# A container on docker0 can be connected to mynet
# Connect tomcat01 to mynet
[root@jokerdig ~]# docker network connect mynet tomcat01

# Inspect mynet; tomcat01 has been added directly to it
[root@jokerdig ~]# docker network inspect mynet
.....
"47bfa9fbde912971d77a90fe942858cd6bdab19042d437b61617c62c94f9305a": {
"Name": "tomcat01",
"EndpointID": "a6a6e022b3d816643f4ee91a3cb86b7f7a605b4909dc6471f48613e49ce65392",
"MacAddress": "02:42:c0:a8:00:04",
"IPv4Address": "192.168.0.4/16",
"IPv6Address": ""
}
....

# tomcat01 can now ping tomcat-net-01
[root@jokerdig ~]# docker exec -it tomcat01 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.087 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.037 ms

# tomcat02 was never connected, so it still cannot resolve the name
[root@jokerdig ~]# docker exec -it tomcat02 ping tomcat-net-01
ping: unknown host tomcat-net-01

Deploying a Redis cluster

Deployment layout: three masters, each with one replica.

Preparation

# Remove all containers
docker rm -f $(docker ps -aq)

Steps

# Create the network
docker network create redis --subnet 172.38.0.0/16

# Generate six Redis node configs with a loop
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done

# Run the six Redis containers
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
-v /mydata/redis/node-1/data:/data \
-v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
-v /mydata/redis/node-2/data:/data \
-v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
-v /mydata/redis/node-3/data:/data \
-v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
-v /mydata/redis/node-4/data:/data \
-v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
-v /mydata/redis/node-5/data:/data \
-v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
-v /mydata/redis/node-6/data:/data \
-v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

# Verify that all six containers are running
[root@jokerdig ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ea6bef5f455 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 3 seconds ago Up 2 seconds 0.0.0.0:6376->6379/tcp, :::6376->6379/tcp, 0.0.0.0:16376->16379/tcp, :::16376->16379/tcp redis-6
ffb0ee433817 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 9 seconds ago Up 8 seconds 0.0.0.0:6375->6379/tcp, :::6375->6379/tcp, 0.0.0.0:16375->16379/tcp, :::16375->16379/tcp redis-5
23bf58f2416c redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 14 seconds ago Up 13 seconds 0.0.0.0:6374->6379/tcp, :::6374->6379/tcp, 0.0.0.0:16374->16379/tcp, :::16374->16379/tcp redis-4
978bfae5277d redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 21 seconds ago Up 20 seconds 0.0.0.0:6373->6379/tcp, :::6373->6379/tcp, 0.0.0.0:16373->16379/tcp, :::16373->16379/tcp redis-3
993778a478e1 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 29 seconds ago Up 28 seconds 0.0.0.0:6372->6379/tcp, :::6372->6379/tcp, 0.0.0.0:16372->16379/tcp, :::16372->16379/tcp redis-2
b873d0a4b077 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 41 seconds ago Up 40 seconds 0.0.0.0:6371->6379/tcp, :::6371->6379/tcp, 0.0.0.0:16371->16379/tcp, :::16371->16379/tcp redis-1

# Enter redis-1
[root@jokerdig ~]# docker exec -it redis-1 /bin/sh
/data #

# Create the Redis cluster
/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: ebd3b94b5d4b5a135898b7a441974edfcb22f10d 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
M: 67d2ae9c42bcd35507058b57cc18ed2486555808 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
M: 6c2f9917c3ec0404519d3a902e20b8b59f40df5f 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
S: 273c7e8abba7c8cb6eb748a23bc64d124c2f98ba 172.38.0.14:6379
replicates 6c2f9917c3ec0404519d3a902e20b8b59f40df5f
S: ed0b4973c7b6a203fe5d94721f9e4fe205b1de9f 172.38.0.15:6379
replicates ebd3b94b5d4b5a135898b7a441974edfcb22f10d
S: 59c6d4f92968105eac6d3addf85f1511eeee2d8f 172.38.0.16:6379
replicates 67d2ae9c42bcd35507058b57cc18ed2486555808
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: ebd3b94b5d4b5a135898b7a441974edfcb22f10d 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 67d2ae9c42bcd35507058b57cc18ed2486555808 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 273c7e8abba7c8cb6eb748a23bc64d124c2f98ba 172.38.0.14:6379
slots: (0 slots) slave
replicates 6c2f9917c3ec0404519d3a902e20b8b59f40df5f
M: 6c2f9917c3ec0404519d3a902e20b8b59f40df5f 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: ed0b4973c7b6a203fe5d94721f9e4fe205b1de9f 172.38.0.15:6379
slots: (0 slots) slave
replicates ebd3b94b5d4b5a135898b7a441974edfcb22f10d
S: 59c6d4f92968105eac6d3addf85f1511eeee2d8f 172.38.0.16:6379
slots: (0 slots) slave
replicates 67d2ae9c42bcd35507058b57cc18ed2486555808
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
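The bash loop at the top of the listing just stamps a per-node announce IP into an otherwise identical template. The same rendering in Python, for clarity (file writing omitted; this mirrors, not replaces, the loop above):

```python
# Template matching the redis.conf written by the bash loop above
TEMPLATE = """\
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1{port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
"""

# node-1 .. node-6 announce 172.38.0.11 .. 172.38.0.16
configs = {f"node-{p}": TEMPLATE.format(port=p) for p in range(1, 7)}
print(configs["node-1"].splitlines()[5])  # cluster-announce-ip 172.38.0.11
```

Each node listens on 6379 inside its container; only the announce IP differs, which is why the containers are pinned to fixed addresses with --ip.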

Quick test

# Open a cluster-aware Redis client
/data # redis-cli -c
# Check cluster info
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3 # cluster size is 3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:193
cluster_stats_messages_pong_sent:187
cluster_stats_messages_sent:380
cluster_stats_messages_ping_received:182
cluster_stats_messages_pong_received:193
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:380
# List the nodes
127.0.0.1:6379> cluster nodes
67d2ae9c42bcd35507058b57cc18ed2486555808 172.38.0.12:6379@16379 master - 0 1664271554000 2 connected 5461-10922
ebd3b94b5d4b5a135898b7a441974edfcb22f10d 172.38.0.11:6379@16379 myself,master - 0 1664271551000 1 connected 0-5460
273c7e8abba7c8cb6eb748a23bc64d124c2f98ba 172.38.0.14:6379@16379 slave 6c2f9917c3ec0404519d3a902e20b8b59f40df5f 0 1664271554000 4 connected
6c2f9917c3ec0404519d3a902e20b8b59f40df5f 172.38.0.13:6379@16379 master - 0 1664271554000 3 connected 10923-16383
ed0b4973c7b6a203fe5d94721f9e4fe205b1de9f 172.38.0.15:6379@16379 slave ebd3b94b5d4b5a135898b7a441974edfcb22f10d 0 1664271553000 5 connected
59c6d4f92968105eac6d3addf85f1511eeee2d8f 172.38.0.16:6379@16379 slave 67d2ae9c42bcd35507058b57cc18ed2486555808 0 1664271554722 6 connected
# Set a value; note that it is handled by 172.38.0.12
127.0.0.1:6379> set name jokerdig
-> Redirected to slot [5798] located at 172.38.0.12:6379
OK
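The redirect to slot 5798 comes from Redis Cluster's key hashing: slot = CRC16(key) mod 16384, where CRC16 is the CCITT/XMODEM variant. A sketch (hash-tag handling for keys containing {...} is omitted for brevity):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): poly 0x1021, init 0, as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Keys are hashed over 16384 slots, split among the masters
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("name"))  # 5798, matching the redirect above
```

Slot 5798 falls in the 5461-10922 range owned by the master at 172.38.0.12, which is why the client was redirected there.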

Packaging a Spring Boot microservice as a Docker image

  1. Create a Spring Boot project

    Create a controller package and write a test class

    package com.jokerdig.controller;

    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    /**
     * @author Joker大雄
     * @date 2022/9/27 - 17:50
     **/
    @RestController
    public class TestController {

        @RequestMapping("/test")
        public String test(){
            return "Hey,Joker";
        }
    }

    Local test

    Visit: http://localhost:8080/test

    Hey,Joker
  2. Package the application

  3. Write the Dockerfile

    Install the Docker plugin in IDEA

    Create a Dockerfile in the project root and copy the freshly built jar there

    # java:8 is deprecated; the official recommendation is openjdk
    FROM openjdk

    COPY *.jar /app.jar

    CMD ["-server.port=8080"]

    EXPOSE 8080

    ENTRYPOINT ["java","-jar","/app.jar"]

    Create a directory on the server and upload the Dockerfile and the jar;

  4. Build the image

    Check the files uploaded to the server

    [root@jokerdig idea]# ls
    Dockerfile docker-package-0.0.1-SNAPSHOT.jar

    Build the Docker image

    [root@jokerdig idea]# docker build -t springboot-docker-test .

    Sending build context to Docker daemon 17.64MB
    Step 1/5 : FROM openjdk
    latest: Pulling from library/openjdk
    051f419db9dd: Pull complete
    aa51c6010a14: Pull complete
    eb80f8e66e0b: Pull complete
    Digest: sha256:14a8f0b5f29cf814e635c03a20be91bc498b8f52e7ea59ee1fa59189439e8c26
    Status: Downloaded newer image for openjdk:latest
    ---> 116c4fdea277
    Step 2/5 : COPY *.jar /app.jar
    ---> e5e6639fa00b
    Step 3/5 : CMD ["-server.port=8080"]
    ---> Running in 71645b3695b7
    Removing intermediate container 71645b3695b7
    ---> 5d1f5f7e56db
    Step 4/5 : EXPOSE 8080
    ---> Running in a916aa96385f
    Removing intermediate container a916aa96385f
    ---> 9942d5834227
    Step 5/5 : ENTRYPOINT ["java","-jar","/app.jar"]
    ---> Running in 9d13df6a03d3
    Removing intermediate container 9d13df6a03d3
    ---> bc280c083c40
    Successfully built bc280c083c40
    Successfully tagged springboot-docker-test:latest
  5. Run it

    # Start the container
    [root@jokerdig idea]# docker run -d -P --name springboot-test springboot-docker-test
    7638f26d95521999d38a9ce1fa61ebc588b3dc7c223e7b7dc7efbbd692b512e0

    # Check that it is running
    [root@jokerdig idea]# docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    7638f26d9552 springboot-docker-test "java -jar /app.jar …" 9 seconds ago Up 9 seconds 0.0.0.0:49166->8080/tcp, :::49166->8080/tcp springboot-test

    # Request the endpoint; it works
    [root@jokerdig idea]# curl localhost:49166/test
    Hey,Joker[root@jokerdig idea]#
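With -P, Docker publishes the exposed port on a random high host port, which is what the PORTS column (0.0.0.0:49166->8080/tcp) encodes. A throwaway parser for that format, matching the docker ps output above (the function name is mine):

```python
import re

# Parse one docker ps PORTS entry like "0.0.0.0:49166->8080/tcp"
def parse_mapping(entry):
    m = re.match(r"(.+):(\d+)->(\d+)/(\w+)", entry)
    host_ip, host_port, container_port, proto = m.groups()
    return {"host_ip": host_ip,
            "host_port": int(host_port),
            "container_port": int(container_port),
            "proto": proto}

mapping = parse_mapping("0.0.0.0:49166->8080/tcp")
print(mapping["host_port"], mapping["container_port"])  # 49166 8080
```

So curl localhost:49166/test reaches port 8080 inside the container, where the Spring Boot app is listening.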