Verifying Envoy as a gRPC proxy gateway
0x01 Environment preparation
- Any CentOS 8 environment with minikube installed; see "centos8 install minikube" in the references for detailed steps
- Install the Istio components with istioctl, using the default profile directly (an example command follows this list)
- An application to verify with (anything works; this experiment uses an echo service that listens on port 50051)
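A minimal install sketch, assuming istioctl (version 1.7.3 here) is already on the PATH:

# Install the Istio control plane and ingress gateway with the default profile
istioctl install --set profile=default -y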
0x02 Application deployment configuration
deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: echo:latest
        imagePullPolicy: IfNotPresent # minikube uses the local image (see the build step after this manifest)
        ports:
        - containerPort: 50051
          name: grpc
        volumeMounts:
        - mountPath: /data/log
          name: log
      volumes:
      - hostPath:
          path: /data/log
          type: ""
        name: log
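Because imagePullPolicy is IfNotPresent, the echo image must already exist on the minikube node. One way to get it there (a sketch, assuming the image is built from a local Dockerfile) is to build it directly against minikube's Docker daemon:

# Point the local docker client at minikube's Docker daemon
eval $(minikube docker-env)
# Build the image inside minikube so the Deployment can use it without pulling
docker build -t echo:latest .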
service
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: echo
  ports:
  - protocol: TCP
    port: 50051
    targetPort: 50051
    name: grpc # this must be named grpc, otherwise the gRPC service reports network errors
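Apply both manifests; the file names below are only assumptions for however the YAML above was saved:

kubectl apply -f echo-deployment.yaml
kubectl apply -f echo-service.yaml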
0x03 Modifying the ingress gateway
- Export the Deployment resource
kubectl get deployment -n istio-system istio-ingressgateway -o yaml > istio-ingressgateway-deployment.yaml
- Edit the exported istio-ingressgateway-deployment.yaml and add a listening port under the istio-proxy container
name: istio-proxy
ports:
- containerPort: 15021
  protocol: TCP
- containerPort: 8080
  protocol: TCP
- containerPort: 9090   # added
  protocol: TCP         # added
- containerPort: 8443
  protocol: TCP
- containerPort: 15443
  protocol: TCP
- containerPort: 15090
  name: http-envoy-prom
  protocol: TCP
- Re-apply istio-ingressgateway-deployment.yaml
kubectl apply -f istio-ingressgateway-deployment.yaml
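To confirm that the istio-proxy container now exposes the new port, something like the following can be used (a sketch; the output should include 9090):

kubectl -n istio-system get deployment istio-ingressgateway \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="istio-proxy")].ports[*].containerPort}'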
- Export the Service resource
kubectl get svc -n istio-system istio-ingressgateway -o yaml > istio-ingressgateway-svc.yaml
- Add the port mapping
externalTrafficPolicy: Cluster
ports:
- name: status-port
  nodePort: 30840
  port: 15021
  protocol: TCP
  targetPort: 15021
- name: http2
  nodePort: 32457
  port: 80
  protocol: TCP
  targetPort: 8080
- name: grpc        # added
  nodePort: 31399   # added (port exposed to external clients)
  port: 9090        # added (Service port)
  protocol: TCP     # added
  targetPort: 9090  # added (port the gateway Deployment listens on)
- name: https
  nodePort: 30524
  port: 443
  protocol: TCP
  targetPort: 8443
- name: tls
  nodePort: 30027
  port: 15443
  protocol: TCP
  targetPort: 15443
- Re-apply the Service
kubectl apply -f istio-ingressgateway-svc.yaml
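The new mapping can be verified from the Service; the PORT(S) column should show something like 9090:31399/TCP:

kubectl -n istio-system get svc istio-ingressgateway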
0x04 Configuring Istio
- Add a Gateway resource, echo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grpc-echo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 9090
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
kubectl apply -f echo-gateway.yaml
- Add a VirtualService resource, echo-virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grpc-echo
spec:
  hosts:
  - "*"
  gateways:
  - grpc-echo-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: echo.default.svc.cluster.local
        port:
          number: 50051
kubectl apply -f echo-virtualservice.yaml
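To check that the ingress gateway Envoy actually got a listener on port 9090 and a route to the echo service, istioctl proxy-config can be queried (a sketch; the pod name is looked up first because it differs per cluster):

GW_POD=$(kubectl -n istio-system get pod -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}')
istioctl proxy-config listeners "$GW_POD" -n istio-system --port 9090
istioctl proxy-config routes "$GW_POD" -n istio-system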
0x05 Sending requests
Use the BloomRPC tool to send the request. In this experiment, CentOS 8 runs in a VirtualBox VM at 192.168.56.102, so BloomRPC connects to 192.168.56.102:31399.
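As a command-line alternative to BloomRPC, grpcurl can issue the same call; the service and method name (echo.EchoService/Echo) and the request body here are placeholders, since they depend on the actual echo proto definition:

grpcurl -plaintext -d '{"message": "hello"}' 192.168.56.102:31399 echo.EchoService/Echo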
Problems
- The echo server reports: transport: http2Server.HandleStreams received bogus greeting from client. The gRPC transport layer received an unexpected first packet; the port in the echo Service must be named grpc (see the alternative below).
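Besides the grpc name prefix, Kubernetes 1.18+ also offers the appProtocol field for Istio protocol selection; this is only an alternative sketch, and on Istio 1.7.3 the port name prefix used above remains the safer choice:

ports:
- protocol: TCP
  port: 50051
  targetPort: 50051
  name: grpc
  appProtocol: grpc  # explicit protocol hint on newer Kubernetes/Istio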
Tools
BloomRPC serves as the client outside the cluster for sending requests. The cluster is built with minikube; the Istio version is 1.7.3.