Kubernetes Application Log Collection: Standard Output
The ELK Stack Logging System
ELK is an acronym for three open-source projects that together provide a complete enterprise-grade logging platform:
Elasticsearch: searches, analyzes, and stores data.
Logstash: collects logs, formats and filters them, and pushes the result to Elasticsearch for storage.
Kibana: data visualization.
Beats: a family of single-purpose data shippers that send data from edge machines to Logstash and Elasticsearch. The most widely used is Filebeat, a lightweight log shipper.
Install Filebeat on every server whose logs you want to collect. Filebeat harvests the log files on that machine and pushes them to Logstash, or bypasses Logstash and writes straight to ES; Kibana then visualizes the logs directly from ES. (Here I use filebeat ---> elasticsearch rather than filebeat ---> logstash ---> elasticsearch.)
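For reference, a minimal filebeat.yml sketch of that direct-to-ES path; the host name assumes the elasticsearch Service created below, and the real configuration lives in the filebeat-config ConfigMap later in this post:

# Minimal sketch only -- see the full ConfigMap in the Filebeat section
filebeat.inputs:
- type: docker              # read logs written by Docker's json-file driver
  containers.ids: ["*"]     # all containers on this node
output.elasticsearch:       # ship directly to ES, no Logstash in between
  hosts: ["elasticsearch.ops:9200"]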
Elasticsearch部署
Step one is to bring up ES and Kibana; Elasticsearch first.
# First create the ops namespace; all the log-collection components live in it
[root@k8s-master ~]# kubectl create ns ops
namespace/ops created
[root@k8s-master ~]# mkdir -p elk
[root@k8s-master elk]# ls
elasticsearch.yaml
[root@k8s-master ~]# cat elasticsearch.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: ops
  labels:
    k8s-app: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      containers:
      - image: elasticsearch:7.9.2
        name: elasticsearch
        resources:
          limits:
            cpu: 2
            memory: 3Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: "discovery.type"    # single-node instance; for a cluster, use a StatefulSet
          value: "single-node"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx2g"  # JVM heap settings
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: elasticsearch-data
        persistentVolumeClaim:
          claimName: es-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-pvc
  namespace: ops
spec:
  storageClassName: "managed-nfs-storage"  # dynamic PV provisioning, so a storage class must be deployed in the cluster
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: ops
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    k8s-app: elasticsearch
# If this is just a lab environment, you can delete the PersistentVolumeClaim and drop data persistence
[root@k8s-master elk]# kubectl get pod,svc -n ops
NAME                                 READY   STATUS    RESTARTS   AGE
pod/elasticsearch-67545b8fcd-pmfl4   1/1     Running   0          5m23s
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/elasticsearch   ClusterIP   10.103.46.115
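As a quick sanity check that ES is up (a sketch; the curlimages/curl image is just one convenient choice):

# Query the cluster health API through the elasticsearch Service
[root@k8s-master elk]# kubectl run es-check -n ops --rm -it --restart=Never \
    --image=curlimages/curl -- curl -s http://elasticsearch:9200/_cluster/health?pretty
# A single-node instance typically reports "status" : "green" or "yellow"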
Kibana Deployment
[root@k8s-master elk]# cat kibana.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: ops
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.9.2
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch.ops:9200  # ES address; elasticsearch.ops is resolved by CoreDNS
        - name: I18N_LOCALE                     # set the UI language to Chinese
          value: zh-CN
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: ops
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui  # references the container port named "ui" (5601) defined above
    nodePort: 30601
  selector:
    k8s-app: kibana
[root@k8s-master elk]# kubectl get pod,svc -n ops
NAME                                 READY   STATUS    RESTARTS   AGE
pod/elasticsearch-67545b8fcd-pmfl4   1/1     Running   0          28m
pod/kibana-55c8979979-vlk2m          1/1     Running   0          13m
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/elasticsearch   ClusterIP   10.103.46.115
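To confirm Kibana is reachable before opening the browser (replace <node-ip> with the address of any node):

# Kibana is exposed on NodePort 30601 on every node; expect an HTTP 200 or a redirect
[root@k8s-master elk]# curl -sI http://<node-ip>:30601 | head -n 1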
The two steps above bring up the Elasticsearch and Kibana platform. Next, use a DaemonSet to run Filebeat and collect each node's logs (the DaemonSet deploys one log-collection pod per Node, harvesting all container logs under /var/lib/docker/containers/).
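For context: Docker's json-file log driver writes each container's stdout/stderr to a <container-id>-json.log file inside that directory, one JSON object per line, which is exactly what Filebeat's docker input type parses. An illustrative line:

# /var/lib/docker/containers/<container-id>/<container-id>-json.log
{"log":"hello from stdout\n","stream":"stdout","time":"2021-09-01T08:00:00.000000000Z"}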
Filebeat Deployment
[root@k8s-master elk]# cat filebeat-kubernetes.yaml 
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: ops
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    output.elasticsearch:
      hosts: ['elasticsearch.ops:9200']
---
# Kubernetes-specific input config: the log type is docker. This enables Filebeat's
# built-in Docker support to collect container logs and process them on the way out.
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: ops
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: ops
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: elastic/filebeat:7.9.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: ops
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: ops
  labels:
    k8s-app: filebeat
The add_kubernetes_metadata processor provides Filebeat's built-in Kubernetes support: it looks up each log's pod in the cluster and enriches the event with metadata such as labels and the namespace. Filebeat is deployed here to collect the standard output of all containers: a ConfigMap stores the configuration, and a hostPath volume mounts the host's log directory so Filebeat can read /var/lib/docker/containers on each node.
[root@k8s-master elk]# kubectl apply -f filebeat-kubernetes.yaml 
configmap/filebeat-config created
configmap/filebeat-inputs created
daemonset.apps/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
[root@k8s-master elk]# kubectl get pod -l k8s-app=filebeat -n ops -o wide
NAME             READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
filebeat-q5gch   1/1     Running   0          2m1s   10.244.36.78   k8s-node1
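A quick way to confirm the Filebeat pods started cleanly and reached ES (a sketch; pod names will differ):

# A successful connection logs a line like:
#   Connection to backoff(elasticsearch(http://elasticsearch.ops:9200)) established
[root@k8s-master elk]# kubectl logs -n ops -l k8s-app=filebeat --tail=20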
Once Filebeat is deployed it can start harvesting the logs under that directory; as long as Filebeat can reach ES, it can write data.
[root@k8s-master elk]# kubectl exec -it filebeat-sfzgg -n ops -- sh
sh-4.2# ls /var/lib/docker/containers
01e8a96a871cfbb5f37babe1e39b5f7e313a9e39edd69304389f8f7592d5c06c
0db974974bb446c83dbea186f063f50a3e4881dfb52538700055455ab13794fe
157e5ded52dbdcae4f9a2a1282088cbde4b4b3bc099765de09589c73919cd7a0
1cc7875a061a9673c92ecdb187a9fe48b0bb0001176e89d5ddb1955a83f3dc24
356d96d52f199c4ebba8d63888e6fa730c9583803057365e750a9af4fb2d4fe5
36c35f3913e880b6ee08ec898c7230113bd10eb65afb115aceaaf3abc5489b2e
3ab20de4bb389a4cf9547bd7114bf429c296f05b4054ddd56f6c86a3bfd720ff
After a successful deployment, Filebeat starts collecting every file under the specified directory.
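You can verify the data landed in ES before switching to Kibana. By default Filebeat writes to per-day indices named filebeat-<version>-<date>; the exec below assumes curl is present in the elasticsearch image, and the date suffix will differ:

# Expect a filebeat-7.9.2-yyyy.MM.dd index with a growing docs.count
[root@k8s-master elk]# kubectl exec -n ops deploy/elasticsearch -- \
    curl -s 'localhost:9200/_cat/indices/filebeat-*?v'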
Viewing the Logs in Kibana
Based on this index you can create an index pattern, including a per-month one.
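Illustrative patterns for Kibana's "Create index pattern" step (pick @timestamp as the time field; the wildcards are examples):

filebeat-7.9.2-*            # all Filebeat indices for this version
filebeat-7.9.2-2021.09.*    # only a single month's indices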
[root@master ~]# kubectl get pod -n kubesphere-logging-system 
NAME                                          READY   STATUS    RESTARTS   AGE
fluent-bit-95hmf                              1/1     Running   0          41m
fluent-bit-hczn6                              1/1     Running   0          41m
fluent-bit-qkw5s                              1/1     Running   0          41m
fluentbit-operator-5576bbdcff-ckdlm           1/1     Running   0          41m
logsidecar-injector-deploy-5b484f575d-f5qh6   2/2     Running   0          41m
logsidecar-injector-deploy-5b484f575d-l4292   2/2     Running   0          41m
In Kibana you can see that the fluent-bit logs have been collected, and that those logs contain errors about being unable to connect to an external es.
You can also filter on fields to sift out exactly the information you want.
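For example, KQL queries in Kibana's search bar can use the fields that add_kubernetes_metadata attaches to every event (the values here are illustrative):

kubernetes.namespace: "ops" and kubernetes.container.name: "filebeat"
kubernetes.labels.app: "nginx-stdout" and stream: "stdout"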
Now create a pod that writes its logs to standard output.
[root@master elk]# cat app-log-stdout.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-log-stdout
spec:
  replicas: 1
  selector:
    matchLabels:
      project: microservice
      app: nginx-stdout
  template:
    metadata:
      labels:
        project: microservice
        app: nginx-stdout
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: app-log-stdout
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    project: microservice
    app: nginx-stdout
[root@master elk]# kubectl get pod,svc
NAME                                 READY   STATUS    RESTARTS   AGE
pod/app-log-stdout-584c76c5d-7wz2z   1/1     Running   0          14m
NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/app-log-stdout   ClusterIP   10.233.13.75
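Before looking in Kibana, you can confirm locally that nginx writes its access and error logs to stdout/stderr (a sketch; the pod name will differ):

# The container's stdout is what Docker captures and Filebeat ships
[root@master elk]# kubectl logs deployment/app-log-stdout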
Now send a request to this pod to generate a log entry.
[root@node1 ~]# curl 10.233.96.16
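Each request produces one nginx access-log line on stdout, which Filebeat ships to ES; in Kibana it can then be found with a filter such as kubernetes.labels.app: "nginx-stdout". Roughly (the log line below is illustrative):

[root@master elk]# kubectl logs deployment/app-log-stdout --tail=1
10.233.92.0 - - [01/Sep/2021:08:00:00 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.29.0" "-"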