2025/12/17 20:25:29

As a 10-year ops veteran, I won't beat around the bush: plain language throughout. This piece walks through the core requirements of MLPS 2.0 Level 3 (等保 2.0 三级, China's Multi-Level Protection Scheme) as they apply to Kubernetes containers, breaks down the implementation logic and the concrete steps, and closes with a directly reusable compliance case for a core e-commerce system, aiming for 100% container-security compliance and full compatibility with K8S 1.33.

I. Core Logic: Mapping MLPS 2.0 Level 3 onto K8S Container Compliance

First, get the underlying logic straight. The core requirements of MLPS 2.0 Level 3 break down into five dimensions, each mapping to concrete K8S components — this mapping is the foundation of full compliance:

| MLPS 2.0 Level 3 requirement | Container-scenario reading (plain language) | K8S 1.33 technical mapping |
|---|---|---|
| Identity authentication + access control | Who may operate K8S? Which containers/resources may they touch? Least privilege | RBAC (fine-grained permissions) + ServiceAccount isolation + admission control (ValidatingAdmissionPolicy) |
| Network security + zone isolation | Which container-to-container and external traffic is allowed? Only what is necessary | NetworkPolicy (network firewall) + CNI (Calico) + node network isolation |
| Data security + privacy protection | Encrypt container data in transit and at rest; prevent leakage of sensitive data | Secret encryption + mutual TLS + encrypted volumes (CSI) |
| Security audit + log retention | Record every operation, flow, and container behavior; retain for ≥6 months | Audit Log + container runtime logs + ELK/EFK aggregation + tamper-proof storage |
| Intrusion prevention + vulnerability management | Image security, runtime security, no privileged escape | PodSecurityContext + image scanning (Trivy) + runtime protection (Falco) + periodic vulnerability scans |

Core principle: MLPS 2.0 Level 3 is fundamentally about full-lifecycle security. Container compliance must cover the whole flow — image build → deploy → run → destroy — not isolated point configurations.

II. Overall Roadmap: 8 Core Steps to 100% Container Compliance

Here is the end-to-end flow up front, so you don't attack things piecemeal. Each step maps to an MLPS requirement; none can be skipped:

  1. Environment baseline hardening (the K8S cluster itself);
  2. Identity and permission compliance (RBAC + admission control);
  3. Network isolation compliance (NetworkPolicy + CNI);
  4. Container runtime compliance (PodSecurityContext + runtime protection);
  5. Data encryption compliance (in transit / at rest / key management);
  6. Audit log compliance (full-coverage logging + retention);
  7. Image lifecycle compliance (build → scan → deploy);
  8. Compliance checking and continuous audit (automated validation + periodic re-tests).

III. Step by Step: Technical Logic + Operations (K8S 1.33 compatible)

Step 1: Environment Baseline Hardening (the K8S cluster itself)

Technical logic (plain language)

MLPS 2.0 Level 3 requires a secure base environment. K8S master/node hosts must be hardened — disable remote root login, enable a kernel security module, restrict kube-apiserver privileges, and so on. This is the foundation that container compliance sits on.

Operational steps (K8S 1.33)
1.1 Node OS hardening (CentOS 7/8 as the example; note that SELinux applies to RHEL-family and AppArmor to Debian-family hosts — enable whichever matches your node OS)

```shell
# 1. Disable root SSH login
sed -i 's/PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart sshd

# 2. Enable SELinux (mandatory under MLPS) — RHEL/CentOS
setenforce 1
sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config

# 3. Enable AppArmor (container runtime security; supported by default in K8S 1.33) — Debian/Ubuntu
apt install apparmor-utils -y
systemctl enable --now apparmor

# 4. Kernel parameters (harden against container escape)
cat >> /etc/sysctl.conf << EOF
kernel.yama.ptrace_scope = 1        # restrict process debugging
vm.mmap_min_addr = 65536            # restrict low memory mappings
net.ipv4.ip_forward = 1             # required by the CNI only; otherwise disable
net.ipv6.conf.all.disable_ipv6 = 0  # keep enabled only if MLPS dual-stack applies
EOF
sysctl -p

# 5. Restrict kubelet data (owned by the kube user only)
chown -R kube:kube /var/lib/kubelet
chmod 700 /var/lib/kubelet
```
1.2 K8S component hardening (1.33)

```shell
# 1. kube-apiserver hardening (edit /etc/kubernetes/manifests/kube-apiserver.yaml)
# Key flags to add:
--audit-log-path=/var/log/kubernetes/audit.log   # enable audit logging
--audit-log-maxage=180                           # retain logs 180 days (MLPS: >= 6 months)
--audit-log-maxbackup=10
--audit-log-maxsize=100
--enable-admission-plugins=NodeRestriction,ValidatingAdmissionPolicy,ResourceQuota  # enforce admission control
--disable-admission-plugins=AlwaysAdmit          # disable the permissive admit-everything plugin
--tls-min-version=VersionTLS12                   # TLS 1.2+ only (MLPS requirement)
--authorization-mode=RBAC,Node                   # enforce RBAC authorization
```

```yaml
# 2. kubelet hardening (edit /var/lib/kubelet/config.yaml)
authentication:
  anonymous:
    enabled: false      # disable anonymous access
  webhook:
    enabled: true       # enable webhook authentication
authorization:
  mode: Webhook         # delegate authorization to the API server (RBAC)
protectKernelDefaults: true  # protect kernel defaults
```

Step 2: Identity and Permission Compliance (MLPS "identity authentication + access control")

Technical logic (plain language)

MLPS 2.0 Level 3 requires unique identities, minimal permissions, and traceable operations. In K8S terms:

  • No anonymous access — every operation is bound to a unique identity (user / ServiceAccount);
  • Permissions granted on a least-necessary basis; never give cluster-admin rights to a business ServiceAccount;
  • Admission control enforces the checks — non-compliant Pods are refused at deploy time.
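Beyond RBAC and the custom admission policy below, K8S ships a built-in Pod Security Admission controller (my addition here — it is not part of the original checklist, but it is standard since K8S 1.25) that can enforce a namespace-wide baseline with nothing but labels; a sketch:

```yaml
# Enforce the built-in "restricted" Pod Security Standard on every Pod
# in order-ns: non-compliant Pods are rejected at admission, and
# violations are also recorded via the audit mode.
apiVersion: v1
kind: Namespace
metadata:
  name: order-ns
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: restricted
```

This is complementary to the ValidatingAdmissionPolicy approach: the labels give you a coarse, well-tested baseline, while the custom policy adds the organization-specific rules.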
Operational steps (K8S 1.33)
2.1 Disable anonymous access (core)

```shell
# Edit kube-apiserver.yaml and add:
--anonymous-auth=false   # disable anonymous access (MLPS mandatory)
# Note: the legacy --insecure-bind-address / --insecure-port flags were
# removed from kube-apiserver well before 1.33 — there is no insecure
# port 8080 left to close on a current cluster.
```
2.2 Fine-grained RBAC (stricter than the basic version, to satisfy MLPS)

Taking an e-commerce order service as the example, create RBAC that allows only its own namespace and only the necessary verbs:

```yaml
# order-rbac-strict.yaml
# 1. Custom ClusterRole (granted only through a namespaced RoleBinding
#    below, so there is no cluster-wide grant)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: order-service-clusterrole
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]
  # MLPS: scope to named resource instances, no wildcards
  # (note: resourceNames cannot constrain the "list" verb; keep "list"
  #  only if the service genuinely needs it)
  resourceNames: ["order-pod", "order-service"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "update"]
  resourceNames: ["order-deployment"]
---
# 2. ServiceAccount (do not auto-mount the token; reduces leak risk)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: order-service-sa
  namespace: order-ns
automountServiceAccountToken: false   # MLPS: mount only when necessary
---
# 3. RoleBinding (namespaced binding; no cross-namespace access)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: order-service-rb
  namespace: order-ns
subjects:
- kind: ServiceAccount
  name: order-service-sa
  namespace: order-ns
roleRef:
  kind: ClusterRole
  name: order-service-clusterrole
  apiGroup: rbac.authorization.k8s.io
```

Apply it: `kubectl apply -f order-rbac-strict.yaml`

2.3 Admission control (ValidatingAdmissionPolicy, stable in K8S 1.33)

Force-validate each Pod's ServiceAccount and privileges, and reject anything non-compliant outright (the MLPS "mandatory access control" requirement):

```yaml
# Validation policy: forbid the default SA and privileged containers
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "pod-security-policy"
spec:
  failurePolicy: Fail            # reject on validation failure (MLPS)
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  validations:
  - expression: "object.spec.serviceAccountName != 'default'"
    message: "default ServiceAccount is forbidden (MLPS)"
    # "privileged" lives on the container securityContext, not the Pod's,
    # so the check must iterate over the containers
  - expression: "object.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.privileged) || !c.securityContext.privileged)"
    message: "privileged containers are forbidden (MLPS)"
---
# Bind the policy to all namespaces
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "pod-security-policy-binding"
spec:
  policyName: "pod-security-policy"
  validationActions: [Deny]      # reject non-compliant objects
  matchResources:
    namespaceSelector: {}        # apply to all namespaces
```

Apply it: `kubectl apply -f admission-policy.yaml`

Step 3: Network Isolation Compliance (MLPS "zone isolation + access control")

Technical logic (plain language)

MLPS 2.0 Level 3 requires separation between security domains and exposure of necessary ports only. In container terms:

  • Split namespaces by business domain (orders, payments, database) and isolate each domain at the network level;
  • NetworkPolicy denies all traffic by default, allowing only the necessary ingress/egress;
  • Containers must not use the host network or HostPort (no port-level escape).
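The per-application policy in step 3.2 only selects Pods labeled `app: order`. A true namespace-wide default deny (my addition — standard practice, not spelled out in the original) uses an empty `podSelector` so that any Pod someone later deploys into the namespace is also covered:

```yaml
# Namespace-wide default deny: selects every Pod in order-ns and allows
# nothing until a more specific NetworkPolicy opens traffic back up.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: order-ns
spec:
  podSelector: {}               # empty selector = all Pods in the namespace
  policyTypes: [Ingress, Egress]
```

Because NetworkPolicies are additive, the allow rules below then punch the necessary holes through this baseline.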
Operational steps (K8S 1.33 + Calico)
3.1 Deploy Calico (a CNI with full NetworkPolicy support)

```shell
# Install Calico 3.28 manifests
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
# Verify Calico is running
kubectl get pods -n kube-system -l k8s-app=calico-node
```
3.2 Strict NetworkPolicy (default deny everything — MLPS mandatory)

Taking the order service as the example: only the payment service may reach port 8080, and the order service may only reach the database on port 3306:

```yaml
# order-networkpolicy-strict.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: order-deny-all
  namespace: order-ns
spec:
  podSelector:
    matchLabels:
      app: order
  policyTypes: [Ingress, Egress]   # anything not matched below is denied
  ingress:
  - from:
    # security-domain label isolation: only the payment domain's pay Pods.
    # (An ipBlock cannot be ANDed with namespace/pod selectors inside one
    #  peer entry, and listing it as a separate peer would only broaden
    #  the allow — so source scoping here relies on the selectors.)
    - namespaceSelector:
        matchLabels:
          security-domain: payment
      podSelector:
        matchLabels:
          app: pay
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          security-domain: database
      podSelector:
        matchLabels:
          app: mysql
    ports:
    - protocol: TCP
      port: 3306
  # Block the public internet (MLPS: business containers must not reach
  # the internet directly) by allowing only private ranges as egress peers
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8
    - ipBlock:
        cidr: 172.16.0.0/12
    - ipBlock:
        cidr: 192.168.0.0/16
```

Apply it: `kubectl apply -f order-networkpolicy-strict.yaml`

3.3 Forbid HostNetwork/HostPort (MLPS requirement)

Enforce this via admission control:

```yaml
# Additional validations for the earlier ValidatingAdmissionPolicy
validations:
- expression: "!has(object.spec.hostNetwork) || !object.spec.hostNetwork"
  message: "host networking is forbidden (MLPS)"
  # check every container, not just the first one
- expression: "object.spec.containers.all(c, !has(c.ports) || c.ports.all(p, !has(p.hostPort)))"
  message: "hostPort is forbidden (MLPS)"
```

Step 4: Container Runtime Compliance (MLPS "intrusion prevention")

Technical logic (plain language)

MLPS 2.0 Level 3 forbids privileged execution, requires protection against container escape, and restricts system calls. The core mechanism is PodSecurityContext plus runtime protection (Falco); K8S 1.33 supports fine-grained security settings, but note that some fields (privileged, capabilities, allowPrivilegeEscalation, readOnlyRootFilesystem, procMount) are container-level, not Pod-level.

Operational steps (fully compliant configuration)
4.1 Strict security context (order-service example)

```yaml
# order-deployment-secure.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-deployment
  namespace: order-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order
  template:
    metadata:
      labels:
        app: order
        security-domain: order
    spec:
      serviceAccountName: order-service-sa
      automountServiceAccountToken: true   # enable only when needed
      # MLPS-grade Pod security context (Pod-level fields only)
      securityContext:
        runAsUser: 1000                      # non-root user
        runAsGroup: 1000
        runAsNonRoot: true                   # enforce non-root
        fsGroup: 1000
        fsGroupChangePolicy: OnRootMismatch  # balance performance and safety
        seccompProfile:
          type: RuntimeDefault               # restrict syscalls (MLPS)
        appArmorProfile:
          type: RuntimeDefault               # AppArmor enforced (K8S >= 1.30)
        sysctls:                             # only "safe" sysctls allowed here
        - name: net.ipv4.ip_local_port_range
          value: "32768 61000"
      containers:
      - name: order-container
        image: order-service:v1.0
        ports:
        - containerPort: 8080
        # Container-level security settings (these fields do not exist on
        # the Pod securityContext)
        securityContext:
          privileged: false                  # no privileged mode
          allowPrivilegeEscalation: false    # no privilege escalation
          readOnlyRootFilesystem: true       # read-only root FS (anti-tamper)
          procMount: Default                 # no relaxed /proc
          capabilities:
            drop: ["ALL"]                    # drop every Linux capability
            add: ["NET_BIND_SERVICE"]        # re-add only what's needed
        # MLPS: health checks (a hung container is an attack surface)
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
        # Resource limits (DoS mitigation, MLPS requirement)
        resources:
          limits:
            cpu: "1"
            memory: "1Gi"
          requests:
            cpu: "500m"
            memory: "512Mi"
        # Root FS is read-only; mount writable dirs explicitly
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: logs
          mountPath: /var/log/order
      volumes:
      - name: tmp
        emptyDir:
          medium: Memory                     # in-memory, no on-disk tampering
      - name: logs
        persistentVolumeClaim:
          claimName: order-logs-pvc
          readOnly: false                    # only the log dir is writable
```

Apply it: `kubectl apply -f order-deployment-secure.yaml`

4.2 Deploy Falco (runtime intrusion detection — MLPS "intrusion prevention")

Falco is a CNCF runtime-security project that detects container escape, privilege escalation, and other high-risk behavior:

```shell
# Install Falco (compatible with K8S 1.33)
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco --namespace falco --create-namespace
# Verify Falco is running
kubectl get pods -n falco -l app=falco
```

Falco rules are not Kubernetes objects, so they cannot be loaded with `kubectl apply`; pass them through the Helm chart's `customRules` value instead:

```shell
# falco-rules.yaml: detect privileged containers and root-FS writes
cat > falco-rules.yaml << 'EOF'
customRules:
  compliance-rules.yaml: |-
    - rule: Privileged Container Started
      desc: Detect a privileged container being started
      condition: spawned_process and container and container.privileged=true
      output: "Privileged container started (user=%user.name container=%container.name image=%container.image.repository)"
      priority: CRITICAL
      tags: [container, privilege, cis]
    - rule: Write to Root Filesystem
      desc: Detect a write to the root filesystem (read-only violation)
      condition: open_write and fd.directory="/" and container
      output: "Write to root filesystem detected (user=%user.name container=%container.name path=%fd.name)"
      priority: HIGH
      tags: [container, filesystem]
EOF
helm upgrade falco falcosecurity/falco -n falco -f falco-rules.yaml
```

Step 5: Data Encryption Compliance (MLPS "data security + privacy protection")

Technical logic (plain language)

MLPS 2.0 Level 3 requires encryption in transit, encryption at rest, and secure key management. K8S 1.33 supports:

  • In transit: TLS 1.2+ for kube-apiserver, etcd, and inter-container traffic;
  • At rest: Secret encryption and encrypted PVC volumes;
  • Key management: integration with a KMS (e.g. Vault); no plaintext key storage.
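For the KMS option, the API server's KMS v2 encryption provider can delegate key management to an external service such as Vault through a KMS plugin. A sketch — the plugin name and socket path below are assumptions, set by whichever KMS plugin you deploy:

```yaml
# Hypothetical KMS v2 encryption config: Secrets are envelope-encrypted
# via an external KMS plugin listening on a local Unix socket.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - kms:
      apiVersion: v2
      name: vault-kms-plugin                            # assumed plugin name
      endpoint: unix:///var/run/kmsplugin/socket.sock   # assumed socket path
      timeout: 3s
  - identity: {}    # fallback so pre-existing plaintext data stays readable
```

Compared with the `aescbc` provider below, a KMS keeps the master key out of the control-plane host entirely, which is the stronger answer to the MLPS key-management requirement.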
Operational steps
5.1 Secret encryption at rest (etcd storage encryption)

```shell
# 1. Create the encryption config
#    (EncryptionConfiguration is a config file read by the API server,
#     not an API object — it takes no metadata section)
cat > /etc/kubernetes/encryption-config.yaml << EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: $(head -c 32 /dev/urandom | base64)
  - identity: {}   # fallback so pre-existing plaintext stays readable
EOF

# 2. Edit kube-apiserver.yaml and add:
#    --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
#    --encryption-provider-config-automatic-reload=true

# 3. Saving the static-Pod manifest makes kubelet restart kube-apiserver
#    automatically. Then rewrite existing Secrets so they get encrypted:
kubectl get secrets -A -o json | kubectl replace -f -
```
5.2 Encrypted PVC volumes (CSI encryption — MLPS mandatory)

Using Rook-Ceph as the example (compatible with K8S 1.33):

```yaml
# Encrypted PVC (the StorageClass must be backed by an encrypted pool)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: order-logs-pvc
  namespace: order-ns
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block-encrypted   # encrypted storage class
```

5.3 TLS between containers (order service → database)

```yaml
# Database Service (TLS is terminated by MySQL itself)
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: db-ns
spec:
  ports:
  - port: 3306
    targetPort: 3306
    name: mysql-tls
  selector:
    app: mysql
---
# Order container: require TLS when connecting to the database
# (fragment of the container spec)
env:
- name: MYSQL_SSL_MODE
  value: "REQUIRED"
- name: MYSQL_SSL_CA
  value: "/etc/mysql/certs/ca.crt"
volumeMounts:
- name: mysql-certs
  mountPath: /etc/mysql/certs
  readOnly: true
```

Step 6: Audit Log Compliance (MLPS "security audit")

Technical logic (plain language)

MLPS 2.0 Level 3 requires auditing of every operation, log retention of ≥6 months, tamper resistance, and traceability. K8S 1.33's audit log supports fine-grained rules; combined with ELK aggregation and object-storage archiving, it satisfies the requirement.

Operational steps
6.1 Configure the K8S audit policy (operation-level granularity)

```yaml
# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Record all core operations (MLPS requirement)
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods", "services", "secrets"]
  - group: "apps"
    resources: ["deployments", "statefulsets"]
# Record administrator operations
- level: RequestResponse
  users: ["system:admin", "kube-admin"]
# Record permission changes
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
# Exclusions (reduce noise)
- level: None
  resources:
  - group: ""
    resources: ["events"]
```

6.2 Log aggregation and archiving (ELK + MinIO)

```shell
# Install ELK (compatible with K8S 1.33)
helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch --namespace elk --create-namespace
helm install kibana elastic/kibana --namespace elk
helm install filebeat elastic/filebeat --namespace elk
```

Filebeat inputs (audit log + container logs + node system log):

```yaml
filebeat.inputs:
- type: filestream
  paths:
  - /var/log/kubernetes/audit.log   # audit log
  - /var/log/containers/*.log       # container logs
  - /var/log/messages               # node system log
output.elasticsearch:
  hosts: ["elasticsearch-master:9200"]
  username: elastic
  password: changeme
```

ILM policy: hot for 7 days, cold tier after 30 days, delete after 180 days (MLPS retention); for the tamper-proof copy, also snapshot the indices to an S3-compatible repository such as MinIO:

```shell
curl -XPUT "http://elasticsearch-master:9200/_ilm/policy/audit-policy" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot":    { "min_age": "0ms",  "actions": { "rollover": { "max_age": "7d", "max_size": "10GB" } } },
      "cold":   { "min_age": "30d",  "actions": { "migrate": {} } },
      "delete": { "min_age": "180d", "actions": { "delete": {} } }
    }
  }
}'
```

Step 7: Image Lifecycle Compliance (MLPS "vulnerability management")

Technical logic (plain language)

MLPS 2.0 Level 3 requires malware protection and periodic vulnerability scanning. Container images must satisfy:

  • Build phase: minimal base image (e.g. alpine), sensitive files removed;
  • Scan phase: mandatory pre-deploy scan (Trivy); block images with critical vulnerabilities;
  • Deploy phase: admission control validates image integrity (image signing).
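The original names image signing as a requirement without naming a tool. As one hedged option (Kyverno is my addition, not part of the original stack), a Kyverno `verifyImages` policy can reject images that lack a valid cosign signature at admission — registry path and key below are placeholders:

```yaml
# Hypothetical signature-verification policy: Pods whose images match
# the listed references are admitted only if signed with the given key.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-order-images
spec:
  validationFailureAction: Enforce     # reject, don't just audit
  rules:
  - name: require-cosign-signature
    match:
      any:
      - resources:
          kinds: ["Pod"]
    verifyImages:
    - imageReferences:
      - "registry.example.com/order-service:*"   # assumed registry path
      attestors:
      - entries:
        - keys:
            publicKeys: |-
              -----BEGIN PUBLIC KEY-----
              ...your cosign public key...
              -----END PUBLIC KEY-----
```

Any admission-time verifier (Kyverno, sigstore policy-controller, etc.) fills this slot; the point is that unsigned images never reach the kubelet.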
Operational steps
7.1 Image build standards (Dockerfile)

```dockerfile
# Minimal base image
FROM alpine:3.18
# Non-root user
RUN addgroup -g 1000 app && adduser -u 1000 -G app -s /bin/sh -D app
# Install only what is needed; clear the package cache
RUN apk add --no-cache nginx && rm -rf /var/cache/apk/*
# Copy application code with correct ownership
COPY --chown=app:app app /app
# Drop to the non-root user
USER app
# Expose the service port
EXPOSE 8080
# Health check
HEALTHCHECK --interval=10s --timeout=3s CMD wget -q -O /dev/null http://localhost:8080/health || exit 1
```
7.2 Image scanning (Trivy)

```shell
# Install the Trivy Operator (K8S 1.33 compatible)
helm repo add aqua https://aquasecurity.github.io/helm-charts/
helm install trivy-operator aqua/trivy-operator \
  --namespace trivy-system --create-namespace \
  --set trivy.severity=CRITICAL,HIGH   # scan for critical/high findings
```

The operator continuously scans workload images and writes the findings into VulnerabilityReport resources. It reports rather than blocks, so to hard-fail deployments carrying critical/high vulnerabilities, pair it with an admission-control layer (for instance extending the step-2 ValidatingAdmissionPolicy, or a dedicated policy engine).

Step 8: Compliance Checking and Continuous Audit (locking in compliance)

Technical logic (plain language)

MLPS requires periodic compliance checks, remediation of findings, and continuous monitoring — implemented here with kube-bench (K8S benchmark checks) plus a scheduled job.

Operational steps
8.1 Run kube-bench (benchmark checking tool)

```shell
# Run kube-bench once (the CIS benchmark closely tracks the MLPS
# technical items; omit --benchmark to let kube-bench auto-detect
# the cluster version)
kubectl run kube-bench --image=aquasec/kube-bench:latest --rm -it -- \
  /kube-bench run --targets=master,node --benchmark=cis-1.23
# Review the report and fix each FAIL — e.g. "kube-apiserver audit log
# not enabled" → go back to step 1.2 and remediate
```

8.2 Scheduled compliance audit (CronJob)

```yaml
# Daily compliance check CronJob
apiVersion: batch/v1
kind: CronJob
metadata:
  name: compliance-check
  namespace: kube-system
spec:
  schedule: "0 0 * * *"        # run at midnight every day
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: kube-bench
            image: aquasec/kube-bench:latest
            command: ["/kube-bench", "run", "--targets=master,node", "--benchmark=cis-1.23", "--json"]
            volumeMounts:
            - name: var-lib-kubelet
              mountPath: /var/lib/kubelet
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
            - name: etc-systemd
              mountPath: /etc/systemd
          restartPolicy: Never
          volumes:
          - name: var-lib-kubelet
            hostPath:
              path: /var/lib/kubelet
          - name: etc-kubernetes
            hostPath:
              path: /etc/kubernetes
          - name: etc-systemd
            hostPath:
              path: /etc/systemd
```

IV. Complete Case Study: MLPS 2.0 Level 3 Compliance for a Core E-Commerce System

1. Background

The platform's core system spans three security domains:

  • Order domain (order-ns): core business, highest security level;
  • Payment domain (pay-ns): money movement; may only talk to the order domain;
  • Data domain (db-ns): stores order/payment data; reachable only from the order domain.

2. Compliance goals

  • Fully satisfy MLPS 2.0 Level 3;
  • Full-lifecycle container security;
  • 180-day audit-log retention;
  • 100% vulnerability remediation rate.

3. Complete configuration pack (ready to apply)

(1) Environment hardening

```shell
# Node hardening script (run on every master/node)
curl -s https://raw.githubusercontent.com/your-repo/k8s-hardening/main/os-hardening.sh | bash
# K8S component hardening (run on the master)
curl -s https://raw.githubusercontent.com/your-repo/k8s-hardening/main/k8s-component-hardening.yaml | kubectl apply -f -
```
(2) Permissions and admission control

```yaml
# rbac-all.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: order-service-clusterrole
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]
  resourceNames: ["order-pod", "order-service"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "update"]
  resourceNames: ["order-deployment"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: order-service-sa
  namespace: order-ns
automountServiceAccountToken: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: order-service-rb
  namespace: order-ns
subjects:
- kind: ServiceAccount
  name: order-service-sa
  namespace: order-ns
roleRef:
  kind: ClusterRole
  name: order-service-clusterrole
  apiGroup: rbac.authorization.k8s.io
---
# Admission policy
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: pod-security-policy
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  validations:
  - expression: "object.spec.serviceAccountName != 'default'"
    message: "default ServiceAccount is forbidden"
    # privileged is a container-level field, so iterate the containers
  - expression: "object.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.privileged) || !c.securityContext.privileged)"
    message: "privileged containers are forbidden"
  - expression: "!has(object.spec.hostNetwork) || !object.spec.hostNetwork"
    message: "host networking is forbidden"
  - expression: "object.spec.containers.all(c, !has(c.ports) || c.ports.all(p, !has(p.hostPort)))"
    message: "hostPort is forbidden"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: pod-security-policy-binding
spec:
  policyName: "pod-security-policy"
  validationActions: [Deny]
  matchResources:
    namespaceSelector: {}
```
(3) Network isolation

```yaml
# networkpolicy-all.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: order-deny-all
  namespace: order-ns
spec:
  podSelector:
    matchLabels:
      app: order
      security-domain: order
  policyTypes: [Ingress, Egress]
  ingress:
  - from:
    # ipBlock cannot be ANDed with selectors inside one peer entry,
    # so source scoping relies on the domain/app selectors
    - namespaceSelector:
        matchLabels:
          security-domain: payment
      podSelector:
        matchLabels:
          app: pay
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          security-domain: database
      podSelector:
        matchLabels:
          app: mysql
    ports:
    - protocol: TCP
      port: 3306
  # no public internet: only private ranges are allowed as egress peers
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8
    - ipBlock:
        cidr: 172.16.0.0/12
    - ipBlock:
        cidr: 192.168.0.0/16
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pay-deny-all
  namespace: pay-ns
spec:
  podSelector:
    matchLabels:
      app: pay
      security-domain: payment
  policyTypes: [Ingress, Egress]
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.0.0/16    # cluster Pod CIDR only
    ports:
    - protocol: TCP
      port: 8090
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          security-domain: order
      podSelector:
        matchLabels:
          app: order
    ports:
    - protocol: TCP
      port: 8080
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-deny-all
  namespace: db-ns
spec:
  podSelector:
    matchLabels:
      app: mysql
      security-domain: database
  policyTypes: [Ingress]
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          security-domain: order
      podSelector:
        matchLabels:
          app: order
    ports:
    - protocol: TCP
      port: 3306
```
(4) Container runtime security

```yaml
# deployment-all.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-deployment
  namespace: order-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order
      security-domain: order
  template:
    metadata:
      labels:
        app: order
        security-domain: order
    spec:
      serviceAccountName: order-service-sa
      automountServiceAccountToken: true
      securityContext:               # Pod-level fields only
        runAsUser: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        fsGroup: 1000
        fsGroupChangePolicy: OnRootMismatch
        seccompProfile:
          type: RuntimeDefault
        appArmorProfile:
          type: RuntimeDefault
      containers:
      - name: order-container
        image: order-service:v1.0
        ports:
        - containerPort: 8080
        securityContext:             # container-level fields live here
          privileged: false
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          procMount: Default
          capabilities:
            drop: ["ALL"]
            add: ["NET_BIND_SERVICE"]
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
        resources:
          limits:
            cpu: "1"
            memory: "1Gi"
          requests:
            cpu: "500m"
            memory: "512Mi"
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: logs
          mountPath: /var/log/order
      volumes:
      - name: tmp
        emptyDir:
          medium: Memory
      - name: logs
        persistentVolumeClaim:
          claimName: order-logs-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: order-logs-pvc
  namespace: order-ns
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block-encrypted
```
(5) Logs and audit

```yaml
# audit-policy.yaml (API-server config file, referenced by --audit-policy-file)
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods", "services", "secrets"]
  - group: "apps"
    resources: ["deployments", "statefulsets"]
- level: RequestResponse
  users: ["system:admin", "kube-admin"]
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
- level: None
  resources:
  - group: ""
    resources: ["events"]
---
# filebeat-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: elk
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: filestream
      paths:
      - /var/log/kubernetes/audit.log
      - /var/log/containers/*.log
      - /var/log/messages
    output.elasticsearch:
      hosts: ["elasticsearch-master:9200"]
      username: elastic
      password: changeme
    setup.ilm:
      policy_file: /usr/share/filebeat/ilm-policy.json
  ilm-policy.json: |
    {
      "policy": {
        "phases": {
          "hot":    { "min_age": "0ms",  "actions": { "rollover": { "max_age": "7d", "max_size": "10GB" } } },
          "cold":   { "min_age": "30d",  "actions": { "migrate": {} } },
          "delete": { "min_age": "180d", "actions": { "delete": {} } }
        }
      }
    }
```
(6) Image scanning and compliance audit

```yaml
# trivy-operator.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: trivy-system
---
# Legacy Flux v1 HelmRelease API; current Flux uses helm.toolkit.fluxcd.io/v2
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: trivy-operator
  namespace: trivy-system
spec:
  chart:
    repository: https://aquasecurity.github.io/helm-charts/
    name: trivy-operator
    version: 0.20.0
  values:
    trivy:
      ignoreUnfixed: true
      severity: "CRITICAL,HIGH"
# Note: the operator reports findings (VulnerabilityReport CRs);
# deployment blocking is handled by the admission policy in (2).
---
# compliance-cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: compliance-check
  namespace: kube-system
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: kube-bench
            image: aquasec/kube-bench:latest
            command: ["/kube-bench", "run", "--targets=master,node", "--benchmark=cis-1.23", "--json"]
            volumeMounts:
            - name: var-lib-kubelet
              mountPath: /var/lib/kubelet
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
            - name: etc-systemd
              mountPath: /etc/systemd
          restartPolicy: Never
          volumes:
          - name: var-lib-kubelet
            hostPath:
              path: /var/lib/kubelet
          - name: etc-kubernetes
            hostPath:
              path: /etc/kubernetes
          - name: etc-systemd
            hostPath:
              path: /etc/systemd
```

4. Case Verification (compliance checks)

(1) Permission checks

```shell
# Deploying a Pod under the default SA should be rejected
kubectl run test-pod -n order-ns --image=nginx --restart=Never
# Expect a denial from the ValidatingAdmissionPolicy, along the lines of:
#   Error from server (Forbidden): ... ValidatingAdmissionPolicy
#   'pod-security-policy' ... denied request: default ServiceAccount is forbidden

# Deploying a privileged container should be rejected
kubectl run privileged-pod -n order-ns --image=nginx --privileged --restart=Never
# Expect: ... denied request: privileged containers are forbidden
```

(2) Network isolation checks

```shell
# Payment service reaching the order service (should succeed)
kubectl exec -it $(kubectl get pods -n pay-ns -l app=pay -o jsonpath='{.items[0].metadata.name}') -n pay-ns -- \
  curl http://order-service.order-ns:8080

# A Pod outside the allowed domains (should fail)
kubectl run test-pod -n default --image=curlimages/curl --rm -it -- \
  curl --max-time 5 http://order-service.order-ns:8080
# Expect a timeout: the NetworkPolicy silently drops the packets, so curl
# reports a connection timeout rather than an immediate refusal
```

(3) Compliance audit check

```shell
# Run kube-bench
kubectl run kube-bench --image=aquasec/kube-bench:latest --rm -it -- \
  /kube-bench run --targets=master,node --benchmark=cis-1.23
# Goal: all MLPS-related items PASS, zero FAIL items
```

V. Pitfall Guide (10 years of ops scars)

  1. The big MLPS 2.0 Level 3 trap: don't stop at "configuration compliance" and neglect "process compliance" (vulnerability remediation records, audit-log review records, etc.);
  2. K8S 1.33 compatibility: ValidatingAdmissionPolicy is stable — do not use the removed PodSecurityPolicy;
  3. Network isolation: Calico's NetworkPolicy support is far more complete than Flannel's (Flannel does not enforce NetworkPolicy at all), so prefer Calico for MLPS scenarios;
  4. Log retention: never keep logs only on local disk — archive to object storage (MinIO/S3) to satisfy the tamper-resistance requirement;
  5. Image scanning: don't scan only at deploy time; scan during the build phase too (integrate Trivy into CI/CD);
  6. Least privilege: business ServiceAccounts must never bind ClusterAdmin — avoid ClusterRole bindings altogether where possible and prefer namespaced Roles.
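Pitfall 5 can be wired into CI with a fragment like this — a hedged GitLab CI sketch (the stage name and job name are assumptions about your pipeline; the `trivy` flags and the `$CI_REGISTRY_IMAGE`/`$CI_COMMIT_SHORT_SHA` variables are standard):

```yaml
# .gitlab-ci.yml fragment: fail the build when the freshly built image
# carries HIGH/CRITICAL vulnerabilities.
image-scan:
  stage: test
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    # --exit-code 1 makes the job fail whenever findings at these
    # severities exist, so a vulnerable image never reaches deploy
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

The same `trivy image --exit-code 1` invocation drops into any other CI system unchanged.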

Summary

Achieving MLPS 2.0 Level 3 compliance for K8S containers comes down to "full lifecycle + least privilege + mandatory controls":

  • Base layer: cluster and node hardening — the foundation;
  • Control layer: RBAC + admission control + NetworkPolicy — govern people and the network;
  • Runtime layer: PodSecurityContext + Falco — govern container behavior;
  • Data layer: encryption + key management — govern the data;
  • Audit layer: full-coverage logging + retention — govern traceability;
  • Continuity layer: image scanning + periodic audits — govern the long term.

As any 10-year ops veteran knows, compliance is not a one-time configuration but an ongoing operation. With this plan in place, plus monthly compliance audits and vulnerability scans, you can sustainably meet MLPS 2.0 Level 3 for your container estate.
