# Kubeflow v1.9.1 Single-Node Deployment in Practice: Your First MLOps Platform on One ECS Instance (with A10 GPU Scheduling)

张开发
2026/4/5 19:32:51 · 15 min read

As machine learning projects move from experiment to production, environment setup is usually the first roadblock. Data scientists are comfortable with interactive development in Jupyter Notebooks, but how do you deploy a model to production seamlessly, schedule the resources that training jobs need, and keep experiments reproducible? These questions are especially pressing for individual developers and small teams, who need full MLOps capabilities on very limited hardware. This article shows how to build a fully functional machine learning operations platform on a single Alibaba Cloud ECS instance with Kubeflow v1.9.1, with a ready-to-use setup tuned for scheduling an A10 GPU.

## 1. Environment Preparation

### 1.1 Choosing an ECS Instance

For individual development or a small team, the following ECS configuration is a solid baseline:

| Component | Recommended spec | Notes |
|---|---|---|
| CPU | 8+ cores | Intel Xeon Platinum series recommended; keeps the Kubernetes control plane stable |
| Memory | 32 GB+ | each Kubeflow component consumes 500 MB–2 GB on average |
| System disk | 100 GB SSD | operating system and base software |
| Data disk | 200 GB SSD (optional) | a separate data disk is recommended for PV storage |
| GPU | NVIDIA A10 (24 GB VRAM) | supports CUDA 11+; well suited to small and mid-size training and inference |
| OS | Ubuntu 20.04 LTS | best compatibility with Kubernetes and container runtimes |

> Tip: if budget is tight, spot instances cut costs, but they can be reclaimed at any time. For a first deployment, use a pay-as-you-go instance.

### 1.2 Installing Base Dependencies

On a clean Ubuntu system, start with the essential toolchain:

```bash
# Update the system and install base tools
sudo apt update && sudo apt upgrade -y
sudo apt install -y curl git docker.io nfs-common

# Raise Docker's file-descriptor limits to avoid "too many open files" errors
sudo tee /etc/docker/daemon.json <<EOF
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 65535,
      "Soft": 65535
    }
  }
}
EOF
sudo systemctl restart docker

# Kernel parameters that Kubernetes commonly trips over
sudo tee -a /etc/sysctl.conf <<EOF
fs.inotify.max_user_watches=1048576
fs.file-max=655360
vm.max_map_count=262144
EOF
sudo sysctl -p
```
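The sysctl block above appends blindly, so re-running the setup duplicates keys in `/etc/sysctl.conf`. A small idempotent helper avoids that; this is a sketch, and the function names (`append_once`, `apply_sysctl_tuning`) are my own:

```shell
#!/bin/sh
# Append a line to a config file only if an identical line is not already
# present, so re-running the setup never duplicates sysctl keys.
append_once() {
    grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

# Apply the three tuning keys from section 1.2 to the given file.
apply_sysctl_tuning() {
    conf=$1
    append_once "$conf" "fs.inotify.max_user_watches=1048576"
    append_once "$conf" "fs.file-max=655360"
    append_once "$conf" "vm.max_map_count=262144"
}

# On the node (as root): apply_sysctl_tuning /etc/sysctl.conf && sysctl -p
```

Running `apply_sysctl_tuning` twice against the same file leaves exactly one copy of each key.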
## 2. Deploying a Single-Node Kubernetes Cluster

### 2.1 Quick Initialization with kubeadm

A single-node cluster needs a few tweaks to bypass Kubernetes' defaults:

```bash
# Install kubeadm, kubelet and kubectl.
# Kubeflow v1.9.x is validated against Kubernetes v1.29, so pin that minor
# version (the old apt.kubernetes.io repository has been retired in favor
# of pkgs.k8s.io).
sudo apt install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Initialize the single-node cluster (key flags)
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=$(hostname -I | awk '{print $1}') \
  --control-plane-endpoint=$(hostname -I | awk '{print $1}') \
  --ignore-preflight-errors=NumCPU

# Configure kubectl access
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a CNI (the pod CIDR above matches Flannel's default)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Allow the control-plane node to schedule pods (mandatory on a single node)
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```

> Note: Kubernetes removed the dockershim in v1.24, so the kubelet no longer talks to Docker directly. kubeadm will use the containerd that is installed alongside `docker.io`; make sure its CRI plugin is enabled (`containerd config default` is a good starting point for `/etc/containerd/config.toml`, with `SystemdCgroup = true`), or install cri-dockerd if you want to keep Docker as the runtime.

### 2.2 GPU Support and Local Storage

To make full use of the A10 GPU, install the NVIDIA device plugin and a local storage provisioner:

```bash
# Install the NVIDIA container toolkit
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
  sudo tee /etc/apt/sources.list.d/libnvidia-container.list
sudo apt update
sudo apt install -y nvidia-container-toolkit

# Register the nvidia runtime with containerd and make it the default,
# so GPU pods need no RuntimeClass
sudo nvidia-ctk runtime configure --runtime=containerd --set-as-default
sudo systemctl restart containerd

# Deploy the NVIDIA device plugin
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0/nvidia-device-plugin.yml

# Verify the GPU is advertised on the node
kubectl get nodes -o json | jq '.items[0].status.capacity'

# Configure a default storage class backed by local disk
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
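Before wiring GPUs into Kubeflow, it is worth confirming that plain Kubernetes can schedule one. The snippet below writes a throwaway pod manifest that just runs `nvidia-smi`; the pod name and the CUDA image tag are my own choices, and the image's CUDA version should match the driver installed on the node:

```shell
#!/bin/sh
# Generate a one-shot GPU smoke-test pod that runs nvidia-smi and exits.
cat > gpu-smoke-test.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:11.8.0-base-ubuntu20.04  # assumed tag; match your driver
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1   # claims one A10 through the device plugin
EOF
echo "Run: kubectl apply -f gpu-smoke-test.yaml && kubectl logs -f gpu-smoke-test"
```

If the pod stays `Pending`, `kubectl describe pod gpu-smoke-test` will usually point at a missing `nvidia.com/gpu` resource, i.e. the device plugin is not running.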
## 3. Deploying the Kubeflow v1.9.1 Core Components

### 3.1 Batch Installation with Kustomize

Kubeflow's modular architecture lets you install components as needed. The following single-node configuration has been verified:

```bash
# Install the tooling
curl -s https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash
sudo mv kustomize /usr/local/bin/
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
bash get_helm.sh

# Fetch the manifests for this exact release
git clone --branch v1.9.1 --depth 1 https://github.com/kubeflow/manifests.git
cd manifests

# Build and apply, retrying while CRDs and webhooks come up
max_retries=5
retry_count=0
until kustomize build example | kubectl apply -f -; do
  retry_count=$((retry_count + 1))
  if [ $retry_count -ge $max_retries ]; then
    echo "Failed to apply Kubeflow manifests after $max_retries attempts."
    exit 1
  fi
  echo "Retrying to apply resources (attempt $retry_count)..."
  sleep 30
done
```

### 3.2 Checking Component Status

After deployment, these are the components to watch for readiness:

- **Istio Ingress Gateway** (the service entry point):

  ```bash
  kubectl get svc -n istio-system istio-ingressgateway
  # the output should include a NodePort mapping, e.g. port 31000
  # (if the service type is ClusterIP in your install, patch it to NodePort first)
  ```

- **Kubeflow Dashboard** (central console):

  ```bash
  kubectl get pods -n kubeflow | grep centraldashboard
  ```

- **Notebook Controller** (manages interactive development environments):

  ```bash
  kubectl get pods -n kubeflow | grep notebook-controller
  ```

Common problems and fixes:

**PVC fails to mount**

```bash
# Check the storage class and claims
kubectl get pvc -n kubeflow
# If claims are stuck in Pending, try deleting and recreating them
kubectl delete pvc -n kubeflow --all
```

**Pods short on resources**

```bash
# Adjust resource limits (example: the notebook controller)
kubectl edit deploy -n kubeflow notebook-controller-deployment
# lower requests/limits to values that fit a single node
```
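Kubeflow's few dozen pods take a while to settle after the apply loop finishes, so a polling loop beats eyeballing `kubectl get pods`. A minimal sketch (the function names are mine; the "un-ready pods" check is injected as a parameter so the loop itself can be dry-run without a cluster):

```shell
#!/bin/sh
# Poll a namespace until no pod is outside Running/Completed, or time out.
# $3 is a command that prints one line per un-ready pod.
wait_ready() {
    ns=$1; timeout=$2; check=$3
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        pending=$("$check" "$ns" | wc -l)
        if [ "$pending" -eq 0 ]; then
            echo "all pods in $ns ready"
            return 0
        fi
        sleep 5
        elapsed=$((elapsed + 5))
    done
    echo "timed out waiting for $ns ($pending pods still pending)" >&2
    return 1
}

# Real check against the cluster: pods not yet Running or Completed
kubeflow_unready() {
    kubectl get pods -n "$1" --no-headers 2>/dev/null | grep -v 'Running\|Completed' || true
}

# Usage: wait_ready kubeflow 1800 kubeflow_unready
```

With a 30-minute timeout this is a reasonable gate before opening the dashboard for the first time.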
## 4. GPU Scheduling and Notebooks in Practice

### 4.1 Creating a GPU-Enabled Notebook

Creating a notebook through the Kubeflow UI takes a few extra settings to enable GPU access:

1. Open `http://<ECS public IP>:31000` and log in with the default credentials (`user@example.com` / `12341234`).
2. Navigate to **Notebooks → New Notebook**.
3. Key settings:
   - **Image**: `jupyter/tensorflow-notebook:latest` (CUDA preinstalled)
   - **Resource limits**: CPU 4 cores, memory 8 Gi, GPU 1 (A10)
   - **Storage**: mount a 50 Gi PVC (provisioned by local-path-provisioner)
4. After creation, verify GPU access from inside the notebook:

```python
# Run inside the notebook
import tensorflow as tf
print("GPU Available:", tf.config.list_physical_devices("GPU"))
```

### 4.2 Typical GPU Job Examples

**Case 1: a distributed training job**

```yaml
# tf-job.yaml
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-distributed
  namespace: kubeflow-user-example-com
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2
      template:
        spec:
          containers:
          - name: tensorflow
            image: tensorflow/tensorflow:2.12.0-gpu
            command: ["python", "/mnist.py"]
            resources:
              limits:
                nvidia.com/gpu: 1
          restartPolicy: OnFailure
```

Apply the configuration and watch the job:

```bash
kubectl apply -f tf-job.yaml
kubectl get tfjobs -n kubeflow-user-example-com
```

**Case 2: hyperparameter tuning with Katib**

```yaml
# katib-experiment.yaml
apiVersion: kubeflow.org/v1beta1
kind: Experiment
metadata:
  name: mnist-hp-tuning
  namespace: kubeflow-user-example-com
spec:
  objective:
    type: minimize
    goal: 0.01
    objectiveMetricName: loss
  algorithm:
    algorithmName: random
  parallelTrialCount: 2
  maxTrialCount: 5
  parameters:
  - name: learning_rate
    parameterType: double
    feasibleSpace:
      min: "0.001"
      max: "0.1"
  - name: batch_size
    parameterType: int
    feasibleSpace:
      min: "32"
      max: "128"
  trialTemplate:
    primaryContainerName: tensorflow
    trialParameters:
    - name: learningRate
      description: Learning rate for the training
      reference: learning_rate
    - name: batchSize
      description: Batch size
      reference: batch_size
    trialSpec:
      apiVersion: batch/v1
      kind: Job
      spec:
        template:
          spec:
            containers:
            - name: tensorflow
              image: tensorflow/tensorflow:2.12.0-gpu
              command: ["python", "/mnist.py"]
              args:
              - "--lr=${trialParameters.learningRate}"
              - "--batch-size=${trialParameters.batchSize}"
              resources:
                limits:
                  nvidia.com/gpu: 1
            restartPolicy: Never
```
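Rather than polling `kubectl get tfjobs` by hand, the submit-and-wait flow for Case 1 can be scripted. A sketch under two assumptions worth verifying against your install: the training operator sets a `Succeeded` condition on finished TFJobs, and worker pods follow the `<job>-worker-<index>` naming convention. Setting `KUBECTL=echo` dry-runs the flow:

```shell
#!/bin/sh
# Submit the TFJob from section 4.2, wait for completion, print worker logs.
run_tfjob() {
    ns=kubeflow-user-example-com
    job=mnist-distributed
    ${KUBECTL:-kubectl} apply -n "$ns" -f tf-job.yaml
    # Training-operator jobs expose a Succeeded condition when all replicas finish
    ${KUBECTL:-kubectl} wait -n "$ns" --for=condition=Succeeded "tfjob/$job" --timeout=30m
    ${KUBECTL:-kubectl} logs -n "$ns" "$job-worker-0"
}

# Real run:  run_tfjob
# Dry run:   KUBECTL=echo run_tfjob
```

The same pattern (apply, `kubectl wait`, fetch logs) works for the Katib experiment, keyed on the Experiment's own status conditions instead.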
## 5. Production-Grade Tuning and Maintenance

### 5.1 Performance Tuning

A single-node environment needs some special tuning of both Kubernetes and Kubeflow.

Kubelet configuration:

```bash
sudo tee /etc/default/kubelet <<EOF
KUBELET_EXTRA_ARGS=--max-pods=200 --eviction-hard=memory.available<500Mi --kube-reserved=cpu=1,memory=2Gi --system-reserved=cpu=1,memory=1Gi
EOF
sudo systemctl restart kubelet
```

Istio resource limits:

```bash
kubectl edit deploy -n istio-system istiod
```

```yaml
# adjust the resources section to:
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: 2
    memory: 4Gi
```

### 5.2 Monitoring and Logging

Deploy a lightweight monitoring stack:

```bash
# Install the Prometheus Operator stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set prometheus.prometheusSpec.resources.requests.cpu=500m \
  --set prometheus.prometheusSpec.resources.requests.memory=2Gi

# Ensure the NVIDIA device plugin is in place; note that the DCGM_* metrics
# below additionally require NVIDIA's dcgm-exporter
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0/nvidia-device-plugin.yml
```

Key metrics for the dashboard:

| Metric | Query expression | Alert threshold |
|---|---|---|
| GPU utilization | `DCGM_FI_DEV_GPU_UTIL` | > 90% for 5 minutes |
| GPU memory used | `DCGM_FI_DEV_FB_USED` | > 90% of total VRAM |
| Pod memory pressure | `node_memory_MemAvailable_bytes` | < 500 MB |
| CPU load | `node_load5` | > cores × 2 |

### 5.3 Backup and Recovery

A data-protection scheme designed for a single-node environment.

Key data to back up:

PV data:

```bash
# Snapshot the local-path storage
sudo tar czvf /backup/kubeflow-pv-$(date +%Y%m%d).tar.gz \
  /var/lib/rancher/local-path-provisioner/
```

Kubernetes resource definitions:

```bash
# Export all Kubeflow-related resources
kubectl get all -n kubeflow -o yaml > /backup/kubeflow-resources-$(date +%Y%m%d).yaml
kubectl get pvc -n kubeflow -o yaml > /backup/kubeflow-pvc-$(date +%Y%m%d).yaml
```

Configuration:

```bash
# Back up the key configuration files
sudo cp /etc/kubernetes/admin.conf /backup/
sudo cp -r /etc/docker /backup/docker-config-$(date +%Y%m%d)
```

Disaster recovery flow:

1. Rebuild the base Kubernetes cluster (see chapter 2).
2. Restore PV data: `sudo tar xzvf /backup/kubeflow-pv-<latest>.tar.gz -C /`
3. Redeploy Kubeflow: `cd manifests && kustomize build example | kubectl apply -f -`
4. Restore application resources: `kubectl apply -f /backup/kubeflow-resources-<latest>.yaml`
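The backup commands in 5.3 accumulate archives forever. A cron-friendly wrapper that also prunes old copies, as a sketch: the function name and the seven-day retention in the usage comment are my choices, while the source path matches the local-path-provisioner default used above.

```shell
#!/bin/sh
# Tar a source directory into a dated archive under $dest and keep only
# the newest $keep archives, deleting the rest.
backup_pv() {
    src=$1; dest=$2; keep=$3
    mkdir -p "$dest"
    stamp=$(date +%Y%m%d-%H%M%S)
    tar czf "$dest/kubeflow-pv-$stamp.tar.gz" -C "$(dirname "$src")" "$(basename "$src")"
    # List archives newest-first and remove everything past the first $keep
    ls -1t "$dest"/kubeflow-pv-*.tar.gz | tail -n +$((keep + 1)) | xargs -r rm -f
}

# Example (run as root on the node, e.g. from a nightly cron job):
#   backup_pv /var/lib/rancher/local-path-provisioner /backup 7
```

Pairing this with the resource-definition exports from 5.3 in the same cron job keeps the PV data and the manifests that reference it in sync.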
