AI Model Training with Kubernetes and Slurm

AI Model Training
[Overview diagram: digital data (image, voice, text) plus an algorithm are used to train a model. The model developer trains, optimizes and publishes the optimized model; the model user deploys it on a model run engine and consumes it through an API/SDK. Model training and model inference are the two halves of the pipeline.]

K8S Resource Scheduling

K8S Shared Folder
[Diagram: an NFS server exports /opt/sharefolder/ns1; a PersistentVolume pv-ns1 and a PersistentVolumeClaim pvc-ns1 in namespace ns1 expose it to the pods, which mount it at /share.]
The corresponding manifests are sketched below.
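A minimal sketch of the PV/PVC pair for the shared folder. The NFS server address and the storage size are assumptions; the export path, object names and namespace are taken from the diagram above.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ns1
spec:
  capacity:
    storage: 500Gi              # assumed size
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.10        # assumed NFS server address
    path: /opt/sharefolder/ns1  # export shown in the diagram
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ns1
  namespace: ns1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""          # bind statically to the PV above
  volumeName: pv-ns1
  resources:
    requests:
      storage: 500Gi
EOF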

K8S Train Models Using K8S
- image: run the container from this Docker image.
- resources: requests defines the minimum resources the container needs; limits defines the maximum resources it may use. The smallest CPU unit is m, 1m = 1/1000 core. For GPUs, the request and the limit must be the same.
- volumes: volume share-dir uses the persistentVolumeClaim pvc-ns1.
- volumeMounts: mount volume share-dir into the container at /share.
- workingDir, command, args: all program files and data (demo.py) must be placed in the volume first; then run the program.
- restartPolicy: run the container only once.
A pod manifest along these lines is sketched below.
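A minimal sketch of such a pod. The pod name, image and CPU/memory figures are illustrative assumptions, and nvidia.com/gpu assumes the NVIDIA device plugin is installed; the volume, mount path and restart policy follow the bullets above.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: train-demo                           # assumed pod name
  namespace: ns1
spec:
  restartPolicy: Never                       # run the container one time
  containers:
  - name: train
    image: tensorflow/tensorflow:latest-gpu  # assumed training image
    resources:
      requests:
        cpu: 8000m                           # 1m = 1/1000 core
        memory: 16Gi
        nvidia.com/gpu: 1
      limits:
        cpu: 16000m
        memory: 32Gi
        nvidia.com/gpu: 1                    # GPU request and limit must match
    volumeMounts:
    - name: share-dir
      mountPath: /share                      # shared folder inside the container
    workingDir: /share
    command: ["python"]
    args: ["demo.py"]                        # demo.py must already be in the shared volume
  volumes:
  - name: share-dir
    persistentVolumeClaim:
      claimName: pvc-ns1
EOF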

Limitations of Using K8S for Model Training
- No resource reservation: if a training job needs more than one pod, some pods may be created successfully while others fail because no resources are available; there is no way to reserve enough resources for all of the pods before creating them.
- No queuing: if no resources are available for a training job, job creation fails immediately instead of waiting in a queue.
- No GPU sharing: one GPU can be assigned to only one pod.
- No per-type GPU limits: a K8S resource quota can limit the number of GPUs allocated to a namespace (see the sketch below), but a cluster may contain several GPU types such as P100, V100 and A100, and restricting one user's namespace to P100 only is not supported.
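For reference, a minimal sketch of such a namespace quota (the quota name and GPU count are assumptions); it caps the total number of GPUs but cannot tell P100 from V100.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota                  # assumed name
  namespace: ns1
spec:
  hard:
    requests.nvidia.com/gpu: "8"   # total GPUs the namespace may request, regardless of type
EOF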

Why Singularity with Slurm
Why Docker is not sufficient:
- Security: the user can gain root inside the container.
- No integration with the parallel environment (schedulers, etc.).
Singularity to the rescue:
- Less separation from the host: the container uses the host user and host network, and there is no background daemon.
- Integration with schedulers and parallel libraries (MPI).
- The user cannot gain root.
- High adoption rate in HPC (first release in 2016).

Singularity Basic Usage
- Build a Singularity container from a Docker image.
- Run the workload with the Singularity container.
Both steps are sketched below.
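A minimal sketch of both steps, assuming a public TensorFlow GPU image from Docker Hub and the /share paths used in the later examples.

# Build a Singularity image file (SIF) from a Docker Hub image (assumed image name).
singularity build /share/images/tf_gpu.sif docker://tensorflow/tensorflow:latest-gpu

# Run a training script inside the container: --nv exposes the host NVIDIA driver
# and GPUs, -B bind-mounts the shared data folder into the container.
singularity exec --nv -B /share/data:/share/data \
    /share/images/tf_gpu.sif \
    python /share/data/demo/singlenode/training.py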

Slurm GPU Scheduling
[Diagram: training tasks T1-T6 are placed across Server1-Server5, each with GPUs GPU0-GPU3, illustrating the supported placement modes: single node single GPU, single node multi GPUs, multi nodes multi GPUs, GPU share, and exclusive mode.]

Resource Grouping
[Diagram: servers are grouped under leaf/spine switches into Switch1 and Switch2 groups, and by GPU type into P100 and V100 groups.]

Slurm Configuration to Support GPU

/etc/slurm/slurm.conf on all the nodes:

...
GresTypes=gpu
# NODES #
NodeName=compute1 CPUs=32 Gres=gpu:2
NodeName=compute2 CPUs=32 Gres=gpu:2
NodeName=compute3 CPUs=32 Gres=gpu:4
NodeName=compute4 CPUs=32 Gres=gpu:4
# PARTITIONS #
PartitionName=compute Nodes=compute1,compute2 State=UP
PartitionName=gpushare Nodes=compute3,compute4 State=UP

/etc/slurm/gres.conf on compute1 and compute2:

Name=gpu File=/dev/nvidia0
Name=gpu File=/dev/nvidia1

/etc/slurm/gres.conf on compute3 and compute4 (each physical GPU is listed twice, so these nodes advertise gpu:4 with two physical devices and two jobs can land on the same GPU in the gpushare partition):

Name=gpu File=/dev/nvidia0
Name=gpu File=/dev/nvidia1
Name=gpu File=/dev/nvidia0
Name=gpu File=/dev/nvidia1
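After restarting slurmctld and slurmd, the GRES setup can be sanity-checked along these lines (a sketch; the node and partition names follow the configuration above):

# Confirm that the nodes advertise their GPUs as generic resources.
scontrol show node compute3 | grep -i gres
sinfo -o "%P %N %G"

# Request one GPU and list the devices the job actually sees.
srun --partition=gpushare --gres=gpu:1 nvidia-smi -L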

Train Models Using Slurm
[The shared folder holds the container images, the program files, the training data and the output files.]

Single Node, Single GPU / Multi GPUs:

#!/bin/bash
#SBATCH --job-name=single_node
#SBATCH --chdir=/share/data/demo/singlenode
#SBATCH --partition=compute
#SBATCH --nodes=1
#SBATCH --mincpus=16
#SBATCH --gres=gpu:2
singularity exec --nv -B /share/data:/share/data /share/images/tf_gpu.sif \
    python /share/data/demo/singlenode/training.py

GPU Share (gpushare partition):

#!/bin/bash
#SBATCH --job-name=single_node
#SBATCH --chdir=/share/data/demo/singlenode
#SBATCH --partition=gpushare
#SBATCH --nodes=1
#SBATCH --mincpus=16
#SBATCH --gres=gpu:2
singularity exec --nv -B /share/data:/share/data /share/images/tf_gpu.sif \
    python /share/data/demo/singlenode/training.py

Exclusive (reserve the whole node):

#!/bin/bash
#SBATCH --job-name=single_node
#SBATCH --chdir=/share/data/demo/multinode
#SBATCH --partition=compute
#SBATCH --nodes=1
#SBATCH --exclusive
#SBATCH --gres=gpu:2
singularity exec --nv -B /share/data:/share/data /share/images/tf_gpu.sif \
    python /share/data/demo/singlenode/training.py

Multi Node, Multi GPUs:

#!/bin/bash
#SBATCH --job-name=multi_node
#SBATCH --chdir=/share/data/demo/multinode
#SBATCH --partition=compute
#SBATCH --nodes=2
#SBATCH --mincpus=16
#SBATCH --gres=gpu:2
# Run the launcher, which finds the allocated resources by parsing the Slurm
# environment variables and starts the training program on the remote nodes with srun.
/share/data/program.sh
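These batch scripts would be submitted with sbatch; a minimal sketch follows (the job-script file name is an assumption), together with a more robust way for the multi-node launcher shown next to expand SLURM_NODELIST when Slurm reports compressed host ranges.

# Submit a job script and watch the queue.
sbatch /share/data/demo/singlenode/single_node.sh   # assumed file name
squeue -u "$USER"

# Inside a job script, scontrol expands the allocation into one hostname per
# line, even for compressed ranges such as compute[1-2].
mapfile -t nodes < <(scontrol show hostnames "$SLURM_NODELIST")
echo "first node: ${nodes[0]}"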

/share/data/program.sh:

#!/bin/bash
# Expand the comma-separated node list into an array and print it.
array=(${SLURM_NODELIST//,/ })
for i in "${!array[@]}"; do echo "$i=${array[$i]}"; done
# Start one worker per node; the trailing & runs both srun steps in parallel.
srun -N1 -n1 --gres=gpu:2 --nodelist=${array[0]} -l \
    singularity exec --nv -B /share/data:/share/data /share/images/tf_gpu.sif \
    python /share/data/demo/multinode/training.py worker1 &
srun -N1 -n1 --gres=gpu:2 --nodelist=${array[1]} -l \
    singularity exec --nv -B /share/data:/share/data /share/images/tf_gpu.sif \
    python /share/data/demo/multinode/training.py worker2 &
wait

Kubernetes is more popular than Slurm.
Docker is more popular than Singularity.
Slurm C
