




IBM Platform LSF Family Installation and Configuration Guide, V1.0
马雪洁, 2013-05-07

Table of Contents
1 Cluster architecture
1.1 Plain LSF environment (command-line submission)
1.2 LSF+PAC environment (web submission)
1.3 LSF+PM environment (PM submission)
2 LSF installation and basic configuration examples
2.1 LSF installation steps
2.1.1 Obtaining the LSF and PAC packages
2.1.2 Unpacking the lsfinstall installation script
2.1.3 Editing the install.config file
2.1.4 Running the installation
2.1.5 Configuring start at boot
2.1.6 Testing the installation
2.1.7 Starting/stopping the LSF daemons (three ways)
2.1.8 Submitting a test job
2.1.9 Enabling job submission by root
2.1.10 Running reconfig after configuration changes
2.1.11 Logs and debugging
2.2 Configuration files
2.3 Common commands
2.4 Configuring fairshare scheduling
2.4.1 Adding a round-robin queue
2.4.2 Adding hierarchical fairshare
2.4.3 Cross-queue fairshare
2.4.4 Enabling the configuration
2.5 Configuring preemptive scheduling
2.6 Configuring global limits
2.6.1 Limiting the number of jobs a user may run
2.6.2 Limiting the number of jobs a host may run
2.6.3 Queue-level run limits
2.6.4 Setting general limits
2.6.5 Enabling the configuration
2.7 Configuring the esub submission control script
2.8 elim resource-collection examples
2.8.1 Reporting free space in the home directory
2.8.2 Reporting the number of root processes
2.8.3 Reporting application license counts
2.8.4 Testing an elim script
2.8.5 Adding resource definitions and the resource map
2.8.6 Checking resource values
3 LSF command-line integration examples
3.1 CFD+ integration (spooling file)
3.1.1 CFD+ installation and license
3.1.2 License-tracking elim
3.1.3 Adding the CFD+ job starter
3.1.4 Adding the CFD application profile
3.1.5 CFD+ command-line submission script example
3.2 Gaussian integration (spooling file)
3.2.1 Gaussian installation and license
3.2.2 Gaussian command-line submission script example
3.3 Abaqus script integration (bsub command)
3.4 Platform MPI jobs
3.5 OpenMPI jobs
3.6 Intel MPI jobs
3.6.1 Express edition, without accounting
3.6.2 Express edition, blaunch with accounting
3.6.3 Standard edition, PAM integration
4 Installing PAC
5 Application integration with PAC
5.1 Gaussian GUI integration
5.2 CFD+ GUI and back-end scripts
5.3 Monitoring licenses in PAC
6 Installing License Scheduler
6.1 Basic installation test
6.2 Basic configuration examples
6.2.1 Adding the license server address
6.2.2 Mapping license features
6.2.3 Using license resources
6.2.4 Configuring license scheduling policies
7 FAQ
8 Using the man pages
9 Technical support

1 Cluster architecture

Larger clusters are usually designed with dedicated login nodes: users can only ssh to the login nodes, not directly to the master or compute nodes. Passwordless ssh between compute nodes is configured for users so that parallel jobs can run. The login nodes also run LSF, configured either as static LSF clients or with MXJ set to 0, i.e. as clients that run no jobs. The cluster's web node sits on the same network segment as the office LAN. If floating clients are to be used, there are additional requirements on the master node's network interface.

1.1 Plain LSF environment (command-line submission)

1.2 LSF+PAC environment (web submission)
Users submit jobs through the portal.

1.3 LSF+PM environment (PM submission)

2 LSF installation and basic configuration examples

2.1 Preparation
NIS ready; NFS/GPFS ready.

2.2 LSF installation steps
Install as root. Have NIS and NFS/GPFS ready.

2.2.1 Obtaining the LSF and PAC packages
lsf8.3_linux2.6-glibc2.3-x86_64.tar.Z
lsf8.3_lsfinstall_linux_x86_64.tar.Z
pac8.3_standard_linux-x64.tar.Z
Entitlement file: platform_hpc_std_entitlement.dat

2.2.2 Unpacking the lsfinstall installation script
Put the packages under /root/lsf:
[root@S2 lsf]# gunzip lsf8.3_lsfinstall_linux_x86_64.tar.Z
[root@S2 lsf]# tar -xvf lsf8.3_lsfinstall_linux_x86_64.tar

2.2.3 Editing the install.config file
First create the cluster administrator account lsfadmin.
cd lsf8.3_lsfinstall
vi install.config
[root@S2 lsf8.3_lsfinstall]# cat install.config
LSF_TOP="/opt/lsf"                  (installation directory)
LSF_ADMINS="lsfadmin"               (create the lsfadmin user first)
LSF_CLUSTER_NAME="platform"         (cluster name; any name will do)
LSF_MASTER_LIST="s2 s3"             (LSF master hosts)
LSF_ENTITLEMENT_FILE="/root/lsf/platform_hpc_std_entitlement.dat"   (path to the entitlement file)
LSF_TARDIR="/root/lsf/"             (directory holding the distribution packages)

2.2.4 Running the installation
./lsfinstall -f install.config

2.2.5 Configuring start at boot
Under /opt/lsf/8.3/install:
hostsetup
rhostsetup

2.2.6 Testing the installation
In the conf directory under the installation directory:
[root@S2 conf]# source profile.lsf
Add "source profile.lsf" to /etc/profile.
If rsh is not available, set ssh in lsf.conf:
[root@S2 conf]# tail lsf.conf
LSF_RSH="ssh"

2.2.7 Starting/stopping the LSF daemons (three ways)
[root@S2 conf]# lsfstartup / lsfstop
or:
lsadmin limstartup / limshutdown
lsadmin resstartup / resshutdown
badmin hstartup / hshutdown
or:
lsf_daemons start / stop

[root@S2 conf]# lsid
IBM Platform LSF Express 8.3 for IBM Platform HPC, May 10 2012
Copyright Platform Computing Inc., an IBM Company, 1992-2012.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
My cluster name is platform
My master name is s2

[root@S2 conf]# lsload
HOST_NAME  status  r15s  r1m  r15m  ut  pg   ls  it  tmp    swp  mem
s2         ok      0.0   0.0  0.0   1%  0.0  1   0   151G   20G  61G
s4         ok      0.0   0.0  0.0   2%  0.0  1   2   183G   20G  62G
s6         ok      0.0   0.0  0.0   3%  0.0  1   2   3734M  2G   30G
s5         ok      0.0   0.0  0.0   5%  0.0  1   2   3468M  2G   30G

2.2.8 Submitting a test job
bsub sleep 100000

2.2.9 Enabling job submission by root
In lsf.conf set LSF_ROOT_REX=local, then restart the LSF daemons.

2.2.10 Running reconfig after configuration changes
After editing lsf.* files: lsadmin reconfig
After editing lsb.* files: badmin reconfig
Some parameters require restarting the master scheduler or other daemons:
badmin mbdrestart; lsadmin limrestart; lsadmin resrestart; badmin hrestart

2.2.11 Logs and debugging
The logs are in the log directory. LSF runs three main daemons on every host; the master runs two more.
Master: lim, res, sbatchd, mbatchd, mbschd
Compute: lim, res, sbatchd
To debug from the command line, run "lim -2" directly on a host to see why lim does not start.
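The reconfig rules above (lsf.* files reload with lsadmin, lsb.* files with badmin) can be captured in a small helper. This is an illustrative sketch only; pick_reconfig_cmd is our own name, not an LSF tool, and it merely prints the command the text above prescribes:

```shell
#!/bin/sh
# Sketch: map a changed configuration file to the reload command described
# in section 2.2.10. Illustrative helper, not part of LSF.
pick_reconfig_cmd() {
    case "$(basename "$1")" in
        lsb.*) echo "badmin reconfig" ;;    # batch files: lsb.queues, lsb.hosts, ...
        lsf.*) echo "lsadmin reconfig" ;;   # base files: lsf.conf, lsf.shared, ...
        *)     echo "unknown config file: $1" >&2; return 1 ;;
    esac
}

pick_reconfig_cmd /opt/lsf/conf/lsf.shared    # prints: lsadmin reconfig
pick_reconfig_cmd lsb.queues                  # prints: badmin reconfig
```

Remember that some parameters still need a full daemon restart (badmin mbdrestart, etc.) rather than a plain reconfig.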
2.3 Configuration files
Directory /etc/init.d:
/etc/init.d/lsf                      LSF service autostart script
Directory /apps/platform/8.3/lsf/conf:
lsf.conf                             main LSF configuration file
lsf.cluster.cluster83                cluster configuration file
lsf.shared                           shared resource definitions
./lsbatch/cluster83/configdir/lsb.*  batch scheduling configuration files:
lsb.users                            users and user groups
lsb.queues                           queues
lsb.params                           scheduling parameters
lsb.applications                     application profiles
lsb.hosts                            hosts and host groups
lsb.resources                        resources
lsb.modules                          scheduler modules

2.4 Common commands
bsub: submit a job; bjobs: show job information; bhist: show job history; lshosts: show static host resources; bhosts, lsload: show host status and load; bqueues: show queue configuration; blimits: show limits; lsid: show cluster version and master; bmod: modify bsub options; and so on.

2.5 Resource-based scheduling
bsub -R "(type==LINUX2.4 && r1m < 2.0) || (type==AIX && r1m < 1.0)"
Or define it in a queue (lsb.queues) or an application profile (lsb.applications):
RES_REQ=select[(type==LINUX2.4 && r1m < 2.0) || (type==AIX && r1m < 1.0)]
bsub -R "select[type==any && swp>=300 && mem>500] order[swp:mem] rusage[swp=300,mem=500]" job1
bsub -R "rusage[mem=500:app_lic_v2=1 || mem=400:app_lic_v1.5=1]" job1
bsub -R "select[type==any && swp>=300 && mem>500] order[mem]" job1

2.6 Configuring fairshare scheduling
2.6.1 Adding a round-robin queue
Edit lsb.queues and add:
Begin Queue
QUEUE_NAME = roundRobin
PRIORITY   = 40
FAIRSHARE  = USER_SHARES[[default,1]]
#USERS     = userGroupA      define your own user group
End Queue
Run badmin reconfig to enable the change, and bqueues -l to check the queue configuration.

2.6.2 Adding hierarchical fairshare
Add the following queue:
Begin Queue
QUEUE_NAME = hierarchicalShare
PRIORITY   = 40
USERS      = userGroupB userGroupC
FAIRSHARE  = USER_SHARES[[userGroupB,7] [userGroupC,3]]
End Queue

2.6.3 Cross-queue fairshare
Add the following queues to lsb.queues; note the host group and user group definitions.
Begin Queue
QUEUE_NAME  = verilog
DESCRIPTION = master queue definition cross-queue
PRIORITY    = 50
FAIRSHARE   = USER_SHARES[[user1,100] [default,1]]
FAIRSHARE_QUEUES = normal short
HOSTS       = hostGroupC
# resource contention
#RES_REQ    = rusage[verilog=1]
End Queue
Begin Queue
QUEUE_NAME  = short
DESCRIPTION = short jobs
PRIORITY    = 70    # highest
HOSTS       = hostGroupC
RUNLIMIT    = 5 10
End Queue
Begin Queue
QUEUE_NAME  = normal
DESCRIPTION = default queue
PRIORITY    = 40    # lowest
HOSTS       = hostGroupC
End Queue

2.6.4 Enabling the configuration
badmin reconfig
Submit jobs and watch the users' dynamic priorities change:
bqueues -r -l normal

2.7 Configuring preemptive scheduling
A minimal slot-preemption setup:
Begin Queue
QUEUE_NAME = short
PRIORITY   = 70
HOSTS      = hostGroupC    # potential conflict
PREEMPTION = PREEMPTIVE[normal]
End Queue
Begin Queue
QUEUE_NAME = normal
PRIORITY   = 40
HOSTS      = hostGroupC    # potential conflict
PREEMPTION = PREEMPTABLE[short]
End Queue
Submit jobs to both queues and check the pending reason of the preempted job.

2.8 Configuring global limits
2.8.1 Limiting the number of jobs a user may run
Add to lsb.users:
Begin User
USER_NAME  MAX_JOBS  JL/P
user1      4         -
user2      2         1
user3      -         2
groupA     8         -
groupB     1         1
default    2         -
End User

2.8.2 Limiting the number of jobs a host may run
In lsb.hosts:
Begin Host
HOST_NAME  MXJ  JL/U
host1      4    2
host2      2    1
host3      !    -
End Host

2.8.3 Queue-level run limits
Add to lsb.queues:
Begin Queue
QUEUE_NAME = myQueue
HJOB_LIMIT = 2
PJOB_LIMIT = 1
UJOB_LIMIT = 4
HOSTS      = hostGroupA
USERS      = userGroupA
End Queue

2.8.4 Setting general limits
Examples of global general limits in lsb.resources:
Begin Limit
USERS  QUEUES  HOSTS  SLOTS  MEM  SWP
user1  -       hostB  -      -    20%
user2  normal  hostA  -      20   -
End Limit
Begin Limit
NAME     = limit1
USERS    = user1
PER_HOST = hostA hostC
TMP      = 30%
SWP      = 50%
MEM      = 10%
End Limit
Begin Limit
PER_USER  QUEUES  HOSTS    SLOTS  MEM  SWP  TMP  JOBS
groupA    -       hgroup1  -      -    -    -    2
user2     normal  -        -      200  -    -    -
-         short   -        -      -    -    -    200
End Limit

2.8.5 Enabling the configuration
badmin reconfig

2.9 Configuring the esub submission control script
The global esub script runs when a job is submitted; it can be invoked automatically or explicitly to control how users submit jobs. Create esub.project under $LSF_SERVERDIR and chmod it executable:
#!/bin/sh
if [ "_$LSB_SUB_PARM_FILE" != "_" ]; then
    . $LSB_SUB_PARM_FILE
    if [ "_$LSB_SUB_PROJECT_NAME" = "_" ]; then
        echo "You must specify a project!" >&2
        exit $LSB_SUB_ABORT_VALUE
    fi
fi
exit 0
Then define in lsf.conf:
LSB_ESUB_METHOD="project"

2.10 elim resource-collection examples
2.10.1 Reporting free space in the home directory
Create elim.home under $LSF_SERVERDIR and chmod it executable:
#!/bin/sh
while true; do
    home=`df -k /home | tail -1 | awk '{printf "%4.1f", $4/(1024*1024)}'`
    echo 1 home $home
    sleep 30
done

2.10.2 Reporting the number of root processes
Create elim.root under $LSF_SERVERDIR and chmod it executable:
#!/bin/sh
while true; do
    root=`ps -ef | grep -v grep | grep -c root`
    echo 1 rootprocs $root
    sleep 30
done

2.10.3 Reporting application license counts
#!/bin/sh
lic_X=0; num=0
while true; do
    # only the master gathers lic_X
    if [ "$LSF_MASTER" = "Y" ]; then
        lic_X=`lmstat -a -c lic_X.dat | grep -c "Users of lic_X"`
    fi
    # only training8 and training1 gather simpton licenses
    if [ "`hostname`" = "training8" -o "`hostname`" = "training1" ]; then
        num=`lmstat -a -c simpton_lic.dat | grep -c "Users of simpton"`
    fi
    # all hosts, including the master, gather the following
    root=`ps -efw | grep -v grep | grep -c root`
    tmp=`df -k /var/tmp | grep var | awk '{print $4/1024}'`
    if [ "$LSF_MASTER" = "Y" ]; then
        echo 4 lic_X $lic_X simpton $num rtprc $root tmp $tmp
    else
        echo 3 simpton $num rtprc $root tmp $tmp
    fi
    # use the same INTERVAL value as defined in lsf.shared
    sleep 60
done
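The license elims above work by parsing lmstat output. The core of that parsing can be tested without a license server: FlexNet's lmstat typically prints a line of the form "Users of FEATURE: (Total of N licenses issued; Total of M licenses in use)". The following sketch extracts the free count from such text; the feature name and sample line are illustrative, and in production the input would come from "lmutil lmstat -a -c <license file>":

```shell
#!/bin/sh
# Sketch: compute free licenses (issued minus in use) from an lmstat-style
# "Users of FEATURE" line read on stdin. Illustrative only.
free_licenses() {   # $1 = feature name
    grep "Users of $1" | \
    awk '{ for (i = 1; i <= NF; i++)
             if ($i == "of" && $(i+2) ~ /licenses/) n[++k] = $(i+1)
           print n[1] - n[2] }'
}

sample='Users of CFD+_SOLV_Ser:  (Total of 8 licenses issued;  Total of 3 licenses in use)'
echo "$sample" | free_licenses "CFD+_SOLV_Ser"    # prints 5
```

Scanning for the two numbers that precede the word "licenses" is more robust than the fixed cut field positions used in the elim.lic example, since lmstat's column positions shift with the feature name length.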
2.10.4 Testing an elim script
Run ./elim.root directly and check that its output is correct.

2.10.5 Adding resource definitions and the resource map
Add the rootprocs definition to lsf.shared, and add the resource-to-host mapping in the ResourceMap section of the lsf.cluster file. Enable the configuration:
lsadmin reconfig; badmin reconfig

2.10.6 Checking resource values
lsload -l

3 LSF command-line integration examples
This section shows several ways of integrating different applications. Spooling files and bsub command lines are freely interchangeable.

3.1 CFD+ integration (spooling file)
3.1.1 CFD+ installation and license
Installation host: ln-3620-4
License: /gpfs/software/cfdpp/mbin/Metacomp.lic
License server: ln-3620-4
Start the license server:
[hpcadmin@mn-3650 jessi]$ ssh ln-3620-4
Last login: Tue Mar 26 19:19:24 2013 from mn-3650.private.dns.zone
[hpcadmin@ln-3620-4 ~]$ /gpfs/software/cfdpp/mbin/lmgrd -c /gpfs/software/cfdpp/mbin/Metacomp.lic
Confirm that the license server is running:
/gpfs/software/cfdpp/mbin/lmutil lmstat -a -c /gpfs/software/cfdpp/mbin/Metacomp.lic

3.1.2 License-tracking elim
Add the elim (only one elim instance is needed for the whole cluster, so the script only goes on the master host). On the master:
[root@mn-3650 jessi]# cd $LSF_SERVERDIR
[root@mn-3650 etc]# pwd
/opt/lsf/8.3/linux2.6-glibc2.3-x86_64/etc
[root@mn-3650 etc]# cat elim.lic
#!/bin/sh
totallicences=`/gpfs/software/cfdpp/mbin/lmutil lmstat -a -c /gpfs/software/cfdpp/mbin/Metacomp.lic | grep "Users of CFD+_SOLV_Ser" | /bin/cut -d' ' -f7`
while true
do
    usedlicences=`/gpfs/software/cfdpp/mbin/lmutil lmstat -a -c /gpfs/software/cfdpp/mbin/Metacomp.lic | /bin/grep "Users of CFD+_SOLV_Ser" | /bin/cut -d' ' -f13`
    cfd_lic=$(( $totallicences - $usedlicences ))
    echo "1 cfd_lic $cfd_lic"
    /bin/sleep 30
done
[root@mn-3650 etc]# chmod a+x elim.lic
Edit the following configuration files:
[root@mn-3650 etc]# vi $LSF_ENVDIR/lsf.shared
Add the line:
cfd_lic    Numeric    30    Y    (CFD+ License)
[root@mn-3650 etc]# vi $LSF_ENVDIR/lsf.cluster
In the ResourceMap section add:
Begin ResourceMap
RESOURCENAME  LOCATION
cfd_lic       [all]
End ResourceMap
[root@mn-3650 etc]# lsadmin reconfig; badmin reconfig

3.1.3 Adding the CFD+ job starter
Not needed when using a spooling file (it is used by the Portal integration). Add the job starter executable:
[hpcadmin@mn-3650 jessi]$ cat /opt/lsf/jobstarter/cfd_starter
#!/bin/sh
MPI_RUN=/gpfs/software/cfdpp/hpmpi/bin/mpirun
case "$PRESSION" in
    SINGLE_PRESSION)
        CFD_CMD=/gpfs/software/cfdpp/mbin/mcfd.11.1/r4_hpmpimcfd ;;
    DOUBLE_PRESSION)
        CFD_CMD=/gpfs/software/cfdpp/mbin/mcfd.11.1/hpmpimcfd ;;
esac
CMD="$* -hostfile $LSB_DJOB_HOSTFILE $CFD_CMD"
eval "$CMD"

3.1.4 Adding the CFD application profile
Edit lsb.applications and add the following:
Begin Application
NAME        = cfd
JOB_STARTER = /opt/lsf/jobstarter/cfd_starter
RES_REQ     = "rusage[cfd_lic=1]"
End Application
Run badmin reconfig to make the file take effect, then use bapp -l cfd to verify:
[root@mn-3650 bin]# bapp -l cfd
APPLICATION NAME: cfd
 -- No description provided.
STATISTICS:
   NJOBS  PEND  RUN  SSUSP  USUSP  RSV
   12     12    0    0      0      0
PARAMETERS:
JOB_STARTER: /opt/lsf/jobstarter/cfd_starter
RES_REQ: "rusage[cfd_lic=1]"

3.1.5 CFD+ command-line submission script example
[hpcadmin@mn-3650 jessi]$ cat cfd.sh
#!/bin/sh
#BSUB -n 12
#BSUB -o %J.out
#BSUB -e %J.err
#BSUB -app cfd
#BSUB -R "rusage[cfd_lic=1]"
cd /gpfs/software/cfd+/test/ogive
/gpfs/software/cfdpp/hpmpi/bin/mpirun -hostfile $LSB_DJOB_HOSTFILE /gpfs/software/cfdpp/mbin/mcfd.11.1/hpmpimcfd
Then submit the job with bsub < cfd.sh.

3.2 Gaussian integration (spooling file)
3.2.1 Gaussian installation and license
Path: /gpfs/software/Gaussian/
License: unlicensed build; a single job can only run within one host.

3.2.2 Gaussian command-line submission script example
The script g03.sh:
#!/bin/sh
#BSUB -q qchem
#BSUB -n 4
#BSUB -R "span[hosts=1]"
#BSUB -cwd .
#BSUB -e %J.err
#BSUB -o %J.out
JOB=Full_codes_112_ipr_C1_
JOBNAME=`basename "$JOB" .com`
export g03root=/gpfs/software/Gaussian
export GAUSS_SCRDIR=/tmp
source $g03root/g03/bsd/g03.profile
/gpfs/software/Gaussian/g03/g03 < $JOB > "$JOBNAME.log"
Submit the job: bsub < g03.sh

3.3 Abaqus script integration (bsub command)
1) Create the abaqus_run.sh script:
#!/bin/sh
# version: 1.3.0
export ABAQUS_CMD="/gpfs/software/Abaqus/Commands/abaqus"
export LM_LICENSE_FILE="/gpfs/software/Abaqus/License/abq612.lic"
# number of CPUs; must match the number given to -n on the bsub command line
export NCPU=16
# input file
export INPUT_FILE=beam.inp
# job name
export JOB_NAME=abaqus_job3
$ABAQUS_CMD job=$JOB_NAME cpus=$NCPU input="$INPUT_FILE"
2) Submit through LSF. Change into the directory holding the input data and run:
bsub -q qeng -n 16 ./abaqus_run.sh

3.4 Amber jobs (blaunch integration, with accounting)
For Intel MPI, write the mpdboot.lsf script, make it executable, and place it under $LSF_SERVERDIR. Then write the submission script:
[ymei@mnis test]$ cat new.sh
#!/bin/sh
#BSUB -q small
#BSUB -n 128
#BSUB -o %J.out
#BSUB -e %J.err
#BSUB -J IMPI
#BSUB -x
#export PATH=/gpfs01/software/intel/impi/24/intel64/bin:$PATH
#/gpfs01/home/ymei/jessi/mpdboot.lsf
mpdboot.lsf
export I_MPI_DEVICE=ssm
#export I_MPI_FABRICS=shm:ofa
#export I_MPI_FAST_STARTUP=1
#export I_MPI_DEVICE=rdssm
#mpiexec -np $LSB_DJOB_NUMPROC /gpfs01/software/intel/impi/24/test/helloword
mpiexec -np $LSB_DJOB_NUMPROC $AMBERHOME/bin/sander.MPI -ng 32 -groupfile remd10.groupfile
mpdallexit
Submit the job: bsub < new.sh

3.5 Platform MPI jobs
3.5.1 Installing Platform MPI
Make sure passwordless ssh works for the user. Install Platform MPI into a shared directory:
sh platform_mpi-0-0320r.x64.sh -installdir=/opt/pmpi norpm
If the C compiler is missing, run:
yum install gcc

3.5.2 Verifying the installation outside LSF
Set the environment variables:
export MPI_REMSH="ssh -x"
export MPI_ROOT=/opt/pmpi/opt/ibm/platform_mpi/
Compile the helloworld example program:
/opt/pmpi/opt/ibm/platform_mpi/bin/mpicc -o helloworld /opt/pmpi/opt/ibm/platform_mpi/help/hello_world.c
[root@server3 help]# /opt/pmpi/opt/ibm/platform_mpi/bin/mpirun -f ./help/hosts
warning: MPI_ROOT /opt/pmpi/opt/ibm/platform_mpi/ != mpirun path /opt/pmpi/opt/ibm/platform_mpi
Hello world! I'm 1 of 4 on server3
Hello world! I'm 0 of 4 on server3
Hello world! I'm 3 of 4 on computer007
Hello world! I'm 2 of 4 on computer007
[root@server3 help]# cat ./help/hosts
-h server3 -np 2 /opt/pmpi/opt/ibm/platform_mpi/help/helloworld
-h computer007 -np 2 /opt/pmpi/opt/ibm/platform_mpi/help/helloworld

3.5.3 Submitting through LSF
export MPI_REMSH=blaunch
$ mpirun -np 4 -IBV /helloworld
$ mpirun -np 32 -IBV /helloworld
$ mpirun -np 4 -TCP /helloworld
or:
[root@server3 conf]# bsub -o %J.out -e %J.err -n 4 /opt/pmpi/opt/ibm/platform_mpi/bin/mpirun -lsb_mcpu_hosts /opt/pmpi/opt/ibm/platform_mpi/help/helloworld
Job <210> is submitted to default queue <normal>.
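With -lsb_mcpu_hosts, mpirun reads the allocation LSF places in the LSB_MCPU_HOSTS environment variable, which has the form "hostA 2 hostB 4" (host name followed by slot count, repeated). A sketch of that expansion, for illustration only — mpirun performs it internally:

```shell
#!/bin/sh
# Sketch: expand an LSB_MCPU_HOSTS-style allocation ("host n host n ...")
# into one line per rank slot, the way a hostfile would list them.
expand_mcpu_hosts() {
    echo "$1" | awk '{ for (i = 1; i < NF; i += 2)
                         for (j = 0; j < $(i+1); j++) print $i }'
}

expand_mcpu_hosts "server3 2 computer007 2"
# server3
# server3
# computer007
# computer007
```

This is handy when adapting an MPI launcher that only accepts a flat hostfile to run under LSF's allocation.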
[root@server3 conf]# bjobs
JOBID  USER  STAT  QUEUE   FROM_HOST  EXEC_HOST  JOB_NAME    SUBMIT_TIME
210    root  PEND  normal  server3               *elloworld  May  9 10:55
[root@server3 conf]# cat 210.out
Sender: LSF System <jessi@computer007>
Subject: Job 210: </opt/pmpi/opt/ibm/platform_mpi/bin/mpirun -lsb_mcpu_hosts /opt/pmpi/opt/ibm/platform_mpi/help/helloworld> in cluster <jessi_cluster> Done

Job </opt/pmpi/opt/ibm/platform_mpi/bin/mpirun -lsb_mcpu_hosts /opt/pmpi/opt/ibm/platform_mpi/help/helloworld> was submitted from host <server3> by user <root> in cluster <jessi_cluster>.
Job was executed on host(s) <4*computer007>, in queue <normal>, as user <root> in cluster <jessi_cluster>.
</root> was used as the home directory.
</opt/lsf/conf> was used as the working directory.
Started at Thu May  9 18:49:06 2013
Results reported at Thu May  9 18:49:07 2013

Your job looked like:
------------------------------------------------------------
# LSBATCH: User input
/opt/pmpi/opt/ibm/platform_mpi/bin/mpirun -lsb_mcpu_hosts /opt/pmpi/opt/ibm/platform_mpi/help/helloworld
------------------------------------------------------------

Successfully completed.

Resource usage summary:
    CPU time               : 0.23 sec.
    Max Memory             : 2 MB
    Average Memory         : 2.00 MB
    Total Requested Memory : -
    Delta Memory           : -
    (Delta: the difference between total requested memory and actual max usage.)
    Max Swap               : 36 MB
    Max Processes          : 1
    Max Threads            : 1
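All of the spooling-file examples in section 3 share the same shape: a block of #BSUB directives followed by the commands to run. That shape can be generated mechanically; make_spool_script below is an illustrative helper of our own, not an LSF utility:

```shell
#!/bin/sh
# Sketch: emit a minimal LSF spooling file like the cfd.sh and g03.sh
# examples above. Submit the result with "bsub < file".
make_spool_script() {   # $1 = queue, $2 = slots, remaining args = command
    queue=$1; slots=$2; shift 2
    printf '#!/bin/sh\n'
    printf '#BSUB -q %s\n' "$queue"
    printf '#BSUB -n %s\n' "$slots"
    printf '#BSUB -o %%J.out\n#BSUB -e %%J.err\n'
    printf '%s\n' "$*"
}

make_spool_script normal 4 sleep 60 > job.sh
cat job.sh
# then submit it: bsub < job.sh
```

Application-specific directives such as -app, -R "rusage[...]", or -x can be appended the same way before the command line.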