




An old rookie learns VIO. I found some material online, skimmed a few pages, and started installing right away (any more reading and I would have fallen asleep), so plenty of questions will keep coming up. Let me dig the pit first and fill it in slowly.

I found a 595, checked that VIO, micro processor and so on were all enabled, and created a new partition. I chose "vio server" as the partition type, then mostly took the defaults: processors 1-1-4 (min/desired/max), memory 1-1-4 as well. I gave it one network adapter and two disks, then configured an IP so it could reach the NIM server. Next I installed over the network; on the NIM side the source was the 5.2 CDs, and I remember some material saying AIX 5L would do. The installation went just like an ordinary partition. I drove it from the HMC, but right before it finished, the HMC became unreachable. Never mind: the HMC is thousands of kilometers away, so I will wait for the colleagues over there to fix it.

With nothing else to do I went back to the books, and suddenly noticed that the "Using the Virtual I/O Server" book says the Virtual I/O Server CD must be in the drive during installation. Strange. Isn't it supposed to be the AIX CD?

Question 1: Do you have to install from the Virtual I/O Server CD rather than the AIX OS CD?

Answer 1: Yes, use the VIO Server CD. You cannot use the AIX CD; otherwise you just end up with an ordinary AIX partition. "Place the Virtual I/O Server Version 1.3 DVD disk in the drive." The VIO CD should be part of the default configuration on the 59 series machines; every 59 machine I have seen ships with VIO and PLM.

I have it! The recipe:

1. Install the VIO server.
2. Configure the Shared Ethernet Adapter:
   mkvdev -sea <physical adapter> -vadapter <virtual server adapter> -default <default virtual adapter> -defaultid <default PVID>
3. Add an IP address:
   mktcpip -hostname <HostName> -inetaddr <Address> -interface <Interface>
4. Create a volume group on the VIO server:
   mkvg -f -vg <vg name> <pv name>
5. Create the LV that the client will use:
   mklv [-mirror] -lv <lv name> <vg name> <size>G
6. On the HMC, create the corresponding client adapters (scsi, lan, serial). Note: from version 1.3 you can create the matching server adapter at the same time as the client adapter.
7. Create the virtual device, mapping the LV to the server-side vhost adapter created in step 6 (see the worked sketch at the end of this section):
   mkvdev -vdev <the LV just created> -vadapter <the vhost adapter just created> -dev <the dev name you want>

OK, configured! Isn't that simple?

No CD on hand yet, so let me sort out some concepts first. Virtual SCSI devices come in three backing types: 1. lv; 2. pv; 3. cd-rom/dvd. You create a virtual SCSI server adapter on the server side and a virtual SCSI client adapter on the client side, and then use the device just like an ordinary LUN.

Question 2: Since this works at the lv and pv level, the VIO server can hold physical SCSI disks, or disks on HBA-attached storage, and they all show up as PVs that can all be shared out. Correct?
Question 3: Optical devices are intuitive enough, but in use are they held exclusively, like an ordinary adapter on a 595?
Question 4: A PV can be assigned to multiple clients, but how is that handled in use? Concurrent access, the way GPFS does it?
Question 5: Can an existing, ordinary partition create a virtual SCSI client adapter and use resources inside the VIO server?

Answer 2: In the VIO server, the front-end devices are called virtual SCSI devices, and they map to physical devices at the back end; the physical devices include physical disks (both SAN devices and internal storage devices), optical devices, and logical volumes. To create a virtual device, you must first assign the physical storage to a virtual SCSI server adapter. PVs can be exported to client partitions as virtual SCSI disks. But adapter replacements, SAN reconfigurations and the like are unavoidable, and they can change a PV's name or the front-end/back-end mapping inside the VIO server. That is why these PVs need a UDID, a PVID, or an IEEE volume attribute, which is very similar to our usual definitions. A virtual SCSI disk can be backed by a whole physical disk or by an LV; whichever kind of physical storage sits behind it, it has all the standard SCSI features, e.g. it can be a boot device or a NIM target. (The client rootvg that 笨猪 describes in the post below is exactly that: an LV on the server.)

One thing to remember when you do this, and it is the single most important point! If you use an LV as a client's rootvg, make the LV generously large, because rootvg cannot be grown with chvg -g. If you need to extend rootvg later, the only way is to add another LV (equivalent to adding a new disk to the client). As for adapter replacement and similar operations, they do not change the LV, and therefore do not affect the client.

Answer 3: For an optical device, the back end is the DVD drive and the front end is a virtual SCSI optical device; depending on the physical device, it can be exported as ROM or as RAM. At any given time it can be assigned to only one client logical partition; if another partition holds it, it must be released first, the same situation as with DLPAR. To the client it always looks like a SCSI device, but the back end can be a SCSI, IDE, or even USB drive.

In everyday practice, extending a VG also means adding disks. So for a non-rootvg, can you grow the VG size by running extendlv on the server?
Answer: For a non-rootvg, yes: extend the vdev (the LV on the server), then on the client run chvg -g <vg name> to grow the VG directly, with no extra disk.
Good, understood. But I looked in the VIO command reference, and the chvg there is not for the purpose you describe at all; it does unlock, suspend, and resume. So presumably that chvg applies only to the VIO server, while the chvg -g <vg> command you mention runs on the client and cannot be used on the VIO server.
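To make steps 2 through 7 above concrete, here is the minimal sketch promised at step 7. Every name in it (ent0, ent2, en3, hdisk2, vhost0, clientvg, client1_rvg, vtclient1) is a hypothetical example, not taken from my machine; check your own devices with lsdev and lspv first.

# Run as padmin on the VIO server. All device names are assumed.
# ent0 = physical adapter, ent2 = virtual Ethernet adapter from the profile.
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1

# Put the VIO server's IP on the new SEA interface
# (assume the SEA came up as ent3, i.e. interface en3).
mktcpip -hostname vio_server -inetaddr 136.8.1.24 -interface en3 -netmask 255.255.255.0

# A volume group on a spare internal disk, and a 20 GB LV for the client.
mkvg -f -vg clientvg hdisk2
mklv -lv client1_rvg clientvg 20G

# Map the LV to the vhost adapter that pairs with the client's vscsi adapter
# (lsdev -virtual lists the vhost adapters; assume vhost0 here).
mkvdev -vdev client1_rvg -vadapter vhost0 -dev vtclient1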
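And a sketch of the resize flow from the chvg -g exchange just above, assuming a second LV, client1_dvg, is already mapped to the client in the same way and holds the client's non-rootvg datavg (again, both names are hypothetical):

# On the VIO server: grow the backing LV by 5 GB.
extendlv client1_dvg 5G

# On the client partition (ordinary AIX): grow the VG in place.
# Per the discussion above this works for any VG except rootvg.
chvg -g datavg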
Today the HMC was fixed, so I could finally get back in. The system I installed the other day had completed, and just as the earlier posters said, because I installed from the AIX CDs, what came out is merely an ordinary partition. So I really do have to wait for the VIO CD. In the afternoon I went through the econfig list for the 595 again: the VIO CD is a separate line item under software, and as 野马 said, VIO and PLM come along with machines of this class.

I also noticed that the HMC's DLPAR menu has a "virtual device" entry, where you can add or remove devices and create client adapters; "server adapter" is greyed out, presumably because the VIO server is not up yet. At the same level as DLPAR there is also a "virtual I/O" menu, which lists all the virtual devices.

Answer 5: An ordinary physical partition can add a virtual client adapter from the DLPAR menu options: scsi, lan, or serial. The virtual Ethernet adapter is the client-side adapter, and you create it through the HMC (or IVM). One sentence in the book caught my eye: even without a VIO server you can create a virtual Ethernet adapter; it just cannot communicate with other LAN adapters.

Fine, so I tried creating a new client Ethernet adapter. The system warned that communication with that VIO server had problems (of course, I had only defined the partition, not installed it), but let me continue. When it finished, the new adapter was not visible on the partition. I figure I will delete this partition definition too, start from a clean slate, and test again tomorrow; the second 595 has no VIO server partition yet anyway, so I can do it there.

Damn, still no good. When the client LPAR cannot communicate with the VIO server, even if you force the client Ethernet adapter in, it is not recognized. It looks like the VIO server really must be up before you can create the client-side adapters.

The ISO files have arrived, and I made them into a file system (mounted at /viocd). In the NIM menus there is an entry called "NIM power5 tools"; go in, choose "VIO server", then "setup". Underneath, this calls the installios command. This machine is the NIM server; the HMC profile for the partition that will become the VIO server is ready, its network adapter has an address, and it pings this NIM server fine. I filled in the fields in that menu; expressed as a command line, it amounts to:

installios -d /viocd -i 136.8.1.24 -g 0.0.0.0 -S 255.255.255.0

-d is where the VIO server CD (here, the mounted ISO) lives; -i is the address of the machine on which to install the VIO server; -g is the gateway and -S the netmask. It ran:

Creating the /export/installios/mksysb directory.
Creating the /export/installios/spot directory.
Creating the /export/installios/bundle directory.
Creating the /export/installios/lpp_source directory.
Defining installios_client object.
done
Filesystem size changed to 2621440
Copying from /viocd/nimol/ioserver_res/mksysb to /export/installios/mksysb.
done
Defining installios_mksysb object.
done
Copying from /viocd/installp/ppc/en_US.bnd to /export/installios/bundle.
done
Defining installios_bundle object.
done
Creating installios_lpp_source lpp_source from /export/installios/bundle/en_US.bnd bundle file.
done
Defining installios_lpp_source object.
done
Defining installios_spot object.
done
Copying from /viocd/nimol/ioserver_res/bosinst.data to /export/installios/installios_bosinst.data.
done
Defining installios_bosinst object.
done
done
Initiating install.
failed

So it automatically created the /export/installios directory on the local machine (the NIM server) and copied the CD contents over, including the mksysb image, but at the very end it failed for no obvious reason. Perhaps the NIM client definition was the problem, so I redefined that client. Then I activated the partition on the HMC, booted from the network adapter, and the installation started quite smoothly, except that it gave me nothing to choose. It simply installed automatically, and it overwrote what was on my original disks (I have four disks in total, three of them empty). After a long wait it finished on its own and rebooted, then stopped at "IO Hosting": the console said that I/O Hosting requires a logical partition, and it sat at the OK prompt.

I used installios -u -f to remove everything and ran setup again; it failed at the same place. It seems my idea of driving the VIO server installation from the NIM server end is not going to work for now. In the NIM menus I reset the installios_client machine, and added an entry for installios_client to /etc/hosts. Note: this time I did not rerun setup on the NIM server. Then I activated the partition from the HMC, booted from the network adapter, and it worked: it downloaded. At last, the screen I had been waiting for, the "press 1 and then Enter" screen. Once that screen appears, the familiar installation screens follow, which means I can choose which disk to install on. From there the process is no different from an ordinary AIX installation. It is installing right now; I am nervous it will stop at the OK prompt again.

So where was the problem? In /etc/hosts. The tool insists on the client being named installios_client: even though I had defined a different client earlier, and even though that client could install over the network, it was still wrong. Anyway, if this install succeeds, I still want to test installing directly from the NIM server. With no books to read I am guessing blindly; none of the books I have cover installing VIO from NIM. Does anyone have material?

Oh no, the OK prompt again. It printed the words "Welcome to Virtual I/O Server", and still stopped at IO hosting:

I/O hosting requires a hosting partition - boot not permitted
EXIT called
ok
0 >

Frustrating.

Installing the Virtual I/O Server using NIM

You can use the following procedures to install the Virtual I/O Server into environments managed by the HMC or Integrated Virtualization Manager using Network Installation Management (NIM).

Prerequisites

You need the following files before beginning this procedure. These files are located on the Virtual I/O Server installation media:

- nimol/ioserver_res/mksysb (the mksysb image)

In addition, the following system requirements must be met:

- A system running AIX 5.3 with 5300-03 or higher, which contains a file system with at least 700 MB available.
- A logical partition of type Virtual I/O Server containing an Ethernet adapter connected to an active network for installing the Virtual I/O Server. For information about creating logical partitions, see Creating the Virtual I/O Server logical partition and partition profile.
- A storage controller containing at least 16 GB of disk space.

After the prerequisites have been met, follow these steps to use NIM to install the Virtual I/O Server:

1. Insert the Virtual I/O Server DVD into the DVD drive.

2. Run the installios command without any arguments to start the installation wizard. The installios wizard guides you through the process of filling out the information necessary to start an installation on the Virtual I/O Server or on an Integrated Virtualization Manager. If you run installios on a NIM client, you are prompted for the location of the bos.sysmgt.nim.master fileset; the NIM client is then configured as a NIM master. For more information about command-line usage of installios, see the installios command.

The installios setup process creates the following NIM resources to start the installation:
- bosinst_data
- installp_bundle
- lpp_source
- mksysb
- resolv_conf
- SPOT
- client definition

If you are installing the Virtual I/O Server logical partition, and if Secure Shell (SSH) and credentials have been configured on the NIM master, the partition is network-booted from the Hardware Management Console (HMC) to begin the installation. If you are installing the Virtual I/O Server logical partition without SSH, or if you are installing the Integrated Virtualization Manager, go to step 3.

3. On the system on which the Virtual I/O Server software will be installed, boot the Virtual I/O Server logical partition or the Integrated Virtualization Manager into System Management Services (SMS) mode.

To boot the Virtual I/O Server logical partition into SMS:
a. On the HMC, right-click the partition to open the menu.
b. Click Activate. The Activate Partition menu opens with a selection of partition profiles. Be sure the correct profile is highlighted.
c. Select the Open a terminal window or console session check box to open a virtual terminal (vterm) window.
d. Click Advanced to open the advanced options menu.
e. For the Boot mode, select SMS.
f. Click OK to close the advanced options menu.
g. Click OK. A vterm window opens for the partition.
h. In the vterm window, select Setup Remote IPL (Initial Program Load).
i. Select the network adapter that will be used for the installation.
j. Select IP Parameters. Enter the client IP address, server IP address, and gateway IP address. Optionally, enter the subnet mask. After you have entered these values, press Esc to return to the Network Parameters menu.
k. Select Ping Test to ensure that the network parameters are properly configured. Press Esc twice to return to the Main Menu.
l. From the Main Menu, select Select Boot Options.
m. Select Select Install/Boot Device.
n. Select Network.
o. Select the network adapter whose remote IPL settings you previously configured.
p. When prompted for Normal or Service mode, select Normal.
q. When asked if you want to exit, select Yes.

To boot the Integrated Virtualization Manager into SMS:
a. Begin with the machine turned off.
b. Switch on the machine, and as icons begin to appear from left to right on the bottom of your display, press F1. Note: If the last icon is displayed before you press F1, you get the normal-mode boot list instead of SMS, so try again.
c. The System Management Services menu opens. Select Utilities.
d. From the System Management Services Utilities menu, select Remote Initial Program Load Setup.
e. From the Network Parameters panel, select IP Parameters. Set or change the displayed values so they are correct for your client system. Specify the IP address of:
   - the client machine you are booting, in the client address field;
   - your NIM master server, in the server address field;
   - your client's gateway, in the gateway address field;
   - your client's subnet mask, in the subnet mask field.
f. After you specify the addresses, press Enter to save them and continue. The Network Parameters window opens.
g. Select the Ping option.
h. Select the network adapter to be used as the client's boot device.
i. Verify that the displayed addresses are the same as the addresses you specified for your boot device. If the addresses are incorrect, press Esc until you return to the main menu, then go back to step e. If they are correct, continue with step j.
j. Press Enter to perform the ping test. The ping test might take several seconds to complete.
k. If the ping test fails, verify that the addresses are correct and analyze the network problem. If the ping test is successful, press Enter to acknowledge the success message.
l. Press Esc until you return to the System Management Services menu.
m. From the System Management Services menu, choose the Select Boot Devices option.
n. Select the network adapter to be used for the network boot from the list of displayed bootable devices.

After the installation is complete, the Virtual I/O Server logical partition or the Integrated Virtualization Manager is ready to be configured and managed.

To remove all the NIM resources created by the installios setup process, run the installios command with the -u flag. If the installios command fails to perform the cleanup, run installios -u and specify the -f flag to force NIM to reset and deallocate resources to the client. This still leaves the NIM environment in place, but removes all resources and directory structures created by the installios wizard. If, however, you want to unconfigure NIM, or to uninstall the bos.sysmgt.nim.master fileset and return the NIM master to a NIM client if it was configured from one, specify installios -u with the -U flag.

You can also install a Virtual I/O Server or an Integrated Virtualization Manager through the SMIT interface: run smitty installios. When invoked on a NIM master, the administrator has two options, Setup for Virtual I/O and Integrated Virtualization Manager Installation and Cleanup after Virtual I/O and Integrated Virtualization Manager Installation; on a NIM client, Configure Client as Master for Virtual I/O and Integrated Virtualization Manager Installation is the only available option. Then fill out the fields required by the installios wizard to invoke the installios command, either to set up the environment and initiate the installation, or to perform a cleanup.
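For reference, here are the cleanup variants described a couple of paragraphs up, spelled out as commands (the flags come straight from that passage):

installios -u        # remove the NIM resources and directories the wizard created
installios -u -f     # same, but force NIM to reset and deallocate the client's resources
installios -u -U     # additionally unconfigure NIM / uninstall bos.sysmgt.nim.master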
I then discovered a silly mistake of my own: yesterday I had been using the wrong partition; in all the confusion I had actually pointed everything at an ordinary partition. With the VIO partition properly configured, once more from the top:

root@lkjnim:/ # installios
Enter the source of the installation images [/dev/cd0]: /viocd
Enter the name of the HMC [none]: 136.8.1.1
Password:
The following objects of type "managed system" were found. Please select one:
1. Server-9119-595-SN024D86D
Enter a number: 1
Password:
The following objects of type "virtual I/O server partition" were found. Please select one:
1. vio_server
Enter a number: 1
Password:
The following objects of type "profile" were found. Please select one:
1. vio_server
Enter a number: 1
Enter the intended IP address of the client: 136.8.1.24
Enter the intended subnet mask of the client: 255.255.255.0
Enter the gateway of the client: 0.0.0.0
Enter the speed for the client [100]:
Enter the duplex for the client [full]:
Would you like to configure the network for the client after the installation (yes/no)? no
Enter a language in which to view the license agreement, or leave blank and press enter to view the agreement later:
Here are the values you entered:
managed_system = Server-9119-595-SN024D86D
vio_partition = vio_server
profile = vio_server
source = /viocd
client_addr = 136.8.1.24
netmask = 255.255.255.0
gateway = 0.0.0.0
speed = 100
duplex = full
Press enter to proceed or type Ctrl-c to cancel.
Creating the /export/installios/mksysb directory.
Creating the /export/installios/spot directory.
Creating the /export/installios/bundle directory.
Creating the /export/installios/lpp_source directory.
Defining installios_client object.
done
Copying from /viocd/nimol/ioserver_res/mksysb to /export/installios/mksysb.
done
Defining installios_mksysb object.
done
Copying from /viocd/installp/ppc/en_US.bnd to /export/installios/bundle.
done
Defining installios_bundle object.
done
Creating installios_lpp_source lpp_source from /export/installios/bundle/en_US.bnd bundle file.
done
Defining installios_lpp_source object.
done
Defining installios_spot object.
done
Copying from /viocd/nimol/ioserver_res/bosinst.data to /export/installios/installios_bosinst.data.
done
Defining installios_bosinst object.
done
done
Initiating install.
failed
Initiating network boot.
Password:
# Connecting to vio_server
# Connected
# Checking for power off.
# Power off the node
# Wait for power off.
# Power off complete.
# Power on vio_server to Open Firmware.
# Power on complete.
lpar_netboot: cannot connect to vio_server
failed

I am now digging into the installios command itself. My LPAR has a single four-port adapter, and the port I need to boot from is the fourth one. I suspect that is where things go wrong, because I see nothing in the command that selects an adapter; it presumably used the first port. Here is the lpar_netboot invocation that installios makes:

/usr/bin/ssh hscroot@$hmc lpar_netboot $debug_arg -A -D -t ent -d $duplex -s $speed -S $server -G $gateway -C $client $partition $profile $managed

$debug_arg is "-x -v", which controls the diagnostic output.
-A returns all adapters of the given type; here, every ent adapter.
-D performs a ping test so that the adapter can successfully ping the server specified with the -S flag; in my environment only the fourth port can ping through.

Nothing here specifies which adapter to boot from; the -l flag is missing. So I ran lpar_netboot directly on the HMC to fetch the MAC address, just to check whether the command works at all. The result:

missing close-brace
    while executing
"if $noboot  # Display information for all supported adapters
 system $DSPMSG -s 1 $CATALOG 52 # Getting adapter location codes.n 2/dev/nu..."
    (file /usr/hmcrbin/lpar_netboot line 2445)

Am I really this unlucky? Is the command itself broken? Under protest from my colleagues I finally gave up, for now, on the test of installing VIO directly from the NIM server, even though that is exactly how the official documentation does it. I reset installios_client in NIM and redefined it, this time as a plain mksysb installation. Then, from the HMC interface, I booted from the fourth adapter; the installation is running now and looks normal. (For what the missing adapter selection might have looked like, see the sketch at the end of this post.)

I believe IBM's original design really does allow the direct installation; my skill is simply not up to it yet. But there is one more wrinkle: installios defines the resources and then launches the installation immediately, with no pause in between, so at what point does the machine object in NIM get defined? installios obviously defines it for you automatically, but it had defined it wrong, which is why that "cannot connect to vio_server" message appeared: the machine's prev_state was not "ready for a NIM operation", and its Cstate was not "BOS installation has been enabled". The machine's state as it should look:

root@lkjnim:/etc # lsnim -l installios_client
installios_client:
   class          = machines
   type           = standalone
   comments       = Client for installios
   connect        = shell
   platform       = chrp
   netboot_kernel = mp
   if1            = network1 client24 0
   net_settings1  = auto auto
   cable_type1    = N/A
   Cstate         = BOS installation has been enabled
   prev_state     = ready for a NIM operation
   Mstate         = not running
   boot           = boot
   lpp_source     = installios_lpp_source
   mksysb         = installios_mksysb
   spot           = installios_spot
   control        = master

And the state during the installation:

   Cstate         = Base Operating System installation is being performed
   prev_state     = BOS installation has been enabled
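Back to the adapter-selection theory above: as far as I can tell, the HMC's lpar_netboot accepts a -l flag taking a physical location code, and -M together with -n just discovers MAC addresses without booting. A hedged sketch of what the calls might look like; the location code is invented and <nim_server_ip> is a placeholder, so treat this as a guess rather than a verified recipe:

# List MAC addresses / location codes of all ent adapters, without booting
# (roughly the probe that died with the Tcl error above):
lpar_netboot -M -n -A -t ent vio_server vio_server Server-9119-595-SN024D86D

# Boot from one specific port by its location code, instead of letting -A choose:
lpar_netboot -x -v -D -t ent -l U5791.001.99B0027-P1-C04-T4 \
    -s 100 -d full -S <nim_server_ip> -G 0.0.0.0 -C 136.8.1.24 \
    vio_server vio_server Server-9119-595-SN024D86D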