




VMware vSAN 6.6 Knowledge Transfer Kit: Architecture Overview

Agenda
- Architecture Overview
- Hybrid Architecture
- All-Flash Architecture
- Hardware Requirements
- Design Considerations
- Customer Use Cases

Architecture Overview

VMware vSAN
- Radically simple hypervisor-converged storage for VMs: VMware vSphere + vSAN presents a single vSAN datastore.

vSAN Architecture
- Hybrid
  - 40K IOPS per host
  - Caching: read and write cache
  - Capacity tier: SAS / NL-SAS / SATA / direct-attached JBOD
- All-Flash
  - 90K IOPS per host with sub-millisecond latency
  - Caching: writes are cached first; reads go directly to the capacity tier
  - Capacity tier: flash devices
- Cache device options for either architecture: SSD, PCIe flash, UltraDIMM

vSAN Hybrid Architecture
- Radically simple hypervisor-converged storage software: vSphere + vSAN presenting a single vSAN datastore.

VMware vSAN All-Flash Architecture
- vSphere + vSAN presenting a vSAN all-flash datastore built entirely from SSDs: performance with predictable latency.

vSAN Stretched Cluster: Active-Active Data Centers
- A single VMware vSAN cluster split across two sites
- Each site is a Fault Domain (FD)
- Site-level protection with zero data loss and near-instantaneous recovery
- Automated failover
- Technical details
  - Support for up to 5 ms RTT latency between data sites; 10 Gbps bandwidth expectation
  - The witness VM can reside anywhere: up to 200 ms RTT latency, 100 Mbps bandwidth requirement

vSAN Disaster Recovery
- Replication between vSAN datastores enables an RPO as low as 5 minutes
- Exclusively available with vSAN 6.x; leverages VMware vSphere Replication
- Leverage VMware Site Recovery Manager for disaster recovery orchestration
- Stretched across metro distance (5 ms RTT, 10 GbE), replicated across geo (any distance, 5-minute RPO)

ROBO Deployments
- vSAN 6.1 supports 2-node (ROBO) deployments
- Both hosts are used only to store data
- An extension of the vSAN Stretched Cluster solution; each host is a Fault Domain (FD)
- All sites managed centrally by one VMware vCenter Server instance
- Technical details
  - One witness host is needed per vSAN cluster; the witness node can be a VMware ESXi VM
  - Up to 500 ms RTT latency, 1.5 Mbps bandwidth requirement to the witness
- vSAN 6.5 adds 2-node Direct Connect, with vSAN traffic on the direct link and management/witness traffic routed separately

vSAN for Cloud Native Apps
- VMware vSphere Integrated Containers: run containerized storage in production
  - Native vSphere container data volume support
  - Leverages existing vSphere / vSAN features
- vSAN for Photon Platform: a non-vSphere SDDC stack for rapidly deploying and managing containerized applications at scale
  - Supports container volumes shared among a cluster of hosts
  - Developer-friendly APIs for storage provisioning and consumption
  - IT-friendly APIs / GUI for infrastructure management and operation
- vSphere Docker Volume Driver: test/dev containerized storage
  - Works on existing vSphere deployments and datastores
  - Download:
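The RTT and bandwidth limits quoted above for stretched clusters and 2-node ROBO can be captured in a small pre-deployment sanity check. This is an illustrative sketch, not a VMware tool: the function and dictionary names are invented, while the thresholds come directly from the slides.

```python
# Network limits quoted in this kit for each vSAN topology.
# Keys and function names are illustrative; thresholds are from the slides.
LIMITS = {
    # topology: (max RTT in ms, min bandwidth in Mbps)
    "stretched_data_sites": (5, 10_000),   # 5 ms RTT, 10 Gbps between data sites
    "stretched_witness":    (200, 100),    # witness VM: 200 ms RTT, 100 Mbps
    "robo_witness":         (500, 1.5),    # 2-node ROBO witness: 500 ms, 1.5 Mbps
}

def link_ok(topology, rtt_ms, bandwidth_mbps):
    """Return True if a measured link meets the quoted vSAN requirement."""
    max_rtt, min_bw = LIMITS[topology]
    return rtt_ms <= max_rtt and bandwidth_mbps >= min_bw

print(link_ok("stretched_data_sites", 4, 10_000))  # within both limits
print(link_ok("robo_witness", 600, 2))             # RTT too high for a ROBO witness
```

A check like this is worth running with measured values (for example from ping and iperf) before committing to a site pairing, since the data-site and witness links have very different requirements.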
Deduplication and Compression for Space Efficiency (All-Flash Only)
- Nearline deduplication and compression, performed per disk group
- Enabled at the cluster level
- Data is deduplicated when de-staged from the cache tier to the capacity tier
- Fixed-block-length deduplication (4 KB blocks)
- Compression is applied after deduplication: if a 4 KB block compresses to 2 KB or less, the compressed block is stored; otherwise the full 4 KB block is stored
- Significant space savings are achievable, making the economics of an all-flash vSAN very attractive

RAID-5 Inline Erasure Coding (All-Flash Only)
- Used when Number of Failures to Tolerate = 1 and Failure Tolerance Method = Capacity
- RAID-5 3+1 layout (4-host minimum)
- 1.33x overhead instead of 2x: a 20 GB disk consumes 40 GB with RAID-1, but about 27 GB with RAID-5

RAID-6 Inline Erasure Coding (All-Flash Only)
- Used when Number of Failures to Tolerate = 2 and Failure Tolerance Method = Capacity
- RAID-6 4+2 layout (6-host minimum)
- 1.5x overhead instead of 3x: a 20 GB disk consumes 60 GB with RAID-1, but 30 GB with RAID-6
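The capacity arithmetic behind these protection methods is simple enough to sketch. The multipliers below come straight from the slides; the helper name is illustrative.

```python
# Raw capacity consumed per GB of usable data for each vSAN protection
# method discussed above. RAID-1 stores full mirror copies; erasure coding
# adds parity instead (3 data + 1 parity, or 4 data + 2 parity).
OVERHEAD = {
    "raid1_ftt1": 2.0,    # mirroring, Failures to Tolerate = 1
    "raid1_ftt2": 3.0,    # mirroring, Failures to Tolerate = 2
    "raid5_3p1": 4 / 3,   # RAID-5 3+1, FTT = 1
    "raid6_4p2": 6 / 4,   # RAID-6 4+2, FTT = 2
}

def raw_consumed_gb(usable_gb, method):
    """Raw datastore capacity consumed by `usable_gb` of VM data."""
    return usable_gb * OVERHEAD[method]

for method in ("raid1_ftt1", "raid5_3p1", "raid1_ftt2", "raid6_4p2"):
    print(f"20 GB stored as {method}: {raw_consumed_gb(20, method):.1f} GB")
```

Running this reproduces the slide figures: 40.0 GB and 26.7 GB for FTT=1 (RAID-1 vs RAID-5), and 60.0 GB and 30.0 GB for FTT=2 (RAID-1 vs RAID-6).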
2-Node Direct Connect Configuration (vSAN 6.5)
- Disable VMware vSphere High Availability
- VMware vSphere vMotion VMkernel interface setting: explicit failover order, uplink 1 active / uplink 2 standby
- vSAN VMkernel interface setting: explicit failover order, uplink 2 active / uplink 1 standby
- Set the designated witness VMkernel interface to the witness traffic type:
  esxcli vsan network ip add -i vmkX -T=witness
- Validate that the traffic type is witness ("Traffic Type: witness"):
  esxcli vsan network list
- Continue with the 2-node setup, then enable HA for the cluster
- Note: in the VMware vSphere Web Client, the vmknic interface will no longer show as selected for vSAN traffic. Do not re-enable it in the vSphere Web Client.

Hardware Requirements

Hardware Requirements
- Minimum of 3 hosts in a cluster configuration; all 3 hosts must contribute storage
- It is recommended that hosts are configured with similar hardware
- Hosts: scales up to 64 nodes
- Disks: locally attached
  - Hybrid: magnetic disks and flash devices
  - All-Flash: flash devices only
- Network: 1 Gb Ethernet, or 10 Gb Ethernet preferred (10 GbE required for all-flash)
- A "witness" component (metadata only) acts as tie-breaker during availability decisions

Hardware Requirements (cont.)
- Any server in the VMware Compatibility Guide can be used
- All hardware must be listed in the VMware Compatibility Guide for vSAN

Flash-Based Devices (vSAN Hybrid)
- In vSAN hybrid, all read and write operations always go directly to the flash tier
- Flash-based devices serve two purposes in the hybrid architecture:
  - Write-back buffer (30%): writes are acknowledged as soon as they are persisted on flash on all replicas, reducing write latency
  - Read cache (70%): the active data set stays in flash, with hot data replacing cold data; cache hits reduce read latency, and on a cache miss the data is read from HDD and placed in cache
- A performance tier tuned for virtualized workloads is achieved with modest capacity: about 10% of HDD capacity
- High IOPS at low cost, with low and predictable latency
- Choice of hardware is the #1 performance differentiator between vSAN configurations

Flash-Based Devices (vSAN All-Flash)
- In vSAN all-flash, read and write operations always go directly to flash devices
- Flash-based devices serve two purposes in the all-flash architecture:
  - Cache tier (100% write-back buffer): high-endurance flash devices listed in the VMware Compatibility Guide for vSAN
  - Capacity tier: lower-endurance flash devices listed in the VMware Compatibility Guide for vSAN
- Choice of hardware is the #1 performance differentiator between vSAN configurations
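The hybrid flash sizing guidance above (flash at roughly 10% of consumed capacity, split 70% read cache / 30% write buffer) can be combined into one small helper. This is a sketch of the rule of thumb, not a VMware sizing tool; the function name and return structure are invented.

```python
# Rule-of-thumb hybrid flash sizing from the slides: flash = ~10% of the
# anticipated consumed capacity (before Failures to Tolerate), split
# 70% read cache / 30% write-back buffer.
def hybrid_flash_sizing(consumed_capacity_gb, flash_ratio=0.10):
    flash_gb = consumed_capacity_gb * flash_ratio
    return {
        "flash_total_gb": flash_gb,
        "read_cache_gb": flash_gb * 0.70,    # 70% read cache
        "write_buffer_gb": flash_gb * 0.30,  # 30% write-back buffer
    }

# For 10 TB of anticipated consumed VM data:
print(hybrid_flash_sizing(10_000))
```

For 10 TB of consumed data this suggests about 1 TB of flash per the 10% rule, of which roughly 700 GB serves reads and 300 GB buffers writes; as the slides note, 10% is only a starting point and should be adjusted to the workload.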
Flash-Based Devices (cont.)
- VMware SSD performance classes (sustained writes per second):
  - Class B: 5,000-10,000
  - Class C: 10,000-20,000
  - Class D: 20,000-30,000
  - Class E: 30,000-100,000
  - Class F: 100,000+
- Examples:
  - Intel DC S3700 SSD: 36,000 writes per second (Class E)
  - Toshiba SAS SSD MK2001GRZB: 16,000 writes per second (Class C)
- Workload definition: queue depth of 16 or less, 4 KB transfer length, 100% random write pattern, latency under 5 ms
- Endurance classes (terabytes written over the lifetime of the device):
  - Class A: >= 365 TBW
  - Class B: >= 1825 TBW
  - Class C: >= 3650 TBW
  - Class D: >= 7300 TBW

Magnetic Disks (HDD)
- SAS / NL-SAS / SATA HDDs are supported
  - 7,200 RPM for capacity
  - 10,000 RPM for a balance between capacity and performance
  - 15,000 RPM for additional performance
- NL-SAS provides a higher HDD controller queue depth at the same rotational speed and a similar price point; NL-SAS is recommended if choosing between SATA and NL-SAS

Storage Controllers
- SAS/SATA storage controllers: pass-through or "RAID0" mode supported
  - Performance in pass-through mode is controller dependent
  - Check with your vendor for PCIe device performance behind a RAID controller
  - Replacing devices for upgrade or failure purposes might require host downtime
- Support for hot-plug devices
- Storage controller queue depth matters: a higher queue depth increases performance; a minimum queue depth of 256 is required
- Validate the number of drives supported by each controller

Storage Controllers: RAID0 Mode
- Configure all disks in RAID0 mode: PCIe-based flash devices (SSD) and magnetic disks (HDD)
- Disable the storage controller cache; performance is better when caching is controlled by vSAN
- Disk device cache support: flash-based devices leverage write-through caching
- ESXi might not be able to differentiate flash-based devices from magnetic devices
  - Use the vSphere Web Client under Cluster > vSAN Settings > Disk Management
  - Or use ESXCLI to manually flag a device as a PCIe flash device (SSD)

Network
- 1 Gb or 10 Gb supported for the hybrid architecture
  - 10 Gb shared with Network I/O Control (NetIOC) for QoS is recommended for most environments
  - With 1 Gb, dedicated links for vSAN are recommended
- Only 10 Gb is supported for the all-flash architecture; 10 Gb shared with NetIOC for QoS will support most environments
- Jumbo frames provide a nominal performance increase; enable them for greenfield deployments, and in large deployments to reduce CPU overhead
- vSAN supports both the vSphere Standard Switch and the VMware vSphere Distributed Switch; NetIOC requires the Distributed Switch
- Network bandwidth has more impact on host evacuation and rebuild times than on workload performance

Firewall Ports
- vSAN Vendor Provider (VSANVP): inbound and outbound TCP 8080
- vSAN Clustering Service (CMMDS): inbound and outbound UDP 12345 and 23451
- vSAN Transport (RDT): inbound and outbound TCP 2233

vSAN Support for Blade-Only Direct-Attached JBODs
- High-density direct-attached storage (storage blades with blade servers, 2015-2016) manages disks in enclosures
- Enables vSAN to scale on blade servers by adding storage to blades with few or no local disks
- Direct attach, compute-to-storage 1:1
- Flash acceleration provided on the server or in the storage subsystem
- Examples: IBM Flex SEN with the x240 Blade Series; Dell FX2 with 12G controllers

vSAN Compatibility Guide
- vSAN Hardware Quick Reference Guide: five ReadyNode profile guidelines, sizing assumptions, and design considerations
- vSAN ReadyNode Configurator: lists the components and quantities that make up each ReadyNode, with information on how to quote and order it
- Always use certified components: ReadyNode types, drivers, and firmware
  - Ensures supportability, increases customer satisfaction, and reduces rework and time-to-market
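The SSD performance classes listed earlier map cleanly to a lookup on sustained 4 KB random-write IOPS. The class boundaries are the figures from the slides; the function name is illustrative.

```python
# Map a device's sustained 4 KB random-write IOPS (at queue depth <= 16,
# latency < 5 ms) to the VMware SSD performance class from the slides.
def ssd_performance_class(writes_per_second):
    if writes_per_second >= 100_000:
        return "F"
    if writes_per_second >= 30_000:
        return "E"
    if writes_per_second >= 20_000:
        return "D"
    if writes_per_second >= 10_000:
        return "C"
    if writes_per_second >= 5_000:
        return "B"
    return None  # below Class B

print(ssd_performance_class(36_000))  # Intel DC S3700 example -> "E"
print(ssd_performance_class(16_000))  # Toshiba MK2001GRZB example -> "C"
```

Both slide examples check out: 36,000 writes per second falls in Class E and 16,000 in Class C.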
vSAN Hardware Choices
- Component-based, using the VMware Compatibility Guide for vSAN: choose individual components for maximum flexibility
  - Components must be chosen from the vSAN HCL; using any other components is unsupported
- vSAN ReadyNode: 40+ OEM-validated server configurations ready for vSAN deployment, for maximum ease of use
  - VMware continues to update the list of available ReadyNodes in the VMware Compatibility Guide for vSAN
- VMware EVO:RAIL: a Hyper-Converged Infrastructure Appliance (HCIA) for the SDDC, combining software and hardware
  - Each EVO:RAIL HCIA is pre-built on a qualified and optimized 2U/4-node server platform
  - Sold as a single SKU by VMware Qualified EVO:RAIL Partners (QEPs); product availability varies by country, so contact your local VMware partners for details, pricing, and availability

vSAN ReadyNode Profiles
- 3 ReadyNode profiles for the all-flash platform and 4 for the hybrid platform (assuming the latest-generation CPU architecture), plus a ReadyNode profiles wizard
- For complete details on the sizing assumptions and design considerations of the ReadyNode profiles, see the vSAN Hardware Quick Reference Guide linked from the VMware Compatibility Guide for vSAN

Design Considerations

Objects
- Individual storage block devices that are compatible with SCSI semantics
- Each object that resides on the vSAN datastore is comprised of multiple components
- Objects are assigned storage performance and availability service requirements through VM storage profiles

vSAN Datastore Considerations
- It is important to understand the impact of availability and performance storage capabilities on the consumption of storage capacity:
  - Number of Failures to Tolerate
  - Number of Disk Stripes per Object
  - Flash Read Cache Reservation
  - Object Space Reservation

Disk Groups: Hybrid Architecture
- A single flash-based device (SAS/SATA/PCIe) as the performance device, plus one or more magnetic disks (SAS/SATA HDD)
- Disk groups make up the distributed flash tier and the storage capacity of the vSAN datastore

Disk Groups: All-Flash Architecture
- All flash devices, used to provide two tiers: a performance tier and a capacity tier
- One flash-based device (SAS/SATA/PCIe SSD) for performance, plus one or more flash devices for capacity
- Disk groups make up the distributed storage capacity of the vSAN datastore

Number of Disk Stripes per Object
- If Number of Disk Stripes per Object is increased beyond the default value of 1, each stripe counts as a separate component
- This has an impact on the total number of components supported per host

Disk Group Design
- One performance device per disk group; with multiple flash-based devices, multiple disk groups are created to leverage the additional flash
- The higher the ratio of flash-based device capacity to magnetic disk capacity, the larger the cache layer
- Disk groups also define, and can reduce, the storage failure domains
- Limits: each host supports a maximum of 5 disk groups, and each disk group holds 1 caching device plus 1 to 7 capacity devices
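The component counting implied by the stripe and disk group slides above can be sketched for the RAID-1 mirroring case. This is a simplification for illustration: each stripe of each replica is one component, and vSAN additionally places witness components for quorum (the exact witness count depends on layout, so it is omitted here).

```python
# Data components for a RAID-1 mirrored object: each stripe of each
# replica counts as a separate component (witness components for quorum
# come on top and are layout-dependent, so not counted here).
def data_components(stripes, failures_to_tolerate):
    replicas = failures_to_tolerate + 1
    return stripes * replicas

print(data_components(1, 1))  # defaults: 1 stripe, FTT=1 -> 2 data components
print(data_components(3, 1))  # raising stripes to 3 triples the data components

# Per-host device limits from the Disk Group Design slide:
MAX_DISK_GROUPS_PER_HOST = 5
MAX_CAPACITY_DEVICES_PER_GROUP = 7
print(MAX_DISK_GROUPS_PER_HOST * MAX_CAPACITY_DEVICES_PER_GROUP)  # 35 capacity devices per host
```

This is why raising the stripe count is not free: it multiplies the component count per object, which counts against the per-host component limit.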
Flash Capacity Sizing
- The general recommendation is to size vSAN flash capacity at 10% of the anticipated consumed storage capacity, before Number of Failures to Tolerate is considered
- The total flash capacity percentage should be based on the use case and on capacity and performance requirements
- 10% is a general recommendation: it could be too much, or it might not be enough

All-Flash Cache Tier Sizing
- Cache devices should be high-write-endurance models: choose drives rated for 2+ TBW per day, or 3650+ TBW over 5 years
- The total cache capacity percentage should be based on the use case; 10% is a general recommendation
- For write-intensive workloads, configure a larger cache

Memory and CPU
- vSAN memory requirements are defined by the number of disk groups and disks managed by the hypervisor
- As long as vSphere hosts have more than 32 GB of RAM, they can support the maximum disk group and disk configuration supported by vSAN
- vSAN is designed to introduce no more than 10% CPU overhead per host; consider this in implementations with high consolidation ratios and CPU-intensive application requirements

Network
- vSAN network activity can saturate and overwhelm an entire 1 GbE network, particularly during rebuild and synchronization operations
- Separate the traffic types (management, vSphere vMotion, virtual machine, vSAN) onto different VLANs, and use shares as a quality-of-service mechanism to sustain the expected level of performance during contention
- vSAN requires IP multicast on the Layer 2 physical network segment used for vSAN communication; Layer 3 communication is supported as of vSAN 6.0
- Enabling jumbo frames can reduce CPU overhead in large cluster deployments
- vSAN 6.0 all-flash architectures require 10 Gb network speeds; vSAN ReadyNodes typically standardize on two 10 Gb server-side network uplinks
- The dependence on multicast network traffic has been removed in vSAN 6.6
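The cache-endurance guidance above states the same figure two ways: 2 TBW per day over a 5-year service life comes to 3650 TBW. A quick check, plus an illustrative helper (the function name is invented) for screening a candidate drive:

```python
# "2+ TBW/day" and "3650+ TBW over 5 years" are the same endurance figure:
DAYS_PER_YEAR = 365
tbw_per_day = 2
five_year_tbw = tbw_per_day * DAYS_PER_YEAR * 5
print(five_year_tbw)  # 3650

# Illustrative screen: does a drive's rated lifetime TBW sustain a given
# daily write load over a 5-year service life?
def endurance_ok(rated_tbw, daily_writes_tb, years=5):
    return rated_tbw >= daily_writes_tb * DAYS_PER_YEAR * years

print(endurance_ok(3650, 2))  # meets the 2 TBW/day guidance
print(endurance_ok(1825, 2))  # falls short at that write rate
```

The same arithmetic works in reverse when a vendor quotes endurance as DWPD (drive writes per day): multiply DWPD by drive capacity to get TBW per day before comparing against the guidance.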
Network Support for Pure IPv6 (New in 6.2)
- Overview
  - vSAN can operate in IPv6-only mode, with all network communication over IPv6
  - Configuration through the UI; vSAN supports L2 or L3 multicast
  - Monitoring and management through the Health UI
  - Mixed IPv4 and IPv6 is supported for the upgrade process
- Benefits
  - Addresses the needs of customers who are moving to IPv6 (e.g. federal agencies and service providers)
  - Aligns with vSphere support for IPv6

Customer Use Cases

Customer Case Study: Oregon State University
High performance for VDI with budget-friendly scaling
- Challenge
  - Storage overburdened by VDI workloads; user login took longer than 20 minutes during peak times
  - Recomposing the VDI environment took 10+ hours
  - Manual resource balancing was not sustainable, and the storage budget was limited
- Solution: vSAN deployed on Dell servers
- Results
  - Drastically better performance: user login time reduced from 20 minutes to 1 minute
  - Increased VDI scale: supports 170+ additional users on existing servers and network
  - Simplified management: resetting virtual desktops reduced from 10 hours to 2 hours
  - Lower cost and ease of scale: acquisition costs lowered by 75%, with easier scaling for future needs
- "Before vSAN, we had no ability to scale. Now it's a piece of cake. If I want to add additional capacity, I just add an additional server. I don't have to worry about whether my SAN can grow or not." (Alan Sprague, System Administrator)
- See also: Case Study: Oregon State University; Video: vSAN at Oregon State University

Customer Case Study: Union Hospital
Accelerating performance for Tier 1 applications
- Challenge
  - Aging SAN reaching end of life; too expensive to replace or upgrade
  - SAN overload resulting in frequent database latency issues: application crashes, slow screen refreshes, delays accessing patient records, and slow reports
  - Did not want to introduce new vendors into the mix
- Solution: business-critical applications on vSAN, including the GE Centricity EMR application