SMB Direct RDMA over 40Gb InfiniBand, Virtual to Physical


Hi all,

I'm writing today to discuss the proposed SAN and network design for an upcoming project. I'm looking for suggestions and info regarding the design. The key technologies I hope to incorporate are:

 

Mellanox 40Gb InfiniBand (ConnectX-2/3, InfiniScale IV / SwitchX-2)

Windows Storage Spaces

Windows Scale-Out File Server cluster

RDMA

My setup involves both physical Windows Server 2012 systems and VMware vSphere 5.1 with Windows Server 2012 VMs, so while I have posted this in the VMware forum, I figured it is still relevant to this forum as well.

 

The SAN is going to be a 3-node Scale-Out File Server using Windows Server 2012 Storage Spaces with a DataOn DNS-1640D JBOD appliance. Below is a link to the general setup. SSDs will be used as the disks.

 

http://www.dataonstorage.com/images/pdf/solutions/dls/dataon_microsoft_server_2012_storage_space_ha_cluster_shared_storage_teched2012.pdf
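For reference, the Storage Spaces / Scale-Out File Server layout described above can be sketched in PowerShell roughly as follows. This is a minimal sketch only, not the DataOn procedure: the pool, virtual disk, share, path, and account names are placeholders I've made up.

```shell
# Minimal sketch: pool the JBOD SSDs and publish an SMB share from the
# Scale-Out File Server role (run on a cluster node). All friendly names,
# the share path, and the account below are placeholder assumptions.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "JBODPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "JBODPool" -FriendlyName "SSDSpace" `
    -ResiliencySettingName Mirror -UseMaximumSize
# After the new disk has been added to the cluster as a CSV:
Add-ClusterScaleOutFileServerRole -Name "SOFS"
New-SmbShare -Name "Data" -Path "C:\ClusterStorage\Volume1\Data" `
    -FullAccess "DOMAIN\BladeAdmins"
```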

 

We are using LSI 9207-8e HBAs to attach the DNS-1640D. We are looking to use a 40Gb RDMA solution to create the 3-node Scale-Out File Server cluster, and we need to connect the SAN cluster to our blade servers using a 40Gb switch. We are using Supermicro TwinBlades, model SBI-7227R-T2. Supermicro offers a 40Gb 4x QDR switch for the blade enclosure, model SBM-IBS-Q3616M, based on InfiniScale IV silicon. The blades can use mezzanine cards, model AOC-IBH-XQD, based on ConnectX-2 silicon. The plan is to connect the Supermicro blade enclosure switch to a larger 40Gb Mellanox switch that will sit between the SAN and the blades.

 

The SAN cluster will be physical systems with no virtualization, while our blades run VMware ESXi 5.1 with Windows Server 2012 VMs. I'm not sure if RDMA can be achieved between physical and virtual environments, such as between our SAN cluster and the blades, and I don't know much about the 40Gb stuff in VMware. I know VMware is working on a paravirtual RDMA solution, but I don't think it's available yet. I believe I could use the pass-through method for a VM, or SR-IOV to assign virtual functions to a VM, but I don't know much about implementing either of these, and pass-through is not an option in our environment.
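On the question of whether RDMA is actually in play end to end: on the physical Windows Server 2012 side you can check RDMA capability and live SMB Direct usage with the built-in cmdlets. This is just a diagnostic sketch; inside an ESXi VM without SR-IOV or pass-through, I would expect these to report the adapter as not RDMA capable.

```shell
# Diagnostic sketch: confirm the NICs expose RDMA and that SMB Multichannel
# has negotiated RDMA-capable connections to the file server.
Get-NetAdapterRdma | Format-Table Name, Enabled
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable
# While a transfer to the SOFS share is running:
Get-SmbMultichannelConnection |
    Format-Table ServerName, ClientRdmaCapable, ServerRdmaCapable
```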

 

We have been thinking of the SX6018 model switch from Mellanox to sit between the SAN and the blades. For the SAN adapters we are thinking of the MCX314A-BCBT or MCX354A-FCBT. I'm not sure if these adapters and this switch will work correctly with the Supermicro products, which are based on ConnectX-2 and InfiniScale IV rather than ConnectX-3 and SwitchX-2 silicon.

 

If anyone has had experience with Mellanox on VMware and/or RDMA, I would love to hear what you learned and what worked or didn't. Any general suggestions or information anyone has to share about this design, or other ideas, would be great. The SAN is not going to be used to store VMs or VMDKs; it's for other data that the VM guest OSs will access.

 

Thanks!

Chris


Hi,

Thanks for your post.

I am trying to involve someone familiar with this topic to take a further look at this issue. There might be some time delay; I appreciate your patience.

Thank you for your understanding and support.

Regards,


Nick Gu - MSFT



Windows Server  >  Platform Networking


