Storage + Tuning: Storage - IP-SAN
 Data consistency problem
     Hard disk (local; remote sync with rsync, sketched below)
     Storage device (network)
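For the local-disk case, a minimal rsync sketch for keeping a remote copy in sync (the host name and paths are illustrative):

     # mirror /data to backup-host over ssh; --delete also removes files deleted locally
     rsync -avz --delete /data/ backup-host:/data/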
     
 Network storage
     Disks with different interfaces
     1. Transfer rate
     2. Support for connecting more devices
     3. Hot-swap support
 How the storage devices are interconnected
    Gigabit or 10-Gigabit NICs (IP)
    Fibre Channel HBAs and Fibre Channel switches
    How to resolve the contention that arises when many hosts access the storage device's data at once
    Keep network latency as low as possible, so the bottleneck is not the network
    Get higher I/O efficiency (one common tuning knob is sketched below)
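One common knob for keeping the bottleneck off the network is jumbo frames on a dedicated storage NIC, so each megabyte of storage traffic costs fewer packets (a sketch; eth1 is an assumed storage-facing interface, and MTU 9000 needs matching support on the switch):

     # raise the MTU on the storage NIC from the default 1500 to 9000
     ip link set eth1 mtu 9000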
 The disk interface is the component that connects the drive to the host system; it moves data between the drive's cache and the host's memory. The interface determines how fast the drive and the computer can talk to each other, and within the whole system the quality of the disk interface directly affects how fast programs run and how well the system performs.
IDE
 IDE stands for "Integrated Drive Electronics"
 IDE tops out at 133 MB/s
SATA
 Drives with a SATA (Serial ATA) interface are also called serial-ATA drives
 SATA 1.0  1.5 Gb/s (~150 MB/s)
 SATA 2.0  3 Gb/s (typical 7200 rpm drives)
SCSI
 SCSI stands for "Small Computer System Interface"
 Ultra Wide SCSI     40 MB/s
 Ultra2 Wide SCSI    80 MB/s
 Ultra160 SCSI       160 MB/s
 Ultra320 SCSI       320 MB/s
SAS (Serial Attached SCSI)
 SAS 2  6.0 Gb/s (15000 rpm drives)
Solid-state drive (Solid State Disk / IDE FLASH DISK)
 SATA 3.0  500 MB/s
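Nominal interface speed and real throughput differ, so it is worth measuring. A quick sequential-read test (assuming hdparm is installed and /dev/sda is the drive under test):

     # timed buffered reads straight from the drive, bypassing the page cache
     hdparm -t /dev/sda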
DAS (Direct-Attached Storage)
   Direct-attached storage relies on the server host's operating system for data I/O and for storage maintenance and management. Data backup and recovery consume host resources (CPU, system I/O, and so on) because the data has to flow back through the host and on to the tape drive (or library) attached to the server; backup typically takes 20-30% of host resources. Many enterprises therefore run their daily backups late at night or when the business systems are idle, so as not to disturb production. The more data sits on direct-attached storage, the longer backup and recovery take, and the greater the dependence on, and impact to, the server hardware.
 PC ---> motherboard ---> interface ---> data bus ---> disk
SCSI: Small Computer System Interface (direct-attached)
     1. SCSI protocol information
     2. The local server bears the I/O cost (see the listing sketch below)
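On a DAS host, the directly attached SCSI devices can be listed straight from the kernel (this path exists on RHEL 5-era systems such as the ones used below):

     # vendor/model/revision of every attached SCSI device
     cat /proc/scsi/scsi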
     
 NAS (Network Attached Storage) consolidates distributed, independent data into a large, centrally managed store that different hosts and application servers can access over the network. Put simply, it is a device attached to the network that provides data storage, hence the name "network storage"; it is a dedicated data storage server. It is data-centric: it separates the storage devices from the servers completely and manages the data centrally, which frees bandwidth, improves performance, lowers total cost of ownership, and protects the investment. Its cost is far lower than storing data on servers, while its efficiency is far higher. Well-known NAS vendors include NetApp, EMC, OUO, and others.
PC ---> switch ---> nfs | cifs (lvm, RAID)
1. Scalable
 2. I/O is moved onto the network (a minimal client-side example follows)
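A minimal example of that "I/O moved onto the network" from the client side, using NFS (nas1 and /export are illustrative names):

     # file-level access: the NAS owns the filesystem, the client just mounts it
     mount -t nfs nas1:/export /mnt/nas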
 SAN (Storage Area Network and SAN Protocols)
   A storage area network (SAN) is a high-speed network or subnetwork that carries data between computers and storage systems. "Storage" here means one or more disk devices that hold computer data, usually a disk array. A SAN is built from the communication fabric that provides the network links, a management layer that organizes those links, the storage elements, and the computer systems, so that data transfer is secure and robust.
 SAN (NAS+DAS)
         PC
          |
          |
        switch
        /    \
       /      \
     lvm      lvm
    raid      raid
1. SAN storage is recognized by the front-end hosts that use it as a block device
 2. SCSI protocol data has to be carried over the network
 3. Fibre Channel protocol (FC)
iSCSI ---> IP SAN
 FC    ---> FC SAN
1. Storage node: iscsi_target
 2. Front-end node: iscsi_initiator
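The target names used below follow the standard iSCSI qualified name (IQN) format from RFC 3720:

     iqn.<yyyy-mm>.<reversed-domain>:<identifier>
     # e.g. iqn.2012-02.com.uplooking:node1.target1
     # yyyy-mm is the month the naming authority took ownership of the domain;
     # the identifier after the colon is free-form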
        -------         --------
        server1         server2
        -------         --------
      172.16.1.10     172.16.1.20
             \           /
              \         /
               switch
              /         \
             /           \
      172.16.1.1      172.16.1.2
        -------         -------
         node1           node2
        -------         -------
On node1 and node2 (the storage/target side):
 [root@node1 ~]# yum install scsi-target-utils
 [root@node1 ~]# vim /etc/tgt/targets.conf 
 default-driver iscsi
 # Continue if tgtadm exits with non-zero code (equivalent of
 # --ignore-errors command line option)
 #ignore-errors yes
 # Sample target with one LUN only. Defaults to allow access for all initiators:
<target iqn.2012-02.com.uplooking:node1.target1>
     backing-store /dev/sdb
     vendor_id node1
     product_id storage1
     initiator-address 172.16.1.10
     initiator-address 172.16.1.20
 </target>
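node2 is configured the same way. Judging from the discovery output and udev attributes later on, its target definition would look like this (a sketch; that node2 also exports /dev/sdb is an assumption):

<target iqn.2012-02.com.uplooking:node2.target1>
    backing-store /dev/sdb      # assumed: node2's exported disk
    vendor_id node2
    product_id storage2
    initiator-address 172.16.1.10
    initiator-address 172.16.1.20
</target>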
[root@node1 ~]# service tgtd start
 Starting SCSI target daemon: Starting target framework daemon
   
 [root@node1 ~]# netstat -tunpl | grep tgtd
 tcp        0      0 0.0.0.0:3260                0.0.0.0:*                   LISTEN      3710/tgtd           
 tcp        0      0 :::3260                     :::*                        LISTEN      3710/tgtd           
 [root@node1 ~]# tgt-admin --show
 Target 1: iqn.2012-02.com.uplooking:node1.target1
     System information:
         Driver: iscsi
         State: ready
     I_T nexus information:
     LUN information:
         LUN: 0
             Type: controller
             SCSI ID: IET     00010000
             SCSI SN: beaf10
             Size: 0 MB
             Online: Yes
             Removable media: No
             Backing store type: rdwr
             Backing store path: None
         LUN: 1
             Type: disk
             SCSI ID: IET     00010001
             SCSI SN: beaf11
             Size: 21475 MB
             Online: Yes
             Removable media: No
             Backing store type: rdwr
             Backing store path: /dev/sdb
     Account information:
     ACL information:
         172.16.1.10
         172.16.1.20
On server1 and server2 (the initiator side):
[root@server2 ~]# yum install iscsi-initiator-utils
 [root@server2 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.1:3260
 iscsiadm: can not connect to iSCSI daemon (111)!
 iscsiadm: Could not scan /sys/class/iscsi_transport.
 iscsiadm: Could not scan /sys/class/iscsi_transport.
 iscsiadm: can not connect to iSCSI daemon (111)!
 iscsiadm: Cannot perform discovery. Initiatorname required.
 iscsiadm: Discovery process to 172.16.1.1:3260 failed to create a discovery session.
 iscsiadm: Could not perform SendTargets discovery.
The discovery fails because the iscsid daemon is not running yet ("can not connect to iSCSI daemon"); start it, then retry:
 [root@server2 ~]# service iscsid start
 Starting iSCSI daemon:                                     [  OK  ]
                                                            [  OK  ]
 [root@server2 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.1:3260
 172.16.1.1:3260,1 iqn.2012-02.com.uplooking:node1.target1
 [root@server2 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.1.2:3260
 172.16.1.2:3260,1 iqn.2012-02.com.uplooking:node2.target1
[root@server2 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node1.target1 -l
 Logging in to [iface: default, target: iqn.2012-02.com.uplooking:node1.target1, portal: 172.16.1.1,3260]
 Login to [iface: default, target: iqn.2012-02.com.uplooking:node1.target1, portal: 172.16.1.1,3260]: successful
[root@server2 ~]# iscsiadm -m node -T iqn.2012-02.com.uplooking:node2.target1 -l
 Logging in to [iface: default, target: iqn.2012-02.com.uplooking:node2.target1, portal: 172.16.1.2,3260]
 Login to [iface: default, target: iqn.2012-02.com.uplooking:node2.target1, portal: 172.16.1.2,3260]: successful
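Each successful login creates a new SCSI block device on server2; the two new 21.4 GB disks (/dev/sdb, /dev/sdc) in the fdisk output below are the LUNs exported by node1 and node2. The active sessions can also be checked directly (output omitted here):

     # one line per logged-in target
     iscsiadm -m session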
 [root@server2 ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
 255 heads, 63 sectors/track, 2610 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
 /dev/sda1   *           1          13      104391   83  Linux
 /dev/sda2              14        2610    20860402+  8e  Linux LVM
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
 64 heads, 32 sectors/track, 20480 cylinders
 Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 21.4 GB, 21474836480 bytes
 64 heads, 32 sectors/track, 20480 cylinders
 Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdc doesn't contain a valid partition table
[root@server2 ~]# udevinfo -a -p /sys/block/sdb/
Udevinfo starts with the device specified by the devpath and then
 walks up the chain of parent devices. It prints for every device
 found, all possible attributes in the udev rules key format.
 A rule to match, can be composed by the attributes of the device
 and the attributes from one single parent device.
  looking at device '/block/sdb':
     KERNEL=="sdb"
     SUBSYSTEM=="block"
     SYSFS{stat}=="      17       27      352       51        0        0        0        0        0       51       51"
     SYSFS{size}=="41943040"
     SYSFS{removable}=="0"
     SYSFS{range}=="16"
     SYSFS{dev}=="8:16"
  looking at parent device '/devices/platform/host1/session1/target1:0:0/1:0:0:1':
     ID=="1:0:0:1"
     BUS=="scsi"
     DRIVER=="sd"
     SYSFS{ioerr_cnt}=="0x1"
     SYSFS{iodone_cnt}=="0x20"
     SYSFS{iorequest_cnt}=="0x20"
     SYSFS{iocounterbits}=="32"
     SYSFS{timeout}=="60"
     SYSFS{state}=="running"
     SYSFS{rev}=="0001"
     SYSFS{model}=="storage1"
     SYSFS{vendor}=="node1"
     SYSFS{scsi_level}=="6"
     SYSFS{type}=="0"
     SYSFS{queue_type}=="none"
     SYSFS{queue_depth}=="32"
     SYSFS{device_blocked}=="0"
  looking at parent device '/devices/platform/host1/session1/target1:0:0':
     ID=="target1:0:0"
     BUS==""
     DRIVER==""
  looking at parent device '/devices/platform/host1/session1':
     ID=="session1"
     BUS==""
     DRIVER==""
  looking at parent device '/devices/platform/host1':
     ID=="host1"
     BUS==""
     DRIVER==""
  looking at parent device '/devices/platform':
     ID=="platform"
     BUS==""
     DRIVER==""
[root@server2 ~]# vim /etc/udev/rules.d/90-iscsi.rules
 [root@server2 ~]# cat /etc/udev/rules.d/90-iscsi.rules
 SUBSYSTEM=="block",  SYSFS{size}=="41943040", SYSFS{model}=="storage1", SYSFS{vendor}=="node1", SYMLINK="iscsi/node1-disk"
 SUBSYSTEM=="block",  SYSFS{size}=="41943040", SYSFS{model}=="storage2", SYSFS{vendor}=="node2", SYMLINK="iscsi/node2-disk"
 [root@server2 ~]# start_udev 
 Starting udev:                                             [  OK  ]
 [root@server2 ~]# ls -l /dev/iscsi/
 total 0
 lrwxrwxrwx 1 root root 6 Feb 23 01:35 node1-disk -> ../sdb
 lrwxrwxrwx 1 root root 6 Feb 23 01:37 node2-disk -> ../sdc
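Note: udevinfo and the SYSFS{} match keys belong to the older udev shipped with RHEL 5. On current distributions the equivalents are udevadm and ATTRS{}; a rough modern rewrite of the first rule (a sketch, untested on this setup):

     # query attributes:  udevadm info -a -p /sys/block/sdb    (replaces udevinfo -a -p)
     # re-apply rules:    udevadm trigger                      (replaces start_udev)
     SUBSYSTEM=="block", ATTRS{model}=="storage1", ATTRS{vendor}=="node1", SYMLINK+="iscsi/node1-disk"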
[root@server2 ~]# scp /etc/udev/rules.d/90-iscsi.rules 172.16.1.10:/etc/udev/rules.d/
 [root@server2 ~]# pvcreate /dev/iscsi/node1-disk 
   Physical volume "/dev/iscsi/node1-disk" successfully created
 [root@server2 ~]# pvcreate /dev/iscsi/node2-disk 
   Physical volume "/dev/iscsi/node2-disk" successfully created
 [root@server2 ~]# 
 [root@server2 ~]# vgcreate vgiscsi /dev/iscsi/node1-disk /dev/iscsi/node2-disk
   /dev/hdc: open failed: Read-only file system
   /dev/cdrom: open failed: Read-only file system
   Attempt to close device '/dev/cdrom' which is not open.
   /dev/cdrom: open failed: Read-only file system
   Attempt to close device '/dev/cdrom' which is not open.
   /dev/cdrom: open failed: Read-only file system
   Attempt to close device '/dev/cdrom' which is not open.
   Volume group "vgiscsi" successfully created
 [root@server2 ~]# 
 [root@server2 ~]# lvcreate -l 250 -n lviscsi vgiscsi
   /dev/hdc: open failed: Read-only file system
   Logical volume "lviscsi" created
[root@server1 ~]# pvscan 
   PV /dev/sdc   VG vgiscsi   lvm2 [20.00 GB / 19.02 GB free]
   PV /dev/sdd   VG vgiscsi   lvm2 [20.00 GB / 20.00 GB free]
   Total: 2 [39.99 GB] / in use: 2 [39.99 GB] / in no VG: 0 [0   ]
 [root@server1 ~]# vgchange -ay vgiscsi
   1 logical volume(s) in volume group "vgiscsi" now active
 [root@server1 ~]# lvs
   LV      VG      Attr   LSize    Origin Snap%  Move Log Copy%  Convert
   lviscsi vgiscsi -wi-a- 1000.00M   
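Both servers now see the same volume group, but plain LVM has no cluster-wide locking: if server1 and server2 change LVM metadata at the same time, the VG can be corrupted. On RHEL 5 the usual answer is clvmd from the lvm2-cluster package, enabled in /etc/lvm/lvm.conf on every node (a sketch; it also requires the cluster stack to be running):

     # /etc/lvm/lvm.conf on both servers
     locking_type = 3    # built-in clustered locking via clvmd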
[root@server2 ~]# yum install gfs2-utils
 [root@server2 ~]# yum install kmod-gfs
 [root@server2 ~]# modprobe gfs
 [root@server2 ~]# lsmod | grep gfs
 gfs                   269540  0 
 gfs2                  349833  1 lock_dlm
 dlm                   113749  12 gfs,lock_dlm
 configfs               28753  2 dlm
[root@server2 ~]# mkfs.gfs2 -t iscsi_cluster:lvsicsi -p lock_nolock /dev/vgiscsi/lviscsi 
 This will destroy any data on /dev/vgiscsi/lviscsi.
Are you sure you want to proceed? [y/n] y
Device:                    /dev/vgiscsi/lviscsi
 Blocksize:                 4096
 Device Size                0.98 GB (256000 blocks)
 Filesystem Size:           0.98 GB (255997 blocks)
 Journals:                  1
 Resource Groups:           4
 Locking Protocol:          "lock_nolock"
 Lock Table:                "iscsi_cluster:lvsicsi"
 UUID:                      220DA71F-D2B2-23FC-BE5E-66F90A104D34
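A note on what lock_nolock means for the next step: it gives single-node locking, so this filesystem is only safe to mount on one server at a time (and only one journal was created anyway). Mounting is the usual (sketch; mount point is illustrative):

     # single node only with lock_nolock
     mount -t gfs2 /dev/vgiscsi/lviscsi /mnt

For simultaneous mounts from server1 and server2, format with -p lock_dlm instead and run the cman/dlm cluster stack, with the cluster name matching the "iscsi_cluster" part of the lock table.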
