What is the InnoDB adaptive hash index?

If an InnoDB table fits almost entirely in the memory allocated to it (the buffer pool), MySQL prefers to use InnoDB adaptive hash indexes instead of walking the B-tree indexes, so it can return results faster. Adaptive hash indexes are built on top of the existing B-tree index structure, and when you add more memory InnoDB will build more hash index entries. Keep in mind that, like regular indexes, these also need to be looked at and optimized from time to time.

InnoDB monitors how the indexes defined on a table are used. It then keeps the most frequently requested index entries in a hash table that is built automatically in memory, keyed by the index key values. This way it does not have to hold entire B-tree indexes in memory; it keeps only the most frequently accessed entries.

This can yield significant performance gains.
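
A quick way to see whether the adaptive hash index is enabled and how much it is actually being used is to ask the server itself; a minimal sketch, assuming a local server and root credentials:

# Check whether the adaptive hash index is enabled (it is ON by default):
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_adaptive_hash_index';"

# The "INSERT BUFFER AND ADAPTIVE HASH INDEX" section of the engine status shows
# hash searches/s versus non-hash searches/s, i.e. how often lookups hit the hash index:
mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G" | grep -A 8 "ADAPTIVE HASH INDEX"

# If it does not help your workload, it can also be switched off at runtime:
# mysql -u root -p -e "SET GLOBAL innodb_adaptive_hash_index = OFF;"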

A few links on B-tree indexes and the InnoDB adaptive hash index:

http://www.youtube.com/watch?v=coRJrcIYbF4
http://en.wikipedia.org/wiki/B-tree
http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-performance-adaptive_hash_index.html
http://dev.mysql.com/doc/refman/5.0/en/innodb-adaptive-hash.html

Posted in MySQL, Veritabanları on November 13th, 2013 by Kürşad DARA

Fusion-io ioDrive2: MySQL and BIOS settings

The recommended settings for a MySQL 5.5 server are:

innodb_flush_method=O_DIRECT
Use direct I/O and fsync to flush data and log files.
innodb_max_dirty_pages_pct=60
Percentage of dirty pages to keep in the buffer pool. (The buffer pool itself should be sized at roughly 60-80% of total system memory; always monitor memory usage to make sure the system does not start to swap out pages.)
innodb_write_io_threads=16
Number of background write I/O threads.
innodb_read_io_threads=8
Number of background read I/O threads.
innodb_adaptive_flushing=1
Adaptively flush dirty pages.
innodb_log_files_in_group=2
Default and recommended value is 2.
innodb_log_file_size=2G
Reduce checkpoint activity by increasing the log file size. This also increases recovery time, but fast storage like ioMemory handles this well.
innodb_io_capacity=10000
Approximate number of I/O operations per second the storage can sustain.
innodb_thread_concurrency=0
0 means an unlimited number of threads. For CPU-bound workloads, it may make sense to limit this to 16.
innodb_purge_threads=1
Let the InnoDB purge operation run in a separate thread to reduce contention.
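
Collected into a my.cnf fragment for convenience (a sketch of the values above; the buffer pool line is an assumption you must size for your own RAM):

[mysqld]
# InnoDB settings recommended for ioDrive2-backed MySQL 5.5 (see the list above)
innodb_flush_method       = O_DIRECT
innodb_max_dirty_pages_pct = 60
innodb_write_io_threads   = 16
innodb_read_io_threads    = 8
innodb_adaptive_flushing  = 1
innodb_log_files_in_group = 2
innodb_log_file_size      = 2G
innodb_io_capacity        = 10000
innodb_thread_concurrency = 0
innodb_purge_threads      = 1
# Assumption: size the buffer pool to roughly 60-80% of system RAM, e.g. on a 64 GB server:
# innodb_buffer_pool_size = 48G
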
On the BIOS side, the settings I used are listed below, but you can play with them according to your own hardware. The necessary documents are available on the Fusion-io site.

Hyperthreading – disabled or enabled – try both & test with your application.  Quite often disabling HT can help performance.
Intel Virtualization Technology – if you’re not virtualizing, disable this
VT-d – if you’re not virtualizing, disable this
Power Technology – disabled
Enhanced SpeedStep – disabled
Turbo Mode – disabled
P-STATE Coordination – if this option is still available, set to HW_ALL
Package C State limit – C0, this option should effectively disable C-states for the server
Fan Control – ideally I'd recommend setting this to “Enhanced”. The ideal scenario is that your application is maximizing CPU usage, enabling your server to support a larger workload. Turning the fans up will help cool that busy CPU and avoid any thermal throttling of the CPU.

Posted in Fusion IO, Linux, MySQL, Veritabanları on December 24th, 2012 by Kürşad DARA

Fusion-io ioDrive2: 300K IOPS and my adventures (installation, tests)

Installation, hardware, operating system, and problems:

The card to be installed:
Fusion-io ioDrive 2 365 GB MLC

[Image: Fusion-io ioDrive2]

Operating system and kernel:
CentOS 5.5, kernel 2.6.18
MySQL version: 5.5.16-log MySQL Community Server (GPL)

Servers:
Server 1: Fujitsu PRIMERGY TX100 S3
Server 2: Fujitsu PRIMERGY RX300
Both of these are Intel-based servers.

In my correspondence with Fusion-io representatives, they said there would be no problems: just install the card, load the drivers, and start using it. As usual, that is not how it went in practice; either that, or after long struggles my brain simply gave up and I couldn't get it working.

After physically installing the card, I first started the installation on the configuration described above.

I downloaded the Intel drivers from this address. First I tried the RHEL5 Intel drivers, then the RHEL6 Intel drivers, but I just could not get them installed. I won't paste the errors here, but I got a lot of them, and in the end I concluded that all of them were caused by CentOS 5.5.

The server could see the card, but the operating system did not recognize it.

[root@mgw03 /]# lspci |grep Fusion
01:00.0 Mass storage controller: Fusion-io ioDrive2 (rev 04)

After backing up the servers and upgrading to CentOS 6.3 with the 2.6.32 kernel, I started the Fusion-io driver installation again. But once more I ran into a lot of errors.

After speaking with someone at Fusion-io, they suggested installing the AMD drivers from the driver address I mentioned above, and that finally worked.

Strangely, the Intel drivers did not work on an Intel server, but the AMD drivers did.

You can confirm that it is working with the fio-status and fio-pci-check commands.

[root@mgw03 ~]# lspci -s 01:00.0 -v  
01:00.0 Mass storage controller: Fusion-io ioDrive2 (rev 04)
        Subsystem: Hewlett-Packard Company Device 006d
        Flags: bus master, fast devsel, latency 0, IRQ 36
        Memory at fe600000 (32-bit, non-prefetchable) [size=64K]
        Capabilities: [40] Power Management version 3
        Capabilities: [48] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [60] Express Endpoint, MSI 01
        Capabilities: [100] Device Serial Number 0c-00-00-00-00-01-00-00
        Kernel driver in use: iodrive
        Kernel modules: iomemory-vsl
[root@mgw03 fusion]# fio-status

Found 1 ioMemory device in this system
Fusion-io driver version: 3.1.1 build 172

Adapter: Single Controller Adapter
        HP 365GB MLC PCIe ioDrive2 for ProLiant Servers, Product Number:673642-B21, SN:3UN234E06A
        External Power: NOT connected
        Connected ioMemory modules:
          fct0: Product Number:673642-B21, SN:3UN234E06A

fct0    Attached as 'fioa' (block device)
        ioDrive2 Adapter Controller, Product Number:673642-B21
        PCI:01:00.0, Slot Number:2
        Firmware v6.0.2, rev 108609 Public
        365.00 GBytes block device size
        Internal temperature: 63.00 degC, max 64.97 degC
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
[root@mgw03 fusion]# fio-pci-check 

Root Bridge PCIe 2250 MB/sec needed max

      Bridge 00:1c.00 (01-01)
          Needed 2000 MB/sec Avail 2000 MB/sec

            ioDrive 01:00.0 Firmware 108609
                Slot Power limit: 25.0W (25000mw)
[root@mgw03 ~]# lsmod |grep iomemory_vsl
iomemory_vsl         1023734  2

In the end, I got the server to recognize the card. In short, that was the installation adventure.

Now for the details.

First, after downloading the driver files and extracting them to disk,

[root@mgw03 hpioaccel]# ls -lah
total 113M
drwxr-xr-x  2 root root 4.0K Mar 22  2012 .
drwxr-xr-x. 5 root root 4.0K Oct 30 16:24 ..
-rwxr-xr-x  1 root root 4.9K Mar 22  2012 buildsrc
-rw-r--r--  1 root root  10K Mar 21  2012 fio-common-3.1.1.172-1.0.el6.x86_64.rpm
-rw-r--r--  1 root root  82M Mar 21  2012 fio-firmware-ioaccelerator-107004-1.0.noarch.rpm
-rw-r--r--  1 root root  23K Mar 21  2012 fio-remote-util-3.1.0.63-1.0.noarch.rpm
-rw-r--r--  1 root root  18M Mar 21  2012 fio-smis-3.1.0.63-1.0.x86_64.rpm
-rw-r--r--  1 root root 3.7M Mar 21  2012 fio-snmp-agentx-3.1.0.63-1.0.x86_64.rpm
-rw-r--r--  1 root root  20K Mar 21  2012 fio-snmp-mib-hp-3.1.0.63-1.0.noarch.rpm
-rw-r--r--  1 root root 7.4K Mar 21  2012 fio-sysvinit-3.1.1.172-1.0.el6.x86_64.rpm
-rw-r--r--  1 root root 1.6M Mar 21  2012 fio-util-3.1.1.172-1.0.el6.x86_64.rpm
-rwxr-xr-x  1 root root 2.9K Mar 22  2012 hpinstall.sh
-rw-r--r--  1 root root 1.4M Mar 21  2012 iomemory-vsl-2.6.32-131.0.15.el6.x86_64-3.1.1.172-1.0.el6.x86_64.rpm
-rw-r--r--  1 root root 1.4M Mar 21  2012 iomemory-vsl-2.6.32-220.el6.x86_64-3.1.1.172-1.0.el6.x86_64.rpm
-rw-r--r--  1 root root 1.4M Mar 21  2012 iomemory-vsl-2.6.32-71.el6.x86_64-3.1.1.172-1.0.el6.x86_64.rpm
-rw-r--r--  1 root root 3.0M Mar 21  2012 iomemory-vsl-3.1.1.172-1.0.el6.src.rpm
-rw-r--r--  1 root root 459K Mar 21  2012 libfio-dev-3.1.0.63-1.0.x86_64.rpm
-rw-r--r--  1 root root 205K Mar 21  2012 libfio-doc-3.1.0.63-1.0.noarch.rpm
-rw-r--r--  1 root root 435K Mar 21  2012 libvsl-3.1.1.172-1.0.el6.x86_64.rpm

you will see these files.

Run the following steps in order.

[root@mgw03 hpioaccel]# chmod +x buildsrc hpinstall.sh
[root@mgw03 hpioaccel]# ./hpinstall.sh -build
[root@mgw03 hpioaccel]# ./hpinstall.sh

After these steps, we restart the server.

We enable the service with chkconfig so that it runs at boot.

[root@mgw03 hpioaccel]# chkconfig iomemory-vsl on

Next, remove the # at the beginning of the relevant line in the iomemory-vsl file under /etc/sysconfig and restart the service.

[root@srv init.d]# cd /etc/sysconfig
[root@srv sysconfig]# grep ENABLED iomemory-vsl 
# If ENABLED is not set (non-zero) then iomemory-vsl init script will not be
#ENABLED=1
[root@srv sysconfig]# vi iomemory-vsl
[root@srv sysconfig]# grep ENABLED iomemory-vsl 
# If ENABLED is not set (non-zero) then iomemory-vsl init script will not be
ENABLED=1
[root@srv sysconfig]# service iomemory-vsl restart
Stopping iomemory-vsl: 
Unloading module iomemory-vsl
                                                           [FAILED]
Starting iomemory-vsl: 
Loading module iomemory-vsl
Attaching: [                    ] (  0%) /Attaching:
[                    
Attaching: [====================] (100%) \
fioa
Attaching: [====================] (100%)
fiob
                                                           [  OK  ]
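
Instead of editing the file with vi as above, the same change can be made with a one-liner (a sketch, assuming the default /etc/sysconfig/iomemory-vsl path):

# Uncomment ENABLED=1 in the init-script configuration, then restart the service
sed -i 's/^#ENABLED=1/ENABLED=1/' /etc/sysconfig/iomemory-vsl
service iomemory-vsl restart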

As you can see, the Fusion-io device has appeared.

[root@mgw03 hpioaccel]# ls -la /dev/fio?
brw-rw---- 1 root disk 252, 0 Oct 30 10:24 /dev/fioa

Now it is time to create a partition on the device and format it.

[root@localhost fio]# fdisk /dev/fioa
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x2171b966.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-44375, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-44375, default 44375): 
Using default value 44375

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost fio]# fdisk -l

Disk /dev/sda: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x49e2fd2f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64       30402   243685376   8e  Linux LVM

Disk /dev/mapper/VolGroup-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_swap: 4160 MB, 4160749568 bytes
255 heads, 63 sectors/track, 505 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_home: 191.7 GB, 191683887104 bytes
255 heads, 63 sectors/track, 23304 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/fioa: 365.0 GB, 365000000000 bytes
255 heads, 63 sectors/track, 44375 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 32768 bytes
Disk identifier: 0x2171b966

    Device Boot      Start         End      Blocks   Id  System
/dev/fioa1               1       44375   356442156   83  Linux
[root@localhost fio]# mkfs.ext3 /dev/fioa1
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=8 blocks
22282240 inodes, 89110539 blocks
4455526 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
2720 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Now we can mount our disk.

[root@localhost /]# mkdir sql
[root@localhost /]# mount /dev/fioa1 sql/
[root@localhost /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       50G  2.0G   45G   5% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/sda1             485M   84M  376M  19% /boot
/dev/mapper/VolGroup-lv_home
                      176G  188M  167G   1% /home
/dev/fioa1            335G  195M  318G   1% /sql
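
To have the filesystem mounted automatically after a reboot, an /etc/fstab entry can be added as well (a sketch; the device name, mount point, and options are assumptions, adjust them to your own setup):

# Persist the mount across reboots; noatime avoids extra metadata writes on the flash device
echo '/dev/fioa1  /sql  ext3  defaults,noatime  0 0' >> /etc/fstab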

Now for a small test.

[root@localhost sql]# dd bs=100M count=128 if=/dev/zero of=test
128+0 records in
128+0 records out
13421772800 bytes (13 GB) copied, 56.0903 s, 239 MB/s
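
Note that this dd run goes through the page cache, so it flatters the result a little; a variant that bypasses the cache with direct I/O would be closer to the raw device throughput (a sketch, same file and sizes as above):

# oflag=direct bypasses the page cache so the reported MB/s reflects the device itself
dd bs=100M count=128 if=/dev/zero of=test oflag=direct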

On with the tests.

Download seeker.c from http://www.linuxinsight.com/files/seeker.c. (Note: the benchmark actually run below is seekmark, a similar but multi-threaded random-seek tool, which is why the binary name in the run does not match the compile command.)

[root@localhost sql]# gcc seeker.c -o seeker
[root@localhost sql]# ./seekmark -t100 -f/dev/fioa1 -s10000

READ benchmarking against /dev/fioa1 348088 MB

threads to spawn: 100
seeks per thread: 10000
io size in bytes: 512

Spawning worker 0 to do 10000 seeks
Spawning worker 1 to do 10000 seeks
Spawning worker 2 to do 10000 seeks
Spawning worker 3 to do 10000 seeks
Spawning worker 4 to do 10000 seeks
Spawning worker 5 to do 10000 seeks
Spawning worker 6 to do 10000 seeks
Spawning worker 7 to do 10000 seeks
Spawning worker 8 to do 10000 seeks
Spawning worker 9 to do 10000 seeks
Spawning worker 10 to do 10000 seeks
Spawning worker 11 to do 10000 seeks
Spawning worker 12 to do 10000 seeks
Spawning worker 13 to do 10000 seeks
Spawning worker 14 to do 10000 seeks
Spawning worker 15 to do 10000 seeks
Spawning worker 16 to do 10000 seeks
Spawning worker 17 to do 10000 seeks
Spawning worker 18 to do 10000 seeks
Spawning worker 19 to do 10000 seeks
Spawning worker 20 to do 10000 seeks
Spawning worker 21 to do 10000 seeks
Spawning worker 22 to do 10000 seeks
Spawning worker 23 to do 10000 seeks
Spawning worker 24 to do 10000 seeks
Spawning worker 25 to do 10000 seeks
Spawning worker 26 to do 10000 seeks
Spawning worker 27 to do 10000 seeks
Spawning worker 28 to do 10000 seeks
Spawning worker 29 to do 10000 seeks
Spawning worker 30 to do 10000 seeks
Spawning worker 31 to do 10000 seeks
Spawning worker 32 to do 10000 seeks
Spawning worker 33 to do 10000 seeks
Spawning worker 34 to do 10000 seeks
Spawning worker 35 to do 10000 seeks
Spawning worker 36 to do 10000 seeks
Spawning worker 37 to do 10000 seeks
Spawning worker 38 to do 10000 seeks
Spawning worker 39 to do 10000 seeks
Spawning worker 40 to do 10000 seeks
Spawning worker 41 to do 10000 seeks
Spawning worker 42 to do 10000 seeks
Spawning worker 43 to do 10000 seeks
Spawning worker 44 to do 10000 seeks
Spawning worker 45 to do 10000 seeks
Spawning worker 46 to do 10000 seeks
Spawning worker 47 to do 10000 seeks
Spawning worker 48 to do 10000 seeks
Spawning worker 49 to do 10000 seeks
Spawning worker 50 to do 10000 seeks
Spawning worker 51 to do 10000 seeks
Spawning worker 52 to do 10000 seeks
Spawning worker 53 to do 10000 seeks
Spawning worker 54 to do 10000 seeks
Spawning worker 55 to do 10000 seeks
Spawning worker 56 to do 10000 seeks
Spawning worker 57 to do 10000 seeks
Spawning worker 58 to do 10000 seeks
Spawning worker 59 to do 10000 seeks
Spawning worker 60 to do 10000 seeks
Spawning worker 61 to do 10000 seeks
Spawning worker 62 to do 10000 seeks
Spawning worker 63 to do 10000 seeks
Spawning worker 64 to do 10000 seeks
Spawning worker 65 to do 10000 seeks
Spawning worker 66 to do 10000 seeks
Spawning worker 67 to do 10000 seeks
Spawning worker 68 to do 10000 seeks
Spawning worker 69 to do 10000 seeks
Spawning worker 70 to do 10000 seeks
Spawning worker 71 to do 10000 seeks
Spawning worker 72 to do 10000 seeks
Spawning worker 73 to do 10000 seeks
Spawning worker 74 to do 10000 seeks
Spawning worker 75 to do 10000 seeks
Spawning worker 76 to do 10000 seeks
Spawning worker 77 to do 10000 seeks
Spawning worker 78 to do 10000 seeks
Spawning worker 79 to do 10000 seeks
Spawning worker 80 to do 10000 seeks
Spawning worker 81 to do 10000 seeks
Spawning worker 82 to do 10000 seeks
Spawning worker 83 to do 10000 seeks
Spawning worker 84 to do 10000 seeks
Spawning worker 85 to do 10000 seeks
Spawning worker 86 to do 10000 seeks
Spawning worker 87 to do 10000 seeks
Spawning worker 88 to do 10000 seeks
Spawning worker 89 to do 10000 seeks
Spawning worker 90 to do 10000 seeks
Spawning worker 91 to do 10000 seeks
Spawning worker 92 to do 10000 seeks
Spawning worker 93 to do 10000 seeks
Spawning worker 94 to do 10000 seeks
Spawning worker 95 to do 10000 seeks
Spawning worker 96 to do 10000 seeks
Spawning worker 97 to do 10000 seeks
Spawning worker 98 to do 10000 seeks
Spawning worker 99 to do 10000 seeks
thread 12 completed, time: 11.84, 844.81 seeks/sec, 1.2ms per request
thread 0 completed, time: 11.96, 836.33 seeks/sec, 1.2ms per request
thread 24 completed, time: 12.07, 828.36 seeks/sec, 1.2ms per request
thread 37 completed, time: 12.09, 826.86 seeks/sec, 1.2ms per request
thread 6 completed, time: 12.13, 824.47 seeks/sec, 1.2ms per request
thread 57 completed, time: 12.11, 825.90 seeks/sec, 1.2ms per request
thread 42 completed, time: 12.13, 824.40 seeks/sec, 1.2ms per request
thread 95 completed, time: 12.15, 823.11 seeks/sec, 1.2ms per request
thread 71 completed, time: 12.17, 821.42 seeks/sec, 1.2ms per request
thread 36 completed, time: 12.21, 819.07 seeks/sec, 1.2ms per request
thread 90 completed, time: 12.20, 819.74 seeks/sec, 1.2ms per request
thread 39 completed, time: 12.24, 816.99 seeks/sec, 1.2ms per request
thread 40 completed, time: 12.24, 816.79 seeks/sec, 1.2ms per request
thread 32 completed, time: 12.28, 814.27 seeks/sec, 1.2ms per request
thread 50 completed, time: 12.28, 814.53 seeks/sec, 1.2ms per request
thread 49 completed, time: 12.27, 815.13 seeks/sec, 1.2ms per request
thread 4 completed, time: 12.29, 813.80 seeks/sec, 1.2ms per request
thread 91 completed, time: 12.27, 814.86 seeks/sec, 1.2ms per request
thread 89 completed, time: 12.27, 814.73 seeks/sec, 1.2ms per request
thread 86 completed, time: 12.29, 813.74 seeks/sec, 1.2ms per request
thread 67 completed, time: 12.32, 811.69 seeks/sec, 1.2ms per request
thread 59 completed, time: 12.32, 811.49 seeks/sec, 1.2ms per request
thread 34 completed, time: 12.35, 809.91 seeks/sec, 1.2ms per request
thread 30 completed, time: 12.36, 809.13 seeks/sec, 1.2ms per request
thread 93 completed, time: 12.33, 810.77 seeks/sec, 1.2ms per request
thread 11 completed, time: 12.37, 808.41 seeks/sec, 1.2ms per request
thread 13 completed, time: 12.38, 807.88 seeks/sec, 1.2ms per request
thread 33 completed, time: 12.38, 807.75 seeks/sec, 1.2ms per request
thread 81 completed, time: 12.37, 808.47 seeks/sec, 1.2ms per request
thread 17 completed, time: 12.40, 806.32 seeks/sec, 1.2ms per request
thread 99 completed, time: 12.37, 808.15 seeks/sec, 1.2ms per request
thread 23 completed, time: 12.40, 806.32 seeks/sec, 1.2ms per request
thread 80 completed, time: 12.39, 807.36 seeks/sec, 1.2ms per request
thread 60 completed, time: 12.39, 807.23 seeks/sec, 1.2ms per request
thread 85 completed, time: 12.39, 807.23 seeks/sec, 1.2ms per request
thread 61 completed, time: 12.39, 806.91 seeks/sec, 1.2ms per request
thread 83 completed, time: 12.40, 806.45 seeks/sec, 1.2ms per request
thread 8 completed, time: 12.44, 804.12 seeks/sec, 1.2ms per request
thread 26 completed, time: 12.44, 803.99 seeks/sec, 1.2ms per request
thread 1 completed, time: 12.41, 805.74 seeks/sec, 1.2ms per request
thread 55 completed, time: 12.44, 804.12 seeks/sec, 1.2ms per request
thread 48 completed, time: 12.44, 803.66 seeks/sec, 1.2ms per request
thread 10 completed, time: 12.45, 803.15 seeks/sec, 1.2ms per request
thread 20 completed, time: 12.45, 803.02 seeks/sec, 1.2ms per request
thread 78 completed, time: 12.43, 804.76 seeks/sec, 1.2ms per request
thread 72 completed, time: 12.43, 804.51 seeks/sec, 1.2ms per request
thread 15 completed, time: 12.46, 802.25 seeks/sec, 1.2ms per request
thread 21 completed, time: 12.47, 802.05 seeks/sec, 1.2ms per request
thread 7 completed, time: 12.47, 801.67 seeks/sec, 1.2ms per request
thread 92 completed, time: 12.45, 803.34 seeks/sec, 1.2ms per request
thread 28 completed, time: 12.48, 801.15 seeks/sec, 1.2ms per request
thread 19 completed, time: 12.49, 800.90 seeks/sec, 1.2ms per request
thread 43 completed, time: 12.49, 800.70 seeks/sec, 1.2ms per request
thread 73 completed, time: 12.46, 802.31 seeks/sec, 1.2ms per request
thread 98 completed, time: 12.47, 801.80 seeks/sec, 1.2ms per request
thread 65 completed, time: 12.47, 801.73 seeks/sec, 1.2ms per request
thread 47 completed, time: 12.48, 801.03 seeks/sec, 1.2ms per request
thread 46 completed, time: 12.50, 800.06 seeks/sec, 1.2ms per request
thread 66 completed, time: 12.48, 801.54 seeks/sec, 1.2ms per request
thread 16 completed, time: 12.51, 799.55 seeks/sec, 1.3ms per request
thread 2 completed, time: 12.51, 799.17 seeks/sec, 1.3ms per request
thread 14 completed, time: 12.51, 799.23 seeks/sec, 1.3ms per request
thread 53 completed, time: 12.49, 800.64 seeks/sec, 1.2ms per request
thread 76 completed, time: 12.50, 800.19 seeks/sec, 1.2ms per request
thread 31 completed, time: 12.52, 798.85 seeks/sec, 1.3ms per request
thread 9 completed, time: 12.53, 798.28 seeks/sec, 1.3ms per request
thread 94 completed, time: 12.50, 800.00 seeks/sec, 1.2ms per request
thread 52 completed, time: 12.52, 798.53 seeks/sec, 1.3ms per request
thread 82 completed, time: 12.50, 799.74 seeks/sec, 1.3ms per request
thread 87 completed, time: 12.51, 799.62 seeks/sec, 1.3ms per request
thread 84 completed, time: 12.51, 799.42 seeks/sec, 1.3ms per request
thread 51 completed, time: 12.51, 799.11 seeks/sec, 1.3ms per request
thread 45 completed, time: 12.52, 798.59 seeks/sec, 1.3ms per request
thread 25 completed, time: 12.54, 797.51 seeks/sec, 1.3ms per request
thread 41 completed, time: 12.54, 797.38 seeks/sec, 1.3ms per request
thread 5 completed, time: 12.55, 797.07 seeks/sec, 1.3ms per request
thread 29 completed, time: 12.54, 797.38 seeks/sec, 1.3ms per request
thread 58 completed, time: 12.52, 798.66 seeks/sec, 1.3ms per request
thread 79 completed, time: 12.52, 798.53 seeks/sec, 1.3ms per request
thread 3 completed, time: 12.54, 797.58 seeks/sec, 1.3ms per request
thread 44 completed, time: 12.55, 796.50 seeks/sec, 1.3ms per request
thread 62 completed, time: 12.53, 797.96 seeks/sec, 1.3ms per request
thread 68 completed, time: 12.53, 797.96 seeks/sec, 1.3ms per request
thread 74 completed, time: 12.54, 797.51 seeks/sec, 1.3ms per request
thread 88 completed, time: 12.54, 797.32 seeks/sec, 1.3ms per request
thread 35 completed, time: 12.57, 795.54 seeks/sec, 1.3ms per request
thread 18 completed, time: 12.57, 795.36 seeks/sec, 1.3ms per request
thread 56 completed, time: 12.55, 796.88 seeks/sec, 1.3ms per request
thread 27 completed, time: 12.57, 795.23 seeks/sec, 1.3ms per request
thread 75 completed, time: 12.55, 796.62 seeks/sec, 1.3ms per request
thread 38 completed, time: 12.57, 795.54 seeks/sec, 1.3ms per request
thread 70 completed, time: 12.56, 796.31 seeks/sec, 1.3ms per request
thread 64 completed, time: 12.56, 795.92 seeks/sec, 1.3ms per request
thread 69 completed, time: 12.57, 795.67 seeks/sec, 1.3ms per request
thread 96 completed, time: 12.57, 795.48 seeks/sec, 1.3ms per request
thread 97 completed, time: 12.59, 794.41 seeks/sec, 1.3ms per request
thread 77 completed, time: 12.59, 794.22 seeks/sec, 1.3ms per request
thread 63 completed, time: 12.59, 794.16 seeks/sec, 1.3ms per request
thread 22 completed, time: 12.62, 792.14 seeks/sec, 1.3ms per request
thread 54 completed, time: 12.62, 792.52 seeks/sec, 1.3ms per request

total time: 12.63, time per READ request(ms): 0.013
79170.30 total seeks per sec, 791.70 READ seeks per sec per thread
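
While the benchmark was running, I was watching the device with iostat in another terminal; the output below comes from an invocation along these lines (a sketch, the exact flags and interval are an assumption):

# Per-second CPU and per-device statistics for the ioDrive block device
iostat fioa 1
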
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.28    0.00   91.65    6.08    0.00    0.00

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
fioa          304923.00   2445008.00         0.00    2445008          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.53    0.00   88.61    8.86    0.00    0.00

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
fioa          293798.00   2368488.00         0.00    2368488          0

As you can see, we reached roughly 300K IOPS.

Although I could not hit these numbers once the card went into the live system, I still got quite high IOPS figures.

Click here for the settings I made at the MySQL and BIOS level.

Posted in Fusion IO, Linux, MySQL, Veritabanları on December 24th, 2012 by Kürşad DARA

Taking a dump in MySQL without locking the tables

You can do this with the --single-transaction option, which gives a consistent snapshot of InnoDB tables without taking locks.

mysqldump -q --single-transaction veritabani_ismi > mysqldump.sql

Posted in Linux, MySQL, Veritabanları on December 15th, 2012 by Kürşad DARA

mysqldump: Got error: 1556: You can’t use locks with log tables. when using LOCK TABLES error

If you run into the error below while taking a backup with mysqldump, you can either stop it from locking the tables and continue, or simply not back up the mysql database (which is where the log tables live).

mysqldump: Got error: 1556: You can't use locks with log tables. when using LOCK TABLES

To disable the locking, add --lock-tables=0 to your dump options.

--lock-tables=0

Posted in Linux, MySQL, Veritabanları on December 6th, 2012 by Kürşad DARA

Excluding some tables when taking a dump with mysqldump

You can use the --ignore-table option; repeat it once per table, in db_name.table_name form.

mysqldump --ignore-table=db_ismi.table1 --ignore-table=db_ismi.table2 db_ismi > dump.sql

Posted in Linux, MySQL, Veritabanları on November 23rd, 2012 by Kürşad DARA

Shrinking the ibdata1 file for InnoDB databases

If you use InnoDB, after a while you will see that your ibdata1 file has grown very large.

The growth of this file causes problems in terms of performance and disk usage. MySQL has no direct command for shrinking it, so you have to reduce its size with a somewhat convoluted procedure.

In short:

1. Stop the MySQL server,
2. Back up the entire MySQL data directory,
3. Start the MySQL server again,
4. Take a backup of the databases with mysqldump,
5. Drop the databases,
6. Stop the MySQL server again,
7. Delete the ibdata1, ib_logfile0, and ib_logfile1 files,
8. Start the MySQL server again,
9. Re-import the backup you took.

These are the steps we follow.

In detail:

1. Stop the MySQL server.

2. Back up the entire MySQL data directory. That way it will be easy to roll back if anything goes wrong.

3. Start the MySQL server again.

4. First, take a backup of all the databases.


/usr/bin/mysqldump --opt --routines --triggers --all-databases > all.sql

Make sure the backup completed cleanly and without errors.

5. Delete the databases with DROP.

6. Stop the MySQL server again.

7. Delete the ibdata1, ib_logfile0, and ib_logfile1 files in the MySQL data directory.
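
A shell sketch of steps 6 and 7 (the data directory path and init script name are assumptions; check the datadir setting in your my.cnf and your distribution's service name):

# Step 6: stop the server (the service may be "mysql" instead of "mysqld" depending on the package)
service mysqld stop

# Step 7: move the shared tablespace and redo logs out of the assumed datadir;
# moving them aside instead of deleting keeps a rollback path until the import succeeds
mkdir -p /root/ibdata-backup
mv /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /root/ibdata-backup/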

8. Start the MySQL server again. But before starting it, add the following parameter to your configuration file (my.cnf):


innodb_file_per_table

With this setting, InnoDB will create a smaller, separate .ibd file for each table. Thanks to this, if a table grows very large in the future you can dump and restore just that table, without the downtime of repeating this whole procedure.
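
For example, once file-per-table is enabled, a single bloated table can later be rebuilt in place to shrink its own .ibd file (a sketch; db_name.big_table is a hypothetical table name):

# Rebuilding the table releases the unused space inside its .ibd file back to the filesystem
mysql -u root -p -e "ALTER TABLE db_name.big_table ENGINE=InnoDB;"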

For more details: http://dev.mysql.com/doc/refman/5.6/en/innodb-multiple-tablespaces.html

9. Now import the dump file we took with mysqldump to load the databases back in.


/usr/bin/mysql < all.sql

Posted in Linux, MySQL on November 21st, 2012 by Kürşad DARA

Finding and optimizing fragmented tables in MySQL

You can find the fragmented tables with the query below.

mysql> select TABLE_NAME,Data_free from information_schema.TABLES where TABLE_SCHEMA NOT IN ('information_schema','mysql')
and Data_free > 0;
mysql> OPTIMIZE TABLE tablo_adi;

This is how you optimize them. Doing so will improve performance.

You can also do it in an automated way with the script below (found via a Google search).

#!/bin/sh

echo -n "MySQL username: " ; read username
echo -n "MySQL password: " ; stty -echo ; read password ; stty echo ; echo

# Loop over every database and inspect the Data_free column of its MyISAM tables
mysql -u "$username" -p"$password" -NBe "SHOW DATABASES;" | grep -v 'lost+found' | while read database ; do
  mysql -u "$username" -p"$password" -NBe "SHOW TABLE STATUS WHERE engine='MyISAM';" "$database" | \
  while read name engine version rowformat rows avgrowlength datalength maxdatalength indexlength datafree autoincrement createtime updatetime checktime collation checksum createoptions comment ; do
    if [ "$datafree" -gt 0 ] ; then
      # Percentage of the table's data file that is reclaimable (fragmented) space
      fragmentation=$(($datafree * 100 / $datalength))
      echo "$database.$name is $fragmentation% fragmented."
      mysql -u "$username" -p"$password" -NBe "OPTIMIZE TABLE $name;" "$database"
    fi
  done
done

Posted in Linux, MySQL on November 20th, 2012 by Kürşad DARA

How to tell whether the MySQL server you are using is 32-bit or 64-bit

There are actually two ways to tell, and both come down to the same thing.

1. From the command line

[root@mgw03 ~]# mysql -V
mysql  Ver 14.14 Distrib 5.5.16, for Linux (x86_64) using readline 5.1

2. From the MySQL console

mysql> \s ( or: status )
--------------
mysql  Ver 14.14 Distrib 5.5.16, for Linux (x86_64) using readline 5.1

Connection id:          9
Current database:
Current user:           kursad@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.5.16-log MySQL Community Server (GPL)
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /var/lib/mysql/mysql.sock
Uptime:                 2 min 42 sec

Threads: 1  Questions: 134612  Slow queries: 0  Opens: 136  Flush tables: 1  Open tables: 129  Queries per second avg: 830.938
--------------
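
In both outputs, x86_64 indicates a 64-bit build; a 32-bit build would show i686 or similar instead. As an extra check you can also inspect the server binary directly (a sketch; the mysqld path is an assumption and may be /usr/libexec/mysqld or elsewhere depending on the package):

# "ELF 64-bit" in the output means a 64-bit mysqld binary
file /usr/sbin/mysqld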

Posted in Linux, MySQL on November 19th, 2012 by Kürşad DARA

MySQL Too many open files error

You may run into the following error in the MySQL logs.

121117 23:21:03  InnoDB: Operating system error number 24 in a file operation.
InnoDB: Error number 24 means 'Too many open files'.

As a fix, add the following to your my.cnf file:

innodb_open_files=24000
open-files-limit=96000

The point to watch here is that open-files-limit should be set to roughly 4-5 times the value of innodb_open_files.

Once you restart MySQL, the problem will be resolved.
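
After the restart, you can verify the limit the server actually ended up with (a sketch, assuming local root access):

# open_files_limit as seen by the running server; if it is lower than the value in my.cnf,
# the operating system's file-descriptor limit (ulimit -n) is probably capping it
mysql -u root -p -e "SHOW VARIABLES LIKE 'open_files_limit';"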

For details: http://dev.mysql.com/doc/refman/5.1/en/server-options.html#option_mysqld_open-files-limit

Posted in Linux, MySQL on November 17th, 2012 by Kürşad DARA
