Monday, July 21, 2014

How to run SQL Server 2012 Best Practices Analyzer in Windows 2012

Step-by-step instructions:

1. Install the Microsoft Baseline Configuration Analyzer v2 (BCA) – it installs to C:\Program Files\Microsoft Baseline Configuration Analyzer 2\. (http://www.microsoft.com/en-sg/download/details.aspx?id=16475)

2. Install the Microsoft SQL Server 2012 Best Practices Analyzer (SQL BPA) – it installs to C:\Windows\System32\BestPractices\v1.0\Models\SQL2012BPA\. (http://www.microsoft.com/en-sg/download/details.aspx?id=29302)

3. Navigate to C:\Windows\System32\BestPractices\v1.0\Models, right-click the SQL2012BPA folder, and copy it.

4. Paste the SQL2012BPA folder into C:\ProgramData\Microsoft\Microsoft Baseline Configuration Analyzer 2\Models\.

5. Start the BCA with Run As Administrator; SQL Server 2012 BPA will now appear in the “Select a product” dropdown.
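The copy-and-paste step can also be done from an elevated command prompt; a sketch using the same paths as above:

xcopy "C:\Windows\System32\BestPractices\v1.0\Models\SQL2012BPA" "C:\ProgramData\Microsoft\Microsoft Baseline Configuration Analyzer 2\Models\SQL2012BPA" /E /I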


Sunday, July 20, 2014

MySQL Replication: Crash-Safe Slaves Feature Introduced in 5.6

The warning to fix:

2014-07-21 06:29:58 3850 [Warning] Slave SQL: If a crash happens this configuration does not guarantee that the relay log info will be consistent, Error_code: 0

Current Settings

root@server1> select table_name,engine from information_schema.tables
    -> where table_name in ('slave_relay_log_info','slave_master_info');
+----------------------+--------+
| table_name           | engine |
+----------------------+--------+
| slave_master_info    | InnoDB |
| slave_relay_log_info | InnoDB |
+----------------------+--------+
2 rows in set (0.00 sec)

root@server1> show variables like '%info_repository';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| master_info_repository    | FILE  |
| relay_log_info_repository | FILE  |
+---------------------------+-------+
2 rows in set (0.00 sec)

Add these two lines to the [mysqld] section of my.cnf on both server1 and server2:


master_info_repository=TABLE
relay_log_info_repository=TABLE
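For a fully crash-safe slave, it is usually recommended to also enable automatic relay log recovery, so that after a crash the slave discards possibly-incomplete relay logs and refetches them from the master:

relay_log_recovery=1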

Restart mysqld, then verify the settings:

root@server1> show variables like '%info_repository';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| master_info_repository    | TABLE |
| relay_log_info_repository | TABLE |
+---------------------------+-------+
2 rows in set (0.00 sec)

root@server1> select * from mysql.slave_master_info \G
*************************** 1. row ***************************
       Number_of_lines: 23
       Master_log_name: bin-log.000008
        Master_log_pos: 120
                  Host: 127.0.0.1
             User_name: repl_user
         User_password: repl_pass
                  Port: 3307
         Connect_retry: 60
           Enabled_ssl: 0
                Ssl_ca:
            Ssl_capath:
              Ssl_cert:
            Ssl_cipher:
               Ssl_key:
Ssl_verify_server_cert: 0
             Heartbeat: 1800
                  Bind:
    Ignored_server_ids: 0
                  Uuid: 37f99013-0fa4-11e4-ae2f-e13c88f43118
           Retry_count: 86400
               Ssl_crl:
           Ssl_crlpath:
Enabled_auto_position: 0
1 row in set (0.00 sec)

root@server1> select * from mysql.slave_relay_log_info \G
*************************** 1. row ***************************
  Number_of_lines: 7
   Relay_log_name: /mysql56/mysql-server1/slave/relay-bin.000021
    Relay_log_pos: 281
  Master_log_name: bin-log.000008
   Master_log_pos: 120
        Sql_delay: 0
Number_of_workers: 0
               Id: 1
1 row in set (0.00 sec)

Enable Semi-synchronous Replication on an existing Multi-Master (2-node) replication setup

Install Plugins

root@server1> install plugin rpl_semi_sync_master soname 'semisync_master.so';
root@server1> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';

root@server2> install plugin rpl_semi_sync_master soname 'semisync_master.so';
root@server2> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';

Enable semi-synchronous replication

root@server1> set global rpl_semi_sync_master_enabled = 1;
root@server1> set global rpl_semi_sync_master_timeout = 10000;
root@server1> set global rpl_semi_sync_slave_enabled = 1;

root@server2> set global rpl_semi_sync_master_enabled = 1;
root@server2> set global rpl_semi_sync_master_timeout = 10000;
root@server2> set global rpl_semi_sync_slave_enabled = 1;
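SET GLOBAL changes do not survive a mysqld restart, so to make the semi-sync settings permanent, the same values can also be added to the [mysqld] section of my.cnf on both servers (the plugins installed above remain registered across restarts):

rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=10000
rpl_semi_sync_slave_enabled=1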

Restart the slave threads so they reconnect to the master as semi-sync slaves

root@server1> stop slave;
root@server1> start slave;

root@server2> stop slave;
root@server2> start slave;

Monitor the Status

root@server1> show variables like 'rpl_semi_sync%';
+------------------------------------+-------+
| Variable_name                      | Value |
+------------------------------------+-------+
| rpl_semi_sync_master_enabled       | ON    |
| rpl_semi_sync_master_timeout       | 10000 |
| rpl_semi_sync_master_trace_level   | 32    |
| rpl_semi_sync_master_wait_no_slave | ON    |
| rpl_semi_sync_slave_enabled        | ON    |
| rpl_semi_sync_slave_trace_level    | 32    |
+------------------------------------+-------+
6 rows in set (0.00 sec)

root@server1> update appdb.t1 set name='server1' where id=2;

root@server1> show status like 'rpl_semi_sync%';
+--------------------------------------------+----------+
| Variable_name                              | Value    |
+--------------------------------------------+----------+
| Rpl_semi_sync_master_clients               | 1        |
| Rpl_semi_sync_master_net_avg_wait_time     | 2178     |
| Rpl_semi_sync_master_net_wait_time         | 2178     |
| Rpl_semi_sync_master_net_waits             | 1        |
| Rpl_semi_sync_master_no_times              | 1        |
| Rpl_semi_sync_master_no_tx                 | 3        |
| Rpl_semi_sync_master_status                | ON       |
| Rpl_semi_sync_master_timefunc_failures     | 0        |
| Rpl_semi_sync_master_tx_avg_wait_time      | 4011376  |
| Rpl_semi_sync_master_tx_wait_time          | 20056881 |
| Rpl_semi_sync_master_tx_waits              | 5        |
| Rpl_semi_sync_master_wait_pos_backtraverse | 4        |
| Rpl_semi_sync_master_wait_sessions         | 0        |
| Rpl_semi_sync_master_yes_tx                | 1        |
| Rpl_semi_sync_slave_status                 | ON       |
+--------------------------------------------+----------+
15 rows in set (0.01 sec)

root@server1> select * from information_schema.plugins where plugin_name like 'rpl%'\G
*************************** 1. row ***************************
           PLUGIN_NAME: rpl_semi_sync_master
        PLUGIN_VERSION: 1.0
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: REPLICATION
   PLUGIN_TYPE_VERSION: 2.0
        PLUGIN_LIBRARY: semisync_master.so
PLUGIN_LIBRARY_VERSION: 1.4
         PLUGIN_AUTHOR: He Zhenxing
    PLUGIN_DESCRIPTION: Semi-synchronous replication master
        PLUGIN_LICENSE: GPL
           LOAD_OPTION: ON
*************************** 2. row ***************************
           PLUGIN_NAME: rpl_semi_sync_slave
        PLUGIN_VERSION: 1.0
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: REPLICATION
   PLUGIN_TYPE_VERSION: 2.0
        PLUGIN_LIBRARY: semisync_slave.so
PLUGIN_LIBRARY_VERSION: 1.4
         PLUGIN_AUTHOR: He Zhenxing
    PLUGIN_DESCRIPTION: Semi-synchronous replication slave
        PLUGIN_LICENSE: GPL
           LOAD_OPTION: ON
2 rows in set (0.00 sec)

No built-in conflict detection in MySQL Multi-Master replication

Server version:         5.6.19-log MySQL Community Server (GPL)
Original Table Data:

root@server1> select * from appdb.t1;
+----+---------------------+
| id | name                |
+----+---------------------+
|  1 | N1                  |
|  2 | N2a                 |
| 11 | N1                  |
| 21 | 2014-07-20 09:51:53 |
+----+---------------------+
4 rows in set (0.08 sec)

A delete that finds no matching row is silently ignored

root@server1> stop slave io_thread;
root@server2> stop slave io_thread;

root@server1> delete from appdb.t1 where id=1;
root@server2> delete from appdb.t1 where id=1;

root@server1> start slave io_thread;
root@server2> start slave io_thread;

Updates are applied without checking the before image (very different from Oracle GoldenGate, Streams, etc.), so the two nodes end up with each other's values:

root@server1> stop slave io_thread;
root@server2> stop slave io_thread;

root@server1> update appdb.t1 set name='server1' where id=2;
root@server2> update appdb.t1 set name='server2' where id=2;

root@server1> start slave io_thread;
root@server2> start slave io_thread;

root@server1> select * from appdb.t1 where id=2;
+----+---------+
| id | name    |
+----+---------+
|  2 | server2 |
+----+---------+
1 row in set (0.00 sec)

root@server2> select * from appdb.t1 where id=2;
+----+---------+
| id | name    |
+----+---------+
|  2 | server1 |
+----+---------+

Friday, July 11, 2014

How to fix errors FS-10751 & FS-10755 during the Oracle Fail Safe initial configuration

Error message:

12:13:08 AM 0x800706BA: The RPC server is unavailable.

12:13:08 AM FS-10751: Node node1 failed to create a class factory on remote node node2

12:13:08 AM FS-10755: Error creating the OracleFsWorker COM object

 

Fix: disable the firewall between the two Fail Safe nodes (netsh advfirewall set allprofiles state off)
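Disabling the firewall outright is a blunt fix. A narrower alternative (a sketch, assuming the default RPC configuration) is to allow only the RPC endpoint mapper and dynamic RPC ports between the nodes:

netsh advfirewall firewall add rule name="RPC Endpoint Mapper" dir=in action=allow protocol=TCP localport=RPC-EPMap
netsh advfirewall firewall add rule name="RPC Dynamic Ports" dir=in action=allow protocol=TCP localport=RPC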


Tuesday, July 8, 2014

Use fileshare (SMB) as shared storage for SQL Server Failover Clustering


[Warning Message]

You have specified a file server as the data directory \\dc1\sql2014_DB_SMB\MSSQL12.SQL2014\MSSQL\DATA. To avoid possible failures in the installation process, you must verify that the SQL Server service account has full control share permissions on the specified file server before continuing.

[Details]

Microsoft.SqlServer.Configuration.SqlEngine.ValidationException: You have specified a file server as the data directory \\dc1\sql2014_DB_SMB\MSSQL12.SQL2014\MSSQL\DATA. To avoid possible failures in the installation process, you must verify that the SQL Server service account has full control share permissions on the specified file server before continuing.
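The fix implied by the warning is to grant the SQL Server service account Full Control on both the share and the underlying NTFS folder before running setup. On Windows Server 2012 this can be sketched as follows, run on the file server (DOMAIN\sqlsvc is a placeholder for the actual service account):

Grant-SmbShareAccess -Name sql2014_DB_SMB -AccountName "DOMAIN\sqlsvc" -AccessRight Full -Force
icacls \\dc1\sql2014_DB_SMB /grant "DOMAIN\sqlsvc:(OI)(CI)F"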


Monday, July 7, 2014

Fix the issue when configuring file share witness

 

Witness Type: File Share Witness
Witness Resource: \\dc1\Cluster_Vote
Cluster Managed Voting: Enabled
Started: 7/7/2014 9:52:24 PM
Completed: 7/7/2014 9:52:24 PM
All nodes are configured to have quorum votes

Your cluster quorum configuration will be changed to the configuration shown above.

An error was encountered while modifying the quorum settings.
Your cluster quorum settings have not been changed.
There was an error configuring the file share witness '\\dc1\Cluster_Vote'.
Unable to save property changes for 'File Share Witness'.
The system cannot find the file specified


To fix this, grant Full Control on \\dc1\Cluster_Vote to the cluster name computer account ClusterName$ (in my case, it’s VirtualDB$)
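On Windows Server 2012 the share and NTFS permissions can be granted from the file server; a sketch assuming the share maps to C:\Cluster_Vote on dc1 (substitute your own domain and cluster account for DOMAIN\VirtualDB$):

Grant-SmbShareAccess -Name Cluster_Vote -AccountName "DOMAIN\VirtualDB$" -AccessRight Full -Force
icacls C:\Cluster_Vote /grant "DOMAIN\VirtualDB$:(OI)(CI)F"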
