MySQL Migrate Users and Grants to Another MySQL Database


Sometimes we need to move to a new database server. In that case, we have to migrate the users and their grants to the new server as well.

Here is a procedure for this:

1- Prepare the grant SQL statements

[root@olddb ~]# mysql -uroot -pmypass -s -N << EOF > showgr.sql
select distinct CONCAT('show grants for "', user, '"@"', host, '";') as query from mysql.user;
EOF

showgr.sql looks like:

[root@olddb ~]# head showgr.sql
show grants for "USER1"@"";
show grants for "USER2"@"";
show grants for "USER3"@"";

Let's generate the grant statements by running showgr.sql against the old database.

[root@olddb ~]# mysql -uroot -N -pmypass -s -r < showgr.sql > grant.sql

Unfortunately, MySQL outputs the grant statements without a ";" at the end. To append ";" to every line, we use sed.

[root@olddb ~]# sed 's/$/;/' grant.sql > grants.sql
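The sed step can be sanity-checked without a server. A minimal sketch, using made-up GRANT lines to simulate the output of the "show grants" run above:

```shell
# Simulate the grant.sql that MySQL produces: one GRANT statement
# per line, with no trailing semicolon. These lines are placeholders.
printf '%s\n' \
  'GRANT USAGE ON *.* TO "USER1"@"%"' \
  'GRANT SELECT ON mydb.* TO "USER1"@"%"' > grant.sql

# Append ";" to the end of every line, as in the procedure above.
sed 's/$/;/' grant.sql > grants.sql

cat grants.sql
```

Every line in grants.sql now ends with ";" and can be fed straight back into the mysql client.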

2- scp the file and import it!

Now we have a list of grants to import into the new server.

[root@olddb ~]# scp grants.sql user@newdb:/tmp/

Now we can import the grant file on the new host:

[root@newdb ~]# mysql -uroot -pmypass < grants.sql
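Before relying on the new server, it is worth comparing the user lists on both sides. A sketch using the hosts from this example (the file names are just placeholders):

```
[root@olddb ~]# mysql -uroot -pmypass -s -N -e "select user, host from mysql.user order by 1,2" > old_users.txt
[root@newdb ~]# mysql -uroot -pmypass -s -N -e "select user, host from mysql.user order by 1,2" > new_users.txt
[root@newdb ~]# diff old_users.txt new_users.txt
```

If the diff is empty (after copying one file to the other host), every user made it across.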

Good Luck.


Thread 1 cannot allocate new log [Checkpoint not complete] or [archival required] in alert.log

In alert.log

ORACLE Instance ORCL - Can not allocate log, archival required
Thread 1 cannot allocate new log, sequence xxx
Checkpoint not complete

These messages are written to alert.log when the database wants to reuse a redo log file but cannot. This is related to either:

1-) an unfinished checkpoint (DBWR has not yet written out the dirty buffers protected by that log file), or
2-) the ARCH process not yet having copied the redo log file to the archive destination.

Oracle wants to reuse the log file, but the checkpoint position is still in that log file (or it has not been archived yet). The database halts until the checkpoint or the archiving activity completes, so that the redo log file can be reused safely.

As you can see, one reason is a slow DBWR. OK, how can we make DBWR faster?

  • You can use multiple DBWR processes (the DB_WRITER_PROCESSES parameter).
  • Enable async I/O: on RHEL, you can verify whether your Oracle binary is linked against libaio using the ldd and nm commands (see "Verifying Asynchronous I/O Usage" in the Oracle documentation).
  • You can use DBWR I/O slaves (the DBWR_IO_SLAVES parameter) if async I/O is not supported.

Adding more redo log groups can also solve this issue; as you would guess, it gives DBWR and ARCH more time to finish their work before a log file has to be reused.

Increasing the size of the redo log files can also help. Like adding more groups, larger files give the database more time before it needs to reuse a redo log file.
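Both fixes can be done online. A sketch of the SQL involved, assuming hypothetical file paths and sizes (adjust group numbers, paths, and size to your environment):

```sql
-- check current log file sizes and states
select group#, bytes/1024/1024 as mb, status from v$log;

-- add a new redo log group; path and size are placeholders
alter database add logfile group 4 ('/u01/oradata/ORCL/redo04.log') size 512m;

-- to grow an existing group: once it is INACTIVE,
-- drop it and re-create it with a larger size
alter database drop logfile group 1;
alter database add logfile group 1 ('/u01/oradata/ORCL/redo01.log') size 512m;
```

A group can only be dropped while its status in v$log is INACTIVE; switch logs (alter system switch logfile;) and wait if necessary.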

Good luck.

ORA-16651: requirements not met for enabling fast-start failover

When you try to enable fast-start failover, you may get "ORA-16651: requirements not met for enabling fast-start failover".

For a solution:

1-) Check whether flashback is enabled:

select flashback_on from v$database;

If this returns NO, enable it using the "alter database flashback on;" command.

2-) Check the FastStartFailoverTarget property

You can set it in DGMGRL using these commands (here DBA is the primary and DBB is the standby database):

edit database 'DBA' set property FastStartFailoverTarget='DBB';
edit database 'DBB' set property FastStartFailoverTarget='DBA';
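Once both properties are set and flashback is on, fast-start failover can be enabled and checked from DGMGRL. A sketch (the observer should be started from a third host, separate from both databases):

```
DGMGRL> enable fast_start failover;
DGMGRL> show fast_start failover;
DGMGRL> start observer;
```

Without a running observer, fast-start failover is enabled but will never actually be initiated.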

MySQL Error 1236


When mysqld crashes before flushing the binary log to disk, you may get this error and replication cannot resume.

It shows up like this in the output of "show slave status":

Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

For a solution:

Suppose that you have a Master-Master Replication on HostA and HostB.

On HostA:

1- stop slave;

2- reset master;

3- reset slave;

On HostB:

1- stop slave;

2- reset master;

3- reset slave;

After that, you can set up replication between the two hosts again from scratch. Note that "reset master" deletes all binary log files on the host it is run on, so make sure both hosts already contain the same data before re-pointing the slaves.
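Re-pointing one direction of the replication can be sketched as follows; run the mirror-image statements on HostB. The user, password, and hostname here are placeholders, and the log file/position must be taken from the output of SHOW MASTER STATUS on the other host:

```
CHANGE MASTER TO
  MASTER_HOST='hostB',
  MASTER_USER='repl',
  MASTER_PASSWORD='replpass',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
SHOW SLAVE STATUS\G
```

In SHOW SLAVE STATUS, both Slave_IO_Running and Slave_SQL_Running should show Yes once replication is healthy again.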

Good Luck.

Index Compression Part VI: 12c Index Advanced Compression Block Dumps (Tumble and Twirl)

Richard Foote's Oracle Blog

Sometimes, a few pictures (or in this case index block dumps) are better than a whole bunch of words 🙂

In my previous post, I introduced the new Advanced Index Compression feature, whereby Oracle automatically determines how to best compress an index. I showed a simple example of an indexed column that had sections of index entries that were basically unique (and so don’t benefit from compression) and other sections with index entries that had many duplicates (that do compress well). Advanced Index Compression enables Oracle to automatically just compress those index leaf blocks where compression is beneficial.

If we look at a couple of partial block dumps from this index, first a dump from a leaf block that did have duplicate index entries:

Leaf block dump
header address 216542820=0xce82e64
KDXCOLEV Flags = – – –
kdxcoopc 0xa0: opcode=0: iot flags=-C- is converted=Y
kdxconco 2
kdxconro 651
kdxcofbo 1346=0x542
kdxcofeo 2172=0x87c
kdxcoavs 826


12c Fixed Subquery

Oracle Scratchpad

It’s been about 8 months since I posted a little note about a “notable change in behaviour” of the optimizer when dealing with subqueries in the where clause that could be used to return a constant, e.g.:

There's been a note at the start of the script ever since, saying: "Check if this is also true for any table with 'select fixed_value from table where primary = constant'". I finally had a few minutes this morning (San Francisco time) to check – and it does, in both and . With the t1 table from the previous article, run the following:

Instead of the historic 5% of 5% selectivity, the plan shows the optimizer predicting (approximately) the 80,000 rows that it will actually get:

I can’t think it’s very likely that anyone has written SQL that looks like this – but I’m often surprised by what I see in the…
