ELK Stack – Upgrade from 2.x to 5.x

Elasticsearch Snapshot and Restore



Elasticsearch Migration Helper Plugin

cd /usr/share/elasticsearch/

./bin/plugin install https://github.com/elastic/elasticsearch-migration/releases/download/v2.0.4/elasticsearch-migration-2.0.4.zip

You may get an updated install command from here -> Elasticsearch Migration Plugin Install

You may download the plugin from here -> Elasticsearch Migration Plugin

After the plugin is installed, access it using its URL


netstat -na | egrep '9200|9300'

vi /etc/elasticsearch/elasticsearch.yml

network.host: localhost      to       network.host: "0"


If you get an error about file descriptors:

Elasticsearch uses a lot of file descriptors or file handles. Running out of file descriptors can be disastrous and will most probably lead to data loss. Make sure to increase the limit on the number of open file descriptors for the user running Elasticsearch to 65,536 or higher.

Set ulimit -n 65536 as root before starting Elasticsearch, or set nofile to 65536 in /etc/security/limits.conf.

You may check the current settings using the command  ->    ulimit -a

To set it permanently, set this value in the /etc/security/limits.conf file for the user Elasticsearch runs as; in most cases this is elasticsearch.
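As a sketch, the permanent setting is two lines in limits.conf. The user name elasticsearch is an assumption, and the lines are written to a local file here for illustration rather than to /etc/security/limits.conf:

```shell
# Hypothetical limits.conf entries; "elasticsearch" is the assumed
# service user. Written to a local file for illustration only.
cat > limits-example.conf <<'EOF'
elasticsearch  soft  nofile  65536
elasticsearch  hard  nofile  65536
EOF
grep -c 65536 limits-example.conf
```

Appending the same two lines to /etc/security/limits.conf (and logging in again) makes the 65,536 limit permanent for that user.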

For more details, check this link – Elasticsearch – Configuring System Settings

curl -X GET "localhost:9200/_nodes/stats/process?filter_path=**.max_file_descriptors"

cat /proc/sys/fs/file-max

vi /usr/lib/systemd/system/elasticsearch.service

/etc/init.d/elasticsearch restart

systemctl daemon-reload

/etc/init.d/elasticsearch restart
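Rather than editing the unit file under /usr/lib directly, a systemd drop-in override survives package upgrades. A minimal sketch, assuming the unit is named elasticsearch.service; the content is written to a local file here for illustration:

```shell
# Hypothetical drop-in content; the real location would be
# /etc/systemd/system/elasticsearch.service.d/limits.conf,
# followed by "systemctl daemon-reload" and a service restart.
cat > es-override.conf <<'EOF'
[Service]
LimitNOFILE=65536
EOF
grep LimitNOFILE es-override.conf
```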

warning: /etc/elasticsearch/elasticsearch.yml created as /etc/elasticsearch/elasticsearch.yml.rpmnew
warning: /etc/sysconfig/elasticsearch created as /etc/sysconfig/elasticsearch.rpmnew
warning: /usr/lib/systemd/system/elasticsearch.service created as /usr/lib/systemd/system/elasticsearch.service.rpmnew

cd /usr/share/elasticsearch/

./bin/elasticsearch-plugin list

./bin/elasticsearch-plugin remove elasticsearch-migration


tail -f /path_to_logs/logs/elasticsearch.log


Kibana Upgrade


yum update kibana


If you get an error like "Login is currently disabled because the license could not be determined. Please check that Elasticsearch has the X-Pack plugin installed and is reachable, then refresh this page.",

update the X-Pack license using the following commands.

curl -XPUT 'http://<host>:<port>/_xpack/license' -H "Content-Type: application/json" -d @license.json
curl -XPUT 'localhost:9200/_xpack/security/user/elastic/_password' -H "Content-Type: application/json" -d '{
  "password" : "elasticpassword"
}'

Ref: Elastic

Solaris OK prompt commands

OK show-disks —> To show the disks
OK probe-scsi —> To search the SCSI devices attached to the primary SCSI controller
OK probe-scsi-all —> To search all SCSI devices
OK devalias —> To list device alias names
OK devalias <alias> <path> —> To temporarily create a device alias
OK printenv —> To view the current NVRAM settings
OK setenv <env> <value> —> To set environment variables
OK set-defaults —> To reset the OpenBoot PROM settings to the factory defaults
OK nvalias <alias> <path> —> To set a device alias permanently in NVRAM
OK nvunalias cdrom1 —> To remove the nvalias 'cdrom1' from NVRAMRC
OK .version —> To find out the OpenBoot PROM version
OK .enet-addr —> To find out the Ethernet MAC address
OK .speed —> To find out the CPU and PCI bus speeds
OK banner —> To display the model, architecture, processor, OpenBoot version, Ethernet address, host ID, etc.
OK reset-all —> To reset (reboot) the system from the OK prompt
OK show-devs —> To show the PCI devices
OK boot —> To boot the system from the default boot device
OK boot cdrom —> To boot from CD-ROM
OK boot disk —> To boot the system from the device specified by the disk device alias
OK boot <device-path> —> To boot from the full device path given
OK boot net —> Network boot; boots from a TFTP boot server or JumpStart server
OK boot net -install —> JumpStart boot
OK boot tape —> Tape boot; boots off a SCSI tape if available
OK boot -h —> Halted boot; boots into a halted state (OK prompt), interesting for troubleshooting boot at the lowest level
OK boot -r —> Reconfiguration boot; boots and searches for all attached devices, useful when a new device is attached to the system
OK boot -s —> Single user; boots the system into single-user mode
OK boot -v —> Verbose boot; shows good debugging information
OK boot -F failsafe —> To boot the server into failsafe mode

Displaying System Information
Commands to display additional system-related information. Not all commands work on all platforms.
OK .idprom —> Display ID PROM contents
OK .traps —> Display a list of processor-dependent trap types
OK show-devs —> Display the list of installed and probed devices
OK eject floppy —> Eject the floppy
OK eject cdrom —> Eject the CD-ROM
OK sync —> Call the operating system to write information to the hard disk

Emergency Keyboard Commands
These are key sequences recognized by the system to perform predetermined
actions at boot time or during normal operation.

Stop    —> Bypass POST (this command does not depend on security-mode)
Stop-A  —> Abort; this also stops a running system. You can resume normal
operation by entering go at the prompt; entering anything else leaves the
system halted.
Stop-D  —> Enter diagnostic mode (sets diag-switch? to true)
Stop-N  —> Reset NVRAM contents to default values


Ref: Suresh-Solaris

netstat – Find number of active connections in Linux using netstat

The netstat command is quite useful for checking connections to your machine. If we wanted to see ALL of the connections (which I really recommend you don't do unless you're trying to debug something, and then you should probably pipe it to a file), we could use the "netstat -a" command.

Using “netstat -a” will give you something like this:


tcp	 0	 0 app.mydomain.com:http	 SYN_RECV
tcp	 0	 0 app.mydomain.com:http	 SYN_RECV
tcp	 0	 0 app.mydomain.com:http SYN_RECV
tcp	 0	 0 app.mydomain.com:http SYN_RECV
tcp	 0	 0 app.mydomain.com:http SYN_RECV
tcp	 0	 0 app.mydomain.com:http	 SYN_RECV
tcp	 0	 0 app.mydomain.com:http SYN_RECV
tcp	 0	 0 app.mydomain.com:http	 41-135-22-100.dsl.mwe:64774 SYN_RECV

As you can see, it does name resolving for us and all that good stuff. Sometimes very handy, but that's not what this is about.

Total connections Count

We want to get some solid numbers so we can take a broader perspective. To do this we can use the following command:

netstat -an | wc -l

This will show us a count of all connections that we presently have to our machine.

Connections on specific port

We can take this one step further. Let's say you only wanted to see traffic coming across port 80 (standard HTTP). We can grep our netstat output and then count it like so:

netstat -an | grep :80 | wc -l
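One caveat worth sketching: a bare grep :80 also matches ports such as 8080 or 1980. Anchoring the match on the local-address field (field 4 of netstat's output) avoids that. Synthetic input is used here so the difference is visible:

```shell
# Two fake netstat lines: local ports 80 and 8080. Only the first
# should be counted as port-80 traffic.
printf '%s\n' \
  'tcp 0 0 1.2.3.4:80 5.6.7.8:1111 ESTABLISHED' \
  'tcp 0 0 1.2.3.4:8080 5.6.7.8:2222 ESTABLISHED' \
| awk '$4 ~ /:80$/ {n++} END {print n+0}'
```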

Connections Count based on Connection state

Finally, let's take a look at the big picture in category form. It is often extremely useful to see what those connections are doing, especially when you think you might have tons of idle open connections and are trying to tweak your settings. It's been known to happen that you have a really busy web server, for instance, that is opening a lot of database connections to the same box and then closing them. That often causes states like TIME_WAIT to pile up, and a large number for any of these may be an indication that you need to adjust your TCP timeout settings.

netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n
      1 CLOSING
      1 established
      1 FIN_WAIT2
      1 Foreign
      2 CLOSE_WAIT
      6 FIN_WAIT1
      7 LAST_ACK
      7 SYN_RECV
     44 LISTEN
    297 TIME_WAIT
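To make the pipeline transparent, here it is run against a small synthetic netstat snippet (real input would come from netstat -ant):

```shell
# Four fake connection lines; field 6 is the TCP state.
sample='tcp 0 0 a:80 b:1 TIME_WAIT
tcp 0 0 a:80 c:2 TIME_WAIT
tcp 0 0 a:80 d:3 ESTABLISHED
tcp 0 0 a:22 e:4 LISTEN'
printf '%s\n' "$sample" | awk '{print $6}' | sort | uniq -c | sort -n
```

The output is one line per distinct state, prefixed by its count, with TIME_WAIT counted twice.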

So there you have it: a quick way to return counts on the connections in your Linux environment.

Check opened ports on server

Occasionally, when using netstat you may only care about the ports you are listening on. This is especially important if you are running a server that isn't behind a firewall, because it helps you determine what you may be vulnerable to without being aware of it. Using netstat -l provides an excellent way to view this information.

root@nox [~]# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State
tcp        0      0 *:mysql                     *:*                         LISTEN
tcp        0      0 *:submission                *:*                         LISTEN
tcp        0      0 *:pop3                      *:*                         LISTEN
tcp        0      0 localhost:783               *:*                         LISTEN


Statistics by Protocol

Another very common and powerful tool built into netstat is showing network statistics in an overview fashion. If you're just trying to get a good idea about packet statistics, then the netstat -s command may be what you're looking for. Here is some sample output. Keep in mind that netstat -s shows statistics broken down by protocol, so the fewer protocol stacks you are running, the more compact this summary will be.

netstat -s
139502653 total packets received
28 with invalid addresses
0 forwarded
0 incoming packets discarded
133312468 incoming packets delivered
84570989 requests sent out
366 outgoing packets dropped
50 reassemblies required
25 packets reassembled ok
110 fragments received ok
220 fragments created
180285 ICMP messages received
1586 input ICMP message failed.
ICMP input histogram:
destination unreachable: 9516
timeout in transit: 331
echo requests: 170151
echo replies: 284
172009 ICMP messages sent
0 ICMP messages failed
ICMP output histogram:
destination unreachable: 1818
echo request: 40
echo replies: 170151
InType0: 284
InType3: 9516
InType8: 170151
InType11: 331
OutType0: 170151
OutType3: 1818
OutType8: 40
1104118 active connections openings
2918161 passive connection openings
26607 failed connection attempts
256788 connection resets received
10 connections established
128535136 segments received
78146054 segments send out
1645036 segments retransmited
0 bad segments received.
185776 resets sent
5125395 packets received
1867 packets to unknown port received.
0 packet receive errors
5158639 packets sent
511 SYN cookies sent
511 SYN cookies received
12748 invalid SYN cookies received
14894 resets received for embryonic SYN_RECV sockets
159972 packets pruned from receive queue because of socket buffer overrun
2 packets pruned from receive queue
73 ICMP packets dropped because they were out-of-window
1965839 TCP sockets finished time wait in fast timer
78 time wait sockets recycled by time stamp
36503 packets rejects in established connections because of timestamp
2487605 delayed acks sent
33477 delayed acks further delayed because of locked socket
Quick ack mode was activated 45146 times
233 times the listen queue of a socket overflowed
233 SYNs to LISTEN sockets ignored
9643039 packets directly queued to recvmsg prequeue.
7969358 packets directly received from backlog
3291115817 packets directly received from prequeue
24087199 packets header predicted
5532135 packets header predicted and directly queued to user
30481401 acknowledgments not containing data received
42935286 predicted acknowledgments
814 times recovered from packet loss due to fast retransmit
339835 times recovered from packet loss due to SACK data
336 bad SACKs received
Detected reordering 2070 times using FACK
Detected reordering 854 times using SACK
Detected reordering 10 times using reno fast retransmit
Detected reordering 1840 times using time stamp
3234 congestion windows fully recovered
20175 congestion windows partially recovered using Hoe heuristic
TCPDSACKUndo: 11509
14757 congestion windows recovered after partial ack
1004274 TCP data loss events
TCPLostRetransmit: 54568
129 timeouts after reno fast retransmit
33120 timeouts after SACK recovery
31346 timeouts in loss state
885023 fast retransmits
93299 forward retransmits
337378 retransmits in slow start
128472 other TCP timeouts
TCPRenoRecoveryFail: 356
35936 sack retransmits failed
9 times receiver scheduled too late for direct processing
57242284 packets collapsed in receive queue due to low socket buffer
49286 DSACKs sent for old packets
157 DSACKs sent for out of order packets
95033 DSACKs received
2091 DSACKs for out of order packets received
39363 connections reset due to unexpected data
35517 connections reset due to early user close
12861 connections aborted due to timeout
6 times unable to send RST due to no memory
TCPSACKDiscard: 60
TCPDSACKIgnoredOld: 2937
TCPDSACKIgnoredNoUndo: 38596
TCPSpuriousRTOs: 2925
TCPSackShifted: 1905464
TCPSackMerged: 2048679
TCPSackShiftFallback: 995770
TCPBacklogDrop: 41842
InBcastPkts: 20
InOctets: 60455654365
OutOctets: 154094094438
InBcastOctets: 6560

Process Information

Another extremely useful tool for server administrators trying to track down processes that have run amok is the netstat -p command. It returns the PID of the process that owns each connection. It's also quite useful if someone is abusing a process and you need to find out what IP they are connecting from, so that you can get in touch with that individual or block connections from that IP in the future. Here's some sample output from netstat -p.

netstat -p
tcp        0      0 localhost:56423  example.domain.com:https ESTABLISHED 27911/java
tcp        0     52 localhost:ssh    oh-76-76-76-76.dhcp.e:51653 ESTABLISHED 3344/sshd
tcp        0      0 localhost:imaps  76.sub-76-76-76.myvz:9258 ESTABLISHED 14501/dovecot/imap-
Ref: Exchange Core

Web Server Performance Tuning

First off, Apache, nginx, or LightSpeed aside – if you’re running a server with 1000-2000 requests/second it’s time to start thinking about dual servers and load balancing. Depending on what you’re serving you can easily get more out of any of those servers, but at those rates you’re serving something important (or at least high-traffic), so you want redundancy in addition to the ability to handle momentary load spikes.
Start seriously considering a load balancing infrastructure, e.g. HAProxy and NGINX.

You can certainly consider other high-performance web servers (nginx is very popular), or you can consider tuning your Apache configuration for better performance.

Some Apache suggestions.

Before doing anything else, read the Apache performance tuning documentation.

  1. MaxRequestsPerChild is really only useful for containing resource leaks.
    100 (your current value) is absolutely insane. You’re churning processes which kills performance.
    0 (Never kill a child) is certainly viable if all you’re serving are static resources.
    10000 (ten thousand, the default) is fine in almost all circumstances. 50000 (fifty thousand) is what I use for pure static HTML sites.
  2. StartServers, MinSpareServers, and MaxSpareServers can be tuned.
    I generally set StartServers and MinSpareServers to the same value.
    If there is a specific minimum number of spare servers you want to keep around, that is the number you should start with. A good value for this is your low-water-mark of simultaneous active connections.
    MaxSpareServers should be set to 75-80% of your high-water-mark of simultaneous active connections.
  3. ServerLimit and MaxClients can possibly be increased.
    If you have lots of free RAM and lots of free CPU, increase these numbers.
    If you’re running close to resource saturation, leave them as-is.
  4. Use graceful restarts
    You say you are seeing “momentary extreme peaks” in your load when Apache restarts.
    This tells me you’re probably not using graceful restarts.
    Whatever is causing Apache to restart, have it send SIGUSR1 to Apache rather than SIGHUP (or, heaven forbid, actually stopping and starting the entire server). This is far less abusive and disruptive to the system than a regular restart or a full stop/start.
  5. Consider other MPMs
    You are almost certainly using the prefork MPM if you’re on a Unix system.
    Consider the Worker MPM instead.
    Tuning for the Worker MPM is a little different.
  6. Spend some cache
    Apache has caching modules which can be used to hold frequently accessed data in RAM. This avoids a round-trip to the disk (or at least the filesystem layer) for frequently accessed data.
    Configuring memory backed caching can give you a pretty big performance boost for a relatively small amount of memory.
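Pulling the suggestions above together, a hypothetical prefork tuning block might look like this. Every number here is an assumption to adapt to your own high- and low-water marks, not a recommended setting:

```apache
# Sketch only: values assume a moderately busy server with RAM headroom.
<IfModule mpm_prefork_module>
    StartServers          20   # same as MinSpareServers
    MinSpareServers       20   # low-water mark of active connections
    MaxSpareServers       80   # roughly 75-80% of the high-water mark
    ServerLimit          256
    MaxClients           256
    MaxRequestsPerChild 10000  # the default; 0 only for pure static sites
</IfModule>
```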

5 Tips to Boost the Performance of Your Apache Web Server

Install Mod_Pagespeed to Speed Up Apache and Nginx Performance Upto 10x

13 Apache Web Server Security and Hardening Tips

5 Ways to Optimize Apache Performance

Improving Linux System Performance with I/O Scheduler Tuning






top command on multi core processor

top command shows CPU usage as a percentage of a single CPU by default. That’s why you can have percentages that are >100. On a system with 4 physical or virtual cores, you can see up to 400% CPU usage.

You can change this behavior by pressing I (that’s Shift + i and toggles “Irix mode”) while top is running. That will cause it to show the percentage of available CPU power being used. As explained in man top:

    1. %CPU  --  CPU Usage
The task's share of the elapsed CPU time since the last screen
update, expressed as a percentage of total  CPU  time.   In  a
true  SMP environment, if 'Irix mode' is Off, top will operate
in 'Solaris mode' where a task's cpu usage will be divided  by
the  total  number  of  CPUs.  You toggle 'Irix/Solaris' modes
with the 'I' interactive command.

Alternatively, you can press 1 which will show you a breakdown of CPU usage per CPU:

top - 13:12:58 up 21:11, 17 users,  load average: 0.69, 0.50, 0.43
Tasks: 248 total,   3 running, 244 sleeping,   0 stopped,   1 zombie
%Cpu0  : 33.3 us, 33.3 sy,  0.0 ni, 33.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  : 16.7 us,  0.0 sy,  0.0 ni, 83.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  : 60.0 us,  0.0 sy,  0.0 ni, 40.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   8186416 total,  6267232 used,  1919184 free,   298832 buffers
KiB Swap:  8191996 total,        0 used,  8191996 free,  2833308 cached
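The arithmetic behind readings above 100% is simply 100% per core. A quick sketch:

```shell
# With Irix mode on, a fully loaded multithreaded process on a
# quad-core box can reach cores x 100 percent in top's %CPU column.
cores=4
echo "Irix-mode %CPU ceiling: $((cores * 100))%"
```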

Windows – How to Kill hung service – Windows Service Which is Stuck

Sometimes as an administrator you may need to kill a service which is stuck in a ‘starting’ or ‘stopping’ state, in order to avoid having to reboot a server in the middle of the day.

These are the simple steps you need to do:

Find out the Service Name:

To do this, go into Services and double-click on the service which is stuck. Make a note of the "Service Name".

Find out the PID of the service

To kill the service you have to know its PID or Process ID.

Open an elevated command prompt and type in:

sc queryex servicename

(where servicename is the name of the service you obtained from Step 1.)

Replace 'servicename' with the service's registry name. For example: Print Spooler is spooler. (See Picture)

After running the query you will be presented with a list of details. You will want to locate the PID. (Highlighted)

Kill the PID

Now that you have the PID, you can run the following command to kill the hung process.

From the same command prompt type in:

taskkill /f /pid [PID]

Where [PID] is the service number.

This will force kill the hung service. (See Picture)

If it is successful you should receive the following message:

SUCCESS: The process with PID XXXX has been terminated.

Be careful of what you are killing, though. If you kill a critical Windows service you may end up forcing the machine to reboot on its own.

Note: By forcing a service to stop, you can also use these instructions to kill a Windows service which is stuck at starting. This will allow you to restart the service.

Ref: SpiceWorks  Support4IT

The difference between fork(), vfork(), exec() and clone()


The fork call basically makes a duplicate of the current process, identical in almost every way (not everything is copied over, for example, resource limits in some implementations but the idea is to create as close a copy as possible).

The new process (child) gets a different process ID (PID) and has the PID of the old process (parent) as its parent PID (PPID). Because the two processes are now running exactly the same code, they can tell which is which by the return code of fork – the child gets 0, the parent gets the PID of the child. This is all, of course, assuming the fork call works – if not, no child is created and the parent gets an error code.
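The same parent/child split can be observed from the shell, where launching any external command is itself a fork (a loose analogy, not the C API):

```shell
# A child shell is a forked copy with its own PID; the parent's PID
# is unchanged. ($$ expands to each shell's own PID.)
parent=$$
child=$(bash -c 'echo $$')
if [ "$parent" != "$child" ]; then
    echo "child got a new PID"
fi
```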


The basic difference between vfork and fork is that when a new process is created with vfork(), the parent process is temporarily suspended, and the child process might borrow the parent’s address space. This strange state of affairs continues until the child process either exits, or calls execve(), at which point the parent process continues.

This means that the child process of a vfork() must be careful to avoid unexpectedly modifying variables of the parent process. In particular, the child process must not return from the function containing the vfork() call, and it must not call exit() (if it needs to exit, it should use _exit(); actually, this is also true for the child of a normal fork()).


The exec call is a way to basically replace the entire current process with a new program. It loads the program into the current process space and runs it from the entry point. exec() replaces the current process with the executable pointed to by its argument. Control never returns to the original program unless there is an exec() error.
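The shell builtin exec shows the same replace-and-never-return behaviour; the second echo below can never run because the process image has already been replaced:

```shell
# After a successful exec, control never returns to the script,
# so only "replaced" is printed.
bash -c 'exec echo replaced; echo "never reached"'
```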


Clone, like fork, creates a new process. Unlike fork, these calls allow the child process to share parts of its execution context with the calling process, such as the memory space, the table of file descriptors, and the table of signal handlers.

When the child process is created with clone, it executes the function application fn(arg). (This differs from fork, where execution continues in the child from the point of the original fork call.) The fn argument is a pointer to a function that is called by the child process at the beginning of its execution. The arg argument is passed to the fn function.

When the fn(arg) function application returns, the child process terminates. The integer returned by fn is the exit code for the child process. The child process may also terminate explicitly by calling exit(2) or after receiving a fatal signal.

Reasons to Stick with MySQL

mysql update

MySQL’s founder is encouraging people to steer away from his creation. Here’s why he’s wrong.

The schism between the worlds of open source and proprietary software is never going to go away as long as open source remains viable and competitive. Eventually, you have a meeting of both worlds. In the case of MySQL, it was when Sun Microsystems purchased the open source database for $1 billion in 2008, a significant multiple given MySQL was about a $50 million business at the time.

MySQL was the cause of the significant hang-up when Oracle tried to acquire Sun in 2009, as MySQL founder Michael “Monty” Widenius staunchly opposed the deal and complained to the European Commission. The whole $7.5 billion deal was held up almost a year because of this one product, which Oracle made major promises to support in return for getting the European Commission off its back.

Monty hasn’t given up his assaults on MySQL, even though he took the entire code base circa the Oracle acquisition, forked it, and made a new product called MariaDB. Monty is welcome to his own opinion, even if he has been on this charge now for four years and looks a little obsessed for it. He’s a successful, respected developer whose complaints can’t be merely dismissed as sour grapes. After all, he got $1 billion out of Sun. He can argue he’s fighting on principle.

Monty has had his say; now we have ours. Here are five reasons why you should stick with the open source database. (And for balance, see Rikki Endsley’s article on the reasons you should leave MySQL behind.)

1. There is more MySQL investment and innovation than ever before.

The conventional wisdom in the open-source community is that Oracle wanted MySQL so it could throttle the threat to its RDBMS business. This accusation would make sense if Microsoft were the accused firm, but not Oracle. Its flagship database is far and away more advanced, and MySQL was at best going to nibble around the edges.

Since the acquisition, Oracle has increased the MySQL staff and given it a more mature engineering process. Rather than the typical open-source project with people scattered around the planet, engineering and planning is driven from Oracle.

In this time, one developer notes, the company has been making the code more modular. That means short-term work but long-term payback. In MySQL 5.6, they split one of the crucial locks in the MySQL Server, the LOCK_open, which could improve top performance by more than 100%.

Plus, the major storage engine for MySQL is InnoDB, and Oracle acquired InnoDB back in 2005. The InnoDB developers, also located within Oracle, work with the MySQL and Oracle database teams for even better integration.

2. MySQL products remain solid.

MariaDB and open-source advocates complain that new code in MySQL 5.5 doesn’t have test cases and that some of the enterprise features in version 5.5 are closed source. That is a matter of open source purity, of course, and one for any customer to take into consideration.

Still, when it came out in February, MySQL 5.6 was well-received as a solid, well-performing product with a number of new features. Oracle spent two years releasing Development Milestone Releases (DMR) to the MySQL Community for testing and feedback.

MySQL 5.6 went from 4 CPU threads in prior versions to 64 CPU threads, nearly tripled the number of concurrent connections from the prior version, and saw a four-fold improvement in read speed. There are many more improvements that would take too long to list.

Robert Hodges, president of the database clustering and replication firm, said he has no doubt of the viability of MySQL and has yet to meet a manager who fears MySQL will be ruined by Oracle. The bottom line is that Oracle is growing MySQL into an enterprise-class DBMS.

3. MySQL is designed with a focus on the Web, Cloud, and Big Data

Oracle was not deaf to the trends in computing and put emphasis on the Web, cloud computing, and big data projects. The focus was on both MySQL and MySQL cluster to provide improvements in scale-up and scale-out performance, high availability, self-healing and data integrity, provisioning, monitoring and resource management, developer agility, and security.

To support cloud services, MySQL replication has been greatly enhanced to include a new feature, Global Transaction Identifiers (GTIDs). GTIDs make it simple to track and compare replication progress between the master and slave servers. This makes it easier to recover from failures while offering flexibility in the provisioning and on-going management of multi-tier replication.
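As a sketch, enabling GTID-based replication in MySQL 5.6 is a handful of my.cnf settings (option names per MySQL 5.6; file paths and binlog names are assumptions to adapt):

```ini
[mysqld]
gtid_mode                = ON
enforce_gtid_consistency = ON   ; required when gtid_mode is ON
log_bin                  = mysql-bin
log_slave_updates        = ON   ; needed on 5.6 replicas in a GTID chain
```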

In April 2013,  Oracle announced the MySQL Applier for Hadoop. The Applier enables the replication of events from MySQL to Hadoop / Hive / HDFS as they happen and complements existing batch-based Apache Sqoop connectivity.

One of the first firms to embrace MySQL in a Big Data environment is Nokia, whose environment consists of a centralized, petabyte-scale Hadoop cluster interconnected with a 100TB Teradata enterprise data warehouse (EDW), numerous Oracle and MySQL data marts, and visualization technologies that let Nokia’s 60,000+ users around the world tap into the massive data store. And MariaDB? Good luck finding anything related to Big Data there.

4. MySQL Enterprise

MySQL Enterprise was introduced before the Oracle purchase, but Oracle has significantly improved the product. Version 5.6 added High Availability features like Replication, Oracle VM Templates for MySQL, DRBD, Oracle Solaris Clustering, and Windows Failover Clustering for MySQL. It also introduced Enterprise Audit to perform policy-based auditing compliance on new and existing applications.

There’s also the Enterprise Monitor, which continuously monitors your database and advises you of best practices to implement. It also offers Query Analyzer to monitor application performance and Workbench, which offers data modeling, SQL development, and comprehensive administration tools for server configuration and user administration.

5. There are more MySQL projects than before.

Before the acquisition, MySQL AB had 400 employees in 25 countries, with 70% working from home offices. That’s a remarkable bit of juggling, but there are arguments to be had for working together. But as Yahoo CEO Marissa Mayer noted when she put an end to the extensive remote workforce at Yahoo, to get things done, you need to collaborate. That means being in the same building.

A MySQL architect at Oracle said in his blog that Oracle has whole new teams working together, some in its giant towers in Redwood Shores, California, and others based elsewhere, working on special projects for MySQL. A whole group is working on the clustering software. There is another group working on manageability, a whole optimization team working on database algorithms, another team working on replication (vital for cloud and Big Data), and a whole team making it more scalable.

None of the anti-MySQL arguments are against the product’s performance. Most of Monty’s arguments stem from open source purity, and he has the right to make that complaint. But MySQL was a $75 million company when Sun bought it. Oracle is a $37 billion company. It knows a few things about professional software development and it’s going to do things its own way. It has turned over the basic MySQL code to the GPL, but its own extensions, like Enterprise Edition, are not obliged to be open source.

Monty more or less held up the Sun acquisition for months with his protests to the European Commission. Now he’s throwing stones from the outside. There comes a point when an advocate can become a nuisance and his actions can backfire.

Ref: Blog SmartBear

MySQL Update – MySQL Upgrade

mysql update

The MySQL update process sometimes creates issues; for example, the MySQL server doesn't start.

When you perform a MySQL update using

yum update


yum update mysql


yum update mysql-server

before restarting or stopping the server, you need to run the command



mysql_upgrade -uroot -p

MySQL Update Errors

Sometimes the MySQL server stops and won't start, and you are unable to run the mysql_upgrade command.

You get errors like these in the MySQL logs, /var/log/mysqld.log:

[ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'
[ERROR] Fatal error: mysql.user table is damaged. Please run mysql_upgrade.
[ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist
[ERROR] Native table 'performance_schema'.'events_waits_current' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_history' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_history_long' has the wrong structure
[ERROR] Native table 'performance_schema'.'setup_consumers' has the wrong structure
[ERROR] Native table 'performance_schema'.'setup_instruments' has the wrong structure
[ERROR] Native table 'performance_schema'.'setup_timers' has the wrong structure
[ERROR] Native table 'performance_schema'.'performance_timers' has the wrong structure
[ERROR] Native table 'performance_schema'.'threads' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_summary_by_thread_by_event_name' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_summary_by_instance' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_summary_global_by_event_name' has the wrong structure
[ERROR] Native table 'performance_schema'.'file_summary_by_event_name' has the wrong structure
[ERROR] Native table 'performance_schema'.'file_summary_by_instance' has the wrong structure
[ERROR] Native table 'performance_schema'.'mutex_instances' has the wrong structure
[ERROR] Native table 'performance_schema'.'rwlock_instances' has the wrong structure
[ERROR] Native table 'performance_schema'.'cond_instances' has the wrong structure
[ERROR] Native table 'performance_schema'.'file_instances' has the wrong structure

In that case, first start the MySQL server in safe mode with grant tables disabled, then run mysql_upgrade:

mysqld_safe --skip-grant-tables &

mysql_upgrade -uroot -p

Supported MySQL Update Paths

  • Upgrading within a release series is supported. For example, upgrading from 5.6.26 to 5.6.27 is supported. Skipping versions within a series is also supported, such as upgrading from 5.6.25 directly to 5.6.27.
  • Upgrading one release level is supported. For example, upgrading from 5.5 to 5.6 is supported. Upgrading to the latest version of your current series first is recommended, for example to the latest 5.5 release before moving to 5.6.
  • Upgrading more than one release level is supported only if you upgrade one level at a time. For example, upgrade from 5.1 to 5.5, and then from 5.5 to 5.6, following the upgrade instructions for each release in succession.
  • Direct upgrades that skip a release level, for example from 5.1 straight to 5.6, are not recommended or supported.
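The one-level-at-a-time rule above can be sketched as a small helper that prints the intermediate steps. The series list is limited to the release levels mentioned in this article and is an assumption, not an exhaustive table:

```shell
# Print the intermediate upgrade steps between two release levels,
# one level at a time. The series list covers only the 5.1/5.5/5.6
# levels discussed above.
upgrade_path() {
    from="$1"; to="$2"
    series="5.1 5.5 5.6"
    emit=0
    for v in $series; do
        [ "$v" = "$from" ] && { emit=1; continue; }
        [ "$emit" -eq 1 ] && echo "upgrade to latest $v"
        [ "$v" = "$to" ] && break
    done
}

upgrade_path 5.1 5.6
# prints:
# upgrade to latest 5.5
# upgrade to latest 5.6
```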

KVM Installation on CentOS 6

KVM and CentOS 6

CentOS 6 has native availability of KVM virtualization support and tools in the base distribution. 


See the meta packages contained in:

# yum grouplist | grep -i virt

1. Host Setup

Install all the packages you might need.

yum -y install @virt* dejavu-lgc-* xorg-x11-xauth tigervnc \
libguestfs-tools policycoreutils-python bridge-utils

If you use any directory other than /var/lib/libvirt for KVM files, set the SELinux context. In this example I use /vm to store my disk image files.

semanage fcontext -a -t virt_image_t "/vm(/.*)?"; restorecon -R /vm

Allow packet forwarding between interfaces.

sed -i 's/^\(net.ipv4.ip_forward =\).*/\1 1/' /etc/sysctl.conf; sysctl -p

Configure libvirtd service to start automatically and reboot.

chkconfig libvirtd on; shutdown -r now

Optionally, you can set up bridging, which allows guests to have a network adapter on the same physical LAN as the host. In this example eth0 is the device supporting the bridge and br0 will be the new device.

chkconfig network on
service network restart
yum -y erase NetworkManager
cp -p /etc/sysconfig/network-scripts/ifcfg-{eth0,br0}
sed -i -e'/HWADDR/d' -e'/UUID/d' -e's/eth0/br0/' -e's/Ethernet/Bridge/' \
/etc/sysconfig/network-scripts/ifcfg-br0
echo DELAY=0 >> /etc/sysconfig/network-scripts/ifcfg-br0
echo 'BOOTPROTO="none"' >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo BRIDGE=br0 >> /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart
brctl show

The host is now ready to start creating kvm guests.

2. Guest Setup

Since there are many options for setting up a guest, it is easier to collect the information in variables and then use them in a single command to create the guest. Several options are shown, and most can be adjusted as needed.

Start by reviewing the available OS variants.

virt-install --os-variant=list | more

Select one of the OS options:

OS="--os-variant=win7 --disk path=/var/lib/libvirt/iso/virtio-win.iso,device=cdrom"
OS="--os-variant=win2k8 --disk path=/var/lib/libvirt/iso/virtio-win.iso,device=cdrom"

Select a network option, replacing the MAC address if needed:

Net="--network bridge=br0"
Net="--network model=virtio,bridge=br0"
Net="--network model=virtio,mac=52:54:00:00:00:00"
Net="--network model=virtio,bridge=br0,mac=52:54:00:00:00:00"

Select a disk option, replacing the filename and size with desired values:

Disk="--disk /vm/Name.img,size=8"
Disk="--disk /var/lib/libvirt/images/Name.img,size=8"
Disk="--disk /var/lib/libvirt/images/Name.img,sparse=false,size=8"
Disk="--disk /var/lib/libvirt/images/Name.qcow2,sparse=false,bus=virtio,size=8"
Disk="--disk vol=pool/volume"
Disk="--livecd --nodisks"
Disk="--disk /dev/mapper/vg_..."

Select a source (live cd iso, pxe or url):

Src="-l http://ftp.cuhk.edu.hk/pub/linux/fedora/releases/24/Server/x86_64/iso/"
Src="-l http://ftp.us.debian.org/debian/dists/stable/main/installer-amd64/"
Src="-l http://ftp.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/"
Src="-l http://download.opensuse.org/distribution/openSUSE-stable/repo/oss/"

Optionally add a URL for a kickstart file:

KS="-x ks=http://ks.example.com/kickstart/c6-64.ks"

Optionally select a graphics option:

Gr="--graphics none"
Gr="--graphics vnc"
Gr="--graphics vnc,password=foo"
Gr="--graphics spice"

Select number of cpus:

Cpu="--vcpus=2"

Select amount of ram:

Ram="--ram=2048"

Choose a name for the guest:

Name="myguest"
Create the guest:

virt-install $OS $Net $KS $Disk $Src $Gr $Cpu $Ram --name=$Name

Note that it could take a considerable amount of time to complete, especially if you have chosen a large, non-sparse disk file on a slow hard drive. If you have selected an interactive installation, you will need to connect to the console to complete the installation.
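Because the final command is assembled from several variables, it can help to echo it once before running it for real. A concrete example with hypothetical values (the guest name, sizes, and mirror URL below are placeholders, not recommendations):

```shell
# Hypothetical example values; adjust each for your environment.
OS="--os-variant=rhel6"
Net="--network model=virtio,bridge=br0"
Disk="--disk /var/lib/libvirt/images/c6guest.img,size=8"
Src="-l http://mirror.centos.org/centos/6/os/x86_64/"
Gr="--graphics vnc"
Cpu="--vcpus=1"
Ram="--ram=1024"
Name="c6guest"

# Echo first to review the assembled command; drop the echo to run it.
echo virt-install $OS $Net $Disk $Src $Gr $Cpu $Ram --name=$Name
```

Reviewing the echoed line catches quoting mistakes and missing variables before virt-install starts writing disk images.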

Connect to the console, using myhost as an example host:

virt-viewer --connect qemu+ssh://myhost/system $Name

If you would prefer a GUI application:

virt-manager &

Finally, you can set up this guest to start automatically whenever the host is booted:

virsh autostart $Name

Ref: Dell provides two whitepapers about how to use KVM in CentOS 6, part 1 and part 2.