ELK Stack – Upgrade from 2.x to 5.x

Elasticsearch Snapshot and Restore
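Before upgrading, take a snapshot of your indices so you can roll back. A minimal sketch using the snapshot API; the repository name my_backup and the /mnt/es_backup location are illustrative, and path.repo in elasticsearch.yml must include that location:

```shell
# Register a filesystem snapshot repository (name and path are illustrative;
# path.repo in elasticsearch.yml must include the location).
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/es_backup" }
}' || true   # tolerate failure if Elasticsearch is not up yet

# Take a snapshot of all indices and wait for it to finish.
curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true' || true

# To restore later:
# curl -XPOST 'http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore'
```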



Elasticsearch Migration Helper Plugin

cd /usr/share/elasticsearch/

./bin/plugin install https://github.com/elastic/elasticsearch-migration/releases/download/v2.0.4/elasticsearch-migration-2.0.4.zip

You can get the current install command here: Elasticsearch Migration Plugin Install

You can download the plugin itself here: Elasticsearch Migration Plugin

After the plugin is installed, access it in a browser via its URL.


netstat -na | egrep '9200|9300'

vi /etc/elasticsearch/elasticsearch.yml

Change network.host: localhost to network.host: "0" so Elasticsearch binds to all interfaces.


If you get an error about file descriptors:

Elasticsearch uses a lot of file descriptors or file handles. Running out of file descriptors can be disastrous and will most probably lead to data loss. Make sure to increase the limit on the number of open files descriptors for the user running Elasticsearch to 65,536 or higher.

Set ulimit -n 65536 as root before starting Elasticsearch, or set nofile to 65536 in /etc/security/limits.conf.

You can check the current settings with: ulimit -a

To set it permanently, set this value in /etc/security/limits.conf for the user running Elasticsearch (in most cases, elasticsearch).
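Assuming the service runs as the elasticsearch user, the permanent limits.conf entries might look like this:

```
# /etc/security/limits.conf: raise the open-file limit for Elasticsearch
elasticsearch  soft  nofile  65536
elasticsearch  hard  nofile  65536
```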

For more details, see: Elasticsearch – Configuring System Settings

curl -X GET “localhost:9200/_nodes/stats/process?filter_path=**.max_file_descriptors”

cat /proc/sys/fs/file-max

vi /usr/lib/systemd/system/elasticsearch.service
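When Elasticsearch runs under systemd, limits.conf is bypassed and the limit comes from the unit file instead; the relevant line in the [Service] section would look like:

```
[Service]
LimitNOFILE=65536
```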

/etc/init.d/elasticsearch restart

systemctl daemon-reload

/etc/init.d/elasticsearch restart

warning: /etc/elasticsearch/elasticsearch.yml created as /etc/elasticsearch/elasticsearch.yml.rpmnew
warning: /etc/sysconfig/elasticsearch created as /etc/sysconfig/elasticsearch.rpmnew
warning: /usr/lib/systemd/system/elasticsearch.service created as /usr/lib/systemd/system/elasticsearch.service.rpmnew
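The .rpmnew files are the new package defaults saved alongside your existing configs. Compare and merge them by hand; a small sketch (paths taken from the warnings above):

```shell
# Show what changed between the live config and the packaged default
# before merging the .rpmnew file by hand.
CFG=/etc/elasticsearch/elasticsearch.yml
if [ -f "$CFG.rpmnew" ]; then
    diff -u "$CFG" "$CFG.rpmnew" || true   # diff exits non-zero when files differ
fi
```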

cd /usr/share/elasticsearch/

./bin/elasticsearch-plugin list

./bin/elasticsearch-plugin remove elasticsearch-migration


tail -f /path_to_logs/logs/elasticsearch.log


Kibana Upgrade


yum update kibana


If you get an error like "Login is currently disabled because the license could not be determined. Please check that Elasticsearch has the X-Pack plugin installed and is reachable, then refresh this page.", update the X-Pack license and reset the elastic user's password:

curl -XPUT 'http://<host>:<port>/_xpack/license' -H "Content-Type: application/json" -d @license.json
curl -XPUT 'localhost:9200/_xpack/security/user/elastic/_password' -H "Content-Type: application/json" -d '{
  "password" : "elasticpassword"
}'

Ref: Elastic

Web Server Performance Tuning

First off, Apache, nginx, or LiteSpeed aside: if you're running a server at 1,000-2,000 requests/second, it's time to start thinking about dual servers and load balancing. Depending on what you're serving you can easily get more out of any of those servers, but at those rates you're serving something important (or at least high-traffic), so you want redundancy in addition to the ability to handle momentary load spikes.
Start seriously considering a load-balancing infrastructure, e.g. HAProxy or nginx.

You can certainly consider other high-performance web servers (nginx is very popular), or you can consider tuning your Apache configuration for better performance.

Some Apache suggestions.

Before doing anything else, read the Apache performance tuning documentation.

  1. MaxRequestsPerChild is really only useful for containing resource leaks.
    100 (your current value) is absolutely insane; you're churning processes, which kills performance.
    0 (never kill a child) is certainly viable if all you're serving is static resources.
    10000 (ten thousand, the default) is fine in almost all circumstances. 50000 (fifty thousand) is what I use for pure static HTML sites.
  2. StartServers, MinSpareServers, and MaxSpareServers can be tuned.
    I generally set StartServers and MinSpareServers to the same value.
    If there is a specific minimum number of spare servers you want to keep around, that is the number you should start with. A good value for this is your low-water-mark of simultaneous active connections.
    MaxSpareServers should be set to 75-80% of your high-water-mark of simultaneous active connections.
  3. ServerLimit and MaxClients can possibly be increased.
    If you have lots of free RAM and lots of free CPU, increase these numbers.
    If you’re running close to resource saturation, leave them as-is.
  4. Use graceful restarts
    You say you are seeing “momentary extreme peaks” in your load when Apache restarts.
    This tells me you’re probably not using graceful restarts.
    Whatever is causing Apache to restart, have it send SIGUSR1 to Apache rather than SIGHUP (or, heaven forbid, actually stopping and starting the entire server). This is far less abusive and disruptive to the system than a regular restart or a full stop/start.
  5. Consider other MPMs
    You are almost certainly using the prefork MPM if you’re on a Unix system.
    Consider the Worker MPM instead.
    Tuning for the Worker MPM is a little different.
  6. Spend some cache
    Apache has caching modules which can be used to hold frequently accessed data in RAM. This avoids a round-trip to the disk (or at least the filesystem layer) for frequently accessed data.
    Configuring memory backed caching can give you a pretty big performance boost for a relatively small amount of memory.
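As an illustration only, a memory-backed cache block on Apache 2.2 (mod_mem_cache) might look like the following; the path and sizes are placeholders to adapt:

```
LoadModule cache_module modules/mod_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so

<IfModule mod_cache.c>
    # Serve /static from an in-memory cache; sizes are illustrative
    CacheEnable mem /static
    MCacheSize 102400
    MCacheMaxObjectSize 1048576
</IfModule>
```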

5 Tips to Boost the Performance of Your Apache Web Server

Install Mod_Pagespeed to Speed Up Apache and Nginx Performance Upto 10x

13 Apache Web Server Security and Hardening Tips

5 Ways to Optimize Apache Performance

Improving Linux System Performance with I/O Scheduler Tuning






top command on a multi-core processor

top command shows CPU usage as a percentage of a single CPU by default. That’s why you can have percentages that are >100. On a system with 4 physical or virtual cores, you can see up to 400% CPU usage.

You can change this behavior by pressing I (that’s Shift + i and toggles “Irix mode”) while top is running. That will cause it to show the percentage of available CPU power being used. As explained in man top:

    1. %CPU  --  CPU Usage
       The task's share of the elapsed CPU time since the last screen
       update, expressed as a percentage of total  CPU  time.   In  a
       true  SMP environment, if 'Irix mode' is Off, top will operate
       in 'Solaris mode' where a task's cpu usage will be divided  by
       the  total  number  of  CPUs.  You toggle 'Irix/Solaris' modes
       with the 'I' interactive command.

Alternatively, you can press 1 which will show you a breakdown of CPU usage per CPU:

top - 13:12:58 up 21:11, 17 users,  load average: 0.69, 0.50, 0.43
Tasks: 248 total,   3 running, 244 sleeping,   0 stopped,   1 zombie
%Cpu0  : 33.3 us, 33.3 sy,  0.0 ni, 33.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  : 16.7 us,  0.0 sy,  0.0 ni, 83.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  : 60.0 us,  0.0 sy,  0.0 ni, 40.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   8186416 total,  6267232 used,  1919184 free,   298832 buffers
KiB Swap:  8191996 total,        0 used,  8191996 free,  2833308 cached
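The relationship between the two modes is just a division by the number of CPUs; a quick sketch with illustrative numbers:

```shell
# A process saturating two of four cores reads 200% in Irix mode.
# Solaris mode divides that by the CPU count, giving 50%.
irix_pct=200
ncpu=4   # e.g. from: grep -c ^processor /proc/cpuinfo
awk -v p="$irix_pct" -v n="$ncpu" 'BEGIN { printf "%.1f%%\n", p / n }'
```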

Reasons to Stick with MySQL


MySQL’s founder is encouraging people to steer away from his creation. Here’s why he’s wrong.

The schism between the worlds of open source and proprietary software is never going to go away as long as open source remains viable and competitive. Eventually, you have a meeting of both worlds. In the case of MySQL, it was when Sun Microsystems purchased the open source database for $1 billion in 2008, a significant multiple given MySQL was about a $50 million business at the time.

MySQL was the cause of the significant hang-up when Oracle tried to acquire Sun in 2009, as MySQL founder Michael “Monty” Widenius staunchly opposed the deal and complained to the European Commission. The whole $7.5 billion deal was held up almost a year because of this one product, which Oracle made major promises to support in return for getting the European Commission off its back.

Monty hasn't given up his assaults on MySQL, even though he took the entire code base circa the Oracle acquisition, forked it, and made a new product called MariaDB. Monty is welcome to his own opinion, even if he has been on this charge now for four years and looks a little obsessed for it. He's a successful, respected developer whose complaints can't be merely dismissed as sour grapes. After all, he got $1 billion out of Sun. He can argue he's fighting on principle.

Monty has had his say; now we have ours. Here are five reasons why you should stick with the open source database. (And for balance, see Rikki Endsley's article on the reasons you should leave MySQL behind.)

1. There is more MySQL investment and innovation than ever before.

The conventional wisdom in the open-source community is that Oracle wanted MySQL so it could throttle the threat to its RDBMS business. This accusation would make sense if Microsoft were the accused firm, but not Oracle. Its flagship database is far and away more advanced, and MySQL was at best going to nibble around the edges.

Since the acquisition, Oracle has increased the MySQL staff and given it a more mature engineering process. Rather than the typical open-source project with people scattered around the planet, engineering and planning is driven from Oracle.

In this time, one developer notes, the company has been making the code more modular. That means short-term work but long-term payback. In MySQL 5.6, they split one of the crucial locks in the MySQL Server, the LOCK_open, which could improve top performance by more than 100%.

Plus, the major storage engine for MySQL is InnoDB, and Oracle acquired InnoDB back in 2005. The InnoDB developers, also located within Oracle, work with the MySQL and Oracle database teams for even better integration.

2. MySQL products remain solid.

MariaDB and open-source advocates complain that new code in MySQL 5.5 doesn’t have test cases and that some of the enterprise features in version 5.5 are closed source. That is a matter of open source purity, of course, and one for any customer to take into consideration.

Still, when it came out in February, MySQL 5.6 was well-received as a solid, well-performing product with a number of new features. Oracle spent two years releasing Development Milestone Releases (DMR) to the MySQL Community for testing and feedback.

MySQL 5.6 went from 4 CPU threads in prior versions to 64 CPU threads, nearly tripled the number of concurrent connections from the prior version, and saw a four-fold improvement in read speed. There are many more improvements that would take too long to list.

Robert Hodges, president of a database clustering and replication firm, said he has no doubt of the viability of MySQL and has yet to meet a manager who fears MySQL will be ruined by Oracle. The bottom line is that Oracle is growing MySQL into an enterprise-class DBMS.

3. MySQL is designed with a focus on the Web, Cloud, and Big Data

Oracle was not deaf to the trends in computing and put emphasis on the Web, cloud computing, and big data projects. The focus was on both MySQL and MySQL cluster to provide improvements in scale-up and scale-out performance, high availability, self-healing and data integrity, provisioning, monitoring and resource management, developer agility, and security.

To support cloud services, MySQL replication has been greatly enhanced to include a new feature, Global Transaction Identifiers (GTIDs). GTIDs make it simple to track and compare replication progress between the master and slave servers. This makes it easier to recover from failures while offering flexibility in the provisioning and on-going management of multi-tier replication.
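A sketch of the my.cnf settings involved in enabling GTID-based replication on 5.6; verify option names and values against the MySQL 5.6 replication documentation before using them:

```
[mysqld]
gtid_mode                 = ON
enforce-gtid-consistency  = ON
log-bin                   = mysql-bin
log-slave-updates         = ON
```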

In April 2013,  Oracle announced the MySQL Applier for Hadoop. The Applier enables the replication of events from MySQL to Hadoop / Hive / HDFS as they happen and complements existing batch-based Apache Sqoop connectivity.

One of the first firms to embrace MySQL in a Big Data environment was Nokia, whose deployment consists of a centralized, petabyte-scale Hadoop cluster interconnected with a 100TB Teradata enterprise data warehouse (EDW), numerous Oracle and MySQL data marts, and visualization technologies that allow Nokia's 60,000+ users around the world to tap into the massive data store. And MariaDB? Good luck finding anything related to Big Data there.

4. MySQL Enterprise

MySQL Enterprise was introduced before the Oracle purchase, but Oracle has significantly improved the product. Version 5.6 added High Availability features like Replication, Oracle VM Templates for MySQL, DRBD, Oracle Solaris Clustering, and Windows Failover Clustering for MySQL. It also introduced Enterprise Audit to perform policy-based auditing compliance on new and existing applications.

There’s also the Enterprise Monitor, which continuously monitors your database and advises you of best practices to implement. It also offers Query Analyzer to monitor application performance and Workbench, which offers data modeling, SQL development, and comprehensive administration tools for server configuration and user administration.

5. There are more MySQL projects than before.

Before the acquisition, MySQL AB had 400 employees in 25 countries, with 70% working from home offices. That’s a remarkable bit of juggling, but there are arguments to be had for working together. But as Yahoo CEO Marissa Mayer noted when she put an end to the extensive remote workforce at Yahoo, to get things done, you need to collaborate. That means being in the same building.

A MySQL Architect at Oracle said in his blog that Oracle has whole new teams working together, some in its giant towers in Redwood Shores, California and others based elsewhere, working on special projects for MySQL. A whole group is working on the clustering software. There is another group working on manageability, a whole optimization team working on database algorithms, another team working on replication (vital for cloud and Big Data), and a whole team making it more scalable.

None of the anti-MySQL arguments are against the product’s performance. Most of Monty’s arguments stem from open source purity, and he has the right to make that complaint. But MySQL was a $75 million company when Sun bought it. Oracle is a $37 billion company. It knows a few things about professional software development and it’s going to do things its own way. It has turned over the basic MySQL code to the GPL, but its own extensions, like Enterprise Edition, are not obliged to be open source.

Monty more or less held up the Sun acquisition for months with his protests to the European Commission. Now he’s throwing stones from the outside. There comes a point when an advocate can become a nuisance and his actions can backfire.

Ref: SmartBear Blog

MySQL Update – MySQL Upgrade


A MySQL update sometimes creates issues, e.g. the MySQL server doesn't start afterwards.

When you perform a MySQL update using one of:

yum update


yum update mysql


yum update mysql-server

then, before restarting or stopping the server, you need to run this command:



mysql_upgrade -uroot -p

MySQL Update Errors

Sometimes the MySQL server stops and won't start again, so you are unable to run the mysql_upgrade command.

You will see errors like these in the MySQL log, /var/log/mysqld.log:

[ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

[ERROR] Fatal error: mysql.user table is damaged. Please run mysql_upgrade.

[ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist
[ERROR] Native table 'performance_schema'.'events_waits_current' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_history' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_history_long' has the wrong structure
[ERROR] Native table 'performance_schema'.'setup_consumers' has the wrong structure
[ERROR] Native table 'performance_schema'.'setup_instruments' has the wrong structure
[ERROR] Native table 'performance_schema'.'setup_timers' has the wrong structure
[ERROR] Native table 'performance_schema'.'performance_timers' has the wrong structure
[ERROR] Native table 'performance_schema'.'threads' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_summary_by_thread_by_event_name' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_summary_by_instance' has the wrong structure
[ERROR] Native table 'performance_schema'.'events_waits_summary_global_by_event_name' has the wrong structure
[ERROR] Native table 'performance_schema'.'file_summary_by_event_name' has the wrong structure
[ERROR] Native table 'performance_schema'.'file_summary_by_instance' has the wrong structure
[ERROR] Native table 'performance_schema'.'mutex_instances' has the wrong structure
[ERROR] Native table 'performance_schema'.'rwlock_instances' has the wrong structure
[ERROR] Native table 'performance_schema'.'cond_instances' has the wrong structure
[ERROR] Native table 'performance_schema'.'file_instances' has the wrong structure

In that case, first start the MySQL server in safe mode and then perform the mysql_upgrade, like this:

mysqld_safe --skip-grant-tables &

mysql_upgrade -uroot -p

Supported MySQL Update Paths

  • Upgrading from one release series version to a newer version in the same series is supported. For example, upgrading from 5.6.26 to 5.6.27 is supported. Skipping versions within a series is also supported; for example, upgrading from 5.6.25 to 5.6.27.
  • Upgrading one release level is supported. For example, upgrading from 5.5 to 5.6 is supported. Upgrading to the latest version of your current series is recommended before upgrading to the next release level; for example, upgrade to the latest 5.5 release before upgrading to 5.6.
  • Upgrading more than one release level is supported, but only one release level at a time. For example, upgrade from 5.1 to 5.5, and then to 5.6, following the upgrade instructions for each release in succession.
  • Direct upgrades that skip a release level (for example, directly from MySQL 5.1 to 5.6) are not recommended or supported.
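The one-level-at-a-time rule can be sketched as a small shell check; the levels list and the idx helper are hypothetical, purely to illustrate the constraint:

```shell
# Release levels in order; a supported direct upgrade moves exactly one step right.
levels="5.0 5.1 5.5 5.6"

# idx: print the position of a release level in the list
idx() {
    i=0
    for v in $levels; do
        if [ "$v" = "$1" ]; then echo "$i"; return; fi
        i=$((i+1))
    done
}

from=5.1
to=5.6
steps=$(( $(idx "$to") - $(idx "$from") ))
if [ "$steps" -eq 1 ]; then
    echo "direct upgrade supported"
else
    echo "upgrade one level at a time ($steps levels apart)"
fi
```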

KVM Installation on CentOS 6

KVM and CentOS 6

CentOS 6 has native availability of KVM virtualization support and tools in the base distribution. 


See the meta packages contained in:

# yum grouplist | grep -i virt

1. Host Setup

Install all the packages you might need.

yum -y install @virt* dejavu-lgc-* xorg-x11-xauth tigervnc \
libguestfs-tools policycoreutils-python bridge-utils

If you use any directories other than /var/lib/libvirt for KVM files, set the SELinux context. In this example I use /vm to store my disk image files.

semanage fcontext -a -t virt_image_t "/vm(/.*)?"; restorecon -R /vm

Allow packet forwarding between interfaces.

sed -i 's/^\(net.ipv4.ip_forward =\).*/\1 1/' /etc/sysctl.conf; sysctl -p

Configure libvirtd service to start automatically and reboot.

chkconfig libvirtd on; shutdown -r now

Optionally, you can set up bridging, which allows guests to have a network adapter on the same physical LAN as the host. In this example eth0 is the device supporting the bridge and br0 will be the new device.

chkconfig network on
service network restart
yum -y erase NetworkManager
cp -p /etc/sysconfig/network-scripts/ifcfg-{eth0,br0}
sed -i -e'/HWADDR/d' -e'/UUID/d' -e's/eth0/br0/' -e's/Ethernet/Bridge/' /etc/sysconfig/network-scripts/ifcfg-br0
echo DELAY=0 >> /etc/sysconfig/network-scripts/ifcfg-br0
echo 'BOOTPROTO="none"' >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo BRIDGE=br0 >> /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart
brctl show
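After the commands above, the generated bridge config should look roughly like this (addresses and BOOTPROTO are inherited from your original ifcfg-eth0; the values below are illustrative):

```
# /etc/sysconfig/network-scripts/ifcfg-br0 (illustrative)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
DELAY=0
```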

The host is now ready to start creating kvm guests.

2. Guest Setup

Since there are many options for setting up a guest, it is easier to have variables collect the information which will be used in a single command to create the guest. Several options are shown, and most can be adjusted as needed.

Start by reviewing the available OS variants.

virt-install --os-variant=list | more

Select one of the OS options:

OS="--os-variant=win7 --disk path=/var/lib/libvirt/iso/virtio-win.iso,device=cdrom"
OS="--os-variant=win2k8 --disk path=/var/lib/libvirt/iso/virtio-win.iso,device=cdrom"

Select a network option, replacing the MAC address if needed:

Net="--network bridge=br0"
Net="--network model=virtio,bridge=br0"
Net="--network model=virtio,mac=52:54:00:00:00:00"
Net="--network model=virtio,bridge=br0,mac=52:54:00:00:00:00"

Select a disk option, replacing the filename and size with desired values:

Disk="--disk /vm/Name.img,size=8"
Disk="--disk /var/lib/libvirt/images/Name.img,size=8"
Disk="--disk /var/lib/libvirt/images/Name.img,sparse=false,size=8"
Disk="--disk /var/lib/libvirt/images/Name.qcow2,sparse=false,bus=virtio,size=8"
Disk="--disk vol=pool/volume"
Disk="--livecd --nodisks"
Disk="--disk /dev/mapper/vg_..."

Select a source (live cd iso, pxe or url):

Src="-l http://ftp.cuhk.edu.hk/pub/linux/fedora/releases/24/Server/x86_64/iso/"
Src="-l http://ftp.us.debian.org/debian/dists/stable/main/installer-amd64/"
Src="-l http://ftp.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/"
Src="-l http://download.opensuse.org/distribution/openSUSE-stable/repo/oss/"

Optionally add a URL for a kickstart file:

KS="-x ks=http://ks.example.com/kickstart/c6-64.ks"

Optionally select a graphics option:

Gr="--graphics none"
Gr="--graphics vnc"
Gr="--graphics vnc,password=foo"
Gr="--graphics spice"

Select the number of CPUs:

Cpu="--vcpus=2"

Select the amount of RAM, in MB:

Ram="--ram=2048"

Choose a name for the guest:

Name="MyGuest"
Create the guest:

virt-install $OS $Net $KS $Disk $Src $Gr $Cpu $Ram --name=$Name

Note that it could take a considerable amount of time to complete, especially if you have chosen a large, non-sparse disk file on a slow hard drive. If you have selected an interactive installation, you will need to connect to the console to complete the installation.

Connect to the console, using myhost as an example host:

virt-viewer --connect qemu+ssh://myhost/system $Name

If you would prefer a gui application:

virt-manager &

Finally, you can set up this guest to start automatically whenever the host is booted:

virsh autostart $Name

Ref: Dell provides two whitepapers about how to use KVM in CentOS 6, part 1 and part 2.

nfsen and nfdump installation on CentOS for netflow and sflow collection

nfdump was born out of a research network, requiring it to be able to consume huge amounts of flows efficiently. This makes it very powerful and very useful for nearly anyone. nfsen is really just a PHP wrapper for nfdump; however, the really nice thing about it (other than being free, open-source software) is that it is extendable via plugins. From botnet detection to displaying IP geo-data on a map, there is likely a plugin for it. Not finding what you are looking for? Write it! The architecture to use it is already there and documented.

Install instructions for CentOS: once you have a system up and running, here is what you need to do to get nfsen and nfdump working.

yum install httpd php wget gcc make rrdtool-devel flex byacc
yum install rrdtool-perl perl-MailTools perl-Socket6 perl-Sys-Syslog perl-Data-Dumper

If some Perl modules do not install using the commands above, you can try installing them via CPAN:

perl -MCPAN -e shell

install Data::Dumper

install Sys::Syslog

quit

Disable SELinux (vi /etc/selinux/config, set SELINUX=disabled) and reboot.

If iptables is running, you'll need to add an iptables rule:

sudo iptables -I INPUT -p udp -m state --state NEW -m udp --dport 9995 -j ACCEPT
sudo ip6tables -I INPUT -p udp -m state --state NEW -m udp --dport 9995 -j ACCEPT

Also allow for access to the web server you just installed.

sudo iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
sudo iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
sudo ip6tables -I INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
sudo ip6tables -I INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT  

service iptables save
service ip6tables save

Once you enable HTTPS you can safely remove the iptables rules for port 80.

Start HTTPd

sudo service httpd start

Enable HTTPd at boot

chkconfig httpd on

cd /usr/src

Now you need the actual code. Download the latest from sourceforge.

(nfdump-1.6.13 and nfsen-1.3.6p1 at the time of this writing)

tar -zxvf nfdump-1.6.13.tar.gz
cd ./nfdump-1.6.13
./configure --enable-nfprofile --enable-nftrack --enable-sflow
make install

By default, 1.6.13 enables NetFlow v9 and IPFIX.

adduser netflow
usermod -a -G apache netflow
vi /etc/group

Add the netflow user to the apache group, i.e. append netflow to the line where the apache group is defined (the usermod command above does this for you).


mkdir -p /opt/nfsen
mkdir -p /var/www/html/nfsen

Now extract nfsen and edit its configuration file to make sure all variables are set correctly:

tar -zxvf nfsen-1.3.6p1.tar.gz
cd ./nfsen-1.3.6p1

vi ./etc/nfsen.conf

Make sure all data path variables are set correctly

$BASEDIR = "/opt/nfsen";

$HTMLDIR = "/var/www/html/nfsen";

For CentOS-based systems, change the web user and group from "www" to "apache":

$WWWUSER  = "apache";
$WWWGROUP = "apache";

Add your host to the file to allow for collection, my %sources looks like this:

%sources = (
    'home'     => { 'port' => '9995', 'col' => '#0000ff', 'type' => 'netflow' },
    'internal' => { 'port' => '9996', 'col' => '#FF0000', 'type' => 'netflow' },
#    'gw'      => { 'port' => '9995', 'col' => '#0000ff', 'type' => 'netflow' },
#    'peer1'   => { 'port' => '9996', 'IP' => '' },
#    'peer2'   => { 'port' => '9996', 'IP' => '' },
);

As you can see, I have two valid sources with different ports and different colors. You can make all netflow, all sflow, or any combination of protocol.

./install.pl etc/nfsen.conf

cd /opt/nfsen/bin/
./nfsen start

Make it start at boot (referenced from this post).

vi /etc/init.d/nfsen

Add this into the file:

#!/bin/bash
# chkconfig: - 50 50
# description: nfsen

DAEMON=/opt/nfsen/bin/nfsen

case "$1" in
	start)
		$DAEMON start
		;;
	stop)
		$DAEMON stop
		;;
	status)
		$DAEMON status
		;;
	restart)
		$DAEMON stop
		sleep 1
		$DAEMON start
		;;
	*)
		echo "Usage: $0 {start|stop|status|restart}"
		exit 1
		;;
esac

exit 0

Then chkconfig it on to start it at boot:

chmod 755 nfsen && chkconfig --add nfsen && chkconfig nfsen on

That’s pretty much it. Once you configure your netflow or sflow source, you should start seeing data in ~5-10 minutes.


Point your browser at your web server. Mine is at https://server-ip/nfsen/nfsen.php (you'll need to include "nfsen.php" unless you edit your Apache config to recognize nfsen.php as an index).


Common issues:

I see this one every time: "ERROR: nfsend connect() error: Permission denied!" It's always a permissions issue, as documented here. You need to make sure that the web server can read the nfsen.comm socket file. I fixed mine by doing:

chmod g+rwx ~netflow/

My nfsen.conf file is using /home/netflow as the $BASEDIR.



You'll likely see "Frontend – Backend version mismatch!"; this is a known issue. There is a patch to fix it here; I never bothered, since it did not cause any issues for me.

Disk full. Depending on your setup, you may generate a firehose's worth of data; I have filled disks in less than a day in the past on a good-sized regional WAN. I generally keep a month of data, but you can store as much data as you have disk for. I have a script run from cron to prune data; if you want to do the same:

vi /usr/local/bin/rmflowdata.sh

Paste this in:

#!/bin/bash
# Prune old flow data.
# +30 is the number of days to keep; adjust accordingly.

/bin/find /home/netflow/profiles-data/live -name "nfcapd.*" -type f -mtime +30 -delete

Add this to your crontab:

@daily /usr/local/bin/rmflowdata.sh

Make it executable

chmod 755 /usr/local/bin/rmflowdata.sh

There are probably more elegant ways to do it but this works just fine, is lightweight and can be run manually if needed.

There are a lot of great use cases for this. If you're looking for an SDN tie-in, guess what, there is one. Open vSwitch supports sFlow export and, lo and behold, nfsen and nfdump can easily consume and display sFlow data. Want flow statistics on your all-VM, OVS-based SDN lab? You can have them!

There are some other great things you can do with flow data, too, specifically sFlow. It's not just for network statistics: there is a host-based sFlow implementation that tracks any number of interesting metrics. blog.sflow.com is a great resource for all things sFlow (also, it does IPv6 by default, as it should).

OK, now you have absolutely no good reason not to be collecting flow data. It's easy, it's useful, and almost everything (hosts, routers, virtual switches) supports exporting some kind of flow information. You can even generate it from an inline Linux box, or from a box on an optical tap or a SPAN port, running softflowd or nprobe. Both of these I can confirm work wonderfully (the above collector is gathering flows from softflowd running on my Security Onion box as well as flows exported by pfflowd on a pfSense router).


Ref: ForwardingPlane & Tekyhost


NFS Server and client configuration Centos

NFS Server side configuration

Create directory which you want to share

mkdir -p /nfsshare

yum install nfs-utils rpcbind nfs-utils-lib

chkconfig rpcbind on

chkconfig nfs on

chkconfig nfslock on

service rpcbind start

service nfs start


/etc/init.d/nfs start

service nfslock start

Define the allowed client IPs on the server. Let's suppose:

Server IP : a.a.a.a

Client IP : b.b.b.b

vi /etc/hosts.allow

mountd: b.b.b.b


chown -R nfsnobody:nfsnobody /nfsshare

chmod -R 750 /nfsshare

or, equivalently, using numeric IDs (and a more permissive mode):

chown -R 65534:65534 /nfsshare

chmod -R 755 /nfsshare

vi /etc/exports

/nfsshare           b.b.b.b(rw,sync,no_root_squash,no_subtree_check)


/nfsshare   b.b.b.b(rw,sync,no_wdelay,all_squash)

exportfs -a

service nfs restart
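As a quick sanity check, a hypothetical one-liner (not part of nfs-utils) to list each export path and its allowed client from /etc/exports:

```shell
# Print "path -> client" for each non-comment line of /etc/exports
EXPORTS=/etc/exports
if [ -r "$EXPORTS" ]; then
    awk '!/^#/ && NF { split($2, a, "("); print $1, "->", a[1] }' "$EXPORTS"
fi
```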


(The no_root_squash option means the client's root user accesses the share as root, e.g. when exporting /home.) You may skip this option for non-root directories.


NFS Client side configurations

yum install nfs-utils nfs-utils-lib

Make sure the portmapper is running; otherwise the mount will fail with errors like those shown below.

chkconfig --list | grep portmap

To start it on CentOS and set it to start at boot:

service portmap start

chkconfig portmap on

mkdir -p /mnt/nfsshare

mount a.a.a.a:/nfsshare  /mnt/nfsshare

vi /etc/fstab

a.a.a.a:/nfsshare /mnt/nfsshare nfs rw,sync,hard,intr 0 0

df -h



rpcinfo -p

program vers proto   port
    100000    2   tcp    111  portmapper
    100024    1   udp  36849  status
    100024    1   tcp  47263  status
    100000    2   udp    111  portmapper
    100021    1   udp  34940  nlockmgr
    100021    3   udp  34940  nlockmgr
    100021    4   udp  34940  nlockmgr
    100021    1   tcp  60818  nlockmgr
    100021    3   tcp  60818  nlockmgr
    100021    4   tcp  60818  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100005    1   udp  52727  mountd
    100005    1   tcp  42462  mountd
    100005    2   udp  52727  mountd
    100005    2   tcp  42462  mountd
    100005    3   udp  52727  mountd
    100005    3   tcp  42462  mountd

netstat -ntlp | grep 111

If mounting still fails, stop the firewall and disable SELinux while testing:

/etc/init.d/iptables stop
vi /etc/sysconfig/selinux

These two services should be running on the server:

#rpcinfo -p
100021    4    tcp6      ::.157.164             nlockmgr   unknown
100000    4    tcp6      ::.0.111               portmapper superuser

cat /var/log/messages | grep nfs

If it still fails, try mounting with verbose output (and make sure the portmap service is running on the server), then check the last NFS-related log messages:

mount -t nfs -v a.a.a.a:/nfsshare /mnt/nfsshare

cat /var/log/messages