Ethical Hacking Courses

Nowadays, it is possible to bring an organization down to its roots without ever setting foot inside it. This has become possible because of digitization: most information now lives on data servers, and the worst part is that these data servers can be hacked. Even setting aside such large-scale scenarios, the personal machines inside people’s homes aren’t safe anymore.

It is high time we started learning how hacking works and the methods we can use to protect ourselves and our close connections from data breaches. Here are a number of ethical hacking courses that will give you an idea of how hacking works and how you can become an ethical hacker yourself.

Now, if you’re a beginner, you can try one of these affordable ethical hacking courses to get a basic idea; they will also help you proceed to the advanced level:

How about hacking some passwords? After all, enough careless users made “123456” the most popular password of 2016.

With more and more vulnerabilities being discovered in software nowadays, penetration testing is one of the more lucrative career options you might want to pursue in the years ahead. So, prepare well in advance using these online courses:

You would agree that WiFi networks are omnipresent in today’s world. The course mentioned below can help you tackle some basic WiFi security issues; for instance, you can learn to identify and neutralize security threats from an attacker’s perspective.

Pay what you want

We also have Pay What You Want bundles available. As the name suggests, pay anything you like: if it’s less than the average price, you’ll still grab something great, and if you manage to beat the average price, you’ll get the entire bundle. So do try out our best-selling bundle:

Bumping up the level a bit

If you have moved past the beginner level in ethical hacking, you might want to go on to the advanced level. These online courses will help you reach the level you want:

How about a complete bundle?

This bundle can be a great cost cutter. It saves your hard-earned money and helps you learn more at less cost. So, how about a bundle that provides 9 courses, including ethical hacking, mobile security, and penetration testing, for the price of one course? Take a look at this one.

Ref: FossBytes

Strace Examples to Debug the Execution of a Program in Linux

Strace is a debugging tool that will help you troubleshoot issues.

Strace monitors the system calls and signals of a specific program. It is helpful when you do not have the source code and would like to debug the execution of a program. strace gives you the execution sequence of a binary from start to end.

This article explains 7 strace examples to get you started.

1. Trace the Execution of an Executable

You can use the strace command to trace the execution of any executable. The following example shows the output of strace for the Linux ls command.

$  strace ls
execve("/bin/ls", ["ls"], [/* 21 vars */]) = 0
brk(0)                                  = 0x8c31000
access("/etc/", F_OK)      = -1 ENOENT (No such file or directory)
mmap2(NULL, 8192, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb78c7000
access("/etc/", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/", O_RDONLY)      = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=65354, ...}) = 0

2. Trace Specific System Calls in an Executable Using Option -e

By default, strace displays all system calls made by the given executable. To display only a specific system call, use the strace -e option as shown below.

$ strace -e open ls
open("/etc/", O_RDONLY)      = 3
open("/lib/", O_RDONLY)  = 3
open("/lib/", O_RDONLY)       = 3
open("/lib/", O_RDONLY)      = 3
open("/lib/", O_RDONLY)        = 3
open("/lib/", O_RDONLY)       = 3
open("/lib/", O_RDONLY)  = 3
open("/lib/", O_RDONLY)     = 3
open("/proc/filesystems", O_RDONLY|O_LARGEFILE) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3
Desktop  Documents  Downloads  examples.desktop 
Music  Pictures  Public  Templates  Ubuntu_OS  Videos

The above output displays only the open system call of the ls command. At the end of the strace output, it also displays the output of the ls command.

If you want to trace multiple system calls, use the “-e trace=” option. The following example displays both open and read system calls.


$ strace -e trace=open,read ls /home
open("/etc/", O_RDONLY)      = 3
open("/lib/", O_RDONLY)  = 3
read(3, "\177ELF"..., 512) = 512
open("/lib/", O_RDONLY)       = 3
read(3, "\177ELF"..., 512) = 512
open("/lib/", O_RDONLY)     = 3
read(3, "\177ELF"..., 512) = 512
open("/proc/filesystems", O_RDONLY|O_LARGEFILE) = 3
read(3, "nodev\tsysfs\nnodev\trootfs\nnodev\tb"..., 1024) = 315
read(3, "", 1024)                       = 0
open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3

3. Save the Trace Execution to a File Using Option -o

The following example stores the strace output in the file output.txt.

$ strace -o output.txt ls
Desktop  Documents  Downloads  examples.desktop
Music  output.txt  Pictures  Public  Templates  Ubuntu_OS  Videos

$ cat output.txt 
execve("/bin/ls", ["ls"], [/* 37 vars */]) = 0
brk(0)                                  = 0x8637000
access("/etc/", F_OK)      = -1 ENOENT (No such file or directory)
mmap2(NULL, 8192, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7860000
access("/etc/", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/", O_RDONLY)      = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=67188, ...}) = 0

4. Execute Strace on a Running Linux Process Using Option -p

You can execute strace on a program that is already running by using its process ID. First, identify the PID of the program using the ps command.

For example, if you want to run strace on the firefox program that is currently running, first identify its PID.

$ ps -C firefox-bin
  PID TTY          TIME CMD
 1725 ?        00:40:50 firefox-bin

Use the strace -p option as shown below to display the trace for a given process ID.

$  sudo strace -p 1725 -o firefox_trace.txt

$ tail -f firefox_trace.txt

Now the execution trace of the firefox process will be logged to the firefox_trace.txt file. You can tail this file to watch the live trace of the firefox executable.

Strace will display the following error when your user id does not match the user id of the given process.

$  strace -p 1725 -o output.txt
attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf

5. Print Timestamp for Each Trace Output Line Using Option -t

To print the timestamp for each strace output line, use the option -t as shown below.

$ strace -t -e open ls /home
20:42:37 open("/etc/", O_RDONLY) = 3
20:42:37 open("/lib/", O_RDONLY) = 3
20:42:37 open("/lib/", O_RDONLY) = 3
20:42:37 open("/lib/", O_RDONLY) = 3
20:42:37 open("/lib/", O_RDONLY) = 3
20:42:37 open("/lib/", O_RDONLY) = 3
20:42:37 open("/lib/", O_RDONLY) = 3
20:42:37 open("/lib/", O_RDONLY) = 3
20:42:37 open("/proc/filesystems", O_RDONLY|O_LARGEFILE) = 3
20:42:37 open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3

6. Print Relative Time for System Calls Using Option -r

Strace also has an option to print a relative timestamp for each system call, showing the elapsed time between successive calls, as shown below.

$ strace -r ls 
     0.000000 execve("/bin/ls", ["ls"], [/* 37 vars */]) = 0
     0.000846 brk(0)                    = 0x8418000
     0.000143 access("/etc/", F_OK) = -1 ENOENT (No such file or directory)
     0.000163 mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb787b000
     0.000119 access("/etc/", R_OK) = -1 ENOENT (No such file or directory)
     0.000123 open("/etc/", O_RDONLY) = 3
     0.000099 fstat64(3, {st_mode=S_IFREG|0644, st_size=67188, ...}) = 0
     0.000155 mmap2(NULL, 67188, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb786a000

7. Generate Statistics Report of System Calls Using Option -c

Using option -c, strace provides a useful statistical report for the execution trace. The “calls” column in the following output indicates how many times that particular system call was executed.

$ strace -c ls /home
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  -nan    0.000000           0         9           read
  -nan    0.000000           0         1           write
  -nan    0.000000           0        11           open
  -nan    0.000000           0        13           close
  -nan    0.000000           0         1           execve
  -nan    0.000000           0         9         9 access
  -nan    0.000000           0         3           brk
  -nan    0.000000           0         2           ioctl
  -nan    0.000000           0         3           munmap
  -nan    0.000000           0         1           uname
  -nan    0.000000           0        11           mprotect
  -nan    0.000000           0         2           rt_sigaction
  -nan    0.000000           0         1           rt_sigprocmask
  -nan    0.000000           0         1           getrlimit
  -nan    0.000000           0        25           mmap2
  -nan    0.000000           0         1           stat64
  -nan    0.000000           0        11           fstat64
  -nan    0.000000           0         2           getdents64
  -nan    0.000000           0         1           fcntl64
  -nan    0.000000           0         2         1 futex
  -nan    0.000000           0         1           set_thread_area
  -nan    0.000000           0         1           set_tid_address
  -nan    0.000000           0         1           statfs64
  -nan    0.000000           0         1           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00    0.000000                   114        10 total

Ref: TheGeekStuff

Most Common Mistakes That PHP Developers Make

PHP makes it relatively easy to build a web-based system, which is much of the reason for its popularity. But its ease of use notwithstanding, PHP has evolved into quite a sophisticated language with many frameworks, nuances, and subtleties that can bite developers, leading to hours of hair-pulling debugging. This article highlights ten of the more common mistakes that PHP developers need to beware of.


Common Mistake #1: Leaving dangling array references after foreach loops

Using references in foreach loops can be useful if you want to operate on each element of the array that you are iterating over. For example:

$arr = array(1, 2, 3, 4);
foreach ($arr as &$value) {
    $value = $value * 2;
}
// $arr is now array(2, 4, 6, 8)

The problem is that, if you’re not careful, this can also have some undesirable side effects and consequences. Specifically, in the above example, after the code is executed, $value will remain in scope and will hold a reference to the last element in the array. Subsequent operations involving $value could therefore unintentionally end up modifying the last element in the array.

The main thing to remember is that foreach does not create a scope. Thus, $value in the above example is a reference within the top scope of the script. On each iteration, foreach sets the reference to point to the next element of the array. After the loop completes, therefore, $value still points to the last element of the array and remains in scope.

Here’s an example of the kind of evasive and confusing bugs that this can lead to:

$array = [1, 2, 3];
echo implode(',', $array), "\n";

foreach ($array as &$value) {}    // by reference
echo implode(',', $array), "\n";

foreach ($array as $value) {}     // by value (i.e., copy)
echo implode(',', $array), "\n";

The above code will output the following:

1,2,3
1,2,3
1,2,2

No, that’s not a typo. The last value on the last line is indeed a 2, not a 3.


After going through the first foreach loop, $array remains unchanged, but, as explained above, $value is left as a dangling reference to the last element in $array (since that foreach loop accessed $value by reference).

As a result, when we go through the second foreach loop, “weird stuff” appears to happen. Specifically, since $value is now being accessed by value (i.e., by copy), foreach copies each sequential $array element into $value in each step of the loop. As a result, here’s what happens during each step of the second foreach loop:

  • Pass 1: Copies $array[0] (i.e., “1”) into $value (which is a reference to $array[2]), so $array[2] now equals 1. So $array now contains [1, 2, 1].
  • Pass 2: Copies $array[1] (i.e., “2”) into $value (which is a reference to $array[2]), so $array[2] now equals 2. So $array now contains [1, 2, 2].
  • Pass 3: Copies $array[2] (which now equals “2”) into $value (which is a reference to $array[2]), so $array[2] still equals 2. So $array now contains [1, 2, 2].

To still get the benefit of using references in foreach loops without running the risk of these kinds of problems, call unset() on the variable, immediately after the foreach loop, to remove the reference; e.g.:

$arr = array(1, 2, 3, 4);
foreach ($arr as &$value) {
    $value = $value * 2;
}
unset($value);   // $value no longer references $arr[3]

Common Mistake #2: Misunderstanding isset() behavior

Despite its name, isset() not only returns false if an item does not exist, but also returns false for null values.

This behavior is more problematic than it might appear at first and is a common source of problems.
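
A minimal sketch makes the behavior concrete (the key name here is arbitrary):

```php
// A key that exists but holds null: isset() reports it as missing
$data = ['keyShouldBeSet' => null];

var_dump(isset($data['keyShouldBeSet']));            // bool(false) -- null fails isset()
var_dump(array_key_exists('keyShouldBeSet', $data)); // bool(true)  -- the key really is there
```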

Consider the following:

$data = fetchRecordFromStorage($storage, $identifier);
if (!isset($data['keyShouldBeSet'])) {
    // do something here if 'keyShouldBeSet' is not set
}

The author of this code presumably wanted to check if keyShouldBeSet was set in $data. But, as discussed, isset($data['keyShouldBeSet']) will also return false if $data['keyShouldBeSet'] was set, but was set to null. So the above logic is flawed.

Here’s another example:

if ($_POST['active']) {
    $postData = extractSomething($_POST);
}

// ...

if (!isset($postData)) {
    echo 'post not active';
}

The above code assumes that if $_POST['active'] returns true, then $postData will necessarily be set, and therefore isset($postData) will return true. Conversely, it assumes that the only way isset($postData) will return false is if $_POST['active'] returned false as well.


As explained, isset($postData) will also return false if $postData was set to null. It is therefore possible for isset($postData) to return false even if $_POST['active'] returned true. So again, the above logic is flawed.

And by the way, as a side point, if the intent in the above code really was to again check whether $_POST['active'] returned true, relying on isset() for this was a poor coding decision in any case. Instead, it would have been better to just recheck $_POST['active']; i.e.:

if ($_POST['active']) {
    $postData = extractSomething($_POST);
}

// ...

if (!$_POST['active']) {
    echo 'post not active';
}

For cases, though, where it is important to check whether a variable was really set (i.e., to distinguish between a variable that wasn’t set and one that was set to null), array_key_exists() is a much more robust solution.

For example, we could rewrite the first of the above two examples as follows:

$data = fetchRecordFromStorage($storage, $identifier);
if (!array_key_exists('keyShouldBeSet', $data)) {
    // do this if 'keyShouldBeSet' isn't set
}

Moreover, by combining array_key_exists() with get_defined_vars(), we can reliably check whether a variable within the current scope has been set or not:

if (array_key_exists('varShouldBeSet', get_defined_vars())) {
    // variable $varShouldBeSet exists in current scope
}

Common Mistake #3: Confusion about returning by reference vs. by value

Consider this code snippet:

class Config
{
    private $values = [];

    public function getValues() {
        return $this->values;
    }
}

$config = new Config();

$config->getValues()['test'] = 'test';
echo $config->getValues()['test'];

If you run the above code, you’ll get the following:

PHP Notice:  Undefined index: test in /path/to/my/script.php on line 21

What’s wrong?

The issue is that the above code confuses returning arrays by reference with returning arrays by value. Unless you explicitly tell PHP to return an array by reference (i.e., by using &), PHP will by default return the array “by value”. This means that a copy of the array will be returned, and therefore the called function and the caller will not be accessing the same instance of the array.

So the above call to getValues() returns a copy of the $values array rather than a reference to it. With that in mind, let’s revisit the two key lines from the above example:

// getValues() returns a COPY of the $values array, so this adds a 'test' element
// to a COPY of the $values array, but not to the $values array itself.
$config->getValues()['test'] = 'test';

// getValues() again returns ANOTHER COPY of the $values array, and THIS copy doesn't
// contain a 'test' element (which is why we get the "undefined index" message).
echo $config->getValues()['test'];

One possible fix would be to save the first copy of the $values array returned by getValues() and then operate on that copy subsequently; e.g.:

$vals = $config->getValues();
$vals['test'] = 'test';
echo $vals['test'];

That code will work fine (i.e., it will output test without generating any “undefined index” message), but depending on what you’re trying to accomplish, this approach may or may not be adequate. In particular, the above code will not modify the original $values array. So if you do want your modifications (such as adding a ‘test’ element) to affect the original array, you would instead need to modify the getValues() function to return a reference to the $values array itself. This is done by adding a & before the function name, thereby indicating that it should return a reference; i.e.:

class Config
{
    private $values = [];

    // return a REFERENCE to the actual $values array
    public function &getValues() {
        return $this->values;
    }
}

$config = new Config();

$config->getValues()['test'] = 'test';
echo $config->getValues()['test'];

The output of this will be test, as expected.

But to make things more confusing, consider instead the following code snippet:

class Config
{
    private $values;

    // using ArrayObject rather than array
    public function __construct() {
        $this->values = new ArrayObject();
    }

    public function getValues() {
        return $this->values;
    }
}

$config = new Config();

$config->getValues()['test'] = 'test';
echo $config->getValues()['test'];

If you guessed that this would result in the same “undefined index” error as our earlier array example, you were wrong. In fact, this code will work just fine. The reason is that, unlike arrays, PHP always passes objects by reference. (ArrayObject is an SPL class that fully mimics array usage, but works as an object.)

As these examples demonstrate, it is not always entirely obvious in PHP whether you are dealing with a copy or a reference. It is therefore essential to understand these default behaviors (i.e., variables and arrays are passed by value; objects are passed by reference) and also to carefully check the API documentation for the function you are calling to see if it is returning a value, a copy of an array, a reference to an array, or a reference to an object.
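
These default behaviors can be checked in isolation with a minimal sketch contrasting a plain array and an object passed into a function (the function and property names here are made up for illustration):

```php
function touchObject($obj) {
    $obj->flag = true;        // the handle points at the caller's instance, so this mutates it
}

function touchArray($arr) {
    $arr['flag'] = true;      // $arr is a copy; the caller's array is untouched
}

$o = new stdClass();
$a = [];
touchObject($o);
touchArray($a);

var_dump(isset($o->flag));    // bool(true)
var_dump(isset($a['flag']));  // bool(false)
```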

All that said, it is important to note that the practice of returning a reference to an array or an ArrayObject is generally something that should be avoided, as it provides the caller with the ability to modify the instance’s private data. This “flies in the face” of encapsulation. Instead, it’s better to use old style “getters” and “setters”, e.g.:

class Config
{
    private $values = [];

    public function setValue($key, $value) {
        $this->values[$key] = $value;
    }

    public function getValue($key) {
        return $this->values[$key];
    }
}

$config = new Config();

$config->setValue('testKey', 'testValue');
echo $config->getValue('testKey');    // echoes 'testValue'

This approach gives the caller the ability to set or get any value in the array without providing public access to the otherwise-private $values array itself.

Common Mistake #4: Performing queries in a loop

It’s not uncommon to come across something like this:

$models = [];

foreach ($inputValues as $inputValue) {
    $models[] = $valueRepository->findByValue($inputValue);
}

While there may be absolutely nothing wrong here, if you follow the logic in the code, you may find that the innocent-looking call to $valueRepository->findByValue() ultimately results in a query of some sort, such as:

$result = $connection->query("SELECT `x`,`y` FROM `values` WHERE `value`=" . $inputValue);

As a result, each iteration of the above loop would result in a separate query to the database. So if, for example, you supplied an array of 1,000 values to the loop, it would generate 1,000 separate queries to the database! If such a script is called in multiple threads, it could potentially bring the system to a grinding halt.

One fairly common place to encounter inefficient querying (i.e., in a loop) is when a form is posted with a list of values (IDs, for example). Then, to retrieve the full record data for each ID, the code loops through the array and does a separate SQL query for each one. This often looks something like this:

$data = [];
foreach ($ids as $id) {
    $result = $connection->query("SELECT `x`, `y` FROM `values` WHERE `id` = " . $id);
    $data[] = $result->fetch_row();
}

But the same thing can be accomplished much more efficiently in a single SQL query as follows:

$data = [];
if (count($ids)) {
    $result = $connection->query("SELECT `x`, `y` FROM `values` WHERE `id` IN (" . implode(',', $ids) . ")");
    while ($row = $result->fetch_row()) {
        $data[] = $row;
    }
}

It’s therefore crucial to recognize when queries are being made, either directly or indirectly, by your code. Whenever possible, gather the values and then run one query to fetch all the results. Yet caution must be exercised there as well, which leads us to our next common PHP mistake…

Common Mistake #5: Memory usage headfakes and inefficiencies

While fetching many records at once is definitely more efficient than running a single query for each row to fetch, such an approach can potentially lead to an “out of memory” condition in libmysqlclient when using PHP’s mysql extension.

To demonstrate, let’s take a look at a test box with limited resources (512MB RAM), MySQL, and php-cli.

We’ll bootstrap a database table like this:

// connect to mysql
$connection = new mysqli('localhost', 'username', 'password', 'database');

// create table of 400 columns
$query = 'CREATE TABLE `test` (`id` INT NOT NULL PRIMARY KEY AUTO_INCREMENT';
for ($col = 0; $col < 400; $col++) {
    $query .= ", `col$col` CHAR(10) NOT NULL";
}
$query .= ');';
$connection->query($query);

// write 2 million rows
for ($row = 0; $row < 2000000; $row++) {
    $query = "INSERT INTO `test` VALUES ($row";
    for ($col = 0; $col < 400; $col++) {
        $query .= ', ' . mt_rand(1000000000, 9999999999);
    }
    $query .= ')';
    $connection->query($query);
}

OK, now let’s check resource usage:

// connect to mysql
$connection = new mysqli('localhost', 'username', 'password', 'database');
echo "Before: " . memory_get_peak_usage() . "\n";

$res = $connection->query('SELECT `x`,`y` FROM `test` LIMIT 1');
echo "Limit 1: " . memory_get_peak_usage() . "\n";

$res = $connection->query('SELECT `x`,`y` FROM `test` LIMIT 10000');
echo "Limit 10000: " . memory_get_peak_usage() . "\n";


Before: 224704
Limit 1: 224704
Limit 10000: 224704

Cool. Looks like the query is safely managed internally in terms of resources.

Just to be sure, though, let’s boost the limit one more time and set it to 100,000. Uh-oh. When we do that, we get:

PHP Warning:  mysqli::query(): (HY000/2013):
              Lost connection to MySQL server during query in /root/test.php on line 11

What happened?

The issue here is the way PHP’s mysql module works. It’s really just a proxy for libmysqlclient, which does the dirty work. When a portion of data is selected, it goes directly into memory. Since this memory is not managed by PHP’s memory manager, memory_get_peak_usage() won’t show any increase in resource utilization as we raise the limit in our query. This tricks us into complacency, thinking that our memory management is fine, when in reality it is seriously flawed and we can experience problems like the one shown above.

You can at least avoid the above headfake (although it won’t itself improve your memory utilization) by instead using the mysqlnd module. mysqlnd is compiled as a native PHP extension and it does use PHP’s memory manager.

Therefore, if we run the above test using mysqlnd rather than mysql, we get a much more realistic picture of our memory utilization:

Before: 232048
Limit 1: 324952
Limit 10000: 32572912

And it’s even worse than that, by the way. According to PHP documentation, mysql uses twice as many resources as mysqlnd to store data, so the original script using mysql really used even more memory than shown here (roughly twice as much).

To avoid such problems, consider limiting the size of your queries and using a loop with a small number of iterations; e.g.:

$totalNumberToFetch = 10000;
$portionSize = 100;

for ($i = 0; $i <= ceil($totalNumberToFetch / $portionSize); $i++) {
    $limitFrom = $portionSize * $i;
    $res = $connection->query(
                         "SELECT `x`,`y` FROM `test` LIMIT $limitFrom, $portionSize");
}

When we consider both this PHP mistake and mistake #4 above, we realize that there is a healthy balance your code ideally needs to strike between, on the one hand, having your queries be too granular and repetitive, and on the other, having each individual query be too large. As is true with most things in life, balance is needed; either extreme can cause problems.

Common Mistake #6: Ignoring Unicode/UTF-8 issues

In some sense, this is really more of an issue in PHP itself than something you would run into while debugging, but it has never been adequately addressed. PHP 6’s core was to be made Unicode-aware, but that was put on hold when development of PHP 6 was suspended back in 2010.

But that by no means absolves the developer from properly handling UTF-8 and avoiding the erroneous assumption that all strings will necessarily be “plain old ASCII”. Code that fails to properly handle non-ASCII strings is notorious for introducing gnarly heisenbugs. Even a simple strlen($_POST['name']) call could cause problems if someone with a last name like “Schrödinger” tried to sign up to your system.

Here’s a small checklist to avoid such problems in your code:

  • If you don’t know much about Unicode and UTF-8, you should at least learn the basics. There’s a great primer here.
  • Be sure to always use the mb_* functions instead of the old string functions (make sure the “multibyte” extension is included in your PHP build).
  • Make sure your database and tables are set to use Unicode (many builds of MySQL still use latin1 by default).
  • Remember that json_encode() converts non-ASCII symbols (e.g., “Schrödinger” becomes “Schr\u00f6dinger”) but serialize() does not.
  • Make sure your PHP code files are also UTF-8 encoded to avoid collisions when concatenating strings with hardcoded or configured string constants.
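
To make the strlen() pitfall from the checklist concrete, here is a minimal sketch (it assumes the mbstring extension is loaded and that the source file is saved as UTF-8):

```php
$name = "Schrödinger";                 // "ö" is two bytes in UTF-8

var_dump(strlen($name));               // int(12) -- strlen() counts bytes
var_dump(mb_strlen($name, 'UTF-8'));   // int(11) -- mb_strlen() counts characters
```

Any length check, truncation, or validation built on strlen() will silently be off by the number of multi-byte characters in the input.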

Ref: Toptal

How to protect PHP from running out of memory

Poor programming can make a script run out of memory rather quickly. A great many factors can cause this, not just improper programming techniques but also a lack of knowledge of what affects memory. Because of this, I’ve decided to write a few notes and tips on improving performance and memory usage.

Memory for PHP is cleaned up by PHP’s garbage collector. Sadly, the garbage collector is a little lazy. In fact, it never seems to get to work on time, and as a result memory is not freed quickly enough for scripts to progress. To work in line with the garbage collector, here are a few tips on improving your code:

Tip 1 (Knowing what parts of your script are using the most memory)

If you’re looking to find out where your script is running out of memory, the following function is your best bet:

memory_get_usage();

This function returns the current memory usage at the point where it is executed. Sprinkle it through your script and you’ll eventually see where it uses the most memory, or in my case, where the garbage collector decided it would rather go to lunch than do its job.

Tip 2 (Reassign null to vars along with un-setting them)

The unset() function is useful when the garbage collector is doing its rounds; until then, however, unset() simply destroys the variable’s reference to the data. The data still exists in memory, and PHP sees that memory as in use despite there no longer being a pointer to it. The solution: assign null to your variables to clear the data, at least until the garbage collector gets hold of it.

$var = null;

You can also use unset() to remove the variable pointer, although as far as I can see there is little difference in memory usage:

unset($var);

Tip 3 (__destruct your object references upon disposing of an object)

PHP does not release the memory dedicated to an object’s internal references to other objects until the garbage collector gets around to it. Because of this, it’s worth adding a __destruct method to your objects that unsets all references to other objects. This can drastically lower memory usage and is often overlooked.

public function __destruct()
{
    $this->childObject = null;
}

Tip 4 (Use functions where possible)

When a function finishes, PHP clears the memory it was using, and it does so more efficiently than for variables outside a function. If you are using recursive code or something similarly memory-intensive, try putting it into a function or method; when the function/method returns, the memory it used will be garbage-collected much more efficiently than if you unset variables inside the loop yourself.
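
One rough way to observe this effect (a sketch; the exact numbers vary by PHP version and configuration, and the function name is made up for illustration):

```php
function buildAndDiscard() {
    $tmp = range(1, 100000);          // allocates a few megabytes while the function runs
    return count($tmp);               // $tmp is released as soon as the function returns
}

$before = memory_get_usage();
buildAndDiscard();
$after = memory_get_usage();

// $after lands back near $before: the function's locals were reclaimed on return
echo ($after - $before) . " bytes retained\n";
```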

Tip 5 (Cache your filesystem checks, such as file_exists)

Do you check whether a file or directory exists before creating it, knowing the same path may be checked more than once? Each file_exists() call costs resources; not much, granted, but it adds up. The solution? Store the file paths you’ve already checked in an array (or object property) and use in_array():

if (!in_array($path, $this->path_list)) {
    // ... your code to deal with the file
    $this->path_list[] = $path;   // remember this path so it is only checked once
}

Tip 6 (Database access is expensive)

Accessing a database is expensive; if you don’t have to, don’t. Building on tip 5, I check the filesystem to see whether an asset exists even though I could query the database instead, because far more processing is required to access a database than to check the filesystem.

Tip 7 (Do not use objects if not needed)

Unless there is little or no performance difference, I find it best not to use objects when simply reading data from a database, only when updating it. Objects add extra overhead to your scripts, so I tend to follow this rule of thumb:

– If you are not writing to a database, do not use objects; use arrays.
– If you will be modifying data in a database, use objects; it tends to be cleaner and somewhat more efficient in the long run.

13 Disturbing Facts About Employee Engagement [INFOGRAPHIC]

If you are an employee, are you stressed out and feeling a lack of passion for what you do? If you are a manager, do you notice that your company suffers from lost productivity, absenteeism, and high turnover? If you answered yes to either of these questions, your problem is likely low employee engagement.

The folks over at OfficeVibe put together an infographic about the high costs of low employee engagement. It also reveals how common a problem it really is.

75% of companies say they can’t attract the right talent.

83% worry that their employer brand isn’t compelling.

Jacob Shriar of Digitalist suggests 4 simple (and free) things you can do to combat low employee engagement, based on research from Deloitte:

1. Encourage side projects. “Employees feel overworked and underappreciated, so as leaders, we need to stop overloading them to the point where they can’t handle the workload,” writes Shriar. “Let them explore their own passions and interests, and work on side projects.” Why? Happy employees feel some autonomy and mastery. Having them work on projects that provide those two things improves motivation and engagement.

2. Encourage workers to engage with customers. “At Wistia, a video hosting company, they make everyone in the company do customer support during their onboarding, and they often rotate people into customer support,” writes Shriar. “When I asked Chris, their CEO, why they do this, he mentioned to me that it’s so every single person in the company understands how their customers are using their product.” Why? Happy employees see that they are working toward something that affects the lives of other people. It’s important to see that kind of concrete evidence that what you do matters.

3. Encourage workers to work cross-functionally. “Both Apple and Google have created common areas in their offices, specifically and strategically located, so that different workers that don’t normally interact with each other can have a chance to chat,” writes Shriar. Why? Happy employees collaborate across disciplines and build productive relationships with their fellow employees. Cross-functional work also means a company can break out of those dreaded silos.

4. Encourage networking in their industry. “It’s important for employees to grow and learn more about what they do. It helps them build that passion for their industry,” writes Shriar. “It’s important to go to networking events, and encourage your employees to participate in these things. Websites like Eventbrite or Meetup have lots of great resources, and most of the events on there are free.” Why? Happy employees are constantly learning and keeping up with the best practices of their industries.


Ref: Social Media Today

What Is HTTP/2 And How It Works

HTTP/2, or HTTP Version 2, is the second major version of the HTTP network protocol. It is based on SPDY/2 and improves web performance significantly. HTTP/2 was developed by the Hypertext Transfer Protocol working group of the Internet Engineering Task Force (IETF).

HTTP/1.1 was a major step up from HTTP/1.0: it allowed persistent connections (more than one request/response on the same HTTP connection), improved caching, new status codes, enhanced compression support, the OPTIONS method for Cross-Origin Resource Sharing in web applications, and much more.

HTTP/1.1 has served the Web well for more than fifteen years, but its age is starting to show. Loading a Web page is more resource intensive than ever, and loading all of those assets efficiently is difficult, because HTTP practically only allows one outstanding request per TCP connection.

Because of this, browsers open many TCP connections to issue requests in parallel. And since there is no effective limit, too many connections are used most of the time, which leads to TCP congestion and unfair resource usage by browsers. It also means a lot of duplicate data being transferred “on the wire”.

If too many requests are made, it hurts performance. This has led the industry to a place where it’s considered Best Practice to do things like spriting, data: inlining, domain sharding and concatenation. These hacks are indications of underlying problems in the protocol itself, and cause a number of problems on their own when used.

HTTP/2, on the other hand, enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent exchanges on the same connection. It also introduces unsolicited push of representations from servers to clients. This specification is an alternative to, but does not obsolete, the HTTP/1.1 message syntax. HTTP’s existing semantics remain unchanged.

At a high level, HTTP/2:

  • is binary, instead of textual
  • is fully multiplexed, instead of ordered and blocking
  • can therefore use one connection for parallelism
  • uses header compression to reduce overhead
  • allows servers to “push” responses proactively into client caches

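Keeping with PHP from the earlier tips, a client built against an HTTP/2-capable cURL can ask for the new protocol explicitly. This is only a sketch: the URL is a placeholder, and the CURL_HTTP_VERSION_2_0 constant requires PHP 7.0.7+ compiled against cURL 7.33+ with HTTP/2 support.

```php
<?php
// Sketch: request a page over HTTP/2 if the server and cURL support it;
// cURL negotiates the version and falls back to HTTP/1.1 otherwise.
$ch = curl_init('https://example.com/');
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($ch);
curl_close($ch);
```

Because the protocol change happens below the HTTP semantics, the rest of the request code is identical to an HTTP/1.1 client.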
Read more about HTTP/2 here.

Ref: FossBytes

Factors which determine the cost of a Mobile App

Since mobile apps are usually free to download, we often think that these apps are free. But this is not the case.

To create great apps, an efficient mobile app development process requires a great deal of effort, a strong foundation, excellent planning, a supportive ecosystem, and a capable team of skilled people.

The cost of apps varies due to a number of factors and the market conditions at a particular point in time. It also largely depends on the type of the app which needs to be developed.

According to a survey in which 96 app developers were interviewed, the average cost of developing a mobile app is US$6,450. Various components drive the cost of an app, and it may differ from market to market and product to product.

Let’s take a quick look at the factors which can truly determine the cost of developing a mobile app:

1. Research

The first and foremost phase of developing an app is research. The success of your app largely depends on the amount of research done before the plan is executed. Hence, it is the first area we need to invest in, and very carefully at that.

Good research does not necessarily mean higher cost; it depends on the type of research and the data collection methods. The primary purpose of research is to understand the market’s needs and demand.

Primary data collection is much more expensive than secondary sources, but it is definitely more reliable. You can also interact face to face with users, which gives a better estimate of their requirements.


This helps in enhancing the quality of the end product. But for this type of data, we first need to identify our consumer segment, where secondary data comes in handy at a very reasonable cost.

Now, the ball is in the smartphone app developers’ court to make the investment decisions. Startups usually do not have sufficient time and resources to pay much heed to this very important segment, while established companies can afford to make a huge investment in research.

2. Human resource

One of the most crucial factors which determines the cost of your app is the people working on it.

It is important to understand that skilled labor makes a big difference. Skilled developers may charge a bit more than inexperienced ones, but they come with quality and a degree of assurance.

The next decision in this phase is whether to go with in-house developers or get the work done offshore. The cost varies widely between these two options. While in-house developers come at a reasonable cost, offshore developers are expensive to hire.

Had I been in your place, I would have opted for a blend, a mixed team of all these different resources. A team of experienced developers for their skills and attention to detail, and a few fresh developers to put in those extra efforts and their innovative ideas.

This can help you to maintain the budget of the mobile app, while working with people who can get you to the timely completion and success of the app.

3. Regular updates/ maintenance

Whenever you see that a new version of an already installed app is available, you tend to update the app as soon as you can. Sometimes the update is so minor that you wonder what has changed.

But what is minor for someone may be important to others. A team of developers is always working to provide users with the best possible technology they have.

It is one of those significant levers that enable an app to retain its position in the market. The average maintenance cost of even the smallest of projects is not insubstantial – almost US$3,000 per month, and that is one of the lowest estimates. The size of the project and the cost of maintenance go hand in hand, and maintenance could cost up to US$25,000 per month.

Again, it depends on the companies, and how often they update their apps. The cost will largely depend on this decision.

About 30 percent of existing mobile apps are updated at least once a month, while another 52.8 percent are updated within a window of six months.

4. Testing

Testing is a significant element in determining the cost. It is critically important to conduct extensive testing before the launch of the app.

The first round of testing may not be the last. Testing may have to be repeated if the desired success rate is not achieved. On top of that, you may need to make changes based on external feedback.

This will add an extra burden to your budget, but it is hard to avoid this cost as it directly impacts the success rate of the app.

5. Mobile platform and devices

Defining the mobile platforms for which you are developing the app matters greatly in determining its cost. The four major platforms available in the market are iOS, Android, BlackBerry and Windows. Each platform is different and requires a different set of integrations.

Different platforms mean different costs. If you choose to develop an Android app, it can be a real pain, as Android runs on more than 18,000 different devices and screen sizes, making it difficult to ensure the app works well on all of them.

Supporting all of these devices takes considerable extra effort, whereas iOS needs to support only a handful of devices; this is why Android apps are said to be the most expensive to develop.

6. Time

Time is another factor which determines the cost. The more extensive an app is, the more time it will take, and more time means more cost, as it requires additional resources.

7. Native apps or hybrid apps

Native apps are designed for a particular platform and installed through its app store, while hybrid apps are not tied to a particular platform.

Native mobile apps are generally faster but costlier to create than hybrid apps, so the choice between the two can hugely affect the total cost of development.


There is no definitive formula for calculating the cost of developing an application. Here, I have tried to cover all the factors which significantly impact the cost of development.

Other factors and market conditions may also affect the cost in ways that cannot be pinned down in advance. The actual cost ultimately depends on the time required and the people involved.

Ref: Tech in Asia

Software Houses in Lahore you should know about

Growing up in Lahore, we were always told that our city was hailed for its abundance of great food. People from all over the country used to (and still do) come to Lahore for their foodie fix, whether it was desi food they craved, Chinese or fast food. Now, with the advent of the modern age, I hope that the new tech-savvy generations growing up in Lahore will be urged to feel the same pride when they learn about the hundreds of people who leave their own cities to come work in one of Lahore’s famed software houses.

Not only does Lahore have some of the largest software houses of the country, it is also home to some of the most innovative software houses in Pakistan. These are companies that have withstood the test of time and succeeded by harnessing the vast array of local talent available to them. Each success story in this article will convince you that with enough determination and enough faith in your fellow countrymen, you can accomplish great feats. Here is a list, in no particular order, of 5 software houses in Lahore that you should definitely be aware of:

1. Techlogix:

One of the leading names in the local software industry, Techlogix has been around since 1996 and was founded by 3 MIT graduates: Salman Akhtar, Kewan Qadre Khawaja and Khurram Afridi. It is primarily a global consulting, eBusiness services and IT services company with more than 300 employees. Their main office is located in Lahore, but they also have branches in Karachi and Islamabad. Techlogix has marketing/support offices and branches in Dubai, Kuala Lumpur (Malaysia) and Boston, along with a software development centre in Beijing.

Although it is primarily an IT services company that mostly deals in software product engineering, business intelligence, enterprise applications and financial services, Techlogix has also released two independent products in the past: 1) MAESTrO, a software tool used in the automotive industry by 20 automotive companies, including BMW, Honda, Ford and Renault, for the creation of electrical systems for fuel-efficient vehicles. MAESTrO was made solely by one of the founders, Khurram K. Afridi, and released under a Techlogix license. 2) Jazba, a web-based design and publishing tool that allows the creation of catalogues and other publications. It enables users to incorporate graphic templates, layouts and Web pages directly from the camera.

Some of Techlogix’s core projects include the development of a mobile wallet service called Omni (similar to Easypaisa) for United Bank Limited, the implementation of PeopleSoft Campus Solution for LUMS as their primary campus automation software and the assembly of an enterprise architecture solution for Ufone.

Interested in working here? They’re hiring!

2. NetSol Technologies:

NetSol Technologies is among the top software houses in terms of infrastructure. It was a small setup that was started back in December of 1995 by Salim Ghauri (who incorporated his brothers, Najeeb, Naeem and Shahab Ghauri, into the company later on) and has since grown to a behemoth of a company that employs 1500+ people in Los Angeles, London, Lahore, Riyadh, Beijing, Adelaide and Bangkok. It is a large, internationally-renowned IT consultancy and software developer company that has global experience in allocating enterprise solutions to the leasing/lending and finance industry. NetSol also provides banking, healthcare, system integration and IT services to their clients.

Salim Ghauri had the vision of establishing a globally recognized corporation. He certainly managed to make this dream a reality when NetSol became the first and only Pakistani company to get listed on the NASDAQ, the world’s second largest electronic marketplace. It took the company less than 5 years to make its mark, not only on the map of the Pakistani IT industry but on the international map as well. With a team of 200+ employees in 2000, NetSol became the first Pakistani software house to receive ISO-9000 certification.

All of these achievements are overshadowed by one distinctive accomplishment. In 2006, the company obtained the title of the first CMMI Level 5 certified company in Pakistan, which is the highest level of sophistication a company can receive. To this date, NetSol Technologies and NCR Pakistan are the only companies to have attained CMMI Level 5 certification in Pakistan. Due to their international quality certifications and solid infrastructure, NetSol Technologies has landed deals with quite a number of blue chip companies such as BMW, Nissan, Toyota, Mercedes-Benz, Yamaha, Fiat and Volkswagen.

Interested in working here? They’re hiring!

3. Arbisoft:

Founded in the year 2007 by Yasser Bashir, Arbisoft was initially named DeepPixels and handled only post-processing of 3D medical images but the software-oriented team soon grew weary and started experimenting with software development services which bore more profitable results. Consequently, Arbisoft grew into the prominent company that it is today.

They have dabbled and specialized in a wide variety of disciplines that include 3D modelling, game development, cloud enterprise applications, automated data scraping and search analytics, mobile app development and IT solutions. They do not have any offices overseas, however they do operate in four continents and have signed deals with major brand names like Kayak, Stanford University, Indeed, edX and TripScanner. One of their most innovative technologies is the ViewDDD iOS application in which a user can view, explore and visualize 3D models.

The first fully-owned product they’ve released is Intellistats, an app which analyses your phone usage, locates patterns and uses that information to tell you what carrier and plan is adequate for your daily usage. This venture was made possible because Arbisoft aims to establish themselves as an enabling platform for their own people, readily encouraging independent projects so that their employees may have the opportunity to grow by taking advantage of Arbisoft’s extensive resources.

Interested in working here? They’re hiring!

4. Conrad Labs:

Conrad Labs was founded on March 15, 2009 by Abbas Yousafzai, who maintains his position as CEO. He describes the company as a specialized firm consisting of experienced software engineers who work with early-stage startups as their partial engineering team, or at times, the said startup’s only engineering team. Conrad Labs follows a startup studio model.

Their entire team consists of 25+ people so far. Conrad Labs started out as the research and development wing for Conformity, a web-based SSO startup from Austin, Texas. The connection between them is evident when one notices that the current name of the company is a play on the words “Conformity Research And Development”.

Although their team is relatively small, Conrad Labs has an experienced group of individuals working for them. Their team has extensive SaaS (software-as-a-service) and cloud knowledge and the technologies they work with include Java, MySQL, Python, jQuery and PHP which they apply for application development and purpose-built software services.

The most unique feature of Conrad Labs is their uncommon cash-and-equity business model. In simple terms, they have unparalleled flexibility when it comes to reimbursement because their setup focuses on both equity and cash. In order to grant more services at a lower rate to startups that are in their initial stages, they provide a year-long R&D agreement where they provide full services and put their hopes into a successful product or profitable acquisition (and this has even led to them giving up cash payment completely in some cases). Their contract also entails an agreement that their employees will be treated as full-time employees of the company they are working with. In fact, they have turned down various projects simply because the company in question turned down said agreement.

Conrad Labs has worked with more than half-a-dozen startups like Gemalto, Wisegate, Provio and Boomerang Commerce, but two of their clients are particularly interesting: 1) StoredIQ, which analyzes and processes petabytes of unstructured and irregular business data without shifting the large volume of data to a repository or special program. StoredIQ was later acquired by IBM. 2) Axcient, which replaces traditional methods of corporate data backup, business continuity and disaster recovery with a single, connected platform that mirrors a business’s entire data in the cloud.

Interested in working here? They’re hiring! Email Conrad Labs.

5. Mindstorm Studios:

Mindstorm Studios was formed by Babar Ahmed along with his brother Faraz Ahmed in 2006 after the former left a career as an electrical engineer in Austin, Texas. Babar made headlines when his company rose to fame and was hailed as one of the leading entrepreneurs of that time. Mindstorm Studios employs 40+ people with substantial experience in cross-platform game development, animation, character design, architectural visualization and graphic design.

The primary focus of the game development studio is the creation and publishing of first-party titles. Initially, they started out with PC games and gained fame for developing the official game for the 2011 Cricket World Cup. They soon realized that there were bigger fish to fry when the advent of smartphones changed the gaming landscape forever and an ever-increasing number of people started playing games extensively on their smartphones. Mindstorm Studios realized this shift in the dynamics and eased mobile game development into their workflow.

Since then, they have made a number of games for iPhone, Android and Windows Phone devices that have hit top spots in their respective app stores. One of their recent releases, Alliance Wars: Global Invasion has installs ranging from 10,000-50,000+ and some of its players spend up to 5000 USD per month on in-app purchases.

Although Mindstorm Studios is more of a game development company than a software house, it has every right to be on this list due to the permanent mark it has left on the Pakistani game development market, and it stands as an example for future entrepreneurs and companies alike.

Interested in working here? They’re hiring! Email Mindstorm Studios.

Ref: Techjuice

MomentCam, with 200M downloads, wins Facebook’s first Fb Start mobile app contest

Facebook has crowned the winner of its first-ever Fb Start competition, a contest that sought to identify the best apps in the world that are part of its program aimed at helping mobile developers.


The grand prize winning team, which will receive $50,000 in cash and $25,000 in Facebook Ad Credit, is Beijing, China-based Hightalk Software, makers of MomentCam. The app, which allows users to easily create and share customized caricatures of people, has been downloaded more than 200 million times.

Fb Start also named a set of regional winners, each of which will get $5,000 in cash and $5,000 in Facebook Ad Credit. For Europe, the Middle East, and Africa, the winner was the London-based social polling app, Whichit, which also allows businesses to create poll-based coupons that lead to heavy customer engagement; for Latin America, it was Brazil’s Qranio, which incentivizes players to complete knowledge-based trivia quizzes; for North America, it was U.S.-based Workflow, which enables automating actions between apps on your iOS device. And the “social good” winner was Hunger Crunch, which helps people “fight hunger” through a series of real-world and virtual experiences.

According to Facebook, apps were judged based on four main criteria:

1. growth and engagement: is it growing quickly, entering new markets, or retaining new users?

2. experience and design: is it optimized for a good user experience, and does it look and feel streamlined?

3. efficiency at scale: how well does it utilize Facebook or other tools “to make app development and iteration efficient”?

4. leverage of Facebook Platform: did the app developers make Facebook and/or Parse integral to the app?

Julien Codorniou, Facebook’s head of global platform partnerships, told VentureBeat that the selection of MomentCam “highlights the fact that we work with mainly mobile-first companies not based in Silicon Valley that are going through exponential growth.”

Codorniou added that the Fb Start team was looking for a winner that is doing interesting things on Facebook Platform, including the way it uses social login, Parse, and organic sharing tools. As well, it was looking for apps that were successfully monetizing with Facebook Platform that were mobile only. “We were looking for a company that could inspire a lot of developers,” he said.

Speaking of MomentCam, he added, “These guys could be the poster child of the next generation of partners for Facebook.”

Codorniou said Facebook wants to support startups like those participating in the competition as well as it can. Speaking of MomentCam’s huge growth, he said, “When we see something amazing happening on the platform, like the growth of these guys, we tend to jump in and” help.

And while he noted that it’s critical for Facebook to “manage the big guys” like Uber, Spotify, Candy Crush, and others, the social network is always looking for “the next generation of things.”

For his part, Filipe Santos, the director of international business for Hightalk Software, said Fb Start was instrumental in the growth of MomentCam in a number of ways. First and foremost, he said, was advice the China-based company got from Facebook’s Singapore office, as well as the fact that the startup was able to count on using tools like MailChimp and SurveyMonkey that are supported by Facebook. That support, Santos said, gives the tools “credibility.”

Santos also said Facebook supported his team’s use of Facebook ads, particularly when they wanted to do A/B testing. In the end, he said, being able to count on Facebook Platform made user acquisition “much cheaper” than what MomentCam had done before.

To Codorniou, one of the things Facebook likes to do — and which MomentCam and other apps in the Fb Start competition demonstrated — was building apps for multiple platforms.

“We are on top of operating systems and hardware,” Codorniou said. “We love to take companies from one operating system to another. Like Candy Crush and Spotify.”

Ultimately, that’s one of the biggest benefits Parse has brought to the Facebook development ecosystem. “This is exactly the vision of Parse,” he said. “This is why we bought the company. It’s faster to develop [simultaneously for iOS and Android]. That’s really the vision for the platform.”

Ref: VentureBeat