Jedv

D3 project 2 – Public DNS World Nodes Map (Part 1)

I have been working on my final project for my time with D3. I decided to focus purely on the (Geo|Topo)JSON style world maps and try to parse and then display some sort of network information using a combination of geoip lookups and longitude/latitude.

For the first part of this process, I needed to find some data and get it ready for use. I stumbled across this website: https://public-dns.info/.

Public DNS servers are globally indexed on this site with some basic information on their reliability, reverse DNS, software version, etc., and most importantly their IP addresses – perfect for what we will need.

I downloaded the full CSV list of all ‘valid’ nameservers present, which is located at https://public-dns.info/nameservers.csv – there are many thousands of nameservers listed at the time of writing. This is too many for a nice looking node map, so I used some GNU/Linux trickery to get a randomly selected CSV of 50 nodes.

$ shuf -n 50 nameservers.csv > c.csv

Make sure the CSV header is copied to the top of your new c.csv file – this is important for the next step, which is to get geographical locations for these different IP addresses.
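As an aside, you can keep the header and take the random sample in one go by writing the header out first and shuffling only the body, something like:

$ head -n 1 nameservers.csv > c.csv
$ tail -n +2 nameservers.csv | shuf -n 50 >> c.csv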

$ cat c.csv
ip,name,country_id,city,version,error,dnssec,reliability,checked_at,created_at
161.0.153.46,,TT,Port of Spain,dnsmasq-2.49,,false,1.00,2019-02-27T13:00:21Z,2017-01-08T22:03:27Z
201.76.162.156,mvx-201-76-162-156.mundivox.com.,BR,Rio de Janeiro,,,false,0.99,2019-02-27T12:54:31Z,2015-01-10T15:46:41Z
.........

I chose to use the https://api.ipstack.com service to get my geoip information, but there are a few services around; ipstack lets you make a certain number of calls for free, and we only need 50 calls to get our data.json file ready for D3.
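You can sanity-check the API from the command line before writing any code; a single lookup looks something like this (substitute your own access key):

$ curl "http://api.ipstack.com/161.0.153.46?access_key=YOUR_API_KEY"

The JSON that comes back includes latitude and longitude fields, which is all the script below needs.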

I used Node.JS along with a couple of modules from npm to hit up the API and make a data.json file neatly organised with all the data we need.

$ npm install --save request request-promise-native csvtojson

Once you have the required modules, it’s as simple as a few neat lines of Node.js – this file is parse_csv.js.

/* Parse the DNS public node list to JSON, then collect GeoIP data for each IP.
 * By Jed V. (2019 root@livingthedream.fun) */
const csv = require('csvtojson');
const fs = require('fs');
const rp = require('request-promise-native');

const orig = 'c.csv';

console.log(`Parsing file: ${orig}`);

function initial_csvtojson() {
    csv()
        .fromFile(orig)
        .subscribe((json, line) => {
            // Returning a promise makes csvtojson wait for each row's
            // GeoIP lookup before moving on to the next row.
            return new Promise((resolve, reject) => {
                console.log(line);
                console.log(json['ip']);
                const options = {
                    uri: `http://api.ipstack.com/${json['ip']}?access_key=********************`,
                    json: true
                };
                rp(options)
                    .then(function(data) {
                        console.log(`IP: ${data.ip}, Lat: ${data.latitude}, Long: ${data.longitude}`);
                        json['longitude'] = data.longitude;
                        json['latitude'] = data.latitude;
                        console.log('Changed obj:');
                        console.log(json);
                        resolve();
                    })
                    .catch(function(err) {
                        reject(err);
                    });
            });
        }).then((json) => {
            console.log('Writing JSON object to file...');
            fs.writeFile('./data.json', JSON.stringify(json, null, 2), 'utf-8', (err) => {
                if (err) throw err;
            });
        }).catch((err) => {
            throw err;
        });
}

initial_csvtojson();

Make sure to replace the *s with your ipstack.com API key if you wish to use the above script. We use the csvtojson module with its native promise support alongside request-promise-native to quickly and effectively turn c.csv into data.json.
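To generate data.json, just run the script with Node from the same directory as c.csv:

$ node parse_csv.js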

We make the ipstack.com API request for each row in the CSV, append the returned longitude/latitude to that row’s object, and csvtojson passes each object on until the final data.json array is complete and written to file – which then looks like the snippet below:

[{
        "ip": "161.0.153.46",
        "name": "",
        "country_id": "TT",
        "city": "Port of Spain",
        "version": "dnsmasq-2.49",
        "error": "",
        "dnssec": "false",
        "reliability": "1.00",
        "checked_at": "2019-02-27T13:00:21Z",
        "created_at": "2017-01-08T22:03:27Z",
        "longitude": -61.5167,
        "latitude": 10.65
    },
    {
        "ip": "201.76.162.156",
        "name": "mvx-201-76-162-156.mundivox.com.",
        "country_id": "BR",
        "city": "Rio de Janeiro",
        "version": "",
        "error": "",
        "dnssec": "false",
        "reliability": "0.99",
        "checked_at": "2019-02-27T12:54:31Z",
        "created_at": "2015-01-10T15:46:41Z",
        "longitude": -43.3307,
        "latitude": -22.9201
    },
    {.........}
]

Great – in just a couple of motions we have a data.json file that we can use in the browser with D3.js to produce a world map of highlighted public DNS nodes.

I’m going to write up how I did this in part 2!

Jedv

Using Command Line Tools to Aid Development – Part 2 (Sed)

The first part of this series of articles is here, and I recommend you start with that one to get an idea of the series and why using command line tools to aid your development (in any language/environment) can be very useful.

I am going to dive right in here with another text based tool called ‘sed’. Sed stands for ‘stream editor’; it is another tool from the very early Unix days, created in 1974. Its goal was a stream-based approach to text editing built on regular expressions, which back then opened a new era of efficient text processing.

These days GNU has its own offering of sed, which has become the standard edition; this version includes some big improvements and new features, including editing files in-place and other conveniences. The package is available by default on most Unix/Linux distros and can be installed for use on Windows based setups as well.

Once you have made sure it is available in your chosen development environment, you can use sed for various operations that would be slow or tedious to do manually or with GUI tools. I will show examples of some of these situations below.

In modern programming there is a big emphasis on splitting your projects/scripts/programs into multiple neatly organised files, in various formats and structures, to keep your project to standards, make it easier for other developers to read and modify, to satisfy compiler rules, and so on. This can mean you end up needing to replace certain words, phrases, variable names and the like project-wide or folder-wide to roll out name changes and such. Sed can help you handle situations like this with ease.

Use – Replacing text in files
sed -i 's/<search>/<replace>/g' <filename>
e.g.
sed -i s/abc/cba/g file1
Use – Replacing text in all files in a folder/project
sed -i 's/<search>/<replace>/g' *
e.g.
sed -i s/abc/cba/g *
Use – Cleaning up code in your project or file

You can go further with this sort of concept and use sed to clean up your code, remove unwanted function calls and the like, with commands like the ones below.

sed -i '/^[[:space:]]*$/d' <filename>
^ This removes all lines that are blank or contain only whitespace.

sed -i s/<function name>\(\)\;//g <filename>
^ This would remove all calls in a file or project to a certain function (called with no arguments).

sed -i '/^[[:space:]]*#/d' <filename>
^ This would remove all whole-line comments that use the # style from your file or project.

sed -i 's/<old name>/<new name>/g' <filename>

^ This last one is self explanatory really, but it is very useful when you need to change the name of a variable, function or class that gets referenced or called all over the place in a large project.
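To apply a rename like that across a whole project, one approach (a sketch – old_name and new_name are placeholders) is to let grep find the files that actually contain the name and hand just those to sed:

grep -rl 'old_name' . | xargs sed -i 's/old_name/new_name/g'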

You will notice the different styles of sed use that you can do. Single quoting can be useful if you are using characters in your command that would upset your preferred command line shell. The function call example above shows the other method of escaping your special characters using \, which is just another way to do the same thing.

Advanced switches and uses

Once you get the hang of some basic sed commands, you can use some of the switches to chain commands, store commands in sed ‘script’ files, and so on. Here are some examples of switches:

The -e switch can be used for chaining 
sed -i -e '<command>' -e '<command>' <filename>
e.g.
sed -i -e '/^[[:space:]]*#/d' -e '/^[[:space:]]*$/d' <filename>
^ This chains the commands up to remove all whole-line # style comments and all blank/whitespace-only lines from your file or project. You can chain up more expressions and eventually come up with one-liners to clean your code up when required.

You can create sed scripts by just putting one sed command per line into a file and calling it with the -f switch.

File x.sed:

/^[[:space:]]*#/d
/^[[:space:]]*$/d

Command:

sed -f x.sed <filename>

Creating little sed scripts and keeping them handy to clean up code or even data-files (e.g. simple csv processing) can be a good use of this feature.
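As a sketch of the data-file idea (the rules here are just illustrative): a small sed script that strips comment lines, blank lines and Windows line endings from a CSV before further processing.

File clean_csv.sed:

/^[[:space:]]*#/d
/^[[:space:]]*$/d
s/\r$//

Command:

sed -f clean_csv.sed data.csv > cleaned.csv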

This concludes our introduction to using the great command line tool ‘sed’ to aid development. There are lots of other great ways you can use sed to process any type of text file, or even streams from pipes (similar to grep in the first article), to get the most out of its functionality. If you need a more adaptable, more advanced take on what sed does, with even more functionality to really mangle your text based files, definitely look into the command line tool AWK.

There will be a part 3 coming soon; I apologise for the long waits between my articles.

Live the dream! 🙂

Jedv

Using Command Line Tools to Aid Development – Part 1 (Grep)

In the world of development today, everyone is looking to speed up the process while remaining efficient and not cutting corners. There are lots of different ways to achieve this, but a lot of the time I find developers miss out on some simple command line tools that can help with things ranging from locating every use of a keyword (be it a variable name, function name, value etc) across an entire project, to using linters to help you meet the strictest of standards.

Between the Bash shell, GNU/Linux and other command line tools, you can save yourself time and pain while also boosting your overall skill set. This is going to be part one of a series in which I will show not just the tools themselves but a couple of use cases that actually help when developing or maintaining code/projects.

Introducing… grep. Taken from Wikipedia, the standard descriptive line would be ‘grep is a command-line utility for searching plain-text data sets for lines that match a regular expression’. As a base description that is pretty accurate, but as time has gone on (the first version was released in 1974 for the original Unix operating system) it has gained quite a few features and uses.

Use – Searching a file for occurrences:

The plainest use of grep would be to purely search a file for an occurrence of a string. You can specify a regular expression or just a search string like so:

grep '<search>' <filename>
e.g.
grep test file.txt
grep 'test' file.txt
grep ^test$ file.txt

Note: The use of quote marks around the search term is optional, but they should be used if the search contains any characters that the Bash shell (or your preferred platform) would interpret specially, such as spaces splitting the command or things like pipes.

Use – Searching wider for occurrences:

You can use grep to search multiple files all the way up to multiple directories recursively file-by-file.

grep <search> <multiple files>
e.g. 
grep test file1.txt file2.txt file3.h file4.cpp file5.txt

grep -R <search> <directory>
e.g.
grep -R test test_files/ 
grep -R test test_directory1/ test_directory2/
grep -R test *
The wildcard version above searches all files in the current directory and all sub-directories.

You can also mix files & directories in the search radius.

grep -R test test.txt test_files/

You will notice immediately, when you give grep more than one file or any directories to search, that it starts reporting the file each occurrence was found in at the start of each line, so you can see which files contain what you were searching for.

livethedream$ grep -R test test.txt some_files/
test.txt:testing
test.txt:test
livethedream$ 

You can use grep with the wildcard solution in the root of your own or other people’s code, libraries, projects etc to find where a function/class/variable is declared, defined or used. This is really helpful in speeding up finding your way around code you wrote a long time back, code you are taking over from another developer, or 3rd party code you are modifying for your own devices.
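The -n switch pairs nicely with this: it prefixes each match with its line number, so you can jump straight to the right place in your editor.

grep -Rn '<function name>' *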

Use – Searching the pipe for occurrences:

Grep would not be what it is today without its use alongside the traditional stream pipes that operating systems have been making use of since around the time of its first version. In a traditional Bash shell setup on a Unix/Linux system, the pipe symbol | works by ‘piping’ the output of the command on the left hand side into the command on the right hand side, and this can be chained over and over again for great effect.

Here is a typical example of using grep with a pipe to search the output of any command you can execute that has output:

<command> | grep <search>
e.g.
livethedream$ ls 
file1.txt	file2.txt	file3.txt	file4.txt	file5.txt	test.txt	test_files
livethedream$ ls | grep test
test.txt
test_files
livethedream$ 

You can get really good results using grep this way during development by combining it with the output of tools like package managers, compilers, shell scripts and source control tools (git, hg, svn etc).
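For example, piping your source control history through grep to find fix-related commits (assuming a git repository):

git log --oneline | grep fix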

Another good use in the pipe scenario is the -v switch, which inverts the match to remove the occurrences rather than return them. Taking our above example, let’s try the same with -v.

<command> | grep -v <search>
e.g.
livethedream$ ls 
file1.txt	file2.txt	file3.txt	file4.txt	file5.txt	test.txt	test_files
livethedream$ ls | grep -v test
file1.txt
file2.txt
file3.txt
file4.txt
file5.txt
livethedream$ 

You can see how this would be useful; you can even chain it, instead of using a regular expression, for quick removal of one or more separate searches like so:

livethedream$ ls | grep -v test | grep -v file1 | grep -v file3
file2.txt
file4.txt
file5.txt
livethedream$  

You can also use this method with commands like cat, curl etc to parse your way through various types of data, removing different searches quickly without mucking about with regex. A good example just for thought: use cat to view a CSV file and grep -v with multiple pipes to remove all lines that have a certain field value, leaving only the data you want.
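A quick sketch of that CSV idea, reusing the nameserver list from the D3 article above (the field values are just illustrative); each grep -v peels away rows containing a value you don’t want:

cat nameservers.csv | grep -v ',BR,' | grep -v ',TT,'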

A quick example of applying this to searching development projects: say you want to search for a function, but only where it is called in a certain way or without a certain argument. I add the ( after the function name to narrow the match down to actual calls of the function.

grep -R '<function name>(' * | grep -v '(true)' 

This call would search for all the calls (and the definition, but that isn’t relevant) of a function across the entire project/codebase, but removes all the times it is called with a single boolean argument of true.

This concludes the introduction to grep for searching through code specifically. There is a whole host of other applications for grep, and there are a couple of other really useful switches that can change the regular expression engine in use (if you use regex with it) or how it searches. Be aware that grep has different versions depending on the operating system, but it almost always acts the same for basic operations.

In part 2 of this series I will go even further with helping the development process using command line tools by introducing another key program that has a long history starting from the days of original Unix.

Jedv

Rsync backup script for websites (document root) & MySQL databases – to rsync.net or others

I have just got an account with rsync.net to back up one of my server’s websites, including the document root for the webserver and all the MySQL databases for them. I wrote a slightly fancier Bash script for storing everything; it is a little more complex than a single rsync command and caters for dumping the databases to individually named/dated files.

Read the header comments of the file for instructions on usage.

#!/bin/bash
# Backup a directory (document root) and all MySQL databases and send to rsync.net.
# You must have an account with rsync.net backup service (or similar) and have setup automatic ssh login via keys.
# See: http://www.rsync.net/resources/howto/ssh_keys.html
# Create the remote RSYNC_PATH using ssh before running the script.
# It is best to run this with a cronjob either daily/weekly/monthly,
# run it with --quiet in the crontab for no output except from errors
# <root@livingthedream.fun>

RSYNC_HOST=""
RSYNC_USER=""
RSYNC_PATH="" # Do not start or end the path with a /
MYSQL_USER=""
MYSQL_PASSWORD=""
DOCUMENT_ROOT_PATH="" # Put a trailing slash on this path so rsync copies its contents.

# No need to edit below lines unless you need to modify behaviour.

databases=`mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`

QUIET=false
RSYNC_OPT="-avH"
if [ "$1" == "--quiet" ]; then
 QUIET=true
 RSYNC_OPT="-avHq"
fi
if [ "$QUIET" = false ]; then
 echo "Starting backup.."
 echo "Dumping all MySQL databases.."
fi
mkdir -p sql
for db in $databases; do
    if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]] ; then
        if [ "$QUIET" = false ]; then
          echo "Dumping database: $db"
        fi
        mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD --databases $db > sql/`date +%Y%m%d`.$db.sql
    fi
done
if [ "$QUIET" = false ]; then
  echo "Done."
  echo "Sending SQL databases to $RSYNC_PATH/sql"
fi
rsync $RSYNC_OPT sql/ $RSYNC_USER@$RSYNC_HOST:$RSYNC_PATH/sql/
if [ "$QUIET" = false ]; then
  echo "Done."
  echo "Backing up document root to $RSYNC_PATH/sites"
fi
rsync $RSYNC_OPT $DOCUMENT_ROOT_PATH $RSYNC_USER@$RSYNC_HOST:$RSYNC_PATH/sites/
if [ "$QUIET" = false ]; then
  echo "Done."
  echo "All Done."
fi

Make sure to fill in the variables at the top and set up your cron job once you have tested it – check out rsync.net for their awesome setup and nicely priced remote backup solutions. Their technical support is also most helpful.
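For reference, a weekly crontab entry might look something like this (the path and schedule are just examples):

0 3 * * 0 /root/scripts/backup.sh --quiet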


Jedv

Fixing: PHP Fatal error: Cannot redeclare wp_ajax_press_this_save_post() on WordPress

So today, a client’s site that had its WordPress version updated (to 4.9.1) started throwing this error when they tried to use the Metaslider plugin to upload new images. Some searching confirmed this was happening to more than a few other people, but I did not easily find any actual solutions from those asking, who, probably like me, had sites that needed the functionality this provided. The error:

PHP Fatal error:
Cannot redeclare wp_ajax_press_this_save_post()
(previously declared in wp-includes/deprecated.php:3921)
in wp-admin/includes/ajax-actions.php on line 3197

Some digging later, I found out the function was not just deprecated: in 4.9.1 and above it was moved out of the core entirely into an optional plugin you can download called Press This. This is part of a general clean up initiative where non-essential functionality is being slowly moved out of a more streamlined core and into optional plugins.

Just installing the now-optional plugin is not enough to fix the issue entirely though, at least for now. The easiest and least intrusive fix I could come up with was to comment out two functions in this file:

wp-admin/includes/ajax-actions.php
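If you would rather find the exact spot from the shell than scroll through the file, grep with line numbers does the trick (run from the WordPress root directory):

$ grep -n 'function wp_ajax_press_this' wp-admin/includes/ajax-actions.php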

If you use your text editor to locate the top function, which is wp_ajax_press_this_save_post(), you will find the other one below it; they should look like this:

/**
 * Ajax handler for saving a post from Press This.
 *
 * @since 4.2.0
 */
function wp_ajax_press_this_save_post() {
      include( ABSPATH . 'wp-admin/includes/class-wp-press-this.php' );
      $wp_press_this = new WP_Press_This();
      $wp_press_this->save_post();
}

/**
 * Ajax handler for creating new category from Press This.
 *
 * @since 4.2.0
 */
function wp_ajax_press_this_add_category() {
      include( ABSPATH . 'wp-admin/includes/class-wp-press-this.php' );
      $wp_press_this = new WP_Press_This();
      $wp_press_this->add_category();
}

Now use PHP comments to comment out both function declarations entirely, so that only the deprecation/new plugin functionality takes over. They should then look like the below; save the file.

/**
 * Ajax handler for saving a post from Press This.
 *
 * @since 4.2.0
 */
//function wp_ajax_press_this_save_post() {
//      include( ABSPATH . 'wp-admin/includes/class-wp-press-this.php' );
//      $wp_press_this = new WP_Press_This();
//      $wp_press_this->save_post();
//}

/**
 * Ajax handler for creating new category from Press This.
 *
 * @since 4.2.0
 */
//function wp_ajax_press_this_add_category() {
//      include( ABSPATH . 'wp-admin/includes/class-wp-press-this.php' );
//      $wp_press_this = new WP_Press_This();
//      $wp_press_this->add_category();
//}

Then you’re all set. Hopefully WordPress have picked up on this and will just take these two declarations out in the next version or two, before it gets annoying.

Jedv

Getting free DNS & email for your domain

SO. I wanted to start this site/project myself, but I didn’t have any real funds to donate to its cause at all. Luckily it was recently the horrifyingly pointless ‘Black Friday/Cyber Monday’ thing that people care way too much about – the side effect of this is super ridiculously cheap domain names and other online bits ‘n bobs. So from my usual domain buying site, which I always recommend, Namecheap, I got this domain for under £2 for a year, which was a steal.

I already had some free web hosting to make use of, so there were only two things not included with what I already had/bought: a DNS provider for the domain and, of course, email for the domain too.

These two things can always be found for free if you know where to look, so I will show you the couple of services that provide them for me on the regular.

DNS



DNS can be found for free on an old-looking, little-known site called DNS Exit. I have used DNS Exit to provide free DNS for domains for at least 6 years and never once had any issues or failures with their service. It even supports dynamic IP addressing using software they provide. They also sell all the usual hosting provider stuff like premium DNS services, web hosting, and email hosting/email relays.

Email


Free email is a bit harder to find if you wish to use your own domain without any forwarding arrangement, but there are some good providers if you search around, and I always end up going with Zoho Mail. Think of Zoho like Google in the sense of email providers: they don’t just provide an email service, they go all out.

Features of Zoho Mail/Zoho ‘Workplace’ are:

  • Full featured email service with webmail, mobile apps and mail servers for your own clients, with all the modern features you’d expect when you are paying, and some more.
  • Migration services for getting your data across from most other providers you could think up.
  • Integration with CRM setups/apps.
  • Docs. (Just like Google Docs but free for business/own domain)
  • Calendar.
  • Tasks.
  • Notes.
  • Contacts.
  • Integration with all other Zoho services. (See below)

Basically, Zoho are an awesome company that provide all these services and more, like their project management (Basecamp/Asana-esque) service, invoicing and CRM services and a bunch of other things, so check them out and ditch overpriced, greedy services that offer a lot less. Even if you end up requiring some of the paid features on a Zoho service (of which I find it hard to come across any), you’re still on a big gain overall. Zoho have also just launched ‘One’, which is supposed to be a service combining all of their offerings to run your ‘entire business’ from one suite, which looks promising.

Services like these are how I was even able to make this site possible, so I thought I’d give them some props and help others on really tight budgets launch their own stuff.