D3 project 2 – Public DNS World Nodes Map (Part 1)

I have been working on my final project for my time with D3. I decided to focus purely on the (Geo|Topo)JSON style world maps and try to parse then display some sort of network information using a combination of geoip lookups and longitude/latitude.

For the first part of this process, I needed to find some data and get it ready for use. I stumbled across this website:

Public DNS servers are globally indexed on this site with some basic information on their reliability, reverse DNS, software version etc., and most of all their IP – perfect for what we will need.

I downloaded the full CSV list of all ‘valid’ nameservers present, which is located at – there are many thousands of nameservers listed at the time of writing. This is too many for a nice-looking node map, so I used some GNU/Linux trickery to get a randomly selected CSV of 50 nodes.

$ shuf -n 50 nameservers.csv > c.csv

Make sure the CSV header is copied to the top of your new c.csv file – this is important for the next step, which is to get geographical locations for these different IP addresses.
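One way to keep the header in a single motion (a small sketch, assuming the full download is named nameservers.csv as above):

```shell
# Keep the header row, then append 50 randomly chosen data rows.
head -n 1 nameservers.csv > c.csv
tail -n +2 nameservers.csv | shuf -n 50 >> c.csv
```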

$ cat c.csv
ip,name,country_id,city,version,error,dnssec,reliability,checked_at,created_at
,,TT,Port of Spain,dnsmasq-2.49,,false,1.00,2019-02-27T13:00:21Z,2017-01-08T22:03:27Z
,,BR,Rio de Janeiro,,,false,0.99,2019-02-27T12:54:31Z,2015-01-10T15:46:41Z

I chose to use the service to get my geoip information, but there are a few services around; ipstack lets you make a certain number of calls a day for free, and we only need 50 calls to get our data.json file ready for D3.

I used Node.JS along with a couple of modules from npm to hit up the API and make a data.json file neatly organised with all the data we need.

$ npm install --save request request-promise-native csvtojson

Once you have the required modules, it’s as simple as a few neat lines of Node.js – this file is parse_csv.js.

/* Parse the DNS public node list to JSON, then collect GeoIP data for each IP.
 * By Jed V. (2019) */
const csv = require('csvtojson');
const fs = require('fs');
const rp = require('request-promise-native');

const orig = 'c.csv';

console.log(`Parsing file: ${orig}`);

function initial_csvtojson() {
    csv()
        .fromFile(orig)
        .subscribe((json, line) => {
            // Long operation for each parsed row, e.g. transform / write into database.
            return new Promise((resolve, reject) => {
                var options = {
                    // ipstack lookup endpoint - replace the *s with your API key.
                    uri: `http://api.ipstack.com/${json['ip']}?access_key=********************`,
                    json: true
                };
                rp(options)
                    .then(function(data) {
                        console.log(`IP: ${data.ip}, Lat/Long: ${data.latitude},${data.longitude}`);
                        json['longitude'] = data.longitude;
                        json['latitude'] = data.latitude;
                        console.log('Changed obj:', json);
                        resolve();
                    })
                    .catch(function(err) {
                        reject(err);
                    });
            });
        })
        .then((json) => {
            console.log('Writing JSON object to file...');
            fs.writeFile('./data.json', JSON.stringify(json, null, 2), 'utf-8', (err) => {
                if (err) throw err;
            });
        })
        .catch((err) => {
            throw err;
        });
}

initial_csvtojson();


Make sure to replace the *s with your API key if you wish to use the above script. We use the csvtojson module’s native promise support alongside the request module’s native promises to quickly and effectively turn c.csv into data.json.

We make the API request for each row in the CSV and then append the returned longitude/latitude to the row object, which csvtojson passes on until the final data.json array is created and written to file – it then looks like the snippet below:

        "ip": "",
        "name": "",
        "country_id": "TT",
        "city": "Port of Spain",
        "version": "dnsmasq-2.49",
        "error": "",
        "dnssec": "false",
        "reliability": "1.00",
        "checked_at": "2019-02-27T13:00:21Z",
        "created_at": "2017-01-08T22:03:27Z",
        "longitude": -61.5167,
        "latitude": 10.65
        "ip": "",
        "name": "",
        "country_id": "BR",
        "city": "Rio de Janeiro",
        "version": "",
        "error": "",
        "dnssec": "false",
        "reliability": "0.99",
        "checked_at": "2019-02-27T12:54:31Z",
        "created_at": "2015-01-10T15:46:41Z",
        "longitude": -43.3307,
        "latitude": -22.9201

Great – in just a couple of motions we have a data.json file that we can use in the browser with D3.js to produce a world map of highlighted public DNS nodes.

I’m going to write up how I did this in part 2!


Learning D3

This week I have been getting up to speed on one of the most powerful ways to visualise any dataset, in any way you could imagine, in the browser – using D3.

D3 describes itself as

“D3.js is a JavaScript library for manipulating documents based on data. D3 helps you bring data to life using HTML, SVG, and CSS.” – Homepage

On the face of it this might seem like just another jQuery-style Javascript graphing library like Highcharts, Chart.js etc. I thought this too, but it couldn’t be further from the truth. D3 focuses on pure data-driven development, leaving you to create the entire visual side from scratch using the power of SVG.

This allows you to create graphs from the very ground up, giving unlimited customisation options and never leaving you trying to work around the limitations of the library. There is a steep learning curve if, like myself, you haven’t done much with SVG before. I recommend getting cheatsheets for both the library & SVG, and getting the most in-depth material to learn from that you can find/afford.

There are multiple sites, including the original D3 site, that offer hundreds and hundreds of detailed & amazing examples of what you can accomplish if you just start with D3 & the browser. It doesn’t even have any dependencies whatsoever – allowing you to get straight into the fight.

Data sourced from:

This is a chart I created using just HTML, CSS, SVG & D3 in only a modest number of lines. There is a tooltip highlight that shows the actual figures of the data point you hover over.

Tooltip highlight on scatter points in D3

Here is the code from the app.js file, containing all the D3 for plotting & interaction.

document.addEventListener('DOMContentLoaded', function(){ // Just make sure DOM is ready.
    // Padding, width & height for SVG element - used to calculate everything else.
    var padding = 40;
    var width = 700;
    var height = 700;

    // Setup scales for X, Y position, colour & radius of points.
    var yScale = d3.scaleLinear()
                  .domain(d3.extent(regionData, d => d.subscribersPer100))
                  .range([height - padding, padding]);

    var xScale = d3.scaleLinear()
                  .domain(d3.extent(regionData, d => d.medianAge))
                  .range([padding, width - padding]);

    var colourScale = d3.scaleLinear()
                  .domain(d3.extent(regionData, d => d.medianAge))
                  .range(['lightgreen', 'black']);

    var radiusScale = d3.scaleLinear()
                  .domain(d3.extent(regionData, d => d.growthRate))
                  .range([2, 20]);

    // Register the axis & individual ticks for the grid.
    var xAxis = d3.axisBottom(xScale)
                  .tickSize(-height + 2 * padding)

    var yAxis = d3.axisLeft(yScale)
                  .tickSize(-width + 2 * padding)

    // Draw each axis.
    d3.select('svg')
      .append('g')
        .attr('transform', 'translate(0, ' + (height - padding) + ')')
        .call(xAxis);

    d3.select('svg')
      .append('g')
        .attr('transform', 'translate(' + padding + ', 0)')
        .call(yAxis);

    // Plotting the data (regionData is loaded from a separate data file).
    d3.select('svg')
        .attr('width', width)
        .attr('height', height)
      .selectAll('circle')
        .data(regionData)
        .enter()
        .append('circle')
        .attr('cx', d => xScale(d.medianAge))
        .attr('cy', d => yScale(d.subscribersPer100))
        .attr('fill', d => colourScale(d.medianAge))
        .attr('r', d => radiusScale(d.growthRate))
        .attr('stroke', 'black');

    // Axis labels & Title.
    d3.select('svg')
      .append('text')
        .attr('x', width / 2)
        .attr('y', height - padding)
        .attr('dy', '1.5em')
        .style('text-anchor', 'middle')
        .text('Median Age');

    d3.select('svg')
      .append('text')
        .attr('transform', 'rotate(-90)')
        .attr('x', -height / 2)
        .attr('y', padding)
        .attr('dy', '-1.5em')
        .style('text-anchor', 'middle')
        .text('Subscribers per 100');

    d3.select('svg')
      .append('text')
        .attr('x', width / 2)
        .attr('y', padding - 20)
        .attr('font-size', '1.5em')
        .style('text-anchor', 'middle')
        .text('Regional Statistical Data');

    // Append tooltip div to body for use in the hover handlers below.
    var tooltip = d3.select('body').append('div')
        .attr('class', 'tooltip')
        .style('opacity', 0);

    // Transition hover & Tooltip for each plotted point.
    var circle = d3.selectAll('circle');
    circle.on('mouseover', function(d) {
        let r = d3.select(this).attr('r');
        d3.select(this).attr('r', r * 1.1);

        let html = '&nbsp;<strong>Region:</strong> ' + d.region + '<br />' +
                   '&nbsp;<strong>Median Age:</strong> ' + d.medianAge + '<br />' +
                   '&nbsp;<strong>Subscribers per 100:</strong> ' + d.subscribersPer100 + '<br />' +
                   '&nbsp;<strong>Growth Rate:</strong> ' + d.growthRate + '<br />';

        tooltip.html(html)
            .style('left', (d3.event.pageX + 15) + 'px')
            .style('top', (d3.event.pageY - 28) + 'px')
            .style('opacity', .9);
    }).on('mouseout', function(d) {
        let r = d3.select(this).attr('r');
        d3.select(this).attr('r', r / 1.1);
        tooltip.style('opacity', 0);
    });

}, false);

As you can see, every level of detail in drawing the graph – from the axes to the points and the text labelling them – is thought of and catered for by D3, even if it can seem alien to someone who is used to a world of high-level Javascript plotting libraries.

If you want to get the full code for each file to run/modify this yourself, you can get it here:

I am continuing on to the advanced D3 module of my current studying, but I think it’s important people realise that, although it can be challenging to get into advanced SVG drawing (as opposed to generating it automatically), D3 is wholly worthwhile!


Leaving behind ES5 for ES20xx – The (r)evolution of Javascript

The evolution of Javascript as a language is speeding up more than ever before. Its debated humble beginnings on Netscape are now hidden behind a formidable community spirit that seems to be growing out of the recent big changes in how Javascript is seen & used by different developers. It is now harnessed for more uses than the average developer thought possible just around 10 years ago*.

Let’s just recap a brief history of how this all came to be:

  • 2008 – ECMA specification 4 is due to be released. After years of work, and many a disagreement between parties trying to swing it more to their own needs, it ends up a blog-based sparring match between Brendan Eich (Mozilla) & Chris Wilson (Microsoft). This is all based around the argument over the incompatibility the proposed block of changes would cause, amid general open/closed rivalry.
  • 2009 – After many a specification revision, incompatibility argument and unhappy camper on all sides of the Javascript world, ECMA TC39 publish ES5 in December; the final edition is agreed by all parties but, as we all find out, loosely interpreted.
  • 2015 – Just when you thought ECMA would never unite – quite like the factions in the film Braveheart – the release of ES2015 is finalised. It comes packed with new features and new ways of writing faster, cleaner and more efficient code, especially for the class-based programming camp. (2015 Specification)

Now, up to this point, the updates and the periods between them had been drawn out and more about semantic cross-compatibility than advancing the language. ES2015 showed a more decisive specification, with features for all and a brighter future. All they had to do now was keep it up.

Meanwhile you & everyone else were trawling articles and whitepapers detailing the new way of getting by on the client or server side of Javascript. The clean, pretty syntax of arrow functions still fresh every time you view your newly ES2015-refactored files.

There is a new air to the whole world of Javascript. Node.js, booming with popularity by this point, is starting to become the over-packed npm nightmare we now deal with daily; reminiscent of the fun you could have breaking the rpm package management system in the mid 2000s.

The client-side sector is now dominated by frameworks such as React & Angular, making sure the abstraction between different parts of the age of ‘single page apps’ is set in stone. The goalposts for your front-end Javascript interview processes are now based upon who can master the most of these 3rd-party UI libraries that pop up like whack-a-moles.

While you have been distracted with all the shining lights and bells & whistles of Javascript taking on the challenge to re-invent itself as hip and down with those class-based kids, ECMA were beginning to get the hang of a release cycle.

ES2016/2017 came out in the actual years they were supposed to! Not only that, they are focused and contain only well thought-out additions such as [].includes, native promises with generators, async functions and the lovely await keyword. (2016 Specification) (2017 Specification)

ES2018 promises even more ways to make that code as async as you like, while looking forward to a new, revolutionised world of Javascript. Not just that, but additional rest/spread operators and additions to the long-neglected RegExp. (2018 Specification)

This united front has also been taken on by the browsers, which are working towards full compatibility more than ever before. You could say this was inevitable, but it could easily have gone the other way.

Thank fuck for Javascript.

* I am referring to 2009 & the creation of Node.js; I am aware, however, that server-side Javascript did exist before V8 really put it on the map.

Also my articles convey both technical fact AND opinions on technology, programming and software both closed & open source. Please keep this in mind! 🙂


2019 is here

I am trying to start 2019 off being mega productive, learning as many new technologies or new development practices/platforms as possible!

I am currently making quick progress through a Udemy course called Advanced Web Developer Bootcamp, in the hope of quickly updating my web development skills, which are a tiny bit behind, learning libraries & methods such as Flexbox, React, D3, Warbler and more.

After this I am planning on branching out and learning some basic Android development, brushing up on my Ruby on Rails skills and also doing more courses related to my financial algorithms project utilising Python, Machine Learning & Neural Networks.

On top of all this I am also hoping to write some articles both for the site but also to publish on – the current popular “clap” based publishing platform. Writing on technical topics is something I’ve wanted to do for a while now and have toyed with on this site, so I need to dive in.

An exciting year is ahead! I am also hoping to add some portfolio pieces to the site for my freelance work as time goes on.


Using Command Line Tools to Aid Development – Part 2 (Sed)

The first part of this series of articles is here, and I recommend you start with that one to get an idea of the series and of why using command line tools to aid your development (in any language/environment) can be very useful.

I am going to dive right in here with another text-based tool called ‘sed’. Sed stands for ‘stream editor’ and it is another tool from the very early Unix days, created in 1974. Its goal was a stream-based implementation of text editing utilising regular expressions – back then, a new era for efficiently processing text.

These days GNU has its own offering of sed, which has become the standard edition; this version includes some big improvements and new features, including editing files in-place, and other functionality for convenience. The package is available by default on most Unix/Linux distros and can be installed for use on Windows-based setups as well.

Once you have made sure it is available in your chosen development environment, you can use sed for various operations that would take longer, or be tedious, if done manually or with GUI tools. I will show examples of some of these situations below.

In modern programming there is a big emphasis on splitting your projects/scripts/programs into multiple neatly organised files in various formats and structures, in order to keep your project to standards, make it easier for other developers to read & modify, satisfy compiler rules etc. This can mean you end up needing to replace certain words, phrases, variable names and the like project-wide or folder-wide in order to propagate name changes and such. Sed can help you handle situations like this with ease.

Use – Replacing text in files
sed -i 's/<search>/<replace>/' <filename>
sed -i s/abc/cba/ file1
Use – Replacing text in all files in a folder/project
sed -i 's/<search>/<replace>/' *
sed -i s/abc/cba/ *
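Worth noting: without a trailing `g` flag, sed only replaces the first match on each line – add `g` to the expression to replace every occurrence:

```shell
printf 'abc abc\n' > file1
sed 's/abc/cba/' file1    # first match per line only: cba abc
sed 's/abc/cba/g' file1   # all matches: cba cba
```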
Use – Cleaning up code in your project or file

You can go further with this sort of concept and start using the functionality of sed to clean up your code, remove unwanted function calls and such with commands such as the ones below.

sed -i '/^ *$/d' <filename>
^ This removes all lines with just whitespace or that are blank.

sed -i s/<function name>\(\)\;// <filename>
^ This would remove all calls in a file or project to a certain function.

sed -i '/\s*#.*$/d' <filename>
^ This would remove all comments that use the # style from your file or project.

sed -i 's/<old name>/<new name>/g' <filename>

^ This last one is self explanatory really but it is very useful for when you need to change the name of something that gets referenced or called all over the place in a large project.

You will notice the different styles of sed usage: single quoting can be useful if you’re using certain characters in your command that will upset your preferred command line shell. The function-call example above shows the other method of escaping your special characters using \ – this is just another way to do the same thing.

Advanced switches and uses

Once you get the hang of some basic sed commands, you can use some of the switches to chain commands, store commands in sed ‘script’ files and so on. Here are some examples of switches:

The -e switch can be used for chaining:
sed -i -e '<command>' -e '<command>' <filename>
sed -i -e '/\s*#.*$/d' -e '/^ *$/d' <filename>
^ This chains the commands up to remove all # style comments and all blank/whitespace lines from your file or project. You can chain up more expressions and eventually come up with one-liners to clean your code up when required.

You can create sed scripts by just putting one sed command per line into a file and calling it with the -f switch.

File x.sed:

/^ *$/d


sed -f x.sed <filename>

Creating little sed scripts and keeping them handy to clean up code or even data-files (e.g. simple csv processing) can be a good use of this feature.
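For example, a tiny clean-up script along those lines (file names here are hypothetical; the expressions mirror the ones above, with the comment rule anchored so only whole comment lines are dropped):

```shell
# clean.sed - strip blank/whitespace-only lines and whole-line # comments.
cat > clean.sed <<'EOF'
/^ *$/d
/^\s*#/d
EOF

sed -f clean.sed data.csv
```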

This concludes our introduction to using the great command line tool ‘sed’ to aid development. There are lots of other great ways you can use sed to process any type of text file, or even streams from pipes (similar to grep in the first article), to leverage the most out of its functionality. If you need a more adaptable, advanced version of what sed does, with even more functionality to really mangle your text-based files, definitely look into the command line tool AWK.

There will be a part 3 coming soon, I apologise for the long waits between my articles.

Live the dream! 🙂


Quick Update

I have been busy moving into my new place which is a bigger better more mobile home style van. It needs some work doing and re-decorating and eventually filling up with the furniture I have had in storage for a whole year but it is a great start to the new era.

I also saw this really good photo taken recently at 4am just near Glastonbury Tor which is a tourist & photography hotspot nearby to where my van is located, which will help show how easy it is to get distracted and end up outside living here.

I promise to have some more content up on the tech side soon, I am hoping to do an article on Linear Regression. 🙂


Rsync backup script for websites (document root) & MySQL databases – to or others

I have just got an account with to back up one of my server’s websites, including the document root for the webserver and all the MySQL databases for them. I made a slightly fancier Bash script for storing everything; it is a little more complex than a single rsync command and caters for dumping the databases to individually named/dated files.

Read the header comments of the file for instructions on usage.

#!/bin/bash
# Backup a directory (document root) and all MySQL databases and send to
# You must have an account with backup service (or similar) and have setup automatic ssh login via keys.
# See:
# Create the remote RSYNC_PATH using ssh before running the script.
# It is best to run this with a cronjob either daily/weekly/monthly,
# run it with --quiet in the crontab for no output except from errors
# <>

RSYNC_DEST=""          # user@host ssh login for your backup service (this variable name is reconstructed - fill in your own).
RSYNC_PATH=""          # Do not start or end the path with a /
DOCUMENT_ROOT_PATH=""  # Do put trailing slash on this path.
MYSQL_USER=""
MYSQL_PASSWORD=""

# No need to edit below lines unless you need to modify behaviour.

QUIET=false
if [ "$1" = "--quiet" ]; then
    QUIET=true
fi

if [ "$QUIET" = false ]; then
    echo "Starting backup.."
    echo "Dumping all MySQL databases.."
fi

databases=`mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`

mkdir -p sql
for db in $databases; do
    # Skip the MySQL system schemas and anything starting with an underscore.
    if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]]; then
        if [ "$QUIET" = false ]; then
            echo "Dumping database: $db"
        fi
        mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD --databases $db > sql/`date +%Y%m%d`.$db.sql
    fi
done

if [ "$QUIET" = false ]; then
    echo "Done."
    echo "Sending SQL databases to $RSYNC_PATH/sql"
fi
# The exact rsync flags were lost from the original post; -az (archive + compress) is a sensible default.
rsync -az sql/ "$RSYNC_DEST:$RSYNC_PATH/sql"

if [ "$QUIET" = false ]; then
    echo "Done."
    echo "Backing up document root to $RSYNC_PATH/sites"
fi
rsync -az "$DOCUMENT_ROOT_PATH" "$RSYNC_DEST:$RSYNC_PATH/sites"

if [ "$QUIET" = false ]; then
    echo "Done."
    echo "All Done."
fi

Make sure to fill in the variables at the top and set up your cron job once you have tested it – check out for their awesome setup and nicely priced remote backup solutions. Their technical support is also most helpful.
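For reference, a crontab entry along these lines (the script path is hypothetical) runs the backup nightly at 3am and keeps output down to errors only:

```shell
# m h dom mon dow  command
0 3 * * * /home/user/bin/backup.sh --quiet
```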




I have just moved into a caravan. After a few issues getting it sorted – including the 16A hookup and internal electrics failing, and having to rewire some of it to get mains power working – I have finally started living in it. Gas is sorted too, so I just need to sort proper water and internet (using 4G tether through my phone for now – thanking GiffGaff for that!).

I was trying to set up a 200m cat5e Ethernet run using PoE (Power over Ethernet) to boost the signal with a repeater at the 100m mark. The equipment & theory tested fine where I was living before, in a house, but now that I have run the cable properly at the plot I am living on it has refused to work – so it might be back to the drawing board if troubleshooting cannot fix it.

Hopefully I’ll update on this again soon. I will start to go mad without Internet if it’s too long, but it is almost Christmas so I might wait on sorting it for a little bit.


Getting free DNS & email for your domain

SO. I wanted to start this site/project myself, but I didn’t have any real funds to put towards it at all. Luckily it was recently the horrifyingly pointless ‘black Friday/cyber Monday’ thing that people care way too much about – the side effect of which is super ridiculously cheap domain names and other online bits ‘n bobs. So from my usual domain-buying site that I always recommend, which is Namecheap, I got this domain for under £2 – a steal for a year.

I already had some free web hosting to make use of, so there were only two things not included with what I already had/bought: a DNS provider for the domain and, of course, email for the domain too.

These two things can always be found for free if you know where to look, so I will show you a couple of services that provide them for me on the regular.


Copyright 2017 DNSExit

DNS can be found for free on an old-looking, not well-known site called DNS Exit. I have used DNS Exit to provide free DNS for domains for at least 6 years and never once had any issues or failures from their service. It even supports dynamic IP addressing using software they provide. They also sell all the usual hosting provider stuff like premium DNS services, web hosting, email hosting/email relays etc.


Zoho logo (Copyright 2017 Zoho)

Free email is a bit harder to find if you wish to use your own domain without any forwarding situation, but there are some good providers if you search around, and I always end up going for Zoho Mail. Think of Zoho like Google in the sense of email providers: they don’t just provide an email service, they go all out.

Features of Zoho Mail/Zoho ‘Workplace’ are:

  • Full featured email service with Webmail/Mobile apps/Mail servers for your own clients with all modern features you’d expect when you are paying & some more.
  • Migration services for getting your data across from most other providers you could think up.
  • Integration with CRM setups/apps.
  • Docs. (Just like Google Docs but free for business/own domain)
  • Calendar.
  • Tasks.
  • Notes.
  • Contacts.
  • Integration with all other Zoho services. (See below)

Basically Zoho are an awesome company that provide all these services and others too, like their project-management (Basecamp/Asana-esque) service, invoicing and CRM services and a bunch of other things – so check them out and ditch overpriced, greedy services that offer a lot less. Even if you end up requiring some of the paid features on a Zoho service (of which I find it hard to come across any), you’re still on a big gain overall. Zoho have also just launched ‘One’, a service combining all of their offerings to run your ‘entire business’ from one suite, which looks promising.

Services like these are how I was even able to make this site possible so thought I’d give them some props and help others who are on really tight budgets launch their own stuff.