D3 project 2 – Public DNS World Nodes Map (Part 1)

I have been working on my final project for my time with D3. I decided to focus purely on the (Geo|Topo)JSON style world maps and to parse and then display some network information using a combination of GeoIP lookups and longitude/latitude.

For the first part of this process, I needed to find some data and get it ready for use. I stumbled across this website:

Public DNS servers are globally indexed on this site, with some basic information on their reliability, reverse DNS, software version and so on, and, most importantly, their IP – perfect for what we will need.

I downloaded the full CSV list of all ‘valid’ nameservers present, which is located at – there are many thousands of nameservers listed at the time of writing. This is too many for a nice looking node map, so I used some GNU/Linux trickery to get a randomly selected CSV of 50 nodes.

$ shuf -n 50 nameservers.csv > c.csv

Make sure the CSV header is copied to the top of your new c.csv file – this is important for the next step, which is to get geographical locations for these different IP addresses.
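One way to keep the header automatically (a small sketch using the same filenames as above) is to write the header row out first and only shuffle the data rows:

```shell
# Write the header row first, then append 50 shuffled data rows.
head -n 1 nameservers.csv > c.csv
tail -n +2 nameservers.csv | shuf -n 50 >> c.csv
```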

$ cat c.csv
ip,name,country_id,city,version,error,dnssec,reliability,checked_at,created_at
,,TT,Port of Spain,dnsmasq-2.49,,false,1.00,2019-02-27T13:00:21Z,2017-01-08T22:03:27Z
,,BR,Rio de Janeiro,,,false,0.99,2019-02-27T12:54:31Z,2015-01-10T15:46:41Z

I chose to use the service to get my GeoIP information, but there are a few services around; ipstack lets you make a certain number of calls a day for free, and we only need 50 calls to get our data.json file ready for D3.

I used Node.JS along with a couple of modules from npm to hit up the API and make a data.json file neatly organised with all the data we need.

$ npm install --save request request-promise-native csvtojson

Once you have the required modules, it's as simple as a few neat lines of Node.js – this file is parse_csv.js.

/* Parse the DNS public node list to JSON, then collect GeoIP data for each IP.
 * By Jed V. (2019) */
const csv = require('csvtojson');
const fs = require('fs');
const rp = require('request-promise-native');

const orig = 'c.csv';

console.log(`Parsing file: ${orig}`);

csv()
    .fromFile(orig)
    .subscribe((json, line) => {
        // Long operation for each row: look up GeoIP data and append it.
        return new Promise((resolve, reject) => {
            const options = {
                uri: `http://api.ipstack.com/${json['ip']}?access_key=********************`,
                json: true
            };
            rp(options)
                .then(function(data) {
                    console.log(`IP: ${data.ip}, Lat/Long: ${data.latitude},${data.longitude}`);
                    json['longitude'] = data.longitude;
                    json['latitude'] = data.latitude;
                    resolve();
                })
                .catch(function(err) {
                    reject(err);
                });
        });
    })
    .then((json) => {
        console.log('Writing JSON object to file...');
        fs.writeFile('./data.json', JSON.stringify(json, null, 2), 'utf-8', (err) => {
            if (err) throw err;
        });
    })
    .catch((err) => {
        throw err;
    });

Make sure to replace the *s with your API key if you wish to use the above script. We use the csvtojson module's native promise support alongside request-promise-native to quickly turn c.csv into data.json.

We make an API request for each row in the CSV, append the returned longitude/latitude to the row object, and csvtojson passes it on until the final JSON array is complete and written to file – it then looks like the snippet below:

        "ip": "",
        "name": "",
        "country_id": "TT",
        "city": "Port of Spain",
        "version": "dnsmasq-2.49",
        "error": "",
        "dnssec": "false",
        "reliability": "1.00",
        "checked_at": "2019-02-27T13:00:21Z",
        "created_at": "2017-01-08T22:03:27Z",
        "longitude": -61.5167,
        "latitude": 10.65
        "ip": "",
        "name": "",
        "country_id": "BR",
        "city": "Rio de Janeiro",
        "version": "",
        "error": "",
        "dnssec": "false",
        "reliability": "0.99",
        "checked_at": "2019-02-27T12:54:31Z",
        "created_at": "2015-01-10T15:46:41Z",
        "longitude": -43.3307,
        "latitude": -22.9201

Great – in just a couple of motions we have a data.json file that we can use in the browser with D3.js to produce a world map of highlighted public DNS nodes.

I’m going to write up how I did this in part 2!


Learning D3

This week I have been getting up to speed with one of the most powerful ways to visualize datasets in the browser, in any way you can imagine – using D3.

D3 describes itself as

“D3.js is a JavaScript library for manipulating documents based on data. D3 helps you bring data to life using HTML, SVG, and CSS.” – Homepage

On the face of it this might seem like just another jQuery-style Javascript graphing library such as Highcharts or Chart.js. I thought this too, but it couldn't be further from the truth. D3 focuses on purely data-driven development, leaving you to create the entire visual side from scratch using the power of SVG.

This allows you to create graphs from the ground up, giving unlimited customisation options and never leaving you trying to work around the limitations of the library. There is a steep learning curve if, like myself, you haven't done much with SVG before. I recommend getting cheatsheets for both the library and SVG, and getting the most in-depth learning material you can find/afford.

There are multiple sites, including the original D3 site, that offer hundreds and hundreds of detailed & amazing examples of what you can accomplish once you start with D3 and the browser. It doesn't even have any dependencies whatsoever – allowing you to get straight into the fight.

Data sourced from:

This is a chart I created using just HTML, CSS, SVG & D3 in only a modest number of lines. There is a tooltip highlight that shows the actual figures for the data point you hover over.

Tooltip highlight on scatter points in D3

Here is the code from the app.js file, containing all the D3 for plotting & interaction.

document.addEventListener('DOMContentLoaded', function(){ // Just make sure DOM is ready.
    // Padding, width & height for SVG element - used to calculate everything else.
    var padding = 40;
    var width = 700;
    var height = 700;

    // Setup scales for X, Y position, colour & radius of points.
    var yScale = d3.scaleLinear()
                  .domain(d3.extent(regionData, d => d.subscribersPer100))
                  .range([height - padding, padding]);

    var xScale = d3.scaleLinear()
                  .domain(d3.extent(regionData, d => d.medianAge))
                  .range([padding, width - padding]);

    var colourScale = d3.scaleLinear()
                  .domain(d3.extent(regionData, d => d.medianAge))
                  .range(['lightgreen', 'black']);

    var radiusScale = d3.scaleLinear()
                  .domain(d3.extent(regionData, d => d.growthRate))
                  .range([2, 20]);

    // Register the axes & individual ticks for the grid.
    var xAxis = d3.axisBottom(xScale)
                  .tickSize(-height + 2 * padding);

    var yAxis = d3.axisLeft(yScale)
                  .tickSize(-width + 2 * padding);

    // Draw each axis.
    d3.select('svg')
      .append('g')
        .attr('transform', 'translate(0, ' + (height - padding) + ')')
        .call(xAxis);

    d3.select('svg')
      .append('g')
        .attr('transform', 'translate(' + padding + ', 0)')
        .call(yAxis);

    // Plotting the data.
    d3.select('svg')
        .attr('width', width)
        .attr('height', height)
      .selectAll('circle')
      .data(regionData)
      .enter()
      .append('circle')
        .attr('cx', d => xScale(d.medianAge))
        .attr('cy', d => yScale(d.subscribersPer100))
        .attr('fill', d => colourScale(d.medianAge))
        .attr('r', d => radiusScale(d.growthRate))
        .attr('stroke', 'black');

    // Axis labels & Title.
    d3.select('svg')
      .append('text')
        .attr('x', width / 2)
        .attr('y', height - padding)
        .attr('dy', '1.5em')
        .style('text-anchor', 'middle')
        .text('Median Age');

    d3.select('svg')
      .append('text')
        .attr('transform', 'rotate(-90)')
        .attr('x', - height / 2)
        .attr('y', padding)
        .attr('dy', '-1.5em')
        .style('text-anchor', 'middle')
        .text('Subscribers per 100');

    d3.select('svg')
      .append('text')
        .attr('x', width / 2)
        .attr('y', padding - 20)
        .attr('font-size', '1.5em')
        .style('text-anchor', 'middle')
        .text('Regional Statistical Data');

    // Transition hover & Tooltip for each plotted point.
    var circle = d3.selectAll('circle');
    circle.on('mouseover', function(d) {
        let r = d3.select(this).attr('r');
        d3.select(this).attr('r', r * 1.1);

        let html  = '&nbsp;<strong>Region:</strong> ' + d.region + '<br />' +
                    '&nbsp;<strong>Median Age:</strong> ' + d.medianAge + '<br />' +
                    '&nbsp;<strong>Subscribers per 100:</strong> ' + d.subscribersPer100 + '<br />' +
                    '&nbsp;<strong>Growth Rate:</strong> ' + d.growthRate + '<br />';

        tooltip
            .html(html)
            .style('left', (d3.event.pageX + 15) + 'px')
            .style('top', (d3.event.pageY - 28) + 'px')
            .style('opacity', .9);
    }).on('mouseout', function(d) {
        let r = d3.select(this).attr('r');
        d3.select(this).attr('r', r / 1.1);
        tooltip.style('opacity', 0);
    });

    // Append tooltip div to body for use later (var hoisting makes it
    // available inside the handlers above by the time they fire).
    var tooltip = d3.select('body').append('div')
      .attr('class', 'tooltip')
      .style('opacity', 0);

}, false);
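The script above assumes a regionData array has already been loaded on the page. The field names below match the ones the scales and tooltip use, but the region names and values are made up purely for illustration:

```javascript
// Hypothetical sample of the regionData array the plotting code expects.
var regionData = [
    { region: 'Example Region A', medianAge: 28.5, subscribersPer100: 95.1, growthRate: 1.2 },
    { region: 'Example Region B', medianAge: 41.2, subscribersPer100: 120.7, growthRate: 0.4 }
];
console.log(regionData.length); // 2
```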

As you can see, every detail of drawing the graph – from the axes to the points and the text labelling them – is thought of and catered for by D3, even if it can seem alien to someone used to a world of high-level Javascript plotting libraries.

If you want to get the full code for each file to run/modify this yourself, you can get it here:

I am continuing on to the advanced D3 module of my current studying, but I think it's important people realise that although it can be challenging to get into advanced SVG drawing (as opposed to generating it automatically), D3 is wholly worthwhile!


Leaving behind ES5 for ES20xx – The (r)evolution of Javascript

The evolution of Javascript as a language is speeding up more than ever before. Its debated, humble beginnings at Netscape are now hidden behind a formidable community spirit that seems to be growing out of the recent big changes in how Javascript is seen and used by different developers. It is now harnessed for more uses than the average developer would have thought possible just around 10 years ago*.

Let's just recap on a brief history of how this all came to be:

  • 2008 – ECMAScript specification 4 is due to be released. After being worked on for years and causing many a disagreement between parties trying to swing it towards their own needs, it ends up a blog-based sparring match between Brendan Eich (Mozilla) & Chris Wilson (Microsoft), centred on the argument that the proposed block of changes would break compatibility, amid general open/closed rivalry.
  • 2009 – After many a specification revision, incompatibility argument and unhappy campers on all sides of the Javascript camp, ECMA TC39 publishes ES5 in December; the final edition is agreed by all parties but, as we all found out, loosely interpreted.
  • 2015 – Just when you thought ECMA would never unite, quite like the factions in the film Braveheart, the release of ES2015 is finalised. It comes packed with new features and new ways of writing faster, cleaner and more efficient code, especially for the class-based programming camp. (2015 Specification)

Now up to this point, the updates, and the periods between them, had been drawn out and more about semantic cross-compatibility than advancing the language. ES2015 showed a more decisive specification, with features for all and a brighter future. All they had to do now was keep it up.

Meanwhile, you and everyone else were crawling through articles and whitepapers detailing the new way of getting by on the client or server side of Javascript, the clean, pretty syntax of arrow functions still fresh every time you viewed your newly ES2015-refactored files.

There was a new air to the whole world of Javascript. Node.js, booming in popularity by this point, was starting to become the over-packed npm nightmare we now deal with daily – reminiscent of the fun you could have breaking the rpm package management system in the mid 2000s.

The client-side sector is now dominated by frameworks such as React & Angular, making sure the abstraction between the different parts of the age of ‘single page apps’ is set in stone. The goalposts for front-end Javascript interview processes are now based on who can master the most of these 3rd-party UI libraries that pop up like a game of whack-a-mole.

While you were distracted with all the shining lights and bells & whistles of Javascript taking on the challenge of re-inventing itself as hip and down with those class-based kids, ECMA was beginning to get the hang of a release cycle.

ES2016/2017 came out in the actual years they were supposed to! Not only that, they are focused and contain only well thought-out additions such as [].includes, async functions and the lovely await keyword, built on top of native promises and generators. (2016 Specification) (2017 Specification)

ES2018 promises even more ways to make that code async-as-you-like while looking forward to a new, revolutionised world of Javascript. Not just that but additional rest/spread operators and additions to the long neglected RegExp. (2018 Specification)
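To put a few of the additions mentioned above in one place, here is a tiny sketch (runnable in Node 10+ or any modern browser; the names and values are mine, purely illustrative):

```javascript
// ES2016: Array.prototype.includes
const nums = [1, 2, 3];
console.log(nums.includes(2)); // true

// ES2018: object rest/spread
const base = { a: 1 };
const extended = { ...base, b: 2 };

// ES2017: async/await on top of native promises
async function doubled(x) {
    const value = await Promise.resolve(x);
    return value * 2;
}
doubled(21).then(v => console.log(v)); // 42
```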

This united front has also been taken on by browsers, which are working towards full compatibility more than ever before. You could say this was inevitable, but it could easily have gone the other way.

Thank fuck for Javascript.

* I am referring to 2009 & the creation of Node.js; I am aware, however, that server-side Javascript did exist before V8 really put it on the map.

Also my articles convey both technical fact AND opinions on technology, programming and software both closed & open source. Please keep this in mind! 🙂


2019 is here

I am trying to start 2019 off being mega productive, learning as many new technologies or new development practices/platforms as possible!

I am currently making quick work of a Udemy course called Advanced Web Developer Bootcamp, in the hope of quickly updating my web development skills, which are a tiny bit behind, learning libraries & methods such as Flexbox, React, D3, Warbler and more.

After this I am planning on branching out and learning some basic Android development, brushing up on my Ruby on Rails skills and also doing more courses related to my financial algorithms project utilising Python, Machine Learning & Neural Networks.

On top of all this, I am also hoping to write some articles both for this site and to publish on – the currently popular “clap” based publishing platform. Writing on technical topics is something I’ve wanted to do for a while now and have toyed with on this site, so I need to dive in.

An exciting year is ahead! I am also hoping to add some portfolio pieces to the site for my freelance work as time goes on.


Using Command Line Tools to Aid Development – Part 2 (Sed)

The first part of this series is here, and I recommend you start with that one to get an idea of the series and of why using command line tools to aid your development (in any language/environment) can be very useful.

I am going to dive right in here with another text-based tool called ‘sed’. Sed stands for ‘stream editor’; it is another tool from the very early Unix days, created in 1974. Its goal was a stream-based approach to text editing that utilises regular expressions, which back then opened a new era of efficient text processing.

These days GNU have their own offering of sed, which has become the standard edition; it includes some big improvements and new features, such as editing files in-place, plus other conveniences. The package is available by default on most Unix/Linux distros and can be installed on Windows-based setups as well.

Once you have made sure it is available in your chosen development environment, you can use sed for various operations that would take longer, or be tedious, done manually or with GUI tools. I will show examples of some of these situations below.

In modern programming there is a big emphasis on splitting your projects/scripts/programs into multiple neatly organised files, in various formats and structures, in order to keep your project to standards, make it easier for other developers to read & modify, satisfy compiler rules and so on. This can mean you end up needing to replace certain words, phrases, variable names and the like project-wide or folder-wide in order to propagate name changes. Sed can help you handle situations like this with ease.

Use – Replacing text in files
sed -i 's/<search>/<replace>/' <filename>
sed -i s/abc/cba/ file1
Use – Replacing text in all files in a folder/project
sed -i 's/<search>/<replace>/' *
sed -i s/abc/cba/ *
Note: by default the s command replaces only the first occurrence on each line; add a trailing g (e.g. s/abc/cba/g) to replace every occurrence.
Use – Cleaning up code in your project or file

You can go further with this concept and use sed to clean up your code, remove unwanted function calls and the like, with commands such as the ones below.

sed -i '/^ *$/d' <filename>
^ This removes all lines with just whitespace or that are blank.

sed -i s/<function name>\(\)\;// <filename>
^ This would remove all calls in a file or project to a certain function.

sed -i '/\s*#.*$/d' <filename>
^ This would remove all comments that use the # style from your file or project.

sed -i 's/<old variable, function or class name>/<new name>/g' <filename>

^ This last one is self explanatory really but it is very useful for when you need to change the name of something that gets referenced or called all over the place in a large project.
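A quick, self-contained demonstration of that rename (the file name and identifiers here are hypothetical, invented just for the example):

```shell
# A hypothetical rename: every 'total' in app.py becomes 'grand_total'.
printf 'total = calc()\nprint(total)\n' > app.py
sed -i 's/total/grand_total/g' app.py
cat app.py
```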

You will notice the different styles of sed usage: single quoting can be useful if you're using characters in your command that would upset your preferred command line shell. The function call example above shows the other method, escaping special characters with \; this is just another way to do the same thing.

Advanced switches and uses

Once you get the hang of some basic sed commands, you can use the switches to chain commands, store commands in sed ‘script’ files and so on. Here are some examples of switches:

The -e switch can be used for chaining:
sed -i -e '<command>' -e '<command>' <filename>
sed -i -e '/\s*#.*$/d' -e '/^ *$/d' <filename>
^ This chains the commands up to remove all # style comments and all blank/whitespace-only lines from your file or project. You can chain up more expressions and eventually come up with one-liners to clean your code up when required.
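Here is the chained clean-up run end-to-end on a throwaway file (a sketch assuming GNU sed for -i; demo.txt and its contents are invented for the example):

```shell
# Demonstrating the chained clean-up: a comment line, a blank line and a
# whitespace-only line are all removed, leaving just the two code lines.
printf '# a comment\nx = 1\n\n   \ny = 2\n' > demo.txt
sed -i -e '/\s*#.*$/d' -e '/^ *$/d' demo.txt
cat demo.txt
```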

You can create sed scripts by just putting one sed command per line into a file and calling it with the -f switch.

File x.sed:

/^ *$/d


sed -f x.sed <filename>

Creating little sed scripts and keeping them handy to clean up code or even data-files (e.g. simple csv processing) can be a good use of this feature.
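The x.sed example above can be exercised like this (notes.txt and its contents are invented just to show the effect):

```shell
# Build the one-line sed script, then run it against a throwaway file;
# the blank and whitespace-only lines are stripped from the output.
printf '/^ *$/d\n' > x.sed
printf 'alpha\n\n   \nbeta\n' > notes.txt
sed -f x.sed notes.txt
```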

This concludes our introduction to using the great command line tool ‘sed’ to aid development. There are lots of other great ways you can use sed to process any type of text file, or even streams from pipes (similar to grep in the first article), to leverage the most out of its functionality. If you need a more adaptable, advanced version of what sed does, with even more functionality to really mangle your text-based files, definitely look into the command line tool AWK.

There will be a part 3 coming soon, I apologise for the long waits between my articles.

Live the dream! 🙂


Using Command Line Tools to Aid Development – Part 1 (Grep)

In the world of development today, everyone is looking to speed up the process while remaining efficient and not cutting corners. There are lots of different ways to achieve this, but a lot of the time I find developers miss out on some simple command line tools that can help, from locating every use of a keyword (be it a variable name, function name, value etc) across an entire project, to using linters to brush your code up to the strictest of standards.

Between the Bash shell, GNU/Linux and other command line tools, you can save yourself time and pain while also boosting your overall skill set. This is going to be part one of a series where I will show not just the tools themselves but a couple of use cases that actually help during the development or maintenance of code/projects.

Introducing… grep. Taken from Wikipedia, the standard descriptive line would be ‘grep is a command-line utility for searching plain-text data sets for lines that match a regular expression’. As a base description that is pretty accurate, but as time has gone on (the first version was released in 1974 for the original Unix operating system) it has gained quite a few features and uses.

Use – Searching a file for occurrences:

The plainest use of grep would be to purely search a file for an occurrence of a string. You can specify a regular expression or just a search string like so:

grep '<search>' <filename>
grep test file.txt
grep 'test' file.txt
grep ^test$ file.txt

Note: Quote marks around the search term are optional, but should be used if the search contains any characters that the Bash shell (or your preferred platform) would interpret as splitting the command, pipes and so on.
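A quick demonstration of the plain and anchored searches above, using a throwaway file invented for the example:

```shell
# 'test' matches any line containing the string; '^test$' matches only
# lines that are exactly 'test' from start (^) to end ($).
printf 'test\ntesting\nno match here\n' > file.txt
grep 'test' file.txt
grep '^test$' file.txt
```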

Use – Searching wider for occurrences:

You can use grep to search multiple files all the way up to multiple directories recursively file-by-file.

grep <search> <multiple files>
grep test file1.txt file2.txt file3.h file4.cpp file5.txt

grep -R <search> <directory>
grep -R test test_files/ 
grep -R test test_directory1/ test_directory2/
grep -R test *
^ Searches all files in the current directory and all sub-directories, wildcard style.

You can also mix file & directories in the search radius.

grep -R test test.txt test_files/

You will notice immediately, when you give grep more than one file or any directories to search, that it starts reporting the file each occurrence was found in at the start of each line, so you can see which files contain what you were searching for.

livethedream$ grep -R test test.txt some_files/

You can use grep with the wildcard approach in the root of your own or other people's code, libraries and projects to find where a function/class/variable is declared, defined or used. This is really helpful in speeding up finding your way around code you either wrote a long time back, are taking over maintaining from another developer, or are modifying as 3rd-party code for your own devices.

Use – Searching the pipe for occurrences:

Grep would not be what it is today without its use alongside the traditional stream pipes that operating systems have been making use of since around the time of its first version. In a traditional Bash shell setup on a Unix/Linux system, the pipe symbol | works by ‘piping’ the output of the command on the left-hand side into the command on the right-hand side, and this can be chained over and over again to great effect.

Here is a typical example of using grep with a pipe to search the output of any command you can execute that has output:

<command> | grep <search>
livethedream$ ls 
file1.txt	file2.txt	file3.txt	file4.txt	file5.txt	test.txt	test_files
livethedream$ ls | grep test

You can get really good results using grep in this way in regards to the development process by combining it with the output of tools like package managers, compilers, shell scripts, source control tools (git, hg, svn etc).

Another good use in the pipe instance is the -v switch, which inverts the match to remove the occurrences rather than return them. Taking our above example, let's try the same with -v.

<command> | grep -v <search>
livethedream$ ls 
file1.txt	file2.txt	file3.txt	file4.txt	file5.txt	test.txt	test_files
livethedream$ ls | grep -v test

You can see how this would be useful; you can even chain it, instead of using a regular expression, for quick removal of one or more separate searches like so:

livethedream$ ls | grep -v test | grep -v file1 | grep -v file3

You can also use this method with commands like cat, curl etc to parse your way through various types of data, quickly removing different matches without mucking about with regex. A good example, just for thought, is using cat to view a CSV file and using grep -v with multiple pipes to remove all lines that have a certain field value, leaving only the data you want.
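A sketch of that CSV idea (the file name, fields and values here are all hypothetical): drop every row whose status field is 'inactive'.

```shell
# View a CSV and filter out rows containing 'inactive' with grep -v.
printf 'name,status\nalice,active\nbob,inactive\ncarol,active\n' > users.csv
cat users.csv | grep -v inactive
```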

A quick example of applying this to searching development projects could be wanting to search for a function, but only where it is called in a certain way, or without a certain argument. I add the ( after the function name as an example of narrowing the search down to make sure it is a function call.

grep -R '<function name>(' * | grep -v '(true)' 

This call would search for all the calls (and the definition, but that isn't relevant) of a function across the entire project/codebase, but removes all the times it is called with a single boolean argument of true.

This concludes the introduction to grep for searching through code specifically. There is a whole host of other applications for grep, and there are a couple of other really useful switches that change the regular expression engine in use (if you use regex with it) or how it searches. Be aware that grep has different versions depending on the operating system, but it mostly acts the same for basic operations.

In part 2 of this series I will go even further with helping the development process using command line tools by introducing another key program that has a long history starting from the days of original Unix.


Quick Update

I have been busy moving into my new place, which is a bigger, better, more mobile-home-style van. It needs some work doing, re-decorating and eventually filling up with the furniture I have had in storage for a whole year, but it is a great start to the new era.

I also saw this really good photo, taken recently at 4am just near Glastonbury Tor, a tourist & photography hotspot near where my van is located; it helps show how easy it is to get distracted and end up outside when living here.

I promise to have some more content up on the tech side soon, I am hoping to do an article on Linear Regression. 🙂


Rsync backup script for websites (document root) & MySQL databases – to or others

I have just got an account with to back up one of my server's websites, including the document root for the webserver and all the MySQL databases for them. I made a slightly fancier Bash script for storing everything, a little more complex than just one rsync command, which caters for dumping the databases to individually named/dated files.

Read the header comments of the file for instructions on usage.

#!/bin/bash
# Backup a directory (document root) and all MySQL databases and send to
# You must have an account with backup service (or similar) and have setup automatic ssh login via keys.
# See:
# Create the remote RSYNC_PATH using ssh before running the script.
# It is best to run this with a cronjob either daily/weekly/monthly,
# run it with --quiet in the crontab for no output except from errors
# <>

MYSQL_USER=""
MYSQL_PASSWORD=""
RSYNC_PATH="" # Do not start or end the path with a /
DOCUMENT_ROOT_PATH="" # Do put a trailing slash on this path.

# No need to edit below lines unless you need to modify behaviour.

QUIET=false
if [ "$1" == "--quiet" ]; then
    QUIET=true
fi

databases=`mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`

if [ "$QUIET" = false ]; then
    echo "Starting backup.."
    echo "Dumping all MySQL databases.."
fi

mkdir -p sql
for db in $databases; do
    if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]]; then
        if [ "$QUIET" = false ]; then
            echo "Dumping database: $db"
        fi
        mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD --databases $db > sql/`date +%Y%m%d`.$db.sql
    fi
done

if [ "$QUIET" = false ]; then
    echo "Done."
    echo "Sending SQL databases to $RSYNC_PATH/sql"
fi
rsync -az sql/ "$RSYNC_PATH/sql"

if [ "$QUIET" = false ]; then
    echo "Done."
    echo "Backing up document root to $RSYNC_PATH/sites"
fi
rsync -az "$DOCUMENT_ROOT_PATH" "$RSYNC_PATH/sites"

if [ "$QUIET" = false ]; then
    echo "Done."
    echo "All Done."
fi

Make sure to fill in the variables at the top and set up your cron job once you have tested it – check out for their awesome setup and nicely priced remote backup solutions. Their technical support is also most helpful.



Fixing: PHP Fatal error: Cannot redeclare wp_ajax_press_this_save_post() on WordPress

So today, a client's site that had its WordPress version updated (to 4.9.1) started throwing this error when they tried to use the Metaslider plugin to upload new images. Some searching confirmed this was happening to more than a few other people, but I did not easily find any actual solutions from those asking, who probably, like me, had sites that needed the functionality this provided. The error:

PHP Fatal error:
Cannot redeclare wp_ajax_press_this_save_post()
(previously declared in wp-includes/deprecated.php:3921)
in wp-admin/includes/ajax-actions.php on line 3197

Some digging later, I found out the function was not just deprecated: in 4.9.1 and above it was moved out of the core entirely, into an optional plugin you can download called Press This. This is part of a general clean-up initiative where non-essential functionality is slowly being moved out of a more streamlined core and into optional plugins.

Just installing the now-optional plugin is not enough to fix the issue entirely though, at least for now. The easiest and least intrusive fix I could come up with was to comment out two functions in this file:


If you use your text editor to locate the top function, which is wp_ajax_press_this_save_post() you will find the other one below it and they should look like this:

/**
 * Ajax handler for saving a post from Press This.
 *
 * @since 4.2.0
 */
function wp_ajax_press_this_save_post() {
	include( ABSPATH . 'wp-admin/includes/class-wp-press-this.php' );
	$wp_press_this = new WP_Press_This();
	$wp_press_this->save_post();
}

/**
 * Ajax handler for creating new category from Press This.
 *
 * @since 4.2.0
 */
function wp_ajax_press_this_add_category() {
	include( ABSPATH . 'wp-admin/includes/class-wp-press-this.php' );
	$wp_press_this = new WP_Press_This();
	$wp_press_this->add_category();
}

Now use PHP comments to comment both functions out entirely, so that only the deprecation/new plugin functionality takes over. They should end up looking like the snippet below; then save the file.

/**
 * Ajax handler for saving a post from Press This.
 *
 * @since 4.2.0
 */
//function wp_ajax_press_this_save_post() {
//	include( ABSPATH . 'wp-admin/includes/class-wp-press-this.php' );
//	$wp_press_this = new WP_Press_This();
//	$wp_press_this->save_post();
//}

/**
 * Ajax handler for creating new category from Press This.
 *
 * @since 4.2.0
 */
//function wp_ajax_press_this_add_category() {
//	include( ABSPATH . 'wp-admin/includes/class-wp-press-this.php' );
//	$wp_press_this = new WP_Press_This();
//	$wp_press_this->add_category();
//}

Then you're all set. Hopefully WordPress have picked up on this and will take these two declarations out in the next version or two before it gets annoying.



I have just moved into a caravan. After a few issues getting it sorted, including the 16A hookup and internal electrics failing and having to rewire some of it to get mains power working, I have finally started living in it. Gas is sorted too, so I just need to sort proper water and internet (using a 4G tether through my phone for now, thanking GiffGaff for that!).

I was trying to set up a 200m cat5e Ethernet run using PoE (Power over Ethernet) to boost the signal with a repeater at the 100m mark. The equipment & theory tested fine at the house where I was living before, but now that I have run the cable properly at the plot I am living on it has refused to work, so it might be back to the drawing board if troubleshooting cannot fix it.

Hopefully I'll update on this again soon. I will start to go mad without Internet if it's too long, but it is almost Christmas, so I might wait a little bit before sorting it.