Category Archives: Web

So Long and Thanks for All the Fish
Date Created: October 23, 2015  Date Modified: October 24, 2015

Due to circumstances beyond my control I will be moving off this server and to a new host. It's likely that I will keep this domain redirecting to my new server, but in the chance you return here to a blank space, that is because of this move.

To clarify in case of speculation: this site was found to be in a “bad IP neighborhood” and I was having trouble with some emails bouncing. No malware was found on this site, so please don't be alarmed.

EDIT (Oct 23, 2015 @ 20:25): Funny how things turn out. The issues experienced were not due to the server (whose host I have remained with the whole time), but to some negative SEO used against me by a competitor. I have not changed hosts at any point and the problem is no longer present. I appreciate the work the crew at my hosting company have put in, and I must re-state that they are in no way at fault.

This rant was posted in Security, Web, Web Servers on by .

[MAP] Environmental Impacts…
Date Created: October 18, 2014  Date Modified: October 18, 2014

The map API is giving me some issues at the time of writing this, so I have set this post to publish at a later date when I hopefully have the map thing sorted…

This rant was posted in Web on by .

[MAP] Crime Statistics – International
Date Created: October 18, 2014  Date Modified: October 18, 2014

This is a work in progress to which I aim to add information from a number of sources over a number of years. Primarily I am taking my info from, but I will reference data sources as they are added. Currently it only shows homicide figures for 2009-2010.

This rant was posted in API's, Criminal Law, Law, Maps on by .

Date Created: May 9, 2014  Date Modified: May 9, 2014

This rant was posted in Web on by .

[MAP] Digital Attack Map
Date Created: October 30, 2013  Date Modified: October 30, 2013

Someone just shared this with me this morning so I thought I’d put it here so I can come back to it. The map represents DDoS (Distributed Denial of Service) attacks from data gathered by Google…

This rant was posted in Hacks, Internet, Maps, New Intelligence, Security, Web on by .

[REPORT] Freedom of the Internet
Date Created: October 3, 2013  Date Modified: October 6, 2013

Freedom House, a human rights group, has published its Freedom on the Net 2013 report, and given some of the major developments this year it is worth reassessing where we think we stand:

1. Blocking and filtering: In 29 of the 60 countries evaluated, the authorities blocked certain types of political and social content over the past year. China, Iran, and Saudi Arabia were the worst offenders, but filtering in democratic countries like South Korea and India has also affected websites of a political nature. Jordan and Russia intensified blocking in the past year.

2. Cyberattacks against regime critics: Opposition figures and activists in at least 31 countries faced politically motivated cyberattacks over the past year. Such attacks are particularly prevalent during politically charged events. For example, in Malaysia and Venezuela the websites of popular independent media were repeatedly subject to DDoS attacks in the run-up to elections.

3. New laws and arrests: In an increasing number of countries, the authorities have passed laws that prohibit certain types of political, religious, or social speech online, or that contain vague restrictions related to national security that are open to abuse. In 28 countries, users were arrested for online content. In addition to political dissidents, a significant number of those detained were ordinary people who posted comments on social media that were critical of the authorities or the dominant religion.

4. Paid progovernment commentators: A total of 22 countries saw paid commentators manipulate online discussions by discrediting government opponents, spreading propaganda, and defending government policies from criticism without acknowledging their affiliation. Spearheaded by China, Bahrain, and Russia, this tactic is increasingly common in countries like Belarus and Malaysia.

5. Physical attacks and murder: At least one person was attacked, beaten, or tortured for online posts in 26 countries, with fatalities in five countries, often in retaliation for the exposure of human rights abuses. Dozens of online journalists were killed in Syria, and several were murdered in Mexico. In Egypt, several Facebook group administrators were abducted and beaten, and security forces targeted citizen journalists during protests.

6. Surveillance: Although some interception of communications may be necessary for fighting crime or combating terrorism, surveillance powers are increasingly abused for political ends. Governments in 35 countries upgraded their technical or legal surveillance powers over the past year.

7. Takedown and deletion requests: Governments or individuals can ask companies to take down illegal content, usually with judicial oversight. But takedown requests that bypass the courts and simply threaten legal action or other reprisals have become an effective censorship tool in numerous countries like Russia and Azerbaijan, where bloggers are threatened with job loss or detention for refusing to delete information.

8. Blocking social media and communications apps: 19 countries completely blocked YouTube, Twitter, Facebook, or other ICT apps, either temporarily or permanently, over the past year. Communications services such as Skype, Viber, and WhatsApp were also targeted, either because they are more difficult to monitor or for threatening the revenue of established telecommunications companies.

9. Intermediary liability: In 22 countries, intermediaries—such as internet service providers, hosting services, webmasters, or forum moderators—are held legally liable for content posted by others, giving them a powerful incentive to censor their customers. Companies in China hire whole divisions to monitor and delete tens of millions of messages a year.

10. Throttling or shutting down service: Governments that control the telecommunications infrastructure can cut off or deliberately slow (throttle) internet or mobile access, either regionally or nationwide. Several shutdowns occurred in Syria over the past year, while services in parts of China, India, and Venezuela were temporarily suspended amid political events or social unrest.


MCC315 – Fieldwork 1 – Are RSS Feeds ‘Dead’ Technology?
Date Created: September 23, 2013  Date Modified: September 23, 2013

I should have published this live day-by-day (but would that count as plagiarism with TurnItIn?). Anyway, as a follow-on from Fieldwork 0, Is Generic Theory a Valid Tool at Cataloguing Websites?, here is Fieldwork 1: Are RSS Feeds Dead Technology?

Are RSS feeds a dead technology?

Day 1 – Background

I was an early adopter of RSS, so to speak. I don't recall exactly what year, though I can date a conversation on RSS I had with a tutor here to 2002, where I was advocating the use of RSS to deliver content to the desktop without the need for browsing multiple news sites for stories.
I ditched the desktop reader for Google Reader in 2006 when I found my bandwidth was being chewed up by the thousands of feeds I had set to download, as RSS shifted from being text-only to having rich-media content.
I have relied on RSS for more than its technical simplicity. During the migration of the University of Notre Dame Australia's old static-HTML website onto a CMS, I used RSS as a method of leaching parts of the site's content, automatically placing it in the database of the new CMS. Additionally, the bushfire alerts on the Department of Parks and Wildlife (formerly DEC) website are managed by RSS, as the approval process for the alerts needed to be not just simplified but also immediate[1].
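That content-leaching step can be sketched in a few lines. This is a hypothetical reconstruction, not the actual migration script; the sample feed, table schema, and field names are all stand-ins:

```python
# Sketch: pulling RSS items into a CMS-style database table.
# Uses only the standard library; the feed is a local sample string
# rather than a live fetch, so it runs anywhere.
import sqlite3
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Old Site</title>
    <item>
      <title>Page One</title>
      <link>http://example.com/page-one</link>
      <description>Content of page one.</description>
    </item>
    <item>
      <title>Page Two</title>
      <link>http://example.com/page-two</link>
      <description>Content of page two.</description>
    </item>
  </channel>
</rss>"""

def ingest(feed_xml, conn):
    """Parse an RSS 2.0 feed and insert each item as a content row."""
    conn.execute("CREATE TABLE IF NOT EXISTS content "
                 "(title TEXT, url TEXT UNIQUE, body TEXT)")
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        # UNIQUE on url plus OR IGNORE makes re-runs idempotent
        conn.execute("INSERT OR IGNORE INTO content VALUES (?, ?, ?)",
                     (item.findtext("title"),
                      item.findtext("link"),
                      item.findtext("description")))
    conn.commit()

conn = sqlite3.connect(":memory:")
ingest(SAMPLE_FEED, conn)
rows = conn.execute("SELECT title FROM content ORDER BY title").fetchall()
print(rows)  # [('Page One',), ('Page Two',)]
```

In the real migration a live fetch and the target CMS's own tables would replace the sample string and the throwaway SQLite database, but the shape of the job is the same.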
But then, earlier this year, Google sent me a notification that it was going to discontinue its Reader product. I panicked and asked online what Reader alternatives people were using. I had a few answers, but tried this product Feedly… and really haven't checked my feeds much since.
Day 2 – Opening Feedly

Ok, this kinda looks familiar; there's none of the ads Google used to run cluttering the screen. First stop XKCD–this should give me a good calculation of how long I haven't checked my RSS for; 13 unread. I swore it was longer than that. I've got time to look at comics later, I want to see some news! I check an RSS feed from a Yahoo! Pipe[2] I created a few years ago; it's a mash-up from a number of tech sources to filter content related to my particular interests in IT[3].
I browse over the other feeds I have, all grouped by subject: Drupal, Flex, Joomla!… I want something a bit lighter; the category “funny” should have something to spark interest[4]. Reconfirming I have a bad sense of humor, I decide to look back over my list. Not being a developer at this time, I don't really want to read about the latest updates to WordPress, or release notes from whatever git repository I was following. I need to find some new feeds. Reader had a way to search and browse for new content; what is Feedly going to offer me?
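For anyone who never used Yahoo! Pipes, the filtering mash-up it performed can be approximated in a short script. A sketch only, with made-up feed items and keywords (the actual Pipe is not public):

```python
# Sketch of a Pipes-style mash-up: merge items from several feeds and
# keep only those whose titles match keywords of interest, newest first.
# The dicts stand in for parsed RSS entries; keywords are illustrative.
KEYWORDS = {"drupal", "joomla", "wordpress"}

def mash_up(*feeds):
    """Combine item lists, filter titles by keyword, sort newest first."""
    merged = [item for feed in feeds for item in feed]
    matched = [item for item in merged
               if any(kw in item["title"].lower() for kw in KEYWORDS)]
    return sorted(matched, key=lambda item: item["date"], reverse=True)

feed_a = [{"title": "Joomla 3.1 released", "date": "2013-04-24"},
          {"title": "New laptop reviews", "date": "2013-05-01"}]
feed_b = [{"title": "WordPress security update", "date": "2013-06-21"}]

latest = mash_up(feed_a, feed_b)
for item in latest:
    print(item["title"])
```

Pipes did this (and much more) visually, but the underlying operation was exactly this kind of merge-filter-sort over feed items.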

Day 3 – Mobile Application

I downloaded the Feedly app for my tablet. It has a far prettier UI, but it presents the feeds differently to the web version, which is closer to Reader. I flipped through the usual feeds… In the early days of mobile app development, when everyone and his proverbial canine absolutely needed an app, the pushed-for-time web developer would build a simple RSS reader app, plug in the client's RSS feed from the corporate homepage or media releases, and re-skin the app to fit corporate styling[5].

Day 4 – Datasets and XML

The inherent beauty of RSS is that most of the time it's essentially XML, which is the same markup used in a wide range of applications. In the interests of this exercise I decided to visit and see if I could find some cool XML dataset in RSS format[6] and publish it online; in the end I chose to re-publish the ACT FOI applications RSS over the national public toilet registry[7].
The parsing of the XML was facilitated by the CMS itself: WordPress. No actual coding was needed on my part to get the feed to display on the site, though filters can be added with additional scripting. This method of aggregating content with RSS is more common on the net than is obvious from the front end.
So what is the point of this? Well, I have the Git repo of my WordPress theme feeding its changelog via RSS to my site, so I don't have to publish changes to the page manually[8]. This automation is invisible to front-end users–but it's a crucial time-saving tool for me. If RSS were “dead” I would have to find a new way to automate publishing.
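To illustrate the point that RSS is essentially just XML, here is roughly what a CMS does behind the scenes when it renders a feed as a page element. This is a sketch under assumptions: the changelog feed content is invented, and the real WordPress plumbing is more involved:

```python
# Sketch: treat an RSS feed as plain XML and render it as an HTML list,
# the way a CMS feed widget does. The changelog feed below is made up.
import xml.etree.ElementTree as ET

CHANGELOG_RSS = """<rss version="2.0"><channel>
  <title>Theme changelog</title>
  <item><title>v1.2 - fix header CSS</title><link>http://example.com/v1.2</link></item>
  <item><title>v1.1 - add widget area</title><link>http://example.com/v1.1</link></item>
</channel></rss>"""

def to_html_list(feed_xml, limit=5):
    """Turn the first `limit` feed items into an HTML unordered list."""
    root = ET.fromstring(feed_xml)
    items = root.iter("item")
    # zip against a range to cap the number of items rendered
    lines = ['<li><a href="{}">{}</a></li>'.format(
                 item.findtext("link"), item.findtext("title"))
             for _, item in zip(range(limit), items)]
    return "<ul>\n" + "\n".join(lines) + "\n</ul>"

html = to_html_list(CHANGELOG_RSS)
print(html)
```

Swap the sample string for a fetched feed and drop the result into a page template, and you have the invisible automation described above.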

Day 5 – Who is JSON?

Facebook are slowly deprecating RSS for their pages in favor of a newer format called JSON[9]; however, the concept behind JSON feeds is much the same as RSS[10]. A cynical interpretation of this move by one of the web's biggest content generators is that it limits the re-publishing of Facebook-originating content on external websites.
Due to the amount of content generated on Facebook, and the associated issues of ownership and privacy, it makes sense from Facebook’s perspective to make it harder to syndicate content.
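To show how close the two formats are conceptually, the same syndicated item can be expressed in either. The JSON field names below are illustrative only, not Facebook's actual Graph API schema:

```python
# Sketch: one syndicated item, read from RSS XML and re-expressed as
# JSON. Same data, different serialisation.
import json
import xml.etree.ElementTree as ET

rss_item = """<item>
  <title>Page status update</title>
  <link>http://example.com/post/1</link>
  <description>Hello, world.</description>
</item>"""

elem = ET.fromstring(rss_item)
as_json = json.dumps({
    "title": elem.findtext("title"),
    "url": elem.findtext("link"),
    "message": elem.findtext("description"),
}, indent=2)
print(as_json)
```

Either way a consumer ends up with the same title/link/body triple; only the parser changes.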

Conclusion – Are RSS Feeds Dead?

From a user's perspective RSS may be a disappearing technology, but as shown with my week's “research” into the phenomenon, it is clear that RSS feeds are still a vital technology for publishing and syndicating web content–even if the end users are oblivious to said technology. As new media professionals it would be highly ignorant to dismiss a technology just because its mainstream use is not as dominant as its esoteric use.
[3] The Pipe itself is not public so there is no URL to link, but the following articles were accessed from that feed:
[7] This page is fed by RSS from the URI above and parsed by the CMS, then published to the site's pre-defined CSS.
[8] As above.
[10] will return an RSS-like feed, structured in a more formal variation of XML.

This rant was posted in Web on by .

Facebook Page Insights
Date Created: September 1, 2013  Date Modified: September 1, 2013

Being web focused, I tend to use Google Analytics to gain visitor information, but in the walled garden of social media we are limited to some extent by what the application itself provides us. Luckily Facebook is improving their page ‘Insights’–a limited version of Analytics–so admins can keep informed of page activity.


Just at a glance of the Insights opening ‘overview’ page you can already tell that this page receives over twice as much interaction as it has likes; this kind of extended reach is often overlooked by page admins.

I have uploaded screen caps from the 3 other top-level pages: Page, Posts and People…


As you can see above, it shows the progression of likes and even unlikes (I wish that feature were available on people's own profiles!)…


Posts is a run-down of all the page's posts, which can be filtered–I have this filtered by reach just to show you that a page with 1000 likes can have a post reach over 3 times its page audience (and this usually grows the more users or ‘likes’ the page has).


And lastly, People breaks down our users' demographics.

Yeah, this was probably not the most informative post on Facebook Page Insights; but those who know me know my motivation 🙂

Is Generic Theory a Valid Tool at Cataloguing Websites?
Date Created: August 24, 2013  Date Modified: August 24, 2013

This was a fieldwork topic, but I went WAY over the word count, and after abridging the text it lost any coherency. Since it wasn't going to be assessed I thought I'd paste it here to account for the few months' worth of no new content.

Is Generic Theory a Valid Tool at Cataloguing Websites?

Burnette and Marshall, in defining web genres in Web Theory: An Introduction[1], fail to address the importance of the underlying technology that structures the content: the Content Management System (“CMS”). By identifying a website's CMS, you can identify its intended function, and by defining a site's function you can select the appropriate CMS.

Burnette and Marshall ask us to look at the example of the personal website[2]. At the time the article was written, events like Yahoo's shutdown of Geocities[3], the rise (then fall) of MySpace, and the continued adoption of social media had yet to greatly diminish the market for personal websites. The sole survivor to move against this trend is the blog; the “blog” gave the personal website a CMS.

Blog sites, though they do not all run the same underlying technology (WordPress being the most popular[4]), all operate with a similar methodology. Due to the similarities all blogging platforms share, end users still have little difficulty distinguishing a blog from a commercial website, thanks to the presentation of the data in a familiar way–by posts rather than pages–or as “…a personal homepage in a diary format”[5]. Even taking into consideration that there are nearly two thousand individual “themes” or designs available for WordPress[6], an experienced web-eye can tell the underlying CMS just by the presentation of data.

Burnette and Marshall fail to mention the “wiki”. Wikipedia popularised the concept of a user-generated encyclopedia, and the underlying technology, MediaWiki[7], has since been adopted by various other sites–from competing encyclopedias to collaborative web documentation for other CMSs[8]. Collaborative authoring had been around before the wiki, but it was the MediaWiki software that created the genre.

One web genre Burnette and Marshall break down further is the commercial site, which they divide into the “Company/Corporate” website, the “Commercial Trading” site and the “Institutional” site[9]. If we look at the underlying technology these types of sites utilise, we can better qualify them than if we were merely assessing them on a superficial user-end, aesthetic, or content-providing basis.

Burnette and Marshall lump together in the institutional-sites sub-category not just “Government Departments…” but also “…universities, schools, charities and non-for-profit organisations”[10]. The reality is these different organisations have very different needs when it comes to web infrastructure. In the example of universities, the Learning Management System (“LMS”), a form of CMS, is an integral online application for the day-to-day operations of a university. The LMS functionality could not easily be replicated with any off-the-shelf CMS product, hence a separate application is needed to carry out the online requirements of a tertiary institution. No mention of this type, or genre, of Content Management System is present in Burnette & Marshall's article, nor any mention at all of CMS software.

Burnette and Marshall signify a difference between an eCommerce “Commercial Trading Site” and a government agency's “Institutional Site”[11]. Government agencies can have a very broad function to the public, and there are cases where government sites offer eCommerce or web-shop functionality. Prior to the recent split of the Department of Environment and Conservation, the domain, in addition to providing similar content-hosting functions to other government domains, also had an eCommerce element where camp-ground bookings could be made and paid for online[12]. This eCommerce application was entirely separate from the consumer-level CMS that the main website was powered by (Joomla)[13].

A recurring technology utilised by the websites of government departments and larger corporations is the Microsoft product Microsoft Office SharePoint Server (SharePoint). This software in itself is not technically a web CMS (or EDMS, as it is often marketed), but more of a collaborative document portal where online publishing can be performed from within the existing Microsoft Office suite. An example of a government website running SharePoint is the Department of Housing website[14]; not only can public servants with little to no web training publish from their familiar office suite, the SharePoint software also includes functionality for wikis and corporate blogs.

Content is what drives the web, it’s what brings users to sites; How your content is displayed has to suit the content being served. Developers realised this and created different applications to serve the various types of content in the most functional user-friendly way. If we analyse the choice of CMS adopted by a website, then we already know what genre the site belongs to, without the need to employ any subjective qualities–like look, feel or branding.


[1] Burnette, R., & Marshall Web Theory: An Introduction – 2003, Page 90
[2] Ibid, Page 94-95
[3] Yahoo Quietly Pulls The Plug On Geocities – April 23rd, 2009
[4] WordPress is Powering 14.7 Percent of Top Global Websites
[5] O’Reilly, T O’Reilly Network: What is Web 2.0 – 2005
[8] The community collaborative documentation of the popular consumer web CMS, Joomla, uses MediaWiki to power its documentation subdomain.
[9] Burnette, R., & Marshall 2003 Page 94
[10] Ibid, Page 94
[11] Ibid, Page 94
[12] “On 1 July 2013 the Department of Environment and Conservation separated to become two agencies. This campgrounds website is now administered by the Department of Parks and Wildlife (DPaW) although “for an interim period the domain name will remain as and references to ‘DEC’ and ‘the Department of Environment and Conservation’ will continue to appear.”
[13] “ The Department of Parks and Wildlife and the Department of Environment Regulation commenced operations on 1 July 2013 following the separation of the former Department of Environment and Conservation.”

I’m actually glad I didn't submit it.

This rant was posted in Assignment, College Work, MCC315, Web on by .

Date Created: May 13, 2013  Date Modified: May 13, 2013

I have decided to give SecureWAMP a test run in a development environment. It is put together by SRWare, the same crew that brought us SRWare Iron–the Chrome alternative without the Google hooks.

So far, a cool feature is the 1-click installations: SecureWAMP includes Joomla (though an older version), WordPress, Typo3, and other server-side software, available to users with no prior knowledge of setting up Apache.

Searching Tips
Date Created: May 9, 2013  Date Modified: May 9, 2013

I found myself doing a little Google-Fu for a friend; they were looking for a PDF they had found on the net previously, but they couldn't locate the document in their web history as they had accessed it on a public machine.

“Can you search for a PDF on a specific domain?” they asked.

Yes, yes you can. In Google, type a query combining the "site:" and "filetype:" operators, for example: site:example.com filetype:pdf (with example.com standing in for the domain in question).


“Oh… it's that easy?” they said with humility…

Yes, yes it is.

This rant was posted in Google, Search, Web on by .

[LINKS] Government Information Security Resources
Date Created: May 9, 2013  Date Modified: October 30, 2013

The original content of this post is now on the following page.

[VIDEO] Kids Need to Learn Programming
Date Created: January 21, 2013  Date Modified: January 21, 2013

The message here–that everyone should learn to code–seems to me to be lost in the presenter over-marketing his Scratch software, but that could be years of Flash bias talking.

I agree though, kids should be taught to code, in the same way kids are taught to read, write and execute. Maybe CSS for Babies is a good place to start?

This rant was posted in Software, Web on by .

[LINK] CSS For Babies, reviewed by CSS Tricks
Date Created: January 21, 2013  Date Modified: January 21, 2013

The folk over at CSS Tricks made my afternoon with this review of a CSS textbook: CSS for Babies.

I have a feeling I will be seeing a hard copy on some friend's bookshelf 🙂

This rant was posted in Web on by .

New Media and Democracy: From Trolls to Bots
Date Created: January 20, 2013  Date Modified: January 20, 2013

As mentioned in my earlier New Media and Democracy posts, the conservative coalition have had no qualms about enlisting help from their young supporters to muddy the waters of political discourse. Well, it seems they have stepped up their game and hired a coder to automate their trolling with the use of bots.

Tiphereth Gloria, social media expert with VML Australia, said the bot evidence presented in the Storify post appeared to be accurate and she believes it pointed to a Liberal Party campaign. The fake accounts appeared to be part of a “propaganda war” effort to “increase share of voice of anti-Labor sentiment”.

Separately, other spam bot accounts are more blatant. One suspected anti-Labor bot Twitter profile with over 88,000 tweets is @LaborDirt, which pumps out a constant stream of anti-Labor content. Anti-Gillard account @GI-Gillard has reportedly been retweeted by the same bots that retweeted Mr Hunt’s tweet.
Source: SMH

A Storify user calling themselves The Geek has followed this a little closer than I have:

Update 1: Since publishing this story earlier, I have put together a growing list of LNP Bots here:
Also at last count 19 January 2013, there were about 40 genuine retweets out of 175 in total for this tweet. The Bots are tweeting via an app or platform called “The People’s Voice”. Has anyone heard of this? Contact me @geeksrulz on Twitter.
Update 2: I tweeted a link last night to my storify feature to @GregHuntMP for comment. No response so far.
Update 3: Since shining a light on this single tweet by Greg Hunt, the retweets have jumped to 192. They are by real LNP supporters who are possibly coming to Greg Hunt’s rescue to even up the ratio between spambots and real people.

Update 4: It appears that Twitter has finally acted and they have suspended the spambots that were identified. With friends like these, who needs enemies.

Update 5: Henk Luf is threatening to sue me for using his name in this feature. (Oops just did it again.) I have threatened to sue him back if he keeps using my name in his political tweets. Please go to the special Henk Luf section below if you can be bothered. See, spambots are missing out on all the fun that real people have 🙂
Update 6: Finally a response from Greg Hunt via @bennpackham. Greg Hunt says he hasn’t got the technical skills to pull off such a ruse. Fair point. I wonder how he managed to get his website up and running.

He goes on to point out further details of this troll, and even gives an example of another Conservative MP doing the same…

Can we make legislative provisions to prevent this type of trolling? Or will we just be making an over-regulated online media? I don't have a solution to this, other than to teach ethics in Computer Science 101.