Before we start, a story. When I created my first web server, I’d found a copy of Windows NT Server 4.0, upgraded it to Service Pack 6a to get IIS enabled, opened port 80 on the router and voilà, a working web server. This was 2001, and unfortunately my creation of a web server coincided with the spread of the Code Red worm, which reached my server within days of it being online.
Not knowing what it was at the time, and thinking it was a one-off, I formatted the hard drive and completed the whole setup again. A day passed before the worm was back. Now aware of what was happening and wary of a repeat, I rebuilt the server once more, this time putting the website behind port 8080, and the worm never returned.
I told myself that this was security through obscurity, and with the victory over Code Red, it became an approach I held onto for many years.
I applied the same method when it came to opening RDP access to the outside world, choosing the seemingly obscure port 8021 on each network I set up. However, I’ve been dealt a wake-up call following what I’ve just seen…
From time to time we come across legacy applications and deployments that we didn’t know existed until something goes wrong. This week it was a Joomla website, previously unknown to me, that had been ticking over since 2012. However, recent visits to the site got this result:
A report came in about the issue, and a few checks of the domain’s DNS revealed it was on a platform we use for domains and web hosting.
Not overly familiar with the hosting company from a website standpoint, and even less so with Joomla, it was time to first fathom out how it all worked, then find the problem and fix it.
As the years pass by we find ourselves moving on from an old computer to making a clean start with something more relevant, and when migrating to a new PC or laptop it’s always a worry that you may leave something behind. Luckily these days, instead of having a laptop lying about in its last-used state for fear of losing that long-forgotten file or program, the whole system can be virtualised on a server or donor machine for such eventualities, paving the way for the physical machine to be reused or recycled.
My method is to use the VMware vCenter Converter Standalone wizard to convert physical machines for an ESXi 6.5 host. All previous conversions have been seamless; however, the latest, a Fujitsu U904 laptop, didn’t go as smoothly. The conversion process itself completed without a hitch, but when starting the newly created VM I found that keyboard input was unresponsive. Continue reading “VMWare: No Keyboard on Newly Virtualised Machine”
While setting up a backup solution for my home network, I had an issue where my Windows Server 2012 R2 backup task would fail, with the following status:
“There is not enough disk space to create the volume shadow copy on the storage location. Make sure that, for all volumes to be backed up, the minimum required disk space for shadow copy creation is available. This applies to both the backup storage destination and the volumes included in the backup.
Minimum Requirement: For volumes less than 500 megabytes, the minimum is 50 megabytes of free space. For volumes more than 500 megabytes, the minimum is 320 megabytes of free space.
Recommended: At least 1 gigabyte of free disk space on each volume if volume size is more than 1 gigabyte.
Detailed error: Insufficient storage available to create either the shadow copy storage file or other shadow copy data.”
This doesn’t really explain the issue, as setting up a schedule with Windows Server Backup in 2012 involves the utility checking available storage before it creates the backup task, and a manual check showed there was ample storage on the destination volume, with the source volume having 86% free space.
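For anyone wanting to repeat that check, below is a minimal sketch of the kind of sanity check I mean, not the exact commands from my troubleshooting. It reads the free space on each volume and then queries the VSS shadow storage allocation, since an undersized shadow storage limit can produce this error even when the volume itself looks nearly empty. The drive letters are just examples, and vssadmin needs an elevated prompt.

```python
# check_backup_space.py - quick sanity check of free space and VSS shadow
# storage before blaming the backup destination (drive letters are examples).
import shutil
import subprocess

for drive in ("C:\\", "D:\\"):
    usage = shutil.disk_usage(drive)
    free_pct = usage.free / usage.total * 100
    print(f"{drive} free: {usage.free // (1024**3)} GiB ({free_pct:.0f}%)")

# Show how much space VSS is allowed to use on each volume; run this from an
# elevated prompt. A small allocation here can trigger the "not enough disk
# space" error even when the volume itself has plenty of free space.
result = subprocess.run(["vssadmin", "list", "shadowstorage"],
                        capture_output=True, text=True)
print(result.stdout)
```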
Delving into the Event Viewer for a more detailed error message, I got this:
When running a website from a home server, viewing it locally will make it seem that the site is responding lightning fast and that there are no issues. But what about the outsiders wanting a look at your content: are they getting the same performance? Chances are they are not, as a visitor’s machine needs to negotiate the internet and its equivalent of back streets and country roads to get to the home server’s location.
Where a home server can differ greatly from a hosted solution is in speed and relative location on the net. Visitors viewing a website rely on the upstream connection at the server end to receive its content, and when this is a domestic internet connection the upstream can be much smaller than the heavily advertised downstream. So it’s worth checking the theoretical upload speed to establish what kind of service and content can be served.
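As a rough worked example (the figures below are illustrative, not measurements from my connection): over a 1 Mbps upstream, a 2 MB page takes around 16 seconds just to leave the server, before any network delays are added.

```python
# Rough serving-time estimate for a page over a limited upstream connection.
# The page size and upload speed below are example figures, not measurements.
page_size_mb = 2.0   # total page weight in megabytes
upload_mbps = 1.0    # upstream bandwidth in megabits per second

transfer_seconds = (page_size_mb * 8) / upload_mbps
print(f"~{transfer_seconds:.0f} s just to push the page out of the server")
```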
In terms of location, hosting companies sit as close to the internet backbone as feasibly possible to get the best speeds and lowest latency. The backbone of the net is run by major carriers that handle the bulk of internet traffic between countries and continents; these in turn have datacentres where traffic is trunked to the different internet providers and on down to the end user. As data makes its way from the backbone to the end user, it can hop between many routers and networks as it meanders towards its final destination. For each hop, the networking equipment has to read where the data should go next and route it onto the right path. This all takes time, and even though each hop is measured in milliseconds, an extended number of hops combined with the volume of packets needed may produce a noticeable wait before a user sees the desired page.
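To get a feel for the route and hop count from a visitor’s point of view, the Windows tracert utility can be run from a machine outside the home network; the sketch below simply wraps it and counts the hops. The hostname is a placeholder, not one of my sites.

```python
# count_hops.py - run Windows tracert against a site and report how many
# hops the route takes. "example-home-server.ddns.net" is a placeholder.
import re
import subprocess

target = "example-home-server.ddns.net"
result = subprocess.run(["tracert", "-d", target],
                        capture_output=True, text=True)

# tracert prints one numbered line per hop; count the lines starting with a number.
hops = [line for line in result.stdout.splitlines()
        if re.match(r"\s*\d+\s", line)]
print(f"{target}: {len(hops)} hops")
print(result.stdout)
```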
Every home user’s computer has to negotiate its way through its service provider’s local infrastructure to reach most sites, but when visiting a site hosted on a home server, data may also need to navigate another provider’s network to reach it. This is where visitors may experience slower loading times compared to mainstream sites.
So how do you tell whether your home-hosted website will be speedy out in the wild? There are a few different ways to check:
For a while I have been looking for a simple CCTV solution where video is captured on a long loop, so that when the storage is full the earliest footage is deleted to make way for new recordings, and of course footage is available for immediate review.
Many newer IP cameras, including my TRENDnet TV-IP572W, come equipped with a microSD slot for recording on a rotating basis. However, this has two main caveats. The first is the investment in a microSD card to be used solely for this purpose, and of a high enough capacity to hold sufficient footage, especially with the move to HD capture. The second is accessing the footage: as it is held on the IP camera itself, the camera is the gateway to the data, and in my experience retrieval is slow, with each video file having to be downloaded manually over a slow transfer.
Imagine wanting to review an event that could have happened over a span of a few hours; with video captured in segments of five minutes at most, the whole process becomes tedious very quickly.
Therefore, I came up with another solution, one that uses my server’s hard disks for video storage to save money while allowing greater video retention than a microSD card, along with the ability to automatically delete older files to make way for new ones. This method uses the IP camera’s Samba settings to save video to a Windows server, and on the server itself employs Disk Quota management to effectively trick the camera into thinking it only has a certain amount of disk space, allowing cyclic video retention and preventing the footage from taking over a whole drive on the server.
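The method described here relies on NTFS Disk Quotas; purely as an illustration of the same rotating-retention idea, the sketch below trims the oldest clips from the camera’s share once a size cap is exceeded. The share path, file extension and 200 GiB cap are made-up example values, not the settings from my setup.

```python
# rotate_footage.py - illustrative alternative to the disk quota trick:
# delete the oldest recordings once the camera's share exceeds a size cap.
# The share path, extension and 200 GiB cap are example values only.
from pathlib import Path

SHARE = Path(r"D:\CameraFootage")   # hypothetical Samba share path
CAP_BYTES = 200 * 1024**3           # hypothetical retention cap (200 GiB)

# Oldest clips first, by modification time.
files = sorted(SHARE.rglob("*.avi"), key=lambda p: p.stat().st_mtime)
total = sum(p.stat().st_size for p in files)

# Remove the oldest clips until we are back under the cap.
while files and total > CAP_BYTES:
    oldest = files.pop(0)
    total -= oldest.stat().st_size
    oldest.unlink()
    print(f"deleted {oldest}")
```

Run on a schedule via Task Scheduler, something like this keeps the share trimmed in much the same way a quota limit does, just without the camera ever seeing a “disk full” condition.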
With the untimely demise of the Gigabyte Brix, I needed to find another small machine to handle all the web traffic to my sites. My fingers got burnt by Gigabyte, so this time I decided to track down an Intel-branded machine; Intel pioneered the net-top format, and its devices command a premium over other manufacturers’.
With price always at the forefront, I picked up an Intel NUC5CPYH. At £120 for a barebones system, it has an Intel Celeron 2.16GHz processor at the helm, along with the same single DDR3L slot and support for a 2.5-inch HDD/SSD as the Brix, both of which I recovered from the deceased system.
Let’s hope three servers is a charm, as it’s time for a new server, but this time I’m moving away from the HP MicroServer. Why? Well, the new server is destined to be a dedicated web server for my sites, and ever concerned with security and protecting my network, I thought it wise to physically separate the public-facing websites from my data, adding an extra layer of protection.
The choice was to go for a NUC-style machine or nettop: their small footprint allows them to be placed out of the way, they are in keeping with my low-power requirements, and the often fanless design keeps them quiet. As it’s to be a web server only, the restrictions of a device this size, such as space for multiple hard drives, graphics performance and upgradability, are not an issue.
In a previous post I showed how to shut down two servers safely using just one UPS with a single communications port. It was pretty straightforward, with the comms port connected to a Windows Server 2003 machine.
But doing the same with Windows Server 2012 is much more difficult, since Microsoft decided to remove the ability to run a program on a low-battery event from its power management settings. To make things worse, I discovered that a bug in Server 2008 and later meant that issuing a shutdown command from the native power settings would not perform a clean shutdown, instead killing the power within a few seconds. This is not good news for RAID arrays and data integrity.
Time for a new solution, and since Microsoft was of no use, help would need to come from a third party. After research and testing, the answer came from Shutter, a small program built around triggers and actions for a variety of scenarios, with battery discharging being one of the triggers. Luckily, two instances of the program can be run: one to shut down the remote servers and another for the host machine. Importantly, the program can also be run as a Windows service, but more on this in the walkthrough. Here is how it is done:
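To give a flavour of what ends up being triggered, here is a sketch of the kind of script a tool like Shutter could launch on a battery-discharging event: it asks the remote servers to shut down cleanly, waits for them to finish, then powers off the host. The server names and delay are placeholders rather than my actual configuration (which uses two separate Shutter instances, as described above), and the remote calls assume the account running the script has shutdown rights on those machines.

```python
# ups_shutdown.py - example of a script a tool like Shutter could launch on a
# battery-discharging event. Server names and delays are placeholders.
import subprocess
import time

REMOTE_SERVERS = ["FILESERVER01", "WEBSERVER01"]   # hypothetical hostnames

# Ask each remote server to shut down cleanly (requires shutdown rights there).
for server in REMOTE_SERVERS:
    subprocess.run(["shutdown", "/s", "/f", "/t", "0", "/m", rf"\\{server}"])

# Give the remote machines time to flush caches and stop services.
time.sleep(120)

# Finally shut down the host this script is running on.
subprocess.run(["shutdown", "/s", "/f", "/t", "0"])
```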
In January I was given the opportunity to design and build a new website to help colleagues in the retail store where I worked. The website serves as a demonstration of how I took my website and server knowledge and created a low-cost solution to an issue I was confronted with.
The need arose while working in a retail store, but the idea can be repurposed to suit other needs.
What I came up with was Canton TV, a website written in ASP.NET C# and hosted on my home server. It served as a tool for colleagues of varying technical skill to create messages and upload images for display, in video format, on a screen in a communal area of the store.