Category Archives: General

Trusted Assistant

One of the best things about working from home is having the dog as a trusted assistant.

Sure, he knows nothing about cyber security, refuses to make me tea, jumps on my laptop demanding attention, and makes me jump by suddenly (and deafeningly) barking at the quietest of noises.

And yes, he demands constant belly rubs and takes up most of my lunch time by making me take him for a walk. I'm also pretty sure he gets bored when I start talking to him about how it was the Honda engines that led to McLaren's poor performance in last season's F1, but 2018 will definitely be a better year for them. Or that cryptocurrency is probably in a bubble, but it's still got a long future ahead of it.

That's fine, he doesn't really know much about F1 or Bitcoin anyway, and at least he humours me by pretending to be interested. Despite all this I don't know what I would do without him… oh wait, yeah I do know what I would do without him… Work! That's what!

Existential Crisis

I’m having an existential crisis.

Amazon has been collecting information on me for years. For every item I've bought, added to my basket and viewed it has learned a little more about me; for every show on Prime I've watched, started and given up on it has gained a little more insight into what makes me tick as a human being… by this point it probably has a deeper understanding of what makes me, me than any single person on earth does.

Why, then, does it think I am the kind of person who would be interested in purchasing 'No Nonsense', the game-changing autobiography of Joey Barton?

What kind of monster have I become?

Answering the question, no one asked…

I have to be honest, I do love myself a pocket reference guide. Even with the internet's vast resources there is something about holding an old-school, analogue, physical copy of a book that is pleasing in a way that searching the internet just isn't.

The strange thing is that despite their name, I've never actually carried one of these books around in my pocket, which led me to assume that they didn't fit in real pockets…

Well, as it turns out, predictably and obviously, I was wrong…

Also…

Keep It Simple Stupid

I wanted to share this excellent article that I read on LinkedIn recently. It is by Professor Daniel Solove. In the article he discusses a recent hacking scandal involving a US baseball team. He talks about what can be considered a 'hack' and who can be considered a 'hacker', then clears up a number of common misconceptions about network security. Not all 'hacks' are sophisticated or technical.

I had an interview recently in which I was asked how I would go about exfiltrating data. I launched into a long-winded technical answer about port scanning, exploiting code, avoiding IDSs and so on.

When I got out of the interview and was driving home it suddenly hit me that what I should have said was: target the human attack vector by using good old social engineering.

Some hacks may not be sophisticated, but that isn’t always a bad thing. I truly believe the first rule of network security should always be “Keep it simple, stupid!”.

This applies to both offensive and defensive security. That is not to say that simplicity should come at the expense of functionality; all security goals should still be fully achieved, but achieved as simply as possible.

As Einstein succinctly put it, "Everything should be made as simple as possible, but not simpler".

How to make Linux look good

I've been using the copyright-free Red Hat clone CentOS 7 with the Gnome desktop for a while now. It has proved to be an excellent distro for experimenting with all kinds of enterprise services and packages. Over the last 9 months I have created a full virtual network running a number of instances of CentOS, each stripped down to the terminal to reduce the attack surface and make better use of the limited resources my laptop gives me. Each of the VMs has had at least one service running on it; I've had a 389 Directory Server, DNS and a Dovecot/SquirrelMail/Postfix email server, amongst other things, all running simultaneously on an entirely KVM-based platform.

CentOS isn't just an excellent server platform, it is also an excellent workstation and day-to-day distro; my CentOS laptop has taken over from my Debian PC as my main device. In terms of practicality CentOS is an absolute masterpiece; in terms of looks, however, it doesn't look so great out of the box if you're going to use it as your main operating system.

One thing that bugged me was the default Gnome theme and settings; personally I found them quite ugly and clunky. I persevered with them for a number of months, mainly because I was so busy with proper server configuration geekery that I didn't have time to mess about with something that only affected aesthetics.

One day, however, in desperate need of something to procrastinate with, I decided to make my CentOS desktop environment a little prettier. This is a straightforward guide (I'm no graphic designer or user interface expert) but it should prove helpful nonetheless, especially for getting a baseline desktop environment that you can tweak to your heart's content! There are also a few troubleshooting tips for issues I encountered along the way.

Step 1: Download and install CentOS 7 (I used the full x86_64 build with Gnome shell 3.8.4). The default theme will look as follows: practical… but kinda ugly.

[Screenshot: the default CentOS 7 Gnome desktop, before theming]

Step 2: Once CentOS is installed and updated, it is time to start configuring. There are a ton of themes available for Gnome; the one I used was the Zukitwo theme, which I downloaded from GNOME-look.org.

Step 3: Install the following packages from the CentOS repositories. These packages allow shell extensions to be installed directly from the Firefox browser, and the tweak tool will let us install themes and tweak Gnome.

# sudo yum install gnome-shell-extension-common.noarch
# sudo yum install gnome-tweak-tool.noarch

Step 4: Icon packs can be downloaded to give the icons a nicer look. Numix have created a number of themes and icon packs; I used the free Numix Circle pack. Once downloaded, move the icon theme to /usr/share/icons.
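For example, assuming the pack was downloaded as a zip archive into ~/Downloads (the archive and directory names below are illustrative and will depend on the exact pack you grab), it can be extracted and moved with:

# unzip /home/thomas/Downloads/numix-icon-theme-circle.zip -d /tmp/
# sudo mv /tmp/Numix-Circle /usr/share/icons/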

Step 5: In your home folder create a hidden directory (if one doesn't already exist) called .themes. Remember to start the directory name with a dot. Once this is done, move the zipped theme from the download location into it. For example:

# mkdir /home/thomas/.themes/
# mv /home/thomas/Downloads/140562-Zukitwo.zip /home/thomas/.themes/

Step 6: Next, open the tweak tool we installed earlier and select 'Shell Extensions' from the left pane. On this screen there are a number of switches; find the one that says 'User themes' and make sure it is on.

[Screenshot: the 'User themes' switch in the tweak tool]

Step 6.1: Staying in the tweak tool, select 'Theme' from the left pane, then from the 'Shell theme' menu select the theme's zipped archive from the ~/.themes directory. Some people will recommend that you unzip the theme first; personally I didn't, and I haven't had any issues installing directly from the .zip.

[Screenshot: selecting the shell theme in the tweak tool]

Step 6.2: Finally, from the 'Icon Theme' drop-down menu select the icon pack (Numix in my case).

Step 7: This is where we encounter our first issue: the CentOS icon in the upper left of the screen is too large. It is a small and easy-to-fix issue, but it took me a while and much googling to figure it out.

[Screenshot: the oversized CentOS logo in the top panel]

Step 7.1: To fix this we need to edit the gnome-shell.css file, which can be found inside the theme's zip archive that we moved earlier, e.g. /home/thomas/.themes/140562-Zukitwo.zip.

Step 7.2: There are a number of ways to edit this file, but the simplest is to browse directly to it in the file manager and open the zip archive with Archive Manager.

Step 7.3: In Archive Manager search for 'gnome-shell.css', then click it to open it with your default text editor; in my case this is gedit.

Step 7.4: To find the bit of CSS we are looking to edit, press Ctrl + F and search for '.panel-logo-icon'. If this rule exists, edit it to read as follows; if it does not exist, simply add it to the bottom of the file (make sure to use the correct curly braces {}).

.panel-logo-icon {
padding-right: .4em;
icon-size: 1em;
}

[Screenshot: editing gnome-shell.css in gedit]

Step 7.5: While in here there are a number of other things that can be tweaked; it is worth googling around to see what can be done. Some ideas include making the top panel transparent or fiddling with the colour scheme. The usual precautions should be taken when editing anything: document what you are doing and make backups before changing anything.

Step 8: The bar along the bottom of the desktop is called the window list; personally I find it quite clunky. There are a number of ways to remove it: it can be removed using extensions (more on them later), or simply by selecting the correct session at login. This can be done via the cog icon next to the sign-in button on the logon page.

Step 9: Once the window list has been removed it may be difficult to move between windows. One solution is to use the minimised windows list extension, which places a drop-down menu containing the window list in the top panel. The other is to install a dock.

Step 10: The dock I used was Cairo Dock, a beautiful and functional dock that can be highly customised. CentOS does not have Cairo Dock in its standard repositories, but it is simple enough to download the RPM from here and install it with yum from the local file.
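Assuming the RPM was saved to ~/Downloads (the exact filename below is illustrative and will vary with the version you download), the install would look something like:

# sudo yum localinstall /home/thomas/Downloads/cairo-dock-3.4.0-1.el7.x86_64.rpm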

Step 10.1: As mentioned in the previous step, Cairo Dock is highly customisable and there are too many options to go over here, but the Cairo Dock website has a handy guide on how to configure startup options here, and appearance and behaviour options here.

Step 11: Gnome supports shell extensions, self-contained add-ons which modify and tweak Gnome. There are a couple of ways of installing these extensions; one is via the extensions.gnome.org website. Open it with Firefox and the extensions package we installed at the start of this tutorial will allow for simple extension installation. These extensions can also be toggled on and off in the tweak tool.

Some of the extensions I used are the quit button and the minimised windows list.

Step 12: Now that Gnome is fully configured, all that remains is to change the wallpaper to one that suits your style. This can be done simply by right-clicking the desktop and selecting the change wallpaper option. The wallpaper I used in this tutorial can be found here.

[Screenshot: the finished desktop]

[Screenshot: the applications view]

A Brief History of Proprietary and Open Source Software

Definition of Proprietary Software

The word 'proprietary' is defined by Oxford Dictionaries as "Relating to an owner or ownership". (Oxforddictionaries.com, 2014) In a 2004 report (updated in 2005) on the definition of proprietary software, The Linux Information Project (LINFO) explained that proprietary software "is software that is owned by an individual or a company (usually the one that developed it). There are almost always major restrictions on its use, and its source code is almost always kept secret." (Linfo.org, 2014) The restrictions described by LINFO are what allow proprietary software to be sold as commercial products. Companies that develop proprietary software, or buy the intellectual property to it, exert complete control over it; they maintain, update and fix bugs in-house.

Most software requires that end users or organisations agree to a licence agreement. For a proprietary product this is an electronic contract that usually prohibits reselling, copying or otherwise profiting from the software. In many cases the licence only allows for the use of the software, not ownership of it. Licensing options can allow end users to use proprietary components at no monetary charge; Adobe Flash is a common example of free-to-use proprietary software. (Adobe, 2014) Licensing for proprietary software can be complex, especially when purchased for enterprise environments. Microsoft is an example of a software vendor that offers an array of complex licensing structures. (Microsoft, 2014) The number of instances allowed, time limits, the physical locations in which the software can be used, and who can and cannot use it can all be tightly regulated by a proprietary software vendor.

Definition of Open Source Software

Open Source Software (OSS) is defined by the Open Source Initiative as "software that can be freely used, changed, and shared (in modified or unmodified form) by anyone. Open source software is made by many people, and distributed under licenses that comply with the Open Source Definition." This definition is a list of ten requirements that software must comply with to be considered open source. In addition to the characteristics already listed, the Open Source Definition also ensures that OSS does not discriminate against persons or groups, or fields of endeavour, and is technology neutral. (Opensource.org, 2014) The full list is as follows:

1. Free Redistribution: Non-restrictive licence.
2. Source Code: Must include source code.
3. Derived Works: Must allow derived works.
4. Integrity of the Author's Source Code: May require derived works to change from the original name.
5. No Discrimination Against Persons or Groups
6. No Discrimination Against Fields of Endeavour
7. Distribution of License: The licence must apply to everyone, without the need for a further licence.
8. Licence Must Not Be Specific to a Product: The licence must not be tied to a distribution.
9. License Must Not Restrict Other Software
10. License Must Be Technology-Neutral
(Opensource.org, 2014)

Andrew M. St. Laurent states in his book Understanding Open Source and Free Software Licensing that "The fundamental purpose of open source software licensing is to deny anybody the right to exclusively exploit a work" (St. Laurent, 2008). To this end there are a number of standard OSS licences that can be used when redistributing OSS. The most widely used OSS licence is the GNU General Public License (GPL) 2.0. (Blackducksoftware.com, 2014) The Open Source Initiative names the following licences as the main open source licences:

Apache License 2.0
BSD 3-Clause “New” or “Revised” license
BSD 2-Clause “Simplified” or “FreeBSD” license
GNU General Public License (GPL)
GNU Library or “Lesser” General Public License (LGPL)
MIT license
Mozilla Public License 2.0
Common Development and Distribution License
Eclipse Public License

The model of OSS used with licences such as the GNU GPL allows end users and organisations to forgo many of the complexities and costs involved with proprietary software. Additionally, as the source code is public, any individual can add features, improve stability, and correct bugs and security flaws. For much enterprise-level OSS there is the option of paid support; an example of this is Red Hat's support model. (Red Hat, 2014) Additionally, OSS generally has free support via documentation, IRC services, mailing lists and various other community-driven support services. (Debian.org, 2014)

History of UNIX and the Move to Open Source (GNU/Linux)

UNIX

In July 1974 Dennis M. Ritchie and Ken Thompson of AT&T Bell Laboratories published a paper describing an interactive, multi-user Operating System (OS) called the UNIX Time-sharing System (UNIX). (Ritchie and Thompson, 1974) UNIX was robust and versatile; it was portable, so could be used on a range of devices, and programs could be written and run on UNIX to carry out a vast array of tasks. Before UNIX most programs made use of punch cards, which were used as the input for mainframe computers that would then decode them and execute the program.

UNIX was a proprietary OS, but it was developed with a spirit of openness. In 1979 Dennis Ritchie stated "What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing . . . (was) to encourage close communication." (Ritchie, D. 1979) UNIX was licensed to a number of organisations who produced UNIX derivatives; one notable example was the University of California's Berkeley Software Distribution (BSD), which along with Bell Laboratories' own System V became one of the two main branches of UNIX variants.

Commercialisation and Standardisation

By the mid-eighties UNIX had been fully commercialised and there were many vendors offering their own UNIX derivatives, each of them effectively a unique proprietary system. (Unix.org, 2014) In 1984 a collection of vendors formed the X/Open consortium with the aim of creating a series of standards allowing a degree of interoperability between the proprietary UNIX derivatives. The formation of the X/Open consortium would lead to the publishing of the Single UNIX Specification (SUS) collection of standards. (Love, 2013) Incorporated into the SUS family of standards were the POSIX (Portable Operating System Interface) standards. POSIX standardised a number of interfaces, including Application Programming Interfaces (APIs), how shells interface with the UNIX kernel, and various other OS utilities. (Standards.ieee.org, 2014) (Unix.org, 2014)

The SUS and POSIX standards laid out by the Institute of Electrical and Electronics Engineers (IEEE), along with the commercialisation of UNIX, led to UNIX veering away from the spirit of openness that Dennis Ritchie had spoken about in 1979. (Negus and Bresnahan, 2012) The ever-increasing restrictiveness and commercialisation of UNIX variants made UNIX OSs less accessible. This contributed to the increased prominence of the free software community.

GNU

Free software has been part of modern computing almost since its inception. Technology was developed in advanced research and development laboratories run by organisations such as universities, corporations and governments. Although much of this technology, including computer hardware and software, was developed under strict secrecy, a substantial portion of it was shared between academics and researchers. This allowed a greater pool of minds to contribute to improving the hardware and software. (Ceruzzi, 2003) It was in this spirit that movements dedicated to allowing users and organisations to use, study and modify free software arose. By 1983 Richard Stallman had become a leading proponent of free software, and on the 27th of September 1983 he announced the GNU Project. (Gnu.org, 2014)

Richard Stallman states that the GNU Project is primarily a political project. (Stallman, R. 2008) Its political ideology is that all software should be free, and the project set out to create a completely new OS free of any proprietary code or software. In 1984 the project began work on a Unix-like OS complete with "kernel, compilers, editors, text formatters, mail software, graphical interfaces, libraries, games and many other things". (Gnu.org, 2014) Building an entirely new Unix-like OS proved to be a complex task. UNIX and Unix-like OSs are modular by design, and the GNU Project set about replacing each of the components one by one. Along with a small number of already existing free components, for example the X Window System, the OS took shape. (DiBona, Ockman and Stone, 1999) By 1992 the GNU Project had replaced all major components of UNIX in the GNU OS apart from the kernel. The GNU Project was developing a kernel called GNU Hurd (Gnu.org, 2013), but it was not yet stable as it was still in development. In 1991 the Linux kernel was published on Usenet, and it would soon become the de facto kernel for the GNU OS. (Gnu.org, 2014)

Linux

Linus Torvalds was a computer science student at the University of Helsinki in 1991, and as part of his studies he enrolled in a UNIX module. (Richardson, 1999) This module introduced Torvalds to UNIX, specifically the Digital Equipment Corporation's (DEC) variant of UNIX called Ultrix. (Torvalds and Diamond, 2001) In order to continue his studies and to indulge his computer programming hobby at home, Torvalds purchased a PC and installed a Unix-like OS called Minix. Using Minix as its basis, Torvalds began work on his own kernel; this kernel would later be named Linux. (Linuxfoundation.org, 2014) On the 26th of August 1991 Torvalds published a post on the comp.os.minix Usenet group announcing "I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) . . . I'd like any feedback on things people like/dislike in minix . . . I'd like to know what features most people would want." (Torvalds, L.B 1991) The kernel had been designed around the Intel 386 and utilised many features specific to that CPU, which led Torvalds to believe that the kernel was not portable. But with assistance, ideas and code from the comp.os.minix community the kernel was developed to add portability and new features. (comp.os.minix, 1991)

From September 1991 a number of iterations of Linux were released, and in March 1994 Linux 1.0 was released. In the intervening time the Linux community had grown substantially, and due to the incomplete nature of GNU Hurd, Linux had become the kernel of choice for the GNU OS. The Linux kernel allowed the GNU OS to be a full OS free of proprietary code, thus fulfilling the original vision set out by the GNU Project. (Negus and Bresnahan, 2012)

Torvalds distributed Linux under the GNU GPL, thus enabling individuals and groups to further develop Linux. The Linux kernel was used as the basis of many OSs, each with their own unique configuration and bundled software packages; these would become known as distributions, often referred to as distros. Because Linux comprises only the kernel, and many of the overlaying software components come from GNU, many distributions are referred to as GNU/Linux. Today there are over six hundred Linux distributions. (Futurist.se, 2014)

Linux is considered both free software and OSS. GNU consider the word free to mean "freedom", allowing a user complete freedom over the software. For example, Google's Linux-based Android OS has its source code open and is therefore OSS, but it restricts users and developers from removing or modifying certain components, so Android cannot be considered free software despite being licensed under the Apache 2.0 software license. (Gnu.org, 2014) (Gilbertson, 2010) Debian Linux, conversely, explicitly sets out to meet the OSS definition. (Debian.org, 2015) Linus Torvalds embraced this subtle difference; he asserted that open source principles did not clash with commercialisation, and in his keynote speech to the 2000 LinuxWorld Expo stated "It is not the point of Linux to be uncommercial". (Theregister.co.uk, 2000)

Commercialisation of Linux Support

Due to the diversity of individuals and groups developing Linux, a number of early distributions formed the platform for further distributions to be developed upon. Distributions such as Debian and Red Hat were two major platforms to form the basis for new distributions, the primary difference being the package management systems used by each. (Packman.linux.is, 2014) A secondary difference was the target market: Debian was dedicated to providing a free OS with free software packages for all users; as a Unix-like OS it is versatile and configurable, and is one of the most used distributions for the Apache web server. (Debian.org, 2014) Red Hat Linux was developed for enterprise environments. (Redhat.com, 2014) Red Hat Linux's developer, Red Hat, pioneered the support model for OSS. Founded in 1993, the business steadily grew; in 1999 Red Hat floated on the New York Stock Exchange and set an all-time record for a technology IPO. (Redhat.com, 2014) In 2012 Red Hat became the first Linux-based company to break the billion US dollar mark for annual earnings. (Vaughan-Nichols, 2012) By 2014 they had diversified the range of products and services they offered, including their flagship Red Hat Enterprise Linux (RHEL), an enterprise-level OS that comes in a number of varieties and configurations for both servers and clients. Red Hat support large-scale enterprise networks; one of the software packages they support is Red Hat Directory Server (RHDS), a directory database that makes use of the Lightweight Directory Access Protocol (LDAP) to provide authentication, access control and other management features. (Redhat.com, 2014) Although Red Hat is open source, it does protect its software via the Red Hat trademark, which restricts redistribution of Red Hat products. (Redhat.com, 2014) Despite this, and due to its open source nature, there are many open source and free alternatives to Red Hat's enterprise-level software. For example, the Fedora and CentOS distributions are both derived from Red Hat, and 389 Directory Server is the freely available counterpart to RHDS. (Fedora, 2014)

Red Hat recorded revenue of 1,534.615 million US dollars (approximately 1.5 billion) in 2014. (Sec.gov, 2014) These figures make Red Hat the most successful Linux-based company, but they are not the only company that offers Linux-based software for free with a support and certification model; Canonical and Novell have also experienced success with similar business models. This support model is now over 20 years old. Peter Levine, a lecturer at both MIT and Stanford, argues that the support model is outdated, pointing out that Red Hat's success is dwarfed by that of proprietary rival Microsoft, whose revenue in 2014 was 86.83 billion US dollars. (Microsoft.com, 2014) Levine argues that lack of investment, forked development and even the fact that the code is open are holding back the open source community from competing with major corporations such as Microsoft. (TechCrunch, 2014)

Commercialisation of Linux as a Service

Support is not the only way that Linux has been commercialised; many organisations, including some of the biggest names in the technology sector, use Linux as the backbone of both their internal and external infrastructure. Major corporations contribute to the Linux kernel: Microsoft, who have products on the market in direct competition with Linux, added 1% of the code in 2012. Dozens of other organisations are also on the list of contributors, and the Linux Foundation estimated in 2012 that 75% of contributors were being paid for their work. (The Linux Foundation, 2012)

The code Microsoft added to the kernel was driver software enabling Linux OSs to achieve better performance when used with Microsoft's virtualisation products. (Microsoft, 2014) As the support business model for commercialising Linux plateaued, virtualisation was a leading technology in allowing a new business model to evolve: Linux as a Service. In 2007 Red Hat announced a new version of RHEL that allowed individuals and corporations to rent servers by the hour. These were not physical servers but virtual servers held in a remote location, colloquially known as cloud computing. The OS for the servers could be from a range of vendors, including proprietary servers such as Windows Server as well as UNIX and Linux servers; in the majority of cases they are installed on hardware running a Linux-based hypervisor. (Judge, P. 2007) This service was a joint enterprise with Amazon and formed the basis for Amazon Elastic Compute Cloud (Amazon EC2), now the world's largest cloud service provider. (Darrow and Darrow, 2014) Amazon are not alone in offering Linux-based cloud services; many of their competitors offer similar services such as Infrastructure, Platform and Software as a Service (collectively known as XaaS), with IBM, HP and Google being just a few examples.

OpenStack is the de facto standard cloud platform for enterprise environments, with hundreds of companies using it worldwide. OpenStack is an OSS stack that allows the deployment of service-based technologies. The OpenStack project is maintained by the OpenStack Foundation, which includes over 200 corporations such as AT&T, Red Hat and Canonical. (Openstack.org, 2014) The 2014 OpenStack survey clearly demonstrated that Linux-based technologies are dominating the XaaS sector: KVM and Xen are the most widely used hypervisors on OpenStack, and Open vSwitch and Linux bridge are the most used network drivers. 95% of organisations are using OpenStack to deploy Linux OSs as Platform as a Service (PaaS), with 40% deploying Ubuntu, 26% CentOS and 14% RHEL; various other distributions make up the rest. Microsoft Windows only accounted for 5% of the organisations surveyed. (OpenStack, 2014)

Linux is not only the dominant platform for XaaS; Linux-based technologies are used for a wide variety of computing solutions. According to the Linux Foundation, "Linux powers 98% of the world's supercomputers, most of the servers powering the Internet, the majority of financial trades worldwide and tens of millions of Android mobile phones and consumer devices." (Linuxfoundation.org, 2014)

Proprietary Network Technologies

Proprietary software (PS) designed for enterprise-scale networking is available for all major server platforms; UNIX, Windows and Linux all have PS packages to carry out networking tasks. This software ranges from standalone products to packages included as part of a server OS platform. Additionally, proprietary hardware can have a proprietary OS with a mix of OSS and PS installed on it. One example of this is Cisco's router and switch products, which run Cisco IOS and support a range of open and proprietary protocols and protocol extensions. (Cisco, 2014)

Microsoft as an Example

One of the largest vendors of proprietary server software is Microsoft, who in 2013 saw revenue from their Server and Tools division grow by 9% compared to the year before, to US$20,281,000,000. (Tanner Helland, 2013)(Microsoft, 2014) In 1993 Microsoft released Windows NT 3.1 Advanced Server (Theregister.co.uk, 2014), their first server-branded operating system and the first to use NT, which semi-officially stands for 'New Technology'. NT would go on to form the basis for all of Microsoft's client and server OSs. At the heart of NT was a new kernel designed for portability, which allowed Windows to be platform independent and enabled software and hardware portability. (Zachary, 1994)

Microsoft began producing new variants and iterations of their server OSs, and as time went on new features and proprietary versions of general network software were included: in 1994, with Windows NT 3.5 Advanced Server, Microsoft included an implementation of DNS called Microsoft DNS (Richter, 1995), and a web server called Internet Information Services (IIS) was included as an option in version 3.51. (Microsoft, 1997) In 1999 Microsoft released Windows 2000, which had a number of server-branded variants that included software packages such as Routing and Remote Access Services (RRAS), IPSec support, and a directory service called Active Directory (AD). (Technet.microsoft.com, 2014)

AD is an enterprise-level directory service that is one of the key components of a Windows domain. Every Windows Server fulfilling the role of a Domain Controller (DC) has an up-to-date copy of the AD database. AD provides central administration and authentication for what it calls objects; objects can be device accounts, user accounts and groups. As with Microsoft's other products, AD has evolved and new features and functions have been added. For example, Windows Server 2008 added the ability to have Read-Only Domain Controllers (RODCs); an RODC only holds a read-only copy of the Active Directory database and is designed to be used in locations where security may not be optimal. (Minasi, 2010) Windows Server 2012 added the ability to clone Domain Controllers and rapidly deploy virtual Domain Controllers, each with a copy of the AD database. (Mackin and Thomas, 2014)

AD employs platform-independent standards and open source technologies. LDAP is an application layer protocol that allows AD to add and retrieve information from its directory. For authentication, Microsoft extended the Kerberos authentication protocol; Microsoft's extension to Kerberos was published as an informational memo by the Internet Engineering Task Force in Request for Comments (RFC) 4757. (IETF, 2014)

Vendor Lock-In

Microsoft pursues what is known as a vendor (or proprietary) lock-in strategy. This is achieved by producing a large amount of proprietary software, internet browser plugins, file types, Application Programming Interfaces, extensions and protocols. As a result, Microsoft has a rich and diverse ecosystem of enterprise products that are designed to work seamlessly with one another but that in many cases are difficult or impossible to use in a non-Microsoft environment. (Le Concurrentialiste, 2014) In a 1997 memo to Bill Gates, published in the 2002 European Commission report on Microsoft's business practices, Microsoft's C++ general manager Aaron Contorer praised the Windows API and how it had helped to lock independent software developers into using Microsoft products despite "our mistakes, our buggy drivers, our high TCO, our lack of a sexy vision at times, and many other difficulties". He concluded his memo with "In short, without this exclusive franchise called the Windows API, we would have been dead a long time ago." (Michael Parsons, 2004)

Microsoft are not alone in employing a lock-in strategy; many vendors of PS and services use a similar model to lock customers into their range of products. Cisco switches and routers support a range of networking protocols, many of them open standards, but on top of these Cisco also offer a range of proprietary protocols and protocol extensions that are only interoperable with other Cisco devices. In some cases a proprietary protocol is only supported on Cisco hardware running a particular OS version or later, so in order to deploy it a customer may need to update the OS version on their hardware, or, if the hardware does not support the required OS version, upgrade the hardware itself. Examples include the Dynamic Trunking Protocol (DTP) and the VLAN Trunking Protocol (VTP). (Cisco, 2014)

Open Source and Proprietary Technology Comparison

It is perhaps not surprising that there is a diverse set of opinions on the subject of open source vs proprietary technology. Each has its proponents and its detractors, and it is also not surprising that many of them have vested interests. This is never more evident than when discussing the Total Cost of Ownership (TCO) of a Microsoft Windows setup versus a Linux setup. Red Hat commissioned what they described as an independent survey examining the TCO of RHEL and Windows Server IT infrastructure. Collecting data from 21 companies, they found that RHEL had a TCO that was 34% lower than that of an equivalent Windows Server setup. The survey included further statistics that shed favourable light on RHEL: compared to Windows Server, RHEL had 46% lower software costs, 41% lower staffing costs, and 64% less downtime. (Redhat, 2013)

Microsoft have themselves published papers on the TCO of running a Windows Server based domain. A 2006 paper published by the corporation made reference to a survey carried out by the META Group which had found "that higher staffing costs for Linux-based solutions offset any potential upfront savings in acquisition costs relative to Windows Server". The paper follows the theme of asserting that Windows Server offers lower TCO than Linux equivalents and provides a better return on investment than Linux. (Microsoft, 2006)

Finding non-partisan information can be difficult. For example, a 2008 report by Vital Wave Consulting found that Windows and Linux offer the same TCO in emerging markets; however, Vital Wave Consulting were commissioned by Microsoft to investigate and report on the subject. (zdnet, 2008) Conversely, a 2005 report commissioned by IBM put the TCO of a Linux server deployment at an estimated 40% less than that of Windows Server; at the time of the report IBM were involved in commercial tie-ups with open source vendors such as Red Hat and Novell. (PC Pro, 2014)

The Harbin Institute of Technology, a research university based in China with campuses in Harbin, Weihai and Shenzhen, published a paper in 2012 titled "Survey and comparison for Open and closed sources in cloud computing", in which the authors concluded that in terms of cost open source technology offered better value, but that open source documentation is often inaccessible to novice users. (Nadir K. Salih, Tianyi Zang, 2012)

TCO can be a major factor when a business is making decisions, which perhaps goes some way to explaining why independent information is hard to find. Other areas, however, have had more impartial research carried out on them; one of these areas is security. Mikko Hypponen, an award-winning security researcher, gave an interview on cybercrime in 2010 in which he was asked to compare open source and proprietary software. He replied "The truth is that pretty much nobody looks at source code and tries to find bugs. In that way, the 'theory of many eyes' doesn't work." He continued by stating that the big difference is that only the proprietary software vendor can fix bugs in their software, whereas open source software can be fixed by anyone, which in general allows security holes to be patched more quickly. (Technewsworld.com, 2014)
In an in-depth 2009 report on servers, InfoWorld suggested that the market was dividing into two distinct categories, Windows and Linux, quoting Jim Zemlin, executive director of the Linux Foundation, as saying "The key here is that really Linux and Windows are moving away from the pack here and it's becoming a two-horse race". The article also suggested that heterogeneous infrastructure was becoming standard, citing Red Hat marketing director Nick Carr, who stated that Windows-based Exchange (email), SQL, file and print servers are common on RHEL infrastructure. Dr. Roy Schestowitz, a proponent of Linux, is also quoted as saying "Increasingly, such servers that run in mixed environments rely on virtualization", in relation to Linux-based networks running Windows-based virtual machines. (Krill, 2014)

An article published on the business technology website TechRadar Pro in 2014 by David Barker, technical director of 4D Data Centres, offered a balanced comparison between the two server platforms. The article puts forward that most system administrators are comfortable with both Windows and Linux and that deciding which server OS to use is need-specific. Barker suggests that the intended life cycle of the server can be a critical factor, pointing out that Microsoft will end mainstream support for its Windows Server 2008 product; he goes on to state that if the server is on physical hardware it is likely that it would need to be replaced in this time frame anyway.

Barker echoes Dr. Schestowitz's statement about virtualisation allowing for heterogeneous network environments by pointing out that Microsoft has partnered with open source organisations to enable Hyper-V management of open source nodes. Barker also echoes Red Hat's Nick Carr in noting that Linux systems can co-exist with Microsoft systems. (Barker, 2014)

End Users and Changing Technology

An end user can be defined as any human that uses a computer; end users can range from system administrators to office typists. Each user has a set of requirements and it is the job of ICT to meet these needs, but they must be met within the requirements of the organisation and its budgetary restrictions. (Corbett et al., 2013) An organisation may choose to change its base technology for a number of reasons; for example it may decide to go open source and replace proprietary technology, as the City of Munich did in a project called LiMux. (Linuxjournal.com, 2015) Peter Hoffman, who led the City of Munich's LiMux project to switch to open source technology, stated that the main reasons for the switch were to save money and to halt the ever-increasing lock-in to Microsoft products. (Kent, 2013)

One issue that was never explicitly addressed in the LiMux project was the end user experience. Users were considered in the project plan, but only in calculations for retraining staff and the cost of technical support staff. (Saunders, 2014)

A 2014 report by Nick Heath of TechRepublic suggests that end user dissatisfaction with the change from a Windows-based OS to a Linux-based OS may have triggered a review of the LiMux project. This was denied by Munich City Council, although council spokesperson Stefan Hauf did concede that there had been negative feedback on certain aspects of the change to open source.
Hauf stated that "the primary gripe being a lack of compatibility between the odt document format used in OpenOffice and software used by external organisations. Munich had been hoping to ease some of these problems by moving all its OpenOffice users to LibreOffice". (Heath, 2014) This compatibility issue appears, on the face of it, to be a symptom of the very vendor lock-in the project was attempting to rid itself of. What must not be overlooked is the disgruntlement of the end user; this could lead to frustration and discourage end users from embracing the new technology.
The Practice of System Administration, published by Addison-Wesley in 2007, asserts that ICT is there to serve the needs of end users; ICT exists because of users and not vice versa. It tempers this somewhat by going on to assert that a 'customer is always right' attitude is also not correct.
The book proposes that system administrators must view end users as 'business partners', consulting them on any proposed change before proceeding with it. With administrators and users working together, the needs of the organisation and the end users are best met. (Limoncelli, Hogan and Chalup, 2007)
Award-winning magazine NAWIC published an article by Fred Ode, the founder and CEO of Foundation Software. The article included five tips to avoid end user rejection of new technology. This supported The Practice of System Administration's assertion that users must be included in the process. It also proposed that a number of factors relating directly to the end user should be considered when implementing changes to the ICT infrastructure; these suggestions included considering the skill level of the end users and providing appropriate training. Ode suggests that the majority of users are in general resistant to change, with a small number being open to it: "The key is to identify innovators and early adopters and get them involved in the training process, so they can help excite and educate other users". (F, Ode. 2008)

Virtualisation

Virtualisation is the creation in software of a simulation of a range of computing resources, either in part or in whole; this simulation can virtualise both hardware and software. (Servervirtualization, 2014) The origins of virtualisation date back to the late 1960s, when IBM multi-user mainframes employed virtualisation techniques on memory to allow for the efficient use of the mainframe's resources when serving multiple simultaneous users. (Docs.oracle.com, 2014) Over the next 30 years, technologies including virtual memory, hypervisors and application virtualisation were invented and refined. (Everythingvm.com, 2014)

A paper published in 1974 by Gerald J. Popek and Robert P. Goldberg entitled Formal Requirements for Virtualizable Third Generation Architectures laid out a method to ascertain whether a (third generation) system architecture was capable of virtualisation. The paper described various VM concepts, describing a VM as "an efficient, isolated duplicate of a real machine." (Popek and Goldberg, 1974) The methods described in the paper can still be used as a guideline for virtualisation requirements. Prof. Douglas Thain of the University of Notre Dame, Indiana, USA, described the paper as "the most important result in computer science ever to be persistently ignored". Prof. Thain breaks the paper down into two basic concepts: sensitive instructions and privileged instructions. (Thain, 2010) The Popek and Goldberg paper describes what it terms a Virtual Machine Monitor (VMM); VMMs are now more commonly known as hypervisors. Hypervisors can be split into two broad categories: type 1 and type 2. (Popek and Goldberg, 1974) (Portnoy, 2012)

Type 1: Also known as bare metal or native. Type 1 hypervisors are installed directly onto the underlying hardware; a basic micro-kernel usually sits below the hypervisor to interact with the physical hardware. The type 1 hypervisor manages and abstracts all hardware from the overlaying virtualised systems.
Type 2: Type 2 hypervisors are installed onto a conventional host OS as a program. Type 2 hypervisors are generally not used in scalable enterprise environments. (Portnoy, 2012)

In the late 1990s VMWare's Dan Wire described "a revolution with virtualization". What he was referring to was the founding of VMWare in 1998 and the release of VMWare Workstation. (Wire, D. 2013) VMWare Workstation allowed for the running of a Virtual Machine (VM): a virtualised PC and OS running inside, and using the resources of, a physical host PC. VMWare Workstation was not the first product to allow this, Connectix having implemented a similar system for the Mac with Virtual PC, but VMWare Workstation was the first major commercially available product of this type. (Everythingvm.com, 2014)

As of 2015 VMWare are the industry leader in enterprise virtualisation solutions. (VMWare, 2015) VMWare's main enterprise virtualisation product range is called vSphere. vSphere is a collection of components that form a complete virtualisation platform, allowing for the creation and management of VMs. The vSphere range of products is available in three tiers, with the lower tiers offering less functionality. (VMWare, 2015)

VMWare have a number of competitors. Microsoft have a similar product range, Hyper-V, that is tightly integrated with their Windows Server products (Finn, 2013), and Citrix have a range of products based around the open source Xen hypervisor. (Citrix.com, 2015) These are just two examples of competing enterprise-class hypervisor products that position themselves in the same market segment as VMWare's vSphere. (Paul, 2014)

The open source project KVM (Kernel-based Virtual Machine) is a free hypervisor that can form the basis of a full virtualisation platform running on a Linux-based system. KVM was originally developed by Qumranet, who were taken over by Red Hat; Red Hat now oversee the project. (Linux-kvm.org, 2015) KVM is a Linux kernel module that converts the system into a type 1 hypervisor. (IBM. 2015) This module was integrated into the mainline Linux kernel in 2007, and its ability to support virtualisation depends on compatible virtualisation extensions being present on the host CPU. (Linux-kvm.org, 2015)
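A quick way to check this on a CentOS host (these commands are my own, not from the original text) is to look for the Intel VT-x or AMD-V CPU flags and confirm that the KVM modules are loaded:

# egrep -c '(vmx|svm)' /proc/cpuinfo
# lsmod | grep kvm

A count greater than zero from the first command means the CPU advertises the required extensions; the second should list kvm along with kvm_intel or kvm_amd.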

KVM can be combined with other open source projects, such as QEMU, which provides device emulation and user-space functionality, and libvirt, an API which provides a variety of tools such as management interfaces. (Libvirt.org, 2015) Together they form a feature-rich and efficient virtualisation platform.
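As an illustrative sketch of the stack in use (the package, VM and ISO names here are assumptions, not taken from the original text), a guest can be installed and listed from the command line:

# sudo yum install qemu-kvm libvirt virt-install
# sudo systemctl start libvirtd
# sudo virt-install --name test-vm --ram 1024 --vcpus 1 --disk size=8 --cdrom /home/thomas/Downloads/CentOS-7-x86_64-Minimal.iso
# sudo virsh list --all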

KVM and VMWare are two very different propositions. VMWare fits the definition of a traditional type 1 hypervisor; KVM redefines this slightly with its integration directly into the host OS kernel. (Linux-kvm.org, 2015) Both offer a complete suite of enterprise-level functionality, but achieve their end goal in different ways.

VMWare is a homogeneous system; each component is designed to work seamlessly with the rest of the platform. The disadvantage of VMWare is cost: functionality comes at a price. (Vmware.com, 2015) KVM, when combined with QEMU and libvirt, is heterogeneous; a wide variety of features can be installed and configured as and when needed, at no cost. It may not always be the case that each feature has been fully tested or is stable when integrated into the platform, and supporting the platform may require specialists or support contracts, which could offset the zero-cost benefits of the software. (Redhat.com, 2014)

Diffie-Hellman: The Basics

The Diffie-Hellman key exchange is a method of securely establishing cryptographic keys across insecure and untrusted networks. To do this a shared secret between two entities must be created, and it does this with a mathematical one-way function. A one-way function is a problem that is difficult to solve in one direction, but easy in the other. Most major websites use one-way functions to store password hash digests rather than the user's actual password.
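As a quick illustration of this (my own example, not from the original post): computing a hash digest from a password is trivial, but reversing the digest back into the password is computationally infeasible.

# echo -n 'correct horse battery staple' | sha256sum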

A simple-to-understand analogy for a one-way function is mixing paint. If you have three different colours of paint and mix them together, it would be almost impossible to reverse engineer the mixed paint to discover the original colours.

Below is a simple-to-understand breakdown of the mechanism that Diffie-Hellman employs. It is explained both with mathematics and with colours for simplicity. Bob and Alice want to create a shared secret and mutually authenticate each other, while Eve wants to know what the secret is… how do Alice and Bob stop her?

Step 1: Alice and Bob publicly agree on a prime modulus, e.g. 17, and a primitive root, e.g. 3. A primitive root is a number that, when raised to any exponent (x) under the modulus, produces every possible result with equal probability; 3 is a primitive root of 17.

[Figure: Step 1]
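As a quick worked check (computed here for illustration), the powers of 3 modulo 17 cycle through every value from 1 to 16 exactly once, which is what makes 3 a primitive root of 17:

3^1, 3^2, 3^3 … 3^16 (mod 17) = 3, 9, 10, 13, 5, 15, 11, 16, 14, 8, 7, 4, 12, 2, 6, 1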

Step 2: Alice and Bob each select a random private key. This number is used as the exponent x in the agreed modulus equation. Without knowing the private key, the result is very difficult to reverse. (In practice a very large prime modulus must be used; 17 is just for demonstration purposes.)

[Figure: Step 2]

Step 3: The results of these calculations are Alice and Bob's public keys: 6 | PURPLE and 12 | ORANGE. These are then shared.

[Figure: Step 3]

Step 4: The public keys are exchanged in the open, so Eve is able to intercept them. The private keys are kept secret, so Eve does not know the exponents she would need to reverse the maths and recover them.
Now Alice and Bob can each use the other's public key as the base of their modulus equation, with their own private key as the exponent once again.

[Figure: Step 4]

10 | BLACK is the shared secret. Both sides will always arrive at the same result because each private key is obfuscated inside the corresponding public key, so the two final equations are effectively the same. Alice and Bob never need to learn each other's private keys, and Eve cannot work out the private keys or the shared secret.
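To make the arithmetic behind the figures concrete, here is a worked version; the private keys 15 and 13 are my own illustrative choices, picked because they reproduce the public keys 6 and 12 shown above.

Alice's public key: 3^15 mod 17 = 6
Bob's public key: 3^13 mod 17 = 12
Alice computes the shared secret: 12^15 mod 17 = 10
Bob computes the shared secret: 6^13 mod 17 = 10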

This is a very basic explanation of the broad concept. Understanding each step involved here is vital before endeavouring to learn Diffie-Hellman in detail.

If you are struggling to understand this, have a look at Khan Academy's excellent video on the subject, presented by Brit Cruise.

Proprietary vs Open Source Network Software

I am currently working on a project investigating replacing proprietary technology with open source technology; the project is about 50% complete at the moment. I presented my initial findings earlier this week and I'm happy to say that they were well received. Below is a copy of the presentation. If anyone has anything to add to it, be it corrections, critique or any other feedback, then please feel free to email me at [email protected]

All feedback is welcome.

PS. Yes, the file type is Microsoft's .pptx, but this is due to WordPress not embedding .odp files correctly. (Incidentally, file type compatibility is one of the issues raised in the report.)

Download (PPTX, 587KB)

Modulation in Radio Transmission

Earlier this year I wrote a report identifying two methods of transmitting public broadcast radio in the UK. The report was designed to give a general overview and broad insight into the transmission and modulation techniques used in radio transmission. This post is based on that report; I removed some of the more complex mathematics, summarised some of the concepts and de-formalised the language somewhat in order to make it more readable.

There are a number of distinct methods and platforms for radio broadcast in the UK. The traditional method of broadcasting commercial or non-commercial radio via broadcast transmitters is still used today. AM is one of the earliest analogue radio broadcast techniques and is still used in the UK to this day; you can hear many a debate or sports broadcast ring out through the airwaves on AM radio. Frequency Modulation (FM) is used by amateur, local and national broadcasters. Some of the nation's favourite radio stations can be listened to on FM, and in fact FM is the most widely listened-to type of radio broadcast in the UK. For this reason this post will talk about FM radio.

While analogue radio still thrives in the UK, digital radio has made inroads. One common method is streaming via the internet, where the station can be received and listened to on a range of devices such as desktop PCs, phones and tablets. This post, however, will discuss modulation methods that can be used with Digital Audio Broadcasting (DAB) and Digital Radio Mondiale.

Transmitters

We will begin our look at broadcast radio with transmitters. In Claude Shannon's theory of communication he asserts that for communication to occur there are a number of requirements: an information source, a transmitter to send the information, a signal to carry the information, and a receiver to receive and decode the signal. Both FM and DAB radio require a transmitter to process, modulate and amplify the information signal before placing it on a carrier signal. The information signal becomes a component of the carrier signal, which is then placed onto the transport medium for broadcast; in the case of FM and DAB radio the transport medium is the air. When the signal is intercepted by a receiver it is demodulated and the information is retrieved and processed.

Broadcast transmitters can be found across the UK; they form a nationwide broadcast network that delivers both analogue and digital TV as well as radio broadcasts. The transmitters are equipped with a variety of antennae, including omnidirectional and directional antennae, depending on the specific requirements of each location. Also specific to the location is the power at which the FM radio signal is broadcast. Using the Wenvoe transmitter in South Wales as an example shows how power and antennae can change from station to station, with some stations broadcasting at a power of 250 kW and others at 125 kW. All of this will affect the range and attenuation (drop-off of power) properties of the broadcast. DAB radio is broadcast on a number of Band III VHF frequency blocks, which are also specific to the location of each tower.

Modulation

Modulation is the modification of one or more of the three fundamental frequency-domain parameters: amplitude (A), frequency (f) and phase (∅). When placing a digital or analogue baseband data signal onto an analogue signal, the signal it is placed onto is the analogue carrier wave: a high frequency signal in the form of a periodic waveform. This post will discuss FM radio (analogue data on an analogue signal) and DAB radio (digital data on an analogue signal).

Analogue Data on an Analogue Signal: The original signal is converted into an electrical signal via the use of a transducer. When broadcasting radio on unguided media a high frequency signal is required in order to achieve effective transmission; this is the carrier signal.

Digital Data on an Analogue Signal: Unguided media will only propagate analogue signals, so the digital data must first be converted from digital to analogue form. Digital data is a series of discrete voltage pulses, each pulse representing one bit of data. The digital data is processed by a modulator-demodulator and transformed into an analogue signal.

FM Broadcasting & Frequency Modulation

FM radio broadcasting commenced in the UK in 1955 and, as of 2014, operates on the licensed Very High Frequency (VHF) band range of 88.0 to 108 MHz of the radio spectrum. Stations are assigned a portion of this range in which they place a low frequency information signal. The information signal's data is music and voice in the range of 20 Hz to 15 kHz; human beings can hear frequencies in the range of 20 Hz to 20 kHz, with the spoken voice being in the 1000 Hz to 5000 Hz range. Despite FM radio capping the modulation frequency at 15 kHz, FM is still considered high fidelity. FM radio is named after the Frequency Modulation technique that it uses to process information signals. FM can be used for a number of purposes other than FM radio, including seismology, radar, electroencephalography and telemetry.

The high frequency carrier signal can be defined as a sinusoid with the following equation. Vc(t) is the voltage of the carrier wave at a given point in time, while Ac and fc are the carrier wave’s amplitude and base frequency respectively. This is a standard sine wave: a curve defined mathematically to describe oscillation in a smooth and repetitive manner.

Carrier Signal: Vc(t) = Ac sin(2πfc·t + Ø)

The low frequency information signal can be represented in a similar manner, although its values will generally produce a wave that does not have a smooth, repetitive oscillation.

Information Signal: Vm(t) = Am sin(2πfm·t + Ø)

The information signal is placed onto the carrier signal, becoming a component of the carrier wave. This wave is now referred to as the modulating wave, and a mathematical representation of it is given below. In the equation, Δf represents the peak frequency deviation: the difference between the maximum instantaneous frequency of the modulated signal and the carrier frequency. This means the instantaneous frequency swings between fc − Δf and fc + Δf, a range also known as the carrier swing. In UK FM broadcasting the peak deviation is limited to 75 kHz, which keeps each station within its allocation while still allowing sufficient loudness.

Modulating Signal: xM(t) = Ac sin(2π [fc + (Δf/Am)·Vm(t)]·t + Ø)
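To make that equation a little more concrete, here is a minimal Python/numpy sketch of frequency modulating a single audio tone onto a carrier. The carrier frequency, tone frequency and deviation are purely illustrative values chosen so the waveform is easy to plot; they are not real broadcast parameters.

import numpy as np

fs = 200_000                      # sample rate (Hz), an illustrative assumption
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal

Ac, fc = 1.0, 10_000              # carrier amplitude and frequency (illustrative)
Am, fm = 1.0, 1_000               # information signal amplitude and frequency
delta_f = 5_000                   # peak frequency deviation, Δf

information = Am * np.sin(2 * np.pi * fm * t)    # Vm(t)
carrier = Ac * np.sin(2 * np.pi * fc * t)        # Vc(t), shown for comparison

# Frequency modulation: the instantaneous frequency tracks fc + (Δf/Am)·Vm(t).
# In practice this is implemented by integrating the information signal into
# the phase, which the cumulative sum below approximates.
phase = 2 * np.pi * fc * t + 2 * np.pi * (delta_f / Am) * np.cumsum(information) / fs
modulated = Ac * np.sin(phase)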

Digital Radio

The UK has the largest network of digital radio transmitters in the world, with a total of 103 transmitters, 2 national DAB ensembles plus an additional 48 regional ensembles as of October 2014. These transmitters cover 90% of the UK population and are placed across the country, including the Scottish central belt. They transmit UHF, VHF and MF; DAB sits in the VHF range.

For the remaining 10% of the population, Digital Radio Mondiale (DRM) technology is being considered as a possible way of covering those areas. DRM makes use of the ranges traditionally used by AM radio. By using MPEG-4 codecs for audio compression, DRM can carry a higher number of channels with a higher quality of sound. DRM can make use of a number of bandwidths depending on the broadcaster’s specific requirements, ranging from 4.5 kHz for simulcasts to 100 kHz for DRM+. Both DAB and DRM make use of Orthogonal Frequency Division Multiplexing (OFDM) to encode digital data onto multiple analogue carrier waves, using a variety of modulation techniques.
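As a rough illustration of the OFDM idea, the following Python/numpy sketch places a block of digital symbols onto many orthogonal sub-carriers at once using an inverse FFT. The sub-carrier count, the QPSK mapping and the cyclic prefix length are assumptions chosen for readability; they are not the actual DAB or DRM parameters.

import numpy as np

n_subcarriers = 64     # illustrative, not the DAB figure
cyclic_prefix = 16     # guard interval length, also illustrative

# Map random bits to one QPSK symbol per sub-carrier.
bits = np.random.randint(0, 2, size=(n_subcarriers, 2))
symbols = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

# The inverse FFT converts the frequency-domain symbols into one time-domain
# OFDM symbol, so each data symbol effectively rides its own orthogonal carrier.
time_domain = np.fft.ifft(symbols)

# A cyclic prefix (a copy of the symbol's tail) guards against multipath echoes.
ofdm_symbol = np.concatenate([time_domain[-cyclic_prefix:], time_domain])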

Quadrature Amplitude Modulation

Quadrature Amplitude Modulation (QAM) is a modulation technique that is used along with the OFDM encoding mechanism for digital radio broadcasts. QAM can be used as either a digital or analogue modulation method; this post will discuss digital QAM. QAM makes use of two carrier waves, each out of phase with its corresponding wave by 90°, and it is this shift in phase that gives QAM its name. The carrier waves are keyed to represent digital data. By changing both the amplitude and phase of the carrier waves, QAM is essentially a combination of Amplitude Shift Keying (ASK) and Phase Shift Keying (PSK).
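As a sketch of what this looks like for digital data, the snippet below maps groups of four bits onto a 16-QAM constellation: two bits choose the in-phase amplitude and two choose the quadrature amplitude, so the combined carrier changes in both amplitude and phase. The Gray-coded levels and the 16-point constellation are illustrative assumptions rather than the parameters of any particular broadcast standard.

import numpy as np

def qam16_map(bits: np.ndarray) -> np.ndarray:
    """Map groups of 4 bits to 16-QAM constellation points (illustrative)."""
    bits = bits.reshape(-1, 4)
    levels = np.array([-3, -1, 3, 1])           # Gray-coded amplitude levels
    i = levels[bits[:, 0] * 2 + bits[:, 1]]     # in-phase (ASK-like) component
    q = levels[bits[:, 2] * 2 + bits[:, 3]]     # quadrature component, 90° shifted
    return (i + 1j * q) / np.sqrt(10)           # normalise average power to 1

symbols = qam16_map(np.random.randint(0, 2, 32))   # 32 bits -> 8 QAM symbols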

Phase Shift Keying

PSK represents digital data by modulating the phase of the carrier wave. BPSK is the simplest form of PSK and will be described here to give a general overview. BPSK uses two phases separated by 180° to represent one of two points, which is why it is also referred to as 2PSK. A BPSK transmitter works by taking a digital information signal, in which 0s and 1s are represented by 0 Volts and a positive voltage Eb(t) (+V), and converting it with a Level Converter (LC) into a signal represented by a negative voltage −Eb(t) (−V) and +V. This signal is sent to a Balanced Modulator (BM). Simultaneously the carrier signal is also being sent to the BM, via a buffer from a Carrier Oscillator (CO). The carrier signal is then combined with the signal from the LC. As the signal passes through the BM its phase is modulated, so that one voltage level is represented by a phase of 0° and the other by a phase of 180°. Finally the signal is passed through a bandpass filter. ASK uses a similar method, but instead of changing the phase of the modulating signal it changes the amplitude. BPSK is represented mathematically with the following equations:

Phase, binary 0: s0(t) = √(2Eb/Tb)·cos(2πfc·t + π) = −√(2Eb/Tb)·cos(2πfc·t)
Phase, binary 1: s1(t) = √(2Eb/Tb)·cos(2πfc·t)
Signal space basis: φ(t) = √(2/Tb)·cos(2πfc·t)
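Here is a minimal Python/numpy sketch of those two waveforms. Eb, Tb, fc and the sample rate are illustrative assumptions picked so the waveforms are easy to inspect; they are not real broadcast values.

import numpy as np

Eb, Tb = 1.0, 1e-3          # energy per bit and bit duration (illustrative)
fc = 10_000                 # carrier frequency (illustrative)
fs = 200_000                # sample rate
t = np.arange(0, Tb, 1 / fs)

basis = np.sqrt(2 / Tb) * np.cos(2 * np.pi * fc * t)   # φ(t), the signal space basis

s1 = np.sqrt(Eb) * basis    # binary 1:  +√(2Eb/Tb)·cos(2πfc·t)
s0 = -np.sqrt(Eb) * basis   # binary 0:  the same carrier shifted by 180°

def bpsk_modulate(data_bits):
    """Concatenate the appropriate waveform for each bit (a toy modulator)."""
    return np.concatenate([s1 if b else s0 for b in data_bits])

waveform = bpsk_modulate([1, 0, 1, 1, 0])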

This gives us a constellation with 2 symbols, each symbol representing 1 bit of information. As a result BPSK has a very low bit rate, but it does provide high error tolerance: the symbols are clearly defined and are therefore less susceptible to noise interference or any other phenomenon that may degrade the quality of the signal. Given the bit error rate (BER) of the link, the probability that a transmission contains at least one bit error can be expressed as: 1 − (1 − BER)^(bits in transmission)
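As a quick worked example of that formula, with an assumed BER of 10⁻⁶ and a 10,000-bit message, the chance of at least one bit error works out at roughly 1%:

ber = 1e-6          # assumed per-bit error rate
n_bits = 10_000     # assumed transmission length
p_any_error = 1 - (1 - ber) ** n_bits
print(f"{p_any_error:.4%}")   # ≈ 0.995%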

QPSK works in a similar manner to BPSK, but uses two extra phases to represent an additional two symbols, with each of the four symbols encoding 2 bits of information while using the same amount of bandwidth as BPSK. Alternatively, QPSK can maintain the same data rate as BPSK while using only half of the bandwidth.

This can be represented mathematically with the following equations:

To yield 4 phases 90° apart: (π/4, 3π/4, 5π/4, 7π/4)
sn(t) = √(2Es/Ts)·cos(2πfc·t + (2n − 1)π/4), n = 1, 2, 3, 4.
In-phase signal component:
φ1(t) = √(2/Ts)·cos(2πfc·t)
Quadrature component:
φ2(t) = √(2/Ts)·sin(2πfc·t)
This allows the constellation to have 4 signal space points:
(±√(Es/2), ±√(Es/2))
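The sketch below ties those equations together in Python/numpy: each pair of bits drives one of the two basis functions at ±√(Es/2), giving the four phases 90° apart and showing why QPSK behaves like two BPSK schemes in tandem. Es, Ts, fc and the sample rate are illustrative assumptions.

import numpy as np

Es, Ts = 1.0, 1e-3          # energy and duration per symbol (illustrative)
fc = 10_000                 # carrier frequency (illustrative)
fs = 200_000                # sample rate
t = np.arange(0, Ts, 1 / fs)

phi1 = np.sqrt(2 / Ts) * np.cos(2 * np.pi * fc * t)   # in-phase basis φ1(t)
phi2 = np.sqrt(2 / Ts) * np.sin(2 * np.pi * fc * t)   # quadrature basis φ2(t)

def qpsk_symbol(b1: int, b2: int) -> np.ndarray:
    """One QPSK symbol: each bit is a BPSK decision on its own basis function."""
    i = np.sqrt(Es / 2) * (1 if b1 else -1)
    q = np.sqrt(Es / 2) * (1 if b2 else -1)
    return i * phi1 + q * phi2

waveform = np.concatenate([qpsk_symbol(1, 0), qpsk_symbol(0, 0)])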

Round Up

Hopefully this post has given you some insight into the modulation that is passing through the airwaves; just think about it, the techniques described in this post are happening all around you in the space you currently occupy. If the maths got a bit indecipherable, I have included a round up below with some of the take home points of this post.

Modulation schemes used for radio broadcasting in the UK have remained relatively static; as the broadcasting technology around them has evolved, the actual modulation has not. This post covered FM broadcasting, and Frequency Modulation still works using the same basic principles that it used when broadcasting first commenced. The reason for this is that Frequency Modulation is still highly adequate for delivering high fidelity broadcasts to listeners. Evolving compression, encoding and error mitigation techniques have allowed more data to be sent through the same amount of bandwidth, while still using a Frequency Modulation scheme. As of 2014 FM radio is still the most listened to form of broadcast radio in the UK.

Digital radio broadcasting, a relatively new and ever evolving form of broadcasting, uses a variety of keying techniques to transform digital signals into modulated analogue signals for transmission. This post studied BPSK and then QPSK, using them as examples of how digital modulation works. It demonstrated via mathematical equations that QPSK is in fact two independent BPSK schemes running in tandem, and how it can make more efficient use of the same amount of bandwidth made available by licensing authorities.

The Imitation Game

Yesterday I went to see the new film about Alan Turing, The Imitation Game. Turing was a mathematician and computer science pioneer. During World War II he was instrumental in breaking the German Enigma machine’s cipher, allowing the allies to decode intercepted German communications. This was invaluable to the allied war effort, and is credited with ending the war years earlier and saving millions of lives.

Turing was a homosexual, which at the time was illegal in the UK, and he was involved in an incident that subsequently led to his conviction for gross indecency. Upon conviction he was given the option of going to prison or being chemically castrated with hormone therapy; he chose castration. Turing died in 1954. His death was more than likely suicide by cyanide poisoning, which was the finding of the official inquest, although others debate this, including his own mother, who thought his death was accidental.

Due to the secretive nature of his work, Turing’s achievements were never acknowledged, and were buried deep in the archives. The full extent of the part he played in Hitler’s downfall was not known until documents pertaining to it were declassified under the Official Secrets Act 50 year rule.

Although it appears as if the film has many historical inaccuracies in it, I personally thought it was excellent. What the film does do is assist in correcting a 65 year wrong, by bringing to the public’s attention the role Turing played in the war and his treatment after it.

I’ve included a few links with more information about Turing, including his 1936 paper “On Computable Numbers, With an Application to the Entscheidungsproblem” and his 1950 paper “Computing Machinery and Intelligence”. Section one of the latter is titled “The Imitation Game”.

The Turing Digital Archives

On Computable Numbers, With an Application to the Entscheidungsproblem

Computing Machinery and Intelligence