Category Archives: Linux

How to make Linux look good

I’ve been using the free Red Hat clone CentOS 7 with the Gnome desktop for a while now. It has proved to be an excellent distro for experimenting with all kinds of enterprise services and packages. Over the last nine months I have created a full virtual network running a number of CentOS instances that have been stripped down to the terminal to reduce the attack surface and make better use of the limited resources my laptop gives me. Each of the VMs has had at least one service running on it; I’ve had a 389 Directory Server, DNS and a Dovecot/SquirrelMail/Postfix email server, amongst other things, all running simultaneously on an entirely KVM-based platform.

CentOS isn’t just an excellent server platform, it is also an excellent workstation and day-to-day distro; my CentOS laptop has taken over from my Debian PC as my main device. In terms of practicality CentOS is an absolute masterpiece; in terms of looks, however, it doesn’t fare so well if you’re going to use it as your main operating system.

One thing that bugged me was the default Gnome theme and settings; personally I found them to be quite ugly and clunky. I persevered with them for a number of months, mainly because I was so busy with proper server configuration geekery that I didn’t have time to mess about with something that only affected aesthetics.

One day, however, in desperate need of something to procrastinate with, I decided to make my CentOS desktop environment a little prettier. This is a straightforward guide (I’m no graphic designer or user interface expert) but it should prove helpful nonetheless, especially for getting a baseline desktop environment that you can tweak to your heart’s content! Additionally, there are a few troubleshooting tips for issues I encountered along the way.

Step 1: Download and install CentOS 7 (I used full x86_64 with Gnome shell 3.8.4). The default theme will look as follows, practical…but kinda ugly.

[Screenshot: the default CentOS 7 Gnome desktop, before theming]

Step 2: Once CentOS is installed and updated, it is time to start configuring. There are a ton of themes available for Gnome; the theme I used was the Zukitwo theme, which I downloaded from GNOME-look.org.

Step 3: Install the following packages from the CentOS repositories. These packages allow shell extensions to be installed directly from the Firefox browser, and the tweak tool will let us install themes and tweak Gnome.

# sudo yum install gnome-shell-extension-common.noarch
# sudo yum install gnome-tweak-tool.noarch

Step 4: Icon packs can be downloaded to give the icons a nicer look. Numix have created a number of themes and icon packs; I used the free Numix Circle pack. Once downloaded, move the icons to /usr/share/icons.
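For example, something along these lines should work; the archive and folder names below are hypothetical, so adjust them to match whatever you actually downloaded:

## hypothetical archive and folder names - adjust to match your download ##
# unzip ~/Downloads/numix-circle-icon-theme.zip -d /tmp/numix
# sudo mv /tmp/numix/Numix-Circle /usr/share/icons/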

Step 5: In your home folder create a hidden directory (if one doesn’t already exist) called .themes. Remember to start the directory name with a dot. Once this is done, move the zipped theme from the download location into it. For example:

# mkdir /home/thomas/.themes/
# mv /home/thomas/Downloads/140562-Zukitwo.zip /home/thomas/.themes/

Step 6: Next, open the tweak tool we installed earlier and select ‘Shell extensions’ from the left pane. On this screen there are a number of switches; find the one that says ‘User themes’ and make sure it is on.

[Screenshot: the ‘User themes’ switch in the tweak tool]

Step 6.1: Staying in the tweak tool, select ‘Theme’ from the left pane, then from the ‘Shell theme’ menu select the theme’s zipped archive from the ~/.themes directory. Some people will recommend that you unzip the theme first; personally I didn’t, and I haven’t had any issue installing directly from the .zip.
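If you do prefer to unzip it first, a single unzip into the same directory does the job; this is simply the obvious command for the archive name used above:

# unzip /home/thomas/.themes/140562-Zukitwo.zip -d /home/thomas/.themes/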

[Screenshot: selecting the shell theme in the tweak tool]

Step 6.2: Finally from the ‘Icon Theme’ option select the icon pack (Numix in my case) from the drop down menu.

Step 7: This is where we encounter our first issue: the CentOS icon in the upper left of the screen is too large. This is a small and easy-to-fix issue, but it took me a while and much googling to figure it out.

[Screenshot: the oversized CentOS logo icon in the top panel]

Step 7.1: To fix this we need to edit the gnome-shell.css file, which can be found inside the theme’s zip archive at /home/thomas/.themes/140562-Zukitwo.zip.

Step 7.2: Now, there are a number of ways to edit this file, but the simplest is to browse directly to it in the file manager and open the zip with the archive manager.

Step 7.3: In the archive manager, search for ‘gnome-shell.css’ then click it to open it with your default text editor; in my case this is gedit.

Step 7.4: To find the bit of CSS code we are looking to edit, press Ctrl + F and search for ‘.panel-logo-icon’. If this code exists, edit it to read as follows; if it does not exist, simply add the code to the bottom of the file (make sure to use the correct braces {}).

.panel-logo-icon {
    padding-right: .4em;
    icon-size: 1em;
}

[Screenshot: editing gnome-shell.css in gedit]

Step 7.5: While you are in here there are a number of things that can be tweaked; it is worth googling around to see what can be done. Some ideas include making the top panel transparent (see the sketch below) or fiddling around with the colour schemes. The usual precautions should be taken when editing anything: document what you are doing and make backups before changing anything.
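As an illustration of the transparent panel idea, a rule along the following lines is the kind of thing to look for in gnome-shell.css; the #panel selector and the rgba value here are my assumption, so find your theme’s existing panel rule and adjust that rather than pasting this in blindly:

/* assumed selector - most Gnome Shell themes style the top bar via #panel */
#panel {
    background-color: rgba(0, 0, 0, 0.3); /* last value is the opacity */
}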

Step 8: The bar along the bottom of the desktop is called the window list; personally I find it quite clunky. There are a number of ways to remove it: it can be removed using extensions (more on them later), or simply by selecting the correct session at login. This can be done by selecting the cog icon next to the sign-in button on the logon page.

Step 9: Once the window list has been removed it may be difficult to move between windows. One solution is to use a minimised window list extension that places a drop-down menu in the top panel containing the window list. The other is to install a dock.

Step 10: The dock I used was Cairo Dock, a beautiful and functional dock that can be highly customised. CentOS does not have Cairo Dock in its standard repositories, but it is simple enough to download the RPM from here and install it with yum from the local file.
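Assuming the RPM ends up in your Downloads folder (the exact filename will depend on the version you grab), installing it from the local file looks something like this:

## filename is a placeholder - adjust to match the RPM you downloaded ##
# sudo yum localinstall ~/Downloads/cairo-dock-*.rpm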

Step 10.1: As mentioned in the previous step, Cairo Dock is highly customisable and there are too many options to go over here, but the Cairo Dock website has a handy guide on how to configure startup options here, and appearance and behaviour options here.

Step 11: Gnome supports shell extensions, self-contained add-ons which modify and tweak Gnome. Extensions can be installed via the extensions.gnome.org website: open it with Firefox, and the extensions package we installed at the start of this tutorial will allow for simple extension installation. These extensions can also be toggled on and off in the tweak tool.

Some of the extensions I used are the quit button and the minimised window list.

Step 12: Now that Gnome is fully configured, all that remains is to change the wallpaper to one that suits your style. This can be done simply by right-clicking the desktop and selecting the change wallpaper option. The wallpaper I used in this tutorial can be found here.

[Screenshot: the finished desktop]

[Screenshot: the application overview]

A Brief History of Proprietary and Open Source Software

Definition of Proprietary Software

The word ‘proprietary’ is defined by Oxford Dictionaries as “Relating to an owner or ownership” (Oxforddictionaries.com, 2014). In a 2004 report (updated in 2005) on the definition of proprietary software, The Linux Information Project (LINFO) explained that proprietary software “is software that is owned by an individual or a company (usually the one that developed it). There are almost always major restrictions on its use, and its source code is almost always kept secret.” (Linfo.org, 2014) The restrictions described by LINFO are what allow proprietary software to be used as commercial products. Companies that develop proprietary software, or buy the intellectual property rights to it, exert complete control over it; they maintain, update and fix bugs in house.

Most software demands that end users or organisations agree to a licence agreement. For a proprietary product this is an electronic contract that usually prohibits the reselling, copying of or profiting from the software. In many cases the licence only allows for the use of the software, not ownership of it. Licensing options can allow end users the use of proprietary components at no monetary charge, Adobe Flash being a common example of free-to-use proprietary software. (Adobe, 2014) Licensing for proprietary software can be complex, especially when purchased for enterprise environments. Microsoft is an example of a software vendor that offers an array of complex licensing structures. (Microsoft, 2014) The number of instances allowed, time limits, the physical locations in which the software can be used, and who can and cannot use it can all be tightly regulated by a proprietary software vendor.

Definition of Open Source Software

Open Source Software (OSS) is defined by the Open Source Initiative as “software that can be freely used, changed, and shared (in modified or unmodified form) by anyone. Open source software is made by many people, and distributed under licenses that comply with the Open Source Definition.” This definition is a list of ten requirements that software must comply with to be considered open source. In addition to the characteristics already listed, the Open Source Definition also ensures that OSS does not discriminate against persons or groups, or fields of endeavour, and is technology neutral. (Opensource.org, 2014) The full list is as follows:

1. Free Redistribution: Non-restrictive licence.
2. Source Code: Must include source code.
3. Derived Works: Must allow derived works.
4. Integrity of the Author’s Source Code: May require derived works to change from the original name.
5. No Discrimination Against Persons or Groups
6. No Discrimination Against Fields of Endeavour
7. Distribution of License: The licence must apply to everyone, without the need for a further licence.
8. Licence Must Not Be Specific to a Product: The licence must not be tied to a distribution.
9. License Must Not Restrict Other Software
10. License Must Be Technology-Neutral
(Opensource.org, 2014)

Andrew M. St. Laurent states in his book Understanding Open Source and Free Software Licensing that “The fundamental purpose of open source software licensing is to deny anybody the right to exclusively exploit a work” (St. Laurent, 2008). To this end there are a number of standard OSS licences that can be used when redistributing OSS. The most widely used OSS licence is the GNU General Public License (GPL) 2.0. (Blackducksoftware.com, 2014) The Open Source Initiative name the following licences as the main open source licences:

Apache License 2.0
BSD 3-Clause “New” or “Revised” license
BSD 2-Clause “Simplified” or “FreeBSD” license
GNU General Public License (GPL)
GNU Library or “Lesser” General Public License (LGPL)
MIT license
Mozilla Public License 2.0
Common Development and Distribution License
Eclipse Public License

The model of OSS used with licences such as the GNU GPL allows end users and organisations to forgo many of the complexities and costs involved with proprietary software. Additionally, as the source code is public, any individual can add features, improve stability, and correct bugs and security flaws. For much enterprise-level OSS there is the option to pay for support, an example being Red Hat’s support model. (Red Hat, 2014) OSS also generally has free support via documentation, IRC services, mailing lists and various other community-driven support services. (Debian.org, 2014)

History of UNIX and the Move to Open Source (GNU/Linux)

UNIX

In July 1974 Dennis M. Ritchie and Ken Thompson of AT&T Bell Laboratories published a white paper describing an interactive, multi-user Operating System (OS) called The UNIX Time-sharing System (UNIX). (Ritchie and Thompson, 1974) UNIX was robust and versatile; it was portable, so it could be used on a range of devices, and programs could be written and run on UNIX to carry out a vast array of tasks. Before UNIX most programs made use of punch cards that were used as the input for mainframe computers, which would then decode them and execute the program.

UNIX was a proprietary OS, but was developed with a spirit of openness. In 1979 Dennis Ritchie stated “What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing . . . (was) to encourage close communication.” (Ritchie, D. 1979) UNIX was licensed to a number of organisations who produced UNIX derivatives; one notable example was the University of California’s Berkeley Software Distribution (BSD), which, along with Bell Laboratories’ own System V, became one of the two main branches of UNIX variants.

Commercialisation and Standardisation

By the mid-eighties UNIX had been fully commercialised and there were many vendors offering their own UNIX derivatives, each of them effectively being a unique proprietary system. (Unix.org, 2014) In 1984 a collection of vendors formed the X/Open consortium with the aim of creating a series of standards allowing a degree of interoperability between the proprietary UNIX derivatives. The formation of the X/Open consortium would lead to the publishing of the Single UNIX Specification (SUS) collection of standards. (Love, 2013) Incorporated into the SUS family of standards were the POSIX (Portable Operating System Interface) standards. POSIX standardised a number of interfaces, including Application Programming Interfaces (APIs), how shells interface with the UNIX kernel, and various other OS utilities. (Standards.ieee.org, 2014) (Unix.org, 2014)

The SUS and POSIX standards laid out in the Institute of Electrical and Electronics Engineers (IEEE) standards, along with the commercialisation of UNIX, led to UNIX veering away from the spirit of openness that Dennis Ritchie had spoken about in 1979. (Negus and Bresnahan, 2012) The ever increasing restrictiveness and commercialisation of UNIX variants made UNIX OS’s less available. This contributed to the increased prominence of the free software community.

GNU

Free software has been part of modern computing almost since its inception; technology was developed in advanced research and development laboratories run by organisations such as universities, corporations and governments. Although much of this technology, including computer hardware and software, was developed under strict secrecy, a substantial portion of it was shared between academics and researchers. This allowed a greater pool of minds to contribute to improving the hardware and software. (Ceruzzi, 2003) It was in this spirit that movements dedicated to allowing users and organisations to use, study and modify free software arose. By 1983 Richard Stallman had become a leading proponent of free software, and on the 27th of September 1983 he announced the GNU Project. (Gnu.org, 2014)

Richard Stallman states that the GNU Project is primarily a political project. (Stallman, R. 2008) Its political ideology is that all software should be free, and the project set out to create a completely new OS free of any proprietary code or software. In 1984 the project began work on a Unix-like OS complete with “kernel, compilers, editors, text formatters, mail software, graphical interfaces, libraries, games and many other things”. (Gnu.org, 2014) The building of an entirely new Unix-like OS proved to be a complex task. UNIX and Unix-like OS’s are modular by design, and the GNU Project set about replacing each of the components one by one. Along with a small number of already existing free components, for example the X Window System, the OS took shape. (DiBona, Ockman and Stone, 1999) By 1992 the GNU Project had replaced all major components of UNIX in the GNU OS apart from the kernel. The GNU Project was developing a kernel called GNU Hurd (Gnu.org, 2013), but it was not yet stable as it was still in development. In 1991 the Linux kernel was published on Usenet, and it would soon become the de facto kernel for the GNU OS. (Gnu.org, 2014)

Linux

Linus Torvalds was a computer science student at the University of Helsinki in 1991, and as part of his studies he enrolled in a UNIX module. (Richardson, 1999) His participation in this module introduced Torvalds to UNIX, specifically the Digital Equipment Corporation’s (DEC) variant of UNIX called Ultrix. (Torvalds and Diamond, 2001) In order to continue his studies and to indulge his computer programming hobby at home, Torvalds purchased a PC and installed a Unix-like OS called Minix. Using Minix as its basis Torvalds began work on his own kernel; this kernel would later be named Linux. (Linuxfoundation.org, 2014) On the 26th of August 1991 Torvalds published a post on the comp.os.minix Usenet group announcing “I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) . . . I’d like any feedback on things people like/dislike in minix . . . I’d like to know what features most people would want.” (Torvalds, L.B 1991) The kernel had been designed around the Intel 386 and utilised many features specific to that CPU, which led Torvalds to believe that the kernel was not portable. But with assistance, ideas and code from the comp.os.minix community, the kernel was developed to add portability and new features. (comp.os.minix, 1991)

From September 1991 a number of iterations of Linux were released, and in March 1994 Linux 1.0 was released. In the intervening time the Linux community had grown substantially and, due to the incomplete nature of GNU Hurd, Linux had become the kernel of choice for the GNU OS. The Linux kernel allowed the GNU OS to be a full OS free of proprietary code, thus fulfilling the original vision set out by the GNU Project. (Negus and Bresnahan, 2012)

Torvalds distributed Linux under the GNU GPL, thus enabling individuals and groups to further develop Linux. The Linux kernel was used as the basis of many OS’s, each with their own unique configuration and bundled software packages; these would become known as distributions, often referred to as distros. Because Linux comprises only the kernel and many of the overlying software components come from GNU, many distributions are referred to as GNU/Linux. Today there are over six hundred Linux distributions. (Futurist.se, 2014)

Linux is considered both free software and OSS. GNU consider the word free to be defined as “freedom”, allowing a user complete freedom over the software. For example, Google’s Linux-based Android OS has its source code open and is therefore OSS, but it restricts users and developers to using certain components without being able to remove or modify them; therefore Android cannot be considered free software, despite it being licensed under the Apache 2.0 software license. (Gnu.org, 2014) (Gilbertson, 2010) Debian Linux, conversely, explicitly sets out to meet the OSS definition. (Debian.org, 2015) Linus Torvalds embraced this subtle difference, asserting that Open Source principles did not clash with commercialisation; in his keynote speech to the 2000 LinuxWorld Expo Torvalds stated “It is not the point of Linux to be uncommercial”. (Theregister.co.uk, 2000)

Commercialisation of Linux Support

Due to the diversity of individuals and groups developing Linux, a number of early distributions formed the platform for further distributions to be developed upon. Distributions such as Debian and Red Hat were two major platforms to form the basis for new distributions, the primary difference being the package management systems used by each distribution. (Packman.linux.is, 2014) A secondary difference was the target market: Debian was dedicated to providing a free OS with free software packages for all users, and as a Unix-like OS it is versatile and configurable, and is one of the most used distributions for the Apache web server. (Debian.org, 2014) Red Hat Linux was developed for enterprise environments. (Redhat.com, 2014) Red Hat Linux’s developer, Red Hat, pioneered the support model for OSS. Founded in 1993, the business steadily grew; in 1999 Red Hat floated on the New York Stock Exchange and set an all-time record for a technology IPO. (Redhat.com, 2014) In 2012 Red Hat became the first Linux-based company to break the billion US dollar mark for annual earnings. (Vaughan-Nichols, 2012) By 2014 they had diversified the range of products and services they offered, including their flagship Red Hat Enterprise Linux (RHEL), an enterprise-level OS that comes in a number of varieties and configurations for both servers and clients. Red Hat support large-scale enterprise networks; one of the software packages they support is Red Hat Directory Server (RHDS), a directory database that makes use of the Lightweight Directory Access Protocol (LDAP) to provide authentication, access control and other management features. (Redhat.com, 2014) Although Red Hat is open source, it does protect its software via the use of the Red Hat trademark, which restricts redistribution of Red Hat products. (Redhat.com, 2014) Despite this, and due to its open source nature, there are many open source and free alternatives to Red Hat’s enterprise-level software. For example the Fedora and CentOS distributions are both forked from Red Hat, and RHDS is used as the basis for 389 Directory Server. (Fedora, 2014)

Red Hat recorded revenue of 1,534.615 million US dollars (roughly 1.53 billion) in 2014. (Sec.gov, 2014) These figures make Red Hat the most successful Linux-based company, but they are not the only company to offer Linux-based software for free with a support and certification model; Canonical and Novell have also experienced success with similar business models. This support model is now over 20 years old. Peter Levine, a lecturer at both MIT and Stanford, argues that the support model is outdated, pointing out that Red Hat’s success is dwarfed by that of proprietary rival Microsoft, whose revenue in 2014 was 86.83 billion US dollars. (Microsoft.com, 2014) Levine argues that lack of investment, forked development and even the fact that the code is open are holding the Open Source community back from competing with major corporations such as Microsoft. (TechCrunch, 2014)

Commercialisation of Linux as a Service

Support is not the only way that Linux has been commercialised; many organisations, including some of the biggest names in the technology sector, use Linux as the backbone of both their internal and external infrastructure. Major corporations contribute to the Linux kernel: Microsoft, who have products on the market in direct competition with Linux, added 1% of the code in 2012. Dozens of other organisations are also on the list of contributors, and The Linux Foundation estimated in 2012 that 75% of contributors were being paid for their work. (The Linux Foundation, 2012)

The code Microsoft added to the kernel was driver software enabling Linux OS’s to have increased performance when used in Microsoft’s virtualisation products. (Microsoft, 2014) As the support business model for commercialising Linux plateaued, virtualisation was a leading technology in allowing a new business model to evolve: Linux as a Service. In 2007 Red Hat announced a new version of RHEL that allowed individuals and corporations to rent servers by the hour. The servers were not physical servers but virtual servers held in a remote location, colloquially known as cloud computing. The OS for the servers could be from a range of vendors, and included proprietary servers such as Windows Server, as well as UNIX and Linux servers; in the majority of cases they are installed on hardware running a Linux-based hypervisor. (Judge, P. 2007) This service was a joint enterprise with Amazon and formed the basis for Amazon Elastic Compute Cloud (Amazon EC2), now the world’s largest cloud service provider. (Darrow and Darrow, 2014) Amazon are not alone in offering Linux-based cloud services; many of their competitors offer similar services such as Infrastructure, Platform and Software as a Service (collectively known as XaaS). IBM, HP and Google are just a few examples.

OpenStack is the de facto standard cloud platform for enterprise environments, with hundreds of companies using it worldwide. OpenStack is an OSS stack that allows the deployment of service-based technologies. The OpenStack project is maintained by the OpenStack Foundation, which includes over 200 corporations such as AT&T, Red Hat and Canonical. (Openstack.org, 2014) The 2014 OpenStack survey clearly demonstrated that Linux-based technologies are dominating the XaaS sector: KVM and Xen are the most widely used hypervisors on OpenStack, and Open vSwitch and Linux bridge are the most used network drivers. 95% of organisations are using OpenStack to deploy Linux desktop OS’s as Platform as a Service (PaaS), with 40% deploying Ubuntu, 26% CentOS and 14% RHEL; various other distributions make up the rest. Microsoft Windows only accounted for 5% of the organisations surveyed. (OpenStack, 2014)

Linux is not only the dominant platform for XaaS; Linux-based technologies are used for a wide variety of computing solutions. According to the Linux Foundation, “Linux powers 98% of the world’s supercomputers, most of the servers powering the Internet, the majority of financial trades worldwide and tens of millions of Android mobile phones and consumer devices.” (Linuxfoundation.org, 2014)

Proprietary Network Technologies

Proprietary software (PS) designed for enterprise-scale networking is available for all major server platforms: UNIX, Windows and Linux all have PS packages to carry out networking tasks. This software ranges from standalone PS packages to products included as part of a server OS platform. Additionally, proprietary hardware can run a proprietary OS with a mix of OSS and PS installed on it. One example of this is Cisco’s router and switch products, which run the Cisco IOS OS and support a range of open and proprietary protocols and protocol extensions. (Cisco, 2014)

Microsoft as an Example

One of the largest vendors of proprietary server software is Microsoft, who in 2013 saw revenue from their Server and Tools division grow by 9% compared to the year before, to US$20,281,000,000. (Tanner Helland, 2013)(Microsoft, 2014) In 1993 Microsoft released Windows NT 3.1 Advanced Server (Theregister.co.uk, 2014); this was Microsoft’s first server-branded operating system and the first to use NT, which semi-officially stands for ‘New Technology’. NT would go on to form the basis for all of Microsoft’s client and server OS’s. At the heart of NT was a kernel that allowed Windows to be platform independent and which enabled software and hardware portability. (Zachary, 1994)

Microsoft began producing new variants and iterations of their server OS’s; as time went on, new features and proprietary versions of general network software were included. In 1994’s Windows NT 3.5 Advanced Server Microsoft included an implementation of DNS called Microsoft DNS (Richter, 1995), and a web server called Internet Information Services (IIS) was included as an option in version 3.51. (Microsoft, 1997) In 1999 Microsoft released Windows 2000, which had a number of server-branded variants that included Routing and Remote Access Services (RRAS), IPSec support, and a directory service called Active Directory (AD). (Technet.microsoft.com, 2014)

AD is an enterprise-level directory service that is one of the key components of a Windows Domain. Every Windows Server fulfilling the role of a Domain Controller (DC) has an up-to-date copy of the AD database. AD provides central administration and authentication for what it calls objects; objects can be device accounts, user accounts and groups. As with Microsoft’s other products, AD has evolved and new features and functions have been added. For example, Windows Server 2008 added the ability to have Read-Only Domain Controllers (RODCs); a RODC only holds a read-only copy of the Active Directory database and is designed to be used in locations where security may not be optimal. (Minasi, 2010) In Windows Server 2012 the ability to clone Domain Controllers and rapidly deploy virtual Domain Controllers, each with a copy of the AD database, was added. (Mackin and Thomas, 2014)

AD employs platform-independent standards and open source technologies. LDAP is an application layer protocol that allows AD to add and retrieve information from its directory. For authentication Microsoft extended the Kerberos authentication protocol; Microsoft’s extension to Kerberos was published as a memo by the Internet Engineering Task Force in Request for Comments (RFC) 4757. (IETF, 2014)

Vendor Lock-In

Microsoft pursues what is known as a vendor or proprietary lock-in strategy. This is achieved by producing a large amount of proprietary software, internet browser plugins, file types, Application Programming Interfaces, extensions and protocols. As a result, Microsoft has a rich and diverse ecosystem of enterprise products that are designed to work seamlessly with one another and that in many cases are difficult or impossible to use in a non-Microsoft environment. (Le Concurrentialiste, 2014) In a 1997 memo to Bill Gates, published in the 2002 European Commission report on Microsoft’s business practices, Microsoft’s C++ general manager Aaron Contorer praised the Windows API and how it had helped to lock independent software developers into using Microsoft products despite “our mistakes, our buggy drivers, our high TCO, our lack of a sexy vision at times, and many other difficulties”. He concluded his memo with “In short, without this exclusive franchise called the Windows API, we would have been dead a long time ago.” (Michael Parsons, 2004)

Microsoft are not alone in employing a lock-in strategy; many vendors of PS and services use a similar model to lock customers into their range of products. Cisco switches and routers support a range of networking protocols, many of them open standards, but on top of these Cisco also offer a range of proprietary protocols and protocol extensions that are only interoperable with other Cisco devices. In some cases a proprietary protocol is only interoperable with Cisco hardware running a specified minimum OS version; in order to deploy the proprietary protocol a customer may need to update the OS version on their hardware, or, if the hardware does not support the required OS version, upgrade the hardware itself. Examples include the Dynamic Trunking Protocol (DTP) (Cisco, 2014) and the VLAN Trunking Protocol (VTP).

Open Source and Proprietary Technology Comparison

It is perhaps not surprising that there is a diverse set of opinions on the subject of open source vs proprietary technology. Each has its proponents and its detractors, and it is also not surprising that many of them have vested interests. This is never more evident than when discussing the Total Cost of Ownership (TCO) of a Microsoft Windows setup versus a Linux setup. Red Hat commissioned what they described as an independent survey examining the TCO of RHEL and Windows Server IT infrastructure. Collecting data from 21 companies, the survey found that RHEL had a TCO that was 34% lower than that of an equivalent Windows Server setup. The survey included further statistics that shed favourable light on RHEL: compared to Windows Server, RHEL had 46% lower software costs, 41% lower staffing costs, and 64% less downtime. (Redhat, 2013)

Microsoft have themselves published papers pertaining to the TCO of running a Windows Server based domain. In a 2006 paper published by the corporation, they make reference to a survey carried out by the META Group which had found “that higher staffing costs for Linux-based solutions offset any potential upfront savings in acquisition costs relative to Windows Server”. The paper follows the theme of asserting that Windows Server offers lower TCO than Linux equivalents and provides a better return on investment than Linux. (Microsoft, 2006)

Finding non-partisan information can be difficult. For example, a 2008 report by Vital Wave Consulting found that Windows and Linux offer the same TCO in emerging markets; however, Vital Wave Consulting were commissioned by Microsoft to investigate and report on the subject. (zdnet, 2008) Conversely, a 2005 report commissioned by IBM put the TCO of a Linux server deployment at an estimated 40% less than that of Windows Server; at the time of the report IBM were involved in commercial tie-ups with open source vendors such as Red Hat and Novell. (PC Pro, 2014)

The Harbin Institute of Technology, a research university based in China with campuses in Harbin, Weihai and Shenzhen, published a paper in 2012 titled “Survey and comparison for Open and closed sources in cloud computing”, in which it concluded that in terms of cost open source technology offered better value, but that open source documentation is often inaccessible to novice users. (Nadir K. Salih, Tianyi Zang, 2012)

TCO can be a major factor when a business is making decisions, which perhaps provides some basis as to why finding independent information is difficult. Other areas, however, have had more impartial research carried out on them; one of these areas is security. Mikko Hypponen, an award-winning security researcher, gave an interview on cybercrime in 2010 in which he was asked to compare open source and proprietary software. He replied “The truth is that pretty much nobody looks at source code and tries to find bugs. In that way, the ‘theory of many eyes’ doesn’t work.” He continued by stating that the big difference was that only the proprietary software vendor can fix bugs in their software, whereas open source software can be fixed by anyone, which in general allows for security holes to be patched more quickly. (Technewsworld.com, 2014)
In an in-depth 2009 report on servers, InfoWorld suggested that the market was dividing into two distinct categories, Windows and Linux, quoting Jim Zemlin, executive director of the Linux Foundation, as saying “The key here is that really Linux and Windows are moving away from the pack here and it’s becoming a two-horse race”. The article also suggests that heterogeneous infrastructure was becoming standard, citing Red Hat marketing director Nick Carr, who states that Windows-based Exchange (email), SQL, file and print servers are common on RHEL infrastructure. Dr. Roy Schestowitz, a proponent of Linux, is also quoted as saying “Increasingly, such servers that run in mixed environments rely on virtualization”, in relation to Linux-based networks running Windows-based virtual machines. (Krill, 2014)

An article published on business technology website TechRadar Pro in 2014 by David Barker, technical director of 4D Data Centres, offered a balanced comparison between the two server platforms. The article puts forward that most system administrators are comfortable with both Windows and Linux and that deciding which server OS to use is need specific. Barker suggests that the intended life cycle of a server can be a critical factor, pointing out that Microsoft will end mainstream support for its Windows Server 2008 product. He goes on to state that if the server is on physical hardware it is likely that it would need to be replaced in this time frame anyway.

Barker echoes Dr. Schestowitz’s statement about virtualisation allowing for heterogeneous network environments by pointing out that Microsoft has partnered with open source organisations to enable Hyper-V management of open source nodes. Barker also echoes Red Hat’s Nick Carr in saying that Linux systems can co-exist with Microsoft systems. (Barker, 2014)

End Users and Changing Technology

An end user can be defined as any human that uses a computer; end users can range from system administrators to office typists. Each user has a set of requirements and it is the job of ICT to meet these needs; however, these needs must be met within the requirements of the organisation and budgetary restrictions. (Corbett et al., 2013) An organisation may choose to change its base technology for a number of reasons; for example, it may decide to go open source and replace proprietary technology, as the City of Munich did in a project called LiMux. (Linuxjournal.com, 2015) Peter Hofmann, who led the City of Munich’s LiMux project to switch to open source technology, stated that the main reasons for the switch were to save money and halt the ever increasing lock-in to Microsoft products. (Kent, 2013)

One issue that was never explicitly addressed in the LiMux project was the end user experience. Users were considered in the project plan, but only in calculations for retraining staff and the cost of technical support staff. (Saunders, 2014)

A 2014 report by Nick Heath of TechRepublic suggests that LiMux end user dissatisfaction with the change from a Windows-based OS to a Linux-based OS may have triggered a review of the project. This was denied by Munich City Council, although council spokesperson Stefan Hauf did concede that there had been negative feedback on certain aspects of the change to open source.
Hauf stated that “the primary gripe being a lack of compatibility between the odt document format used in OpenOffice and software used by external organisations. Munich had been hoping to ease some of these problems by moving all its OpenOffice users to LibreOffice”. (Heath, 2014) This compatibility issue appears, on the face of it, to be a symptom of the vendor lock-in the project was attempting to rid itself of. What must not be overlooked is the disgruntlement of the end user; this could lead to frustration and discourage end users from embracing the new technology.
The Practice of System Administration, published by Addison-Wesley in 2007, asserts that ICT is there to serve the needs of end users: ICT exists because of users and not vice versa. It tempers this somewhat by going on to assert that the ‘customer is always right’ attitude is also not correct.
The book proposes that system administrators must view end users as ‘business partners’, consulting them on any change that may be proposed before proceeding with it. With administrators and users working together, the needs of the organisation and the end users are best met. (Limoncelli, Hogan and Chalup, 2007)
Award-winning magazine NAWIC published an article by Fred Ode, the founder and CEO of Foundation Software. The article included five tips to avoid end user rejection of new technology, supporting The Practice of System Administration’s assertion that users must be included in the process. It also proposed that a number of factors relating directly to the end user should be considered when implementing change to the ICT infrastructure; these suggestions included considering the skill level of the end users and providing them with appropriate training. Ode suggests that the majority of users are in general resistant to change, with a small number being open to it; Ode says “The key is to identify innovators and early adopters and get them involved in the training process, so they can help excite and educate other users”. (F, Ode. 2008)

Virtualisation

Virtualisation is the creation in software of a simulation of a range of computing resources, either in part or in whole. This simulation can virtualise both hardware and software. (Servervirtualization, 2014) The origins of virtualisation date back to the late 1960s, when IBM multi-user mainframes employed virtualisation techniques on memory to allow for the efficient use of the mainframe’s resources when running multiple simultaneous users. (Docs.oracle.com, 2014) Over the next 30 years technologies including virtual memory, hypervisors and application virtualisation were invented and/or refined. (Everythingvm.com, 2014)

A paper published in 1974 by Gerald J. Popek and Robert P. Goldberg entitled Formal Requirements for Virtualizable Third Generation Architectures laid out a method to ascertain whether a (third generation) system architecture was capable of virtualisation. The paper described various VM concepts, describing a VM as “an efficient, isolated duplicate of a real machine.” (Popek and Goldberg, 1974) The methods described in the paper can still be used as a guideline for virtualisation requirements. Prof. Douglas Thain of the University of Notre Dame, Indiana, USA, described the paper as “the most important result in computer science ever to be persistently ignored”. Prof. Thain breaks the paper down into two basic concepts: sensitive instructions and privileged instructions. (Thain, 2010) The Popek and Goldberg paper describes what it terms a Virtual Machine Monitor (VMM); VMMs are now more commonly known as hypervisors. Hypervisors can be categorised into two broad categories, type 1 and type 2. (Popek and Goldberg, 1974) (Portnoy, 2012)

Type 1: Also known as Bare Metal and Native. Type 1 hypervisors are installed directly onto the underlying hardware. A basic micro-kernel usually sits below the hypervisor to interact with the physical hardware. The type 1 hypervisor manages and abstracts all hardware from the overlaying virtualised systems.
Type 2: Type 2 hypervisors are installed on to a conventional host OS as a program. Type 2 hypervisors are generally not used in scalable enterprise environments. (Portnoy, 2012)

In the late 1990s, VMware’s Dan Wire described “a revolution with virtualization”. What he was referring to was the founding of VMware in 1998 and the release of VMware Workstation. (Wire, D. 2013) VMware Workstation allowed for the running of a Virtual Machine (VM): a virtualised PC and OS running inside, and using the resources of, a physical host PC. VMware Workstation was not the first product on the market to allow for this (Connectix had implemented a similar system for the Mac with Virtual PC), but it was the first major commercially available product of this type. (Everythingvm.com, 2014)

As of 2015 VMware are the industry leader in enterprise virtualisation solutions. (VMWare, 2015) VMware’s main enterprise virtualisation product range is called vSphere, a collection of components that form a complete virtualisation platform allowing for the creation and management of VMs. The vSphere range of products is available in three tiers, with the lower tiers offering less functionality. (VMWare, 2015)

VMware have a number of competitors: Microsoft have a similar product range, tightly integrated with their Windows Server products, called Hyper-V (Finn, 2013), and Citrix have a range of products based around the open source Xen hypervisor. (Citrix.com, 2015) These are just two examples of competing enterprise-class hypervisor products that position themselves in the same market segment as VMware’s vSphere. (Paul, 2014)

The open source project KVM (Kernel-based Virtual Machine) is a free hypervisor that can form the basis of a full virtualisation platform running on a Linux-based system. KVM was originally developed by Qumranet, who were taken over by Red Hat; Red Hat now oversee the project. (Linux-kvm.org, 2015) KVM is a Linux kernel module that converts the system into a type 1 hypervisor. (IBM. 2015) This module was integrated into the mainline Linux kernel in 2007, and its ability to support virtualisation is dependent on compatible virtualisation extensions being present on the host CPU. (Linux-kvm.org, 2015)

KVM can be combined with other open source projects such as QEMU, which provides device emulation and user-space functionality, and libvirt, an API which provides a variety of tools such as management interfaces. (Libvirt.org, 2015) Together they form a feature-rich and efficient virtualisation platform.
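As a brief illustration of how these components fit together in practice, creating a KVM guest via libvirt’s command line tooling looks something like the sketch below; the VM name, sizes and ISO path are placeholders of my own rather than anything prescribed by the projects.

## virt-install (part of the libvirt tool set) asks libvirt to create and boot a KVM/QEMU guest ##
# sudo virt-install --name centos7-test --ram 2048 --vcpus 2 \
      --disk size=20 --cdrom ~/isos/CentOS-7-x86_64-Minimal.iso \
      --os-variant centos7.0

## the resulting guest can then be listed and controlled with virsh ##
# virsh list --all
# virsh start centos7-test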

KVM and VMware are two very different propositions. VMware fits the definition of a traditional type 1 hypervisor; KVM redefines this slightly with its integration directly into the host OS kernel. (Linux-kvm.org, 2015) Both offer a complete suite of enterprise-level functionality, but achieve their end goal in a different manner.

VMware is a homogeneous system; each component is designed to work seamlessly with the rest of the platform. The disadvantage of VMware is cost: functionality comes at a price. (Vmware.com, 2015) KVM, when combined with QEMU and libvirt, is heterogeneous: a wide variety of features can be installed and configured as and when needed, at no cost. It may not always be the case that each feature has been fully tested or is stable when integrated into the platform, and supporting the platform may require specialists or support contracts, which could offset the zero-cost benefits of the software. (Redhat.com, 2014)

Fedora 22 Released

Fedora’s relentless release schedule continues unabated with the release of Fedora 22.

Some of the new features and improvements are: a new notification system courtesy of an updated Gnome DE, general GUI improvements, and new and updated Gnome apps.

The server edition has new support for the container system Docker, with a collection of new images, and Cockpit gets cross-version support, providing all your management needs direct from the comfort of your web browser!

I haven’t tried it personally yet; I’m loving Fedora’s little cousin CentOS 7 at the moment and have just got my machine the way I want it after weeks of tinkering, so I am not quite ready to switch up my distro just yet! I’m in the process of building a live pen drive at the moment and I am looking forward to trying out some of the new and/or improved stuff.

[Image: the Fedora 22 product family]

Click here for full details on what is new.

Proprietary vs Open Source Network Software

I am currently working on a project investigating replacing proprietary technology with open source technology; the project is about 50% complete at the moment. I presented my initial findings earlier this week and I’m happy to say that they were well received. Below is a copy of the presentation; if anyone has anything to add to it, be it corrections, critique or any other feedback, then please feel free to email me at [email protected]

All feedback is welcome.

PS. Yes, the file type is Microsoft’s .pptx, but this is due to WordPress not embedding .odp files correctly. (Incidentally, file type compatibility is one of the issues raised in the report.)

Download (PPTX, 587KB)

Shellshocked: 2014 The Year of the Superbugs

Broken Windows

It was announced this week that a 19-year-old bug has been present in most of Microsoft’s Operating Systems (OS) dating back to Windows 95. The bug (in fact it appears to be a series of connected bugs) was present in server and client OS’s and was still present in Microsoft’s most recent efforts, Windows Server 2012 R2 and Windows 8.1. Not even the minimal, naturally hardened Server Core escaped its potentially fatal grasp. The flaw was in Schannel, Microsoft’s implementation of Secure Sockets Layer (SSL) and Transport Layer Security (TLS). It was uncovered by a team of IBM researchers known by the excellent superhero-esque handle of X-Force. X-Force’s Robert Freeman described what they had uncovered in a blog post on IBM’s Security Intelligence website.

In the post he highlights some of the take-home points of this threat: it has been around since Internet Explorer (IE) 3, it allows reliable execution of arbitrary code from a remote location, it sidesteps IE’s Enhanced Protected Mode, and even secure protocols such as HTTPS can be exploited with the proper know-how. When you step back and look at these points the severity of the flaw is plain to see, and it explains why the bug, now dubbed by some as WinShock, has been given the maximum CVE severity rating of 10. CVE-2014-6321 states that WinShock has a low level of complexity to exploit and that a massive amount of damage can be done with it. Being able to execute arbitrary code without authentication, and often with elevated privileges, is a massive problem; it effectively compromises every part of an affected system. The effects of this bug could have been devastating: if an unprotected system is exploited by the wrong person (or organisation) then it is effectively game over; data is compromised, systems are hijacked, nothing is safe. To Microsoft’s credit they released a fix for the issue in this week’s Patch Tuesday update, the same day that the vulnerability was made known to the public.

Heart Breaking

Amazingly, WinShock isn’t the first major security flaw discovered in 2014 in protocols designed to securely transport data across the network. In April, SSL and TLS were at fault again (it’s not clear if the WinShock bug is related) when the Heartbleed vulnerability was made public. Heartbleed compromised OpenSSL, one of the most widely used security transport libraries in the world, and the same period saw serious flaws exposed in GnuTLS and Apple’s Secure Transport. Untold numbers of systems were left wide open by WinShock and Heartbleed; if you have used a computer in the last few years you were almost certainly exposed to the undetected, hidden threat posed by these security flaws. All of this goes to undermine not only the integrity of our data, but the integrity of our privacy, safety and trust in the systems designed to keep us safe.

Bashful

The computing industry’s annus horribilis doesn’t stop with WinShock and Heartbleed. In September yet another vulnerability with a CVE severity rating of 10, affecting millions of computers and allowing arbitrary code to be run from remote locations, was made public. This time it was a 25-year-old vulnerability in the Bash shell (and its derivatives) that had left a gaping hole in its security. In fact it wasn’t just one flaw; by the end there were six published vulnerabilities relating to Bash.

Dubbed Shellshock, it exploited a feature that allowed function definitions to be exported through environment variables; arbitrary code could be placed after the function definition inside the variable, and when a new Bash process imported the environment the trailing code was executed. Shellshock was startling for a number of reasons: not only did it undermine the perceived security benefits of Linux systems, it was also very easy to exploit. The number of devices left vulnerable was staggering, from servers, to clients, to phones and even smart washing machines, fridges, TVs and other smart devices. Shellshock had the potential to cause massive amounts of catastrophic damage to an incredibly diverse and large array of systems.
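The widely circulated one-liner for testing whether a given Bash build was vulnerable illustrates just how simple the flaw was to trigger (run it only against your own shell; on a patched system it just prints the harmless test string):

## a vulnerable Bash imports the function definition and also runs the trailing code, printing "vulnerable" ##
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"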

Within hours of Shellshock being publicly disclosed there were detailed tutorials online on how to exploit the vulnerability, and it wasn’t long until reports of the bug being exploited began to appear in the media. There were tales of Romanian gangs and massive botnets running riot all over the internet. By late September security researchers at Incapsula reported seeing a rate of 725 attacks per hour relating directly to Shellshock.

What 2014 has taught us is that major security vulnerabilities have existed undetected for years, and these vulnerabilities have affected the entire gamut of computing. The free software community, the open source software community and proprietary software vendors have all seen major flaws in their software exposed. It raises a few questions: what else is out there that we don’t know about? What other bugs are lurking deep in the code of the software that is present on our computers, our internet, our corporate infrastructures, our national infrastructures and just about every connected device we have come to take for granted? What dangers are lurking just around the corner? With Heartbleed, WinShock and Shellshock we may have got off lightly; each of these flaws was recognised and fixed in an extremely timely manner, and the consequences could have been far worse if they had got into the wild before the good guys discovered them. That’s not to say that the consequences still may not be felt; they could just be in hibernation, backdoors waiting to be opened, time bombs ready to explode, and stolen or compromised data waiting to be exploited. Of course the doomsday scenario is an extreme one, but it is one that cannot be ignored.

Richard Stallman described Shellshock as just a “blip”. Hopefully he is right; hopefully all these bugs and others like them are just a series of blips, the inevitable consequence of the growing pains associated with the incredible pace of technological advancement and the complacency of not checking old code thoroughly when implementing it in new systems. We can only hope that these “blips” do not turn into a constant tone, a tone that could signify the flat-lining of people’s trust in modern computer networks.

UNIX, Beards and Orange Wallpaper

I am currently writing a dissertation about the move away from proprietary software, and while doing some research I re-discovered this little gem! It is a video that Bell Laboratories produced in 1982 about the UNIX operating system. It is a must watch, not only because it offers a great insight into the contemporary thinking of this little part of computing history, but also because it is a time capsule of early 80s retro geekery goodness. This video has it all: the jazzy music, the grainy film, the blocky graphics, the orange wallpaper and an impressive collection of beards. But if you’re not interested in beards it also has some footage of the then contemporary computers and X terminals. I’m not going to try and identify any of them because I will almost certainly be wrong, but if you recognise them, then please let me know.

The video also has Dennis Ritchie and Ken Thompson being interviewed (and pulling some excellent set-up-for-the-video poses). They published the original UNIX white paper, which I have included in this post. Have a look at it and you will see that many of the concepts survive in UNIX and Linux OS’s today. Dennis Ritchie also discusses the C programming language and its inception, so it may be of interest to any programmers out there as well.

The UNIX Time-Sharing Operating System by Dennis M. Ritchie and Ken Thompson. Bell Laboratories 1974

Just a quick update on the IPv6 series: I am delaying the rest of it until January. As I said, I am in the middle of a dissertation and that is taking up all of my free time at the moment; I am aiming to have most of it complete by early January. As soon as the dissertation is complete I will write up the rest of the IPv6 series.

100 Greatest Hacking Tools! (Link)

100 Greatest Hacking Tools!

I thought I would share this handy guide from EFYTimes to some of the best, most popular and widely used hacking and security tools. They have gathered together a list of 100 security tools and broken them down into different categories, so you can easily find the correct tool for the job. Conveniently they have also linked to each tool, so downloading them should be a breeze.

One of the tools they have on their list is the Metasploit Framework, which you can read about here: a very user-friendly security tool for exploiting security holes in software without too much effort. They also have a range of password crackers, wireless crackers and many more categories to keep even the most committed of you busy for a while. Whatever tool you decide to play about with, have fun with it, but most importantly don’t go getting yourself in trouble by carelessly breaking the law.

I haven’t forgotten about my series on Mobile IPv6; part 2 will be up in the next few weeks. If you haven’t read part 1 yet, you can do so here.

100 Greatest Hacking Tools! – EFYTimes


The Metasploit Framework

When it comes to penetration testing there are many applications available. Some can be used for footprinting and enumeration, others for gaining access to the network, and others for exploiting weaknesses in the network setup or less-than-secure code. The Metasploit Framework falls into the latter category. Developed by the Metasploit Project (now acquired by Rapid7), the Metasploit Framework is a tool that is used to run and develop exploits for penetration testing remote devices. The Metasploit Framework is open source and modular, allowing for the development of individual exploits; these exploits target a range of software and a range of operating systems, from the Windows family, to Linux/UNIX distros, to the iterations of Apple’s Mac OS X. There are various other free and commercial versions of Metasploit, including versions with GUIs and more advanced features. This guide, however, will be based on the standard Metasploit Framework Edition, which is one of Kali Linux’s built-in tools.

Various exploits with various payloads can be crafted to attack various patch versions of various software; as you can see, that is a lot of variables, so there is no guarantee that a given exploit will be successful against a given target.

This guide, however, should be successful; it uses a known exploit against a known target. The first thing you should do is set up a small virtual network running two VMs. I used VirtualBox, but if you would rather use different software it shouldn't make any difference. On the first VM install Kali Linux; this is the de facto Linux distro for penetration testing and it comes with a huge variety of tools, including the Metasploit Framework. On the second VM install Metasploitable (Download here), a custom made Linux VM designed for penetration testers to hone their craft. Once you have this set up, with both machines pinging each other, you are ready to go.
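
A quick connectivity check from the Kali VM might look like this (the address is simply the target address used throughout this guide):

# ping -c 3 10.0.1.20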

Step 1

The first step is to find a vulnerability that you can exploit. One of the best methods is to use nmap to scan for open ports and services that may present an open door. Nmap can be run in many modes with many options; some are stealthy and will avoid Intrusion Detection Systems, some are not so stealthy. For the purposes of this guide we are going to run nmap in a not so stealthy fashion, purely for demonstration. We know our target machine (as it is the only other device on our network), so we will target it directly and perform a scan that gives us a list of open ports, the services running and what patch level the software is at. It will also fingerprint the target OS and give an estimation of what OS is running (it does this based on the individual nuances built into the OS's TCP/IP stack).

As you can see in the screenshot below, we have discovered a range of services and the versions of each service.

## -v = verbose; -A = aggressive scan (OS detection, version detection, script scanning and traceroute) ##

# nmap -v -A 10.0.1.20

metasploit_step_1

Step 2

The next port of call is Google. Searching for exploits on the web will give an idea of potential security vulnerabilities in the target machine's software. Search for weaknesses in each individual service that you have discovered; you may find that you can get the same end result in a number of different ways, some a lot simpler than others. On our target machine you will see that it is running UnrealIRCd version 3.2.8.1. This is popular and widely used Internet Relay Chat software. After searching the web you will discover that this version has a flaw that, when exploited, can give an attacker root access to the Linux server running it.

Step 3

It is now time to move on to the Metasploit Framework. First, launch the tool. You will notice that the command prompt changes to the Metasploit Framework prompt. Once the console has been launched you can use the search feature to find built-in exploits; it does this by searching its database of exploit modules for the string of text you input, in this example 'unreal'.

This will return a list of modules that have 'unreal' in the title. You will find that it returns three exploits, two of which are for Unreal Tournament 2004; looking at the path you can tell there is one for Linux and one for Windows, and you will also see how they are ranked, with both being ranked as good. These are not relevant to the UnrealIRCd software, but the third one is. Examining the path shows that it is an exploit for UNIX systems, for the correct software and the correct version of that software; additionally you can see that this module is rated as excellent.

Using the info command followed by the path of the exploit will display a host of information about the module, including a description, licensing details, settings and links to references about the exploit.

root@kali:~# msfconsole
msf > search unreal
msf > info exploit/unix/irc/unreal_ircd_3281_backdoor

step_3

Step 4

Now that we are satisfied that we have discovered an exploit module for our target software and OS, it is time to launch the module. This is done with the use command followed by the path of the exploit. Once launched, the command prompt will change to the module path and you can use context commands for that module; the show options command will display remote host IP and port settings.

msf > use exploit/unix/irc/unreal_ircd_3281_backdoor
msf exploit (unreal_ircd_3281_backdoor) > show options

step_4

Step 5

Set the target IP address using the set RHOST command followed by the target machine's IP address. The target port will default to the UnrealIRCd default port of 6667; confirm from the information discovered with nmap that this is indeed the port being used by the service, and if not, use the set RPORT command to configure the target port.

msf exploit (unreal_ircd_3281_backdoor) > set RHOST 10.0.1.20
msf exploit (unreal_ircd_3281_backdoor) > set RPORT 6667

Step 6

The final step is to execute the exploit. This is done simply by using the exploit command. The screen will output information on the progress of the exploit, and once it is complete you should have access to the target machine as root; confirm this by running a root level command or by using the whoami command.

msf exploit (unreal_ircd_3281_backdoor) > exploit

step_6
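
As mentioned above, a quick way to confirm the access level once the session opens (assuming the exploit ran cleanly) is simply:

whoami

which should come back as root.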

Linux and IPv6 for the small business

This post will cover how Linux (UNIX and Unix-like) systems, and more specifically the computer network services and applications that run on them, use and integrate with Internet Protocol version 6 (IPv6). It will cover how a variety of IPv6 based network services can be easily configured for use in a small business.

Three network services will be covered: routing, the Domain Name System (DNS) and address allocation. Additionally, three server based applications providing email, printing and web serving will be covered, including how to configure IPv6 on a particular programme providing each of these services, what provision each service makes for IPv6 support, and what IPv6 provides for each of the services.

This won’t be an exhaustive list of all the services, or a detailed example of how to configure them, but it should give some idea of how simple it is to get IPv6 up and running.

Why IPv6?

IPv6 is the successor to IPv4 as the main network layer protocol used on the internet to provide addressing to interconnected nodes. IPv4 addresses are 32 bits, represented by four dotted decimal octets, providing just short of 4.3 billion unique addresses. This number of addresses proved to be inadequate and IPv4 addresses were eventually exhausted. To slow down this exhaustion a number of mechanisms were deployed, including private IP addresses that cannot be routed globally being used on Local Area Networks (LANs), with Network Address Translation (NAT) being used on the gateway interface. NAT is a system that allows multiple hosts on a local network to use private IPv4 addresses that are hidden behind one single public, globally routable IPv4 address.

Overview of IPv6

IPv6 addresses are 128 bits, represented by eight colon separated sets of four hexadecimal digits. Each set represents 16 bits, or a ‘word’. This allows for 3.4×10^38 unique addresses. These addresses are made up of two parts: the network prefix, defined by a given number of high order bits shared by all hosts on the subnet, and the remaining low order bits, which are unique for each host on the subnet.

IPv6 addresses have a number of different classifications depending on what range they are in. This range dictates whether they are global unicast (2000::/3), link-local unicast (fe80::/10) or multicast (ff00::/8) addresses. Additionally, various other formats and ranges of IPv6 address provide for dual stacking and compatibility with IPv4.

Below is an example of a globally routable unicast IPv6 address in the standard notation.

2001:0000:6188:28aa:c52d:67b9:0056:16ae

A single run of consecutive words with the value of zero can be condensed in the notation of an IPv6 address by replacing it with a double colon; additionally, any leading zeros within a word can be removed. This has the effect of condensing the example address above to:

2001::6188:28aa:c52d:67b9:56:16ae

IPv6 and Linux

Linux systems (a system can be anything from an end user PC, to a server, to a router or a switch) can provide for just about all enterprise network requirements; this post focuses on email, internet access, printer access, routing, DNS and interface address allocation. Application packages that provide these services can be installed on a Linux system, and once installed they can be configured with their IPv6 requirements. It is usually the case that configuration files can be found in the ‘/etc/’ directory, with logs that can be used for monitoring and troubleshooting being found in the ‘/var/log’ directory.

The first Linux kernel to have any IPv6 code in it was kernel 2.1.8, released in 1996. The Linux kernel is updated regularly and periodic updates to the IPv6 functionality of the kernel have been added. Linux kernels 2.6.x and above can be considered IPv6-ready.

Routing

Routing can be set up by an administrator in one of two general ways. One is to use static routes: routes that do not change and have to be manually configured. Static routes can be set with ‘ip -6’, simply by telling the routing table the destination prefix and the gateway for the network (see the example below). The other method is dynamic routing; this can be implemented by installing a routing package and running an IPv6 compatible routing protocol.
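
As a minimal sketch of the static approach (the prefix, gateway and interface below are just placeholders), a static IPv6 route can be added and then checked like this:

# ip -6 route add 2001:db8:2::/64 via 2001:db8:1::1 dev eth0
# ip -6 route show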

There are a number of routing packages that can be installed on a Linux system; one such package is Quagga. Quagga provides full support for the following IPv6 routing protocols: OSPFv3, RIPng and BGP-4. The Quagga package installs a core daemon called zebra, which acts as the abstraction layer between the kernel and the routing protocol daemons; these communicate with zebra via its Zserv interface (which listens on TCP port 2600 by default) and pass routing information through zebra to the kernel. This post will use Open Shortest Path First v3 (OSPFv3) as its example protocol. Its configuration files can be found in ‘/etc/quagga’.

An example of OSPFv3 configuration
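
Since I'm only aiming to show the general shape, here is a minimal sketch of what ‘/etc/quagga/ospf6d.conf’ might contain (the router-id, interface name and area are placeholders):

! /etc/quagga/ospf6d.conf
hostname ospf6d
password zebra
!
interface eth0
!
router ospf6
 router-id 0.0.0.1
 interface eth0 area 0.0.0.0
!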

An additional benefit of IPv6 is that packet fragmentation by routers is no longer a problem. With IPv4, if a router received a packet that exceeded the Maximum Transmission Unit (MTU) it would fragment the packet; with IPv6 the sending host uses a method called Path MTU Discovery, which ensures that the packets it sends do not exceed the MTU of the path.

DNS

DNS works with IPv6 in much the same way as it did with IPv4. To implement DNS you first have to install DNS software; the example in this post is BIND, as it is the most widely used DNS software on the internet. IPv6 host records are mapped in ‘AAAA’ records, which are used to resolve hostnames to IPv6 addresses.

AAAA Record
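
For example, a zone file entry mapping a hostname to an IPv6 address looks like this (the hostname and address are placeholders):

www    IN    AAAA    2001:db8::80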

BIND’s configuration files can be found in ‘/etc/bind’. BIND must be instructed to listen for IPv6 addresses in the ‘/etc/bind/named.conf’ file. BIND can be configured as a caching only server, which will resolve AAAA records by querying other name servers and cache any records it resolves. You can also use these files to configure BIND as a master DNS server.
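
As a rough sketch, enabling IPv6 listening is usually a one line change in the options block (on Debian systems this block lives in ‘/etc/bind/named.conf.options’, which is included from named.conf):

options {
    listen-on-v6 { any; };
};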

Address allocation

IPv6 interfaces can be automatically allocated Extended Unique Identifier (EUI-64) based link-local IPv6 addresses. These are non-routable addresses that are used to communicate on the local network segment, and they are configured automatically when an interface is placed in the up state using the ‘ifup <interface>’ command.

Link-local addresses are automatically generated with the prefix fe80::/64, a predefined range of non-public IPv6 addresses, which makes up the network portion of the address. The remaining 64 low order bits that make up the host portion are generated from the interface's 48 bit MAC address, with 16 additional bits, always set to the reserved value 0xfffe, injected after the 24th bit.

Additionally, EUI-64 can be used to build globally unique routable addresses. The 7th bit of the first octet is the Universal/Local (U/L) bit; when the MAC address is converted into the interface identifier this bit is inverted, so a value of one indicates a universally administered (globally unique) identifier. Whether the finished address is link-local or global depends on the prefix the interface receives, as described below.
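
A worked sketch with a made up MAC address shows how the pieces fit together:

MAC address:                 00:0c:29:3a:4b:5c
Insert ff:fe in the middle:  00:0c:29:ff:fe:3a:4b:5c
Invert the U/L bit:          02:0c:29:ff:fe:3a:4b:5c
Interface identifier:        020c:29ff:fe3a:4b5c
Link-local address:          fe80::20c:29ff:fe3a:4b5c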

radvd

To automatically configure a global address, a Router Advertisement Daemon (radvd) has to be configured on the gateway interface of the router. This will be configured with a 64 bit global prefix that it will issue to interfaces on its network, along with various other router advertisement parameters. These advertisements are sent out periodically; additionally, a host can request an address by sending a Router Solicitation message. The host portion (the low order 64 bits) is generated in the same way as described for link-local addressing.
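
A minimal ‘/etc/radvd.conf’ might look something like this (the interface name and prefix are placeholders):

interface eth0
{
    AdvSendAdvert on;
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};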

Another method to automatically issue IPv6 addresses is to use a DHCPv6 server. To implement DHCPv6 a DHCPv6 server application would need to be installed and configured with relevant network prefixes, and other interface options. The interfaces on the host machine would then need to be configured in the /etc/network/interfaces file (Debian) to request an address when put into the up state.
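
On a Debian host, requesting an address via DHCPv6 can be as simple as a stanza like this in ‘/etc/network/interfaces’ (the interface name is a placeholder):

iface eth0 inet6 dhcp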

Email

To implement a Linux based email server a number of software components need to be decided upon, installed and configured: Mail User Agents (MUA), the client side software that allows users to send and receive email; Mail Delivery Agents (MDA), the agents that deliver email to the user's inbox; and Mail Transport Agents (MTA), the agents that deliver mail from one device to another.
Each of these components has a number of software applications that provide its service. MTA applications include sendmail, qmail and postfix.

main.cf

Postfix introduced IPv6 support in version 2.2. Configuration files for postfix are found in ‘/etc/postfix’. The ‘main.cf’ file can be configured with which network protocols, interfaces and specific addresses to listen on. The inet_protocols setting takes a number of possible values: ‘all’ enables IPv4 and IPv6 if supported, ‘ipv4, ipv6’ enables both explicitly, and ‘ipv6’ enables only IPv6.
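
A sketch of the relevant lines in ‘/etc/postfix/main.cf’ (the values shown are just one sensible combination):

# use both IPv4 and IPv6 if the system supports them
inet_protocols = all
# listen on all configured network interfaces
inet_interfaces = all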

Web Serving

Web serving requires the installation of software. Linux has an array of web serving software, such as lighttpd and nginx, but this post will cover the world's most widely used web server: Apache.

Apache requires configuration to listen for IPv6. The directive ‘Listen [2001::6188:28aa:c52d:67b9:56:16ae]:80’ will instruct Apache to listen for HTTP requests on that stated address and port, and will only serve that single address. The directive ‘Listen 80’ (with no address) will instruct Apache to listen on port 80 for all IPv4 and IPv6 addresses.

Example of an IPv6 configured Virtual Host
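
A minimal sketch of such a virtual host (the server name and document root are placeholders):

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
</VirtualHost>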

The wildcard ‘*’ can also be used in virtual host configuration files to make them available to all IPv4 and IPv6 hosts; this can be configured in the files under ‘/etc/apache/sites-enabled/’.

Printing

CUPS is print server software that allows the management of print devices, and can be used to administer printer access. CUPS also has a wide variety of drivers available to support a wide range of print devices. CUPS has two methods of configuration, the first being via its web interface and the second being via the command line tool ‘lpadmin’.

Once installed, the CUPS configuration files can be found in ‘/etc/cups’. Allowing and denying hosts access to print devices can be configured in the ‘/etc/cups/cupsd.conf’ file.

lpadmin
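
As a rough sketch, adding a network printer over IPP using an IPv6 literal might look like this (the printer name, address and path are placeholders):

# lpadmin -p officeprinter -E -v "ipp://[2001:db8::10]/ipp/print"
# lpadmin -d officeprinter

The first command creates and enables the queue; the second sets it as the default destination.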

It is possible to configure network printer sharing without using CUPS by using the BSD lpr system; this allows for simple administration tasks such as managing print queues and assigning jobs.

Wrapping Up

In each section of this post IPv6 integration with a variety of systems was briefly covered. Many of these systems required the installation of software, and in many instances there was a wide variety of software applications providing each service. This post focused on the most widely used packages, such as Quagga, BIND, Postfix and Apache. Each of these packages has IPv6 support; additionally they are used extensively, and as such they have been well tested and documented. This makes them ideal for the first phase of a network switching from IPv4 to IPv6, or dual stacking IPv4 and IPv6.

IPv6 not only provides a vastly increased number of addresses over IPv4, it also has mechanisms in place that render some of the protocols IPv4 relied upon redundant or unnecessary. One of these protocols is DHCP: IPv6 can use DHCPv6 for automatic allocation, but as we have seen, EUI-64 addresses are built into the addressing architecture and require less administrative effort to configure and maintain.

For printing services we covered CUPS, supplemented with lpr commands; this provides a powerful mechanism for administering network printers. These are tried and tested systems that require minimal administrative effort while providing full print server functionality.

The amount of configuration required to enable IPv6 integration varies depending on which package you are configuring. Email, web serving and printing are relatively simple; the general pattern requires some kind of initial IPv6 activation, usually in the form of editing a configuration file stored under ‘/etc/’ so that the software package, and the service it provides, listens for and responds to IPv6 hosts. This is usually followed by configuring any IPv6 relevant files to apply the desired IPv6 functionality.