Shellshocked: 2014 The Year of the Superbugs

Broken Windows

It was announced this week that a 19-year-old bug has been present in most of Microsoft’s operating systems, dating back to Windows 95. The bug (in fact, it appears to be a series of connected bugs) was present in both server and client OSs and was still present in Microsoft’s most recent efforts, Windows Server 2012 R2 and Windows 8.1. Not even the minimal, naturally hardened Server Core escaped its potentially fatal grasp. The flaw was in Schannel, Microsoft’s implementation of Secure Sockets Layer (SSL) and Transport Layer Security (TLS). It was uncovered by a team of IBM researchers known by the excellent superhero-esque handle of X-Force. X-Force’s Robert Freeman described what they had uncovered in a blog post on IBM’s Security Intelligence website.

In the post he highlights some of the take-home points of this threat: it has been around since Internet Explorer (IE) 3, it allows reliable execution of arbitrary code from a remote location, it sidesteps IE’s Enhanced Protected Mode, and even secure protocols such as HTTPS can be exploited with the proper know-how. When you step back and look at these points, the severity of the flaw is plain to see, and it explains why the bug, now dubbed WinShock by some, has been given the maximum CVSS severity rating of 10. The entry for CVE-2014-6321 notes that the bug is of low complexity to exploit and that a massive amount of damage can be done with it. Being able to execute arbitrary code without authentication, and often with elevated privileges, is a massive problem: it effectively compromises every part of an affected system. The effects of this bug could have been devastating; if an unprotected system is exploited by the wrong person (or organisation) then it is effectively game over. Data is compromised, systems are hijacked, nothing is safe. To Microsoft’s credit, they released a fix in this week’s Patch Tuesday update, the same day that the vulnerability was made known to the public.

Heart Breaking

Amazingly, WinShock isn’t the first major security flaw discovered in 2014 in protocols designed to securely transport data across the network. In April, SSL and TLS were at fault again (it’s not clear whether the WinShock bug is related) when the Heartbleed vulnerability was made public. Heartbleed compromised OpenSSL, one of the most widely used security libraries in the world; 2014 also saw serious flaws exposed in GnuTLS and Apple’s Secure Transport. Untold numbers of systems were left wide open by WinShock and Heartbleed; if you have used a computer in the last few years you were almost certainly exposed to the undetected, hidden threat posed by these security flaws. All of this undermines not only the integrity of our data, but the integrity of our privacy, our safety and our trust in the systems designed to keep us safe.

Bashful

The computing industry’s annus horribilis doesn’t stop with WinShock and Heartbleed. In September yet another vulnerability with a maximum severity rating of 10, affecting millions of computers and allowing arbitrary code to be run from remote locations, was made public. This time the culprit was a 25-year-old vulnerability in the Bash shell (and its derivatives). In fact it wasn’t just one flaw; by the end there were six published vulnerabilities relating to Bash.

Dubbed Shellshock, it exploited a feature of Bash that allows function definitions to be exported to child shells via environment variables. Arbitrary commands could be placed after the function definition in such a variable, and when a new Bash process started and parsed its environment, those trailing commands were executed. Shellshock was startling for a number of reasons: not only did it undermine the perceived security benefits of Linux systems, it was also very easy to exploit. The number of devices left vulnerable was staggering, from servers to clients, phones, and even smart washing machines, fridges, TVs and other smart devices. Shellshock had the potential to cause catastrophic damage to an incredibly diverse and large array of systems.
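The flaw was easy to demonstrate from a shell. The classic one-line test below (widely circulated at the time of disclosure) sets an environment variable containing a function definition with trailing code, then starts a new Bash:

```shell
# On a vulnerable Bash this prints "vulnerable" before "this is a test";
# a patched Bash ignores the trailing code and prints only "this is a test".
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

Note that the attacker never has to "run" the injected command directly; simply getting a vulnerable Bash to start with that variable in its environment is enough, which is why CGI scripts and DHCP clients made such good attack vectors.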

Within hours of Shellshock being publicly disclosed there were detailed tutorials online on how to exploit the vulnerability, and it wasn’t long until reports of the bug being exploited began to appear in the media. There were tales of Romanian gangs and massive botnets running riot all over the internet. By late September, security researchers at Incapsula reported seeing a rate of 725 attacks per hour relating directly to Shellshock.

What 2014 has taught us is that major security vulnerabilities can exist undetected for years, and that these vulnerabilities affect the entire gamut of computing. The free software community, the open source community and proprietary software vendors have all seen major flaws in their software exposed. It raises a few questions: what else is out there that we don’t know about? What other bugs are lurking deep in the code of the software present on our computers, our internet, our corporate infrastructures, our national infrastructures and just about every connected device we have come to take for granted? What dangers are lurking just around the corner? With Heartbleed, WinShock and Shellshock we may have got off lightly; each of these flaws was recognised and fixed in an extremely timely manner, and the consequences could have been far worse had they got into the wild before the good guys discovered them. That’s not to say the consequences may not still be felt. They could just be in hibernation: backdoors waiting to be opened, time bombs ready to explode, and stolen or compromised data waiting to be exploited. Of course the doomsday scenario is an extreme one, but it is one that cannot be ignored.

Richard Stallman described Shellshock as just a “blip”. Hopefully he is right, and all these bugs and others like them are just a series of blips, the inevitable consequence of the growing pains associated with the incredible pace of technological advancement and the complacency of not checking old code thoroughly when implementing it in new systems. We can only hope that these “blips” do not turn into a constant tone, a tone that could signify the flatlining of people’s trust in modern computer networks.

UNIX, Beards and Orange Wallpaper

I am currently writing a dissertation about the move away from proprietary software, and while doing some research I re-discovered this little gem! It is a video that Bell Laboratories produced in 1982 about the UNIX operating system. It is a must-watch, not only because it offers a great insight into the contemporary thinking of this little part of computing history, but also because it is a time capsule of early-80s retro geekery goodness. This video has it all: the jazzy music, the grainy film, the blocky graphics, the orange wallpaper and an impressive collection of beards. But if you’re not interested in beards, it also has some footage of the then-contemporary computers and terminals. I’m not going to try to identify any of them because I will almost certainly be wrong, but if you recognise them, please let me know.

The video also features Dennis Ritchie and Ken Thompson being interviewed (and striking some excellent set-up-for-the-video poses). They published the original UNIX paper, which I have included in this post. Have a look at it and you will see that many of the concepts survive in UNIX and Linux OSs today. Dennis Ritchie also discusses the C programming language and its inception, so it may be of interest to any programmers out there as well.

The UNIX Time-Sharing Operating System by Dennis M. Ritchie and Ken Thompson. Bell Laboratories 1974

Just a quick update on the IPv6 series: I am delaying the rest of it until January. As I said, I am in the middle of a dissertation and that is taking up all of my free time at the moment, but I am aiming to have most of it complete by early January. As soon as the dissertation is done I will write up the rest of the IPv6 series.

100 Greatest Hacking Tools! (Link)


I thought I would share this handy guide from the EFYTimes covering some of the best, most popular and widely used hacking and security tools. They have gathered together a list of 100 security tools and broken them down into categories, so you can easily find the correct tool for the job. Conveniently, they have also linked to each tool, so downloading them should be a breeze.

One of the tools on their list is the Metasploit Framework, which you can read about here; it is a very user-friendly security tool for exploiting security holes in software without too much effort. They also have a range of password crackers, wireless crackers, plus many more categories to keep even the most committed of you busy for a while. Whatever tool you decide to play about with, have fun with it, but most importantly don’t go getting yourself into trouble by carelessly breaking the law.

I haven’t forgotten about my series on Mobile IPv6; part 2 will be up in the next few weeks. If you haven’t read part 1 yet, you can do so here.


Mobile IPv6 Part 1

Mobility in IPv6

One of my favourite protocols is IPv6, and this post is the first of a three-part series covering IPv6, or more specifically, mobility in IPv6. For me IPv6 is the hero of the network layer protocols, and it will soon become the main network protocol of the internet. IPv6 was developed by the Internet Engineering Task Force (IETF) and has its specification laid out in RFC 2460. This new version of the IP protocol was designed to make up for the folly of IPv4, whose finite pool of addresses is all but completely exhausted. IPv6 addresses are of course also finite, but there are vastly more of them than IPv4 addresses; there are so many that it is anticipated we may never run out. If you want to find out exactly how many IPv6 addresses there are and compare that with the total number of IPv4 addresses, have a look here.
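For a rough sense of the scale involved, the two address spaces can be compared with a quick bit of arithmetic (the python3 one-liner is used only because 2^128 overflows ordinary shell arithmetic):

```shell
# 32-bit IPv4 address space
echo "IPv4: $((2**32)) addresses"
# 128-bit IPv6 address space, roughly 3.4 x 10^38
python3 -c 'print(f"IPv6: {2**128:.1e} addresses")'
```

That is about 79 octillion times the size of the IPv4 pool, which is why address exhaustion is not expected to be a practical concern for IPv6.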

IPv6 didn’t just bring extra addressing capacity; it also brought a number of other improvements, including a simplified header, security improvements, and improved support for extensions and options.

In an earlier post I talked about IPv6 and how to enable and use it with common enterprise network technologies such as address allocation, DNS, email, web services and printing, if you want to get an overview of IPv6 and some of its uses then go and have a read here.

In this post I am going to concentrate on how IPv6 fits into today’s mobile world, a world where nodes can be anything from mobile phones to vehicles, sensors and many other varied devices, from the common to the uncommon and the normal to the bizarre. This mobile world and its vast array of devices are part of what is colloquially known as the Internet of Things.

It can be argued that the Internet of Things is merely the internet, or more precisely the extension of the internet beyond fixed, stationary networks with fixed, stationary nodes such as PCs, servers or printers. Traversing the internet today is data travelling from a range of mobile devices, probably the most visible being smartphones connected via technologies such as Wi-Fi and the mobile data networks run by the telecommunications companies, plus various other types of network that support mobile nodes. The number of mobile devices on the internet is far from saturation point; in the coming years we will see an increase in the number of nodes transmitting and receiving data while on the move.

This post, and the following parts are going to talk about Mobile IPv6 and a selection of the protocols that extend and support it. Before we get on to Mobile IPv6, let’s have a look at its predecessor, Mobile IPv4.

Mobile IPv4

When the IP protocols were developed they were designed to operate over wired media. Although IP is technically media independent, its addressing structure was designed with fixed networks in mind: stationary local networks with stationary nodes, and stationary wide area networks with stationary nodes. The system worked well; each network would have a fixed prefix and each node on the network would have a unique address from the range of that prefix. Around the last fifteen years of the 20th century, however, things began to change: wireless media were fast becoming a practical alternative to their wired cousins.

The network was evolving, and the IP protocols had to evolve with it. In 2002, the Internet Engineering Task Force’s (IETF) RFC 3344 laid out the specification for IP mobility support in IPv4. It described a process allowing an IPv4 node to travel from one network to another, managing the mobility of the node and its handoff between networks, all while maintaining a connection with a Correspondent Node (CN). This was revised and improved in RFC 5944. The specification allowed for location-independent routing of IP packets on the internet to a roaming Mobile Node (MN), and it introduced the Home Address (HoA) and Care-of Address (CoA); in simple terms, the HoA deals with the end-to-end communication and the CoA deals with routing the data to and from the MN. The basic premise is that a node is issued an IPv4 HoA by a Home Agent (HA). When the node roams into a foreign network, it sends out a solicitation message looking for a Foreign Agent (FA). The FA replies to the solicitation with an advertisement; when this is accepted, the node is issued a second IPv4 address by the FA. This second address is the CoA.

So now the node is in a foreign network with two IPv4 addresses, a HoA and a CoA. The next step is to send a Registration Request (RegReq) message to its HA; when the HA receives this request it replies with a Registration Reply (RegReply) message. Once this process is complete the two addresses are linked on the HA, and the MN is able to communicate with other devices on the internet. Let’s say there is a PC wanting to send data to the MN. How exactly would it find the mobile device? We have two devices: a mobile phone roaming around, our MN, and a PC, our CN. When the PC sends data, it sends it to the HA; the HA looks up its database to see what address it has linked with the MN’s HoA and then forwards the data through a tunnel to the CoA, delivering it to the MN. When the MN wants to reply, this process is reversed. So far so good, but this process is designed to work with IPv4, and as we already know, IPv4 is no longer a viable addressing scheme for the long-term sustainability of the internet.
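The HA’s bookkeeping boils down to a very small lookup table. The sketch below is a toy model of the registration and forwarding steps just described, using made-up documentation addresses rather than real Mobile IP code:

```shell
python3 - <<'EOF'
# Toy model of a Home Agent's binding table (illustrative addresses only)
bindings = {}

def registration_request(hoa, coa):
    """MN asks the HA to bind its HoA to its current CoA."""
    bindings[hoa] = coa
    return "RegReply: accepted"

def forward_to_mn(hoa, payload):
    """HA tunnels traffic addressed to the HoA out to the bound CoA."""
    return f"tunnel to {bindings[hoa]}: {payload}"

print(registration_request("192.0.2.10", "198.51.100.7"))  # MN roams
print(forward_to_mn("192.0.2.10", "data from the CN"))     # CN's packet arrives
EOF
```

The real protocol adds authentication, lifetimes and tunnelling details on top, but the HoA-to-CoA mapping is the heart of it.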

Mobility Support in IPv6

IPv6 also requires mobility, and has its own set of extensions and support protocols that allow the saviour of the internet to be mobile, fast and efficient. In this section I begin to cover them. Let’s start with mobility management.

Mobility Management

The role of mobility management is to locate MNs and maintain connections to them during the handover from one network to another. Different systems, such as Wi-Fi or telecommunications networks like GSM, 3G, 4G and so on, use different mobility management schemes. They can be broken down into two broad groups. The first is horizontal mobility: intra-system mobility, dealing with handoffs within a homogeneous system. The other is vertical mobility: inter-system mobility, with handovers taking place between two heterogeneous systems. Horizontal mobility can place much of the work on lower-layer mechanisms or on transport protocols such as the Stream Control Transmission Protocol (SCTP). Vertical mobility, however, in many cases relies on layer 3 IP protocols, although higher-layer protocols such as the Session Initiation Protocol (SIP) can be used in some scenarios. It is at layer 3 of the TCP/IP protocol stack that we find the Mobile IPv6 family of protocols.

Mobile IPv6

Mobility in IPv6 works differently from mobility in IPv4: the agent advertisement is replaced by IPv6’s Neighbour Discovery function, and there is no longer a requirement for an FA. Address allocation similarly uses IPv6’s built-in auto-configuration, although a DHCPv6 server can also be used. The RegReq and RegReply messages are gone, replaced with Binding Updates (BU) and Binding Acknowledgements (BA).

So how do all these differences change the way mobility works in IPv6? I thought you would never ask.

A mobile node is powered on, ready for a day exploring the big bad world. Its first port of call is to acquire an address; as I said before, two methods for this are auto-configuration and a DHCPv6 server. If you would like to read about that process in more detail, have a look at this blog post I wrote a while ago.

Once our MN has a topologically correct IPv6 address it is ready to start communicating with other nodes; this address is the HoA. When the device leaves its home network and travels into a foreign network, mobility is required. On entering the foreign network it configures a second topologically correct address for that network, in the same way it did for its HoA. This new address is the CoA. The node now needs to bind these two addresses together on the HA, an agent that belongs to the node’s original network. The node sends a BU to the home agent; the HA then performs Duplicate Address Detection (DAD), and if there is no duplicate it binds the HoA and the CoA in its database and replies to the node with a BA.

Now when a CN wants to send data to the MN, it does so in keeping with the IPv6 protocol: it encapsulates the packets within an IPv6 header. The source address of these packets belongs to the CN, but the destination address is not the MN’s address; it is the address of the HA. When the HA receives these packets it encapsulates them with an additional IPv6 header; in this second, outer header the source address belongs to the HA and the destination address is the MN’s CoA, and the packets are then routed directly to the MN. When the MN receives these packets it first decapsulates the outer IPv6 header, then processes the inner one. This makes the entire mobility scheme completely transparent to the upper-layer applications, allowing them to have a conversation with the CN as if the mobility didn’t exist.
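The tunnelling step can be pictured as one packet nested inside another. This toy sketch (illustrative 2001:db8 documentation addresses, not a real packet encoder) models the HA wrapping the CN’s packet in a second IPv6 header addressed to the CoA:

```shell
python3 - <<'EOF'
# Toy model of IPv6-in-IPv6 encapsulation at the Home Agent
inner = {"src": "2001:db8:cafe::1",     # the CN
         "dst": "2001:db8:home::10",    # the MN's HoA
         "payload": "application data"}

# The HA wraps the packet: the outer header runs from the HA to the CoA
outer = {"src": "2001:db8:home::1",     # the HA
         "dst": "2001:db8:away::10",    # the MN's CoA
         "payload": inner}

print(outer["dst"])             # the packet is routed on the CoA...
print(outer["payload"]["dst"])  # ...but the application still sees the HoA
EOF
```

Stripping the outer dictionary is the MN’s decapsulation step; everything above the network layer only ever sees the inner addresses.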

That will do it for part one. Here we have covered the basics of mobility in both IPv4 and IPv6; in part two we will delve into mobility in IPv6 in a little more detail and flesh out the process described above, covering Route Optimisation, Hierarchical Mobile IPv6 (HMIPv6) and Fast Handovers for Mobile IPv6 (FMIPv6). In part three we will look at Media Independent Handover (MIH), Network Mobility (NEMO), which provides mobility not for a single IPv6 node but for an entire IPv6 network, and Proxy Mobile IPv6 (PMIPv6), before wrapping up the trilogy of posts about mobility in IPv6.

See you in Part two.

The Metasploit Framework

When it comes to penetration testing there are many applications available. Some can be used for footprinting and enumeration, others for gaining access to the network, and others for exploiting weaknesses in the network setup or in less-than-secure code. The Metasploit Framework falls into the latter category. Developed by the Metasploit Project (now acquired by Rapid7), the Metasploit Framework is a tool used to develop and run exploits for penetration testing remote devices. The Metasploit Framework is open source and modular, allowing for the development of individual exploits; these exploits target a range of software on a range of operating systems, from the Windows family to Linux/UNIX distros and the iterations of Apple’s Mac OS X. There are various other free and commercial versions of Metasploit, including versions with GUIs and more advanced features. This guide, however, will be based on the standard Metasploit Framework Edition, which is one of Kali Linux’s built-in tools.

Various exploits with various payloads can be crafted to attack various patch versions of various software. As you can see, that is a lot of variables, so there is no guarantee a given exploit will be successful on a given target.

This guide, however, should be successful; it is a known exploit on a known target. The first thing to do is set up a small virtual network running two VMs. I used VirtualBox, but if you would rather use different software it shouldn’t make any difference. On the first VM install Kali Linux; this is the de facto Linux distro for penetration testing, and it comes with a huge variety of tools including the Metasploit Framework. On the second VM install Metasploitable (download here), a custom-made Linux VM designed for penetration testers to hone their craft. Once you have this set up, with both machines pinging each other, you are ready to go.

Step 1

The first step is to find a vulnerability that you can exploit. One of the best methods is to use nmap to scan for open ports and services that may present an open door. Nmap can be run in many modes with many options; some are stealthy and will avoid Intrusion Detection Systems, some are not so stealthy. For the purposes of this guide we are going to run nmap in a not-so-stealthy fashion, purely for demonstration. We know our target machine (it is the only other device on our network), so we will target it directly and perform a scan that gives us a list of open ports, the services running and the patch level of the software; it will also fingerprint the target OS and give an estimate of what OS is running (it does this based on the individual nuances built into each OS’s TCP/IP stack).

As you can see in the screenshot below, we have discovered a range of services and the versions of each service.

## -v = verbose; -A = aggressive scan (OS detection, version detection, script scanning and traceroute) ##

# nmap -v -A 10.0.1.20

metasploit_step_1

Step 2

The next port of call is Google. Searching the web for exploits will give an idea of the potential security vulnerabilities in the target machine’s software. Search for weaknesses in each individual service you have discovered; you may find that you can get the same end result in a number of different ways, some a lot simpler than others. On our target machine you will see that it is running UnrealIRCd version 3.2.8.1, popular and widely used Internet Relay Chat server software. After searching the web you will discover that this version has a flaw that, when exploited, can give an attacker root access to the Linux server running it.

Step 3

It is now time to move on to the Metasploit Framework. First, launch the tool; you will notice that the command prompt changes to the Metasploit Framework prompt. Once the console has been launched you can use the search feature to find built-in exploits; it does this by searching its database of exploit modules for the string of text you input. In this example: ‘unreal’.

This will return a list of modules that have ‘unreal’ in the title. You will find that it returns three exploits, two of which are for Unreal Tournament 2004; looking at the path you can tell there is one for Linux and one for Windows. You will also see how they are ranked, with both being ranked as good. These are not relevant to the UnrealIRCd software, but the third one is. Examining its path shows that it is an exploit for UNIX systems, for the correct software and the correct version of that software; additionally, you can see that this module is rated as excellent.

Using the info command followed by the path of the exploit will display a host of information about the module, including a description, licensing details, settings and links to references about the exploit.

root@kali:~# msfconsole
msf > search unreal
msf > info exploit/unix/irc/unreal_ircd_3281_backdoor

step_3

Step 4

Now that we are satisfied we have discovered an exploit module for our target software and OS, it is time to launch the module. This is done with the use command followed by the path of the exploit. Once launched, the command prompt changes to the module path and you can use context commands for that module; the show options command will display remote host IP and port settings.

msf > use exploit/unix/irc/unreal_ircd_3281_backdoor
msf exploit (unreal_ircd_3281_backdoor) > show options

step_4

Step 5

Set the target IP address using the set RHOST command followed by the target machine’s IP address. The target port will be set to the UnrealIRCd default port of 6667; confirm from the information discovered with nmap that this is indeed the port being used by the service, and if not, use the set RPORT command to configure the target port.

msf exploit (unreal_ircd_3281_backdoor) > set RHOST 10.0.1.20
msf exploit (unreal_ircd_3281_backdoor) > set RPORT 6667

Step 6

The final step is to execute the exploit. This is done simply by using the exploit command. The screen will output information on the workings of the exploit, and once it is complete you should have access to the target machine as root; confirm this by running a root-level command or by using the whoami command.

msf exploit (unreal_ircd_3281_backdoor) > exploit

step_6

A bit about TCP

In this post I am going to talk about one of the celebrities of the protocol world: Transmission Control Protocol, or as it is known to its friends, TCP. TCP is one of the internet’s big hitters; along with its layer 3 cousin IP, it lends its name to the TCP/IP protocol suite, a collection of standardised protocols commonly used on the internet. TCP lives in the transport layer (layer 4) of the OSI model and as such is a transport protocol.

TCP provides reliability for the data being sent over the network. A lot of core internet technologies make use of TCP to ensure that all the data they send is received intact and free of errors; TCP does this independently and hidden from the higher-layer application making use of it, be it POP3 or IMAP in email communications, or FTP in file transfers. For example, if you are browsing the web and wish to view a web page, your browser will make an HTTP request (HTTP being a layer 7 application protocol for requesting resources, most commonly HTML files) to the website’s host server. If just a single bit is missing from the requested HTML file then the file will be corrupt, rendering it unintelligible to the web browser. TCP provides reliability and error checking to ensure that every bit of the requested file is received intact by the requesting browser, and it does so completely transparently; for all the browser knows, TCP does not even exist.

TCP Header

TCP is a connection-oriented protocol, meaning the two devices in a conversation must establish a connection before they can reliably send data to each other. Think of it as a phone call: if you call someone, you won’t start speaking until the person at the other end has answered and confirmed the connection by saying “Hello”.

When data is received from the higher-layer protocols, TCP splits it into chunks and gives each chunk a TCP header; the chunks are now known as TCP segments. The header includes a sequence number for each segment so that all the data can be reassembled in the correct order on the receiving side of the transmission. Sequence numbers not only allow the data to be reconstructed correctly, they also assist with reliability. But before we get to that, we first have to establish a reliable connection between the two devices.

Three-way Handshake

This connection is established using what is known as the three-way handshake. The device initiating the connection transmits a SYN segment; this synchronises the sequence numbers and specifies the Initial Sequence Number (ISN), from which the sequence numbers of the segments that follow are counted. The receiver then replies with a SYN-ACK segment, acknowledging the request to establish a connection from the initiating device. The third segment, which completes the three-way handshake, is an ACK (acknowledgement) sent by the initiator.
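You can see the handshake in action (indirectly) from ordinary socket code: the operating system performs the SYN, SYN-ACK, ACK exchange inside the connect() and accept() calls. A minimal loopback sketch:

```shell
python3 - <<'EOF'
# Minimal loopback TCP connection; the kernel performs the three-way
# handshake inside connect() and accept().
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()   # returns once the handshake completes
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()
cli = socket.create_connection(("127.0.0.1", port))  # SYN is sent here
print(cli.recv(5).decode())  # prints "hello"
cli.close()
t.join()
srv.close()
EOF
```

A packet capture tool such as Wireshark run against this script would show the three handshake segments before any application data appears.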

Using a cumulative acknowledgement scheme, the receiver knows what sequence number it expects to receive next; if it does not receive the number it expects, it will ask for that segment to be retransmitted. Additionally, if a segment is retransmitted and received twice, the sequence numbers allow the duplicate to be discarded.

The TCP header also contains a number of other fields. One is dedicated to error detection: the checksum field, a small hash that checks for and detects errors in the segment. There are also source and destination port numbers; applications use ports to communicate, so for example HTTP traffic is commonly received on port 80.
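The checksum in question is the Internet checksum defined in RFC 1071: the data is summed as 16-bit words, any carries are folded back in, and the one’s complement of the result is taken. The sketch below implements it and checks it against the well-known worked IPv4-header example (the same algorithm TCP uses, though TCP additionally sums a pseudo-header):

```shell
python3 - <<'EOF'
# RFC 1071 Internet checksum (used by TCP, UDP and the IPv4 header)
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                # pad odd-length data with a zero byte
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:               # fold the carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF           # one's complement

# A sample IPv4 header with its checksum field zeroed out
header = bytes.fromhex("450000730000400040110000"
                       "c0a80001c0a800c7")
print(f"{internet_checksum(header):04x}")  # prints b861
EOF
```

The receiver runs the same sum over the received segment; if anything was corrupted in transit the result will not match and the segment is discarded.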

TCP also has mechanisms to control the flow of segments, preventing a receiving device that cannot process TCP segments as fast as its corresponding device from being overwhelmed. It does this by implementing the sliding window system, in which the receiving device tells the sender how much information it can buffer; the sender then only sends the amount of data the receiver can process in a timely fashion, allowing the conversation to proceed smoothly.

It is thanks to features like those outlined above that TCP is the most used reliable transport protocol on the internet. But this post is not a complete list of TCP’s features and benefits; several others are built into the protocol, including congestion control, to avoid a drop in network performance, and maximum segment size, which specifies the size of each segment sent, along with a number of other features that can be read about in detail in the Internet Engineering Task Force’s (IETF) RFC standards that specify the exact operation of TCP.

Linux and IPv6 for the small business

This post will cover how Linux (UNIX and Unix-like) systems, and more specifically the computer network services and applications that run on them, use and integrate with Internet Protocol version 6 (IPv6). It will cover how a variety of IPv6-based network services can be easily configured for use in a small business.

Three network services will be covered: routing, the Domain Name System (DNS) and address allocation. Additionally, three server-based applications, providing email, printing and web serving, will be covered, including how to configure IPv6 in a particular programme providing each of these services, what provisions each service makes for IPv6 support, and what IPv6 provides for each of the services.

This won’t be an exhaustive list of all the services, or a detailed example of how to configure them, but it should give some idea of how simple it is to get IPv6 up and running.

Why IPv6?

IPv6 is the successor to IPv4 as the main network layer protocol used on the internet to provide addressing to interconnected nodes. An IPv4 address is 32 bits, represented as four dotted decimal octets, providing just short of 4.3 billion unique addresses. This number of addresses proved to be inadequate and IPv4 addresses were eventually exhausted. To slow down this exhaustion a number of mechanisms were deployed, including the use of private IP addresses, which cannot be routed globally, on Local Area Networks (LANs), with Network Address Translation (NAT) on the gateway interface. NAT is a system that allows multiple hosts on a local network to use private IPv4 addresses that are hidden behind one single public, globally routable IPv4 address.

Overview of IPv6

IPv6 addresses are 128 bits, represented as eight colon-separated sets of four hexadecimal digits. Each set represents 16 bits, or a ‘word’. This allows for 3.4×10^38 unique addresses. These addresses are made up of two parts: the network prefix, defined by a given number of high-order bits shared by all hosts on the subnet, and the remaining low-order bits, which are unique to each host on the subnet.

IPv6 addresses fall into a number of different classifications depending on the range they are in. The range dictates whether they are global unicast (2000::/3), link-local unicast (fe80::/10) or multicast (ff00::/8) addresses. Additionally, various other formats and ranges of IPv6 address provide for dual stacking and compatibility with IPv4.

Below is an example of a globally routable unicast IPv6 address in the standard notation.

2001:0000:6188:28aa:c52d:67b9:0056:16ae

One or more consecutive words with the value of zero can be condensed within the notation of an IPv6 address by replacing them with a double colon (this may be done only once per address); additionally, any leading zeros can be removed from each word. This has the effect of condensing the example address above to:

2001::6188:28aa:c52d:67b9:56:16ae
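These condensing rules can be checked with Python’s standard-library ipaddress module (a quick sketch). One wrinkle worth knowing: the module follows RFC 5952, which reserves ‘::’ for a run of two or more zero words, so it writes the lone zero word above as ‘0’ rather than ‘::’, although both notations parse to the same address.

```python
# The ipaddress module condenses and expands IPv6 notation for us.
import ipaddress

addr = ipaddress.IPv6Address("2001:0000:6188:28aa:c52d:67b9:0056:16ae")
print(addr.compressed)  # 2001:0:6188:28aa:c52d:67b9:56:16ae
print(addr.exploded)    # 2001:0000:6188:28aa:c52d:67b9:0056:16ae

# The '::' form used above is also accepted on input and parses to the
# same 128-bit value.
assert ipaddress.IPv6Address("2001::6188:28aa:c52d:67b9:56:16ae") == addr
```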

IPv6 and Linux

Linux systems (a system can be anything from an end-user PC, to a server, to a router or a switch) can provide for just about all enterprise network requirements; this post focuses on email, internet access, printer access, routing, DNS and interface address allocation. Application packages that provide these services can be installed on a Linux system and then configured with their IPv6 requirements. Configuration files can usually be found in the ‘/etc/’ directory, with logs that can be used for monitoring and troubleshooting found in the ‘/var/log’ directory.

The first Linux kernel to contain any IPv6 code was kernel 2.1.8, released in 1996. The Linux kernel is updated regularly, and periodic updates to its IPv6 functionality have been added; kernels 2.6.x and above can be considered IPv6-ready.

Routing

Routing can be set up by an administrator in one of two general ways. One is to use static routes: routes that do not change and have to be manually configured. Static routes can be set with ‘ip -6’, simply by telling the routing table the destination prefix and the gateway for the network. The other method is dynamic routing, which can be implemented by installing a routing package and running an IPv6-compatible routing protocol.
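A static route sketch using ‘ip -6’ might look like the following (the prefix, gateway and interface name are illustrative; 2001:db8::/32 is the IPv6 documentation range):

```shell
# Add a static route to an example prefix via a link-local gateway
ip -6 route add 2001:db8:1::/64 via fe80::1 dev eth0

# Display the current IPv6 routing table
ip -6 route show
```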

There are a number of routing packages that can be installed on a Linux system; one such package is Quagga. Quagga provides full support for the following IPv6 routing protocols: OSPFv3, RIPng and BGP-4. The Quagga package installs a core daemon called zebra, the abstraction layer between the kernel and Zserv, the API that the routing daemons connect to. Zserv clients each run one of the supported routing protocols and pass routing information to the kernel. This post will use Open Shortest Path First v3 (OSPFv3) as its example protocol. Its configuration files can be found in ‘/etc/quagga’.

An example of OSPFv3 configuration
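A minimal sketch of what such a configuration might look like in Quagga’s ‘/etc/quagga/ospf6d.conf’ (the router ID, interface name, timers and area are all illustrative values):

```shell
hostname ospf6d
password zebra
!
interface eth0
 ipv6 ospf6 hello-interval 10
 ipv6 ospf6 dead-interval 40
!
router ospf6
 router-id 0.0.0.1
 interface eth0 area 0.0.0.0
!
```

Unlike OSPFv2, OSPFv3 carries no network statements under the router block; interfaces are attached to an area directly, as above.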

An additional benefit of IPv6 is that packet fragmentation by routers is no longer a problem. With IPv4, if a router received a packet that exceeded the Maximum Transmission Unit (MTU) it would fragment the packet; with IPv6 the host instead uses a method called Path MTU Discovery, which ensures that the packets it sends never exceed the smallest MTU on the path.

DNS

DNS works with IPv6 in much the same way as it did with IPv4. To implement DNS you first have to install DNS software; the example in this post is BIND, as it is the most widely used DNS software on the internet. IPv6 host records are mapped in ‘AAAA’ records, which are used to resolve hostnames to IPv6 addresses.

Zone file

AAAA Record
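A minimal sketch of an AAAA record in a BIND zone file (the domain and address are illustrative; 2001:db8::/32 is the documentation prefix):

```shell
$ORIGIN example.com.
; Resolve www.example.com to an IPv6 address
www    IN    AAAA    2001:db8::80
```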

BIND’s configuration files can be found in ‘/etc/bind’. BIND must be instructed to listen for IPv6 addresses in the ‘/etc/bind/named.conf’ file. BIND can be configured as a caching-only server, which will resolve AAAA records recursively (working down from the root DNS servers) and cache any records it resolves. The same files can also be used to configure BIND as a master DNS server.
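On Debian-style layouts the listening directive typically lives in ‘/etc/bind/named.conf.options’, which named.conf includes; a sketch of enabling IPv6 listening:

```shell
// Instruct BIND to listen on all IPv6 interfaces
options {
    listen-on-v6 { any; };
};
```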

Address allocation

IPv6 interfaces can be automatically allocated Extended Unique Identifier-64 (EUI-64) link-local IPv6 addresses. These are non-routable addresses used to communicate on the local network segment, and they are configured automatically when an interface is placed in the up state using the command ‘ifup’.

Link-local addresses are automatically generated with the prefix fe80::/64, a predefined range of non-public IPv6 addresses, which makes up the network portion of the address. The remaining 64 low-order bits that make up the host portion are generated from the interface’s 48-bit MAC address: 16 additional bits, always set to the reserved value fffe (hex), are injected after the 24th bit of the MAC.

Additionally, EUI-64 globally unique routable addresses can be automatically issued. The 7th bit of the first octet is the Universal/Local (U/L) bit; when the interface identifier is formed this bit is inverted. Whether the finished address is link-local or global then depends on the prefix it is combined with: the link-local prefix fe80::/64, or a global prefix advertised by a router.
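The modified EUI-64 procedure described above (split the MAC, inject fffe, invert the U/L bit) is mechanical enough to sketch in a few lines of Python; the MAC address used here is an illustrative value:

```python
def eui64_interface_id(mac: str) -> str:
    """Build the modified EUI-64 interface identifier from a 48-bit MAC:
    invert the Universal/Local bit (bit 7 of the first octet), then
    inject 0xFFFE between the two 24-bit halves of the MAC."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                       # invert the U/L bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Group the 8 octets into four 16-bit words, dropping leading zeros
    words = [(eui[i] << 8) | eui[i + 1] for i in range(0, 8, 2)]
    return ":".join(f"{w:x}" for w in words)

# Combine with the link-local prefix to form a full address
print("fe80::" + eui64_interface_id("00:1a:2b:3c:4d:5e"))
# fe80::21a:2bff:fe3c:4d5e
```

Swapping the fe80:: prefix for an advertised global /64 yields the corresponding global address with the same interface identifier.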

radvd

To automatically configure a global address, a Router Advertisement Daemon (radvd) has to be configured on the gateway interface of the router. This will be configured with a 64-bit global prefix that it will issue to interfaces on its network, along with various router advertising parameters. These advertisements are sent out periodically; additionally, a host can request an address by sending a Router Solicitation message. The host’s 64 interface-identifier bits are generated in the same way described for link-local addressing, but combined with the advertised global prefix.
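A sketch of a minimal ‘/etc/radvd.conf’ advertising an illustrative global prefix on one interface (prefix and interface name are assumptions):

```shell
# Advertise a global /64 on eth0 for stateless autoconfiguration
interface eth0
{
    AdvSendAdvert on;
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```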

Another method of automatically issuing IPv6 addresses is to use a DHCPv6 server. To implement DHCPv6, a DHCPv6 server application needs to be installed and configured with the relevant network prefixes and other interface options. The interfaces on the host machine then need to be configured, in the ‘/etc/network/interfaces’ file on Debian, to request an address when put into the up state.
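On the host side, the Debian stanza for this is brief (the interface name is illustrative):

```shell
# /etc/network/interfaces -- request an address via DHCPv6 on ifup
auto eth0
iface eth0 inet6 dhcp
```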

Email

To implement a Linux-based email server a number of software components need to be decided upon, installed and configured: Mail User Agents (MUA), the client-side software that allows users to send and receive email; Mail Delivery Agents (MDA), agents that deliver email to the user’s inbox; and Mail Transport Agents (MTA), agents that deliver mail from one device to another.
Each of these components has a number of software applications that provide its service. MTA applications include sendmail, qmail and Postfix.

main.cf

Postfix introduced IPv6 support in version 2.2. Configuration files for Postfix are found in ‘/etc/postfix’. The ‘main.cf’ file can be configured with the network protocols and specific addresses to listen on. The ‘inet_protocols’ parameter takes a number of possible values: ‘all’ enables IPv4 and IPv6 if supported, ‘ipv4, ipv6’ enables both explicitly, and ‘ipv6’ enables only IPv6.
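A sketch of the relevant lines in ‘/etc/postfix/main.cf’:

```shell
# Protocol selection: 'all', 'ipv4, ipv6' or 'ipv6'
inet_protocols = all
# Listen on every configured network interface
inet_interfaces = all
```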

Web Serving

Web serving requires the installation of software. Linux has an array of web serving software, such as lighttpd and nginx, but this post will cover the world’s leading web serving software: Apache.

Apache requires configuration to listen for IPv6. The directive ‘Listen [2001::6188:28aa:c52d:67b9:56:16ae]:80’ will instruct Apache to listen for HTTP requests on the stated address and port; this will only serve that single address. The directive ‘Listen 80’ will instruct Apache to listen on port 80 on all IPv4 and IPv6 addresses.

Example of an IPv6 configured Virtual Host
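A sketch of a virtual host reachable over both IPv4 and IPv6 via the ‘*’ wildcard (the server name and document root are illustrative):

```shell
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
</VirtualHost>
```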

The wildcard ‘*’ can also be used in virtual host configuration files to make them available to all IPv4 and IPv6 hosts; this can be configured in the ‘/etc/apache/sites-enabled/’ directory.

Printing

CUPS is print server software that allows the management of print devices, and can be used to administer printer access. CUPS also has a wide variety of drivers available to support a wide range of print devices. CUPS has two methods of configuration, the first being via its web interface and the second via the command line tool ‘lpadmin’.

Once installed, the CUPS configuration files can be found in ‘/etc/cups’. Allowing and denying hosts access to print devices can be configured in the ‘/etc/cups/cupsd.conf’ file.

lpadmin
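A sketch of adding an IPP print queue for a printer at an IPv6 address with ‘lpadmin’ (the queue name and device URI are illustrative; note that IPv6 literals in URIs are bracketed):

```shell
# Create the queue, point it at the printer, and enable it (-E)
lpadmin -p officeprinter -E -v ipp://[2001:db8::25]/ipp/print
```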

It is possible to configure network printer sharing without using CUPS by using the BSD lpr system, which allows for simple administration tasks such as managing print queues and assigning jobs.

Wrapping Up

Each section of this post briefly covered IPv6 integration with a variety of systems. Many of these systems required the installation of software, and in many instances there was a wide variety of software applications providing each service. This post focused on the most widely used packages, such as Quagga, BIND, Postfix and Apache. Each of these packages has IPv6 support; additionally, they are used extensively, and as such they are well tested and documented, which makes them ideal for the first phase of a network switching from IPv4 to IPv6, or dual stacking IPv4 and IPv6.

IPv6 not only provides a vastly increased number of addresses over IPv4, it also has mechanisms in place that render some protocols that IPv4 relied upon redundant or unnecessary. One of these is DHCP: IPv6 can use DHCPv6 for automatic allocation, but as we have seen, EUI-64 addresses are built into the addressing architecture and require less administrative effort to configure and maintain.

For printing services we covered CUPS, supplemented with lpr commands; this provides a powerful mechanism for administering network printers. These are tried and tested systems that require minimal administrative effort while providing full print server functionality.

The amount of configuration required to enable IPv6 integration varies depending on the package being configured. Email, web serving and printing are relatively simple, the general pattern being some kind of initial IPv6 activation, usually by editing a configuration file stored under ‘/etc/’ so that the software package, and the service it provides, listens for and responds to IPv6 hosts. This is usually followed by configuring any IPv6-relevant files to apply IPv6 functionality.

How many IPv6 addresses are there? Answer: a lot

How many IPv6 addresses are there? This would make an epic pub quiz question…not quite as epic as its answer though.

Over the years of learning about computer networking, one of the big issues has been how we are running out of the addresses that our devices use on the internet. The main addresses we use on the internet are IPv4 addresses, of which there are 4.3 billion: quite a large number, but despite huge efforts to preserve them they have completely run out!

So because of this we had to introduce a new type of addressing, one that won’t run out. This is called IPv6, and there are a lot of these addresses; so many that the boffins are willing to bet their last pair of brown cord trousers that we will never run out of them. Now, whenever you read about IPv6 in the big heavy dull textbooks, the bit where they tell you how many of these addresses there actually are is always represented in the correct mathematical notation: they will tell you that there are 2^128, or 3.4×10^38, unique addresses. Today I was thinking about what that number actually looks like and how you would actually say it aloud if you had to answer how many IPv6 addresses there are in a pub quiz, so I went and found out… so are you ready for this? Altogether now, say it along with your friends and family: there are 340,282,366,920,938,463,463,374,607,431,768,211,456 unique IPv6 addresses.
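If you don’t fancy trusting my typing, Python’s arbitrary-precision integers will happily write that figure out for you:

```python
# 2 to the power of 128, with thousands separators
n = 2 ** 128
print(f"{n:,}")
# 340,282,366,920,938,463,463,374,607,431,768,211,456
print(len(str(n)), "digits")
# 39 digits
```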

OK, that wasn’t fair, I couldn’t have worked out how to say it aloud either, but thankfully someone way smarter than me did, so if you were to speak that number aloud you would have to say…now take a deep breath:

three hundred forty undecillion, two hundred eighty-two decillion, three hundred sixty-six nonillion, nine hundred twenty octillion, nine hundred thirty-eight septillion, four hundred sixty-three sextillion, four hundred sixty-three quintillion, three hundred seventy-four quadrillion, six hundred seven trillion, four hundred thirty-one billion, seven hundred sixty-eight million, two hundred eleven thousand, four hundred fifty-six unique IPv6 addresses.

So looks like the boffins brown cord trousers will be safe for a while then.

Remembering Capt Jerry Roberts MBE

Last Tuesday a 93-year-old man going by the name of Jerry passed away. Jerry’s death, or to give him his full title, Captain Raymond C ‘Jerry’ Roberts MBE’s death, was covered in the papers and in the broadcast news, tucked away on the inside pages of the printed press or 20 minutes into a bulletin on the TV news. It perhaps wasn’t covered to the level that someone with Jerry’s achievements deserved, as Jerry was a code breaker; a Bletchley Park code breaker; a code breaker during World War II.

Capt Jerry Roberts MBE

The men and women that worked at Bletchley Park are credited with shortening the war, arguably saving lives and certainly helping the Allied forces defeat Nazi Germany. Jerry was not part of Alan Turing’s team in Hut 8; Jerry’s section was known as The Testery, a group of talented code breakers and German linguists. A code known to the British as Tunny and to the Germans as Vernam was created by a range of German cipher machines called Lorenz, which had 12 encryption wheels, each with a different number of cams (Enigma only had 3 wheels). The cipher, a symmetric stream cipher, used a keystream made from a random data stream of the same length as the plain text it was encrypting. The messages, broadcast via wireless telegraphy, were intercepted by British signals intelligence sites known as Y stations, at Knockholt in Sevenoaks, Kent, and Denmark Hill in London, and were then passed on to the team at Bletchley Park. The logic of the cipher was cracked by Bill Tutte in the spring of 1942, and soon after, in his role as a senior code breaker, Jerry and his colleagues set about deciphering the messages encrypted in its code.

What made this code of particular importance, however, was that it was used almost exclusively by the German High Command. Messages from Germany’s top generals, and even Adolf Hitler himself, were intercepted and deciphered, providing the Allied war effort with vital intelligence. It was thanks to Jerry’s team that the Allies knew the Germans had bought the carefully planned ruse to convince them that the D-Day landings would be at Calais and not Normandy.

Initially the team deciphered the messages by hand; then they started using machines developed by a section of Bletchley Park tasked with building machines to assist with the decoding of intercepted enemy messages. Led by a man called Max Newman, the section was called the Newmanry. The Testery gained access to various Robinson code breaking machines, electro-mechanical machines that used vacuum tube valves in their logic; the Robinson was the predecessor to another machine developed at the Newmanry: the Colossus. Designed by engineer Tommy Flowers and seen as the world’s first programmable electronic digital computer, Colossus greatly improved the capacity of the code breakers. Through its use The Testery were able to decipher messages faster and more efficiently than ever before, thus contributing to the shortening of the war.

In 1945 Jerry left Bletchley Park and joined the War Crimes Investigation Unit, before embarking on a career in market research that lasted 50 years. He campaigned for recognition of the work done at Bletchley Park by people like himself, Tommy Flowers, Bill Tutte, Max Newman and Alan Turing. In 2013 he was made an MBE for his work during the war; he saw the commendation as recognition not only for himself, but for all the men and women that helped decipher the German codes at Bletchley Park, and in particular his section: The Testery.
