Wireless Networking
Author: anton, 24 November 2010
A Brief History
The history of wireless networking stretches farther back than you might think. It was over fifty years ago, during World War II, that the United States Army first used radio signals for data transmission. The Army developed a heavily encrypted radio data transmission technology, which was used quite extensively throughout the campaign by the US and her allies. This inspired a group of researchers at the University of Hawaii to create, in 1971, the first packet-based radio communications network. ALOHAnet, as it was named, was essentially the very first wireless local area network (WLAN). This first WLAN consisted of 7 computers that communicated in a bi-directional star topology (see http://www.its.bldrdoc.gov/fs-1037/ and http://www.webopedia.com/ -- both are excellent sources of computer and telecommunication terms and definitions) spanning four of the Hawaiian Islands, with the central computer based on Oahu. With this, wireless networking was born.
While wired LANs have wholly dominated the networking market, the last few years have shown a rise in wireless networking usage. This can best be seen in academic circles (i.e. university campuses), health care, manufacturing, and warehousing. All the while, the technology is improving, making it easier and cheaper for companies to go wireless.
Wireless Network Topologies
Topology: The physical (real) or logical (virtual) arrangement of elements.
In our case, this refers to the arrangement of nodes (i.e. computers, network printers, servers, etc.) in which the network is connected. There are five major topologies in use today in wired networks: Bus, Ring, Star, Tree, and Mesh, but only two make sense in a wireless environment: the star and mesh topologies.
The star topology, which happens to be in widest use today, describes a network in which there is one central base station or Access Point (AP) for communication. Information packets transmitted by the originating node are received by the central station and routed to the proper wireless destination node.
This station can then act as a bridge to a wired LAN, giving access to other wired clients, the Internet, other network devices, and so on. In our review system, Compex's SoftBridge program provides a "software bridge" to wired clients and services without specialized hardware or an AP. With this software, any computer that is connected to the wired network and has a wireless Network Interface Card (NIC) can act as the bridge.
The mesh topology differs from the star topology in that there is no centralized base station. Each node that is in range of another can communicate with it freely.
IEEE 802.11, 802.11a, and 802.11b
In order for WLANs to be widely accepted, there needed to be an industry standard devised to ensure compatibility and reliability among all manufacturers of the devices. The Institute of Electrical and Electronics Engineers (IEEE) has provided just that. The original IEEE 802.11 standard was ratified in 1997, followed by IEEE 802.11a and IEEE 802.11b in September of 1999. The original standard operates in a radio frequency (RF) band around 2.4GHz and provides for data rates of 1Mbps and 2Mbps along with a set of fundamental signaling methods and services. The IEEE 802.11a and IEEE 802.11b standards are defined in bands around 5.8GHz and 2.4GHz, respectively. The two additions also define new Physical (PHY) layers for data rates of 5.5Mbps and 11Mbps with IEEE 802.11b, and up to 54Mbps with IEEE 802.11a. These standards operate in what are known as the Industrial, Scientific, and Medical (ISM) frequency bands. The typical bands are 902-928MHz (26MHz available bandwidth), 2.4-2.4835GHz (83.5MHz available), and 5.725-5.850GHz (125MHz available), with the latter allowing for IEEE 802.11a's higher data rate.
The standard defines the PHY and Media Access Control (MAC) layers for the wireless communication. A layer is simply a group of related functions that are separate from another layer of related functions. The layers in our wireless networking scenario can be best understood in the following analogy. Consider moving a book (representing a data packet) from a shelf on one side of the room to the desk on the other. Well, the MAC layer can be thought of as how one picks up the book and the PHY layer is how you walk across the room.
The PHY layer as defined by the standard includes two different types of radio frequency (RF) communication modulation schemes: Direct Sequence Spread Spectrum (DSSS) and Frequency Hopping Spread Spectrum (FHSS). Both types were designed by the military for reliability, integrity, and security. Both types have their own unique way of transmitting data.
FHSS works by splitting the available frequency band into several channels. It uses a narrowband carrier wave that continuously changes frequency in a 2-4 level Gaussian Frequency Shift Keying (GFSK) sequence. In other words, the frequency of transmission changes in a pseudorandom manner that is known by the sending and receiving nodes. This builds a decent bit of security into the layer: a hacker would generally not know which frequency to switch to next, and so could not receive the entire signal. One advantage of FHSS is that it allows multiple networks to coexist in the same physical space.
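The idea of a shared pseudorandom hop sequence can be sketched in a few lines. The 79-channel count matches the 2.4GHz FHSS channelization, but the seed, hop count, and use of Python's random module are illustrative assumptions, not the actual 802.11 hopping pattern:

```python
import random

# 79 channels, as in the 2.4GHz FHSS band plan; seed is illustrative.
CHANNELS = 79

def hop_sequence(shared_seed, hops):
    """Channel sequence reproducible by any node that knows the seed."""
    rng = random.Random(shared_seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

# Sender and receiver derive the same sequence and hop in lockstep;
# an eavesdropper without the seed cannot predict the next channel.
sender = hop_sequence(0x5EED, 10)
receiver = hop_sequence(0x5EED, 10)
assert sender == receiver
assert all(0 <= ch < CHANNELS for ch in sender)
```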
DSSS works in a different manner altogether. DSSS combines the data stream with a higher-speed digital code. Each data bit is mapped into a common pattern of bits known only to the transmitter and the intended receiver. This bit pattern is called a chipping code. The code is a sequence of high and low signals that signifies the actual bit, and it is inverted to represent the opposite bit in the data sequence. If the transmission is properly synchronized, this modulation offers its own error correction, and thus has a higher tolerance for interference.
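A minimal sketch of spreading and despreading with a chipping code follows. The 11-chip sequence and the majority-vote correlation decoder are simplifications for illustration (real DSSS receivers correlate analog signals), but they show why a corrupted chip need not corrupt the decoded bit:

```python
# Illustrative 11-chip code: a 1 bit is sent as the code, a 0 bit as
# its inverse.
CHIP = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]

def spread(bits):
    """Expand each data bit into 11 chips."""
    out = []
    for b in bits:
        out.extend(CHIP if b else [1 - c for c in CHIP])
    return out

def despread(chips):
    """Recover data bits by correlating each 11-chip block with the code."""
    bits = []
    for i in range(0, len(chips), len(CHIP)):
        block = chips[i:i + len(CHIP)]
        matches = sum(1 for a, b in zip(block, CHIP) if a == b)
        # Majority match decodes as 1 even if a few chips were flipped.
        bits.append(1 if matches > len(CHIP) // 2 else 0)
    return bits

data = [1, 0, 1, 1, 0]
tx = spread(data)
tx[3] ^= 1                    # flip one chip to simulate interference
assert despread(tx) == data   # the error is tolerated
```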
The MAC layer defines a way of accessing the physical layer and also controls the services related to mobility management and the radio resource. It is similar to the wired Ethernet standard for data transmission. The differences arise in the way data collisions are handled. In the wired standard, data packets are sent out to the network indiscriminately; only when two packets "collide" does the system use additional measures to ensure packets get to their destination. In the 802.11 standards, collision avoidance is implemented. Here, the receiving wireless host sends an acknowledgement (ACK) packet back to the sender once it has received the data successfully. If the sender does not receive an ACK packet, it waits a period of time before it attempts to resend the data.
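The send-wait-retry behavior described above can be sketched roughly as follows. The retry limit, doubling backoff, and 30% loss rate are invented for illustration and do not reflect the standard's actual timing parameters:

```python
import random

def send_with_ack(packet, deliver, max_retries=7):
    """Retransmit until an ACK arrives, backing off between attempts.

    `deliver` returning True models a successful transmission plus ACK.
    Returns the number of transmissions used.
    """
    backoff = 1
    for attempt in range(max_retries):
        if deliver(packet):
            return attempt + 1
        backoff *= 2          # wait longer before the next attempt
    raise TimeoutError("no ACK received")

# A channel that loses the frame (or its ACK) 30% of the time.
rng = random.Random(42)
def lossy_channel(packet):
    return rng.random() > 0.3

attempts = send_with_ack(b"data", lossy_channel)
assert 1 <= attempts <= 7
```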
Unfortunately, there are some unresolved issues with the 802.11 standard that need to be addressed. Standardization and interoperability are the goals of the standard, yet some key mechanisms necessary to achieve multiple-vendor interoperability are absent from it. These include access point coordination for roaming - there is no hand-off mechanism in the standard for when one moves out of range of one AP and into that of another. Also, there is no test suite designed to verify whether a device actually conforms to the standard.
Network Security and Privacy
Wireless networks are, by nature, much less secure than their more mature wired cousins. Since wireless NICs use the air as their data transport medium, they are vulnerable to unauthorized use and eavesdropping. A network "sniffer" could be used to monitor and steal network information with far greater ease than on a wired LAN. Because no physical connection is needed to access a wireless network, it can be easily infiltrated. All the would-be hacker needs is a wireless NIC and knowledge of the current weaknesses of wireless network security.
In an attempt to curb attacks from would-be hackers, the standards implement what is called Wired Equivalent Privacy (WEP). In theory, this protocol protects network privacy. As a secondary function, WEP is used to prevent unauthorized access to the wireless network. Analysis performed by several researchers has shown the protocol to fall short of both fundamental goals. It has been found that the protocol is subject to the following attacks:
- Passive attacks to decrypt traffic based on statistical analysis.
- Active attacks to inject new traffic from unauthorized mobile stations, based on known plaintext.
- Active attacks to decrypt traffic, based on tricking the access point (AP).
- A "dictionary-building" attack in which a day's traffic is monitored and analyzed, enabling automated real-time decryption of all traffic.
The WEP protocol relies on a secret key that is shared in a basic service set (BSS) - a wireless AP and a set of associated nodes. This key is used to encrypt data packets before they are transmitted. The packets are also checked for integrity to ensure that they have not been modified in transit. One flaw of the 802.11 standard is that it does not address the issue of how shared keys are to be established. In most implementations of wireless networks this is a single key that is shared between each node and access point and is manually set.
The problems with this encryption method lie in the heart of the encryption algorithm. WEP uses the RC4 algorithm, which is a stream cipher. A stream cipher expands a short key into an effectively endless pseudorandom key stream. The sender XORs this key stream with the plaintext of the message to produce the ciphertext. The XOR ("exclusive or") of two bits is 1 if exactly one of the bits is 1, and 0 otherwise. With this in mind, the receiver uses its copy of the key to generate the identical key stream; XORing the received ciphertext with this key stream reproduces the original plaintext.
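RC4 itself is short enough to sketch in full: a key-scheduling pass shuffles a 256-entry state array, and the generator then emits one keystream byte per plaintext byte. The key and message below are arbitrary, and a real WEP implementation would prepend the per-packet IV to the shared key, which is omitted here:

```python
def rc4_keystream(key, n):
    """Generate n bytes of RC4 keystream from the given key."""
    # Key-scheduling algorithm (KSA): shuffle the state with the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit keystream bytes.
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor(data, keystream):
    return bytes(a ^ b for a, b in zip(data, keystream))

plaintext = b"attack at dawn"
ks = rc4_keystream(b"secret", len(plaintext))
ciphertext = xor(plaintext, ks)
# XORing with the same keystream is its own inverse.
assert xor(ciphertext, ks) == plaintext
```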
Operating in this manner, stream ciphers lend themselves to several types of attacks. In one such attack, an attacker flips a bit in an intercepted packet, so that the decrypted data is corrupted in a controlled way. Another attack can lead to the recovery of all plaintexts sent. In this attack, the eavesdropper need only intercept two ciphertexts encrypted with the same key stream. With these, it is possible to obtain the XOR of the two plaintexts, and knowledge of this XOR enables statistical attacks that can recover the plaintexts themselves. As more ciphertexts with the same shared key become known, this attack becomes more convenient. Once one of the plaintexts is known, it is trivial to decipher the others.
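The keystream-reuse weakness can be demonstrated in a few lines; the keystream bytes and the two plaintexts below are made up for illustration:

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Any keystream reused for two packets will do for the demonstration.
keystream = bytes([0x3A, 0x91, 0x07, 0xC4, 0x55, 0x1B, 0x62, 0xE0] * 4)

p1 = b"transfer $100 to"
p2 = b"meeting at noon."
c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# The keystream cancels out: c1 XOR c2 equals p1 XOR p2,
# and no key was needed to learn it.
assert xor(c1, c2) == xor(p1, p2)
```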
WEP is not without weapons in its arsenal to combat these two attacks. It uses an Integrity Check (IC) field in the packet to help guarantee that a packet has not been modified in transit. An Initialization Vector (IV) is used to supplement the shared key to avoid encrypting two plaintexts with the same key stream. Research shows that these two measures are implemented incorrectly, which reduces the effectiveness of these security measures.
The IC field is implemented as a CRC-32 checksum - a very common error detection scheme. The problem with this scheme is that it is linear: it is possible to compute the bit difference of two CRCs based on the bit difference of the data packets. This allows the attacker to determine which bits of the CRC-32 code to correct when flipping arbitrary bits in a packet, so that the resulting packet still seems valid.
Another weakness of the WEP algorithm is that it uses a 24-bit initialization vector. This is a very small range of possible IVs, which guarantees reuse of the same key stream within a relatively short period of time. On a busy access point with average-sized data packets, the time before key stream reuse is about 5 hours, and less if packet sizes decrease. This allows the attacker to gather two ciphertexts that were encrypted with the same key stream and begin the statistical analysis to recover the plaintext. To add insult to injury, when all mobile nodes use the same key, the chance of IV collision is greatly increased. Worse still, the 802.11 standard makes changing the IV with each packet optional.
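The rough arithmetic behind the "about 5 hours" figure can be worked out directly, assuming a fully saturated 11Mbps link and 1500-byte packets with one IV consumed per packet (both assumptions, since the exact traffic profile is not stated above):

```python
# 24-bit IV space vs. packet rate on a saturated 11Mbps link.
iv_space = 2 ** 24                            # ~16.7 million IV values
packets_per_sec = 11_000_000 / (1500 * 8)     # ~917 packets per second
hours_to_exhaust = iv_space / packets_per_sec / 3600
assert 4.5 < hours_to_exhaust < 5.5           # roughly 5 hours
```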
More sophisticated methods of key management can be used to help defend the network against the attacks described above. These attacks are not as simple as one might think. Sure, off-the-shelf 802.11 products give a would-be attacker a ready means of decoding a 2.4GHz signal; the hard part lies in the hardware itself. Most 802.11 equipment is designed to disregard encrypted content for which it doesn't have the key. The trick lies in changing the configuration of the drivers and confusing the hardware enough that the unrecognized ciphertext is returned for further examination and analysis. Active attacks, those requiring data transmission, appear to be more difficult, yet not impossible.
This is one serious setback for wireless networking technology. The problem stems from the misunderstanding and misuse of the cryptographic primitives ingrained in the wireless standards. Until there is another addendum that fixes the security and privacy of the 802.11 standard, a 100% private and secure wireless network is not yet possible.
Performance Testing
With the network cards installed and the computers seeing each other on the network, it is time to begin the testing. The testing software I chose to use for this introductory article, Qcheck, is part of the Chariot suite of network application and hardware performance testing software by the NetIQ Corporation. This free utility can be downloaded from http://www.netiq.com/qcheck/default.asp. In gathering my test data, I ran each of the following tests five times and then took an average for comparison:
TCP Response Time
This test measures the minimum, average, and maximum amount of time it takes to complete a TCP transaction. I used settings of 10 iterations of 100 bytes of information for this test. This test is pretty much a glorified version of a ping utility: it measures the "lag" or latency of your connection.
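What such a test measures can be sketched with a loopback round trip. The local echo server here stands in for the remote node (an assumption for the sake of a self-contained example), and the 100-byte payload and 10 iterations mirror the settings above:

```python
import socket
import threading
import time

# A tiny echo server standing in for the remote node under test.
server = socket.socket()
server.bind(("127.0.0.1", 0))      # any free port
server.listen()
port = server.getsockname()[1]

def echo_loop():
    for _ in range(10):
        conn, _ = server.accept()
        with conn:
            conn.sendall(conn.recv(4096))

threading.Thread(target=echo_loop, daemon=True).start()

# Time 10 small request/response transactions, as Qcheck does.
times = []
for _ in range(10):
    with socket.create_connection(("127.0.0.1", port)) as s:
        t0 = time.perf_counter()
        s.sendall(b"x" * 100)       # 100-byte payload
        s.recv(4096)                # wait for the echoed response
        times.append(time.perf_counter() - t0)

rtt_min = min(times)
rtt_avg = sum(times) / len(times)
rtt_max = max(times)
assert rtt_min <= rtt_avg <= rtt_max
```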
TCP Throughput
This test measures the amount of data per second that is successfully sent between the two nodes using the TCP protocol. For this test, the program used 1Mb of data and timed the successful delivery of packets. This test measures the bandwidth of the connection.
UDP Streaming Throughput
This test measures the rate at which the streaming data is received by the destination node. This test also measures the packet loss as well as the CPU utilization for the transaction. For this test I used a grueling 1Mbps for 10 seconds. This test simulates the behavior of applications that use streaming like video broadcasts. Streaming protocols like UDP are connectionless and send data without acknowledgement signals for greater throughput.
For the wireless network card setup, I left the default transfer rate setting at automatic. In doing so, the cards will negotiate the best connection and highest speed. I set the network architecture in the configuration utility to 802.11 ad hoc. At the 802.11 Ad Hoc setting, I used three different WEP settings: no WEP, 64-bit WEP, and 128-bit WEP. I wanted to see if the encryption and decryption of the packets would show up in the performance of the network.
For the wired network, I used category 5e crossover cable and for the wireless network a distance of approximately 2 meters separation of the systems. Both systems were running under Windows 98 SE (Windows 98 4.10 Build 2222 A) with no other applications besides the test software running.
These are in no way exhaustive tests of the NICs' performance. Anything more would be outside the scope of this introductory article.
The test systems are as follows:
Well, now that the votes are in, let's see what it all means.
It is not surprising at all that the response time for the TCP protocol is much lower on the wired network. This can be attributed to the simplicity of the connection - no DSSS modulation, no WEP, no IEEE 802.11, no interference, etcetera. There also seems to be no significant response penalty for encrypting the data packets.
The results for the TCP throughput fall short of my expectations. The company claims the cards can achieve 3.8-4.0Mbps sustained, but these figures show that the cards perform a bit lower than that with no encryption penalties.
Once again, the wired network beats the wireless by a large margin. And again, the encryption of the data has no effect on the throughput.
These figures are quite surprising, especially since there was only a 2-meter separation between the two cards without any obstructions of any kind between them.
CPU utilization is a little erratic for the wireless cards. This is attributable to the connectivity loss I observed, and to the speed re-negotiation shown in the utility program during several of the tests.
For the SOHO (Small Office/Home Office), this product is very attractive. With it there is no need to run wires under carpets or through walls. The SOHO user need not worry about plugging a laptop into a docking station every time they come into the office, or fumble with clumsy and unattractive network cabling. Wireless networking provides connectivity without the hassle and cost of wiring and expensive docking stations. Also, as the business or home office grows or shrinks, there is no need to wire new computers into the network. If the business moves, the network is ready for use as soon as the computers are moved. For networks where wiring is impossible, such as those found in warehouses, wireless will always be the only attractive alternative. As wireless speeds increase, these users have only brighter days in their future.
This preliminary data suggests that this would not be a realistic option for those home users that would require high throughputs. Power users that want to stream DVD movies to every computer in the house or play massive online games require a high degree of network performance. These users might find this solution unacceptable for their networking needs.
I would also have to recommend against using 802.11 based products in networks where highly sensitive and private information would be transferred. The security of the standard would not be acceptable at the DMV, your cable or telephone company's office, or the NYSE.
What does it all mean? It means that as wireless technology matures, wireless has a great chance of overtaking wired networking as the mainstream networking medium, as long as the security and privacy implementations are corrected. As handheld devices, mobile computers, and smart appliances proliferate, the convenience of having a wireless network makes better and better sense. With the IEEE 802.11a frequency shift to the 5GHz band and the associated channel widening, connection speeds of 54Mbps are attainable. If this were common today, it would make wireless a very attractive alternative to wired home/SOHO networks.
We are going to continue to track wireless networking products, and expand on our original tests and findings. As we said initially, this is by no means an exhaustive test, but it was our first pass at creating useful statistics to help determine when wireless networks make sense.